Project Description
Humans make complex inferences about their surrounding environment with virtually no effort, thanks in large part to a sophisticated sense of visual perception. Artificially reproducing the capabilities of the human visual system requires enforcing temporal consistency among the pixels observed in sequences of images, which remains one of the grand challenges of computer vision. The overarching goal of this project is to create a trainable stochastic framework that leverages semantic information to coherently assimilate spatio-temporal object relationships, thereby enabling novel approaches to developing temporally consistent computer vision models.
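The stochastic frameworks studied in this project build on particle filtering for visual tracking (see the publication listed below). As a rough, illustrative sketch only, the snippet below implements a generic bootstrap particle filter that tracks a 2D position from noisy detections; the function name `particle_filter_track` and all of its parameters are hypothetical and do not represent the project's Deep Convolutional Correlation Iterative Particle Filter.

```python
import numpy as np

def particle_filter_track(observations, n_particles=500, motion_std=2.0, obs_std=5.0, seed=0):
    """Bootstrap particle filter for a 2D position track.

    observations: array of shape (T, 2) with noisy (x, y) detections.
    Returns an array of shape (T, 2) with the filtered position estimates.
    """
    rng = np.random.default_rng(seed)
    T = len(observations)
    # Initialize particles around the first observation.
    particles = observations[0] + rng.normal(0.0, obs_std, size=(n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = np.empty((T, 2))

    for t in range(T):
        # Prediction: propagate particles with a random-walk motion model.
        particles += rng.normal(0.0, motion_std, size=particles.shape)
        # Update: weight particles by the Gaussian likelihood of the current detection.
        sq_dist = np.sum((particles - observations[t]) ** 2, axis=1)
        weights = np.exp(-0.5 * sq_dist / obs_std ** 2)
        weights /= weights.sum()
        # Estimate: weighted mean of the particle set.
        estimates[t] = weights @ particles
        # Resampling: draw a new particle set in proportion to the weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

    return estimates


if __name__ == "__main__":
    # Simulated noisy detections of an object moving along a straight line.
    true_path = np.stack([np.linspace(0, 100, 50), np.linspace(0, 50, 50)], axis=1)
    noisy = true_path + np.random.default_rng(1).normal(0.0, 5.0, true_path.shape)
    track = particle_filter_track(noisy)
    print("Mean tracking error:", np.mean(np.linalg.norm(track - true_path, axis=1)))
```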
Publications
Mozhdehi, R. and Medeiros, H., "Deep Convolutional Correlation Iterative Particle Filter for Visual Tracking," Computer Vision and Image Understanding, 2022.
This material is based upon work supported by the National Science Foundation under Grant No. 2224591.