
This paper addresses a new problem, that of multiscale activity recognition. Our goal is to detect and localize a wide range of activities, including individual actions and group activities, which may simultaneously co-occur in high-resolution video. The video resolution allows for digital zoom-in (or zoom-out) for examining fine details (or coarser scales), as needed for recognition. The key challenge is how to avoid running a multitude of detectors at all spatiotemporal scales, and yet arrive at a holistically consistent video interpretation. To this end, we use a three-layered AND-OR graph to jointly model group activities, individual actions, and participating objects. The AND-OR graph allows a principled formulation of efficient, cost-sensitive inference via an explore-exploit strategy. Our inference optimally schedules the following computational processes: 1) direct application of activity detectors – called the α process; 2) bottom-up inference based on detecting activity parts – called the β process; and 3) top-down inference based on detecting activity context – called the γ process. The scheduling iteratively maximizes the log-posteriors of the resulting parse graphs. For evaluation, we have compiled and benchmarked a new dataset of high-resolution videos of group and individual activities co-occurring in a courtyard of the UCLA campus. Paper Presentation Code Dataset
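The α/β/γ scheduling can be pictured with a small toy sketch (our own illustrative code, not the paper's implementation): each process has a cost and an estimated log-posterior gain, and a greedy explore-exploit loop repeatedly runs the process with the best gain-to-cost ratio that still fits the computation budget.

```python
# Toy sketch of scheduling the alpha (direct detection), beta (bottom-up),
# and gamma (top-down) processes under a computation budget. The process
# names are from the paper; the costs, gains, and greedy rule below are
# our illustrative assumptions.

def schedule_inference(processes, gains, budget):
    """Greedily run the process with the best expected log-posterior
    gain per unit cost, until the budget (or the process list) runs out."""
    log_posterior, trace = 0.0, []
    remaining = dict(processes)          # process name -> cost
    while remaining and budget > 0:
        name = max(remaining, key=lambda n: gains[n] / remaining[n])
        cost = remaining.pop(name)
        if cost > budget:                # too expensive: skip this process
            continue
        log_posterior += gains[name]
        budget -= cost
        trace.append(name)
    return log_posterior, trace

processes = [("alpha", 4.0), ("beta", 2.0), ("gamma", 1.0)]   # (name, cost)
gains = {"alpha": 3.0, "beta": 2.0, "gamma": 0.5}             # est. gains
lp, trace = schedule_inference(processes, gains, budget=5.0)
# beta runs first (best gain/cost); alpha no longer fits; gamma follows
```

In the paper the expected gains are estimated online from the evolving parse graphs; here they are fixed constants purely for illustration.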

under: Main Conference, Publications

This paper addresses recognition of human activities with stochastic structure, characterized by variable space-time arrangements of primitive actions, and conducted by a variable number of actors. We demonstrate that modeling aggregate counts of visual words is surprisingly expressive enough for such a challenging recognition task. An activity is represented by a sum-product network (SPN). The SPN is a mixture of bags-of-words (BoWs) with exponentially many mixture components, where smaller components are reused by larger ones. The SPN consists of terminal nodes representing BoWs, and product and sum nodes organized in a number of layers. The products are aimed at encoding particular configurations of primitive actions, and the sums serve to capture their alternative configurations. The connectivity of the SPN and the parameters of the BoW distributions are learned under weak supervision using the EM algorithm. SPN inference amounts to parsing the SPN graph, which yields the most probable explanation (MPE) of the video in terms of activity detection and localization. SPN inference has linear complexity in the number of nodes, under fairly general conditions, enabling fast and scalable recognition. A new Volleyball dataset is compiled and annotated for evaluation. Our classification accuracy and localization precision and recall are superior to those of the state of the art on the benchmark and our Volleyball datasets. Paper Poster Code Dataset
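The MPE parsing described above can be sketched on a toy SPN (all node names, weights, and scores below are hypothetical): sums are replaced by maxima, so a single bottom-up pass over the nodes suffices, which is what yields the linear complexity.

```python
# Toy SPN for MPE parsing: leaves score bag-of-words evidence, products
# encode configurations of primitive actions, and sums choose among
# alternative configurations. All names and numbers are hypothetical.

def mpe(node, evidence):
    """Max-product (MPE) evaluation: sums are replaced by maxima."""
    kind = node["type"]
    if kind == "leaf":
        return evidence[node["word"]], [node["word"]]
    if kind == "product":
        val, expl = 1.0, []
        for child in node["children"]:
            v, e = mpe(child, evidence)
            val *= v
            expl += e
        return val, expl
    # sum node: children are (weight, child) pairs; keep the best branch
    w, child = max(node["children"],
                   key=lambda wc: wc[0] * mpe(wc[1], evidence)[0])
    v, expl = mpe(child, evidence)
    return w * v, expl

leaf_a = {"type": "leaf", "word": "serve"}
leaf_b = {"type": "leaf", "word": "spike"}
leaf_c = {"type": "leaf", "word": "block"}
attack = {"type": "product", "children": [leaf_a, leaf_b]}
root = {"type": "sum", "children": [(0.6, attack), (0.4, leaf_c)]}

val, expl = mpe(root, {"serve": 0.9, "spike": 0.8, "block": 0.5})
# the "attack" branch wins: val = 0.6 * 0.9 * 0.8 = 0.432
```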

under: Main Conference, Publications

Marine biologists commonly use underwater videos in their research on the behaviors of sea organisms. Their video analysis, however, is typically based on visual inspection. This incurs prohibitively large user costs, and severely limits the scope of biological studies. There is a need for vision algorithms that address the specific needs of marine biologists, such as fine-grained categorization of fish motion patterns. This is a difficult problem, because of the very small inter-class and large intra-class differences between fish motion patterns. Our approach consists of three steps. First, we apply our new fish detector to identify and localize fish occurrences in each frame, under partial occlusion, and amidst dynamic texture patterns formed by whirls of sand on the sea bed. Then, we conduct tracking-by-detection. Given the similarity between fish detections, defined in terms of fish appearance and motion properties, we formulate fish tracking as transitively linking similar detections between every two consecutive frames, so as to maintain their unique track IDs. Finally, we extract histograms of fish displacements along the estimated tracks. The histograms are classified by the Random Forest technique to recognize distinct classes of fish motion patterns. Evaluation on challenging underwater videos demonstrates that our approach outperforms the state of the art. Paper Poster
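The final feature-extraction step can be illustrated with a minimal sketch (the bin boundaries and the toy track below are our own assumptions): per-frame displacement magnitudes along one fish track are binned into an L1-normalized histogram, which would then be fed to a Random Forest classifier.

```python
import math

# Sketch of the motion feature from the last step: an L1-normalized
# histogram of per-frame displacement magnitudes along one fish track.
# Bin boundaries and the toy track are illustrative assumptions.

def displacement_histogram(track, bin_uppers):
    """track: list of (x, y) positions, one per frame;
    bin_uppers: ascending upper bounds; the last bin catches the rest."""
    steps = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    hist = [0.0] * len(bin_uppers)
    for s in steps:
        for i, upper in enumerate(bin_uppers):
            if s <= upper or i == len(bin_uppers) - 1:
                hist[i] += 1.0
                break
    total = sum(hist) or 1.0
    return [h / total for h in hist]

track = [(0, 0), (3, 4), (3, 4), (6, 8)]        # steps of length 5, 0, 5
hist = displacement_histogram(track, bin_uppers=[1.0, 4.0, 10.0])
# one still frame, two fast moves -> hist == [1/3, 0, 2/3]
```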

under: Publications, Workshop

This is a theoretical paper that proves that probabilistic event logic (PEL) is MAP-equivalent to its conjunctive normal form (PEL-CNF). This allows us to address the NP-hard MAP inference for PEL in a principled manner. We first map the confidence-weighted formulas from a PEL knowledge base to PEL-CNF, and then conduct MAP inference for PEL-CNF using stochastic local search. Our MAP inference leverages the spanning-interval data structure for compactly representing and manipulating entire sets of time intervals without enumerating them. For experimental evaluation, we use the specific domain of volleyball videos. Our experiments demonstrate that the MAP inference for PEL-CNF successfully detects and localizes volleyball events in the face of different types of synthetic noise introduced in the ground-truth video annotations. Paper
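The spanning-interval idea can be sketched in a few lines (the operations below are our own minimal version, not the paper's full algebra): a single tuple of bounds stands for an entire set of time intervals, so intersection-style manipulations never enumerate the individual intervals.

```python
# Illustrative sketch of a spanning interval: the tuple
# (s_lo, s_hi, e_lo, e_hi) stands for every interval [s, e] with
# s_lo <= s <= s_hi and e_lo <= e <= e_hi, so whole sets of intervals
# are manipulated without enumeration. Our own minimal version.

def intersect(a, b):
    """Spanning interval for the intervals contained in both sets."""
    s_lo, s_hi = max(a[0], b[0]), min(a[1], b[1])
    e_lo, e_hi = max(a[2], b[2]), min(a[3], b[3])
    if s_lo > s_hi or e_lo > e_hi:
        return None                      # the sets are disjoint
    return (s_lo, s_hi, e_lo, e_hi)

def count_intervals(span):
    """Number of concrete intervals [s, e] with s <= e (integer ends).
    Enumeration is used here only to verify the compact encoding."""
    s_lo, s_hi, e_lo, e_hi = span
    return sum(1 for s in range(s_lo, s_hi + 1)
               for e in range(e_lo, e_hi + 1) if s <= e)

span = intersect((0, 5, 3, 9), (2, 7, 1, 6))    # -> (2, 5, 3, 6)
```

Note the constant-size tuple represents 13 concrete intervals here; the compactness is what makes stochastic local search over temporal formulas tractable.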

under: Publications, Workshop

Given a video, we would like to recognize group activities, localize video parts where these activities occur, and detect actors involved in them. This advances prior work that typically focuses only on video classification. We make a number of contributions. First, we specify a new, mid-level video feature aimed at summarizing local visual cues into bags of the right detections (BORDs). BORDs seek to identify the right people who participate in a target group activity among many noisy people detections. Second, we formulate a new, generative, chains model of group activities. Inference of the chains model identifies a subset of BORDs in the video that belong to occurrences of the activity, and organizes them in an ensemble of temporal chains. The chains extend over, and thus localize, the time intervals occupied by the activity. We formulate a new MAP inference algorithm that iterates two steps: i) warps the chains of BORDs in space and time to their expected locations, so the transformed BORDs can better summarize local visual cues; and ii) maximizes the posterior probability of the chains. We outperform the state of the art on the benchmark UT-Human Interaction and Collective Activities datasets, under reasonable running times. Paper Poster Code
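The two-step MAP iteration can be caricatured on a 1-D timeline (the function names, the nearest-anchor assignment, and the mean update are our own simplifying assumptions, not the paper's spatiotemporal model): step i "warps" each BORD detection to its nearest chain anchor, and step ii maximizes the posterior by moving each anchor to the mean of its assigned detections.

```python
# 1-D caricature of the alternating MAP inference for the chains model.
# All names and the nearest-anchor / mean-update simplification are our
# own assumptions; the paper warps chains in both space and time.

def fit_chain(detections, anchors, iters=10):
    assign = []
    for _ in range(iters):
        # Step i: assign each detection time to its nearest anchor.
        assign = [min(range(len(anchors)),
                      key=lambda k: abs(t - anchors[k]))
                  for t in detections]
        # Step ii: re-estimate each anchor as the mean of its detections.
        for k in range(len(anchors)):
            mine = [t for t, a in zip(detections, assign) if a == k]
            if mine:
                anchors[k] = sum(mine) / len(mine)
    return anchors, assign

anchors, assign = fit_chain([1.0, 2.0, 3.0, 10.0, 11.0, 12.0],
                            anchors=[0.0, 9.0])
# the two anchors converge to 2.0 and 11.0, splitting the detections
```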

under: Main Conference, Publications

Multiobject Tracking as Maximum-Weight Independent Set (CVPR 2011)

Posted: March 24, 2011

This paper addresses the problem of simultaneous tracking of multiple targets representing occurrences of distinct object classes in complex scenes. We apply object detectors to every frame, and build a graph of tracklets, defined as pairs of detection responses from every two consecutive frames. The graph helps transitively link the best matching detections that do not violate hard and soft contextual constraints between the resulting tracks. We prove that this data association problem can be formulated as finding the heaviest subset of non-adjacent tracklets in the graph, called the maximum-weight independent set (MWIS). We present a new, polynomial-time MWIS algorithm, and prove that it converges to an optimum. Similarity between object detections, and the contextual constraints between the tracks, used for data association, are learned online from object appearance and motion properties. Long-term occlusions are addressed by iteratively repeating MWIS to hierarchically merge smaller tracks into longer ones. We outperform the state of the art on the benchmark datasets, and show the advantages of simultaneously accounting for soft and hard constraints in multitarget tracking. Paper Presentation Code
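The data-association objective can be illustrated on a tiny example (the tracklet names, weights, and conflict edges below are hypothetical): weighted tracklets are nodes, conflicting pairs are edges, and tracking picks the maximum-weight independent set. We brute-force the toy instance for clarity; the paper's contribution is a polynomial-time MWIS algorithm for this step.

```python
from itertools import combinations

# Toy version of the data-association objective: tracklets are weighted
# nodes, conflicting pairs (shared detections or violated constraints)
# are edges, and we seek the maximum-weight independent set (MWIS).
# Brute force for clarity only; the paper gives a polynomial-time
# algorithm with a convergence proof.

def mwis_bruteforce(weights, edges):
    nodes = list(weights)
    best, best_w = set(), 0.0
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            chosen = set(subset)
            if any(u in chosen and v in chosen for u, v in edges):
                continue                  # not an independent set
            w = sum(weights[n] for n in chosen)
            if w > best_w:
                best, best_w = chosen, w
    return best, best_w

weights = {"a": 3.0, "b": 2.0, "c": 2.0, "d": 1.0}   # tracklet scores
edges = [("a", "b"), ("b", "c"), ("c", "d")]          # conflicts
best, best_w = mwis_bruteforce(weights, edges)
# {"a", "c"} is the heaviest conflict-free set, with weight 5.0
```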

under: Main Conference, Publications

Monocular Estimation of 2.1D Sketch (ICIP 2010)

Posted: March 24, 2011
The 2.1D sketch is a layered representation of occluding and occluded surfaces of the scene. Extracting the 2.1D sketch from a single image is a difficult and important problem arising in many applications. We present a fast and robust algorithm that uses boundaries of image regions and T-junctions, as important visual cues about the scene structure, to estimate the scene layers. The estimation is a quadratic optimization with hinge-loss based constraints, so the 2.1D sketch is smooth in all image areas except on image contours, and image regions forming “stems” of the T-junctions correspond to occluded surfaces in the scene. Quantitative and qualitative results on challenging, real-world images—namely, Stanford depthmap and Berkeley segmentation dataset—demonstrate high accuracy, efficiency, and robustness of our approach. Paper Poster Code
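A tiny numeric caricature of the layer assignment is sketched below (the region names, penalty weights, and the plain gradient descent are our own assumptions; the paper solves the quadratic optimization with hinge-loss constraints directly): neighboring regions are pulled toward the same layer by a quadratic smoothness term, while a T-junction cue asks the occluding region to sit in front of its "stem" region via a hinge penalty.

```python
# Caricature of 2.1D layer assignment. Region names, weights, and the
# gradient descent are illustrative assumptions. Neighbors share layers
# via a quadratic term; a T-junction asks the occluder to sit at least
# one layer in front of the occluded "stem" region, via a hinge penalty.

def solve_layers(regions, neighbors, occludes, lr=0.1, iters=500):
    z = {r: 0.0 for r in regions}                  # layer value per region
    for _ in range(iters):
        g = {r: 0.0 for r in regions}
        for a, b in neighbors:                     # smoothness (z_a - z_b)^2
            d = z[a] - z[b]
            g[a] += 2 * d
            g[b] -= 2 * d
        for front, back in occludes:               # hinge max(0, 1 - (z_f - z_b))
            if z[front] - z[back] < 1.0:
                g[front] -= 1.0
                g[back] += 1.0
        for r in regions:
            z[r] -= lr * g[r]
    return z

z = solve_layers(["tree", "wall"], neighbors=[("tree", "wall")],
                 occludes=[("tree", "wall")])      # tree occludes wall
# the tree region settles in front of the wall region
```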
under: Main Conference, Publications
