For the semantic analysis of activities and events in videos, it is important to capture the spatio-temporal relations among objects in 3D space. This presentation introduces a probabilistic method that extracts 3D trajectories of objects from 2D videos captured by a monocular moving camera. Compared to existing methods that rely on restrictive assumptions, the presented method extracts 3D trajectories under far fewer restrictions by adopting new example-based techniques that compensate for the missing information. Here, the focal length of the camera is estimated from similar example candidates and then used to compute the depths of detected objects.
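To illustrate the depth step: under a standard pinhole camera model, once the focal length is known, an object's depth can be recovered from its apparent size in the image and an assumed real-world size. The function name, the height-based formulation, and the units below are illustrative assumptions, not details taken from the presentation:

```python
def depth_from_height(focal_px: float, real_height_m: float, bbox_height_px: float) -> float:
    """Pinhole-model depth estimate: Z = f * H / h, where
    f  = estimated focal length in pixels,
    H  = assumed real-world object height in meters,
    h  = height of the detected bounding box in pixels."""
    return focal_px * real_height_m / bbox_height_px

# Example: a person assumed to be 1.7 m tall, imaged 100 px high
# with an estimated focal length of 1000 px, sits about 17 m away.
z = depth_from_height(1000.0, 1.7, 100.0)
```

This is why the focal-length estimate matters: any error in `focal_px` scales the recovered depth linearly.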
Unlike other 3D trajectory extraction methods, this method can process videos taken from either a static camera or an uncalibrated moving camera. To this end, a modified Reversible Jump Markov Chain Monte Carlo (RJ-MCMC) particle filter is adopted, owing to its suitability for camera odometry without relying on geometric feature points. Moreover, the method reduces computation time by running the object detector less frequently and replacing repeated detections with keypoint matching. The presentation concludes with experimental results and a short discussion.
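The detection-versus-matching trade-off can be sketched as follows: instead of re-running the detector every frame, keypoints inside a detected box are matched to the next frame by brute-force Hamming distance on binary (ORB-style) descriptors with a ratio test. This is a generic sketch of descriptor matching, not the presentation's actual pipeline; the thresholds and function name are assumptions:

```python
import numpy as np

def match_descriptors(desc_prev, desc_curr, max_dist=64, ratio=0.8):
    """Brute-force matching of binary descriptors (e.g. 32-byte ORB-style)
    using Hamming distance and Lowe's ratio test.
    desc_prev, desc_curr: (N, 32) uint8 arrays.
    Returns a list of (index_prev, index_curr) matches."""
    # Pairwise Hamming distance: XOR the bytes, then count set bits.
    x = desc_prev[:, None, :] ^ desc_curr[None, :, :]
    dist = np.unpackbits(x, axis=2).sum(axis=2)      # (Nprev, Ncurr)
    matches = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])
        best, second = order[0], order[1]
        # Accept only close, unambiguous matches.
        if dist[i, best] < max_dist and dist[i, best] < ratio * dist[i, second]:
            matches.append((i, int(best)))
    return matches
```

Tracking matched keypoints between frames is typically much cheaper than a full detector pass, which is the basis of the claimed speed-up.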