PTAM Revealed

PTAM (Parallel Tracking and Mapping) is a robust visual SLAM approach developed by Dr. Georg Klein at Oxford University. It tracks the 3D pose of the camera at frame rate, which makes it an ideal platform for implementing marker-less augmented reality. In this post I’m going to share my own insights about PTAM, drawn from my hands-on experience with it.

PTAM runs tracking and mapping in two separate threads, implemented in two separate source modules. Before tracking starts, PTAM demands an initial map of the environment, and this is built with the help of the tracker. The tracking loop executes inside a Run() function. To build the initial map, the user should supply a stereo image pair, ideally taken of a planar surface. PTAM calculates the initial pose from a homography, while the 3D coordinates of the initial map points are generated by triangulation. The tracker then grabs each frame in a tight loop and calculates the camera pose, in the following manner (this is implemented inside the TrackFrame() function):

  • A prior pose is estimated from the motion model, an estimation technique that predicts the pose of the current frame from the pose of the previous frame.
  • Iterate through all the map points (features) and re-project the ones that are likely to be visible in the current image frame.
  • Find a coarse set of matches (around 50) in the current frame. This is done through a patch search (PTAM’s author implemented the patch analysis from scratch – brilliant).
  • Update the camera pose.
  • Take a larger set of map points (this time around 1000) and re-project them into the image frame.
  • Find the matches again.
  • Update the pose until the re-projection error is minimized. This is done using an M-estimator.
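The robust pose update in the last step can be sketched with a Tukey-style M-estimator. Below is a toy 1-D illustration of the reweighting idea (my own sketch, not PTAM's solver – the real thing minimises reprojection error over an SE(3) pose):

```python
import numpy as np

def tukey_weights(residuals, sigma):
    """Tukey biweight: down-weight large residuals, zero beyond c*sigma."""
    c = 4.685 * sigma
    w = (1.0 - (residuals / c) ** 2) ** 2
    w[np.abs(residuals) >= c] = 0.0
    return w

# Toy data: reprojection-error stand-ins with one gross outlier (25.0).
data = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 25.0])

# Iteratively reweighted least squares for a 1-D location parameter.
mu = np.median(data)
for _ in range(10):
    r = data - mu
    sigma = 1.4826 * np.median(np.abs(r)) + 1e-9   # robust scale (MAD)
    w = tukey_weights(r, sigma)
    mu = np.sum(w * data) / np.sum(w)              # outlier gets weight 0
```

The point of the M-estimator is visible in the weights: the outlier at 25.0 falls outside the Tukey cut-off and contributes nothing, so the estimate settles near zero instead of being dragged towards it.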

The tracker calculates the final pose of the new keyframe (inside the TrackMap method), and this keyframe is finally added to a queue.
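The hand-off between the two threads is essentially a producer/consumer queue. A minimal sketch (Python threading as a stand-in for PTAM's C++ threads; all names here are mine, not PTAM's):

```python
import queue
import threading

keyframe_queue = queue.Queue()
processed = []

def map_maker():
    """Mapping thread: drain keyframes pushed by the tracker."""
    while True:
        kf = keyframe_queue.get()
        if kf is None:           # sentinel: shut down
            break
        processed.append(kf)     # triangulate new points, refine, ...
        keyframe_queue.task_done()

worker = threading.Thread(target=map_maker, daemon=True)
worker.start()

for kf_id in range(3):           # tracker side: add keyframes as tracking runs
    keyframe_queue.put(kf_id)

keyframe_queue.put(None)
worker.join()
```

Because the queue decouples the two sides, the tracker never blocks on mapping work – it just enqueues the keyframe and moves on to the next frame.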

The MapMaker, which runs asynchronously, checks whether there are any keyframes on the queue. If there are, it fetches each keyframe and finds the 3D coordinates of the new map points. Depth cannot be calculated from a single keyframe, so it is computed by triangulating the current keyframe against the closest keyframe. This triangulation obviously requires a set of correspondences, which is obtained through an epipolar search.
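The geometry behind the epipolar search and the triangulation can be illustrated with a minimal two-view sketch (normalized cameras and made-up coordinates; this is my own illustration, not PTAM's code):

```python
import numpy as np

def skew(t):
    """Cross-product matrix, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Two normalized cameras: the second is translated 1 unit along x.
R, t = np.eye(3), np.array([1.0, 0.0, 0.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([R, t.reshape(3, 1)])

X_true = np.array([0.5, 0.2, 4.0])           # a made-up scene point
x1 = np.append(X_true[:2] / X_true[2], 1.0)  # projection in keyframe 1
X2 = R @ X_true + t
x2 = np.append(X2[:2] / X2[2], 1.0)          # projection in keyframe 2

# Epipolar search: the match for x1 must lie on the line l2 = E @ x1,
# so only points near that line need to be examined.
E = skew(t) @ R
l2 = E @ x1
epipolar_residual = abs(x2 @ l2)             # ~0 for a true correspondence

# Once a correspondence is found, depth comes from triangulation.
X_est = triangulate(P1, P2, x1, x2)
```

The epipolar constraint is what makes the search cheap: instead of matching against the whole second keyframe, candidates are restricted to a one-dimensional line.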

Sometimes there may not be enough keyframes in view to find a corresponding point for a new map point (particularly when the camera moves fast). In that case, the MapMaker waits for more keyframes to be added to the queue and later runs a data-association refinement. The initial depth of a new map point depends on the mean depth of the other map points in the keyframe. Finally the new map points (called Candidates in the code) are added to the map by the MapMaker. The MapViewer, on the other hand, uses this map to render and display the 3D positions of those map points.
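That depth prior amounts to nothing more than averaging what the keyframe already knows. A tiny illustration (numbers and names made up by me):

```python
import numpy as np

# Depths of map points already observed in the keyframe (made-up values).
existing_depths = np.array([3.8, 4.1, 4.4, 3.9])

# Seed the new candidate's depth with their mean; this gives the epipolar
# search a sensible range to scan along the line.
candidate_depth_prior = existing_depths.mean()
```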


8 thoughts on “PTAM Revealed”

  1. hey i am new on linux platform. i am compiling ptam from the readme.txt file on the official website, but when i type g++ i got these errors:

    error: gvars3/instances.h: No such file or directory
    error: ‘GVars3’ is not a namespace-name
    error: expected namespace-name before ‘;’ token
    In function ‘int main()’: error: ‘GUI’ was not declared in this scope

    can i get some help pls

  2. Hi houssam ..

    Sorry for my late reply – I’ve been tied up with my research these days…

    well… I hope you’ve edited the Makefile to specify the location of the include files. Also make sure the GVars headers are available in the default include locations (probably /usr/local/include or /usr/include).

    If it still fails then the issue might be with the versions. One important thing with PTAM is that we’ve got to use the correct versions of libcvd and GVars3. In this link, download gvars3-20090421.tar.gz and libcvd-20090520.tar.gz. I found that the versions built in May 2009 work well with PTAM. I discovered this in the Readme.txt under the PTAM source folder.

    Anyway, I will send you (via email) the author’s manual for PTAMM; the installation steps work exactly the same for PTAM. Hope this helps.


  3. hi eranda this is me again !!!!!!!!

    so i did compile ptam well, but when i run ./CameraCalibrator i got this error:

    “cannot find the shared library: *.so”, but the lib is already in the link path.
    how can i fix that? many many thanks

  4. hi, eranda

    in your post about ptam you said:
    “PTAM’s author has implemented the patch analysis from the scratch – brilliant”
    can you send me some info about this task: patch search?


    1. Hi Houssam

      Their patch analysis works like this: you might have seen that PTAM projects its initial map points into the current video frame using a prior pose estimate. The map points that fall within the current frame are known as the Potentially Visible Set (PVS).

      First, a set of image patches is taken around these PVS map points – typically an 8×8 image patch, if I’m not mistaken. Then those image patches are warped with an affine deformation derived from the pixel displacement caused by the motion of the camera.

      These warped image patches are then searched for inside the current video frame and compared against the FAST corners… If a patch is found, the search returns true.

      However, their patch analysis is complicated, as they also perform a search at sub-pixel level. You can look at the SearchForPoints function in the Tracker class for a closer view.
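A rough sketch of that warp-then-match idea (mine, not PTAM's actual code, which also handles pyramid levels and the sub-pixel refinement mentioned above):

```python
import numpy as np

def warp_patch(src, A, size=8):
    """Warp a size x size patch by the 2x2 affine matrix A (about the
    patch centre), sampling the source with nearest-neighbour lookup."""
    Ainv = np.linalg.inv(A)
    c = (size - 1) / 2.0
    out = np.zeros((size, size), dtype=src.dtype)
    for y in range(size):
        for x in range(size):
            # Inverse-map each target pixel back into the source patch.
            sx, sy = Ainv @ np.array([x - c, y - c]) + c
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= ix < size and 0 <= iy < size:
                out[y, x] = src[iy, ix]
    return out

patch = np.arange(64).reshape(8, 8)     # a stand-in 8x8 image patch
identity = warp_patch(patch, np.eye(2)) # identity warp leaves it unchanged
rotated = warp_patch(patch, -np.eye(2)) # 180° rotation flips both axes
```

The warped patch – not the original – is what gets compared against the FAST corners, which is why the matching stays stable as the camera viewpoint changes.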

      I couldn’t provide a more elaborate description as I’m pretty much swamped with work these days… and some points are even unclear to me.

      Also try going through their paper on PTAM, which holds a comprehensive discussion of the underlying mechanisms.

