Parallel Tracking And Global Mapping

It’s been a while since I last blogged, as work kept piling up. I now realize that research is not a leisurely activity, although it may seem so to someone looking in from the outside. Beyond the usual experiments and literature surveys, running tutorials, marking assignments, writing conference papers, and preparing presentations are enough to push a PhD student to the limit. Nevertheless, all these demands test one’s potential for research and one’s love for science. So I thought I’d set everything else aside and write a post for the sake of contributing to scientific knowledge. In particular, I’d like to share some of the work from my research while taking a short break from my studies.

Marker-less Augmented Reality has been my primary source of curiosity since the day I started my PhD journey. In my research I am exploring ways to apply AR to HRI (Human-Robot Interaction) and to improve collaboration between humans and robots. Along the way I built a new interface on top of a well-known marker-less AR platform, PTAMM (Parallel Tracking and Multiple Mapping). The interface can mark an arbitrary point in space persistently with a virtual object (an AR object). What does persistence mean here? Suppose you have an AR object clearly visible through your camera. Now change the camera perspective, move the camera to a different location, and return to the AR object from a different direction. You should still see the AR object anchored at its original location. The idea may sound simple, but systems with this functionality are still rare, even with powerful AR frameworks such as PTAM and PTAMM available. This is what I call Persistent Augmented Reality. My AR interface keeps an AR object persistent over time and space, no matter how you move the camera. See the video below. But how does it work, and what are the underlying concepts?

The basic requirement for persistence is a global camera pose (position and orientation) throughout the camera’s motion. In other words, the camera pose must be calculated relative to a global coordinate system, so that we can maintain it irrespective of the viewing direction. As I mentioned earlier, the underlying foundation for my interface is PTAMM. PTAMM can generate multiple maps (coordinate frames) at multiple locations within an environment; its operation is otherwise similar to that of PTAM. We can consolidate these local maps into one single map, a global map. The origin of the global map is taken as the camera’s starting point at the time the system is initialized. Whenever the camera enters a new region, we capture another local map and thereby expand the global map. But how can we join these maps?
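To fix ideas before answering, here is a minimal sketch of the structure we are after: a global map holding several local maps, each carrying a transform into the global frame. This is Python with hypothetical names of my own (PTAMM itself is a C++ system), not actual PTAMM or PTAGM code.

```python
import numpy as np

class LocalMap:
    """One PTAMM-style local map: its own coordinate frame plus a
    transform into the global frame."""
    def __init__(self, map_id, T_global_local):
        self.map_id = map_id
        # 4x4 homogeneous transform taking points in this local frame
        # to the global frame (anchored at the camera's starting point).
        self.T_global_local = T_global_local

class GlobalMap:
    """Local maps consolidated into one global coordinate system."""
    def __init__(self):
        # The first map defines the global origin, so its transform
        # is the identity.
        self.maps = {0: LocalMap(0, np.eye(4))}

    def add_map(self, map_id, T_global_local):
        """Register a new local map captured in a new region."""
        self.maps[map_id] = LocalMap(map_id, T_global_local)

    def to_global(self, map_id, point_local):
        """Express a 3D point from a local map in global coordinates."""
        p_h = np.append(point_local, 1.0)  # homogeneous coordinates
        return (self.maps[map_id].T_global_local @ p_h)[:3]
```

The open question is how to obtain each map’s transform into the global frame, i.e. how the maps are joined.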

One possible way is to employ linear transformations. Suppose we have a point x in a local map A, and that the same point, observed from another local map B, has coordinates x’. Then the relationship between map A and map B can be expressed as a linear transformation,

 x’ = [R|t]x

where R and t are the rotation and translation of map A with respect to map B, and x is expressed in homogeneous coordinates. By chaining such transformations between the respective local maps, we can compute the global pose of the camera. I named the system built this way PTAGM (Parallel Tracking And Global Mapping).
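As an illustration (again a minimal sketch under my own naming, not PTAGM’s actual code), a camera pose in the newest local map can be chained back through such transforms to the global origin:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a
    3-vector translation t (the [R|t] of the text)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def global_camera_pose(T_steps, T_cam_local):
    """Chain map-to-map transforms back to the global origin.

    T_steps: [T_g_m1, T_m1_m2, ...] where T_a_b maps b-frame
             coordinates into the a-frame.
    T_cam_local: camera pose expressed in the newest local map.
    """
    T = np.eye(4)
    for step in T_steps:
        T = T @ step
    return T @ T_cam_local

# Toy [R|t] relating the global map A to a second map B: the text's
# x' = [R|t] x takes A-coordinates into B, so the B-to-global step
# is its inverse.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T_B_A = make_transform(Rz, np.array([1.0, 0.0, 0.0]))
T_A_B = np.linalg.inv(T_B_A)
T_cam_B = make_transform(np.eye(3), np.array([0.0, 0.5, 0.0]))
print(global_camera_pose([T_A_B], T_cam_B))
```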

In a later post, I will describe the application of this AR interface to HRI, and in particular its usefulness for robot navigation. Using this AR interface, we can designate a point in space and then let the robot navigate to that location automatically. Such a mechanism would be useful for non-experts, since it requires no special maneuvering skill. Moreover, traditional remote controllers give no sense of the distance to be traveled, whereas this AR-based navigation interface lets the user send a robot a specific distance at a specific angle. Following is a video highlighting PTAGM’s application in a robot simulator.
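For a rough idea of how a designated point becomes a motion command, here is a sketch under my own assumptions about the robot’s frame conventions (not the actual PTAGM navigation code):

```python
import numpy as np

def navigation_command(T_global_robot, target_global):
    """Distance and turn angle from the robot's pose to an AR-designated
    point, assuming a ground robot whose forward axis is local +x.

    T_global_robot: 4x4 pose of the robot in the global map frame.
    target_global: the designated 3D point, in global coordinates.
    """
    # Express the target in the robot's own frame.
    p = np.linalg.inv(T_global_robot) @ np.append(target_global, 1.0)
    dx, dy = p[0], p[1]            # ignore height for planar motion
    distance = np.hypot(dx, dy)    # how far to drive
    heading = np.arctan2(dy, dx)   # how far to turn first (radians)
    return distance, heading
```

The robot would first turn by the returned heading and then drive the returned distance, which is exactly the “specific distance at a specific angle” behavior described above.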
