AR and its Role in Marking Space

It is perhaps surprising to realize that two things in this world have troubled human ingenuity for centuries: space and time. These two are the absolute benchmarks we use when referring to a physical object in space or describing a past incident, even though it is hard to say why we always frame our actions and events in terms of them. For instance, a special event (a birthday, perhaps) can be expressed in relation to time by marking that occurrence on a calendar, digitally or manually. We can do this because time, as we know it, is one dimensional. It is less obvious whether we can treat space the same way, because space is three dimensional and allows travel in multiple directions, unlike the single dimension of time. These reflections led my curiosity toward one implicit yet strange question about space: "How can we mark space?" I shall describe the background that led to this notion later. For the moment, let us accept the question and explore its logic through an analogy with our everyday understanding of space.

From a biological point of view, human beings tend to use physical objects to designate places of interest, which helps them represent space and construct three-dimensional cognitive maps [Egerton, 2005]. The mammalian spatial referencing patterns described by Egerton [2005] organise physical objects into trails that trace out specific points in the environment. Imagine you were exploring an unknown and complex environment and wanted to find your way back afterwards. One solution would be to mark your trail with pebbles. The pebbles would persist, and you could readily trace your path back on the return trip, unless some ill-tempered being removed them all after you placed them. Extending this concept, imagine we could mark out any point in space with pebbles that remain persistent over time. We could then pin-point an arbitrary location – even a point somewhere in front of our eyes – from any perspective, and trace out complex paths in all three dimensions of space. Extending the idea further, if the pebbles could convey information, they could be used to pass messages to other travellers. Further still, if pebbles could express relationships with their neighbours, complex process models could be expressed.
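To make the pebble analogy a little more concrete, one could imagine each virtual "pebble" as a small data record: a 3D position, an optional message, and links to neighbouring pebbles. The sketch below is purely illustrative; the names and fields are my own and not part of any existing system.

```cpp
#include <string>
#include <vector>

// A conceptual "virtual pebble": a persistent mark at a point in 3D space.
// Field names are illustrative only.
struct Pebble {
    double x, y, z;                 // where the mark sits in world coordinates
    std::string message;            // optional information the pebble carries
    std::vector<size_t> neighbours; // indices of related pebbles (e.g. the previous mark on a trail)
};

// Laying a trail is then just appending pebbles and linking each new one
// to the pebble dropped before it.
size_t dropPebble(std::vector<Pebble>& trail, double x, double y, double z,
                  const std::string& note = "") {
    Pebble p{x, y, z, note, {}};
    if (!trail.empty()) {
        p.neighbours.push_back(trail.size() - 1);        // remember where we came from
        trail.back().neighbours.push_back(trail.size()); // and link forward to the new pebble
    }
    trail.push_back(p);
    return trail.size() - 1;
}
```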


Spatial Human Robot Interaction Marker Platform (SHRIMP)

In my previous post, I highlighted the potential of Augmented Reality (AR) to serve as a novel paradigm for Human-Robot Interaction (HRI). Marker-less AR is the more plausible choice for this work, as it can mark points in space without requiring prior knowledge of the environment: we can look at any arbitrary environment and mark any point in it in real time. By placing a virtual marker, we have already seen a demonstration of how to persistently mark space, with virtual markers remaining fixed under changing camera perspectives, almost as if they were real objects.

Now we will continue from that point and see how such an AR-based spatial marker platform can be applied to HRI. In this article I present a case study that integrates Augmented Reality into robot navigation. Virtual markers are overlaid on the video feed captured by a camera mounted on top of the robot. We mark a point in space simply by placing a virtual AR marker, and the robot automatically navigates to that location. My hypothesis is that, just by pointing somewhere in space, we can readily perform HRI tasks – especially navigation. But before moving into application-specific details, let us cover some background on HRI and marking space.
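As a rough illustration of the core step – turning a marked pixel into a navigation goal – the sketch below back-projects a clicked image point onto a flat ground plane using a pinhole camera model. The types and function names are my own placeholders, not the SHRIMP implementation.

```cpp
#include <cmath>

// Minimal 3D vector type for the sketch.
struct Vec3 { double x, y, z; };

// Hypothetical camera description: pinhole intrinsics plus pose in the world frame.
struct Camera {
    double fx, fy, cx, cy;   // focal lengths and principal point
    Vec3 position;           // camera centre in world coordinates
    double R[3][3];          // rotation from camera frame to world frame
};

// Back-project a clicked pixel (u, v) onto the ground plane (z = 0) to get
// a navigation goal. A sketch of the idea only, assuming flat ground.
bool pixelToGroundPoint(const Camera& cam, double u, double v, Vec3& goal) {
    // Ray direction through the pixel, in the camera frame.
    Vec3 d{(u - cam.cx) / cam.fx, (v - cam.cy) / cam.fy, 1.0};
    // Rotate the ray into the world frame.
    Vec3 dir{cam.R[0][0]*d.x + cam.R[0][1]*d.y + cam.R[0][2]*d.z,
             cam.R[1][0]*d.x + cam.R[1][1]*d.y + cam.R[1][2]*d.z,
             cam.R[2][0]*d.x + cam.R[2][1]*d.y + cam.R[2][2]*d.z};
    if (std::fabs(dir.z) < 1e-9) return false;   // ray is parallel to the ground
    double t = -cam.position.z / dir.z;          // intersection with the plane z = 0
    if (t <= 0) return false;                    // intersection lies behind the camera
    goal = {cam.position.x + t * dir.x, cam.position.y + t * dir.y, 0.0};
    return true;                                 // `goal` can now be sent to the robot's planner
}
```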


Parallel Tracking And Global Mapping

It's been a while since I last blogged, as the work kept piling up. I realize now that research is not the leisurely activity it may appear to be from the outside. Beyond the usual experiments and literature surveys, running tutorials, marking assignments, writing conference papers, and giving presentations are enough to push a PhD student to his limits. Nevertheless, all of this tests one's potential for research and one's love for science. So I thought I would put everything else aside and write a post for the sake of contributing to scientific knowledge – in particular, to share some of the work I have done in my research while taking a short break from my studies.

Marker-less Augmented Reality has been my primary source of curiosity since the day I started my PhD journey. In my research I am exploring ways to apply AR to HRI (Human-Robot Interaction) and to improve collaboration between humans and robots. As part of this, I developed an interface based on a well-known marker-less AR platform named PTAMM (Parallel Tracking and Multiple Mapping). The interface can mark an arbitrary point in space persistently with a virtual object (an AR object). What does persistence mean here? Suppose you have an AR object that is clearly visible through your camera. Now you change the camera perspective, move the camera to a different location, and return to the AR object from a different direction. You should still see the AR object anchored at its original location. The idea may sound simple, but systems with this functionality are still rare, even with powerful AR frameworks available (e.g. PTAM, PTAMM). This is what I describe as Persistent Augmented Reality. My AR interface keeps an AR object persistent over time and space, no matter how you move the camera. See the video below. But how does it work? What are the underlying concepts? Those are the questions you might be wondering about at this point.
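In essence, persistence comes from storing the AR object's position once in the map's world coordinate frame and re-projecting that fixed point into every new frame using the current camera pose. The sketch below shows that re-projection step under a pinhole camera model; it illustrates the principle only and is not code from my PTAMM-based interface.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Project a fixed world point (the anchored AR object) into the current image.
// R and t describe the current world-to-camera transform; fx, fy, cx, cy are
// pinhole intrinsics. Returns false if the anchor is behind the camera.
bool projectAnchor(const Vec3& anchorWorld,
                   const double R[3][3], const Vec3& t,
                   double fx, double fy, double cx, double cy,
                   double& u, double& v) {
    // Transform the anchor into the camera frame: Xc = R * Xw + t.
    Vec3 pc{R[0][0]*anchorWorld.x + R[0][1]*anchorWorld.y + R[0][2]*anchorWorld.z + t.x,
            R[1][0]*anchorWorld.x + R[1][1]*anchorWorld.y + R[1][2]*anchorWorld.z + t.y,
            R[2][0]*anchorWorld.x + R[2][1]*anchorWorld.y + R[2][2]*anchorWorld.z + t.z};
    if (pc.z <= 0.0) return false;   // anchor is behind the camera, nothing to draw
    u = fx * pc.x / pc.z + cx;       // perspective projection onto the image plane
    v = fy * pc.y / pc.z + cy;
    return true;                     // draw the AR object at pixel (u, v)
}
```

Because the anchor never moves in the world frame, it reappears in the right place whenever the tracker recovers the correct camera pose, regardless of how the camera got there.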


PTAM Revealed

PTAM (Parallel Tracking & Mapping) is a robust visual SLAM approach developed by Dr. Georg Klein at Oxford University. It tracks the 3D pose of the camera at frame rate, which makes it an ideal platform for implementing marker-less augmented reality. In this post I am going to share my own insights about PTAM, drawn from hands-on experience with it.

PTAM runs tracking and mapping in two separate threads. Inside the PTAM implementation we find two corresponding files, Tracker.cc and MapMaker.cc (not to be confused with MapViewer.cc). Before tracking can start, PTAM needs an initial map of the environment, which is built by the tracker. In System.cc there is a function called Run(); the tracking thread runs inside it. To build the initial map, the user supplies a stereo image pair, ideally of a planar surface. PTAM computes the initial pose from a homography, and the 3D coordinates of the initial map points are obtained by triangulation. The tracker then grabs each frame in a tight loop and calculates the camera pose; this is implemented inside the TrackFrame() function in Tracker.cc.
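The control flow described above can be summarised in a short sketch. Apart from Run() and TrackFrame(), which do appear in the PTAM sources, the class and function names below are simplified placeholders of my own; the real implementation is considerably more involved.

```cpp
#include <thread>

// Placeholder types standing in for PTAM's real classes; only the control
// flow mirrors the description above.
struct Frame {};

struct Camera {
    bool IsRunning() const { return running; }
    Frame GrabFrame() { return Frame{}; }
    bool running = false;   // a real implementation would read from hardware
};

struct Tracker {
    // Initial map: pose from a homography on a planar scene, points by triangulation.
    void InitialiseMapFromStereo(const Frame&, const Frame&) {}
    // Per-frame pose estimation (the role played by TrackFrame() in Tracker.cc).
    void TrackFrame(const Frame&) {}
};

struct MapMaker {
    // Background thread that keeps refining and extending the map (MapMaker.cc).
    void RunLoop() {}
};

// Simplified sketch of the Run() control flow in System.cc.
void Run(Tracker& tracker, MapMaker& mapMaker, Camera& camera) {
    // Mapping runs in its own thread, in parallel with tracking.
    std::thread mappingThread([&] { mapMaker.RunLoop(); });

    // 1. Build the initial map from a user-supplied stereo pair of a
    //    (preferably planar) scene.
    Frame first  = camera.GrabFrame();
    Frame second = camera.GrabFrame();   // the user moves the camera between grabs
    tracker.InitialiseMapFromStereo(first, second);

    // 2. Tight tracking loop: estimate the camera pose for every new frame.
    while (camera.IsRunning()) {
        tracker.TrackFrame(camera.GrabFrame());
    }
    mappingThread.join();
}
```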

Remote Human-Robot Operations with Adjustable Autonomy

This article is based on the work carried out in [1]. NASA's future space missions will include more and more interactive robots; the Curiosity rover recently sent to Mars is a good example. These kinds of robots require new remote-operation mechanisms to be used effectively. In such a tele-operated context, a human team must constantly supervise the robot and manually perform tasks whenever needed.

An important aspect of such operations is the ability to allocate tasks between humans and robots effectively. This capability is known as Adjustable Autonomy (or Adaptive Autonomy): the automation should be smart enough to adopt the level of autonomy required by changing situations. Human-robot interaction and adjustable autonomy are closely related and go hand in hand. Human-robot operations are highly scenario-dependent and tend to be specific to a given robot, which makes them hard to generalize. Given below is a subset of such human-robot operations.
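To make the idea a little more concrete, here is a minimal sketch of how an autonomy level might be selected from the current situation. The levels, fields, and thresholds are hypothetical illustrations of mine, not taken from [1].

```cpp
// Illustrative sketch of adjustable autonomy: the level of autonomy is chosen
// from the current situation rather than fixed in advance.
enum class AutonomyLevel { Teleoperation, SharedControl, Supervised, FullAutonomy };

struct Situation {
    double commDelaySeconds;   // round-trip delay to the remote operators
    double taskComplexity;     // 0 = routine, 1 = highly novel
    bool   operatorAvailable;  // is a human free to supervise or take over?
};

AutonomyLevel selectAutonomy(const Situation& s) {
    if (!s.operatorAvailable)      return AutonomyLevel::FullAutonomy;  // nobody available to supervise
    if (s.commDelaySeconds > 60.0) return AutonomyLevel::Supervised;    // long delays rule out direct control
    if (s.taskComplexity > 0.7)    return AutonomyLevel::Teleoperation; // novel task: the human handles it directly
    return AutonomyLevel::SharedControl;                                // otherwise, split the work
}
```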