AR and its Role in Marking Space

It is perhaps surprising to realize that two things above all have troubled human ingenuity for centuries: space and time. These two are absolute benchmarks, used whenever we refer to a physical object in space or describe a past incident, though it is hard to say why we always frame our actions and events in relation to them. For instance, a special event (a birthday, perhaps) can be expressed in relation to time by marking that occurrence on a calendar, digitally or manually. We are capable of doing this because time, as we know it, is one-dimensional. It is less obvious whether we can deal with space in the same manner, because space is three-dimensional and offers the freedom to travel in multiple directions, as opposed to the single dimension of time. These considerations led our curiosity to one implicit yet strange feature of space: "How can we mark space?" I shall later describe the background for arriving at this notion. For the moment, let us accept this question and illustrate its logic by an analogy with our understanding of space.

From a biological point of view, human beings tend to use physical objects to designate places of interest, which helps them represent space and construct three-dimensional cognitive maps [Egerton, 2005]. The mammalian spatial referencing patterns described by Egerton [2005] organise physical objects into trails for tracing out specific points in their respective environments. Imagine you were exploring an unknown and complex environment and wanted to find your way back after the exploration. One solution would be to mark your trail with pebbles. The pebbles would persist, and you could readily trace back your path on the return, unless some ill-tempered being removed them all after you placed them. Extending this concept, imagine we could mark out any point in space with pebbles that remain persistent over time. We could then pinpoint an arbitrary location – even a point somewhere in front of our eyes – freely from any perspective while tracing out complex paths in all three dimensions of space. Extending the idea further, if the pebbles could convey information, they could be used to pass messages or communicate information to other travellers. Further still, if pebbles could express relationships with their neighbours, complex process models could be expressed.

Marking space with pebbles, however, is just a primitive illustration of the spatial referencing problem, and it is not a practical approach either. In the first place, we cannot place a pebble in open space, since it falls down; we are bound to mark locations at ground level only, not somewhere above it in free space. Secondly, consider interacting with a robot. Wouldn't it be useful to lay trails of markers for robots to follow? In that case we clearly need the freedom to place or remove markers at will as we guide robots to different destinations. Thirdly, imagine assisting a robot or a person at a remote location by providing markers to trace their route. This requires sharing markers between robots and users via a network. Here again, physical objects like pebbles are useless, as sending them over the wire is by no means possible.
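A virtual marker, by contrast, is just data. To make the contrast concrete, here is a minimal sketch of how a trail of such markers could be represented and shared over a network; the `VirtualMarker` class, its fields, and the JSON encoding are illustrative assumptions, not part of any existing system.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VirtualMarker:
    # Hypothetical shareable marker: a position in an agreed world frame.
    x: float
    y: float
    z: float           # unlike a pebble, z need not be ground level
    message: str = ""  # optional information the marker conveys

# A trail is simply an ordered list of markers...
trail = [VirtualMarker(0.0, 0.0, 0.0, "start"),
         VirtualMarker(1.5, 0.2, 0.0),
         VirtualMarker(2.0, 1.0, 1.2, "handle, 1.2 m above the floor")]

# ...which, unlike pebbles, can be sent over the wire verbatim,
# e.g. to a remote robot or user, and reconstructed on arrival.
payload = json.dumps([asdict(m) for m in trail])
restored = [VirtualMarker(**m) for m in json.loads(payload)]
assert restored == trail
```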

To accomplish this, we need the ability to mark out real points in space with virtual markers that behave the way physical markers do (i.e. they are persistent and observable from any point of view). A computational system looks ideal for the task, given that computer graphics and image processing techniques lend themselves to building one. But would it be possible to replace those pebbles completely with virtual graphics, so that we perceive no difference between the real and the virtual? If the answer is 'yes', such a system would be useful for developing human-robot interaction (HRI) to a much larger extent. Nevertheless, exactly how to mark a point in space remains an open question, and exploring the possibilities for doing so is the main focus of this research.

The flourishing new technologies of Augmented Reality (AR) have great potential for solving this problem. The ultimate goal of AR is to deliver the sensation that virtual objects are present in the real world [Cawood et al., 2007]. Special computer vision software achieves this effect by rendering virtual 3D objects into the real-time video so that they appear to belong to the scene displayed by the camera. Augmenting real-world space in this way offers a robust answer to the core question highlighted above: how can we mark space? Its applications for HRI are multidimensional, spanning navigation, programming, gaming, Learning from Demonstration (LfD) [Argall et al., 2009], multi-robot collaboration, and more. This raises several further questions (a concrete sketch of the rendering idea follows the list below):

  1. What types of AR applications exist for HRI frameworks?
  2. What existing methods allow virtual markers to be placed and to appear through
    the vision system of a robot?
  3. Is it possible to place a marker (mark space) persistently with AR? If so, what
    are the methods, tools, and techniques?
  4. How do we apply an AR-based spatial referencing system to HRI?
  5. How do we test whether such an application improves HRI? How can we measure
    its efficiency?
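
Before turning to these questions, the rendering idea above can be made concrete. The following is a minimal sketch of placing a persistent virtual "pebble" into live camera video using OpenCV's ArUco module; it assumes OpenCV 4.7+ (opencv-contrib-python), and the marker size, calibration values, and pebble offset are placeholder assumptions rather than values from this research.

```python
import cv2
import numpy as np

# --- Placeholder assumptions: marker size and camera calibration ---
MARKER_LENGTH = 0.05  # marker side length in metres
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])  # from a prior cv2.calibrateCamera
dist_coeffs = np.zeros(5)                     # lens distortion coefficients

# 3D corners of the marker in its own frame (z points out of the marker),
# matching the corner order returned by the ArUco detector.
h = MARKER_LENGTH / 2.0
object_corners = np.array([[-h,  h, 0], [ h,  h, 0],
                           [ h, -h, 0], [-h, -h, 0]], dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        for marker in corners:
            # Recover the marker's 6-DOF pose relative to the camera.
            _, rvec, tvec = cv2.solvePnP(object_corners, marker.reshape(4, 2),
                                         camera_matrix, dist_coeffs)
            # A virtual "pebble" floating 10 cm out of the marker plane:
            # project its 3D position into the image and draw it, so it
            # appears to belong to the scene from any viewpoint.
            pebble, _ = cv2.projectPoints(np.float32([[0, 0, 0.10]]),
                                          rvec, tvec,
                                          camera_matrix, dist_coeffs)
            x, y = pebble.ravel()
            cv2.circle(frame, (int(x), int(y)), 8, (0, 255, 0), -1)
    cv2.imshow("AR pebble", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```

The virtual pebble stays registered to the same physical spot as the camera moves, which is precisely the persistence property asked about in question 3.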

The questions listed above constitute our main research problems. We examine them against the hypothesis stated below.

If it is possible to develop an AR interface which addresses the problem of marking space persistently, then such an interface improves human-robot interaction.

We treat this as a null hypothesis at the outset, and then continually evaluate its truth or falsity throughout the course of this research.

References

S. J. Egerton. From mammals to machines: towards a biologically inspired spatial integration system for autonomous mobile robots. PhD thesis, University of Essex, 2005. URL http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413641.

Stephan Cawood, Mark Fiala, and D. H. Steinberg. Augmented Reality: A Practical Guide. Pragmatic Programmers, LLC, 2007. ISBN 9781934356036. URL http://pragprog.com/book/cfar/augmented-reality.

Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.
