Spatial Human Robot Interaction Marker Platform (SHRIMP)

In my previous post, I highlighted Augmented Reality's (AR) potential to function as a novel paradigm in Human-Robot Interaction (HRI). Marker-less AR seems more suitable for this work, as it can readily mark points in space without demanding prior knowledge of the environment. In other words, we can look at any arbitrary environment and mark any point in it in real time. That post already demonstrated how we can persistently mark space by placing a virtual marker: the virtual markers remained anchored under changing camera perspectives, almost as if they were real.

Now we will continue from that point and see how such an AR-based spatial marker platform can be applied to HRI. In this article we present a case study in which we assimilate Augmented Reality into robot navigation. Virtual markers are overlaid on the video feed captured by a camera mounted on top of the robot. We mark a point in space simply by placing a virtual AR marker, and the robot then automatically navigates to the location we pointed at. My hypothesis is that just by pointing somewhere in space, we can readily perform HRI tasks, especially navigation. But before moving into application-specific details, let us dive into some background on HRI and marking space.
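To make the idea concrete, here is a minimal sketch, not the actual SHRIMP code, of how a marked point could be turned into motion commands once its position is known in the robot's world frame: the robot only needs the bearing and distance to the marker. All structures and gains here are hypothetical.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical sketch: steer a differential-drive robot towards a point
// that was marked with a virtual AR marker. How the marker position is
// recovered from the AR map is outside the scope of this sketch.
const double PI = 3.14159265358979323846;

struct Pose2D  { double x, y, theta; }; // robot pose in the world frame
struct Point2D { double x, y; };        // marked goal, on the ground plane

// Computes one step of (linear, angular) velocity: turn towards the
// goal first, then drive once roughly aligned.
void navigateStep(const Pose2D& robot, const Point2D& goal,
                  double& v, double& w)
{
    double dx = goal.x - robot.x;
    double dy = goal.y - robot.y;
    double distance = std::sqrt(dx * dx + dy * dy);
    double bearing  = std::atan2(dy, dx) - robot.theta;
    while (bearing >  PI) bearing -= 2.0 * PI;  // normalize to [-pi, pi]
    while (bearing < -PI) bearing += 2.0 * PI;

    w = 1.0 * bearing;                                      // proportional turn
    v = (std::fabs(bearing) < 0.3) ? 0.5 * distance : 0.0;  // drive when aligned
}

int main()
{
    Pose2D  robot = {0.0, 0.0, 0.0};
    Point2D goal  = {2.0, 1.0};  // the point we "marked" in space
    double v, w;
    navigateStep(robot, goal, v, w);
    std::printf("v = %.2f m/s, w = %.2f rad/s\n", v, w);
    return 0;
}
```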

Continue reading “Spatial Human Robot Interaction Marker Platform (SHRIMP)”

Parallel Tracking And Global Mapping

It's been a while since I last blogged, as I kept receiving piles and piles of work. I now realize that research is not a leisurely activity, although it may seem so to a person looking from outside. Apart from the usual experiments and literature surveys, doing tutorials, marking assignments, writing conference papers, and making presentations is enough to push a PhD student to his limits. Nevertheless, all these demands test one's potential for research and one's love for science. So I thought of putting all that aside and writing a post for the sake of contributing to scientific knowledge. In particular, I'd like to share some of the work I did in my research while taking a short break from my studies.

Marker-less Augmented Reality has been my primary source of curiosity from the day I started my PhD journey. With my research I am exploring ways to apply AR to HRI (Human-Robot Interaction) and to further improve the collaborative patterns between man and robot. Consequently I came up with a new interface based on a well-known marker-less AR platform named PTAMM (Parallel Tracking and Multiple Mapping). The interface I built can persistently mark an arbitrary point in space with a virtual (AR) object. What does persistence mean here? Suppose you have an AR object that is clearly visible through your camera. Now you change the camera perspective, move the camera to a different location, and return to the AR object from a different direction. You must still see the AR object anchored at its original location. The idea may sound simple, but systems with such functionality are still rare, even with powerful AR frameworks (i.e. PTAM, PTAMM) in existence. This is what I describe as Persistent Augmented Reality. My AR interface makes an AR object appear persistent over time and space, no matter how you move the camera. See the video below. But how does it work? What are the concepts behind it? These are the questions you might wonder about at this point.
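The mechanism behind that persistence is conceptually simple: the marker is stored once as a 3D point in the map's world frame, and on every frame it is re-projected into the image using the camera pose estimated by the tracker. Below is a minimal pinhole-projection sketch in plain C++; it is not PTAMM's actual code (PTAMM represents poses with the TooN library), just the underlying idea.

```cpp
#include <cstdio>

// Sketch only: a world-anchored AR marker re-projected each frame.
struct CameraPose {
    double R[3][3]; // world-to-camera rotation
    double t[3];    // world-to-camera translation
};

// Pinhole projection: world point -> pixel coordinates.
// fx, fy, cx, cy are the camera intrinsics. Returns false if the
// point is behind the camera.
bool project(const CameraPose& P, const double Xw[3],
             double fx, double fy, double cx, double cy,
             double& u, double& v)
{
    double Xc[3];
    for (int i = 0; i < 3; ++i)
        Xc[i] = P.R[i][0]*Xw[0] + P.R[i][1]*Xw[1] + P.R[i][2]*Xw[2] + P.t[i];
    if (Xc[2] <= 0.0) return false;  // behind the camera
    u = fx * Xc[0] / Xc[2] + cx;
    v = fy * Xc[1] / Xc[2] + cy;
    return true;
}

int main()
{
    // Identity pose: camera at the world origin, looking down +Z.
    CameraPose P = {{{1,0,0},{0,1,0},{0,0,1}}, {0,0,0}};
    double marker[3] = {0.1, -0.05, 2.0};  // the marked point, in world coords
    double u, v;
    if (project(P, marker, 500, 500, 320, 240, u, v))
        std::printf("marker drawn at pixel (%.1f, %.1f)\n", u, v);
    return 0;
}
```

Because the marker lives in world coordinates, moving the camera only changes R and t; the projected pixel moves accordingly, so the marker stays glued to its place in the scene, which is exactly the effect in the video.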

Continue reading “Parallel Tracking And Global Mapping”

PTAM Revealed

PTAM (Parallel Tracking & Mapping) is a robust visual SLAM approach developed by Dr. Georg Klein at the University of Oxford. It tracks the 3D pose of the camera rapidly, at frame rate, which makes it an ideal platform for implementing marker-less augmented reality. In this post I'm going to share my own insights into PTAM, drawn from my hands-on experience with it.

PTAM runs tracking and mapping in two separate threads. Inside the PTAM implementation we find two corresponding files, Tracker.cc and MapMaker.cc (not MapViewer.cc). Before tracking can start, PTAM demands an initial map of the environment, and this is built by the tracker. In System.cc there is a function called Run(); the tracking thread runs in this function. To build the initial map, the user should supply a stereo image pair, ideally of a planar surface. PTAM calculates the initial pose from a homography matrix, while the 3D coordinates of the initial map points are generated by triangulation. The tracker then grabs each frame in a tight loop and calculates the camera pose. It performs the pose calculation in the following manner, implemented inside the TrackFrame() function (Tracker.cc). Continue reading "PTAM Revealed"
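To visualize the two-thread design, here is a purely structural sketch, not PTAM's real code: the tracker consumes frames in a tight loop while the map maker refines a shared map in the background, with a mutex guarding concurrent access.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// Structural sketch of PTAM's threading model (not the actual code).
// Tracker.cc ~ per-frame pose estimation; MapMaker.cc ~ background
// map building and refinement.
struct Map {
    std::mutex mtx;
    std::vector<double> points;  // stand-in for the 3D map points
};

std::atomic<bool> running{true};

void trackerThread(Map& map) {
    while (running) {
        {   // estimate the camera pose against the current map
            std::lock_guard<std::mutex> lock(map.mtx);
            std::printf("tracker: pose from %zu map points\n", map.points.size());
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(33));  // ~frame rate
    }
}

void mapMakerThread(Map& map) {
    while (running) {
        {   // add keyframes, triangulate new points, refine the map, ...
            std::lock_guard<std::mutex> lock(map.mtx);
            map.points.push_back(0.0);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}

int main() {
    Map map;
    std::thread tracker(trackerThread, std::ref(map));
    std::thread mapmaker(mapMakerThread, std::ref(map));
    std::this_thread::sleep_for(std::chrono::seconds(1));
    running = false;
    tracker.join();
    mapmaker.join();
    return 0;
}
```

The point of the split is that expensive map optimization never blocks the per-frame pose estimate; that is what lets PTAM track at frame rate.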

Compiling ARToolkit on Ubuntu 10.04

Apparently I've been giving more thought to ARToolkit these days (mainly due to my research), which makes me attempt different things with it. Consequently I'll be writing a series of posts about my small experiments, for future reference. I know it's pretty boring stuff, talking about the same dull subject again and again… but there's a real thrill in doing it hands-on (particularly when you have nothing left to do 😉 ). In general, everybody feels great when their imagination turns into a physical realization.

So… enough lecturing; let me put my talking aside. In today's post I'll cover the installation of ARToolkit on Ubuntu 10.04. Continue reading "Compiling ARToolkit on Ubuntu 10.04"

Demystifying DSVL Configuration

In my previous post I mentioned how we can use ARToolkit in conjunction with a 3D rendering engine. There, the camera connection was made by ARToolkit via DSVL (DirectShow Video Library). As promised, in this post I'll explain how we can configure DSVL with its supported parameters.

DSVL is a wrapper for DirectShow, which in turn is part of DirectX. We all know that DirectX is the prominent media framework on Windows platforms. Besides DirectShow, Direct3D and DirectSound also belong to the DirectX family. So what is special about DirectShow? In a computer, data can be generated in many places, such as the file system, the network, TV cards, or video cameras, and the data produced at each of these sources comes in many formats. Front-end applications would otherwise have to communicate explicitly with each underlying data source and deal with its formats, which would be pretty cumbersome and hopelessly incompatible across hardware devices. This is where DirectShow comes in handy: it synchronizes and unifies the communication flow between our application and the underlying hardware. In particular, DirectShow talks directly with camera drivers, capture-card drivers, and so on, and feeds the results back to the user application. This is the simplest way to understand it; for more details go here: http://msdn.microsoft.com/en-us/library/windows/desktop/dd375454(v=vs.85).aspx .
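To give a taste of what such a configuration looks like, here is a sketch of a DSVL input file, modelled on the sample XML shipped with ARToolkit; treat the exact attribute names as illustrative rather than authoritative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<dsvl_input xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:noNamespaceSchemaLocation="DsVideoLib.xsd">
  <!-- An empty friendly_name picks the default capture device;
       show_format_dialog pops up the driver's own format dialog. -->
  <camera show_format_dialog="false" friendly_name="">
    <pixel_format>
      <!-- Request RGB32 frames; flip_v compensates for DirectShow's
           bottom-up bitmap orientation. -->
      <RGB32 flip_h="false" flip_v="true"/>
    </pixel_format>
  </camera>
</dsvl_input>
```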

Continue reading "Demystifying DSVL Configuration"

Combining ARToolkit & OGRE

AR and OGRE

It's wonderful to see AR applications running within our physical environment. To the average person it really looks like magic. But creating that magic was never easy. A professional magician has to acquire a proven set of tools and techniques in order to deliver a realistic illusion, and so it is with the developer! For years ARToolkit has served as the fundamental software package for crafting AR applications. However, it becomes far more rewarding to combine ARToolkit with a 3D graphics rendering engine, so that one can create 3D graphics or even animations and turn them into Augmented Reality. The objective of this article, therefore, is to share my own experience in combining an AR application with a 3D rendering engine.
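At its core, the combination comes down to a small piece of glue code: ARToolkit hands us the marker's pose as a 3x4 transformation matrix, and we copy it onto the scene node carrying our OGRE model. Here is a hedged sketch of that step, assuming the matrix produced by ARToolkit's arGetTransMat(); depending on your coordinate conventions you may still need to flip axes between the two libraries.

```cpp
#include <Ogre.h>

// Sketch: push an ARToolkit marker transform onto an OGRE scene node.
// patt_trans is the 3x4 pose matrix that arGetTransMat() fills in
// for a detected marker each frame.
void applyMarkerTransform(double patt_trans[3][4], Ogre::SceneNode* node)
{
    // Rotation part -> quaternion for the scene node.
    Ogre::Matrix3 rot(
        patt_trans[0][0], patt_trans[0][1], patt_trans[0][2],
        patt_trans[1][0], patt_trans[1][1], patt_trans[1][2],
        patt_trans[2][0], patt_trans[2][1], patt_trans[2][2]);
    node->setOrientation(Ogre::Quaternion(rot));

    // Translation part; ARToolkit works in millimetres.
    node->setPosition(Ogre::Real(patt_trans[0][3]),
                      Ogre::Real(patt_trans[1][3]),
                      Ogre::Real(patt_trans[2][3]));
}
```

Call this once per frame, after marker detection, and the OGRE model will appear to sit on the physical marker.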

Continue reading “Combining ARToolkit & OGRE”

Nature – Is it Just a Coincidence?

It seems quite a coincidence how things happen in nature. For instance, blooming flowers, the reproduction of species, and waveforms in the sea appear random and chaotic. There is no one behind these forces, driving them; they are driven by nature itself. However, further insight into these chaotic motions of nature reveals something unbelievable: things happen in nature according to a pattern.

Obviously, nature organizes things in the most effective and easiest manner. Large formations are constructed by combining smaller parts, yet these large formations are similar in shape to their smaller components. The fern leaf is the best example: the whole leaf is formed by the recursive placement of smaller leaflets. This phenomenon is known as fractals, self-similar patterns. Self-similar patterns can be seen everywhere: rock formations, the growth pattern of a leaf, the surfaces of broccoli and cauliflower, fractal mountains, lightning, and even clouds express this nature. The growth patterns of fractals are predictable; they are not just random. The following describes how. Continue reading "Nature – Is it Just a Coincidence?"
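As a small illustration of that predictability, here is a sketch using the textbook Barnsley fern coefficients (which do not come from this post): the entire fern is drawn by repeatedly applying one of four fixed affine maps, chosen with fixed probabilities. The shape emerges from the rules, not from chance.

```cpp
#include <cstdio>
#include <cstdlib>

// Barnsley fern: an iterated function system (IFS). Four affine maps,
// each picked with a fixed probability, generate a self-similar fern.
int main()
{
    double x = 0.0, y = 0.0;
    std::FILE* out = std::fopen("fern.txt", "w");  // one (x, y) point per line
    for (int i = 0; i < 100000; ++i) {
        double r = std::rand() / (double)RAND_MAX;
        double nx, ny;
        if (r < 0.01) {                       // stem
            nx = 0.0;                   ny =  0.16 * y;
        } else if (r < 0.86) {                // successively smaller copies
            nx =  0.85 * x + 0.04 * y;  ny = -0.04 * x + 0.85 * y + 1.6;
        } else if (r < 0.93) {                // largest left leaflet
            nx =  0.20 * x - 0.26 * y;  ny =  0.23 * x + 0.22 * y + 1.6;
        } else {                              // largest right leaflet
            nx = -0.15 * x + 0.28 * y;  ny =  0.26 * x + 0.24 * y + 0.44;
        }
        x = nx; y = ny;
        std::fprintf(out, "%f %f\n", x, y);
    }
    std::fclose(out);
    return 0;
}
```

Plotting the points in fern.txt reproduces the familiar fern, with each leaflet a shrunken copy of the whole: exactly the self-similarity described above.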