Spatial Human Robot Interaction Marker Platform (SHRIMP)

In my previous post, I highlighted the potential of Augmented Reality (AR) to serve as a novel paradigm in Human-Robot Interaction (HRI). Marker-less AR seems more plausible for this work, as it can readily mark points in space without requiring prior knowledge of the environment. In other words, we can look at any arbitrary environment and mark any point in it in real time. We have already seen a demonstration of how placing a virtual marker lets us mark space persistently: the markers remained fixed under changing camera perspectives, almost as if they were real.

Now we will continue from that point and see how such an AR-based spatial marker platform can be applied to HRI. In this article we present a case study in which we integrate Augmented Reality into robot navigation. Virtual markers are overlaid on the video feed captured by a camera mounted on top of the robot. We mark a point in space by placing a virtual AR marker, and the robot then automatically navigates to that location. My hypothesis is that just by pointing somewhere in space, we can readily perform HRI tasks, especially navigation. But before moving into application-specific details, let us cover some background on HRI and marking space.
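To make the pipeline concrete, here is a minimal sketch of the geometry involved in turning a placed marker into a navigation goal. It back-projects a clicked pixel with known depth into a 3-D camera-frame point via the pinhole camera model, then transforms it into the world frame using the camera pose. All names, intrinsic values, and the camera pose here are illustrative assumptions, not the actual SHRIMP implementation.

```python
import numpy as np

# Hypothetical camera intrinsics (illustrative values only).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def pixel_to_camera_point(u, v, depth):
    """Back-project pixel (u, v) with known depth (metres) into a
    3-D point in the camera frame, using the pinhole camera model."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.array([x, y, depth])

def camera_to_world(point_cam, R, t):
    """Transform a camera-frame point into the world frame, given the
    camera pose as rotation R (3x3) and translation t (3,)."""
    return R @ point_cam + t

# Example: a marker placed at the image centre, 2 m ahead of a camera
# sitting at the world origin with identity orientation.
goal = camera_to_world(pixel_to_camera_point(319.5, 239.5, 2.0),
                       np.eye(3), np.zeros(3))
print(goal)  # world-frame point that would be sent as the navigation goal
```

In a real system the depth would come from the AR tracker's scene reconstruction and the pose from its localisation, and the resulting goal would be handed to the robot's navigation stack.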
