In my previous post, I highlighted Augmented Reality's (AR) potential to serve as a novel paradigm in Human-Robot Interaction (HRI). Marker-less AR seems more suitable for this work, as it can readily mark points in space without demanding prior knowledge of the environment. In other words, we can look at any arbitrary environment and mark any point in it in real time. We have already seen a demonstration of how placing a virtual marker lets us persistently mark space: the virtual markers remained stable under changing camera perspectives, almost as if they were real.
Now we will continue from that point and see how such an AR-based spatial marker platform can be applied to HRI. In this article I present a case study in which we assimilate Augmented Reality into robot navigation. Virtual markers are overlaid on the video feed captured by a camera mounted on top of the robot. We mark a point in space simply by placing a virtual AR marker, and the robot then automatically navigates to that location. My hypothesis is that just by pointing somewhere in space, we can readily perform HRI tasks, especially navigation. But before moving into application-specific details, let us dive into some background about HRI and marking space.
You might be wondering at this point: what does it mean to mark space? Imagine a chair in your room. The chair indicates a location for you to go and sit; in other words, it indicates a location for you to perform an action. Next, consider traveling in a city. A distinctive landmark (a tall building, statue, post, etc.) becomes naturally indexed in your mind, letting you identify your position within the city. To put it another way, you hold on to the locations of landmarks to build a cognitive map of the city as you move along the streets. Finally, imagine traveling through a jungle that is completely unknown to you. How could you trace your path? One solution would be to carry a handful of pebbles and place them at certain points so that you can readily trace your path back (a technique common among early explorers, who had no GPS). The pebbles remain persistent over time and space, no matter from which direction you return to them. In all these scenarios, spatial activities are performed by marking space with physical objects (physical markers). This behavior is natural: mammalian species, including human beings, tend to organize physical objects to represent space and to associate their actions with the structure of the environment.
In these ways, marking space means a lot to humans, since it provides a natural and intuitive way to understand and perform interactions. But what does it mean to bring this notion into the world of HRI? A spatial referencing system could play a role in multiple facets of the HRI eco-system, since the eco-system itself is multi-disciplinary. Wouldn't it be easy to point to a location in space and give the robot an instruction like 'Move here'? Wouldn't it be easy to place a virtual AR marker and let a fleet of small robots flock around it, instead of controlling them individually? My hypothesis is that such a spatial referencing platform, built with AR, would reduce the time it takes to complete HRI tasks. Secondly, the Spatial HRI Marker Platform (SHRIMP) that I built would reduce the workload for non-robotics experts. To evaluate this hypothesis, I came up with a case study.
As shown in the video above, I applied the AR-based marker platform (i.e. SHRIMP) to navigate a ground robot. The robot was operated in an indoor environment, and its region of traversal was obstacle-free. Although I used an Eddie robot with an external IMU (Inertial Measurement Unit) for this case study, SHRIMP is independent of the underlying robot platform. In my study, SHRIMP runs on a stationary desktop machine, and control commands are exchanged between the robot and the desktop over a wireless network. On the desktop, the human operator sees the robot's view of the environment through the camera mounted on top of the robot. Using SHRIMP, the operator places a virtual AR object in the robot's environment and, with a single button press, instructs the robot to navigate to that point. After the required distance (meters) and angle (radians) are determined, control signals are sent to the robot. The program running on the robot receives the distance and angle parameters and actuates the robot's motors to reach the destination. The robot first rotates toward the target and then moves in a straight line; I employed a closed-loop feedback algorithm to maintain steady straight-line motion. The robot's program continuously checks whether it has reached the target location, and once it has, the program stops and waits for the next signal.
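The actual Eddie source code is not included in this post, but the robot-side logic just described can be sketched roughly as follows. Everything here is my own illustrative assumption, not the real implementation: `marker_to_command` stands in for SHRIMP's conversion of a marked point into the distance/angle pair, `SimRobot` is a toy unicycle model replacing the real robot and IMU, and the gains and tolerances are arbitrary.

```python
import math

def marker_to_command(x, y):
    # Convert a marked point expressed in the robot's frame
    # (x forward, y left, in meters) into the (distance, angle)
    # parameters sent to the robot; angle 0 means straight ahead.
    return math.hypot(x, y), math.atan2(y, x)

def wrap(a):
    # Wrap an angle into (-pi, pi].
    return math.atan2(math.sin(a), math.cos(a))

class SimRobot:
    # Toy unicycle model standing in for the real robot, so the
    # control loop below can be exercised without hardware.
    def __init__(self):
        self.x = self.y = self.theta = 0.0

    def step(self, v, w, dt):
        # Apply forward speed v (m/s) and turn rate w (rad/s) for dt s.
        self.theta += w * dt
        self.x += v * math.cos(self.theta) * dt
        self.y += v * math.sin(self.theta) * dt

def navigate(robot, distance, angle, v=0.3, k=2.0, dt=0.02, tol=0.02):
    # Phase 1: rotate in place toward the target heading.
    target_heading = robot.theta + angle
    while abs(wrap(target_heading - robot.theta)) > 0.01:
        robot.step(0.0, k * wrap(target_heading - robot.theta), dt)
    # Phase 2: drive straight; the proportional heading term plays the
    # role of the closed-loop correction that keeps the motion straight.
    sx, sy = robot.x, robot.y
    while math.hypot(robot.x - sx, robot.y - sy) < distance - tol:
        robot.step(v, k * wrap(target_heading - robot.theta), dt)
```

For example, a marker placed one meter ahead and one meter to the left yields a distance of about 1.41 m at +45 degrees, and the simulated robot turns, drives, and stops near that point. The real system would replace `SimRobot` with motor commands and IMU/odometry readings over the wireless link.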
In this manner, I am trying to demonstrate that it is possible to navigate a robot just by marking a point in space. But such interactions are still open to debate. Does this really reduce the time needed to achieve the task? Would users accept it? Does it really help people with little or no knowledge of robotics interact with robots? Will it turn out to be the next-generation style of human-robot interaction?
Answers are still out there… awaiting…