Point and click guidance for Ellie the robot
March 20th, 2008 - 1:07 pm ICT by admin
New York, March 20 (IANS) Ellie helps people with limited mobility accomplish everyday tasks, fetching them things like towels, tablet bottles and telephones. Thank her, and you will probably receive a hum in response. Ellie (written El-E) is an extremely versatile robot. What makes her unique is that, unlike robots struggling to respond to speech or gestures, she works on a simple point-and-click model.
A team of researchers at the Georgia Institute of Technology and Emory University has found a way to instruct a robot to find and deliver an item (even one it has never seen before) using a laser pointer.
Ellie autonomously moves to an item selected with a green laser pointer, picks it up and then delivers it to the user, another person or a chosen location such as a table. Ellie has been so named for her ability to elevate her arm and for the arm’s resemblance to an elephant trunk, according to a Georgia Tech release.
A video on Ellie was recently presented at the International Conference on Human-Robot Interaction in Amsterdam.
The verbal directions a person gives to help someone find a desired object (“the cup over near the couch” or “the brush next to the red toothbrush”) are very difficult for a robot to use.
These types of commands require the robot to understand everyday human language and the objects it describes at a level well beyond the state of the art in language recognition and object perception.
“We humans naturally point at things but we aren’t very accurate, so we use the context of the situation or verbal cues to clarify which object is important,” said Charlie Kemp of Georgia Tech and Emory.
“Robots have some ability to retrieve specific, predefined objects, such as a soda can, but retrieving generic everyday objects has been a challenge for robots.”
The laser pointer interface and methods developed by Kemp’s team overcome this challenge by giving people a direct way to communicate the location of interest to Ellie, together with complementary methods that enable her to pick up an object found at that location.
Through these innovations, Ellie can retrieve objects without understanding what the object is or what it’s called.
In addition to the laser pointer interface, Ellie uses another approach to simplify her task. Indoors, objects are usually found on smooth, flat surfaces with uniform appearance, such as floors, tables, and shelves. Kemp’s team designed Ellie to take advantage of this common structure.
Regardless of the surface’s height, Ellie uses the same strategies to localise and pick up the object, elevating her arm and sensors to match the height of the object’s location.
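The “same strategy at any height” idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the team’s actual software: the surface labels and height thresholds are invented for the example, and only the lift height varies between fetches while one shared flat-surface routine does the rest.

```python
# Illustrative sketch only: surface categories and thresholds are
# assumptions, not values from the Georgia Tech / Emory system.

FLOOR, TABLE, SHELF = "floor", "table", "shelf"

def classify_surface(spot_height_m):
    """Label the support surface from the laser spot's height (metres)."""
    if spot_height_m < 0.15:
        return FLOOR
    elif spot_height_m < 0.90:
        return TABLE
    return SHELF

def plan_fetch(spot_height_m):
    """One grasp routine reused at every height: only the lift changes."""
    surface = classify_surface(spot_height_m)
    lift_m = max(spot_height_m, 0.0)  # elevate arm and sensors to the plane
    return {"surface": surface, "lift_m": lift_m, "routine": "flat_surface_grasp"}
```

The point of the design is that the robot never needs a separate floor, table, and shelf behaviour: once the arm and sensors are raised to the plane of the surface, every pickup looks the same.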
The robot’s ability to reach objects both from the floor and shelves is particularly important for patients with mobility impairments since these locations can be difficult to reach, Kemp said.
Ellie uses a custom-built omnidirectional camera to see most of the room. After detecting that a selection has been made with the laser pointer, the robot moves two cameras to look at the laser spot and triangulates its position in three-dimensional space.
Next, the robot estimates where the item is in relation to its body and travels to the location. If the location is above the floor, the robot finds the edge of the surface on which the object is sitting, such as the edge of a table.
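Triangulating a laser spot from two camera views, as described above, amounts to intersecting two viewing rays. A minimal sketch of that geometry, assuming each camera contributes an origin and a unit direction toward the spot (the function name and interface are illustrative, not from the actual system):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Least-squares midpoint of two camera rays.

    Each ray is origin p (camera centre) plus t * d (direction toward
    the detected laser spot). If the rays don't quite intersect due to
    noise, the midpoint of the closest approach is returned.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = p2 - p1
    # Normal equations from requiring the connecting segment to be
    # perpendicular to both rays:  t1 - (d1.d2) t2 = d1.b
    #                              (d1.d2) t1 - t2 = d2.b
    c = d1 @ d2
    t1, t2 = np.linalg.solve([[1.0, -c], [c, -1.0]], [d1 @ b, d2 @ b])
    # Midpoint between the closest points on each ray
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

For example, two cameras at the origin and at (2, 0, 0) whose rays both pass through (1, 1, 0) recover that point exactly; with real, noisy spot detections the result lands between the two near-intersecting rays.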