In the future, a robot could come into your room, recognize the bed you sleep on, infer that its environment is a bedroom, and then know to look in the closet where you are hiding. Fun… or scary? The software that enables this kind of environmental recognition is called SLAM++: simultaneous localization and mapping with added object recognition. It could lead to computers learning about objects and recalling them the way babies do, and even allow for a bit of augmented-reality imagination.

 

Renato Salas-Moreno, a PhD researcher specializing in robotic vision at Imperial College London, modified pre-existing SLAM software. The original version could only render lines, curves, and surfaces from objects scanned in its surroundings. But, as Salas-Moreno points out, these hills and valleys in the reconstructed geometry are meaningless without the rest of the information available about a particular object. To supplement these abilities, Salas-Moreno equipped the software with an object-recognition algorithm that lets the computer build a complete, familiar model of its environment from the objects around it, and use that model to move around, memorize, and recognize spaces.

 

[Image: object recognition in SLAM++]

(via SLAM++: Simultaneous Localisation and Mapping at the Level of Objects)

 

The new version of the software, called SLAM++, currently relies on a database of objects it should be able to recognize, with images uploaded manually. The Imperial College team uses KinectFusion scans to build the computer’s repertoire. Using a moving depth camera, the system constructs different sets of maps, or graphs, that it combines and expands as it recognizes objects in its surroundings that match objects in its database. The team can then assemble a map of the computer’s entire environment and use it to track the camera. The system optimizes the individually recognized objects by constraining them to a horizontal plane, the floor. This is needed because, as the computer moves and its perspective changes, objects are not automatically placed correctly relative to flat ground. The system can also compare the camera’s sensor data against pre-existing familiar maps to detect when objects have been moved.
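The mapping loop described above can be sketched in miniature. This is a hypothetical illustration, not the team’s actual code: real SLAM++ matches dense depth scans against 3D object meshes and optimizes a full 6-DoF pose graph, whereas here poses are plain 2D positions and the `PoseGraph` class, its fields, and the averaging step are all invented for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class PoseGraph:
    """Toy stand-in for a SLAM++ pose graph (hypothetical sketch)."""
    camera_poses: list = field(default_factory=list)   # one entry per frame
    object_poses: dict = field(default_factory=dict)   # object id -> world position

    def add_frame(self, camera_pos, detections):
        """Record a camera pose plus any database objects seen in the frame.

        `detections` maps an object id to its position measured relative
        to the camera in this frame.
        """
        self.camera_poses.append(camera_pos)
        for obj_id, rel in detections.items():
            world = (camera_pos[0] + rel[0], camera_pos[1] + rel[1])
            if obj_id in self.object_poses:
                # Re-observation: average with the stored estimate,
                # a crude stand-in for full graph optimization.
                old = self.object_poses[obj_id]
                world = ((old[0] + world[0]) / 2, (old[1] + world[1]) / 2)
            self.object_poses[obj_id] = world

graph = PoseGraph()
graph.add_frame((0.0, 0.0), {"chair": (2.0, 1.0)})
graph.add_frame((1.0, 0.0), {"chair": (1.0, 1.0), "table": (3.0, 0.0)})
print(graph.object_poses)  # chair seen twice, so its two estimates are fused
```

Re-observing an object from a new camera pose is what ties the graph together: the chair anchors both frames, so its position estimate stays consistent as the camera moves.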

 

At the moment, the SLAM++ database must contain scans of the exact objects the system can recognize. However, recognizing learned environments and “closing loops” lets the mobile computer move through its surroundings faster, since recognition frees up processing. The database also stores useful information about each object, such as its weight, the material it is made of, its conventional use, and its expected spatial orientation. In the future, the system should be able to recognize the general shapes of objects, like all types of chairs. The team hopes these abilities will let SLAM++ expand its own database and learn new objects independently.
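A database entry like the ones described might pair a scanned model with that practical metadata. This is only an illustrative sketch; the field names, file name, and `lookup` helper are all invented here, not taken from the SLAM++ implementation.

```python
# Hypothetical shape of a SLAM++ object-database entry: a dense scan
# plus the practical metadata the article mentions (weight, material,
# conventional use, expected orientation). All names are invented.
object_db = {
    "office_chair": {
        "mesh_file": "office_chair.ply",  # dense scan, e.g. from KinectFusion
        "weight_kg": 12.5,
        "material": "plastic/steel",
        "conventional_use": "seating",
        "upright_axis": (0, 0, 1),        # expected orientation vs. the floor
    },
}

def lookup(obj_id):
    """Return metadata for a recognized object, or None if unknown."""
    return object_db.get(obj_id)

print(lookup("office_chair")["conventional_use"])  # prints "seating"
```

Metadata like `upright_axis` is what would let the system place a recognized chair correctly relative to the floor plane rather than treating it as an arbitrary blob of geometry.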

 

Salas-Moreno and his team have also enabled the software to overlay virtual characters appropriately on objects it recognizes, such as chairs or the floor, creating an augmented-reality experience. The software has potential uses in video games, virtual reality, and AI-assisted CGI. The Imperial College team will present SLAM++ at this year’s Computer Vision and Pattern Recognition Conference in Portland, OR, running June 23-28.

 

 

C

See more news at:

http://twitter.com/Cabe_e14