Kille: Learning Objects and Spatial Relations with Kinect
Abstract
For humans to have meaningful interactions with a robotic system, the system should be capable of grounding semantic representations in their real-world referents, learning spatial relations, and communicating in spoken human language. End users also need to be able to query the system about which objects it already knows, so that teaching can proceed more efficiently. Such systems exist, but they require large sample sizes and therefore do not allow end users to teach the system additional objects when needed.
To overcome this problem, we developed a non-mobile system dubbed Kille that uses a 3D camera, SIFT features and machine learning to allow a tutor to teach the system objects and spatial relations. The system is built on the ROS (Robot Operating System) framework and uses OpenDial as its dialogue system, for which ROS support was written as part of this project. We describe the hardware of the system and the software used and developed, and we evaluate its performance. Our results show that Kille performs well given the small number of samples it learns from. In contrast to other approaches, we focus on learning from a tutor presenting objects rather than from a provided dataset. Recognition of spatial relations works well; however, no definitive conclusions can be drawn, largely due to the small number of participants and the subjective nature of spatial relations.
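As an illustrative sketch only (not code from the thesis), the ROS side of the ROS-OpenDial bridge mentioned above could be a minimal node that forwards recognition results to the dialogue component over a topic. The topic names and the string-based message format below are assumptions made for illustration.

```python
#!/usr/bin/env python
# Illustrative sketch only; not code from the thesis.
# Assumed (hypothetical) topics: /kille/recognized_object and /kille/dialogue_input.
import rospy
from std_msgs.msg import String


def on_object(msg):
    # Forward each recognized object label to the dialogue input topic,
    # where a bridge to the OpenDial dialogue manager would consume it.
    dialogue_pub.publish(String(data="object:" + msg.data))


if __name__ == "__main__":
    rospy.init_node("kille_opendial_bridge")
    dialogue_pub = rospy.Publisher("/kille/dialogue_input", String, queue_size=10)
    rospy.Subscriber("/kille/recognized_object", String, on_object)
    rospy.spin()
```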
Degree
Student essay
Date
2020-08-26
Author
de Graaf, Erik
Keywords
grounding
spatial relations
object learning
Language
eng