The robot was driven through the environment three times: one "clean" run with as low a noise level as possible, one "noisy" run with people walking through the environment, and one "home-tour" run in which a person guides the robot around the house. Each run took around 5
For more information on the time-stamping, the format of the files available for download, and the specifications of the sensors used, see the respective README files.
Annotation: We restrict ourselves here to a simple but still rich set of spatial concepts. We start from the human concept of distinct rooms in a home environment. Within each room we selected a number of prominent objects, which are roughly segmented by hand in the omnidirectional images. The region close to an object is defined as the region from which the object can be used in the manner common for that object. The data was annotated by an inexperienced person who had never visited the environment and relied only on the omnidirectional images and a rough drawing of the environment map. Every second frame was annotated; see the example movie (900 KB). The annotation is provided in XML format; the XML structure for one frame is described in frame.xml.
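Since the annotation is shipped as per-frame XML, a small parser is easy to write in any language. The sketch below uses Python's standard library; note that the element and attribute names (`frame`, `room`, `object`, `polygon`) are hypothetical stand-ins — the actual schema is the one described in frame.xml.

```python
import xml.etree.ElementTree as ET

# Hypothetical frame annotation in the spirit of the dataset's XML format;
# the real element/attribute names are defined in frame.xml.
SAMPLE = """
<frame id="42" timestamp="1234.56">
  <room>kitchen</room>
  <object name="sink">
    <polygon>10,20 30,20 30,40 10,40</polygon>
  </object>
</frame>
"""

def parse_frame(xml_text):
    """Parse one annotated frame into a plain dictionary."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.findall("object"):
        # The rough manual segmentation is stored as a polygon of pixel points.
        points = [tuple(map(int, p.split(",")))
                  for p in obj.findtext("polygon", "").split()]
        objects.append({"name": obj.get("name"), "polygon": points})
    return {
        "id": int(root.get("id")),
        "timestamp": float(root.get("timestamp")),
        "room": root.findtext("room"),
        "objects": objects,
    }

frame = parse_frame(SAMPLE)
print(frame["room"], [o["name"] for o in frame["objects"]])
```

Once the frames are in this form, room labels and object polygons can be matched against the image data by timestamp.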
Download: The data is available via http://www2.science.uva.nl/sites/cogniron . Some useful MATLAB functions are also provided: reading the annotation, geometric transformations for the omnicam, etc. A detailed description is given here.
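The provided MATLAB utilities include geometric transformations for the omnicam. The core idea of the most common such transform can be sketched in a few lines of Python: for a catadioptric omnidirectional camera, the azimuth of a scene point corresponds to the polar angle of its image pixel around the mirror centre. The centre coordinates below are placeholder assumptions, not calibration values from the dataset.

```python
import math

def pixel_to_bearing(u, v, cu, cv):
    """Bearing angle (radians) of pixel (u, v) around the mirror centre (cu, cv).

    Sketch only: assumes the mirror centre is known; real calibration data
    comes with the dataset's own MATLAB functions.
    """
    return math.atan2(v - cv, u - cu)

def pixel_to_radius(u, v, cu, cv):
    """Radial distance from the mirror centre, which encodes elevation."""
    return math.hypot(u - cu, v - cv)

# A pixel directly to the right of an assumed centre at (300, 300)
# has bearing 0; one directly below it has bearing pi/2.
b0 = pixel_to_bearing(400, 300, 300, 300)
b1 = pixel_to_bearing(300, 400, 300, 300)
print(b0, b1)
```

This polar unwrapping is what turns the ring-shaped omnidirectional image into a panoramic strip, with the bearing as the horizontal axis.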
Credits: The following persons were involved in making this dataset (in alphabetical order): Anne Doggenaar, Bas Terwijn, Ben Krose, Edwin Steffens, Elin Anna Topp, Henrik Christensen, Matthijs Spaan, Olaf Booij, Ruben Boumans, Zoran Zivkovic. Special thanks to UNET for making their space available for the experiments.