Michael J. Tarr: Exploring Visual Navigation and Object Recognition with Virtual Reality: Early Lessons from the VENLab
We have developed a large-scale (50'x50') virtual
environment (the VENLab) for studying questions in visual navigation
and object recognition. Unique to our facility is the ability of
subjects to actively walk around an artificial environment in which
we can carefully control the position and the appearance of objects.
The first experiments we have run in the VENLab address four
questions: 1) What is the role of visual flow information in
different regions of the scene with regard to the accuracy of dead
reckoning? 2) What are the relative roles of dead reckoning and
landmarks in navigation? And are local and global landmarks weighted
equally? 3) Does the spatial updating of viewpoint differ when
orientation changes are generated by observer motion rather than by
object rotation? 4) Does falling off a
50' virtual cliff hurt?