This project is concerned with how people understand, remember, reason about, and update their knowledge of real-world global-scale geography. I'm interested in determining the factors - both geographic and nongeographic (e.g., social, political, affective) - that affect people's estimates of location and distance and their beliefs about the relations among geographical entities. I'm interested in geographical reasoning for its own sake - because geographical knowledge has pragmatic importance and because geographical facts form a complex, real-world knowledge domain that is learned over the lifespan from a variety of sources and experiences (e.g., maps, books, travel, school). Research in this domain can thus reveal how multiple types of representations and strategies influence performance in a particularly rich domain. I'm also interested in geographical reasoning for what it can reveal about estimation processes in general, including what factors contribute to estimation biases and what attributes of new facts can ameliorate such biases. Finally, I'm interested in what geographical reasoning can reveal about spatial reasoning more generally (e.g., the role of categorical vs. perceptual information; the influence of maps and models on performance).
In the initial work in this area, Norman Brown and I found that people's subjective geography is categorical in nature: Both the Old and New Worlds are divided into nonoverlapping psychological regions with gaps between them and very little discrimination among the cities within them. Some countries have more than one region, and some regions consist of more than one country. In addition, the regions can be independently influenced by providing people with accurate information about, for example, the location of just one city. Furthermore, people's location estimates are systematically biased, and the bias increases as the locations being estimated are actually farther to the south (towards the equator). Finally, we have found that people who live in vastly different parts of North America (e.g., Canada, the southern and western United States, and Mexico) show the same characteristics in their data.
I am currently investigating how various types of perceptual supports and memory aids (including maps) affect global-scale location estimates. I am also examining some further cross- and within-cultural differences. Additional collaborators on this project have been Hector Cappello, Dennis Kerkman, Bernd Kohler, Aaron McGaffey, Dan Montello, and David Stea.
See the following publications for more information:
Friedman, A., & Brown, N. (2000a). Reasoning about geography. Journal of Experimental Psychology: General, 129, 193-219.
Friedman, A., Kerkman, D.D., & Brown, N. (2002). Spatial location judgments: A cross-national comparison of estimation bias in subjective North American geography. Psychonomic Bulletin & Review, 9, 615-623.
Friedman, A., & Kohler, B. (2003). Bidimensional regression: A method for assessing the configural similarity of cognitive maps and other two-dimensional data. Psychological Methods, 8, 468-491.
Friedman, A., Kerkman, D., Brown, N.R., Stea, D., & Cappello, H. (2005). Cross-cultural similarities and differences in North Americans' geographic location judgments. Psychonomic Bulletin & Review, 12, 1054-1061.
Friedman, A., & Montello, D.R. (2006). Global-scale location and distance estimates: Common representations and strategies in absolute and relative judgments. Journal of Experimental Psychology: Learning, Memory, & Cognition, 32, 333-346.
The abilities to discriminate between similar-looking objects (e.g., the side views of a horse and a zebra) and to recognize different-looking views of the same object (e.g., the front and side views of a wolf) are two complex, ecologically important skills that most organisms must have to survive. My research in this area has involved an ongoing comparative study of human and avian object recognition, in collaboration with Marcia Spetch and several of our students and colleagues. Our long-term goal is to develop an account of how novel 3D objects are learned and recognized and to extend the account to recognition of familiar objects and real-world scenes.
The project has taken us in several directions thus far. Using images of novel, 3D, depth-rotated objects, we initially found that pigeons were more "view-centered" than humans. That is, pigeons tended to recognize objects from the particular points of view that they had learned, but humans could interpolate between two learned views of the same object to recognize the "in-between" views as well as they recognized the learned views. However, after developing an apparatus that allowed us to use actual (but still novel) 3D objects as stimuli, we began exploring species differences in the direct viewing of real 3D objects versus viewing the same objects presented as 2D computer images. The data provided evidence that pigeons represent and process actual objects differently than their pictures. We believe these data provide support for a "view combination" theory of object recognition for both humans and pigeons but that (a) pigeons may see pictures differently than they recognize real objects, and/or (b) pigeons and people may have differently "tuned" representations of the views they have learned.
We also recently found that training birds with several relatively close views of either actual objects or their images facilitated transfer to the other stimulus type. This is the first time such transfer has been shown in a situation where it could not have been based on the similarity of 2D cues such as color, and it thus provides the strongest evidence yet that pigeons can recognize the correspondence between objects and pictures, at least in particular circumstances. The work therefore has broad implications for using pigeons as an animal model of visual processing for humans.
See the following publications for more information:
Spetch, M., Friedman, A., & Reid, S.L. (2001). The effect of distinctive parts on recognition of depth-rotated objects by pigeons and humans. Journal of Experimental Psychology: General, 130, 238-255.
Friedman, A., Spetch, M.L., & Lank, I. (2003). An automated apparatus for presenting depth-rotated three-dimensional objects for use in human and animal object recognition research. Behavior Research Methods, Instruments, and Computers, 35, 343-349.
This project began as an extension of the work with Marcia Spetch on human and avian object recognition, because sometimes objects can be recognized not just by their static properties, such as shape or color, but also by their characteristic motion. Our collaborator, Quoc Vuong (Max Planck Institute, Tuebingen, Germany), has shown that motion cues play a role in the speed and accuracy with which people can identify novel objects. We therefore think that motion probably also provides an important cue for the recognition of familiar objects. For example, consider a grasshopper and a snake. These creatures each have a characteristic biological motion, and we can easily discriminate between them simply by the way they move. Vuong and Tarr (Vision Research, 2004) identified several ways that motion cues might facilitate object recognition for people. These include the possibility that (a) motion may enhance the detection of an object's structure, and hence, the recovery of shape information; (b) motion provides multiple views of the object's shape, and thus affords the opportunity for broadly tuned representations; (c) motion may permit meaningful edges to be found more readily, and enhance the segmentation of a scene into discrete objects, or into foreground and background, which is a likely precursor to object recognition; (d) motion may provide information about how 2-D image features change over time; and (e) motion may allow observers to anticipate future views of objects. Thus, in the real world, object recognition may often make use of dynamic information.
We have begun to explore the similarities and differences between birds and humans in the use of dynamic (motion) cues for object recognition. The data thus far show that birds, but not humans, may use motion cues alone to discriminate between objects that are difficult to decompose into parts.
We are currently investigating whether motion facilitates or interferes with recognition based on an object's static properties, and we are also extending the research to a comparison of real and pictured moving objects.
See the following publication for more information:
Becoming familiar with an environment and navigating competently through it require the ability to integrate spatial information from different views and the ability to recognize places and scenes from vantage points you have not experienced. These abilities imply the existence of psychological mechanisms that compare spatial information from current and previously viewed perspectives. Together with David Waller and Eric Hodgson (Miami University, Ohio), I am exploring whether a mechanism that is believed to underlie the ability to recognize novel views of familiar objects is also used to combine views of coherent, real-world, complex scenes. We have conducted several experiments using photographs as stimuli and have recently constructed a virtual world that will enable better experimental control over a variety of factors. The evidence to date suggests that the view combination mechanism that may underlie both human and avian object recognition may also serve in the recognition of complex scenes. The challenge for the future will be to tease apart the contribution of the objects themselves to the recognition of the scenes in which they appear from the contribution of the encoding and retrieval of the spatial relations among the objects.
Being able to forage for food and remember where it has been found is an essential survival skill for many species of animals. Cheng (Cognition, 1986) discovered that rats can orient in their environment by using the geometric shape of an enclosure. For example, if food is found in one corner of a rectangular space, the rat will search both in the correct corner and the geometrically equivalent corner. Furthermore, the influence of geometry on the behavior of rats dominates over the influence of landmarks. The dominant use of geometry has been shown in other species including human toddlers (Hermer & Spelke, Cognition, 1996).
Alexandra Twyman, Marcia Spetch, and I are investigating whether 4- to 5-year-olds can be taught to overcome the influence of geometric cues and to use available environmental landmarks by training them in an environment without informative geometric cues -- for example, a room that is in the shape of an equilateral triangle. We are also investigating the role that verbal and nonverbal short-term memory might play in children's ability to use landmarks.