our understanding of learning in general? One argument for the usefulness of these findings is that they have led to numerous experiments showing that active interactions with the environment facilitate learning more than passive “interactions”. In their pivotal work, Held and Hein [1963] showed that kittens initially reared without visual experience were unable to learn to understand their world, even after their vision was restored, unless they were given the freedom to move around of their own volition. As a comparison for the active learning experience given to this cohort, a separate cohort of kittens was moved by an experimental apparatus and therefore experienced “passive” movements. The crucial difference is that the first cohort received visual stimulation that was contingent on their own self-generated movements, whereas the second cohort experienced visual stimulation that was not contingent on their own movements. This study spurred a wealth of research on the role of active vs. passive experiences in learning. These lines of research have largely confirmed the value of self-generated action for visual perception in many domains.

      In the context of the intersensory redundancy discussed earlier, active learning is the avenue by which multimodal-multisensory signals in the brain arise from the body and environment. In what follows, our goal is to provide a brief overview of some of the empirical work on how these multimodal-multisensory signals affect learning. We briefly outline empirical work showing that active interactions facilitate learning, at times in surprising ways. We discuss findings from both behavioral and neuroimaging work. The behavioral data show the usefulness of learning through action in numerous domains, and the neuroimaging work suggests the mechanisms, at a brain-systems level, that support active learning.

      Actively interacting with one’s environment involves multisensory, multimodal processing. It is therefore not surprising that these types of interactions facilitate learning in many domains. Here we briefly review findings from behavioral experiments that demonstrate the far-reaching beneficial effects that active experience has on learning.

       2.3.1 Behavioral Research in Adults

       Visual Perception

      Although somewhat controversial, the majority of research in psychology separates sensation from perception. A primary basis for this separation is the finding that perception relies on learning: information from prior experience changes how we perceive. When these experiences involve active interaction, we can assume that perception is changed by action. Our first example comes from the phenomenon of size constancy. Size constancy refers to our ability to infer that an object maintains its familiar size despite large changes in retinal size that arise when its distance from us changes. Distance cues that support size constancy can come from object movement, observer movement, or a combination of the two. Research has shown that size constancy depends more on observer movement than on object movement: if retinal size is manipulated through object movement, constancy is worse than if it is manipulated through observer movement [Combe and Wexler 2010].
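
      As a point of reference, this relation is often formalized through the size-distance invariance hypothesis; the notation below is ours rather than the authors’. The perceived size \(\hat{S}\) is recovered by scaling the retinal (angular) size \(\theta\) by the perceived distance \(\hat{D}\):

      \[ \hat{S} \;\approx\; \theta \cdot \hat{D} \qquad \text{(small-angle approximation)}. \]

      On this reading, constancy holds only insofar as the observer has an accurate estimate of \(\hat{D}\), and the finding above can be interpreted as self-generated movement supplying a more reliable distance estimate than object movement does.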

       2.3.2 Spatial Localization

      It seems intuitive that locomotion is important for understanding the three-dimensional space around us. Beyond this intuition, several lines of research support the claim that self-generated movement, specifically, is the key to understanding the spatial locations of objects. For instance, if we encode the location of an object and then are blindfolded and walk to a new location, we can point to its original location with great accuracy. If we simply imagine moving to the new location [Rieser et al. 1986], or see the visual information that would occur if we were to move but do not actually move, we cannot accurately locate the target object [Klatzky et al. 1998]. Even if an individual is moved in a wheelchair by another person [Simons and Wang 1998] or in a virtual-reality environment, object localization is worse than if the movement were self-generated [Christou and Bülthoff 1999].
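
      A minimal way to formalize the updating that this pointing task requires (the notation is ours, not drawn from the cited studies): if the target is encoded at allocentric location \(\mathbf{p}\) and the observer moves to a new position \(\mathbf{x}\) while changing heading by a rotation \(R\), accurate pointing requires the egocentric target vector to be updated as

      \[ \mathbf{v}_{\mathrm{ego}} = R^{-1}\,(\mathbf{p} - \mathbf{x}). \]

      On this sketch, the behavioral results can be read as evidence that the estimates of \(\mathbf{x}\) and \(R\) feeding this update are reliable only when the displacement and rotation are self-generated.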

       2.3.3 Three-dimensional Object Structure

      Learning object structure is facilitated by active interactions in numerous ways. For instance, slant perception [Ernst and Banks 2002] and the recovery of object structure from shading cues [Adams et al. 2004] both support depth perception, and both of these perceptual competencies are facilitated by manual, active interaction with the stimuli. Remembering object structure is also facilitated by active manipulation of objects with one’s hands. In a series of studies, novel objects were studied either through an active interaction, in which participants rotated 3D images of the objects on a computer screen, or through a passive interaction, in which participants observed object rotations that had been generated by another participant. By pairing active and passive learning between subjects, one could compare learning a given object through visual and haptic sensation with involvement of the motor system (multimodal) against learning it through visual sensation alone (unisensory). Subsequently, participants were tested on object recognition and object matching (see Figure 2.2). Results were straightforward: when objects were learned through active interactions, recognition was enhanced relative to learning through passive interactions [Harman et al. 1999, James et al. 2001, 2002]. These results demonstrate that understanding object structure is facilitated by multimodal exploration relative to unimodal exploration. We hypothesized from these results that active interactions allow observers to control which parts of the object they see and in what sequence. This control allows an observer to “hypothesis test” about object structure and may be guided by preferences for certain viewpoints that the observer’s perceptual system has learned to be informative.

      Understanding object structure requires some notion that an object looks different from different viewing perspectives and that this does not change its identity (in the left panel of Figure 2.2, the upper-right quadrant presents the same object from different viewpoints, while the lower-right quadrant presents different objects from different viewpoints). This form of object constancy is referred to as viewpoint-independent object recognition. Many researchers and theorists agree that one way we achieve viewpoint-independent recognition is through mental rotation: the rotation of an incoming image to match a stored representation. A considerable amount of research has shown that the ability to mentally rotate an object image is related to manual rotation in important ways. Manual rotation is a specific form of active interaction that involves the volitional rotation of objects with the hands. Specifically, mental rotation is facilitated more by manual rotation practice than by mental rotation practice, and, further, mental and manual rotations rely on common mechanisms [Wohlschläger and Wohlschläger 1998, Wexler 1997, Adams et al. 2011]. Thus, multimodal experience contributes to the development of mental rotation ability, a basic process in spatial thinking, which in turn increases observers’ ability to understand their environment.
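
      The matching account of viewpoint independence can be sketched formally as follows; again, the notation is ours rather than the authors’. An incoming view \(\mathbf{x}_{\mathrm{input}}\) is recognized as matching a stored object representation \(\mathbf{x}_{\mathrm{stored}}\) if some rotation brings the two into approximate register:

      \[ \min_{R \in SO(3)} \left\| R\,\mathbf{x}_{\mathrm{input}} - \mathbf{x}_{\mathrm{stored}} \right\| < \varepsilon. \]

      On this reading, the results cited above suggest that the search over candidate rotations \(R\) draws on some of the same mechanisms used to plan and monitor manual rotations of real objects.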

      Figure 2.2 Examples of novel object decision: same or different object? (From James et al. [2001])

      Thus far, we have reviewed how physical interactions with the world affect our learning and understanding of existing objects. Symbols are objects with several qualities that make learning and understanding them different from learning and understanding 3D objects. When trying to understand symbols and symbolic relations, we must take something arbitrary and relate it to real-world entities or relationships. Evidence suggests that symbol understanding is facilitated when symbols are actively produced. Producing symbols by hand, which recruits multiple modalities during encoding (seeing them and producing them manually), facilitates learning symbol structure and meaning more than visual inspection alone. Lakoff and Nunez [2000, p. 49] argue that for symbols to be understood they must be associated with “something meaningful in human cognition that is ultimately grounded in experience and created via neural mechanisms.” Here, grounding in experience refers to theories of grounded cognition and to the self-generation of meaningful actions, whose purpose is to control the state of the organism within its physical environment by providing it with interpretable
