
is important for reading, then deaf people should be at a disadvantage when reading. Many people born deaf struggle to learn to read and do not reach a high level (Antia et al., 2020; Adlof et al., this volume). For those who do, and notwithstanding the hearing loss, phonology appears to be implicated. For instance, Blythe et al. (2018) monitored eye movements as participants read sentences silently. Some of the sentences contained misspellings. Deaf readers, just like those in the control group, were sensitive to whether the erroneous spellings were homophonic errors. Reading was less disrupted for a sentence such as “When mum cooks pasta I like grated cheeze on top of it” than for “When mum cooks pasta I like grated cheene on top of it.” This finding suggests that phonology is activated and that the orthographic code alone may not be enough for fluent reading, even for people with hearing impairments (but see Costello et al., 2021, for a different view). Further evidence in line with this view is that phonological knowledge and exposure to speechreading predict reading in primary school children with hearing impairments (e.g., Antia et al., 2020). A remaining question is whether the phonology‐based effects in deaf people come from spoken language (e.g., via speechreading) or can be based on sign language (e.g., Keck & Wolgemuth, 2020).

      Another group for whom the phonological code might be less salient are those learning to read in a second language. Learning to read in one's native language draws on years of experience with spoken language, but this familiarity with the phonological structure of the language is lessened in additional languages. If phonological effects on silent reading could be bypassed, one might expect this to happen when reading a nonnative language. Yet this is not what has been observed. There is evidence for phonological effects in second language reading, much as in first language reading (Chitiri & Willows, 1997; Friesen & Jared, 2012; Jouravlev & Jared, 2018).

      A final line of evidence comes from people with dyslexia. Problems with phonological processing and phonological awareness appear to be the core deficit in dyslexia (Snowling & Hulme, 2020; Wagner, this volume). To the extent that dyslexia is associated with phonological problems, this provides further evidence that phonology is implicated in silent reading.

      Taken together, the preceding findings provide strong evidence that phonology is involved in fluent silent reading. Although a few individuals may be able to reach high levels of reading skill without access to phonology (as argued by Costello et al., 2021), for most people phonology appears central both to learning to read and to skilled reading.

       Phonology Activation: Addressed or Assembled?

      There are two ways in which written words can activate phonology. First, a word in its entirety can be recognized as a familiar visual stimulus associated with a particular pronunciation. Patterson (1982) called this form of translation “addressed phonology” because the phonology is “looked up” after the visual word has been recognized. The second way is to translate parts of words (e.g., letters) into sounds. Termed “assembled phonology,” this makes it possible to pronounce pseudowords (sequences of letters that look like words, but that do not exist in the language, e.g., teel). Assembled phonology is also needed to pronounce new words (e.g., the name of a new colleague: “Mr. Brissbart”).
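The division of labor between the two routes can be sketched in a few lines of code. This is a minimal illustrative sketch, not a model of English: the lexicon entries, the letter-to-sound rules, and the pseudo-phonetic notation are all assumptions made for demonstration. A familiar word such as have is "looked up" as a whole (addressed phonology), whereas a pseudoword such as teel can only be pronounced by translating its parts into sounds (assembled phonology).

```python
# Illustrative sketch of addressed vs. assembled phonology.
# Lexicon entries and rules are toy assumptions, not real English data.

LEXICON = {"have": "hav", "gave": "geIv"}  # whole-word pronunciations (addressed route)
RULES = {"t": "t", "ee": "i:", "l": "l",   # letter-(combination)-to-sound rules
         "f": "f", "o": "Q", "z": "z"}     # (assembled route)

def addressed(word):
    """Look up the stored pronunciation of a familiar written word."""
    return LEXICON.get(word)

def assembled(word):
    """Translate letter combinations into sounds, longest match first."""
    sounds, i = [], 0
    while i < len(word):
        for size in (2, 1):                # try digraphs (e.g., "ee") before single letters
            chunk = word[i:i + size]
            if chunk in RULES:
                sounds.append(RULES[chunk])
                i += size
                break
        else:
            return None                    # no rule applies; word cannot be assembled
    return "".join(sounds)

def name_word(word):
    """Use addressed phonology if the word is known; otherwise assemble it."""
    return addressed(word) or assembled(word)
```

On this sketch, name_word("have") succeeds via lookup, while the pseudoword name_word("teel") succeeds only via the assembly rules, mirroring why pseudowords and new names require assembled phonology.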

      Languages differ in the extent to which assembled phonology is possible. Assembled phonology can only happen in languages with written symbols that represent sounds, such as alphabetic or syllabic languages. Assembled phonology is not possible in scripts in which written symbols primarily refer to word meanings. This is the case for logographic languages, such as Chinese, where words are coded by a combination of roughly 4,500 characters (Perfetti et al., 2013). Although Chinese characters often include systematic components (called radicals) that refer to sounds (Liu et al., 2020), it is virtually impossible to pronounce an unfamiliar Chinese character (see McBride et al., this volume).

      Languages with assembled phonology differ in the extent to which associations between orthography and phonology are transparent, a variable called orthographic depth (Frost et al., 1987; Schmalz et al., 2015; Ziegler & Goswami, 2005). English orthography is low in transparency. The pronunciation of letters (and letter combinations) often differs between words (compare tough – though – through, gave – have, hint – pint). Nobody can name the word awry correctly on the basis of assembled phonology alone. As a result, children learning to read in English require considerably more time to become fluent than children learning to read more transparent languages, such as Spanish, Italian, or Finnish (Caravolas et al., 2013; Galletly & Knight, 2004; Hanley et al., 2004; Lindgren et al., 1985; Reis et al., 2020; see Caravolas, this volume). In contrast, children reading a transparent language can pronounce nearly all written words on the basis of a simple set of translation rules, allowing them to easily learn new written words through self‐teaching (Share, 1999; Ziegler et al., 2014; Castles & Nation, this volume). The difficulties that children learning to read English face can be simulated by asking adult participants to name pseudowords. Pritchard et al. (2012) reported that some English pseudowords are easy to read aloud (e.g., ent, foz, gert), but that others take a long time to read and elicit more than 20 different pronunciations (e.g., jeich, sheche, tuise). In transparent languages, all words are of the former type; in opaque languages, there are many words of the latter type.
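The notion of orthographic depth can be made concrete as a simple count of how many distinct sounds a single spelling pattern receives. The pattern-to-sound pairs below are illustrative assumptions in a simplified notation, not dictionary-grade transcriptions:

```python
# Illustrative sketch of orthographic depth: in a transparent orthography
# a spelling pattern maps to one sound; in a deep orthography like English
# it maps to several. Mappings below are simplified assumptions.

# How the pattern "ough" sounds in different English words:
OUGH_SOUNDS = {
    "tough": "uhf",
    "though": "oh",
    "through": "oo",
}

def n_pronunciations(pattern_sounds):
    """Count distinct pronunciations of one spelling pattern:
    1 in a fully transparent orthography, more in a deep one."""
    return len(set(pattern_sounds.values()))
```

For the three words above the count is 3, whereas a transparent orthography such as Finnish would yield 1 for virtually every pattern, which is why simple translation rules suffice there.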

[Figure: Schematic illustration of the masked priming paradigm. A prime is presented briefly between a forward mask (e.g., XXXXXX or ######) and a target word.]

      Phonology‐mediated masked priming has been observed in many studies, at least when prime duration is slightly longer than 50 ms and when there is a considerable difference in phonology between the homophonic prime and the orthographic control prime (for review, see Rastle & Brysbaert, 2006). The activation of assembled phonology seems to be automatic: Phonological priming is still evident when strategic effects are minimized by reducing the proportion of phonological priming trials to only 9% (Xu & Perfetti, 1999). Another important demonstration came from Lukatela and Turvey (1994) who reported priming of the
