Knowledge Is in the Driver's Seat: Verbal Efficiency Theory Represented Concretely as a School Bus
A Response to Chapter 6 of Perfetti's (1985) Reading Ability, "Verbal Efficiency Theory." See also Perfetti's (2007) "Reading Ability: Lexical Quality to Comprehension" in Scientific Studies of Reading.
Summary and Response.
In this chapter, Perfetti outlines verbal efficiency theory as a way to explain individual differences in reading. He explains that reading ability is multi-faceted and that reading can be understood in terms of cost and product (comprehension). If memory and attention are reduced, processing will be inefficient. For a process to count as efficient, comprehension quality is considered in relation to the level of expenditure within each processing resource. Schema activation and lexical access should ideally be low in energy expenditure, while propositional encoding should take the bulk of the effort. When these processes are out of balance, comprehension is compromised.
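The cost/product relation above can be made concrete with a toy calculation. This is only a sketch of the idea, not Perfetti's own formulation: the process names follow the chapter, but the numbers and the simple product-per-cost ratio are my hypothetical illustration.

```python
# Toy illustration (hypothetical numbers, not Perfetti's) of efficiency
# as comprehension product relative to total resource expenditure.

def efficiency(product, costs):
    """Comprehension product per unit of total resource cost."""
    return product / sum(costs.values())

# Balanced reader: cheap lexical access and schema activation leave
# capacity for propositional encoding, which carries the bulk of the work.
balanced = {"lexical_access": 1, "schema_activation": 1,
            "propositional_encoding": 4}

# Imbalanced reader: effortful lexical access eats up resources, so the
# same total expenditure yields a poorer comprehension product.
imbalanced = {"lexical_access": 4, "schema_activation": 1,
              "propositional_encoding": 1}

print(efficiency(product=9.0, costs=balanced))    # 1.5
print(efficiency(product=4.0, costs=imbalanced))  # ~0.67
```

The point of the sketch is that both readers spend the same total effort; what differs is where it goes, and comprehension suffers when low-level processes claim the share that propositional encoding should have.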
The limits on efficiency are different for different processes: proposition encoding, even when maximally efficient, will be more resource-costly than a maximally efficient lexical access process. Individual differences are explained as a result of inefficient text-work processes. These processes (lexical access, propositional assembly, and text modeling) are cascading; that is, they do not take place one after another but overlap. Proficient readers don't decode and then "translate" into meaning, though my ESL students sometimes have to fall back on this less proficient, serial processing. Because the processes of reading overlap, before-, during-, and after-reading processes are thought of as overlapping as well.
Verbal efficiency assumes that reading ability consists of being able to allocate resources in accordance with the text work demanded by an ideal text. An ideal text is one at the student's instructional level, within the zone of proximal development. Is this allocation a result of metacognition, or is it subconscious? Is it voluntary or involuntary? Surely it's like breathing: if readers stop to analyze how they should be spending their effort, reading becomes more difficult, just as breathing does when one stops to think about it.
Efficiencies vary across texts, processes, and individuals. A text with low readability will require more effort. A text that is difficult for a reader with low memory capacity is easier for a reader with higher memory capacity. It's hard for adults to understand how low-level processes can be so important for comprehension. (The doctor sentence seems very simple for such a complex discussion; I didn't understand the big deal until the Ephemerella example. Perversely, I'd like to see a similar discussion of this chapter and how to make it more readable!)
With regard to text work and reading ability, Perfetti argues that two individuals can show equal comprehension when time is uncontrolled but not when it is kept short, which I found interesting. So the argument is that extended time won't skew measures of reading ability, because these readers just can't "get" it no matter what? This reminds me of my Algebra 3/trigonometry exam in high school: I took four hours to complete it and made a 69.5. It took all of my available resources to barely pass! I comprehended little, but I tried hard.
There are two hypotheses regarding verbal efficiency: the lexical access hypothesis and the intrinsic memory hypothesis. The former assumes that lexical access is the main cause of difficulty because slow decoding makes it harder to work with text in working memory. Is inefficient access itself interfering, or are low-quality codes produced as a result? Difficulty decoding could disrupt the integration of propositions, making text hard to comprehend as readers struggle to parse phrases; alternatively, decoding difficulties could create a fragmented code in which semantic or phonetic information is missing. Minor problems with a single word (for me, "propositions") can disrupt comprehension. Code asynchrony is another possible issue: a student can read a word without retrieving its pronunciation in time and so fail to recognize a word they "know" in other contexts. The wrong word can also be retrieved; I've observed this error frequently, especially with "little" words such as "a" and "the."
The intrinsic memory hypothesis holds that lexical access is performed efficiently but working memory is limited, so words and propositions are forgotten before meaning is assembled. Perhaps schemata are inefficiently activated. The idea that schemata can be "optimally bound" to specific words, part of the intrinsic memory hypothesis, could explain what occurs when I read a sentence that continues on the next page and more often than not fill in the correct word or words. If too many schemata are activated, however, that could explain why the mind sometimes wanders while we read. There are two possible causes of memory loss: hysteresis and specificity-ordering.
Either hypothesis is plausible; a general linguistic coding process reconciles the two, because it affects both the speed and quality of lexical access and the manipulation of codes in memory.