THEORETICAL FOUNDATIONS

We start with the assumption that you are aware of the significance of our reading problems and their powerfully negative effect on the intellectual and emotional self-perceptions of our children, and that you are aware of the economic consequences of these problems: hundreds of billions of dollars lost each year. (Click 'More On…', now and hereafter, if you would like to read more detail before proceeding.)

Next, we will assume that you are in general agreement with the following excerpts from the National Institute of Child Health and Human Development's Research Program in Reading (emphasis ours):

In essence, to learn to read, the individual must discover that spoken words can be segmented into smaller units of sound, that letters on the page represent these sounds, and that written words have the same number and sequence of sounds heard in a spoken word.

In essence, the beginning reader must learn the connections between the 26 letters of the alphabet and the approximately 44 English-language phonemes. The understanding that written spellings systematically represent the phonemes of spoken words is termed "the alphabetic principle" and is absolutely necessary for the development of accurate and rapid decoding and word reading skills.

Specifically, in order for the novice reader to begin to devote more attention and memory capacity to the text that is being read for strong comprehension to occur, phonological and decoding skills must be applied accurately, fluently and automatically. Laborious application of decoding and word recognition skills while reading text reduces attentional and memory resources, thus impeding reading comprehension.

The most frequent characteristic observed among children and adults with reading disabilities is a slow, labored approach to decoding or "sounding-out" unknown or unfamiliar words and frequent misidentification of familiar words. Oral reading is hesitant and characterized by frequent starts and stops and multiple mispronunciations.

G. Reid Lyon, Ph.D.
Chief, Child Development and Behavior Branch
National Institute of Child Health and Human Development
National Institutes of Health
(More On…)

Comprehension is negatively affected by, and inversely related to, the dissipation of attentional and memory resources: the greater the demand for these mental resources during decoding, the less mental 'bandwidth' is available for comprehension processing.
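To picture this trade-off, here is a toy fixed-budget model; the fixed budget and its units are our simplification for illustration, not a model specified by the NICHD excerpts:

    # Toy fixed-budget model of the decoding/comprehension trade-off
    # (our simplification for illustration; not an NICHD model).
    TOTAL_RESOURCES = 100.0  # arbitrary units of attention/memory

    def comprehension_bandwidth(decoding_demand: float) -> float:
        """Whatever decoding consumes is unavailable to comprehension."""
        return max(TOTAL_RESOURCES - decoding_demand, 0.0)

    for demand in (10.0, 50.0, 90.0):
        print(f"decoding demand {demand:>5} -> bandwidth {comprehension_bandwidth(demand)}")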

Decoding accuracy, fluency, and automaticity all depend on how well the reader "learn[s] the connections between the 26 letters of the alphabet and the approximately 44 English-language phonemes." It is "absolutely necessary" that readers be able to "systematically" use the alphabetic principle to decode letters into sounds.

The principal significance of the alphabetic principle is not the principle itself. Decoding may be initially enabled by a principle (a basic truth, law, or assumption), but more significant than how decoding begins is what it requires: learning to remember and PROCESS a complexly ambiguous 'code'.

This code has two major problems. 1) English has approximately 44 sounds, but the alphabet has only 26 letters with which to represent them. To compensate, we have evolved idiomatic ways to make some letters, but not all letters, represent additional sounds depending on which other letter(s) accompany them. 2) In addition to this fundamental phoneme-grapheme correspondence mismatch (and in significant part exacerbated by it), our system of spelling has evolved over a thousand ways to spell the 44 sounds with the 26 letters.
(More On…)   
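To make this many-to-many mismatch concrete, here is a minimal sketch; the sound labels and example spellings below are illustrative choices, not an exhaustive inventory:

    # An illustrative (not exhaustive) sample of English's many-to-many
    # grapheme-phoneme mapping: one letter can stand for several sounds,
    # and one sound can be spelled several ways.

    # One grapheme -> many phonemes: the letter 'c'
    sounds_of_c = {
        "cat": "/k/",     # hard c
        "city": "/s/",    # soft c
        "cello": "/tʃ/",  # borrowed Italian value
        "ocean": "/ʃ/",   # c in the '-cean' ending
    }

    # One phoneme -> many graphemes: the sound /iː/ as in 'see'
    spellings_of_long_e = ["ee", "ea", "e", "ie", "ei", "ey", "i", "e_e"]
    # e.g. see, sea, me, field, receive, key, ski, these

    print(f"'c' can cue at least {len(set(sounds_of_c.values()))} different sounds")
    print(f"/iː/ has at least {len(spellings_of_long_e)} common spellings")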

The only way to resolve a letter's sound ambiguity is to process its surrounding letters and their ambiguities. In the case of 'cake', there are over 5 possible sounds for the C, which can depend on over 7 possible sounds for the A, which can also depend on the K's sounds and the E's sounds. In addition to this ambiguity, the letters in one word can sound different depending on what other words accompany them in the sentence: 'read' sounds like 'reed' in the present tense and like 'red' in the past.
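The 'read' case shows that disambiguation can reach beyond the word itself. As a toy illustration (a crude hypothetical rule, not a real disambiguator), here is the kind of context-dependence a reader must resolve:

    # Toy illustration (a crude hypothetical rule, not a real
    # disambiguator): the same spelling 'read' resolves to different
    # sounds depending on the other words in the sentence.
    def pronounce_read(sentence: str) -> str:
        past_cues = ("yesterday", "had", "have", "was", "already")
        if any(cue in sentence.lower().split() for cue in past_cues):
            return "/rɛd/ (like 'red')"
        return "/riːd/ (like 'reed')"

    print(pronounce_read("I will read the book tomorrow."))  # /riːd/
    print(pronounce_read("I had read the book yesterday."))  # /rɛd/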

Unlike math, where there is time to work the problems, this disambiguation process has to happen fast enough to sustain the flow of comprehension. In order to sustain that flow, readers have less than 50 thousandths of a second (50 milliseconds) to resolve each letter into its right sound. (More On…)
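A back-of-the-envelope check makes the figure plausible; the reading rate and average word length below are our assumptions, not measured values:

    # Back-of-the-envelope check of the 50 ms figure, assuming a fluent
    # reading rate of about 250 words per minute and roughly 5 letters
    # per word (both numbers are our assumptions).
    words_per_minute = 250
    letters_per_word = 5

    letters_per_second = words_per_minute * letters_per_word / 60
    ms_per_letter = 1000 / letters_per_second

    print(f"{letters_per_second:.0f} letters/s -> {ms_per_letter:.0f} ms per letter")
    # ~21 letters/s -> ~48 ms per letter, consistent with the <50 ms claim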

Obviously, when learning to read, the process of disambiguating the code's inherent grapheme-phoneme-spelling ambiguities drains attentional and memory resources and reduces their availability for comprehension processing.

How significant a role does the code's ambiguity play in sapping the processing resources necessary to sustain the flow of decoding above the threshold of sufficiency for strong comprehension?

Because it is currently impossible to compare the performance of developing readers of a phonetic (or significantly less ambiguous) English orthography with that of developing readers of today's English orthography, the research data cannot support any direct answer to this question. However, in theory, it is clear that the ambiguity problem is substantial:

The greater the number of ambiguous letter-sounds (and letter-sound combinations) coexisting in a word, the greater the number of iterations of ambiguity reduction required before the word can be virtually heard or spoken. The greater the number of ambiguity-reducing iterations (disambiguations) involved, the greater the demand on the reader's attentional resources: the longer the 'span' of attention required. And the longer the span of attention required, the greater the vulnerability to miscues in decoding, causing dropouts from the decoding-stream flow rate necessary to sustain the flow of comprehension.
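A crude count suggests how quickly this compounds. The candidate counts for C and A come from the 'cake' example above; the counts for K and E are our assumptions for illustration:

    # Crude count of the naive candidate space for 'cake'. Counts for
    # 'c' and 'a' come from the example above; counts for 'k' and 'e'
    # are our assumptions for illustration.
    candidate_sounds = {"c": 5, "a": 7, "k": 2, "e": 3}

    combinations = 1
    for letter, count in candidate_sounds.items():
        combinations *= count

    print(f"naive candidate pronunciations for 'cake': {combinations}")  # 210
    # Even though most branches are pruned quickly, each additional
    # ambiguous letter multiplies the space the reader must search.

Again, from NICHD: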

The most frequent characteristic observed among children and adults with reading disabilities is a slow, labored approach to decoding or "sounding-out" unknown or unfamiliar words and frequent misidentification of familiar words. Oral reading is hesitant and characterized by frequent starts and stops and multiple mispronunciations.

Our hypothesis is that the 'hesitation' and 'frequent starts and stops' of struggling readers correspond to stutters and dropouts in the flow of decoding caused by code-induced ambiguity-overwhelm. By ambiguity-overwhelm we refer to a drop below the threshold of coherent continuation in the flow of decoding, caused by a lack of available processing resources, which the disambiguation process has dissipated.

Thus, our reading problems stem in very large part from this 'code'. A human technological creation, the 'design' of this 'code' reflects the accidents of history as much as, or more than, any intentionally well-thought-out 'program'. (More On…) A small number of people (including Benjamin Franklin, Noah Webster, Charles Darwin, Mark Twain, Theodore Roosevelt, Andrew Carnegie, George Bernard Shaw, and others) over the past 400 years have known that the 'code' itself was responsible for the social injustices associated with illiteracy. It inspired them to create new alphabets and new systems of spelling. (More On…)

However, in part because nothing substantial ever came of their initiatives, the learning-to-read science community has, for the most part, taken for granted two fundamental assumptions: a) the alphabet cannot be changed sufficiently to significantly reduce the ambiguity it participates in creating, and b) English spelling cannot (and should not (More On…)) be simplified to the degree necessary to significantly reduce the ambiguity it participates in creating. These would not be problems in and of themselves (as we will show later), but they have led to the more encompassing assumption that our writing system as a whole is an invariant fixture to be taken for granted in our thinking about learning to read.

Our (also four-hundred-year-old) 'reading wars' reflect these assumptions, as does almost all of our research on how to help children learn to read. Phonics instruction and explicit phonemic awareness exercises are both attempts to compensate for the ambiguity inherent in the code by adding another level of complex instructions-for-disambiguating-instructions to it. Neither is an attempt to address the code itself. (More On…)

Today, the ambiguity in the code is accepted as part of the environment we must learn to read in. One consequence of this is that, rather than helping learners and their teachers see their problems as 'technology-interface' difficulties that the code, not they, is responsible for causing, they assume the fault is theirs and take on the shame and blame they associate with being at fault. (The bizarreness of this is revealed by an analogy: imagine that this 'code' was the product of a 'company' that takes the attitude that nothing is wrong with its 'code' and that the problems are all in the 'minds' of its users. (More On…))

The other consequence of these assumptions is that they preclude consideration or support of the kind of creativity that could address the real problem: reducing the disambiguation overhead involved in decoding. This is where our proposal begins.

Our question: Without changing either the alphabet or English spelling, what can we do to reduce the ambiguity?

Here is what we are proposing:

Enabling idea: we can add a new layer to our orthography that reduces the ambiguity of processing the code without requiring any changes to the alphabet or English spelling.

With modern font technology we can add another dimension of functionality to the concept of a character or letter. Specifically, we can print (on paper or screen) letters with shape, size, intensity, attitudinal, and spacing variations that, while retaining unambiguous letter-recognition features, convey additional information or cues about how a particular letter should sound in the particular word in which it is being encountered. Analogous to serifs, which function as 'cues' to direct our minds toward visual letter recognition, we can add cues that direct our minds toward letter-sound correspondence recognition.
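As a minimal sketch of the idea (the cue categories and the styles attached to them below are hypothetical placeholders, not our actual cue design), each letter keeps its shape and spelling but carries a visual hint about the sound it makes in this particular word:

    # Minimal sketch of per-letter sound cueing. The cue categories and
    # styles are hypothetical placeholders, not the actual cue design.
    CUE_STYLES = {
        "silent": "opacity:0.4;",        # e.g. the final 'e' in 'cake'
        "long":   "font-weight:bold;",   # long vowel sound
        "hard":   "font-style:italic;",  # hard consonant sound
    }

    def render_with_cues(word: str, cues: dict) -> str:
        """Wrap each letter in an HTML span carrying its cue's style."""
        spans = []
        for i, letter in enumerate(word):
            style = CUE_STYLES.get(cues.get(i), "")
            spans.append(f'<span style="{style}">{letter}</span>')
        return "".join(spans)

    # 'cake': hard c, long a, silent e (cue assignments are illustrative)
    print(render_with_cues("cake", {0: "hard", 1: "long", 3: "silent"}))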

Our working hypothesis: 1) a small number of alphabet-general letterface variations, acting as cues, can substantially reduce the disambiguation-dissipation currently working against efficient decoding and learning to read; 2) the overhead required to learn and process such cues will be a substantially advantageous trade for the ambiguity reduction they make possible.

We call them P-cues because the 'P' includes 'phonic', 'phonemic', 'pronunciation', and 'parallel processing'. 'Phonic' because they cue sounds. 'Phonemic' because they cue sub-word sound boundaries. 'Pronunciation' because they cue how to pronounce the sounds of letters independent of spelling knowledge. 'Parallel processing' because they create a parallel process path for decoding to draw upon.

P-cues (or PQs: mind your Ps and Qs) are a new layer of orthography intended to reduce the ambiguity of the existing layers. They do not directly assist the disambiguation of the correspondence ambiguities in the decoding of the code. They do not represent letter-sound correspondences and their general rules of spelling; rather, they represent the intended correspondence of spelling to sound at the level of each actual, particular spelling being read. They are not a map to interpret through; they are sound-signposts embedded in the visual representations of our written language.

We call the overall system 'Training Wheels for Literacy' because the system is meant to be used during the initial learning-to-read process and then dropped. It offers some of the benefits of an ITA (Initial Teaching Alphabet) but without the same transition overhead, as the underlying alphabet and spelling have not been changed.

We are suggesting that a small number of alphabet-general letterface variations, acting as pronunciation cues (P-cues), can dramatically reduce the disambiguation-overhead involved in learning to read. 

For a detailed description of the Cues click here.

Return to this site's index to explore this proposal and its less rigorous but more detailed supporting materials.

*We agree that once one can read well enough, the English spelling system provides a great number of advantages, including etymological cues and its ability to combine creatively in new words. The article "In Defense of the Present English Spelling System: A Juxtaposition of Linguistic, Historical and Socio-linguistic Justifications" makes these and other points very well: http://linguistik.hypermart.net/EngLing.html

©Copyright 2001-2003: Training Wheels for Literacy & Implicity
