Questions and Answers

Excerpts from dialogues with linguists, reading teachers, orthographic reformers and mind-scientists.

On P-Cues:

On Getting P-Cues accepted and tried:

On P-Cues:

Are P-Cues an attempt to sinicize Western written languages?

Not at all. I am not trying to replace our system with an ideographic/pictographic system. The richly combinatoric creative dexterity of the Western mind is very likely rooted in the way our alphabet works (see McLuhan and Logan's The Alphabet Effect).

Our proposed cues have no 'content' meaning whatsoever. They are not symbols for anything; they are cues that help developing readers feel less confusion about the sound(s) that go with the letters they are reading. Given the accumulated ambiguity inherent in the correspondence between our alphabet and our spelling system, reducing this confusion reduces the mental overhead that is a primary retardant, both to each mind that struggles to read and to our national learning-to-read progress as a whole.

Reading bogs down in decoding, which bogs down in ambiguity. Reduce that ambiguity and decoding efficiency rises: it reciprocates better with phonemic awareness development and flows more optimally toward the threshold of automaticity necessary for comprehension processes to engage well. Our other attempts to help readers all start from the assumption that we must accept the ambiguity inherent in the code. Phonics is a noble effort to compensate for, rather than address, this underlying issue. Our approach begins with the intent to directly address and reduce this ambiguity.
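
To make that compounding concrete, here is a toy sketch (an illustration only, not part of the proposal itself) of how a naive decoder's candidate pronunciations multiply with each ambiguous letter. The grapheme-to-sound table is a hypothetical fragment, not an inventory of English correspondences.

    # Toy illustration: how letter-sound ambiguity compounds during decoding.
    # The grapheme-to-sound table is a hypothetical fragment, not a
    # complete or authoritative inventory of English correspondences.

    GRAPHEME_SOUNDS = {
        "c":  ["k", "s"],         # cat / city
        "a":  ["a", "ay", "uh"],  # cat / cake / about
        "t":  ["t"],              # unambiguous
        "ea": ["ee", "eh"],       # read (present) / read (past)
        "gh": ["g", "f", ""],     # ghost / laugh / though (silent)
    }

    def candidate_readings(graphemes):
        """Count the pronunciations a naive decoder must consider."""
        count = 1
        for g in graphemes:
            count *= len(GRAPHEME_SOUNDS.get(g, ["?"]))
        return count

    # Even a three-letter word offers 2 * 3 * 1 = 6 candidate readings
    # before context (or cues) narrows them down.
    print(candidate_readings(["c", "a", "t"]))  # 6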

Mind your Ps and Qs - Phonemic Cues for better mind-reading - P-Cues: Should the logo be PQ's?

I am not attached to any logo or label. I want that to emerge from the advisory board dialogue. As you can see, I use the "P" to mean (at different times) 'phonic', 'phonemic', 'pronunciation' and 'parallel': phonic because they cue sounds; phonemic because they cue sub-sound boundaries; pronunciation because it's not knowledge of letter-sound relations that the cues prompt, it's how to pronounce the sounds; and parallel because they create a parallel processing path for decoding to draw upon.

Are P-Cues phonic cues?

Yes, and they are also more than that. My proposal is open-ended: I don't yet know where the balance lies between visual cues that reduce letter-sound correspondence ambiguity (in both severity and frequency) and the overhead of learning and instituting them, and this is precisely what we need to learn. Though I think the ground floor is cues for sound attributes (loud-soft, long-short, high-low, duration, tempo, etc.), I am really proposing that we become cartographers of the ambiguities involved in learning and using the written language and then, based on such an understanding, apply that understanding to the system's inertia and find the right sequence of fulcrums. I want to start by reducing the gross ambiguities with the simplest possible cues; then we can look at what other economical/ecological cues can further reduce the ambiguity overhead.

How does understanding pronunciation-guided spelling translate into an improved ability to read and write traditional English?

Not pronunciation-guided spelling: pronunciation-guided reading, as a parallel path alongside spelling-based decoding. If you see that reading is a multi-pathway process, then helping developing readers sustain and extend their flow is helping their comprehension processes help their decoding processes. In addition to the phenomenological processing help, being able to read at higher rates and with more interesting vocabulary improves self-esteem in reading and minimizes the psychologically toxic effects of struggling to learn to read, which have to be the second biggest retardant (and are actually my biggest concern).

What about Diacritical Markings?

Few people (outside linguistics) remember the meanings of diacritical markings. I think it is critical that the cues be visual-morphic analogs of the sound attributes they prompt (e.g., larger for louder), thus requiring very little 'memory' to decode the cues. I want developing readers to learn to recognize only 4 or 5 distinctions (the thresholds for which have yet to be researched) in order to use a cue system. I am also less interested in covering everything than in striking a 'practical' balance between rendering ambiguity irrelevant and overhead (for the developing reader and for the institutions that have to buy into a system).
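
As a thought experiment only, a cue scheme with just a handful of distinctions might be modeled as below. The attribute names and the plain-text renderings are hypothetical stand-ins; actual cues would be graphical variations (size, weight, fading) of the letterforms themselves.

    # Sketch of a 'visual-morphic' cue scheme with only four distinctions.
    # Attribute names and the plain-text renderings are hypothetical
    # stand-ins for graphical variations of the letterforms themselves.

    CUE_RENDERINGS = {
        "says_its_name": str.upper,           # stand-in for 'drawn larger'
        "silent":        lambda s: f"({s})",  # stand-in for 'drawn faint'
        "blended":       lambda s: s + "~",   # stand-in for 'tapered join'
        "plain":         lambda s: s,         # default: no cue
    }

    def cue_word(segments):
        """Render (letters, attribute) pairs with their cue stand-ins."""
        return "".join(CUE_RENDERINGS[attr](letters)
                       for letters, attr in segments)

    # 'cake': the 'a' says its letter name, the final 'e' is silent.
    print(cue_word([("c", "plain"), ("a", "says_its_name"),
                    ("k", "plain"), ("e", "silent")]))  # cAk(e)

The design point is that each rendering is a direct analog of the attribute it cues, so the reader decodes it perceptually rather than by recalling an interpretive key.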

What about other marking systems?

I think the idea of a marking system is more reasonable than the spelling reforms that require otherwise unaffected constituencies to conform to new spellings, but as I have said, I personally don't want children to have to learn an interpretive key. That would introduce another loop of processing overhead into the process. I want the key to be replaced by phonimorphic variations in the characters themselves. I am a long way from settled on exactly what that will look like, but I am provisionally convinced it's the right direction.

How do P-Cues compare with the Initial Teaching Alphabet?

From "The advantages and disadvantages of i.t.a.", in i.t.a.: An Independent Evaluation, F. W. Warburton and Vera Southgate, pages 152-154: http://www.barnsdle.demon.co.uk/spell/itaaddis.html

1. The use of i.t.a. has made the early stages of learning to read easier and more enjoyable for children. As a consequence they learn to read earlier and in a shorter space of time.

2. There were instances of parents reporting the frustrations experienced by children, who were not yet ready to transfer from i.t.a. to t.o., when they attempt to read t.o. print at home in books, comics, newspapers and other printed materials.

3. Certain parents find it a disadvantage to be unable to give the help requested by their children who are reading or writing in i.t.a. at home.

4. Many parents, teachers and other educators are very conscious of the problem which arises when a family moves and a child who is not a fluent reader in i.t.a. has to attend a school using only t.o.

What we are proposing differs quite a bit from i.t.a. We are not suggesting we change the alphabet or the way we spell with it in any way (which i.t.a. did). The only transition would be from having the 'cues' to not having them. Our assumption is that because children would be learning to read with the normal alphabet and spelling (just getting help from the cues in decoding how letters 'sound' together), their repeated successes with ever more familiar sub-word decodings will give them a kind of 'training wheels' effect. When we later drop the cues, the majority of their reading experience will still be applicable: recognizing letter combinations/sub-word sounds and how they combine should be much easier (again, very closely paralleling the bicycle training-wheels metaphor).

What about Accents?

We don't change the alphabet for each accent. The cues I am suggesting emphasize the way a letter sounds relative to the other letters (e.g., whether or not it sounds like its alphabet letter name, silent to loud, blended or distinct, duration, etc.). This still allows for accent variation: the cues don't require us to 'fix' the sounds in some standardized way that everyone must subscribe to.

How about reading without ‘hearing the sounds’ – such as in Speed Reading?

While some skilled readers develop the ability to bypass sounding out (virtually or actually), our understanding (from observation and from the literature) is that during learning to read (our only concern) the initial decoding involves some variation, virtual or actual, of hearing the sounds.

What is the role of P-Cues in writing?

First, because the variations we are proposing don't change the fundamental underlying letters or how we spell with them, children could write as they currently do, and we could treat the variations in letter appearance (the cues) as special-case props for reading. Second (something we want to experiment with) would be to explicitly leverage the concept toward helping children learn to write. In writing, as in reading, there is an ambiguity problem. It's quite probable that the cues can work in reverse and provide another mental resource. By learning four to six alphabet-general variations in the emphasis of writing a letter (not a great extension to what they already have to learn), children gain a new tool to help them think about converting sounds to spellings. (I think research here would show a natural dovetailing between the cues and the 'invented spellings' children come up with.)
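
As a sketch of that reverse use (the attribute names and the sound-to-spelling table below are hypothetical illustrations, not a worked-out inventory), knowing a sound's cue attribute could narrow a young writer's spelling candidates:

    # Sketch of the 'reverse' use in writing: a child who knows a sound
    # and its cue attribute can narrow the candidate spellings for it.
    # Attribute names and the table are hypothetical illustrations.

    SPELLINGS = {
        ("ay", "says_its_name"): ["a_e", "ai", "ay"],  # cake, rain, day
        ("k",  "plain"):         ["c", "k", "ck"],     # cat, kit, back
        ("ee", "says_its_name"): ["e_e", "ee", "ea"],  # these, see, sea
    }

    def spelling_candidates(sound, attribute):
        """Look up plausible spellings for a cued sound."""
        return SPELLINGS.get((sound, attribute), [])

    print(spelling_candidates("k", "plain"))  # ['c', 'k', 'ck']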

What about preserving English Spelling?

We agree that once one can read well enough, the English spelling system provides a great number of advantages (including etymological cues). I chanced upon an article called "In Defense of the Present English Spelling System: A Juxtaposition of Linguistic, Historical and Sociolinguistic Justifications" that I thought made these and other points very well: http://linguistik.hypermart.net/EngLing.html

This, I think, is why dozens of spelling reform attempts have fallen flat, and why my issue with spelling, for now, is confined to how it impedes the initial learning-to-read process. It is my respect for the arguments for traditional spelling and my acceptance of the alphabet's inertia that lead me to propose a system that rides atop them both without changing either.

This is not to say that we shouldn't understand the relationship between spelling ambiguities and the growth of negative self-assumptions. I am convinced that the role spelling plays is significant and that spelling, like reading, is a field of high ambiguity that does harm to the learning capacities of children. However, I don't think spelling reform as it has been historically proposed has any chance of overcoming the inertia of the system. I think the best way to get the spelling system researched and reformed is to lead the educational research community into understanding the significance of the flow of 'ambiguity' during learning. When what I am proposing is demonstrated, I think it will open the door to further modifications (spelling) because the value case for reducing unnecessary ambiguity will have been demonstrated in a less threatening way.

How do you think Phonemic Awareness and Decoding are related?

See http://www.implicity.com/reading/app2twocore.htm#PA for summaries of what is meant by the terms as they are used by the US DOE and its researchers. My sense is that Phonemic Awareness is the operationalized awareness that words (which can sound very run together) are made up of sub-sounds (or can be abstractly represented as a combination of sub-sounds). Its importance is that unless one develops such a 'ground floor distinction', one can't possibly take the next step in relating to written language: there is no inner mental structure to 'placehold' the sub-sounds.

PA and Phonological Coding are reciprocal. The oral language doesn't need us to make many PA distinctions in order to use it; written language is built on those distinctions. PA develops in interaction with reading and writing far more than it does through use of the spoken language. If someone doesn't get the PA distinction they will not read, but that's like saying that if someone doesn't learn to walk they won't win a marathon. There is a lot in between, most importantly the decoding that underlies all comprehension.

PA doesn't in itself inform decoding. PA provides the infrastructure necessary to map sound-symbol correspondences onto word sub-sounds; once it is functioning, it's all phonological code processing (decoding).

What is the role of comprehension in decoding?

Reading is not a serial process. The flow of comprehension provides a kind of 'contextual gravity' that 'attracts up' the flow of decoding by anticipating the flow of meaning and thereby reducing the semantic uncertainty surrounding word recognition. This, of course, depends on the decoding process not consuming too many attentional resources. I want to see if we can minimize the decoding disambiguation overhead, not by adding more code with which to interpret the ambiguous code, but by rendering the need to disambiguate correspondence ambiguities irrelevant: by providing an alternative route to say/hear (virtually/actually) the sounds that keeps the flow of decoding above the threshold at which comprehension engages.
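
A toy sketch of that reciprocity (the word lists are hypothetical, chosen only to illustrate the filtering): comprehension's running predictions can prune the candidates that ambiguous decoding produces, so long as decoding stays fast enough to remain in step.

    # Toy sketch: comprehension's predictions prune decoding's candidates.
    # Both word lists are hypothetical, chosen only for illustration.

    # What the context "the cat sat on the ..." leads a reader to anticipate:
    predicted_by_context = {"mat", "chair", "sofa"}

    # The readings an ambiguous decoding of the next word might produce:
    decoding_candidates = ["mat", "mate", "matt"]

    viable = [w for w in decoding_candidates if w in predicted_by_context]
    print(viable)  # ['mat'] - context resolved the ambiguity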

Why are you focused on reading?

For me, reading is like the tip of an iceberg. The underlying problem is that we as a race are virtually unconcerned with the 'flow of ambiguity' in children and its relationship to their learning. If ever we 'get' the significance of the relationship between 'ambiguity' and 'learning', we will proceed to reform virtually every intentional learning environment: we will reduce unnecessary, extraneous ambiguity and develop ways to respond to children at the level of their felt-thought ambiguity where we can't remove it. My interest in reading has less to do with whether children learn to read and more to do with what they are learning about learning - what they are learning about themselves in an environment (our orthography) plagued with (technological, artifactual) ambiguity that makes them feel at fault for not being able to master it. I see my role in the long run as making the case for becoming ambiguity-centric in our approach to education and educational research. Reading is simply the simplest case.

On Getting P-Cues accepted and tried:

Why has the English writing system resisted reform?

In the early days of writing, writing systems were sacred.

In the medieval days of writing, writing served only the elite few.

In the 16th through 19th centuries the literacy-divide served the upper classes.

In the 20th Century we accepted our orthography as a fixture in our thinking about reading (because all thoughts of changing it had such unacceptable implications).

One factor in why writing systems have resisted reform is that those who are able to read and write can exercise power over those who can't. In the 19th century, the difficulties of English spelling served as an instant social screening tool. Don't you face the same problem?

Though it seems that the class mechanics of a century ago were served by how English proficiency provided such a screening tool, I don't believe this prevails with any similar significance today. I think your argument was once a strong one but is a very weak one in today's landscape, at least in the U.S., England, Australia, and Canada.

Today, the central issue is institutional inertia: the success of written English in becoming the world's leading language in so many global communication domains - economics, politics (if only insofar as dealing with the USA on trade or aid), science, technology, medicine, entertainment. The constituencies affected by any change to the language are the most powerful constituencies in the world. The written language works for them; any change is a nuisance and a disturbance. They collectively resist change because their interests are best served by preserving things the way they are. People who are not above a certain threshold of literacy proficiency don't even exist for them, practically speaking (they may exist as beneficiaries of their philanthropy).

What makes P-Cues Different? Why will the institutions that have resisted other attempts at orthographic reform embrace (or not resist) P-Cues?

Past attempts at reform all suffered the same basic problem: people unaffected by the orthography were being asked to submit to changes that, though they helped some people (not among their constituents), were a nuisance for them. The people and organizations for which our orthography isn't a problem don't want to have changes to it forced upon them.

P-Cues do not require any change to our orthography as it is commonly used throughout the world. They offer a learning on-ramp: a supplement to the orthography that is used only during the initial learning-to-read process.

I think the good news is that the constituencies that resist change also benefit from a rise in the number of their constituents: an increase in the number of literacy-proficient people means an increase in the number of constituents (customers). Their value system is open to supporting reform in two ways: the obvious philanthropic one, and the less obvious but more directly beneficial one of a rise in the number of their constituents.

Isn’t reverence for the traditional code an example of how we are shackled to the past?

I think 'reverence' may not be accurate. Again, I think we have taken the code for granted because we can't imagine it could be otherwise, and we have become dull to the issue after hundreds of years of failed attempts to fix it. This is supported by some who believe that the ambiguity that impedes learning to read becomes helpful to the language once someone can read (analogous to the 'invisible hand' in the marketplace).

To solve this we will have to bring together a sufficiently diverse group, representing the different constituencies of thought and influence, and begin by agreeing that we are all well-intentioned people who have seen the problem, and what to do about it, differently. Our real job is to frame the questions they can't help but want to answer.

I think everyone agrees that it (poor letter-sound correspondence) makes reading and writing more complex and harder to teach and learn. It increases the incidence of dyslexia. Isn’t the real issue what to do about it?

I don't agree that 'everyone agrees'. I don't think 'we' have traveled very deep into the problem, because of the kind of proposals that have come forth about what to do about it. I don't think NICHD or NIH or the DOE has a solid, research-based understanding of how much this problem is directly implicated in reading problems (they don't even mention it in their research summaries). Because of the impracticality of reform proposals, and because of how well the orthodoxy has insulated itself from their assaults, I don't think researchers really get the dimension of the problem of the code's ambiguity. As in any endeavor, it's important to understand the problem without constraining our thoughts to what a solution looks like: take the problem deeply home and then, from there, get creative. I think the baby was thrown out with the bathwater. People may all generally agree that there is a relationship between reading difficulty and code ambiguity, but I don't think they have seen the dimension of the problem or made the connection that this ambiguity is responsible for causing a hundred million people to 'hurt', or that it's causing hundreds of billions of dollars to be lost.

I don’t think we can travel very deep into the ‘what to do about it’ until we better understand what ‘it’ is – what I call ‘code-induced ambiguity overwhelm’.

There are studies that suggest such a relationship (correspondence problem and reading difficulty) but I am not sure how a definitive study would even be set up.

Ah, that's the point. How do we design such a study? How do we communicate to the research community the need for such a study? What I am proposing so far may be nothing other than a low-overhead comparative study that carries lower experimental risk for students because we are not changing the spelling... We have to make our inquiry interesting to the orthodox research community. If we dismiss them they will certainly dismiss us, and we won't ever escape the field of the (orthography-reformist) converts.

What about the US DOE’s Research Community?

I remain persuaded that whether any reform proposal ever travels beyond the 'converts' (orthography reformists) depends on the mainstream research community 'getting' the necessity of understanding the role of ambiguity in retarding the learning-to-read process. When they see how their own values are impeded by the ambiguity (and its economic and social costs), then perhaps they will become open to an iterative experimental process that can arrive at a smart balance between feared upheaval and reduced ambiguity. We, this project, need to develop ways to pilot/drive the researchers into working the problem. I don't believe the US DOE will ever accept spelling reform proposals, because the risk of proving them out is too great and because they require otherwise unaffected constituencies to change the way they write.

The US educational community recently woke up to the fact that for the past few hundred years every philosopher with an educational bent has tried to persuade it to incorporate their views. The results have been a mess. The castle walls have been recently and substantially reinforced at the policy level - even the press has been co-opted. If something doesn't have rigorous scientific research backing, the US DOE is encouraging the educational community to stay away from it. Look at the language coming out of Congress: they mean to stop distracting experimentation and develop a rigid orthodoxy (pedagogically, not necessarily curriculum-wise) protected by the research community. In a way this is understandable.

Any reform to the 'code' will have to go through the research community. They are currently numb to talk of spelling reform. Because they don't see a way of changing the code, they accept it. It's 'behind them'. Their research models ASSUME it.

Those of us concerned with making the learning-to-read process more developmentally friendly to our children (no matter which vector we may come in on) can either scream into the night with our protest against the insanity of the code, or formulate research projects that get the attention of the research community. If the projects we propose start off by requiring systems of spelling or alphabet modifications, they will never get tried. The risk to students of teaching them systems that are incompatible with the prevailing orthography is prohibitive - it could retard their overall educational process.

We need to demonstrate to the research community, in language and concepts relevant to them, that this ambiguity problem is worth their attention. To do this we need to develop research models that can explore it without requiring children to be guinea pigs to an abnormal orthography. We need to decouple understanding the dimensions of the ambiguity-affect from solutions so threatening and impractical that the idea of entertaining the problem goes down with them.

This is where P-Cues come in. I think an exploration of the ambiguity problem that doesn't require changing the spelling or the alphabet is an easier pill for the research community to swallow. So long as the cues are well within the perceptual distinction thresholds of developing readers, and so long as they do not ambiguate the underlying letters, their only downside is weaning readers off them if they work - and that's a better set of problems to have.


©Copyright 2001 - 2003: Training Wheels for Literacy & Implicity