Is Structured Word Inquiry the answer to America’s reading woes? Part V

At this point, with five recent SWI critiques from Greg Ashman, writing yet one more myself feels like beating a dead horse. In terms of SWI publications, much of Ashman’s criticism has focused on a paper that has come out since I started this series: Bowers’ recent paper “Reconsidering the Evidence That Systematic Phonics Is More Effective Than Alternative Methods of Reading Instruction”. Ashman’s criticisms are quite thorough, and I have nothing to add to them.

Instead, I’ll return to the older Devonshire et al. (2013) paper—a paper that discusses efficacy data that supposedly favors SWI over phonics. This paper compares 1st and 2nd graders who spent 6 weeks receiving 15–25 minutes of daily SWI with those who got “standard classroom” phonics instruction, and finds that the SWI intervention improved word reading scores. While this is reasonable grounds for further investigation, it’s far from the kind of study needed to justify replacing systematic phonics with SWI. For one thing, all of the students in the study received phonics instruction in addition to SWI. For another, the comparison involved an instructional time frame of only 6 weeks. Finally, the “standard classroom” phonics being compared to SWI can mean just about anything, including the watered-down, unsystematic phonics instruction of the sort that has failed many kids over many years.

To answer the question of whether systematic phonics should be replaced with SWI,

Continue reading

“Simple practice effects” and the SAT

Useful article in the Washington Post re: standardized testing and fairness: No one likes the SAT. It’s still the fairest thing about admissions.

I’ll post some of the sections on income and scores in a bit, but this section on tutoring caught my eye:

Highly paid tutors make bold claims about how much they can raise SAT scores (“my students routinely improve their scores by more than 400 points”), but there is no peer-reviewed scientific evidence that coaching can reliably provide more than a modest boost — especially once simple practice effects and other expected improvements from retaking a test are accounted for. For the typical rich kid, a more realistic gain of 50 points would represent the difference between the average students at Syracuse and No. 197 University of Colorado at Boulder — significant, perhaps, but not dramatic.

By Jonathan Wai, Matt Brown and Christopher Chabris | 3/22/2019

Simple practice effects!

yeesh

Continue reading

How can you tell when a student has mastered a skill?

How can you tell whether someone has truly mastered a skill? What is the measurable indicator that a person really knows how to do something? These questions should be at the heart of every teaching decision . . . and every evaluation we make about the success of an educational program. Yet for many educators, and certainly for most parents, answers to these questions are anything but clear. Most of us have grown up in a “percentage correct world” where 100% correct is the best anyone can do. But is perfect accuracy the definition of mastery? . . . In fact, we see many children and adults who can perform skills and demonstrate knowledge accurately enough – given unlimited time to do so. But the real difference that we see in expert performers is that they behave fluently – both accurately and quickly, without hesitation.

Continue reading