Sunday, October 28, 2012

Picking and choosing research (or, "Charter Schools -- The Data")

Not a lot to say here; I'll let the studies speak for themselves.  But long story short, it's pretty funny that in this day and age of "data-driven decision making" (as opposed to what?), certain elements prefer to go with their ideals/beliefs rather than the facts.  Or, even shorter, "School reform is all about politics; it has nothing to do with education."  After all, everything considered, our schools are doing fine.

First of all, Stanford's CREDO study:

The study reveals that a decent fraction of charter schools, 17 percent, provide superior education opportunities for their students.  Nearly half of the charter schools nationwide have results that are no different from the local public school options and over a third, 37 percent, deliver learning results that are significantly worse than their students would have realized had they remained in traditional public schools.  These findings underlie the parallel findings of significant state‐by‐state differences in charter school performance and in the national aggregate performance of charter schools.

The US Department of Education's study (middle schools only):

On average, charter middle schools that hold lotteries are neither more nor less successful than traditional public schools in improving student achievement, behavior, and school progress.

A RAND study also gives a "mixed" review, showing that there is some good and some bad in charters:

Charter schools in most locales have marginally greater variation in performance than TPSs [Traditional Public Schools], as measured by the achievement-impact estimate for each school, and, in some locations, this may simply reflect greater measurement error associated with the smaller average size of charter schools.

Even the KIPP schools, which I first heard about from Outliers (great book, btw) and greatly admire(d?), come with some doubts.  It appears that some of their success is from "selective attrition", as evidenced here and here.  There seems to be this idea that if we just privatize education (or anything), the magical free market fairy will make everything right.  (This ignores that privatization can debase things, education being one of them.)  But this just seems to be the way the world works now--ideology trumps reality.  If reality isn't reflecting your political views, well, obviously, the problem is that your idea just hasn't been given a fair shot--it certainly isn't that you're just wrong.

Update (3/6/13):
How Charter Schools Choose Desirable Students.

Wednesday, October 17, 2012

Learning Styles: Concepts & Evidence

Admittedly I have not read the whole thing--it's the middle of the school year, people, c'mon!--but the summary says it all:

Our review of the learning-styles literature led us to define a particular type of evidence that we see as a minimum precondition for validating the use of a learning-style assessment in an instructional setting. As described earlier, we have been unable to find any evidence that clearly meets this standard. Moreover, several studies that used the appropriate type of research design found results that contradict the most widely held version of the learning-styles hypothesis, namely, what we have referred to as the meshing hypothesis (Constantinidou & Baker, 2002; Massa & Mayer, 2006). The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing. If classification of students’ learning styles has practical utility, it remains to be demonstrated.

Sunday, October 7, 2012

Learning Styles Don't Exist (Dr. Willingham)

This was the video where I first heard of Dr. Willingham.  It was startling to see how such a popular idea could be so wrong.  But then I remembered that this is the field of education (and its pseudo-experts) we're talking about here, and the sense of shock wore off very, very quickly.  The irony is that in this day of "data-driven" decisions, we ignore what the science says to go with our "popular" theories.

(Don't get me wrong, I do love education--it's just that the top of the education system is a complete mess and I think it's time for an overhaul THERE, rather than down at the teacher's level.  The troops are fine; it's the generals that need help.)

Dr. Willingham: Merit Pay, Teacher Pay, and VAM

Not a lot to add here; just an excellent video by Dr. Willingham (cognitive psychologist).  Once I have time, I hope to read one of his books (assuming I can find it amongst my stacks of books). 

Saturday, October 6, 2012

Marzano's Database

Updated:  10/9/12 @10:41GMT+5

First of all, while I am NOT an expert statistician, I do know more than the average person and have had formal training/education in the subject.  (I've had some undergraduate and graduate stats classes, used statistics professionally & throughout undergrad, etc.)  So I would love to have a "pure" (read: non-education related) statistician go over the data, but I think I have made a decent start here.

If there's one thing I hate hearing anymore in education, it is "data", because most of the data they have is crap and/or they have no idea what to do with it.  In the case of Marzano, I often hear "him" (his company & his salespeople) going on about how much research he has.  It's true, his database has 1036 studies in it (I copied & pasted these into a text file, then imported them into Excel as a "delimited" file).  But let's take a closer look, because as just about everyone with any connection to the professoriate knows, the quality of educational research is (often extremely) suspect.  This is largely due to the difficulties of doing research on kids (particularly longitudinal research), but there are other problems as well.

If you sort the data by p-values, you quickly start to see some problems.  Marzano's own website declares,

"Basically, if the value in this column is less than .05, the effect size reported for the study can be considered statistically significant at a significance level of 5% (α = .05). In other words, a reasonable inference can be made that the reported effect size is probably not a function of random factors; rather, the reported effect size represents a real change in student learning."

So sort the data by the p-value, delete those that are greater than 0.05, and look what happens:  you're down to 285 studies from the initial 1036.  That means that by his own criteria, only 27.5% of his data is statistically significant (at α = 0.05).  (Or, taken another way:  there is a significant probability that the reported results weren't really results, but random fluctuation between the control and experimental groups.  And that's true for nearly three quarters of the data.)  Of the remaining 285 studies, 101 of them have a p-value of zero; I assume this means that either the p-value wasn't reported OR the experiment was so great that they were able to calculate the p-value down to less than 0.001.  The latter is unrealistic.  (For example, one study has just 4 data points and an "effect size" of 9.25, which is grossly unrealistic.  I don't see how any self-respecting statistician could use or report this, to be blunt.)  So incomplete/unrealistic data in my book gets thrown out--we're down to 185 studies (17.9% of his database).
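
For the curious, here's a rough sketch of that first pass in Python/pandas instead of Excel.  The tab delimiter and the column name "p_value" are my own guesses at how the pasted file might be labeled, not the actual headers from Marzano's site, so adjust to match your own copy:

    # Sketch of the first filtering pass, assuming a tab-delimited export with
    # a hypothetical "p_value" column -- rename to match the real headers.
    import pandas as pd

    studies = pd.read_csv("marzano_database.txt", sep="\t")   # 1036 rows expected

    # Keep only the studies that meet Marzano's own significance criterion.
    significant = studies[studies["p_value"] <= 0.05]
    print(len(significant), "studies with p <= 0.05")          # 285 per the post

    # Drop the studies reporting a p-value of exactly zero (unreported or implausible).
    plausible = significant[significant["p_value"] > 0]
    print(len(plausible), "studies left after dropping p = 0")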

We're not done yet.  Here are a few other groups of studies I'm going to throw out because I find them too suspect to be reliable for district-wide policy-setting decisions:  studies involving fewer than 18 students (n): 79 of them.  Admittedly, this cutoff is somewhat arbitrary, but I could probably defend their exclusion* far better than anyone could defend their inclusion.  That brings us down to 106 studies (10.2% of the total).  I'm going to stop there, but notice that there are also 16 studies that are incomplete; they have no unit length.  Another 21 studies lasted less than a week.  Two studies have control groups of fewer than 10 students, which "seems" too low (one is 4, the other 9).  So even this remaining 10% is somewhat dubious.  But it's not the ten percent that really bothers me, it's the 90%, because Marzano's work--which sadly is influencing policy--is based on all of this bad data.  One other quick question:  How much time was spent on each of these 1000+ studies?  (I'm guessing not a lot; see below.)
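
Continuing the same sketch for this second pass--again, the column names ("n", "unit_length", "duration_weeks", "control_n") are placeholders for whatever the actual fields are called:

    # Second pass: drop the tiny studies, then just count the other red flags.
    remaining = plausible[plausible["n"] >= 18]
    print(len(remaining), "studies left",
          f"({len(remaining) / len(studies):.1%} of the database)")   # ~10% per the post

    # Counted but not removed:
    print(remaining["unit_length"].isna().sum(), "studies with no unit length")
    print((remaining["duration_weeks"] < 1).sum(), "studies lasting less than a week")
    print((remaining["control_n"] < 10).sum(), "studies with control groups under 10 students")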

In other words, the policy is based on research and the research relies on unreliable data.

So does it surprise anyone that all of these new policies only seem to make things worse?

Also worth reading:  "Marzano - A Successful Fraud", a review of Marzano et al.'s Classroom Instruction that Works...

To quote the Amazon.com review:

A. Every single reference I checked was itself dubious or misrepresented by the authors.
B. Some of the references were on topics unrelated to the instructional strategies cited.
B. [sic] Some of the numbers from published data were altered to better conform to the author's point of view.
C. Some of the references themselves presented provisional conclusions based on weak results, but were given complete credence by Marzano et al.
D. The authors took weak data from several studies, each based on averaging the results from studies assumed to use similar methods and subject cohorts, and averaged these, compounding the statistical weaknesses. This is especially shocking given that no credible researcher would combine results from studies by different groups that clearly use different methodologies and subject cohorts.


*  My rationale for excluding studies of < 18 students:  This is fewer than a typical classroom and, more importantly, probably fewer than necessary for reliable statistical analysis (I was always told to use at least 30 data points, but that was in a field--Science/Engineering--that has much more rigorous standards than the social sciences, let alone education).
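
For a back-of-the-envelope sense of why tiny samples are a problem, here's the usual large-sample approximation for the standard error of Cohen's d (which isn't really trustworthy at n = 4--that's part of the point).  The 2-vs-2 split for the 4-student study is my own assumption, purely for illustration:

    # Standard error of Cohen's d (large-sample approximation).
    from math import sqrt

    def se_cohens_d(n1, n2, d):
        return sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

    # A 4-student study reporting an effect size of 9.25 (assumed split 2 vs 2):
    print(round(se_cohens_d(2, 2, 9.25), 2))   # ~3.42 -> a 95% CI of roughly +/- 6.7

    # A 30-student comparison (15 vs 15) with a modest effect of 0.5:
    print(round(se_cohens_d(15, 15, 0.5), 2))  # ~0.37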