Tuesday, February 17, 2015

Why Haven't Florida's Test Scores Shot Through the Roof?

We're now in year three of Marzano's "causal" model here in Florida. With all those effect sizes (where gains of 10-20 percentile points are common) AND with 98 percent of Florida's teachers rated effective or better, shouldn't Florida's test scores be through the roof by now? They're not. In 2013 they were stagnant compared to the year before. And this year?

"The average scores for Florida's class of 2014 were 491 in reading, 485 in math and 472 in writing, all below the national averages and lower than last year's scores, when a smaller group of state students took the test.  Across the country, SAT scores were stagnant..."  (Source.)

So what's the problem?  The model or its implementation?  My guess is both (plus the research behind the model).

Needless to say, its implementation has been a nightmare--the fact that we're still trying to implement it three years later (at God only knows what expense) should tell you everything you need to know. Or you can Google articles like this.

The model itself, despite the claims on Marzano's website, appears to have zero research supporting it--the model as a whole, as opposed to the older studies it draws on. Check out his website's claims:

[Screenshot: the five research claims listed on Marzano's website]

Going in order (1-5):
1.  Where have I heard this "5,000 studies over 50 years" claim before?  Here, in a journal article co-authored (second author, it seems) by Marzano.  It turns out only 70 (1.4%) of those 5,000 studies "met the researchers' criteria for design, control, data analysis, and rigor."  That is, at best, intentionally misleading--and there's no way studies done decades before the model existed were actually testing the model anyway.  Strike one.

2.  This is completely irrelevant to verifying the model.  Once again: "It's good because there's a lot of it--and a number!"  If anything, I would say the fact that this much material is out there and has produced no visible effect works better as a counterpoint to its effectiveness.  Strike two.

3.  I won't rehash the hazards of the analyses, but it's important to note that this is not evidence for the model itself. We're now 0 for 3.  (Are these studies double-dipping from #1 above?  I bet they are.)

4.  Finally, something directly studying the model!  Except nothing is listed.  One would think, this being the most important item here, there would be a link, a title, SOMETHING.  Zero for four...

5.  This paper is basically a "why you should use our model" pitch; I see nothing in it verifying the model's effectiveness.

And we're 0 for 5 now.  (Which, if some people get their way, isn't a zero, but a 50%; no wonder "educational experts" approve of this baloney.  Regardless, 50% or 0%, this is a failure.)

So we have a convoluted and horribly expensive evaluation system that doesn't seem to be producing significant (if any) results after three years.  Solution?  Drop it until a third party verifies its effectiveness and go back to a cheap, simple system--which will likely cause no discernible change to our education system (besides saving money).

Friday, February 6, 2015

Insanity of the Marzano Evaluation Redux...

My "Insanity" post re: the Marzano evaluation system seems to be the only "popular" post I have so I thought it might be time to revisit it--this time in fewer words.  I encourage everyone to sign up at his website to access the database themselves.  Then go through and take a look.  Before we begin, I urge you to keep in mind the GIGO paradigm:



Sorting by p, you quickly see a lot of data that seems to have no meaningful value reported; I'm fairly certain that with a sample size of 8, a p of <0.0000 is not going to happen.

[Screenshot: the database sorted by p, row after row reporting "<0.0000"]

And, eyeballing the scrollbar, it would seem that ~66-75% of Marzano's database is statistically INsignificant.  ("...in Statistics 'significant' means probably true (not due to chance)")  Combined with the portion above (all those zeros...), I personally find this to be, well, damning, for lack of a better word.
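
About those zeros: for an exact permutation test, the smallest two-sided p-value you can possibly report is fixed by the number of ways to split the sample, so tiny samples simply cannot produce near-zero values. A quick sketch (Marzano doesn't say which test was used, so this is the exact-test case):

```python
from math import comb

def min_two_sided_p(n_treat, n_ctrl):
    """Smallest two-sided p an exact permutation test can produce:
    2 divided by the number of distinct group assignments."""
    return 2 / comb(n_treat + n_ctrl, n_treat)

# Eight subjects split 4/4 can never get below p ~= 0.029, so a
# reported "<0.0000" from a sample of 8 is hard to take seriously.
print(min_two_sided_p(4, 4))   # 2/70    ~= 0.0286
print(min_two_sided_p(8, 8))   # 2/12870 ~= 0.000155
```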

Now, before putting in these last two pictures, I would like to once again quote Marzano's description of the p values:  "Basically, if the value in this column is less than .05, the effect size reported for the study can be considered statistically significant at a significance level of 5% (that is, the probability of observing such a result by chance is less than 1 in 20). In other words, a reasonable inference can be made that ... the reported effect size represents a real change in student learning."
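
Taking that .05 threshold at face value, the scrollbar estimate above is checkable directly if the database can be exported. A sketch assuming a hypothetical CSV export with a column named p (the file and column names are my guesses, not his):

```python
import csv

# Hypothetical file and column names -- adjust to the real export format.
with open("marzano_database.csv", newline="") as f:
    raw = [row["p"].strip() for row in csv.DictReader(f)]

# Entries reported as "<0.0000" need the "<" stripped before parsing.
p_values = [float(p.lstrip("<")) for p in raw if p]

failing = sum(1 for p in p_values if p >= 0.05)
print(f"{failing} of {len(p_values)} results "
      f"({100 * failing / len(p_values):.0f}%) fail Marzano's own .05 cutoff")
```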

Behold the "better than a coin flip" point:

[Screenshot: results sitting at p values around 0.5]

And this is definitely my favorite: seven results that are 100% likely to be random fluctuations.

[Screenshot: seven results reporting p = 1.0000]

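For anyone wondering how a p of exactly 1.0000 even happens: it's what a t-test reports when the two groups show no difference whatsoever. A quick demonstration with made-up scores:

```python
from scipy.stats import ttest_ind

# Identical group means -> t statistic of 0 -> two-sided p of exactly 1.
# The data are maximally consistent with "no effect at all."
treatment = [70, 75, 80, 85]
control = [85, 80, 75, 70]
print(ttest_ind(treatment, control))  # statistic = 0.0, pvalue = 1.0
```
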
It's the epitome of hypocrisy that in this day and age of "data-driven decision making" we're relying on research of this type and quality to make decisions.  It's like our politicians and educrats have never even heard of the phrase "peer review."  Again:

[Image: "Garbage In, Garbage Out"]

To Marzano's credit, he has stated that his research is being misused.  But I don't see him making a big fuss about it--telling states like Florida to stop basing their evaluations on his work, i.e., to stop buying his books and materials.

BONUS:  This is one of my favorite pictures: the database sorted by N (C), that is, the size of the control group.  You used a study that had ONE kid as a "control"?  Really?  I mean, REALLY?  You don't need to be a statistician to realize that public policy should not even be influenced by, let alone based on, this analysis.
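
For the record, here's what a control group of one does to the arithmetic: with a single observation you can't even estimate the control's variance, so a standard t-test has nothing to work with. A quick demonstration with made-up scores:

```python
from scipy.stats import ttest_ind

treatment = [72, 81, 94, 68, 77, 85, 90, 79]  # made-up scores
control = [75]  # ONE kid: a sample variance is undefined for n = 1

# scipy can't pool a variance it can't compute, so the whole
# comparison comes back NaN -- literally incalculable.
print(ttest_ind(treatment, control))  # statistic = nan, pvalue = nan
```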