Sorting by p, you quickly see a lot of entries with no value reported; I'm fairly certain that with a sample size of 8, a p of "<0.0000" is not going to happen.
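A quick sketch of why a vanishingly small p looks fishy at n = 8. This assumes the study ran something like an exact two-sample permutation test with 4 students per group (an assumption on my part; the actual tests behind Marzano's table aren't reported here). Under that test there are only 70 possible group assignments, so no p-value smaller than 1/70 is even attainable:

```python
from math import comb

# Hypothetical setup: 8 subjects split 4 vs. 4, analyzed with an exact
# permutation test (assumed -- not stated in the database itself).
n_total, n_group = 8, 4
arrangements = comb(n_total, n_group)  # number of distinct group assignments
min_p = 1 / arrangements               # smallest one-sided p attainable
print(arrangements, round(min_p, 4))   # 70 0.0143
```

A parametric t-test can of course report tinier p-values, but at n = 8 those rest entirely on normality assumptions that a reported "<0.0000" does nothing to justify.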

And, eyeballing the scrollbar, it would seem that ~66-75% of Marzano's database is statistically *significant* ("...in statistics, 'significant' means probably true (not due to chance)"). Combined with the portion above (all those zeros...), I personally find this to be, well, *damning*, for lack of a better word.
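The scrollbar eyeballing above amounts to a simple count: what share of the p column clears the .05 bar? A minimal sketch, using made-up p-values (not Marzano's actual data):

```python
# Hypothetical p-value column, purely for illustration.
p_values = [0.0001, 0.003, 0.01, 0.02, 0.04, 0.06, 0.20, 0.45, 0.71, 1.00]

significant = [p for p in p_values if p < 0.05]  # rows clearing the bar
share = len(significant) / len(p_values)
print(f"{share:.0%} of rows significant")  # 50% of rows significant
```

For a database of independent null effects you'd expect roughly 5% of rows to clear that bar by chance; a 66-75% hit rate is either a remarkable body of evidence or a remarkable selection process.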

Now, before putting in these last two pictures, I would like to once again quote Marzano's description of the p values: "Basically, if the value in this column is less than .05, the effect size reported for the study can be considered statistically significant at a significance level of 5% (that is, the probability of observing such a result by chance is less than 1 in 20). In other words, a reasonable inference can be made that ... the reported effect size represents a real change in student learning."

Behold the "better than a coin flip" point:

And, this is definitely my favorite: Seven results that are 100% likely to be random fluctuations.
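For the curious: a reported p of 1.00 is exactly what an exact test spits out when the observed result is no more extreme than *any* possible chance arrangement. A toy permutation test on made-up scores (hypothetical data, not from Marzano's table) shows the mechanics:

```python
from itertools import combinations

# Hypothetical scores: two groups with identical means.
a = [10, 12, 11, 13]
b = [11, 13, 10, 12]

pooled = a + b
observed = abs(sum(a) / len(a) - sum(b) / len(b))  # 0.0 -- no difference at all

# Count how many relabelings produce a difference at least as extreme.
count = total = 0
for idx in combinations(range(len(pooled)), len(a)):
    g1 = [pooled[i] for i in idx]
    g2 = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diff = abs(sum(g1) / len(g1) - sum(g2) / len(g2))
    total += 1
    if diff >= observed:
        count += 1

p = count / total
print(p)  # 1.0 -- every arrangement does "at least as well" as the data
```

A p of 1.00 doesn't mean the effect is fake, but it does mean the data are maximally consistent with pure chance, which is an odd thing to include in a pile of evidence for a teaching strategy.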

It's the epitome of hypocrisy that in this day and age of "data driven decision making" we're relying on this type/quality of research to make decisions. It's like our politicians and educrats have never even heard of the phrase "peer review." Again:

To Marzano's credit, he's stated his research is being misused. But I don't see him making a big fuss in telling states like Florida to stop basing their evaluations on his work, i.e., stop buying his books and materials.

BONUS: This is one of my favorite pictures, when you sort it by N (C), that is, the size of the control. You used a study that had ONE kid as a "control"? Really? I mean,

*REALLY*? You don't need to be a statistician to realize that public policy should not even be influenced by, let alone based on, this analysis.
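In fact, a control group of one isn't just weak evidence; the standard machinery breaks outright, because sample variance requires at least two observations, and without a variance there is no standard error behind any effect size or t-test. A minimal demonstration (the single score is made up):

```python
import statistics

control = [72]  # hypothetical: the lone student in the "control group"

# Sample variance is undefined for a single observation, so Python's
# statistics module refuses -- as would any honest analysis.
try:
    statistics.variance(control)
except statistics.StatisticsError as err:
    print("variance undefined:", err)
```

Whatever effect size got reported for that study, it could not have come from the control group's variability, because there is none to measure.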
