Thursday, January 17, 2013

Hattie vs. Willingham (and science)

So I'm currently trying to read Dr. Willingham's When Can You Trust the Experts? and Dr. Hattie's Visible Learning.  Granted, my bias is for the scientist (Willingham), not the education major.  But I thought it funny that last night I read in Trust a warning to watch out for marketing buzzwords preying on Enlightenment-era thinking: "research [or evidence] based," "unlocking potential," etc.  Then turn to Hattie:

Reveals teaching's Holy Grail (right off the cover) 

Yeah.  To paraphrase Willingham: there is no "magic bullet," no "hidden potential."  (Also, it's...odd...that education's "Holy Grail" would have garnered a total of 12 reviews on Amazon after 3+ years.)

Turn to the back of Visible:

"...represents the largest ever collection of evidence-based research...."

"Evidence-based" was one of the meaningless buzz phrases Willingham said to watch out for.  (What exactly is non-evidence-based research???)

Oddly enough, this shows up just two paragraphs later (also from the rear cover of Visible):

"Although the current evidence-based fad has turned into a debate about test scores...."

Wait, what?  Did the book's back cover just call this book part of a fad???  Methinks they probably should have used "trend" or "movement" if they wanted to promote this book.

I'm going to have an extremely difficult time giving Hattie's book a fair shake; I've already read a few things that make me dubious.  And one thing that, while not discrediting the whole book, shoots a pretty big hole in it:

"Matching style of learning" (pg 195)

d = 0.41.  Wow, so matching students to their learning styles has an "average" effect?  In other words, something that does not exist (Willingham has argued at length that learning-styles theories lack empirical support) has a sizable, average-or-better impact?  Neat.  What does this say about the methodology?

A few other quick points:

1.  This is a synthesis of meta-analyses, which I think is just another way of saying it's a meta-analysis of meta-analyses; put it that way and the immediate problem is obvious: you're now two levels away from the raw data.  (To borrow a banking analogy, this is like mortgage-backed derivatives; their value collapsed because, after being sliced, diced, and repackaged, no one knew what they were really based on.)  And you've now "massaged" the numbers twice, introducing error the first time, then compounding it.

2.  Who peer-reviewed this work?  (I'm pretty sure the answer is "no one".)

3.  I'm still not convinced these effect sizes mean a whole lot in and of themselves.  As of right now, I see no measurement of whether or not the effects are real, just that they are "big."  And the definition of "big" is ambiguous.  More on this later; I'm still trying to really figure these out before I open my mouth.
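On point 1, here's a toy illustration of just one way averaging averages goes wrong (the numbers are invented): an unweighted mean of meta-analytic means ignores how many studies each meta-analysis actually summarizes, so the answer shifts away from what the raw data would give.

```python
# Made-up effect sizes from two hypothetical meta-analyses.
meta_a = [0.9, 0.9, 0.9]  # three studies, mean 0.9
meta_b = [0.1]            # one study, mean 0.1

# Averaging the two meta-analytic means treats them as equals...
avg_of_avgs = (sum(meta_a) / len(meta_a) + sum(meta_b) / len(meta_b)) / 2

# ...but pooling the underlying studies weights them by study count.
pooled = sum(meta_a + meta_b) / len(meta_a + meta_b)

print(avg_of_avgs, pooled)  # 0.5 vs. 0.7
```

Same raw data, two different "overall" effects, depending on how far from the raw data you stand when you average.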
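On point 3: as I understand it, Hattie's d is the standard standardized mean difference (Cohen's d). A minimal sketch of the computation (the score lists are made up for illustration):

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    n_t, n_c = len(treatment), len(control)
    mean_t = sum(treatment) / n_t
    mean_c = sum(control) / n_c
    # Sample variances (n - 1 in the denominator).
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (n_t - 1)
    var_c = sum((x - mean_c) ** 2 for x in control) / (n_c - 1)
    pooled_sd = math.sqrt(((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Invented scores: treatment group 5 points ahead of control.
print(round(cohens_d([75, 80, 85, 90], [70, 75, 80, 85]), 2))  # 0.77
```

Notice the formula says nothing about whether the underlying studies measured what they claim to measure; it only says how far apart two group means are, in pooled-standard-deviation units. Which is exactly why a "sizable" d for learning styles tells you more about the inputs than the effect.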

1 comment:

  1. I like your scepticism. I've tried to find peer reviews and look at Hattie's research myself. The peer reviews show Hattie has made a lot of mistakes, from misrepresentation to calculation errors - e.g., negative probabilities. The misrepresentation is significant: e.g., for Hattie's influence 'Reducing Disruptive Behaviour' he used three meta-analyses, none of which measure this; rather, they compare kids with psychological disorders to 'normal' kids. I've created a blog collecting the peer reviews here-