Wednesday, November 20, 2013

"I'm not even going to read the question."

...So said one of my students, prior to my handing out the wonderful test for my VAM (value-added model) score today.  And while this student wound up actually reading (at least) some of the questions, others wanted to start filling out the bubble-sheets before they even got the test.  Still others were done with the 40-problem test within ~5-10 minutes.  So please, tell me, highly-paid "educational experts": why is it a good idea to judge me on these test scores?

How many millions of dollars are being spent collecting junk data for a statistically flawed analysis--one that has been shown to be inappropriate and has essentially no demonstrated effectiveness?  It's stupid bogus sophistry (BS) like this that makes me think the schools probably would have enough money if the people at the top (federal, state, district) knew what they were doing.


Wednesday, May 29, 2013

Visible Learning, Invisible Evidence

So I'm "done" (returning) Hattie's Visible Learning tomorrow.  I read over the first two chapters; didn't really focus on the actual "meat" of the book as I don't think the numbers mean squat.  They are at best extremely unreliable; I'd love to see someone try to test some of these numbers.  (i.e., focus on one strategy, test it repeatedly, and see if the results come back anywhere near the average Hattie presents.  Or even take a few [large] random samples of older research and see if the same number comes back up.)

A few of my questions/comments/concerns:

1.  If these effect sizes are accurate, why can't a teacher focus on 2-3 things and thus be a more-or-less "great" teacher?  If these evaluations' [e.g., Marzano] checklists aren't really checklists, as claimed--that is, "It's stuff [we're] already doing in class"--well, with all these great effects, why isn't virtually every teacher great?  I see three possibilities (not mutually exclusive):
i.    Virtually every teacher is not doing them (and there are a LOT of them) enough.
ii.   Virtually every teacher sucks at virtually every one of them.
iii.  The numbers suck.

(Technically I can think of a fourth, but I excluded it: the--illogical--possibility that the numbers are somehow not cumulative.  But if that's the case, it destroys the whole argument for implementing these strategies.  A back-of-the-envelope version of the arithmetic is sketched below.)
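Here's the naive arithmetic, as a minimal sketch. The assumption being tested--that effect sizes stack additively--is exactly the reading the book invites, and the d values are illustrative approximations of the sort of large effects Hattie reports, not quotes from the book:

```python
# Naive "why isn't every teacher great?" arithmetic. ASSUMES effects
# combine additively; the d values are illustrative approximations only.
strategies = {
    "feedback": 0.73,
    "direct instruction": 0.59,
    "mastery learning": 0.58,
}

combined_d = sum(strategies.values())
print(f"naive combined effect: d = {combined_d:.2f}")
# ~1.9 standard deviations of improvement from three strategies alone.
# If classrooms don't actually look like that, then either teachers
# aren't doing these things, are doing them badly, or the numbers suck.
```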

2.  Hattie states that a d=0.4+ is the "zone of desired effects".  Yet he also states, "Further, there are many examples that show small effects may be important" and goes on to mention a study with a d = 0.07 wherein "34 out of every 1,000 people would be saved from a heart attack if they used low dose aspirin on a regular basis".  Well, if it affects 34 out of 1,000 people, it would save 1.9 million out of ~55 million.  I use this latter number because that's roughly how many K-12 students there are in the US (the arithmetic is spelled out below).  Obviously this wouldn't be as significant as a life-or-death situation, but if something is going to help (rather than save) that many kids, is it worth looking into?  To quote Hattie, "This sounds worth it to me." (pg 9)  Hattie's "hinge point" seems purely arbitrary.  This also highlights the difference between the (pseudo)scientific approach of meta-analysis in the medical field versus education, which leads to...
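(The 1.9 million figure, spelled out; the ~55 million enrollment number is the approximation used above:)

```python
# Scaling the aspirin study's rate to the US K-12 population.
helped_per_thousand = 34          # from the d = 0.07 aspirin example
k12_students = 55_000_000         # approximate US K-12 enrollment

helped = helped_per_thousand / 1000 * k12_students
print(f"{helped:,.0f} students")  # 1,870,000: call it 1.9 million
```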

3.  Applying a scientific approach to unscientific data yields unscientific results.  And seeing as how this whole book strikes me as just yet another attempt to latch onto science's credibility (something educational research, generally speaking, does not have), that's a big deal.  In fact, there's something absurd about even having to discuss whether the quality of the data matters (pg 11).  Case in point:  he cites Torgerson et al. (2004), who used 29 out of 4,555 potential studies in a subject area.  These 29 were chosen as "quality" studies (Torgerson's definition) because they used randomized controlled trials.  That helps improve the quality of your data, all right, but what about the other 4,526?  99.4% of the research didn't use randomized trials?  The best education can typically do (not faulting education; it's just the nature of the beast) is "quasi-experimental" studies.

4.  Another problem with the data that raises a big red flag over all these numbers (again: garbage in, garbage out):  there are no real (scientific) controls in educational research.  A control is a "yes/no" situation:  Group A gets the experimental treatment (e.g., a drug) and Group B does not (e.g., a placebo).  Obviously you can't do this in a classroom without doing something tantamount to child abuse (i.e., standing there and doing absolutely nothing)...but frighteningly, that's the only meaningful "control" there could be.  (And that's one reason why education data will never be scientific in nature.)

5.  Barring a strictly regimented routine (one that could probably be automated via presentation software), it's highly unlikely two teachers using the same "technique" will apply it identically.  (The same goes for the "controls" above:  what teachers replace the experimental technique with will differ, rendering comparisons dicey.)  This leads to another "apples and oranges" scenario for meta-analysis (albeit admittedly a relatively weak one).

6.  More apples and oranges:  one technique may be effective at one grade level but not another.  I have no problem accepting that having a learning goal may help first graders; they may need the focal point, and (I believe) their goals cover general subjects/topics.  I have a hard time accepting that writing "student is going to factor trinomials" on the board is going to have a significant impact on seniors in algebra.  (Anecdote:  my students have repeatedly mocked/made derisive comments when they see me changing the learning goals.  For example:  "You know we never look at those, right?"  "Yes, I know, it's just something I have to do."  Very empowering, let me tell you...)  Mushing multiple grades together into one statistic is just a bad idea--a toy version of the problem is below.  Ditto for different subjects (at higher grades).
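A minimal sketch of that pooling problem. Every per-grade number here is invented for illustration; the point is only that an "average" effect can be true of no actual grade:

```python
# Toy pooling example: the same technique, different effects by grade.
# All numbers are invented purely for illustration.
effects_by_grade = {"grade 1": 0.60, "grade 6": 0.30, "grade 12": 0.00}

pooled = sum(effects_by_grade.values()) / len(effects_by_grade)
print(f"pooled d = {pooled:.2f}")
# pooled d = 0.30 -- an "effect" that describes none of the three grades,
# overselling the technique to seniors and underselling it to first graders.
```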

Sunday, January 27, 2013

Is Teaching Still a Viable Career Path?

One of the worst questions I'm asked nowadays is some variation of "What should I major in?"  To which I have no good answer--I don't know what I'd do in this generation's shoes.  About the only "safe" majors are probably finance/accounting.  A few students seem to be trying to go into teaching, to which I offer two main pieces of advice:

1.  Get a degree in the subject you want to teach, not an education degree in that subject (if applicable).

2.  Get a second major or a minor, or in some way, shape, or form start preparing for "Plan B", because there's nearly a 50% chance you'll need it within five years.

I offer this advice for one main reason:  teacher turnover is insane.  The reasons are myriad, but they probably all fall under "burnout" (or "stress", financial or otherwise) in some fashion--whether it's teachers being blamed for all of society's woes, the attacks on their benefits, or what have you.  And it got me thinking:

Is becoming a teacher a viable career path anymore?

No one goes into teaching for the pay; it's always been (in my admittedly biased, humble opinion) way too low.  But at this point, after years of stagnation and massive increases in health care costs, is it really viable to go into teaching as a career?  I see it more as a second income for couples at this point; I could not in good conscience recommend it as a "primary" (sole-income) career path for college-bound students.  Update:  though this may finally make it worthwhile: $10,000 bachelor's degrees in science and math education.  (Though I'd still recommend a backup plan.)

Like most middle-class salaries, teaching salaries have stagnated (below).  But again, given that they were already low, has teaching fallen out of the "middle class" (financially) because of the constant erosion of salaries by rising health care costs?

Average teacher salaries (constant dollars):  (Note:  I wish I had an "average salary" for teachers' first five years; I imagine the figures below are skewed upward by the fact that the teaching population is aging.  The current average age is ~41, which means these figures probably reflect teachers averaging 15-20 years of experience.)


[Chart: average teacher salaries in constant dollars.  "Venting my cynicism...now with graphics!"]

Thursday, January 17, 2013

Hattie vs. Willingham (and science)

So I'm currently trying to read Dr. Willingham's When Can You Trust the Experts? and Dr. Hattie's Visible Learning....  Granted, my bias is for the scientist (Willingham), not the education major.  But I thought it funny that last night I read in Trust warnings to the effect of watching out for marketing buzzwords preying on Enlightenment-era thinking:  "research [or evidence] based", "unlocking potential", etc.  Turn to Hattie:

Reveals teaching's Holy Grail (right off the cover) 

Yeah, um...no.  To paraphrase Willingham:  there is no "magic bullet", no "hidden potential".  (Also, it's...odd...that education's "Holy Grail" would have garnered a total of 12 reviews on Amazon after 3+ years.)

Turn to the back of Visible:

"...represents the largest ever collection of evidence-based reaserch...."

"Evidence-based" was one of the meaningless buzz phrases Willingham said to watch out for.  (What exactly is NON evidence-based research???)

Oddly enough, this shows up just two paragraphs later (also from the rear cover of Visible):

"Although the current evidence-based fad has turned into a debate about test scores...."

Wait, what?  Did the book's back cover just call this book part of a fad???  Methinks they probably should have used "trend" or "movement" if they wanted to promote this book.

I'm going to have an extremely difficult time giving Hattie's book a fair shake; I've already read a few things that make me dubious.  And one thing that, while not discrediting the whole book, shoots a pretty big hole in it:

"Matching style of learning" (pg 195)

d = 0.41.  Wow, so matching students to their learning styles has an "average" effect?  In other words, something that does not exist has a sizable (average) impact?  Neat.  What does this say about the methodology?

A few other quick points:

1.  This is a synthesis of meta-analyses.  Which I think is the same thing as saying it's a meta-analysis of meta-analyses, except then everyone would see the immediate problem:  you're now two levels away from the raw data.  (To continue the banking analogy, this is like mortgage-based derivatives; their value collapsed because no one knew what they were really based on after being sliced and diced, repackaged, and so on.)  And you've now "massaged" the numbers twice, introducing error the first time, then compounding it.  (A toy simulation of this is below.)
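Here's that toy simulation, a minimal sketch with every number invented. It models one kind of error that averaging can't wash out (a crude publication-bias filter at the study level), then averages twice:

```python
import random

# Toy model: noisy studies -> meta-analyses -> one synthesized number.
# Every quantity is INVENTED; this illustrates the mechanism, nothing more.
TRUE_EFFECT = 0.30

def study():
    d = random.gauss(TRUE_EFFECT, 0.40)
    # crude publication-bias filter: null/negative results often get
    # re-run or shelved, so small/negative d's tend not to surface
    while d < 0 and random.random() < 0.7:
        d = random.gauss(TRUE_EFFECT, 0.40)
    return d

def meta_analysis(n_studies=20):
    return sum(study() for _ in range(n_studies)) / n_studies

synthesis = sum(meta_analysis() for _ in range(15)) / 15
print(f"true effect: {TRUE_EFFECT:.2f}  synthesized estimate: {synthesis:.2f}")
# The random noise averages away; the baked-in bias does not (the estimate
# lands around 0.40, not 0.30). The final number looks precise precisely
# because it is two levels from the raw data.
```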

2.  Who peer-reviewed this work?  (I'm pretty sure the answer is "no one".)

3.  I'm still not convinced these effect sizes mean a whole lot in and of themselves.  As of right now, I see no measurement of whether or not the effects are real, just that they are "big"--and the definition of "big" is ambiguous (a sketch of the distinction is below).  More on this later; I'm still trying to really figure these out before I open my mouth.
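To make the "big" vs. "real" distinction concrete, a minimal sketch. Cohen's d (the standard effect-size formula, which I'm assuming is what underlies these numbers) only measures how large a difference looks; with small, noisy samples, a "big" d shows up by chance all the time. This simulation draws both groups from the same distribution, so the true effect is exactly zero:

```python
import math
import random

def cohens_d(a, b):
    """Standard Cohen's d with a pooled standard deviation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled = math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb)
                       / (len(a) + len(b) - 2))
    return (ma - mb) / pooled

# Both "treatment" and "control" come from the SAME distribution:
# any nonzero d is pure noise.
trials = 10_000
big = 0
for _ in range(trials):
    treat = [random.gauss(0, 1) for _ in range(10)]
    control = [random.gauss(0, 1) for _ in range(10)]
    if abs(cohens_d(treat, control)) > 0.4:
        big += 1

print(f"{100 * big / trials:.1f}% of null experiments cross the 0.4 'hinge'")
# With 10 students per group, well over a third of no-effect experiments
# land in the "zone of desired effects" by luck alone.
```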

Monday, January 14, 2013

What do Banks and Bad Ideas in Education Have in Common?

They're too big to fail.

As I was reading through Dr. Willingham's When Can You Trust the Experts? last night it hit me:  many of these bad ideas (e.g., Marzano, learning styles, no-zero policies) are now "too big to fail."  Too many people have made their names on these ideas, and too many more have careers that depend on them (see below).  So even if you could get through the "it must be right because everyone believes it" mentality, you would still have an uphill fight:  displacing people in administration who base their livelihoods on (essentially) wasting time and money.  (And of course, these people have an added incentive NOT to believe that what they are doing is useless, and they're the ones in power...which means getting through that "this doesn't work/this isn't real" barrier may very well be impossible.)

My district alone has a seven-person "Accountability and Assessment" department that includes:

Director of Assessment and Accountability
Program Manager for Testing, Grants, Development & Evaluation
Program Manager for Assessment and Data Analysis
Program Evaluator and Data Analyst
Test Development Specialist
Test Warehouse Operator
Clerical Assistant

We won't ask why a seven-person department needs a director and two managers.  Gotta love that near 1:1 ratio of managers to non-managers!  (And I've got $5 that says the clerical assistant does more real [honest] work than the managers and director combined--and for half their salary.  ;) )

Potential solution:  education grad schools need to focus more on how to conduct and/or review research.  This will (hopefully...) start putting more knowledgeable people in power down the road, people less likely to fall for these bad ideas.

Update:  1/19/13

Just heard this and thought it apropos:  "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"--Upton Sinclair