Friday, September 28, 2012

The Dangers of Groupthink.

Oh look, another article promoting critical thinking & Common Core. The irony is delightful. If only the pushers of these would follow their own (non-)advice and think critically for a few moments. If only they understood the extreme case of groupthink that educational "experts" display.

Let's take a look at a few of their claims:

"The goal, say experts, is to ... to create a pipeline of native talent for the millions of STEM jobs going begging—in science, technology, engineering, and math."

Problem: There is no STEM shortage. None. Zip. Zilch. Repeat after me: THERE IS NO SHORTAGE OF SCIENCE, TECHNOLOGY, ENGINEERING, AND MATH EMPLOYEES.


"For example: How would a feminist critic view Alfred Hitchcock's Rear Window?"

First of all, let me just say, "What?"  Doing research into a very specific question (on a subject that is hardly objective) is "better learning" than what, exactly?  More importantly,

"They read related texts from different genres, think critically to reach an informed conclusion...."


Ah yes, "critical thinking"...which used to be called just "thinking" before someone decided to make a quick buck off pushing a phantasmal concept. What exactly is critical thinking, you ask?

"Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness."
Whew...that's some top-quality bullsh*ttin', right there. If that's critical thinking, what is un- or "noncritical" thinking? Instinct? Seriously, people, think for a minute, critically or otherwise. Maybe it means this. Or just about whatever you want, because like most edubabble, it means "I sure wish I could make this sound smart even though it isn't." (Sort of like "rectifying a consequence for a tardy", which I think in olden days was just called "getting detention". Note: how does one "rectify a consequence"?)

"In math, the shift is away from lectures and rote working of equations to the practical application of mathematical processes, often in teams, to real-world situations."

Showing how math is used in real life is great--except you can only ever pick a few examples, since you can't possibly cover every application. Which means you're still not really "getting it". Sure, toss in some examples, but the key is that the kids can actually do the work. That way, if they're asked to apply it professionally later in their lives, they can. Practice, practice, practice!

"The idea is to help students gain "a broader understanding of mathematical purpose..."

Again, that's all well and good, but it's more important that they can actually DO the math, isn't it?

"Students will study only the immune, endocrine, and nervous systems rather than all 11 body systems."

And here we have the real crux of the problem: high school students should be getting as broad an education as possible, in my opinion. College, whether it be community college, an undergraduate university, or the ultimate in specialization, graduate school, can provide the details once the student has chosen a career path.

Saturday, September 1, 2012

The Insanity of the Marzano Evaluation System...

You have to laugh at the state of education in Florida (and to some extent, the US). NAEP scores have been rising throughout the country for 30+ years, yet you still have people screaming for the latest fad to save us. In the case of Florida, they bought into Marzano's "causal" framework.

Let's start with the most obvious problem: it's ridiculously convoluted. If I were asked to create a parody of an evaluation system, I couldn't possibly make something more hilarious than Marzano. Even its creators state it "works best" with a year of planning and training. What the hell kind of evaluation system requires a year of either, let alone both? (Answer: a very bad one.) If you are going to invest this kind of time (and the resources) into a program, you had better have extremely high expectations for it. Sadly, pretty much everyone with a modicum of intelligence knows that this is not going to change anything (let alone significantly for the better). It shouldn't be necessary to tell ostensibly well-educated people this, but a simpler (and thus more easily understood) system will work better than a complicated one.

How many "elements" (items) are in it?  The answer is 60, in four "domains" (edu-speak for "areas"; you can't charge $23/book for using simple language!).  The evaluation system would be better as a bulleted list of "suggestions"--it would have saved millions of dollars.  In that role (a list of "good suggestions"), Marzano is just fine.

But surely it will be massively successful; after all, Marzano's system has been evaluated by none other than Marzano himself--just check out Research Base and Validation Studies on the Marzano Evaluation Model, April 2011.

To quote this (presumably) non-peer reviewed work, "[The] Marzano Evaluation Model is based on a number of previous, related works that include: What Works in Schools (Marzano, 2003), Classroom Instruction that Works (Marzano, Pickering, & Pollock, 2001), Classroom Management that Works (Marzano, Pickering, & Marzano, 2003), Classroom Assessment and Grading that Work (Marzano, 2006), The Art and Science of Teaching (Marzano, 2007), Effective Supervision: Supporting the Art and Science of Teaching (Marzano, Frontier, & Livingston, 2011)."

That's right--you know Marzano's framework will work because it's based on Marzano's work! (Also, Philip Morris would like you to know that smoking increases your libido, cures cancer, and prevents male-pattern baldness.) Check out the section entitled The Research Base from Which the Model Was Developed:

"Each of the works (cited above) from which the model was developed report substantial research on the elements they address. For example, The Art and Science of Teaching includes over 25 tables reporting the research on the various elements of Domain 1."

I don't believe I've ever seen the number of tables in a book cited as evidence of the book's quality--i.e., "You know it's good because there's a lot of it." Apparently taking lots of (often old and often poor-quality) semi-related data and mashing it together makes for good research. Only in education research is this even remotely possible.

No-Zero Policies and the Failure of Educational Research

A personal pet peeve is the "no-zero" policy some experts are pushing. In particular, let's look at what one person pushing it, Dr. Douglas Reeves, has to say:

"First is the use of zeroes for missing work. Despite evidence that grading as punishment does not work (Guskey, 2000) and the mathematical flaw in the use of the zero on a 100-point scale (Reeves, 2004)..."

From the same article as above, "get the facts; gather evidence that will create a rationale for decision making."

Sounds like a plan; let's look at his first citation, Guskey, 2000:

"Instead of prompting greater effort, low grades more often cause students to withdraw from learning.  ... Other students may blame themselves for the low grade, but they may feel helpless to make any improvement (Selby and Murphy 1992)."

And here the problem becomes obvious when you look at his citations:
"Selby, D., and S. Murphy. 1992. Graded or degraded: Perceptions of letter gradeing [sic] for mainstreamed learning-disabled students. British Columbia Journal of Special Education 16 (1): 92-104."

Basing an argument on "mainstreamed learning-disabled" students only?  Even better, you're basing it on SIX of them:

"This study of six mainstreamed students (in grades six and eight) with learning disabilities, their parents, and their teachers..."

So there's one of his two reasons trashed.  Let's look at the other, the "mathematical flaw".

He makes two points. The first is an assumption: "To insist on the use of a zero on a 100-point scale is to assert that work that is not turned in deserves a penalty that is many times more severe than that assessed for work that is done wretchedly and is worth a D." There's not much to say here; he points out that punishing students doesn't work, and I agree. But giving a zero for zero effort is not "punishing"--that's just GIGO. The only other time I--and, I'd like to believe, every other teacher--would give a zero is in cases of cheating, where it is entirely appropriate to punish someone. Certainly giving out zeros left and right is unfair (and probably unethical), but this is just a simple matter of fairness, in my opinion.
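
To be fair, here's the arithmetic behind his "severity" claim (my own illustration with made-up numbers, not Reeves's example):

# A quick sketch of the "severity" claim (my numbers, not Reeves's):
# on a 100-point scale, one zero drags an average down far more than one low D.
four_As_and_a_low_D = [90, 90, 90, 90, 60]
four_As_and_a_zero = [90, 90, 90, 90, 0]

print(sum(four_As_and_a_low_D) / 5)  # 84.0 -- still a B
print(sum(four_As_and_a_zero) / 5)   # 72.0 -- down to a C

Yes, the zero dominates the average--because one fifth of the work simply doesn't exist. That's not a "penalty"; that's the average doing its job.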

The second argument is also an assumption: that you must be consistent, in that every grade must be 10 points apart, or you are being "unfair". Ironically, he remarks that "many people with advanced degrees, including those with more background in mathematics than the typical teacher, have not applied the ratio standard to their own professional practices." He's assuming we say that every ten points must mean something; nothing dictates that, and again, this "justification" is just an assumption.

This argument is easily dismissed by noting that a "failing grade" is simply anything that doesn't meet a certain threshold. That is, you have to do well enough before we'll consider you competent, and after that point, you go up one letter grade for every 10 percent. His argument relies on the "fairness" claim that every grade must be 10 points apart. There is nothing in a piecewise function that makes this mandatory--it's just something he has assumed must be true. He also seems to fail to take into consideration that his 0-4 point scale radically changes how we assign grades. Given the cardinal nature ("ratio standard") of his grading, it seems to me the scale must remain linear, so 0-4 can just as easily be represented as:

  0-19%   F  ("0")
 20-39%   D  ("1")
 40-59%   C  ("2"  Note: you are now "average" even if you know less than 50% of the material)
 60-79%   B  ("3"  Slightly above half--60% and up--is now categorized as "above average")
 80-100%  A  ("4"  And here you have an even larger percentage of kids who are "excellent")

If a student only knew 20% of the material, would you consider that "competent"? Note that you've doubled the range for each of the grades A-D. If you think we had a grade-inflation problem before, wait until this becomes acceptable. Good luck determining who is truly excellent when an "A" (a "4") covers the top 20% of scores. The grades become too ambiguous to be useful under his "four point" (apparently the zero doesn't count?) scale. And if the zero is reserved ONLY for assignments not turned in, the scale becomes even more inflated and ambiguous: 1-25% is a "1", 26-50% a "2", 51-75% a "3", and the top score now includes everything from 76% up!
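
To make the comparison concrete, here's a quick sketch of the two scales side by side (the cutoffs match the table above; this is my illustration, not Reeves's published rubric):

def traditional_grade(pct):
    # Conventional scale: F below 60, then one letter grade per 10 points.
    if pct < 60: return "F"
    if pct < 70: return "D"
    if pct < 80: return "C"
    if pct < 90: return "B"
    return "A"

def linear_four_point_grade(pct):
    # Linear 0-4 scale mapped back to percentages: 20-point bands.
    for cutoff, letter in [(20, "F"), (40, "D"), (60, "C"), (80, "B")]:
        if pct < cutoff:
            return letter
    return "A"

for pct in (45, 65, 85):
    print(pct, traditional_grade(pct), linear_four_point_grade(pct))
# 45 F C   <- a failing student is suddenly "average"
# 65 D B   <- a D student is suddenly "above average"
# 85 B A   <- a B student is suddenly "excellent"

Same student, same score, a full letter grade (or two) higher. The inflation is built right into the scale.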

But perhaps the greatest bit of wisdom is from a statement he made in the national press, where he (I assume) tried to briefly summarize his "ratio standard":

"It's a classic mathematical dilemma: that the students have a six times greater chance of getting an F," says Douglas Reeves, founder of The Leadership and Learning Center, a Colorado-based educational think tank who has written on the topic."

The "chance" of getting an F?  While chance/luck will certainly play a role in everything we do, short of guessing on every answer, I'm pretty sure ability, practice, and preparation are going to have a far greater influence on your "chance" of getting an F.  In other words, the esteemed doctor's "classic mathematical dilemma" rests on the assumption that grades are random variables.  (Maybe he should consult some of those highfalutin folks with "more background in mathematics than the typical teacher" before making any more statistical arguments...)

So we have some research that was, to be polite, very poorly done (the egregious misinterpretation of the original source would warrant an "F" in my class...I'd be hard-pressed not to give him a ZERO) and a "mathematical dilemma" that is based on several flawed assumptions and would result in some horrible unintended consequences. And yet our educational leaders are buying it hook, line, and sinker.