Saturday, December 22, 2012

Arming teachers?

I'm not going to pretend to have an answer to this cultural problem, but I did want to comment on the reaction of "let's arm the teachers".  Overall, I am pretty strongly against this idea.  Here is my take:

Pros:

1.  It could be a deterrent.
2.  It could limit casualties.
3.  Relatively cheap and easy implementation.

Cons:

1.  It could be a deterrent, but that's about it.
2.  It could limit casualties, but barring highly idealized situations, won't actually stop these shootings--making it a bit of a half-assed "solution".
3.  Relatively cheap and easy implementation--if you get full buy-in, which is extremely unlikely; it would pretty much require every teacher to be armed to be remotely effective in stopping these shootings.

To expand on the cons a bit (since I'm trying to show why I do NOT support this idea):


Realistically, it most likely won't actually stop a shooting; it can only hope to limit the casualties.  Even if the teacher is armed, a rampaging student will most likely get off several shots before an inexperienced (in firearms & self-defense) teacher could draw the gun.  Further, if said shooter(s) think the teacher has a gun, that may just make said teacher the first target; again, without extensive training, they'd be down before they could do anything (see video above).

While the theory sounds plausible--armed citizens keep people safe--the reality is that it doesn't seem to hold much water.  This is another solution that requires several ideal assumptions to be true--every time.  (E.g., the shooter has to come into a room where the teacher is armed, and the teacher has to react faster than the shooter; otherwise, it won't stop the shooting, it will at best minimize casualties.)  And how much of a deterrent will the possibility of getting killed be when many of these massacres end with the shooter killing themselves?

How many teachers would want to carry a gun?  I like guns--grew up shooting them--which probably (I have no data to back this up; it's just an assumption) makes me unusual as far as teachers go.  However, I would never take one onto a campus--way too much liability for something to go wrong, for starters.  (God help the first teacher whose weapon, for whatever reason, accidentally discharges on campus, regardless of whether anyone is hit.)  But most importantly--have you seen kids these days?  Half of my male students--and a handful of female students--could easily overpower me and take the gun with little problem.  Between that and the potential for accidents, this situation has "mistake" written all over it.

The last con is more a matter of cultural awareness.  When you think of a place where teachers have to be armed to keep kids safe, what/where do you think of?  I think of stereotypical third-world places.  Is this really what America has come to?  As the president said, surely we can do better than this.

Tuesday, December 11, 2012

How do you sell a solution to a nonexistent problem?

You make the problem up, that's how!

In the case of (self-described) educational "experts" and their corporate-centric "reforms", nothing could be worse than any data indicating our schools are doing fine.  And the media seems to be right along for the ride.  Case in point:

International test scores expose U.S. educational problems

Now, to their credit, the folks over at the Huffington Post changed the headline sometime between this morning and this afternoon to the less antagonistic International Tests Show East Asian Students Outperform World As U.S. Holds Steady.  The problem is that this still leaves the (false) impression that our schools suck compared to the rest of the world.  The article itself states:


Overall, the U.S. ranked sixth in fourth-grade reading, ninth in fourth-grade math, 12th in eighth-grade math, seventh in fourth-grade science and 13th in eighth-grade science.

This is out of 60 countries taking the TIMSS.  Considering the challenges we face that smaller, culturally homogeneous countries do not, this is amazing.*  How did Secretary of Education Arne Duncan handle this fabulous news that we're (far above) average in many categories?

U.S. Secretary of Education Arne Duncan called the U.S. scores encouraging, but described older students' performance as "unacceptable."

Encouraging?  Really?  Yes, we need to address the falling-off in later grades (which is, I'm betting, a cultural problem, not a schooling problem) but that's it?  Not even a "This shows we're on the right track" or "Congratulations to our teachers for making us competitive"?  Does education "reform" have ANYTHING to do with, well, actually educating people or is it just a money-making (and union-busting) scheme?  (Or, to paraphrase the late Dr. Gerald Bracey, "It is important to remember that to 'reform' does not necessarily mean improve, just to reshape.")


*  And let's not forget our horrible funding of schools, which presents all sorts of challenges, as this (5+ year-old) data shows:
But in the Progress in International Reading Literacy Study, American kids in low poverty schools stomped the top-ranked Swedes. Even kids in schools with up to 50% of the students in poverty attained an average score that, had they constituted a nation, would have ranked 4th. Only American students attending schools with 75%+ poverty scored below the international average of the 35 participating countries.

Sunday, October 28, 2012

Picking and choosing research (or, "Charter Schools -- The Data")

Not a lot to say here; I'll let the studies speak for themselves.  But long story short, it's pretty funny that in this day and age of "data driven decision making" (as opposed to what?), certain elements prefer to go with their ideals/beliefs rather than the facts.  Or, even shorter, "School reform is all about politics; it has nothing to do with education."  After all, everything considered, our schools are doing fine.

First of all, Stanford's CREDO study:

The study reveals that a decent fraction of charter schools, 17 percent, provide superior education opportunities for their students.  Nearly half of the charter schools nationwide have results that are no different from the local public school options and over a third, 37 percent, deliver learning results that are significantly worse than their student would have realized had they remained in traditional public schools.  These findings underlie the parallel findings of significant state‐by‐state differences in charter school performance and in the national aggregate performance of charter schools.

The US Department of Education's study (middle schools only):

On average, charter middle schools that hold lotteries are neither more nor less successful than traditional public schools in improving student achievement, behavior, and school progress.

A RAND study is also a "mixed" review, showing that there is some good and some bad in charters:

Charter schools in most locales have marginally greater variation in performance than TPSs [Traditional Public Schools], as measured by the achievement-impact estimate for each school, and, in some locations, this may simply reflect greater measurement error associated with the smaller average size of charter schools.

Even the KIPP schools, which I first heard about from Outliers (great book, btw) and greatly admire(d?), have some doubts.  It appears that some of their success is from "selective attrition", as evidenced here and here.  There seems to be this idea that if we just privatize education (or anything), the magical free market fairy will make everything right.  (Ignoring that privatization can debase things--education among them.)  But this just seems to be the way the world works now--ideology trumps reality.  If reality isn't reflecting one's political view, well, obviously, the problem is that the idea just hasn't been given a fair shot--it certainly isn't that you're just wrong.

Update (3/6/13):
How Charter Schools Choose Desirable Students.

Wednesday, October 17, 2012

Learning Styles: Concepts & Evidence

Admittedly I have not read the whole thing--it's the middle of the school year, people, c'mon!--but the summary says it all:

Our review of the learning-styles literature led us to define a particular type of evidence that we see as a minimum precondition for validating the use of a learning-style assessment in an instructional setting. As described earlier, we have been unable to find any evidence that clearly meets this standard. Moreover, several studies that used the appropriate type of research design found results that contradict the most widely held version of the learning-styles hypothesis, namely, what we have referred to as the meshing hypothesis (Constantinidou & Baker, 2002; Massa & Mayer, 2006). The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing. If classification of students’ learning styles has practical utility, it remains to be demonstrated.

Sunday, October 7, 2012

Learning Styles Don't Exist (Dr. Willingham)

This was the video where I first heard of Dr. Willingham.  It was startling to see how such a popular idea is so wrong.  But then I remembered that this is the field of education (and its pseudo-experts) we're talking about here, and the sense of shock wore off very, very quickly.  The irony is that in this day of "data-driven" decisions, we ignore what the science says to go with our "popular" theories.

(Don't get me wrong, I do love education--it's just that the top of the education system is a complete mess and I think it's time for an overhaul THERE, rather than down at the teacher's level.  The troops are fine; it's the generals that need help.)

Dr. Willingham: Merit Pay, Teacher Pay, and VAM

Not a lot to add here; just an excellent video by Dr. Willingham (cognitive psychologist).  Once I have time, I hope to read one of his books (assuming I can find it amongst my stacks of books). 

Saturday, October 6, 2012

Marzano's Database

Updated:  10/9/12 @10:41GMT+5

First of all, while I am NOT an expert statistician, I do know more than the average person and have had formal training/education in the subject.  (I've had some undergraduate and graduate stats classes, used statistics professionally & throughout undergrad, etc.)  So I would love to have a "pure" (read: non-education related) statistician go over the data, but I think I have made a decent start here.

If there's one thing I hate hearing anymore in education, it is "data", because most of the data they have is crap and/or they have no idea what to do with it.  In the case of Marzano, I often hear "him" (his company & his salespeople) going on about how much research he has.  It's true, his database has 1036 studies in it (I copied & pasted these into a text file, then imported it into Excel as a "delimited" file).  But let's take a closer look, because as just about everyone with any connection to the professoriate knows, the quality of educational research is (often extremely) suspect.  This is largely due to the difficulties of doing research on kids (particularly longitudinal research), but there are other problems as well.

If you sort the data by p-values, you quickly start to see some problems.  Marzano's own website declares,

"Basically, if the value in this column is less than .05, the effect size reported for the study can be considered statistically significant at a significance level of 5% (α = .05). In other words, a reasonable inference can be made that the reported effect size is probably not a function of random factors; rather, the reported effect size represents a real change in student learning."

So sort the data by p-value, delete those greater than 0.05, and look what happens:  you're down to 285 studies from the initial 1036.  That means that by his own criterion, only 27.5% of his data is statistically significant (at α = 0.05).  (Or, put another way:  for nearly three quarters of the data, there is a real possibility that the results weren't really results at all, just random fluctuation between the controls and experiments.)  Of the remaining 285 studies, 101 have a p-value of zero; I assume this means that either it wasn't reported OR the experiment was so good that they were able to calculate the p-value down to less than 0.001.  The latter is unrealistic.  (For example, one study has just 4 data points and an "effect size" of 9.25, which is grossly unrealistic.  I don't see how any self-respecting statistician could use or report this, to be blunt.)  So incomplete/unrealistic data in my book gets thrown out--we're down to 185 studies (17.9% of his database).

We're not done yet.  Here are a few other data points I'm going to throw out because I find them too suspect to be reliable for district-wide policy-setting decisions:  studies involving fewer than 18 students (n)--79 of them.  Admittedly, this cutoff is somewhat arbitrary, but I could probably defend their exclusion* far better than anyone could defend their inclusion.  That brings us down to 106 studies (10.2% of the total).  I'm going to stop there, but notice that there are also 16 studies that are incomplete (they have no unit length), another 21 that lasted less than a week, and two whose control groups have fewer than 10 students (one has 4, the other 9), which seems too low.  So even this remaining 10% is somewhat dubious.  But it's not the ten percent that really bothers me, it's the 90%, because Marzano's work--which sadly is influencing policy--is based on all of this bad data.  One other quick question:  how much time was spent on each of these 1000+ studies?  (I'm guessing not a lot; see below.)
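
For anyone who wants to repeat this, here is a minimal sketch of the filtering above.  It assumes the database has been exported to a CSV; the file name and the column names ("p_value", "n") are my own placeholders, not anything from Marzano's site.

```python
# A minimal sketch of the filtering described above, assuming the database
# has been exported to a CSV.  The file name and the column names
# ("p_value", "n") are my own placeholders, not Marzano's format.
import pandas as pd

studies = pd.read_csv("marzano_studies.csv")  # 1036 rows in the full export
total = len(studies)

# Keep only studies that are statistically significant by the site's own criterion.
significant = studies[studies["p_value"] <= 0.05]

# Drop studies reporting p = 0 (most likely unreported rather than genuinely < 0.001).
plausible = significant[significant["p_value"] > 0]

# Drop studies with fewer than 18 students (smaller than a typical classroom).
usable = plausible[plausible["n"] >= 18]

for label, subset in [("p <= 0.05", significant),
                      ("plausible p", plausible),
                      ("n >= 18", usable)]:
    print(f"{label}: {len(subset)} studies ({100 * len(subset) / total:.1f}% of {total})")
```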

In other words, the policy is based on research and the research relies on unreliable data.

So does it surprise anyone that all of these new policies only seem to make things worse?

Also worth reading:  "Marzano - A Successful Fraud", a review of Marzano et al.'s Classroom Instruction that Works...

To quote the Amazon.com review:

A. Every single reference I checked was itself dubious or misrepresented by the authors.
B. Some of the references were on topics unrelated to the instructional strategies cited.
B. [sic] Some of the numbers from published data were altered to better conform to the author's point of view.
C. Some of the references themselves presented provisional conclusions based on weak results, but were given complete credence by Marzano et al.
D. The authors took weak data from several studies, each based on averaging the results from studies assumed to use similar methods and subject cohorts, and averaged these, compounding the statistical weaknesses. This is especially shocking given that no credible researcher would combine results from studies by different groups that clearly use different methodologies and subject cohorts.


*  My rationale for excluding studies of < 18 students:  This is smaller than a typical classroom and, more importantly, probably fewer data points than necessary for reliable statistical analysis (I was always told to use at least 30 data points, but that was in a field--science/engineering--that has much more rigorous standards than the social sciences, let alone education).

Friday, September 28, 2012

The Dangers of Groupthink.

Oh look, another article promoting critical thinking & common core.  The irony is delightful.  If only the pushers of these would follow their own (non-)advice and think critically for a few moments.  If only they understood the extreme case of groupthink that educational "experts" display.

Let's take a look at a few of their claims:

"The goal, say experts, is to ... to create a pipeline of native talent for the millions of STEM jobs going begging—in science, technology, engineering, and math."

Problem:  There is no STEM shortage.  None.  Zip.  Zilch.  Repeat after me:  THERE IS NO SHORTAGE OF SCIENCE, TECHNOLOGY, ENGINEERING, AND MATH employees.


"For example: How would a feminist critic view Alfred Hitchcock's Rear Window?"

First of all, let me just say, "What?"  Doing research into a very specific question (on a subject that is hardly objective) is "better learning" than what, exactly?  More importantly,

"They read related texts from different genres, think critically to reach an informed conclusion...."


 Ah yes, "critical thinking"...which used to be called "thinking" before someone decided to make a quick buck off of pushing a phantasmal concept.  What exactly is critical thinking, you ask?

Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness.
Whew...that's some top-quality bullsh*ttin', right there.  If that's critical thinking, what is un- or "noncritical" thinking?  Instinct?  Seriously people, think for a minute, critically or otherwise.  Maybe it means this.  Or just about whatever you want, because like most edubabble, it means "I sure wish I could make this sound smart even though it isn't."  (Sort of like "rectifying a consequence for a tardy", which I think in olden days was just called "getting detention".  Note:  How does one "rectify a consequence"?)

"In math, the shift is away from lectures and rote working of equations to the practical application of mathematical processes, often in teams, to real-world situations."

Showing how they're used in real life is great--except you can only pick a few examples of how the math is used, since you can't possibly cover every one.  Which means the students still aren't really "getting it".  Sure, toss in some examples, but the key is that the kids can actually do the work.  That way, if they are asked to apply it professionally later in their lives, they can.  Practice, practice, practice!

"The idea is to help students gain "a broader understanding of mathematical purpose..."

Again, that's all well and good, but it's more important that they can actually DO the math, isn't it?

"Students will study only the immune, endocrine, and nervous systems rather than all 11 body systems."

And here we have the real crux of the problem:  in high school, students should be getting as broad an education as possible, in my opinion.  College--whether it be community college, an undergraduate university, or the ultimate in specialization, graduate school--can provide the details once the student has chosen a career path.

Saturday, September 1, 2012

The Insanity of the Marzano Evaluation System...

You have to laugh at the state of education in Florida (and to some extent, the US).  While NAEP scores have continued to rise throughout the country for 30+ years, you still have people screaming for the latest fad to save us.  In the case of Florida, they bought into Marzano's "causal" framework.

Let's start with the most obvious problem:  it's ridiculously convoluted.  If I were asked to create a parody of an evaluation system, I couldn't possibly make something more hilarious than Marzano's.  Even they state it "works best" with a year of planning and training.  What the hell kind of evaluation system requires a year of EITHER, let alone both?  (Answer:  A very bad one.)  If you are going to invest this kind of time (and these resources) into a program, you had better have extremely high expectations for it.  Sadly, pretty much everyone with a modicum of intelligence knows that this is not going to change anything (let alone significantly for the better).  It shouldn't be necessary to tell ostensibly well-educated people this, but a simpler (and thus more easily and better understood) system will work better than a complicated one.

How many "elements" (items) are in it?  The answer is 60, in four "domains" (edu-speak for "areas"; you can't charge $23/book for using simple language!).  The evaluation system would be better as a bulleted list of "suggestions"--it would have saved millions of dollars.  In that role (a list of "good suggestions"), Marzano is just fine.

But, surely it will be massively successful; after all, Marzano's system has been evaluated by none other than Marzano himself--just check out Research Base and Validation Studies on the Marzano Evaluation Model, April 2011.

To quote this (presumably) non-peer reviewed work, "[The] Marzano Evaluation Model is based on a number of previous, related works that include: What Works in Schools (Marzano, 2003), Classroom Instruction that Works (Marzano, Pickering, & Pollock, 2001), Classroom Management that Works (Marzano, Pickering, & Marzano, 2003), Classroom Assessment and Grading that Work (Marzano, 2006), The Art and Science of Teaching (Marzano, 2007), Effective Supervision: Supporting the Art and Science of Teaching (Marzano, Frontier, & Livingston, 2011)."

That's right--you know Marzano's framework will work because it's based on Marzano's work!  (Also, Philip Morris would like you to know that smoking increases your libido, cures cancer, and prevents male-pattern baldness.)  Check out the section entitled The Research Base from Which the Model Was Developed:

"Each of the works (cited above) from which the model was developed report substantial research on the elements they address. For example, The Art and Science of Teaching includes over 25 tables reporting the research on the various elements of Domain 1."

I don't believe I've ever seen the number of tables in a book cited as evidence of the book's quality, i.e., "You know it's good because there's a lot of it."   Apparently taking lots of (often old and often poor-quality) semi-related data and mashing it together makes good research.  Only in education research is this even remotely possible.

No Zero Policies and the failure of Educational Research

A personal pet peeve is this "no zero" policy some experts are pushing.  In particular, let's look at what one person pushing this, Dr. Douglas Reeves, has to say:

"First is the use of zeroes for missing work. Despite evidence that grading as punishment does not work (Guskey, 2000) and the mathematical flaw in the use of the zero on a 100-point scale (Reeves, 2004)..."

From the same article as above, "get the facts; gather evidence that will create a rationale for decision making."

Sounds like a plan; let's look at his first citation, Guskey, 2000:

"Instead of prompting greater effort, low grades more often cause students to withdraw from learning.  ... Other students may blame themselves for the low grade, but they may feel helpless to make any improvement (Selby and Murphy 1992)."

And here the problem becomes obvious when you look at their citations:
"Selby, D., and S. Murphy. 1992. Graded or degraded: Perceptions of letter gradeing [sic] for mainstreamed learning-disabled students. British Columbia Journal of Special Education 16 (1): 92-104."

Basing an argument on "mainstreamed learning-disabled" students only?  Even better, you're basing it on SIX of them:

"This study of six mainstreamed students (in grades six and eight) with learning disabilities, their parents, and their teachers..."

So there's one of his two reasons trashed.  Let's look at the other, the "mathematical flaw".

He makes two points.  The first is an assumption:  "To insist on the use of a zero on a 100-point scale is to assert that work that is not turned in deserves a penalty that is many times more severe than that assessed for work that is done wretchedly and is worth a D."  There's not much to say here; he points out that punishing students doesn't work, and I agree.  But giving a zero for zero effort is not "punishing"--that's just GIGO.  The only other time I--and, I'd like to believe, every other teacher--would give a zero is in cases of cheating, where it is entirely appropriate to punish someone.  Certainly handing out zeros left and right is unfair (and probably unethical), but giving a zero for work never turned in is just a simple matter of fairness, in my opinion.
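
To make the arithmetic behind his claim concrete--this is my own illustration, not an example Reeves gives--here is what a single zero does to an average on a 100-point scale versus a 0-4 scale:

```python
# My own illustration (not Reeves's example): one missing assignment recorded
# as a zero drags a 100-point average down much further than the same miss
# recorded on a 0-4 scale, which is the "severity" his argument hinges on.
def average(scores):
    return sum(scores) / len(scores)

hundred_point = [90, 90, 90, 90, 0]   # four A-level papers and one zero
four_point = [4, 4, 4, 4, 0]          # the same record on a 0-4 scale

print(average(hundred_point))  # 72.0 -> a C under the usual 10-point bands
print(average(four_point))     # 3.2  -> still between a B and an A
```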

The second argument is also an assumption:  that you must be consistent in that every grade must be 10 points apart or you are being "unfair".  Ironically, he notes "that many people with advanced degrees, including those with more background in mathematics than the typical teacher, have not applied the ratio standard to their own professional practices."  He's assuming we say that every ten points must mean something; nothing dictates that, and again, this "justification" is just an assumption.

This argument is dismissed by noting that a "failing grade" is simply anything that doesn't meet a certain threshold.  That is, you have to do well enough before we'll consider you competent, and after that point, you go up one letter grade for every 10 percent.  His argument relies on the "fairness" claim that every grade must be 10 points apart.  There is nothing in a piecewise function that makes this mandatory--it's just something he has assumed must be true.  He also seems to fail to take into consideration that his 0-4 point scale radically changes how we assign grades.  Due to the cardinal nature ("ratio standard") of grading, it seems to me the scale must remain linear, so 0-4 can just as easily be represented as:

  0-19%  F  ("0")
20-39%  D ("1")
40-59%  C  ("2"  Note: You are now "average" even if you know less than 50% of the material)
60-79%  B  ("3" Slightly above half--60% and up--is now categorized as "above average")
80-100% A  ("4"  And here you have an even larger percent of kids who are "excellent".)

If a student only knew 20% of the material, would you consider that "competent"?  Note that you've doubled the range for each of the grades A-D.  If you think we had a grade inflation problem before, wait until this becomes acceptable.  Good luck determining who is truly excellent when an "A" (a "4") covers the top 20 percent of the scale.  The grades become too ambiguous to be useful under his "four point" (apparently the zero doesn't count?) scale.  Note that if the zero is reserved ONLY for assignments not turned in, the scale becomes even more inflated/ambiguous:  1-25% is a "1", 26-50% a "2", 51-75% a "3", and the top score will now include everything from 76% up!
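
Here's a quick sketch (mine, not Reeves's) of the two mappings side by side, showing the inflation described above:

```python
# A small comparison (my own sketch, not Reeves's) of where letter grades land
# under the traditional 10-point bands versus the 0-4 scale rescaled linearly
# to percentages, as in the table above.
def traditional(percent):
    # 90+ A, 80+ B, 70+ C, 60+ D, below 60 F
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return letter
    return "F"

def rescaled(percent):
    # Each grade now spans 20 percentage points instead of 10.
    for cutoff, letter in [(80, "A"), (60, "B"), (40, "C"), (20, "D")]:
        if percent >= cutoff:
            return letter
    return "F"

for score in (95, 75, 65, 45, 25):
    print(score, traditional(score), rescaled(score))
# 75% moves from a C to a B, 65% from a D to a B, and 45% from an F to a C --
# the inflation described above.
```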

But perhaps the greatest bit of wisdom is from a statement he made in the national press, where he (I assume) tried to briefly summarize his "ratio standard":

"It's a classic mathematical dilemma: that the students have a six times greater chance of getting an F," says Douglas Reeves, founder of The Leadership and Learning Center, a Colorado-based educational think tank who has written on the topic."

The "chance" of getting an F?  While chance/luck will certainly play a role in everything we do, short of guessing on every answer, I'm pretty sure ability, practice, and preparation are going to have a far greater influence on your "chance" of getting an F.  In other words, the esteemed doctor's "classic mathematical dilemma" rests on the assumption that grades are random variables.  (Maybe he should consult some of those highfalutin folks with "more background in mathematics than the typical teacher" before making any more statistical arguments...)
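
Presumably the "six times" comes from the fact that on a 100-point scale the F band (0-59) is six times as wide as each 10-point band above it--which only translates into a "chance" if you treat a student's score as a random draw.  A quick sketch (my reconstruction, not Reeves's derivation):

```python
# My reconstruction (not Reeves's published derivation) of the "six times"
# figure: on a 100-point scale the F band (0-59) is six times as wide as each
# 10-point band for D through A, so treating a score as a uniformly random
# draw -- and only then -- makes an F about six times as likely as any other grade.
import random

def letter(percent):
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return grade
    return "F"

draws = [letter(random.randint(0, 100)) for _ in range(100_000)]
p_f = draws.count("F") / len(draws)
p_b = draws.count("B") / len(draws)
print(f"P(F) ~ {p_f:.2f}, P(B) ~ {p_b:.2f}, ratio ~ {p_f / p_b:.1f}")
# Roughly 6:1 -- but only because the score was assumed to be random, which is
# exactly the assumption criticized above.
```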

So we have some research that was, to be polite, very poorly done (the egregious misinterpretation of the original source would warrant an "F" in my class...I'd be hard-pressed not to give him a ZERO), and a "mathematical dilemma" that is based on several flawed assumptions and would result in some horrible unintended consequences.  And yet our educational leaders are buying this hook, line, and sinker.