Monday, July 17, 2017

Public Education's PR Problem (and more spurious "research")

So I finally found an article that backs up that old statement of mine that teachers have only a small impact on test scores (~10%).  The Atlantic article itself is great and highlights the PR/perception (and lack-of-leadership...) issues facing American public education.  This has long been known; the late Dr. Gerald Bracey wrote about it in 2010 and touched on the use of that bad perception as a "scare tactic" back in 2007.


...



But back to that 10% figure--why so much emphasis on time-consuming (and thus expensive) teacher evaluation models when the payoff is a small slice of an already small percentage?  That prompted me to once again look at my favorite evaluation system and the evidence supporting it.  I found this statement, and things immediately went off the rails:



A summary: 
1.  The research claimed here is incestuous; it was carried out by an affiliated organization.
2.  This is not disclosed properly (see image above, source is on page 3 of this document).
3.  A potential lack of credible (academic) expertise.
4.  No evidence it was peer reviewed.
----------------------------------------------------------------

So the first problem was simply finding the article, which was strangely difficult--red flag #1.  When you can't even find the abstract, that's odd.  Finally turning to Google Scholar, it turns out the latter study has just two citations (one in a book, one from the RAND Corp.):



The reasons for this became apparent pretty quickly:



1.  Dr. Lindsey Basileo works for Learning Sciences International.  Their About page states, "Our Centers for research and development include Learning Sciences Marzano Center with Dr. Robert J. Marzano, to develop instructional frameworks as well as evaluation and growth systems."  So this isn't independent research, and the lack of disclosure is glaring.  I would call this Red Flag #2, but honestly, by academic standards, it's game over at this point--credibility is out the window, especially when the connection between the product and the reviewer/researcher isn't made clear.

2.  Dr. Basileo's PhD is in Criminology and Criminal Justice (as are her MS and BS).  So while she is an expert, her expertise doesn't seem particularly relevant here.

3.  Michael Toth is the CEO of Learning Sciences International.  Same problem as #1.  Additionally, a few "suspicious" things:  Googling, in quotes, the organization he was formerly president of ("National Center for the Profession of Teaching") suggests said organization now exists only in his biography (Guidestar indicates it may have merged with another organization or gone defunct).  I also noticed he's referred to only as "Michael Toth": unlike Marzano and Basileo, there's no "Dr." in front of his name, and no mention of his education on his LinkedIn.  (Admittedly this proves nothing, but given their focus on marketing, I find it suspicious they'd leave off the alphabet soup and the credibility it lends.)

4.  At this point, I think we can safely say this research supporting Marzano's system is incestuous; it came from an affiliated (parent?) organization, and this was never properly disclosed.  Further, between these obvious problems, the lack of citations, and the difficulty of even finding a copy of the research in question, I think I can safely assume it was not peer-reviewed.  Which, from my perspective, renders an already discredited bit of research totally worthless.

Tuesday, February 9, 2016

Rep. Erik Fresen & Company: Three strikes, you're out.

Florida continues to dump a grossly disproportionate amount of capital funding into charters: 50% last year, and possibly an even greater share this year.  This despite the fact that public schools greatly outnumber charters--about 6:1.  So if there were any ethics/honesty/logic in the funding, you'd expect charters to get about 14% (1 school in 7), maybe a bit more to try to increase their numbers.
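
If you want to check that figure, the arithmetic takes one line (a minimal sketch in Python; the 6:1 ratio is the only input):

# Sanity check of the proportional-share arithmetic above: with
# roughly 6 public schools for every charter, charters make up
# 1 of every 7 schools.
public_per_charter = 6
charter_share = 1 / (public_per_charter + 1)
print(f"Proportional share: {charter_share:.0%}")  # prints "14%"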

I'd actually be fine with charters except for a few things:

1.  Here in Florida, according to the last full CREDO study (2013; the 2015 study focused only on urban areas), Florida's charters do WORSE than our public schools:


...Florida charter-school students lose the equivalent of seven days of reading instructional time, compared to students in public schools.


They tie for math.  They're delivering an inferior product, so they market it as "choice*."  Strike one.

2.  Fresen in particular claims he's fighting to protect our tax dollars.  This is hard to swallow given that he ignores the massive waste and fraud of charters nationwide.  Florida is no exception.  I'd rather pay a little over budget than flat-out throw the money away.  An inferior product that's also wasting money?  Strike two.  Also, can you say #RINO?

3.  One of the lead proponents of this push for charters, Rep. Erik Fresen (R-Disturbing), is also chairman of the education budget committee.  This is pretty amazing, because Fresen's sister is the wife of the CEO of Academica, a charter firm.  Fresen himself worked for them for a while as a lobbyist and now works for Civica, an architecture firm that has, among other things, built schools for Academica.  As for an explanation of how this is possible, I have only one:  Because Florida.  Strike three.

Charters were supposed to be "innovation centers" for education but instead seem mainly to be innovating ways to launder tax dollars into private hands.  From my perspective, the conclusion--we need charters--was reached first.  It was probably mostly an ideological decision, reality be damned.  Now Fresen et al. are looking to justify that decision after the fact (ex post facto, if you wanna sound erudite), the reason of the week being over-budget construction projects.  Besides, who cares if the kids' education suffers when you can pass along some of those sweet, sweet tax dollars to your friends and family?


*  Speaking of choice, did you hear that participating in the country's fifth-largest voucher program generates far worse results than going to a public school?

Tuesday, February 17, 2015

Why Haven't Florida's Test Scores Shot Through the Roof?

We're now in year three of Marzano's "causal" model here in Florida.  With all these effect sizes (where increases of 10-20 percentile points are claimed to be common--more on that conversion below) AND with 98 percent of Florida's teachers rated effective or better, shouldn't Florida's test scores be through the roof by now?  Because they're not.  In 2013 they were stagnant relative to the year before.  And this year?

"The average scores for Florida's class of 2014 were 491 in reading, 485 in math and 472 in writing, all below the national averages and lower than last year's scores, when a smaller group of state students took the test.  Across the country, SAT scores were stagnant..."  (Source.)

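As promised, here's the conversion: an effect size (Cohen's d) is typically turned into a "percentile point" gain by assuming normally distributed scores and asking where the average treated student would land in the untreated distribution.  A minimal sketch, with hypothetical d values chosen purely for illustration:

from statistics import NormalDist

def percentile_gain(d: float) -> float:
    # Percentile-point gain for the average student, given effect
    # size d, assuming normally distributed scores.
    return (NormalDist().cdf(d) - 0.5) * 100

for d in (0.25, 0.40, 0.60):  # hypothetical effect sizes
    print(f"d = {d:.2f}: ~{percentile_gain(d):.0f} percentile points")

In other words, claims of 10-20 percentile points imply effect sizes of roughly 0.25-0.5; gains that size, applied statewide, should be hard to miss.
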
So what's the problem?  The model or its implementation?  My guess is both (plus the research behind the model).

Needless to say, its implementation has been a nightmare--the fact we're still trying to implement it three years later (at God only knows what expense) should tell you everything you need to know.  Or you can Google articles like this.

The model itself, despite the claims on Marzano's website, appears to have zero research supporting it (the model itself, that is, as opposed to the strategies it draws on).  Check out his website's claims:



Going in order (1-5):
1.  Where have I heard this "5,000 studies over 50 years" claim before?  Here, in a journal article co-authored (second author, it seems) by Marzano.  It turns out only 70 (1.4%) of those 5,000 studies "met the researchers' criteria for design, control, data analysis, and rigor."  That would appear to be, at best, intentionally misleading--and there's no way these 5,000+ studies, done decades before the model existed, were actually testing the model.  Strike one.

2.  This is completely irrelevant to verifying the model.  Once again, "It's good because there's a lot of it--and here's a number!"  If anything, I would say the fact that this much material is out there and has had no effect works better as a counterpoint to its effectiveness.  Strike two.

3.  I won't rehash the hazards of the analyses, but it's important to note that this is not evidence for the model itself. We're now 0 for 3.  (Are these studies double-dipping from #1 above?  I bet they are.)

4.  Finally, something directly studying the model!  Except there's nothing listed.  One would think, with this being the most important thing listed here, there would be a link, or a title, SOMETHING.  Zero for four...

5.  This paper is basically a "why you should use our model" pitch; I see nothing in it verifying the model's effectiveness.

And we're 0 for 5 now.  (Which, if some people get their way, isn't a zero, but a 50%; no wonder "educational experts" approve of this baloney.  Regardless, 50% or 0%, this is a failure.)

So we have a convoluted and horribly expensive evaluation system that doesn't seem to be producing significant (if any) results after three years.  Solution?  Let's drop it until a third party verifies its effectiveness and go back to a cheap, simple system--which will likely cause no discernible change to our education system (besides saving money).

Friday, February 6, 2015

Insanity of the Marzano Evaluation Redux...

My "Insanity" post re: the Marzano evaluation system seems to be the only "popular" post I have, so I thought it might be time to revisit it--this time in fewer words.  I encourage everyone to sign up at his website, access the database themselves, and take a look.  Before we begin, I urge you to keep the GIGO (garbage in, garbage out) paradigm in mind:



Sorting by p, you quickly see a lot of entries whose reported values seem meaningless; I'm fairly certain that with a sample size of 8, a reported p of <0.0000 is not going to happen.
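
To put a number on that: assuming, hypothetically, a 4-vs-4 split and an exact permutation test, the smallest p-value that's even mathematically possible with 8 subjects is easy to compute:

from math import comb

# With 8 subjects split 4 vs. 4, an exact permutation test has only
# C(8, 4) = 70 possible group assignments, so even the most extreme
# result can't yield a one-sided p below 1/70--nowhere near 0.0000.
arrangements = comb(8, 4)
print(f"possible assignments: {arrangements}")          # 70
print(f"smallest one-sided p: {1 / arrangements:.4f}")  # 0.0143
print(f"smallest two-sided p: {2 / arrangements:.4f}")  # 0.0286

(A t-test doesn't have the same hard floor, but a p that rounds to 0.0000 from 8 subjects is a reporting artifact either way.)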


And, eyeballing the scrollbar, it would seem that ~66-75% of Marzano's database is statistically INsignificant.  ("...in Statistics 'significant' means probably true (not due to chance)")  Combined with the portion above (all those zeros...), I personally find this, well, damning, for lack of a better word.



Now, before putting in these last two pictures, I would like to once again quote Marzano's description of the p values:  "Basically, if the value in this column is less than .05, the effect size reported for the study can be considered statistically significant at a significance level of 5% (that is, the probability of observing such a result by chance is less than 1 in 20). In other words, a reasonable inference can be made that ... the reported effect size represents a real change in student learning."

Behold the "better than a coin flip" point:


And this is definitely my favorite:  seven results that, by Marzano's own description above, are 100% likely to be chance fluctuations.



It's the epitome of hypocrisy that in this day and age of "data-driven decision making" we're relying on research of this type and quality to make decisions.  It's like our politicians and educrats have never even heard of the phrase "peer review."  Again:




To Marzano's credit, he has stated his research is being misused.  But I don't see him making a big fuss about telling states like Florida to stop basing their evaluations on his work--i.e., to stop buying his books and materials.



BONUS:  This is one of my favorite pictures: the database sorted by N (C), that is, the size of the control group.  You used a study that had ONE kid as a "control"?  Really?  I mean, REALLY?  You don't need to be a statistician to realize that public policy should not even be influenced by, let alone based on, this analysis.
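
For what it's worth, with a control "group" of one you can't even estimate the control's variability--which both effect sizes and p-values depend on.  A minimal illustration (the score is made up):

import statistics

control = [73.0]  # a hypothetical control "group" of one student

try:
    statistics.stdev(control)  # a sample SD needs at least two points
except statistics.StatisticsError as err:
    print("Can't even compute a standard deviation:", err)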




Wednesday, December 24, 2014

The civilized veneer of Florida's school accountability system.

One hypothesis is that Jeb Bush's A-F system is a simple, easy-to-understand method with easy-to-interpret results.  But this is Florida mixed in with some educrats, sooo....

The system isn't as complicated as I thought (see for yourself here), but there are some oddities.  For example, most people probably think "90+%" when they hear "A."  For Florida, it's 70% (1120/1600) or better--a huge block for "A" schools.  It gets weirder: a B covers only 65%-70% (1040-1119).  A 5% range for B's?  Here's the whole shebang:


 (All scores out of 1600)
A:  1120+  (70%, range of 30%)
B:  1040-1119 (65%-70%, range of 5%)
C:  880-1039 (55%-65%, range of 10%)
D:  800-879 (50%-55%, range of 5%)
F:  <800 (below 50%, range of 50%)
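
For the programmers out there, the whole scheme boils down to a five-branch lookup (a sketch of the cutoffs as listed above; the function name is mine, not the state's):

def florida_letter_grade(points: int) -> str:
    # Letter grade for a school's point total (out of 1600),
    # per the cutoffs listed above.
    if points >= 1120:  # 70% and up
        return "A"
    if points >= 1040:  # 65% to 70%
        return "B"
    if points >= 880:   # 55% to 65%
        return "C"
    if points >= 800:   # 50% to 55%
        return "D"
    return "F"          # below 50%

assert florida_letter_grade(1120) == "A"
assert florida_letter_grade(1119) == "B"  # one point shy of an A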

It seems like they're trying to use a standard scale (1600, a la the SAT) and the traditional A-F, but, unsurprisingly, failing at both.  But this isn't even the worst part--that would be how they arbitrarily shift the numbers used in the calculations.  For example, a week ago:

"Education Commissioner Pam Stewart said a key reason for the drop in A-rated schools was that the grading formula was changed to make it more difficult to earn a top grade."

Now rewind two years:

"State education officials panicked, and at an emergency meeting last week, the Florida Board of Education decided in a 4-3 vote that the best thing to do was to lower the passing score on this exam.
Let me repeat that: In order to make sure that students succeeded on the test, the passing grade was lowered."

So, when it comes down to it, all the math, all the calculations mean nothing--the numbers are fudged to whatever looks or feels "right" to the powers that be.  Once again, the math is just there to provide a false sense of credibility.

Friday, November 14, 2014

Could grade inflation have "helped" lead us to this testing mania?

 *  Note:  I changed the title to be clearer.  There is no single, simple reason for how we got where we are.

While I can't stand the time, tax dollars, and opportunity costs of all these standardized tests, one thing has always gnawed at me--they are, sadly, somewhat relevant.  Granted, they are way overdone, but as I have told my students in the past, with grade inflation, some standardized tests--nowhere near as many as we have now*--are needed.  This article from Angelina Massoia puts it quite elegantly:

"By submitting to the culture of grade inflation, we empower the standardized test to 'accurately' represent us."

Part of the reason I believe it's going to be hard to stop this testing madness is that, to some small degree, the field of education itself is to blame.  The article above notes that "about 43 percent" of all grades given in college are now A's.  More importantly, it's far worse in education (image source):


This is hardly a surprise, I would hope; I'm sure there's plenty of other, older research (such as this, indicating "Education majors enjoyed grade point averages that were .5 to .8 grade points higher than students in the other college majors.").

Again, I am in no way advocating the testing mania.  What I do believe/wonder is whether the "reformers" and other test advocates were able to get their grip on our system due to grade inflation, and whether the Colleges of Education don't share some blame* (whoops, sorry again--I meant "accountability") for this.  This article (again from Huffington) goes over the full study above in more detail.  I take issue with it in that it should have distinguished between "easy" and "academically rigorous."  I can't say whether education courses are easy, never having been an ed. major, but I am finishing up my 5th ed. course in under a year.  From this experience--and from what colleagues have told me of their own--I can say education courses are far from academically rigorous.  (I dare say that even my regular chemistry class is far more academically challenging than they have been.  Given our leaders' use of "rigor" as a buzzword for the past 10+ years, I find it rather hypocritical that they probably wouldn't recognize rigor if they saw it.  Or be able to handle it.  But those are subjects for another blog.)


*  And let's be honest, this is being done to blame teachers and schools.  Oh, wait, I'm sorry, I meant "Hold them accountable [if things go bad, otherwise, the reformers will take the credit]."