Hattie & the Quantitative Educational Research Fallacy

One can pick up a thing or two reading John Hattie’s Visible Learning propaganda.  His summary resources on feedback, or the idea that different strategies should be employed at different phases of learning, while entirely unoriginal, are useful to revisit.

My issue, though, is the representation of Hattie’s work as an authority: some kind of magical evidence invoked with generic phrases like ‘research says’ or ‘according to Hattie’.  More than anything, Hattie’s work has inspired me to look more closely under the hood of quantitative educational research more generally.  It is not a pretty sight.

Anyone with rudimentary skills in statistics should be aware of the extensive compensation mechanisms needed to create valid interpretations of data.  We must be careful not to confuse correlation & causation, or to make generalisations that extend beyond the empirical scope of a study.  Unfortunately, most of us have better things to do than read research studies, so instead we give up any critical thinking & choose to passively accept the output of the research.  All intelligent educators should be spending time looking under the hood of the actual research conducted rather than just focussing on the final conclusions & output.
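To make the correlation/causation trap concrete, here is a minimal sketch in Python (the variable names & numbers are entirely invented, not drawn from any real study): two classroom variables that share a hidden confounder correlate strongly even though neither causes the other.

```python
import random

random.seed(1)

# Hypothetical illustration: 'prior' is an unobserved confounder.
# Both homework minutes & test scores are driven by it, so they
# correlate strongly even though neither causes the other.
students = []
for _ in range(500):
    prior = random.gauss(0, 1)                      # hidden confounder
    homework = 30 + 10 * prior + random.gauss(0, 5)
    score = 60 + 8 * prior + random.gauss(0, 4)
    students.append((homework, score))

def pearson(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

print(f"correlation: {pearson(students):.2f}")  # high, yet no causal link
```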

Most education & social science research does a great job of pretending to be scientific.  As the scientific method is founded on positivism & empiricism, one might expect such things as the replication of studies, particularly in quantitative research.  We might also expect control groups or other mechanisms to account for bias.  While I will be the first to admit that these mechanisms are all delusions in the quest towards objectivity, it seems ludicrous not to employ them if you accept an empirical paradigm.  In short, educational research tries to emulate scientific methodologies & creates a series of justifications to compensate for its inability to implement control groups, to conduct valid replication studies, & to make generalisations beyond the specific research context.

At the heart of comparative analysis, including meta-analysis, is the codification of research outcomes.  Studies are assigned to a category, for example ‘inquiry learning’, in order to make sense of & generalise across multiple studies.  On the face of it, this may seem logical.  However, most of us know that creating such generic classifications, like mindsets or feedback, is problematic.  Are we talking about Dweck & Duckworth together when we categorise mindsets?  Is problem-based learning the same as other forms of inquiry-like learning?  Even within the same codification, there are issues.  The less specific we are in our discussion of research, stepping back until only the abstraction remains, the less generalisable & useful it becomes.
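As a rough illustration of the concern, here is a hypothetical sketch (this is not Hattie’s actual data or procedure; the labels & effect sizes are invented) of how pooling collapses quite different interventions once they share a code:

```python
# Hypothetical studies coded under the single label 'inquiry learning'.
# The interventions, contexts & measures differ, but the code erases that.
studies = [
    {"intervention": "problem-based learning", "effect_size": 0.15},
    {"intervention": "guided discovery",       "effect_size": 0.61},
    {"intervention": "open inquiry",           "effect_size": -0.20},
    {"intervention": "project-based learning", "effect_size": 0.55},
]

pooled = sum(s["effect_size"] for s in studies) / len(studies)
print(f"'inquiry learning' pooled effect: {pooled:.2f}")  # ~0.28

# The single pooled figure hides the spread across quite different
# interventions, which is exactly the generalisation being questioned.
spread = (max(s["effect_size"] for s in studies)
          - min(s["effect_size"] for s in studies))
print(f"range across coded studies: {spread:.2f}")  # 0.81
```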

This becomes even more problematic when we start comparing instructional models that have different goals.  The majority of quantitative educational research uses standardised testing as its primary measure of performance.  I am sure this is fantastic if performance on standardised tests is what you equate with learning, but it is of little use to someone like me who does not teach in a context that uses standardised tests.  So, when someone critiques or measures the effectiveness of methods on the basis of a measurement that does not align with my teaching context, I can’t help but find its relevance dubious.  This is before we even delve into the damage testing anxiety does to the well-being of a large population of learners.

When someone refers to an ugly barometer that indicates the effect size of what is useful, after my initial urge to vomit, I wonder: why would I care about the average?  Surely, if I wanted a quantitative measure, I would exclude studies that are poorly conducted, have small sample sizes, & have limited transferable potential to my own context.  I would also question any study that does not consider unanticipated effects.  The problem is that most studies focus on a very narrow scope of measurement & do not sufficiently take into account the diversity of classroom contexts & its potential impact.  Context is, in fact, extremely important.  However, it is something we can only partially understand.  Nuthall’s work on learning raises concerns about how much is missed in the semi-visible sphere of peer interactions, & there is always an invisible context for every learner.  Given these limitations, even if research were controlled & replicated, it could serve, at best, as an anecdotal piece of evidence relating to its specific research context.
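To show why the bare average bothers me, here is a minimal sketch in Python with invented studies, contrasting the naive mean effect size implied by a league-table barometer with one that excludes small or uncontrolled studies & weights the remainder by sample size:

```python
# Invented example studies: (effect size d, sample size n, has control group)
studies = [
    (0.90, 18,  False),   # tiny sample, no control group
    (0.70, 25,  False),
    (0.30, 240, True),
    (0.25, 310, True),
    (0.35, 150, True),
]

# Naive average of the kind a league table of effect sizes implies.
naive = sum(d for d, _, _ in studies) / len(studies)

# Crude quality filter: drop small (n < 50) or uncontrolled studies,
# then weight the survivors by sample size.
kept = [(d, n) for d, n, controlled in studies if controlled and n >= 50]
weighted = sum(d * n for d, n in kept) / sum(n for _, n in kept)

print(f"naive mean d:    {naive:.2f}")     # 0.50
print(f"filtered mean d: {weighted:.2f}")  # ~0.29
```

Real meta-analytic pooling is more sophisticated than this (inverse-variance weighting, for instance), but the point stands: which studies you admit & how you weight them moves the headline number considerably.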

So this is where the idea of quantitative educational research as a fallacy comes in.  The very paradigm used to promote its importance is its downfall.  It attempts to be scientific & mimics science’s methods, but it cannot come close.  Nor is this desirable.  There is undoubtedly great value in considering scientific research in cognitive psychology, neuroscience, or behavioural psychology, though even this has its own limitations.  In 2014 I attended some conference sessions where participants discussed research in education.  Three PhDs (myself included) dominated the conversation, representing three different disciplinary areas: education, science, & history.  The education professor was vocal about the lax peer-review publication standards of education as an academic discipline.  A lot of junk research is out there.  The interesting thing, though, was that all three of us had the same approach to reading research:

  • Start with the data/evidence, the substance, before reading any conclusions.  Judge for yourself what generalisations or conclusions can be made & how reliable the methodology of the study may be.
  • Analyse the arguments, hypotheses, or conclusions being drawn from the research.
  • Evaluate the extent to which the data/evidence is sufficient to substantiate the claims being made.

If I am being honest, this is the type of thinking that secondary students should be undertaking (at least in history or classical studies), & it is not too much to expect a less passive approach to interpreting research from all teachers.

If the positivist-empirical paradigm is insufficient, what, then, makes for valid research?  Qualitative research has just as many pitfalls when it attempts to emulate the scientific method.  While not perfect, one might turn to critical theory & postmodern research methods.  In their favour is the fact that at least they are not pretending to be something they are not.  The best research, though, must surely sit outside the academy.  Perhaps it is action-based research or professional inquiry research, which at least has the advantage of being set within the teaching context to which it is applied.  What I do know is that there are significant issues with blindly accepting the results of quantitative research conducted by educational academics.

 
