The Grace Living Center in Oklahoma (U.S.) is a bit of an educational oddity. It’s a school inside an old people’s home, and one generation is creating a new generation of young readers who outperform national averages. Early-years students participating in the school’s reading programme frequently sit with the old people to read together.
It’s a simple step to take and it works on every measure.
Standards have risen to the point where four-year-olds are reading at the level of seven-year-olds. More than that, they’re learning about stuff that school alone could never teach them–cultural stuff and stuff about communicating with our elders, stuff about what community means and what it was like “in the old days.”
And the old people have seen mental-health benefits, too, from being worked hard by young, inquisitive minds. Better still, they’ve seen medical improvements–many have stopped taking their meds. Above all, they’ve found their passions once more. And for the young people, there is an opportunity to learn about one of the hardest elements of life: death. When one of their ‘teaching community’ dies, all the children are brought together. This is a model for learning that achieves much more besides improved grades: It eliminates the apartheid system between old and young that traditional schools actively create.
Politicians and those making the big decisions in education concentrate solely on curriculum and assessment at their peril. The meat of the educational sandwich–pedagogy–is what matters most, yet the kind of educational thinking found at the Grace Living Center is currently being put out to pasture in many decisions.
We know what works well pedagogically, probably better than we know what works well in terms of constructing curricula and assessments. There is research. There are facts.
For example, formative assessment–student-initiated self- and peer assessment–is far more effective at raising test scores than teaching to the test. Putting no grades on student work at all, and limiting feedback strictly to comments, is the most effective means of students eventually gaining top scores.
Go Google it and ye shall find.
Yet I haven’t heard one piece of discourse on formative assessment in the U.S. in 2011 that actually shows an understanding of what it is (the description is nearly always the precise opposite). And I do not know of any schools, anywhere, that have a policy that says that no student receives a grade until the examination (and I would love to be corrected on this).
Those making the decisions nearly always fall into the trap set for them: Our minds are built for ignoring the facts.
George Lakoff, a political commentator and researcher, outlines in his bestseller Don’t Think Of An Elephant the problem faced by those of us trying to convince others with the facts or with research. Everyone interprets “the facts” through their own frame, and if you’re trying to convince someone they’re probably looking through a frame opposite to the one you’re assuming. Lakoff explains:
When I teach the study of framing at [UC] Berkeley, in Cognitive Science 101, the first thing I do is I give my students an exercise. The exercise is: “Don’t think of an elephant! Whatever you do, do not think of an elephant.” I’ve never found a student who is able to do this. Every word, like elephant, evokes a frame, which can be an image or other kinds of knowledge: Elephants are large, have floppy ears and a trunk, are associated with circuses, and so on. The word is defined relative to that frame. When we negate a frame, we evoke the frame.
Richard Nixon found that out the hard way. While under pressure to resign during the Watergate scandal, Nixon addressed the nation on T.V. He stood before the nation and said, “I am not a crook.” And everybody thought about him as a crook.
There’s a superb FORA.tv video on idea framing featuring George Lakoff that’s worth seeking out.
So what? Give me evidence!
At the tail end of November 2011, I was keynoting the JISC online innovation conference. The question about how one uses research about pedagogy to inform decision-making at a high level came up. Peter Bullen, from the University of Hertfordshire in England, posted a fascinating point:
So what? Give me the evidence of the impact!–What kind of evidence?
This is a request that is often heard … Simon Cross responded to the discussion, pointing out that this request “to provide evidence” is actually more complicated than it first appears. A short extract from Simon’s post explains the point:
Not because there is an absence of evidence necessarily (by which I guess we mean data used to justify a position or claim), but because such high-level questions give little guidance about what kind of evidence is being sought. What kind of impact is being looked for? And how much would be sufficient to convince that X is worth trying?–This matters when there are so many stakeholders and such potentially rich/complex research datasets.
In observations from interviews etc. we find there are many criteria that influence how people make decisions to try something out: A recommendation from a trusted colleague may be valued more than a study of a thousand students; something from the same discipline area may be valued considerably higher than from another; what constitutes ‘evidence’ in one discipline may differ radically from another (is 10 hours of interview data sufficient, 20 hours, 100 hours, or does the person simply not trust qualitative data techniques!); something that fits with a particular view of teaching and learning may be subjected to a lower evidence threshold; or people rely on conviction or intuition rather than evidence.
In short, evidence is important but not always the deal-breaker. So is the real question being asked not ‘show me how others have done this successfully’ but something slightly different: ‘prove to me this will help me do something more successfully’?
It would be good to hear from you: What kind of evidence do we need to persuade colleagues to adopt new approaches to curriculum design?
I’d ask a further question: Is Lakoff right in telling us that all research, all data, is really rather futile in the end?