What if everything you knew about education was wrong?

David Didau


Das E-Book können Sie in Legimi-Apps oder einer beliebigen App lesen, die das folgende Format unterstützen:

EPUB
Bewertungen
0,0
0
0
0
0
0
Mehr Informationen
Mehr Informationen
Legimi prüft nicht, ob Rezensionen von Nutzern stammen, die den betreffenden Titel tatsächlich gekauft oder gelesen/gehört haben. Wir entfernen aber gefälschte Rezensionen.



For my father

We think our fathers fools, so wise we grow;
Our wiser sons, no doubt, will think us so.

Alexander Pope

Foreword by

Robert A. Bjork

Using various versions of the title “How We Learn versus How We Think We Learn”, I have given talks to different audiences on the surprising discrepancy between what research has revealed about how humans learn and remember and how people tend to think they learn and remember. The discrepancy is surprising because one might expect that, as lifelong users of our memories and learning capabilities, coupled with the “trials and errors of everyday living and learning”,1 we would have come to understand how to optimise not only our own learning, but also the learning of those we are responsible for teaching, whether at home, in schools or in the workplace. The discrepancy is important, too, because as David Didau documents and illustrates so well in this book, optimising the effectiveness of our teaching and our own learning depends on incorporating methods and activities that mesh with how we actually learn rather than how we think we learn.

That we tend to have a faulty mental model of how we learn and remember has been a source of continuing fascination to me. Why are we misled? I have speculated that one factor is that the functional architecture of how we learn, remember and forget is unlike the corresponding processes in man-made devices.2 We tend not, of course, to understand the engineering details of how information is stored, added, lost or overwritten in man-made devices, such as video recorders or the memory in a computer, but the functional architecture of such devices is simpler and easier to understand than the complex architecture of human learning and memory. If we do think of ourselves as working like such devices, we become susceptible to thinking, explicitly or implicitly, that exposing ourselves to information and procedures will lead to their being stored in our memories – that they will write themselves on our brains – which could not be further from the truth.

Also, to the extent that we think of ourselves as some kind of recording device, we are unlikely to realise how using our memories shapes our memories. That is, we can fail to appreciate the extent to which retrieving information from our memories increases subsequent access to that information and reduces access to competing information. Retrieving information from a compact disc or computer memory leaves that information and related information unchanged, but that is far from the case with respect to human memory. More globally, to the extent that we think of ourselves as recording devices, we may fail to appreciate the volatility that characterises access to information from our memories as conditions change, events intervene and new learning happens. Information that is readily accessible in one context at one point in time may be completely inaccessible at another point in time in a different context – and vice versa.

We can also be led astray by oversimplifying what it means to have stronger or weaker memories. We may think, for example, that memory traces in our brains are like footprints in the sand that can be shallower or deeper and, hence, more or less resistant to the effects of forgetting. In fact, how memories are represented in our brains is multidimensional: some memory A, for example, may appear stronger than some other memory B by one measure, such as recognition or the subjective sense of familiarity, whereas memory B may appear stronger by some other measure, such as free or cued recall. Basically, by intuition or experience alone, we can never come to realise the amazing array of interactions of encoding conditions and test conditions that have been shown in controlled experiments to affect our ability to retain and recall to-be-learned information. We may have a general idea, even an accurate idea, that some learning activities produce better retention than others, but appreciating fully the complex interactions of encoding conditions, retention interval, type of later test and what cues will or will not be available at the time of the final test requires a whole different level of understanding.

To make things even more challenging for us as learners and/or teachers, conditions of instruction or practice that appear to result in rapid progress and learning can fail to produce good long-term retention of skills and knowledge, or transfer of such skills or knowledge to new situations where they are relevant, whereas other conditions that pose challenges for the learner – and appear to slow the learning process – can enhance such long-term retention and transfer. Conditions of the latter type, which I have labelled “desirable difficulties”,3 include spacing, rather than massing, repeated study opportunities; interleaving, rather than blocking, instruction or practice on the separate components of a given task; providing intermittent, rather than continuous, feedback to learners; varying the conditions of learning, rather than keeping them constant and predictable; and using tests, rather than re-presentations, as learning opportunities.
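To make that contrast concrete, consider a deliberately crude simulation. The sketch below is an illustration only, not a model drawn from the research literature: it assumes that recall decays exponentially between study sessions, and it invents a strengthening rule in which each review lengthens the memory’s half-life in proportion to the gap it bridged, so that more widely spaced (harder) retrievals strengthen the memory more.

```python
import math

def recall(elapsed_days, half_life):
    # Exponential forgetting: probability of recall after elapsed_days,
    # given the memory's current half-life in days.
    return math.exp(-elapsed_days * math.log(2) / half_life)

def final_recall(review_days, test_day, base_half_life=1.0):
    # Assumed strengthening rule (for illustration only): each review
    # multiplies the half-life by one plus the gap it bridged, so widely
    # spaced retrievals strengthen the memory more than massed ones.
    half_life = base_half_life
    for prev, curr in zip(review_days, review_days[1:]):
        half_life *= 1 + (curr - prev)
    return recall(test_day - review_days[-1], half_life)

massed = [0, 0, 0, 0]  # four study sessions crammed into a single day
spaced = [0, 2, 5, 9]  # the same four sessions spread over nine days

for name, days in (("massed", massed), ("spaced", spaced)):
    print(f"{name}: recall on day 30 = {final_recall(days, 30):.2f}")
```

Under these invented assumptions, the massed schedule looks effortless at the time but leaves almost nothing a month later, whereas the spaced schedule, slower and more effortful, leaves most of the material intact. The particular numbers mean nothing; the shape of the trade-off is the point.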

The key point – one that David Didau emphasises and one that readers of this book should be sure to take away – is that there is a critical distinction in research on learning, one that dates back decades: namely, the distinction between learning and performance. What we can observe and measure during instruction is performance; whereas learning, as reflected by the long-term retention and transfer of skills and knowledge, must be inferred, and, importantly, current performance can be a highly unreliable guide to whether learning has happened. In short, we are at risk of being fooled by current performance, which can lead us, as teachers or instructors, to choose less effective conditions of learning over more effective conditions, and can lead us, as learners ourselves, to prefer poorer conditions of instruction over better conditions of instruction.

Several aspects of this book make it especially valuable. One is that David Didau has not only explained and illustrated the research findings to which I have alluded, as well as other key findings from social psychology and cognitive psychology, but he has also done so in terms of their relevance to real world schools and education. He has also discussed such findings and their implications with respect to historical trends and ideas that have guided, and sometimes misled, educational practices. Finally, and critically, he is able to discuss research findings and their implications for real world teaching from the standpoint of somebody who has been in the trenches, as it were. His career as a teacher and as an administrator in pre-college settings provides a perspective that those of us who have spent our careers doing research and teaching in the ivory tower lack.

Notes

1 Robert A. Bjork, Assessing Our Own Competence: Heuristics and Illusions, in D. Gopher and A. Koriat (eds), Attention and Performance XVII. Cognitive Regulation of Performance: Interaction of Theory and Application (Cambridge, MA: MIT Press, 1999), pp. 435–459.

2 Robert A. Bjork, On the Symbiosis of Learning, Remembering, and Forgetting, in A. S. Benjamin (ed.), Successful Remembering and Successful Forgetting: A Festschrift in Honor of Robert A. Bjork (London: Psychology Press, 2011), pp. 1–22.

3 Robert A. Bjork, Memory and Metamemory Considerations in the Training of Human Beings, in J. Metcalfe and A. Shimamura (eds), Metacognition: Knowing About Knowing (Cambridge, MA: MIT Press, 1994), pp. 185–205.

Foreword by

Dylan Wiliam

Education has always had a rather uneasy relationship with psychology. As Ellen Condliffe Lagemann describes in her account of “the troubling history of education research”, for many years, it was thought that psychology could provide a disciplinary foundation for the practice of education.1 Indeed, for a while, many of those engaged in teacher education behaved as if education was really just applied psychology. Psychologists would determine the optimal conditions for learning, and teachers would then create those conditions in their classrooms. As a result, in the 1960s and 1970s, courses on the psychology of learning featured prominently in most, if not all, pre-service teacher education programmes.

However, even the staunchest proponents of the relevance of psychology to teacher education would hesitate to claim that these courses were successful. Initial teacher education students regarded the courses as irrelevant to what they saw as the task at hand. Perhaps most importantly, it was clear that the available research was of little use in telling teachers what to do, whether this was in terms of the best way to explain concepts to children or how to get them to behave.

In the education research community, this led many researchers to look to sociology and social anthropology as sources of insights on how to understand classrooms, and, predictably, in many universities, courses on the sociology of education were added to pre-service programmes. But, again, trainee teachers found these courses of limited relevance to their own practice.

Beginning in the 1980s, partly as a response to government initiatives, there was a shift in the way that pre-service teacher education courses were designed. The four-year bachelor of education programmes fell out of favour, and, for secondary teachers at least, the most common route into the profession was a three-year undergraduate degree in a specialist subject, followed by a one-year post-graduate certificate in education (PGCE) programme. Furthermore, because many politicians saw university departments of education as hotbeds of radical left-wing thought (which is bizarre because they really weren’t), they sought to reduce the role of universities in teacher education. First came the idea that 24 weeks of a 36-week PGCE programme had to be spent in schools, and this was quickly followed by specifications of what students should be learning on PGCEs, together with inspections of these programmes, with funded student numbers tied to the results of these inspections.

Predictably, PGCE programmes concentrated on ensuring that teachers mastered hundreds of ‘competences’ on the practicalities of teaching, and any systematic exposure to the ‘foundation disciplines’ of psychology or sociology was at best marginalised and in many cases dropped entirely. By 1990 it was common to find that a university department of education did not have a single card-carrying psychologist on its faculty (by which I mean someone who would have been eligible for membership, if not actually a member, of the British Psychological Society).

As a card-carrying psychologist myself, I hadn’t realised how profoundly teacher training had moved away from psychology until, in the late 1990s, Paul Black and I started working with teachers to help them develop their practice of formative assessment. What surprised us most was that every group of teachers with whom we worked asked us, typically about three months into a project, for some formal input on the psychology of learning. Our emphasis on questioning as a way of eliciting evidence about student learning, and feedback that would be useful to students, required the teachers to use mental models of what was happening in their students’ heads. Most of the teachers with whom we were working, including many who were perceived as highly effective, had no such models. It turned out that it was possible to be regarded as a highly effective teacher with no idea what was happening in the minds of students.

The irony in all this is that, just as university departments of education began to dispense with the psychology of education as a key input into teacher training, psychologists were producing insights into learning in real, as opposed to laboratory, settings that had relatively straightforward applications to practice.

Now, of course, it is unlikely that the psychology of learning will ever be developed to the point where psychology will tell teachers what to do. To build a bridge, you need to know about the behaviour of steel and stone when compressed and stretched, but knowing all this will never tell you what the bridge should look like. In the same way, psychology will never tell teachers how to teach, but there are now clear principles emerging about how we learn best; principles that teachers can use to make teaching more effective, such as the fact that spaced practice is better than massed practice and that frequent classroom testing benefits long-term retention.

This is what makes the book you have in your hands so important and exciting. There are many excellent accounts of recent work in cognitive science (most of which are listed in the bibliography at the end of this book), and some of them also do a good job of drawing out the implications of this research for learning. However, to my knowledge, this is the first book that gets the cognitive science right and at the same time is written from a profound understanding of the reality of classrooms.

The title of this book, What If Everything You Knew About Education Was Wrong?, says it all really. This book does not claim that everything we know about education is, in fact, incorrect. Rather it is an invitation to reflect on our beliefs about teaching and learning, and to examine in detail whether our assumptions are as well-founded as we would like them to be. You will see that David and I have debated a number of issues, and, in particular, the evidence for the usefulness of what has, in the UK at least, become known as Assessment for Learning. Engaging in this debate has forced me to clarify some of my ideas and modify others, and I have also become clearer about how to communicate them to others. I suspect that David and I still disagree about some of these issues, but being open to the idea that we might be wrong allows us both to continue to develop our thinking about how to harness the power of education to transform lives.

In short, this is my new favourite book on education. I read it from cover to cover before writing this foreword, and I plan to revisit it regularly. If I was still running a PGCE programme it would be required reading for my students, and I can think of no better choice for a book-study for experienced teachers. Anyone seriously interested in education should read this book.

Notes

1 Ellen Condliffe Lagemann, An Elusive Science: The Troubling History of Education Research (Chicago, IL: University of Chicago Press, 2000), p. 282.

Acknowledgements

This book has lived in my head for some years, and without the help and guidance of a great many people that’s where it would have stayed. Instrumental in the process of getting it down on paper has been the education community on Twitter. I struggle to bring to mind everyone whose thinking, advice and support have helped me along the way, but chief among them are Martin Robinson (@SurrealAnarchy) for some cracking quotes and avuncular advice; Carl Hendrick (@CarlHendrick) for telling me about negative capability; James Theobold (@jamestheo) for letting me steal some of his best ideas; Glen Gilchrist (@mrgpg) for explaining the difference between causation and correlation; Gerald Haigh (@geraldhaigh1) for the anecdote about forgetting in music; David Thomas (@dmthomas90) for attempting to explain game theory; Andrew Smith (@OldAndrewUK) for providing so many examples of poor school leadership and showing me the error of my ways; Laura McInerney (@miss_mcinerney) for also asking what might happen if everything we knew about education was wrong; Nick Rose (@turnfordblog) for being “the angriest man in education” as well as one of the most sensible; Kris Boulton (@Kris_Boulton) for making me really think about retrieval-induced forgetting; Sam Freedman (@samfr) for letting me pick his considerable brain; Rob Coe (@ProfCoe) for giving up his time to patiently explain things to me; Cristina Milos (@sureallyno) for challenging and supporting in equal measure; Stuart Lock (@StuartLock) for travelling the same path; Harry Fletcher-Wood (@HFletcherWood) for spotting mistakes and providing clarifications; Daniel T. Willingham (@DTWillingham) for writing such consistently thought-provoking articles and books; Pedro De Bruyckere (@thebandb) for exposing me to so much fascinating research; Greg Ashman (@greg_ashman) for saying the same thing enough times that I started to understand it; Dan Brinton (@dan_brinton) for blowing all that smoke and Phil Beadle (@PhilBeadle) for all the al fresco wine consumption.

Of those I need to single out, Bob Bjork is the foremost. His decades’ worth of research and thinking has formed the basis of my ideas. Quite literally, without his theory of memory, the observation that performance and learning need to be dissociated and the concept of ‘desirable difficulties’, there would have been no book. Any weaknesses in my presentation of his work are wholly due to my limited understanding.

I also need to publicly acknowledge the huge debt owed to Dylan Wiliam (@dylanwiliam), not only for allowing me to reproduce his comments on my blog posts critiquing his work, but also for being consistently wise, generous and clear-sighted. The time he has given to debating with me has greatly enhanced both my thinking and the content of this book.

Nothing really compares to the approval of one’s heroes and to say that I’m grateful to Bob and Dylan for contributing such enthusiastic forewords is something of an understatement. Both are giants in their respective fields and I am both proud and humbled to have stood on their shoulders.

Then there are those who have directly contributed their words to the book. Their contributions show other ways in which we can be wrong in education. Jack Marwood’s (@Jack_Marwood) magisterial demolition of the way we use data in schools (Appendix 1) and Andrew Sabisky’s (@andrewsabisky) elegant unpicking of how the nature vs. nurture debate should inform our thinking on ‘closing the gap’ (Appendix 2) have both added significantly to my thesis and expanded it into fields of which I am lamentably ignorant. I must also thank Joe Kirby (@Joe_Kirby) for allowing me to use material from his excellent blog, Pragmatic Education. These additions have made Chapter 24 so much better than it would otherwise have been.

As well as thanking Caroline Lenton, Emma Tuck, Bev Randell, Rosalie Williams and all the usual suspects at Crown House for working their magic behind the scenes, I need to pay tribute to the contribution of my editor, Peter Young. Despite our battles and my stubborn and short-sighted refusal to accept his wisdom on several points, his ferocious insight, extensive knowledge, attention to detail and willingness to read absolutely everything I’ve read in order to pull me up on sloppy interpretations have been the making of this book. Without him it would have been a much poorer effort.

Thanks also to Graham, my father, for reading through my drafts, spotting so many errors and making so many useful suggestions.

And, finally, Rosie – for putting up with so much nonsense.

Contents

Title Page
Dedication
Epigraph
Foreword by Robert A. Bjork
Foreword by Dylan Wiliam
Acknowledgements
Figures and tables
Introduction
Part 1. Why we’re wrong
1. Don’t trust your gut
2. Traps and biases
3. Challenging assumptions
4. Why we disagree and how we might agree
5. You can prove anything with evidence!
Part 2. Through the threshold
6. The myth of progress
7. Liminality and threshold concepts
8. Learning: from lab to classroom
9. The input/output myth
10. The difference between experts and novices
Part 3. What could we do differently?
11. Deliberately difficult
12. The spacing effect
13. Interleaving
14. The testing effect
15. The generation effect
16. Variety
17. Reducing feedback
18. Easy vs. hard
Part 4. What else might we be getting wrong?
19. Why formative assessment might be wrong
20. Why lesson observation doesn’t work
21. Grit and growth
22. The dark art of differentiation
23. The problem with praise
24. Motivation: when the going gets tough, the tough get going
25. Are schools killing creativity?
Conclusion: The cult of outstanding
Appendix 1. Data by numbers by Jack Marwood
Appendix 2. Five myths about intelligence by Andrew Sabisky
Index
Copyright

Figures and tables

Figure 1.1. The blind spot test

Figure 1.2. Checker shadow illusion

Figure 1.3. Checker shadow illusion version 2

Figure 1.4. Jastrow’s duck/rabbit illusion

Figure 1.5. Piracy and global warming

Figure 2.1. Class size – inverted U

Figure 5.1. The learning pyramid

Figure 5.2. Dale’s cone of experience

Figure 5.3. The brain

Figure 6.1. What progress looks like

Figure 9.1. The Italian Game

Figure 9.2. Ebbinghaus’ forgetting curve

Figure 10.1. The teaching sequence for independence

Figure 12.1. The spacing effect

Figure 17.1. Feedback for clarification

Figure 17.2. Feedback to increase effort

Figure 17.3. Feedback to increase aspiration

Figure 22.1. Possible effects of differentiation on PISA scores

Figure A2.1. Carroll’s three stratum model of human intelligence

Table 12.1. Optimal intervals for retaining information

Table 13.1. Example of a spaced and interleaved revision timetable

Table 17.1. Possible responses to feedback

Table 24.1. The effort matrix

Table 24.2. Disruption/effort

Table A2.1. Heritability factors

The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.

John Maynard Keynes

If God held enclosed in his right hand all truth, and in his left hand the ever-striving drive for truth, even with the corollary of erring forever and ever, and were to say to me: Choose! – I would humbly fall down at his left hand and say: “Father, give! Pure truth is indeed only for you alone.”

G. E. Lessing

If human nature were not base, but thoroughly honourable, we should in every debate have no other aim than the discovery of truth; we should not in the least care whether the truth proved to be in favour of the opinion which we had begun by expressing, or of the opinion of our adversary. That we should regard as a matter of no moment, or, at any rate, of very secondary consequence; but, as things are, it is the main concern. Our innate vanity, which is particularly sensitive in reference to our intellectual powers, will not suffer us to allow that our first position was wrong and our adversary’s right. The way out of this difficulty would be simply to take the trouble always to form a correct judgment. For this a man would have to think before he spoke. But, with most men, innate vanity is accompanied by loquacity and innate dishonesty. They speak before they think; and even though they may afterwards perceive that they are wrong, and that what they assert is false, they want it to seem the contrary. The interest in truth, which may be presumed to have been their only motive when they stated the proposition alleged to be true, now gives way to the interests of vanity: and so, for the sake of vanity, what is true must seem false, and what is false must seem true.

Arthur Schopenhauer

While people are entitled to their illusions, they are not entitled to a limitless enjoyment of them and they are not entitled to impose them upon others.

Christopher Hitchens

Believe those who are seeking the truth; doubt those who find it.

André Gide

I don’t necessarily agree with everything I say.

Marshall McLuhan

Introduction

I beseech you, in the bowels of Christ, think it possible you may be mistaken.

Oliver Cromwell

This is a book about teaching, but it is not a manual on how to teach. It is a book about ideas, but not, I hope, ideological. It is a book about thinking and questioning and challenging, but it also attempts some possible answers.

By training and inclination I’m a teacher. The ideas in this book are therefore viewed through the prism of my experience of working in schools, but they should be equally applicable to every other area where people want, or are required, to learn. The intention is to help you to develop the healthy scepticism needed to spot bad ideas masquerading as common sense. In so doing, I hope this will provide a better appreciation both of what ‘learning’ might mean and how we might get better at it.

If you feel a bit cross at the presumption of some oik daring to suggest everything you know about education might be wrong, please take it with a pinch of salt. It’s just a title. Of course, you probably think a great many things that aren’t wrong. The question refers to education in the widest as well as the most narrow senses. Although I’m explicitly critical of certain policies and practices, what I’m really criticising is certainty. My hope is that you will consider the implications of being wrong and consider what you would do differently if your most cherished beliefs about education turned out not to be true.

Naturally, there are countless things that you do, day in, day out, which you take completely for granted and that work just fine. By the same token, there are probably very many other things that might be wrong with education but which fall outside the scope of this book. I’ve chosen to write about those beliefs and certainties that I’ve found most confounding in my career as a teacher. These are often concepts and ideas that we accept so unquestioningly that we’ve stopped thinking about them because we think with them.

To that end, I have identified certain ideas and ways of thinking which you may well find challenging or troublesome. These are the threshold concepts* of the book:

Seeing shouldn’t always result in believing (Chapter 1).
We are all victims of cognitive bias (Chapter 2).
Compromise doesn’t always result in effective solutions (Chapter 4).
Evidence is not the same as proof (Chapter 5).
Progress is a gradual, non-linear process (Chapters 6 and 7).
Learning is invisible (Chapter 8).
Current performance is not only a poor indication of learning, it actually seems to prevent it (Chapters 8 and 9).
Forgetting aids learning (Chapter 9).
Experts and novices learn differently (Chapter 10).
Making learning more difficult can make it more durable (Chapter 11).

The main thing I think we’re wrong about is the belief that we can see learning. This conviction has probably been around for as long as there have been teachers and students. It is so deeply embedded in the way we see the world that we don’t even think about it: it is a self-evident truth. Almost everything teachers are asked to do is predicated on this simple idea. We teach, children learn. This is the input/output myth. We may have an inkling that things aren’t quite this straightforward, but we act as if they are. Pretty much every lesson taught by every teacher in every school depends on the idea that we can see learning happen.

If we’re wrong about this, what else might we be wrong about? If it’s true that learning is invisible, where does that leave Assessment for Learning, lesson observation and the whole concept of ‘outstanding’ teaching? Up a particularly filthy creek in a paper canoe, that’s where!

Education has become like the woman in the gospels who ‘suffered under many doctors’. Everyone is happy to prescribe their own favourite medicine. And how do they know it works? Because it ‘feels right’. But often what has the greatest impact on pupils’ learning is deeply and bafflingly counter-intuitive. In Part 1 we will dismantle the flaws in our thinking on which the edifice of belief depends. We’ll survey the tangle of assumptions that have grown up around the education debate and hack through the current vogue for research and evidence in our attempt to find some solutions.

You see, there are some things in which we might be able to place our trust. These are not magic beans – they’re the product of rigorous, repeatable scientific research, and they’re free! For over a century, boffins have been investigating how we learn and remember. Nowadays, this gets called cognitive science, but investigations into these areas go back at least as far as Plato. They picked up speed in the 19th century with such thinkers as William James and Hermann Ebbinghaus, and an explosion of research in the latter half of the 20th century started to indicate that what we thought we knew about learning was widely misunderstood and that the facts were deeply surprising. This will be the focus of Part 2.

One man who has had a very particular influence on the field of cognitive psychology, especially in the area of learning, memory and forgetting, but is little known within education circles, is Professor Robert A. Bjork of UCLA. Despite spending much of the past five decades amassing a trove of fascinating insights on remembering, forgetting and learning, his research – and that of other cognitive psychologists – has, until very recently, received little attention in the secret garden of education. Why this is I’m not sure. But one of my hopes in writing this book is that teachers and policy makers are made more aware of this hidden body of knowledge. Because curriculum time is always limited we need to decide which is more important: teaching or learning. Do we want to make sure we teach as much as possible or that students learn as much as possible? Do we want pupils to perform well in a lesson or in the future? Do we want them to learn quickly or do we want that learning to last? You can’t necessarily have both, so in Part 3 we’ll take a look at some of Bjork’s ideas about making learning harder.

Then, in Part 4, we’ll look at how we can rethink some of the classroom practices we take for granted and consider whether we might benefit from doing them differently. Although we could pick on all sorts of sacred calves, the ones we will concentrate on are formative assessment, lesson observation, differentiation, character, praise, motivation and creativity.

You might find this book provocative. It’s meant to be. My hope is that we have enough in common to discuss our beliefs about education without anyone getting too upset. Obviously, there are no guarantees that this will play out as I intend. Misunderstandings will occur; mistakes will be made, and you may need to adjust your views to some degree.†

Of course, I acknowledge that I have no idea of the best way to teach your subject to your students. That is (or should be) your area of expertise. I don’t intend to make sweeping assumptions or assertions about what you should be doing, just speculations on what you could be doing. As far as I’m concerned, teachers can teach standing up, sitting down, hopping on one leg, wearing flip-flops or with a bag over their heads. I’m not interested in how students are seated, what techniques are used in classrooms or what resources are produced. I’m broadly keen on marking books, but I don’t much care whether teachers use green or red pen, pencil, invisible ink or human blood. I’m in favour of having high expectations for every pupil, but it should be up to individuals what this looks like. I’m a fan of hard work and suspicious of fun for fun’s sake, but that’s just me; you should do what you deem best. It really doesn’t matter what you do, as long as it’s effective.

And that’s the problem. An awful lot of what teachers think of as ‘effective’ only seems so because it works for them; the alarming truth is that this doesn’t mean it works for their pupils – judging your impact is a little more complicated than that.

I want to make it very clear here, right at the outset, that I offer no guarantees and no assurances that what I suggest will ‘work’. There is no template you can simply adopt to solve the problems you face. Regrettably, life – and especially education – is rarely that simple. Anyone who makes such an offer is not to be trusted. Your experiences will be different to mine; you will have worked in different contexts, with different students and you may well have different values and aspirations. But whatever our differences, being prepared to subject your beliefs to a fearless examination will, if nothing else, make you a more thoughtful educator. The offer I make is that if you’ve given sufficient thought to what you believe and actively looked for errors in your thinking, all will probably be well. Just as, if you do anything simply because someone told you to, it will probably fail. In the words of Shakespeare’s Hamlet, “There is nothing either good or bad, but thinking makes it so.”

So, if you disagree with any or all of the points I make, that’s fine. Really. I’m not trying to convince you of anything, except that you are sometimes wrong. What you do with that information is entirely up to you. You see, we’re all wrong at times. Naturally no one sets out to be wrong. No one ever at any point in history pursued a course of action firm in the belief that they were wrong to do so. Everything we do, we do in the belief that we are right. But believing that we are right necessarily means that there must be times when others are wrong. I’m going to spend some time explaining all this in Chapters 1 and 2, but for now, just try to entertain the uncomfortable possibility that you may be wrong. Or, if it makes you feel better, blame someone else.

The aim of this book is to help you ‘murder your darlings’. We will question your most deeply held assumptions about teaching and learning, expose them to the fiery eye of reason and see if they can still walk in a straight line after the experience. It seems reasonable to suggest that only if a theory or approach can withstand the fiercest scrutiny should it be encouraged in classrooms. I make no apologies for this; why wouldn’t you be sceptical of what you’re told and what you think you know? As educated professionals, we ought to strive to assemble a more accurate, informed or at least considered understanding of the world around us.

To that end, I will share with you some tools to help you question your assumptions and assist you in picking through what you believe. We will stew findings from the shiny white laboratories of cognitive psychology, stir in a generous dash of classroom research and serve up a side order of experience and observation. Whether you spit it out or lap it up matters not. If you come out the other end having vigorously and violently disagreed with me, you’ll at least have had to think hard about what you believe.

And I’ll be happy with that.

* If you’re desperate to find out what on earth a threshold concept might be, feel free to skip ahead to Chapter 7.

† Ideally this readjustment would be a two-way process with your experiences colouring my perceptions as mine colour yours, but because you’re reading a book it’s hard to participate. Rather than just dismissing me as a fool and a charlatan, if you do feel compelled to set me straight on anything, please do visit my website and offer your criticism and raise your concerns: www.learningspy.co.uk.

Part 1

Why we’re wrong

Before we get started, have a go at answering the following questions:

1. Have you ever been wrong?

2. Might you ever be wrong?

3. List five things you’ve been told about education which you think might possibly be wrong:

4. Have you ever acted on any of this information or anything else about which you weren’t positive?

5. If so, why?

Now, check your answers below.

...................................................

If you’ve answered yes to questions 1 and 2, well done. You can skip Part 1 if you like and pass straight through the threshold. If you answered no, I’m going to attempt to persuade you that you might be wrong. Read on.

If you managed to list one or more items in response to question 3, well done. There are undoubtedly more. If you weren’t able to think of anything, stick around.

If you answered yes to question 4, I congratulate you on your ability to face the uncomfortable truth. If you answered no, you’re either a very superior being or just plain wrong.

And if you answered ‘I don’t know’ to question 5, welcome to my world. This is exactly where I found myself before I began the process of thinking about the content of this book. I hope my journey is of some use to you.

Chapter 1

Don’t trust your gut

Man prefers to believe what he prefers to be true.

Francis Bacon

Nobody wants to be wrong – it feels terrible. In order to protect ourselves from acknowledging our mistakes, we have developed a sophisticated array of techniques that prevent us from having to accept such an awful reality. In this way we maintain our feeling of being right. This isn’t me being smug by the way. Obviously, I’m as susceptible to self-deception as anyone else; as they say, denial ain’t just a river in Egypt.

There are two very good reasons for most of the mistakes we make. Firstly, we don’t make decisions based on what we know. Our decisions are based on what feels right. We’re influenced by the times and places in which we live, the information most readily available and which we’ve heard most recently, peer pressure and social norms, and the carrots and sticks of those in authority. We base our decisions both on our selfish perceptions of current needs and wants and on more benevolent desires to effect positive change. And all of this is distilled by the judgements we make of the current situation. But our values and our sense of what’s right and wrong can lead us into making some very dubious decisions.

Secondly, we’re deplorably bad at admitting we don’t know. Because of the way we’re judged, it’s far less risky to be wrong than it is to admit ignorance. If we’re confident enough, people assume we must know what we’re talking about. Most of us would rather have a clear answer, even if it turns out to be wrong, than an admission that someone is unsure. Because no one likes a ditherer, certainty has become a proxy for competence. Added to this, very often we don’t know that we don’t know.

Feeling uncertain is uncomfortable, so when we’re asked a hard question we very often substitute an easier one. If we’re asked, “Will this year’s exam classes achieve their target grades?”, how could we know? It’s impossible to answer this question honestly. But no one wants to hear, “I don’t know,” so we switch it for an easier question like, “How do I feel about these students?” This is much easier to answer – we make our prediction without ever realising we’re not actually answering the question we were asked.

Despite it being relatively easy to spot other people making mistakes, it’s devilishly difficult to set them straight. Early in my career as an English teacher, I noticed that children would arrive in secondary school with a clear and set belief that a comma is placed where you take a breath. This is obviously untrue: what if you suffered with asthma? So how has this become an accepted fact? Well, mainly because many teachers believe it to be true. This piece of homespun wisdom has been passed down from teacher to student as sure and certain knowledge, probably for centuries. If you do enough digging, it turns out punctuation marks were originally notation for actors on how to read scripts. It’s still fairly useful advice that you might take a breath where you see a comma, but it’s a staggeringly unhelpful rule on how to use them.*

I’ve spent many years howling this tiny nugget of truth at the moon, but it remains utterly predictable that every year children arrive at secondary school with no idea how to use commas. Teaching correct comma use depends on a good deal of basic grammatical knowledge. It’s a lot easier to teach a proxy which is sort of true. Although the ‘take a breath’ rule allows students to mimic how writing should work, it prevents a proper understanding of the process. And so the misunderstanding remains. As is often observed, a lie can travel halfway around the world before the truth has had time to find its boots, let alone tug them on.

This kind of ‘wrongness’ is easy to see. It’s much more difficult when what we believe validates who we are. Many of our beliefs define us; a challenge to our beliefs is a challenge to our sense of self. No surprise then that we resist such challenges. Here are some things which defined me and which I used to believe were certain:

Good lessons involve children learning in groups with minimal intervention from the teacher.
Teachers should minimise the time they spend talking in class and particularly avoid whole class teaching.
Children should be active; passivity is a sure sign they’re not learning.
Children should make rapid and sustained progress every lesson.
Lessons should be fun, relevant to children’s experiences and differentiated so that no one is forced to struggle with a difficult concept.
Children are naturally good and any misbehaviour on their part must be my fault.
Teaching children facts is a fascistic attempt to impose middle class values and beliefs.

These are all things I was either explicitly taught as part of my training to be a teacher or that I picked up tacitly as being self-evidently true. Maybe you believe some or all of these things to be true too. It’s not so much that I think these statements are definitively wrong, more that the processes by which I came to believe them were deeply flawed. In education (as in many other areas I’m sure), it would appear to be standard practice to present ideological positions as facts. Like many teachers, I had no idea how deeply certain ideas are contested as I was only offered one side of the debate.

I’ll unpick how and why I now think these ideas are wrong in Chapter 3, but before that I need to soften you up a bit. If the rest of the book is going to work, I need you to accept the possibility that you might sometimes be wrong, even if we quibble about the specifics of exactly what you might be wrong about. You see, we’re all wrong, all the time, about almost everything. Look around: everyone you’ve ever met is regularly wrong. To err is human.

In our culture, everyone is a critic. We delight in other people’s errors, yet are reluctant to acknowledge our own. Perhaps your friends or family members have benefitted from you pointing out their mistakes? Funny how they fail to appreciate your efforts, isn’t it? No matter how obvious it is to you that they’re absolutely and spectacularly wrong, they just don’t seem able to see it. And that’s true of us all. We can almost never see when we ourselves are wrong. Wittgenstein got it dead right when he pointed out: “If there were a verb meaning ‘to believe falsely’, it would not have any significant first person, present indicative.”1 That is to say, saying “I believe falsely” is a logical impossibility – if we believe it, how could we think of it as false? Once we know a thing to be false we no longer believe it.† This makes it hard to recognise when we are lying to ourselves or even acknowledge we’re wrong after the fact. Even when confronted with irrefutable evidence, we can still doubt what is staring us in the face and find ways of keeping our beliefs intact.

Part of the problem is perceptual. We’re prone to blind spots; there are things we, quite literally, cannot see. We all have a physiological blind spot: due to the way the optic nerve connects to our eyes, there are no rods or cones to detect light where it joins the back of the eye, which means there is an area of our vision – about six degrees of visual angle – that is not perceived. You might think we would notice a great patch of nothingness in our field of vision but we don’t. We infer what’s in the blind spot based on surrounding detail and information from our other eye, and our brain fills in the blank. So, whatever the scene, whether a static landscape or rush hour traffic, our brain copies details from the surrounding images and pastes in what it thinks should be there. For the most part our brains get it right, but occasionally they paste in a bit of empty motorway when what’s actually there is a motorbike.

Maybe you’re unconvinced? Fortunately there’s a very simple blind spot test:

Figure 1.1. The blind spot test

Close your right eye and focus your left eye on the cross. Hold the page about 25 cm in front of you and gradually bring it closer. At some point the left-hand spot will disappear. If you do this with your right eye focused on the cross, at some point the right-hand spot will disappear.

So, how can we tell when our perception is accurate and when it’s not? Worryingly, we can’t. But the problem goes further. French philosopher Henri Bergson observed, “The eye sees only what the mind is prepared to comprehend.” Quite literally, what we are able to perceive is restricted to what our brain thinks is there.

Further, Belgian psychologist Albert Michotte demonstrated that we ‘see’ causality where it doesn’t exist. We know from our experience of the world that if we kick a ball, the ball will move. Our foot making contact is the cause. We then extrapolate from this to infer causal connections where there are none. Michotte designed a series of illustrations to demonstrate this phenomenon. If one object speeds across a screen, appears to make contact with a second object and that object then moves, it looks like the first object’s momentum is the cause of the second object’s movement. But it’s just an illusion – the ‘illusion of causality’. He showed that with a delay of a second, we no longer see this cause and effect. If a large circle moves quickly across the screen preceded by a small circle, it looks like the large circle is chasing the small circle.‡ We attribute causality depending on speed, timing, direction and many other factors. All we physically see is movement, but there’s more to perception than meets the eye. Consider how we infer the causes of complex events: if we see a teacher teach two lessons we consider inadequate, we infer that they’re an inadequate teacher.

This leads us to naive realism – the belief that our senses provide us with an objective and reliable awareness of the world. We tend to believe that we see objects as they really are, when in fact what we see is just our own internal representation of the world. And why wouldn’t we? If an interactive whiteboard falls on our head, it’ll hurt! But while we may agree that the world is made of matter, which can be perceived, matter exists independently of our observations: the whiteboard will still be smashed on the floor even if no one was there to see it fall. Mostly this doesn’t signify; what we see tends to be similar enough to what others see as to make no difference. But sometimes the perceptual differences are such that we do not agree on the meaning and therefore on the action to be taken.

Optical illusions not only prove that our senses can be mistaken; more importantly, they demonstrate how the unconscious processes we use to construct an internal reality from raw sense data can go awry.

Figure 1.2. Checker shadow illusion. Source: http://web.mit.edu/persci/people/adelson/checkershadow_illusion.html.

In Edward H. Adelson’s checker shadow illusion (Figure 1.2), the squares labelled A and B are the exact same shade of grey. No really. The shadow cast by the cylinder makes B as dark as A, but because the squares surrounding B are also darker you may not believe it.

Here’s a second version of the illusion:

Figure 1.3. Checker shadow illusion version 2. Source: http://persci.mit.edu/gallery/checkershadow/proof.

We know A is a dark square and B is a light square. Seeing the squares as the same shade is rejected by our brain as unhelpful. We are unable to see what is right there in front of us. This neatly proves that there cannot be a simple, direct correspondence between what we perceive as being out there and what’s actually out there. Our brains edit our perceptions so that we literally see something that isn’t there. When I first saw this I couldn’t accept that the evidence of my eyes could be so wrong. I had to print out a copy, cut out the squares and position them side by side in order to see the truth. Illusions like this are “a gateway drug to humility”.2 They teach us what it is like to confront the fact we’re wrong in a relatively non-threatening way.

Here’s another example. Log on to the internet and watch this video before reading further: http://goo.gl/ZXEGQ7.§

The research of Daniel Simons and Christopher Chabris into ‘inattentional blindness’ reveals a similar capacity for wrongness.3 Their experiment, the Invisible Gorilla, has become famous – if you’ve not seen it before, it can be startling: between 40 and 50 per cent of people fail to see the gorilla. And if you have seen it before, did you notice one of the black T-shirted players leave the stage? You did? Did you also see the curtain change from red to gold? Vanishingly few people see all these things. And practically no one sees all these changes and still manages to count the passes! Intuitively, we don’t believe that almost half the people who first see that clip would fail to see someone in a gorilla suit walk on stage and beat his chest for a full nine seconds. But we are wrong.

So is it never OK to believe the evidence of our own eyes? Of course there are times when we absolutely should accept the evidence of our own eyes over what we’re told. If you had read some research which stated that children are safe in nurseries and were then to visit a nursery and see a child being slapped, it would be ludicrous to deny the evidence of what you’d seen over the research that refuted it. But we would be foolish indeed to draw any conclusion about all nurseries, or all children, based merely on the evidence of our own eyes. For the most part ‘anecdotal evidence’ is an oxymoron. We’re always guessing and predicting several steps beyond the available evidence.

Cognitive illusions can be as profound as perceptual illusions

Should we place our trust in research, or can we rely on our own experiences? Of course first-hand observations can sometimes be trusted. Often, if it walks like a duck and sounds like a duck, we should accept that it’s a duck. But it’s possible to be so eager to accept we’re right and others are wrong that we start seeing ducks where they don’t exist. It’s essential for anyone interested in what might be true, rather than what they might prefer to be true, to take the view that the more complicated the situation, the more likely we are to have missed something.

Sometimes when it looks like a duck it’s actually a rabbit.

Figure 1.4. Jastrow’s duck/rabbit illusion

The philosopher Ludwig Wittgenstein records his confusion with the seeming impossibility that the same image could contain multiple contrary meanings and asked: “But how is it possible to see an object according to an interpretation? – The question represents it as a queer fact, as if something were being forced into a form it did not really fit. But no squeezing, no forcing took place here.”4

We cannot hold both perceptions in mind simultaneously. Once we become aware that both forms can be inferred from the lines on the paper we can see either duck or rabbit at will, but we can’t see them both at the same time. Our mind flips from one perception to the other, but the possibility of seeing both duck and rabbit remains constant. When we persuade ourselves and others that what we see is what must be there, could we be missing what else might be there?

Possibly, though, ‘truth’ is relative. How could we ever hope to divine objective truth when all we have at our disposal are perception, logic and faith? Some things we decide are true based on the evidence of our senses. This might work for small, quotidian truths, but I’m not sure it works for anything much beyond this. And as we’ve seen, we cannot trust the evidence of our own eyes.

Logic enables us to make inferences based on what we already know. For example, dogs can’t fly. I don’t have to see every dog to know a particular dog will be flightless. While this may be true, it is also limited to our actual experience. Logic is a notoriously poor predictor of exceptions – the black swans that force us to change our beliefs. The existence of black swans¶ teaches us that our logic must be ‘falsifiable’|| – that is to say, we must be able to conceive of a theory as being incorrect. If we cannot, then it is an unhelpful way of seeing the world; nothing could reasonably convince us we might be wrong. For a theory to be logically coherent, we must be able to agree the circumstances in which it would be wrong.

And finally faith. Although we can obtain documentary evidence, we take our birthdate on faith; we’ve no actual way of knowing for sure that what we’re told is correct. Whenever we accept the authority of others we act on faith. For example, we take the fact that the Battle of Hastings took place in 1066 on faith. But perhaps that’s no surprise; the psychologist Daniel Gilbert suggests we are predisposed to take pretty much anything, even obviously nonsensical or ludicrous things, on faith. He submits that in order to try to understand a statement we must first believe it. Only when we have worked out what it would mean for the statement to be true can we choose not to believe it.5 So although certain beliefs are contested, I’m willing to accept, for instance, that the Holocaust occurred, that Neil Armstrong walked on the moon and that Elvis didn’t. Others may not be so eager to accept these articles of faith, but in order not to do so they must first believe them.

The point of this apparent digression is that there are all too many ‘truths’ about education that I’ve been prepared to take on faith, which, as we will see, have turned out to be plain wrong. Truth as such is more slippery than all that. Can contradictory truths coexist? Or can truth sometimes be subjective? Can a thing just be true for me? Sadly, a belief in subjective truth is incoherent. Subjectivism states that no claims about reality can be objectively true or false, but this itself is an objective claim about reality. That doesn’t prove it’s false, just that it’s incoherent. No rational person can truly believe in a subjective reality. But why then is it such a popular misconception? Well, some things (like taste) really are subjective. More importantly, subjectivism appeals to us because it seems like the only alternative that captures the idea that our perspective matters. We may each look at the same situation and come to different conclusions, and we might all be right. Clearly it’s possible to have different viewpoints, so it seems truth can be conditional or provisional within an objective framework. But this is confusing and clashes with the equally powerful belief that there must be some things which are always true.

The fusion of these beliefs is enactivism: there really is an objective reality out there, but we cannot perceive it directly. Instead we share in the generation of meaning; we don’t just exchange information and ideas, we change the world by our participation in it. We take the evidence of our senses and construct our own individual models of the world. But because we start with the same objective reality, our individual constructed realities have lots of points of contact. Although we all perceive education differently, usually we’re all talking about more or less the same thing.

Objective reality is just stuff that happens. Meaning and purpose only exist as we construct reality. Because we’re constantly interacting with each other, individual realities are permeable. When others interact with us we often have to adjust our view of reality. In this way we cooperate in the creation of a shared, constructed reality. We can encounter the same manifestations of reality but have a profoundly different experience of it. The psychologist Daniel Kahneman calls this effect What You See Is All There Is6 – if we’re not aware of a thing it fails to exist. For example, when we plan a lesson it exists only in our own imagination; as soon as we get in front of the class and teach the lesson every pupil will interpret it differently in order to make it real for them. Their experience is all there is, and they will only remember that experience. Our hope is that we have enough in common for students to think about and remember the ideas we want them to learn about.

As well as faulty perceptual systems, we are also at the mercy of faulty thinking, far more than most of us would believe possible. A good deal of what we believe to be right is based on emotional feedback. We are predisposed to fall for a comforting lie rather than wrestle with an inconvenient truth. And we tend to be comforted by what’s familiar rather than what makes logical sense. We go with what ‘feels right’ and allow our preferences to inform our beliefs. If we’re asked to explain these beliefs, we post-rationalise them; we layer on a sensible logical structure and bury the emotional roots because we instinctively know that it’s not OK to say, “Because it just feels right.”**

Most of the time this doesn’t matter. When we’re dealing with stuff that fits with our world view, or just seems sensible, we’re pretty accommodating; we accept working assumptions without questioning them. But when a cognitive clash occurs, when our beliefs are challenged, then rationality is trumped by self-perception and vested interest.

In the 1950s, psychologist Leon Festinger proposed the theory of cognitive dissonance, which suggests that we are programmed to hold our attitudes and beliefs in harmony or, as Festinger put it, in cognitive consistency.7 Attempting to hold two contradictory thoughts or beliefs (cognitions) in our heads at the same time results in us experiencing a deeply uncomfortable sense of dissonance. This leads us to take one of the following actions:

Change our beliefs to fit the new evidence.
Seek out new evidence which confirms the belief we’d prefer to hold.
Reduce the importance of disconfirming evidence.

So, for instance, if we’re told that good teachers mark frequently, and we believe ourselves to be a good teacher and yet we can never seem to make headway into that teetering pile of books, we will experience cognitive dissonance. This feeling is so unpleasant that we will justify our beliefs in such a way that we can make these apparently opposing ideas fit neatly into our world view and self-image. We either excuse ourselves: