In Exam Literacy: A guide to doing what works (and not what doesn't) to better prepare students for exams, Jake Hunton focuses on the latest cognitive research into revision techniques and delivers proven strategies which actually work. Foreword by Professor John Dunlosky.
Read, highlight, reread, repeat. If such a revision cycle sounds all too wearily familiar, you and your students need a better route to exam success. And in light of the recent decision to make all subjects at GCSE linear, so that students will be tested in one-off sittings, it will be even more important that students are well equipped to acquire and recall key content ahead of their exams. In this wide-ranging guide to effective exam preparation, Jake Hunton casts a careful eye over a wide range of research into revision techniques and details the strategies which have been proven to deliver the best results. With plenty of practical suggestions and subject-specific examples, Exam Literacy provides teachers with user-friendly advice on how they can make the content they cover stick, and shares up-to-date, evidence-based information on:
- The nature of learning and the various types of memory.
- How to improve students' retention of knowledge and recall of content.
- Why popular revision techniques, such as rereading, highlighting and summarising, may not be as effective as you think.
- How revision strategies that have been identified as being more effective – such as interleaving, elaborative interrogation, self-explanation and retrieval practice – can be embedded into day-to-day teaching.
- How students can be encouraged to make use of these winning strategies when revising independently.
Jake Hunton
A guide to doing what works (and not what doesn’t) to better prepare students for exams
Department of Psychological Sciences, Director, Science of Learning and Education Center, Kent State University
Learning is difficult. Or at least, learning anything novel and complex is difficult. There is no way around it. So much so that any technique described as making learning easy almost certainly does not hold true, because learning is not easy and it cannot be made easy. What makes matters worse is that students and teachers can mistake fast-and-easy progress for actual learning success when, in fact, many strategies that give rise to fast progress also lead to fast forgetting. And by using ineffective techniques and schedules for learning, students (and teachers) may inadvertently be making it more difficult to reach their learning goals. All of this can be frustrating, both for teachers who seek to help their students retain what they have learned and for students who want to succeed but too often find themselves struggling.
What is the solution? That is, in order to reduce frustrations and ultimately improve student achievement, how should instructors teach and how should students guide their own learning? If you have ever asked questions like these, do not put down this book – because in Exam Literacy Jake Hunton provides the answers in an easy-to-read volume that will inspire teachers and students alike. His perspective is especially noteworthy. After spending many years experiencing the same frustrations while teaching students foreign languages, Jake realized he needed to make some changes, and in order to do so he turned to empirical evidence (the hardcore research that indicates what really works) about which learning techniques are most effective and how to use them with fidelity.
Some of what he learned you may find surprising, such as that some of the most effective techniques can be implemented in the classroom, yet many teachers do not know about these techniques or how to use them effectively. One reason for teachers’ limited knowledge can be traced back to the textbooks used to educate teachers – recent surveys in the United States and in the Netherlands indicate that the majority of these textbooks do not even mention these techniques!
To help spread the great news, Jake provides a user’s guide to some of the most effective techniques, and he does so in a humorous and engaging manner that will be accessible to any interested reader. So, if you want to gain insight into how to improve your students’ learning (or even your own), then read on – this book will no doubt become an invaluable resource for you and anyone who embraces lifelong learning in school and beyond.
When I started thinking about writing this book I came up with the title, ‘Exam Literacy: How to Beat the Exam’. Are you cringing as much as I am? I am extremely grateful to David Bowman at Crown House not only for his constant support but also for his gentle rejection of that title.
I began by writing a guide to the steps students could follow, including how to download exam papers, specifications, mark schemes and examiner reports. Oh, and how to right-click and make a folder for each set of documents. Initially, it hurt a bit that the more I read, the more I realised I didn't know – and that writing a guide about how to organise exam material into folders and build up a store of tips and tricks from examiner reports no longer felt particularly relevant.
I am also grateful to all those involved in education – the teaching community, the teacher-bloggers and tweeters – including, of course, those who have been so helpful and supportive in allowing me to refer to their work; in particular, Dr Yana Weinstein from the Learning Scientists, Professor John Sweller, David Didau and, of course, Professor John Dunlosky. All of them have acted as my unwitting psychologists in helping to rein in my bias to more tolerable levels. That is my own view though, and I'm sticking to it.
Nor am I wrong to say a huge merci, danke and gracias to the Crown House Publishing team; my brilliant editor, Emma Tuck; and my family – former head teacher Mum, Debbie; deputy head teacher brother, Jude (@judehunton); his wife and MFL teacher, Mariana (@srahunton); my Dad, Colin, and his wife, Gill; my wife, Emily; and, of course, my wonderful son, Tristan.
Thank you also to the four wonderful schools – and their fantastic, hard-working staff and students – in which I have had the pleasure of working and training: the Grove School in Market Drayton, Alsager School in Cheshire, Arthur Terry School in Sutton Coldfield and Heart of England School in the West Midlands.
In 1997, at Easter, I copied out new notes while looking at my old notes from my GCSE business studies textbook.
In 1999, I stared at the notes I had made in class for A level geography.
In March 2003, I sat on a train rereading notes from a lecture on Spanish morphology.
In April 2005, I highlighted passages from a book on psychometric testing.
This was all in preparation for exams at which I might have done better.
I sat these exams not knowing that there might be more effective ways of studying when away from the classroom, ways which might help to make studying more effortful yet rewarding.
I didn’t know how much I didn’t know when revising or restudying.
A three-hour block of time copying out of the textbook felt like good, old-fashioned, solid revision which should serve me well. There was a tangible product to my revision which meant I felt like I was going to do very well in the exam because, of course, how couldn’t I, what with all that observable product at the end of my studying?
The strategy of copying out the notes and looking at the notes made it feel as if I was doing something productive. I would judge how well I had studied by the length of time I had dedicated to doing it.
I’m not sure what I expected from staring at the notes I had made in A level geography – perhaps that the key terms and definitions would evaporate off the page, go through a process of condensation and fall as precipitation, percolating into my long-term memory.
Rereading lulled me into a nice, fuzzy sense of comfortable cognitive ease. I confused familiarity with my notes with checking whether I actually knew the material when they weren’t there.
Highlighting passages from a book on psychometric testing also lulled me into thinking that I knew the material much better than I did.
None of these study strategies were as effective as they might have been had I known more about techniques that could have told me what I knew, or didn’t know, and perhaps helped to better embed what I wanted to embed in my long-term memory.
In 2007, I taught some lessons where I limited teacher-talk time to no more than 20 minutes.
In 2008, I managed to finish teaching my GCSE language course by Easter to allow some time at the end to revise. The course was based on the textbook and taught in a blocked order of topics.
In 2009, I gave the students plenty of past papers to do at home plus vocabulary to learn, but I didn’t think to teach them any strategies on how they could study away from the classroom.
As both student and teacher, I didn't know what I didn't know – and some of what I did know was based on fragments of what I had been told was right, so there were a few urban myths among my thinking (the excellent Urban Myths about Learning and Education hadn't been written back then).1
When I first started thinking about writing this book three years ago, I admit that, as well as an analysis of potentially more effective study skills, I also began to consider ways that I could be more creative with exam materials. How about a card sort to match up comments from examiner reports with questions on the exam papers? How about designing a PowerPoint with the exam paper question, the mark scheme and the examiner report? How about students designing their own exam paper at an early stage in the course? How about cutting up sample answers and then giving the students a time limit to match the answers with the grades? How about teaching students how to use an online exam paper generator and setting a homework in which they create a question for their friend to complete? And so on.
I knew that exams were important when I started teaching, but I’m shocked to recall how little else I knew about them. I didn’t know they were the source of so much debate and controversy. I didn’t realise that an educational fault line runs right down the exam room and through wobbly, graffiti-daubed desks. Exams good, exams bad; exams too much, exams not enough. It took me a long time to see the political debate around exams. And to be honest, I’m not sure that I fully engage with it now.
The focus has changed a little since I started writing this book, so while there are a few references here and there to summative testing, it is more of a discussion about learning strategies which might work more effectively versus those that might not, with an overlap between the classroom and possible transfer to outside the classroom. I stress might: they are learning strategies which have shown promise versus the ones that have shown less promise.
The book is written from the point of view of a teacher who wants to know more about effective learning strategies and how (or if) they transfer away from the classroom. Some of the areas covered include:
Outsourcing study skills versus teachers teaching them within their subject domain.
Study skills/learning strategies which have been identified as those which might be less effective than others.
Study skills/learning strategies which have been identified as those which might be more effective than others.
Potential examples of how the techniques which might be more effective could look.
The overlap between learning strategies in the classroom and away from the classroom.
I'm grateful to all the researchers and bloggers whose work I have drawn on while researching this book. There is always so much to read and so much to learn that even when you feel you are finally satisfied, something new comes along – another study, another blog, another way of challenging your thinking – and you question what you believed and start rethinking and rewriting again. My own bias and I have disagreed a number of times throughout.
I hope you enjoy the debate.
1 See Pedro De Bruyckere, Paul A. Kirschner and Casper D. Hulshof, Urban Myths about Learning and Education (London: Academic Press, 2015).
Part 1
Chapter 1
There are no magic potions to reach for when exam season approaches. There is no Asterix and Obelix ‘Getanexamfix’ druid. Unfortunately, as far as I know, there are no magic exam beans either. The next new initiative might turn out to be not a panacea but, in fact, another way to foster an atmosphere of pernicious accountability and ‘support’ a teacher out of the profession.
Nor are there any silver bullets for ensuring student academic success. Sometimes, though, the talk of exam success and students getting excellent grades can conjure up images of exam factories – huge, cold, industrial complexes where students are drilled in Victorian-style classrooms, writing out arithmetic on slate grey iPads.
When I started teaching I had no real understanding of how the memory works and even less of a clue about cognitive science. I thought that pace in the classroom was key (partly through received wisdom and partly through my own vanity: ‘If you want to see great pace in lessons then go to Jake’s classroom!’).
This was both comical and sad, as I really did think that doing things quickly would impress observers and keep the students engaged. It did impress observers, but I don’t know if it actually helped to engage the students.1 I fear it didn’t, because when I started working at a new school, I began teaching lessons at such a brisk pace that the students complained they couldn’t understand – I was speaking and doing things too quickly. Fears of accountability fuelled my hyperactivity and left little or no time for the students to understand the material or process it properly.
Pace became a ‘tick-box’ item in lesson observations, added to the list of ‘things we must see in a lesson observation’, such as differentiation. This sometimes led to three different sets of worksheets for clearly identifiable groups of students – and no matter how much stealth went into surreptitiously organising the class into ‘higher ability’, ‘middle ability’ and ‘lower ability’ groups, the students would always know. In the end, both the students and I became embarrassed by the whole thing. I now know that my own understanding of differentiation was rather ill-founded and not based on ‘responsive teaching’.2
I also conducted mini-plenaries (perhaps it’s just the terminology that’s a problem, since if they were considered as ‘retrieval practice’ then mini-plenaries might be thought of more positively) and peer assessment without any awareness of the potential for the Dunning–Kruger effect – that is, the cognitive bias in which individuals who are unskilled at a task mistakenly believe themselves to possess greater ability than they do. An alternative, perhaps somewhat cruder, definition is that you’re too incompetent to know that you are incompetent.
I’m not necessarily saying that pace was, and is, a bad thing; just that because I had picked up that it impressed people, it became one of the things I would do when being observed, and also something to look out for when I was required to do lesson observations. Seeking to confirm a prejudiced view was a skew that I never even knew I had.
It felt strange, nonetheless, that in observed lessons where I limited teacher-talk time and ensured my pace was good, I was mostly graded outstanding; yet I always felt that the students learned more when I stood at the front and taught in a slower and more didactic manner, followed up by some guided practice. This was the style I reverted to when teaching sans observer, especially when the exam season loomed large.
Giving students summative tasks to improve a summative outcome was also something I believed would help them to learn better over time: if I test them on the big thing in the way they are tested in exams, they will definitely get better at that big thing. This approach influenced the thinking behind a card sort I devised which involved matching up examiner reports and mark schemes.
As a language teacher, I also used listening tasks from textbooks and past papers to try to improve students’ listening skills on a later summative listening test. It felt like I was doing my job, primarily because that was how I understood it should work from my teacher training. The fact that students’ working memories were being overloaded because the listening exercises were too complex and the skill had not been broken down did not occur to me. (One of the advantages of deliberate practice – where a skill is broken down into smaller tasks – is that there is less of a load on working memory.)
By designing writing tasks which were summative assessments and then expecting students to improve on their next summative assessment, I was confusing learning and performance. Daisy Christodoulou (@daisychristo) notes that learning is about storing detailed knowledge in long-term memory whereas performance is about using that learning in a specific situation.3 The two have very different purposes.
In a blog post on enhancing students’ chances at succeeding at listening, Gianfranco Conti (@gianfrancocont9) raises the following issues:
Teachers do not teach listening skills, they quiz students through listening comprehensions, which are tests through and through;
They usually do not train students in the mastery of bottom-up processing skills (decoding, parsing, etc.).4
Rather than focusing on breaking down the skill of listening to ensure the students had mastered bottom-up processing skills, I instead played them extract after extract of a listening comprehension from a textbook. Because that sort of broken-down practice looks so different from the final skill, it never occurred to me that it would have been effective in building the students’ listening. It’s similar to using past papers to improve students’ grades – it doesn’t necessarily work.5
Maths teacher David Thomas (@dmthomas90) describes how over-focusing on exams can take the joy out of learning in the classroom. He observes that were it possible to teach assessment objectives directly then it would make sense for every piece of work to be a ‘mini-GCSE exam’, but this isn’t possible as they are focused on generic skills, and these skills ‘can only be acquired indirectly: by learning the component parts that build up to make the whole such as pieces of contextual knowledge, rules of grammar, or fluency in procedures’. Furthermore, ‘these components look very different to the skill being sought – just as doing drills in football practice looks very different to playing a football match, and playing scales on a violin looks very different to giving a recital’.6
The idea of being ‘exam literate’ might sound superficial (e.g. knowing key parts of the mark scheme or building up a store of key points from the examiner report), but in fact it is about spending time adopting some of the tenets of deliberate practice and building up mental models in each domain.
Just as adopting a deliberate practice model does not look like the final task, so exam literacy does not look like the final exam. I remember thinking that I was quite clever to come up with a homework task early on in a Year 12 Spanish course which got the students to design their own exam papers, and another time when I designed practice tasks which mirrored the exact style of the questions the students would face in their writing exam (even mimicking the dotted style of the lines on which students would write their answers!). I mistakenly thought that if they were familiar with the format of the paper then there would be no surprises in the exam.
The relative merits of different approaches have been a common topic of debate on Twitter and in the edublogosphere over the last few years. For example, there is a great chapter by Olivia Dyer (@oliviaparisdyer) on drill and didactic teaching in Katharine Birbalsingh’s Battle Hymn of the Tiger Teachers,7 and plenty of wonderful blog posts setting out commonsense approaches combined with aspects of cognitive science, as well as how to best plan a curriculum. A great place to start might be to have a look at any one of the Learning Scientists’ blog posts.8
The education debate seems to have been shifting towards questioning what was once generally accepted about how best to teach in the classroom and, more pertinently for this book, learning strategies that are backed up by evidence about how students can learn more effectively. Things also seem to be moving towards not so much how to teach but what to teach.
It’s tempting to think that everyone has moved on from learning styles and the like when you listen to the Twitterati, but myths masquerading as sound evidence may still be prevalent.9 (Incidentally, Dylan Wiliam, writing on the Deans for Impact blog with reference to learning styles, says: ‘it could be that the whole idea of learning-styles research is misguided because its basic assumption – that the purpose of instructional design is to make learning easy – may just be incorrect’.10) The idea that learning strategies which are designed to make it easier for the learner may actually be inhibiting learning, as well as the idea that making certain conditions more demanding for learners could help their learning, feature a number of times in this book.
The first exam results that I had with a class were good, solid results: a set of meat-and-potato results that I had spent two years cooking up using a mix of trial and error, received wisdom and slavishly following the textbook (the scheme of work). Learning and performance were quite often confused using my own brand of end-of-the-lesson-pseudo-football-manager-encouragement-speak, with ‘Great performance in today’s lesson, guys!’ featuring quite prominently.
The fact that after the exam some students came to speak to me about the paper – telling me some of the words they could remember but asking me what many other words that I knew I had taught them meant – forced me to question why curriculum coverage had been paramount. I had to finish that textbook chapter on transport before the students’ study leave could begin (what happens if gare routière comes up on the exam?). Revision could not, and should not, take place before I had covered all of the topics in the textbook.
Tired of feeling like I hadn’t done my job if the students couldn’t recall or recognise words in their exams, I dared to abandon the textbook and do a little basic research on the vocabulary that had come up consistently in previous exams. Alongside teaching the topics, I started to practise and test language that I thought would be beneficial to the students, and practised and tested this content no matter what topic they were studying. (This took a simple form – projecting the list onto a whiteboard, covering up the meanings of words and phrases and then calling out the Spanish and waiting for the students to shout out the English, whole-class retrieval-style.)
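For anyone who likes to see such routines written down another way, here is a minimal sketch of that cover-and-test routine as a self-quizzing script a student could run at home – the vocabulary pairs are invented for illustration, not taken from any exam list.

```python
# A minimal sketch of cover-and-test vocabulary retrieval as a
# self-quizzing script. The vocabulary pairs are invented for illustration.
vocab = {
    "la gare routière": "the bus station",
    "el medio ambiente": "the environment",
    "el ayuntamiento": "the town hall",
}

def quiz(pairs):
    """Prompt with the target-language item and check the recalled English."""
    score = 0
    for prompt, answer in pairs.items():
        guess = input(f"{prompt} = ? ").strip().lower()
        if guess == answer.lower():
            print("Correct!")
            score += 1
        else:
            print(f"Not quite - it means '{answer}'.")
    print(f"Recalled {score} of {len(pairs)}.")

if __name__ == "__main__":
    quiz(vocab)
```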
When the students found that they could actually recall things in assessments and mini low stakes tests that they couldn’t do before, I felt a little more emboldened. I didn’t share this strategy with anyone other than the teachers in my family and, of course, the students themselves. The results for this class were excellent. The class had frighteningly high predicted grades but the final results made the local papers!11 I include this not to boast, but to demonstrate the impact of choosing to reject a dogmatic mentality about having to finish the textbook at all costs and instead ensuring the students had actually learned something.
OK, I admit that the proxy for that improvement was the exam, but what was going on in the lessons in the lead-up to the exam did not reflect the exam task. (Dare I be so bold as to claim that it was a sort of stumbled upon crude version of deliberate practice?) For example, rather than setting more and more past reading papers to try to improve the students’ reading paper marks, what became the norm was practising and testing short phrases and vocabulary (which I had identified as enabling the students to achieve a sort of semi-automaticity with their reading comprehension) at spaced intervals across the course.
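To make ‘spaced intervals’ concrete, an expanding schedule of retrieval attempts might be sketched as follows – the 1/3/7/14/30-day gaps are my own illustrative choice, not a prescription from the research.

```python
from datetime import date, timedelta

# A sketch of spacing retrieval at expanding intervals across a course.
# The gaps below are illustrative only.
GAPS = [1, 3, 7, 14, 30]

def review_dates(first_study, gaps=GAPS):
    """Return the dates on which an item should be retrieved again."""
    dates, day = [], first_study
    for gap in gaps:
        day += timedelta(days=gap)
        dates.append(day)
    return dates

for d in review_dates(date(2024, 9, 2)):
    print(d.isoformat())
```

Each successive gap is longer than the last, the idea being that the item is retrieved just as the forgetting starts to bite – the same principle at work whether the tester is a teacher with a whiteboard or a student with a notebook.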
The shift was based on trial and error and a questioning of accepted practice. Following the class’s excellent exam results, I couldn’t explain – with any evidence beyond the results themselves and the students’ own anecdotal comments about how much more language they could now remember – why what I had done had worked better to create the right conditions for them to succeed.
When I found out that there were concepts like ‘spaced practice’ and ‘retrieval practice’ (perhaps it was a sort of bias on my part to hunt them down as a way of confirming why I was doing what I was doing), I finally had an evidence base for what I had been doing. I just hadn’t known why it was working in the context of the students’ improved knowledge (and improved results). I did then, and still do somewhat, bandy the terms around a fair bit, believing I have found the answer.
An army of like-minded practitioners, the researchED-ers, are also homing in on sorting the eduwheat from the pseudochaff. David Didau tweeted the last line of Dylan Wiliam’s presentation slide at researchED Washington in 2016: ‘All teachers & leaders need to be critical consumers of research.’12 When I started my PGCE, even teacher-led research could take the form of discussing learning style questionnaires with students. One shudders to think. We were all passive consumers of this ‘research’ and what came to us from teacher training materials, never really asking for additional evidence of impact. I don’t know why I didn’t feel able to be more critical at the time – perhaps the fear of appearing arrogant or overly negative in front of more experienced colleagues, or a consciousness of my lack of knowledge. Probably a mix of the two. I was doubtless a victim of groupthink bias.
There is always some sort of evidence to suggest that an initiative has worked, but what evidence is the right evidence? Of course, what evidence is relevant depends largely on what you think the purpose of education is.13 If you believe that one of the main purposes of education is to help to make learners cleverer, then having some evidence which shows how one approach might work better than another (under certain conditions) seems an eminently sensible place to start.
Dr Gary Jones (another researchED-er) says, ‘disregarding sound evidence and relying on personal experience or the popular ideas of educational gurus, consultants and bloggers is daily practice. Indeed, a major challenge for educational leaders is to distinguish between “popular” ideas and those “that work”.’ He goes on to name a number of ideas ‘whose use is not justified by research’. One of the practices is ‘Encouraging re-reading and highlighting to memorise key ideas.’14
This practice featured in a review of study skills by John Dunlosky and colleagues in which ten different study techniques designed to boost student learning were analysed.15 These were elaborative interrogation, self-explanation, summarisation, highlighting/underlining, the keyword mnemonic, imagery for text, rereading, practice testing, distributed practice and interleaved practice.
Some of the above strategies are designed to support learning facts, some to improve comprehension and some to do a bit of both. The strategies identified as being most effective across a range of materials were practice testing and distributed practice.16 Other strategies that were rated as promising but require more research were interleaved practice, elaborative interrogation and self-explanation.
Returning to Dylan Wiliam’s session at researchED, as part of his presentation he included the following bullet point: ‘in education, the right question is, “Under what conditions does this work?”’ This, I think, would seem pretty apt for all of the techniques discussed in the Dunlosky review. Elaborative interrogation, for instance, is referred to as being more effective with factual information. You can always refer to some sort of evidence to make a point, of course, but it comes back to the question: what evidence is the right evidence?
In addition to the strategies referred to already by Jones in Evidence-Based Practice, the other approaches identified in the Dunlosky review as being less effective study skills were summarisation, the keyword mnemonic and imagery for text.
I didn’t know about any of this during my first few years of teaching, and without Twitter I might have missed further opportunities to engage with the ideas in the review, as well as the thought-provoking blogs and tweets out there.17 For example, this helpful tweet from Carl Hendrick (@C_Hendrick), head of learning and research at Wellington College, pointed me in the direction of a YouTube clip featuring Professor Dunlosky and associate professor Joe Kim: ‘Every teacher should watch this: John Dunlosky – “Improving Student Success: Some Principles from Cognitive Science”’.18
According to Gary Davies (@gazbd), one of the reasons why teachers don’t engage with research is because they ‘don’t feel as if engaging with research is worth their time’.19 Davies goes on to say that this is a problem with the research, not a problem with the teachers. One of the causes of a lack of engagement is teachers’ lack of expertise in assessing and evaluating research and in becoming researchers themselves. Davies adds: ‘we cannot trust education researchers to do the right research or to think about the implications for classroom practice’.
In the comment section on the same blog, Alex Quigley suggests that there are lots of sources that can bridge the gap between the research evidence and teachers interested in developing their classroom practice through evidence-based practice. He recommends Barak Rosenshine’s ‘Principles of Instruction: Research-Based Strategies That All Teachers Should Know’ as being ‘useable and accessible’.20
Perhaps one of the biggest issues is that academic educational research cannot always be translated into a meaningful and teacher-friendly classroom version. I’m not a researcher and don’t claim to be. I’m commenting on the research I’ve read from a teacher’s point of view. There are enough ‘perhaps’, ‘mights’ and ‘maybes’ in this book to suggest that I might be hedging my bets, but the number of times they’re used should convey that I’m making no wild claims that X or Y is a panacea. It’s more about wanting to be research informed. Also, if all the talk about research and evidence can help to moderate the potential for bias and foster a more considered view, then perhaps that’s no bad thing.21
By the way, this book is intended to be a synthesis of ideas from some of the amazing writers and bloggers out there – a compendium of approaches that might help with more effective learning strategies – of course, interspersed with my own views.22 (Any mistakes or accidental misrepresentations are mine.)
A brilliant report by Nick Rose and Susanna Eriksson-Lee from TeachFirst, entitled Putting Evidence to Work, made a distinction between evidence-based practice and evidence-informed practice. With regard to evidence-based practice, the example they use to define this is: ‘A head teacher deciding whether to buy in a specific reading intervention package based on an EEF summary of an RCT suggesting the programme’s effectiveness.’ Whereas with evidence-informed practice, the example is: ‘A teacher implementing spaced retrieval practice within their regular classroom teaching based on findings from cognitive science; or a teacher trying to improve the quality of feedback given to pupils based on the general guidance from the EEF toolkit.’23
That said, this tweet from PGCE tutor Daryn Egan-Simon is well worth bearing in mind: ‘Prediction: Cognitive science will become the next educational fad to be misinterpreted & poorly implemented in schools across the country.’24 A tweet from one of the Learning Scientists in response is also worth repeating: ‘Not on our watch! @AceThatTest’.25
In 2015, Geoff Petty, author of Teaching Today and Evidence-Based Teaching, wrote an informative piece entitled ‘The Uses and Abuses of Evidence in Education’ referring, among other issues, to the types of bias to be aware of and a possible way of attaining quality assurance over the kinds of evidence that we use to benefit our own practice.26 Petty describes how triangulating evidence between qualitative and quantitative research and the most effective teachers (in terms of a value-added measure) may be the best way to ascertain what might be most effective.
In a blog post, the writer Nick Rose (@Nick_J_Rose) analyses a number of generic study skills, including mnemonics, summarising and self-testing. Rose describes an experience in one of the schools in which he taught where an outsourced company was brought in to teach Year 11 students effective study skills. I can empathise with Rose when he says that he ‘wasn’t particularly convinced the costly exercise improved the quality or effectiveness of revision that our students undertook after the event’.27
Rose argues that teaching generic mnemonic strategies, such as those used on the study skills day, independently of showing students how they could be transferred to their own subjects, is problematic. He cites a study by David Perkins and Gavriel Salomon on transfer of learning in which they state: ‘In areas as diverse as chess play, physics problem solving, and medical diagnosis, expert performance has been shown to depend on a large knowledge base of rather specialized knowledge … General cross-domain principles, it has been argued, play a rather weak role.’28 The nature of each subject that students study at school is arguably highly situated and consequently domain specific. This can make things difficult to pin down in terms of transferable learning from one domain to the next – unless each teacher in each subject domain teaches the skills to the students themselves.
In a vast 1996 meta-analysis on the ‘Effects of Learning Skills Interventions on Student Learning’, John Hattie, John Biggs and Nola Purdie reviewed 51 studies of interventions, involving one or more study skills, that were designed to improve student learning outside normal teaching circumstances. They found that the further away from usual teaching practices these interventions were, the more difficult it was to identify any measurable results. They concluded that, apart from training for simple mnemonic recall, situated cognition (i.e. knowledge constructed within and linked to the social, cultural and physical context of an activity) is recommended: activities should sit within the same domain as the subject being studied and promote a high degree of active learner involvement and metacognitive awareness.29
The lack of transfer across subject domains is seemingly a key issue. Modelling strategies with the content from the subject domain that is to be revised, with the subject expert teaching how students might apply the learning strategy, obviously relies on an expert’s knowledge of that domain. This is in contrast to the way that costly specialist intervention companies teach revision strategies: modelling how certain techniques (e.g. memory palace technique, loci method) might work to help Year 11 students remember key details from a part-time actor’s invented story about a red car travelling at 40 mph towards Birmingham on a Saturday afternoon at 3 p.m.
I share Rose’s view about revision gurus and companies failing to explain to students how revision strategies actually transfer to the various subject domains that the students take at GCSE. I also agree with deputy head teacher Ruth Powley (@powley_r) when she observes that ‘Revision strategies should be subject-specific’. She adds:
Evidence Into Practice suggests here that, ‘the sorts of “study skills” events (which schools often outsource to external providers) are unlikely to have any positive impact on student outcomes. A better plan might be to teach teachers the various mnemonic techniques and encourage them to find examples of where the ideas might be profitably applied within their own subject domain.’30
Before I read lots of blogs and studies on this, I wondered whether all of this was a straw man. It probably is, of course, for those schools which don’t outsource their revision techniques.
Therefore, the examples provided in this book are domain-specific models (or snapshots) of the learning strategies, designed by subject teachers in the domain in which they are modelled and completed by an erstwhile GCSE student (yours truly!). The modelling of how the answers could look when using elaborative interrogation and self-explanation alongside subject content represents my interpretation of applying these techniques (trialled using my own limited domain knowledge).
Then again, perhaps it is too simplistic to talk about domains solely as subjects. On the Mr Barton Maths Podcast, host Craig Barton (@mrbartonmaths) – maths teacher and TES maths adviser – discusses with Kris Boulton, TeachFirst maths teacher and Up Learn’s director of education, what constitutes a domain: is maths a domain, or is geometry a domain within maths?31 Having said that, it still makes sense that the subject teacher who has expertise in these domains (within a domain) is the one to lead on teaching the students how to apply a technique to support their learning away from the classroom.
In What If Everything You Knew About Education Was Wrong?, David Didau defines ‘transfer’ as ‘applying knowledge learned in one situation to a new situation’.32 In What Every Teacher Needs to Know About … Psychology, Didau and Nick Rose go on to discuss context and transfer, referring to ‘far transfer’: ‘So-called “far transfer” between different subject domains – the idea that you could learn the skill of analysis in history and then apply it to physics – is much more difficult than is often supposed.’33
In terms of a study skill which might transfer across several domains’ worth of content, how about a type of practice testing (aka retrieval practice) known as a free recall test or ‘brain dump’? This will be discussed in more detail later, but put simply it is about reading some content, putting the content out of sight and then writing out everything you can recall from memory with no prompts. Couldn’t this relatively simple technique transfer across a multitude of domain content?
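As a rough, hypothetical sketch of how the checking step of a brain dump might look – my own illustration, with invented terms and text – consider:

```python
# A sketch of self-marking a brain dump against key terms from the notes.
# The terms and the recalled text are hypothetical examples.
key_terms = {"evaporation", "condensation", "precipitation", "percolation"}

brain_dump = """
The water cycle: evaporation from the sea, condensation into clouds,
then precipitation falling over high ground.
"""

dump = brain_dump.lower()
recalled = {term for term in key_terms if term in dump}
missed = key_terms - recalled

print("Recalled:", sorted(recalled))
print("Back to the notes for:", sorted(missed))  # the feedback step
```

The point is not the code but the loop it captures: recall first, then check against the notes, then return to whatever was missed.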
In ‘When and Where Do We Apply What We Learn?’, Susan Barnett and Stephen Ceci present a taxonomy for far transfer.34 Six of the factors referred to involve the context in which transfer takes place, one of which is transfer from one knowledge domain to another. The example given on the taxonomy for ‘near transfer’ in the knowledge domain is mouse versus rat. The ‘far transfer’ is science versus arts. The authors point out that physics and chemistry would most likely have more aspects in common than physics and English, and would therefore be nearer to each other in transfer terms.
So, are students who are delivered outsourced revision sessions being as well catered for in terms of the transfer of learning strategies as they would be if their teachers (i.e. domain-specific experts) were teaching them how to apply the learning strategies? It’s straw man time again.
Practice testing or retrieval practice has been established in the edublogosphere for a while, so readers may already know it’s nothing new. However, in the context of the Dunlosky review, it is worth bringing together some of the references on practice testing and the whys and wherefores as to its gold-star rating in the authors’ summary paper, ‘What Works, What Doesn’t’.35
In What If Everything You Knew About Education Was Wrong?, Didau discusses the benefits of the testing effect.36 Put simply, testing students is not limited to simply getting them to sit an exam under test conditions. The process of getting them to retrieve information (testing their retrieval) benefits retention in the longer term, helping to cement that knowledge. Throw in some feedback and it’s a potentially powerful practice strategy.
In Making Good Progress, Daisy Christodoulou describes quizzing as being a beneficial tool which allows both teacher and student to find out if they have grasped something or not. Christodoulou also refers, as does Didau, to the view that testing through recall of information strengthens memory itself and improves understanding.37
Alex Quigley (@HuntingEnglish) discusses practice testing in The Confident Teacher (acknowledging that retrieval practice doesn’t sound quite as threatening as practice testing), where he compares retrieval practice with a study skill already mentioned: rereading. By getting students to do a sort of free recall test at the start of the lesson (for more on this see Chapters 3 and 4), writing out everything they can remember (retrieval practice) – versus rereading notes and copying them out – Quigley points out that it is the more effortful retrieval that leads to greater encoding in long-term memory. While retrieval without notes is harder in the first instance, the act of retrieval leads to the information being more strongly embedded in the long-term memory. It is a ‘deliberate difficulty’.38
While the example that Quigley refers to is classroom based, clearly the same process could be applied away from the classroom by a student who is revising: reading notes one day, leaving a little time for the forgetting to kick in and then writing down everything they can remember about the topic as a free recall test. This could then be followed at a later point by looking back at the notes and identifying any misconceptions in what they wrote down during the free recall test. Finally, they can test whether what they have retrieved transfers: by completing some practice questions on the material or, perhaps even better, by seeing whether it can be applied to another area in the same domain – or even to another domain entirely (the dream!).
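Laid out as a dated checklist for a single topic (with gaps of my own illustrative choosing), that cycle might look something like this:

```python
from datetime import date, timedelta

# A sketch of the read -> forget -> free recall -> check -> apply cycle
# as a dated checklist for one topic. The gaps are illustrative only.
def revision_plan(topic, start):
    steps = [
        (0, f"Read your notes on {topic}."),
        (2, f"Brain dump: write out everything you recall about {topic}, notes closed."),
        (1, "Check the dump against the notes; mark any gaps or misconceptions."),
        (3, "Answer practice questions, or apply the material to another area."),
    ]
    day = start
    for gap, action in steps:
        day += timedelta(days=gap)
        print(f"{day.isoformat()}: {action}")

revision_plan("the water cycle", date(2024, 4, 1))
```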
It could also take the form of revision homework. While we’re touching on homework, it’s worth mentioning a blog by Joe Kirby (@joe__kirby) which suggests some brilliant ideas for revision and self-quizzing.39
While retrieval practice is not a panacea, perhaps teaching students what it means and how to use it alongside domain-specific knowledge could be a positive move towards promoting a learning strategy which might transfer to students working at home or later on in life.40 It is certainly more effective than one-, two- or three-off generic revision skills teaching provided by an outsourced company.
In Learning as a Generative Activity, Logan Fiorella and Richard E. Mayer define generative learning as ‘helping learners to actively make sense of the material so they can build meaningful learning outcomes that allow them to transfer what they have learned to solving new problems’. They go on to clarify that it is ‘a process of sense making, in which you try to understand what is presented by actively selecting relevant pieces of the presented information, mentally organizing them, and integrating them with other knowledge you already have’.41 The authors set out the stages of generative learning that learners go through, with learners initially selecting the pertinent material to deal with, arranging it into ‘a coherent cognitive structure in working memory’,42 and finally integrating the material with previous knowledge from long-term memory.
The idea of helping the learner to understand the material is a key theme in Learning as a Generative Activity. Perhaps it is no coincidence, therefore, that elaborative interrogation is regarded as a promising revision technique by Dunlosky et al. Some of the techniques discussed in the Dunlosky review, and in Learning as a Generative Activity, involve elaboration and are essentially ‘ways of translating the lesson into another form of representation’.43
Asking why and encouraging students to elaborate on their answer is also mentioned as part of a whole-school revision strategy by one of the authors of Making Every Lesson Count, Shaun Allison (@shaun_allison), in his blog post on ‘Supporting Learning Through Effective Revision Techniques’:
… we remember things when we have to think about them. So when supporting students with revision we should be doing more of the following:
Testing.
Spacing it out.
Keep asking ‘why’?
Building on what they know.
Getting them to explain their steps in problem solving.44
The idea that there is some sort of elaboration which creates meaning to something being revised through connecting new information to existing knowledge is key. Indeed, without some form of hard mental effort the effects of retrieval practice may be diminished. Nick Rose observes that ‘the testing effect disappears where there is no mental effort involved in the retrieval’.45
In Cognitive Psychology, E. Bruce Goldstein, in a section on how to study more effectively, also refers to elaboration – that is, ‘thinking about what you are reading and giving it meaning by relating it to other things that you know’ – as an important aspect of studying.46
The intention of this book is to discuss what has been shown to work more and less effectively as set out in the Dunlosky review and present how some of the more effective techniques could look. I have also included references to strategies discussed in Learning as a Generative Activity, as well as potential ways of elaborating on the content through modelling generative learning activities.
There is some overlap between the strategies discussed in the Dunlosky review, Learning as a Generative Activity and the brilliant Learning Scientists’ ‘Six Strategies for Effective Learning’.47 For instance, self-explanation is discussed in the Dunlosky review and in Learning as a Generative Activity. Interestingly, summarisation is not recognised as such an effective technique as practice testing in the Dunlosky review, but it is included in the strategies which promote generative learning in Learning as a Generative Activity. As summarisation has not made the cut in the Dunlosky review as a top-mark technique, I have not included it alongside the high-to-middling impact techniques in Chapter 4.
The strategies from the three crossover sources are set out in the table below.
Another observation is that in the Dunlosky review, imagery for text did not meet the criteria for a top rating, but dual coding is referred to by the Learning Scientists and learning by imagining features as a technique to foster generative learning.
In fact, while we’re here, one of the Learning Scientists’ strategies which is not referred to explicitly in the Dunlosky review or in Learning as a Generative Activity is concrete examples – that is, teaching abstract ideas through concrete examples. Here is a domain-specific strategy (created by Joan Fuller, head of computer science at Heart of England School in Solihull) which helps to show how an abstract concept could be delivered by referring to a concrete example. Domain-specific examples created by domain experts; well, I couldn’t have created this example myself, could I?
The domain name system is just like a postcode.
Postcode, e.g. TR19 7AA.
A postcode gives a structured label to a complicated and long building address.
It also groups together several buildings which are near to each other, making it easy to find and deliver post and parcels.
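And to push the analogy one step further – my own addition, not part of Joan’s example – the lookup itself can be made concrete in a couple of lines of Python:

```python
import socket

# DNS maps a memorable name to the numeric address that networks actually
# use, much as a postcode stands in for a long building address.
hostname = "www.example.com"  # a placeholder domain
print(hostname, "->", socket.gethostbyname(hostname))
```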