Rock’n’Roll Teaching

1-2-3-4 Hey Ho, LET’S GO! 

No, with apologies to The Ramones and anyone I’ve lured in here on a false promise, there is actually no rock’n’roll in this post. No music at all, come to that. 

I blame Steve Richards. Steve is a UK political commentator par excellence, with a podcast and live show called Rock’n’Roll Politics. I highly recommend. But there’s no rock’n’roll in them, either. 

In the live show he talks, without notes, for about 45 minutes; has a short break; then comes back and does another 45.   I can’t tell whether he uses notes for the podcast, but I’d wager he doesn’t. 

It’s astounding. His range of reference, his depth of knowledge, his understanding of the big trends and small nuances are all brought to bear in some acute analysis of where we are, how we got here and where we might go next. And all, apparently, off the cuff. 

Except, of course, it’s not off the cuff at all. Steve’s been plying his trade for, and I hope he won’t mind me saying this, years. There is absolutely no way he could do this, in the way that he does, were he a rookie. 

The young Steve wouldn’t, couldn’t, have the hinterland to be able to get up on stage and talk with such authority, wit and depth about such a vast range of issues. He just wouldn’t possess the experience, the wisdom, the accumulated bank of anecdotes and examples which allow slightly older Steve to speak so fluently and, apparently, effortlessly. 

But it’s not effortless either. Steve has spent a career in and around Westminster: talking, reading, writing, researching, revisiting, reappraising. That amounts to a lot of hours of hard graft. He couldn’t do what he does without it.  

And here’s the link to teaching. We’ve all heard people say, “My best lessons are often the ones I haven’t planned!” Rock’n’roll teaching, in other words: just getting up and doing your thing, apparently without effort and certainly without forethought. 

Rubbish. Piffle. Utter, total, complete tripe. 

The only reason someone is able to teach an effective lesson without any planning – or, as a colleague calls it, “door handle planning”: planned as you enter the room – is because they are already expert. 

They know to start with some retrieval practice. They know not to speak too fast, or go too slowly. They know not to overload, but also to challenge. They know roughly how long things take. They know that the pupil on the end of the second row will need some additional help and that the one at the front will be chewing gum – and what to do about it. 

And how do they know this? Because they’ve done it a hundred times before. Through experience, trial and error, feedback, reflection – heck, maybe even reading a blog or a book or going to a conference – they know, more or less, what’s going to work, with that class, on that topic, at that time of day, on that day of the week. 

It’s true that they may have been more on their toes in that unplanned lesson, their teaching antennae more attuned. That might have sharpened their practice. They might also feel a rush of endorphins and relief at the end of what’s turned out pretty well, actually. 

But the lesson wasn’t good because it was door handle planned. It’s because, like Steve, they’ve done the hard graft.  

So, don’t lionize the off-the-cuff. It devalues your own hard work and sends all the wrong messages to your less experienced teachers. You don’t get to door handle plan until you’ve desk planned many, many times over.

Not very rock’n’roll at all, really.  

Don’t do stupid stuff

“Don’t do stupid stuff.”

That, according to multiple sources, was Barack Obama’s guiding principle for foreign policy.  (And yes, I know he didn’t actually say “stuff”, but this is a family blog.)

I think that is an excellent mantra for most things, including teaching.  I doubt anyone would disagree.  Because, as Obama supposedly went on to ask, “Who exactly is in the stupid stuff caucus?”  In other words, who is so stupid that they would do stupid stuff? 

And yet, just recently I’ve read a blog and thread that are so self-evidently sensible, so incontrovertibly logical, so crushingly right, that I can’t believe anyone would disagree with them.  But they do, or at least are not practising what’s preached; as one of the authors says, “In very many lessons, it seems to be a missing piece of the pedagogy.”

So let’s put this to the test.  I’m going to outline what was written, and you can decide whether you agree.

First, Tom Sherrington (@teacherhead). In this blog, he says that you need to reassure yourself that your class has understood your explanation before you move on.  So far, we’re all with him, aren’t we? You’d deffo be in the stupid stuff caucus to think otherwise. (“No, I prefer to move on before my class have understood what I’m on about.”) 

He then explains that just asking, “OK everyone?  Got that?” or similar is a really bad way to check understanding.  Wait, what?

All you’ll get is a murmur of assent, at best. That tells you nothing. Instead, says Tom, do a quick bit of think/pair/share, asking them to explain what you’ve just told them. Then pick some random samples to hear (except not random really: ask those you know you need to focus on). Only then, if you are satisfied, go on.

Second, Adam Boxer (@adamboxer1).  In this thread he explains how to make the most of whole class questioning.  In brief, he advocates unrandomised cold calling: targeting particular questions at particular pupils.  This allows you to tailor what you ask of whom.

Lolly sticks or spinny wheels or “pick a random student” don’t do that: you might end up with a really hard question going to a weaker pupil, to everyone’s embarrassment and your shame.  Hands-up rewards only the enthusiastic and confident – which, as it goes, I do like to do sometimes, but only on the understanding that I will sometimes ignore them.

Both of those ideas fall firmly into the “opposite of stupid stuff” basket.  I’m not even going to put a conditional on it.  No “in most cases” or “in my experience.”  They just do.

Take Tom’s.  We’ve all done the “Any questions?  No?  Good,” and we all know it’s flawed. As a pupil it can be hard to expose your own weaknesses; it takes some serious confidence, of the type not often found in school children, to admit to not understanding something your teacher has just told you and which they obviously expect you to understand. 

And for Adam’s, if all you’ve done is take hands-up, or not properly directed your questions, you’ve done everyone a disservice: you’ve ignored the quieter ones, not given the less confident the chance to get things right, not tested your assumptions about who’s learned what. 

If you think they’re wrong, I’d love to know why, and your alternatives.  I don’t want to be in the stupid stuff caucus, after all.

These things take time

You know what gets my goat? Lots of things, actually. I’m a 52-year-old man, it’s what happens. But what’s getting my (really rather unfortunate) goat right now is edutwitter’s propensity to hand out simplistic solutions to intractable problems. 

One of the many intractable problems in teaching is how to save time. We’re all very busy and anything that allows us time to breathe is welcome. As a result, effort-reducing tips are pounced upon, retweeted, liked and gratefully incorporated into our working lives. Makes total sense. 

One such was a recent tweet which exhorted SLT to save colleagues’ time through (among other things) “no marking of exercise books.” I wrote, “No marking?” A response, from a third party, ran thus: “Of books. It’s pointless.” 

Marking books is pointless? Hold the phone! Think of the time I can save, the things I can do, the places I’ll go! 

Except that, when I probed a bit further, it became clear that it wasn’t that simple.  

You’ll have realised already that, “Marking books is pointless” doesn’t mean, “Don’t do any marking.” As I replied, “I imagine this means, ‘Don’t mark the day to day stuff, just things you can meaningfully feed back on.’”  

I continued, “Even so, if you don’t check the day to day, how do you know they’ve noted the right stuff, correctly?” 

Excellent question, if I may say so, to which the usual answer is, and indeed was, go round and scan as they work.  

Let’s think about that. 

First, what am I going to check? Not that they’ve done their homework: I can do that by asking them to hold it up so I can see it (you’re welcome). I won’t be checking a full assessment either, obviously. That needs proper attention. 

No, as a History and Politics teacher I’m more likely to want to know, let’s say, how well they can construct a persuasive paragraph or analyse a source.  

To do that, I actually need to read what they’ve written. So that’s what I’m looking for when I mark books or patrol the room: those practice paragraphs or analytical sections which will tell me how they are getting on and, crucially, inform what I need to do. So far, so fine. 

Next, how many can I see in a lesson? I teach in the independent sector and my largest KS3 class is 24. Many of you will have classes much larger. I see them for 50 minutes, three times a fortnight. There is no way I can regularly, usefully, check even 24 books, let alone 34, just by scanning them as I teach.  

In my 50 minutes, which may not be 50 minutes if they are late out of assembly, have come a long way from a previous lesson, or I get held up: 

  • I will probably want to do some retrieval from last lesson. 
  • I may want to introduce a new concept. 
  • I am likely to encourage paired discussion and listen in while they are doing it, before coming back to a group discussion. 
  • I may (may) then want them to do some writing, during which time I could circulate and scan books. How many of those persuasive paragraphs can I actually look at, properly, in that time? Five, maybe? Eight? Probably not many more, given their importance.  

And then I have to wait until next lesson to try to get round another five or so. Very quickly, it will be a long time since I’ve seen some of their books. All the while, the number of things I want to see is mounting up. 

Maybe in other subjects it’s different. Maybe teachers of something with less of a written element can get round books more effectively in class. But in mine, I really can’t see how circulating can be a direct replacement for actually taking in the books and looking at them. So, “Go round and check as you teach” doesn’t render marking books obsolete. 

Happily, some of my interlocutors were more nuanced.  One recommended, across a couple of tweets, a combination of live verbal feedback while circulating, use of mini-whiteboards to check understanding of key questions, frequent low stakes self-marking quizzes, modelled and partially worked examples within schemes of learning with frequent deliberate independent practice opportunities, live marking as students complete, and self/peer marking of responses using a RAG rating sheet for each question type. 

Goodness. Admirable. What a lot that packs in. Mini-whiteboards. Low stakes quizzes. Worked examples. Independent practice. Live marking. Self and peer marking. RAG ratings. And some circulation. Sounds brilliant. I mean genuinely brilliant. 

But. That response is no quick fix to a workload issue. It’s a whole pedagogical approach that takes time and effort to understand, practise and inculcate. It’s a million miles from, “Marking books is pointless.”  

So let’s stop kidding ourselves. Marking books isn’t pointless. Rather, it may (may) be rendered so if MWBs, low stakes quizzes, circulating round the class and a range of other things can give you confidence that you know how each of your pupils is getting on, without actually taking in their work and looking at it.  

But who’s going to retweet that? 

Not dead yet

Regular readers of this blog (still there, Mum?) will have come to appreciate its meticulous attention to detail, the carefully crafted sentences, the devastating use of apposite quotations, the unrivalled intellectual rigour and the pin-sharp analysis.

Well, not this time.  I’ve just read something that’s really ticked me off and I’m going to respond, rapidly and quite possibly injudiciously.  At least I’m not doing it by tweet, though.  I’m not that reckless.  But, there may still be spelling mistakes, loose sentence construction and even a swearword.  OK?  OK.

The thing I’ve read is Grainne Hallahan’s recent TES article entitled Where did all the over-50s teachers go? Let me be clear from the outset: it’s not Grainne that’s annoyed me.  It’s a good article, well researched, and you should read it.

No, what’s annoyed me are one comment and one, er, sense.  Let’s go in order.

Quite early in the article Jack Worth, workforce lead at the National Foundation for Educational Research, is discussing the decrease in the numbers of teachers over 50.  He is quoted as saying, “[This] can be partly explained because there was a big cohort of teachers over 50 who, in 2010, were approaching retirement.”

And later: “More teachers than usual decided…that they had had enough of teaching and this may have been particularly the case for teachers over 50 as they were approaching retirement anyway.”

Well, give me a moment to lever myself off the scrapheap.  I am over 50.  I am a teacher.  And I am in no way approaching retirement.

On a practical level, it’s still many years away.  But, much more importantly, the quotations suggest that 50+ teachers have one foot out the door, one eye on the armchair, and are just marking time until no marking time.

Stuff that.  Last book I read?  Peps Mccrea.  Next on the list?  Hendrick and Kirschner.  June 10th: presenting at my third ResearchEd for this school year.  Last Tuesday: delivered second CPD session of the year: Rosenshine last time, cognitive load theory this.

And I’m no kind of exception.  Go to any educonference and you’ll find plenty of us superannuated has-beens, investing the same time and energy in our professional development as the young guns. Hell, some of us even give the keynotes. 

Sit in any common room and listen to the ancients speak of how they want to get better at teaching Liberalism or photosynthesis.  We still care, ladies and gentlemen.  We know we’re not the finished article.

Which leads me to the second part of the article that set me off.   It’s harder to sum up, as it’s never explicitly said, but it’s the sense that the next generation of superstar educators should listen to us greyhairs because we’ve been round the block a few times.

Don’t patronize me.  Respect my teaching because it’s good, evidence informed, responsive.  Notice that I’m still creating new resources because I’m keeping up with scholarship. Bow down and worship my extraordinary ability to draw brilliant answers out of the most disaffected child. 

But don’t think that just because I’m old, I must be good.  I’ve been playing cricket for years and I’m still distinctly average.  Talk to me because you can learn from me, not only because I’ve got some war stories.

So gather round, my young apprentices, and remember: we 50+ers are still here, and still learning, and that’s why you should enjoy being our colleagues.  Not only because we’re not dead yet.

“Why not make a pillory?” Pointless homework and how to avoid it.

This is a picture of a History homework done by a Y7 of my acquaintance (not at my school, I hasten to add).  She had an entire half term to do it.  It’s a snakes and ladders style board game about being a successful medieval king.

[Photograph of the board game homework]

Beautiful, isn’t it?  It took hours.  Finding and shaping the board, measuring and drawing the lines, drawing and cutting out the swords and the rats, making the counters, writing the rules.  And – oh yes – about 30 minutes deciding what information to write in the squares.

This, then, is the epitome of a Pointless Homework.  Ostensibly it’s a History task, but the amount of history that has been learned or used is minimal.  It’s good to uphold the law, it’s bad to raise taxes too high, it’s good to have an heir, it’s bad to be invaded.  Incisive!  The rest of the time has been spent making a game, which hasn’t furthered her knowledge or understanding of history, or honed her historical skills, in the slightest.

(As an aside, this is far from the most egregious example of a pointless homework I’ve come across.  That accolade goes to a worksheet I found at my previous school when I began teaching in 1997.  It was on medieval punishments.  The homework task was this: “Make your own stocks or pillory.  Use cardboard or even wood!”  Imagine my delight when, returning to the school after a thirteen year absence, I found the self-same worksheet still sitting in the filing cabinet.  Like seeing an old, if slightly eccentric, friend.)

Nevertheless, the board game is still pointless and, sadly, far from unique.  We’ve all heard of – may even have been party to – homeworks such as “make a model of a cell” (you can insert your own subject’s version here).  The occasional twitter threads about this bear witness to the prevalence of such tasks across the curriculum.

Why does this happen?  Here are some of the justifications I’ve heard for “Build a model of a castle” and the like, with my comments underneath.

  1. We’ve done it for a long time.

Longevity does not of itself automatically confer benefits.

2. The children really like it.

a) My child didn’t.  If only by the law of averages she cannot have been the only one.  

b) Even if they do all like it, that’s no justification. Most children also really like sweets…

3. It’s important that every child gets to do something they are good at, and practical tasks help the ones less good at reading and writing to shine.

Yes, children should be allowed to succeed.  But:

a) Children who are not good at reading and writing really need to get better at it, and that will not happen by making board games in a subject that requires good literacy.

b) I would be very surprised to find my daughter spending a whole term of, say, DTE homework researching and writing an essay because “it’s good for children who aren’t good at the practical elements of DTE to be able to shine in this subject.” (NB this is not meant to be a dig at DTE, I’m just inventing an example to make a point.)

4. It helps them with their time management.

At KS3 at least, expecting them sensibly to spread a single task over half a term is, how shall I put it, ambitious. 

What, then, makes a Pointful Homework?  To my mind, in my subjects (History and Politics; but more widely too I reckon) it boils down to this: a task which either cements, acquires or applies knowledge.  For example:

  • Cementing.  This is basically learning.  Sometimes some time (or all the time, at Michaela and probably elsewhere too) does need to be given over to just getting stuff into brains.
  • Acquiring. This could happen in lots of ways.  At my school we are big on what we call pre-paration and others might term flipped learning: that is, reading, writing or researching something as the basis for the following lesson.  This isn’t a blog about how best to acquire knowledge, but just briefly, don’t say “make notes” unless you’ve taught them exactly what you mean, and unless you’re sure they can’t just do it on autopilot.
  • Applying.   It’s not enough to acquire and cement: what’s the point of massive knowledge if you don’t develop the skills to use it effectively?  Homework is a good time to practise applying what has been learned, to familiar questions where more cement needs to be applied, or to new ones where the aim is to challenge.

These three types of homework – cementing, acquiring and applying – allow a huge range of interesting, accessible, challenging and, well, pointful activities.  From self-quizzing to reading via mind-mapping and MyMaths there are endless possibilities.  They also provide a neat sanity check: if your task is doing none of the three things, it could be a Pointless Homework.  However, to work well the three types require adherence to the spirit rather than the letter of their law.  For example, some knowledge was applied to the making of the board game. Not much though, and only to the extent of “What are the biggest generalisations I can think of that will look good in a game?”  Similarly, copying a chunk of text from a book is, strictly speaking, acquiring knowledge.  It won’t sink in though.

In the end, what you set for homework will depend hugely on your pedagogical preferences.  But if those preferences involve the equivalent of sewing and stuffing a model of a cell, I urge you to reconsider.  Cement, acquire, apply.  That said, imagine if a child actually did bring a home-made pillory into school.

The mundanity of excellence

Or, we are what we repeatedly do.

I was recently introduced to a new phrase: the mundanity of excellence.  (Thanks, Alex Richardson – @1917AndAllThat).  I love it, and think it’s highly applicable to teachers.  Here’s why. 

We can all produce the odd brilliant lesson. You know the sort: the ones you really prepped for, maybe created some sh*t hot new resources or lined up a fantastic sequence of questions. Or maybe you know, you just know, that you’ve laid all the groundwork and today’s discussion will fly.  These are lessons they love, you’re proud of, and you all remember. 

But, those aren’t the lessons that make you a brilliant teacher.  They just show that you can do it from time to time.  Similar examples: 

  • In the Euro 2016 football tournament, Wales’ Hal Robson-Kanu scored an undeniably world class goal.  But he’s no-one’s idea of a world-class striker. 
  • My wife has a theory that every band has one great song in them.  But that doesn’t make them a great band.  Yes, I’m talking about you, Babylon Zoo and, er, Whigfield. 
  • [You can insert your own, more culturally sophisticated, example here.] 

By contrast, the truly excellent teachers aren’t the ones who sometimes knock it out the park.  They are the ones who sometimes knock it out the park but always, always make good contact.  The ones who produce not one great single, but an album that’s almost all killer and hardly any filler. (I mean, we all have off days, right?) 

Aristotle put it thus: “As it is not one swallow or a fine day that makes a spring, so it is not one day or a short time that makes a man blessed and happy.”  Marvellous.  But, sacrilegious as it may be, I prefer the more modern take by American historian Will Durant: “We are what we repeatedly do.  Excellence, then, is not an act, but a habit.” 

Not an act, but a habit.  Not a once-a-season thirty yard screamer.  Not a once-a-month painstakingly prepped, expertly engineered debate on the causes of World War One.  

Instead, the screamer now and again but all the mundane, literally everyday, things you do to make yourself an excellent teacher. Things like: 

  • doing your register, accurately and on time; 
  • turning up for break duty, even in the rain; 
  • upholding the uniform rules, so that there’s a consistent line among all staff; 
  • marking regularly and helpfully (but not overly frequently); 
  • being pleased to see your pupils, even if you’re not feeling like it; 
  • asking how your tutee’s poorly pet dog is getting on; 
  • expecting high standards in your classroom, of behaviour, effort and achievement; 
  • directing your questions to appropriate recipients; 
  • looking the part; 
  • not snarking at rules you don’t like or policies you don’t agree with (there’s a time and a place); 
  • helping colleagues in a jam; 
  • helping photocopiers in a jam; 
  • carefully calibrating the amount of challenge; 
  • remembering that not every lesson can be perfect; 
  • reflecting on your practice, and trying new things sometimes. 

None of these are glamorous.  They aren’t all immediately noticeable and won’t all be apparent to colleagues, line managers or SLT.  They won’t have the commentators on their feet or the critics losing their minds.  But they are some of the basics, the building blocks of brilliance.  Do them consistently well and you’re well on the way to excellence.  We are, after all, what we repeatedly do.   

Trust me. I’m a teacher.

Teachers like to be trusted.  We are, mostly, qualified; we take pride in our work; we know our subjects; we know our classes; we make dozens of professional judgements every day.  So not only do we like to be trusted, we deserve to be trusted.

All (well, most) of that is entirely reasonable.  But one part, a crucial part, is rather vague: the word “trust”.  I’m not talking here about safeguarding-style trust.  I mean the kind of trust that teachers are entitled to expect from their managers and SLT.  (I’ll leave OFSTED out of this; they are already under heavy scrutiny, for obvious reasons.)  How much trust is that, and what does it look like in practice?

I’ve discovered from a couple of recent twitter conversations that for some people almost any amount of active management amounts to a lack of trust, even an affront to their professionalism. Nevertheless, I have a pretty fixed view that managerial oversight, done well, builds trust and improves professionalism.  Here’s why.

1. The right approach shows trust. At my school we have Professional Learning, not performance management or appraisal. The phrase is carefully chosen.  It shows trust and teamwork: I am not seeking to judge, I am seeking to find ways to help you get better at your chosen profession;  I believe that you want to learn; you are entitled to that respect.

The terminology is important.  It sets the tone.

2. Good targets beget improvement and trust. A good manager sets relevant and challenging targets which, if worked towards, will help the teacher improve.  One twit-con I had revolved around whether “Read [insert name of edu-book]” was an acceptable target.  Some felt that it was: a teacher with additional pedagogical knowledge will be better at their job.  I think otherwise.

It’s obviously a good thing that teachers read more about their subject and their craft. If a member of my team came to me and asked whether, as one of their professional learning goals, they could have “Read Hendrick and Kirschner’s How Learning Happens” I’d be delighted that they were engaging with pedagogical theory.  But I wouldn’t want the target to stop there.  I’m interested in what they are reading; I want to know what they think about it; I think there might be something others (including me) could gain from their thoughts.  So I’d make the target different.  Something like, “Read How Learning Happens by [date]; discuss your thoughts on it with Sam [me] by [date]; present key findings to a department meeting on [date]; implement and review a new technique by [date].” We might even break it down to reading a couple of chapters at a time, so that nothing gets lost along the way and to keep the demands reasonable.

In that way, my team member gets to read the book they want to read.  They also know that I am keen for them to do it, to get their thoughts on it and to pass those on to others. That shows high levels of trust: I trust their views sufficiently to think that they are likely to be worth passing on.  It would, I believe, give my team member a sense of ownership, responsibility and pride that they have been entrusted to help their colleagues improve.  Such goals sit beautifully with a professional learning approach.

3. If we want to be recognised as professionals, we need to expect some level of oversight. Trust and carte blanche are worlds apart.  Trust has to be earned and maintained; one’s professional judgement is never unimpeachable, however qualified and experienced one may be.  So, teachers should expect to be in some way “managed”.  It’s absolutely in order for a manager to ask a member of their team why they have done a particular thing in a particular way at a particular time; or how they are getting on with their professional learning goals; or to pop in to lessons (by agreement – otherwise there may well be a trust issue).

A non-teaching organisation I previously worked for actively invited oversight because we wanted to show we were confident in our work and proud of what we did.  I think that’s the way to go.  You’re a proud professional, right?  You do a pretty good job, don’t you?  Then let your manager, and others, see that.  Welcome the oversight.  Done properly, it’s not scary and it will probably make you better. 

Trust me on that.  I’m a teacher.

Cognitive Load Theory: mind the gap(s)

I’ve got a worry about Cognitive Load Theory (CLT). I think there’s a gap between what’s meant by the theory, and how it’s actually being used.  I reckon there are four key areas where this is happening, and none of it in a good way. 

Gap 1: between those who see CLT as a manual of teaching, and those who don’t. 

The experts say it’s not a manual.  Here’s edu-legend Dylan Wiliam. 

Clear enough, from one who knows.  But wait, what’s this, from the Education Corner website (“the internet’s most comprehensive guide to the best education sites, resources and articles on the Web” apparently)? 

And it’s not just Education Corner.  Mark Enser, while admittedly not quite as unequivocal, wrote this in a TES article from 2019: “CLT is a theoretical model that seeks to explain how learning takes place and which methods of “instructional design” (or “teaching” to you and I) will be most effective as a result.” 

Enser goes on to reference a document from the New South Wales education department which lists lots of ways to enact CLT in your classroom, including tailoring lessons according to existing knowledge, using worked examples for new content, gradually increasing independent problem solving, cutting out inessential information and presenting all the essential information together.  Eminently sensible. But it sounds, to my ears at least, a bit like teaching us how to teach. 

So you can see why Education Corner might be so effusive.  But, when someone as steeped in the knowledge as Dylan Wiliam says CLT doesn’t – can’t – teach us how to teach, we should listen.  I think what he means is that CLT, while very useful, can’t tell you what to do in every situation.  Even if he doesn’t mean that, it’s a sensible rationale.   

CLT can give you some helpful tips on how to conduct your teaching, but it’s not a manual.  It’s an approach, a set of core values that could run through your practice like the writing in a stick of rock.  If you forget that, and stop adapting your teaching to the pupils in front of you, you’ll fall through the gap between theory and practice.  “But I cleaned up my powerpoints and did worked examples, just like CLT said!” “Ah, but did you notice that half your class didn’t actually need all those worked examples, so in fact you were holding them back?”  

Gap 2: between where the theory is right now, and where you are (or might be) 

A little while ago we all knew, or thought we knew, that there were three types of cognitive load: intrinsic, extraneous and germane.  Not any more.  Now, there are only two: the first two.  Germane is no longer germane.  Don’t take my word for it.  Here’s Paul Kirschner, from three years ago: 

[Embedded tweet from Paul Kirschner]

“John” is, I assume, John Sweller, Mr CLT himself, who indeed wrote this as far back as 2010: “Germane cognitive load does not constitute an independent source of cognitive load. It merely refers to the working memory resources available to deal with the element interactivity associated with intrinsic cognitive load.”  

If you knew this, great.  You’re keeping up.  But I fear that lots of people don’t.  Back in September, I blogged about CLT and quoted this video, which at the time had 30,000 views on YouTube.  It states, very definitely, that all three types of load exist.   

To be fair, it was made in 2018, before Kirschner’s tweet, but still eight years after Sweller’s disavowal of germane as an independent source of load. But people are still watching it: 51,000 views now.  I wonder how many of them have thought, “Surely this guy knows that germane is old hat?”  Probably not many at all, and fair enough: not everyone has the time or inclination to keep up, and the video is very persuasive.  But still, it’s wrong, and people are watching. 

You don’t have to look far to find lots of other blogs and articles and videos and tweets referencing the three loads.  They’re also wrong but, the internet being what it is, that’s not clear when you read them.  Which makes me wonder: what else don’t you (we) know about advances in CLT that also make a material difference to it?  There could well be a yawning gap between what the theorists say and what the practitioners are doing.  That’s bad. 

Gap 3: between the complicated reality and the simplistic approximations.  

I’m going to struggle to add much of my own here, because this very recent post from Greg Ashman sums up much of what I suspect. He says: 

“We have a long and ignoble tradition in education of latching on to a new buzzword or trend and using it to excuse whatever it is we intended to do anyway. It looks like cognitive load theory’s profile is now such that it has become one of these trends — a bittersweet experience for those of us who wish to see it better known.” 

If – and I hope Greg doesn’t mind – I can extrapolate a little, I think education doesn’t just latch on to trends to excuse what it already intended to do; it latches on to simplified versions which ostensibly justify new (or recycled) ideas. Take Growth Mindset. I’m pretty sure Carol Dweck didn’t intend it to be reduced to motivational posters reminding us to “Be Our Best!” and that “We Can’t Do It – YET!” But that’s what it’s become: little more than a plea to think positively.  

I worry that CLT is going the same way. Cleaning up your powerpoints and doing a few worked examples is not adhering to CLT. It’s latching on to a “Sixty Second Explainer” version from one of those videos that shows the narrator apparently drawing helpful diagrams as they speak. “No time to actually investigate CLT? But also want to be in with the zeitgeist? Then this video is for you!”  

Gap 4: between what you might be doing, and what you should actually be doing. 

An expanded version of Gap 3, really. Let’s look at worked examples. Anyone with even a passing knowledge of CLT appreciates that you need some “I do, we do, you do” to make it work. High success rate, gradual removal of the scaffolding. All good.  

Except probably not good enough. Oliver Lovell, author of Sweller’s Cognitive Load Theory In Action, says in this webchat with Tom Sherrington: 

“The key difference between [CLT’s requirements] and what we usually do is the level of the persistence of the instruction and of the scaffolding and of the support of the students…We often assume that after the modelling [i.e. showing them how to do it] the students have got it, but really that’s just the start.  They still need a lot more scaffolding, and persistence in that scaffolding, before they get to the truly independent problem solving or question answering.” 

So you can’t just show them a couple of times and say, “Great, I’ve done worked examples, so according to CLT, all is well.” It’s not.  You need to do loads, loads more.  Not “I do, we do, you do,” so much as “I do, you do, I do, you do, I do, you do,” and repeat to fade. 

Unless you are doing that, you’re not doing CLT properly.  

How to mind the gap 

As usual, pop music has the answer. 

  1. Make like Wham! in I’m Your Man: “if you’re gonna do it, do it right.” [Don’t just use some dumbed down approximation.] 
  2. Channel James Brown in Get Up (I Feel Like Being a) Sex Machine: “stay on the scene.” [Keep up with developments – but leave the sex machine bits out of the classroom.] 
  3. Add a word to The Walker Brothers’ Make It Easy On Yourself: “don’t”. [This stuff is hard and requires thought.] 

Enough now. Ta. 

Cognitive Load Theory: a life of its own

In 1971, in Palo Alto, California, the Stanford Prison Experiment took place.  You’ve probably heard of it. In case not, here’s a quick rundown.

The SPE was led by Philip Zimbardo, of Stanford University’s psychology department.  It involved about 20 students, all volunteers.  One morning, half were “arrested” for theft or armed robbery.  They hadn’t really committed the crimes, but the arrests looked, and doubtless felt, real enough.   The prisoners were taken away in police cars, stripped naked, given numbers and prison garb, and locked up three to a cell.  

The other volunteers took on the role of prison guards.  They were to keep order.

Within hours, the inmates rebelled.  The guards responded with fire extinguishers, physical punishment, sleep deprivation and refusals to let the prisoners empty their sanitation buckets.  The researchers concluded that these otherwise upstanding students had acted in such appalling ways as a result of their environment.  Put otherwise good people into bad situations, and their behaviour will deteriorate.

However, according to Rutger Bregman’s book Humankind, the study was “a hoax.”  The scientist in charge had told the guards how to act, rather than letting their behaviour develop unencumbered.  Zimbardo denied it for years.  As a result, according to Bregman,

“In the decades since the experiment, millions of people have fallen for Philip Zimbardo’s staged farce.”

And as for Zimbardo himself, asked in 2018 how he felt about his research, he said:

“It’s the most famous study in the history of psychology at this point.  There’s no study that people talk about fifty years later.  It’s got a life of its own now.”

He’s not wrong.  Today (18 September 2022) I googled “Stanford Prison Experiment.”  The very first hit – apart, of course, from Wikipedia – took me to SimplyPsychology.org.  It describes the experiment and its conclusions in some depth.  It doesn’t say anything about it being a hoax.  A life of its own indeed.

———————————–

I feel like this about cognitive load theory (CLT).  I’m not saying it’s a hoax, nor impugning the good auspices of its proponents.  I’m also not saying that I don’t buy it.  What I am saying is that it’s got a life of its own, and that may not be a universal good.

What is CLT?

If you’re not totally familiar with CLT, an admirably succinct definition was given by Oliver Lovell, author of Sweller’s Cognitive Load Theory in Action, in this webchat with Tom Sherrington: “to reduce extraneous load and optimize intrinsic load.”

What this means is that our students’ working memory, which as we know is limited to around 4-7 items at any one time, can be filled by:

  • intrinsic cognitive load.  This refers to the innate difficulty of whatever it is you are teaching, be it fractions or the backhand or erosion; or,
  • extraneous cognitive load.  These are the distractions provided by the environment: what’s on the walls, the flickering of the light that doesn’t quite work and, most pertinently, the amusing yet irrelevant cartoons on your worksheets, the crazy animations in your powerpoints, the jokes in your explanations – anything that stops your class thinking about what you want them to concentrate on.

There’s also often held to be a third form of cognitive load: germane.  This is the good stuff: the transfer of knowledge from working into long-term memory, the “Ah! I get it now!” moment of clarity. In other words, the actual, y’know, learning.  CLT often posits that we need to reduce extraneous load, so that we can manage the intrinsic and leave space for germane.

This leads me to another key point about CLT: the loads are usually thought to be cumulative.  This well watched (40,000+ views at time of writing, probably 41,000 by the time you read this), admirably clear introductory video to CLT puts it like this: “These [the different types of load] add up in order, meaning that if the demands of the first two are too great, there’s less room for germane load before cognitive overload occurs.”

Simple, right?  This tells us, nice and succinctly, that our job as teachers is to ensure our students think about the right things (the intrinsic load), with minimal distraction (the extraneous load) so that there’s brain space left over for learning (the germane load). You can read a much fuller and better explanation by the redoubtable Adam Boxer, here.

Evidently, a belief in CLT will affect the way you teach.  The New South Wales Department of Education, which produces excellently accessible and actionable documents, sets out what this looks like in the classroom.

In practice, this means take account of what your pupils already know; strip back your explanations, powerpoints and worksheets to what’s essential; make sure you put important things together, rather than putting X on one slide and Y, which relates directly to X, three slides later (because then your class will be trying to remember back to the first slide, which uses up working memory); scaffold, then gradually remove the struts.  Sensible stuff.

A life of its own?

Most certainly.  How do I know?  Here are four examples.

  1. This tweet. It’s gone down in edu-history.

(Sweller is the academic most closely associated with the birth of CLT.  We’ll return to him later.)

2. This result from a quick google:

I confess that I’ve not actually read this blog, but it’s just one example of what you get if you google CLT.  People everywhere are extolling its benefits.

3. This, from Daniel Muijs while still at Ofsted: Developing the education inspection framework: how we used cognitive load theory.  The blog is clear that CLT was not the sole basis for the framework, and links to some criticisms of the theory.  But still, Ofsted using CLT?  Not a bad namecheck.

4. Books like Tom Sherrington’s Rosenshine’s Principles in Action.  The black and red are the racing silks of the evidence informed teaching movement, of which I am a supporter, by the way.  This one is not specifically about CLT (though the Lovell one I mentioned earlier, which comes similarly attired, is), but Rosenshine’s principles are, more or less, the implementation manual.  It’s a good book.  Lots of helpful strategies.  I recommend it.

So what’s the problem?  If all these edu-gurus love CLT, why do I worry about CLT’s popularity?  Here are five reasons.

  • We can’t readily measure cognitive load.
  • Worked examples, the delivery method most closely associated with CLT, may not work.
  • Even if they do work, you may not be doing them properly.
  • The CLT world is ever changing.
  • You may like it for the wrong reasons.

I’ll take each in turn.

We can’t readily measure cognitive load

If we are to base our teaching on the amount of cognitive load we are imposing, we need to be able to measure that load. Stands to reason. The problem is, we aren’t very good at doing that.

In this article, Professor Ton de Jong of the University of Twente explains how cognitive load is commonly measured. Daniel Muijs also links to the article in his Ofsted blog, above. De Jong writes:

“The most frequently used self-report scale in educational science was introduced by Paas (1992). The questionnaire consists of one item in which learners indicate their “perceived amount of mental effort” on a nine point rating scale [see below] (Paas 1992, p. 430). In research that uses this measure, reported effort is seen as an index of cognitive load.”

So, students measure their own perceived mental effort in a task.  In Paas’ original scale it ran from “Very, very low” to “Very, very high”, although others prefer a five point scale, others ten, others 100, and still others use different terms.  Some studies into CLT ask this question at the end of the task, others during.  From this perceived mental effort we deduce cognitive load: lots of effort = lots of load.  

I’m no scientist or psychologist, and maybe this represents the best we can do at the moment.  If so, great.  But it’s still a bit less robust than I had imagined.  More to the point, trying to distinguish between “very low” and “very very low”, or “rather high” and “high” is, shall we say, tricky.  It reminds me of mark schemes in History (my subject) which require me to differentiate between “clear” and “very clear” explanations.  Basically impossible, and totally subjective. 

And not only that.  Here, Professor Stewart Martin, Emeritus Professor of Education at Hull University, tells us this:

“A small amount of mental effort could be taken to mean that the learning task generated a small cognitive load (it was an easy task), or that the task was difficult but the learner possessed high expertise (the learner found the task easy because of their high degree of competence), or it could equally indicate that the cognitive load demanded was so high that the learner gave up trying to understand or complete the task.”

I concur.  I can quite see why a pupil in my class might not expend a great deal of mental effort on my stunning worksheets.  They might be hungry, or desperately fancy the person next to them, or have a football match later, or have noticed that one of my shirt buttons has been accidentally left undone.  Distractions happen.

Even if we can manage those distractions (I check my buttons, and zip, before leaving for lessons, without fail), questionnaires about cognitive load don’t always distinguish between the three types.  That’s a shame, because we need to know how much there is of each, so we know how to rebalance.  Some surveys have tried, using formulations like this:

  • How difficult was the learning content for you? (Trying to get at intrinsic load.)
  • How difficult was it for you to learn with the material? (Extraneous.)
  • How much did you concentrate during learning? (Germane.)

I think these are tricky.  Even though I know what they are driving at, I reckon I’d be hard pushed accurately to distinguish between difficulty imposed by what I’m being taught, and how I’m being taught it.

Others have tried to use secondary or additional tests as measures of CL, the idea being that the better people do in those, the better managed the CL has been in the original tests.  Still others have used physiological methods, such as pupillary tracking.  Apparently the latter show most promise, but they’re not exactly accessible to most of us. Further, lest you think I’ve hand-picked my academics from some sort of anti-CLT underground, complete with closed Facebook group and secret handshake, how about this from a 2019 article, whose authors include Sweller and Paas themselves: 

“It is assumed that low or high performance on the secondary task are indicative of high and low cognitive load imposed by the primary task [i.e. the higher the load in the primary task, the worse people will do in the secondary].  However, secondary task techniques have been criticised for their intrusiveness (i.e. imposing an extra cognitive load that may interfere with the primary task; Paas et al. 2003) and inability to differentiate between different types of cognitive load.”

And that’s not all.  Here’s Ton de Jong again:

“The literature shows a wide variety of scores on the one-time questionnaire, with a specific score sometimes associated with ‘good’ and sometimes with ‘poor’ performance.  Therefore, it seems as if there is no consistency in what can be called a high (let alone a too high) cognitive load score or a low score.” 

Dammit.  Got any better news for us, Stewart Martin?

“Attempts to obtain direct objective measures of the theory’s central theoretical construct – cognitive load – have proved elusive. This obstacle represents the most significant outstanding challenge for successfully embedding the theoretical and experimental work on cognitive load in empirical data from authentic learning situations.”

Oh.  But let’s leave the final words of this segment to the late Roxana Moreno, of the University of New Mexico:

“The conclusion is clear: there are no standard, reliable, and valid measures for the main constructs of the theory.”

Ka-boom.

Worked examples may not work

Worked examples are often touted as a good way of putting CLT principles into classroom practice.  The idea is that by showing your pupils exactly what to do, you reduce the CL.  Unlike discovery learning, in which pupils try to work out answers with less guidance and which, so the theory goes, imposes greater demands on working memory, worked examples allow laser focus on the right way to do things.  Then you gradually reduce the scaffolding until your class is happily working away with minimal guidance.

You’ve almost certainly used worked examples yourself, any time you’ve demonstrated something on the board, via a visualiser or on a worksheet, or by deploying the “I do, we do, you do” technique.

Sadly, worked examples may not be as effective as we’d hope or even expect, given that the approach seems to make a lot of intuitive sense.  Moreno (2006) pointed out that the research is somewhat inconclusive:

“Reisslein et al.’s (2006) results contradict the findings of Renkl et al. (2002, 2004)… Catrambone and Yuasa’s (2006) results fail to replicate (Atkinson, Renkl & Merrill, 2003; Chi et al., 1989; Renkl et al., 1998)… Finally, Gerjets et al.’s (2006) research contradicts two CLT hypotheses.”

She posits that if we are going to say, with conviction, that worked examples reduce CL and therefore help learning, we need to be able to show this.  But we can’t, because we can’t measure CL very well.

It gets worse.  2018 research at the University of California found that “The transfer of learning…is weakest to problems involving worked examples.”  This means that pupils’ ability to apply a newly learned principle to a range of problems, beyond the precise ones they have studied, is not much enhanced by teaching using worked examples.  Worked examples might make you very proficient at solving the kind of problems you have already encountered, but they may not help you apply that understanding to similar but different situations.  As Moreno says,

“Despite their promise, there is strong evidence that worked examples don’t always work and yet, the cognitive load field is unable to produce reliable explanations for why this is the case.”

That’s a problem.  If we can’t definitively show that worked examples reduce CL, nor that they help pupils do very much more than solve the types of problems they’ve previously worked on, then worked examples may not work very well at all, however logically they flow from CLT.

Even if worked examples do work, you may not be doing them properly.

In that webchat between Lovell and Tom Sherrington, Lovell explains that while every teacher uses worked examples in some way (see above), most teachers model a couple of examples on the board, check for understanding, ask for questions then give some additional practice problems.  However, that’s not what CLT requires of worked examples.  Lovell says:

“The key difference between [CLT’s requirements] and what we usually do is the level of the persistence of the instruction and of the scaffolding and of the support of the students…We often assume that after the modelling [i.e. showing them how to do it] the students have got it, but really that’s just the start.  They still need a lot more scaffolding, and persistence in that scaffolding, before they get to the truly independent problem solving or question answering.”

So you can’t just show them a couple of times and say, “Great, I’ve done worked examples, so according to CLT, all is well.” It’s not.  You need to do loads, loads more.  Not “I do, we do, you do,” so much as “I do, you do, I do, you do, I do, you do,” and repeat to fade.

The CLT world is ever changing

“Germane cognitive load does not constitute an independent source of cognitive load. It merely refers to the working memory resources available to deal with the element interactivity associated with intrinsic cognitive load.”

Not my words.  Not Ton de Jong, Stewart Martin or even Roxana Moreno.  These are the words of John Sweller himself, the OG of CLT, the man whose papers provided the impetus for the whole damn lot.  And this isn’t recent: the quotation is from a 2010 paper.

Hold the phone!  Here we have Sweller, da man, telling us that germane isn’t germane. 

It’s taken others a while to cotton on, but in 2019 Paul Kirschner, an eminent fellow traveller, tweeted thus:

(John is, I assume, Sweller.  Kalyuga is another academic in the field.)

Unfortunately, most people don’t seem to have kept up. In 2017 the NSW Centre for Education Statistics and Evaluation (part of the education department mentioned above) went into print about CLT.  I’ve annotated part of their document.

That’s not all.  Remember the video I mentioned, the one with 40,000+ views?   Here are its first 15 words, with my comments.

“Cognitive load comes in three types: intrinsic, extraneous and germane. [No it doesn’t.]  These add up in order. [We don’t know that, and even if they do, as germane isn’t a load, presumably it can’t be included].” 

Admittedly both these were pre-Kirschner’s tweet, but well after Sweller’s article.  Even so, it’s not clear that even these luminaries are correct.  Here’s edublogger and writer Michael Pershan:

“Even as Sweller has moved away from germane load, many other researchers operating within the CLT framework continue to use the concept…Whatever problems Sweller now sees with the notion of germane load, others prominent within the field do not share his concerns.”

So even if you are keeping up – in fact, particularly if you are keeping up – how do you know what CLT actually says any more?  Of course, in one sense this is just evidence of healthy debate, a theory being usefully tested and hardened.  But it would seem to be a work in progress. Maybe – and I confess to speculating here – this is why Dylan Wiliam appears to have moderated his position:

Still an endorsement, and not necessarily a contradiction of his earlier view that CLT is “the single most important thing for teachers to know,” but perhaps rather less unequivocal.

So you need to stay up to date, otherwise you’ll still be banging on about germane cognitive load.  Although maybe you should be.  I don’t know. 

You may like CLT for the wrong reasons

Way back in 2008 the great Daniel Willingham, of whom I am an enormous fan (ask any of my classes), made a video called Learning Styles Don’t Exist.  I’m sure that somewhere he once apologised for its “garage band quality”, which was rather endearing, but the information is good and, as it happens, I buy it.  Of particular interest here is that Willingham himself says that the learning styles theory, specifically the visual/auditory/kinaesthetic variety, “seems to make a lot of sense.”  He even asks, “Why does it seem so right?” He gives three reasons:

  1. Because everyone believes it;
  2. Because something close to the theory is right;
  3. Because of confirmation bias (though he doesn’t use that term).

We can apply much of this to CLT. 

  • Everyone believes it?  I refer you to the “Life of its own” section, above.  All those red and black books.  Wiliam, Sherrington, Lovell and many other heavy edu-hitters.  That’s not to say it’s wrong – just that lots of people, if not exactly everyone, believe it.
  • Something close to the theory is right?  Here we get a little more subjective.  It seems to me that some of CLT’s proposals, or at least the teaching strategies based on them, probably are right.  Overcrowded powerpoint slides, oh-so-hilarious worksheets, teacher as entertainer – all of these would seem to make concentrating on learning more difficult.  It may even be that they are sensible because they derive from CLT.  But, as we have seen, it may not be.  We just don’t know.  For what it’s worth, I reckon there’s something in it.  Enough to power my whole pedagogical approach, though?  Not sure.
  • Confirmation bias?  Oh yes.  Your students wrote rubbish essays?  You probably overloaded their working memory with new information.  They can’t factorise?  I expect you didn’t do enough worked examples.  They forgot to bring their swimming kit?  Well, that was one of four messages you gave out in form time yesterday.  

Alternatively, perhaps you explained the essay really badly.  Maybe, during your factorising lesson, they were all thinking about hockey later on.  Perhaps they were hungry in form time, or focused on the latest gossip.  The CLT explanation might feel right, but is it?  How can you know?

Willingham’s video makes a similar point.  He imagines a teacher trying to explain the structure of the atom, but “it’s not really clicking.  Finally, you say, ‘Picture the solar system. The nucleus of the atom is like the sun, and the electrons are like the planets spinning round it.’ The student understands and you think, ‘Aha!  The student must be a visual learner.’  But maybe that was just a good analogy that would have helped any student, or maybe the student needed just one more example for the idea to click.  Why the student understood at that point is actually ambiguous.” (My emphasis.)

The same could be said of CLT.  You’ve read about it, maybe been to a talk or a webinar.  So you strip down your explanations, use worked examples, rid your slides of diversions and – ta-dah! – your students ace the test.  Cause and effect?  Maybe.  Probably your teaching, free of frills and frippery, was clearer and sharper, and thus more likely to produce the desired results.  But can you be sure that was due to your adherence to CLT?  Or was it just that you thought harder about how you were going to explain things?  As Willingham says, “if you already believe, ambiguous situations are interpreted as consistent.”

Conclusion

Let me be clear.  I am not out to get CLT.  I like Rosenshine, my powerpoints are clean, my teaching proceeds in small steps.  If I do tell the odd joke it’s because I’m hilarious, and I understand the need to get back to the serious stuff, sharpish. 

What I am trying to do, though, is suggest that the foundations of CLT may not be completely sound, and that the way it’s implemented may not be completely in line with its recommendations.  That, I think, is important to know if you plan to base your teaching on CLT. 

If CLT is the way you want to go, don’t let my ramblings put you off.  But the above may give you some pause for thought.  Not too much, though.  You might suffer some cognitive overload. 

Influence: lessons from business for teaching, part 4

Background 

If you’ve read Part 1 on Liking, Part 2 on Social Proof or Part 3 on Authority, you can skip this bit and go straight to Part 4.  If not, it’ll help. 

Car dealers. Marketing executives. Phone companies. Waiters.  Teachers. What do we all have in common? We all want people to do what we want. Buy stuff, read stuff, eat stuff, do stuff, don’t do stuff, do stuff differently. 

It’s not always easy, though. Usually the stuff you (we) want people to do is stuff they aren’t already doing.  Or if they are doing it, they aren’t doing it enough, or in quite the right way.  We all know that though.  So, why this blog?   

Well, there I was, idly flicking through Freakonomics Radio, when I came across an episode called How To Get Anyone To Do Anything.  Always a sucker for a quick fix (Get rock hard abs fast without exercise or diet?  Yes please!) I dived in.   

The episode was an interview with Robert Cialdini, author of Influence: the psychology of persuasion. First published in 1984 and, I’m told, a classic of the genre, it was updated in 2021, hence the podcast.  In it, Cialdini takes host Stephen Dubner through some of the key principles that people he calls “compliance professionals” use to get us to do those things they want us to, but which we probably wouldn’t without some gentle encouragement. 

It was good.  So I bought the book.  And in this short series of blogs, I’m going to outline some of Cialdini’s theories and how they might be applicable to various roles in school.  He identifies seven “levers of influence” but I’ll stick to four: liking, social proof, authority, and commitment and consistency. 

A couple of disclaimers: I haven’t interrogated Cialdini’s sources, nor sought corroboration for his claims.  I also note from various reviews that lots of other people have said and written similar things, and no doubt some have contradicted them.  Be that as it may, I found lots of the book was relatable and applicable to teaching, and I thought you might too.  Here goes. 

Part 4: Commitment and Consistency

In this chapter, Cialdini explains how you can use the power of commitment to encourage the behaviour you’re after. 

The basic premise is this: “Once we make a choice or take a stand, we encounter personal and interpersonal pressures to think and behave consistently with that commitment.”  So far, so expected.  But here’s the bit I really like: “Moreover, those pressures will cause us to respond in ways that justify our decision.”

Cialdini gives lots of examples to back this up, but I’ll summarise just one.  Residents of a California neighbourhood were asked to have a billboard with the words “Drive Carefully” placed on their front lawn.  The sign was enormous, blocking much of the view of the house, and looked awful.  Only 17% agreed to have the sign, apart from one group where the figure was a whopping 76%.  Two weeks earlier, that group had received a visit from a volunteer worker, who asked them to display a three inch square sign that said, “Be a Safe Driver.”  It was such a little request, and so hard to refuse, that almost everyone agreed.  But, fascinatingly, this seemed to make them far more likely to comply with another, much more intrusive, request – to host the big Drive Carefully sign.

Even more remarkably, other homeowners were asked to sign a petition to “keep California beautiful.”  Who wouldn’t do that?  A couple of weeks later, a volunteer popped round to the same homeowners and asked them to have the big Drive Carefully signs in their front gardens.  About half agreed, even though their recent commitment had been to a different public service topic.

The researchers concluded that signing the beautification petition caused people to see themselves as public-spirited people who acted on their civic principles (and who knows, maybe they actually were).   So, when asked to do something else public spirited, “they complied in order to be consistent with their newly formed self-images.”  As the researchers put it, once someone has agreed to a request, “he may become, in his own eyes, the kind of person who does this sort of thing…who takes action on things he believes in, who co-operates on good causes.”

So, if we can get people to commit to something, they may well alter their subsequent behaviour to fit in with the view of themselves that they, and others, now have.  Helpfully, Cialdini goes on to explain how these commitments can be made most effective.  They must be active, public, effortful and, most important of all, freely chosen.  Let’s take each in turn (briefly, promise).

  • Active. Basically, this means write it down. A written commitment provides physical evidence of the intention.  Not only does this ensure we can’t deny making the commitment, it can also persuade those around us that the commitment reflects what we really think.  This brings in the awesome power of social proof (remember Part 2?  Course you do!): Cialdini notes that shortly after hearing their neighbours considered them charitable, people gave much more money to a fundraiser.  So, a written commitment can change our view of ourselves, and others’ view of us, which in turn reinforces the likelihood that we will behave in ways congruent with our commitment.

  • Public. “Whenever one takes a stand visible to others, there arises a drive to maintain that stand in order to look like a consistent person…The more public a stand, the more reluctant we are to change it.”  That’s why we’re always being told that we are more likely to stick to goals if we tell others about them.  I won’t linger on this one.  You know it’s true.  It’s also why you love that quotation from JM Keynes: “When the facts change, I change my mind.  What do you do?”  Gives you a lovely get-out.

  • Effortful. “The evidence is clear: the more effort that goes into a commitment, the greater its ability to influence the attitudes and actions of those who made it.” Cialdini describes a frankly eye-watering event that marks entry to adulthood for males in a particular African tribe, and the brutal initiation ceremonies common to a number of American university fraternities.  In both cases, he says, “the severity of an initiation ceremony heightens the newcomer’s commitment to the group.”  To be honest (and we saw in Part 3 how important honesty is in gaining trust) I think imposing frat-house style entry requirements for your new Y7s might be going a bit far, but making it seem like a Big Deal to join your school might perform a similar function.  I’ve referenced the Michaela School’s extensive pre-joining bootcamp for pupils before, but this could be another reason why it works so well.

  • Freely chosen. “We accept inner responsibility for a behaviour when we think we have chosen to perform it in the absence of strong outside pressure.”  An external stimulus to act (or not act), such as a reward or the threat of a sanction, might influence behaviour but, says Cialdini, we won’t feel committed to the particular act.  In fact, such external pressures could even have the reverse effect, causing people to be reluctant to perform the behaviour in their absence.  (“There’s no reward, this time, for completing my homework by the deadline?  I won’t bother, then.”)  If Cialdini is right, this doesn’t necessarily mean allowing pupils simply to choose what to do.  It means giving them a chance to make a relevant decision.  He cites a study in which young children were told not to play with a robot, both because it was wrong and because they would be in trouble if they did.  They complied at the time, but six weeks later, given no further instruction and in the absence of the person who’d spoken to them before, almost all of them played with the robot.  A similar group, told only that it was wrong to play with the robot, eschewed it to the same extent as the initial group, and did so again six weeks later.  The researchers concluded that this was because those children felt they had made the decision not to play with the robot, rather than abstaining for fear of a telling off.  I suspect quite a lot of secondary school age children, at least, would view this as “treating us like adults, not children.”   So perhaps don’t say, “Don’t do this or you’ll be in detention,” but, “Don’t do this, because it runs against the core values you signed up to when you made that active, public and effortful commitment to them.”

One more piece of Cialdini advice.  Remind people of the commitments they’ve made.  This helps restore the commitment, but it also prods people to recall that they are the kind of person who makes that kind of commitment – and who will therefore want to live up to that standard.

Two quick examples to illustrate these points.  The first stems from Tom Bennett’s Running the Room.  You know those classroom rules we get the kids to agree to at the start of the year? Be on time, bring your book, don’t interrupt, do your homework, etc.  All well and good, but, Tom says, unless we remind people of them, and revisit (including practise) them regularly, they will get forgotten.  Linking Tom and Cialdini, the commitments could easily be active (written down), public (have them on the wall, or in their exercise books), freely chosen (developed with the pupils) and regularly reprised.  It won’t make everyone behave just so, but it should give you a flying start – and if you add to it social proof, a bit of authority and a dash of liking, you’re well on your way.

The second is from my own, current, experience. I’ve introduced a thing to Y8 called Make It Happen.  Everyone chooses (freely) two goals, one school based and one not.  They also note down the milestones they will need to pass along the way, when they will pass them, and what they need to do to get there.  Bespoke stickers are available at each milestone.  We launched this last year, using OneNote to share electronic templates to be filled in, which tutors could then check.  While some people really took to it, most – OK, almost everyone – didn’t.  Same this year.  Having read Cialdini, here’s what I’m going to do to make Make It Happen happen.

  1. A couple of weeks before the launch, run an electronic survey asking people whether they think they are more likely to work towards their goals if they write them down or just think about them; if they keep them to themselves or go public; if they have to work hard for them or they come easily; and if they are told what they are or can choose them freely.  This should encourage lots of them to see themselves as the kind of people who would set goals and monitor their progress.
  2. When launching, deploy social proof by getting some older children who have benefited from MIH to explain to Y8 why it’s such a good idea.  Also, ensure it’s all couched in terms of why this is a good thing to do, so they will want to do it and feel they have freely chosen to engage.
  3. Bin the electronics and go paper-based.  This will adhere to the “Active” principle.  I have a sense that writing something is more commitment-forming than typing.
  4. Put up a list of everyone’s goals in each form room.  Not all the milestones, that’s too much. Just the end goals.  This meets the “Public” principle.
  5. Make it quite hard to complete the initial sheet.  That is, have several boxes on there, all of which need to be filled in, so it’s “effortful.”
  6. Return to it formally (e.g. in tutor time) and regularly.

Some of this we’ve already done.  So maybe it’s just a crap idea that will never work.  But I don’t think so, and I won’t give up until I’ve given it the best chance of success.