Reading the runes, or why the dodo was a no-no
There was a lot of concern expressed on the day of the KS2
reading test for 2016, with many teachers reporting how upset pupils were at
the texts and the questions. The results that emerged two months later seemed
to indicate that the anxieties were justified, with reading attainment dipping
below writing, maths and SPAG and huge
variation between schools. In the subsequent self-analysis and
self-laceration, schools and year 6 teachers have asked why many pupils
performed so badly. Some answers have been forthcoming, with successful
schools talking about how they emphasise reading throughout key stage 2 and
teach exam technique, including answering on only two of the texts and ducking any 3-mark questions. More
reading and more explicit teaching of inference and of vocabulary have been
mentioned as possible solutions to the difficulties many erstwhile successful
schools find themselves in. A scrutiny of the information available in Raiseonline
seems to support these suggestions.
Schools can access their own results on Raiseonline,
including a breakdown of how pupils performed on individual questions. Some
companies are offering to provide this sort of analysis at a price, but it is
easy enough to do yourself, and you can drill down to individual pupil level.
National data is also available and it is of some interest.
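For those doing the analysis themselves, the arithmetic involved is simple enough to script. Below is a minimal sketch of the idea; the data layout, column names and marking convention are my own assumptions for illustration, not Raiseonline's actual export format:

```python
# A sketch of question-level analysis: given an export with one row per
# pupil and one column per question (mark scored, blank = not attempted),
# compute each question's attempt rate and facility (percentage of all
# pupils answering it fully correctly). Column names are hypothetical.
def question_analysis(rows, question_cols, max_marks):
    """rows: list of dicts, one per pupil; max_marks: marks available per question."""
    results = {}
    for q in question_cols:
        answers = [r[q] for r in rows]
        attempted = [a for a in answers if a.strip() != ""]
        correct = [a for a in attempted if int(a) == max_marks[q]]
        results[q] = {
            "attempt_rate": 100 * len(attempted) / len(answers),
            "facility": 100 * len(correct) / len(answers),
        }
    return results

# Made-up pupils: Q1 is a 1-mark question, Q16 a 2-mark one.
pupils = [
    {"Q1": "1", "Q16": "2"},
    {"Q1": "1", "Q16": "0"},
    {"Q1": "0", "Q16": ""},   # blank = question not attempted
    {"Q1": "1", "Q16": "1"},
]
stats = question_analysis(pupils, ["Q1", "Q16"], {"Q1": 1, "Q16": 2})
print(stats["Q1"]["facility"])      # 75.0: three of four pupils scored full marks
print(stats["Q16"]["attempt_rate"])  # 75.0: one pupil left it blank
```

In practice you would read the rows from a spreadsheet export rather than typing them in; the point is simply that per-question attempt and facility rates need nothing more than counting, so there is no need to pay a company for them.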
The paper consisted of three texts: a piece of Swallows and Amazons-style fiction
created in-house; a bizarre tale of giraffe-riding on the African savannah; and a
non-fiction piece on the dodo (most children I came across pronounced this
'doodoo', obediently following their synthetic phonics training).
Although the first text seemed to me so culturally
loaded that I assumed many children would have looked at it, sighed and moved
on, on average 97.6% of pupils attempted
each of its questions, slightly below the 98.6% average for text 2 and far above
the 64.4% average for text 3. The
low number for attempts on text 3 questions may be a result of tiredness or
weak time-management by pupils, might reflect some gaming of the system and
concentration on the ostensibly easier texts or suggest that non-fiction poses
problems for children and approaches to it are not taught as thoroughly as they
are for fiction. Non-fiction may also figure less in reading-for-pleasure
initiatives and offer up vocabulary, sentence structures and content that are
unfamiliar. I would certainly advise investigating any patterns of
question-answering in your school before making any sweeping changes to
practice. The reluctance to answer on non-fiction certainly needs addressing, I think, and it pops
up again when looking at what pupils got right and wrong.
The data also provides a count of how girls and boys
responded, and of the differences between FSM and non-FSM pupils, but my quick
glance suggests the differences were pretty insignificant, presumably because
of the rigorous trialling that STA carries out in advance of the papers being
set (something that may also explain the patterns of question popularity and
correctness in what was always meant to be a 'stepped' paper). SEN and non-SEN
comparisons show variation, but this may be as expected.
In addition to giving us a count of which questions were attempted,
the data gives us the numbers of pupils answering each question correctly. The
11 questions answered least well are listed below. I have
made a comment on each of them and also indicated which aspects of the content
domain (similar to what we used to call assessment focuses) the questions set
out to assess. Before we get there however, here is a list of the content
domain aspects and the number of questions assigned in the 2016 paper:
2a give / explain the meaning of words in context. 10 questions
2b retrieve and record information / identify key details from fiction and
non-fiction. 13 questions (1x 3-mark)
2c summarise main ideas from more than one paragraph. 1 question
2d make inferences from the text / explain and justify inferences with
evidence from the text. 12 questions (1x 3-mark, 4x 2-mark)
2e predict what might happen from details stated and implied. 1 question
(1x 3-mark)
2f identify / explain how information / narrative content is related and
contributes to meaning as a whole. 1 question
2g identify / explain how meaning is enhanced through choice of words and
phrases. 1 question (1x 2-mark)
2h make comparisons within the text. 0 questions.
It is certainly worth considering how well your pupils did
on each of these. It is also worth considering the extent to which you teach each
of these explicitly and how you go about it. As one might expect from the
average-attempts data, it was the non-fiction text that put most
children in the doo-doo. On average, each question on the non-fiction text was answered correctly by only 31.2%
of pupils. Text 2 saw a success rate of 51.7% and text 1 was at 69.7%. This may
indicate the relative difficulty of the texts or the questions, or their place in the
test, but again it is worth exploring, and it raises once more the questions I posed
about non-fiction.
The questions answered correctly by most pupils were:
question 2, which assessed content domain 2a; and 12b, 9b and 12d, which assessed content
domain 2b. Question 1, assessing 2a, was also done well. Thus, most children
seem to be ok with information retrieval and, with caveats that will become
evident shortly, with explaining some words in context.
Questions answered least successfully (question number, content domain, comment):

Question 16 (content domain 2d). Only 15% of pupils got this inference
question right. It required two points to be made (only hinted at by its
2-mark status) and makes some demands on vocabulary, as indicated by the
phrase cited at the question's start: 'milled around in bewilderment'. While
I think pupils may do their fair share of bewildered milling, I suspect they
don't describe it as such.

Question 33 (content domain 2c). 16%. The last question on the paper,
requiring kids to sequence six paragraphs across the texts from the summaries
given. Potentially time-consuming, this does require the ability to identify
paragraph topics and relate the given summaries to their own view of the text.

Question 29 (content domain 2a). A vocabulary question close to the paper's
end. 'Parched' is the word, and pupils needed to define it in context. Only
21% did.

Question 30 (content domain 2a). Joint with 25. Another late question and
another vocabulary one. It's a simple find-and-copy question, but the use of
the agentless passive ('it is thought that') is the target. Only 23%
recognised this form of scientific writing.

Question 25 (content domain 2a). Joint with 30. Another late question on the
non-fiction text, and it is a find-and-copy from somewhere on page 10. Apart
from having to range across the page, pupils are required to understand that
'some of the animals on Mauritius' is synonymous with 'much of the
island's…wildlife' and that 'unique', in this context, means 'only found
there'.

Question 32 (content domain 2d). Another late question which could range
across the whole text and requires inference. It also requires pupils to know
that 'rehabilitate' means 'change the image of' in this context, although
question 31 had proffered a definition, accepted by 35% of pupils, which
stated that rehabilitation actually meant rebuilding the reputation of the
dodo. Thus vocabulary is pretty important here, and slight shifts in meaning
may have confused.

Question 24 (content domain 2d). Inference, and on the non-fiction text
again. Only 27% of pupils understood 'curious and unafraid' and could work
out a reason why the dodos were so blasé.

Question 19 (content domain 2d). Inference, but not on the non-fiction text!
Kids needed to understand 'triumphant' and overcome any confusion about what
a 'war thog' is (how some pupils said it to me). 28% did.

Question 26b (content domain 2b). Back to the dodo and an
information-retrieval question. These are generally done well, but here only
32% got it. Pupils did need to know that 'extinct' meant 'wiped out'.

Question 21 (content domain 2d). Joint with 23. Inference required, and worth
3 marks. 34% were recorded as getting it correct, but the data does not make
clear whether correct means 3, 2 or 1 mark. Markers have flagged this up as a
tough question to assess, maybe because of its apparent open-endedness ('why
might the character appeal?'), which is closed down in the mark scheme, with
markers having to decide what is 'acceptable'. It is about points and
evidence, which do need teaching.

Question 23 (content domain 2a). Back to vocabulary and the dodo. One would
assume that 'spat' (the past tense of 'spit') was part of everyday language,
but maybe pupils don't associate it with force, speed or ejecting something
unwanted.
Vocabulary questions, though figuring among the best-answered
questions too, seem to offer challenge, especially in non-fiction work, late on
in the paper and when the words are not part of everyday language or used in
everyday ways. Vocabulary is an important part of inference questions too,
partly because of how the questions are framed but also because they are
inextricably intertwined. Some vocabulary questions were done well, especially
early in the test and where options were given. Question 2 was 'find the
synonym for rival', with four options; 85% ticked the correct box, rejecting
'equal', 'neighbouring' and 'important' in favour of 'competing'.
Question 1 was a find-and-copy: a word meaning 'relatives from long ago'.
'Ancestors' was understood, or well guessed, by 79%.
Both 'rivals' and 'ancestors' are well within the experience
of most pupils.
When dealing with vocabulary, as part of inference as well
as on its own, it is unlikely that simply doing 'words of the week' or issuing lists to learn will be helpful. Words need to
be seen in varying contexts and their varying uses, nuances and implications
understood. Having a wide range of experience will clearly help build
vocabulary, but where this isn't possible (and many children I know have never
petted warthogs on the savannah, rowed a boat to a private island or ridden a
giraffe) then the vicarious experience of reading lots of books and doing lots
of drama can help. I would also suggest that lots of non-fiction reading can
extend the knowledge base required (even if one doesn’t go the whole hog on
Hirschian notions of cultural literacy) and introduce pupils to forms, sentence
structures and language beyond those they experience at home. Giving explicit
attention to how factual texts are constructed and having kids write them
themselves may be valuable in many ways.
To do well on the reading paper, say successful teachers I
have talked to, schools need to do some or all of the following:
· Teach pupils to 'game' the test: knocking off one-mark questions and not
spending aeons on three-markers.
· Have kids focus on the first two texts, thus increasing the time available
by a third and reducing panic too. Not recommended for those able readers
aiming to achieve at greater depth.
· Teach ways of approaching an unfamiliar text: chunking it, identifying the
key points of paragraphs, seeking out topic sentences, puzzling over unusual
uses of language, checking comprehension as they go…
· Give pupils the confidence and skill to take a stab at what a word might
mean and check if it makes sense in context.
· Teach word roots and seize opportunities to expand passive and active
vocabularies.
· Enjoy words publicly.
· Read widely across a range of genres and engage pupils in discussion of
books at every opportunity.
· Read non-fiction and write non-fiction, looking at how writers engage
readers and present facts and opinions.
· Read challenging texts, i.e. those which may take some effort to grasp
because they are older, more specialised or more complex. Tease out meaning
together and consider what makes them difficult.
· Organise lots of visits.
· Do lots of drama, including hot-seating and role play, so pupils develop
empathy and a vocabulary to describe experiences and emotions they couldn't
realistically have in day-to-day life.
· Make sure all pupils are middle class or better and have interested,
involved parents.