Assessment within the EYFS

Accountability dominates English state education, an ideology bolstered by student performance in standardised assessments. From September 2020, children will be required to complete six different statutory standardised tests before secondary school, three of which form part of the Early Years Foundation Stage (EYFS) and take place between the ages of two and five.

A concern with condensing a young child’s learning experience into an ideology propped up by scorecard metrics is that such tests do not typically determine how much a child has learned, how well they have learned and, in respect of the anonymity ruling in the Reception Baseline Assessment (RBA), who has learned what (Meisels, 2006, p. 2). While this discourse is not opposed to testing and acknowledges the requirement for accountability in public resource management, it argues that, by favouring decontextualised comparative developmental stage assessment, policymakers place the needs of an exclusive, commercial culture (Apple, 1996) before those of society’s youngest citizens.

Standardised accountability is evident throughout EYFS statutory documents (DoE, 2017; DoE, 2020), as are elements that consider a young child’s “…interests and learning styles” (DoE, 2017, p. 13). However, such elements run in opposition to performative agencies like the early years national statutory summative assessment (NSSA), created “to understand a child’s performance in relation to national expectations and comparisons” (DoE, 2020, p. 9).

National comparative assessments rely on the efficacy of standardised measures of attainment (Ball, 2003). These are first applied within the EYFS during the statutory progress check at age two. The efficacy of this assessment is debatable due to its extensive overlap with the ASQ-3 and ASQ:SE-2 assessments carried out within the Healthy Child Programme (HCP) (DoH, 2016). The HCP is conducted by local health visitors: registered nurses or midwives who have completed four years’ professional training before working “with families to give pre-school-age children the best possible start in life” (NHS, n.d.). Their depth of professional knowledge, combined with their access to the spheres of influence experienced by young children and the comprehensiveness of the ASQ-3 and ASQ:SE-2, means the gravity of the HCP objective far outweighs that of the EYFS. By comparison, the EYFS’ narrow focus on three areas of universal performativity makes the progress check’s value-add difficult to determine, unless its intentions look beyond those of a young child’s “best possible start in life” (DoE, 2017, p. 5).

By defining and imposing assessments such as the developmental stage progress check at age two, policymakers hold early years settings accountable for specific returns. Whether developmental stage testing is capable of delivering such accountability is questionable. Research in cognitive developmental neuroscience reveals that young children think and reason in a similar way to adults, with life experience the main factor differentiating their knowledge and know-how (Goswami, 2015). Such developments suggest neuroscientists “no longer widely believe[d] that there are different developmental stages in learning to think” (Goswami, 2015, p. 1). Further, as such assessments are based solely on what is considered preferential development, their design appears to promote characteristics contained within a narrow field of judgement (Ball, 2003): a field based on exclusive notions of educational normality, one created with little consideration of factors such as gender, race, class, religion, language and/or location (Bradbury, 2011), factors highly correlated with a young child’s learning journey (Raudenbush, 2005).

These considerations make the September 2020 rollout of the RBA a curious development. This scorecard test assesses age-four ability as a benchmark for progress achieved by age 11. In doing so, policymakers aim to hold primary schools accountable for student knowledge: knowledge linked to the characteristics of educational norms policymakers deem appropriate, and exclusive of the highly correlated factors previously outlined. Its inability to measure student learning growth rates means this “misleadingly objective and hyper-rational” (Ball, 2003, p. 217) aspiration is ignorant of the value added by an institution’s processes and practice, and assumes continuity between the EYFS and the National Curriculum by determining that, for example, age-four results in synthetic phonics directly correlate with age-11 reading ability. There is evidence to suggest this is not the case (Davis, 2013; Lyle, 2014; Rosen, 2012), something policymakers are not oblivious to (Hazell, 2019).

These are perpetual lines of enquiry with elusive resolution. That policymakers push for flawed accountability assessment strategies, strategies too rigid to translate “complex social processes and events” (Ball, 2003, p. 217) and authentic learning development, appears irrational. However, the data required by the OECD for both the heavily criticised International Early Learning Study (IELS) and the Programme for International Student Assessment (PISA) follow similarly flawed strategies (Urban & Swadener, 2016). These perpetual lines of enquiry are globally relevant. In buying into the neoliberal narrative of competitive educational excellence on a global scale, policymakers use educational strategy as an indicator of England’s global economic viability (Stevens & Weale, 2003). This makes these strategies an investment decision, one designed to alleviate society’s compulsion to compete “in the global race” (DoE, 2013, p. 9). Intentions are clear: the needs of society’s youngest citizens are eclipsed by those of an exclusive, commercial culture.

It is now understood why undeviating continuity between the EYFS and National Curriculum is required. Maintaining conformity to a structure of perpetual international comparison requires seamless and specific data flow, and as economic performance corresponds to educational performance (Stevens & Weale, 2003), any transition within a school environment requires order to maintain high levels of performativity. This requirement is made apparent at the inflection point between the two curriculums, the point where the early years NSSA is poised.

The NSSA spans from the final term of the year in which a child reaches age five to their last primary school year. Assessment within the early years NSSA is developmentally staged, with every child ranked within ‘school readiness’ subjects against the criteria of ‘emerging’, ‘expected’ and ‘exceeding’ (DoE, 2020, p. 14). This assessment is the first time a child faces the possibility of failure: the first time results are used to shoehorn children into decontextualised learning trajectories based on exclusive notions of educational normality. Judged and ranked, the young child’s learning is shaped into appropriate forms. School ready, National Curriculum ready, all “in the hope of producing certain desired effects and averting certain undesired events” (Rose, 1999, p. 52).

Yet a quiet revolution is afoot. Spurred by the elements of the EYFS considered at the outset of this discourse, early years assessment is undergoing a contextualised, value-led rebellion. Beyond the decontextualised accountability of standardised assessments, the EYFS Framework (DoE, 2017) enables day-to-day and termly interaction to focus on child-led evaluations based on ‘how’ a child learns rather than ‘what’ they know. Such Characteristics of Effective Learning (DoE, 2017) are recorded in the moment through evidence such as photos, videos, artwork and critical reflections: documented ‘wow’ moments that are a far cry from formal developmental stage accountability assessments focused on economic returns. These qualitative evaluations recognise diverse skill development and acknowledge young children as complex, diverse citizens who do not come in standardised forms (Robinson & Aronica, 2015, p. 160). Such EYFS elements are gaining traction beyond early years practice into Key Stage One, as primary schools seek funding beyond the State (Groundwork, n.d.) to enable such contextualised, value-led interaction (Ephgrave, 2012).

It appears the structural continuity required between the EYFS and National Curriculum flows both ways, and in doing so their rigid assessment structures may, one day, be required to become a little more messy, slightly more complex and uniquely child-led.

———-

Apple, M. (1996). Cultural politics and education. Buckingham: Open University Press.

Ball, S. (2003). The teacher’s soul and the terrors of performativity. Journal of Education Policy, 18(2), 215-228. doi: 10.1080/0268093022000043065

Bradbury, A. (2011). Learner Identities, Assessment and Equality in Early Years Education (Ph.D). University of London.

Bradbury, A. (2013). Understanding early years inequality. London: Routledge.

Davis, A. (2013). To read or not to read: decoding Synthetic Phonics. Impact, (20), 19-29. doi: 10.1111/2048-416x.2013.12000.x

Department of Education. (2013). More Great Childcare. Retrieved 30 November 2019 from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/219660/More_20Great_20Childcare_20v2.pdf

Department of Education. (2017). Statutory framework for the early years foundation stage. London. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/596629/EYFS_STATUTORY_FRAMEWORK_2017.pdf

Department of Education. (2020). Early years foundation stage profile handbook. London: Standards and Testing Agency.

Department of Health. (2016). Developing a public health outcome measure for children aged 2 – 2½ using ASQ-3™. London: Department of Health.

Ephgrave, A. (2012). The Reception Year in Action, revised and updated edition (2nd ed.). London: Routledge.

Goswami, U. (2015). Children’s cognitive development and learning. York: Cambridge Primary Review Trust. Retrieved from https://cprtrust.org.uk/wp-content/uploads/2015/02/COMPLETE-REPORT-Goswami-Childrens-Cognitive-Development-and-Learning.pdf

Groundwork. (n.d.). Retrieved 14 February 2020, from https://www.groundwork.org.uk/national-grants/grants_tesco-community-grants/

Hazell, W. (2019). Nick Gibb: The phonics wars are ‘over’. Retrieved 13 February 2020, from https://www.tes.com/news/nick-gibb-phonics-wars-are-over

Lyle, S. (2014). The limits of phonics teaching. School Leadership Today, (5.5), 68-74.

Meisels, S. (2006). Accountability in early childhood: No easy answers. Chicago: Erikson Institute.

NHS. (n.d.). Retrieved 14 February 2020, from https://www.healthcareers.nhs.uk/explore-roles/public-health/roles-public-health/health-visitor

Raudenbush, S. (2005). Newsmaker interview: How NCLB testing can leave some schools behind. Preschool Matters, 3(2), 11-12.

Robinson, K., & Aronica, L. (2015). Creative schools. London: Penguin Books.

Rose, N. (1999). Powers of freedom. Cambridge: Cambridge University Press.

Rosen, M. (2012). A major scandal? – government approved phonics schemes [Blog]. Retrieved from http://michaelrosenblog.blogspot.com/2012/06/government-approved-phonics-scheme.html

Stevens, P., & Weale, M. (2003). Education and Economic Growth [Conference Paper]. National Institute of Economic and Social Research. Retrieved 14 February 2020, from http://cee.lse.ac.uk/conference_papers/28_11_2003/martin_weale.pdf

Urban, M., & Swadener, B. (2016). Democratic accountability and contextualised systemic evaluation. International Critical Childhood Policy Studies, 5(1), 6-18.
