In the Netherlands a task force, supported by the Dutch institute for curriculum development, Stichting Leerplan Ontwikkeling (SLO), has been assembled to define core objectives and learning goals for digital literacy, to be finalized by 2019. The SLO defined digital literacy in terms of four elements: ICT basic skills, computational thinking, information skills, and media awareness. Computational thinking (CT), frequently taught through programming lessons, is often considered the most challenging component of digital literacy. Alongside this work on planning and curriculum building, teachers' perspectives, involvement, and professional development must be thoroughly investigated and prepared so that they can teach CT concepts with confidence. Because only a limited amount of research focuses on teachers, this study aimed to identify the needs of teachers who are getting started with teaching CT concepts. In the first part of the study, teachers' TPACK (Technological Pedagogical Content Knowledge) was examined through semi-structured interviews. Assessment of learning was identified as a major challenge and need: teachers reported being unable to measure how much, and what, a pupil has learned about CT concepts, and primarily used computer-based practical assignments as a formative assessment tool. In the second part of the study, a follow-up survey was created to gain more insight into the CT assessment tools in use and teachers' attitudes towards CT assessment. To test whether attitudes towards assessment and assessment-tool suitability differ for programming lessons, respondents rated statements and assessment-tool suitability in three contexts: education in general, their own subject, and programming lessons. Despite a noticeable decrease in the summative score and an increase in the formative score for programming lessons, no statistically significant difference was found across the three contexts when measuring latent variables for formative and summative assessment. However, several statistically significant differences were found for the suitability of assessment tools: for programming lessons, teachers rated non-practical and non-computer-based assessment tools significantly lower in suitability, while a set of small practical computer-based assignments was rated significantly higher. Future research could examine the arguments behind the suitability scores teachers gave and measure attitudes towards assessment for programming in general.
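
The abstract does not name the statistical procedures behind these comparisons. Purely as an illustrative sketch, the snippet below shows one common way ratings given by the same respondents could be compared across the three contexts (a Friedman test followed by a pairwise Wilcoxon signed-rank test); the file name, column names, and choice of tests are assumptions for illustration, not the study's actual analysis.

    import pandas as pd
    from scipy import stats

    # Hypothetical long-format survey export: one row per teacher per context,
    # with columns teacher_id, context, suitability_score.
    df = pd.read_csv("ct_assessment_survey.csv")

    # Reshape to wide format: one row per teacher, one column per context.
    wide = df.pivot(index="teacher_id", columns="context", values="suitability_score")

    # Friedman test: non-parametric repeated-measures comparison across the three
    # related samples (the same respondents rated all three contexts).
    stat, p = stats.friedmanchisquare(
        wide["general"], wide["own_subject"], wide["programming"]
    )
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

    # If the omnibus test is significant, pairwise Wilcoxon signed-rank tests
    # (with a multiple-comparison correction) can locate where the contexts differ.
    w_stat, w_p = stats.wilcoxon(wide["general"], wide["programming"])
    print(f"Wilcoxon, general vs. programming: W = {w_stat:.2f}, p = {w_p:.3f}")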