Topic 2: How to combine and synthesise data retrieved from traditional and innovative digital assessment

E-assessment methods

Self-assessment

Self-assessment refers to the ability of learners to evaluate the process of their learning as well as the quality of their completed tasks. It is considered an integral part of self-regulated learning, since the learner is engaged in monitoring and evaluating both the learning process and its outcomes. During self-assessment, the learner usually has to evaluate his/her learning against a set of performance criteria (Brown & Harris, 2014).

Self-assessment is most effective when the learner engages in critical reflection, which can lead to significant insights and enhance self-understanding. During self-reflection, learners have to be able to examine their thoughts and emotions, and to question their assumptions and the way they perceive and interpret events, while taking external factors into consideration (Desjarlais & Smith, 2011; Melrose, 2017). Through critical reflection, the learner may change his/her thinking and consider new ideas. This, in turn, may prompt incidental learning (unexpected learning that is not related to predetermined goals) and open learners' thinking beyond the boundaries of a particular discipline or learning event (Melrose, 2017).

Technology-supported self-assessment

Research on the impact of assessment modes during self-assessment supports these findings. High school students engaged in self-assessment in physics through computer/mobile devices showed increased motivation compared to paper-based assessment. Apart from increased motivation, they also demonstrated higher learning performance. Most importantly, low achievers had the highest gains compared to medium- and high-achieving learners (Nikou & Economides, 2016). Furthermore, online self-assessment predicted exam results even when class attendance was taken into account (Buchanan, 2001) and improved final exam pass rates (Ćukušić, Garaća & Jadrić, 2014). Similarly, learners who engaged in computer-based self-assessment achieved 10% better exam results compared to those who did not (Wilson, Boyd, Chen & Jamal, 2011).

Evidence on self-assessment

A systematic review of studies has demonstrated the positive impact of self-assessment on learning and achievement across a range of grades and subjects (Brown & Harris, 2014) and on students' self-regulated learning strategies and self-efficacy (Panadero et al., 2017). It has been suggested that engagement in self-assessment fosters deeper learning and therefore better performance. This, in turn, generates feelings of worth and a perception of improved capability that increase learners' self-efficacy. Further evidence points out that self-efficacy was one of the constructs with the strongest effect on learning for adults, along with goal level, persistence and effort (Sitzmann & Ely, 2011).

Learners' outcomes differ according to their ability, although the evidence is contradictory: low-performing students have been found to make larger learning gains (Sadler & Good, 2006), while other researchers suggest that average students who are more accurate in their self-assessment benefit the most (Boud, Lawson & Thompson, 2013). Conclusively, self-assessment is an essential component of innovative assessment, not only for improving performance but above all as a valuable means for learners' empowerment and self-sustained learning.

Peer-assessment

Peer-assessment refers to “a reciprocal process whereby students produce feedback reviews on the work of peers and receive feedback from peers on their own work” (Nicol, Thomson & Breslin, 2014: 102). Peer-assessment can be formative or summative, quantitative (providing grades) or qualitative (providing extended verbal feedback), and a variety of products can be peer-assessed, such as written assignments, presentations, portfolios, oral statements and scientific problems (Topping, 2017).

By producing reviews, learners have the opportunity to think critically, to apply criteria and to engage in reflection. Nicol and colleagues (2014: 116) have suggested that during the reviewing process learners evaluate peer work “against an internal representation of their own work”. Apart from the external criteria provided by the educator, learners use implicit criteria derived from their own experience of completing an assignment similar to their peers'. When students have to review several pieces of peer work, they are exposed to a greater range of possibilities than the alternatives offered by one person, even if that person is an expert. In turn, the learner may generate richer criteria; most importantly, the experience of applying such criteria in practice has been shown to facilitate internalisation and transfer of learning (Price & O’Donovan, 2006; Nicol et al., 2014).

Peer-assessment (continued)

Learners find it easier to analyse others' work than their own because they can adopt a distanced perspective. Furthermore, by reviewing a variety of examples they gradually become aware of what the desired performance looks like (Black, Harrison, Lee, Marshall & Wiliam, 2003). Reinholz (2016) suggests that through peer assessment learners develop objective lenses, which they can later apply to their own work. They also have to explain their own reasoning, which promotes self-awareness. Therefore, apart from developing communication skills and conceptual understanding, peer-assessment supports the development of self-assessment (Black et al., 2003). Finally, the act of critical appraisal will assist learners in their future careers, where they will have to appraise and comment on the work or performance of others; it also enhances their ability to produce quality work and thereby prompts the development of their professional skills (Topping, 2017).

Conclusively, peer-assessment becomes a constructive task through which the learner has to receive and give feedback, provide informed judgements, extract meaning and implement suggestions for improvement; yet practice and constant monitoring by educators are needed if optimal learning outcomes are to be achieved. Most importantly, it fosters a sense of shared purpose and responsibility for learning, which empowers learners and prepares them for their future learning needs.

Technology-supported peer-assessment

Students who participated in peerScholar reported that they liked the anonymity and reading the opinions of their peers, and they acknowledged that peer-feedback helped them to improve their work (Collimore, Paré & Joordens, 2015). Most importantly, peer-assessment assisted the development of critical thinking skills, as students had to examine their peers' assignments, point out the strengths and weaknesses of their work, justify their comments and make suggestions for improvement (Paré & Joordens, 2008, 2009). This, in turn, influenced their own work, as they became more competent in applying assessment criteria (Li, Liu & Steckelberg, 2010) and developed self-regulatory strategies (Gikandi & Morrow, 2016). Interestingly, peer and teacher marks were very similar, and the evidence indicates that five to six peer assessors are optimal for a valid outcome (Paré & Joordens, 2009).
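To make the last point concrete, the sketch below shows one simple way several peer marks could be aggregated into a single outcome. It is a minimal illustration only: the median rule and the mark values are assumptions, not the method actually used in peerScholar.

    # Minimal sketch: aggregating marks from a handful of peer assessors.
    # The median rule and the example marks are illustrative assumptions,
    # not peerScholar's actual aggregation method.
    from statistics import median

    peer_marks = [7, 8, 6, 8, 7]   # marks from five peer assessors (0-10 scale)
    teacher_mark = 7.5

    aggregated = median(peer_marks)
    print(f"peer median: {aggregated}, teacher: {teacher_mark}")
    # With about five to six assessors, the aggregated peer mark tends to
    # track the teacher's mark closely (Paré & Joordens, 2009).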

Evidence on peer-assessment

Systematic reviews of studies point out the positive effects of peer-assessment on learners' achievement (van Gennip, Segers & Tillema, 2009). In relation to the quality of peers' feedback, the use of justifications significantly improved performance, although the effect diminished for high-performing students (Gielen, Peeters, Dochy, Onghena & Struyven, 2010). There is also evidence that peer assessment has a positive impact on learners' motivation (Hsia, Huang & Hwang, 2016; Lai & Hwang, 2015), creativity (Hwang, Hung & Chen, 2014), self-regulation skills (Gikandi & Morrow, 2016), self-efficacy (Hsia et al., 2016), critical thinking (Harrison, O'Hara & McNamara, 2015; Lai & Hwang, 2015; Nicol et al., 2014), problem-solving skills (Hwang et al., 2014; Moore & Teather, 2013) and overall enhancement of student learning and performance (Hsia et al., 2016; Hwang et al., 2014; Kablan, 2014; Mulder, Baik, Naylor & Pearce, 2014).

Nevertheless, the effectiveness of peer-assessment depends on several aspects. In particular, learners should have opportunities to give and receive peer-feedback more than once on a particular task, to discuss the feedback they give and receive (Gikandi & Morrow, 2016; Reinholz, 2016), and to direct attention to the learning task, task-processing strategies and self-regulation strategies instead of the 'self' (Hattie & Timperley, 2007).

Digital badges

Digital badges are visual rewards for otherwise intangible accomplished tasks and competencies, providing an account of one's lifelong learning trajectory. They may refer to either autonomous or prescribed learning pathways and are awarded by groups, institutions or organisations (Frederiksen, 2013; Gibson, Ostashewski, Flintoff, Grant & Knight, 2013; Anderson & Staub, 2015; O'Byrne, Schenke, Willis & Hickey, 2015; Liyanagunawardena, Scalzavara & Williams, 2017; Carey & Stefaniak, 2018; Hofer et al., 2018). They are available online and contain metadata (e.g. information about the issuer, the evaluation criteria, and the process and result of the accomplishment) that validate acquired skills (Gibson et al., 2013; Anderson & Staub, 2015; Devedžić & Jovanović, 2015; Ellis, Nunn & Avella, 2016; Eaglen Bertrando, 2017) and acknowledge prior learning (Lius, 2016). Some badges are credentials of learning within a closed system (e.g. Duolingo for foreign languages), yet most of them are open and their metadata can be transferred into other systems (Farmer & West, 2016).
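To illustrate what such metadata can look like, the sketch below models a badge record loosely inspired by the Open Badges assertion idea. The field names, issuer and URLs are invented for illustration and do not reproduce any official schema.

    # Minimal sketch of the metadata a digital badge can carry (issuer,
    # criteria, evidence of the accomplishment). Loosely inspired by the
    # Open Badges approach; field names and values are assumptions.
    import json
    from dataclasses import dataclass

    @dataclass
    class BadgeMetadata:
        name: str          # human-readable badge title
        issuer: str        # group, institution or organisation awarding it
        criteria_url: str  # evaluation criteria the learner had to meet
        evidence_url: str  # process and result of the accomplishment
        issued_on: str     # ISO 8601 date of the award

    badge = BadgeMetadata(
        name="Peer Feedback Practitioner",
        issuer="https://example.edu",  # hypothetical issuer
        criteria_url="https://example.edu/badges/peer-feedback/criteria",
        evidence_url="https://example.edu/evidence/learner-42",
        issued_on="2024-05-01",
    )

    # Because the metadata is plain structured data, it can be exported
    # and transferred into other (open) systems, e.g. as JSON.
    print(json.dumps(badge.__dict__, indent=2))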

Benefits of digital badges

There is evidence that digital badges can overcome the assessment challenges of traditional courses, as they can recognise diverse learning trajectories and competencies that were previously not acknowledged, such as 21st-century skills and social skills (Abramovich, 2016; Farmer & West, 2016). They have emerged as a response to the digital revolution, shifting achievement measurement from exams to personalised accomplishments (O'Byrne et al., 2015). Moreover, badging can bridge formal and informal learning, as it can strengthen the learning outcomes of traditional degree programs (Carey & Stefaniak, 2018).

Digital badges encourage learners to personalise performance by planning in advance, and even to select content and criteria that are relevant to their preferences and needs (Farmer & West, 2016). Learners can develop their own learning path and accomplish a task in small fractions (granular learning), following the pace that suits them (Brauer & Siklander, 2017; Eaglen Bertrando, 2017; Carey & Stefaniak, 2018). Educators, in turn, provide scaffolding, guidance and support, and encourage peer- and self-assessment (Jovanovic & Devedzic, 2014; Anderson & Staub, 2015; Devedžić & Jovanović, 2015). In this way, learners can self-regulate their professional development. There are successful examples of collaboration between universities and professional organisations that have developed badging programs. In the United States, the National Science Teachers Association collaborated with NASA and Penn State University to develop 63 professional development activities for educators. Educators were free to select activities, create their own learning journey and even decide on their level of achievement (high achievement: badge award; low achievement: stamp award) (Farmer & West, 2016). Given that both educators and learners had choices, the design of the program encouraged the development of autonomy and self-direction.

Evidence on effectiveness

Research points out that digital badges have a positive impact on learners' participation. Learners in MOOC courses with a badge system participated five times more (voting, posing questions and responding to questions) compared to courses without a badge system (Anderson, Huttenlocher, Kleinberg & Leskovec, 2013). In another study, students who had access to a badging system were significantly more engaged with the online learning tool (PeerWise) and answered more questions compared to those who did not have access to badges. Yet there was no effect on the number of learners' questions (Denny, 2013).

Simulations

Simulations create scenario-based environments that imitate the real world. They are dynamic tools in which learners can apply their knowledge, practise skills, adopt various roles and experiment with different strategies in a safe environment. Most importantly, learners can observe the outcomes of their actions and thereby assume responsibility for their decisions (Vlachopoulos & Makri, 2017). Simulations are also integrated in many games. Simulation games/scenarios are widely used in Health Sciences, Biology and Business Marketing, and are considered ideal instruments for situated learning and for transferring knowledge to the workplace (Lukosch, Kurapati, Groen & Verbraeck, 2016).

Benefits of simulation

Investigation of students' mistakes has shown that simulations are ideal for training in decision-making within complex and dynamic situations (Pasin & Giroux, 2011; Lin & Tu, 2012). The most effective instructional design features for simulation-based education are: variation in task complexity; opportunities for repetitive practice; practice distributed over a period of time; learners' cognitive engagement (through task variation, intentional task sequencing, feedback and multiple repetitions); the use of multiple learning strategies; training tailored to individual learning needs; mastery learning of a clearly defined standard of performance; provision of feedback during or after the simulation activity; longer time in practice; and variation in the clinical context (Cook et al., 2013).

The role of the instructor is important, as s/he has to emphasise the learning goals and to facilitate and support learners when new information and higher-order skills are involved (Kovalik & Kuo, 2012; Wouters & van Oostendorp, 2013). In particular, s/he has to prompt students to formulate hypotheses, describe observations, provide explanations and interpret the context in order to construct knowledge and deepen their understanding (Hämäläinen & Oksanen, 2014).

Research has also highlighted the benefits of debriefing, which constitutes an essential component of simulation-based education (Tannenbaum & Cerasoli, 2012/2013).

Debriefing

Debriefing refers to a discussion between two or more individuals in which aspects of performance are analysed with the aim of gaining insight that impacts professional practice. It is a form of formative assessment, as the new insights are co-created by the instructor and the learner during discussion, and it aims to improve learners' current performance through constructive feedback (Eppich & Cheng, 2015). There are various models of debriefing, which usually use methods such as self-assessment, focused facilitation, directive assessment or a combination of them. In self-assessment, the learner has to identify what went well and what problems occurred, and suggest solutions to remedy them. During focused facilitation, the learner has to focus on performance deficits, discuss the reasons for their occurrence and identify solutions. The discussion may take place between the learner and an expert, but peers may also participate. During directive assessment, the instructor provides feedback in a didactic manner: s/he clarifies important learning points and provides information when knowledge gaps or performance deficits are identified (Cheng et al., 2015).

Evidence on effectiveness

Various studies suggest that simulations assist self-assessment (Arias Aranda, Haro Domiguez & Romerosa Martinez, 2010), higher-order thinking (Crocco, Offenholley & Hernandez, 2016) and the development of complex cognitive skills (Siewiorek, Saarinen, Lainema & Lehtinen, 2012), all of which facilitate deep learning. When simulated scenarios incorporate problem solving and reflective practices, metacognitive thinking is considerably enhanced (Hou, 2015). Multi-role simulations in which students have to develop arguments, make judgements and evaluate situations also assist the development of critical thinking and self-awareness (Silvia, 2012). In addition, when the learner has control over the level of difficulty and receives feedback after the simulation, self-efficacy and transfer of learning are significantly improved (Gegenfurtner, Quesada-Pallarès & Knogler, 2014).

Learning analytics

As a general premise, learning analytics tools can aggregate data and generate information about learners' behaviour and activities (e.g. learners' learning records, strategies applied, learning content accessed, questions and answers posed, learners' engagement with the online system). Based on such information, the system can provide intelligent feedback to both educators and learners (Li & Chen, 2013), which can be presented in various formats, including information visualisation within dashboards. As a field, learning analytics has mainly focused on areas such as performance prediction, detection of at-risk students, data visualisation, intelligent feedback, course recommendation, estimation of learners' skills and detection of their behavioural patterns, planning and scheduling, analysis of social networks and the development of concept maps (Sin & Muthu, 2015).
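As a minimal sketch of this general premise, the example below aggregates raw activity records into simple per-student engagement counts and flags potentially at-risk students. The record format, event names and threshold are assumptions for illustration; they do not correspond to any specific platform's API.

    # Minimal sketch of learning-analytics-style aggregation: count logged
    # events per student and flag low engagement. Event names and the
    # threshold are illustrative assumptions, not a real platform's schema.
    from collections import defaultdict

    # (student_id, event_type) pairs as they might appear in an activity log
    events = [
        ("s1", "content_access"), ("s1", "question_posted"),
        ("s1", "quiz_attempt"), ("s2", "content_access"),
    ]

    counts = defaultdict(lambda: defaultdict(int))
    for student, event in events:
        counts[student][event] += 1

    AT_RISK_THRESHOLD = 3  # assumed cut-off: fewer than 3 logged events
    for student, by_event in counts.items():
        total = sum(by_event.values())
        status = "at risk" if total < AT_RISK_THRESHOLD else "engaged"
        print(student, dict(by_event), "->", status)

Such counts are exactly the kind of data a dashboard would visualise; as the next paragraph notes, the harder problem is turning them into actionable pedagogical interventions.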

Systematic reviews point out that most research focuses on student prediction and on the technical aspects of data mining (Papamitsiou & Economides, 2014; Sin & Muthu, 2015). Moreover, even though learning analytics tools often focus on visualising learner engagement and activity and on providing early alerts, these data visualisations are not necessarily 'actionable' in the sense that learning analytics should eventually lead to a targeted pedagogical intervention; in other words, they do not reveal what actions should be taken to improve learning and teaching. In addition, efforts focus less on innovative pedagogical processes and practices, or on helping educational organisations to fully embrace the digital era (Ferguson et al., 2016).

Evidence of the use and effectiveness

Improving students’ learning habits:

CLARA is a tool that aims to make students aware of their learning dispositions (the habits of mind they bring to their learning). The survey tool platform generates a 'learning power' profile visualisation for each student, together with interventions based on these learning profiles. In addition, students receive coaching and mentoring from trained peers and staff. The tool was developed by the University of Technology Sydney, and a case study is provided in Ferguson and colleagues (2016: 121).

Helping students to reflect:

Open Essayist is a tool that provides automated feedback to learners on draft essays in order to support their reflection and development. It presents a computer-based analysis of the most important sections and key words in a draft, so that learners can compare these with what they intended to convey and adjust their writing in the light of that comparison (more information can be found in Ferguson et al., 2016: 64).
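To give a feel for this kind of analysis, the toy sketch below surfaces the most frequent content words in a draft so that a writer can compare them with what they intended to convey. It is a deliberately simplified stand-in: the frequency-based scoring and the stop-word list are assumptions, not Open Essayist's actual algorithm.

    # Toy sketch of key-word extraction for essay feedback: count content
    # words and return the most frequent ones. The stop-word list and the
    # frequency-based scoring are assumptions, not Open Essayist's method.
    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that",
                  "it", "their", "on"}

    def key_words(draft: str, top_n: int = 5) -> list:
        words = re.findall(r"[a-z']+", draft.lower())
        content = [w for w in words if w not in STOP_WORDS and len(w) > 2]
        return [w for w, _ in Counter(content).most_common(top_n)]

    draft = ("Feedback supports reflection. Reflection on feedback helps "
             "learners adjust their writing and deepen their learning.")
    print(key_words(draft))  # e.g. ['feedback', 'reflection', ...]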

Metrics for soft skills 

Hard and soft skills badges

Under this topic we have reviewed e-assessment methods (self- and peer-assessment, digital badges, simulations and debriefing, and learning analytics) and the evidence on their use and effectiveness, in order to build the capability to combine and synthesise data retrieved from traditional and innovative digital assessment.

Grading Soft Skills (GRASS) was a three-year research project financially supported by the EU (project reference number: 543029-LLP-1-2013-1-RS-KA3-KA3MP), focusing on representing the soft skills of learners of various ages and at different levels of education in a quantitative, measurable way, so that these skills can become the subject of formal validation and recognition.

https://sites.google.com/site/llpgrassproject/

Think about a soft skill that could be represented by a badge and about what is needed as a key indicator for reaching that achievement.

Define one competence badge that you need for your project.
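As a purely illustrative aid for the two exercises above, the sketch below shows one way a soft-skill competence badge and its key indicators might be expressed. The skill ("Collaboration"), the indicators and the thresholds are invented for this example; they are not taken from the GRASS project materials.

    # Hypothetical worked example: a "Collaboration" competence badge with
    # measurable key indicators. All names and thresholds are invented for
    # illustration and are not drawn from the GRASS project.
    collaboration_badge = {
        "competence": "Collaboration",
        "key_indicators": {
            "peer_reviews_completed": 4,      # gave feedback on >= 4 peer works
            "feedback_items_implemented": 2,  # acted on >= 2 received suggestions
            "group_tasks_contributed": 3,     # documented contributions to 3 tasks
        },
    }

    def badge_earned(evidence, badge):
        """Award the badge only when every key indicator threshold is met."""
        return all(evidence.get(k, 0) >= v
                   for k, v in badge["key_indicators"].items())

    print(badge_earned({"peer_reviews_completed": 5,
                        "feedback_items_implemented": 2,
                        "group_tasks_contributed": 3},
                       collaboration_badge))  # True

The point of the exercise is the middle block: each key indicator should be observable and countable, so that the badge can be awarded (and its metadata validated) without subjective judgement alone.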

Further reading

E-assessment of prior learning: a pilot study of interactive assessment of staff with no formal education who are working in Swedish elderly care

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3998952/pdf/1471-2318-14-52.pdf

Case studies of innovative assessment

https://publications.jrc.ec.europa.eu/repository/bitstream/JRC118113/jrc118113_1._evidence_of_innovative_assessment._literature_review_and_case_studies.pdf

Digital innovation: A review and synthesis

https://onlinelibrary.wiley.com/doi/abs/10.1111/isj.12193

Learning Analytics for Peer-assessment: (Dis)advantages, Reliability and Implementation

https://www.researchgate.net/publication/318091790_Learning_Analytics_for_Peer-assessment_Disadvantages_Reliability_and_Implementation