Designing effective assessment and feedback
Each perspective makes different assumptions about the nature of learning and suggests different approaches to assessment and feedback.
The associative perspective emphasises that learning is about acquiring competence through concept linking and bringing together component skills. Assessment might involve both small-scale testing of basic understanding and skills and more complex assignments where understanding and skills are assessed in larger contexts (for example, through projects or work-related assignments). Feedback would involve supporting learners as they build more complex understanding and skills.
The constructivist perspective emphasises that learners must actively construct their own understanding. Assessment would focus on the extent to which learners can structure and restructure material for different purposes without the help of others (for example, through inquiry-based tasks), and feedback would support learners in becoming more self-directed. Hence this approach requires that learners reflect, self-assess and generate feedback on their own learning. Assessment can engage or deter, develop or overburden, but it cannot be avoided in programmes of formal learning.
The social constructivist perspective emphasises the role of others in constructing understanding. Dialogue and collaboration are seen as key to learning success. Assessment would involve group tasks and assignments, sometimes with individual contributions being assessed. This perspective emphasises that feedback is not just teacher-provided but must be rich and varied, deriving also from peers during collaboration and discussion.
The situative perspective sees learning as arising from participation in communities of practice. Learners participate in many learning communities during their studies which prepare them to become members of professional communities (learning to think and act like a lawyer or an engineer, for example). This perspective is consistent with social constructivism but also emphasises identity formation. Assessment tasks would be authentic and modelled on what happens in professional practice; feedback would involve peers, disciplinary experts and those in relevant roles and professions.
These four learning perspectives should not be seen as incompatible. Indeed, learning designs for tasks, modules and programmes will invariably draw on a mix of these perspectives and on their different assessment and feedback approaches.
The Assessment Process
Assessment is a constant cycle of improvement. Data gathering is ongoing. The goal of assessment, whether for an academic department or a program, is to provide: (a) a clear conceptualization of intended student learning outcomes, (b) a description of how these outcomes are assessed and measured, (c) a description of the results obtained from these measures, and (d) a description of how these results validate current practices or point to changes needed to improve student learning.
Academic departments or programs need to constantly ask: What do we want students to know, do and appreciate, and how do we know that students are achieving the intended learning outcomes? After implementing an assessment plan and measuring student learning outcomes, departments, programs, and units need to analyze the results obtained and use those results to make necessary changes or improvements to the unit or program.
Steps towards effective technology enhanced assessment and feedback
- Applications of technology to assessment and feedback are embedded in the institutional and/or faculty vision for high-quality learning, teaching and assessment
- Principles of good assessment and feedback underpin the use of technology – for example, assessment designs exploit technology to motivate learning, encourage time on task, facilitate self-assessment and enable learners to act on feedback
- Applications of technology are informed by a clear understanding of the purpose of the task, the ICT skills and diverse needs of learners and the specific requirements of the contexts in which the assessment or feedback takes place
- Technology is used to facilitate enhancements previously difficult to achieve at scale such as peer assessment
- Optimum use is made of e-enabled assessment management and administration systems to monitor learners’ progress and improve teaching and learning
- Technology augments, streamlines or enhances current provision, and is not used for its own sake
This initial session raises these types of scoping questions:
- How broad and deep should the Digital Assessment be and where should we focus our efforts?
- What is the timeline/project schedule for the Digital Assessment?
- Which personnel will participate in the Digital Assessment?
Step 1: Clearly define and identify the learning outcomes
Each program should formulate between 3 and 5 learning outcomes that describe what students should be able to do (abilities), to know (knowledge), and appreciate (values and attitudes) following completion of the program. The learning outcomes for each program will include Public Affairs learning outcomes addressing community engagement, cultural competence, and ethical leadership.
Step 2: Select appropriate assessment measures and assess the learning outcomes
Multiple ways of assessing the learning outcomes are usually selected and used. Although direct and indirect measures of learning can be used, it is usually recommended to focus on direct measures of learning. Levels of student performance for each outcome are often described and assessed with the use of rubrics.
It is important to determine how the data will be collected and who will be responsible for data collection. Results are always reported in aggregate format to protect the confidentiality of the students assessed.
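As a concrete illustration, reporting rubric results in aggregate format might be sketched as follows. The rubric levels, ratings and function name here are hypothetical, not part of any particular assessment plan:

```python
from collections import Counter

# Hypothetical rubric levels for one learning outcome
LEVELS = ["beginning", "developing", "proficient", "exemplary"]

def aggregate_rubric_scores(scores):
    """Summarise per-student rubric ratings as aggregate percentages,
    so results can be reported without identifying individual students."""
    counts = Counter(scores)
    total = len(scores)
    return {level: round(100 * counts.get(level, 0) / total, 1)
            for level in LEVELS}

# Example: ratings for one outcome across a small cohort
ratings = ["proficient", "developing", "proficient",
           "exemplary", "beginning", "proficient"]
print(aggregate_rubric_scores(ratings))
```

Reporting only the percentage of students at each level, rather than individual marks, is one simple way to honour the confidentiality requirement noted above.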
Step 3: Analyze the results of the outcomes assessed
It is important to analyze and report the results of the assessments in a meaningful way.
Step 4: Adjust or improve programs following the results of the learning outcomes assessed
Assessment results are worthless if they are not used. This step is a critical step of the assessment process. The assessment process has failed if the results do not lead to adjustments or improvements in programs. The results of assessments should be disseminated widely to faculty in the department in order to seek their input on how to improve programs from the assessment results. In some instances, changes will be minor and easy to implement. In other instances, substantial changes will be necessary and recommended and may require several years to be fully implemented.
Designing assessment in a digital age
Dialogue and communication: Online interaction via forums, blogs, email and voice boards can enrich feedback and help to clarify learning goals and standards. Distance and time constraints can be overcome.
Immediacy and contingency: Interactive online tests and tools in the hand (such as voting devices and internet connected mobile phones) can facilitate learner-led, on-demand formative assessment. Rapid feedback can then correct misconceptions and guide further study.
Authenticity: Online simulations and video technologies can increase the discriminatory power of assessment and support risk-free rehearsal of real-world skills in professional and vocational education.
Speed and ease of processing: Assessment delivery and management systems can provide instant feedback to learners and practitioners, yielding robust information for curriculum review and quality assurance processes. Interoperability standards can facilitate transfer of data between institutional systems.
Self-evaluative, self-regulated learning: Activities such as peer assessment, collection of evidence and reflection on achievements in e-portfolios and blogs can generate ownership of learning and promote higher-order thinking skills, in turn improving performance in summative assessment.
Additionality: Technology can make possible the assessment of skills and processes that were previously difficult to measure, including the dynamic processes involved in learning. Technology can also add a personal quality to feedback, even in large-group contexts, and, through efficiencies gained from asynchronous communication and automated marking, can enable practitioners to make more productive use of their time.
Traditional unseen, time-constrained written exams
Traditional unseen written exams still make up the lion’s share of assessment in higher education, though in some disciplines, for example mathematics, engineering and sciences courses, this situation is considerably balanced by the inclusion of practical work, projects and other contributions to the evidence on the basis of which we grade and classify students.
- Relatively economical. Exams can be more cost-effective than many of the alternatives (though this depends on economies of scale when large numbers of students are examined, and also on how much time and money needs to be spent to ensure appropriate moderation of assessors’ performance). However, any form of assessment can only be truly said to be cost-effective if it is actually effective in its contribution to students’ learning.
- Equality of opportunity. Exams are demonstrably fair in that students have all the same tasks to do in the same way and within the same timescale. (However, not all things are equal in exams – ask any hay-fever sufferer, or candidate with menstrual problems).
- We know whose work it is. It is easier to be sure that the work being assessed was done by the candidate, and not by other people. For this reason, exams can be considered to be an ‘anti-plagiarism’ assessment device, and although there are instances of attempting to cheat in exam rooms, good invigilation practice and well-planned design of the room (and the questions themselves) can eliminate most cheating.
- Teaching staff are familiar with exams. Familiarity does not always equate with validity, but the base of experience that teaching staff already have with traditional unseen exams means that at least some of the problems arising from them are well known, and sometimes well-addressed.
- Exams cause students to get down to learning. Even if the assessment method has problems, it certainly causes students to engage deliberately with the subject matter being covered by exams, and this can be worthwhile particularly for those ‘harder’ physical sciences areas where students may not otherwise spend the time and energy that is needed to make sense of the subject matter.
- Students get little or no feedback about the detail of their performance, which is therefore wasted as far as feedback is concerned. Though it can be argued that the purpose of such exams is measurement rather than feedback, the counter-argument is that most exams represent lost learning opportunities because of this lack of feedback. Where students are given the opportunity to see their marked scripts (even with no more feedback than seeing the subtotals and total marks awarded along the way), they learn a great deal about exactly what went wrong with some of their answers, as well as having the chance to receive confirmation regarding the questions they answered well.
- Badly set exams encourage surface learning, with students consciously clearing their minds of one subject as they prepare for exams in the next subject. In a discipline such as physical sciences, it is inappropriate to encourage students to put out of their minds important subject areas, where they will need to retain their mastery for later stages in their studies.
- Technique is too important. Exams tend to measure how good students are at answering exam questions, rather than how well they have learned. In physical sciences exams lending themselves to problems and calculations, students may miss out on the need to develop other important skills, such as writing effectively and expressing themselves coherently.
- Exams only represent a snapshot of student performance, rather than a reliable indicator of it. How students perform in traditional exams depends on so many other factors than their grasp of the subject being tested. Students’ state of mind on the day, their luck or otherwise in tackling a good question first, their state of health, and many other irrelevant factors creep in.
Open-book exams
In many ways these are similar to traditional exams, but with the major difference that students are allowed to take sources of reference material in with them. Alternatively, candidates may be issued with a standard set of resource materials that they can consult during the exam, and are informed in advance about what will be available to them, so that they can prepare themselves by practising to apply the resource materials. Sometimes, in addition, the ‘timed’ element is relaxed or abandoned, allowing students to answer questions with the aid of their chosen materials, and at their own pace.
These have many of the advantages of traditional exams, with the addition of:
- Less stress on memories! The emphasis is taken away from students being required to remember facts, figures, formulae, and other such information.
- Measuring retrieval skills. It is possible to set questions which measure how well students can use and apply information, and how well they can find their way round the contents of books and even databases.
- Slower writers helped? If coupled with a relaxation in the timed dimension (e.g. a nominal ‘2-hour’ paper where students are allowed to spend up to three hours if they wish), some of the pressure is taken away from those students who happen to be slower at writing down their answers (and also students who happen to think more slowly).
- Not enough books or resources! It is hard to ensure that all students are equally equipped regarding the books they bring into the exam with them. Limited stocks of library books (and the impossibility of students purchasing their own copies of expensive books) means that some students may be disadvantaged.
- Need bigger desks? Students necessarily require more desk-space for open-book exams if they are to be able to use several sources of reference as they compose their answers to exam questions. This means fewer students can be accommodated in a given exam room than with traditional unseen exams, and therefore open book exams are rather less cost-effective in terms of accommodation and invigilation.
Structured exams
These include multiple-choice exams, and several other formats where students are not required to write ‘full’ answers, but are involved in making true/false decisions, or identifying reasons to support assertions, or filling in blanks or completing statements, and so on. It is of course possible to design mixed exams, combining free-response traditional questions with structured ones. Some kinds of structured exams can be computer-based, and technology can be used both to process students’ scores and to provide feedback to them. In the following discussion, I will concentrate on the benefits and drawbacks of multiple-choice questions. Many of the same points also apply, at least in part, to other types of structured exam questions, such as true-false, short-answer, and sequencing questions.
- Greater syllabus coverage: it is possible, in a limited time, to test students’ understanding of a much greater cross-section of a syllabus than could be done in the same time by getting students to write in detail about a few parts of the syllabus.
- Multiple choice exams test how fast students think, rather than how fast they write. The level of their thinking depends on how skilled the question-setters have been.
- Students waste less time. Questions can already show, for example, formulae, definitions, equations and statements (correct and wrong), and students can be asked to select the correct one without having to provide it themselves.
- Saving staff time and energy. With optical mark readers, it is possible to mark paper-based multiple choice exams very cost-effectively, and avoid the tedium and subjectivity which affect the marking of traditional exams.
- Computer-based tests can save even more time. As well as processing all of the scores, computer software can work out how each question performs, calculating the discrimination index and facility value of each question. This allows the questions which work well as testing devices to be identified, and selected for future exams.
- Testing higher-level skills? Multiple choice exams can move the emphasis away from memory, and towards the ability to interpret information and make good decisions. However, the accusation is often made that such exams seem only to test lower cognitive skills, and there are numerous examples which seem to support this argument. There are, however, examples where high level skills are being tested effectively, and more attention needs to be given to the design of such testing to build on these.
- The guess factor. Students can often gain marks by lucky guesses rather than correct decisions.
- Designing structured questions takes time and skill. It is harder to design good multiple-choice questions than it is to write traditional open-ended questions. In particular, it can be difficult to think of the last distractor or to make it look sufficiently plausible. It is sometimes difficult to prevent the correct answer or best option standing out as being the one to choose.
- Black and white or shades of grey? While it is straightforward enough to reward students with marks for correct choices (with zero marks for choosing distractors), it is more difficult to handle subjects where there is a ‘best’ option, and a ‘next-best’ one, and so on.
- Security of computer-based tests. Where multiple-choice exams are being set on computers, check that the tests are secure. Students can be ingenious at getting into computer files that are intended to be secret!
- The danger of impersonators? The fact that exams composed entirely of multiple-choice questions do not require students to give any evidence of their handwriting increases the risk of substitution of candidates.
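The item statistics mentioned above – the facility value and discrimination index of each question – are simple to compute once candidates' responses are on file. A minimal sketch follows; the function names and the 27% group fraction are illustrative, and the upper-lower method shown is one common classical approach, not the only one:

```python
def facility_value(item_correct):
    """Facility value: the proportion of candidates who answered the
    item correctly (item_correct is a list of 1s and 0s)."""
    return sum(item_correct) / len(item_correct)

def discrimination_index(item_correct, total_scores, fraction=0.27):
    """Classical upper-lower discrimination index: rank candidates by
    total exam score, take the top and bottom groups, and compare the
    item's facility in each. Values near +1 mean the item separates
    strong from weak candidates well; values near 0 (or negative)
    flag a question that is not working as a testing device."""
    ranked = sorted(zip(total_scores, item_correct), reverse=True)
    n = max(1, int(len(ranked) * fraction))
    upper = [correct for _, correct in ranked[:n]]
    lower = [correct for _, correct in ranked[-n:]]
    return (sum(upper) - sum(lower)) / n

# Illustrative data: whether each of 10 candidates got this item right,
# and their total exam scores
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [95, 88, 82, 75, 70, 65, 60, 55, 40, 35]
print(facility_value(item), discrimination_index(item, totals))
```

Questions with a healthy facility value and a high discrimination index are the ones worth selecting for future exams, as the text notes.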
Essays
In some subjects, assessment is dominated by essay-writing. Traditional (and open-book) exams often require students to write essays. Assessed coursework often takes the form of essays. It is well known that essay-answers tend to be harder to mark, and more time-consuming to assess, than quantitative or numerical questions. There are still some useful functions to be served by including some essay questions in exams or coursework assessments, but perhaps we need to face up to the fact that reliability in marking essays is often unsatisfactory, and refrain from using essays to the extent that they are used at present.
- Essays allow for student individuality and expression. They are a medium in which the ‘best’ students can distinguish themselves. This means, however, that the marking criteria for essays must be flexible enough to be able to reward student individuality fairly.
- Essays can reflect the depth of student learning. Writing freely about a topic is a process which demonstrates understanding and grasp of the material involved.
- Essay-writing is a measure of students’ written style. It is useful to include good written communication somewhere in the overall assessment strategy. The danger of students in science disciplines missing out on the development of such skills is becoming increasingly recognised.
- Essay-writing is very much an art in itself. Students from some backgrounds are disadvantaged regarding essay-writing skills as they have simply never been coached in how to write essays well. For example, a strong beginning, a coherent and logical middle, and a firm and decisive conclusion combine to make up the hallmarks of a good essay. The danger becomes that when essays are overused in assessment strategies, the presence of these hallmarks is measured time and time again, and students who happen to have perfected the art of delivering these hallmarks are repeatedly rewarded irrespective of any other strengths and weaknesses they may have.
- Essays take a great deal of time to mark objectively. Even with well-thought-out assessment criteria, it is not unusual for markers to need to work back through the first dozen or so of the essays they have already marked, as they become aware of the things that the best students are doing with the questions, and the difficulties experienced by other students.
- ‘Halo effects’ are significant. If the last essay answer you marked was an excellent one, you may tend to approach the next one with greater expectations, and be more severe in your assessment decisions based upon it.
Assessed reports
Assessed reports make up at least part of the coursework component of many courses. Report-writing is one of the most problematic study-skills areas in which to work out how and what to advise students to do to develop their approaches. The format, layout, style and nature of an acceptable report varies greatly from one discipline to another, and even from one assessor to another in the same discipline. The most common kinds of report that many students write are those associated with their practical, laboratory or field work. Several of the suggestions offered in this section relate particularly to report-writing in science and engineering disciplines, but can readily be extended to other subject areas.
- Report writing is a skill relevant to many jobs. In many careers and professional areas that physical sciences students are likely to meet, the ability to put together a convincing and precise report is useful. Report writing can therefore provide a medium where specific skills relevant to professional activity can be addressed.
- Reports can be the end-product of useful learning activities. For example, the task of writing reports can involve students in research, practical work, analysis of data, comparing measured findings with literature values, prioritising, and many other useful processes. Sometimes these processes are hard or impossible to assess directly, and reports provide secondary evidence that these processes have been involved successfully (or not).
- Report-writing can allow students to display their talents. The fact that students can have more control when they write reports than when they answer exam questions, allows students to display their individual strengths.
- Collaboration can be difficult to detect. For example with laboratory work, there may be a black market in old reports! Also, when students are working in pairs or groups in practical work, it can be difficult to set the boundaries between collaborative work and individual interpretation of results.
- Report-writing can take a lot of student time. When reports are assessed and count towards final grades, there is the danger that students spend too much time writing reports at the expense of getting to grips with their subject matter in a way which will ensure that they succeed in other forms of assessment such as exams.
- Report-marking can take a lot of staff time. With increased numbers of students, it becomes more difficult to find the time to mark piles of reports and to maintain the quality and quantity of feedback given to students about their work.
Practical work
Many areas of study involve practical work, but it is often much more difficult to assess such work in its own right; assessing reports of practical work may only involve measuring the quality of the end-product of the practical work, and not the work itself, compromising the validity of the assessment. The following discussion attempts to help you to think of ways of addressing the assessment of the practical work itself.
- Practical work is really important in some disciplines. In many areas of physical sciences, practical skills are just as important as theoretical competences. Students proceeding to research or industry will be expected to have acquired a wide range of practical skills.
- Employers may need to know how good students’ practical skills are (and not just how good their reports are). It is therefore useful to reserve part of our overall assessment for practical skills themselves, and not just the final written products of practical work.
- Practical work is learning-by-doing. Increasing the significance of practical work by attaching assessment to it helps students approach such work more earnestly and critically.
- It is often difficult to assess practical work in its own right. It is usually much easier to assess the end-point of practical work, rather than the processes and skills involved in their own right.
- It can be difficult to agree on assessment criteria for practical skills. There may be several ways of performing a task well, requiring a range of alternative assessment criteria.
- Students may be inhibited when someone is observing their performance. When doing laboratory work, for example, it can be very distracting to be watched! Similar considerations apply to practical exercises such as interviewing, counselling, advising, and other ‘soft skills’ which are part of the agenda of many courses.
Portfolios
Building up portfolios of evidence of achievement is becoming much more common, following on from the use of Records of Achievement at school. Typically, portfolios are compilations of evidence of students’ achievements, including major pieces of their work, feedback comments from tutors, and reflective analyses by the students themselves. It seems probable that in due course, degree classifications will no longer be regarded as sufficient evidence of students’ knowledge, skills and competences, and that profiles will be used increasingly to augment the indicators of students’ achievements, with portfolios to provide in-depth evidence. With physical sciences, some of the assessment formats we have already addressed make useful components of portfolios of evidence, particularly good examples of practical reports. Probably the most effective way of leading students to generate portfolios is to build them in as an assessed part of a course. Here, the intention is to alert you to some of the more general features to take into account when assessing student portfolios. You may, however, also be thinking about building your own portfolio to evidence your teaching practice, and can build on some of the suggestions below to make this process more effective and efficient.
- Portfolios tell much more about students than exam results. They can contain evidence reflecting a wide range of skills and attributes, and can reflect students’ work at its best, rather than just a cross section on a particular occasion.
- Portfolios can reflect development. Most other forms of assessment are more like ‘snapshots’ of particular levels of development, but portfolios can illustrate progression. This information reflects how fast students can learn from feedback, and is especially relevant to employers of graduates straight from university.
- Portfolios can reflect attitudes and values as well as skills and knowledge. This too makes them particularly useful to employers, looking for the ‘right kind’ of applicants for jobs.
- Portfolios take a lot of looking at! It can take a long time to assess a set of portfolios. The same difficulty extends beyond assessment; even though portfolios may contain material of considerable interest and value to prospective employers, it is still much easier to draw up interview shortlists on the basis of paper qualifications and grades. However, there is increasing recognition that it is not cost-effective to skimp on time spent selecting the best candidate for a post. This is as true for the selection of physical sciences lecturers as for the selection of students for jobs. Lecturers are increasingly expected to produce hard evidence of the quality of their teaching and research, as well as to demonstrate how they teach to those involved in their appointment.
Presentations
Giving presentations to an audience requires substantially different skills from writing answers to exam questions. Also, it can be argued that the communication skills involved in giving good presentations are much more relevant to the professional competences needed in the world of work. It is particularly useful to develop physical sciences students’ presentation skills if they are likely to go on to research, so that they can give effective presentations at conferences. It is therefore increasingly common to have assessed presentations as part of students’ overall assessment diet.
- There is no doubt whose performance is being assessed. When students give individual presentations, the credit they earn can be duly given to them with confidence.
- Students take presentations quite seriously. The fact that they are preparing for a public performance usually ensures that their research and preparation are addressed well, and therefore they are likely to engage in deep learning about the topic concerned.
- Presentations can also be done as collaborative work. When it is less important to award to students individual credit for presentations, the benefits of students working together as teams, preparing and giving presentations, can be realised.
- Where presentations are followed by question-and-answer sessions, students can develop some of the skills they may need in oral examinations or interviews. Perhaps the most significant advantage of developing these skills in this way is that students can learn a great deal from watching each others’ performances.
- With large classes, a round of presentations takes a long time. This can be countered by splitting the large class into groups of (say) 20 students, and facilitating peer-assessment of the presentations within each group on the basis of a set of assessment criteria agreed and weighted by the whole class.
- Some students find giving presentations very traumatic! However, it can be argued that the same is true of most forms of assessment, not least traditional exams.
- The evidence is transient. Should an appeal be made, unless the presentations have all been recorded, there may be limited evidence available to reconsider the merit of a particular presentation.
- Presentations cannot be anonymous. It can prove difficult to eliminate subjective bias.
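The peer-assessment approach suggested above – marking each presentation against a set of criteria agreed and weighted by the whole class – might be scripted along the following lines. The criteria, weights and 0–10 scale here are hypothetical placeholders for whatever the class actually agrees:

```python
# Hypothetical criteria and weights, agreed and weighted by the whole class
WEIGHTS = {"content": 0.4, "clarity": 0.3,
           "visuals": 0.2, "handling questions": 0.1}

def weighted_peer_score(peer_ratings):
    """Combine several peers' criterion ratings (each on a 0-10 scale)
    into one weighted mark. Averaging across several peers helps damp
    any individual marker's subjective bias."""
    per_criterion = {}
    for criterion in WEIGHTS:
        marks = [rating[criterion] for rating in peer_ratings]
        per_criterion[criterion] = sum(marks) / len(marks)
    return sum(WEIGHTS[c] * per_criterion[c] for c in WEIGHTS)

# Two peers' ratings of one presentation
ratings = [
    {"content": 8, "clarity": 7, "visuals": 9, "handling questions": 6},
    {"content": 7, "clarity": 8, "visuals": 8, "handling questions": 7},
]
print(round(weighted_peer_score(ratings), 2))
```

Because the weights are negotiated by the class in advance, students see exactly how the final mark is assembled, which supports the transparency that peer assessment depends on.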
Projects
In many courses, one of the most important kinds of work undertaken by students takes the form of individual projects, often relating theory to practice beyond the college environment. Such projects are usually an important element in the overall work of each student, and are individual in nature.
- Project work gives students the opportunity to develop their strategies for tackling research questions and scenarios. Students’ project work often counts significantly in their final-year degree performance, and research opportunities for the most successful students may depend primarily on the skills they demonstrated through project work.
- Projects can be integrative. They can help students to link theories to practice, and to bring together different topics (and even different disciplines) into a combined frame of reference.
- Project work can help assessors to identify the best students. Because project work necessarily involves a significant degree of student autonomy, it does not favour those students who just happen to be good at tackling traditional assessment formats.
- Project work takes a lot of marking! Each project is different, and needs to be assessed carefully. It is not possible for assessors to ‘learn the scheme, and steam ahead’ when marking a pile of student projects.
- Projects are necessarily different. This means that some will be ‘easier’, some will be tough, and it becomes difficult to decide how to balance the assessment dividend between students who tackled something straightforward and did it well, as opposed to students who tried something really difficult, and got bogged down in it.
- Projects are relatively final. They are usually one-off elements of assessment. When students fail to complete a project, or fail to get a difficult one started at all, it is rarely feasible to set them a replacement one.
Poster-displays and exhibitions in software like Padlet
When students are asked to synthesise the outcomes of their learning and/or research into a self-explanatory poster (individually or in groups), which can be assessed on the spot, it can be an extremely valuable process. More and more conferences provide poster-display opportunities as an effective way of disseminating findings and ideas. This kind of assessment gives students practice in developing the skills needed to communicate by such visual means.
- Poster-displays and exhibitions can be a positive step towards diversifying assessment. Some students are much more at home producing something visual, or something tangible, than at meeting the requirements of traditional assessment formats such as exams, essays, or reports.
- Poster-displays and exhibitions can provide opportunities for students to engage in peer assessment. The act of participating in the assessment process deepens students’ learning, and can add variety to their educational experience.
- Such assessment formats can help students to develop a wide range of useful, transferable skills. This can pave the way towards the effective communication of research findings, as well as developing communication skills in directions complementary to those involving the written (or printed) word.
- However valid the assessment may be, it can be more difficult to make the assessment of posters or exhibitions demonstrably reliable. It is harder to formulate ‘sharp’ assessment criteria for diverse assessment artefacts, and a degree of subjectivity may necessarily creep into their assessment.
- It is harder to bring the normal quality assurance procedures into assessment of this kind. For example, it can be difficult to bring in external examiners, or to preserve the artefacts upon which assessment decisions have been made so that assessment can be revisited if necessary (for example for candidates who end up on degree classification borderlines).
- It can take more effort to link assessment of this sort to stated intended learning outcomes. This is not least because poster-displays and exhibitions are likely to be addressing a range of learning outcomes simultaneously, some of which are subject-based, but others of which will address the development of key transferable skills.
Increasing use is being made of assessment based on students' performance in the workplace, whether on placements, as part of work-based learning programmes, or during practice elements of courses. For students in the physical sciences, work-based learning can open up invaluable experience in government or commercial laboratories. Often a variety of assessors is used, sometimes giving rise to concerns about how consistency of assessment practice between the workplace and the institution can be assured. Traditional means of assessment are often unsuitable in contexts where what matters is not easily measured by written accounts. Many courses include a placement period, and the increasing use of accreditation of prior experiential learning in credit accumulation systems means that we need to look at ways of assessing material produced by students in work contexts, rather than just what students write up when back at college after their placements.
- Work-based learning can balance the assessment picture. Future employers are likely to be at least as interested in students’ work-related competences as in academic performance, and assessing work-based learning can give useful information about students’ competences beyond the curriculum.
- Assessing placement learning helps students to take placements more seriously. As with anything else, if they’re not assessed, some students will not really get down to learning from their placements.
- Assessing placement learning helps to make your other assessments closer to practice. Although it is difficult to assess placement learning reliably, the validity of the related learning may outweigh this difficulty, and help you to tune in more successfully to real-world problems, situations and practices in the rest of your assessment practice.
- Assessing placement learning can bring you closer to employers who can help you. It is sometimes possible to involve external people such as employers in some in-college forms of assessment, for example student presentations, practising interview techniques, and so on. The contacts you make with employers during placement supervision and assessment can help you to identify those who have much to offer you.
- Reliability of assessment is difficult to achieve. Placements tend to be highly individual, and students’ opportunities to provide evidence which lends itself well to assessment can vary greatly from one placement to another.
- Some students will have much better placements than others. Some students will have the opportunity to demonstrate their flair and potential, while others will be constrained into relatively routine work practices.