
How to Assess Doctors and Health Professionals E-Book

Mike Davis

Description

This important book offers an introduction to the theory and the varying types of assessment for health care professionals. It addresses such questions as: Where have work-based assessments come from? Why do we have different parts to the same exam, such as MCQs and OSCEs? How do colleges decide who has passed and who has not? Why can people pick their own assessors for their MSF? It also covers the role of formative assessment, and the nature and value of portfolios. The book avoids jargon, is clear and succinct, and gives the pros and cons of the different assessment processes.




Table of Contents

Title page

Copyright page

Acknowledgements

About the authors

Lead authors

Additional authors

Foreword

Preface

Chapter 1 Purpose of assessment

Why do we assess?

What are we assessing?

Conclusions

Chapter 2 Principles of assessment

Validity

Reliability

Specificity

Feasibility

Fidelity

Formal and informal assessment: assessment of learning as opposed to assessment for learning

Formative and summative assessment

Ipsative assessment

Continuous assessment

Conclusions

Chapter 3 E-assessment – the use of computer-based technologies in assessment

Benefits and uses of technology in assessment

Computer-based testing

Assessment management

Enhancing learning and feedback

Some final thoughts

Chapter 4 Feedback

What is feedback?

Principles of effective feedback

Feedback and reflective practice

Providing feedback: challenges

Incorporating feedback into assessment

Workplace-based assessments and multi-source feedback

Conclusions

Chapter 5 Portfolios

What is a portfolio?

Why use portfolios in assessment?

Reflective practice

Assessing reflection

Constructing and assessing portfolios

E-portfolios

Conclusions

Chapter 6 Revalidation

Origins and definition

Aims of revalidation

The revalidation process

Remediation

Conclusions

Chapter 7 Assessment types: Part 1

Classification of assessment tools

Tests of cognitive abilities – verbal, written or computer-based

Conclusions

Chapter 8 Assessment types: Part 2

Competency-based tests of performance

Conclusions

Chapter 9 Programmatic assessment

An ideal programmatic assessment for outcomes set in Tomorrow’s Doctors 2009

Summary

Chapter 10 Conclusion

Index

This edition first published 2013, © 2013 by Blackwell Publishing Ltd

Blackwell Publishing was acquired by John Wiley & Sons in February 2007. Blackwell’s publishing program has been merged with Wiley’s global Scientific, Technical and Medical business to form Wiley-Blackwell.

Registered office: John Wiley & Sons, Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial offices: 9600 Garsington Road, Oxford, OX4 2DQ, UK

The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

111 River Street, Hoboken, NJ 07030-5774, USA

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.

The right of the authors to be identified as the authors of this work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

How to assess doctors and health professionals / Mike Davis ... [et al.].

p. ; cm.

 Includes bibliographical references and index.

 ISBN 978-1-4443-3056-4 (paper)

 I. Davis, Mike, 1947-

 [DNLM: 1. Education, Medical–standards. 2. Educational Measurement–methods. 3. Clinical Competence–standards. 4. Education, Professional–standards. W 18]

 610.711–dc23

2012041073

A catalogue record for this book is available from the British Library.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Cover design by: Meaden Creative

Acknowledgements

Judy McKimm and Kirsty Forrest would like to say thank you to students, colleagues, family and friends for their wisdom, insights and support which have helped shape and inform this book.

Mike Davis would like to thank friends and colleagues from ALSG/RC(UK) Generic Instructor Course communities for their input into his thinking for some time. He would also like to give special thanks to Christine for insight, encouragement and support, particularly over the past 18 months.

Thanks also to Alison Quinn for her contributions to the chapters on assessment types.

About the authors

Lead authors

Mike Davis is a freelance consultant in continuing medical education. He is lead educator with the Advanced Life Support Group, where he has been involved in developing the virtual learning environment elements of a wide range of blended courses. He is an educator on the ATLS instructor course and the Royal College of Surgeons’ Train the Trainer course, and was lead educator during the development and refinement of the European Trauma Course.

Judy McKimm is Professor and Dean of Medical Education at Swansea University. She has wide experience in undergraduate and postgraduate medical and healthcare education and her research and publications are primarily in medical education, teaching and learning, educational and clinical leadership development and professional practice. She has worked in many countries on educational and health management capacity building initiatives, most recently in the Pacific.

Kirsty Forrest is a consultant anaesthetist in Leeds. She is also clinical education advisor for the Yorkshire and Humber Deanery and co-chair of the Student Selected Components course of the MBChB at Leeds University. She has been involved in educational research for 10 years and has been awarded funding through a university fellowship and the Higher Education Academy. She is co-author and editor of a number of best-selling medical textbooks.

Additional authors

Steve Capey initially trained as a pharmacologist at the University of Wales College of Medicine and is currently the Assessment Director for the MBBCh programme at the Swansea University College of Medicine. He has been instrumental in the development of new medical curricula at Keele and Swansea. His research interests are in innovative integrated assessment systems and he has presented his findings at national and international conferences on medical education and assessment.

Jacky Hanson is a consultant in emergency medicine at Lancashire Teaching Hospitals NHS Trust (LTHTR). She has taught extensively at undergraduate and postgraduate levels and on Advanced Life Support courses. She is the Director of the Lancashire Simulation Centre at LTHTR, which provides training in both clinical and human factors skills using blended learning. She was Director of Continuing Professional Development and Revalidation for the College of Emergency Medicine and is an examiner for the Fellowship of the College of Emergency Medicine.

Kamran Khan has developed a passion for medical education and has acquired higher qualifications in this field alongside his parent speciality of anaesthetics. He has held clinical academic posts, first at the University of Oxford and currently at the University of Manchester, where he is Associate Lead for Assessments at Manchester Medical School. He has presented and published extensively at national and international levels in his fields of academic interest.

Alison Quinn has taken a year out of an anaesthetic training programme for a Fellowship in Medical Education and Simulation. She holds an honorary lecturer post with the University of Manchester Medical School, working in the Assessments Team, and is studying for a Postgraduate Certificate in Medical Education.

Foreword

Well done for picking up this small tome. Keep reading, you will find its contents important.

First, ask yourself why you should be interested in assessment. After all, it is going to be another drain on your time and may result in first-hand experience of conflict resolution with examinees and other examiners.

One reason is something you know from your own personal experience: all tests need to be fair and up to date. Abdicating responsibility for this increases the chance that they will dumb down to the lowest common denominator. This may achieve the target of some central committee, but it is unlikely to help the patient who is being cared for by a medical practitioner with substandard abilities.

Although the medical profession is replete with assessments, they are not a panacea for other educational problems. It is therefore important to understand what they can and cannot do. Properly constructed and used, assessments can provide a powerful, positive educational experience which motivates the learner to move on to higher levels of competence. In contrast, a poorly constructed or applied assessment can produce practitioners with little confidence and highly developed avoidance strategies for the subject.

After many years as examinees, we typically find our initial involvement with assessment may be as instructors on courses or examiners of medical students in formative assessments. Later on, we may become involved with high stakes examinations such as university finals and college diplomas. Subsequently, some of us become part of a team devising and developing assessments. Running alongside this is our own professional need to keep taking further assessments, either as part of a chosen speciality training programme or to complete revalidation. This book provides you with the key information needed to carry out these various roles. The authors have taken a pragmatic approach, covering the educational theory in as much detail as is required to understand the strengths and weaknesses of the various assessment tools that you will come across.

There is no ideal single tool in existence that can adequately assess the range of cognitive, psychomotor and affective competences that are required to practice as a medical student, trainee or specialist. Inevitably, those abilities that are easiest to measure are not necessarily the most important – especially as the training evolves and deeper learning occurs. The authors therefore describe how to use a collection of tests, with each component targeted at specific competencies listed in the curriculum. They also show that having the tests linked to the educational outcome enables the examination and teaching to reinforce one another. When assessment develops as an afterthought, it invariably fails to meet the range and depth required to ensure the appropriate level of skill has been achieved.

The important role assessment has in learning is often forgotten and powerful learning opportunities are missed. The authors address this by discussing the educational potential in both formative and summative assessments. This obviously brings up the issue of feedback and how this can be best carried out. Failing an assessment is never pleasant, particularly when it is a high stakes exam. These examinees will therefore present a range of needs and emotions. In some cases this can result in people resorting to litigation when they feel incorrect decisions have been made. Examiners, and their boards, therefore need to be absolutely sure that the competences assessed were suitable and carried out in an approved way using the optimal tools. In this way appropriate feedback and advice can be given. Furthermore, those involved in the assessment can be confident when questioned, sometimes in a court of law, as to the decisions they made.

The holy grail of any medical educational programme is to produce an improvement in patient care. Consequently, assessments should ideally be carried out in the workplace. This book provides advice on balancing the often conflicting desires of validity, reliability, specificity, feasibility and fidelity. Such a strategy also involves the use of learning technologies as they can help with assessment development, its administration, marking and analysis. e-Portfolios are another manifestation of computer-assisted learning and assessment that examiners need to be familiar with. How learning technologies can be best used, and errors to avoid, are addressed by the authors.

Therefore I hope you continue to read this book as it will make the process of assessment more transparent, relevant and comprehensible. You will then be able to bring these same qualities to the next assessment you carry out.

Peter Driscoll

BSc, MD, FCEM

Honorary Senior Lecturer, School of Medicine, University of St Andrews

Professor, College of Emergency Medicine

Preface

This book is designed to complement an earlier volume in the How to . . .  series, How to Teach Continuing Medical Education by Mike Davis and Kirsty Forrest, and is intended to fill the very obvious gap in that edition. This is not to suggest, however, that assessment is an afterthought. As Harden wrote:

Assessment should be seen as integral to any course or training programme and not merely an add-on [1].

There is a tendency to take assessment for granted in the early years of a career in medical education and not to question why assessments are structured in the way that they are. Medical education is fairly idiosyncratic in the way that it assesses learners. In addition to more conventional assessments such as essays, multiple choice questions or presentations, assessment methods such as Objective Structured Clinical Examinations (OSCEs), long cases, Extended Matching Questions (EMQs) and Mini Clinical Evaluation Exercises (mini-CEXs) are used almost exclusively in medical education. Students and trainees are usually on the receiving end of assessment rather than acting as assessors themselves. In the same way that fish are not conscious of the water, trainees are submerged in the assessment environment, and it only impinges on them when there is some kind of cathartic event, usually associated with not achieving a desired standard.

All of us have been involved in assessment at one time or another. We have certainly all had important decisions made about our careers on the basis of assessments. Many of us have also been involved in the assessment of students or trainees at some point in our career; however, not all of us have considered the evidence base that surrounds assessment. We would not consider making life-changing decisions about a patient without having an understanding of the condition they are suffering from and the same is true about assessment. When we assess students in high stakes exams we are making decisions about them that could have life-changing potential.

What this book is designed to do is make explicit some of the characteristics of different types of assessments from the point of view of assessors and clinical teachers. It explores some of the theories associated with assessment and examines how these are manifested in what assessors do and why and in a variety of settings, ranging from the informal to the most formal. Each chapter includes some activities and considers the issues around the experiences of the reader.

The first chapter takes the reader through the purpose of assessment. The reasons why assessment is so important in the education of students and doctors are examined with parallels drawn with non-clinical examples. The chapter goes on to describe the different aspects or dimensions of what we are actually aiming to assess with the assessments we use. It ends with a discussion around the meaning and implications of the terms ‘competence’, ‘performance’ and ‘expert’.

The second chapter concentrates on the key principles of assessment and covers terms that you are probably more familiar with such as validity and reliability. Again, we offer some medical as well as non-medical examples which we think will help with your understanding of these principles in practice.

Chapter 3 considers the use of learning technologies, which are developing at a great pace in all fields of medical education; assessment is no different. New forms of technology are not only helping with the administration of formative and summative assessments, but are also used in the construction of specific individual tests tailored to students’ needs and abilities.

Feedback is consistently mentioned by students and trainees as a problem. Many learners do not recognise when feedback is given, and often, when feedback is provided, it is poorly structured, with no relationship to the learning context. Feedback, whether it comes from others or from yourself (through reflection), motivates improved performance. Opportunities to provide feedback are often wasted, and many medical education assessors are particularly guilty of this. Chapter 4 explores the reasons behind these issues and how teachers can develop ways of improving the giving and receiving of feedback.

In Chapter 5, we look at portfolio assessments, which are becoming more commonplace within medical education. This is probably the area of assessment that is viewed with the most cynicism by mature colleagues. Like all forms of assessment, it is only as good as its design and its clarity of purpose and structure. While little evidence exists around the tool’s impact on learning, research has suggested that trainees who are unable or unwilling to complete portfolios have subsequent training issues around professionalism.

Revalidation is a hot topic for all doctors. After passing our college exams, doctors of a certain age certainly did not expect to be ‘examined’ again. We took part in our yearly appraisal, diligently presenting our gathered internal and external CPD points. Many doctors are worried about what revalidation is going to look like and the impact it will have on our practice. Chapter 6 sets out the GMC’s current plans, which to date do not look very different from existing practices. However, for those of us in technical specialties, the spectre of simulation-based assessments is probably on the horizon.

Many different types of assessment are used in medical education in clinical and non-clinical settings and these are discussed in more detail in Chapters 7 and 8. In these chapters, the reader is guided through a discussion of each of these types, exploring how they are administered and their strengths and weaknesses. Chapter 9 discusses how these assessments are best combined into a coherent assessment programme.

For many educators, assessment brings one of the greatest challenges in teaching practice. It is when we are called upon to make judgements, not only on our learners but also on the courses we teach and how we teach them. In writing this book, we are looking for ways in which this process can be made more enjoyable, more effective, easier and more transparent.

We have purposefully not included a full bibliography with this text as there are many already in circulation and freely available to the reader, although we have provided some relevant references within each chapter. We believe that the glossary offered by the General Medical Council is a useful and thorough guide:

General Medical Council. Glossary for the Regulation of Medical Education and Training. Available from http://www.gmc-uk.org/GMC_glossary_for_medical_education_and_training.pdf_47998840.pdf (last accessed 3 October 2012)

Reference

1 Harden, R. Ten questions to ask when planning a course or curriculum. Med Educ 1986;20:356–65.

Chapter 1

Purpose of assessment

Learning outcomes
By the end of this chapter, you will be able to:
Demonstrate an understanding of the purposes of assessment and why assessment is important
Explain what we assess
Demonstrate understanding of criterion and norm referencing
Explain the terms competence, performance and expert

Assessment is often a source of considerable anxiety within the educational community as a whole, and the medical community is no exception. The aim of this chapter is to explore the rationale for and some of the key principles of assessment in the context of undergraduate, postgraduate and continuing medical education.

Assessment especially benefits from the coupling of theory with practice and the opportunity to develop ‘an academic dialogue’ [1]. However, it presents some challenges. As Cleland et al. [2] write:

[assessors need to] explore their (sometimes conflicting) roles as educators and assessors, and how they manage these roles, which are often conducted simultaneously in assessment situations.

These tensions have increased with research and developments within educational assessment. Consequently, assessment has become more rigorous, systematic and objective. However, there is a potential gap between these developments and the role of clinicians, many of whom still think of their role in assessment as giving a subjective judgement of a one-to-one encounter.

Why do we assess?

The first question to ask is ‘Why do we assess?’ Before we go any further, consider the following scenarios:

A cohort of first year undergraduates coming to the end of their first module in their degree programme.
A cohort of final year undergraduates doing their finals (last exams).
Students on a postgraduate distance learning programme who have completed all their taught modules and have to submit their dissertations before they graduate.
Anaesthetic trainees attending a simulator centre who need to be signed off on their initial tests of competences prior to going on call.
A group of doctors and nurses on a 3-day residential advanced life support (ALS) resuscitation course.
A group of physicians undertaking the membership examination.
A surgeon submitting an MD thesis.

Why, in each of these scenarios, do you think there is need for assessment?

You might consider some or all of the following:

Ensuring patient safety
Predicting future behaviour
Satisfying university requirements
Judging level of learner achievement
Monitoring learners’ progress
Motivating learners
Measuring effectiveness of teaching
Because they have to pass to progress
Professional/regulatory requirements
Professional development
Public expectations

There may be more than one reason for each scenario. On the other hand, not all reasons apply to all cases: for instance, the surgeon’s MD would have little in common with an ALS course.

Activity 1.1

Please review and complete the following table, indicating with a cross (x) which of the reasons you think apply in each situation:

Having completed Activity 1.1 and reviewed the authors’ completed version at the end of the chapter, you will see that not all of these reasons are universally applicable. Let us look at some of them in more detail.

Ensuring patient safety

In certain contexts (e.g. work-based training and assessment) there is a close relationship between what a doctor does in managing a patient’s case and the outcome for that patient. In other contexts, the relationship between trainees’ actions and their impact on a potential patient is more tentative (e.g. the management of a case in a simulation suite cannot harm a real person but successful management in that setting might predict a competent performance on the ward). Assessment, therefore (i.e. passing or failing), is a potential proxy for good or bad outcomes in practice. A system that trains doctors and accredits them as being capable clinicians within an assessment regime is making an assumption that success in a simulated setting will transfer to clinical practice. We explore these issues later in the book.

Predicting future behaviour

The following extract illustrates the potential for uncertainty about the predictive nature of some assessments:

The exam that really worked for me was the 11-plus. I was a very poor classroom performer and as a working-class student had no cultural springboard into education. It was a game-changer. That’s the best thing I can say about the grammar school system – once I was at grammar school it was a different story. It was pure Darwinism – exams all the way. I was less keen on A-levels, as they coincided with the storms of adolescence and I did disastrously. I got two Cs and a D and had to go into the army. I eventually managed to get a place at Leicester University. Fortunately, it turned out to have a very good English Department.

(John Sutherland, PhD, Emeritus Professor of English Literature [3])

For medicine, A-level grades are not a very good predictor of long-term student ability within medical school, especially in clinical settings. However, there is evidence that medical student grades in the first year of university are a very good predictor of future medical school grades.

Judging level of learner achievement

Both parties in the learning process are interested in the extent to which learners have achieved the outcomes set for them, whether for formative or summative purposes. The obvious example for medics is membership exams.

Monitoring learner progress

Informal, formative or ipsative (see Chapter 2 for more on these) assessment allows the outcomes from the assessment event to feed back into the learning process:

Both parties need to know what is still to be done; where there are gaps in learner understanding or inaccurate perceptions
To initiate remediation opportunities
To determine ongoing programmes of study
To feed into future sessional learning objectives and teaching plans
To feed into curriculum review

This is often described as feedback and will be explored more thoroughly in Chapter 4. It can be in passing (informal), designed to improve performance (formative) and related to the learner’s level (ipsative).

Motivating learners

The extent to which examinations motivate learners depends on the likely outcome. For some people, examinations can be highly demotivating – if they fail them. Take this quotation, for example:

I was never very good at exams, having a poor memory and finding the examination process rather artificial, and there never seemed to be enough time to follow up things that really interested me [4]

Assessment can also be a driver for learning:

I am a big fan of exams. I think they’re more meritocratic than coursework, especially at GCSE and A-level, when there’s a lot of hot-housing by parents. I think stress can help bring out the best in you in an exam – there’s something cleansing about it. I think we’re far too averse to stress now. Exams are also good for teachers, as the last thing you want is continuing assessment. (Tristram Hunt, Lecturer in Modern British History [3])

Measuring effectiveness of teaching

Any course that has a significantly high failure rate has to look at a number of things, including the validity and reliability of its assessment regime, but also at the way in which it teaches the course or programme of study. A number of years ago, a Royal College took its membership examinations to the Indian sub-continent. The examination diet produced a 2% pass rate, as candidates were almost completely out of their depth in the Objective Structured Clinical Examinations (OSCEs), never having experienced that assessment modality before. What the College had omitted to do was to prepare candidates to take the examination through a well-designed teaching programme.

Public expectation

While the public is less in awe of medical practitioners than it once was, it still expects that doctors will know what to do and how to do it before they start to look after a group of patients. It is unlikely that the public would be satisfied with the following notion:

No physician is really good before he has killed one or two patients. (Hindu proverb [5])

Regulation (revalidation and/or recertification)

Issues related to revalidation are explored in Chapter 6.

Professional development

Continuing professional development can be seen as synonymous with learning and being fit to practice, and the assessment of its effectiveness is often the product of reflection and personal insight [6]. However, this is not considered by all to be adequate and revalidation is being introduced to address some of its limitations (see Chapter 6 for more on this).

Passing to progress/university requirements

Compare:

The best joke question I ever saw was on the Basic Science final in Med School. One 4-hour exam on 2 years of material. It read: ‘Given 1 liter of water, 10 moles of ATP and an Oreo cookie, create life. Show all formulas.’ [7]

with:

Back when I went to Oxford, the entrance exams for women were different. The one for Oxford I found most challenging was the general classics paper. It was a 3.5 hour paper – you had half an hour to think, then one hour for each question. I still remember one of the questions – ‘compare the ideas of empire in Greece and Rome’. That was a real high jump intellectually. Exams are good things. They prepare you for later life with the stress and anticipation. (Susan Greenfield, Professor of Synaptic Pharmacology [3])