No one assessment method is able to assess all the abilities and characteristics that IMGs must demonstrate in the workplace.
Overview of workplace-based assessment methods
This lesson describes the methods that have been developed for workplace-based assessment, and outlines strategies for their use. As explained in Lesson 2, no one assessment method is able to assess all the abilities and characteristics that IMGs must demonstrate in the workplace. Indeed, a range of complementary assessment methods are needed to assess the abilities and characteristics that together constitute effective clinical performance.
The methods adopted to assess an IMG’s performance should possess two important characteristics:
- The data collected through these methods should be reliable and valid
- The approach should guide and support learning.6,7,8
The key skills expected of the IMG must be aligned with the assessment method used and the feedback provided.
The key is to assess ‘important things’ to ensure that learning is being driven in the right direction.
It is commonly stated that assessment drives learning. What is assessed conveys to an IMG the things that are important, both to learn and to do. The key therefore is to assess ‘important things’ to ensure that learning is being driven in the right direction. Learning and skill development are fundamental in an IMG’s progress towards required levels of clinical performance.
The following discussion of workplace-based assessment methods draws on a 2007 review of work-based assessment methods conducted by the Office of Postgraduate Medical Education, University of Sydney. While the Office of Postgraduate Medical Education has since been dissolved, the 2010 publication Assessment Methods in Undergraduate Medical Education contains similar information and may be consulted alongside this discussion.
There are many strategies available for assessing workplace-based performance. This course will focus on the more commonly used strategies that are supported by studies of the reliability (accuracy) of the data they provide, and of the validity of the judgments made on the basis of these data. These strategies are:
- Mini-Clinical Evaluation Exercise (mini-CEX)
- Direct Observation of Procedural Skills (DOPS)
- Case-based Discussion (CBD)
- In-Training Assessment (ITA) / Structured Supervision Reports
- Multisource Feedback (360 degree assessment).
The first two of these methods are referred to as direct methods of assessment, as they are based upon direct observation of an IMG’s performance in the workplace. Case-based discussion and multisource feedback are indirect methods of assessment, as they are based upon records of an IMG’s performance and the impressions of those who work with the IMG, rather than upon direct observation of a specific encounter. ITA may or may not include direct observation. Effective workplace-based assessment of IMGs should incorporate several of these strategies, and should include direct observation of the IMG in patient encounters. The discussion below will begin with a consideration of the direct methods.
Mini-CEX
The ratings recorded on the form, when cumulated over multiple patients, multiple observers and different clinical tasks, will provide a defensible basis for the judgment made.
What is a mini-CEX?
The process of directly observing a doctor in a focused patient encounter for purposes of assessment is called a Mini-Clinical Evaluation Exercise (mini-CEX). A mini-CEX entails observing an IMG with a real patient (typically for 10-15 minutes) on a focused task such as taking a history, examining or counselling a patient, whilst recording judgments of the IMG’s performance on a rating form, and then conducting a feedback session with that individual on his/her performance (a further 10-15 minutes). The feedback session following the observation should be highly focused in guiding the IMG’s learning by identifying the strengths and weaknesses of his/her performance (formative assessment), and planning for subsequent learning and skill development. The ratings recorded on the form, when cumulated over multiple patients, multiple observers and different clinical tasks, will provide a defensible basis for a judgment of an IMG’s level of overall performance.
Many different mini-CEX rating forms exist. One whose reliability and validity have been widely studied was developed by John Norcini (1995).9 This form has been specifically adapted and evaluated for use with IMGs in Australia and Canada with effective outcomes, and is recommended for use in assessing IMGs in the Standard Pathway.10 This form is presented in Appendix 1, together with definitions of the components of performance rated on the form. Ratings are elicited relative to the following aspects of an IMG’s performance: medical interviewing skills, physical examination skills, professionalism/humanistic qualities, counselling skills, clinical judgment, organisation/efficiency, and overall clinical competence. Performance is rated using a standardised nine-point scale where 1, 2 and 3 are unsatisfactory, 4, 5 and 6 are satisfactory, and 7, 8 and 9 are superior. The form provides an option for noting that a particular aspect of performance was insufficiently observed or not observed. The form also elicits narrative information on the details of the encounter, and provides space for recording the feedback given on the performance observed.
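Neither the form nor this lesson prescribes exactly how the cumulated ratings should be summarised, but a simple aggregation across completed forms illustrates the principle of building a composite picture per dimension. The sketch below is a minimal, hypothetical example: the dimensions, the nine-point scale and the 'not observed' option are taken from the description above, while the function name, data structures and example ratings are illustrative only and are not part of any official mini-CEX tooling.

```python
from statistics import mean

# Dimensions rated on the mini-CEX form described above (1-9 scale;
# 1-3 unsatisfactory, 4-6 satisfactory, 7-9 superior). None stands in
# for a dimension marked as insufficiently observed / not observed.
DIMENSIONS = [
    "medical interviewing", "physical examination", "professionalism",
    "counselling", "clinical judgment", "organisation/efficiency",
    "overall clinical competence",
]

def summarise_mini_cex(forms):
    """Aggregate ratings per dimension across multiple completed forms.

    `forms` is a list of dicts mapping dimension name -> rating (1-9) or None.
    Returns, for each dimension, how many times it was rated, the mean
    rating, and the number of encounters rated unsatisfactory (1-3).
    """
    summary = {}
    for dim in DIMENSIONS:
        ratings = [f[dim] for f in forms if f.get(dim) is not None]
        if not ratings:
            summary[dim] = {"n": 0, "mean": None, "unsatisfactory": 0}
            continue
        summary[dim] = {
            "n": len(ratings),
            "mean": round(mean(ratings), 1),
            "unsatisfactory": sum(1 for r in ratings if r <= 3),
        }
    return summary

# Hypothetical ratings from three encounters with different assessors.
example_forms = [
    {"medical interviewing": 5, "physical examination": 6, "professionalism": 7,
     "counselling": None, "clinical judgment": 4,
     "organisation/efficiency": 5, "overall clinical competence": 5},
    {"medical interviewing": 6, "physical examination": None, "professionalism": 6,
     "counselling": 5, "clinical judgment": 3,
     "organisation/efficiency": 4, "overall clinical competence": 5},
    {"medical interviewing": 4, "physical examination": 5, "professionalism": 7,
     "counselling": 6, "clinical judgment": 5,
     "organisation/efficiency": 6, "overall clinical competence": 6},
]

for dim, stats in summarise_mini_cex(example_forms).items():
    print(dim, stats)
```

Even this simple summary shows why multiple encounters matter: a single low rating for clinical judgment is flagged for discussion, but the defensible judgment rests on the pattern across all encounters and assessors.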
How is the mini-CEX put into practice?
The following steps guide the effective use of the mini-CEX.
‘Attributes’ of assessors
A mini-CEX assessor should be clinically competent in the area of the patient’s problem(s). The assessor can be one of the IMG’s clinical supervisors, a senior vocational trainee, or a visiting doctor who is serving as an external assessor of the IMG.
Orientation and training of assessors
Assessors should be trained to use the mini-CEX rating form, to be consistent, to reference their assessments to the same standard, and to provide effective feedback. A program for providing this assessor orientation and training is described in Lesson 8 and in Appendix 2.
Orientation for IMGs
IMGs should be oriented to the mini-CEX assessment process, the rating form and the descriptions of the rating categories. Ideally, they should be given the opportunity to engage in some formative or practice mini-CEX assessments prior to participating in those that will ‘count’.
Schedule of mini-CEX observations
Each IMG should undergo a number of mini-CEX assessments conducted by a number of different assessors. Approximately 30 minutes should be allocated for each assessment: to observe the encounter, complete the rating form and conduct a feedback session. Support staff may be assigned responsibility for scheduling mini-CEX assessments, obtaining permission from patients as appropriate for their observed encounter with the IMG, and similar tasks.
Selecting the encounters to be observed
A ‘blueprint’ is constructed to guide the selection of encounters to be observed. This blueprint enables a systematic selection of patients comprising a range of:
- problems from different systems/disciplines;
- clinical tasks (for example, history-taking, examination, counselling); and
- clinical settings (for example, doctors’ rooms, emergency departments, in-patient clinics) within which the observations will occur.
The development of a blueprint is discussed in Lesson 5.
Assessors need to be sufficiently familiar with the patient to enable them to critically judge the performance being reviewed. For example, if a physical examination is being observed, the assessor needs to be aware of the patient’s history (to the degree that it may guide which aspects of the physical examination are undertaken) and the patient’s physical findings, to enable an assessment of the IMG’s accuracy in eliciting these findings.
The assessor’s role in the mini-CEX assessment process
Mini-CEX assessors should remain totally uninvolved in the encounter (no comments, suggestions or questions), and be as unobtrusive as possible (become a ‘fixture’ in the room, ideally so that the IMG and the patient forget that the assessor is there), unless there are risks to patient safety. If an assessor identifies issues to follow up with the patient (for example, to check findings or refine a treatment plan), this should be done after the IMG has completed the encounter with the patient. The rating form should be completed and then discussed with the IMG. All questions on the form should be completed, with both effective and ineffective aspects of performance noted.
Provision of effective feedback and a plan for further development
Principles of giving effective feedback are outlined in Lesson 7, including the challenge of giving feedback where performances are poor.
Return of mini-CEX materials
The mini-CEX evaluation forms should be returned to a designated administrative person or unit for data entry and record keeping.
The importance of following the recommended protocol for mini-CEX assessments cannot be over-emphasised if a final judgment of an IMG’s overall level of performance is to be defensible. Such a defence will be a function of the reliability of the composite rating derived from the mini-CEX over multiple observations, and the validity of this rating as a global measure of an IMG’s level of clinical performance. The former is largely a function of the number of mini-CEX observations and observers, and reliability estimates can be calculated. The latter is a logical function of the representative sampling of performance across the spectrum of clinical situations in which the IMG would be expected to be proficient.
Evidence supporting the reliability and validity of the mini-CEX?
The key to obtaining reliable scores from the mini-CEX is to ensure that IMGs are observed in multiple encounters and by several different assessors. The key to making valid inferences is to ensure that they are observed over a representative sample of patient problems and clinical tasks.
The key to obtaining reliable scores from the mini-CEX is to ensure that IMGs are observed in multiple encounters and by several different assessors. The key to making valid inferences about an IMG’s ability from their mini-CEX scores is to ensure that they are observed over a representative sample of clinical dimensions and clinical areas, and across a range of clinical settings. A recent mini-CEX study conducted in Australia (Nair et al, 2008) found that scores derived from as few as ten mini-CEX encounters possessed a reliability coefficient exceeding 0.80.11 This result is consistent with those from overseas studies.12,13,14 Nair et al reported that the process had face validity; that is, the IMGs viewed the mini-CEX as superior to most other assessment methods, including OSCEs, for assessing their clinical performance. Other studies have provided evidence of construct validity, reporting high correlations between mini-CEX scores and other measures of performance for undergraduate and postgraduate trainees (Kogan et al 2003, Norcini et al 2003).15,16
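The relationship between the number of observations and the reliability of the composite score can be illustrated with the Spearman-Brown prophecy formula, a standard psychometric approximation. The cited studies used their own reliability analyses, so the sketch below is illustrative only: the single-encounter reliability is a hypothetical value chosen so that ten encounters project to roughly 0.80, consistent with the figure reported by Nair et al (2008).

```python
def spearman_brown(single_reliability: float, n_obs: int) -> float:
    """Projected reliability of a composite of n_obs observations,
    given the reliability of a single observation (Spearman-Brown)."""
    r = single_reliability
    return n_obs * r / (1 + (n_obs - 1) * r)

# Illustrative (assumed) single-encounter reliability, chosen so that
# ten encounters give ~0.80, in line with the Nair et al (2008) finding.
single_r = 0.286

for n in (1, 4, 8, 10, 14):
    print(f"{n:>2} encounters -> projected reliability {spearman_brown(single_r, n):.2f}")
```

The pattern this produces (reliability climbing steeply over the first handful of encounters and flattening thereafter) is why the protocol emphasises multiple observations and multiple observers rather than any single encounter.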
Measuring professional attributes with the mini-CEX
A form of mini-CEX called the Professionalism Mini-Evaluation Exercise (P-MEX) has been developed to assess behaviours related to professional attributes.17 The four main attributes assessed by the P-MEX are doctor-patient relationship skills, reflective skills, time management skills and inter-professional relationship skills, and these are captured by 24 behaviour items. The P-MEX rating form uses a four-point scale: 1 (unacceptable), 2 (below expectations), 3 (met expectations) and 4 (exceeded expectations).
Direct Observation of Procedural Skills (DOPS)
What is a DOPS?
The Direct Observation of Procedural Skills (DOPS) is a form of mini-CEX in which the focus is on observing and assessing an IMG’s performance of a procedure on a real patient. A DOPS assessment generally requires 10-15 minutes of observation time followed by 5 minutes of feedback and the completion of the DOPS rating form. The issues discussed earlier in the lesson in relation to the mini-CEX are all relevant to a DOPS. Many different DOPS rating forms have been developed and, at the vocational training level, forms have been developed that are specific to a given procedure. In the context of assessing IMGs, it is recommended that a generic DOPS rating form be employed. One such form is presented in Appendix 1. It has been adapted from a form developed in the UK for the Foundation Program, which has demonstrated acceptable reliability and validity.18,19,20 This form elicits assessor ratings on component skills related to the procedure observed, such as obtaining informed consent, appropriate pre-procedure preparation, technical ability, communication skills and overall clinical competence in performing the procedure.
How is a DOPS put into practice?
A DOPS assessment should focus on the core skills that IMGs should possess when undertaking an investigative or therapeutic clinical procedure. DOPS is a focused observation or ‘snapshot’ of an IMG undertaking the procedure. Not all elements need be assessed on each occasion. The studies cited above have shown that multiple DOPS over time, using multiple assessors, provide a valid, reliable measure of performance with procedures. The following steps guide the effective use of the DOPS.
- The patient should give verbal consent to be involved in the DOPS.
- The assessor should be capable of performing the procedure being assessed.
- The assessor should directly and unobtrusively observe the IMG performing the procedure in a normal clinical environment.
- The assessor should provide feedback to the IMG immediately after the assessment, following the guidelines on feedback as presented above for the mini-CEX. The feedback session provides an opportunity to explore the IMG’s knowledge level related to the procedure, where appropriate.
- The DOPS rating form should be completed using the scale ranging from extremely poor to extremely good. For example, on a nine-point scale a score of 1-3 would be considered unsatisfactory, 4-6 satisfactory and 7-9 above that expected for an IMG at the PGY1 level. The ratings and comments recorded on the rating form should be discussed with the IMG, if not already discussed in the feedback session.
The logistics of arranging DOPS assessments can be challenging. Morris et al reported that opportunities for DOPS are found in emergency departments and in the operating theatre during routine procedures.21 As with the mini-CEX, decisions on which procedures to sample in DOPS observations are best guided by a blueprint. The Australian Curriculum Framework for Junior Doctors provides a list of such procedures to which a blueprint can be referenced.
Evidence supporting the reliability and validity of the DOPS
Currently, there are few reports available on studies of the reliability or validity of DOPS. In a review of tools used to assess procedural skills of surgical residents, the reliability of the assessment is reported to be enhanced through the use of objective and structured performance criteria, such as those on DOPS assessment forms.22 Furthermore, a DOPS assessment appears to possess high face validity; that is, a DOPS assessment ‘looks to be valid’ because it is a structured assessment of an IMG’s ability to perform a procedure with a real patient in a real clinical setting.
Case-Based Discussion (CBD)
What is a CBD?
Case-based discussion (CBD) is an alternative term for chart stimulated recall, an assessment technique originally developed by the American Board of Emergency Medicine. It is designed to allow the assessor to probe the candidate’s clinical reasoning, decision making and application of medical knowledge in direct relation to patient care in real clinical situations. It is a validated and reliable tool for assessing the performance of candidates and identifying those in difficulty. The CBD tool has greater validity and reliability when aligned with specific constructs in discussing the patient (i.e. elements of history, examination, investigations, problem solving, management, referral and discharge planning).
Case-based discussion is designed to:
- improve clinical decision making, clinical knowledge and patient management
- improve clinical record keeping
- provide candidates with an opportunity to reflect on and discuss their approach to the patient and identify strategies to improve their practice
- enable assessors to share their professional knowledge and experience in a collegial way
- enable candidates to access experts in clinical decision making and understand the rationale for preferred management choices
- guide learning through structured feedback
- identify areas for development as part of the continuum of learning
- assist candidates in identifying strategies to improve their practice.
For more information, see the background paper: Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher 2007;29:860-862.
How is a CBD put into practice?
As with other workplace-based assessment methods, the expectation is that IMGs on the Standard Pathway (workplace-based assessment) will engage in multiple case-based discussions with multiple assessors during their supervised practice program. As the performance expectations of IMGs are indexed to those of a PGY1 junior doctor, guidance in selecting the cases to be discussed can be obtained from the list of conditions presented in the Australian Curriculum Framework for Junior Doctors. Assessors should be medical practitioners who are familiar with the case being reviewed, who possess expertise relative to the patient’s problems, and who have received orientation or training in the case-based discussion assessment process.
The guidelines for conducting the case-based discussion are very similar to those for the feedback session for the mini-CEX. As the goal of a case-based discussion is to obtain an assessment of the IMG’s clinical reasoning and decision-making, the discussion should be interactive. For example, the assessor could pose questions which elicit the IMG’s interpretation of data in the record, the reasons for particular tests being ordered and what the results mean, what other tests could have been ordered, recommendations on the next steps in the management of the patient, treatment options and what the IMG would recommend and why, as well as the prognosis, and so on. The assessment form is completed following the assessment encounter.
Evidence of the reliability and validity of case-based discussions?
Several studies support the validity of case-based discussions. Maatsch et al (1983) found a high correlation between case-based discussion scores and the scores that the doctors taking part in the study had obtained on their initial certification examination ten years earlier.23 Furthermore, the doctors involved in the study considered this method to be the most valid measure of their practising ability. Other studies have shown that case-based discussion scores correlate well with scores on previous and current oral examinations.24,25
In-Training Assessments (ITA) / Structured Supervision reports
What is ITA?
In-training assessment reports (also referred to as ‘structured supervision reports’) are based upon direct observation of IMGs in real clinical settings over a period of time. Observations are carried out by the supervisor(s) assigned to the IMG, but others may play a role. For example, nurses and other health team members are often asked to contribute to in-training assessments of communication and interpersonal skills, ethical behaviour, reliability and professional integrity. Pharmacists could be asked to comment on prescribing ability. Patient feedback, peer assessment, self-assessment and medical record audits may also contribute to the judgments recorded in in-training assessment reports. The use of multiple sources of information as a basis for ratings of IMG performance is highly effective in reducing the subjectivity of these ratings, although the subjectivity of in-training assessments remains a concern. In many studies of in-training assessment systems, the individuals being assessed have not been directly observed, which significantly reduces the reliability of the assessment and its ability to discriminate accurately.
For example, a study of medical students found that nineteen separate in-training assessments would be needed to achieve a reliability coefficient of 0.80. A value of 0.80 is the ‘rule of thumb’ reliability that should be obtained when making decisions about an individual’s performance.26 Nonetheless, in-training assessments remain an important means for workplace-based assessment, as they enable a broad range of professional behaviours to be assessed. For example, in-training assessments are able to capture evidence of behaviours such as honesty, being reliable and working well as a team member.
How are ITAs put into practice?
Structured ITA reports contribute to evidence about the IMG’s progress through the required supervision period.
Structured in-training assessment reports are widely used in medical training in Australia, serving to signify trainees’ preparedness to move to the next level or rotation of training. The progress of IMGs in the Standard Pathway (AMC Examination) has historically been monitored in this way, with structured in-training assessment reports contributing to decisions about the IMG’s progress through the required supervision period. For IMGs in the Standard Pathway who elect to replace the clinical examination with a workplace-based option, in-training assessment is likely to continue as a component of their workplace-based assessment process. However, when in-training assessment replaces the clinical examination rather than supplementing it, it will no longer suffice as the only method of workplace-based assessment.
A working party of the Confederation of Postgraduate Medical Education Councils (CPMEC) is developing a set of three in-training assessment forms in conjunction with the Australian Curriculum Framework for Junior Doctors. It is recommended that these forms be used to assess IMGs in the PGY1 program. These forms will cover self-assessment, mid-term assessment, and end-of-term assessment and are presented at the Assessment Resources page of the CPMEC website. The introductory sections to the forms provide instructions for their use. Of particular importance is that completion of the forms is based on direct observations over time of the IMG performing in the workplace, and that observations and input in completing the forms be sought from multiple sources including other medical practitioners, nurses, allied health personnel, and patients. This latter strategy also addresses the problem faced by clinical supervisors who establish a collegial relationship or ‘employment’ relationship with an IMG who is not performing well. Such relationships place the supervisor in a potential conflict of interest position and, in the absence of input from multiple ‘observers’, may compromise the in-training assessment report.
Multisource feedback (360 degree assessment)
What is multisource feedback, and how is it put into practice?
The use of multiple assessors helps to address the ‘conflict of interest’ problem.
Multisource feedback, or 360 degree assessment as it is more commonly called, provides evidence on the performance of IMGs from a variety of sources. These sources may include colleagues, other co-workers (nurses, allied health) and patients. Questionnaires, completed by each of these groups, assess an IMG’s performance over time, rather than in a specific patient encounter. This assessment method is gaining popularity in medicine, and has been used with medical students through to sub-specialists. Multisource feedback enables the assessment of a group of proficiencies that underpin safe and effective clinical practice, yet are often difficult to assess. Included in these proficiencies are interpersonal and communication skills, teamwork, professionalism, clinical management, and teaching abilities.
In the context of assessing trainees at the PGY1 level, the Foundation Program in the UK has developed a single questionnaire called the ‘mini-PAT’ (Peer Assessment Tool), which elicits ratings from fellows, senior trainees, nurses and allied health personnel. This form is discussed by Norcini and Burch (2007).27 In IMG assessment in Canada, 360 degree assessment has been pursued on a broader basis, with separate forms for colleagues, co-workers, self, and patients. These forms are also presented in Appendix 1, with the permission of the Medical Council of Canada. Both of these options could be useful in the assessment of IMGs in the Standard Pathway, though the 360 degree approach is favoured in the Canadian system.
Evidence of the reliability and validity of multisource feedback?
Some studies of 360 degree assessment with practising doctors have shown the technique to possess limited ability to discriminate levels of performance, with average ratings typically being high (for example, 4.6 out of 5). This limitation does not appear to be as great in the context of IMG assessment, where the range of performance is much wider than that observed with local graduates. Other studies from Canada, the United States and Scotland have shown that 360 degree assessment can be a reliable, valid and feasible approach that contributes to improved practice.28 Reliability analyses indicate that samples of 8-10 co-workers, 8-10 medical colleagues and 25 patients are required.
Other assessment methods
There are a number of other methods of workplace-based assessment that could be employed in the assessment of IMGs. Several of these methods were summarised by the Office of Postgraduate Medical Education, University of Sydney, in its review of work-based assessment methods. While the Office of Postgraduate Medical Education has since been dissolved, a similar summary can be found in the 2010 publication Assessment Methods in Undergraduate Medical Education.
These methods include:
- Incognito standardised patients, showing up unannounced in an IMG’s clinical list;
- A portfolio of evidence gathered by IMGs to document learning experiences during a supervised period of practice, which is used as a basis for identifying progress that has been made;
- Videotaped consultations of IMGs as they work through their list of patients in their regular clinical setting.
Other methods are discussed in Norcini & Burch (2007).29 These methods include:
- Clinical encounter cards, which document mini-CEX-type encounters on 10 x 15 cm cards that include a rating scale for each of the clinical dimensions assessed and provide space for assessors to record the feedback given;
- Clinical work sampling, in which medical staff observing and assessing the IMG record data on specific patient encounters;
- Blended patient encounters involving an IMG-patient encounter (focused interview or physical examination) which is observed by the assessor at the bedside of a patient unknown to the IMG. Using the clinical findings gleaned during the IMG-patient encounter, the IMG then presents the assessor with a patient diagnosis and differential diagnosis.
The methods outlined above are not used as extensively as the methods described in detail in this lesson, nor are there data supporting their reliability and validity. As such, they are not currently recommended for adoption in workplace-based assessment of IMGs in the Standard Pathway.
Formulating a workplace-based assessment strategy for IMGs in the Standard Pathway
Authorities seeking to develop their system of workplace-based assessment of IMGs in the Standard Pathway should formulate an assessment strategy which draws on the methods and strategies presented in this lesson. Importantly, the assessment strategy should:
- Comprise several assessment methods that, in combination, provide a comprehensive assessment of the clinical performance expected of IMGs who successfully complete the Standard Pathway;
- Include multiple observations of the IMG in clinical settings and in clinical encounters over a period of time;
- Draw on the judgments of multiple assessors.
Summary
This lesson has described methods of workplace-based assessment for IMGs in the Standard Pathway (workplace-based assessment) as an alternative to the existing AMC clinical examination. Successful completion of the Standard Pathway assessment program should address whether or not an IMG possesses an adequate and appropriate set of clinical skills and other essential characteristics to practise safely and effectively within the Australian health care environment.
References:
6 Frederiksen N. The real test bias: Influences on testing and teaching and learning. Am Psychol 1984;39:193-202.
7 Swanson DB, Norman GR, Linn RL. Performance-based assessment: Lessons from the health professions. Educ Res 1995;24:5-11.
8 Shepard LA. The role of assessment in a learning culture. Educ Res 2000;29:4-14.
9 Norcini J, Blank L, Arnold G, Kimball H. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med 1995;123(10):795-799.
10 Nair BR, Alexander HG, McGrath BP, Parvathy MS, Kilsby EC, Wenzel J, Frank IB, Pachev GS, Page GG. The mini clinical evaluation exercise (mini-CEX) for assessing clinical performance of international medical graduates. Med J Aust 2008;189(3):159-161.
11 ibid.
12 Cruess R, McIlroy J, Cruess S, Ginsburg S, Steinert Y. The professionalism mini-evaluation exercise: A preliminary investigation. Acad Med 2006;81(10 Suppl):S74-S78.
13 op. cit. Norcini et al. 1995 #9.
14 Norcini JJ. Peer assessment of competence. Med Educ 2003;37(6 ):539-543.
15 Kogan J, Bellini L, Shea J. Feasibility, reliability and validity of the mini-clinical evaluation exercise (mini-CEX) in a medicine core clerkship. Acad Med 2003;78(10 Suppl):S33-S35.
16 Norcini J, Blank L, Duffy F, Fortna G. The mini-CEX: a method for assessing clinical skills. Ann Intern Med 2003;138(6):476-481.
17 op. cit. Cruess R, et al. 2006 #12.
18 Wragg A, Wade W, Fuller G, Cowan G, Mills P. Assessing the performance of specialist registrars. Clin Med 2003;3(2):131-4.
19 Wilkinson J, Benjamin A, Wade W. Assessing the performance of doctors in training. BMJ 2003;327:s91-2.
20 Davies H, Archer J, Heard S. Assessment tools for Foundation Programmes—a practical guide. BMJ Career Focus 2005;330(7484):195-6.
21 Morris A, Hewitt J, Roberts C. Practical experience of using directly observed procedures, mini clinical evaluation examinations, and peer observation in pre-registration house officer (FY1) trainees. Postgrad Med J 2006;82:285-88.
22 Reznick R. Teaching and testing technical skills. Am J Surg 1993;165:358-61.
23 Maatsch JL, Huang R, Downing S, Barker B. Predictive validity of medical specialist examinations. Final report for Grant HS02038-04, National Center of Health Services Research. East Lansing, MI: Office of Medical Education Research and Development, Michigan State University; 1983.
24 Norman GR, David D, Painvin A, Lindsay E, Rath D, Ragbeer M. Comprehensive assessment of clinical competence of family/general physicians using multiple measures. Proceedings of the Research in Medical Education Conference; 1989. p. 75-79.
25 Solomon DJ, Reinhart MA, Bridgham RG, Munger BS, Starnaman S. An assessment of an oral examination format for evaluating clinical competence in emergency medicine. Acad Med 1990;65:S43-S44.
26 Daelmans HEM, van der Hem-Stokroos HH, Hoogenboom RJI, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Feasibility and reliability of an in-training assessment programme in an undergraduate clerkship. Med Educ 2004;38(12):1270-1277.
27 Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach 2007;29(9):855-871.
28 Garman AN, Tyler JL, Darnall JS. Development and validation of a 360-degree-feedback instrument for healthcare administrators. Journal of Healthcare Management 2004;49(5):307-21.
29 op. cit. Norcini J, Burch V. 2007 #27.