In any assessment, a standard has to be set to enable assessors to form judgments on candidates’ performance. Standards for passing the assessment must be set in a manner consistent with the purpose of the assessment and must be defensible. Several formal methods are available for setting a standard, but in essence all are to some degree arbitrary and all rest on the judgment of a group of experienced and trained assessors.
Standards may be relative or absolute. Relative standards use methods such as bell curves to set the proportion of candidates who will pass or fail; each candidate’s performance is thus judged by comparison with other candidates’ performances. Absolute standards describe what a candidate must demonstrate in order to pass, and this is the more appropriate approach in workplace-based assessment.
The overall standard for workplace-based assessment of IMGs in the Standard Pathway has been set at the level expected of a minimally, or just, competent medical officer at the end of PGY1. Although this overall standard has been set, those developing workplace-based assessment programs still need to decide how to apply it to the individual assessment formats and to their combination.
Deciding on a passing standard:
Issues to consider in deciding a passing standard are as follows:
- Consider the assessment formats used and the scoring system for each, noting whether or not a numerical score is produced.
- Decide on the degree of compensation allowed. The use of multiple methods has been recommended in workplace-based assessment, which raises the question of how results from those methods should be combined into an overall pass or fail decision for each candidate. In a compensatory approach, results from different formats are combined so that excellent performance in one component can make up for poor performance in another. This may not be desirable, particularly where the individual components assess different aspects of clinical performance.
Decisions are needed on the degree of compensation permitted both across multiple uses of the same format (for example, combining the results of several mini-CEXs) and across different assessment formats (for example, combining the results of mini-CEXs with supervisors’ reports).
If compensation is not allowed, that is, if every component of the assessment must be passed, the validity and reliability of each individual component must be established to ensure that a defensible decision is made. The difference between compensatory and non-compensatory decision rules is illustrated in a sketch at the end of this section.
- Set a passing standard. As the standard set for IMGs on the Standard Pathway (workplace-based assessment) is that expected at the end of PGY1, it is important that assessors involved in the program are familiar with the standards that apply to PGY1 medical officers and are trained to apply this standard in their individual assessments.
Decisions will also have to be made on how to combine the component scores or performance judgments; some assessment formats provide a numerical score and others do not. There are many methods for undertaking a formal standard-setting procedure; for example, articles by Norcini (1987, 2003), Ben-David (2000) and Livingston and Zieky (1982) outline these methods.30,31,32,33
Typically the methods depend on either:
(a) reviewing the assessment content and making judgments about how minimally competent or borderline candidates would perform on it; for example, the Angoff or Ebel methods; or
(b) considering the actual performance of a group of candidates on the test; for example, the Borderline Group Method.34,35,36 The arithmetic behind both approaches is illustrated in a sketch at the end of this section.
After the assessment has been completed, review the consequences of the standard that has been set. The overall passing rates should be reviewed, and input sought from a multidisciplinary group of clinicians involved in the assessment process.
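To make the distinction between compensatory and non-compensatory decision rules concrete, the sketch below works through a purely hypothetical example. The component names, scores and cut scores are invented for illustration only and are not values prescribed for the Standard Pathway.

```python
# Illustrative only: the component names, scores and cut scores below are
# hypothetical, not values prescribed for the Standard Pathway.

# One candidate's results on three workplace-based assessment components,
# each expressed as a percentage score with its own passing cut score.
results = {
    "mini-CEX (average of encounters)": {"score": 48, "cut": 50},
    "case-based discussion": {"score": 72, "cut": 50},
    "supervisor's report": {"score": 65, "cut": 50},
}

# Compensatory rule: average the component scores and compare the result
# with an overall cut score, so strong performance in one component can
# offset weak performance in another.
overall = sum(r["score"] for r in results.values()) / len(results)
compensatory_pass = overall >= 50

# Non-compensatory (conjunctive) rule: every component must reach its own
# cut score; excellent performance elsewhere cannot make up for a failure.
non_compensatory_pass = all(r["score"] >= r["cut"] for r in results.values())

print(f"overall average = {overall:.1f}")
print(f"compensatory decision:     {'pass' if compensatory_pass else 'fail'}")
print(f"non-compensatory decision: {'pass' if non_compensatory_pass else 'fail'}")
# With these invented scores the candidate passes under the compensatory
# rule but fails under the non-compensatory rule, because the mini-CEX
# component falls below its cut score.
```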
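Similarly, for readers unfamiliar with the standard-setting procedures mentioned above, the sketch below illustrates the basic arithmetic behind an Angoff-style judgment (approach (a)) and the Borderline Group Method (approach (b)). The judge estimates and candidate data are invented; actual standard setting follows the detailed procedures described in the references.

```python
from statistics import mean, median

# Illustrative arithmetic only: the judge estimates and candidate data are
# invented, and real standard setting follows the detailed procedures in
# the references (Norcini; Ben-David; Livingston and Zieky).

# (a) Angoff-style approach: each judge estimates, for each item or task,
# the proportion of minimally competent (borderline) candidates who would
# succeed on it. Rows are judges, columns are items.
angoff_estimates = [
    [0.6, 0.7, 0.5, 0.8],  # judge 1
    [0.5, 0.6, 0.4, 0.7],  # judge 2
    [0.7, 0.8, 0.5, 0.9],  # judge 3
]
# The cut score is the expected score of a just-competent candidate:
# the sum, over items, of the mean judge estimate for that item.
per_item = list(zip(*angoff_estimates))
angoff_cut = sum(mean(item) for item in per_item)
print(f"Angoff cut score: {angoff_cut:.2f} out of {len(per_item)} items")

# (b) Borderline Group Method: assessors give each candidate a score and,
# separately, a global rating; the cut score is taken from the scores of
# the candidates rated "borderline" (commonly their median).
candidates = [
    {"score": 78, "rating": "clear pass"},
    {"score": 61, "rating": "borderline"},
    {"score": 55, "rating": "borderline"},
    {"score": 49, "rating": "borderline"},
    {"score": 35, "rating": "clear fail"},
]
borderline = [c["score"] for c in candidates if c["rating"] == "borderline"]
print(f"Borderline Group cut score: {median(borderline)}")
```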
References:
30 Norcini JJ, Lipner RS, Langdon LO, Strecker CA. A comparison of three variations on a standard-setting method. J Educ Meas 1987;24:56-64.
31 Norcini JJ. Setting standards on educational tests. Med Educ 2003;37:464-469.
32 Ben-David MF. AMEE Guide No. 18: Standard setting in student assessment. Med Teach 2000;22(2):120-130.
33 Livingston SA, Zieky MJ. Passing scores: A manual for setting standards of performance on educational and occupational tests. Princeton, NJ: Educational Testing Service; 1982.
34 Angoff WH. Scales, norms and equivalent scores. In: Thorndike RL (ed). Educational Measurement. Washington, DC: American Council on Education; 1971. p. 508-600.
35 Ebel RL. Determination of the passing score. In: Essentials of Educational Measurement. 3rd ed. Englewood Cliffs, NJ: Prentice-Hall; 1979. p. 337-42.
36 op. cit. Ben-David MF. 2000 (see reference 32).