
Comparing Assessment Types in Nursing Education

  • Last Updated: March 4, 2024


There are several different types of assessments that nursing instructors may use to measure their students’ progress. Among the most common are summative, formative, and benchmark assessments. We’ll explore the strengths and weaknesses of each, as well as their unique benefits for both instructors and students. We’ll also discuss how partnering with UWorld Nursing can elevate the effectiveness of your assessments and better prepare your students for NCLEX success.

Formative Assessment and Summative Assessment

Formative Assessments in Nursing

Formative assessments are used to identify a student’s strengths and weaknesses, allowing for immediate remediation. Unlike other assessment types, these are generally low-stakes and used to aid in the learning process rather than determine a grade.

Pros and Cons of Formative Assessments

Formative assessments are a great diagnostic tool rather than an evaluative one. They can be a tremendous help to both instructors and students in terms of identifying and addressing knowledge gaps. In nursing education, high-quality formative assessments are designed to pinpoint specific areas where individuals, groups, or the entire class need improvement. This targeted feedback helps students strengthen their clinical judgment and understanding, ultimately leading to better patient care.

UWorld Content for Formative Assessments

UWorld University partners can create unlimited formative assessments using classic and NGN items from our QBank. To create a formative assessment, simply select relevant questions based on subject, system, or client needs categories. Just like in our student QBanks, these questions come with detailed explanations to help with remediation. Instructors can also review student performance at an individual and class-wide level.


Familiarizing Students with Adaptive Testing

Students can take unlimited computerized adaptive test (CAT) practice exams in the self-study area of the Learning Platform. While these are not true formative assessments, they serve as an excellent preparation tool because they mirror the adaptive conditions of the real NCLEX. Upon completing a UWorld CAT, students receive their overall score, their level of preparedness, and the difficulty factor of the questions they answered. Instructors can also view the results to help with remediation.

Summative Assessments in Nursing

Summative assessments evaluate a student’s knowledge of a subject at the end of an instructional period based on a uniform standard (e.g., a midterm or final exam).

Pros and Cons of Summative Assessments

Summative assessments are useful for determining a student’s overall understanding of a topic. In nursing, their high-stakes nature can be used as an opportunity to teach productive study strategies and habits in anticipation of the NCLEX. However, summative assessments typically don’t give students an opportunity to learn from their mistakes. They can be used to assess the effectiveness of a course or program but are not intended to be diagnostic for students.

UWorld Content for Summative Assessments

Instructors can assign up to six summative assessments with NCLEX-style questions through our Learning Platform for Nursing. Each assessment contains 100 unique NGN and classic questions that cannot be found in our student or faculty QBanks. Upon completion, students can review their answers and read detailed rationales for each question, while instructors can view thorough performance reports.

Maximizing Your UWorld Nursing Assessments

When used correctly, UWorld Nursing assessments are a remarkably powerful remediation and NCLEX preparation tool. Your students will get firsthand experience with NGN-style questions and time constraints, as well as exposure to the most relevant NCLEX topics. Instructors can then identify which students are on track or falling behind. Here’s how:

  • Assign at least three assessments each semester (or one about every two months)
  • Have your students review the answer explanations upon completing each assessment
  • View in-depth performance reports to track your students’ progress and identify at-risk students
  • Create unique assignments for individuals or groups based on these results to turn their weaknesses into strengths

The analytics instructors receive are twofold. First is a breakdown of student performance across subjects, systems, and topic areas. Second is an accurate prediction of each student’s chance of passing the NCLEX (low, borderline, high, or very high) based on statistically validated scoring.

What about Benchmark Testing in Nursing?

Benchmark assessments measure a student’s progress toward an educational goal over time. Administered at predefined intervals, such as at the beginning and end of a nursing program, they are generally used to compare students’ understanding against a uniform standard (determined by accrediting bodies).

Pros and Cons of Benchmark Assessments

The greatest benefit of using benchmark assessments is the ability for educators and administrators to compare their students’ performance against national standards. A weakness is that benchmark testing requires additional safeguards to ensure results are fair and accurate (e.g., providing a secure or proctored testing environment).

Legislative Changes to Benchmark Testing in Nursing Education

Benchmark testing is a hot topic in nursing education, with compelling arguments for and against the practice. It’s important to note that your program may be impacted by legislation limiting the use of benchmark tests created by private entities. Texas is a recent example:

In Fall 2023, the Texas Legislature passed a bill with stricter regulations on standardized examinations used by nursing schools and educational programs. While there are a number of ways standardized benchmark tests can still be implemented, the goal is to prohibit their use as a graduation requirement and minimize their impact on students’ grades.

Is There a Best Assessment Type for Nursing Students?

Summative, formative, and benchmark assessments all have their place in nursing education. Because summative and benchmark assessments are evaluative in nature, they can help determine if students are on track with educational objectives; however, formative assessments are better at identifying at-risk students earlier and increasing student engagement in the learning process.

Regardless of your course structure, the UWorld Learning Platform for Nursing can be used to elevate student performance through NCLEX-style questions, detailed reports, and built-in remediation methods. Our resources are flexible and align with the new AACN Essentials, enabling easy integration with any nursing curriculum.


Nursing Course and Curriculum Development – University of Brighton


Summative assessment (i.e. assessment of learning)

Summative assessment enables students to demonstrate the extent of their learning, which will contribute to their overall degree classification.

The module specification must state (in the ‘Assessment tasks’ section):

  • The details of the assessment for the module
  • The minimum pass mark
  • The type of assessment task and weighting
  • At each academic level, at least 1 module will need to offer flexibility with an alternative assessment task to support inclusive practice

Examples of summative assessments (and alternatives):

  • Oral presentation (Poster presentation, Webinar, Podcast)
  • Leaflet (webpage)
  • Written reflection (Blog, Vlog)
  • Clinical Link Learning Activities (open skill)
  • Practical exam (OSCE, VIVA)
  • Written exam (open book)
  • Group task, e.g. students work in a group to write a 1000-word essay – social interaction increases academic writing skills and positive social support

Whilst a variety of assessment types can help students who have different strengths, it is important that assessment tasks are repeated to enable feed forward.

Aim to have clinically relevant assessment tasks following the nursing model – nursing assessment / plan / implement / evaluate, e.g.:

  • assess the needs of a person and family affected by
  • plan a relevant care package or approach to care which could be carried out in your area
  • create a question / problem that replicates the real-life context as closely as possible
  • compare different theories in the same situation
  • see Clinical Link Learning Activities for more examples

Parameters for a 20-credit module (equivalent to 35 hours of student effort):

  • 2500 – 3500 word essay
  • 2.5 – 3 hour written exam
  • 20 – 25 minute presentation
  • 5 – 6 minute video production

For modules with more than 1 assessment task, the output is proportionate to the weighting, e.g. at 50% weighting, a 10-minute presentation and a 1500-word essay.

Moderation:

Every marker marks the same submission and discusses the feedback and mark to agree the standard for marking all remaining scripts – this supports consistency.

There is a risk that latent criteria are applied to the marking of assessments, e.g. academic writing style or other content not part of the learning outcomes. It is therefore important that what markers expect to assess aligns with the learning outcomes and the assessment task.


Nursing Education Network


Formative & Summative Assessment

An introduction and overview of formative & summative assessment:

  • Describe key concepts related to formative and summative assessment
  • Formative and summative assessment in healthcare
  • Making learning visible
  • Key learning resources

Formative & Summative Assessment Presentation [Download]



25 Summative Assessment Examples

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Summative assessment is a type of achievement assessment that occurs at the end of a unit of work. Its goal is to evaluate what students have learned or the skills they have developed. It stands in contrast to formative assessment, which takes place in the middle of the unit of work to give students feedback on their learning.

Performance is evaluated according to specific criteria and usually results in a final grade or percentage achieved.

The scores of individual students are then compared to established benchmarks which can result in significant consequences for the student.

A traditional example of summative evaluation is a standardized test such as the SATs. The SATs help colleges determine which students should be admitted.

However, summative assessment doesn’t have to be in a paper-and-pencil format. Project-based learning, performance-based assessments, and authentic assessments can all be forms of summative assessment.

Summative vs Formative Assessment

Summative assessments are one of two main types of assessment. The other is formative assessment.

Whereas summative assessment occurs at the end of a unit of work, a formative assessment takes place in the middle of the unit so teachers and students can get feedback on progress and make accommodations to stay on track.

Summative assessments tend to be much higher-stakes because they reflect a final judgment about a student’s learning, skills, and knowledge:

“Passing bestows important benefits, such as receiving a high school diploma, a scholarship, or entry into college, and failure can affect a child’s future employment prospects and earning potential as an adult” (States et al, 2018, p. 3).


Summative Assessment Examples

Looking for real-life examples of well-known summative tests? Skip to the next section.

1. Multiple-Choice Exam


Definition: A multiple-choice exam is an assessment where students select the correct answer from several options.

Benefit: This format allows for quick and objective grading of students’ knowledge on a wide range of topics.

Limitation: It can encourage guessing and may not measure deep understanding or the ability to synthesize information.

Design Tip: Craft distractors that are plausible to better assess students’ mastery of the material.

2. Final Essay


Definition: A final essay is a comprehensive writing assessment that requires students to articulate their understanding and analysis of a topic.

Benefit: Essays assess critical thinking, reasoning, and the ability to communicate ideas in writing.

Limitation: Grading can be subjective and time-consuming, potentially leading to inconsistencies.

Design Tip: Provide clear, detailed rubrics that specify criteria for grading to ensure consistency and transparency.

3. Lab Practical Exam


Definition: A lab practical exam tests students’ ability to perform scientific experiments and apply theoretical knowledge practically.

Benefit: It directly assesses practical skills and procedural knowledge in a realistic setting.

Limitation: These exams can be resource-intensive and challenging to standardize across different settings or institutions.

Design Tip: Design scenarios that replicate real-life problems students might encounter in their field.

4. Reflective Journal


Definition: A reflective journal is an assessment where students regularly record learning experiences, personal growth, and emotional responses.

Benefit: Encourages continuous learning and self-assessment, helping students link theory with practice.

Limitation: It’s subjective and heavily dependent on students’ self-reporting and engagement levels.

Design Tip: Provide prompts to guide reflections and ensure they are focused and meaningful.

5. Open-Book Examination


Definition: An open-book examination allows students to refer to their textbooks and notes while answering questions.

Benefit: Tests students’ ability to locate and apply information rather than memorize facts.

Limitation: It may not accurately gauge memorization or the ability to quickly recall information.

Design Tip: Focus questions on problem-solving and application to prevent students from merely copying information.

6. Group Presentation


Definition: A group presentation is an assessment where students collaboratively prepare and deliver a presentation on a given topic.

Benefit: Enhances teamwork skills and the ability to communicate ideas publicly.

Limitation: Individual contributions can be uneven, making it difficult to assess students individually.

Design Tip: Clearly define roles and expectations for all group members to ensure fair participation.

7. Poster Presentation


Definition: A poster presentation requires students to summarize their research or project findings on a poster and often defend their work in a public setting.

Benefit: Develops skills in summarizing complex information and public speaking.

Limitation: Space limitations may restrict the amount of information that can be presented.

Design Tip: Encourage the use of clear visual aids and a logical layout to effectively communicate key points.

8. Infographic


Definition: An infographic is a visual representation of information, data, or knowledge intended to present information quickly and clearly.

Benefit: Helps develop skills in designing effective and attractive presentations of complex data.

Limitation: Over-simplification might lead to misinterpretation or omission of critical nuances.

Design Tip: Teach principles of visual design and data integrity to enhance the educational value of infographics.

9. Portfolio Assessment


Definition: Portfolio assessment involves collecting a student’s work over time, demonstrating learning, progress, and achievement.

Benefit: Provides a comprehensive view of a student’s abilities and improvements over time.

Limitation: Can be logistically challenging to manage and time-consuming to assess thoroughly.

Design Tip: Use clear guidelines and checklists to help students know what to include and ensure consistency in assessment.

10. Project-Based Assessment


Definition: Project-based assessment evaluates students’ abilities to apply knowledge to real-world challenges through extended projects.

Benefit: Encourages practical application of skills and fosters problem-solving and critical thinking.

Limitation: Time-intensive and may require significant resources to implement effectively.

Design Tip: Align projects with real-world problems relevant to the students’ future careers to increase engagement and applicability.

11. Oral Exams


Definition: Oral exams involve students answering questions spoken by an examiner to assess their knowledge and thinking skills.

Benefit: Allows immediate clarification of answers and assessment of communication skills.

Limitation: Can be stressful for students and result in performance anxiety, affecting their scores.

Design Tip: Create a supportive environment and clear guidelines to help reduce anxiety and improve performance.

12. Capstone Project


Definition: A capstone project is a multifaceted assignment that serves as a culminating academic and intellectual experience for students.

Benefit: Integrates knowledge and skills from various areas, fostering holistic learning and innovation.

Limitation: Requires extensive time and resources to supervise and assess effectively.

Design Tip: Ensure clear objectives and support structures are in place to guide students through complex projects.

Real-Life Summative Assessments

  • Final Exams for a College Course: At the end of the semester at university, there is usually a final exam that will determine if you pass. There are also often formative tests mid-way through the course (known in England as ICAs and in the USA as midterms).
  • SATs: The SATs are the primary United States college admissions tests. They are a summative assessment because they provide a final grade that can determine whether a student gets into college or not.
  • AP Exams: The AP Exams take place at the end of Advanced Placement courses to also determine college readiness.
  • Piano Exams: The ABRSM administers piano exams to test if a student can move up a grade (from grades 1 to 8), which demonstrates their achievements in piano proficiency.
  • Sporting Competitions: A sporting competition such as a swimming race is summative because it leads to a result or ranking that cannot be revoked. However, as there will always be future competitions, they could also be treated as formative – especially if it’s not the ultimate competition in any given sport.
  • Driver’s License Test: A driver’s license test is pass-fail and represents the culmination of practice in driving skills.
  • IELTS: Language tests like IELTS are summative assessments of a person’s ability to speak a language (in the case of IELTS, it’s English).
  • Citizenship Test: Citizenship tests are pass-fail, and often high-stakes. There is no room for formative assessment here.
  • Dissertation Submission: A final dissertation submission for a postgraduate degree is often sent to an external reviewer who will give it a pass-fail grade.
  • CPR Course: Trainees in a 2-day first-aid and CPR course have to perform on a dummy while being observed by a licensed trainer.
  • PISA Tests: The PISA test is a standardized test commissioned by the OECD to provide a final score of students’ mathematics, science, and reading literacy across the world, which leads to a league table of nations.
  • The MCAT: The MCAT is a test that students take to see whether they can get into medical school. It requires significant study and preparation before test day.
  • The Bar: The Bar exam is an exam prospective lawyers must sit in order to be admitted to practice law in their jurisdiction.

Summative assessment allows teachers to determine if their students have reached the defined behavioral objectives. It can occur at the end of a unit, an academic term, or academic year.

The assessment usually results in a grade or a percentage that is recorded in the student’s file. These scores are then used in a variety of ways and are meant to provide a snapshot of the student’s progress.

Although the SAT or ACT are common examples of summative assessment, it can actually take many forms. Teachers might ask their students to give an oral presentation, perform a short role-play, or complete a project-based assignment. 

Brookhart, S. M. (2004). Assessment theory for college classrooms. New Directions for Teaching and Learning, 100, 5-14. https://doi.org/10.1002/tl.165

Dixon, D. D., & Worrell, F. C. (2016). Formative and summative assessment in the classroom. Theory into Practice, 55, 153-159. https://doi.org/10.1080/00405841.2016.1148989

Geiser, S., & Santelices, M. V. (2007). Validity of high-school grades in predicting student success beyond the freshman year: High-school record vs. standardized tests as indicators of four-year college outcomes. Research and Occasional Paper Series. Berkeley, CA: Center for Studies in Higher Education, University of California.

Kibble, J. D. (2017). Best practices in summative assessment. Advances in Physiology Education, 41(1), 110–119. https://doi.org/10.1152/advan.00116.2016

Lungu, S., Matafwali, B., & Banja, M. K. (2021). Formative and summative assessment practices by teachers in early childhood education centres in Lusaka, Zambia. European Journal of Education Studies, 8(2), 44-65.

States, J., Detrich, R., & Keyworth, R. (2018). Summative Assessment (Wing Institute Original Paper). https://doi.org/10.13140/RG.2.2.16788.19844


  • Research article
  • Open access
  • Published: 08 June 2021

Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction

Oscar Arrogante, Gracia María González-Romero, Eva María López-Torre, Laura Carrión-García & Alberto Polo

BMC Nursing, volume 20, article number 92 (2021)


Formative and summative evaluation are widely employed in simulated-based assessment. The aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation in undergraduate nursing students and to compare their satisfaction with this methodology using these two evaluation strategies.

Two hundred eighteen undergraduate nursing students participated in a cross-sectional study using a mixed-methods design. MAES© (self-learning methodology in simulated environments) sessions were developed to assess students by formative evaluation. Objective Structured Clinical Examination sessions were conducted to assess students by summative evaluation. Simulated scenarios recreated clinical cases of critical patients. Students’ performance in all simulated scenarios was assessed using checklists. A validated questionnaire was used to evaluate satisfaction with clinical simulation. Quantitative data were analysed using the IBM SPSS Statistics version 24.0 software, whereas qualitative data were analysed using the ATLAS-ti version 8.0 software.

Most nursing students showed adequate clinical competence. Satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The main students’ complaints with summative evaluation were related to reduced time for performing simulated scenarios and increased anxiety during their clinical performance.

The best solution to reduce students’ complaints with summative evaluation is to orient them to the simulated environment. It should be recommended to combine both evaluation strategies in simulated-based assessment, providing students feedback in summative evaluation, as well as evaluating their achievement of learning outcomes in formative evaluation.


The use of clinical simulation methodology has increased exponentially over the last few years and has gained acceptance in nursing education. Simulation-based education (SBE) is considered an effective educational methodology for nursing students to achieve the competencies needed for their professional future [ 1 – 5 ]. In addition, simulation-based educational programs have been shown to be more useful than traditional teaching methodologies [ 4 , 6 ]. As a result, most nursing faculties are integrating this methodology into their study plans [ 7 ]. SBE has the potential to shorten the learning curve for students, increase the fusion between theoretical knowledge and clinical practice, identify students’ deficient areas, support the acquisition of communication and technical skills, improve patient safety, standardise the curriculum and teaching contents, and offer observations of real-time clinical decision making [ 5 , 6 , 8 , 9 ].

SBE offers an excellent opportunity to perform not only observed competency-based teaching, but also the assessment of these competencies. Simulated-based assessment (SBA) is aimed at evaluating various professional skills, including knowledge, technical and clinical skills, communication, and decision-making; as well as higher-order competencies such as patient safety and teamwork skills [ 1 – 4 , 10 ]. Compared with other traditional assessment methods (i.e. written or oral test), SBA offers the opportunity to evaluate the actual performance in an environment similar to the ‘real’ clinical practice, assess multidimensional professional competencies, and present standard clinical scenarios to all students [ 1 – 4 , 10 ].

The main SBA strategies are formative and summative evaluation. Formative evaluation is conducted to establish students’ progression during the course [ 11 ]. This evaluation strategy is helpful to educators in improving students’ deficient areas and testing their knowledge [ 12 ]. Employing this evaluation strategy, educators give students feedback about their performance. Subsequently, students self-reflect to evaluate their learning and determine their deficient areas. In this sense, formative evaluation includes an ideal phase to achieve the purposes of this strategy: the debriefing [ 13 ]. The International Nursing Association for Clinical Simulation and Learning (INACSL) defines debriefing as a reflective process immediately following the simulation-based experience where ‘participants explore their emotions and question, reflect, and provide feedback to one another’. Its aim is ‘to move toward assimilation and accommodation to transfer learning to future situations’ [ 14 ]. Therefore, debriefing is a basic component for learning to be effective after the simulation [ 15 , 16 ]. Furthermore, MAES© (according to its Spanish initials of self-learning methodology in simulated environments) is a clinical simulation methodology created to perform formative evaluations [ 17 ]. MAES© specifically allows evaluating the nursing competencies acquired by several nursing students at the same time. MAES© is structured through the union of other active learning methodologies such as self-directed learning, problem-based learning, peer education and simulation-based learning. Specifically, students acquire and develop competencies through self-directed learning, as they voluntarily choose competencies to learn. Furthermore, this methodology encourages students to be the protagonists of their learning process, since they can choose the case they want to study, design the clinical simulation scenario and, finally, actively participate during the debriefing phase [ 17 ]. This methodology meets all the requirements defined by the INACSL Standards of Best Practice [ 18 ]. Compared to traditional simulation-based learning (where simulated clinical scenarios are designed by the teaching team and led by facilitators), the MAES© methodology (where simulated clinical scenarios are designed and led by students) provides nursing students with a better learning process and clinical performance [ 19 ]. Currently, the MAES© methodology is used in clinical simulation sessions with nursing students in some universities, not only in Spain but also in Norway, Portugal and Brazil [ 20 ].

In contrast, summative evaluation is used to establish the learning outcomes achieved by students at the end of the course [ 11 ]. This evaluation strategy is helpful to educators in evaluating students’ learning, the competencies acquired by them and their academic achievement [ 12 ]. This assessment is essential in the education process to determine readiness and competence for certification and accreditation [ 10 , 21 ]. Accordingly, Objective Structured Clinical Examination (OSCE) is commonly conducted in SBA as a summative evaluation to evaluate students’ clinical competence [ 22 ]. Consequently, OSCE has been used by educational institutions as a valid and reliable method of assessment. OSCE most commonly consists of a ‘round-robin’ of multiple short testing stations, in each of which students must demonstrate defined clinical competencies, while educators evaluate their performance according to predetermined criteria using a standardized marking scheme, such as checklists. Students must rotate through these stations where educators assess students’ performance in clinical examination, technical skills, clinical judgment and decision-making skill during the nursing process [ 22 , 23 ]. This strategy of summative evaluation incorporates actors performing as simulated patients. Therefore, OSCE allows assessing students’ clinical competence in a real-life simulated clinical environment. After simulated scenarios, this evaluation strategy provides educators with an opportunity to give students constructive feedback according to their achieved results in the checklist [ 10 , 21 – 23 ].

Although both evaluation strategies are widely employed in SBA, there is scarce evidence about the possible differences in satisfaction with clinical simulation when nursing students are assessed using formative and summative evaluation. Considering the high satisfaction with the formative evaluation perceived by our students during the implementation of the MAES© methodology, we wondered whether this satisfaction would be similar when the same simulated clinical scenarios were used for summative evaluation. Additionally, we wanted to understand the reasons why this satisfaction might differ between the two strategies of SBA. Therefore, the aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation methodology in undergraduate nursing students, as well as to compare their satisfaction with this methodology using two strategies of SBA: formative and summative evaluation. In this sense, our research hypothesis is that both strategies of SBA are effective for acquiring nursing competencies, but that student satisfaction with formative evaluation is higher than with summative evaluation.

Study design and setting

A descriptive cross-sectional study using a mixed-methods approach, analysing both quantitative and qualitative data. The study was conducted from September 2018 to May 2019 in a University Centre of Health Sciences in Madrid (Spain). This centre offers Physiotherapy and Nursing Degrees.

Participants

The study included 3rd-year undergraduate students (106 students participated in MAES© sessions within the subject ‘Nursing care for critical patients’) and 4th-year undergraduate students (112 students participated in OSCE sessions within the subject ‘Supervised clinical placements – Advanced level’) in the Nursing Degree. It should be noted that 4th-year undergraduate students had completed all their clinical placements and had to pass the OSCE sessions to achieve their certification.

Clinical simulation sessions

To assess the clinical performance of 3rd-year undergraduate students using formative evaluation, MAES© sessions were conducted. This methodology consists of 6 elements in a minimum of two sessions [ 17 ]: Team selection and creation of group identity (students are grouped into teams and they create their own identity), voluntary choice of subject of study (each team will freely choose a topic that will serve as inspiration for the design of a simulation scenario), establishment of baseline and programming skills to be acquired through brainstorming (the students, by teams, decide what they know about the subject and then what they want to learn from it, as well as the clinical and non-technical skills they would like to acquire with the case they have chosen), design of a clinical simulation scenario in which the students practice the skills to be acquired (each team commits to designing a scenario in the simulation room), execution of the simulated clinical experience (another team, different from the one that has designed the case, will enter the high-fidelity simulation room and will have a simulation experience), and finally debriefing and presentation of the acquired skills (in addition to analysing the performance of the participants in the scenario, the students explain what they learned during the design of the case and look for evidence of the learning objectives).

Alternatively, OSCE sessions were developed to assess the clinical performance of 4th-year undergraduate students using summative evaluation. Both MAES© and OSCE sessions recreated critically ill patients with diagnoses of Exacerbation of Chronic Obstructive Pulmonary Disease (COPD), acute coronary syndrome, haemorrhage in a postsurgical patient, and severe traumatic brain injury.

It should be noted that the implementation of all MAES© and OSCE sessions followed the Standards of Best Practice recommended by the INACSL [ 14 , 24 – 26 ]. In this way, all the stages included in a high-fidelity session were accomplished: pre-briefing, briefing, simulated scenario, and debriefing. Specifically, a session with all nursing students was carried out 1 week before the performance of OSCE stations to establish a safe psychological learning environment and familiarize students with this summative evaluation. In this pre-briefing phase, we implemented several activities based on practices recommended by the INACSL Standards Committee [ 24 , 25 ] and Rudolph, Raemer, and Simon [ 27 ] for establishing a psychologically safe context. Although traditional OSCEs do not usually include the debriefing phase, we decided to include this phase in all OSCEs carried out in our university centre, since we consider this phase quite relevant to nursing students’ learning process and their imminent professional career.

The critically ill patient’s role was performed by an advanced simulator mannequin (NursingAnne® by Laerdal Medical AS) in all simulated scenarios. A confederate (a health professional who acts in a simulated scenario) performed the role of a registered nurse or a physician who could help students as required. Occasionally, this confederate could perform the role of a relative of a critically ill patient. Nursing students formed work teams of 2–3 students in all MAES© and OSCE sessions. Specifically, each work team formed in MAES© sessions received a brief description of the simulated scenario 2 months in advance, and students had to propose 3 NIC (Nursing Interventions Classification) interventions [ 28 ], and 5 related nursing activities for each of them, to resolve the critical situation. In contrast, the critical situation was presented to each work team formed in OSCE sessions for 2 min before they entered the simulated scenario. During all simulated experiences, professors were monitoring and controlling the simulation with a sophisticated computer program in a dedicated control room. All simulated scenarios lasted 10 min.

After each simulated clinical scenario concluded, a debriefing was carried out to give students feedback about their performance. Debriefings in MAES© sessions were conducted according to the Gather, Analyse, and Summarise (GAS) method, a structured debriefing model developed by Phrampus and O’Donnell [ 29 ]. According to this method, the debriefing questions used were: What went well during your performance?; What did not go so well during your performance?; How can you do better next time? Additionally, MAES© includes an expository phase in debriefings, where the students who performed the simulated scenario establish the contributions of scientific evidence about its resolution [ 17 ]. Each debriefing lasted 20 min in MAES© sessions. In contrast, debriefings in OSCE sessions lasted 10 min and were carried out according to the Plus-Delta debriefing tool [ 30 ], a technique recommended when time is limited. Consequently, the debriefing questions were reduced to two: What went well during your performance?; What did not go so well during your performance? Within these debriefings, professors communicated to students the total score obtained in the appropriate checklist. After all debriefings, students completed the questionnaires to evaluate their satisfaction with clinical simulation. In OSCE sessions, students had to report their satisfaction only with the scenario performed, which formed part of a series of clinical stations.

In summary, Table  1 shows the required elements for formative and summative evaluation according to the Standards of Best Practice for participant evaluation recommended by the INACSL [ 18 ]. It should be noted that our MAES© and OSCE sessions accomplished these required elements.

Instruments

Clinical performance.

Professors assessed students’ clinical performance using checklists (‘Yes’/‘No’). In MAES© sessions, checklists were based on the 5 most important nursing activities included in the NIC [ 28 ] selected by nursing students. Table 2 shows the checklist of the most important NIC interventions and their related nursing activities selected by nursing students in the Exacerbation of Chronic Obstructive Pulmonary Disease (COPD) simulated scenario. In contrast, checklists for evaluating OSCE sessions were based on nursing activities selected by consensus among professors, registered nurses, and clinical placement mentors. Nursing activities were divided into 5 categories: nursing assessment, clinical judgment/decision-making, clinical management/nursing care, communication/interpersonal relationships, and teamwork. Table 3 shows the checklist of nursing activities that nursing students had to perform in the COPD simulated scenario. During the execution of all simulated scenarios, professors checked whether the participants performed the selected nursing activities.

Clinical simulation satisfaction

To determine satisfaction with clinical simulation perceived by nursing students, the Satisfaction Scale Questionnaire with High-Fidelity Clinical Simulation [ 31 ] was used after each clinical simulation session. This questionnaire consists of 33 items with a 5-point Likert scale ranging from ‘strongly disagree’ to ‘totally agree’. These items are divided into 8 scales: simulation utility, characteristics of cases and applications, communication, self-reflection on performance, increased self-confidence, relation between theory and practice, facilities and equipment, and negative aspects of simulation. Cronbach’s α values for each scale ranged from .914 to .918, and the total scale presents satisfactory internal consistency (Cronbach’s α value = .920). This questionnaire includes a final question about any opinion or suggestion that participating students wish to share after the simulation experience.
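
The internal-consistency values above follow the standard Cronbach’s alpha formula, α = [k/(k − 1)] · (1 − Σ item variances / variance of the summed score), where k is the number of items in the scale. As a minimal illustration only (not the authors’ SPSS workflow, and using entirely hypothetical Likert responses), the coefficient can be computed as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering a 4-item scale on a 1-5 Likert range
responses = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 3))
```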

Data analysis

Quantitative data were analysed using IBM SPSS Statistics version 24.0 software for Windows (IBM Corp., Armonk, NY, USA). Descriptive statistics were calculated to interpret the results obtained in demographic data, clinical performance, and satisfaction with clinical simulation. Differences in the dependent variables between the two groups were analysed using independent t-tests. Cohen’s d was calculated to estimate the effect size for each t-test. Statistical tests were two-sided, with statistical significance set at α = 0.05. Subsequently, all students’ opinions and comments were analysed using the ATLAS-ti version 8.0 software (Scientific Software Development GmbH, Berlin, Germany). All the information contained in these qualitative data was stored, managed, classified and organized through this software. All the reiterated words, sentences or ideas were grouped into themes using a thematic analysis [ 32 ]. It should be noted that the students’ opinions and comments were preceded by the letter ‘S’ (student) and numerically labelled.
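
For readers who want to see the group comparison concretely, the following is a minimal sketch of an independent t-test paired with a pooled-standard-deviation Cohen’s d, the effect-size measure reported in Tables 4 and 5. It uses Python with SciPy rather than the SPSS software employed in the study, and the two arrays of satisfaction scores are purely hypothetical.

```python
import numpy as np
from scipy import stats

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of two independent groups."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * group_a.var(ddof=1) +
                  (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# Hypothetical satisfaction scores (1-5 Likert means) for the two evaluation strategies
formative = np.array([4.6, 4.8, 4.5, 4.9, 4.7, 4.4])
summative = np.array([4.1, 4.0, 4.3, 3.9, 4.2, 4.0])

result = stats.ttest_ind(formative, summative)   # two-sided by default
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}, "
      f"d = {cohens_d(formative, summative):.2f}")
```

By the conventional thresholds cited in the paper [ 33 ], d values around .2 are small, around .5 medium, and above .8 large.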

A total of 218 nursing students participated in the study (106 students were trained through MAES© sessions, whereas 112 students were assessed through OSCE sessions). The age of students ranged from 20 to 43 years (mean = 23.28; SD = 4.376). Most students were women (n = 184; 84.4%).

In the formative evaluation, professors verified that 93.2% of students adequately selected both the NIC interventions and their related nursing activities for the resolution of the simulated clinical scenario. Subsequently, these professors verified that 85.6% of the students who participated in each simulated scenario performed the nursing activities they had previously selected. In the summative evaluation, students obtained total scores ranging from 65 to 95 points (mean = 7.43; SD = .408).

Descriptive data for each scale of the satisfaction with clinical simulation questionnaire, t-tests, and effect sizes (d) of the differences between the two evaluation strategies are shown in Table 4. Statistically significant differences were found between the two evaluation strategies for all scales of the satisfaction with clinical simulation questionnaire. Students’ satisfaction with clinical simulation was higher for all scales of the questionnaire when they were assessed using formative evaluation, including the ‘negative aspects of simulation’ scale, where the students perceived fewer negative aspects. The effect size of these differences was large (including the total score of the questionnaire) (Cohen’s d values > .8), except for the ‘facilities and equipment’ scale, for which the effect size was medium (Cohen’s d value > .5) [ 33 ].

Table 5 shows descriptive data, t-tests, and effect sizes (d) of the differences between both evaluation strategies for each item of the clinical simulation satisfaction questionnaire. Statistically significant differences were found between the two evaluation strategies for all items of the questionnaire, except for the items ‘I have improved communication with the family’, ‘I have improved communication with the patient’, and ‘I lost calm during any of the cases’. Students’ satisfaction with clinical simulation was higher in formative evaluation sessions for most items, except for the item ‘simulation has made me more aware/worried about clinical practice’, where students reported being more aware and worried in summative evaluation sessions. Most effect sizes of these differences were small or medium (Cohen’s d values ranged from .238 to .709) [ 33 ]. The largest effect sizes were obtained for the items ‘timing for each simulation case has been adequate’ (d = 1.107), ‘overall satisfaction of sessions’ (d = .953), and ‘simulation has made me more aware/worried about clinical practice’ (d = -.947). In contrast, the smallest effect sizes were obtained for the items ‘simulation allows us to plan the patient care effectively’ (d = .238) and ‘the degree of cases difficulty was appropriate to my knowledge’ (d = .257).

In addition, participating students provided 74 opinions or suggestions expressed through short comments. Most students’ comments were related to 3 main themes after the thematic analysis: the utility of clinical simulation methodology (S45: ‘it has been a useful activity and it helped us to recognize our mistakes and fixing knowledge’, S94: ‘to link theory to practice is essential’), spending more time on this methodology (S113: ‘I would ask for more practices of this type‘, S178: ‘I feel very happy, but it should be done more frequently’), and its integration into other subjects (S21: ‘I consider this activity should be implemented in more subjects’, S64: ‘I wish there were more simulations in more subjects’). Finally, students’ comments about summative evaluation sessions included 2 other main themes: the limited time of the simulation experience (S134: ‘time is short’, S197: ‘there is no time to perform activities and assess properly’) and students’ anxiety (S123: ‘I was very nervous because people were evaluating me around’, S187: ‘I was more nervous than in a real situation’).

The most significant results obtained in our study are the nursing competency acquisition through clinical simulation by nursing students and the different level of their satisfaction with this methodology depending on the evaluation strategy employed.

Firstly, professors in this study verified that most students acquired the nursing competencies needed to resolve each clinical situation, performing the majority of the nursing activities required for the resolution of each MAES© session and OSCE station. This result confirms the findings of other studies that have demonstrated nursing competency acquisition by nursing students through clinical simulation [ 34 , 35 ], and specifically of nursing competencies related to critical patient management [ 9 , 36 ].

Secondly, students’ satisfaction assessed using both evaluation strategies could be considered high in most items of the questionnaire, regarding their mean scores (quite close to the maximum score in the response scale of the satisfaction questionnaire). The high level of satisfaction expressed by nursing students with clinical simulation obtained in this study is also congruent with empirical evidence, which confirms that this methodology is a useful tool for their learning process [ 6 , 31 , 37 – 40 ].

However, satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The students’ main complaints with summative evaluation were related to reduced time for performing simulated scenarios and increased anxiety during their clinical performance. Reduced time is a frequent complaint of students in OSCE [ 23 , 41 ] and clinical simulation methodology [ 5 , 6 , 10 ]. Professors, registered nurses, and clinical placement mentors tested all simulated scenarios and their checklists in this study and confirmed that the time allotted was enough for their resolution. Another criticism of summative evaluation is increased anxiety. However, several studies have demonstrated that students’ anxiety increases during clinical simulation [ 42 , 43 ], and it is considered the main disadvantage of clinical simulation [ 1 – 10 ]. In this sense, anxiety may negatively influence students’ learning process [ 42 , 43 ]. Although the current simulation methodology can mimic the real medical environment to a great degree, it might still be questionable whether students’ performance in the testing environment really represents their true ability. Test anxiety might increase in an unfamiliar testing environment; difficulty handling unfamiliar technology (i.e., monitor, defibrillator, or other devices that may be different from the ones used in the examinee’s specific clinical environment) or even the need to ‘act as if’ in an artificial scenario (i.e., talking to a simulator, examining a ‘patient’ knowing he/she is an actor or a mannequin) might all compromise examinees’ performance. The best solution to reduce these complaints is the orientation of students to the simulated environment [ 10 , 21 – 23 ].

Nevertheless, it should be noted that the diversity in the satisfaction scores obtained in our study could be supported not by the choice of the assessment strategy, but precisely by the different purposes of formative and summative assessment. In this sense, there is a component of anxiety that is intrinsic in summative assessment, which must certify the acquisition of competencies [ 10 – 12 , 21 ]. In contrast, this aspect is not present in formative assessment, which is intended to help the student understand the distance to reach the expected level of competence, without penalty effects [ 10 – 12 ].

Both SBA strategies allow educators to evaluate students’ knowledge and their ability to apply it in a clinical setting. However, formative evaluation is identified as ‘assessment for learning’ and summative evaluation as ‘assessment of learning’ [ 44 ]. Using formative evaluation, educators’ responsibility is to ensure not only what students are learning in the classroom, but also the outcomes of their learning process [ 45 ]. In this sense, formative assessment by itself is not enough to determine educational outcomes [ 46 ]. Consequently, a checklist for evaluating students’ clinical performance was included in MAES© sessions. In contrast, educators cannot make any corrections to students’ performance using summative evaluation [ 45 ]. Gavriel [ 44 ] suggests providing students feedback in this SBA strategy. Therefore, a debriefing phase was included after each OSCE session in our study. The significance of debriefing recognised by nursing students in our study is also congruent with most of the evidence found [ 13 , 15 , 16 , 47 ]. Nursing students appreciate feedback about their performance during the simulation experience and, consequently, they consider debriefing the most rewarding phase in clinical simulation [ 5 , 6 , 48 ]. In addition, nursing students in our study expressed that they could learn from their mistakes in debriefing. Learning from error is one of the main advantages of clinical simulation shown in several studies [ 5 , 6 , 49 ], and mistakes should be considered learning opportunities rather than sources of embarrassment or punitive consequences [ 50 ].

Furthermore, nursing students who participated in our study considered the practical utility of clinical simulation as another advantage of this teaching methodology. This result is congruent with previous studies [ 5 , 6 ]. Specifically, our students indicated this methodology is useful to bridge the gap between theory and practice [ 51 , 52 ]. In this sense, clinical simulation has proven to reduce this gap and, consequently, it has demonstrated to shorten the gap between classrooms and clinical practices  [ 5 , 6 , 51 , 52 ]. Therefore, as this teaching methodology relates theory and practice, it helps nursing students to be prepared for their clinical practices and future careers. According to Benner’s model of skill acquisition in nursing [ 53 ], nursing students become competent nurses through this learning process, acquiring a degree of safety and clinical experience before their professional careers [ 54 ]. Although our research indicates clinical simulation is a useful methodology for the acquisition and learning process of competencies mainly related to adequate management and nursing care of critically ill patients, this acquisition and learning process could be extended to most nursing care settings and its required nursing competencies.

Limitations and future research

Although checklists employed in OSCE have been criticized for their subjective construction [ 10 , 21 – 23 ], they were constructed with the expert consensus of nursing professors, registered nurses and clinical placement mentors. In addition, the self-reported questionnaire used to evaluate clinical simulation satisfaction has strong validity. All simulated scenarios were similar in MAES© and OSCE sessions (same clinical situations, patients, actors and number of participating students), although the debriefing method employed after them was different. This difference was due to the reduced time available in OSCE sessions. Furthermore, it should be pointed out that the two groups of students involved in our study were from different course years and were exposed to different strategies of SBA. In this sense, future studies should compare nursing students’ satisfaction with both strategies of SBA in the same group of students and using the same debriefing method. Finally, future research should combine formative and summative evaluation for assessing the clinical performance of undergraduate nursing students in simulated scenarios.

Students need to be given feedback about their clinical performance when they are assessed using summative evaluation. Furthermore, it is necessary to evaluate whether they achieve the learning outcomes when they are assessed using formative evaluation. Consequently, combining both evaluation strategies in SBA is recommended. Although students expressed high satisfaction with the clinical simulation methodology, they perceived reduced time and increased anxiety when assessed by summative evaluation. The best solution is to orient students to the simulated environment.

Availability of data and materials

The datasets analysed during the current study are available from the corresponding author on reasonable request.

Martins J, Baptista R, Coutinho V, Fernandes M, Fernandes A. Simulation in nursing and midwifery education. Copenhagen: World Health Organization Regional Office for Europe; 2018.


Cant RP, Cooper SJ. Simulation-based learning in nurse education: systematic review. J Adv Nurs. 2010;66:3–15.


Chernikova O, Heitzmann N, Stadler M, Holzberger D, Seidel T, Fischer F. Simulation-based learning in higher education: a meta-analysis. Rev Educ Res. 2020;90:499–541.


Kim J, Park JH, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ. 2016;16:152.

Article   PubMed   PubMed Central   Google Scholar  

Ricketts B. The role of simulation for learning within pre-registration nursing education—a literature review. Nurse Educ Today. 2011;31:650–4.

PubMed   Google Scholar  

Shin S, Park JH, Kim JH. Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Educ Today. 2015;35:176–82.

Bagnasco A, Pagnucci N, Tolotti A, Rosa F, Torre G, Sasso L. The role of simulation in developing communication and gestural skills in medical students. BMC Med Educ. 2014;14:106.

Oh PJ, Jeon KD, Koh MS. The effects of simulation-based learning using standardized patients in nursing students: a meta-analysis. Nurse Educ Today. 2015;35:e6–e15.

Stayt LC, Merriman C, Ricketts B, Morton S, Simpson T. Recognizing and managing a deteriorating patient: a randomized controlled trial investigating the effectiveness of clinical simulation in improving clinical performance in undergraduate nursing students. J Adv Nurs. 2015;71:2563–74.

Ryall T, Judd BK, Gordon CJ. Simulation-based assessments in health professional education: a systematic review. J Multidiscip Healthc. 2016;9:69–82.

PubMed   PubMed Central   Google Scholar  

Billings DM, Halstead JA. Teaching in nursing: a guide for faculty. 4th ed. St. Louis: Elsevier; 2012.

Nichols PD, Meyers JL, Burling KS. A framework for evaluating and planning assessments intended to improve student achievement. Educ Meas Issues Pract. 2009;28:14–23.

Cant RP, Cooper SJ. The benefits of debriefing as formative feedback in nurse education. Aust J Adv Nurs. 2011;29:37–47.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Simulation Glossary. Clin Simul Nurs. 2016;12:S39–47.

Dufrene C, Young A. Successful debriefing-best methods to achieve positive learning outcomes: a literature review. Nurse Educ Today. 2014;34:372–6.

Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34:e58–63.

Díaz JL, Leal C, García JA, Hernández E, Adánez MG, Sáez A. Self-learning methodology in simulated environments (MAES©): elements and characteristics. Clin Simul Nurs. 2016;12:268–74.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM : Participant Evaluation. Clin Simul Nurs. 2016;12:S26–9.

Díaz Agea JL, Megías Nicolás A, García Méndez JA, Adánez Martínez MG, Leal CC. Improving simulation performance through self-learning methodology in simulated environments (MAES©). Nurse Educ Today. 2019;76:62–7.

Díaz Agea JL, Ramos-Morcillo AJ, Amo Setien FJ, Ruzafa-Martínez M, Hueso-Montoro C, Leal-Costa C. Perceptions about the self-learning methodology in simulated environments in nursing students: a mixed study. Int J Environ Res Public Health. 2019;16:4646.

Article   PubMed Central   Google Scholar  

Oermann MH, Kardong-Edgren S, Rizzolo MA. Summative simulated-based assessment in nursing programs. J Nurs Educ. 2016;55:323–8.

Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ. 1979;13:41–54.

CAS   PubMed   Google Scholar  

Mitchell ML, Henderson A, Groves M, Dalton M, Nulty D. The objective structured clinical examination (OSCE): optimising its value in the undergraduate nursing curriculum. Nurse Educ Today. 2009;29:394–404.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Simulation Design. Clin Simul Nurs. 2016;12:S5–S12.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Facilitation. Clin Simul Nurs. 2016;12:S16–20.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Debriefing. Clin Simul Nurs. 2016;12:S21–5.

Rudolph JW, Raemer D, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc. 2014;9:339–49.

Butcher HK, Bulechek GM, Dochterman JMM, Wagner C. Nursing Interventions Classification (NIC). 7th ed. St. Louis: Elsevier; 2018.

Phrampus PE, O’Donnell JM. Debriefing using a structured and supported approach. In: AI AIL, De Maria JS, Schwartz AD, Sim AJ, editors. The comprehensive textbook of healthcare simulation. New York: Springer; 2013. p. 73–84.

Chapter   Google Scholar  

Decker S, Fey M, Sideras S, Caballero S, Rockstraw L, Boese T, et al. Standards of best practice: simulation standard VI: the debriefing process. Clin Simul Nurs. 2013;9:S26–9.

Alconero-Camarero AR, Gualdrón-Romero A, Sarabia-Cobo CM, Martínez-Arce A. Clinical simulation as a learning tool in undergraduate nursing: validation of a questionnaire. Nurse Educ Today. 2016;39:128–34.

Mayan M. Essentials of qualitative inquiry. Walnut Creek: Left Coast Press, Inc.; 2009.

Cohen L, Manion L, Morrison K. Research methods in education. 7th ed. London: Routledge; 2011.

Lapkin S, Levett-Jones T, Bellchambers H, Fernandez R. Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clin Simul Nurs. 2010;6:207–22.

McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. Revisiting “a critical review of simulation-based medical education research: 2003-2009”. Med Educ. 2016;50:986–91.

Abelsson A, Bisholt B. Nurse students learning acute care by simulation - focus on observation and debriefing. Nurse Educ Pract. 2017;24:6–13.

Bland AJ, Topping A, Wood BA. Concept analysis of simulation as a learning strategy in the education of undergraduate nursing students. Nurse Educ Today. 2011;31:664–70.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN student satisfaction and self-confidence in learning, design scale simulation, and educational practices questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34:1298–304.

Levett-Jones T, McCoy M, Lapkin S, Noble D, Hoffman K, Dempsey J, et al. The development and psychometric testing of the satisfaction with simulation experience scale. Nurse Educ Today. 2011;31:705–10.

Zapko KA, Ferranto MLG, Blasiman R, Shelestak D. Evaluating best educational practices, student satisfaction, and self-confidence in simulation: a descriptive study. Nurse Educ Today. 2018;60:28–34.

Kelly MA, Mitchell ML, Henderson A, Jeffrey CA, Groves M, Nulty DD, et al. OSCE best practice guidelines-applicability for nursing simulations. Adv Simul. 2016;1:10.

Cantrell ML, Meyer SL, Mosack V. Effects of simulation on nursing student stress: an integrative review. J Nurs Educ. 2017;56:139–44.

Nielsen B, Harder N. Causes of student anxiety during simulation: what the literature says. Clin Simul Nurs. 2013;9:e507–12.

Gavriel J. Assessment for learning: a wider (classroom-researched) perspective is important for formative assessment and self-directed learning in general practice. Educ Prim Care. 2013;24:93–6.

Taras M. Summative and formative assessment. Act Learn High Educ. 2008;9:172–82.

Wunder LL, Glymph DC, Newman J, Gonzalez V, Gonzalez JE, Groom JA. Objective structured clinical examination as an educational initiative for summative simulation competency evaluation of first-year student registered nurse anesthetists’ clinical skills. AANA J. 2014;82:419–25.

Neill MA, Wotton K. High-fidelity simulation debriefing in nursing education: a literature review. Clin Simul Nurs. 2011;7:e161–8.

Norman J. Systematic review of the literature on simulation in nursing education. ABNF J. 2012;23:24–8.

King A, Holder MGJr, Ahmed RA. Error as allies: error management training in health professions education. BMJ Qual Saf. 2013;22:516–9.

Higgins M, Ishimaru A, Holcombe R, Fowler A. Examining organizational learning in schools: the role of psychological safety, experimentation, and leadership that reinforces learning. J Educ Change. 2012;13:67–94.

Hope A, Garside J, Prescott S. Rethinking theory and practice: Pre-registration student nurses experiences of simulation teaching and learning in the acquisition of clinical skills in preparation for practice. Nurse Educ Today. 2011;31:711–7.

Lisko SA, O’Dell V. Integration of theory and practice: experiential learning theory and nursing education. Nurs Educ Perspect. 2010;31:106–8.

Benner P. From novice to expert: excellence and power in clinical nursing practice. Menlo Park: Addison-Wesley Publishing; 1984.

Book   Google Scholar  

Nickless LJ. The use of simulation to address the acute care skills deficit in pre-registration nursing students: a clinical skill perspective. Nurse Educ Pract. 2011;11:199–205.


Acknowledgements

The authors appreciate the collaboration of nursing students who participated in the study.

STROBE statement

All methods were carried out in accordance with the 22-item STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist for cross-sectional studies.

Funding

The authors have no sources of funding to declare.

Author information

Authors and Affiliations

Fundación San Juan de Dios, Centro de Ciencias de la Salud San Rafael, Universidad de Nebrija, Paseo de La Habana, 70, 28036, Madrid, Spain

Oscar Arrogante, Gracia María González-Romero, Eva María López-Torre, Laura Carrión-García & Alberto Polo


Contributions

OA: Conceptualization, Data Collection, Formal Analysis, Writing – Original Draft, Writing - Review & Editing, Supervision; GMGR: Conceptualization, Data Collection, Writing - Review & Editing; EMLT: Conceptualization, Writing - Review & Editing; LCG: Conceptualization, Data Collection, Writing - Review & Editing; AP: Conceptualization, Data Collection, Formal Analysis, Writing - Review & Editing, Supervision. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Oscar Arrogante.

Ethics declarations

Ethics approval and consent to participate

The research committee of the Centro Universitario de Ciencias de la Salud San Rafael-Nebrija approved the study (P_2018_012). In accordance with ethical standards, all participants received written information about the study and its goals and provided written informed consent. Additionally, written informed consent for audio-video recording was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Arrogante, O., González-Romero, G.M., López-Torre, E.M. et al. Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction. BMC Nurs 20, 92 (2021). https://doi.org/10.1186/s12912-021-00614-2


Received: 09 February 2021

Accepted: 17 May 2021

Published: 08 June 2021

DOI: https://doi.org/10.1186/s12912-021-00614-2


Keywords

  • Clinical competence
  • High-fidelity simulation training
  • Nursing students



Published: 28 December 2022

Simulation-based summative assessment in healthcare: an overview of key principles for practice

  • Clément Buléon (ORCID: orcid.org/0000-0003-4550-3827)
  • Laurent Mattatia
  • Rebecca D. Minehart
  • Jenny W. Rudolph
  • Fernande J. Lois
  • Erwan Guillouet
  • Anne-Laure Philippon
  • Olivier Brissaud
  • Antoine Lefevre-Scelles
  • Dan Benhamou
  • François Lecomte
  • the SoFraSimS Assessment with simulation group
  • Anne Bellot
  • Isabelle Crublé
  • Guillaume Philippot
  • Thierry Vanderlinden
  • Sébastien Batrancourt
  • Claire Boithias-Guerot
  • Jean Bréaud
  • Philine de Vries
  • Louis Sibert
  • Thierry Sécheresse
  • Virginie Boulant
  • Louis Delamarre
  • Laurent Grillet
  • Marianne Jund
  • Christophe Mathurin
  • Jacques Berthod
  • Blaise Debien
  • Olivier Gacia
  • Guillaume Der Sahakian
  • Sylvain Boet
  • Denis Oriot
  • Jean-Michel Chabot

Advances in Simulation volume 7, Article number: 42 (2022)


Healthcare curricula need summative assessments relevant to and representative of clinical situations to best select and train learners. Simulation provides multiple benefits with a growing literature base proving its utility for training in a formative context. Advancing to the next step, “the use of simulation for summative assessment” requires rigorous and evidence-based development because any summative assessment is high stakes for participants, trainers, and programs. The first step of this process is to identify the baseline from which we can start.

First, using a modified nominal group technique, a task force of 34 panelists defined topics to clarify the why, how, what, when, and who for using simulation-based summative assessment (SBSA). Second, each topic was explored by a group of panelists using a state-of-the-art literature review technique, with a snowball method to identify further references. Our goal was to identify current knowledge and potential recommendations for future directions. Results were cross-checked among groups and reviewed by an independent expert committee.

Seven topics were selected by the task force: "What can be assessed in simulation?", "Assessment tools for SBSA", "Consequences of undergoing the SBSA process", "Scenarios for SBSA", "Debriefing, video, and research for SBSA", "Trainers for SBSA", and "Implementation of SBSA in healthcare". Together, these seven explorations provide an overview of what is known and can be done with relative certainty, and of what is unknown and probably needs further investigation. Based on this work, we highlighted the trustworthiness of different summative assessment-related conclusions, the remaining important problems and questions, and the consequences for participants and institutions of how SBSA is conducted.

Our results identified among the seven topics one area with robust evidence in the literature (“What can be assessed in simulation?”), three areas with evidence that require guidance by expert opinion (“Assessment tools for SBSA”, “Scenarios for SBSA”, “Implementation of SBSA in healthcare”), and three areas with weak or emerging evidence (“Consequences of undergoing the SBSA process”, “Debriefing for SBSA”, “Trainers for SBSA”). Using SBSA holds much promise, with increasing demand for this application. Due to the important stakes involved, it must be rigorously conducted and supervised. Guidelines for good practice should be formalized to help with conduct and implementation. We believe this baseline can direct future investigation and the development of guidelines.

Background

There is a critical need for summative assessment in healthcare education [1]. Summative assessment is high stakes, both for graduation certification and for recertification in continuing medical education [2, 3, 4, 5]. Knowing the consequences, the decision to validate or not validate the competencies must be reliable, based on rigorous processes, and supported by data [6]. Current methods of summative assessment such as written or oral exams are imperfect and need to be improved to better benefit programs, learners, and ultimately patients [7]. A good summative assessment should sufficiently reflect clinical practice to provide a meaningful assessment of competencies [1, 8]. While some could argue that oral exams are a form of verbal simulation, hands-on simulation can be seen as a solution to complement current summative assessments and enhance their accuracy by bringing these tools closer to assessing the necessary competencies [1, 2].

Simulation is now well established in the healthcare curriculum as part of a modern, comprehensive approach to medical education (e.g., competency-based medical education) [ 9 , 10 , 11 ]. Rich in various modalities, simulation provides training in a wide range of technical and non-technical skills across all disciplines. Simulation adds value to the educational training process particularly with feedback and formative assessment [ 9 ]. With the widespread use of simulation in the formative setting, the next logical step is using simulation for summative assessment.

The shift from formative to summative assessment using simulation in healthcare must be thoughtful, evidence-based, and rigorous. Program directors and educators may find it challenging to move from formative to summative use of simulation. There are currently limited experiences (e.g., OSCE [12, 13]) but no established guidelines on how to proceed. The evidence needed for the feasibility, the validity, and the definition of the requirements for simulation-based summative assessment (SBSA) in healthcare education has not yet been formally gathered. With this evidence, we can hope to build a rigorous and fair pathway to SBSA.

The purpose of this work is to review current knowledge of SBSA by clarifying the guidance on why, how, what, when, and who. We aim to identify areas (i) with robust evidence in the literature, (ii) with evidence that requires guidance by expert opinion, and (iii) with weak or emerging evidence. This may serve as a basis for future research and guideline development for the safe and effective use of SBSA (Fig. 1).

Fig. 1 Study question and topic level of evidence

Methods

First, we performed a modified Nominal Group Technique (NGT) to define the further questions to be explored in order to have the most comprehensive understanding of SBSA. We followed recommendations on NGT for conducting and reporting this research [14]. Second, we conducted state-of-the-art literature reviews to assess the current knowledge on the questions/topics identified by the modified NGT. This work did not require Institutional Review Board involvement.

A discussion on the use of SBSA was led by executive committee members of the Société Francophone de Simulation en Santé (SoFraSimS) in a plenary session involving congress participants in May 2018 at the SoFraSimS annual meeting in Strasbourg, France. Key points addressed during this meeting were the growing interest in using SBSA, its informal uses, and its inclusion in some formal healthcare curricula. The discussion identified that these important topics lacked current guidelines. To reduce knowledge gaps, the SoFraSimS executive committee assigned one of its members (FL, one of the authors) to lead and act as an NGT facilitator for a task force on SBSA. The task force's mission was to map the current landscape of SBSA, including the current knowledge and gaps, and potentially to identify expert recommendations.

Task force characteristics

The task force panelists were recruited among volunteer healthcare simulation trainers in French-speaking countries after a call for applications by SoFraSimS in May 2019. The recruiting criteria were a minimum of 5 years of experience in simulation and direct involvement in developing or currently running simulation programs. Thirty-four panelists (12 women and 22 men) from 3 countries (Belgium, France, and Switzerland) were included. Twenty-three were physicians and 11 were nurses, and 12 held academic positions. All were experienced simulation trainers with more than 7 years of experience and were involved in or responsible for initial training or continuing education programs with simulation. The task force leader (FL) was responsible for recruiting panelists, organizing and coordinating the modified NGT, synthesizing responses, and writing the final report. A facilitator (CB) assisted the task force leader with the modified NGT, the synthesis of responses, and the writing of the final report. Both NGT facilitators (FL and CB) had more than 14 years of experience in simulation, had experience in simulation research, and were responsible for developing and running simulation programs.

First part: initial question and modified nominal group technique (NGT)

To answer the challenging question "What do we need to know for a safe and effective SBSA practice?", following the French Haute Autorité de Santé guidelines [15], we applied a modified nominal group technique (NGT) approach [16] between September and October 2019. The goal of our modified NGT was to define the further questions to be explored in order to have the most comprehensive understanding of current SBSA (Fig. 2). The modifications to the NGT were that interactions were not in person and, for some, were asynchronous. These modifications were introduced because of the geographic dispersion of the panelists across multiple countries and the context of the COVID-19 pandemic.

Fig. 2 Study flowchart

The first two steps of the NGT (generation of ideas and round robin), facilitated by the task force leader (FL), were conducted online, simultaneously and asynchronously, via email exchanges and online surveys over a 6-week period. To initiate the first step (generation of ideas), the task force leader (FL) sent an initial non-exhaustive literature review of 95 articles and proposed the following initial items for reflection: definition of assessment, educational principles of simulation, place of summative assessment and its implementation, and assessment of technical and non-technical skills in initial training, continuing education, and interprofessional training. The task force leader (FL) asked the panelists to formulate topics or questions to propose for exploration in Part 2, based on their knowledge and the literature provided. Panelists independently elaborated proposals and sent them back to the task force leader (FL), who regularly synthesized them and sent the status of the questions/topics to the whole task force, preserving the anonymity of the contributors and asking them to check the accuracy of the synthesized elements (second step, the "round robin").

The third step of the NGT (clarification) was carried out during a 2-h video conference session. All panelists were able to discuss the proposed ideas, group the ideas into topics, and make the necessary clarifications. As a result of this step, 24 preliminary questions were defined for the fourth step (Supplemental Digital Content 1).

The fourth step of the NGT (vote) consisted of four distinct asynchronous and anonymous online vote rounds that led to a final set of topics with related sub-questions (Supplemental Digital content 2). Panelists were asked to vote to regroup, separate, keep, or discard questions/topics. All vote rounds followed similar validation rules. We [NGT facilitators (FL and CB)] kept items (either questions or topics) with more than 70% approval ratings by panelists. We reworded and resubmitted in the next round all items with 30–70% approval. We discarded items with less than 30% approval. The task force discussed discrepancies and achieved final ratings with a complete agreement for all items. For each round, we sent reminders to reach a minimum participation rate of 80% of the panelists. Then, we split the task force into 7 groups, one for each of the 7 topics defined at the end of the vote (step 4).
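To make the voting rules above concrete, here is a minimal, purely illustrative Python sketch (not part of the original study materials) that applies the keep/reword/discard thresholds to hypothetical approval ratings; the item names, rates, and the handling of the exact boundary values are assumptions of this sketch.

```python
def triage_vote_items(approval_rates):
    """Apply the voting rules described above to one round of results.

    approval_rates: dict mapping an item (question/topic) to the fraction
    of panelists who approved it (0.0-1.0).
    Returns three lists: kept, to be reworded and resubmitted, discarded.
    """
    kept, reworded, discarded = [], [], []
    for item, rate in approval_rates.items():
        if rate > 0.70:          # more than 70% approval: keep
            kept.append(item)
        elif rate >= 0.30:       # 30-70% approval: reword and resubmit next round
            reworded.append(item)
        else:                    # less than 30% approval: discard
            discarded.append(item)
    return kept, reworded, discarded


# Hypothetical results for one voting round
kept, reworded, discarded = triage_vote_items(
    {"Topic A": 0.82, "Topic B": 0.55, "Topic C": 0.20}
)
print(kept, reworded, discarded)  # ['Topic A'] ['Topic B'] ['Topic C']
```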

Second part: literature review

From November 2019 to October 2020, each group identified existing literature containing the current knowledge and potential recommendations on the topic it was assigned to address. This identification was based on a non-systematic review of the existing literature. To identify existing literature, the groups conducted state-of-the-art reviews [17] and expanded them with a snowballing literature review technique [18] based on the articles' references. The literature selected by each group was added to the task force's common library on SBSA in healthcare as the search was conducted.

For references, we searched electronic databases (MEDLINE), gray literature databases (including digital theses), simulation societies' and centers' websites, generic web searches (e.g., Google Scholar), and reference lists from articles. We selected publications related to simulation in healthcare using the keywords "summative assessment" and "summative evaluation", as well as specific keywords related to each topic. The search was iterative, seeking all available data until saturation was achieved. Ninety-five references were initially provided to the task force by the NGT facilitator and task force leader (FL). At the end of the work, the task force's common library contained a total of 261 references.
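As an illustration only, the backward-snowballing loop with a saturation stopping rule described above can be sketched as follows; `get_cited_references` is a hypothetical callable standing in for however a group retrieved an article's reference list, and the toy citation graph is invented.

```python
def snowball_search(seed_references, get_cited_references, max_rounds=10):
    """Backward snowballing: expand a library by following reference lists.

    seed_references: iterable of reference identifiers to start from.
    get_cited_references: hypothetical callable mapping one reference
        identifier to the identifiers it cites.
    Stops when a round adds no new references (saturation) or after max_rounds.
    """
    library = set(seed_references)
    frontier = set(seed_references)
    for _ in range(max_rounds):
        newly_found = set()
        for reference in frontier:
            newly_found |= set(get_cited_references(reference)) - library
        if not newly_found:   # saturation reached: nothing new to add
            break
        library |= newly_found
        frontier = newly_found
    return library


# Toy usage with an invented citation graph
citation_graph = {"A": ["B", "C"], "B": ["C", "D"], "C": [], "D": []}
print(sorted(snowball_search(["A"], lambda ref: citation_graph.get(ref, []))))
# ['A', 'B', 'C', 'D']
```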

Techniques to enhance trustworthiness from primary reports to the final report

The groups’ primary reports were reviewed and critiqued by other groups. After group cross-reviewing, primary reports were compiled by NGT facilitators (FL and CB) in a single report. This report, responding to the 7 topics, was drafted in December 2020 and submitted as a single report to an external review committee composed of 4 senior experts in education, training, and research from 3 countries (Belgium, Canada, France) with at least 15 years of experience in simulation. NGT facilitators (FL and CB) responded directly to reviewers when possible and sought assistance from the groups when necessary. The final version of the report was approved by the SoFraSimS executive committee in January 2021.

Results

First part: modified nominal group technique (NGT)

The first two steps of the NGT, by their nature (generation of ideas and "round robin"), did not provide results. The third step (clarification phase) identified 24 preliminary questions (Supplemental Digital Content 1) to be submitted to the fourth step (vote). The 4 rounds of voting (step 4) resulted in 7 topics with sub-questions (Supplemental Digital Content 2): (1) "What can be assessed in simulation?", (2) "Assessment tools for SBSA", (3) "Consequences of undergoing the SBSA process", (4) "Simulation scenarios for SBSA", (5) "Debriefing, video, research and SBSA strategies", (6) "Trainers for SBSA", and (7) "Implementation of SBSA in healthcare". These 7 topics and their sub-questions were the starting point for each group's state-of-the-art literature review in the second part.

For each of the 7 topics, the groups highlighted what appears to be validated in the literature, the remaining important problems and questions, and the consequences for participants and institutions of how SBSA is conducted. The results in this section present the major ideas and principles from the literature review, including their nuances where necessary.

What can be assessed in simulation?

Healthcare faculty and institutions must ensure that each graduate is practice ready. Readiness to practice implies mastering certain competencies, which depends on learning them appropriately. The competency approach involves explicit definitions of the acquired core competencies necessary to be a "good professional." Professional competency can be defined as the ability of a professional to use judgment, knowledge, skills, and attitudes associated with their profession to solve complex problems [19, 20, 21]. Competency is a complex "knowing how to act" based on the effective mobilization and combination of a variety of internal and external resources in a range of situations [19]. Competency is not directly observable; it is the performance in a situation that can be observed [19]. Performance can vary depending on human factors such as stress and fatigue. During simulation, competencies can be assessed by observing "key" actions using assessment tools [22]. Simulation's limitations must be considered when defining the assessable competencies, and not all simulation methods are equally suited to assessing specific competencies [22].

Most healthcare competencies can be assessed with simulation, throughout a curriculum, if certain conditions are met. First, the competency being assessed summatively must have already been assessed formatively with simulation [23, 24]. Second, validated assessment tools must be available to conduct this summative assessment [25, 26]. These tools must be reliable, objective, reproducible, acceptable, and practical [27, 28, 29, 30]. The small number of currently validated tools limits the use of simulation for competency certification [31]. Third, it is neither necessary nor desirable to certify all competencies [32]. The situations chosen must be sufficiently frequent in the student's future professional practice (or potentially impactful for the patient) and must be hard or impossible to assess and validate in other circumstances (e.g., clinical internships) [2]. Fourth, simulation can be used for certification throughout the curriculum [33, 34, 35]. Finally, a limitation on the use of simulation throughout the curriculum may be a lack of logistical resources [36]. Based on our findings in the literature, we have summarized in Table 1 the educational considerations when implementing an SBSA.

Assessment tools for simulation-based summative assessment

One of the challenges of assessing competency lies in the quality of the measurement tools [31]. A tool that allows raters to collect data must also allow them to give meaning to their assessment, while ensuring that it really measures what it aims to measure [25, 37]. A tool must be valid and capable of measuring the assessed competency with fidelity and reliability while providing reproducible data [38]. Since a competency is not directly measurable, it is analyzed on the basis of learning expectations, the most "concrete" and observable form of a competency [19]. Several authors have described definitions of the concept of validity and the steps to achieve it [38, 39, 40, 41]. Despite different validation approaches, the objectives are similar: to ensure that the tool is valid, that the scoring items reflect the assessed competency, and that the contents are appropriate for the targeted learners and raters [20, 39, 42, 43]. A tool should have psychometric characteristics that allow users to be confident of its reproducibility, discriminatory nature, reliability, and external consistency [44]. One way to ensure that a tool has acceptable validity is to compare it to existing, validated tools that assess the same skills for the same learners. Finally, it is important to consider the consequences of the test to determine whether it best discriminates competent students from others [38, 43].

Like a diagnostic score, a relevant assessment tool must be specific [30, 39, 41]. A tool is not inherently good or bad; it becomes valid through a rigorous validation process [39, 41, 42]. This validation process determines whether the tool measures what it is supposed to measure and whether this measurement is reproducible at different times (test–retest) or with 2 observers simultaneously. It also determines whether the tool's results are correlated with another measure of the same ability or competency and whether the consequences of the tool's results are related to the learners' actual competency [38].
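As a purely illustrative sketch (the ratings below are invented, and this is not one of the tools or statistics cited in the source), the agreement between two observers scoring the same performance on a binary checklist can be summarized with a chance-corrected index such as Cohen's kappa:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same items.

    rater_a, rater_b: equal-length sequences of category labels
    (here 1 = key action observed, 0 = not observed).
    """
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty ratings"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    if expected == 1.0:          # both raters used a single identical category
        return 1.0
    return (observed - expected) / (1 - expected)


# Two raters scoring the same 10-item checklist for one simulated performance
rater_1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.52
```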

Following Messick's framework, which aimed to gather different sources of validity into one global concept (unified validity), Downing describes five sources of validity, which must be assessed during the validation process [38, 45, 46]. Table 2 presents an illustration of the development used in SBSA according to the unified validity framework for a technical task [38, 45, 46]. An alternative framework using three sources of validity for teamwork's non-technical skills is presented in Table 3.

A tool is validated in a language. Theoretically, this tool can only be used in that language, given the nuances involved in interpretation [49]. In certain circumstances, a "translated" tool, as opposed to a tool "translated and validated in a specific language," can lead to semantic biases that affect the meaning of the content and its representation [49, 50, 51, 52, 53, 54, 55]. For each assessment sequence, validity criteria consist of using different tools in different assessment situations and integrating them into a comprehensive program that considers all aspects of competency. The rating made with a validated tool for one situation must be combined with other assessment situations, since there is no "ideal" tool [28, 56]. A given tool can be used with different professions, with participants at different levels of expertise, or in different languages if it is validated for these situations [57, 58]. In a summative context, a tool must have demonstrated a high level of validity to be used, because of the high stakes for the participants [56]. Finally, the use or creation of an assessment tool requires trainers to question its various aspects, from how it was created to its reproducibility and the meaning of the results generated [59, 60].

Two types of assessment tools should be distinguished: tools that can be adapted to different situations and tools that are specific to a situation [61]. Thus, technical skills may have a dedicated assessment tool (e.g., for intraosseous access) [47] or an assessment grid generated from a list of pre-established and validated items (e.g., the TAPAS scale) [62]. Non-technical skills can be observed using scales that are not situation-specific (e.g., ANTS, NOTECHS) [63, 64] or that are situation-specific (e.g., the TEAM scale for resuscitation) [57, 65]. Assessment tools should be provided to participants and should be included in the scenario framework, at least as a reference [66, 67, 68, 69]. In the summative assessment of a procedure, structured tools, such as a structured objective assessment form for technical skills, should probably be used [70]. The use of a scale in the assessment of a technical gesture seems essential, and, as with other tools, any scale must be validated beforehand [47, 70, 71, 72].

Consequences of undergoing the simulation-based summative assessment process

Summative assessment has two notable consequences for learning strategies. First, it may drive the learner's behavior during the assessment, whereas it is essential to assess the targeted competencies, not the participant's ability to adapt to the assessment tool [6]. Second, the key pedagogical concept of "pedagogical alignment" must be respected [23, 73]: assessment methods must be coherent with the pedagogical activities and objectives. For this to happen, participants must have formative simulation training focusing on the assessed competencies prior to the SBSA [24].

Participants have commonly been reported to experience mild (e.g., appearing slightly upset, distracted, teary-eyed, quiet, or resistant to participating in the debriefing) or moderate (e.g., crying or making loud, frustrated comments) psychological events during simulation [74]. While voluntary recruitment for formative simulation is commonplace, all students are required to take summative assessments in training. This required participation in a high-stakes assessment may have a more consequential psychological impact [26, 75]. This impact can be modulated by training and assessment conditions [75]. First, the repetition of formative simulations reduces the psychological impact of SBSA on participants [76]. Second, transparency about the objectives and methods of assessment limits detrimental psychological impact [77, 78]. Finally, detrimental psychological impacts are increased by abnormally high physiological or emotional stress, such as fatigue or stressful events in the 36 h preceding the assessment, and students with a history of post-traumatic stress disorder or another psychological disorder may be strongly and negatively affected by the simulation [76, 79, 80, 81].

It is necessary to optimize SBSA implementation to limit its negative pedagogical and psychological impacts. Ideally, it has been proposed that the summative assessment take into account the formative assessment that has already been carried out [1, 20, 21]. Similarly, in continuing education, the professional context of the person assessed should be considered. In the event of failure, it is necessary to ensure sympathetic feedback and to propose a new assessment if needed [21].

Scenarios for simulation-based summative assessment

Some authors argue that there are differences between summative and formative assessment scenarios [76, 79, 80, 81]. The development of an SBSA scenario begins with the choice of a theme, which is most often agreed upon by experts at the local level [66]. The themes are most often chosen based on the participants' competencies to be assessed and included in the competency requirements for initial [82] and continuing education [35, 83]. A literature review even suggests the need to choose themes covering all the competencies to be assessed [41]. These choices of themes and objectives also depend on the simulation tools technically available: "The themes were chosen if and only if the simulation tools were capable of reproducing 'a realistic simulation' of the case" [84].

The main quality criterion for SBSA is that the cases selected and developed are guided by the assessment objectives [85]. It is necessary to be clear about the assessment objectives of each scenario in order to select the right assessment tool [86]. Scenarios should meet four main principles: predictability, programmability, standardizability, and reproducibility [25]. Scenario writing should include a specific script, cues, timing, and events to practice and assess the targeted competencies [87]. The implementation of variable scenarios remains a challenge [88]; indeed, most authors develop only one scenario per topic and skill to be assessed [85]. There are no recommendations for setting a predictable duration for a scenario [89]. Based on our findings, we suggest some key elements for structuring an SBSA scenario in Table 4. For technical skill assessment, scenarios should be short, and the assessment is based on an analytical score [82, 89]. For non-technical skill assessment, scenarios should be longer, and the assessment is based on both analytical and holistic scores [82, 89].
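To illustrate the distinction between analytical (item-by-item checklist) and holistic (global rating) scoring, here is a minimal sketch under assumed values: the 80% checklist cutoff, the 1-5 global rating scale, and the conjunctive pass rule are hypothetical choices for illustration, not recommendations from the source.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ScenarioRating:
    checklist: List[bool]   # analytical component: key actions observed (True/False)
    global_rating: int      # holistic component: e.g., 1 (poor) to 5 (excellent)

def analytical_score(rating: ScenarioRating) -> float:
    """Proportion of key actions performed (0.0-1.0)."""
    return sum(rating.checklist) / len(rating.checklist)

def passes(rating: ScenarioRating,
           checklist_cutoff: float = 0.8,
           holistic_cutoff: int = 3) -> bool:
    """Hypothetical conjunctive rule: both components must reach their cut scores."""
    return (analytical_score(rating) >= checklist_cutoff
            and rating.global_rating >= holistic_cutoff)

# One rater's scoring of a non-technical-skills scenario (invented values)
rating = ScenarioRating(checklist=[True, True, True, False, True], global_rating=4)
print(analytical_score(rating), passes(rating))  # 0.8 True
```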

Debriefing, video, and research for simulation-based summative assessment

Studies have shown that debriefings are essential in formative assessment [90, 91]. No such studies are available for summative assessment. Good practice nevertheless requires debriefing in both formative and summative simulation-based assessments [92, 93]. In SBSA, debriefing is often brief feedback given at the end of the simulation session, in groups [85, 94, 95] or individually [83]. Debriefing can also be done later with a trainer and the help of video, or via written reports [96]. These debriefings make it possible to assess clinical skills for summative assessment purposes [97], and some tools have been developed to facilitate this assessment of clinical reasoning [97].

Video can be used for four purposes: session preparation, simulation improvement, debriefing, and rating (Table 5) [95, 98]. In SBSA sessions, video can be used during the prebriefing to provide participants with standardized and reproducible information [99]. Video can also increase the realism of the situation during the simulation (e.g., ultrasound loops and laparoscopy footage). Simulation recordings can be reviewed either for debriefing or for rating purposes [34, 71, 100, 101]. Video is very useful for training raters (e.g., for calibration and recalibration) [102]. It enables raters to rate the participants' performance offline and to obtain an external review if necessary [34, 71, 101]. Despite the technical difficulties to be considered [42, 103], it can be expected that video-based automated scoring assistance will facilitate assessments in the future.

The constraints associated with the use of video include obtaining participants' agreement, complying with local rules, and ensuring that the body in charge of the video-based assessment protects individuals' rights and data safety, both at the national level and at higher levels (e.g., the European GDPR) [104, 105].

In Table 5 , we list the main uses of video during simulation sessions found in the literature.

Research in SBSA can focus, as in formative assessment, on the optimization of simulation processes (programs, structures, human resources). Research can also explore the development and validation of summative assessment tools, the automation and assistance of assessment resources, and the pedagogical and clinical consequences of SBSA.

Trainers for simulation-based summative assessment

Trainers for SBSA probably need specific skills because of the high number of potential errors or biases in SBSA, despite the quest for objectivity (Table 6 ) [ 106 ]. The difficulty in ensuring objectivity is likely the reason why the use of self or peer assessment in the context of SBSA is not well documented and the literature does not yet support it [ 59 , 60 , 107 , 108 ].

SBSA requires the development of specific scenarios, staged in a reproducible way, and the mastery of assessment tools to avoid assessment bias [ 111 , 112 , 113 , 114 ]. Fulfilling those requirements calls for specific abilities to fit with the different roles of the trainer. These different roles of trainers would require specific initial and ongoing training tailored to their tasks [ 111 , 113 ]. In the future, concepts of the roles and tasks of these trainers should be integrated into any “training of trainers” in simulation.

Implementation of simulation-based summative assessment in healthcare

The use of SBSA was described by Harden in 1975 with the Objective Structured Clinical Examination (OSCE) for medical students [115]. The summative use of simulation has been introduced in different ways depending on the professional field and the country [116]. There is more literature on certification at the undergraduate and graduate levels than on recertification at the postgraduate level, and the use of SBSA in recertification is currently more limited [83, 117]. Participation is often mandated, and it does not provide a formal assessment of competency [83]. Some countries are defining processes for the maintenance of certification in which simulation is likely to play a role (e.g., in the USA [118] and France [116]). Recommendations regarding the development of SBSA for OSCEs were issued by the AMEE (Association for Medical Education in Europe) in 2013 [12, 119]. Combined with other recommendations that address the organization of examinations using other immersive simulation modalities, in particular full-scale sessions using complex mannequins [22, 85], they give us a solid foundation for the implementation of SBSA.

The overall process to ensure a high-quality examination by simulation is therefore defined but particularly demanding. It mobilizes many material and human resources (administrative staff, trainers, standardized patients, and healthcare professionals) and requires a long development time (several months to years depending on the stakes) [36]. We believe that the steps in implementing SBSA range from setting up a coordination team to supervising the writers, the raters, and the standardized patients, as well as anticipating the logistical and practical pitfalls.

The development of a competency framework valid for an entire curriculum (e.g., medical studies) satisfies a fundamental need [7, 120]. This development makes it possible to identify the competencies to be assessed with simulation, those to be assessed by other methods, and those requiring triangulation by several assessment methods. This identification then guides scenario writing and examination development with good content validity. Scenarios and examinations will form a bank of reproducible assessment exercises. The examination quality process, including psychometric analyses, is part of the development process from the beginning [85].

We have summarized in Table 7 the different steps in the implementation of SBSA.

Recertification

Recertification programs for various healthcare domains are currently being implemented or planned in many countries (e.g., in the USA [118] and France [116]). This is a continuation of the movement to promote the maintenance of competencies. Examples can be cited in France, with the creation of an agency for continuing professional development, and in the USA, with the Maintenance of Certification [83, 126]. The certification of healthcare facilities and even teams is also being studied [116]. Simulation is regularly integrated into these processes (e.g., in the USA [118] and France [116]). Although we found some common ground between the certification and recertification processes, there are many differences (Table 8).

Currently, when simulation-based training is mandatory (e.g., within the American Board of Anesthesiology's "Maintenance of Certification in Anesthesiology," or MOCA 2.0®, in the USA), it is most often a formative process [34, 83]. SBSA has a place in the recertification process, but there are many pitfalls to avoid. In the short term, we believe it will be easier to incorporate formative sessions as a first step. The current consensus seems to be that there should be no pass/fail recertification simulation without personalized, global professional support that is not limited to a binary aptitude/inaptitude approach [21, 116].

Discussion

Many important issues and questions remain regarding the field of SBSA. This discussion returns to the 7 topics we identified and highlights these points, their implications for the future, and some possible leads for future research and guideline development for the safe and effective use of SBSA.

SBSA is currently mainly used in initial training in uni-professional and individual settings via standardized patients or task trainers (OSCE) [12, 13]. In the future, SBSA will also be used in continuing education for professionals who will be assessed throughout their careers (recertification), as well as in interprofessional settings [83]. When certifying competencies, it is important to keep in mind the differences between the desired competencies and the observed performances [128]. Indeed, what a competency is must be specifically defined [6, 19, 21]. Competencies are what we wish to evaluate during the summative assessment to validate or revalidate a professional for his/her practice. Performance is what can be observed during an assessment [20, 21]. In this context, we consider three unresolved issues. The first issue is that an assessment only gives access to a performance at a given moment ("performance is a snapshot of a competency"), whereas one would like to assess a competency more generally [128]. The second issue is: how does an observed performance, especially in simulation, reveal a real competency in real life [129]? In other words, does the success or failure of a single SBSA really reflect actual real-life competency [130]? The third issue is the assessment of team performance/competency [131, 132, 133]. Until now, SBSA has come from the academic field and has been an individual assessment (e.g., OSCE). Future SBSA could involve teams, driven by governing bodies, institutions, or insurers [134, 135]. The competency of a team is not the sum of the competencies of the individuals who compose it. How can we proceed to assess teams as a specific entity, both composed of individuals and independent of them? To make progress on these three issues, we believe it is probably necessary to consider the approximation between observed performance and assessed competency as acceptable, but only by specifying its scope of validity. Research in these areas is needed to define this scope and answer these questions.

Our treatment of the consequences of undergoing SBSA has focused on the psychological aspects and has set aside more usual consequences such as achieving (or not) the minimum passing score. Future research should embrace the more global field of SBSA consequences, including how reliable SBSA is at determining whether someone is competent.

Rigor and method in the development and selection of assessment tools are paramount to the quality of the summative assessment [136]. The literature shows that it is necessary for assessment tools to be specific to their intended use, for their intrinsic characteristics to be described, and for them to be validated [38, 40, 41, 137]. These specific characteristics must be respected to avoid two common issues [1, 6]. The first issue is that of a poorly designed or constructed assessment tool. Such a tool can only give poor assessments because it will be unable to capture performance correctly and therefore to approach the skill to be assessed in a satisfactory way [56]. The second issue is related to poor or incomplete tool evaluation or inadequate tool selection. If the tool is poorly evaluated, its quality is unknown [56], and the scope of the assessment made with it is limited by the imprecision of the tool's quality. If the tool is poorly selected, it will not accurately capture the performance being assessed; again, the summative assessment will be compromised. It is currently difficult to find tools that meet all the required quality and validation criteria [56]. On the one hand, this requires complex and rigorous work; on the other hand, the potential number of tools required is large. Thus, the overall volume of work needed to rigorously produce assessment tools is substantial. However, the literature provides the characteristics of validity (content, response process, internal structure, comparison with other variables, and consequences) and the process for developing high-quality and reliable assessment tools [38, 39, 40, 41, 45]. It therefore seems important to systematize the use of these guidelines for the selection, development, and validation of assessment tools [137]. Work in this area is needed, and network collaboration could be a solution to move forward more quickly toward a bank of valid and validated assessment tools [39].

Our discussion of the consequences of SBSA has focused on aspects other than the determination of competencies and passing rates. Establishing and maintaining psychological safety is mandatory in simulation [138]. Considering the psychological and physiological consequences of SBSA is fundamental to controlling and limiting negative impacts. Summative assessment has consequences for both the participants and the trainers [139]. These consequences are often ignored or underestimated, yet they can have an impact on the conduct or results of the summative assessment. The consequences can be positive or negative. The "testing effect" can have a positive impact on long-term memory [139]. On the other hand, negative psychological (e.g., stress or post-traumatic stress disorder) and physiological (e.g., sleep) consequences can occur or worsen a fragile state [139, 140]. These negative consequences can lead to questioning the tools used and the assessments made, and they must therefore be considered when designing and conducting the SBSA. We believe that strategies to mitigate their impact must be put in place. The trainers and the participants must be aware of these difficulties to better anticipate them. This creates a real duality for the trainer, who has to carry out the assessment in order to determine a mark and at the same time guarantee the psychological safety of the participants. It seems fundamental to us that trainers master all aspects of SBSA as well as the concept of the safe container [138] to maximize the chances of a good experience for all. We believe that ensuring a fluid pedagogical continuum, from training to (re)certification in both initial and continuing education, using modern pedagogical techniques (e.g., mastery learning, rapid cycle deliberate practice) [141, 142, 143, 144], could help maximize the psychological and physiological safety of participants.

The structure and use of scenarios in a summative setting are unique and therefore require specific development and skills [83, 88]. SBSA scenarios differ from formative assessment scenarios in the educational objectives that guide their development: summative scenarios are designed to assess a skill through observation of performance, while formative scenarios are designed for learning and progressing in mastering that same skill. Although there may be a continuum between the two, they remain distinct. SBSA scenarios must be predictable, programmable, standardizable, and reproducible [25] to ensure fairly assessed performances among participants. This is even more crucial when standardized patients are involved (e.g., OSCE) [119, 145]; in this case, a specific script with expectations and training is needed for the standardized patient. The problem is that there are currently many formative scenarios but few summative scenarios. The rigor and expertise required to develop them are time-consuming and require expert trainer resources. We believe that a goal should be to homogenize the scenarios, along with preparing the human resources who will implement them (trainers and standardized patients) and their application. We believe one solution would be to develop a methodology for converting formative scenarios into summative ones in order to create a structuring model for summative scenarios. This would reinvest the time and expertise already spent on developing formative scenarios.

Debriefing for simulation-based summative assessment

The place of debriefing in SBSA is currently undefined and raises important questions that need exploration [77, 90, 146, 147, 148]. Debriefing for formative assessment promotes knowledge retention and helps to anchor good behaviors while correcting less ideal ones [149, 150, 151]. In general, taking an exam promotes learning of the topic [139, 152]. Formative assessment without a debriefing has been shown to be detrimental, so it is reasonable to assume that the same is true in summative assessment [91]. The ideal modalities for debriefing in SBSA are currently unknown [77, 90, 146, 147, 148]. Integrating debriefing into SBSA raises a number of organizational, pedagogical, cognitive, and ethical issues that need to be clarified. From an organizational perspective, we consider that debriefing is time- and human-resource-consuming. The extent of the organizational impact varies according to whether the feedback is automated, standardized, or personalized, and whether it is collective or individual. From an educational perspective, debriefing ensures pedagogical continuity and continued learning. We believe this notion is nuanced, depending on whether the debriefing is integrated into the summative assessment or instead follows the assessment while focusing on formative assessment elements. We believe that if the debriefing is part of the SBSA, it is no longer a "teaching moment." This must be factored into the instructional strategy. How should the trainer prioritize debriefing points between those established in advance for the summative assessment and those that emerge from an individual's performance? From a cognitive perspective, whether the debriefing is integrated into the summative assessment may alter the interactions between the trainer and the participants. We believe that if the debriefing is integrated into the SBSA, the participant will sometimes face the cognitive dilemma of whether to express his/her "true" opinions or instead attempt to provide the expected answers. The trainer then becomes uncertain of what he/she is actually assessing. Finally, from an ethical perspective, in the case of a mediocre or substandard clinical performance, there is a question of how the trainer resolves discrepancies between observed behavior and what the participant reveals during the debriefing. What weight should be given to the simulation and to the debriefing in the final rating? We believe there is probably no single solution to how and when the debriefing should be conducted during a summative assessment; rather, we promote the idea of adapting debriefing approaches (e.g., group or individualized debriefing) to various conditions (e.g., success or failure in the summative assessment). These questions need to be explored to provide answers as to how debriefing should ideally be conducted in SBSA. We believe a balance must be found that is ethically and pedagogically satisfactory, does not induce a cognitive dilemma for the trainer, and is practically manageable.

The skills and training of trainers required for SBSA are crucial and must be defined [136, 153]. We consider that the skills and training needed for SBSA closely mirror those needed for formative assessment in simulation; this continuity is part of the pedagogical alignment. The two share common characteristics (e.g., rules in simulation, scenario flow) and have specific ones (e.g., using assessment tools, validating competence). To ensure pedagogical continuity, the trainers who supervise these courses must be trained in and have mastered simulation, in line with pedagogical theory. We believe SBSA requires new skills of trainers and imposes a potentially greater cognitive load on them, and solutions are needed for both issues. For the new skills, we consider it necessary to adapt or complete the training of trainers by integrating the knowledge and skills needed to conduct SBSA properly: good assessment practices, assessment bias mitigation, rater calibration, mastery of assessment tools, etc. [154]. To manage the cognitive load induced by the tasks and challenges of SBSA, we suggest dividing the tasks among different trainer roles. We believe that conducting an SBSA therefore requires three types of trainers whose training is adapted to their specific role. First, there are the trainer-designers, who are responsible for designing the assessment situation, selecting the assessment tool(s), training the trainer-rater(s), and supervising the SBSA sessions. Second, there are the trainer-operators, who are responsible for running the simulation conditions that support the assessment. Third, there are the trainer-raters, who conduct the assessment using the assessment tool(s) selected by the trainer-designer(s), for which they have been specifically trained. The high-stakes nature of SBSA requires a high level of rigor and professionalism from all three types of trainers, which implies a working definition of the required skills and the training necessary to be up to the task.
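As a simple way to visualize this division of labor (an illustrative sketch only; the role names follow the text, but the code structure and task wording are ours), the three trainer roles and their tasks can be listed so that planning for a session can verify that every task has an owner:

```python
from enum import Enum


class TrainerRole(Enum):
    DESIGNER = "trainer-designer"   # designs the assessment, selects tools, trains raters
    OPERATOR = "trainer-operator"   # runs the simulation conditions during the session
    RATER = "trainer-rater"         # applies the selected assessment tool(s)


# Illustrative mapping of each role to its tasks, paraphrased from the text above.
RESPONSIBILITIES = {
    TrainerRole.DESIGNER: [
        "design the assessment situation",
        "select the assessment tool(s)",
        "train the trainer-rater(s)",
        "supervise the SBSA sessions",
    ],
    TrainerRole.OPERATOR: [
        "run the simulation conditions that support the assessment",
    ],
    TrainerRole.RATER: [
        "conduct the assessment with the selected tool(s)",
    ],
}


def uncovered_tasks(assigned_roles: set) -> list:
    """Return the tasks left without an owner for a planned SBSA session."""
    missing = []
    for role, tasks in RESPONSIBILITIES.items():
        if role not in assigned_roles:
            missing.extend(tasks)
    return missing


# Example: a session staffed without a dedicated rater.
print(uncovered_tasks({TrainerRole.DESIGNER, TrainerRole.OPERATOR}))
# -> ['conduct the assessment with the selected tool(s)']
```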

Implementing simulation-based summative assessment in healthcare

Implementing SBSA is delicate: it requires rigor, respect for each step, and an evidence-based approach. While OSCEs are simulation-based, simulation is not limited to OSCEs. Summative assessment with OSCEs has been used and studied for many years [12, 13], and this literature is therefore a valuable source of lessons for summative assessment applied to simulation as a whole [22, 85, 155]. Knowledge from OSCE summative assessment needs to be supplemented so that simulation can support summative assessment according to good evidence-based practices. Given the high stakes of SBSA, we believe it is necessary to rigorously and methodically adapt what has already been validated (e.g., scenarios, tools) during implementation and to proceed with caution for what has not yet been validated. As described above, many steps and prerequisites are necessary for optimal implementation, including (but not limited to) identifying objectives; identifying and validating assessment tools; preparing simulation scenarios, trainers, and raters; and planning a global strategy that runs from integrating the summative assessment into the curriculum to managing its consequences. SBSA must be conducted within a strict framework, for its own sake and for that of the people involved; poor implementation would be detrimental to participants, trainers, and the practice of SBSA. This risk is greater for recertification than for certification [156]. While initial training can accommodate SBSA relatively easily because it is familiar (e.g., trainees engage in OSCEs at some point in their education), including SBSA in the recertification of practicing professionals is less straightforward and may be context-dependent [157]. We understand that the consequences of failed recertification are potentially more impactful, both psychologically and for professional practice. We believe solutions must be developed, tested, and validated that both fill these gaps and protect professionals and patients. Implementation of SBSA must therefore be progressive, rigorous, and evidence-based to be accepted and successful [158].
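Purely as an illustration of the progressive, stepwise implementation described here (the step names are our paraphrase of the text, not a validated protocol), the prerequisites can be treated as an ordered checklist in which steps that are not yet evidence-validated are explicitly flagged for caution:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ImplementationStep:
    name: str
    evidence_based: bool  # True if already supported by validated practice (e.g., OSCE literature)
    done: bool = False


# Ordered prerequisites paraphrased from the discussion above; illustrative only.
DEFAULT_PLAN: List[ImplementationStep] = [
    ImplementationStep("identify assessment objectives", evidence_based=True),
    ImplementationStep("identify and validate assessment tools", evidence_based=True),
    ImplementationStep("prepare scenarios, trainers, and raters", evidence_based=False),
    ImplementationStep("integrate the assessment into the curriculum", evidence_based=False),
    ImplementationStep("plan how the consequences of the assessment will be managed",
                       evidence_based=False),
]


def next_step(plan: List[ImplementationStep]) -> Optional[ImplementationStep]:
    """Return the first unfinished step, flagging steps that still lack validation."""
    for step in plan:
        if not step.done:
            if not step.evidence_based:
                print(f"Caution: '{step.name}' is not yet evidence-validated; proceed carefully.")
            return step
    return None
```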

Limitations

This work has some limitations that should be emphasized. First, it covers only a limited number of issues related to SBSA; the topic is not covered exhaustively, and other questions of interest may remain unexplored. Nevertheless, the NGT methodology allowed this work to focus on the issues that were most relevant and challenging to the panel. Second, the literature review method (state-of-the-art reviews expanded with a snowball technique) does not guarantee exhaustiveness, and publications on the topic may have escaped the screening phase. However, it is likely that we identified the key articles focused on the questions explored; potentially unidentified articles would therefore either not be central to the topic or would address questions not selected by the NGT. Third, this work was done by a French-speaking group, and a Francophone-specific approach to simulation, although not described to our knowledge, cannot be ruled out. This risk is reduced by the fact that the work is based on international literature from different specialties and countries and that the panelists and reviewers were from different countries. Fourth, the analysis and discussion of the consequences of SBSA focused on psychological consequences and does not cover the full range of consequences, including the impact on subsequent curricula or career pathways; data on this subject exist in the literature and probably deserve a dedicated body of work. Despite these limitations, we believe this work is valuable because it raises questions and offers some leads toward solutions.

Conclusions

The use of SBSA is very promising, and demand for its application is growing. Indeed, SBSA is a logical extension of simulation-based formative assessment and of the development of competency-based medical education. It is probably wise to anticipate and plan approaches to SBSA, as many important moving parts, questions, and consequences are emerging; clearly identifying these elements and their interactions will aid in developing reliable, accurate, and reproducible models. All of this requires a meticulous and rigorous approach to preparation, commensurate with the challenges of certifying or recertifying the skills of healthcare professionals. We have explored the current knowledge on SBSA and now share an initial mapping of the topic. Among the seven topics investigated, we identified (i) areas with robust evidence (what can be assessed with simulation); (ii) areas with limited evidence that can be assisted by expert opinion and research (assessment tools, scenarios, and implementation); and (iii) areas with weak or emerging evidence requiring guidance by expert opinion and research (consequences, debriefing, and trainers) (Fig. 1). We modestly hope that this work can inform reflection on SBSA, guide future investigations, and drive guideline development for SBSA.

Availability of data and materials

All data generated or analyzed during this study are included in this published article.

Abbreviations

GDPR: General data protection regulation

NGT: Nominal group technique

OSCE: Objective structured clinical examination

SBSA: Simulation-based summative assessment

van der Vleuten CPM, Schuwirth LWT. Assessment in the context of problem-based learning. Adv Health Sci Educ Theory Pract. 2019;24:903–14.

Boulet JR. Summative assessment in medicine: the promise of simulation for high-stakes evaluation. Acad Emerg Med. 2008;15:1017–24.

Green M, Tariq R, Green P. Improving patient safety through simulation training in anesthesiology: where are we? Anesthesiol Res Pract. 2016;2016:4237523.

Krage R, Erwteman M. State-of-the-art usage of simulation in anesthesia: skills and teamwork. Curr Opin Anaesthesiol. 2015;28:727–34.

Askew K, Manthey DE, Potisek NM, Hu Y, Goforth J, McDonough K, et al. Practical application of assessment principles in the development of an innovative clinical performance evaluation in the entrustable professional activity era. Med Sci Educ. 2020;30:499–504.

Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet. 2001;357:945–9.

Boulet JR, Murray D. Review article: assessment in anesthesiology education. Can J Anaesth. 2012;59:182–92.

Bauer D, Lahner F-M, Schmitz FM, Guttormsen S, Huwendiek S. An overview of and approach to selecting appropriate patient representations in teaching and summative assessment in medical education. Swiss Med Wkly. 2020;150: w20382.

Park CS. Simulation and quality improvement in anesthesiology. Anesthesiol Clin. 2011;29:13–28.

Higham H, Baxendale B. To err is human: use of simulation to enhance training and patient safety in anaesthesia. British Journal of Anaesthesia [Internet]. 2017 [cited 2021 Sep 16];119:i106–14. Available from: https://www.sciencedirect.com/science/article/pii/S0007091217541215 .

Mann S, Truelove AH, Beesley T, Howden S, Egan R. Resident perceptions of competency-based medical education. Can Med Educ J. 2020;11:e31-43.

Khan KZ, Ramachandran S, Gaunt K, Pushkar P. The objective structured clinical examination (OSCE): AMEE Guide No. 81. Part I: an historical and theoretical perspective. Med Teach. 2013;35(9):e1437-1446.

Daniels VJ, Pugh D. Twelve tips for developing an OSCE that measures what you want. Med Teach. 2018;40:1208–13.

Humphrey-Murto S, Varpio L, Gonsalves C, Wood TJ. Using consensus group methods such as Delphi and Nominal Group in medical education research. Med Teach. 2017;39:14–9.

Haute Autorité de Santé. Recommandations par consensus formalisé (RCF) [Internet]. Haute Autorité de Santé. 2011 [cited 2020 Oct 29]. Available from: https://www.has-sante.fr/jcms/c_272505/fr/recommandations-par-consensus-formalise-rcf .

Humphrey-Murto S, Varpio L, Wood TJ, Gonsalves C, Ufholz L-A, Mascioli K, et al. The use of the delphi and other consensus group methods in medical education research: a review. Academic Medicine [Internet]. 2017 [cited 2021 Jul 20];92:1491–8. Available from: https://journals.lww.com/academicmedicine/Fulltext/2017/10000/The_Use_of_the_Delphi_and_Other_Consensus_Group.38.aspx .

Booth A, Sutton A, Papaioannou D. Systematic approaches to a successful literature review [Internet]. Second edition. Los Angeles: Sage; 2016. Available from: https://uk.sagepub.com/sites/default/files/upm-assets/78595_book_item_78595.pdf .

Morgan DL. Snowball Sampling. In: Given LM, editor. The Sage encyclopedia of qualitative research methods [Internet]. Los Angeles, Calif: Sage Publications; 2008. p. 815–6. Available from: http://www.yanchukvladimir.com/docs/Library/Sage%20Encyclopedia%20of%20Qualitative%20Research%20Methods-%202008.pdf .

ten Cate O, Scheele F. Competency-based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–7.

Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65:S63-67.

Epstein RM. Assessment in medical education. N Engl J Med. 2007;356:387–96.

Boulet JR, Murray DJ. Simulation-based assessment in anesthesiology: requirements for practical implementation. Anesthesiology. 2010;112:1041–52.

Bédard D, Béchard JP. L’innovation pédagogique dans le supérieur : un vaste chantier. Innover dans l’enseignement supérieur. Paris: Presses Universitaires de France; 2009. p. 29–43.

Biggs J. Enhancing teaching through constructive alignment. High Educ [Internet]. 1996 [cited 2020 Oct 25];32:347–64. Available from: https://doi.org/10.1007/BF00138871 .

Wong AK. Full scale computer simulators in anesthesia training and evaluation. Can J Anaesth. 2004;51:455–64.

Messick S. Evidence and ethics in the evaluation of tests. Educational Researcher. 1981;10:9–20. Available from: https://doi.org/10.3102/0013189X010009009.

Bould MD, Crabtree NA, Naik VN. Assessment of procedural skills in anaesthesia. Br J Anaesth. 2009;103:472–83.

Schuwirth LWT, van der Vleuten CPM. Programmatic assessment and Kane’s validity perspective. Med Educ. 2012;46:38–48.

Brailovsky C, Charlin B, Beausoleil S, Coté S, Van der Vleuten C. Measurement of clinical reflective capacity early in training as a predictor of clinical reasoning performance at the end of residency: an experimental study on the script concordance test. Med Educ. 2001;35:430–6.

van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ. 2005;39:309–17.

Gordon M, Farnan J, Grafton-Clarke C, Ahmed R, Gurbutt D, McLachlan J, et al. Non-technical skills assessments in undergraduate medical education: a focused BEME systematic review: BEME Guide No. 54. Med Teach. 2019;41(7):732–45.

Jouquan J. L’évaluation des apprentissages des étudiants en formation médicale initiale. Pédagogie Médicale. 2002;3:38–52. Available from: https://doi.org/10.1051/pmed:2002006.

Gale TCE, Roberts MJ, Sice PJ, Langton JA, Patterson FC, Carr AS, et al. Predictive validity of a selection centre testing non-technical skills for recruitment to training in anaesthesia. Br J Anaesth. 2010;105:603–9.

Gallagher CJ, Tan JM. The current status of simulation in the maintenance of certification in anesthesia. Int Anesthesiol Clin. 2010;48:83–99.

DeMaria S Jr, Samuelson ST, Schwartz AD, Sim AJ, Levine AI. Simulation-based assessment and retraining for the anesthesiologist seeking reentry to clinical practice: a case series. Anesthesiology. 2013;119:206–17. Available from: https://doi.org/10.1097/ALN.0b013e31829761c8.

Amin Z, Boulet JR, Cook DA, Ellaway R, Fahal A, Kneebone R, et al. Technology-enabled assessment of health professions education: consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher. 2011;33:364–9. Available from: https://doi.org/10.3109/0142159X.2011.565832.

Scallon G. L’évaluation des apprentissages dans une approche par compétences. Bruxelles: De Boeck Université-Bruxelles; 2007.

Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ. 2003;37:830–7.

Cook DA, Hatala R. Validation of educational assessments: a primer for simulation and beyond. Adv Simul. 2016;1:31. Available from: https://doi.org/10.1186/s41077-016-0033-y.

Kane MT. Validating the interpretations and uses of test scores. Journal of Educational Measurement. 2013;50:1–73. Available from: https://doi.org/10.1111/jedm.12000.

Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane’s framework. Med Educ. 2015;49:560–75.

Cook DA, Zendejas B, Hamstra SJ, Hatala R, Brydges R. What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. Adv Health Sci Educ. 2014;19:233–50. Available from: https://doi.org/10.1007/s10459-013-9458-4.

Cook DA, Lineberry M. Consequences validity evidence: evaluating the impact of educational assessments. Acad Med [Internet]. 2016 [cited 2020 Oct 24];91:785–95. Available from: http://journals.lww.com/00001888-201606000-00018 .

Tavakol M, Dennick R. Post-examination analysis of objective tests. Med Teach. 2011;33:447–58.

Messick S. The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher. 1994;23:13–23. Available from: https://doi.org/10.3102/0013189X023002013.

Messick S. Validity. In: Linn RL, editor. Educational measurement. 3rd ed. 1989. p. 13–103.

Oriot D, Darrieux E, Boureau-Voultoury A, Ragot S, Scépi M. Validation of a performance assessment scale for simulated intraosseous access. Simul Healthc. 2012;7:171–5.

Guise J-M, Deering SH, Kanki BG, Osterweil P, Li H, Mori M, et al. Validation of a tool to measure and promote clinical teamwork. Simul Healthc. 2008;3:217–23.

Sousa VD, Rojjanasrirat W. Translation, adaptation and validation of instruments or scales for use in cross-cultural health care research: a clear and user-friendly guideline. Journal of Evaluation in Clinical Practice. 2011;17:268–74. Available from: https://doi.org/10.1111/j.1365-2753.2010.01434.x.

Stoyanova-Piroth G, Milanov I, Stambolieva K. Translation, adaptation and validation of the Bulgarian version of the King’s Parkinson’s Disease Pain Scale. BMC Neurol. 2021;21:357. Available from: https://doi.org/10.1186/s12883-021-02392-5.

Behari M, Srivastava A, Achtani R, Nandal N, Dutta R. Pain assessment in Indian Parkinson’s disease patients using King’s Parkinson’s disease pain scale. Ann Indian Acad Neurol. 2020. Available from: http://www.annalsofian.org/preprintarticle.asp?id=300170;type=0.

Guillemin F, Bombardier C, Beaton D. Cross-cultural adaptation of health-related quality of life measures: literature review and proposed guidelines. Journal of Clinical Epidemiology [Internet]. 1993 [cited 2022 Jul 22];46:1417–32. Available from: https://linkinghub.elsevier.com/retrieve/pii/089543569390142N .

Franc JM, Verde M, Gallardo AR, Carenzo L, Ingrassia PL. An Italian version of the Ottawa crisis resource management global rating scale: a reliable and valid tool for assessment of simulation performance. Intern Emerg Med. 2017;12:651–6.

Gosselin É, Marceau M, Vincelette C, Daneau C-O, Lavoie S, Ledoux I. French translation and validation of the Mayo High Performance Teamwork Scale for nursing students in a high-fidelity simulation context. Clinical Simulation in Nursing [Internet]. 2019 [cited 2022 Jul 25];30:25–33. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1876139918301890 .

Sánchez-Marco M, Escribano S, Cabañero-Martínez M-J, Espinosa-Ramírez S, José Muñoz-Reig M, Juliá-Sanchis R. Cross-cultural adaptation and validation of two crisis resource management scales. International Emergency Nursing [Internet]. 2021 [cited 2022 Jul 25];57:101016. Available from: https://www.sciencedirect.com/science/article/pii/S1755599X21000549 .

Schuwirth LWT, Van der Vleuten CPM. Programmatic assessment: from assessment of learning to assessment for learning. Medical Teacher. 2011;33:478–85. Available from: https://doi.org/10.3109/0142159X.2011.565828.

Maignan M, Koch F-X, Chaix J, Phellouzat P, Binauld G, Collomb Muret R, et al. Team Emergency Assessment Measure (TEAM) for the assessment of non-technical skills during resuscitation: validation of the French version. Resuscitation [Internet]. 2016 [cited 2019 Mar 12];101:115–20. Available from: http://www.sciencedirect.com/science/article/pii/S0300957215008989 .

Pires S, Monteiro S, Pereira A, Chaló D, Melo E, Rodrigues A. Non-technical skills assessment for prelicensure nursing students: an integrative review. Nurse Educ Today. 2017;58:19–24.

Khan R, Payne MWC, Chahine S. Peer assessment in the objective structured clinical examination: a scoping review. Med Teach. 2017;39:745–56.

Hegg RM, Ivan KF, Tone J, Morten A. Comparison of peer assessment and faculty assessment in an interprofessional simulation-based team training program. Nurse Educ Pract. 2019;42: 102666.

Scavone BM, Sproviero MT, McCarthy RJ, Wong CA, Sullivan JT, Siddall VJ, et al. Development of an objective scoring system for measurement of resident performance on the human patient simulator. Anesthesiology. 2006;105:260–6.

Oriot D, Bridier A, Ghazali DA. Development and assessment of an evaluation tool for team clinical performance: the Team Average Performance Assessment Scale (TAPAS). Health Care : Current Reviews [Internet]. 2016 [cited 2018 Jan 17];4:1–7. Available from: https://www.omicsonline.org/open-access/development-and-assessment-of-an-evaluation-tool-for-team-clinicalperformance-the-team-average-performance-assessment-scale-tapas-2375-4273-1000164.php?aid=72394 .

Flin R, Patey R, Glavin R, Maran N. Anaesthetists’ non-technical skills. Br J Anaesth. 2010;105:38–44.

Mishra A, Catchpole K, McCulloch P. The Oxford NOTECHS System: reliability and validity of a tool for measuring teamwork behaviour in the operating theatre. Quality and Safety in Health Care. 2009;18:104–8. Available from: https://doi.org/10.1136/qshc.2007.024760.

Cooper S, Cant R, Porter J, Sellick K, Somers G, Kinsman L, et al. Rating medical emergency teamwork performance: development of the Team Emergency Assessment Measure (TEAM). Resuscitation. 2010;81:446–52.

Adler MD, Trainor JL, Siddall VJ, McGaghie WC. Development and evaluation of high-fidelity simulation case scenarios for pediatric resident education. Ambul Pediatr. 2007;7:182–6.

Brydges R, Hatala R, Zendejas B, Erwin PJ, Cook DA. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis. Acad Med. 2015;90:246–56.

Cazzell M, Howe C. Using Objective Structured Clinical Evaluation for Simulation Evaluation: Checklist Considerations for Interrater Reliability. Clinical Simulation In Nursing [Internet]. 2012;8(6):e219–25. [cited 2019 Dec 14] Available from: https://www.nursingsimulation.org/article/S1876-1399(11)00249-0/abstract .

Maignan M, Viglino D, Collomb Muret R, Vejux N, Wiel E, Jacquin L, et al. Intensity of care delivered by prehospital emergency medical service physicians to patients with deliberate self-poisoning: results from a 2-day cross-sectional study in France. Intern Emerg Med. 2019;14:981–8.

Alcaraz-Mateos E, Jiang X “Sara”, Mohammed AAR, Turic I, Hernández-Sabater L, Caballero-Alemán F, et al. A novel simulator model and standardized assessment tools for fine needle aspiration cytology training. Diagn Cytopathol. 2019;47:297–301. Available from: https://doi.org/10.1002/dc.24105.

Ghaderi I, Vaillancourt M, Sroka G, Kaneva PA, Vassiliou MC, Choy I, et al. Evaluation of surgical performance during laparoscopic incisional hernia repair: a multicenter study. Surg Endosc. 2011;25:2555–63. Available from: https://doi.org/10.1007/s00464-011-1586-4.

IJgosse WM, Leijte E, Ganni S, Luursema J-M, Francis NK, Jakimowicz JJ, et al. Competency assessment tool for laparoscopic suturing: development and reliability evaluation. Surg Endosc. 2020;34(7):2947–53.

Pelaccia T, Tardif J. In: Comment [mieux] former et évaluer les étudiants en médecine et en sciences de la santé? 1ère. Louvain-la-Neuve: De Boeck supérieur; 2016. p. 343–56. (Guides pratiques).

Henricksen JW, Altenburg C, Reeder RW. Operationalizing healthcare simulation psychological safety: a descriptive analysis of an intervention. Simul Healthc. 2017;12:289–97.

Gaba DM. Simulations that are challenging to the psyche of participants: how much should we worry and about what? Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare [Internet]. 2013 [cited 2020 Mar 17];8:4–7. Available from: http://journals.lww.com/01266021-201302000-00002 .

Ghazali DA, Breque C, Sosner P, Lesbordes M, Chavagnat J-J, Ragot S, et al. Stress response in the daily lives of simulation repeaters. A randomized controlled trial assessing stress evolution over one year of repetitive immersive simulations. PLoS One. 2019;14(7):e0220111.

Rudolph JW, Simon R, Raemer DB, Eppich WJ. Debriefing as formative assessment: closing performance gaps in medical education. Acad Emerg Med. 2008;15:1010–6.

Kang SJ, Min HY. Psychological safety in nursing simulation. Nurse Educ. 2019;44:E6-9.

Howard SK, Gaba DM, Smith BE, Weinger MB, Herndon C, Keshavacharya S, et al. Simulation study of rested versus sleep-deprived anesthesiologists. Anesthesiology. 2003;98(6):1345–55.

Neuschwander A, Job A, Younes A, Mignon A, Delgoulet C, Cabon P, et al. Impact of sleep deprivation on anaesthesia residents’ non-technical skills: a pilot simulation-based prospective randomized trial. Br J Anaesth. 2017;119:125–31.

Eastridge BJ, Hamilton EC, O’Keefe GE, Rege RV, Valentine RJ, Jones DJ, et al. Effect of sleep deprivation on the performance of simulated laparoscopic surgical skill. Am J Surg. 2003;186:169–74.

Boulet JR, Murray D, Kras J, Woodhouse J, McAllister J, Ziv A. Reliability and validity of a simulation-based acute care skills assessment for medical students and residents. Anesthesiology. 2003;99:1270–80.

Levine AI, Flynn BC, Bryson EO, Demaria S. Simulation-based Maintenance of Certification in Anesthesiology (MOCA) course optimization: use of multi-modality educational activities. J Clin Anesth. 2012;24:68–74.

Boulet JR, Murray D, Kras J, Woodhouse J. Setting performance standards for mannequin-based acute-care scenarios: an examinee-centered approach. Simul Healthc. 2008;3:72–81.

Furman GE, Smee S, Wilson C. Quality assurance best practices for simulation-based examinations. Simul Healthc. 2010;5:226–31.

Kane MT. The assessment of professional competence. Eval Health Prof. 1992;15:163–82. Available from: https://doi.org/10.1177/016327879201500203.

Blum RH, Boulet JR, Cooper JB, Muret-Wagstaff SL; Harvard Assessment of Anesthesia Resident Performance Research Group. Simulation-based assessment to identify critical gaps in safe anesthesia resident performance. Anesthesiology. 2014;120(1):129–41.

Rizzolo MA, Kardong-Edgren S, Oermann MH, Jeffries PR. The National League for Nursing project to explore the use of simulation for high-stakes assessment: process, outcomes, and recommendations. Nurs Educ Perspect. 2015;36:299–303. Available from: http://Insights.ovid.com/crossref?an=00024776-201509000-00006.

Mudumbai SC, Gaba DM, Boulet JR, Howard SK, Davies MF. External validation of simulation-based assessments with other performance measures of third-year anesthesiology residents. Simul Healthc. 2012;7:73–80.

Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007;2:115–25.

Savoldelli GL, Naik VN, Park J, Joo HS, Chow R, Hamstra SJ. Value of debriefing during simulated crisis management: oral versus video-assisted oral feedback. Anesthesiology. 2006;105:279–85. Available from: https://pubs.asahq.org/anesthesiology/article/105/2/279/6669/Value-of-Debriefing-during-Simulated-Crisis.

Haute Autorité de Santé. Guide de bonnes pratiques en simulation en santé . 2012 [cited 2020 Feb 2]. Available from: https://www.has-sante.fr/upload/docs/application/pdf/2013-01/guide_bonnes_pratiques_simulation_sante_guide.pdf .

INACSL Standards Committee. INACSL Standards of best practice: simulation. Simulation design. Clinical Simulation In Nursing . 2016 [cited 2020 Feb 2];12:S5–12. Available from: https://www.nursingsimulation.org/article/S1876-1399(16)30126-8/abstract .

Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33:206–14.

Gantt LT. The effect of preparation on anxiety and performance in summative simulations. Clinical Simulation in Nursing. 2013 [cited 2020 Feb 2];9:e25–33. Available from: http://www.sciencedirect.com/science/article/pii/S1876139911001277 .

Frey-Vogel AS, Scott-Vernaglia SE, Carter LP, Huang GC. Simulation for milestone assessment: use of a longitudinal curriculum for pediatric residents. Simul Healthc. 2016;11:286–92.

Durning SJ, Artino A, Boulet J, La Rochelle J, Van der Vleuten C, Arze B, et al. The feasibility, reliability, and validity of a post-encounter form for evaluating clinical reasoning. Med Teach. 2012;34:30–7.

Stone J. Moving interprofessional learning forward through formal assessment. Medical Education. 2010;44:396–403. Available from: https://doi.org/10.1111/j.1365-2923.2009.03607.x.

Manser T, Dieckmann P, Wehner T, Rall M. Comparison of anaesthetists’ activity patterns in the operating room and during simulation. Ergonomics. 2007;50:246–60.

Perrenoud P. Évaluation formative et évaluation certificative : postures contradictoires ou complémentaires ? Formation Professionnelle suisse . 2001 [cited 2020 Oct 29];4:25–8. Available from: https://www.unige.ch/fapse/SSE/teachers/perrenoud/php_main/php_2001/2001_13.html .

Atesok K, Hurwitz S, Anderson DD, Satava R, Thomas GW, Tufescu T, et al. Advancing simulation-based orthopaedic surgical skills training: an analysis of the challenges to implementation. Adv Orthop. 2019;2019:1–7.

Chiu M, Tarshis J, Antoniou A, Bosma TL, Burjorjee JE, Cowie N, et al. Simulation-based assessment of anesthesiology residents’ competence: development and implementation of the Canadian National Anesthesiology Simulation Curriculum (CanNASC). Can J Anesth/J Can Anesth. 2016 [cited 2020 Feb 2];63:1357–63. Available from: https://doi.org/10.1007/s12630-016-0733-8 .

Everett TC, McKinnon RJ, Ng E, Kulkarni P, Borges BCR, Letal M, et al. Simulation-based assessment in anesthesia: an international multicentre validation study. Can J Anesth. 2019;66:1440–9. Available from: https://doi.org/10.1007/s12630-019-01488-4.

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance). May 4, 2016. Available from: http://data.europa.eu/eli/reg/2016/679/2016-05-04/eng .

Commission Nationale de l’Informatique et des Libertés. RGPD : passer à l’action. 2021 [cited 2021 Jul 8]. Available from: https://www.cnil.fr/fr/rgpd-passer-a-laction .

Ten Cate O, Regehr G. The power of subjectivity in the assessment of medical trainees. Acad Med. 2019;94:333–7.

Weller JM, Robinson BJ, Jolly B, Watterson LM, Joseph M, Bajenov S, et al. Psychometric characteristics of simulation-based assessment in anaesthesia and accuracy of self-assessed scores. Anaesthesia. 2005;60:245–50.

Wikander L, Bouchoucha SL. Facilitating peer based learning through summative assessment - an adaptation of the objective structured clinical assessment tool for the blended learning environment. Nurse Educ Pract. 2018;28:40–5.

Gaugler BB, Rudolph AS. The influence of assessee performance variation on assessors’ judgments. Pers Psychol. 1992;45:77–98.

Feldman M, Lazzara EH, Vanderbilt AA, DiazGranados D. Rater training to support high-stakes simulation-based assessments. J Contin Educ Health Prof. 2012 [cited 2019 Dec 14];32:279–86. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3646087/ .

Pelgrim EAM, Kramer AWM, Mokkink HGA, van den Elsen L, Grol RPTM, van der Vleuten CPM. In-training assessment using direct observation of single-patient encounters: a literature review. Adv Health Sci Educ Theory Pract. 2011;16(1):131–42.

Downing SM, Tekian A, Yudkowsky R. Procedures for establishing defensible absolute passing scores on performance examinations in health professions education. Teach Learn Med. 2006;18:50–7.

Berkenstadt H, Ziv A, Gafni N, Sidi A. Incorporating simulation-based objective structured clinical examination into the Israeli National Board Examination in Anesthesiology. Anesth Analg. 2006;102:853–8.

Hedge JW, Kavanagh MJ. Improving the accuracy of performance evaluations: comparison of three methods of performance appraiser training. J Appl Psychol. 1988;73:68–73.

Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J. 1975;1:447–51.

Uzan S. Mission de recertification des médecins - Exercer une médecine de qualité. Ministère des Solidarités et de la Santé - Ministère de l’Enseignement supérieur, de la Recherche et de l’Innovation; 2018 Nov. Available from: https://www.vie-publique.fr/rapport/37741-mission-de-recertification-des-medecins-exercer-une-medecine-de-qualit.

Mann KV, MacDonald AC, Norcini JJ. Reliability of objective structured clinical examinations: four years of experience in a surgical clerkship. Teaching and Learning in Medicine. 1990;2:219–24. Available from: https://doi.org/10.1080/10401339009539464.

Maintenance Of Certification in Anesthesiology (MOCA) 2.0. [cited 2021 Sep 18]. Available from: https://theaba.org/about%20moca%202.0.html .

Khan KZ, Gaunt K, Ramachandran S, Pushkar P. The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part II: Organisation & Administration. Med Teach. 2013;35:e1447–63. Available from: https://doi.org/10.3109/0142159X.2013.818635.

Coderre S, Woloschuk W, McLaughlin K. Twelve tips for blueprinting. Med Teach. 2009;31:322–4.

Murray DJ, Boulet JR. Anesthesiology board certification changes: a real-time example of “assessment drives learning.” Anesthesiology. 2018;128:704–6.

Roberts C, Newble D, Jolly B, Reed M, Hampton K. Assuring the quality of high-stakes undergraduate assessments of clinical competence. Med Teach. 2006;28:535–43.

Newble D. Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ. 2004;38:199–203.

Der Sahakian G, Lecomte F, Buléon C, Guevara F, Jaffrelot M, Alinier G. Référentiel sur l’élaboration de scénarios de simulation en immersion clinique.  Paris: Société Francophone de Simulation en Santé; 2017 p. 22. Available from: https://sofrasims.org/wp-content/uploads/2019/10/R%C3%A9f%C3%A9rentiel-Scenario-Simulation-Sofrasims.pdf .

Lewis KL, Bohnert CA, Gammon WL, Hölzer H, Lyman L, Smith C, et al. The Association of Standardized Patient Educators (ASPE) Standards of Best Practice (SOBP). Adv Simul. 2017;2:10.

Board of Directors of the American Board of Medical Specialties (ABMS). Standards for the ABMS Program for Maintenance of Certification (MOC). American Board of Medical Specialties; 2014 Jan p. 13. Available from: https://www.abms.org/media/1109/standards-for-the-abms-program-for-moc-final.pdf .

Hodges B, McNaughton N, Regehr G, Tiberius R, Hanson M. The challenge of creating new OSCE measures to capture the characteristics of expertise. Med Educ. 2002;36:742–8.

Hays RB, Davies HA, Beard JD, Caldon LJM, Farmer EA, Finucane PM, et al. Selecting performance assessment methods for experienced physicians. Med Educ. 2002;36:910–7.

Ram P, Grol R, Rethans JJ, Schouten B, van der Vleuten C, Kester A. Assessment of general practitioners by video observation of communicative and medical performance in daily practice: issues of validity, reliability and feasibility. Med Educ. 1999;33:447–54.

Weersink K, Hall AK, Rich J, Szulewski A, Dagnone JD. Simulation versus real-world performance: a direct comparison of emergency medicine resident resuscitation entrustment scoring. Adv Simul. 2019;4:9. Available from: https://doi.org/10.1186/s41077-019-0099-4.

Buljac-Samardzic M, Doekhie KD, van Wijngaarden JDH. Interventions to improve team effectiveness within health care: a systematic review of the past decade. Hum Resour Health. 2020;18:2.

Eddy K, Jordan Z, Stephenson M. Health professionals’ experience of teamwork education in acute hospital settings: a systematic review of qualitative literature. JBI Database System Rev Implement Rep. 2016;14:96–137.

Leblanc VR. Review article: simulation in anesthesia: state of the science and looking forward. Can J Anaesth. 2012;59:193–202.

Hanscom R. Medical simulation from an insurer’s perspective. Acad Emerg Med. 2008;15:984–7.

McCarthy J, Cooper JB. Malpractice insurance carrier provides premium incentive for simulation-based training and believes it has made a difference. Anesth Patient Saf Found Newsl. 2007 [cited 2021 Sep 17];17. Available from: https://www.apsf.org/article/malpractice-insurance-carrier-provides-premium-incentive-for-simulation-based-training-and-believes-it-has-made-a-difference/ .

Edler AA, Fanning RG, Chen MI, Claure R, Almazan D, Struyk B, et al. Patient simulation: a literary synthesis of assessment tools in anesthesiology. J Educ Eval Health Prof. 2009;6:3. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2796725/.

Borgersen NJ, Naur TMH, Sørensen SMD, Bjerrum F, Konge L, Subhi Y, et al. Gathering validity evidence for surgical simulation: a systematic review. Annals of Surgery. 2018 [cited 2022 Sep 25];267:1063–8. Available from: https://journals.lww.com/00000658-201806000-00014 .

Rudolph JW, Raemer DB, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc. 2014;9:339–49.

Cilliers FJ, Schuwirth LW, Adendorff HJ, Herman N, van der Vleuten CP. The mechanism of impact of summative assessment on medical students’ learning. Adv Health Sci Educ Theory Pract. 2010;15:695–715.

Hadi MA, Ali M, Haseeb A, Mohamed MMA, Elrggal ME, Cheema E. Impact of test anxiety on pharmacy students’ performance in Objective Structured Clinical Examination: a cross-sectional survey. Int J Pharm Pract. 2018;26:191–4.

Dunn W, Dong Y, Zendejas B, Ruparel R, Farley D. Simulation, mastery learning and healthcare. Am J Med Sci. 2017;353:158–65.

McGaghie WC. Mastery learning: it is time for medical education to join the 21st century. Acad Med. 2015;90:1438–41.

Ng C, Primiani N, Orchanian-Cheff A. Rapid cycle deliberate practice in healthcare simulation: a scoping review. Med Sci Educ. 2021;31:2105–20.

Taras J, Everett T. Rapid cycle deliberate practice in medical education - a systematic review. Cureus. 2017;9: e1180.

Cleland JA, Abe K, Rethans J-J. The use of simulated patients in medical education: AMEE Guide No 42. Med Teach. 2009;31:477–86.

Garden AL, Le Fevre DM, Waddington HL, Weller JM. Debriefing after simulation-based non-technical skill training in healthcare: a systematic review of effective practice. Anaesth Intensive Care. 2015;43:300–8.

Sawyer T, Eppich W, Brett-Fleegler M, Grant V, Cheng A. More than one way to debrief: a critical review of healthcare simulation debriefing methods. Simul Healthc. 2016;11:209–17.

Rudolph JW, Simon R, Dufresne RL, Raemer DB. There’s no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc. 2006;1:49–55.

Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34:e58-63.

Palaganas JC, Fey M, Simon R. Structured debriefing in simulation-based education. AACN Adv Crit Care. 2016;27:78–85.

Rudolph JW, Foldy EG, Robinson T, Kendall S, Taylor SS, Simon R. Helping without harming: the instructor’s feedback dilemma in debriefing–a case study. Simul Healthc. 2013;8:304–16.

Larsen DP, Butler AC, Roediger HL III. Test-enhanced learning in medical education. Medical Education. 2008;42:959–66. Available from: https://doi.org/10.1111/j.1365-2923.2008.03124.x.

Koster MA, Soffler M. Navigate the challenges of simulation for assessment: a faculty development workshop. MedEdPORTAL. 2021;17:11114.

Devitt JH, Kurrek MM, Cohen MM, Fish K, Fish P, Murphy PM, et al. Testing the raters: inter-rater reliability of standardized anaesthesia simulator performance. Can J Anaesth. 1997;44:924–8.

Kelly MA, Mitchell ML, Henderson A, Jeffrey CA, Groves M, Nulty DD, et al. OSCE best practice guidelines—applicability for nursing simulations. Adv Simul. 2016;1:10. Available from: https://doi.org/10.1186/s41077-016-0014-1.

Weinger MB, Banerjee A, Burden AR, McIvor WR, Boulet J, Cooper JB, et al. Simulation-based assessment of the management of critical events by board-certified anesthesiologists. Anesthesiology. 2017;127:475–89.

Sinz E, Banerjee A, Steadman R, Shotwell MS, Slagle J, McIvor WR, et al. Reliability of simulation-based assessment for practicing physicians: performance is context-specific. BMC Med Educ. 2021;21:207.

Ryall T, Judd BK, Gordon CJ. Simulation-based assessments in health professional education: a systematic review. J Multidiscip Healthc. 2016;9:69–82.

Acknowledgements

The authors thank the members of the SoFraSimS “Assessment with simulation” group who contributed to this work: Anne Bellot, Isabelle Crublé, Guillaume Philippot, Thierry Vanderlinden, Sébastien Batrancourt, Claire Boithias-Guerot, Jean Bréaud, Philine de Vries, Louis Sibert, Thierry Sécheresse, Virginie Boulant, Louis Delamarre, Laurent Grillet, Marianne Jund, Christophe Mathurin, Jacques Berthod, Blaise Debien, and Olivier Gacia. The authors also thank the external expert committee members Guillaume Der Sahakian, Sylvain Boet, Denis Oriot, and Jean-Michel Chabot, as well as the SoFraSimS executive committee, for their review and feedback.

This work has been supported by the French Speaking Society for Simulation in Healthcare (SoFraSimS).

This work is part of CB’s PhD, which has been supported by grants from the French Society for Anesthesiology and Intensive Care (SFAR), the Arthur Sachs-Harvard Foundation, the University Hospital of Caen, the North-West University Hospitals Group (G4), and the Charles Nicolle Foundation. The funding bodies had no role in the design of the study; the collection, analysis, and interpretation of the data; or the writing of the manuscript.

Author information

Authors and affiliations.

Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, 6th Floor, Caen, France

Clément Buléon & Erwan Guillouet

Medical School, University of Caen Normandy, Caen, France

Center for Medical Simulation, Boston, MA, USA

Clément Buléon, Rebecca D. Minehart & Jenny W. Rudolph

Department of Anesthesiology, Intensive Care and Perioperative Medicine, Nîmes University Hospital, Nîmes, France

Laurent Mattatia

Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA

Rebecca D. Minehart & Jenny W. Rudolph

Harvard Medical School, Boston, MA, USA

Department of Anesthesiology, Intensive Care and Perioperative Medicine, Liège University Hospital, Liège, Belgique

Fernande J. Lois

Department of Emergency Medicine, Pitié Salpêtrière University Hospital, APHP, Paris, France

Anne-Laure Philippon

Department of Pediatric Intensive Care, Pellegrin University Hospital, Bordeaux, France

Olivier Brissaud

Department of Emergency Medicine, Rouen University Hospital, Rouen, France

Antoine Lefevre-Scelles

Department of Anesthesiology, Intensive Care and Perioperative Medicine, Kremlin Bicêtre University Hospital, APHP, Paris, France

Dan Benhamou

Department of Emergency Medicine, Cochin University Hospital, APHP, Paris, France

François Lecomte

Contributions

CB helped with the study conception and design, data contribution, data analysis, data interpretation, writing, visualization, review, and editing. FL helped with the study conception and design, data contribution, data analysis, data interpretation, writing, review, and editing. RDM, JWR, and DB helped with the study writing, and review and editing. JWR and DB helped with the data interpretation, writing, and review and editing. LM, FJL, EG, ALP, OB, and ALS helped with the data contribution, data analysis, data interpretation, and review. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Clément Buléon .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Not applicable.

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Buléon, C., Mattatia, L., Minehart, R.D. et al. Simulation-based summative assessment in healthcare: an overview of key principles for practice. Adv Simul 7 , 42 (2022). https://doi.org/10.1186/s41077-022-00238-9

Received : 02 March 2022

Accepted : 30 November 2022

Published : 28 December 2022

DOI : https://doi.org/10.1186/s41077-022-00238-9

Keywords

  • Medical education
  • Competency-based education

Global Forum on Innovation in Health Professional Education; Board on Global Health; Institute of Medicine. Assessing Health Professional Education: Workshop Summary. Washington (DC): National Academies Press (US); 2014 Sep 19.

1 Setting the Stage

Key messages.

  • Both summative and formative assessments are critical components of a competency-based system. (Holmboe, Norcini)
  • Understanding why the assessment is being conducted and how the purpose aligns with the desired outcomes is key to undertaking an assessment. (Holmboe, Norcini)
  • By combining a demonstration of knowledge with acquisition of skills, and by testing for an ability to apply both knowledge and skills in new situations, a message is sent to learners that knowledge, skills, application, and ability are all important elements for their education. (Holmboe, Norcini)
  • Too little time is spent on formative assessment. (Holmboe, Norcini)
  • There is a need for greater faculty development in the area of assessment. (Aschenbrener, Bezuidenhout, Holmboe, Norcini, Sewankambo)
  • Although it is a useful tool, most individuals are not good at self-assessments. (Baker, Holmboe, Norcini, Reeves)
  • Regardless of how well learners are trained, dangerous situations leading to medical errors will persist if there is no support of the larger organizational structures emphasizing the need for a culture of safety. (Finnegan, Gaines, Malone, Palsdottir, Talbott)

In setting the stage for the workshop, John Norcini from the Foundation for Advancement of International Medical Education and Research (FAIMER) described assessment as a powerful tool for directing learning by signaling what is important for a learner to know and understand. In this way, he said, assessments can motivate learners to acquire greater knowledge and skills in order to demonstrate that learning has occurred. The summative assessment measures achievement, while formative assessments focus on the learning process and whether the activities the learners engaged in helped them to better understand and demonstrate competency. As such, both summative and formative assessments are critical components of a competency-based system. A competency-based model directs learning based on intended outcomes of a learner ( Sullivan, 1995 ; Harris et al., 2010 ) in the particular context of where the training takes place. Although it is outcome oriented, competency-based education also relies on continuous and frequent assessments for obtaining specific competencies ( Holmboe et al., 2010 ).

The Purpose of Assessment

According to Norcini, assessment involves testing, measuring, collecting and combining information, and providing feedback ( Norcini et al., 2011 ). Understanding why the assessment is being conducted and how the purpose aligns with the desired outcomes is key to undertaking an assessment. Norcini posed a list of potential purposes of the assessment in health professional education, which might include some or all of the following:

  • Enhance learning by pointing out flaws in a skill or errors in knowledge.
  • Ensure safety by demonstrating that learning has occurred.
  • Guide learning in a particular direction outlined by the assessment questions or methods.
  • Motivate learners to seek greater knowledge in a particular area.
  • Provide feedback to the educator or trainer that benchmarks progress of the learner.

Highlighting the fourth bullet, Norcini emphasized that a purpose of assessment is to “create learning.” In order to learn, one needs to be able to retrieve and use the information taken in. To underscore this point, Norcini cited an example in which students who took a test three times ultimately scored better on that test than students who read a relevant article three times (Roediger and Karpicke, 2006). This is known as the “testing effect”: tests are believed to enhance retention even when they are given without any feedback. Norcini described the testing-effect hypothesis that assessments create learning because they force not only retrieval but also application of information, and because they signal to students what is important and what should be emphasized in their studies and experiential learning.

Forum Co-Chair Afaf Meleis from the University of Pennsylvania School of Nursing questioned whether there is a danger in using assessments that direct studying toward the assessment tool rather than opening new ways of critical thinking. Norcini responded in the affirmative, saying that because this risk is always present, the assessment tool must be selected carefully. Historically, tests have been designed around fact memorization. Roughly 20 to 25 years ago, the standardized patient was introduced into assessments that moved beyond the simple memorization–regurgitation model. By combining a demonstration of knowledge with acquisition of skills, and by testing for an ability to apply both knowledge and skills in new situations, a message is sent to learners that knowledge, skills, application, and ability are all important elements for their education.

Assessment Outcomes and Criteria

As might be expected, said Norcini, the most important outcome of an assessment differs based on one's perspective. Students are concerned about being able to demonstrate their competence, educators and educational institutions are interested in producing competent health professionals who are accountable, and regulatory bodies are mainly focused on accountability and maintenance of professional competence. Users of the health system are also concerned that health professionals are accountable and competent, but in addition, they want to know if providers are being efficient with their resources.

Desired outcomes of an assessment differ not only based on perspective as noted above, but also based on the context within which the assessment is being conducted. And although there are certain characteristics of a good assessment, Norcini emphasized that no single set of criteria applies equally to all assessment situations. Despite all of the diversity in reasons for conducting assessments and the settings within which the assessments are conducted, Norcini reported on how participants at the Ottawa Conference were able to come together to produce a unified set of seven criteria needed for a good assessment ( Norcini et al., 2011 ). These conference participants also explored how these criteria might be modified based on the purpose of the assessment and the stakeholder(s) using it. The criteria were presented to the Forum members for discussion at the workshop and can be found in Table 1-1 .

TABLE 1-1. Criteria Needed for a Good Assessment, Produced at the Ottawa Conference.

In considering the criteria outlined by Norcini, Forum Co-Chair Jordan Cohen from George Washington University asked if it is possible to use these principles of assessment for assessing how well teams function and work interprofessionally. Norcini responded with a resounding affirmation that the principles apply regardless of the assessment situation, although the challenges increase dramatically. This, he said, is a growing area of research. For example, the 360-degree assessment is one way to measure teams, and there is considerable work under way in using simulation to assess health professional teams.

Assessment as a Catalyst for Learning

Warren Newton, representing the American Board of Family Medicine, asked about Norcini's use of the term catalyzing learning . Norcini responded that it is one thing to tell a student what is important to learn and another thing to provide students with feedback based on the assessment that drives their learning. The latter is a much more specific way of signaling what is important, and it is used to create learning among students. Newton then asked about the activity costs of assessment versus other kinds of activities. He pointed out that many of the Forum members manage both faculties and clinical systems; this prompted the question, how much time should be spent in assessment as part of the overall teaching role? Norcini responded by looking at the types of assessments, saying that far too much time is often devoted to summative assessment and too little time is spent on formative assessment; he added that formative assessment is the piece that drives learning and the part that is integrated with learning. Furthermore, assessments can be done relatively efficiently, especially if the assessors collaborate with partners across the institution. Norcini believes there could be greater sharing of resources across institutions, which would lead to better and more efficient assessments. Another advantage is the cost savings that can be achieved by spreading the fixed costs across institutions; these costs typically represent the largest expenses associated with assessments.

Assessment's Impact on Patients and Society

Forum member and workshop co-chair Eric Holmboe from the American Board of Internal Medicine (ABIM) moderated the question-and-answer session with John Norcini and brought up assessment from a public perspective. He asked the audience what the return on investment would be if assessment were not in place and insufficiently prepared health professionals were licensed and allowed to practice throughout a 30-year career. The cost to society would be much lower if time were spent, particularly on the formative side, making sure health professionals acquire the competence needed to be effective. Holmboe said that assessors often look at the short-term costs and the time costs without recognizing that not putting in sufficient effort comes at a heavy cost over time. In addition, there has not been a strong, concerted effort to embed assessment into daily activities, such as bedside rounds; this might be a form of observation and assessment that could be exploited more effectively. There are also a number of multisource tools that are relatively low tech and involve a series of observations; however, what is lacking is a way to make these tools sufficiently reliable that appropriate judgments and inferences can be drawn from them.

Forum and workshop planning committee member Patricia Hinton Walker from the Uniformed Services University of Health Sciences followed Holmboe's lead and asked about including the public on the health team and how an assessment might be conducted that includes not just patients but students as well. Norcini responded again by emphasizing the value of multisource feedback for team assessments as well as other opportunities, such as ethics panels that can make use of the patient's competence in a particular area. He went on to say that the assessment process would lack validity if patients were not involved. In follow-up, Walker commented that students are somewhat separated from patients and families. Norcini pointed out that this is an area of keen interest among researchers in the United Kingdom, who are incorporating patients into the education of all health care providers through family interviews. Holmboe also brought up longitudinal integrated clerkships (LICs), in which students are assigned a group of patients and a family to follow over all 4 years of their training. The families play a major role in the assessment and feedback process of the trainees, said Holmboe. Although it is a resource-intensive model, there are data from Australia, Canada, South Africa, and the United States on using LICs as an organizing principle (Norris et al., 2009; Hirsh et al., 2012). The Commonwealth Medical School in Scranton has moved to an entirely LIC-based model, so every student at Commonwealth will follow an LIC for their entire medical education.

Walker also wanted to know Holmboe's and Norcini's views on “high-stakes assessments.” In Holmboe's opinion, there needs to be some form of public accountability through a summative assessment (Norcini agreed). At the ABIM, Holmboe views the certification exam as part of their public accountability as well as an act of professionalism. But for him, the bigger issue is the inclusion of more formative assessments during training and education rather than relying so much on summative examinations. Norcini added that he sees formative assessment as a mechanism for addressing trainee errors at a much earlier stage than waiting until the end for the summative assessment.

Jacob Buck from the University of Maryland School of Social Work, who joined the workshop as a participant, asked what the target of the assessment should be: is it to have healthier individuals and populations, or is it to graduate smarter health providers? In response, Norcini unpacked the goal of the assessment. If the goal is to take better care of patients, then the focus would be on the demonstration of skills in a practice environment and likely not on a multiple-choice test. In his opinion, the triple aim of improving health and care at lower costs may be the desired outcome of education, so an assessment could be designed to achieve that goal. Forum member Pamela Jeffries from Johns Hopkins University did not disagree, but she asked how one might measure interprofessional education (IPE) in the practice environment while patients are involved. Holmboe responded that this gets at some of the complexities of assessing what a learner acquires through experiential learning. Holmboe also raised the difficulty of finding training sites where high-quality interprofessional care can be experienced so that learners can be assessed against a gold standard. It is not surprising that learners who do not experience high-quality interprofessional care are not well prepared to work in these environments. Jeffries suggested that interprofessional clinical simulations could help bridge the gap for learners who are not trained through an embedded IPE clinical or related work experience.

  • STRUCTURE AND IMPLEMENTATION OF ASSESSMENT

Looking at assessment through a different lens, Forum member Bjorg Palsdottir, who represents the Belgian organization Training for Health Equity Network (THEnet), wanted to know more about who is doing the assessing and how that person might prepare to undertake this role. Norcini acknowledged the need for greater faculty development in this area because health professionals are not trained in education or assessment. Forum member and workshop planning committee member Carol Aschenbrener from the Association of American Medical Colleges agreed, but also felt that the shortage of modern clinical practice sites in which to embed the learner is another major impediment. In her opinion, it is the clinical sites that need greater scrutiny and that, if pushed toward modernization through assessment, could be the lever for greater, more relevant faculty development. According to Holmboe, measuring practice characteristics unfortunately remains difficult, although the tools are improving, particularly with the introduction of the Patient-Centered Medical Home (PCMH). For example, the National Committee for Quality Assurance (NCQA) PCMH program developed the NCQA 2011 Medical Home Assessment Tool, which providers and staff can use to assess how their practice operates compared to PCMH 2011 standards (Ingram and Primary Care Development Corporation, 2011). This tool looks mostly at structure and process, said Holmboe, but researchers are beginning to embed outcomes into the assessment, which might make it a good starting place for measuring practice characteristics that could then be applied in education.

Another example Holmboe described is the Dartmouth Microsystem Improvement Curriculum (DMIC). This is a set of tools that incorporates success characteristics associated with high-functioning practices (The Dartmouth Institute, 2013). It uses action learning to instruct providers on how to assess and improve a clinical work environment in order to ultimately provide better patient care. The Idealized Design of Clinical Office Practices (IDCOP) from the Institute for Healthcare Improvement is yet another tool (IHI, 2014). It attempts to demonstrate that through appropriate clinical office practice redesign, performance improvements can be achieved that respond to patients' needs and desires. Goals of the IDCOP model are better clinical outcomes, lower costs, higher satisfaction, and improved efficiency (IHI, 2000). Holmboe acknowledged that these examples are clinically oriented, and he would be interested to learn about other models (although no other models were offered by the participants).

Assessing Cultural Competence

Afaf Meleis asked how one might assess the social mission of health professional learners and design a tool that assesses cultural competence. Neither Norcini nor Holmboe knew of any good models for assessing either of these areas, but Holmboe repeated that social accountability and professionalism can only be assessed if learners actually experience a work environment that has role models in these areas, and that it is the responsibility of the professionals to create these opportunities. Norcini agreed with Meleis, saying that cultural competence is a critical issue to assess. He added that it is absolutely essential that assessors scrutinize the methods used and the results obtained to ensure no one is disadvantaged for cultural reasons. Meleis encouraged Norcini to add a multicultural perspective to his list of criteria needed for a good assessment.

Assessment by Peers

Forum member Beverly Malone from the National League for Nursing questioned the role of peer assessment in formative and summative assessments, given the inherent challenges associated with this type of assessment. Norcini responded that peer assessments are underutilized, particularly when it comes to the assessment of teachers, although a set of measures that includes peer assessment is being developed for assessing teachers. Norcini added that another way to assess teachers is to look at the outcomes of their students. Holmboe pointed out that one of the risks of using student outcomes to assess educators arises when the experiences are not well designed, so that interactions with peers, patients, or others are brief or casual. Attempting to assess learners' knowledge, skills, or ability in these brief and casual encounters is simply not useful, said Holmboe.

Assessment by Patients

The next question changed the focus of the conversation from the learner to the patient: a patient encounter is a one-time event, so what methodologies are in place to ensure equivalence when incorporating the patient's very particular set of experiences? Norcini admitted that there are biases so, in order to counter those, he samples the patient population of a provider as broadly as possible to include different patients on different occasions. In his opinion, there are at least three reasons for including patients in the assessment of providers:

Patients are reluctant to criticize their provider, so when they do, the provider has a major issue that should be addressed.

Patients can be used to compare providers with their colleagues.

Patient feedback makes a major difference in provider performance.

Time-Efficient Assessments

Another comment made during this question-and-answer session was a personal example from Forum member Joanna Cain, representing the American Congress of Obstetricians and Gynecologists and the American Board of Obstetrics and Gynecology, who described how her colleagues in the operating room (OR) use a time-efficient model of formative assessment. In their model, every operation ends with a “60-second” gathering of the team to discuss what did and did not go well. Holmboe applauded their use of formative assessment, but he cautioned against using time limitations as an excuse for not engaging in a complete assessment process. In his view, assessment is a professional obligation that demonstrates the return on investment. With that caveat, Holmboe reported that multiple 2- to 3-minute shared observations can be a rich source of information, and more opportunities for such assessments would be useful. In fact, as the OR example showed, quick assessments are attractive to many health professionals who keep busy schedules. Quick assessments can drive culture as colleagues observe the value in this form of individual and peer assessment, information sharing, and team building.

Self-Assessment

In hearing the previous discussion, Jordan Cohen commented that self-reflection is a potentially important tool. Norcini partly agreed: although it is a useful tool, most individuals are not good at self-assessment. Holmboe added that self-directed assessment, defined by Eva and Regehr (2011) as a global judgment of one's ability in a particular domain, is what Norcini described. The real value is found when self-assessors seek comments and feedback from others, especially those outside their own profession or discipline (Sargeant, 2008). But despite the valuable information this form of assessment can provide, it is not used as often as other forms of assessment.

  • MAKING ASSESSMENT MEANINGFUL

Following the orienting discussion, Forum members engaged in interprofessional table discussions to delve more deeply into the value of formative and summative assessments. Each table in the room included Forum members, a health professional student representative, and a user of the health care system. The purpose of engaging students and patient representatives was to enrich the discussions at each table by infusing different perspectives into the conversations. Students identified by members of the Forum were invited to attend the workshop and represented the fields of social work, public health, medicine, nursing, pharmacy, and speech, language, and hearing. Forum member and workshop co-chair Darla Coffey from the Council on Social Work Education led the session. Coffey suggested that communication might be a focus of the discussions about assessment. One person from each group was designated to present to the entire group the summary of the discussions that took place at his or her table. The results of these discussions can be found in Table 1-2 (value of summative assessments) and Table 1-3 (value of formative assessments). The responses were informed by group discussion and should not be construed as consensus.

TABLE 1-2. Summative Assessment Discussion Question: From the Perspective of Assessment of Learning, What Do You Think Makes a Good Assessment Tool/Measure?

TABLE 1-3. Formative Assessment Discussion Question: From the Perspective of Assessment for Learning, What Do You Think Makes a Good Assessment Tool/Measure?

The Challenge of Uneven Power Structures

In addition to the points listed in Tables 1-2 and 1-3, Forum member Richard Talbott, representing the Association of Schools of the Allied Health Professions, brought up the challenges associated with assessing supervisors or others who may possess greater power than the assessor, given the fear of reprisal. He believes that the first goal within communication is to dismantle the power structure so that anyone can feel comfortable speaking up. In that type of setting, individuals, including patients and caretakers, may feel more comfortable giving honest assessments, and it would create positive role models for learners to emulate. Bjorg Palsdottir then discussed the hidden curriculum and how negative role models can imprint negative experiences on learners regardless of the educational training received in the classroom.

This comment was underscored by yet another Forum member, who cited an example of an aggressive attending physician. The program director confronted the physician about his aggression by emphasizing the risk to safety, saying, “If you are intimidating people, you are not a safe practitioner.” One needs to understand how to navigate the potentially delicate situations created by uneven power structures when challenging the hierarchy, said the Forum member. It takes practice, but it can be done. Workshop planning committee member Meg Gaines from the University of Wisconsin Law School took this point a step further, saying that it was an ethical imperative to speak up.

This topic resonated with the Forum's public health representative John Finnegan from the Association of Schools and Programs of Public Health (ASPPH), who was reminded of the 2005 Joint Commission report that cited communication failures as the leading root cause of medical errors (Joint Commission Resources, Inc., 2005). This does not mean the wrong information was always transmitted; rather, oftentimes nothing was said because of a fear of retribution. Regardless of how well learners are trained, said Finnegan, dangerous situations leading to medical errors will persist if the larger organizational structures do not support a culture of safety.

Assessment as a Driver for Change

Darla Coffey then asked the members and the students and patient representatives to consider how assessments could be a catalyst for change in the educational and health care systems. Much of the discussion revolved around the idea of better integrating education and practice; Forum member George Thibault from the Josiah Macy Jr. Foundation was a vocal advocate for rethinking health professional education and practice as one system. Forum member Lucinda Maine, the representative from the American Association of Colleges of Pharmacy, thought this could possibly be accomplished within her field by improving the assessment skills of their volunteer instructors and preceptors. In her view, this would make it easier to suggest changes in practice environments that could strengthen relationships within the continuum of education to practice. But, said Aschenbrener, for there to be any benefits to health professional education, assessments need to be reviewed at least annually for their alignment with the predetermined educational goals and the set level of student achievement.

The representative from the Association of American Veterinary Medical Colleges, Chris Olsen, felt that for assessment to drive change, it would need to be part of the expectation. Too often, assessments are carried out without taking the critical last step of using the information to drive change. Individual participants at the workshop provided their thoughts on how assessments in the context of education could drive changes in the practice environment. For example, workshop planning committee member Lucy Mac Gabhann, a law student at the University of Maryland, suggested that in a community setting, student assessment might influence policy. And Forum member Jan De Maeseneer from Ghent University in Belgium thought that students exposed to resource-constrained neighborhoods would develop a sensitivity to the social inequalities in health. However, others expressed doubt that assessments could effect change when the organizational culture is based on hierarchy and imbalances in power structures that are perpetuated through the hidden curriculum and role modeling. Beverly Malone pointed out that such a culture puts patients at risk when open and honest communication is avoided due to a fear of reprisal. John Finnegan fervently agreed, saying that communication in an organizational setting is strongly influenced by that culture, and no matter how much one tries to educate around it, the larger organizational framework will prevail. That must change, he said; there has to be a safe culture where communication is not feared in order for assessment to drive change in education and practice.

Yet another view was expressed by George Thibault, who pushed for health professions education and health care delivery to be taken as one unit with one goal. In this way, the impact of assessments is considered on both education and practice simultaneously. The educational reforms are informed by the delivery changes, and the delivery changes are informed by the education changes. If education and practice continue to be dichotomized, he said, valuable learning opportunities across the continuum will be missed. Workshop planning committee member Cathi Grus from the American Psychological Association commented on the opportunity for learning from assessments that are bidirectional. To her, such learning meant engaging patients in the design of the feedback that would be provided to students, and as such could send a powerful message to the learner of what is important to the end user of the health system. What is important, said Grus, is that all involved have an understanding of the goals of the assessment in order to maximize its impact.

  • The Dartmouth Institute. Dartmouth microsystem improvement curriculum: Microsystem action learning series. 2013. [January 6, 2014]. http://clinicalmicrosystem.org/materials/curriculum.
  • Eva KW, Regehr G. Exploring the divergence between self-assessment and self-monitoring. Advances in Health Sciences Education. 2011;16(3):311–329. [PMC free article: PMC3139875] [PubMed: 21113820]
  • Harris P, Snell L, Talbot M, Harden RM. Competency-based medical education: Implications for undergraduate programs. Medical Teacher. 2010;32(8):646–650. [PubMed: 20662575]
  • Hirsh D, Gaufberg E, Ogur B, Cohen P, Krupat E, Cox M, Pelletier S, Bor D. Educational outcomes of the Harvard Medical School-Cambridge Integrated Clerkship: A way forward for medical education. Academic Medicine. 2012;87(5):643–650. [PubMed: 22450189]
  • Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Medical Teacher. 2010;32(8):676–682. [PubMed: 20662580]
  • IHI (Institute for Healthcare Improvement). Idealized design of clinical office practices. Boston, MA: 2000.
  • IHI. Idealized design of the clinical office practice (IDCOP): Overview. 2014. [January 6, 2014]. http://www.ihi.org/offerings/Initiatives/PastStrategicInitiatives/IDCOP/Pages/default.aspx.
  • Ingram DJ, Primary Care Development Corporation. NCQA 2011 Medical Home Assessment Tool. 2011. [January 6, 2014]. http://www.pcdc.org/resources/patient-centered-medical-home/pcdc-pcmh/pcdc-pcmh-resources/PCDC-PCMH/ncqa-2011-medical-home.html.
  • Joint Commission Resources, Inc. The Joint Commission guide to improving staff communication. Oakbrook Terrace, IL: Joint Commission Resources; 2005.
  • Norcini J, Anderson BB, Burch V, Costa MJ, Duvivier R, Galbraith R, Hays R, Kent A, Perrott V, Roberts T. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher. 2011;33(3):206–214. [PubMed: 21345060]
  • Norris TE, Schaad DC, DeWitt D, Ogur B, Hunt DD. Longitudinal integrated clerkships for medical students: An innovation adopted by medical schools in Australia, Canada, South Africa, and the United States. Academic Medicine. 2009;84(7):902–907. [PubMed: 19550184]
  • Roediger HL, Karpicke JD. The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science. 2006;1(3):181–210. [PubMed: 26151629]
  • Sargeant J. Toward a common understanding of self-assessment. Journal of Continuing Education in the Health Professions. 2008;28(1):1–4. [PubMed: 18366124]
  • Sullivan RS. The competency-based approach to training. Washington, DC: U.S. Agency for International Development; 1995.
  • Global Forum on Innovation in Health Professional Education; Board on Global Health; Institute of Medicine. Assessing Health Professional Education: Workshop Summary. Washington (DC): National Academies Press (US); 2014 Sep 19. 1, Setting the Stage.


Summative Simulated-Based Assessment in Nursing Programs

  • PMID: 27224460
  • DOI: 10.3928/01484834-20160516-04

Background: Summative simulated-based assessments are intended to determine students' competence in practice. These assessments need to be carefully designed and implemented, especially when the results are used to make high-stakes decisions.

Method: Critical steps need to be followed to design simulations for summative assessment, ensure the validity of the assessment and reliability of the ratings, and train evaluators.

Results: Guidelines for using simulation for assessment in nursing are suggested. The guidelines are based on the literature and the authors' experiences from a project that examined the feasibility of using simulation for determining students' competence in performance at the end of the nursing program.

Conclusion: Summative simulation-based assessments need to be valid, measuring the knowledge and skills they are intended to, and reliable, with results being reproduced by different evaluators and by the same evaluator at another time. [J Nurs Educ. 2016;55(6):323-328.].

Copyright 2016, SLACK Incorporated.


Utah State University


USU Baccalaureate Nursing Program: Systematic Plan of Evaluation, Standard 5 Outcomes

Nursing program assessment demonstrates the extent of student learning at or near the end of the program as well as program outcome achievement using a systematic plan for evaluation (SPE).

The faculty create and implement a written SPE* for each nursing program type to determine the extent of the achievement of each end-of-program student learning outcome and program outcome, and additionally for graduate programs the role-specific nursing competencies, to inform program decision-making to maintain or improve student and program performance.

Criterion 5.1

The systematic plan for evaluation describes the process for regular summative nursing program-level assessment of student learning outcome achievement. The faculty will:

  • use a variety of appropriate direct outcome assessment methods to ensure comprehensive summative assessment for each end-of-program student learning outcome;
  • establish a specific, measurable expected level of achievement outcome statement for each summative assessment method;
  • collect aggregate assessment data at regular intervals (determined by the faculty) to ensure sufficiency of data to inform decision-making and disaggregate the data to promote meaningful analysis (see the sketch after this list); provide justification for data that are not disaggregated;
  • analyze assessment data (aggregate and/or disaggregate) at regular intervals (determined by the faculty) and when necessary, implement actions based on the analysis to maintain and/or improve end-of-program student learning outcome achievement;
  • maintain documentation for the three most recent years of the assessment data (aggregate and/or disaggregate), the analysis of data, and the use of data analysis in program decision-making to maintain and/or improve students’ end-of-program student learning outcome achievement; and
  • share the analysis of the end-of-program student learning outcome data with communities of interest.
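To make the aggregate-versus-disaggregated analysis described in Criterion 5.1 concrete, here is a minimal sketch in Python using pandas, assuming a small set of hypothetical student-level records; the cohort labels, campus grouping, benchmark value, and column names are illustrative only and are not drawn from USU or ATI data.

```python
# Minimal sketch (hypothetical data, not USU records): aggregate vs. disaggregated
# analysis of end-of-program assessment results against an expected level of
# achievement (ELA).
import pandas as pd

# Hypothetical student-level records; column names are illustrative only.
records = pd.DataFrame({
    "cohort": ["SP2023"] * 4 + ["SP2022"] * 4,
    "campus": ["Logan", "Logan", "Regional", "Regional"] * 2,
    "ati_composite": [81.0, 77.5, 74.0, 79.2, 80.1, 76.4, 72.9, 78.8],
})

NATIONAL_MEAN = 71.6  # placeholder benchmark used only for this illustration


def ela_met(score: float) -> bool:
    """ELA rule used in this sketch: cohort score at or above the benchmark."""
    return score >= NATIONAL_MEAN


# Aggregate analysis: one composite score per cohort.
aggregate = records.groupby("cohort")["ati_composite"].mean().round(1)

# Disaggregated analysis: the same data broken out by campus, to expose
# subgroup differences that the aggregate number can hide.
disaggregated = records.groupby(["cohort", "campus"])["ati_composite"].mean().round(1)

print("Aggregate (ELA met?):")
for cohort, score in aggregate.items():
    print(f"  {cohort}: {score} -> {'met' if ela_met(score) else 'not met'}")

print("\nDisaggregated by campus:")
print(disaggregated.to_string())
```

Keeping both views side by side is what allows faculty to document subgroup differences, or to justify not disaggregating, as the criterion requires.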

End-of-Program Student Learning Outcome and Program Outcomes

ATI Comprehensive Predictor Examination

The composite score (adjusted group score) for the cohort on the ATI Comprehensive Predictor examination will be at or above the group national mean.

During final semester of the program

2024: Group score: 78.2% (Individual program: 71.8%)

2023: Group score: 77.5% (Individual program: 71.8%)

2022: Group score: 77.6% (Individual program: 71.6%)

2021: Group score: 78.4% (Individual program: 71.6%)

2020: Group score: 77.5% (National mean: 71.6%)

2024: ELA met, will continue to monitor.

2023: ELA met, will continue to monitor.

2022: ELA met, will continue to monitor.

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.
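The repeated "ELA met, will continue to monitor" entries above come down to a single comparison: is the cohort's adjusted group score at or above the national group mean? A minimal sketch of that check follows; the 2020 pair (77.5 vs. 71.6) is taken from the results above, while the benchmark shown for 2021 is assumed purely for illustration.

```python
# Minimal sketch of the yearly ELA check: the cohort's adjusted group score must be
# at or above the national group mean for the ELA to be met.
yearly_results = {
    2020: (77.5, 71.6),  # (group score, national group mean) as reported above
    2021: (78.4, 71.6),  # group score reported above; benchmark assumed here
}

for year in sorted(yearly_results):
    group_score, national_mean = yearly_results[year]
    if group_score >= national_mean:
        print(f"{year}: ELA met, will continue to monitor.")
    else:
        print(f"{year}: ELA not met; faculty analyze and act on the results.")
```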

Integrate reliable evidence from multiple perspectives to inform safe nursing practice and make reasonable clinical decisions.

NURS 4215 Capstone Practicum final clinical evaluation tool

100% of students completing NURS 4215 Capstone Practicum will receive rankings of satisfactory on each component of the final clinical evaluation tool.

Annually for 3 years: All SLOs (2019-2021)

 

Year 4 (2022): SLOs 1, 2, & 3

 

Rotating cycle to be repeated pending results

 

Year 7 (2025): SLOs 1, 2, & 3

 

SP 2023: 100%

SP 2022: 100%

SP 2021: 100%

SP 2020: 100%

2023: ELA met, will continue to monitor.

2022: ELA met, will continue to monitor.

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.

USU Department of Nursing Exit Survey item: end-of-program student learning outcomes

85% of students agree that they were able to achieve each end-of-program student learning outcome (SLO) on the USU Department of Nursing Exit Survey.

SP 2023: 99.6%

SP 2022: 99.6%

SP 2021: 100%

SP 2020: 100%

ATI Comprehensive Predictor Examination Nursing Judgment category

The composite score for the cohort on the ATI Comprehensive Predictor Examination Nursing Judgment category will be at least 75%.

SP 2023: 74.4% (Target: >75%)

SP 2022: 81.3% (Target: >75%)

SP 2021: 80.8% (Target: >75%)

SP 2020: 76.1% (Target: >75%)
 

Synthesize knowledge from nursing and a liberal education in the planning and provision of holistic nursing care across the lifespan and continuum of health care environments.

NURS 4215 Capstone Practicum final clinical evaluation tool

100% of students completing NURS 4215 Capstone Practicum will receive rankings of satisfactory on each component of the final clinical evaluation tool.

Annually for 3 years: All SLOs (2019-2021)

 

Year 4 (2022): SLOs 1, 2, & 3

 

Rotating cycle to be repeated pending results

 

Year 7 (2025): SLOs 1, 2, & 3

 

SP 2023: 100%

SP 2022: 100%

SP 2021: 100%

SP 2020: 100%

2023: ELA met, will continue to monitor.

2022: ELA met, will continue to monitor.

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.

USU Department of Nursing Exit Survey item: end-of-program student learning outcomes

 

85% of students agree that they were able to achieve each end-of-program student learning outcome (SLO) on the USU Department of Nursing Exit Survey.

SP 2023: 99.6%

SP 2022: 99.6%

SP 2021: 100%

SP 2020: 100%

1) ATI Comprehensive Predictor Examination Psychosocial Integrity category

1) The composite score for the cohort in the Psychosocial Integrity category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 79.9% (79.3% above national mean)

SP 2022: 79.3% (65.5% above national mean)

SP 2021: 76.4% (63.3% above program mean)

SP 2020: 72.9% (National mean: 71.5%)

2) ATI Comprehensive Predictor Examination Physiological Adaptation category

2) The composite score for the cohort in the Physiological Adaptation category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 79.1% (75.9% above national mean)

SP 2022: 77.3% (79.3% above national mean)

SP 2021: 77.6% (83.3% above program mean)

SP 2020: 75.1% (National mean: 72.0%)

3) ATI Comprehensive Predictor Examination Pharmacological and Parenteral Therapies category

The composite score for the cohort in the Pharmacological and Parenteral Therapies category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 76.2% (75.9% above national mean)

SP 2022: 78.3% (75.9% above national mean)

SP 2021: 76.1% (86.7% above program mean)

SP 2020: 77.2% (National mean: 72.2%)

Employ the nursing process and patient care technologies and information systems to support safe nursing practice.

 

NURS 4215 Capstone Practicum final clinical evaluation tool

100% of students completing NURS 4215 Capstone Practicum will receive rankings of satisfactory on each component of the final clinical evaluation tool.

Annually for 3 years: All SLOs (2019-2021)

 

Year 4 (2022): SLOs 1, 2, & 3

 

Rotating cycle to be repeated pending results

 

Year 7 (2025): SLOs 1, 2, & 3

 

SP 2023: 100%

SP 2022: 100%

SP 2021: 100%

SP 2020: 100%

2023: ELA met, will continue to monitor.

2022: ELA met, will continue to monitor.

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.

USU Department of Nursing Exit Survey item: end-of-program student learning outcomes

 

85% of students agree that they were able to achieve each end-of-program student learning outcome (SLO) on the USU Department of Nursing Exit Survey.

SP 2023: 99.6%

SP 2022: 99.6%

SP 2021: 100%

SP 2020: 100%

1) ATI Comprehensive Predictor Examination Management of Care category

1) The composite score for the cohort in the Management of Care category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 79.2% (65.5% above national mean)

SP 2022: 74.7% (58.6% above national mean)

SP 2021: 77.6% (60.0% above program mean)

SP 2020: 78.3% (National mean: 77.1%)

2) ATI Comprehensive Predictor Examination Safety and Infection Control category

The composite score for the cohort in the Safety and Infection Control category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 77.4% (72.4% above national mean)

SP 2022: 82.8% (75.9% above national mean)

SP 2021: 87.0% (90.0% above program mean)

SP 2020: 72.9% (National mean: 67.3%)

3) ATI Comprehensive Predictor Examination Basic Care and Comfort category

3) The composite score for the cohort in the Basic Care and Comfort category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 73.7% (72.4% above national mean)

SP 2022: 80.3% (90.0% above national mean)

SP 2021: 80.3% (90.0% above program mean)

SP 2020: 78.1% (National mean: 70.9%)

4) ATI Comprehensive Predictor Examination Pharmacological and Parenteral Therapies category

4) The composite score for the cohort in the Pharmacological and Parenteral Therapies category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 76.2% (75.9% above national mean)

SP 2022: 78.3% (75.9% above national mean)

SP 2021: 76.0% (86.7% above program mean)

SP 2020: 77.2% (National mean: 72.2%)

5) ATI Comprehensive Predictor Examination Reduction of Risk category

5) The composite score for the cohort in the Reduction of Risk category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 70.0% (37.9% above national mean)

SP 2022: 79.5% (69.0% above national mean)

SP 2021: 81.5% (70.0% above program mean)

SP 2020: 77.2% (National mean: 72.2%)

Utilize interpersonal and inter-professional communication in collaboration for the promotion of optimal health for individuals, families, communities, and populations.

 

NURS 4215 Capstone Practicum final clinical evaluation tool

100% of students completing NURS 4215 Capstone Practicum will receive rankings of satisfactory on each component of the final clinical evaluation tool.

Annually for 3 years: All SLOs (2019-2021)

 

Year 5: SLOs 4, 5, & 6 (2023)

 

Rotating cycle to be repeated pending results

SP 2023: 100%

SP 2021: 100%

SP 2020: 100%

SP 2019: 100%

2023: ELA met, will continue to monitor.

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.

2019: ELA met, will continue to monitor.

USU Department of Nursing Exit Survey item: end-of-program student learning outcomes

 

85% of students agree that they were able to achieve each end-of-program student learning outcome (SLO) on the USU Department of Nursing Exit Survey.

SP 2023: 100%

SP 2021: 100%

SP 2020: 100%

SP 2019: 90.5%

ATI Comprehensive Predictor Examination Management of Care category

The composite score for the cohort in the Management of Care category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 79.2% (65.5% above national mean)

SP 2021: 77.6% (60.0% above program mean)

SP 2020: 78.3% (National mean: 77.1%)

SP 2019: 73.5% (National mean: 77.1%)

Apply ethical and legal standards of professional nursing including professional accountability and responsibility in the provision of professional nursing care.

 

NURS 4215 Capstone Practicum final clinical evaluation tool

100% of students completing NURS 4215 Capstone Practicum will receive rankings of satisfactory on each component of the final clinical evaluation tool.

Annually for 3 years: All SLOs (2019-2021)

 

Year 5: SLOs 4, 5, & 6 (2023)

 

Rotating cycle to be repeated pending results

SP 2023: 100%

SP 2021: 100%

SP 2020: 100%

SP 2019: 100%

2023: ELA met, will continue to monitor.

 

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.

2019: ELA met, will continue to monitor.

USU Department of Nursing Exit Survey item: end-of-program student learning outcomes

 

85% of students agree that they were able to achieve each end-of-program student learning outcome (SLO) on the USU Department of Nursing Exit Survey.

SP 2023: 100%

SP 2021: 100%

SP 2020: 100%

SP 2019: 90.5%

ATI Comprehensive Predictor Examination Management of Care category

The composite score for the cohort in the Management of Care category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 79.2% (65.5% above national mean)

SP 2021: 77.6% (60.0% above program mean)

SP 2020: 78.3% (National mean: 77.1%)

SP 2019: 73.5% (National mean: 77.1%)

Integrate leadership and management skills, and knowledge of health care policy, regulatory processes, and cost-effectiveness for the improvement of quality care and patient safety.

 

NURS 4215 Capstone Practicum final clinical evaluation tool

100% of students completing NURS 4215 Capstone Practicum will receive rankings of satisfactory on each component of the final clinical evaluation tool.

Annually for 3 years: All SLOs (2019-2021)

 

Year 5: SLOs 4, 5, & 6 (2023)

 

Rotating cycle to be repeated pending results

SP 2023: 100%

SP 2021: 100%

SP 2020: 100%

SP 2019: 100%

2023: ELA met, will continue to monitor.

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.

2019: ELA met, will continue to monitor.

USU Department of Nursing Exit Survey item: end-of-program student learning outcomes

 

85% of students agree that they were able to achieve each end-of-program student learning outcome (SLO) on the USU Department of Nursing Exit Survey.

SP 2023: 100%

SP 2021: 100%

SP 2020: 100%

SP 2019: 90.5%

ATI Comprehensive Predictor Examination Management of Care category

The composite score for the cohort in the Management of Care category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2023: 79.2% (76.3% above program mean)

SP 2021: 77.6% (60.0% above program mean)

SP 2020: 78.3% (National mean: 77.1%)

SP 2019: 73.5% (National mean: 77.1%)

Incorporate principles of health education, promotion, and disease prevention in the professional nursing care of individuals, families, communities, and populations.

 

NURS 4215 Capstone Practicum final clinical evaluation tool

100% of students completing NURS 4215 Capstone Practicum will receive rankings of satisfactory on each component of the final clinical evaluation tool.

Annually for 3 years: All SLOs (2019-2021)

 

Year 6: SLOs 7 & 8 (2024)

 

Rotating cycle to be repeated pending results

SP 2024: 100%

SP 2021: 100%

SP 2020: 100%

SP 2019: 100%

2024: ELA met, will continue to monitor.

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.

2019: ELA met, will continue to monitor.

USU Department of Nursing Exit Survey item: end-of-program student learning outcomes

 

85% of students agree that they were able to achieve each end-of-program student learning outcome (SLO) on the USU Department of Nursing Exit Survey.

SP 2024: 95.8%

SP 2021: 100%

SP 2020: 100%

SP 2019: 90.5%

ATI Comprehensive Predictor Examination Health Promotion and Maintenance category

The composite score for the cohort in the Health Promotion and Maintenance category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2024: 83.6% (86.7% above National mean)

SP 2021: 69.8% (60.0% above program mean)

SP 2020: 76.3% (National mean: 68.9%)

SP 2019: 68.5% (National mean: 68.9%)

Value caring, respect, dignity, hope, and the human spirit in the provision of professional nursing care.

 

NURS 4215 Capstone Practicum final clinical evaluation tool

100% of students completing NURS 4215 Capstone Practicum will receive rankings of satisfactory on each component of the final clinical evaluation tool.

Annually for 3 years: All SLOs (2019-2021)

 

Year 6: SLOs 7 & 8 (2024)

 

Rotating cycle to be repeated pending results

SP 2024: 100%

SP 2021: 100%

SP 2020: 100%

SP 2019: 100%

2024: ELA met, will continue to monitor.

2021: ELA met, will continue to monitor.

2020: ELA met, will continue to monitor.

2019: ELA met, will continue to monitor.

USU Department of Nursing Exit Survey item: end-of-program student learning outcomes

 

85% of students agree that they were able to achieve each end-of-program student learning outcome (SLO) on the USU Department of Nursing Exit Survey.

SP 2024: 87.5%

SP 2021: 100%

SP 2020: 100%

SP 2019: 90.5%

ATI Comprehensive Predictor Examination Psychosocial Integrity category

The composite score for the cohort in the Psychosocial Integrity category on the ATI Comprehensive Predictor examination will be at or above the group national mean.

SP 2024: 83.3% (90.0% above National mean)

SP 2021: 76.4% (63.3% above program mean) 

SP 2020: 72.9% (National mean: 71.5%)

SP 2019: 71.3% (National mean: 71.5%)

10 Summative Assessment Examples to Try This School Year

Written by Jordan Nisbet


  • A formative and summative assessment definition
  • Difference between formative and summative assessment
  • Pros and cons of summative assessment
  • 9 effective and engaging summative assessment examples
  • Helpful summative assessment strategies

When gauging student learning, two approaches likely come to mind: a formative or summative assessment.

Fortunately, feeling pressure to choose one or the other isn’t necessary. These two types of learning assessment actually serve different and necessary purposes. 

Definitions: What’s the difference between formative and summative assessment?


Formative assessment occurs regularly throughout a unit, chapter, or term to help track not only how student learning is improving, but how your teaching can, too.

According to a WestEd article, teachers love using various formative assessments because they help meet students’ individual learning needs and foster an environment for ongoing feedback.

Take one-minute papers, for example. Giving your students a solo writing task about today’s lesson can help you see how well students understand new content.

Catching these struggles or learning gaps immediately is better than finding out during a summative assessment.

Such an assessment could include:

  • In-lesson polls
  • Partner quizzes
  • Self-evaluations
  • Ed-tech games
  • One-minute papers
  • Visuals (e.g., diagrams, charts or maps) to demonstrate learning
  • Exit tickets

So, what is a summative assessment?


It occurs at the end of a unit, chapter, or term and is most commonly associated with final projects, standardized tests, or district benchmarks.

Typically heavily weighted and graded, it evaluates what a student has learned and how much they understand.

There are various types of summative assessment. Here are some common examples of summative assessment in practice:

  • End-of-unit test
  • End-of-chapter test
  • Achievement tests
  • Standardized tests
  • Final projects or portfolios

Teachers and administrators use the final result to assess student progress, and to evaluate schools and districts. For teachers, this could mean changing how you teach a certain unit or chapter. For administrators, this data could help clarify which programs (if any) require tweaking or removal.

The differences between formative and summative assessment

While we just defined the two, there are five key differences between formative and summative assessments requiring a more in-depth explanation.

Formative assessment:

  • Occurs through a chapter or unit
  • Improves how students learn
  • Covers small content areas
  • Monitors how students are learning
  • Focuses on the process of student learning

Summative assessment:

  • Occurs at the end of a chapter or unit
  • Evaluates what students learn
  • Covers complete content areas
  • Assigns a grade to students' understanding
  • Emphasizes the product of student learning

During vs after

Teachers use formative assessment at many points during a unit or chapter to help guide student learning.

Summative assessment comes in after completing a content area to gauge student understanding.

Improving vs evaluating

If anyone knows how much the learning process is a constant work in progress, it’s you! This is why formative assessment is so helpful — it won’t always guarantee students understand concepts, but it will improve how they learn.

Summative assessment, on the other hand, simply evaluates what they’ve learned. In her book, Balanced Assessment: From Formative to Summative, renowned educator Kay Burke writes, “The only feedback comes in the form of a letter grade, percentage grade, pass/fail grade, or label such as ‘exceeds standards’ or ‘needs improvement.’”


Little vs large

Let’s say chapter one in the math textbook has three subchapters (i.e., 1.1, 1.2 and 1.3). A teacher conducting formative assessments will assign mini tasks or assignments throughout each individual content area.

If, on the other hand, you’d like an idea of how well your class understood the complete chapter, you’d give them a test covering a larger content area that includes all three parts.

Monitoring vs grading

Formative assessment is extremely effective as a means to monitor individual students’ learning styles. It helps catch problems early, giving you more time to address and adapt to different problem areas.

Summative assessments are used to evaluate and grade students’ overall understanding of what you’ve taught. Think report card comments: did students achieve the learning goal(s) you set for them or not?


Process vs product

“It’s not about the destination; it’s about the journey”? This age-old saying sums up formative and summative assessments fairly accurately.

The former focuses on the process of student learning. You’ll use it to identify areas of strength and weakness among your students — and to make necessary changes to accommodate their learning needs.

The latter emphasizes the product of student learning. To discover the product’s “value”, you can ask yourself questions, such as: At the end of an instructional unit, did the student’s grade exceed the class standard, or pass according to a district’s benchmark?

In other words, formative methods are an assessment for learning whereas summative ones are an assessment of learning .

Now that you’ve got a more thorough understanding of these evaluations, let’s dive into the love-hate relationship teachers like yourself may have with summative assessments.

Perceived disadvantages of summative assessment

The pros are plenty. However, before getting to that list, let’s outline some of its perceived cons. Summative assessment may:

1) Offer minimal room for creativity

Rigid and strict assignments or tests can lead to a regurgitation of information. Some students may be able to rewrite facts from one page to another, but others need to understand the “why” before giving an answer.

2) Not accurately reflect learning

“Teaching to the test” refers to educators who dedicate more time teaching lessons that will be emphasized on district-specific tests.

A survey conducted by Harvard’s Carnegie-Knight Task Force on the Future of Journalism asked teachers whether or not “preparing students to pass mandated standardized tests” affects their teaching.

A significant 60% said it either “dictates most of” or “substantially affects” their teaching. While this can result in higher scores, curriculum distortion can prevent students from learning other foundational subject areas.

3) Ignore (and miss) timely learning needs


Because summative assessment occurs at the end of units or terms, teachers can fail to identify and remedy students’ knowledge gaps or misconceptions as they arise.

Unfortunately, by this point, there’s often little or no time to rectify a student’s mark, which can affect them in subsequent units or grades.

4) Result in a lack of motivation

The University of London’s Evidence for Policy and Practice conducted a 19-study systematic review of the impact summative assessment and tests have on students’ motivation for learning.

Contrary to popular belief, the researchers found that students who scored poorly on national curriculum tests experienced lower self-esteem and an unwillingness to put more effort into future test preparation. Beforehand, interestingly, “there was no correlation between self-esteem and achievement.”

For some students, summative assessment can sometimes be seen as 'high stakes' testing due to the pressure on them to perform well. That said, 'low-stakes' assessments can also be used in the form of quizzes or practice tests.

Repeated practice tests reinforce the low self-image of the lower-achieving students… When test scores are a source of pride to parents and the community, pressure is brought to bear on the school for high scores.

Similarly, parents bring pressure on their children when the result has consequences for attendance at high social status schools. For many students, this increases their anxiety, even though they recognize their parents as being supportive.

5) Be inauthentic

Summative assessment has received criticism for its perceived inaccuracy in providing a full and balanced measure of student learning.

Consider this, for example: Your student, who’s a hands-on, auditory learner, has a math test today. It comes in a traditional paper format as well as a computer program format, which reads the questions aloud for students.

Chances are the student will opt for the latter test format. What’s more, this student’s test results will likely be higher and more accurate.

The reality is that curricula — let alone standardized tests — typically don’t allow for this kind of accommodation. This is the exact reason educators and advocates such as Chuck Hitchcock, Anne Meyer, David Rose, and Richard Jackson believe:

Curriculum matters and ‘fixing’ the one-size-fits-all, inflexible curriculum will occupy both special and general educators well into the future… Students with diverse learning needs are not ‘the problem’; barriers in the curriculum itself are the root of the difficulty.

6) Be biased

Depending on a school district’s demographic, summative assessment — including standardized tests — can present biases if a group of students is unfairly graded based on their race, ethnicity, religion, gender, or social class.

In his presentation at Kansas State University, emeritus professor in the UCLA Graduate School of Education and Information Studies, Dr. W. James Popham, explained summative assessment bias:

This doesn’t necessarily mean that if minority students are outperformed on a summative test by majority students that the test is biased against that minority. It may instead indicate that the minority students have not been provided with the appropriate instruction…

An example of content bias against girls would be one in which students are asked to compare the weights of several objects, including a football. Since girls are less likely to have handled a football, they might find the item more difficult than boys, even though they have mastered the concept measured by the item.

Importance and benefits of summative assessment


Overall, these are valid points raised against summative assessment. However, it does offer fantastic benefits for teachers and students alike!

Summative assessment can:

1) Motivate students to study and pay closer attention

Although we mentioned lack of motivation above, this isn’t true for every student. In fact, you’ve probably encountered numerous students for whom summative assessments are an incredible source of motivation to put more effort into their studies.

For example, final exams are a common type of summative assessment that students may encounter at the end of a semester or school year. This pivotal moment gives students a milestone to achieve and a chance to demonstrate their knowledge.

In May 2017, the College Board released a statement about whether coaching truly boosts test scores:

Data shows studying for the SAT for 20 hours on free Official SAT Practice on Khan Academy is associated with an average score gain of 115 points, nearly double the average score gain compared to students who don’t use Khan Academy. Out of nearly 250,000 test-takers studied, more than 16,000 gained 200 points or more between the PSAT/NMSQT and SAT…

In addition to the 115-point average score increase associated with 20 hours of practice, shorter practice periods also correlate with meaningful score gains. For example, 6 to 8 hours of practice on Official SAT Practice is associated with an average 90-point increase.

2) Allow students to apply what they’ve learned

It’s one thing to memorize multiplication tables (which is a good skill), but another to apply those skills in math word problems or real-world examples.

Summative assessments — excluding, for example, pure multiple-choice tests — help you see which students can retain and apply what they've learned.

3) Help identify gaps in student learning

Before moving on to a new unit, it’s vital to make sure students are keeping up. Naturally, some will be ahead while others will lag behind. In either case, giving them a summative assessment will provide you with a general overview of where your class stands as a whole.

Let’s say your class just wrote a test on multiplication and division. If all students scored high on multiplication but one quarter of students scored low on division, you’ll know to focus more on teaching division to those students moving forward.

4) Help identify possible teaching gaps

In addition to identifying student learning gaps, summative assessment can help target where your teaching style or lesson plans may have missed the mark.

Have you ever been grading tests and, to your horror, realized that almost none of your students hit the benchmark you hoped for? When this happens, the low grades aren't necessarily related to study time.

For example, you may need to adjust your teaching methods by:

  • Including/excluding word problems
  • Incorporating more visual components
  • Trying innovative summative assessments (we list some below!)

5) Give teachers valuable insights

Summative assessments can highlight what worked and what didn't throughout the school year. Once you pinpoint where and how your lessons need tweaking, making informed adjustments for next year becomes easier.

In this world nothing can be said to be certain, except death and taxes… and, for teachers, new students year after year. So although old students may miss out on changes you’ve made to your lessons, new ones get to reap the benefits.

This not only improves your skills as an educator but also ensures a more enriching educational experience for generations of students to come.

6) Contribute positively to learning outcomes

Certain summative assessments also provide valuable data at the district, national, and global levels. Average test scores can help determine whether schools receive funding, whether programs stay or go, whether curricula change, and more. Burke writes:

Summative assessments also provide the public and policymakers with a sense of the results of their investment in education and give educators a forum for proving whether instruction works – or does not work.

The seven aims of summative assessment

Dr. Nancy P. Gallavan, a professor of teacher education at the University of Central Arkansas, believes teachers can use performance-based summative assessments at any grade level.

However, in an article for Corwin, she suggests crafting yours with seven aims in mind:

  • Accompanied with appropriate time and task management
  • Achievable as in-class activities and out-of-class assignments
  • Active involvement in planning, preparation, and performance
  • Applicable to academic standards and expectations
  • Appropriate to your students' learning styles, needs, and interests
  • Attractive to your students on an individual and group level
  • Authentic to curricular content and context

Ideally, the assessment method should also measure a student’s performance accurately against the learning objectives set at the beginning of the course.

Keeping these goals in mind, here’s a list of innovative ways to conduct summative assessments in your classroom!

Summative assessment examples: 10 ways to make test time fun

If you want to switch things up this summative assessment season, keep reading. While you can't change what's on standardized tests, you can create activities that let your students exhibit and apply their understanding and skills to end-of-chapter or end-of-unit assessments in a refreshing way.

Why not give them the opportunity to express their understanding in ways that apply to different learning styles?

Note: As a general guideline, students should demonstrate recognition and recall, logic and reasoning, and skills and application covering major concepts and practices (including content areas you emphasized in your lessons).

1) One, two, three… action!

Have students write a script and create a short play, movie, or song about a concept or strategy of your choosing.

This video from Science Rap Academy is a great — and advanced — example of students who created a song about how blue-eyed children can come from two brown-eyed parents.

2) Text message conversation

Using a tool such as iPhone Fake Text Generator, have students craft a mock text message conversation conveying a complex concept from the unit, or each chapter of that unit.

Students could create a back-and-forth conversation between two historical figures about a world event, or two friends helping each other with complex math concepts.

3) Podcast

Have your students create a five- to 10-minute podcast episode about core concepts from each unit. This is an exciting option because it can become an ongoing project.

Individually or in groups, specific students can be in charge of each end-of-chapter or -unit podcast. If your students have a cumulative test towards the end of the year or term, the podcast can even function as a study tool they created together.

You can use online tools such as Record MP3 Online or Vocaroo to get your class started!

4) Infographic

Creating a detailed infographic for a final project is an effective way for students to reinforce what they’ve learned. They can cover definitions, key facts, statistics, research, how-to info, graphics, etc.

You can even put up the most impressive infographics in your classroom. Over time, you’ll have an arsenal of in-depth, visually-appealing infographics students can use when studying for chapter or unit tests.

5) Compare and contrast

Venn diagrams are an old — yet effective — tool perfect for visualizing just about anything! Whether you teach history or social studies, English or math, or something in between, Venn diagrams can help certain learners visualize the relationship between different things.

For example, they can compare book characters, locations around the world, scientific concepts, and more.

6) Living museum

This creative summative assessment is similar to one, two, three… action! Individuals will plan and prepare an exhibit (concept) in the Living Museum (classroom). Let’s say the unit your class just completed covered five core concepts.

Five students will set up around the classroom while the teacher walks from exhibit to exhibit. Upon reaching the first student, the teacher will push an imaginary button, bringing the exhibit “to life.” The student will do a two to three-minute presentation; afterwards, the teacher will move on to the next one.

7) Ed-Tech games

Now more than ever, students are growing up saturated with smartphones, tablets, and video games. That’s why educators should show students how to use technology in the classroom effectively and productively.

More and more educators are bringing digital tools into the learning process. Pew Research Center surveyed 2,462 teachers and reported that digital technologies have helped them teach their middle and high school students.

Some of the findings were quite eye-opening:

  • 80% report using the internet at least weekly to help them create lesson plans
  • 84% report using the internet at least weekly to find content that will engage students
  • 69% say the internet has a "major impact" on their ability to share ideas with other teachers
  • 80% report getting email alerts or updates at least weekly that allow them to follow developments in their field
  • 92% say the internet has a “major impact” on their ability to access content, resources, and materials for their teaching
  • 67% say the internet has a “major impact” on their ability to interact with parents and 57% say it has had such an impact on enabling their interaction with students

To make the most of EdTech, find a tool that actually engages your students in learning and gives you the insightful data and reports you need to adjust your instruction.

Tip: Teaching math from 1st to 8th grade? Use Prodigy!

With Prodigy Math, you can:

  • Deliver engaging assessments: Prodigy's game-based approach makes assessments fun for students.
  • Spot and solve learning gaps: See which students need more support at the touch of a button.
  • Reduce test anxiety: Prodigy has been shown to build math confidence.

Plus, it's all available to educators at no cost.

8) Shark Tank/Dragon’s Den

Yes, just like the reality TV show! You can show an episode or two to your class or get them to watch the show at home. Next, have students pitch a product or invention that can help change the world outside of school for the better.

This innovative summative assessment is one that’ll definitely require some more thought and creativity. But it’s important that, as educators, we help students realize they can have a huge positive impact on the world in which they live.

9) Free choice

If a student chooses to come up with their own summative assessment, you’ll need to vet it first. It’ll likely take some collaboration to arrive at something sufficient.

However, giving students the freedom to explore content areas that interest them most could surprise you. Sometimes it's during these projects that they discover a newfound passion and are wildly successful in completing the task.

We're sure there are countless other innovative summative assessment ideas out there, but we hope this list gets your creative juices flowing. One of the greatest misconceptions about summative assessments (standardized state and national tests aside) is that they're all about paper and pencil; hopefully the examples above, plus one more below, show how fun and engaging test time can really be.

10) Group projects

Group projects aren't just a fun way to break the monotony, but a dynamic and interactive form of summative assessment. Here's why:

  • Collaborative learning: Group projects encourage students to work as a team, fostering their communication and collaboration skills. They learn to listen, negotiate, and empathize, which are crucial skills in and beyond the classroom.
  • Promotes critical thinking: When students interact with each other, they get to explore different perspectives. They challenge each other's understanding, leading to stimulating debates and problem-solving sessions that boost critical thinking.
  • In-depth assessment: Group projects offer teachers a unique lens to evaluate both individual performances and group dynamics. It's like getting a sneak peek into their world - you get to see how they perform under different circumstances and how they interact with each other.
  • Catering to different learning styles: Given the interactive nature of group projects, they can cater to different learning styles - auditory, visual, and kinesthetic. Every student gets a chance to shine!

However, it's important to set clear instructions and criteria to ensure fairness. Remember, it's not just about the final product - it's about the process too.

Some interesting examples of group projects include:

  • Create a Mini Documentary: Students could work together to research a historical event and create a mini documentary presenting their findings.
  • Plan a Community Service Project: This could involve identifying a problem in the local community and creating a detailed plan to address it.
  • Design a Mobile App: For a more tech-focused project, students could identify a problem and design an app that solves it.

Summative assessment strategies for keeping tests clear and fair

In addition to using the summative assessment examples above to accommodate your students’ learning styles, these tips and strategies should also help:

  • Use a rubric — Rubrics help set a standard for how your class should perform on a test or assignment. They outline test length, how in-depth it will be, and what you require of students to achieve the highest possible grades.
  • Design clear, effective questions — When designing tests, do your best to use language, phrases, and examples similar to those used during lessons. This'll help keep your tests aligned with the material you've covered.
  • Try blind grading — Most teachers prefer knowing whose tests they're grading. But if you want to provide wholly unbiased grades and feedback, try blind grading. You can ask students to write their names on the bottom of the last test page or on the back.
  • Assess comprehensiveness — Make sure the broad, overarching connections you're hoping students can make are reasonable and fluid. For example, if the test covers measurement, geometry, and spatial sense, avoid including questions about patterning and algebra.
  • Create a final test after, not before, teaching the lessons — Don't put the cart before the horse. Plans can change, and student learning can demand different emphases from year to year. If you have a test outline, perfect! But expect to embrace and make some changes from time to time.
  • Make it real-world relevant — How many times have you heard students ask, "When am I going to use this in real life?" Far too often, students assume math, for example, is irrelevant to their lives and write it off as a subject they don't need. When crafting test questions, use culturally relevant word problems to illustrate a subject's true relevance.

Enter the Balanced Assessment Model

Throughout your teaching career, you’ll spend a lot of time with formative and summative assessments. While some teachers emphasize one over the other, it’s vital to recognize the extent to which they’re interconnected.

In the book Classroom Assessment for Student Learning , Richard Stiggins, one of the first educators to advocate for the concept of assessment for learning, proposes something called “a balanced assessment system that takes advantage of assessment of learning and assessment for learning.”

If you use both effectively, they inform one another and “assessment becomes more than just an index of school success. It also serves as the cause of that success.”

In fact, Stiggins argues teachers should view these two types of assessment as “in sync.”

They can even be the exact same thing — only the purpose and the timing of the assessment determine its label. Formative assessments provide the training wheels that allow students to practice and gain confidence while riding their bikes around the enclosed school parking lot.

Once the training wheels come off, the students face their summative assessment as they ride off into the sunset on only two wheels, prepared to navigate the twists and turns of the road to arrive safely at their final destination.

Conclusion: Going beyond the test

Implementing these innovative summative assessment examples should engage your students in new and exciting ways.

What's more, students will have the opportunity to express and apply what they've learned in creative ways that solidify their learning.

So, what do you think — are you ready to try out these summative assessment ideas? Prodigy is a game-based learning platform teachers use to keep their students engaged.

Sign up for a free teacher account and set an Assessment today!
