
Evaluating Business Presentations: A Six Point Presenter Skills Assessment Checklist

Posted by Belinda Huckle  |  On April 18, 2024  |  In Presentation Training, Tips & Advice

In this Article... quick links

1. Ability to analyse an audience effectively and tailor the message accordingly
2. Ability to develop a clear, well-structured presentation/pitch that is compelling and persuasive
3. Ability to connect with and maintain the engagement of the audience
4. Ability to prepare effective slides that support and strengthen the clarity of the message
5. Ability to appear confident, natural and in control
6. Ability to summarise and close a presentation to achieve the required/desired outcome
Effective presentation skills are essential to growth
Don’t forget to download our Presenter Skills Assessment form

For many business people, speaking in front of clients, customers, their bosses or even their own large team is not a skill that comes naturally. So it’s likely that within your organisation, and indeed within your own team, you’ll find varying levels of presenting ability. Without an objective way to assess the presenter skills needed to make a good presentation, convincing someone that presentation coaching could enhance their job performance (benefiting your business), boost their promotion prospects (benefiting their career) and significantly increase their self-confidence (benefiting their broader life choices) becomes more challenging.


So, how do you evaluate the presenting skills of your people to find out, objectively, where the skill gaps lie? Well, you work out your presentation skills evaluation criteria and then measure/assess your people against them. 

To help you, in this article we’re sharing the six crucial questions we believe you need to ask to not only make a professional assessment of your people’s presenting skills, but to showcase what makes a great presentation. We use them in our six-point Presenter Skills Assessment checklist (which we’re giving away as a free download at the end of this blog post). The answers to these questions will allow you to identify the presenter skills strengths and weaknesses (i.e. skills development opportunities) of anyone in your team or organisation, from the Managing Director down. You can then put presenter skills training or coaching in place so that everyone who needs it can learn the skills to deliver business presentations, face-to-face or online, with confidence, impact and purpose.

Read on to discover what makes a great presentation and how to evaluate a presenter using our six-point Presenter Skills Assessment criteria so you can make a professional judgement of your people’s presenting skills.

1. Ability to analyse an audience effectively and tailor the message accordingly

If you ask most people what makes a great presentation, they will likely comment on tangible things like structure, content, delivery and slides. While these are all critical aspects of a great presentation, a more fundamental and crucial part is often overlooked – understanding your audience. So, when you watch people in your organisation or team present, look for clues to see whether they really understand their audience and the particular situation they are currently in, such as:

  • Is their content tight, tailored and relevant, or just generic?
  • Is the information pitched at the right level?
  • Is there a clear ‘What’s In It For Them’?
  • Are they using language and terminology that reflects how their audience talk?
  • Have they addressed all of the pain points adequately?
  • Is the audience focused and engaged, or do they seem distracted?

For your people, getting to know their audience, and more importantly, understanding them, should always be the first step in pulling together a presentation. Comprehending the challenges, existing knowledge and level of detail the audience expects lays the foundation of a winning presentation. From there, the content can be structured to get the presenter’s message across in the most persuasive way, and the delivery tuned to best engage those listening.


2. Ability to develop a clear, well-structured presentation/pitch that is compelling and persuasive

Flow and structure are both important elements of a presentation, as both impact the effectiveness of the message and are essential to what makes a good presentation and a good speech. When analysing this aspect of your people’s presentations, look for a clear, easy-to-follow agenda and a related narrative that is logical and persuasive.

Things to look for include:

  • Did the presentation ‘tell a story’ with a clear purpose at the start, defined chapters throughout and a strong close?
  • Were transitions smooth between the ‘chapters’ of the presentation?
  • Were visual aids, handouts or audience involvement techniques used where needed?
  • Were the challenges, solutions and potential risks of any argument defined clearly for the audience?
  • Were the benefits and potential ROI quantified/explained thoroughly?
  • Did the presentation end with a clear destination/call to action or the next steps?

For the message to stick and the audience to walk away with relevant information they are willing to act on, the presentation should flow seamlessly through each part, building momentum and interest along the way. If not, the information can lose impact and the presentation its direction. Then the audience may not feel equipped, inspired or compelled to implement the takeaways.

3. Ability to connect with and maintain the engagement of the audience

Connecting with your audience and keeping them engaged throughout can really be the difference between giving a great presentation and one that falls flat. This is no easy feat, but it is certainly a skill that can be learned. To do it well, your team need a good understanding of the audience (as mentioned above) to ensure the content is on target. Ask yourself: did they cover what’s relevant and leave out what isn’t?

Delivery is important here too. This includes being able to build a natural rapport with the audience, speaking in a confident, conversational tone, and using expressive vocals, body language and gestures to bring the message to life. On top of this, the slides need to be clear, engaging and add interest to the narrative. Which leads us to point 4…


4. Ability to prepare effective slides that support and strengthen the clarity of the message

It’s not uncommon for slides to be used first and foremost as visual prompts for the speaker. While they can be used for this purpose, the first priority of a slide (or any visual aid) should always be to support and strengthen the clarity of the message. For example, in the case of complex topics, slides should be used to visualise data, reinforcing and amplifying your message. This ensures that your slides are used to aid understanding, rather than merely prompting the speaker.

The main problem we see with people’s slides is that they are bloated with information, hard to read, distracting or unclear in their meaning. 

The best slides are visually impactful, with graphics, graphs or images instead of lines and lines of text or bullet points. The last thing you want is for your audience to be focused on deciphering multiple lines of text. Instead, your slides should be clear in their message and reinforce the argument or story that is being shared. How true is this of your people’s slides?

5. Ability to appear confident, natural and in control

Most people find speaking in front of an audience (small or large) at least a little confronting. However, for some, the nerves and anxiety they feel can distract from their presentation and the impact of their message. If members of your team lack confidence, both in their ideas and in themselves, it will create awkwardness and undermine their credibility and authority. This can crush a presenter and their reputation.

This is something that you will very easily pick up on, but the good news is that it is definitely an area that can be improved through training and practice. Giving your team the tools and training they need to become more confident and influential presenters can deliver amazing results, which is really rewarding for both the individual and the organisation.


6. Ability to summarise and close a presentation to achieve the required/desired outcome

No matter how well a presentation goes, the closing statement can still make or break it. It’s a good idea to include a recap of the main points as well as a clear call to action which outlines what is required to achieve the desired outcome.

In assessing your people’s ability to do this, you can ask the following questions:

  • Did they summarise the key points clearly and concisely?
  • Were the next steps outlined in a way that seems achievable?
  • What was the feeling in the room at the close? Were people inspired, motivated, convinced? Or were they flat, disinterested, not persuaded? 

Closing a presentation with a well-rounded overview and achievable action plan should leave the audience with a sense that they have gained something out of the presentation and have all that they need to take the next steps to overcome their problem or make something happen.

Effective presentation skills are essential to growth

It’s widely accepted that effective communication is a critical skill in business today. On top of this, if you can develop a team of confident presenters, you and they will experience countless opportunities for growth and success.

Once you’ve identified where the skill gaps lie, you can provide targeted training to address them. Whether it’s feeling confident presenting to your leadership team or fielding unexpected questions, understanding their presenting strengths and weaknesses will only boost your people’s skills. This then creates an ideal environment for collaboration and innovation, as each individual is confident to share their ideas. They can also clearly and persuasively share the key messaging of the business on a wider scale – and they and the business will experience dramatic results.

Tailored Training to Fill Your Presentation Skill Gaps

If you’re looking to build the presentation skills of your team through personalised training or coaching that is tailored to your business, we can help. For nearly 20 years we have been Australia’s Business Presentation Skills Experts, training & coaching thousands of people in an A-Z of global blue-chip organisations. All our programs incorporate personalised feedback, advice and guidance to take business presenters further. To find out more, click on one of the buttons below:

Check out our In-Person Programs AU

Don’t forget to download our Presenter Skills Assessment form.

Written By Belinda Huckle

Co-Founder & Managing Director

Belinda is the Co-Founder and Managing Director of SecondNature International. With a determination to drive a paradigm shift in the delivery of presentation skills training both In-Person and Online, she is a strong advocate of a more personal and sustainable presentation skills training methodology.

Belinda believes that people don’t have to change who they are to be the presenter they want to be. So she developed a coaching approach that harnesses people’s unique personality to build their own authentic presentation style and personal brand.

She has helped to transform the presentation skills of people around the world in an A-Z of organisations including Amazon, BBC, Brother, BT, CocaCola, DHL, EE, ESRI, IpsosMORI, Heineken, MARS Inc., Moody’s, Moonpig, Nationwide, Pfizer, Publicis Groupe, Roche, Savills, Triumph and Walmart – to name just a few.


Mayo's Clinics


Use Clear Criteria and Methodologies When Evaluating PowerPoint Presentations

Dr. Fred Mayo explains the three major methods for presentation evaluation: self, peer and professional. An added bonus: ready-made student evaluation form.

By Dr. Fred Mayo, CHE, CHT

In the last issue, we discussed making interactive presentations, and this month we will focus on evaluating presentations. For many of us, encouraging and supporting students in making presentations is already a challenge; assessing their merit is often just another unwelcome teaching chore.

There are three major methods for evaluating presentations – self evaluations, peer evaluations, and professional evaluations. Of course, the most important issue is establishing evaluation criteria.

Criteria for Evaluating Presentations

One of the best ways to help students create and deliver good presentations involves providing them with information about how their presentations will be evaluated. Some of the criteria that you can use to assess presentations include:

  • Focus of the presentation
  • Clarity and coherence of the content
  • Thoroughness of the ideas presented and the analysis
  • Clarity of the presentation
  • Effective use of facts, statistics and details
  • Lack of grammatical and spelling errors
  • Design of the slides
  • Effective use of images
  • Clarity of voice projection and appropriate volume
  • Completion of the presentation within the allotted time frame

Feel free to use these criteria or to develop your own that more specifically match your teaching situation.

Self Evaluations

When teaching public speaking and making presentations, I often encouraged students to rate their own presentations after they delivered them. Many times, they were very insightful about what could have been improved. Others just could not complete this part of the assignment. Sometimes, I used their evaluations to make comments on what they recognized in their presentations. However, their evaluations did not overly influence the grade, except that a more thorough evaluation improved their grade and a weak evaluation could hurt their presentation grade.

Questions I asked them to consider included:

  • How do you think it went?
  • What could you have done differently to make it better?
  • What did you do that you are particularly proud of accomplishing?
  • What did you learn from preparing for and delivering this presentation?
  • What would you change next time?

Peer Evaluations

One way to provide the most feedback for students involves encouraging – or requiring – each student to evaluate the other students’ presentations. It forces them to watch each presentation both for content and delivery and helps them learn to discriminate between an excellent and an ordinary presentation. The more presentations they observe, the more they learn.

In classes where students are required to deliver presentations, I have students evaluate the presentations they observe using a form I designed. The students in the audience give the evaluation or feedback forms to the presenter as soon as the presentation is over. I do not collect or review the forms, which encourages honest comments and more direct feedback. Also, students do not use their names when completing the form. That way the presenter gets a picture from all the students in the audience – including me – and cannot discount the comments by recognizing the author.

A version of the form that I use is reproduced below – feel free to adopt or adapt it to your own use and classroom situation.

[Evaluation form image]

Professional Evaluations

When conducting your professional evaluation of a presentation, remember to consider when and how to deliver oral comments as opposed to a completed form. I complete a written evaluation (shown above) along with all the students so they get some immediate feedback. I also take notes on the presentation and decide on a grade as well. After the conclusion of the presentation, whether it was an individual or team presentation, I lead a class discussion on the presentation material. That way, students get to hear some immediate comments as well as reading the written peer evaluations.

I usually ask for a copy of the presentation prior to the delivery date. (Getting the PowerPoint slides ahead also helps me ensure I have all the presentations loaded on the projector or computer so we do not waste class time.) Students either email it to me or place it on our classroom management system. I will provide their letter grade and make comments on the design of the presentation on the copy they gave me. However, I don’t explain the final grade right after the presentation since it is often hard for students who have just made a presentation to hear comments.

Summary

Each of these suggestions may prompt you to try your own ideas. Remember that students improve when they receive thoughtful and useful feedback from their peers and from you as their teacher. I encourage you to use this form or develop your own so that the criteria used to evaluate the presentations are clear and explained ahead of time. Now, you can enjoy evaluating their presentations.

Dr. Fred Mayo, CHE, CHT, is retired as a clinical professor of hotel and tourism management at New York University. As principal of Mayo Consulting Services, he continues to teach around the globe and is a regular presenter at CAFÉ events nationwide.

Assessing Oral Presentation Performance: Designing a Rubric and Testing its Validity with an Expert Group

Stan van Ginkel (Hogeschool Utrecht), Martin Mulder (MM Consultancy for Education and Training) and Asko Mononen (Laurea Universities of Applied Sciences)


Presentations

An engaging presentation has a clear purpose; shows good understanding of the topic and its importance; is correctly pitched at the audience; and provides the audience with a key message to take forward.    

Being a good presenter takes much practice. Practising and improving your presentations will also increase your oral communication skills and confidence.

Presentations are assessed on your content (what you present), your delivery (how you present it) and your visual aids (how they aid your presentation). Always check your assignment guidelines and marking rubric for specific information relating to your subject or discipline.

Understanding requirements

Review your topic, assessment criteria and marking rubric well in advance of the presentation.

  • What is the reason for your presentation? Is it to present findings from research? Summarise a topic? Inform a client? Lead a discussion? Or to inspire?
  • What is your topic? 
  • Who will be your audience?
  • Will it be online or in-person? Is it an individual or team presentation?
  • How much time do you and/or your team have for presenting?
  • What visual aids can you use?

A presentation must be carefully planned. Focus on what you must include in your presentation to make it engaging and inspiring.

  • What is the purpose of your work? What is the research question you set out to answer?
  • What are your key findings? What research is yet to be done?
  • What part of your topic and/or findings is most important?
  • What is relevant or applicable to your audience?
  • Do you want the audience actively involved? How will you achieve that?
  • Overall, what do you want to achieve? To inform, inspire, convince, summarise?
  • For online presentations, check access to required systems, programs, or platforms.

Structuring

Organise and structure your material and communicate it effectively to your audience.

Introduction (5% to 10% of time)

  • Introduce yourself. Say your name clearly for the audience and assessors.
  • Grab the attention of your audience by using a related story, a statistic or asking a question.
  • Describe your topic area and purpose, objective, or the question you set out to answer.
  • Be clear about your purpose. ‘The purpose is to…’. ‘I will focus on...’
  • Explain the need or reasons for your work. ‘This is important because…’
  • Describe the themes of your talk. ‘First, I will…, then, I will…, and finally…’
Body

  • Present your work logically. Develop a coherent story.
  • Don’t cover too much on one slide.
  • Do transitions well; indicate when you move to another theme. ‘My next point is ...’
  • Use examples/diagrams to explain key points. Real-life examples will engage your audience.
  • Present only your main points. Don’t try to fit in all your work or theory, as time is limited.   

Conclusion (5% to 10% of time)

  • Summarise your key points. ‘In conclusion...’, ‘To recap the main points…’
  • Discuss your achievements in relation to your objectives (from Introduction).
  • Explain reasons if objectives haven’t been achieved and how or what could be improved.
  • Discuss the implications of your work and restate your key (take home) message.
  • Thank your audience and invite questions.

For online presentations – check all technology and programs before your presentation. Do practice runs to familiarise yourself with presenting online.

Language and voice

  • Use plain language. Keep it simple. Avoid slang and acronyms.
  • Emphasise the key points. Repeat them using different phrasing.
  • Check your pronunciation of difficult and unusual words. Pronounce keywords correctly.
  • Project your voice. Speak loudly so that everyone will hear you. 
  • Speak slowly and clearly. Don’t rush. Pace yourself. 125 - 150 words per minute is good.
  • Vary your voice quality. Use tonal variations. Don’t be monotonal. 
  • Use pauses and don't be afraid of short periods of silence. They give you and your audience a chance to think. Pauses also give the audience members time to take notes.
  • For online presentations, check your audio system and sound level.

Non-verbal cues

  • Stand or sit up straight (if online), hold your head up and try to look relaxed.
  • Make eye contact with audience members or look directly at the camera.
  • Avoid turning away from the audience or camera when looking at presentation slides.
  • Pay attention to your group members when they are speaking.

Engaging your audience 

  • Always pay attention to your audience. Check that the audience is engaged. ‘Does this make sense?’. ‘Is this clear?’.
  • Treat your presentation as a conversation between you and your audience. Always speak to the audience, avoid reading from notes.
  • Speak confidently. Confident presenters can better engage an audience.
  • Run polls, questionnaires or pose questions to your audience.
  • Include short activities. ‘Discuss… with the person next to you… and report back…’.
  • Be prepared to pause the presentation to respond to questions or clarify points.
  • Use exemplars or prototypes to demonstrate or illustrate key points.
  • Prepare questions for your Q&A in case there are no questions from the audience.
  • For online presentations, use chat, whiteboards, annotations, polls, breakout rooms and other functionalities of programs such as Zoom. 

Visual aids

PowerPoint slides

  • Keep the text brief. ‘Less is more’. Use the 6 x 6 rule: 6 lines with 6 words each per slide.
  • Include a title slide with your topic, name (or names, if a team), contact details and course.
  • Keep your slide design simple and formatting consistent.
  • Use headings, sub-headings, and bullet points.
  • Select figures, tables, and images carefully. Too many may overwhelm your audience.
  • Label and number figures etc. and use in-text referencing as appropriate.
  • Discuss or refer to all figures etc. in your delivery.  
  • Slow down for images and figures etc. The audience may need time to absorb such information.
  • Check your slides for readability. Proofread for grammar and referencing. 
  • Include a slide with a list of references.

Don’t spend all your time producing visual aids. They are necessary. But your content and delivery are equally or sometimes more important. Check your marking rubric or assessment guidelines for how you will be assessed and spend your time accordingly.

Team presentations

You share the workload in a team presentation. It is a team effort. Each must contribute equally to developing the team presentation. You may be assessed as a team or individually. You may also be expected to present and/or answer questions. Always check your assignment guidelines and marking rubric for subject or discipline-specific requirements.

  • Check if all members are required to speak. Put more confident speakers first and/or last.
  • Introduce all your team members at the start of the presentation.
  • Do transitions well. Clearly signal your handover to the next speaker. ‘The next speaker is …’.
  • Briefly summarise what the previous speaker covered to show the connection/segue to your part.
  • Always pay attention to the (active) speaker and be ready to add to the presentation.

Reducing anxiety

  • Be very well prepared. Practise until you feel confident about all your material. If you are not confident, it is harder to reduce anxiety.
  • Practise in front of your friends and family. Ask for constructive feedback to improve.
  • Treat your audience like they’re your friends. Your audience is interested in what you have to say and wants you to do well - that is why they are there.
  • During in-person presentations, make eye contact with people you know in the room.
  • Being anxious is normal. Take deep breaths to calm yourself. Tell yourself, all will be well!

Further resources

  • 5 public speaking tips to persuade any audience – a 4-minute video from La Trobe's 3-Minute-Thesis (3MT) champion Nicole Shackleton
  • 7 skills of every good speaker
  • Presentations - Chapter in 'Academic Success' (eBook)
  • Inspirational speech from Barack Obama (YouTube)
  • 10 most popular TEDx talks

Still have questions? Do you want to talk to an expert? Peer Learning Advisors or Academic Skills and Language Advisors are available.



Development and validation of the oral presentation evaluation scale (OPES) for nursing students

Yi-Chien Chiang

1 Department of Nursing, Chang Gung University of Science and Technology, Division of Pediatric Hematology and Oncology, Linkou Chang Gung Memorial Hospital, Taoyuan City, Taiwan, Republic of China

Hsiang-Chun Lee

2 Department of Nursing, Chang Gung University of Science and Technology, Taoyuan City, Taiwan, Republic of China

Tsung-Lan Chu

3 Administration Center of Quality Management Department, Chang Gung Medical Foundation, Taoyuan City, Taiwan, Republic of China

Chia-Ling Wu

Ya-Chu Hsiao

4 Department of Nursing, Chang Gung University of Science and Technology; Administration Center of Quality Management Department, Linkou Chang Gung Memorial Hospital, No.261, Wenhua 1st Rd., Guishan Dist, Taoyuan City, 333 03 Taiwan, Republic of China

Associated Data

The datasets and materials of this study are available from the corresponding author on request.

Background

Oral presentations are an important educational component for nursing students, and nursing educators need to provide students with an assessment of presentations as feedback for improving this skill. However, no reliable, validated tools are available for objective evaluations of presentations. We aimed to develop and validate an oral presentation evaluation scale (OPES) that nursing students could use to self-rate their own performance when learning effective oral presentation skills, and that educators could potentially use in the future to assess student presentations.

Methods

The self-report OPES was developed using 28 items generated from a review of the literature about oral presentations and from qualitative face-to-face interviews with university oral presentation tutors and nursing students. Evidence for the internal structure of the 28-item scale was collected with exploratory and confirmatory factor analysis (EFA and CFA, respectively) and internal consistency. Relationships with the Personal Report of Communication Apprehension and the Self-Perceived Communication Competence scale were examined to collect evidence of relationships with other variables.

Results

Nursing students’ ( n  = 325) responses to the scale provided the data for the EFA, which resulted in three factors: accuracy of content, effective communication, and clarity of speech. These factors explained 64.75% of the total variance. Eight items were dropped from the original item pool. The Cronbach’s α value was .94 for the total scale and ranged from .84 to .93 for the three factors. The internal structure evidence was examined with CFA using data from a second group of 325 students, and an additional five items were deleted. Fit indices of the model were acceptable, except for the adjusted goodness-of-fit index, which was below the minimum criterion. The final 15-item OPES was significantly correlated with the students’ scores for the Personal Report of Communication Apprehension scale ( r  = −.51, p  < .001) and the Self-Perceived Communication Competence Scale ( r  = .45, p  < .001), providing strong evidence of relationships with other self-report assessments of communication.

Conclusions

The OPES could be adopted as a self-assessment instrument for nursing students when learning oral presentation skills. Further studies are needed to determine if the OPES is a valid instrument for nursing educators’ objective evaluations of student presentations across nursing programs.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12909-022-03376-w.

Competence in oral presentations is important for medical professionals to communicate an idea to others, including those in the nursing professions. Delivering concise oral presentations is a useful and necessary skill for nurses [ 1 , 2 ]. Strong oral presentation skills not only impact the quality of nurse-client communications and the effectiveness of teamwork among groups of healthcare professionals, but also promotion, leadership, and professional development [ 2 ]. Nurses are also responsible for delivering health-related knowledge to patients and the community. Therefore, one important part of the curriculum for nursing students is the delivery of oral presentations related to healthcare issues. A self-assessment instrument for oral presentations could provide students with insight into what skills need improvement.

Three components have been identified as important for improving communication. First, a presenter’s self-esteem can influence their physio-psychological reaction to the presentation; presenters with low self-esteem experience greater levels of anxiety during presentations [ 3 ]. Therefore, increasing a student’s self-efficacy can increase confidence in their ability to communicate effectively, which can reduce anxiety [ 3 , 4 ]. Second, Liao (2014) reported that improving speaking efficacy can also improve oral communication, and that collaborative learning among students can improve speech efficacy and decrease speech anxiety [ 5 ]. A study by De Grez et al. provided students with a list of skills to practice, which allowed them to feel more comfortable when a formal presentation was required, increased their presentation skills, and improved communication by improving self-regulation [ 6 ]. Third, Carlson and Smith-Howell (1995) determined that the quality and accuracy of the information presented is also an important aspect of public speaking performance [ 7 ]. Therefore, all three above-mentioned components are important skills for effective communication during an oral presentation.

Instruments that provide an assessment of a public speaking performance are critical for helping students improve their oral presentation skills [ 7 ]. One study, using a student-developed assessment form, found peer evaluations of student presentations were higher than those of university tutors [ 8 ]. The assessment criteria included content (40%), presentation (40%), and structure (20%); the maximum percentage in each domain was given for “excellence”, which was relative to a minimum “threshold”. Multiple “excellence” and “threshold” benchmarks were described for each domain. For example, benchmarks included the use of clear and appropriate language, enthusiasm, and keeping the audience interested. However, the percentage score did not provide any information about which specific benchmarks were met. Thus, these quantitative scores did not include feedback on specific criteria that could enhance future presentations.

At the other extreme is an assessment that is limited to one aspect of the presentation and is too detailed to evaluate the performance efficiently. An example of this is the 40-item tool developed by Tsang (2018) [ 6 ] to evaluate oral presentation skills, which measured several domains: voice (volume and speed), facial expressions, passion, and control of time. An assessment tool developed by De Grez et al. (2009) includes several domains: three subcategories for content (quality of introduction, structure, and conclusion), five subcategories of expression (eye-contact, vocal delivery, enthusiasm, interaction with audience, and body-language), and a general quality [ 9 ]. Many items overlap, making it hard to distinguish specific qualities. Other evaluation tools include criteria that are difficult to objectively measure, such as body language, eye-contact, and interactions with the audience [ 10 ]. Finally, most of the previous tools were developed without testing the reliability and validity of the instrument.

Nurses have the responsibility of providing not only medical care, but also medical information to other healthcare professionals, patients, and members of the community. Therefore, improving nursing students’ speaking skills is an important part of the curriculum. A self-report instrument for measuring nursing students’ subjective assessment of their presentation skills could help increase competence in oral communication. However, to date, there is no reliable and valid instrument for evaluating oral presentation performance in nursing education. Therefore, the aim of this study was to develop a self-assessment instrument for nursing students that could guide them in understanding their strengths and development areas in aspects of oral presentations. Such a valid and reliable scale could then be examined for use in objective evaluations of oral presentations by peers and nurse educators.

Study design

This study developed and validated an oral presentation evaluation scale (OPES) that could be employed as a self-assessment instrument for students when learning skills for effective oral presentations. The instrument was developed in two phases: Phase I (item generation and revision) and Phase II (scale development) [ 11 ]. Phase I aimed to generate items using a qualitative method and to collect content evidence for the OPES. Phase II focused on scale development, establishing internal structure evidence for the OPES through EFA, CFA, and internal consistency analyses; Phase II also collected evidence of the OPES’s relationships with other variables. Because we hope to also use the instrument as an aid for nurse educators in objective evaluations of nursing students’ oral presentations, both students and educators were involved in item generation and revision. Only nursing students participated in Phase II.

Approval was obtained from Chang Gung Medical Foundation institutional review board (ID: 201702148B0) prior to initiation of the study. Informed consent was obtained from all participants prior to data collection. All participants being interviewed for item generation in phase I provided signed informed consent indicating willingness to be audiotaped during the interview. All the study methods were carried out in accordance with relevant guidelines and regulations.

Phase I: item generation and item revision

Participants

A sample of nurse educators ( n  = 8) and nursing students ( n  = 11) participated in the interviews for item generation. Nursing students give oral presentations to meet curriculum requirements, so the educators recruited were university tutors experienced in coaching nursing students preparing to give an oral presentation. Nurse educators specializing in various areas of nursing, such as acute care, psychology, and community care, were recruited if they had at least 10 years’ experience coaching university students. The mean age of the educators was 52.1 years ( SD  = 4.26), 75% were female, and the mean amount of teaching experience was 22.6 years ( SD  = 4.07). Students were included if they had given at least one oral presentation and were willing to share their experiences of oral presentation. The mean age of the students was 20.7 ( SD  = 1.90) and 81.8% were female; four were second-year students, three were third-year students, and four were in their fourth year.

An additional eight educators participated in the evaluation of content evidence of the OPES. All had over 10 years’ experience coaching students in giving oral presentations that would be evaluated for a grade.

Item generation

Development of item domains involved deductive evaluations of the literature about oral presentations [ 2 , 3 , 6 – 8 , 12 – 14 ]. Three domains were determined to be important components of an oral presentation: accuracy of content, effective communication, and clarity of speech. Inductive qualitative data from face-to-face semi-structured interviews with nurse educator and nursing student participants were used to identify domain items [ 11 ]. Details of the interview participants are described in the section above. The interviews with nurse educators and students followed an interview guide (Table  1 ) and lasted approximately 30–50 min for educators and 20–30 min for students. Deduction from the literature and induction from the interview data were used to determine the categories considered important for the objective evaluation of oral presentations.

Interview guide for semi-structured interviews with nurse educators and nursing students for item generation

Participant group: Educator
1. What has been your reaction to oral reports or presentations given by your students?
2. What problems commonly occur when students are giving oral reports or presentations?
3. In your opinion, what do you consider a good presentation, and could you describe the characteristics?
4. How do you evaluate the performance of the students’ oral reports or presentations? Are there any difficulties or problems evaluating the oral reports?

Participant group: Student
1. Would you please tell me about your experiences of giving an oral report or presentation?
2. In your opinion, what is a good presentation and what are some of the important characteristics?

Analysis of interview data

Audio recordings of the interviews were transcribed verbatim at the conclusion of each interview. Interview data were analyzed by the first, second, and corresponding authors, all experts in qualitative studies. The first and second authors coded the interview data to identify items educators and students described as being important to the experience of an oral presentation [ 11 ]. The corresponding author grouped the coded items into constructs important for oral presentations. Meetings of the three researchers were held to discuss the findings; if there were differences in interpretation, an outside expert in qualitative studies joined the discussions until consensus was reached among the three researchers.

Analysis of the interview data indicated that items involving preparation, the presentation itself, and the post-presentation period were important to the three domains of accuracy of content, effective communication, and clarity of speech. Items for accuracy of content involved preparation (being well-prepared before the presentation; preparing materials suitable for the target audience; practicing the presentation in advance) and post-presentation reflection, such as discussing the content of the presentation with classmates and teachers. Items for effective communication involved the presentation itself: obtaining the attention of the audience; providing materials that are reliable and valuable; expressing confidence and enthusiasm; interacting with the audience; and responding to questions from the audience. The third domain, clarity of speech, involved the delivery itself, while further post-presentation items involved a student’s ability to reflect on the content and performance of their presentation and a willingness to obtain feedback from peers and teachers.

Item revision: content evidence

Based on the themes that emerged during the interviews, 28 items were generated. Content evidence for the 28 items of the OPES was established with a panel of eight experts, educators who had not participated in the face-to-face interviews. The experts were provided with a description of the research purpose and a list of the proposed items, and were asked to rate each item on a 4-point Likert scale (1 = not representative, 2 = item needs major revision, 3 = representative but needs minor revision, 4 = representative). The item-level content validity index (I-CVI) was calculated for each item as the number of experts rating the item 3 or 4 divided by the total number of experts; the scale-level content validity index (S-CVI) was calculated as the number of items rated 3 or 4 by all experts divided by the total number of items.
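Both indices reduce to simple proportions over the expert ratings. The sketch below is our illustration, not the authors' code; the ratings matrix is randomly generated to stand in for eight experts rating 28 items:

```python
# Illustrative I-CVI / S-CVI computation (hypothetical data, not the study's ratings).
import numpy as np

rng = np.random.default_rng(seed=1)
ratings = rng.integers(1, 5, size=(28, 8))  # 28 items x 8 experts, 4-point scale

relevant = ratings >= 3                 # ratings of 3 or 4 count as "representative"
i_cvi = relevant.mean(axis=1)           # I-CVI: share of experts endorsing each item
s_cvi_ua = (i_cvi == 1.0).mean()        # S-CVI/UA: share of items endorsed by all experts

print("I-CVI per item:", np.round(i_cvi, 2))
print("S-CVI/UA:", round(s_cvi_ua, 2))
```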

Based on the suggestions of the experts, six items of the OPES were reworded for clarity; for example, item 12 was revised from “The presentation is riveting” to “The presenter’s performance is brilliant; it resonates with the audience and arouses their interests”. Two items were deleted because they duplicated other items: “demonstrates confidence” and “presents enthusiasm” were combined and item 22 became “demonstrates confidence and enthusiasm properly”. The items “the presentation allows for proper timing and sequencing” and “the length of time of the presentation is well controlled” were also combined, into item 9, “The content of presentation follows the rules, allowing for the proper timing and sequence”. Thus, a total of 26 items were included in the OPES at this phase. The I-CVI values ranged from .88 to 1 and the scale-level CVI/universal agreement was .75, indicating that the OPES was an acceptable instrument for measuring an oral presentation [ 11 ].

Phase II: scale development

Phase II, scale development, aimed to establish the internal structure evidence for the OPES; the evidence of relationships to other variables was also evaluated in this phase. More specifically, the internal structure evidence for the OPES was evaluated by exploratory factor analysis (EFA) and confirmatory factor analysis (CFA), and the evidence of relationships to other variables was determined by examining the relationships between the OPES and the PRCA and SPCC [ 15 ].

A sample of nursing students was recruited purposively from a university in Taiwan. Students were included if they were: (a) full-time students; (b) had declared nursing as their major; and (c) were in their sophomore, junior, or senior year. First-year university students (freshmen) were excluded. A bulletin about the survey study was posted outside classrooms; 707 students attended these classes. The bulletin included a description of the inclusion criteria and instructions to appear at the classroom on a given day and time if students were interested in participating in the study. Students who appeared at the classroom on the scheduled day ( N  = 650) were given a packet containing a demographic questionnaire (age, gender, year in school), a consent form, the OPES instrument, and two scales for measuring aspects of communication, the Personal Report of Communication Apprehension (PRCA) and the Self-Perceived Communication Competence (SPCC); the documents were labeled with an identification number to anonymize the data. The 650 students were divided into two groups based on the demographic data using the SPSS random case selection procedure (Version 23.0; SPSS Inc., Chicago, IL, USA). The selection procedure was performed repeatedly until homogeneity of the baseline characteristics was established between the two groups ( p  > .05). The mean age of the participants was 20.5 years ( SD  = 0.98) and 87.1% were female ( n  = 566). Participants comprised third-year students (40.6%, n  = 274), fourth-year students (37.9%, n  = 246) and second-year students (21.5%, n  = 93). The survey data for half the group (the calibration sample, n  = 325) were used for EFA; the survey data from the other half (the validation sample, n  = 325) were used for CFA. Scores from the PRCA and SPCC instruments were used for evaluating the evidence of relationships to other variables.

The aims of phase II were to collect internal structure evidence for the scale: to identify the items that nursing students perceived as important during an oral presentation and to determine the domains that fit the set of items. The 325 nursing students designated for EFA (described above) completed the data collection. We used EFA to evaluate the internal structure of the scale. The items were presented in random order and were not nested according to constructs. Internal consistency of the scale was determined by calculating Cronbach’s alpha.

Then, the next step involved determining if the newly developed OPES was a reliable and valid self-report scale for subjective assessments of nursing students’ previous oral presentations. Participants (the second group of 325 students) were asked, “How often do you incorporate each item into your oral presentations?”. Responses were scored on a 5-point Likert scale with 1 = never to 5 = always; higher scores indicated a better performance. The latent structure of the scale was examined with CFA.

Finally, the evidence of relationships with other variables of the OPES was determined by examining the relationships between the OPES and the PRCA and SPCC, described below.

The 24-item PRCA scale

The PRCA scale is a self-report instrument for measuring communication apprehension, an individual’s level of fear or anxiety associated with either real or anticipated communication with a person or persons [ 12 ]. The 24 scale items are comprised of statements concerning feelings about communicating with others. Four subscales address different situations: group discussions, interpersonal communications, meetings, and public speaking. Each item is scored on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree); scores range from 24 to 120, with higher scores indicating greater communication anxiety. The PRCA has been demonstrated to be a reliable and valid scale across a wide range of related studies [ 5 , 13 , 14 , 16 , 17 ]. The Cronbach’s alpha for the scale is .90 [ 18 ]. We received permission from the copyright owner to translate the scale into Chinese. Translation of the scale into Chinese by a member of the research team who was fluent in English was followed by back-translation by a different bilingual member of the team to ensure semantic validity of the translated PRCA scale. The Cronbach’s alpha value in the present study was .93.

The 12-item SPCC scale

The SPCC scale evaluates a person’s self-perceived competence in a variety of communication contexts and with a variety of types of receivers. Each item is a situation that requires communication, such as “Present a talk to a group of strangers” or “Talk with a friend”. Participants respond to each situation by ranking their level of competence from 0 (completely incompetent) to 100 (completely competent). The Cronbach’s alpha for reliability of the scale is .85. The SPCC has been used in similar studies [ 13 , 19 ]. We received permission from the copyright owner to translate the scale into Chinese. Translation of the SPCC scale into Chinese by a member of the research team who was fluent in English was followed by back-translation by a different bilingual member of the team to ensure semantic validity of the translated scale. The Cronbach’s alpha value in the present study was .941.

Statistical analysis

Data were analyzed using SPSS for Windows 23 (SPSS Inc., Chicago, IL, USA). Data from the 325 students designated for EFA were used to determine the internal structure evidence of the OPES. The Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity demonstrated that factor analysis was appropriate [ 20 ]. Principal component analysis (PCA) was performed on the 26 items to extract the major contributing factors; varimax rotation was used to determine relationships between the items and contributing factors. Factors with an eigenvalue > 1 were further inspected. A factor loading greater than .50 was regarded as significantly relevant [ 21 ].

All item deletions were incorporated one by one, and the EFA model was respecified after each deletion, which reduced the number of items in accordance with a priori criteria. In the EFA phase, the internal consistency of each construct was examined using Cronbach’s alpha, with a value of .70 or higher considered acceptable [ 22 ].
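For readers who want to reproduce this kind of workflow, the following minimal sketch is our illustration using the open-source factor_analyzer package and a hypothetical data file; the study itself used SPSS. It covers the same steps: adequacy checks, principal-component extraction with varimax rotation, and inspection of weak loadings.

```python
# Sketch of the EFA workflow described above (illustration only; not the authors' code).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

responses = pd.read_csv("opes_calibration.csv")  # hypothetical 325 x 26 item-score file

# Sampling adequacy (KMO) and Bartlett's test of sphericity
chi_square, p_value = calculate_bartlett_sphericity(responses)
_, kmo_total = calculate_kmo(responses)
print(f"Bartlett chi2 = {chi_square:.1f} (p = {p_value:.4f}), KMO = {kmo_total:.2f}")

# Principal-component extraction of 3 factors with varimax rotation
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=["F1", "F2", "F3"])

# Candidates for deletion: items with no loading >= .50 (in the study, such
# items were dropped one at a time, refitting the model after each deletion)
print(loadings[(loadings.abs() < 0.50).all(axis=1)])
```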

Data from the 325 students designated for CFA were used to validate the factor structure of the OPES. In this phase, items with a factor loading less than .50 were deleted [ 21 ]. The goodness of the model fit was assessed using the following: absolute fit indices, including the goodness-of-fit index (GFI), adjusted goodness-of-fit index (AGFI), standardized root mean squared residual (SRMR), and root mean square error of approximation (RMSEA); relative fit indices, the normed and non-normed fit indices (NFI and NNFI, respectively) and the comparative fit index (CFI); and the parsimony NFI, parsimony CFI, and likelihood ratio (χ²/df) [ 23 ].

In addition to the validity testing, a research team, which included a statistician, determined the appropriateness of either deleting or retaining each item. Convergent validity (the internal quality of the items and factor structures) was further verified using standardized factor loadings, with values of .50 or higher considered acceptable, and average variance extraction (AVE), with values of .5 or higher considered acceptable [ 21 ]. Convergent reliability (CR) was assessed using the construct reliability from the CFA, with values of .7 or higher considered acceptable [ 24 ]. The AVE and correlation matrices among the latent constructs were used to establish the discriminant validity of the instrument: the square root of the AVE of each construct was required to be larger than the correlation coefficients between that construct and the other constructs [ 24 ].
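As a concrete check, AVE and CR follow directly from the standardized loadings via the usual Fornell-Larcker formulas; the snippet below is our illustration, not code from the paper. Plugging in the three "clarity of speech" loadings reported later in Table 3 reproduces the published values:

```python
# AVE and CR from standardized factor loadings (Fornell-Larcker formulas).
# Loadings for items 17-19 ("clarity of speech") as reported in Table 3.
loadings = [0.765, 0.881, 0.817]

ave = sum(l ** 2 for l in loadings) / len(loadings)
cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(1 - l ** 2 for l in loadings))

print(f"AVE = {ave:.3f}")  # 0.676, matching the paper
print(f"CR  = {cr:.3f}")   # 0.862, matching the paper
```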

The evidence of relationships with other variables was determined by examining the relationship of nursing students’ scores ( N  = 650) on the newly developed OPES with scores for the constructs of communication measured by the translated PRCA and SPCC scales. The hypotheses were that strong self-reported presentation competence would be associated with lower communication apprehension (PRCA) and greater self-perceived communication competence (SPCC).

Development of the OPES: internal structure evidence

EFA was performed sequentially six times, until there were no items with a factor loading < .50 or that cross-loaded, and six items were deleted (Table  2 ). The EFA resulted in 20 items with a three-factor solution, which represented 64.75% of the variance of the OPES. The Cronbach’s alpha estimate for the total scale was .94, indicating the scale had sound internal reliability (Table 2). The three factors were labeled in accordance with the item content via a panel discussion and had Cronbach’s alpha values of .93, .89, and .84 for factors 1, 2, and 3, respectively.

Summary of exploratory factor analysis: descriptive statistics, factor loadings, and reliability for nursing students (N = 325)

| Item | Description | Mean | SD | Factor 1 | Factor 2 | Factor 3 |
|---|---|---|---|---|---|---|
| 7 | The content of the presentation matches the theme | 4.25 | 0.62 | .76 | .20 | .17 |
| 14 | Presentation aids, such as PowerPoint and posters, highlight key points of the report | 4.21 | 0.74 | .75 | .21 | .30 |
| 15 | Proper use of presentation aids such as PowerPoint and posters | 4.32 | 0.69 | .74 | .12 | .28 |
| 8 | The content of the presentation is clear and focused | 4.02 | 0.69 | .72 | .36 | .11 |
| 10 | The content of the presentation is organized and logical | 3.93 | 0.75 | .72 | .38 | .13 |
| 4 | Preparation of presentation aids, such as PowerPoint and posters, in advance | 4.53 | 0.67 | .70 | −.10 | .20 |
| 16 | Presentation aids, such as PowerPoint and posters, help the audience understand the content of the presentation | 4.26 | 0.68 | .69 | .20 | .37 |
| 9 | The organization of the presentation is structured to provide the necessary information, while also adhering to time limitations | 4.10 | 0.69 | .68 | .30 | .18 |
| 11 | The content of the presentation provides correct information | 4.12 | 0.66 | .68 | .31 | .10 |
| 1 | Preparation of the content in accordance with the theme and rules in advance | 4.49 | 0.61 | .64 | −.02 | .39 |
| 13 | The entire content of the presentation is prepared in a way that is understandable to the audience | 3.99 | 0.77 | .61 | .40 | .09 |
| 22 | Presenter demonstrates confidence and an appropriate level of enthusiasm | 3.92 | 0.91 | .17 | .83 | .25 |
| 21 | Presenter uses body language in a manner that increases the audience’s interest in learning | 3.50 | 0.95 | .09 | .81 | .22 |
| 24 | Presenter interacts with the audience using eye contact during the question and answer session | 3.65 | 0.92 | .15 | .77 | .24 |
| 23 | Presenter responds to the audience’s questions properly | 3.63 | 0.87 | .23 | .77 | .17 |
| 12 | The presenter’s performance is brilliant; it resonates with the audience and arouses their interests | 3.43 | 0.78 | .43 | .65 | .04 |
| 17 | The pronunciation of the words in the presentation is correct | 3.98 | 0.82 | .31 | .29 | .74 |
| 18 | The tone and volume of the presenter’s voice is appropriate | 3.82 | 0.82 | .22 | .50 | .70 |
| 19 | The words and phrases of the presenter are smooth and fluent | 3.70 | 0.82 | .26 | .52 | .65 |
| 20 | The clothing worn by the presenter is appropriate | 4.16 | 0.77 | .33 | .12 | .57 |
| | Eigenvalue (sum of squared loadings) | | | 6.01 | 4.34 | 2.60 |
| | Explained variance | | | 30.03% | 21.72% | 13.00% |
| | Cumulative variance | | | 30.03% | 51.75% | 64.75% |
| | Cronbach’s α for each subscale | | | .93 | .89 | .84 |

Cronbach’s α for the total scale: .94

Items deleted following EFA:

| Item | Description | Mean | SD |
|---|---|---|---|
| 2 | Considers the background or needs of the audience to prepare the content of the presentation in advance | 3.94 | 0.84 |
| 3 | Discusses the content of the presentation with experts, teachers or peers (classmates) in advance | 3.94 | 0.89 |
| 5 | Practices several times in private before the presentation | 3.96 | 0.89 |
| 6 | Invites classmates or teachers to watch a rehearsal before the presentation | 3.39 | 1.04 |
| 25 | Reflects on the experience as well as the strengths and weaknesses of the presentation | 3.83 | 0.85 |
| 26 | Obtains feedback from peers (e.g. classmates), teachers, or an audience | 3.92 | 0.81 |

Abbreviations: SD, standard deviation; EFA, exploratory factor analysis

Factor 1, Accuracy of Content, was comprised of 11 items and explained 30.03% of the variance. Items in Accuracy of Content evaluated agreement between the topic (theme) and content of the presentation, use of presentation aids to highlight the key points of the presentation, and adherence to time limitations. These items included statements such as: “The content of the presentation matches the theme” (item 7), “Presentation aids, such as PowerPoint and posters, highlight key points of the report” (item 14), and “The organization of the presentation is structured to provide the necessary information, while also adhering to time limitations” (item 9). Factor 2, “Effective Communication”, was comprised of five items, which explained 21.72% of the total variance. Effective Communication evaluated the attitude and expression of the presenter. Statements included “Demonstrates confidence and an appropriate level of enthusiasm” (item 22), “Uses body language in a manner that increases the audience’s interest in learning” (item 21), and “Interacts with the audience using eye contact and a question and answer session” (item 24). Factor 3, “Clarity of Speech” was comprised of four items, which explained 13.00% of the total variance. Factor 3 evaluated the presenter’s pronunciation with statements such as “The words and phrases of the presenter are smooth and fluent” (item 19).

The factor structure of the 20 items retained after the EFA was examined with CFA. We sequentially removed items 1, 4, 20, 15, and 16, based on modification indices. The resultant 15-item scale had acceptable fit indices for the 3-factor model of the OPES: χ²/df = 2.851, RMSEA = .076, NNFI = .933, and CFI = .945. However, the AGFI, at .876, was below the acceptable criterion of .90. A panel discussion with the researchers determined that items 4, 15, and 16 were similar in meaning to item 14, and that item 1 was similar in meaning to item 7. Therefore, the panel accepted the modified CFA model of the OPES with 15 items and 3 factors.
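As an illustration of how such a CFA might be specified, here is a minimal sketch using the Python SEM package `semopy`; this is a sketch under the assumption of a respondent-by-item data frame, not the software the authors actually used, and the random data merely stands in for the validation sample.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats  # pip install semopy

# Final 3-factor measurement model of the 15-item OPES
DESC = """
Accuracy      =~ i7 + i8 + i9 + i10 + i11 + i13 + i14
Communication =~ i12 + i21 + i22 + i23 + i24
Clarity       =~ i17 + i18 + i19
"""

# Stand-in data; in practice this would be the validation sample (n = 325)
rng = np.random.default_rng(1)
cols = [f"i{n}" for n in (7, 8, 9, 10, 11, 13, 14, 12, 21, 22, 23, 24, 17, 18, 19)]
df = pd.DataFrame(rng.integers(1, 6, size=(325, 15)).astype(float), columns=cols)

model = Model(DESC)
model.fit(df)
print(calc_stats(model).T)  # chi2, DoF, RMSEA, CFI, TLI (= NNFI), GFI, AGFI, ...
```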

As illustrated in Table  3 and Fig.  1 , all standardized factor loadings exceeded the threshold of .50, and the AVE for each construct ranged from .517 to .676, indicating acceptable convergent validity. In addition, the CR was greater than .70 for the three constructs (range = .862 to .901), providing further evidence for the reliability of the instrument [ 25 ]. As shown in Table  4 , all square roots of the AVE for each construct (values in the diagonal elements) were greater than the corresponding inter-construct correlations (values below the diagonal) [ 24 , 25 ]. These findings provide further support for the validity of the OPES.
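CR and AVE are simple functions of the standardized loadings, so the reported values are easy to verify by hand. The short check below (a worked example, not the authors' code) uses the three Clarity of Speech loadings from Table 3 and reproduces the construct's CR (.862), AVE (.676), and the square root of AVE (.822) shown in Table 4.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    errors = 1 - lam ** 2          # error variance of each standardized indicator
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

clarity = [0.765, 0.881, 0.817]    # items 17-19, Table 3
print(round(composite_reliability(clarity), 3))       # 0.862
print(round(average_variance_extracted(clarity), 3))  # 0.676

# Fornell-Larcker check: sqrt(AVE) should exceed the construct's correlations
print(round(np.sqrt(average_variance_extracted(clarity)), 3))  # 0.822, cf. Table 4
```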

Confirmatory factor analysis: convergent reliability and validity of the OPES scale for nursing students ( n  = 325)

| Construct / Item | Mean | SD | λ | t-value | R² | CR | AVE |
|---|---|---|---|---|---|---|---|
| Accuracy of content | | | | | | .881 | .517 |
| Item 7 | 4.25 | 0.60 | .695 | 13.774*** | .483 | | |
| Item 14 | 4.23 | 0.68 | .660 | 12.863*** | .435 | | |
| Item 8 | 3.98 | 0.66 | .786 | 16.352*** | .617 | | |
| Item 10 | 3.88 | 0.69 | .828 | 17.703*** | .686 | | |
| Item 9 | 4.03 | 0.72 | .766 | 15.753*** | .586 | | |
| Item 11 | 4.08 | 0.65 | .697 | 13.835*** | .486 | | |
| Item 13 | 3.92 | 0.78 | .569 | 10.687*** | .324 | | |
| Effective Communication | | | | | | .901 | .647 |
| Item 22 | 3.58 | 0.91 | .894 | 20.230*** | .799 | | |
| Item 21 | 3.43 | 0.97 | .817 | 17.548*** | .668 | | |
| Item 24 | 3.69 | 0.91 | .794 | 16.816*** | .631 | | |
| Item 23 | 3.64 | 0.87 | .854 | 18.802*** | .730 | | |
| Item 12 | 3.41 | 0.79 | .639 | 12.490*** | .408 | | |
| Clarity of speech | | | | | | .862 | .676 |
| Item 17 | 3.94 | 0.76 | .765 | 15.541*** | .586 | | |
| Item 18 | 3.81 | 0.79 | .881 | 19.002*** | .776 | | |
| Item 19 | 3.70 | 0.76 | .817 | 17.026*** | .667 | | |

Note: λ = standardized factor loading; R² = reliability of item (squared multiple correlation, SMC); CR = construct (component/composite) reliability; AVE = average variance extracted.

*** p  < .001

[Fig. 1: The standardized estimates of the CFA model for the validation sample]

Correlations among the latent variables from confirmatory factor analysis of the OPES scale for nursing students ( n  = 325)

| Construct | 1 | 2 | 3 |
|---|---|---|---|
| 1. Accuracy of content | .719 | | |
| 2. Effective communication | .696*** | .804 | |
| 3. Clarity of speech | .597*** | .703*** | .822 |

Note: The values in the diagonal elements are the square roots of the AVE of each construct.

Development of the OPES: relationships with other variables

Evidence of relationships with other variables was examined with correlation coefficients for the total score and subscale scores of the OPES with the total score and subscale scores of the PRCA and SPCC (Table 5), using data from all nursing students who participated in the study and completed all three scales ( N  = 650). Correlation coefficients for the total score of the OPES with the total scores of the PRCA and SPCC were −.51 and .45, respectively (both p  < .001). Correlation coefficients for subscale scores of the OPES with the subscale scores of the PRCA and SPCC were all significant ( p  < .001), providing strong validity evidence for the scale as a self-assessment of effective communication.
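As a minimal sketch of this relationships-with-other-variables check, the snippet below computes Pearson correlations between hypothetical OPES, PRCA, and SPCC total scores; the simulated arrays merely stand in for the real data.

```python
import numpy as np
from scipy.stats import pearsonr

# Stand-ins for the real total scores of the 650 students (hypothetical data)
rng = np.random.default_rng(2)
opes_total = rng.normal(60, 8, 650)
prca_total = 120 - 0.8 * opes_total + rng.normal(0, 10, 650)  # apprehension: negative r
spcc_total = 40 + 0.6 * opes_total + rng.normal(0, 10, 650)   # competence: positive r

r, p = pearsonr(opes_total, prca_total)
print(f"OPES vs PRCA: r = {r:.2f}, p = {p:.3g}")   # reported in the study: r = -.51
r, p = pearsonr(opes_total, spcc_total)
print(f"OPES vs SPCC: r = {r:.2f}, p = {p:.3g}")   # reported in the study: r = .45
```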

Correlation coefficients for total scores and subscale scores for the OPES, PRCA, and SPCC

[Table 5: correlation matrix of the total and subscale scores — OPES (total; Accuracy of content; Effective Communication; Clarity of speech), PRCA (total; Group discussion; Meetings; Interpersonal; Public Speaking), and SPCC (total; Public; Meeting; Group; Dyad; Stranger; Acquaintance; Friend)]

Abbreviations: OPES, Oral Presentation Evaluation Scale; PRCA, Personal Report of Communication Apprehension; SPCC, Self-Perceived Communication Competence.

Bold figures: all p  < .001.

The 15-item OPES was found to be a reliable and valid instrument for nursing students' self-assessments of their performance during previous oral presentations. A strength of this study is that the initial items were developed using both a literature review and interviews with nurse educators, who were university tutors in oral presentation skills, as well as nursing students at different stages of the educational process. Another strength is the multiple methods used to establish the validity and reliability of the OPES, including internal structure evidence (both EFA and CFA) and relationships with other variables [ 15 , 26 ].

As with other oral presentation instruments, content analysis of the OPES items generated from the interviews with educators and students indicated that accuracy of the content of a presentation and effective communication were important factors for a good performance [ 3 – 6 , 8 ]. Other studies have also included self-esteem as a factor that can influence the impact of an oral presentation [ 3 ]; in the OPES, the Effective Communication subscale includes the item "Demonstrates confidence and an appropriate level of enthusiasm", which is a quality related to self-esteem. The third domain was identified as clarity of speech, which is unique to our study.

Constructs that focus on a person's ability to deliver accurate content are important components of evaluations of classroom speaking because they have been shown to be fundamental elements of public speaking [ 7 ]. Accuracy of content as it applies to oral presentations is important for nurses not only when communicating information involving healthcare education for patients, but also when communicating with team members providing medical care in a clinical setting.

The two other factors identified in the OPES, effective communication and clarity of speech, are similar to constructs for the delivery of a presentation, which include interacting with the audience through body language, eye contact, and question and answer sessions. These behaviors indicate the presenter is confident and enthusiastic, which engages and captures the attention of an audience. It seems logical that voice, pronunciation, and fluency of speech did not emerge as independent factors, because the presenter's voice qualities are all key to effectively delivering a presentation. Clear and correct pronunciation and an appropriate tone and volume assist audiences in more easily receiving and understanding the content.

Our 15-item OPES evaluates performance based on outcomes. The original scale was composed of 26 items derived from qualitative interviews with nursing students and university tutors in oral presentations. These items were the result of asking about important qualities at three timepoints: before, during, and after the presentation. However, most of the deleted items concerned the period before the presentation (items 1 to 6); two items (25 and 26) concerned the period after the presentation. The final scale therefore did not reflect the qualitative interview data expressed by educators and students regarding the importance of preparing with practice and rehearsal, and the importance of peer and teacher evaluations. Other studies have suggested that preparation and self-reflection are important for a good presentation, which includes awareness of the audience receiving the presentation, meeting the needs of the audience, defining the purpose of the presentation, use of appropriate technology to augment information, and repeated practice to reduce anxiety [ 2 , 5 , 27 ]. These items may have been deleted in the scale validation stage because it is not possible to objectively evaluate how much time and effort the presenter has devoted to the oral presentation.

The deletion of item 20, "The clothing worn by the presenter is appropriate", was also not surprising. During the interviews, educators and students expressed different opinions about the importance of clothing for a presentation. Many of the educators believed the presenter should be dressed formally; students believed the presenter should be neatly dressed. These two perspectives might reflect generational differences. However, these results are reminders that assessments should be based on a structured and objective scale, rather than on personal attitudes and stereotypes about what is important in an oral presentation.

The application of the OPES may be useful not only for educators but also for students. The OPES could be used as a checklist to help students determine how well their presentation matches the 15 items, which could draw attention to deficiencies in their speech before the presentation is given. Once the presentation has been given, the OPES could be used as a self-evaluation form to help them make modifications that improve the next presentation. Educators could use the OPES to evaluate a performance during tutoring sessions with students, which could help identify specific areas needing improvement prior to the oral presentation. Although analysis of the scale was based on data from nursing students, additional assessments with other populations of healthcare students should be conducted to determine if the OPES is applicable for evaluating oral presentations of students in general.

Limitations

This study had several limitations. Participants were selected by non-random sampling; therefore, additional studies with nursing students from other nursing schools would strengthen the validity and reliability of the scale. In addition, the OPES was developed using empirical data rather than a theoretical framework, such as anxiety and public speaking. Therefore, the validity of the OPES for use in other types of student populations, or in cultures that differ significantly from our sample population, should be established in future studies. Finally, the OPES was examined in this study as a self-assessment instrument for nursing students who rated themselves based on their perceived abilities in previous oral presentations, rather than through peer or nurse educator evaluations. Therefore, the applicability of the scale as an assessment instrument for educators providing an objective score of nursing students' real-life oral presentations needs to be validated in future studies.

This newly developed 15-item OPES is the first valid self-assessment instrument reported for providing nursing students with feedback about whether the necessary targets for a successful oral presentation have been reached. It could therefore be adopted as a self-assessment instrument for nursing students when identifying which oral presentation skills require strengthening. However, further studies are needed to determine if the OPES is a valid instrument for use by student peers or nursing educators evaluating student presentations across nursing programs.

Acknowledgements

The authors thank all the participants for their kind cooperation and contribution to the study.

Authors’ contributions

All authors conceptualized and designed the study. Data were collected by Y-CH and H-CL. Data analysis was conducted by Y-CH and Y-CC. The first draft of the manuscript was written by Y-CH, Y-CC, and all authors contributed to subsequent revisions. All authors read and approved the final submission.

Funding

This study was supported by grants from the Ministry of Science and Technology Taiwan (MOST 107–2511-H-255-007), Ministry of Education (PSR1090283), and the Chang Gung Medical Research Fund (CMRPF3K0021, BMRP704, BMRPA63).

Availability of data and materials

Declarations

All study methods and materials were performed in accordance with the Declaration of Helsinki. The study protocol and procedures were approved by the Chang Gung Medical Foundation institutional review board (number: 201702148B0) for the protection of participants' confidentiality. All participants received oral and written explanations of the study and its procedures, and informed consent was obtained from all subjects.

Not applicable.

No conflict of interest has been declared by the authors.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Yi-Chien Chiang, Email: ycchiang@gw.cgust.edu.tw

Hsiang-Chun Lee, Email: cathylee@gw.cgust.edu.tw

Tsung-Lan Chu, Email: jec75@cgmh.org.tw

Chia-Ling Wu, Email: clwu@mail.cgust.edu.tw

Ya-Chu Hsiao, Email: yjshiao@gw.cgust.edu.tw

Rubric for Evaluating Student Presentations

  • Kellie Hayden
  • Categories : Student assessment tools & principles
  • Tags : Teaching methods, tools & strategies


Make Assessing Easier with a Rubric

The rubric that you use to assess your student presentations needs to be clear and easy for your students to read. A well-thought-out rubric will also make it easier to grade speeches.

Before directing students to create a presentation, you need to tell them how they will be evaluated with the rubric. Every rubric lists certain criteria, or specific areas to be assessed. For the rubric download that is included, the criteria are: content, eye contact, volume and clarity, flow, confidence and attitude, visual aids, and time.

Student Speech Presentation Rubric Download

Assessment Tool Explained in Detail

Use a Rubric to Assess Presentations

Content: The information in the speech should be organized. It should have an engaging introduction that grabs the audience's attention. The body of the speech should include details, facts and statistics to support the main idea. The conclusion should wrap up the speech and leave the audience with something to remember.

In addition, the speech should be accurate. Teachers should decide how students should cite their sources if they are used. These should be turned in at the time of the speech. Good speakers will mention their sources during the speech.

Last, the content should be clear. The information should be understandable for the audience and not confusing or ambiguous.

Eye Contact

Students' eyes should not be riveted to the paper or note cards that they prepare for the presentation. It is best if students write talking points on their note cards: the main points that they want to discuss. If students write their whole speech on the note cards, they will be more likely to read the speech word-for-word, which is boring and usually monotone.

Students should not stare at one person or at the floor. It is best if they can make eye contact with everyone in the room at least once during the presentation. Staring at a spot on the wall is not great, but is better than staring at their shoes or their papers.

Volume and Clarity

Students should be loud enough so that people sitting in the back of the room can hear and understand them. They should not scream or yell. They need to practice using their diaphragm to project their voice.

Clarity means not talking too fast, mumbling, slurring or stuttering. When students are nervous, this tends to happen. Practice will help with this problem.

When speaking, the speaker should not have distracting pauses during the speech. Sometimes a speaker may pause for effect; this is to tell the audience that what he or she is going to say next is important. However, when students pause because they become confused or forget the speech, this is distracting.

Another problem is verbal fillers. Students may say "um," "er" or "uh" when they are thinking or between ideas. Some people do it unintentionally when they are nervous.

If students chronically say "um" or use any other verbal filler, they first need to be made aware of the problem while practicing. To fix it, a trusted friend can point out each time they do it during practice. This will help students notice when they are using verbal fillers.

Confidence and Attitude

When students speak, they should stand tall and exude confidence to show that what they are going to say is important. Even if they are nervous or unsure about their speech, they should not slouch. They need to give their speech with enthusiasm and poise. If it appears that the student does not care about his or her topic, why should the audience? Confidence can often make a boring speech topic memorable.

Visual Aids

The visual that a student uses should aid the speech. This aid should explain a fact or an important point in more detail with graphics, diagrams, pictures or graphs.

These can be presented as projected diagrams, large photos, posters, electronic slide presentations, short clips of videos, 3-D models, etc. It is important that all visual aids be neat, creative and colorful. A poorly executed visual aid can take away from a strong speech.

One of the biggest mistakes that students make is that they do not mention the visual aid in the speech. Students need to plan when the visual aid will be used in the speech and what they will say about it.

Another problem with slide presentations is that students read word-for-word what is on each slide. The audience can read. Students need to talk about the slide and/or offer additional information that is not on the slide.

Time

The teacher needs to set the time limit. Some teachers like to give a range. For example, the teacher can ask for short speeches to be 1-2 minutes or 2-5 minutes. Longer ones could be 10-15 minutes. Many students will not speak long enough, while others will ramble on way beyond the limit. The best way for students to improve their timing is to practice.

The key to a good speech is for students to write out an outline, make note cards and practice. The speech presentation rubric allows your students to understand your expectations.

  • A Research Guide.com. Chapter 3. Public Speaking .
  • 10 Fail Proof Tips for Delivering a Powerful Speech by K. Stone on DumbLittleMan.
  • Planning Student Presentations by Laura Goering for Carleton College.


Assessing oral presentations - criteria checklist

Jan Deurwaarder

Course documents used to assess oral communication - presenting a topic to an audience.

Related Papers

Andrew Leichsenring

Oral presentations are a common form of summative assessment in tertiary level English as a Foreign Language (EFL) syllabi. There will be an array of teaching and learning elements to be considered by a teacher in their set-up and execution of an ICT-based oral presentation activity that goes beyond having students stand in front of a class group and talk about a subject. Teaching effective oral presentation skills to university-level learners requires an understanding of how to maximize learning opportunities to persuasively convey a message orally and visually to an audience.

presentation assessment criteria

Timtnew Somrue

ken pelicano


Aysha Sharif

The purpose of this paper is to highlight the importance of oral presentation skills/public speaking for fresh graduates of all disciplines and to emphasize developing online materials to cater to the needs of students on a large scale worldwide, using the platform of technology in the field of English language teaching. An ESP course has been developed online in this study in the form of a blog called "English for Oral Presentation Skills", providing the course view and outline, course objectives, learning outcomes and activities to be conducted by the teacher. The focus of this study is to provide students with the language used in different stages of a presentation, including language for the introduction, transitional language, language for the conclusion and the language for the Q & A session. It also includes general language implications to be considered while teaching English for oral presentation skills, and highlights information on different types of presentations and the structure of presentations. The overall course includes the assessment criteria as well, in order to make sure the students are able to present with proper language skills. Hence, this study is a successful attempt at providing a large audience with the skills of public speaking using technology in language teaching.

Ariana Vacaretu

The guidebook complements another intellectual output of the project, “Syllabus for the elective course Mathematics research workshop/ Studying math through research”, in that it reveals how the project team operationalised two transversal competences and a specifically mathematical competence, preparing for the development of the methodology (methods and tools) for assessing them. The guidebook addresses teachers and experts in didactics who are interested in developing competence assessment tools. We are confident that the process of developing the competence assessment methodology / instruments described in this guidebook may prove useful for specialists interested in competence assessment. The guidebook is structured in four chapters. The first chapter presents aspects connected to assessment in the mathematics research workshops – what we know about how assessment is done in such research workshops, why we aim to assess competences students develop in the research workshops, and some aspects that should be kept in mind when assessing competences. In the second chapter, we share the diagram of competences students develop in the research workshops, and operationalise / define the three competences students develop in these workshops: collaborative problem solving, use of aids and tools, and written and oral communication skills for sharing the research results. Chapter three includes the methodology of assessing the above-mentioned competences, which was tested over the period of an academic year, and then revised. The last chapter shares the conclusions we drew upon testing the assessment methods and tools, as well as a few ideas related to how our approach can be continued.

Journal of Language Teaching and Research

Tariq Elyas

Simsim Samasim

The paper examines the value of oral communication as it is taught, practised and assessed across two environmental degrees and determines whether a competent level of oral communication training, assessment and practice in oral genres and oral skill development has been achieved. To undertake this investigation the authors applied defined levels of attainment for oral communication and mapped the levels of attainment in core units and location over a program of study; audited oral assessment strategies; examined lecturer and student reflections on oral communication learning activities; and examined graduates' reflections through the course experience questionnaire. We found that both degrees currently use several genres of oral communication as a learning activity and a learning outcome, but there is limited training in oral communication and few units are using higher-level authentic learning activities. Hence students are not experiencing varied oral genres, purposes, and au...


Center for Teaching and Learning

Step 4: Develop Assessment Criteria and Rubrics

Just as we align assessments with the course learning objectives, we also align the grading criteria for each assessment with the goals of that unit of content or practice, especially for assignments that cannot be graded through automation the way that multiple-choice tests can. Grading criteria articulate what is important in each assessment, what knowledge or skills students should be able to demonstrate, and how they can best communicate that to you. When you share grading criteria with students, you help them understand what to focus on and how to demonstrate their learning successfully. From good assessment criteria, you can develop a grading rubric.

Develop Your Assessment Criteria | Decide on a Rating Scale | Create the Rubric

Developing Your Assessment Criteria

Good assessment criteria are

  • Clear and easy to understand as a guide for students
  • Attainable rather than beyond students’ grasp in the current place in the course
  • Significant in terms of the learning students should demonstrate
  • Relevant in that they assess student learning toward course objectives related to that one assessment.

To create your grading criteria, consider the following questions:

  • What is the most significant content or knowledge students should be able to demonstrate understanding of at this point in the course?
  • What specific skills, techniques, or applications should students be able to demonstrate using at this point in the course?
  • What secondary skills or practices are important for students to demonstrate in this assessment? (for example, critical thinking, public speaking skills, or writing as well as more abstract concepts such as completeness, creativity, precision, or problem-solving abilities)
  • Do the criteria align with the objectives for both the assessment and the course?

Once you have developed some ideas about the assessment’s grading criteria, double-check to make sure the criteria are observable, measurable, significant, and distinct from each other.

Assessment Criteria Example

Using the questions above, the performance criteria in the example below were designed for an assignment in which students had to create an explainer video about a scientific concept for a specified audience. Each element can be observed and measured based on both expert instructor and peer feedback, and each is significant because it relates to the course and assignment learning goals.

[Image: sample assessment criteria for the explainer video assignment]

Additional Assessment Criteria Resources

  • Developing Grading Criteria (Vanderbilt University)
  • Creating Grading Criteria (Brown University)
  • Sample Criteria (Brown University)
  • Developing Grading Criteria (Temple University)

Decide on a Rating Scale

Deciding what scale you will use for an assessment depends on the type of learning you want students to demonstrate and the type of feedback you want to give students on this particular assignment or test. For example, for an introductory lab report early in the semester, you might not be as concerned with advanced levels of precision as much as correct displays of data and the tone of the report; therefore, grading heavily on copy editing or advanced analysis would not be appropriate. The criteria would likely be more rigorous by the end of the semester, as you build up to the advanced level you want students to reach in the course.

Rating scales turn the grading criteria you have defined into levels of performance expectations for the students that can then be interpreted as a letter, number, or level. Common rating scales include

  • A, B, C, etc. (with or without + and -)
  • 100-point scale with defined cut-offs for letter grades if desired (e.g., B = 80-89; or B+ = 87-89, B = 83-86, B- = 80-82)
  • Yes or no, present or not present (if the rubric is a checklist of items students must show)
  • below expectations, meets expectations, exceeds expectations
  • not demonstrated, poor, average, good, excellent

Once you have decided on a scale for the type of assignment and the learning you want students to demonstrate, you can use the scale to clearly articulate what each level of performance looks like, such as defining what A, B, C, etc. level work would look like for each grading criteria. What would distinguish a student who earns a B from one who earns a C? What would distinguish a student who excelled in demonstrating use of a tool from a student who clearly was not familiar with it? Write these distinctions out in descriptive notes or brief paragraphs.
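One way to remove ambiguity at the band boundaries is to encode the cut-offs directly. The minimal sketch below uses the B band quoted above (80-89); the other bands are illustrative assumptions, not a recommendation.

```python
# Illustrative only: the B band (80-89) follows the cut-offs quoted above;
# the other bands are assumptions -- adjust to your own syllabus.
GRADE_BANDS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def letter_grade(score: float) -> str:
    """Map a 0-100 score to a letter using inclusive lower cut-offs."""
    for cutoff, letter in GRADE_BANDS:
        if score >= cutoff:
            return letter
    raise ValueError("score must be between 0 and 100")

print(letter_grade(86))   # B  (86 falls in the 80-89 band)
print(letter_grade(79))   # C
```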

Ethical Implications of Rating Scales

There are ethical implications in each of these types of rating scales. On a project worth 100 points, what is the objective difference between earning an 85 or an 87? On an exceeds/meets/does not meet scale, how can those levels be objectively applied? Different understandings of "fairness" can lead to several ways of grading that might disadvantage some students. Learn more about equitable grading practices here.

Create the Rubric

Rubrics Can Make Grading More Effective

  • Provide students with more complete and targeted feedback
  • Make grading more timely by enabling the provision of feedback soon after the assignment is submitted/presented.
  • Standardize assessment criteria among those assigning/assessing the same assignment.
  • Facilitate peer evaluation of early drafts of assignment.

Rubrics Can Help Student Learning

  • Convey your expectations about the assignment through a classroom discussion of the rubric prior to the beginning of the assignment
  • Level the playing field by clarifying academic expectations and assignments so that all students understand regardless of their educational backgrounds (e.g., define what we expect analysis, critical thinking, or even introductions/conclusions to include)
  • Promote student independence and motivation by enabling self-assessment
  • Prepare students to use detailed feedback.

Rubrics Have Other Uses:

  • Track development of student skills over several assignments
  • Facilitate communication with others (e.g. TAs, communication center, tutors, other faculty, etc)
  • Refine own teaching skills (e.g. by responding to common areas of weaknesses, feedback on how well teaching strategies are working in preparing students for their assignments).

In this video, CTL's Dr. Carol Subino Sullivan discusses the value of the different types of rubrics.

Many non-test-based assessments might seem daunting to grade, but a well-designed rubric can alleviate some of that work. A rubric is a table that usually has these parts:  

  • a clear description of the learning activity being assessed
  • criteria by which the activity will be evaluated
  • a rating scale identifying different levels of performance
  • descriptions of the level of performance a student must reach to earn that level.  

When you define the criteria and pre-define what acceptable performance for each of those criteria looks like ahead of time, you can use the rubric to compare with student work and assign grades or points for each criterion accordingly. Rubrics work very well for projects, papers/reports, and presentations, as well as in peer review, and good rubrics can save instructors and TAs time when grading.
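Because a rubric is essentially a criteria-by-levels table, it also maps naturally onto a small data structure. The sketch below is illustrative only: the criteria, levels, and point values are invented, loosely echoing the speech rubric earlier in this piece.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    levels: dict[str, str]   # rating level -> description of performance
    points: dict[str, int]   # rating level -> points awarded

# Invented example criteria -- replace with your own rubric
rubric = [
    Criterion("Content",
              {"exceeds": "Engaging intro, well-supported body, memorable conclusion",
               "meets": "Organized, but support for the main idea is thin",
               "below": "Disorganized, inaccurate, or unclear"},
              {"exceeds": 4, "meets": 3, "below": 1}),
    Criterion("Eye contact",
              {"exceeds": "Engages everyone in the room",
               "meets": "Occasional glances at notes",
               "below": "Reads word-for-word from notes"},
              {"exceeds": 4, "meets": 3, "below": 1}),
]

def total_score(ratings: dict[str, str]) -> int:
    """Sum the points earned for the rating level chosen on each criterion."""
    return sum(c.points[ratings[c.name]] for c in rubric)

print(total_score({"Content": "exceeds", "Eye contact": "meets"}))  # 7
```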

Sample Rubrics

This final rubric for the scientific concept explainer video combines the assessment criteria and the holistic rating scale:

[Image: final rubric for the scientific concept explainer video]

When using this rubric, which can be easily adapted to use a present/not present rating scale or a letter grade scale, you can use a combination of checking items off and adding written (or audio/video) comments in the different boxes to provide the student more detailed feedback. 

As a second example, this descriptive rubric was used to ask students to peer assess and self-assess their contributions to a collaborative project. The rating scale is 1 through 4, and each description of performance builds on the previous. ( See the full rubric with scales for both product and process here. This rubric was designed for students working in teams to assess their own contributions to the project as well as their peers.)

[Image: descriptive rubric for peer and self-assessment of contributions to a collaborative project]

Building a Rubric in Canvas Assignments

You can create rubrics for assignments and discussion boards in Canvas. Review these Canvas guides for tips and tricks:

  • Rubrics Overview for Instructors
  • What are rubrics?
  • How do I align a rubric with a learning outcome?
  • How do I add a rubric to an assignment?
  • How do I add a rubric to a quiz?
  • How do I add a rubric to a graded discussion?
  • How do I use a rubric to grade submissions in SpeedGrader?
  • How do I manage rubrics in a course?

Additional Resources for Developing Rubrics

Designing Grading Rubrics (Brown University) Step-by-step process for creating an effective, fair, and efficient grading rubric.

Creating and Using Rubrics  (Carnegie Mellon University) Explores the basics of rubric design along with multiple examples for grading different types of assignments.

Using Rubrics  (Cornell University) Argument for the value of rubrics to support student learning.

Rubrics  (University of California Berkeley) Shares "fun facts" about rubrics, and links the rubric guidelines from many higher ed organizations such as the AAC&U.

Creating and Using Rubrics  (Yale University) Introduces different styles of rubrics and ways to decide what style to use given your course's learning goals.

Best Practices for Designing Effective Rubrics (Arizona State University) Comprehensive overview of rubric design principles.


Department of History

Mark Scheme for Presentations

Different students may legitimately approach their presentations in different ways, and sometimes particular strength in one area can offset weakness in another. But the following criteria give you an idea of the areas to think about when preparing and presenting, and of what makes for a good presentation.

First Class (marks of 74+)

  • Information: detailed, accurate, relevant; key points highlighted;
  • Structure: rigorously argued, logical, easy to follow;
  • Analysis and Interpretation: extensive evidence of independent thought and critical analysis;
  • Use of relevant and accurate Evidence: key points supported with highly relevant and accurate evidence, critically evaluated;
  • Presentation Skills: clear, lively, imaginative; good use of visual aids (where appropriate);
  • Time Management: perfectly timed, well organised;
  • Group Skills: engages well with group; encourages discussion and responds well to questions.

2.1 Upper Second (62-68)

  • Information: detailed, accurate, relevant;
  • Structure: generally clearly argued and logical;
  • Analysis and Interpretation: attempts to go beyond the ideas presented in secondary literature;
  • Use of relevant and accurate Evidence: most points illustrated with relevant and accurate evidence;
  • Presentation Skills: generally clear, lively; use of appropriate visual aids;
  • Time Management : well organised, more or less to time;
  • Group Skills: attempts to engage with group and responds reasonably well to questions.

2.2 Lower Second (52-58)

  • Information: generally accurate and relevant, but perhaps some gaps and/or irrelevant material;
  • Structure: not always clear or logical; may be overly influenced by secondary literature rather than the requirements of the topic;
  • Analysis and Interpretation: little attempt to go beyond or criticise secondary literature;
  • Use of relevant and accurate Evidence: some illustrative material, but not critically evaluated and/or some inaccuracies and irrelevancies;
  • Presentation Skills: conveys meaning, but sometimes unclear or clumsy;
  • Time Management: more or less the right length, but some material not covered properly as a result, or significantly over-runs;
  • Group Skills: responds reasonably well to questions, but makes no real attempt to engage with the group or promote discussion

Third (42-48)

  • Information: limited knowledge, with some significant gaps and/or errors;
  • Structure: argument underdeveloped and not entirely clear;
  • Analysis and Interpretation : fairly superficial and generally derivative and uncritical;
  • Use of relevant and accurate Evidence : some mentioned, but not integrated into presentation or evaluated; the evidence used may not be relevant or accurate
  • Presentation Skills: not always clear or easy to follow; unimaginative and unengaging;
  • Time Management : significantly over time; material fairly disorganised and rushed;
  • Group Skills: uncomfortable responding to questions; no attempt at engaging with group.

Fail (0-40)

  • Information: very limited, with many errors and gaps;
  • Structure: muddled, incoherent;
  • Analysis and Interpretation: entirely derivative, generally superficial;
  • Use of relevant and accurate Evidence: little or no evidence discussed; or irrelevant and inaccurate.
  • Presentation Skills: clumsy, disjointed, difficult to follow, dull;
  • Time Management: significantly under or over time; has clearly not tried out material beforehand; disorganised;
  • Group Skills : poor.

Presentation Evaluation Criteria

The ability to make coherent, well-organized and clear presentations is an essential skill for business professionals. The following criteria are the basis for evaluating your final case presentation. As with the other performance measurement evaluation methods, the presentation criteria narrative begins with an outline of the criteria. The subsequent rubric outlines how these criteria lead to a grade.

Presentation criteria

Your presentation must start with a delivery of key conclusions and recommendations; it is not a recapitulation of your entire analysis. The subsequent parts of your presentation should clearly lead the audience to understand how you arrived at your conclusions and recommendations. Address key implementation actions and how you would measure the effectiveness of your recommendations and their implementation. Some part of your presentation should address competitors' responses and defensive measures your client can take. Conclude your presentation with a recap of your recommendations and the key points that led you to make them.

You have a clearly developed message that flows naturally from your presentation. The transitions are smooth. The presentation is succinct and not choppy.

Organization

Follow the format provided in the outline. Introduce your team and the agenda you will follow. Provide handouts to the audience prior to beginning your presentation. Indicate when you would like to take questions.

Creativity

Although I have given you a format for your presentation and require the use of PowerPoint, you can add originality to the presentation to capture and hold the audience's attention. You can also go too far in your creativity: if your presentation uses annoying or distracting sounds, for example, that counts against creativity.

Speaking skills

The criteria include poise, clear articulation, proper volume, steady rate, good posture, eye contact, enthusiasm, and confidence. Speakers should not read (e.g., from note cards or overhead transparencies).

Balance between speakers

Each member of the team should participate equally in the presentation. This includes responding to questions.

Time limit

You have 20 minutes to make your presentation. This is the typical amount of time that you can expect before a group of senior managers. You must stay within this limit.

Question responsiveness

This criterion pertains to your team's ability to anticipate questions from the management team and to address the questions that they raise. You do not necessarily have to be able to answer every question. You should be able to understand each question and, if necessary, indicate that you need to conduct additional analysis.

Presentation analysis assessment form

Team: ______Case: ______

Overall presentation score: ______


Customer service specialist

A temporary dispensation has been applied to the ST0071 end-point assessment plan for apprentices who are re-sitting or re-taking only the observation element of the now-retired version 1.0 EPA. The dispensation will last from 20/03/24 to 20/07/24. End-point assessment organisations (EPAOs) delivering EPAs for the apprenticeship will implement the dispensation as required, supported and monitored by the relevant EQA provider. The key changes are: Apprentices who have failed only the observation assessment method prior to the adjustment being implemented, on the now-retired version 1.0 EPA and are re-sitting or re-taking only the observation method on version 1.1 of the EPA, will be permitted 15 minutes of questioning following their observation re-sit or re-take to meet the criteria “resolve complex issues by being able to choose from and successfully apply a wide range of approaches”

A temporary dispensation has been applied to the ST0071 V1.1 end point assessment plan for this apprenticeship. The dispensation will remain live until the 1 apprentice identified in the dispensation request has completed their EPA, including any resits and retakes, when it will then be withdrawn. The end-point assessment organisation (EPAO) delivering the EPA for the 1 apprentice will implement the dispensation as agreed by IfATE, supported and monitored by the relevant EQA provider (EQAP). The key changes are: The apprentice identified will be allowed to conduct a simulated observation in place of the practical observation assessment method. The dispensation applies to the agreed apprentice only. EPAOs must contact IfATE for each individual case. If your organisation is planning delivery of this EPA, you should follow the requirements as detailed in the plan as the dispensation is intended to support a specific apprentice. Please contact the Institute for Apprenticeships and Technical Education via [email protected] should you require any further clarity.

Overview of the role

Dealing with customer queries, purchases and complaints.

Reference Number: ST0071

Details of standard.

Role / Occupation: Customer Service Specialist

Overview: The main purpose of a customer service specialist is to be a ‘professional’ for direct customer support within all sectors and organisation types. You are an advocate of Customer Service who acts as a referral point for dealing with more complex or technical customer requests, complaints, and queries. You are often an escalation point for complicated or ongoing customer problems. As an expert in your organisation’s products and/or services, you share knowledge with your wider team and colleagues. You gather and analyse data and customer information that influences change and improvements in service. Utilising both organisational and generic IT systems to carry out your role with an awareness of other digital technologies. This could be in many types of environment including contact centres, retail, webchat, service industry or any customer service point.

Business Knowledge and Understanding

  • Understand what continuous improvement means in a service environment and how your recommendations for change impact your organisation
  • Understand the impact your service provision has on the wider organisation and the value it adds
  • Understand your organisation’s current business strategy in relation to customers and make recommendations for its future
  • Understand the principles and benefits of being able to think about the future when taking action or making service related decisions
  • Understand a range of leadership styles and apply them successfully in a customer service environment

Customer Journey knowledge

  • Understand and critically evaluate the possible journeys of your customers, including challenges and the end-to-end experience
  • Understand the reasons why customer issues and complex situations sometimes need referral or escalation for specialist attention
  • Understand the underpinning business processes that support you in bringing about the best outcome for customers and your organisation
  • Understand commercial factors and authority limits for delivering the required customer experience

Knowing your customers and their needs/ Customer Insight

  • Know your internal and external customers and how their behaviour may require different approaches from you
  • Understand how to analyse, use and present a range of information to provide customer insight
  • Understand what drives loyalty, retention and satisfaction and how they impact on your organisation
  • Understand different customer types and the role of emotions in bringing about a successful outcome
  • Understand how customer expectations can differ between cultures, ages and social profiles

Customer service culture and environment awareness

  • Keep current, knowledge and understanding of regulatory considerations, drivers and impacts in relation to how you deliver for customers
  • Understand your business environment and culture and the position of customer service within it
  • Understand your organisation structure and what role each department needs to play in delivering Customer Service and what the consequences are should things go wrong
  • Understand how to find and use industry best practice to enhance your own knowledge

Business-focused service delivery

  • Demonstrate a continuous improvement and future focussed approach to customer service delivery including decision making and providing recommendations or advice
  • Resolve complex issues by being able to choose from and successfully apply a wide range of approaches
  • Find solutions that meet your organisations needs as well as the customer requirements

Providing a positive customer experience

  • Through advanced questioning, listening and summarising negotiate mutually beneficial outcomes

- Providing a positive customer experience (cont.)

  • Manage challenging and complicated situations within your level of authority and make recommendations to enable and deliver change to service or strategy
  • Use clear explanations, provide options and solutions to influence and help customers make choices and agree next steps
  • Explore and interpret the customer experience to inform and influence achieving a positive result for customer satisfaction
  • Demonstrate a cost conscious mind-set when meeting customer and the business needs
  • Identifying where highs and lows of the customer journey produce a range of emotions in the customer
  • Use written and verbal communication to simplify and provide complex information in a way that supports positive customer outcome in the relevant format

Working with your customers / customer insights

  • Proactively gather customer feedback, through a variety of methods. Critically analyse, and evaluate the meaning, implication and facts and act upon it
  • Analyse your customer types, to identify or anticipate their potential needs and expectations when providing your service

Customer service performance

  • Maintain a positive relationship even when you are unable to deliver the customer’s expected outcome
  • When managing referrals or escalations take into account historical interactions and challenges to determine next steps

Service improvement

  • Analyse the end to end service experience, seeking input from others where required, supporting development of solutions
  • Make recommendations based on your findings to enable improvement
  • Make recommendations and implement where possible, changes in line with new and relevant legislation, regulations and industry best practice

Behaviours / Attitude

Develop self

  • Proactively keep your service, industry and best practice knowledge and skills up-to-date
  • Consider personal goals related to service and take action towards achieving them

Ownership/ Responsibility

  • Personally commit to and take ownership for actions to resolve customer issues to the satisfaction of the customer and your organisation
  • Exercises proactivity and creativity when identifying solutions to customer and organisational issues
  • Make realistic promises and deliver on them

Team working

  • Work effectively and collaboratively with colleagues at all levels to achieve results.
  • Recognise colleagues as internal customers
  • Share knowledge and experience with others to support colleague development
  • Adopt a positive and enthusiastic attitude being open minded and able to tailor your service to each customer
  • Be adaptable and flexible to your customer needs whilst continuing to work within the agreed customer service environment

Presentation

  • Demonstrate brand advocacy, values and belief when dealing with customer requests to build trust, credibility and satisfaction
  • Ensure your personal presentation, in all forms of communication, reflects positively on your organisation’s brand

Duration: The apprenticeship will typically take 15 months to complete, depending on experience.

Entry Requirements:

Organisations will set their own entry criteria and are more likely to select individuals with more advanced interpersonal skills and experience of working with customers in some capacity. You must achieve level 2 English and maths prior to taking the end-point assessment.

Link to professional registration:

Completion of this apprenticeship will lead to eligibility to join the Institute of Customer Service as an Individual member at Professional level. Should you choose to progress on a customer service career path, you may be eligible for further professional membership including management.

Level: Level 3.

Review: The apprenticeship should be reviewed after a maximum of 3 years.

Crown copyright © 2024. You may re-use this information (not including logos) free of charge in any format or medium, under the terms of the Open Government Licence. Visit www.nationalarchives.gov.uk/doc/open-government-licence

Customer service specialist assessment plan


If you are interested in becoming an apprentice -

you can find out more about becoming an apprentice at www.gov.uk.

You can also search for an apprenticeship.

For all other queries please contact us.

If you are a potential employer -

you can find out more about hiring apprentices at www.gov.uk/employinganapprentice.

If you have a query about the apprenticeship standard content or Trailblazer membership

the trailblazer contact for this standard is [email protected]


Version log

| Version | Change detail | Earliest start date | Latest start date | Latest end date |
|---|---|---|---|---|
| | End-point assessment plan revised | 12/02/2024 | Not set | Not set |
| | Approved for delivery | 10/05/2018 | 11/02/2024 | Not set |



CRediT author statement

CRediT (Contributor Roles Taxonomy) was introduced with the intention of recognizing individual author contributions, reducing authorship disputes and facilitating collaboration. The idea came about following a 2012 collaborative workshop led by Harvard University and the Wellcome Trust, with input from researchers, the International Committee of Medical Journal Editors (ICMJE) and publishers, including Elsevier, represented by Cell Press.

CRediT offers authors the opportunity to share an accurate and detailed description of their diverse contributions to the published work.

  • The corresponding author is responsible for ensuring that the descriptions are accurate and agreed by all authors
  • The role(s) of all authors should be listed, using the relevant categories from the table below
  • Authors may have contributed in multiple roles
  • CRediT in no way changes the journal's criteria to qualify for authorship

CRediT statements should be provided during the submission process and will appear above the acknowledgment section of the published paper as shown further below.

| Term | Definition |
|---|---|
| Conceptualization | Ideas; formulation or evolution of overarching research goals and aims |
| Methodology | Development or design of methodology; creation of models |
| Software | Programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components |
| Validation | Verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs |
| Formal analysis | Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data |
| Investigation | Conducting a research and investigation process, specifically performing the experiments, or data/evidence collection |
| Resources | Provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools |
| Data Curation | Management activities to annotate (produce metadata), scrub data and maintain research data (including software code, where it is necessary for interpreting the data itself) for initial use and later reuse |
| Writing - Original Draft | Preparation, creation and/or presentation of the published work, specifically writing the initial draft (including substantive translation) |
| Writing - Review & Editing | Preparation, creation and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision, including pre- or post-publication stages |
| Visualization | Preparation, creation and/or presentation of the published work, specifically visualization/data presentation |
| Supervision | Oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team |
| Project administration | Management and coordination responsibility for the research activity planning and execution |
| Funding acquisition | Acquisition of the financial support for the project leading to this publication |

*Reproduced from Brand et al. (2015), Learned Publishing 28(2), with permission of the authors.

Sample CRediT author statement

Zhang San: Conceptualization, Methodology, Software. Priya Singh: Data curation, Writing - original draft preparation. Wang Wu: Visualization, Investigation. Jan Jansen: Supervision. Ajay Kumar: Software, Validation. Sun Qi: Writing - review & editing.
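Since the statement is just an ordered mapping from authors to roles, it can also be generated programmatically. The helper below is a hypothetical illustration, not an Elsevier tool; the author names simply repeat the sample above.

```python
# Hypothetical helper: render a CRediT author statement from an
# ordered author -> roles mapping.
contributions = {
    "Zhang San": ["Conceptualization", "Methodology", "Software"],
    "Priya Singh": ["Data curation", "Writing - original draft"],
    "Sun Qi": ["Writing - review & editing"],
}

def credit_statement(contribs: dict[str, list[str]]) -> str:
    """Join each author's roles in submission-ready form."""
    return " ".join(f"{author}: {', '.join(roles)}."
                    for author, roles in contribs.items())

print(credit_statement(contributions))
# -> Zhang San: Conceptualization, Methodology, Software. Priya Singh: ...
```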

Read more about CRediT here, or check out this article from Authors' Update: CRediT where credit's due.

  • Research article
  • Open access
  • Published: 07 June 2024

Patient eligibility for trials with imaging response assessment at the time of molecular tumor board presentation

  • Nabeel Mansour   ORCID: orcid.org/0000-0002-3467-1916 1 ,
  • Kathrin Heinrich 2 , 3 ,
  • Danmei Zhang 2 , 3 , 5 ,
  • Michael Winkelmann 1 ,
  • Maria Ingenerf 1 ,
  • Lukas Gold 1 ,
  • Konstantin Klambauer 1 ,
  • Martina Rudelius 4 ,
  • Frederick Klauschen 4 ,
  • Michael von Bergwelt-Baildon 2 , 3 , 5 ,
  • Jens Ricke 1 ,
  • Volker Heinemann 2 , 3 ,
  • C. Benedikt Westphalen 2 , 3 &
  • Wolfgang G. Kunz 1 , 3  

Cancer Imaging volume  24 , Article number:  70 ( 2024 ) Cite this article


The aim of this study was to assess the eligibility of patients with advanced or recurrent solid malignancies presented to a molecular tumor board (MTB) at a large precision oncology center for inclusion in trials with the endpoints objective response rate (ORR) or duration of response (DOR) based on the Response Evaluation Criteria in Solid Tumors (RECIST version 1.1).

Prospective patients with available imaging at the time of presentation in the MTB were included. Imaging data was reviewed for objectifiable measurable disease (MD) according to RECIST v1.1. Additionally, we evaluated the patients with MD for representativeness of the identified measurable lesion(s) in relation to the overall tumor burden.

262 patients with different solid malignancies were included. 177 patients (68%) had MD and 85 (32%) had non-measurable disease (NMD) at the time point of MTB presentation in accordance with RECIST v1.1. MD was not representative of the overall tumor burden in eleven patients (6%). The main reasons for NMD were lesions with longest diameter shorter than 10 mm (22%) and non-measurable peritoneal carcinomatosis (18%). Colorectal cancer and malignant melanoma displayed the highest rates of MD (> 75%). In contrast, gastric cancer, head and neck malignancies, and ovarian carcinoma had the lowest rates of MD (< 55%). In case of MD, the measurable lesions were representative of the overall tumor burden in the vast majority of cases (94%).
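The measurable/non-measurable split reported here follows directly from RECIST v1.1 size rules (for instance, the 10 mm longest-diameter cut-off and non-measurable peritoneal carcinomatosis mentioned above). A deliberately simplified classifier, which ignores several special cases in the full guideline (prior local therapy, cystic lesions, bone disease), might look like this:

```python
from dataclasses import dataclass

@dataclass
class Lesion:
    longest_diameter_mm: float    # longest axial diameter on CT
    short_axis_mm: float          # the relevant measurement for lymph nodes
    is_lymph_node: bool = False
    is_peritoneal_carcinomatosis: bool = False

def is_measurable(lesion: Lesion) -> bool:
    """Simplified RECIST v1.1 measurability check (special cases omitted)."""
    if lesion.is_peritoneal_carcinomatosis:
        return False                        # classic non-measurable pattern
    if lesion.is_lymph_node:
        return lesion.short_axis_mm >= 15   # nodes: short axis >= 15 mm
    return lesion.longest_diameter_mm >= 10 # other lesions: >= 10 mm

# A patient has measurable disease if at least one lesion qualifies
lesions = [Lesion(8, 8), Lesion(12, 6)]
print(any(is_measurable(l) for l in lesions))  # True
```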

Conclusion

Approximately one third of cancer patients with advanced solid malignancies are not eligible for treatment response assessment in trials with the endpoints ORR or DOR at the time of MTB presentation. The rate of patients eligible for trials with imaging endpoints differs significantly based on the underlying malignancy and should be taken into consideration during the planning of new precision oncology trials.

Background

Next-generation sequencing (NGS) has enabled the identification of molecularly guided treatment options for patients with cancer [ 1 ]. In comprehensive cancer centers, patients with advanced cancer who have progressed on systemic therapy are increasingly presented in a molecular tumor board (MTB) after NGS for possible inclusion in clinical trials [ 2 ]. With targeted therapies on the rise, MTBs serve as a central platform for allocating personalized treatments [ 3 ]. For the assessment of the safety and efficacy of such treatments, clinical trials are critical. The evidence for efficacy of targeted treatment is often based on non-randomized trials with endpoints such as objective response rate (ORR), disease-free survival (DFS) and progression-free survival (PFS) [ 4 , 5 , 6 , 7 ]. Patient inclusion in trials is therefore very often dependent on objectifiable tumor burden in oncologic imaging. MTBs can and should serve as a platform to identify patients for early clinical trials and trials investigating targeted treatments [ 8 ].

The importance of objective tumor response assessment led to the development of systems that standardize the determination and communication of a treatment's impact on tumor burden. For evaluating solid tumor response or progression in clinical trials, the prevailing standard is the Response Evaluation Criteria in Solid Tumors (RECIST) [ 9 ]. RECIST was developed with the objective of simplifying the measurement of tumor burden and limiting the potential for overestimation of response rates [ 9 ]. In 2009, revisions incorporating major changes were made (RECIST 1.1) [ 9 ], followed by an updated version with clarifications published in 2016 by the RECIST committee [ 10 ]. These guidelines require serial imaging with protocol-specified frequency and imaging modality [ 11 ].

The utilization of tumor regression as the primary endpoint in phase II trials, aimed at assessing novel agents for indications of anti-tumor efficacy, is substantiated by an extensive body of evidence over several years. This evidence implies that, for numerous solid tumors, agents capable of inducing tumor shrinkage in a subset of patients exhibit a reasonable, albeit not flawless, likelihood of subsequently revealing enhancements in overall survival or other time-to-event metrics in randomized phase III studies [ 12 , 13 , 14 ] with the caveat that the surrogacy of ORR and PFS for overall survival (OS) differs based on treatment and tumor [ 15 , 16 ]. Moreover, to advance drug development, clinical trials conducted in advanced disease contexts are progressively incorporating time to progression (TTP) or PFS as an endpoint for deriving efficacy assessments at both the phase II and phase III stages [ 17 , 18 , 19 ]. This approach is also founded on anatomical measurements of tumor size.

The oncology community should be cognizant of the fact that trials with imaging-based endpoints only relate to patients with measurable disease. Many targeted therapy studies, such as KEYNOTE-158, relied on RECIST assessment for the inclusion of eligible patients with measurable disease [ 20 , 21 ]. So far, the proportion of RECIST-eligible patients in MTBs is unknown, and there is no literature on the influence of tumor entity or metastatic phenotype on the rate of inclusion in trials of targeted therapies. With targeted treatment trials on the rise, we aimed to assess the eligibility of patients with solid malignancies presented at a large precision oncology center based on RECIST version 1.1, and the influence of tumor-specific metastatic phenotypes.

Methods

Study design and population

All patients with solid malignancies included in this retrospective single-center study were presented in the molecular tumor board at the Comprehensive Cancer Center München-LMU (CCCM LMU ). In 2019, in-house diagnostics were changed to a 161-gene panel (Oncomine™ Comprehensive Assay v3 (OCAv3), ThermoFisher Scientific) and the Oncomine Tumor Mutational Load Assay (ThermoFisher Scientific) was added to the diagnostic repertoire [ 22 ]. For the present analysis, we only included patients who received the 161-gene panel and the Tumor Mutational Load Assay. The inclusion criterion was current cross-sectional imaging from the clinical routine, performed no more than three months prior to case presentation in the MTB. The study was conducted in accordance with the principles of the Declaration of Helsinki and the International Council for Harmonisation Good Clinical Practice guidelines. All patients gave written informed consent, and the study protocol was approved by the Ethics Committee of the Medical Faculty of the Ludwig Maximilians University Munich. Furthermore, all molecular diagnostic tests were conducted in accordance with the medical treatment contract signed by each patient.
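
To make the screening rule concrete, the three-month imaging-recency criterion can be expressed as a simple date filter. The following is an illustrative sketch only, not the authors' code; the function and field names are assumptions, and "three months" is approximated as 90 days.

```python
from datetime import date, timedelta

# Hypothetical screening rule: include a patient only if cross-sectional
# imaging was acquired within three months before MTB case presentation.
MAX_IMAGING_AGE = timedelta(days=90)  # "three months", approximated

def is_includable(imaging_date: date, mtb_date: date) -> bool:
    """Return True if imaging is recent enough for study inclusion."""
    age = mtb_date - imaging_date
    return timedelta(0) <= age <= MAX_IMAGING_AGE

# Example: imaging 48 days before presentation (the cohort median) qualifies.
print(is_includable(date(2020, 1, 1), date(2020, 2, 18)))  # True
```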

Evaluation of tumor burden

Overall tumor burden assessment was performed sequentially by two radiologists with extensive experience in cross-sectional oncological imaging, based on RECIST version 1.1 [ 23 ]. Computed tomography (CT) and magnetic resonance imaging (MRI) were evaluated for the presence or absence of measurable disease (MD) in the scan closest in time prior to case presentation at the MTB. Furthermore, in cases with simultaneous MD and non-measurable disease (NMD), the MD cohort was also assessed with regard to the representativeness of the overall tumor burden.

Definition of measurable disease

MD is defined by the presence of at least one measurable lesion. A lesion was considered measurable if it could be accurately measured in at least one dimension (longest diameter in the plane of measurement to be recorded) with a minimum size of 10 mm on a CT scan (slice thickness no greater than 5 mm). Malignant lymph nodes were classified as measurable when pathologically enlarged to ≥ 15 mm in the short axis on a CT scan (slice thickness no greater than 5 mm). Two exemplary cases of MD representative of the overall tumor burden are shown in Fig. 4.

Fig. 1 Distribution of measurable and non-measurable disease by underlying tumor entity, ordered from the highest rate of MD, left to right. MD = measurable disease; NMD = non-measurable disease

Definition of non-measurable disease

All other lesions, including small lesions that did not meet the above criteria and truly non-measurable lesions such as leptomeningeal disease, ascites, and pleural or pericardial effusion, were categorized as non-measurable. Tumor lesions subjected to local treatment, or located in an area subjected to other loco-regional therapy, were generally not considered measurable unless progression had been demonstrated in the lesion. Blastic bone lesions were also considered non-measurable.
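
For illustration, the measurability rules above (together with these non-measurable categories) can be encoded as lesion-level logic. This is a simplified sketch under stated assumptions, not the study's software: it covers only CT-based assessment, omits modality-specific caveats, and all type and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Lesion:
    longest_diameter_mm: float          # longest diameter in the plane of measurement
    short_axis_mm: float = 0.0          # relevant for lymph nodes
    is_lymph_node: bool = False
    ct_slice_thickness_mm: float = 5.0
    truly_non_measurable: bool = False  # e.g. leptomeningeal disease, ascites, effusions
    locally_treated_without_progression: bool = False
    is_blastic_bone_lesion: bool = False

def is_measurable(lesion: Lesion) -> bool:
    """Simplified RECIST v1.1 measurability check for CT-based assessment."""
    if (lesion.truly_non_measurable
            or lesion.is_blastic_bone_lesion
            or lesion.locally_treated_without_progression):
        return False
    if lesion.ct_slice_thickness_mm > 5.0:
        return False  # measurements require slice thickness of 5 mm or less
    if lesion.is_lymph_node:
        # Malignant nodes are measurable at >= 15 mm in the short axis.
        return lesion.short_axis_mm >= 15.0
    # Other lesions are measurable at >= 10 mm longest diameter.
    return lesion.longest_diameter_mm >= 10.0

def has_measurable_disease(lesions: list[Lesion]) -> bool:
    """A patient has MD if at least one lesion is measurable."""
    return any(is_measurable(l) for l in lesions)
```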

Evaluation of representativeness of MD with regard to the overall tumor burden

Patients with limited measurable lesion(s) and the simultaneous presence of unequivocal extensive NMD, such as advanced peritoneal carcinomatosis or disseminated osteoblastic metastases, were classified as having MD non-representative of the overall tumor burden. An exemplary case of MD that was not representative of the overall tumor burden at the time of MTB presentation is displayed in Fig. 3: in this case of resected thyroid cancer with a solitary measurable cervical nodal metastasis on the left side, extensive small nodular lung metastases, non-measurable due to lesion sizes below 10 mm, were observed.
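
The representativeness judgment can likewise be sketched as a patient-level rule. Note that "limited" MD and "extensive" NMD were qualitative radiologist judgments in this study, so they appear below as assumed boolean inputs rather than computed quantities; the function name is hypothetical.

```python
def md_is_representative(has_md: bool,
                         md_lesions_limited: bool,
                         extensive_nmd_present: bool) -> bool:
    """Hypothetical encoding of the study's representativeness rule.

    A patient with limited measurable lesion(s) plus unequivocal extensive
    non-measurable disease (e.g. advanced peritoneal carcinomatosis or
    disseminated osteoblastic metastases) is classified as having MD that
    is NOT representative of the overall tumor burden.
    """
    if not has_md:
        raise ValueError("representativeness is only assessed for patients with MD")
    return not (md_lesions_limited and extensive_nmd_present)
```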

Fig. 2 Rate of MD representative of the overall tumor burden across the included solid tumor entities. In most cases, measurable disease (MD) accurately represents the overall tumor burden; in 6% of cases it is not representative owing to concomitant extensive non-measurable disease (NMD), as demonstrated in the thyroid cancer case in Fig. 3. MD = measurable disease; NMD = non-measurable disease

Fig. 3 Example of MD non-representative of the overall tumor burden on contrast-enhanced CT. A female patient with resected thyroid cancer and a solitary measurable cervical lymph node metastasis on the left side (level IIb) measuring 15.5 mm in the short axis (left image, white arrow). The CT scan of the thorax in lung window (right image) of the same patient shows extensive small nodular pulmonary metastases, classified as non-measurable due to tumor sizes of less than 10 mm

Results

Patient population

Imaging data of 302 patients with solid malignancies presented at the molecular tumor board (MTB) of the Comprehensive Cancer Center München-LMU (CCCM LMU ) in the years 2019 to 2021 were reviewed. 262 patients with a median age of 55 years (range 19–83) and imaging less than three months prior to case presentation to the MTB (median of 48 days) were included (Table 1). 177 patients (68%) had MD and 85 (32%) had NMD. No significant differences in age or sex were observed between the MD and NMD cohorts. As seen in Figs. 1 and 2, the solid tumor entities were summarized in 15 categories. The most common solid tumor entity was breast cancer ( n  = 55, 20%). Analysis of therapy received prior to MTB presentation revealed that patients in the MD cohort had a significantly higher median number of lines of systemic therapy than the NMD cohort ( p  = 0.005). Conversely, the NMD cohort had a significantly higher median number of surgical tumor resections prior to case presentation ( p  = 0.015). 24 patients (9%) were included in clinical trials based on the recommendation of the MTB.

Measurable disease

Most of the included solid tumor entities (10 of 15) displayed MD rates within the range of 50–75% (Fig. 1). Colorectal cancer ( n  = 32), malignant melanoma ( n  = 5) and neuroendocrine tumors ( n  = 13) had the highest rates of MD (91%, 80% and 77%, respectively). The solid tumor entities with the lowest rates of MD were ovarian carcinoma, head and neck tumors and gastric cancer (< 55% MD). Information regarding the number of RECIST-measurable lesions, mean lesion size and location is summarized per tumor entity in Table 2.
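
Per-entity rates like these are straightforward to derive from a patient-level table. Below is a minimal pandas sketch, assuming a hypothetical table with 'entity' and 'measurable' columns; the toy values are illustrative, not the study data.

```python
import pandas as pd

# Hypothetical patient-level table; column names are assumptions.
patients = pd.DataFrame({
    "entity": ["colorectal", "colorectal", "ovarian", "melanoma"],
    "measurable": [True, True, False, True],
})

# Rate of measurable disease (MD) per tumor entity, highest first,
# mirroring the ordering used in Fig. 1.
md_rate = (patients.groupby("entity")["measurable"]
                   .mean()
                   .sort_values(ascending=False))
print(md_rate)
```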

Non-measurable disease

The most common cause of NMD was a lesion size of less than 10 mm (22%). Non-measurable peritoneal carcinomatosis (18%) and post-therapeutic changes to target lesions resulting in non-measurability (15%) were the second and third most common reasons, respectively. Some cancer entities presented more frequently with metastatic patterns of non-measurability: non-measurable peritoneal carcinomatosis was commonly observed in cases of advanced ovarian cancer ( n  = 6/22, 27%). Osteoblastic metastases were another common reason for NMD (14% overall), with most cases observed in patients with breast cancer ( n  = 13/55, 24%). Eight of the breast cancer patients (15%) had only osteoblastic metastases and were therefore classified as NMD.

Measurable disease non-representative of tumor burden

Eleven patients (6%) had MD that was not representative of the overall tumor burden (Fig. 2). Patients were categorized in this group in the case of solitary measurable target lesions according to RECIST and the simultaneous presence of extensive NMD, such as extensive non-measurable peritoneal carcinomatosis or disseminated small metastases (pulmonary or hepatic). The majority of these patients presented with either prostate or breast cancer, with solitary measurable target lesions and extensive non-measurable osteoblastic metastases ( n  = 8). Two patients had progressive ovarian cancer with solitary liver metastases as MD and extensive peritoneal carcinomatosis as NMD. Patient examples are provided in Figs. 3 and 4.

Fig. 4 Exemplary cases of MD representative of the overall tumor burden on contrast-enhanced CT. Right image: a patient with recurrent colorectal cancer after surgery with multiple metachronous liver metastases; an axial CT image of the liver in soft tissue window displays a well-defined measurable metastasis in liver segment VIII (white arrow). Left image: an axial CT image in lung window of a patient with resected sarcoma and well-defined measurable pulmonary metastases; a measurable lesion in the right upper pulmonary lobe is displayed (red arrow, 15 mm)

Discussion

Although presenting with measurable disease at baseline, pretreated patients with disease progression may not always be eligible for inclusion in clinical trials with targeted therapies due to NMD. The aim of this analysis was to evaluate patients with different solid malignancies from the MTB of a large precision oncology center regarding their eligibility for inclusion in clinical trials based on RECIST v1.1.

We discovered that approximately one third of cancer patients with advanced solid malignancies are not eligible for treatment response assessment in trials with the endpoints ORR or DOR due to NMD at the time of MTB presentation. Furthermore, we observed a high variability in the rate of eligibility at the time of case presentation based on the underlying solid malignancy as certain tumor-specific patterns were observed in several tumor entities affecting the assessment.

Specifically, several solid tumor entities like colorectal cancer presented with a high rate of MD (> 90%). This can be explained by the fact that high-stage, recurrent or progressive colorectal cancer often affects the liver, thus presenting with well-measurable liver metastases (69% of all colorectal cancer patients included in this study). In contrast, gastric cancer, head and neck tumors and ovarian cancer displayed the lowest rates of MD (< 55%). One explanation is the high rate of non-measurable peritoneal carcinomatosis in advanced ovarian or gastric cancer [ 24 , 25 ]. Analysis of previous lines of therapy revealed a significantly higher rate of surgical treatment prior to MTB presentation in the NMD cohort. This is a possible explanation for NMD, as post-surgical changes or even complete resection can leave no measurable tumor. We also discovered that a small percentage of the MD cohort (6%) had measurable lesion(s) that were not representative of the overall tumor burden; this was determined for cases with isolated MD and predominant NMD.

Endpoints that rely on anatomical measurements, such as ORR and DOR, are important in the assessment of tumor burden after treatment in patients with solid malignancies, as they often serve as primary or secondary endpoints in clinical trials to generate evidence regarding efficacy [ 5 , 26 ]. It has been shown that subjective assessment of tumor response may overestimate benefit and limit the potential role of real-world evidence [ 11 , 27 ]. This highlights the importance of standardized objective response criteria such as RECIST. Although imperfect, RECIST is supported by a body of evidence greater than that of any other biomarker.

Multidisciplinary tumor boards (MDTs) consist of a team of experts required to manage a patient from diagnosis to treatment and to discuss patients’ eligibility for clinical trials [ 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 ]. A recent study exploring the impact of MDTs on patient inclusion at two large comprehensive cancer centers in Munich (CCCM) showed that MDTs result in increased inclusion of patients in oncological clinical trials [ 38 ]. The core composition of MDTs may vary depending on the cancer type, but it generally includes clinical oncologists, surgeons, pathologists, palliative care physicians, radiation oncologists, and diagnostic and interventional radiologists [ 29 ]. A systematic review of MTBs in clinical practice, covering 25 studies of MTBs worldwide, found that radiologists were present in the MTB in only five of the 25 studies (20%) [ 3 ].

Of note, all patients in this study had progressive advanced-stage cancer, and the rate of inclusion in clinical trials was low (9%). This aligns with data from other cancer centers, emphasizing the need for closer collaboration with early clinical trial programs to maximize benefits for patients undergoing comprehensive genomic profiling [ 22 ]. The low inclusion rate in trials can be attributed to various reasons. As reported, 32% of the patients lacked measurable target lesions at the time of MTB evaluation. From a clinical standpoint, the primary reason for disqualifying patients from clinical trials was the absence of druggable mutations or insufficient evidence supporting certain therapies. Additionally, patients often did not receive experimental treatment due to rapid clinical deterioration and advanced disease progression in end-stage cancer.

This study underscores the pivotal role of imaging in RECIST-eligibility assessment, given that most therapy trials require measurable lesions as an inclusion criterion. For tumor entities with a higher rate of non-measurable disease, new serological, pathological or imaging biomarkers are essential. The oncologic community should be cognizant of the significant variability in RECIST-eligibility based on tumor entity and metastatic phenotype, posing a potential limiting factor for trial inclusion.

Limitations

The data are limited to a single center with a limited sample size; hence, the representation of tumor entities may differ in larger cohorts.

Conclusion

A substantial proportion of patients with refractory or progressive solid malignancies do not qualify for treatment trials with the endpoints ORR, DFS or PFS at the time of case presentation in the MTB due to NMD. The underlying malignancy and tumor-specific metastatic phenotype affect the rate of RECIST eligibility with a high level of variance. When MD is present, it is representative of the total tumor burden in the large majority of cases. These findings should be taken into consideration during the planning of new precision oncology trials.

Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CCCM LMU: Comprehensive Cancer Center München-LMU

CT: Computed tomography

DOR: Duration of response

MDT: Multidisciplinary tumor board

MRI: Magnetic resonance imaging

MTB: Molecular tumor board

NGS: Next generation sequencing

ORR: Objective response rate

PFS: Progression-free survival

RECIST: Response Evaluation Criteria in Solid Tumors

TTP: Time to progression

References

1. Prasad V, Fojo T, Brada M. Precision oncology: origins, optimism, and potential. Lancet Oncol. 2016;17(2):e81–6.

2. Hoadley KA, Yau C, Wolf DM, Cherniack AD, Tamborero D, Ng S, et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell. 2014;158(4):929–44.

3. Luchini C, Lawlor RT, Milella M, Scarpa A. Molecular tumor boards in clinical practice. Trends Cancer. 2020;6(9):738–44.

4. Wang C-Y, Wei L-Q, Niu H-Z, Gao W-Q, Wang T, Chen S-J. Agitation thrombolysis combined with catheter-directed thrombolysis for the treatment of non-cirrhotic acute portal vein thrombosis. World J Gastroenterol. 2018;24(39):4482.

5. Aykan NF, Özatlı T. Objective response rate assessment in oncology: current situation and future expectations. World J Clin Oncol. 2020;11(2):53.

6. Lebwohl D, Kay A, Berg W, Baladi JF, Zheng J. Progression-free survival: gaining on overall survival as a gold standard and accelerating drug development. Cancer J. 2009;15(5):386–94.

7. Delgado A, Guddati AK. Clinical endpoints in oncology - a primer. Am J Cancer Res. 2021;11(4):1121.

8. Dienstmann R, Garralda E, Aguilar S, Sala G, Viaplana C, Ruiz-Pace F, et al. Evolving landscape of molecular prescreening strategies for oncology early clinical trials. JCO Precis Oncol. 2020;4.

9. Eisenhauer EA, Therasse P, Bogaerts J, Schwartz LH, Sargent D, Ford R, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. 2009;45(2):228–47.

10. Schwartz LH, Litière S, De Vries E, Ford R, Gwyther S, Mandrekar S, et al. RECIST 1.1—update and clarification: from the RECIST committee. Eur J Cancer. 2016;62:132–7.

11. Feinberg BA, Zettler ME, Klink AJ, Lee CH, Gajra A, Kish JK. Comparison of solid tumor treatment response observed in clinical practice with response reported in clinical trials. JAMA Netw Open. 2021;4(2):e2036741.

12. Paesmans M, Sculier J, Libert P, Bureau G, Dabouis G, Thiriaux J, et al. Response to chemotherapy has predictive value for further survival of patients with advanced non-small cell lung cancer: 10 years experience of the European Lung Cancer Working Party. Eur J Cancer. 1997;33(14):2326–32.

13. Goffin J, Baral S, Tu D, Nomikos D, Seymour L. Objective responses in patients with malignant melanoma or renal cell cancer in early clinical studies do not predict regulatory approval. Clin Cancer Res. 2005;11(16):5928–34.

14. El-Maraghi RH, Eisenhauer EA. Review of phase II trial designs used in studies of molecular targeted agents: outcomes and predictors of success in phase III. J Clin Oncol. 2008;26(8):1346–54.

15. Walia A, Haslam A, Prasad V. FDA validation of surrogate endpoints in oncology: 2005–2022. J Cancer Policy. 2022;34:100364.

16. Shahnam A, Hitchen N, Nindra U, Manoharan S, Desai J, Tran B, et al. Objective response rate and progression-free survival as surrogates for overall survival treatment effect: a meta-analysis across diverse tumour groups and contemporary therapies. Eur J Cancer. 2024;198:113503.

17. Li L, Pan Z. Progression-free survival and time to progression as real surrogate end points for overall survival in advanced breast cancer: a meta-analysis of 37 trials. Clin Breast Cancer. 2018;18(1):63–70.

18. Hotta K, Fujiwara Y, Matsuo K, Kiura K, Takigawa N, Tabata M, et al. Time to progression as a surrogate marker for overall survival in patients with advanced non-small cell lung cancer. J Thorac Oncol. 2009;4(3):311–7.

19. Burzykowski T, Buyse M, Piccart-Gebhart MJ, Sledge G, Carmichael J, Lück H-J, et al. Evaluation of tumor response, disease control, progression-free survival, and time to progression as potential surrogate end points in metastatic breast cancer. 2008.

20. Jänne PA, Riely GJ, Gadgeel SM, Heist RS, Ou S-HI, Pacheco JM, et al. Adagrasib in non–small-cell lung cancer harboring a KRASG12C mutation. N Engl J Med. 2022;387(2):120–31.

21. Marabelle A, Le DT, Ascierto PA, Di Giacomo AM, De Jesus-Acosta A, Delord J-P, et al. Efficacy of pembrolizumab in patients with noncolorectal high microsatellite instability/mismatch repair–deficient cancer: results from the phase II KEYNOTE-158 study. J Clin Oncol. 2020;38(1):1.

22. Heinrich K, Miller-Phillips L, Ziemann F, Hasselmann K, Rühlmann K, Flach M, et al. Lessons learned: the first consecutive 1000 patients of the CCCMunich(LMU) molecular tumor board. J Cancer Res Clin Oncol. 2023;149(5):1905–15.

23. Schwartz LH, Seymour L, Litière S, Ford R, Gwyther S, Mandrekar S, et al. RECIST 1.1–standardisation and disease-specific adaptations: perspectives from the RECIST Working Group. Eur J Cancer. 2016;62:138–45.

24. Pannu HK, Bristow RE, Montz FJ, Fishman EK. Multidetector CT of peritoneal carcinomatosis from ovarian cancer. Radiographics. 2003;23(3):687–701.

25. D’Angelica M, Gonen M, Brennan MF, Turnbull AD, Bains M, Karpeh MS. Patterns of initial recurrence in completely resected gastric adenocarcinoma. Ann Surg. 2004;240(5):808.

26. Kok P-S, Yoon W-H, Lord S, Marschner I, Friedlander M, Lee CK. Tumor response end points as surrogates for overall survival in immune checkpoint inhibitor trials: a systematic review and meta-analysis. JCO Precis Oncol. 2021;5:1151–9.

27. Feinberg BA, Bharmal M, Klink AJ, Nabhan C, Phatak H. Using response evaluation criteria in solid tumors in real-world evidence cancer research. Future Oncol. 2018;14(27):2841–8.

28. Charara RN, Kreidieh FY, Farhat RA, Al-Feghali KA, Khoury KE, Haydar A, et al. Practice and impact of multidisciplinary tumor boards on patient management: a prospective study. J Glob Oncol. 2017;3(3):242–9.

29. Fleissig A, Jenkins V, Catt S, Fallowfield L. Multidisciplinary teams in cancer care: are they effective in the UK? Lancet Oncol. 2006;7(11):935–43.

30. Ruhstaller T, Roe H, Thürlimann B, Nicoll JJ. The multidisciplinary meeting: an indispensable aid to communication between different specialities. Eur J Cancer. 2006;42(15):2459–62.

31. Beets G, Sebag-Montefiore D, Andritsch E, Arnold D, Beishon M, Crul M, et al. ECCO essential requirements for quality cancer care: colorectal cancer. A critical review. Crit Rev Oncol Hematol. 2017;110:81–93.

32. Fassnacht M, Tsagarakis S, Terzolo M, Tabarin A, Sahdev A, Newell-Price J, et al. European Society of Endocrinology clinical practice guidelines on the management of adrenal incidentalomas, in collaboration with the European Network for the Study of Adrenal Tumors. Eur J Endocrinol. 2023;189(1):G1–42.

33. Andritsch E, Beishon M, Bielack S, Bonvalot S, Casali P, Crul M, et al. ECCO essential requirements for quality cancer care: soft tissue sarcoma in adults and bone sarcoma. A critical review. Crit Rev Oncol Hematol. 2017;110:94–105.

34. Allum W, Lordick F, Alsina M, Andritsch E, Ba-Ssalamah A, Beishon M, et al. ECCO essential requirements for quality cancer care: oesophageal and gastric cancer. Crit Rev Oncol Hematol. 2018;122:179–93.

35. Brausi M, Hoskin P, Andritsch E, Banks I, Beishon M, Boyle H, et al. ECCO essential requirements for quality cancer care: prostate cancer. Crit Rev Oncol Hematol. 2020;148:102861.

36. Biganzoli L, Cardoso F, Beishon M, Cameron D, Cataliotti L, Coles CE, et al. The requirements of a specialist breast centre. Breast. 2020;51:65–84.

37. Wouters MW, Michielin O, Bastiaannet E, Beishon M, Catalano O, Del Marmol V, et al. ECCO essential requirements for quality cancer care: melanoma. Crit Rev Oncol Hematol. 2018;122:164–78.

38. Dapper H, Dantes M, Herschbach P, Algül H, Heinemann V. Relevance of tumor boards for the inclusion of patients in oncological clinical trials. J Cancer Res Clin Oncol. 2023:1–8.


Funding

This study received no funding.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany

Nabeel Mansour, Michael Winkelmann, Maria Ingenerf, Lukas Gold, Konstantin Klambauer, Jens Ricke & Wolfgang G. Kunz

Department of Medicine III, University Hospital, LMU Munich, Munich, Germany

Kathrin Heinrich, Danmei Zhang, Michael von Bergwelt-Baildon, Volker Heinemann & C. Benedikt Westphalen

Comprehensive Cancer Center München-LMU (CCCM LMU), LMU Munich, Munich, Germany

Kathrin Heinrich, Danmei Zhang, Michael von Bergwelt-Baildon, Volker Heinemann, C. Benedikt Westphalen & Wolfgang G. Kunz

Institute of Pathology, Ludwig-Maximilians-Universität Munich, Munich, Germany

Martina Rudelius & Frederick Klauschen

German Cancer Consortium (DKTK partner site Munich), Heidelberg, Germany

Danmei Zhang & Michael von Bergwelt-Baildon


Contributions

NM and WGK: conception and design of the study; generation, collection, assembly, analysis and/or interpretation of data; drafting or revision of the manuscript. KH, DZ, MW, MI, LG, KK, MR, FK, MB, JR, VH and CBW: generation, collection, assembly, analysis and/or interpretation of data; drafting or revision of the manuscript. All authors: approval of the final version of the manuscript.

Corresponding author

Correspondence to Wolfgang G. Kunz .

Ethics declarations

Ethics approval

All medical records and imaging studies were reviewed with the approval of the LMU Munich Institutional Review Board (LMU Ethics Committee).

Consent for publication

Not applicable.

Competing interests

WGK: BMS, Boehringer Ingelheim, mintMedical; Need, Inc. CBW reports receipt of a fee for participation in Advisory Board from BMS, Celgene, Rafael, RedHill, Roche, Shire/Baxalta; receipt of a fee as an invited speaker from Amgen, AstraZeneca, Bayer, BMS, Celgene, Chugai, Falk, GSK, Janssen, Merck, MSD, Roche, Servier, Sirtex, Taiho; receipt of a fee for an expert testimony from Janssen; receipt of travel support from Bayer, Celgene, RedHill, Roche, Servier, Taiho; non-financial interest for receipt of research grant both personal and to institution from Roche; non-financial interest for serving as an officer in AIO - Arbeitsgemeinschaft Internistische Onkologie (Germany); non-financial interest for advisory role in EU Commission – DG RTD as a member of the EU Commission Mission Board for Cancer; non-financial interest for serving as an officer in ESMO. KH: Honoraria: Servier, Roche, Taiho, Merck, BMS, streamedup!. Consulting or Advisory role: Servier, MSD (Institutional), Roche (Institutional), Merck, Janssen. Travel support/Expenses: Amgen, Merck, Servier. All other authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Mansour, N., Heinrich, K., Zhang, D. et al. Patient eligibility for trials with imaging response assessment at the time of molecular tumor board presentation. Cancer Imaging 24, 70 (2024). https://doi.org/10.1186/s40644-024-00708-5

Download citation

Received : 01 December 2023

Accepted : 11 May 2024

Published : 07 June 2024

DOI : https://doi.org/10.1186/s40644-024-00708-5


Keywords

  • Response assessment
  • Precision oncology
