Status.net

Peer Review Examples (300 Key Positive, Negative Phrases)

Peer review is a process that helps you evaluate your work and that of others. It can be a valuable tool in ensuring the quality and credibility of any project or piece of research. Engaging in peer review lets you take a fresh look at something you may have become familiar with. You’ll provide constructive criticism to your peers and receive the same in return, allowing everyone to learn and grow.

Finding the right words to provide meaningful feedback can be challenging. This article provides positive and negative phrases to help you conduct more effective peer reviews.

Crafting Positive Feedback

Praising Professionalism

  • Your punctuality is exceptional.
  • You always manage to stay focused under pressure.
  • I appreciate your respect for deadlines.
  • Your attention to detail is outstanding.
  • You exhibit great organizational skills.
  • Your dedication to the task at hand is commendable.
  • I love your professionalism in handling all situations.
  • Your ability to maintain a positive attitude is inspiring.
  • Your commitment to the project shows in the results.
  • I value your ability to think critically and come up with solutions.

Acknowledging Skills

  • Your technical expertise has greatly contributed to our team’s success.
  • Your creative problem-solving skills are impressive.
  • You have an exceptional way of explaining complex ideas.
  • I admire your ability to adapt to change quickly.
  • Your presentation skills are top-notch.
  • You have a unique flair for motivating others.
  • Your negotiation skills have led to wonderful outcomes.
  • Your skillful project management ensured smooth progress.
  • Your research skills have produced invaluable findings.
  • Your knack for diplomacy has fostered great relationships.

Encouraging Teamwork

  • Your ability to collaborate effectively is evident.
  • You consistently go above and beyond to help your teammates.
  • I appreciate your eagerness to support others.
  • You always bring out the best in your team members.
  • You have a gift for uniting people in pursuit of a goal.
  • Your clear communication makes collaboration a breeze.
  • You excel in creating a nurturing atmosphere for the team.
  • Your leadership qualities are incredibly valuable to our team.
  • I admire your respectful attitude towards team members.
  • You have a knack for creating a supportive and inclusive environment.

Highlighting Achievements

  • Your sales performance this quarter has been phenomenal.
  • Your cost-saving initiatives have positively impacted the budget.
  • Your customer satisfaction ratings have reached new heights.
  • Your successful marketing campaign has driven impressive results.
  • You’ve shown a strong improvement in meeting your performance goals.
  • Your efforts have led to a significant increase in our online presence.
  • The success of the event can be traced back to your careful planning.
  • Your project was executed with precision and efficiency.
  • Your innovative product ideas have provided a competitive edge.
  • You’ve made great strides in strengthening our company culture.

Formulating Constructive Criticism

Addressing Areas for Improvement

When providing constructive criticism, try to be specific in your comments and avoid generalizing. Here are 30 example phrases:

  • You might consider revising this sentence for clarity.
  • This section could benefit from more detailed explanations.
  • It appears there may be a discrepancy in your data.
  • This paragraph might need more support from the literature.
  • I suggest reorganizing this section to improve coherence.
  • The introduction can be strengthened by adding context.
  • There may be some inconsistencies that need to be resolved.
  • This hypothesis needs clearer justification.
  • The methodology could benefit from additional details.
  • The conclusion may need a stronger synthesis of the findings.
  • You might want to consider adding examples to illustrate your point.
  • Some of the terminology used here could be clarified.
  • It would be helpful to see more information on your sources.
  • A summary might help tie this section together.
  • You may want to consider rephrasing this question.
  • An elaboration on your methods might help the reader understand your approach.
  • This image could be clearer if it were larger or had labels.
  • Try breaking down this complex idea into smaller parts.
  • You may want to revisit your tone to ensure consistency.
  • The transitions between topics could be smoother.
  • Consider adding citations to support your argument.
  • The tables and figures could benefit from clearer explanations.
  • It might be helpful to revisit your formatting for better readability.
  • This discussion would benefit from additional perspectives.
  • You may want to address any logical gaps in your argument.
  • The literature review might benefit from a more critical analysis.
  • You might want to expand on this point to strengthen your case.
  • The presentation of your results could be more organized.
  • It would be helpful if you elaborated on this connection in your analysis.
  • A more in-depth conclusion may better tie your ideas together.

Offering Specific Recommendations

  • You could revise this sentence to say…
  • To make this section more detailed, consider discussing…
  • To address the data discrepancy, double-check the data at this point.
  • You could add citations from these articles to strengthen your point.
  • To improve coherence, you could move this paragraph to…
  • To add context, consider mentioning…
  • To resolve these inconsistencies, check…
  • To justify your hypothesis, provide evidence from…
  • To add detail to your methodology, describe…
  • To synthesize your findings in the conclusion, mention…
  • To illustrate your point, consider giving an example of…
  • To clarify terminology, you could define…
  • To provide more information on sources, list…
  • To create a summary, touch upon these key points.
  • To rephrase this question, try asking…
  • To expand upon your methods, discuss…
  • To make this image clearer, increase its size or add labels for…
  • To break down this complex idea, consider explaining each part like…
  • To maintain a consistent tone, avoid using…
  • To smooth transitions between topics, use phrases such as…
  • To support your argument, cite sources like…
  • To explain tables and figures, add captions with…
  • To improve readability, use formatting elements like headings, bullet points, etc.
  • To include additional perspectives in your discussion, mention…
  • To address logical gaps, provide reasoning for…
  • To create a more critical analysis in your literature review, critique…
  • To expand on this point, add details about…
  • To present your results in a more organized way, use subheadings, tables, or graphs.
  • To elaborate on connections in your analysis, show how x relates to y by…
  • To provide a more in-depth conclusion, tie together the major findings by…

Highlighting Positive Aspects

When offering constructive criticism, maintaining a friendly and positive tone is important. Encourage improvement by highlighting the positive aspects of the work. For example:

  • Great job on this section!
  • Your writing is clear and easy to follow.
  • I appreciate your attention to detail.
  • Your conclusions are well supported by your research.
  • Your argument is compelling and engaging.
  • I found your analysis to be insightful.
  • The organization of your paper is well thought out.
  • Your use of citations effectively strengthens your claims.
  • Your methodology is well explained and thorough.
  • I’m impressed with the depth of your literature review.
  • Your examples are relevant and informative.
  • You’ve made excellent connections throughout your analysis.
  • Your grasp of the subject matter is impressive.
  • The clarity of your images and figures is commendable.
  • Your transitions between topics are smooth and well-executed.
  • You’ve effectively communicated complex ideas.
  • Your writing style is engaging and appropriate for your target audience.
  • Your presentation of results is easy to understand.
  • Your tone is consistent and professional.
  • Your overall argument is persuasive.
  • Your use of formatting helps guide the reader.
  • Your tables, graphs, and illustrations enhance your argument.
  • Your interpretation of the data is insightful and well-reasoned.
  • Your discussion is balanced and well-rounded.
  • The connections you make throughout your paper are thought-provoking.
  • Your approach to the topic is fresh and innovative.
  • You’ve done a fantastic job synthesizing information from various sources.
  • Your attention to the needs of the reader is commendable.
  • The care you’ve taken in addressing counterarguments is impressive.
  • Your conclusions are well-drawn and thought-provoking.

Balancing Feedback

Combining Positive and Negative Remarks

When providing peer review feedback, it’s important to balance positive and negative comments: this approach allows the reviewer to maintain a friendly tone and helps the recipient feel reassured.

Examples of Positive Remarks:

  • Well-organized
  • Clear and concise
  • Excellent use of examples
  • Thorough research
  • Articulate argument
  • Engaging writing style
  • Thoughtful analysis
  • Strong grasp of the topic
  • Relevant citations
  • Logical structure
  • Smooth transitions
  • Compelling conclusion
  • Original ideas
  • Solid supporting evidence
  • Succinct summary

Examples of Negative Remarks:

  • Unclear thesis
  • Lacks focus
  • Insufficient evidence
  • Overgeneralization
  • Inconsistent argument
  • Redundant phrasing
  • Jargon-filled language
  • Poor formatting
  • Grammatical errors
  • Unconvincing argument
  • Confusing organization
  • Needs more examples
  • Weak citations
  • Unsupported claims
  • Ambiguous phrasing

Ensuring Objectivity

Avoid using emotionally charged language or personal opinions. Instead, base your feedback on facts and evidence.

For example, instead of saying, “I don’t like your choice of examples,” you could say, “Including more diverse examples would strengthen your argument.”

Personalizing Feedback

Tailor your feedback to the individual and their work, avoiding generic or blanket statements. Acknowledge the writer’s strengths and demonstrate an understanding of their perspective. Providing personalized, specific, and constructive comments will enable the recipient to grow and improve their work.

For instance, you might say, “Your writing style is engaging, but consider adding more examples to support your points,” or “I appreciate your thorough research, but be mindful of avoiding overgeneralizations.”

Phrases for Positive Feedback

  • Great job on the presentation, your research was comprehensive.
  • I appreciate your attention to detail in this project.
  • You showed excellent teamwork and communication skills.
  • Impressive progress on the task, keep it up!
  • Your creativity really shined in this project.
  • Thank you for your hard work and dedication.
  • Your problem-solving skills were crucial to the success of this task.
  • I am impressed by your ability to multitask.
  • Your time management in finishing this project was stellar.
  • Excellent initiative in solving the issue.
  • Your work showcases your exceptional analytical skills.
  • Your positive attitude is contagious!
  • You were successful in making a complex subject easier to grasp.
  • Your collaboration skills truly enhanced our team’s effectiveness.
  • You handled the pressure and deadlines admirably.
  • Your written communication is both thorough and concise.
  • Your responsiveness to feedback is commendable.
  • Your flexibility in adapting to new challenges is impressive.
  • Thank you for your consistently accurate work.
  • Your devotion to professional development is inspiring.
  • You display strong leadership qualities.
  • You demonstrate empathy and understanding in handling conflicts.
  • Your active listening skills contribute greatly to our discussions.
  • You consistently take ownership of your tasks.
  • Your resourcefulness was key in overcoming obstacles.
  • You consistently display a can-do attitude.
  • Your presentation skills are top-notch!
  • You are a valuable asset to our team.
  • Your positive energy boosts team morale.
  • Your work displays your tremendous growth in this area.
  • Your ability to stay organized is commendable.
  • You consistently meet or exceed expectations.
  • Your commitment to self-improvement is truly inspiring.
  • Your persistence in tackling challenges is admirable.
  • Your ability to grasp new concepts quickly is impressive.
  • Your critical thinking skills are a valuable contribution to our team.
  • You demonstrate impressive technical expertise in your work.
  • Your contributions make a noticeable difference.
  • You effectively balance multiple priorities.
  • You consistently take the initiative to improve our processes.
  • Your ability to mentor and support others is commendable.
  • You are perceptive and insightful in offering solutions to problems.
  • You actively engage in discussions and share your opinions constructively.
  • Your professionalism is a model for others.
  • Your ability to quickly adapt to changes is commendable.
  • Your work exemplifies your passion for excellence.
  • Your desire to learn and grow is inspirational.
  • Your excellent organizational skills are a valuable asset.
  • You actively seek opportunities to contribute to the team’s success.
  • Your willingness to help others is truly appreciated.
  • Your presentation was both informative and engaging.
  • You exhibit great patience and perseverance in your work.
  • Your ability to navigate complex situations is impressive.
  • Your strategic thinking has contributed to our success.
  • Your accountability in your work is commendable.
  • Your ability to motivate others is admirable.
  • Your reliability has contributed significantly to the team’s success.
  • Your enthusiasm for your work is contagious.
  • Your diplomatic approach to resolving conflict is commendable.
  • Your ability to persevere despite setbacks is truly inspiring.
  • Your ability to build strong relationships with clients is impressive.
  • Your ability to prioritize tasks is invaluable to our team.
  • Your work consistently demonstrates your commitment to quality.
  • Your ability to break down complex information is excellent.
  • Your ability to think on your feet is greatly appreciated.
  • You consistently go above and beyond your job responsibilities.
  • Your attention to detail consistently ensures the accuracy of your work.
  • Your commitment to our team’s success is truly inspiring.
  • Your ability to maintain composure under stress is commendable.
  • Your contributions have made our project a success.
  • Your confidence and conviction in your work are motivating.
  • Thank you for stepping up and taking the lead on this task.
  • Your willingness to learn from mistakes is encouraging.
  • Your decision-making skills contribute greatly to the success of our team.
  • Your communication skills are essential for our team’s effectiveness.
  • Your ability to juggle multiple tasks simultaneously is impressive.
  • Your passion for your work is infectious.
  • Your courage in addressing challenges head-on is remarkable.
  • Your ability to prioritize tasks and manage your own workload is commendable.
  • You consistently demonstrate strong problem-solving skills.
  • Your work reflects your dedication to continuous improvement.
  • Your sense of humor helps lighten the mood during stressful times.
  • Your ability to take constructive feedback on board is impressive.
  • You always find opportunities to learn and develop your skills.
  • Your attention to safety protocols is much appreciated.
  • Your respect for deadlines is commendable.
  • Your focused approach to work is motivating to others.
  • You always search for ways to optimize our processes.
  • Your commitment to maintaining a high standard of work is inspirational.
  • Your excellent customer service skills are a true asset.
  • You demonstrate strong initiative in finding solutions to problems.
  • Your adaptability to new situations is an inspiration.
  • Your ability to manage change effectively is commendable.
  • Your proactive communication is appreciated by the entire team.
  • Your drive for continuous improvement is infectious.
  • Your input consistently elevates the quality of our discussions.
  • Your ability to handle both big picture and detailed tasks is impressive.
  • Your integrity and honesty are commendable.
  • Your ability to take on new responsibilities is truly inspiring.
  • Your strong work ethic is setting a high standard for the entire team.

Phrases for Areas of Improvement

  • You might consider revisiting the structure of your argument.
  • You could work on clarifying your main point.
  • Your presentation would benefit from additional examples.
  • Perhaps try exploring alternative perspectives.
  • It would be helpful to provide more context for your readers.
  • You may want to focus on improving the flow of your writing.
  • Consider incorporating additional evidence to support your claims.
  • You could benefit from refining your writing style.
  • It would be useful to address potential counterarguments.
  • You might want to elaborate on your conclusion.
  • Perhaps consider revisiting your methodology.
  • Consider providing a more in-depth analysis.
  • You may want to strengthen your introduction.
  • Your paper could benefit from additional proofreading.
  • You could work on making your topic more accessible to your readers.
  • Consider tightening your focus on key points.
  • It might be helpful to add more visual aids to your presentation.
  • You could strive for more cohesion between your sections.
  • Your abstract would benefit from a more concise summary.
  • Perhaps try to engage your audience more actively.
  • You may want to improve the organization of your thoughts.
  • It would be useful to cite more reputable sources.
  • Consider emphasizing the relevance of your topic.
  • Your argument could benefit from stronger parallels.
  • You may want to add transitional phrases for improved readability.
  • It might be helpful to provide more concrete examples.
  • You could work on maintaining a consistent tone throughout.
  • Consider employing a more dynamic vocabulary.
  • Your project would benefit from a clearer roadmap.
  • Perhaps explore the limitations of your study.
  • It would be helpful to demonstrate the impact of your research.
  • You could work on the consistency of your formatting.
  • Consider refining your choice of images.
  • You may want to improve the pacing of your presentation.
  • Make an effort to maintain eye contact with your audience.
  • Perhaps adding humor or anecdotes would engage your listeners.
  • You could work on modulating your voice for emphasis.
  • It would be helpful to practice your timing.
  • Consider incorporating more interactive elements.
  • You might want to speak more slowly and clearly.
  • Your project could benefit from additional feedback from experts.
  • You might want to consider the practical implications of your findings.
  • It would be useful to provide a more user-friendly interface.
  • Consider incorporating a more diverse range of sources.
  • You may want to hone your presentation to a specific audience.
  • You could work on the visual design of your slides.
  • Your writing might benefit from improved grammatical accuracy.
  • It would be helpful to reduce jargon for clarity.
  • You might consider refining your data visualization.
  • Perhaps provide a summary of key points for easier comprehension.
  • You may want to develop your skills in a particular area.
  • Consider attending workshops or trainings for continued learning.
  • Your project could benefit from stronger collaboration.
  • It might be helpful to seek guidance from mentors or experts.
  • You could work on managing your time more effectively.
  • It would be useful to set goals and priorities for improvement.
  • You might want to identify areas where you can grow professionally.
  • Consider setting aside time for reflection and self-assessment.
  • Perhaps develop strategies for overcoming challenges.
  • You could work on increasing your confidence in public speaking.
  • Consider collaborating with others for fresh insights.
  • You may want to practice active listening during discussions.
  • Be open to feedback and constructive criticism.
  • It might be helpful to develop empathy for team members’ perspectives.
  • You could work on being more adaptable to change.
  • It would be useful to improve your problem-solving abilities.
  • Perhaps explore opportunities for networking and engagement.
  • You may want to set personal benchmarks for success.
  • You might benefit from being more proactive in seeking opportunities.
  • Consider refining your negotiation and persuasion skills.
  • It would be helpful to enhance your interpersonal communication.
  • You could work on being more organized and detail-oriented.
  • You may want to focus on strengthening leadership qualities.
  • Consider improving your ability to work effectively under pressure.
  • Encourage open dialogue among colleagues to promote a positive work environment.
  • It might be useful to develop a growth mindset.
  • Be open to trying new approaches and techniques.
  • Consider building stronger relationships with colleagues and peers.
  • It would be helpful to manage expectations more effectively.
  • You might want to delegate tasks more efficiently.
  • You could work on your ability to prioritize workload effectively.
  • It would be useful to review and update processes and procedures regularly.
  • Consider creating a more inclusive working environment.
  • You might want to seek opportunities to mentor and support others.
  • Recognize and celebrate the accomplishments of your team members.
  • Consider developing a more strategic approach to decision-making.
  • You may want to establish clear goals and objectives for your team.
  • It would be helpful to provide regular and timely feedback.
  • Consider enhancing your delegation and time-management skills.
  • Be open to learning from your team’s diverse skill sets.
  • You could work on cultivating a collaborative culture.
  • It would be useful to engage in continuous professional development.
  • Consider seeking regular feedback from colleagues and peers.
  • You may want to nurture your own personal resilience.
  • Reflect on areas of improvement and develop an action plan.
  • It might be helpful to share your progress with a mentor or accountability partner.
  • Encourage your team to support one another’s growth and development.
  • Consider celebrating and acknowledging small successes.
  • You could work on cultivating effective communication habits.
  • Be willing to take calculated risks and learn from any setbacks.

Frequently Asked Questions

How can I phrase constructive feedback in peer evaluations?

To give constructive feedback in peer evaluations, try focusing on specific actions or behaviors that can be improved. Use phrases like “I noticed that…” or “You might consider…” to gently introduce your observations. For example, “You might consider asking for help when handling multiple tasks to improve time management.”

What are some examples of positive comments in peer reviews?

  • “Your presentation was engaging and well-organized, making it easy for the team to understand.”
  • “You are a great team player, always willing to help others and contribute to the project’s success.”
  • “Your attention to detail in documentation has made it easier for the whole team to access information quickly.”

Can you suggest ways to highlight strengths in peer appraisals?

Highlighting strengths in peer appraisals can be done by mentioning specific examples of how the individual excelled or went above and beyond expectations. You can also point out how their strengths positively impacted the team. For instance:

  • “Your effective communication skills ensured that everyone was on the same page during the project.”
  • “Your creativity in problem-solving helped resolve a complex issue that benefited the entire team.”

What are helpful phrases to use when noting areas for improvement in a peer review?

When noting areas for improvement in a peer review, try using phrases that encourage growth and development. Some examples include:

  • “To enhance your time management skills, you might try prioritizing tasks or setting deadlines.”
  • “By seeking feedback more often, you can continue to grow and improve in your role.”
  • “Consider collaborating more with team members to benefit from their perspectives and expertise.”

How should I approach writing a peer review for a manager differently?

When writing a peer review for a manager, it’s important to focus on their leadership qualities and how they can better support their team. Some suggestions might include:

  • “Encouraging more open communication can help create a more collaborative team environment.”
  • “By providing clearer expectations or deadlines, you can help reduce confusion and promote productivity.”
  • “Consider offering recognition to team members for their hard work, as this can boost motivation and morale.”

What is a diplomatic way to discuss negative aspects in a peer review?

Discussing negative aspects in a peer review requires tact and empathy. Try focusing on behaviors and actions rather than personal attributes, and use phrases that suggest areas for growth. For example:

  • “While your dedication to the project is admirable, it might be beneficial to delegate some tasks to avoid burnout.”
  • “Improving communication with colleagues can lead to better alignment within the team.”
  • “By asking for feedback, you can identify potential blind spots and continue to grow professionally.”


What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George. Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Other interesting articles
  • Frequently asked questions about peer reviews

What is the purpose of peer review?

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Types of peer review

Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review, in which the identities of the author, reviewers, and editors are all anonymized, does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

The peer review process

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor then decides to either reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.


In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

Providing feedback to your peers

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive


Peer review example

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference (p > .05) between the number of hours of sleep for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the averages for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for an hour or less.
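As a supplement to the annotated example, here is a minimal Python sketch of how the two t tests could be run with scipy. This is illustrative only: the even 100-per-group split of the 300 participants is an assumption, and the data are simulated from the reported means and standard deviations, so the resulting p-values will not necessarily match the significance levels quoted above.

    # Illustrative sketch (not from the original example): simulate nightly
    # sleep hours from the reported group means/SDs, then run the two
    # independent t tests. The 100-per-group split is an assumption.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)
    group1 = rng.normal(loc=7.8, scale=0.6, size=100)  # no phone before bedtime
    group2 = rng.normal(loc=7.0, scale=0.8, size=100)  # 1 hour of phone use
    group3 = rng.normal(loc=6.1, scale=1.5, size=100)  # 3 hours of phone use

    # Independent two-sample t tests: Group 1 vs Group 2, then Group 1 vs Group 3
    t12, p12 = stats.ttest_ind(group1, group2)
    t13, p13 = stats.ttest_ind(group1, group3)

    print(f"Group 1 vs Group 2: t = {t12:.2f}, p = {p12:.3g}")
    print(f"Group 1 vs Group 3: t = {t13:.2f}, p = {p13:.3g}")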

Advantages of peer review

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

Criticisms of peer review

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also a high risk of publication bias, where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Discourse analysis
  • Cohort study
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Frequently asked questions about peer reviews

Peer review is a process of evaluating submissions to an academic journal. Utilizing rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor then decides to either reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

Cite this Scribbr article


George, T. (2023, June 22). What Is Peer Review? | Types & Examples. Scribbr. Retrieved August 29, 2024, from https://www.scribbr.com/methodology/peer-review/


The Savvy Scientist

Experiences of a London PhD student and beyond

My Complete Guide to Academic Peer Review: Example Comments & How to Make Paper Revisions


Once you’ve submitted your paper to an academic journal you’re in the nerve-racking position of waiting to hear back about the fate of your work. In this post we’ll cover everything from potential responses you could receive from the editor and example peer review comments through to how to submit revisions.

My first first-author paper was reviewed by five (yes, 5!) reviewers, and since then I’ve published several other papers, so now I want to share the insights I’ve gained which will hopefully help you out!

This post is part of my series to help with writing and publishing your first academic journal paper. You can find the whole series here: Writing an academic journal paper.

The Peer Review Process

An overview of the academic journal peer review process.

When you submit a paper to a journal, the first thing that will happen is one of the editorial team will do an initial assessment of whether or not the article is of interest. They may decide for a number of reasons that the article isn’t suitable for the journal and may reject the submission before even sending it out to reviewers.

If this happens hopefully they’ll have let you know quickly so that you can move on and make a start targeting a different journal instead.

Handy way to check the status – Sign in to the journal’s submission website and have a look at the status of your journal article online. If you can see that the article is under review then you’ve passed that first hurdle!

When your paper is under peer review, the journal will have set out a framework to help the reviewers assess your work. Generally they’ll be deciding whether the work is to a high enough standard.

Interested in reading about what reviewers are looking for? Check out my post on being a reviewer for the first time: Peer-Reviewing Journal Articles: Should You Do It? Sharing What I Learned From My First Experiences.

Once the reviewers have made their assessments, they’ll return their comments and suggestions to the editor who will then decide how the article should proceed.

How Many People Review Each Paper?

The editor ideally wants a clear decision from the reviewers as to whether the paper should be accepted or rejected. If there is no consensus among the reviewers then the editor may send your paper out to more reviewers to better judge whether or not to accept the paper.

If you’ve got a lot of reviewers on your paper it isn’t necessarily that the reviewers disagreed about accepting your paper.

You can also end up with lots of reviewers in the following circumstance:

  • The editor asks a certain academic to review the paper but doesn’t get a response from them
  • The editor asks another academic to step in
  • The initial reviewer then responds

Next thing you know your work is being scrutinised by extra pairs of eyes!

As mentioned in the intro, my first paper ended up with five reviewers!

Potential Journal Responses

Assuming that the paper passes the editor’s initial evaluation and is sent out for peer-review, here are the potential decisions you may receive:

  • Reject the paper. Sadly the editor and reviewers decided against publishing your work. Hopefully they’ll have included feedback which you can incorporate into your submission to another journal. I’ve had some rejections and the reviewer comments were genuinely useful.
  • Accept the paper with major revisions. Good news: with some more work your paper could get published. If you make all the changes that the reviewers suggest, and they’re happy with your responses, then it should get accepted. Some people see major revisions as a disappointment, but it doesn’t have to be.
  • Accept the paper with minor revisions. This is like getting a major revisions response but better! Generally minor revisions can be addressed quickly and often come down to clarifying things for the reviewers: rewording, addressing minor concerns etc and don’t require any more experiments or analysis. You stand a really good chance of getting the paper published if you’ve been given a minor revisions result.
  • Accept the paper with no revisions. I’m not sure that this ever really happens, but it is potentially possible if the reviewers are already completely happy with your paper!

Keen to know more about academic publishing? My series on publishing is now available as a free eBook, which includes my experiences being a peer reviewer.


Example Peer Review Comments & Addressing Reviewer Feedback

If your paper has been accepted but requires revisions, the editor will forward to you the comments and concerns that the reviewers raised. You’ll have to address these points so that the reviewers are satisfied your work is of a publishable standard.

It is extremely important to take this stage seriously. If you don’t do a thorough job then the reviewers won’t recommend that your paper is accepted for publication!

You’ll have to put together a resubmission with your co-authors and there are two crucial things you must do:

  • Make revisions to your manuscript based off reviewer comments
  • Reply to the reviewers, telling them the changes you’ve made and potentially changes you’ve not made in instances where you disagree with them. Read on to see some example peer review comments and how I replied!

Before making any changes to your actual paper, I suggest having a thorough read through the reviewer comments.

Once you’ve read through the comments you might be keen to dive straight in and make the changes in your paper. Instead, I actually suggest firstly drafting your reply to the reviewers.

Why start with the reply to reviewers? Well, in a way it’s potentially even more important than the changes you’re making in the manuscript.

Imagine when a reviewer receives your response to their comments: you want them to be able to read your reply document and be satisfied that their queries have largely been addressed without even having to open the updated draft of your manuscript. If you do a good job with the replies, the reviewers will be better placed to recommend the paper be accepted!

By starting with your reply to the reviewers you’ll also clarify for yourself what changes actually have to be made to the paper.

So let’s now cover how to reply to the reviewers.

1. Replying to Journal Reviewers

It is so important to make sure you do a solid job addressing your reviewers’ feedback in your reply document. If you leave anything unanswered you’re asking for trouble, which in this case means either a rejection or another round of revisions: though some journals only give you one shot! Therefore make sure you’re thorough, not just with making the changes but demonstrating the changes in your replies.

It’s no good putting in the work to revise your paper but not evidence it in your reply to the reviewers!

There may be points that reviewers raise which don’t appear to necessitate making changes to your manuscript, but this is rarely the case. Even for comments or concerns they raise which are already addressed in the paper, clearly those areas could be clarified or highlighted to ensure that future readers don’t get confused.

How to Reply to Journal Reviewers

Some journals will request a certain format for how you should structure a reply to the reviewers. If so this should be included in the email you receive from the journal’s editor. If there are no certain requirements here is what I do:

  • Copy and paste all reviewer comments into a document.
  • Separate out each point they raise onto a separate line. Often they’ll already be nicely numbered but sometimes they actually still raise separate issues in one block of text. I suggest separating it all out so that each query is addressed separately.
  • Form your reply for each point that they raise. I start by just jotting down notes for roughly how I’ll respond. Once I’m happy with the key message I’ll write it up into a scripted reply.
  • Finally, go through and format it nicely and include line number references for the changes you’ve made in the manuscript.

By the end you’ll have a document that looks something like:

Reviewer 1

Point 1: [Quote the reviewer’s comment]
Response 1: [Address point 1 and say what revisions you’ve made to the paper]

Point 2: [Quote the reviewer’s comment]
Response 2: [Address point 2 and say what revisions you’ve made to the paper]

Then repeat this for all comments by all reviewers!
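If you are juggling comments from several reviewers, it can help to keep this formatting consistent. Below is a purely illustrative Python sketch that assembles a reply document in the format above; every reviewer comment and response in it is a hypothetical placeholder, not taken from a real review.

    # Purely illustrative: print a reply-to-reviewers document in the format
    # shown above. All comments and responses are hypothetical placeholders.
    replies = {
        "Reviewer 1": [
            ("The sample size justification is unclear.",
             "We now justify the sample size in Section 2.1 (lines 88-95)."),
            ("Figure 3 lacks axis labels.",
             "Axis labels have been added to Figure 3."),
        ],
        "Reviewer 2": [
            ("Please discuss the limitations of the method.",
             "A limitations paragraph has been added to the Discussion (lines 210-221)."),
        ],
    }

    for reviewer, points in replies.items():
        print(reviewer)
        for i, (comment, response) in enumerate(points, start=1):
            print(f"Point {i}: {comment}")
            print(f"Response {i}: {response}")
        print()  # blank line between reviewers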

What To Actually Include In Your Reply To Reviewers

For every single point raised by the reviewers, you should do the following:

  • Address their concern: Do you agree or disagree with the reviewer’s comment? Either way, make your position clear and justify any differences of opinion. If the reviewer wants more clarity on an issue, provide it. It is really important that you actually address their concerns in your reply. Don’t just say “Thanks, we’ve changed the text”. Actually include everything they want to know in your reply. Yes this means you’ll be repeating things between your reply and the revisions to the paper but that’s fine.
  • Reference changes to your manuscript in your reply. Once you’ve answered the reviewer’s question, you must show that you’re actually using this feedback to revise the manuscript. The best way to do this is to refer to where the changes have been made throughout the text. I personally do this by including line references. Make sure you save this step right until the end, once you’ve finished making changes!

Example Peer Review Comments & Author Replies

In order to understand how this works in practice I’d suggest reading through a few real-life example peer review comments and replies.

The good news is that published papers often now include peer-review records, including the reviewer comments and authors’ replies. So here are two feedback examples from my own papers:

Example Peer Review: Paper 1

Quantifying 3D Strain in Scaffold Implants for Regenerative Medicine, J. Clark et al. 2020 – Available here

This paper was reviewed by two academics and was given major revisions. The journal gave us only 10 days to get them done, which was a bit stressful!

  • Reviewer Comments
  • My reply to Reviewer 1
  • My reply to Reviewer 2

One round of reviews wasn’t enough for Reviewer 2…

  • My reply to Reviewer 2 – ROUND 2

Thankfully it was accepted after the second round of review, and the paper actually ended up being highlighted by the journal as one of its most notable articles (whatever “most notable” means?!).


Example Peer Review: Paper 2

Exploratory Full-Field Mechanical Analysis across the Osteochondral Tissue—Biomaterial Interface in an Ovine Model, J. Clark et al. 2020 – Available here

This paper was reviewed by three academics and was given minor revisions.

  • My reply to Reviewer 3

I’m pleased to say it was accepted after the first round of revisions 🙂

Things To Be Aware Of When Replying To Peer Review Comments

  • Generally, try to make a revision to your paper for every comment. No matter what the reviewer’s comment is, you can probably make a change to the paper which will improve your manuscript. For example, if the reviewer seems confused about something, improve the clarity in your paper. If you disagree with the reviewer, include better justification for your choices in the paper. It is far more favourable to take on board the reviewer’s feedback and act on it with actual changes to your draft.
  • Organise your responses. Sometimes journals will request the reply to each reviewer is sent in a separate document. Unless they ask for it this way I stick them all together in one document with subheadings eg “Reviewer 1” etc.
  • Make sure you address each and every question. If you dodge anything then the reviewer will have a valid reason to reject your resubmission. You don’t need to agree with them on every point but you do need to justify your position.
  • Be courteous. No need to go overboard with compliments but stay polite as reviewers are providing constructive feedback. I like to add in “We thank the reviewer for their suggestion” every so often where it genuinely warrants it. Remember that written language doesn’t always carry tone very well, so rather than risk coming off as abrasive if I don’t agree with the reviewer’s suggestion I’d rather be generous with friendliness throughout the reply.

2. How to Make Revisions To Your Paper

Once you’ve drafted your replies to the reviewers, you’ve actually done a lot of the ground work for making changes to the paper. Remember, you are making changes to the paper based off the reviewer comments so you should regularly be referring back to the comments to ensure you’re not getting sidetracked.

Reviewers could request modifications to any part of your paper. You may need to collect more data, do more analysis, reformat some figures, add in more references or discussion, or any number of other revisions! I can’t cover every scenario, but here is some general advice:

  • Use tracked-changes. This is so important. The editor and reviewers need to be able to see every single change you’ve made compared to your first submission. Sometimes the journal will want a clean copy too but always start with tracked-changes enabled then just save a clean copy afterwards.
  • Be thorough. Try not to leave the reviewers any reason not to recommend your paper for publication. Any chance you have to satisfy their concerns, take it. For example, if the reviewers are concerned about sample size and you have the means to include other experiments, consider doing so. If they want to see more justification or references, be thorough. To be clear again, this doesn’t necessarily mean making changes you don’t believe in. If you don’t want to make a change, you can justify your position to the reviewers. Either way, be thorough.
  • Use your reply to the reviewers as a guide. In your draft reply to the reviewers you should have already included a lot of details which can be incorporated into the text. If they raised a concern, you should be able to go and find references which address it; each such reference should appear both in your reply and in the manuscript. As mentioned above, I always suggest starting with the reply, then simply adding these details to your manuscript once you know what needs doing.

Putting Together Your Paper Revision Submission

  • Once you’ve drafted your reply to the reviewers and revised manuscript, make sure to give your co-authors sufficient time to give feedback, and give yourself time afterwards to make changes based on that feedback. Ideally I allow a week for the feedback and another few days to make the changes.
  • When you’re satisfied that you’ve addressed the reviewer comments, you can think about submitting. The journal may ask for another letter to the editor; if not, I simply add to the top of the reply to reviewers something like:
“Dear [Editor], We are grateful to the reviewers for their positive and constructive comments, which have led to an improved manuscript. Here, we address their concerns/suggestions and have tracked changes throughout the revised manuscript.”

Once you’re ready to submit:

  • Double check that you’ve done everything that the editor requested in their email
  • Double check that the file names and formats are as required
  • Triple check you’ve addressed the reviewer comments adequately
  • Click submit and bask in relief!

You won’t always get the paper accepted, but if you’re thorough and present your revisions clearly, you’ll put yourself in a really good position. Try as hard as possible to satisfy the reviewers’ concerns, so that they have as little reason as possible to reject your revisions!

Best of luck!

I really hope that this post has been useful to you and that the example peer review section has given you some ideas for how to respond. I know how daunting it can be to reply to reviewers, and it is really important to try to do a good job and give yourself the best chances of success. If you’d like to read other posts in my academic publishing series you can find them here:

Blog post series: Writing an academic journal paper

Subscribe below to stay up to date with new posts in the academic publishing series and other PhD content.



Reumatologia, vol. 59(1), 2021

Peer review guidance: a primer for researchers

Olena Zimba

1 Department of Internal Medicine No. 2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine

Armen Yuri Gasparyan

2 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, West Midlands, UK

The peer review process is essential for quality checks and validation of journal submissions. Although it has some limitations, including manipulations and biased and unfair evaluations, there is no other alternative to the system. Several peer review models are now practised, with public review being the most appropriate in view of the open science movement. Constructive reviewer comments are increasingly recognised as scholarly contributions which should meet certain ethics and reporting standards. The Publons platform, which is now part of the Web of Science Group (Clarivate Analytics), credits validated reviewer accomplishments and serves as an instrument for selecting and promoting the best reviewers. All authors with relevant profiles may act as reviewers. Adherence to research reporting standards and access to bibliographic databases are recommended to help reviewers draft evidence-based and detailed comments.

Introduction

The peer review process is essential for evaluating the quality of scholarly works, suggesting corrections, and learning from other authors’ mistakes. The principles of peer review are largely based on professionalism, eloquence, and collegiate attitude. As such, reviewing journal submissions is a privilege and responsibility for ‘elite’ research fellows who contribute to their professional societies and add value by voluntarily sharing their knowledge and experience.

Since the launch of the first academic periodicals back in 1665, peer review has been mandatory for validating scientific facts, selecting influential works, and minimizing chances of publishing erroneous research reports [1]. Over the past centuries, peer review models have evolved from single-handed editorial evaluations to collegial discussions, with numerous strengths and inevitable limitations of each practised model [2, 3]. With the multiplication of periodicals and editorial management platforms, the reviewer pool has expanded and internationalized. Various sets of rules have been proposed to select skilled reviewers and employ globally acceptable tools and language styles [4, 5].

In the era of digitization, the ethical dimension of peer review has emerged, necessitating involvement of peers with a full understanding of research and publication ethics to exclude unethical articles from the pool of evidence-based research and reviews [6]. In the time of the COVID-19 pandemic, some, if not most, journals face the unavailability of skilled reviewers, resulting in an unprecedented increase in articles without a history of peer review or those with surprisingly short evaluation timelines [7].

Editorial recommendations and the best reviewers

Guidance on peer review and selection of reviewers is currently available in the recommendations of global editorial associations, which can be consulted by journal editors for updating their ethics statements and by research managers for crediting the evaluators. The International Committee of Medical Journal Editors (ICMJE) qualifies peer review as a continuation of the scientific process that should involve experts who are able to respond to reviewer invitations in a timely manner, submit unbiased and constructive comments, and maintain confidentiality [8].

The reviewer roles and responsibilities are listed in the updated recommendations of the Council of Science Editors (CSE) [9], where ethical conduct is viewed as a premise of the quality evaluations. The Committee on Publication Ethics (COPE) further emphasizes editorial strategies that ensure transparent and unbiased reviewer evaluations by trained professionals [10]. Finally, the World Association of Medical Editors (WAME) prioritizes selecting the best reviewers with validated profiles to avoid substandard or fraudulent reviewer comments [11]. Accordingly, the Sarajevo Declaration on Integrity and Visibility of Scholarly Publications encourages reviewers to register with the Open Researcher and Contributor ID (ORCID) platform to validate and publicize their scholarly activities [12].

Although the best reviewer criteria are not listed in the editorial recommendations, it is apparent that the manuscript evaluators should be active researchers with extensive experience in the subject matter and an impressive list of relevant and recent publications [13]. All authors embarking on an academic career and publishing articles with active contact details can be involved in the evaluation of others’ scholarly works [14]. Ideally, the reviewers should be peers of the manuscript authors with equal scholarly ranks and credentials.

However, journal editors may employ schemes that engage junior research fellows as co-reviewers along with their mentors and senior fellows [15]. Such a scheme is successfully practised within the framework of the Emerging EULAR (European League Against Rheumatism) Network (EMEUNET), where seasoned authors (mentors) train early-career researchers (mentees) in how to evaluate submissions to the top rheumatology journals, and select the best evaluators as regular contributors to these journals [16].

Awareness of the EQUATOR Network reporting standards may help reviewers evaluate methodology and suggest related revisions. Statistical skills help reviewers detect basic mistakes and suggest additional analyses. For example, scanning data presentation and revealing mistakes in the presentation of means and standard deviations often prompts re-analyses of distributions and replacement of parametric tests with non-parametric ones [17, 18].
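As a concrete illustration of the kind of re-analysis such a comment might prompt, here is a minimal sketch (simulated, skewed data; it assumes Python with numpy and scipy, and the variable names are purely illustrative): a normality check guides the choice between a parametric test and its non-parametric counterpart.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    group_a = rng.lognormal(mean=0.0, sigma=1.0, size=30)  # skewed data
    group_b = rng.lognormal(mean=0.5, sigma=1.0, size=30)

    # Shapiro-Wilk: small p-values indicate departure from normality
    normal = all(stats.shapiro(g).pvalue >= 0.05 for g in (group_a, group_b))

    if normal:
        stat, p = stats.ttest_ind(group_a, group_b)     # report means and SD
        print(f"t = {stat:.2f}, p = {p:.4f}")
    else:
        stat, p = stats.mannwhitneyu(group_a, group_b)  # report medians instead
        print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")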

Constructive reviewer comments

The main goal of the peer review is to support authors in their attempt to publish ethically sound and professionally validated works that may attract readers’ attention and positively influence healthcare research and practice. As such, an optimal reviewer comment has to comprehensively examine all parts of the research and review work (Table I). The best reviewers are viewed as contributors who guide authors on how to correct mistakes, discuss study limitations, and highlight its strengths [19].

Structure of a reviewer comment to be forwarded to authors

Section – Notes

  • Introductory line – Summarizes the overall impression about the manuscript validity and implications.
  • Evaluation of the title, abstract and keywords – Evaluates the title correctness and completeness, inclusion of all relevant keywords, study design terms, information load, and relevance of the abstract.
  • Major comments – Specifically analyses each manuscript part in line with available research reporting standards, supports all suggestions with solid evidence, weighs novelty of hypotheses and methodological rigour, highlights the choice of study design, and points to missing/incomplete ethics approval statements, rights to re-use graphics, accuracy and completeness of statistical analyses, professionalism of bibliographic searches, and inclusion of updated and relevant references.
  • Minor comments – Identifies language mistakes, typos, inappropriate format of graphics and references, length of texts and tables, use of supplementary material, unusual sections and order, completeness of scholarly contribution, conflict of interest, and funding statements.
  • Concluding remarks – Reflects on take-home messages and implications.
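As a sketch, a short reviewer comment following this structure might read (wording illustrative only):

    The manuscript addresses a timely question and is methodologically sound overall, with practical implications.
    Title and abstract: the title reflects the content, but the abstract omits the study design and two relevant keywords.
    Major comments: (1) the methods section lacks an ethics approval statement; (2) means and standard deviations are reported for visibly skewed variables, so non-parametric re-analysis is advised; (3) several relevant references from the past three years are missing.
    Minor comments: typos on pages 2 and 5; the axis labels in figure 3 are unreadable; references 12 and 19 are duplicates.
    Concluding remarks: the manuscript merits publication once the above concerns are addressed.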

Some of the currently practised review models are well positioned to help authors reveal and correct their mistakes at pre- or post-publication stages (Table II). The global move toward open science is particularly instrumental for increasing the quality and transparency of reviewer contributions.

Advantages and disadvantages of common manuscript evaluation models

  • In-house (internal) editorial review – Advantages: allows detection of major flaws and errors that justify outright rejections; rarely, outstanding manuscripts are accepted without delays. Disadvantages: journal staff evaluations may be biased; manuscript acceptance without external review may raise concerns of soft quality checks.
  • Single-blind peer review – Advantages: masking reviewer identity prevents personal conflicts in small (closed) professional communities. Disadvantages: reviewer access to author profiles may result in biased and subjective evaluations.
  • Double-blind peer review – Advantages: concealing author and reviewer identities prevents biased evaluations, particularly in small communities. Disadvantages: masking all identifying information is technically burdensome and not always possible.
  • Open (public) peer review – Advantages: may increase quality, objectivity, and accountability of reviewer evaluations; it is now part of open science culture. Disadvantages: peers who do not wish to disclose their identity may decline reviewer invitations.
  • Post-publication open peer review – Advantages: may accelerate dissemination of influential reports in line with the concept “publish first, judge later”; this concept is practised by some open-access journals (e.g., F1000 Research). Disadvantages: not all manuscripts benefit from open dissemination without peers’ input; post-publication review may delay detection of minor or major mistakes.
  • Post-publication social media commenting – Advantages: may reveal some mistakes and misconduct and improve public perception of article implications. Disadvantages: not all communities use social media for commenting and other academic purposes.

Since there are no universally acceptable criteria for selecting reviewers and structuring their comments, the instructions of every peer-reviewed journal should specify priorities, models, and expected review outcomes [20]. Monitoring and reporting average peer review timelines is also required to encourage timely evaluations and avoid delays. Depending on journal policies and article types, the first round of peer review may last from a few days to a few weeks. Fast-track review (up to 3 days) is practised by some top journals which process clinical trial reports and other priority items.

In exceptional cases, reviewer contributions may result in substantive changes, appreciated by authors in the official acknowledgments. In most cases, however, reviewers should avoid engaging in the authors’ research and writing. They should refrain from instructing the authors on additional tests and data collection as these may delay publication of original submissions with conclusive results.

Established publishers often employ advanced editorial management systems that support reviewers by providing instantaneous access to the review instructions, online structured forms, and some bibliographic databases. Such support enables drafting of evidence-based comments that examine the novelty, ethical soundness, and implications of the reviewed manuscripts [21].

Encouraging reviewers to submit their recommendations on manuscript acceptance/rejection and related editorial tasks is now a common practice. Skilled reviewers may prompt the editors to reject or transfer manuscripts which fall outside the journal scope, perform additional ethics checks, and minimize chances of publishing erroneous and unethical articles. They may also raise concerns over the editorial strategies in their comments to the editors.

Since reviewer and editor roles are distinct, reviewer recommendations are aimed at helping editors, but not at replacing their decision-making functions. The final decisions rest with handling editors. Handling editors weigh not only reviewer comments, but also priorities related to article types and geographic origins, space limitations in certain periods, and envisaged influence in terms of social media attention and citations. This is why rejections of even flawless manuscripts are likely at early rounds of internal and external evaluations across most peer-reviewed journals.

Reviewers are often requested to comment on language correctness and overall readability of the evaluated manuscripts. Given the wide availability of in-house and external editing services, reviewer comments on language mistakes and typos are categorized as minor. At the same time, non-Anglophone experts’ poor language skills often exclude them from contributing to the peer review in most influential journals [22]. Comments should be properly edited to convey messages in positive or neutral tones, express ideas of varying degrees of certainty, and present logical order of words, sentences, and paragraphs [23, 24]. Consulting linguists on communication culture, passing advanced language courses, and honing commenting skills may increase the overall quality and appeal of the reviewer accomplishments [5, 25].

Peer reviewer credits

Various crediting mechanisms have been proposed to motivate reviewers and maintain the integrity of science communication [26]. Annual reviewer acknowledgments are widely practised for naming manuscript evaluators and appreciating their scholarly contributions. Given the need to weigh reviewer contributions, some journal editors distinguish ‘elite’ reviewers with numerous evaluations and award those with timely and outstanding accomplishments [27]. Such targeted recognition ensures ethical soundness of the peer review and facilitates promotion of the best candidates for grant funding and academic job appointments [28].

Also, large publishers and learned societies issue certificates of excellence in reviewing which may include Continuing Professional Development (CPD) points [29]. Finally, an entirely new crediting mechanism is proposed to award bonus points to active reviewers who may collect, transfer, and use these points to discount gold open-access charges within the publisher consortia [30].

With the launch of Publons (http://publons.com/) and its integration with Web of Science Group (Clarivate Analytics), reviewer recognition has become a matter of scientific prestige. Reviewers can now freely open their Publons accounts and record their contributions to online journals with Digital Object Identifiers (DOI). Journal editors, in turn, may generate official reviewer acknowledgments and encourage reviewers to forward them to Publons for building up individual reviewer and journal profiles. All published articles maintain e-links to their review records and post-publication promotion on social media, allowing the reviewers to continuously track expert evaluations and comments. A paid-up partnership is also available to journals and publishers for automatically transferring peer-review records to Publons upon mutually acceptable arrangements.

Listing reviewer accomplishments on an individual Publons profile showcases scholarly contributions of the account holder. The reviewer accomplishments placed next to the account holders’ own articles and editorial accomplishments point to the diversity of scholarly contributions. Researchers may establish links between their Publons and ORCID accounts to further benefit from complementary services of both platforms. Publons Academy (https://publons.com/community/academy/) additionally offers an online training course to novice researchers who may improve their reviewing skills under the guidance of experienced mentors and journal editors. Finally, journal editors may conduct searches through the Publons platform to select the best reviewers across academic disciplines.

Peer review ethics

Prior to accepting reviewer invitations, scholars need to weigh a number of factors which may compromise their evaluations. First of all, they should accept reviewer invitations only if they are capable of submitting their comments in a timely manner. Peer review timelines depend on article type and vary widely across journals. The rules of transparent publishing necessitate recording manuscript submission and acceptance dates in article footnotes to inform readers of the evaluation speed and to help investigators in the event of multiple unethical submissions. Timely reviewer accomplishments often enable fast publication of valuable works with positive implications for healthcare. Unjustifiably long peer review, on the contrary, delays dissemination of influential reports and results in ethical misconduct, such as plagiarism of a manuscript under evaluation [31].

At a time of proliferation of open-access journals relying on article processing charges, an unjustifiably short review may point to the absence of quality evaluation and an apparently ‘predatory’ publishing practice [32, 33]. When choosing their target journals, authors should take into account the peer review strategy and associated timelines to avoid substandard periodicals.

Reviewer primary interests (unbiased evaluation of manuscripts) may come into conflict with secondary interests (promotion of their own scholarly works), necessitating disclosures by filling in related parts in the online reviewer window or uploading the ICMJE conflict of interest forms. Biomedical reviewers, who are directly or indirectly supported by the pharmaceutical industry, may encounter conflicts while evaluating drug research. Such instances require explicit disclosures of conflicts and/or rejections of reviewer invitations.

Journal editors are obliged to employ mechanisms for disclosing reviewer financial and non-financial conflicts of interest to avoid processing of biased comments [34]. They should also cautiously process negative comments that oppose dissenting, but still valid, scientific ideas [35]. Reviewer conflicts that stem from academic activities in a competitive environment may introduce biases, resulting in unfair rejections of manuscripts with opposing concepts, results, and interpretations. The same academic conflicts may lead to coercive reviewer self-citations, forcing authors to incorporate suggested reviewer references or face negative feedback and an unjustified rejection [36]. Notably, several publisher investigations have demonstrated a global scale of such misconduct, involving some highly cited researchers and top scientific journals [37].

Fake peer review, an extreme example of conflict of interest, is another form of misconduct that has surfaced in the time of mass proliferation of gold open-access journals and publication of articles without quality checks [38]. Fake reviews are generated by manipulative authors and commercial editing agencies with full access to their own manuscripts and peer review evaluations in the journal editorial management systems. The sole aim of these reviews is to break the manuscript evaluation process and to pave the way for publication of pseudoscientific articles. Authors of these articles are often supported by funds intended for the growth of science in non-Anglophone countries [39]. Iranian and Chinese authors have often been caught submitting fake reviews, resulting in mass retractions by large publishers [38]. Several suggestions have been made to overcome this issue, with assigning independent reviewers and requesting their ORCID IDs viewed as the most practical options [40].

Conclusions

The peer review process is regulated by publishers and editors, enforcing updated global editorial recommendations. Selecting the best reviewers and providing authors with constructive comments may improve the quality of published articles. Reviewers are selected in view of their professional backgrounds and skills in research reporting, statistics, ethics, and language. Quality reviewer comments attract superior submissions and add to the journal’s scientific prestige [41].

In the era of digitization and open science, various online tools and platforms are available to upgrade the peer review and credit experts for their scholarly contributions. With its links to the ORCID platform and social media channels, Publons now offers the optimal model for crediting and keeping track of the best and most active reviewers. Publons Academy additionally offers online training for novice researchers who may benefit from the experience of their mentoring editors. Overall, reviewer training in how to evaluate journal submissions and avoid related misconduct is an important process, which some indexed journals are experimenting with [42].

The timelines and rigour of the peer review may change during the current pandemic. However, journal editors should mobilize their resources to avoid publication of unchecked and misleading reports. Additional efforts are required to monitor published contents and encourage readers to post their comments on publishers’ online platforms (blogs) and other social media channels [43, 44].

The authors declare no conflict of interest.


Career Column, 08 October 2018

How to write a thorough peer review

Mathew Stiller-Reeve

Mathew Stiller-Reeve is a climate researcher at NORCE/Bjerknes Centre for Climate Research in Bergen, Norway, the leader of SciSnack.com, and a thematic editor at Geoscience Communication.


Scientists do not receive enough peer-review training. To improve this situation, a small group of editors and I developed a peer-review workflow to guide reviewers in delivering useful and thorough analyses that can really help authors to improve their papers.


doi: https://doi.org/10.1038/d41586-018-06991-0

This is an article from the Nature Careers Community, a place for Nature readers to share their professional experiences and advice. Guest posts are encouraged. You can get in touch with the editor at [email protected].


The genesis of this paper is the proposal that genomes containing a low percentage of guanosine and cytosine (GC) nucleotide pairs lead to proteomes more prone to aggregation than those encoded by GC-rich genomes. As a consequence, these organisms are also more dependent on the protein folding machinery. If true, this interesting hypothesis could establish a direct link between the tendency to aggregate and the genomic code.

In their paper, the authors have tested the hypothesis on the genomes of eubacteria using a genome-wide approach based on multiple machine learning models. Eubacteria are an interesting set of organisms which have an appreciably high variation in their nucleotide composition, with the percentage of GC genetic material ranging from 20% to 70%. The authors classified different eubacterial proteomes in terms of their aggregation propensity and chaperone dependence. For this purpose, new classifiers had to be developed which were based on carefully curated data. They took account of twenty-four different features, among which are sequence patterns, the pseudo amino acid composition of phenylalanine, aspartic and glutamic acid, the distribution of positively charged amino acids, the FoldIndex score, and the hydrophobicity. These classifiers seem to be altogether more accurate and robust than previous such parameters.

The authors found that, contrary to what was expected from the working hypothesis, which would predict a decrease in protein aggregation with an increase in GC richness, the aggregation propensity of proteomes increases with the GC content; thus the stability of the proteome against aggregation increases as the GC content decreases. The work also established a direct correlation between GC-poor proteomes and a lower dependence on GroEL. The authors conclude by proposing that a decrease in eubacterial GC content may have been selected in organisms facing proteostasis problems. A way to test the overall results would be through in vitro evolution experiments aimed at testing whether adaptation to low GC content provides a folding advantage.

The main strengths of this paper are that it addresses an interesting and timely question, finds a novel solution based on a carefully selected set of rules, and provides a clear answer. As such, this article represents an excellent and elegant bioinformatics genome-wide study which will almost certainly influence our thinking about protein aggregation and evolution. One weakness is that the text is not always easy to read and sometimes establishes unclear logical links between concepts.

Another possible criticism could be that, as with any in silico study, it makes strong assumptions about the sequence features that lead to aggregation and relies strongly on the quality of the classifiers used. Even though the developed classifiers seem to be more robust than previous such parameters, they remain overall indicators which allow only statistical conclusions. It could of course be argued that this is good enough to reach meaningful conclusions in this specific case.

The paper by Chevalier et al. analyzed whether the late sodium current (INaL) can be assessed using an automated patch-clamp device. To this end, the INaL effects of ranolazine (a well-known INaL inhibitor) and veratridine (an INaL activator) were described. The authors tested the CytoPatch automated patch-clamp equipment and performed whole-cell recordings in HEK293 cells stably transfected with human Nav1.5. Furthermore, they also tested the electrophysiological properties of human induced pluripotent stem cell-derived cardiomyocytes (hiPS) provided by Cellular Dynamics International. The title and abstract are appropriate for the content of the text. Furthermore, the article is well constructed, the experiments were well conducted, and the analysis was well performed.

INaL is a small current component generated by a fraction of Nav1.5 channels that, instead of entering the inactivated state, rapidly reopen in a burst mode. INaL critically determines action potential duration (APD), in such a way that both acquired (myocardial ischemia and heart failure, among others) and inherited (long QT type 3) diseases that augment the INaL magnitude also increase the susceptibility to cardiac arrhythmias. Therefore, INaL has been recognized as an important target for the development of drugs with either anti-ischemic or antiarrhythmic effects. Unfortunately, accurate measurement of INaL is time-consuming and technically challenging because of its very small density. The automated patch-clamp device tested by Chevalier et al. resolves this problem and allows fast and reliable INaL measurements.

The results presented here merit some comments and raise some unresolved questions. First, in some experiments (such as experiments B and D in Figure 2) the current recordings obtained before ranolazine perfusion seem quite unstable. Indeed, the amplitude progressively increased to a maximum value that was considered as the control value (highlighted with arrows). Can this problem be overcome? Is this a consequence of slow intracellular dialysis? Is it a consequence of a time-dependent shift of the voltage dependence of activation/inactivation? Second, as shown in Figure 2, the intensity of drug effects seems quite variable. In fact, experiments A, B, C, and D in Figure 2 and panel 2D demonstrate that veratridine augmentation ranged from 0-400%. Even assuming normal biological variability, we wonder whether this broad range of effect intensities can be justified by changes in the perfusion system. Has the automated dispensing system been tested? If not, we suggest testing the effects of several K+ concentrations on the inward rectifier currents generated by Kir2.1 channels (IKir2.1).

The authors demonstrated that the recording quality was so high that the automated device allows differentiation between noise and current, even when measuring currents of less than 5 pA in amplitude. In order to make more precise mechanistic assumptions, the authors performed an elegant estimation of current variance (σ²) and macroscopic current (I) following the procedure described more than 30 years ago by Van Driessche and Lindemann [1]. By means of this method, Chevalier et al. concluded that ranolazine acts by reducing the open channel probability, while veratridine increases the number of channels in the burst mode. We respectfully would like to stress that these considerations must be put in context from a pharmacological point of view. We do not doubt that ranolazine acts as an open channel blocker; what does seem clear, however, is that its onset of block kinetics has to be “ultra” slow, otherwise ranolazine would decrease peak INaL even at low frequencies of stimulation. This comment points towards the fact that for a precise mechanistic study of ionic current-modifying drugs it is mandatory to analyze drug effects with much more complicated pulse protocols. The questions thus are: does this automated equipment allow the analysis of the frequency-, time-, and voltage-dependent effects of drugs? Can versatile and complicated pulse protocols be applied? Does it allow good voltage control even when the generated currents are big and fast? If this is not possible, then, despite its extraordinary discrimination between current and noise, this automated patch-clamp equipment will only be helpful for rapid screening of INaL-modifying drugs. Obviously it will also be perfect for testing HERG-blocking drug effects as demanded by the regulatory authorities.
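For readers unfamiliar with this kind of fluctuation analysis, a generic sketch may help (this illustrates mean-variance noise analysis in general, not necessarily the exact procedure of Van Driessche and Lindemann): for N independent channels with single-channel current i and mean macroscopic current I, the variance follows σ² = iI − I²/N, so fitting a parabola to variance-versus-mean data recovers i and N. A minimal sketch in Python with simulated numbers (assuming numpy):

    import numpy as np

    rng = np.random.default_rng(3)
    i_true, n_channels = 1.5, 400        # single-channel current (pA), channel count
    I = np.linspace(5, 500, 40)          # macroscopic mean current (pA)
    # Variance predicted by sigma^2 = i*I - I^2/N, plus measurement noise
    var = i_true * I - I**2 / n_channels + rng.normal(0, 2, I.size)

    # Fit the parabola: the linear coefficient estimates i, the quadratic -1/N
    c2, c1, _ = np.polyfit(I, var, 2)
    print(f"i = {c1:.2f} pA, N = {-1 / c2:.0f} channels")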

Finally, as cardiac electrophysiologists, we would like to stress that our dream of testing drug effects on human ventricular myocytes seems to be coming true. Indeed, human atrial myocytes are technically, ethically and logistically difficult to obtain, and human ventricular myocytes are almost impossible to obtain except from hearts explanted from patients at the end stage of cardiac disease. Here the authors demonstrated that ventricular myocytes derived from hiPS generate beautiful action potentials that can be recorded with this automated equipment. The traces shown suggest that there was no alternation in the action potential duration. Is this a consistent finding? How long do these stable recordings last? Our only comment is that the resting membrane potential seems to be somewhat variable. Can this be resolved? Is it an unexpected veratridine effect? Standardization of maturation methods for hiPS-derived ventricular myocytes will be a great achievement for cardiac cellular electrophysiology, which for years has been forced into imprecise extrapolation from data obtained from a combination of several species, none of which is representative of human electrophysiology. The big deal will be the maturation of hiPS-derived human atrial myocytes that fulfil the known characteristics of human atrial cells.

We suggest removing the initial sentence of section 3. We surmise that results obtained from the experiments described in this section cannot help us understand the role of INaL in arrhythmogenesis.

1. Van Driessche W, Lindemann B. Concentration dependence of currents through single sodium-selective pores in frog skin. Nature. 1979; 282(5738): 519-520.

The authors have clarified several of the questions I raised in my previous review. Unfortunately, most of the major problems have not been addressed by this revision. As I stated in my previous review, I deem it unlikely that all those issues can be solved merely by a few added paragraphs. Instead there are still some fundamental concerns with the experimental design and, most critically, with the analysis. This means the strong conclusions put forward by this manuscript are not warranted and I cannot approve the manuscript in this form.

  1. The greatest concern is that when I followed the description of the methods in the previous version it was possible to decode, with almost perfect accuracy, any arbitrary stimulus labels I chose. See https://doi.org/10.6084/m9.figshare.1167456 for examples of this reanalysis. Regardless of whether we pretend that the actual stimulus appeared at a later time or was continuously alternating between signal and silence, the decoding is always close to perfect. This is an indication that the decoding has nothing to do with the actual stimulus heard by the Sender but is opportunistically exploiting some other features in the data. The control analysis the authors performed, reversing the stimulus labels, cannot address this problem because it suffers from the exact same problem. Essentially, what the classifier is presumably using is the time that has passed since the recording started.
  2. The reason for this is presumably that the authors used non-independent data for training and testing. Assuming I understand correctly (see point 3), randomly sampling one half of the data samples from an EEG trace does not yield independent data. Repeating the analysis five times – the control analysis the authors performed – is not an adequate way to address this concern. Randomly selecting samples from a time series containing slow changes (such as the slow wave activity that presumably dominates these recordings under these circumstances) will inevitably produce strong temporal correlations; see the sketch after this list. See TemporalCorrelations.jpg in https://doi.org/10.6084/m9.figshare.1185723 for 2D density histograms and a correlation matrix demonstrating this.
  3. While the revised methods section provides more detail now, it is still unclear exactly what data were used. Conventional classification analyses report which data features (usually the columns in the data matrix) and which observations (usually the rows) were used. Anything could be a feature, but typically this might be the different EEG channels or fMRI voxels, etc. Observations are usually time points. Here I assume the authors transformed the raw samples into a different space using principal component analysis. It is not stated whether the dimensionality was reduced using the eigenvalues. Either way, I assume the data samples (collected at 128 Hz) were then used as observations and the EEG channels transformed by PCA were used as features. The stimulus labels were assigned as ON or OFF for each sample. A set of 50% of samples (and labels) was then selected at random for training, and the rest was used for testing. Is this correct?
  4. A powerful non-linear classifier can capitalise on such correlations to discriminate arbitrary labels. In my own analyses I used both an SVM with RBF as well as a k-nearest neighbour classifier, both of which produce excellent decoding of arbitrary stimulus labels (see point 1). Interestingly, linear classifiers or less powerful SVM kernels fare much worse – a clear indication that the classifier learns about the complex non-linear pattern of temporal correlations that can describe the stimulus label. This is further corroborated by the fact that when using stimulus labels that are chosen completely at random (i.e. with high temporal frequency) decoding does not work.
  5. The authors have mostly clarified how the correlation analysis was performed. It is still left unclear, however, how the correlations for individual pairs were averaged. Was Fisher’s z-transformation used, or were the data pooled across pairs? More importantly, it is not entirely surprising that under the experimental conditions there will be some correlation between the EEG signals for different participants, especially in low frequency bands. Again, this further supports the suspicion that the classification utilizes slow frequency signals that are unrelated to the stimulus and the experimental hypothesis. In fact, a quick spot check seems to confirm this suspicion: correlating the time series separately for each channel from the Receiver in pair 1 with those from the Receiver in pair 18 reveals 131 significant (p < 0.05, Bonferroni corrected) out of 196 (14 × 14 channels) correlations… One could perhaps argue that this is not surprising because both these pairs had been exposed to identical stimulus protocols: one minute of initial silence and only one signal period (see point 6). However, it certainly argues strongly against the notion that the decoding is in any way related to the mental connection between the particular Sender and Receiver in a given pair, because it clearly works between Receivers in different pairs! To further control for this possibility I repeated the same analysis but now comparing the Receiver from pair 1 to the Receiver from pair 15. This pair was exposed to a different stimulus paradigm (2 minutes of initial silence and a longer paradigm with three signal periods). I only used the initial 3 minutes for the correlation analysis. Therefore, both recordings would have been exposed to only one signal period, but at different times (at 1 min and 2 min for pair 1 and 15, respectively). Even though the stimulus protocol was completely different, the time courses for all the channels are highly correlated and 137 out of 196 correlations are significant. Considering that I used the raw data for this analysis, it should not surprise anyone that extracting power from different frequency bands in short time windows will also reveal significant correlations. Crucially, this demonstrates that correlations between Sender and Receiver are artifactual and trivial.
  6. The authors argue in their response and the revision that predictive strategies were unlikely. After having performed these additional analyses I am inclined to agree. The excellent decoding almost certainly has nothing to do with expectation or imagery effects and it is irrelevant whether participants could guess the temporal design of the experiment. Rather, the results are almost entirely an artefact of the analysis. However, this does not mean that predictability is not an issue. The figure StimulusTimecourses.jpg in https://doi.org/10.6084/m9.figshare.1185723 plots the stimulus time courses for all 20 pairs as can be extracted from the newly uploaded data. This confirms what I wrote in my previous review; in fact, with the corrected data sets the problem with predictability is even greater. Out of the 20 pairs, 13 started with 1 min of initial silence. The remaining 7 had 2 minutes of initial silence. Most of the stimulus paradigms are therefore perfectly aligned and thus highly correlated. This also proves incorrect the statement that initial silence periods were 1, 2, or 3 minutes. No pair had 3 min of initial silence. It would therefore have been very easy for any given Receiver to correctly guess the protocol. It should be clear that this is far from optimal for testing such an unorthodox hypothesis. Any future experiments should employ more randomization to decrease predictability. Even if this wasn’t the underlying cause of the present results, this is simply not great experimental design.
  7. The authors now acknowledge in their response that all the participants were authors. They say that this is also acknowledged in the methods section, but I did not see any statement about that in the revised manuscript. As before, I also find it highly questionable to include only authors in an experiment of this kind. It is not sufficient to claim that Receivers weren’t guessing their stimulus protocol. While I am giving the authors (and thus the participants) the benefit of the doubt that they actually believe they weren’t guessing/predicting the stimulus protocols, this does not rule out that they did. It may in fact be possible to make such predictions subconsciously (now, if you ask me, this is an interesting scientific question someone should do an experiment on!). The fact that participants were familiar with the protocol may have helped with that. Any future experiments should take steps to prevent this.
  8. I do not follow the explanation of the binomial test the authors used. Based on the excessive Bayes Factor of 390,625, it is clear that the authors assumed a chance level of 50% in their binomial test. Because the design is not balanced, this is not correct.
  9. In general, the Bayes Factor and the extremely high decoding accuracy should have given the authors pause. Considering the unusual hypothesis, did the authors not at any point wonder whether these results aren’t just far too good to be true? Decoding mental states from brain activity is typically extremely noisy and hardly affords accuracies at the level seen here. Extremely accurate decoding and Bayes Factors in the hundreds of thousands should be a tell-tale sign to check that there isn’t an analytical flaw that makes the result entirely trivial. I believe this is what happened here, and thus I think this experiment serves as a very good demonstration of the pitfalls of applying such analyses without sanity checks. In order to make claims like this, the experimental design must contain control conditions that can rule out these problems. Presumably, recordings without any Sender, and maybe even ones where the “Receiver” is aware of this fact, should produce very similar results.
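To make the leakage argument in points 2 and 4 concrete, here is a small self-contained sketch (toy data, not the study’s EEG; it assumes Python with numpy and scikit-learn). A slowly drifting multi-channel signal is given arbitrary labels that depend only on elapsed time; randomly splitting individual samples between training and testing lets a k-nearest-neighbour classifier “decode” those labels almost perfectly, while a split that respects time stays near chance:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n = 5000
    # Random-walk "recording": 4 channels dominated by slow drifts,
    # standing in for low-frequency EEG activity
    signal = np.cumsum(rng.normal(size=(n, 4)), axis=0)
    # Arbitrary ON/OFF labels that depend only on elapsed time
    labels = (np.arange(n) // 500) % 2

    # Flawed protocol: random 50/50 split of individual samples; train and
    # test samples are temporally interleaved, so every test sample has a
    # near-identical training neighbour carrying the same time-based label
    X_tr, X_te, y_tr, y_te = train_test_split(
        signal, labels, test_size=0.5, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    print("random split:", clf.score(X_te, y_te))        # close to 1.0

    # Sounder protocol: train on the first half, test on the second
    half = n // 2
    clf2 = KNeighborsClassifier(n_neighbors=5).fit(signal[:half], labels[:half])
    print("temporal split:", clf2.score(signal[half:], labels[half:]))  # ~0.5

Nothing about the labels is encoded in the toy signal, yet the random-sample split decodes them almost perfectly; this is the same artefact that the reanalysis above demonstrates on the actual recordings.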

Based on all these factors, it is impossible for me to approve this manuscript. I should, however, state that it is laudable that the authors chose to make all the raw data of their experiment publicly available. Without this it would have been impossible for me to carry out the additional analyses, and thus the most fundamental problem in the analysis would have remained unknown. I respect the authors’ patience and professionalism in dealing with what I can only assume is a rather harsh review experience. I am honoured by the request for an adversarial collaboration. I do not rule out such efforts at some point in the future. However, for all of the reasons outlined in this and my previous review, I do not think the time is right for this experiment to proceed to this stage. Fundamental analytical flaws and weaknesses in the design should be ruled out first. An adversarial collaboration only really makes sense to me for paradigms where we can be confident that mundane or trivial factors have been excluded.

This manuscript does an excellent job demonstrating significant strain differences in Buridan’s paradigm. Since each Drosophila lab has its own wild-type (usually Canton-S) isolate, this issue of strain differences is actually a very important one for between-lab reproducibility. This work is a good reminder for all geneticists to pay attention to population effects in the background controls, and presumably in the mutant lines we are comparing.

I was very pleased to see that the within-isolate behavior was consistent in replicate experiments one year apart. The authors further argue that the between-isolate differences in behavior arise from a founder effect, at least for the differences in locomotor behavior between the Paris lines CS_TP and CS_JC. I believe this is a very reasonable and testable hypothesis. It predicts that genetic variability for these traits exists within the populations. It should now be possible to perform selection experiments from the original CS_TP population to replicate the founding event and estimate the heritability of these traits.

Two other things that I liked about this manuscript are the ability to adjust parameters in figure 3 and our ability to download the raw data. After reading the manuscript, I was a little disappointed that the performance of the five strains in each of the 12 behavioral variables wasn’t broken down individually in a table or figure; I thought this might help us readers understand what the principal components were representing. However, the authors have made this data readily accessible in a downloadable spreadsheet.

This is an exceptionally good review and balanced assessment of the status of CETP inhibitors and ASCVD from a world authority in the field. The article highlights important data that might have been overlooked when promulgating the clinical value of CETPIs and related trials.

Only 2 areas need revision:

  • Page 3, para 2: the notion that these data from Papp et al. convey is critical, and the message needs an explicit sentence or two at the end of the paragraph.
  • Page 4, Conclusion: the assertion concerning the ethics of the two Phase 3 clinical trials needs toning down. Perhaps rephrase to indicate that the value and sense of doing these trials is open to question, with attendant ethical implications, or use softer wording to that effect.

The Wiley et al. manuscript describes a beautiful synthesis of contemporary genetic approaches to identify, with astonishing efficiency, lead compounds for therapeutic approaches to a serious human disease. I believe the importance of this paper stems from the applicability of the approach to the several thousand rare human disease genes that next-gen sequencing will uncover in the next few years, and the challenge we will have in figuring out the function of these genes and their resulting defects. This work presents a paradigm that can be broadly and usefully applied.

In detail, the authors begin with the gene responsible for X-linked spinal muscular atrophy and express both the wild-type version of that human gene and a mutant form of that gene in S. pombe. The conceptual leap here is that progress in genetics is driven by phenotype, and this approach, involving a yeast with no spine or muscles to atrophy, is nevertheless an N-dimensional detector of phenotype.

The study is not without a small measure of luck, in that expression of the wild-type UBA1 gene caused a slow-growth phenotype which the mutant did not. Hence there was something in S. pombe that could feel the impact of this protein. Given this phenotype, the authors then went to work and, using the power of the synthetic genetic array approach pioneered by Boone and colleagues, made a systematic set of double mutants combining the human expressed UBA1 gene with knockout alleles of a plurality of S. pombe genes. They found well over a hundred mutations that either enhanced or suppressed the growth defect of the cells expressing UBA1. Most of these have human orthologs. My hunch is that many human genes expressed in yeast will have some comparably exploitable phenotype, and time will tell.

Building on the interaction networks of S. pombe genes already established, augmenting these networks with the protein interaction networks from yeast and from human proteome studies involving these genes, and drawing on the structure of the emerging networks, the authors deduced that an E3 ligase modulated UBA1 and made the leap that it therefore might also impact X-linked spinal muscular atrophy.

Here, the awesome power of the model organism community comes into the picture, as there is a zebrafish model of spinal muscular atrophy. The principle of phenologs articulated by the Marcotte group inspires the recognition of the transitive logic of how phenotypes in one organism relate to phenotypes in another. With this zebrafish model, they were able to confirm that an inhibitor of E3 ligases and of the Nedd8-E1 activating enzyme suppressed the motor axon anomalies, as predicted by the effect of mutations in S. pombe on the phenotypes of UBA1 overexpression.

I believe this is an important paper to teach in intro graduate courses as it illustrates beautifully how important it is to know about and embrace the many new sources of systematic genetic information and apply them broadly.

This paper by Amrhein et al. criticizes a paper by Bradley Efron that discusses Bayesian statistics (Efron, 2013a), focusing on a particular example that was also discussed in Efron (2013b). The example concerns a woman who is carrying twins, both male (as determined by sonogram; we ignore the possibility that gender has been observed incorrectly). The parents-to-be ask Efron to tell them the probability that the twins are identical.

This is my first open review, so I’m not sure of the protocol. But given that there appear to be errors in both Efron (2013b) and the paper under review, I am sorry to say that my review might actually be longer than the article by Efron (2013a), the primary focus of the critique, and the critique itself. I apologize in advance for this. To start, I will outline the problem being discussed for the sake of readers.

This problem has various parameters of interest. The primary parameter is the genetic composition of the twins in the mother’s womb. Are they identical (which I describe as the state x = 1) or fraternal twins (x = 0)? Let y be the data, with y = 1 indicating that the twins are the same gender. Finally, we wish to obtain Pr(x = 1 | y = 1), the probability that the twins are identical given they are the same gender. Bayes’ rule gives us an expression for this:

Pr(x = 1 | y = 1) = Pr(x = 1) Pr(y = 1 | x = 1) / {Pr(x = 1) Pr(y = 1 | x = 1) + Pr(x = 0) Pr(y = 1 | x = 0)}

Now we know that Pr(y = 1 | x = 1) = 1; twins must be the same gender if they are identical. Further, Pr(y = 1 | x = 0) = 1/2; if twins are not identical, the probability of them being the same gender is 1/2.

Finally, Pr(x = 1) is the prior probability that the twins are identical. The bone of contention in the Efron papers and the critique by Amrhein et al. revolves around how this prior is treated. One can think of Pr(x = 1) as the population-level proportion of twins that are identical for a mother like the one being considered.

However, if we ignore other forms of twins that are extremely rare (equivalent to ignoring coins finishing on their edges when flipping them), one incontrovertible fact is that Pr(x = 0) = 1 − Pr(x = 1); the probability that the twins are fraternal is the complement of the probability that they are identical.

The above values and expressions for Pr(y = 1 | x = 1), Pr(y = 1 | x = 0), and Pr(x = 0) lead to a simpler expression for the probability that we seek, the probability that the twins are identical given they have the same gender:

Pr(x = 1 | y = 1) = 2 Pr(x = 1) / [1 + Pr(x = 1)]   (1)

We see that the answer depends on the prior probability that the twins are identical, Pr(x = 1). The paper by Amrhein et al. points out that this is a mathematical fact. For example, if identical twins were impossible (Pr(x = 1) = 0), then Pr(x = 1 | y = 1) = 0. Similarly, if all twins were identical (Pr(x = 1) = 1), then Pr(x = 1 | y = 1) = 1. The “true” prior lies somewhere in between. Apparently, the doctor knows that one third of twins are identical². Therefore, if we assume Pr(x = 1) = 1/3, then Pr(x = 1 | y = 1) = 1/2.

Now, what would happen if we didn't have the doctor's knowledge? Laplace's “Principle of Insufficient Reason” would suggest that we give equal prior probability to all possibilities, so Pr(x = 1) = 1/2 and Pr(x = 1 | y = 1) = 2/3, an answer different from the 1/2 that was obtained when using the doctor's prior of 1/3.
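As a quick arithmetic check of equation (1), here is a two-line R illustration (mine, not part of either paper):

    # posterior probability the twins are identical given same gender, equation (1)
    posterior_identical <- function(prior) 2 * prior / (1 + prior)
    posterior_identical(1/3)  # doctor's prior: gives 0.5
    posterior_identical(1/2)  # Laplace's uniform prior: gives 2/3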

Efron (2013a) highlights this sensitivity to the prior, casting someone who defines an uninformative prior as a “violator”, with Laplace as the “prime violator”. In contrast, Amrhein et al. correctly point out that the difference in the posterior probabilities is merely a consequence of mathematical logic. No one is violating logic – they are merely expressing ignorance by specifying equal probabilities to all states of nature. Whether this is philosophically valid is debatable (Colyvan 2008), but that is a separate question, and it is well beyond the scope of this review. But setting Pr(x = 1) = 1/2 is not a violation; it is merely an assumption with consequences (and one that in hindsight might be incorrect²).

Alternatively, if we don't know Pr(x = 1), we could describe that probability by its own probability distribution. Now the problem has two aspects that are uncertain. We don’t know the true state x, and we don’t know the prior (except in the case where we use the doctor’s knowledge that Pr(x = 1) = 1/3). Uncertainty in the state of x refers to uncertainty about this particular set of twins. In contrast, uncertainty in Pr(x = 1) reflects uncertainty in the population-level frequency of identical twins. A key point is that the state of one particular set of twins is a different parameter from the frequency of occurrence of identical twins in the population.

Without knowledge about Pr(x = 1), we might use Pr(x = 1) ~ dunif(0, 1), which is consistent with Laplace. Efron (2013b) notes another alternative for an uninformative prior: Pr(x = 1) ~ dbeta(0.5, 0.5), which is the Jeffreys prior for a probability.

Here I disagree with Amrhein et al.; I think they are confusing the two uncertain parameters. Amrhein et al. state:

“We argue that this example is not only flawed, but useless in illustrating Bayesian data analysis because it does not rely on any data. Although there is one data point (a couple is due to be parents of twin boys, and the twins are fraternal), Efron does not use it to update prior knowledge. Instead, Efron combines different pieces of expert knowledge from the doctor and genetics using Bayes’ theorem.”

This claim might be correct when describing uncertainty in the population-level frequency of identical twins. The data about the twin boys are not useful by themselves for this purpose – they are a biased sample (the data have come to light because their gender is the same; they are not a random sample of twins). Further, a sample of size one, especially if biased, is not a firm basis for inference about a population parameter. While the data are biased, the claim by Amrhein et al. that there are no data is incorrect.

However, the data point (the twins have the same gender) is entirely relevant to the question about the state of this particular set of twins. And it does update the prior. This updating of the prior is given by equation (1) above. The doctor’s prior probability that the twins are identical (1/3) becomes the posterior probability (1/2) when using the information that the twins are the same gender. The prior is clearly updated, with Pr(x = 1 | y = 1) ≠ Pr(x = 1) in all but trivial cases; Amrhein et al.’s statement that I quoted above is incorrect in this regard.

This possible confusion between uncertainty about these twins and uncertainty about the population-level frequency of identical twins is further suggested by Amrhein et al.’s statements:

“Second, for the uninformative prior, Efron mentions erroneously that he used a uniform distribution between zero and one, which is clearly different from the value of 0.5 that was used. Third, we find it at least debatable whether a prior can be called an uninformative prior if it has a fixed value of 0.5 given without any measurement of uncertainty.”

Note, if the prior for Pr(x = 1) is specified as 0.5, or dunif(0,1), or dbeta(0.5, 0.5), the posterior probability that these twins are identical is 2/3 in all cases. Efron (2013b) says the different priors lead to different results, but that claim is incorrect, and the correct answer (2/3) is given in Efron (2013a)³. Nevertheless, a prior that specifies Pr(x = 1) = 0.5 does indicate uncertainty about whether this particular set of twins is identical (but certainty in the population-level frequency of twins). And Efron’s (2013a) result is consistent with Pr(x = 1) having a uniform prior. Therefore, both claims in the quote above are incorrect.

It is probably easiest to show the (lack of) influence of the prior using MCMC sampling. Here is WinBUGS code for the case using Pr(x = 1) = 0.5.
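(The original listing appears to have been dropped from this version of the review; the sketch below is my reconstruction, consistent with the pr_ident_twins variable named later, and not necessarily the original code.)

    model {
        # population-level probability that a set of twins is identical;
        # fixed at 0.5 here, but dunif(0,1) or dbeta(0.5,0.5) could be used instead
        pr_ident_twins <- 0.5
        # x = 1 if this particular set of twins is identical, 0 if fraternal
        x ~ dbern(pr_ident_twins)
        # twins are the same gender with probability 1 if identical, 0.5 if fraternal
        p.same <- x + (1 - x) * 0.5
        # the datum: these twins are the same gender (y = 1)
        y ~ dbern(p.same)
    }
    # data: list(y = 1); monitor the node x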

Running this model in WinBUGS shows that the posterior mean of x is 2/3; this is the posterior probability that x = 1.

Instead of using pr_ident_twins <- 0.5, we could set this probability as being uncertain and define pr_ident_twins ~ dunif(0,1), or pr_ident_twins ~ dbeta(0.5,0.5). In either case, the posterior mean value of x remains 2/3 (contrary to Efron 2013b, but in accord with the correction in Efron 2013a).

Note, however, that the value of the population-level parameter pr_ident_twins is different in all three cases. In the first, it remains unchanged at 1/2 where it was set. In the cases where the prior distribution for pr_ident_twins is uniform or beta, the posterior distributions remain broad, but they differ depending on the prior (as they should – different priors lead to different posteriors⁴). However, given the biased sample of size 1, the posterior distribution for this particular parameter is likely to be misleading as an estimate of the population-level frequency of twins.

So why doesn’t the choice of prior influence the posterior probability that these twins are identical? Well, for these three priors, the prior probability that any single set of twins is identical is 1/2 (this is essentially the mean of the prior distributions in these three cases).

If, instead, we set the prior as dbeta(1,2), which has a mean of 1/3, then the posterior probability that these twins are identical is 1/2. This is the same result as if we had set Pr(x = 1) = 1/3. In both these cases (choosing dbeta(1,2) or 1/3), the prior probability that a single set of twins is identical is 1/3, so the posterior is the same (1/2) given the data (the twins have the same gender).

Further, Amrhein et al. also seem to misunderstand the data. They note:

“Although there is one data point (a couple is due to be parents of twin boys, and the twins are fraternal)...”

This is incorrect. The parents simply know that the twins are both male. Whether they are fraternal is unknown (fraternal twins being the complement of identical twins) – that is the question the parents are asking. This error of interpretation makes the calculations in Box 1 and subsequent comments irrelevant.

Box 1 also implies Amrhein et al. are using the data to estimate the population frequency of identical twins rather than the state of this particular set of twins. This is different from the aim of Efron (2013a) and the stated question.

Efron suggests that Bayesian calculations should be checked with frequentist methods when priors are uncertain. However, this is a good example where this cannot be done easily, and Amrhein et al. are correct to point this out. In this case, we are interested in the probability that the hypothesis is true given the data (an inverse probability), not the probabilities that the observed data would be generated given particular hypotheses (frequentist probabilities). If one wants the inverse probability (the probability the twins are identical given they are the same gender), then Bayesian methods (and therefore a prior) are required. A logical answer simply requires that the prior is constructed logically. Whether that answer is “correct” will be, in most cases, only known in hindsight.

However, one possible way to analyse this example using frequentist methods would be to assess the likelihood of obtaining the data under each of the two hypotheses (the twins are identical or fraternal). The likelihood of the twins having the same gender under the hypothesis that they are identical is 1. The likelihood of the twins having the same gender under the hypothesis that they are fraternal is 0.5. Therefore, the weight of evidence in favour of identical twins is twice that of fraternal twins. Scaling these weights so they sum to one (Burnham and Anderson 2002) gives a weight of 2/3 for identical twins and 1/3 for fraternal twins. These scaled weights have the same numerical values as the posterior probabilities based on either a Laplace or Jeffreys prior. Thus, one might argue that the weight of evidence for each hypothesis when using frequentist methods is equivalent to the posterior probabilities derived from an uninformative prior. So, as a final aside in reference to Efron (2013a), if we are being “violators” when using a uniform prior, are we also being “violators” when using frequentist methods to weigh evidence? Regardless of the answer to this rhetorical question, “checking” the results with frequentist methods doesn’t give any more insight than using uninformative priors (in this case). However, this analysis shows that the question can be analysed using frequentist methods; the single data point is not a problem for this. The claim in Amrhein et al. that a frequentist analysis “is impossible because there is only one data point, and frequentist methods generally cannot handle such situations” is not supported by this example.
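The weighting arithmetic is easy to verify in R (my illustration, not part of the review):

    lik <- c(identical = 1, fraternal = 0.5)  # likelihood of same-gender twins under each hypothesis
    lik / sum(lik)                            # scaled weights: 2/3 and 1/3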

In summary, the comment by Amrhein et al. raises some interesting points that seem worth discussing, but it makes important errors in analysis and interpretation, and misrepresents the results of Efron (2013a) . This means the current version should not be approved.

Burnham, K.P. & D.R. Anderson. 2002. Model Selection and Multi-model Inference: a Practical Information-theoretic Approach. Springer-Verlag, New York.

Colyvan, M. 2008. Is Probability the Only Coherent Approach to Uncertainty? Risk Anal. 28: 645-652.

Efron B. (2013a) Bayes’ Theorem in the 21st Century. Science 340(6137): 1177-1178.

Efron B. (2013b) A 250-year argument: Belief, behavior, and the bootstrap. Bull Amer. Math Soc. 50: 129-146.

  1. The twins are both male. However, if the twins were both female, the statistical results would be the same, so I will simply use the data that the twins are the same gender.
  2. In reality, the frequency of twins that are identical is likely to vary depending on many factors, but we will accept 1/3 for now.
  3. Efron (2013b) reports the posterior probability for these twins being identical as “a whopping 61.4% with a flat Laplace prior” but as 2/3 in Efron (2013a). The latter (I assume 2/3 is “even more whopping”!) is the correct answer, which I confirmed via email with Professor Efron. Therefore, Efron (2013b) incorrectly claims the posterior probability is sensitive to the choice between a Jeffreys or Laplace uninformative prior.
  4. When the data are very informative relative to the different priors, the posteriors will be similar, although not identical.

I am very glad the authors wrote this essay. It is a well-written, needed, and useful summary of the current status of “data publication” from a certain perspective. The authors, however, need to be bolder and more analytical. This is an opinion piece, yet I see little opinion. A certain view is implied by the organization of the paper and the references chosen, but they could be more explicit.

The paper would be both more compelling and more useful to a broad readership if the authors moved beyond providing a simple summary of the landscape, examined why there is controversy in some areas, and then used the evidence they have compiled to suggest a path forward. They need to be more forthright in saying what data publication means to them, or what parts of it they do not deal with. Are they satisfied with the Lawrence et al. definition? Do they accept the critique of Parsons and Fox? What is the scope of their essay?

The authors take a rather narrow view of data publication, which I think hinders their analyses. They describe three types of (digital) data publication: Data as a supplement to an article; data as the subject of a paper; and data independent of a paper. The first two types are relatively new and they represent very little of the data actually being published or released today. The last category, which is essentially an “other” category, is rich in its complexity and encompasses the vast majority of data released. I was disappointed that the examples of this type were only the most bare-bones (Zenodo and Figshare). I think a deeper examination of this third category and its complexity would help the authors better characterize the current landscape and suggest paths forward.

Some questions the authors might consider: Are these really the only three models in consideration, or does the publication model overstate a consensus around a certain type of data publication? Why are there different models, and which approach is better for different situations? Do they have different business models or imply different social contracts? Might it also be worthwhile to develop a typology of “publishers” rather than “publications”? For example, do domain repositories vs. institutional repositories vs. publishers address the issues differently? Are these models sustaining models or just something to get us through the next 5-10 years while we really figure it out?

I think this oversimplification inhibited some deeper analysis in other areas as well. I would like to see more examination of the validation requirement beyond the lens of peer review, and I would like a deeper examination of incentives and credit beyond citation.

I thought the validation section of the paper was very relevant, but somewhat light. I like the choice of the term validation as more accurate than “quality” and it fits quite well with Callaghan’s useful distinction between technical and scientific review, but I think the authors overemphasize the peer-review style approach. The authors rightly argue that “peer-review” is where the publication metaphor leads us, but it may be a false path. They overstate some difficulties of peer-review (No-one looks at every data value? No, they use statistics, visualization, and other techniques.) while not fully considering who is responsible for what. We need a closer examination of different roles and who are appropriate validators (not necessarily conventional peers). The narrowly defined models of data publication may easily allow for a conventional peer-review process, but it is much more complex in the real-world “other” category. The authors discuss some of this in what they call “independent data validation,” but they don’t draw any conclusions.

Only the simplest research data collections are validated solely by the original creators. More often there are teams working together to develop experiments, sampling protocols, algorithms, etc. There are additional teams who assess, calibrate, and revise the data as they are collected and assembled. The authors discuss some of this in their examples like the PDS and tDAR, but I wish they were more analytical and offered an opinion on the way forward. Are there emerging practices or consensus in these team-based schemes? The level of service concept illustrated by Open Context may be one such area. Would formalizing or codifying some of these processes accomplish the same as peer-review or more? What is the role of the curator or data scientist in all of this? Given the authors’ backgrounds, I was surprised this role was not emphasized more. Finally, I think it is a mistake for science review to be the main way to assess reuse value. It has been shown time and again that data end up being used effectively (and valued) in ways that original experts never envisioned or even thought valid.

The discussion of data citation was good and captured the state of the art well, but again I would have liked to see some views on a way forward. Have we solved the basic problem and are now just dealing with edge cases? Is the “just-in-time identifier” the way to go? What are the implications? Will the more basic solutions work in the interim? More critically, are we overemphasizing the role of citation to provide academic credit? I was gratified that the authors referenced the Parsons and Fox paper which questions the whole data publication metaphor, but I was surprised that they only discussed the “data as software” alternative metaphor. That is a useful metaphor, but I think the ecosystem metaphor has broader acceptance. I mention this because the authors critique the software metaphor because “using it to alter or affect the academic reward system is a tricky prospect”. Yet there is little to suggest that data publication and corresponding citation alters that system either. Indeed there is little if any evidence that data publication and citation incentivize data sharing or stewardship. As Christine Borgman suggests, we need to look more closely at who we are trying to incentivize to do what. There is no reason to assume it follows the same model as research literature publication. It may be beyond the scope of this paper to fully examine incentive structures, but it at least needs to be acknowledged that building on the current model doesn’t seem to be working.

Finally, what is the takeaway message from this essay? It ends rather abruptly with no summary, no suggested directions or immediate challenges to overcome, no call to action, no indications of things we should stop trying, and only brief mention of alternative perspectives. What do the authors want us to take away from this paper?

Overall though, this is a timely and needed essay. It is well researched and nicely written with rich metaphor. With modifications addressing the detailed comments below and better recognizing the complexity of the current data publication landscape, this will be a worthwhile review paper. With more significant modification where the authors dig deeper into the complexities and controversies and truly grapple with their implications to suggest a way forward, this could be a very influential paper. It is possible that the definitions of “publication” and “peer-review” need not be just stretched but changed or even rejected.

  • The whole paper needs a quick copy edit. There are a few typos, missing words, and wrong verb tenses. Note the word “data” is a plural noun, e.g., “Data are not software, nor are they literature.” (Also: NSICD appears instead of NSIDC.)
  • Page 2, para 2: “citability is addressed by assigning a PID.” This is not true, as the authors discuss on page 4, para 4. Indeed, page 4, para 4 seems to contradict itself. Citation is more than a locator/identifier.
  • In the discussion of “Data independent of any paper” it is worth noting that there may often be linkages between these data and myriad papers. Indeed, a looser concept of a data paper has existed for some time, where researchers request a citation to a paper even though it is not the data nor fully describes the data (e.g. the CRU temp records).
  • Page 4, para 1: I’m not sure it’s entirely true that published data cannot involve requesting permission. In past work with Indigenous knowledge holders, they were willing to publish summary data and then provide the details when satisfied the use was appropriate and not exploitive. I think those data were “published” as best they could be. A nit, perhaps, but it highlights that there are few if any hard and fast rules about data publication.
  • Page 4, para 2: You may also want to mention the WDS certification effort, which is combining with the DSA via an RDA Working Group.
  • Page 4, para 2: The joint declaration of data citation principles involved many more organizations than Force11, CODATA, and DCC. Please credit them all (maybe in a footnote). The glory of the effort was that it was truly a joint effort across many groups. There is no leader. Force11 was primarily a convener.
  • Page 4, para 6: The deep citation approach recommended by ESIP is not just to list variables or a range of data. It is to identify a “structural index” for the data and to use this to reference subsets. In Earth science this structural index is often space and time, but many other indices are possible: location in a gene sequence, file type, variable, bandwidth, viewing angle, etc. It is not just for “straightforward” data sets.
  • Page 5, para 5: I take issue with the statement that few repositories provide scientific review. I can think of a couple dozen that do just off the top of my head, and I bet most domain repositories have some level of science review. The “scientists” may not always be in house, but the repository is a team facilitator. See my general comments.
  • Page 5, para 10: The PDS system is only unusual in that it is well documented and advertised. As mentioned, this team style approach is actually fairly common.
  • Page 6, para 3: Parsons and Fox don’t just argue that the data publication metaphor is limiting. They also say it is misleading. That should be acknowledged at least, if not actively grappled with.
  • Artifact removal: Unfortunately the authors have not updated the paper with a 2x2 table showing guns and smiles by removed data points. This could dispel criticism that an asymmetrical expectation bias that has been shown to exist in similar experiments is not driving a bias leading to inappropriate conclusions. This is my strongest criticism of the paper and should be easily addressed as per my previous review comment. The fact that this simple data presentation was not performed to remove a clear potential source of spurious results is disappointing.
  • The authors have added 95% CIs to figures S1 and S2. This clarifies the scope for expectation bias in these data. The error bars are consistent with the authors’ assumption of a linear trend, indicating that the effect of sequences of either guns or smiles may not skew results. Equally, there could be either a downwards or upwards trend fitting within the confidence intervals that could be indicative of a cognitive bias that may violate the assumptions of the authors, leading to spurious results. One way to remove these doubts would be to stratify the analyses by the length of sequences of identical symbols. If the results hold up in each of the strata, this potential bias could be shown to be absent from the data. If the bias is strong, particularly in longer runs, this could indicate that the positive result was due to small numbers of longer identical runs combined with a cognitive bias rather than an ability to predict future events.

Chamberlain and Szöcs present the taxize R package, a set of functions that provides interfaces to several web tools and databases and simplifies the process of checking, updating, correcting, and manipulating taxon names for researchers working with ecological/biological data. A key theme repeated throughout is the need for reproducibility of science workflows, and taxize provides a means to achieve this within the R software ecosystem for taxonomic search.

The manuscript is well-written and nicely presented, with a good balance of descriptive text and discourse and practical illustration of package usage. A number of examples illustrate the scope of the package, something that is fully expanded upon in the two appendices, which are a welcome addition to the paper.

As to the package, I am not overly fond of long function names; the authors should consider dropping the data source abbreviations from the function names in a future update/revision of the package. Likewise there is some inconsistency in the naming conventions used. For example there is the ’tpl_search()’ function to search The Plant List, but the equivalent function to search uBio is ’ubio_namebank()’. Whilst this may reflect specific aspects of terminology in use at the respective data stores, it does not help the user gain familiarity with the package by having them remember inconsistent function names.

One advantage of taxize is that it draws together a rich selection of data stores to query. A further suggestion for a future update would be to add generic function names that apply to a database connection/information object. The latter would describe the resource the user wants to search and any other required information, such as the API key, etc., for example:
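(A sketch of what this might look like; the ubio() constructor and search() generic below are invented for illustration and are not part of the current taxize API.)

    ## sketch only: a constructor returning a classed "connection" object
    ubio <- function(key) structure(list(key = key), class = "ubio")

    ## a single generic (note: this masks base::search, fine for illustration)
    search <- function(src, ...) UseMethod("search")
    search.ubio <- function(src, query, ...) {
        ## a real method would call the uBio web service using src$key
        message("searching uBio for '", query, "'")
    }

    foo <- ubio(key = "my-api-key")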

The user function to search would then be ’search(foo, "Abies")’. Similar generically named functions would provide the primary user-interface, thus promoting a more consistent toolbox at the R level. This will become increasingly relevant as the scope of taxize increases through the addition of new data stores that the package can access.

In terms of presentation in the paper, I really don’t like the way the R code inputs merge with the R outputs. I know the author of knitr doesn’t like the demarcation of output being polluted by the R prompt, but I do find it difficult to parse the inputs/outputs you show because often there is no space between them, and users not familiar with R will have greater difficulties than I. Consider adding more conventional indications of R output, or physically separate input from output by breaking up the chunks of code to have whitespace between the grey-background chunks. Relatedly, in one location I noticed something amiss with the layout: in the first code block at the top of page 5, the printed output looks wrong. I would expect the attributes to print on their own line and the data in the attribute to also be on its own separate line.

Note also, the inconsistency in the naming of the output object columns. For example, in the two code chunks shown in column 1 of page 4, the first block has an object printed with column names ’matched_name’ and ’data_source_title’, whilst camelCase is used in the outputs shown in the second block. As the package is revised and developed, consider this and other aspects of providing a consistent presentation to the user.

I was a little confused about the example in the section Resolve Taxonomic Names on page 4. Should the taxon name be “Helianthus annuus” or “Helianthus annus” ? In the ‘mynames’ definition you include ‘Helianthus annuus’ in the character vector but the output shown suggests that the submitted name was ‘Helianthus annus’ (1 “u”) in rows with rownames 9 and 10 in the output shown.

Other than that there were the following minor observations:

  • Abstract: replace “easy” with “simple” in “...fashion that’s easy...” , and move the details about availability and the URI to the end of the sentence.
  • Page 2, Column 1, Paragraph 2: You have “In addition, there is no one authoritative taxonomic names source...” , which is a little clumsy to read. How about “In addition, there is no one authoritative source of taxonomic names... ” ?
  • Pg 2, C1, P2-3: The abbreviated data sources are presented first (in paragraph 2) and subsequently defined (in para 3). Restructure this so that the abbreviated forms are explained upon first usage.
  • Pg 2, C2, P2: Most R packages are “in development” so I would drop the qualifier and reword the opening sentence of the paragraph.
  • Pg 2, C2, P6: Changing “and more can easily be added” to “and more can be easily added” seems to flow better.
  • Pg 5, paragraph above Figure 1: You refer to converting the object to an **ape** *phylo* object and then repeat essentially the same information in the next sentence. Remove the repetition.
  • Pg 6, C1: The header may be better as “Which taxa are children of the taxon of interest” .
  • Pg 6: In the section “IUCN status”, the term “we” is used to refer to both the authors and the user. This is confusing. Reserve “we” for reference to the authors and use something else (“a user” perhaps) for the other instances. Check this throughout the entire manuscript.
  • Pg 6, C2: in the paragraph immediately below the ‘grep()’ for “RAG1”, two consecutive sentences begin with “However”.
  • Pg 7: The first sentence of “Aggregating data....” reads “In biology, one can asks questions...” . It should be “one asks” or “one can ask” .
  • Pg 7, Conclusions: The first sentence reads “information is increasingly sought out by biologists” . I would drop “out” as “sought” is sufficient on its own.
  • Appendices: Should the two figures in the Appendices have a different reference to differentiate them from Figure 1 in the main body of the paper? As it stands, the paper has two Figure 1s, one on page 5 and a second on page 12 in the Appendix.
  • On Appendix Figure 2: The individual points are a little large. Consider reducing the plotting character size. I appreciate the effect you were going for with the transparency indicating density of observation through overplotting, but the effect is weakened by the size of the individual points.
  • Should the phylogenetic trees have some scale to them? I presume the height of the stems is an indication of phylogenetic distance but the figure is hard to calibrate without an associated scale. A quick look at Paradis (2012) Analysis of Phylogenetics and Evolution with R would suggest however that a scale is not consistently applied to these trees. I am happy to be guided by the authors as they will be more familiar with the conventions than I.

Hydbring and Badalian-Very summarize in this review the current status of potential clinical applications based on miRNA biology. The article gives an interesting historical and scientific perspective on a field that has only recently boomed, focusing mostly on the two main products in the pipeline of several biotech companies (in Europe and the USA) that work with miRNA-based agents for disease diagnostics and therapeutics. Interestingly, not only are the specific agents being produced mentioned, but clever insights into the important cellular pathways regulated by key miRNAs are also briefly discussed.

Minor points to consider in subsequent versions:

  • Page 2; paragraph ‘Genomic location and transcription of microRNAs’: the concept of miRNA clusters and precursors could be a bit better explained.
  • Page 2; paragraph ‘Genomic location and transcription of microRNAs’: when discussing the paper by the laboratory of Richard Young (reference 16), I think it is important to mention that that particular study refers to stem cells.
  • Page 2; paragraph ‘Processing of microRNAs’: “Argonate” should be replaced by “Argonaute”.
  • Page 3; paragraph ‘MicroRNAs in disease diagnostics’: are miR-15a and 16-1 two different miRNAs? I suggest mentioning them as miR-15a and miR-16-1 and not using a slash sign (/) between them.
  • Page 4; paragraph ‘Circulating microRNAs’: I am a bit bothered by the description of multiple sclerosis (MS) only as an autoimmune disease. Without being an expert in the field, I believe that there are other hypotheses related to the etiology of MS.
  • Page 5; paragraph ‘Clinical microRNA diagnostics’: does ‘hsa’ in hsa-miR-205 mean something?
  • Page 5; paragraph ‘Clinical microRNA diagnostics’: the authors mention the company Asuragen, Austin, TX, USA, but they do not really say anything about its products. I suggest either removing the reference to that company or including its current pipeline efforts.
  • Page 6; paragraph ‘MicroRNAs in therapeutics’: in the first paragraph the authors suggest that miRNA-based therapeutics should be able to be applied with “minimal side-effects”. Since one miRNA can affect a whole gene program, I found this a bit counterintuitive; I was wondering if any data have been published to support that statement. Also, in the same paragraph, the authors compare miRNAs to protein inhibitors, which are described as more specific and/or selective. I think there are now good reasons to think that protein inhibitors are not always that specific and/or selective, and that such a property could actually be important for their evidenced therapeutic effects.
  • Page 6; paragraph ‘MicroRNAs in therapeutics’: I think the concept of the “antagomir” is an important one and could be better highlighted in the text.
  • Throughout the text (pages 3, 5, 6, and 7): I am a bit bothered by the hyphenation of the word “miRNA” or “miRNAs” at line breaks as “miR-NA” or “miR-NAs”. It is confusing considering the particular nomenclature used for miRNAs. This was probably done during the formatting and editing step of the paper.
  • I was wondering if the authors could expand a bit more on the general concept that seems to indicate that in disease (and in particular in cancer) the expression and levels of miRNAs are generally downregulated. Maybe some papers have been published about this phenomenon?

The authors describe their attempt to reproduce a study in which it was claimed that mild acid treatment was sufficient to reprogramme postnatal splenocytes from a mouse expressing GFP in the oct4 locus to pluripotent stem cells. The authors followed a protocol that has recently become available as a technical update of the original publication.

They report obtaining no pluripotent stem cells expressing oct4-driven GFP over the time period of several days described in the original publication. They describe observing some green fluorescence, which they attributed to autofluorescence rather than GFP since it coincided with PI-positive dead cells. They confirmed the absence of oct4 expression by RT-PCR and also found no evidence for Nanog or Sox2, which are also markers of pluripotent stem cells.

The paper appears to be an authentic attempt to reproduce the original study, although the study might have had additional value with more controls: “failure to reproduce” studies need to be particularly well controlled.

Examples that could have been valuable to include are:

  • For the claim of autofluorescence: the emission spectrum of the samples would likely have shown a broad spectrum not coincident with that of GFP.
  • The reprogramming efficiency of postnatal mouse splenocytes using more conventional methods in the hands of the authors would have been useful as a comparison; the same applies to the lung fibroblasts.
  • There are no positive control samples (conventional mESC or miPSC) in the qPCR experiments for pluripotency markers. This would have indicated the biological sensitivity of the assay.
  • Although perhaps a sensitive issue, it might have been helpful if the authors had been able to obtain samples of cells (or their mRNA) from the original authors for simultaneous analysis.

In summary, this is a useful study as it is citable and confirms previous blog reports, but it could have been improved by more controls.

The article is well written, addresses a real problem (the risk of development of valvulopathy after long-term cabergoline treatment in patients with macroprolactinoma), and provides evidence about the reversibility of valvular changes after timely discontinuation of DA treatment.

Title and abstract: The title is appropriate for the content of the article. The abstract is concise and accurately summarizes the essential information of the paper, although it would be better if the authors defined more precisely the anatomic specificity of the valvulopathy (mild mitral regurgitation).

Case report: The clinical case presentation is comprehensive and detailed but there are some minor points that should be clarified:

  • Please clarify the prolactin levels at diagnosis. In the Presentation section (line 3), “At presentation, prolactin level was found to be greater than 1000 ng/ml on diluted testing”, but in the section describing the laboratory evaluation at diagnosis (line 7), “Prolactin level was 55 ng/ml”. Was the difference due to the so-called “hook effect”?
  • Figure 1: In the text, the follow-up MR imaging is indicated to be “after 10 months of cabergoline treatment”. However, Figures 1C and 1D represent MR images taken 2 years post-treatment. Please clarify.
  • Figure 2: Echocardiograms 2A and 2B are described as baseline, but they actually correspond to the follow-up echocardiographic assessment in the 4th year of cabergoline treatment. Did the patient undergo a baseline (prior to dopamine agonist treatment) echocardiographic evaluation? If he did not, it should be mentioned as a study limitation in the Discussion section.
  • The mitral valve thickness was mentioned to be normal. Did the echographic examination visualize increased echogenicity (hyperechogenicity) of the mitral cusps?
  • How could you explain the decrease in LV ejection fraction (from 60-65% to 50-55%) after switching from cabergoline to bromocriptine treatment, and its subsequent increase to 62% after doubling the daily bromocriptine dose? Was LV function always estimated by the same method during follow-up?
  • Final paragraph: The authors conclude that early discontinuation and management with bromocriptine may be effective in reversing cardiac valvular dysfunction. Even so, regular echocardiographic follow-up should be considered in patients who are expected to be on long-term high-dose treatment with bromocriptine, given its partial 5-HT2B agonist activity.

This is an interesting topic: as the authors note, the way that communicators imagine their audiences will shape their output in significant ways. And I enjoyed what clearly has the potential to be a very rich data set. But I have some reservations about the adequacy of that data set, as it currently stands, given the claims the authors make; the relevance of the analytical framework(s) they draw upon; and the extent to which their analysis has offered significant new insights. By this I mean I would be keen to see the authors push their discussion further. My suggestions are essentially that they extend the data set they are working with, to ensure that their analysis is both rigorous and generalisable, and re-consider the analytical frame they use. I will make some more concrete comments below.

With regard to the data: my feeling is that 14 interviews is a rather slim data set, and that this is heightened by the fact that they were all carried out in a single location, and recruited via snowball sampling and personal contacts. What efforts have the authors made to ensure that they are not speaking to a single, small sub-community in the much wider category of science communicators? Is this, in effect, a case study of a particular group of science communicators in North Carolina? In addition, though the authors reference grounded theory as a method for analysis, I got little sense of the data reaching saturation. The reliance on one-off quotes, and on the stories and interests of particular individuals, left me unsure as to how representative interview extracts were. I would therefore recommend either that the data set is extended by carrying out more interviews, in a wider variety of locations (e.g. other sites in the US), or that it is redeveloped as a case study of a particular local professional community. (Which would open up some fascinating questions: how many of these people know each other? What spaces, online or offline, do they interact in, and do they share knowledge, for instance about their audiences? Are there certain touchstone events or publics they communally make reference to?)

As a more minor point with regard to the data set and what the authors want it to do, there were some inconsistencies in how the study was framed. On p.2 they variously describe the purpose as to “understand the experiences and perspectives of science communicators” and the goals as identifying “the basic interests and value orientations attributed to lay audiences by science communicators”. Later, on p.5, they note that the “research is inductive and seeks to build theory rather than generalizable claims”, while in the Discussion they talk again about having identified communicators’ “personal motivations” (p.12). There are a number of questions left hanging: is the purpose to understand communicator experiences, and if so, why focus on perceptions of audiences? Where is theory being built, and in what ways can this be mobilised in future work? The way that the study is framed and argued as a whole needs, I would suggest, to be clarified.

Relatedly, my sense is that some of this confusion is derived from what I find a rather busy analytical framework. I was not convinced of the value of combining inductive and deductive coding: if the ‘human value typology’ the authors use is ‘universal’, then what is added by open coding? Or, alternatively, why let their open coding, and their findings from this, be constrained by an additional, rather rigid, framework? The addition of the considerable literature on news values to the mix makes the discussion more confusing again. I would suggest that the authors either make much clearer the value of combining these different approaches, building new theory that outlines how they relate and can be jointly mobilised in practice, or fix on one. (My preference would be to focus on the findings from the open coding, but that reflects my own disciplinary biases.)

A more minor analytical point: the authors note that their interviewees come from slightly different professions, communicate through different formats, have different levels of experience, and have different educational backgrounds, but as far as I can see there is no comparative analysis based on this. Were there noticeable differences in the interview talk based on these categorisations? Or was the data set too small to identify any potential contrasts or themes? A note explaining this would be useful.

My final point concerns the potential that this data set has, particularly if it is extended and developed. I would like to encourage the authors to take their analysis further: at the moment, I was not particularly surprised by the ways in which the communicators referenced news values or imagined their audiences. But it seems to me that the analytical work is not yet complete. What does it mean that communicators imagine audience values and preferences in the way that they do? Who is included and excluded by these imaginations? One experiment might be to consider what ‘ideal type’ publics are created in the communicators’ talk. What are the characteristics of the audiences constructed in the interviews and, presumably, in the communicative products of interviewees? What would these people look like? There are also some tantalizing hints in the Discussion that are not really discussed in the Findings: for instance, the way in which communicators’ personal motivations may combine with their perceptions of audiences to shape their products. How does this happen? These are, of course, suggestions. But my wider point is that the authors need to show more clearly what is original and useful in their findings, and what it is, exactly, that will be important to other scholars in the field.

I hope my comments make sense; please do not hesitate to contact me if not.

This is an interesting article and piece of software. I think it contributes towards further alternatives to easily visualize high dimensionality data on the web. It’s simple and easy to embed into other web frameworks or applications.

a) About the software

  • CSV format. It was hard to guess the expected format. The authors need to add a syntax description of the CSV format to the help page.
  • Simple HTML example. It would be easier to test HeatmapViewer (HmV) if you added a simple downloadable example file with the minimum HTML-JavaScript required to set up an HmV instance (without all the CSV import code).
  • Color scale. HmV only implements a simple three-point linear color scale. For me this is the major weakness of HmV. It would be very convenient if, in the next HmV release, the user could pass as a parameter a function that manages the score-to-color conversion.

b) About the paper

  • http://www.broadinstitute.org/gsea (desktop)
  • http://jheatmap.github.io/jheatmap/ (website)
  • http://www.gitools.org/ (desktop)
  • http://blog.nextgenetics.net/demo/entry0044/ (website)
  • http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html (python)
  • http://matplotlib.org/api/pyplot_api.html (python)
  • Predicted protein mutability landscape: The authors say: “Without using a tool such as the HeatmapViewer, we could hardly obtain an overview of the protein mutability landscape”. This paragraph seems to suggest that you can explore the data with HmV. I think that HmV is a good tool to report your data, but not to explore it.
  • Conclusions: The authors say: “... provides a new, powerful way to generate and display matrix data in web presentations and in publications.” To use heat maps in web presentations and publications is nothing new. I think that HmV makes it easier and user-friendly, but it’s not new.

This article addresses the links between habitat condition and an endangered bird species in an important forest reserve (ASF) in eastern Kenya. It addresses an important topic, especially given ongoing anthropogenic pressures on this and similar types of forest reserves in eastern Kenya and throughout the tropics. Despite the rather small temporal and spatial extent of the study, it should make an important contribution to bird and forest conservation. There are a number of issues with the methods and analysis that need to be clarified/addressed, however; furthermore, some of the conclusions overreach the data collected, while other important results are given less emphasis than they warrant. Below are more specific comments by section:

The conclusion that human-driven tree removal is an important contributor to the degradation of ASF is reasonable given the data reported in the article. Elephant damage, while clearly likely a very big contributor to habitat modification in ASF, was not the focus of the study (the authors state clearly in the Discussion that elephant damage was not systematically quantified, and thus no data were analyzed), and thus it should only be mentioned in passing here, if at all.

More information about the life history ecology of A. sokokensis would provide welcome context here. A bit more detail about breeding sites as well as dispersal behavior etc. would be helpful – and especially why these and other aspects render the Pipit a good indicator species/proxy for habitat condition. This could be revisited in the Discussion as links are made between habitat conditions and occurrence of the bird (where you discuss the underlying mechanisms for why it thrives in some parts of ASF and not others, and why its abundance correlates strongly with some types of disturbance and not others). Again, you reference other studies that have explored other species in ASF and forest disturbance, but do not really explicitly state why the Pipit is a particularly important indicator of forest condition.

  • Bird Survey: As described, all sightings and calls were recorded and incorporated into distance analysis – but it is not clear here whether or not distances to both auditory and visual encounters were measured the same way (i.e., with the rangefinder). Please clarify.
  • Floor litter sampling: Not clear here whether litter cover was recorded as a continuous variable (percentage) or a categorical one. If categorical, please describe the percentage “categories” used.
  • Mean litter depth graph (Figure 2) and accompanying text report the means and sd but no post-hoc comparison test (e.g. Tukey HSD) – need to report the stats on which differences were/were not significant (a minimal sketch of such a test follows this list).
  • Figure 3 – you indicate litter depth was a better predictor of bird abundance than litter cover, but the r-squared is higher for litter cover. Need to clarify (and also indicate why you chose only to show depth values in Figure 3).
  • The linear equation can be put in Figure 3 caption (not necessary to include in text).
  • Figure 4 – stats aren’t presented here; also, the caption states that tree loss and leaf litter are inversely correlated – this might be taken to mean, given discussion (below) about pruning, that there could be a poaching threshold below which poaching may pay dividends to Pipits (and above which Pipits are negatively affected). This warrants further exploration/elaboration.
  • The pruning result is arguably the most important one here – this suggests an intriguing trade-off between poaching and bird conservation (in particular, the suggestion that pruning by poachers may bolster Pipit populations – or at the very least mitigate against other aspects of habitat degradation). Worth highlighting this more in Discussion.
  • Last sentence on p. 7 suggests causality (“That is because…”) – but your data only support correlation (one can imagine that there may have been other extrinsic or intrinsic drivers of population decline).
  • P. 8: discussion of classification of habitat types in ASF is certainly interesting, but could be made much more succinct in keeping with focus of this paper.
  • P. 9, top: first paragraph could be expanded – as noted before, tradeoff between poaching/pruning and Pipit abundance is worth exploring in more depth. Could your results be taken as a prescription for understory pruning as a conservation tool for the Sokoke Pipit or other threatened species? More detail here would be welcome (and also in Conclusion); in subsequent paragraph about Pipit foraging behavior and specific relationship to understory vegetation at varying heights could be incorporated into this discussion. Is there any info about optimal perch height for foraging or for flying through the understory? Linking to results of other studies in ASF, is there potential for positive correlations with optimal habitat conditions for the other important bird species in ASF in order to make more general conclusions about management?
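Regarding the post-hoc comparisons flagged in the litter-depth comment above, a minimal R sketch (the data frame and column names are hypothetical) might be:

    # 'litter' is a hypothetical data frame with litter_depth and habitat_type columns
    fit <- aov(litter_depth ~ habitat_type, data = litter)
    summary(fit)   # overall F-test
    TukeyHSD(fit)  # pairwise differences with adjusted p-values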

Bierbach and co-authors investigated the evolution of the audience effect in livebearing fishes by applying a comparative method. They specifically focused on the hypothesis that sperm competition risk (SCR), arising from male mate choice copying, and the avoidance of aggressive interactions play a key role in driving the evolution of audience-induced changes in male mate choice behavior. The authors found support for their hypothesis of an influence of SCR on the evolution of deceptive behavior, as their findings at the species level showed a positive correlation between mean sexual activity and the occurrence of deceptive behavior. Moreover, they found a positive correlation between mean aggressiveness and sexual activity, but they did not detect a relationship between aggressiveness and audience effects.

The manuscript is certainly well written and engaging, but I have some major concerns about the data analyses that prevent me from endorsing its acceptance at the present stage.

I see three main problems with the statistics that could have led to potentially wrong results and, thus, to completely misleading conclusions.

  • First of all, the Authors cannot run an ANCOVA in which there is a significant interaction between the factor and the covariate (Tab. 2a). Indeed, when the assumption of common slopes is violated (as in their case), all other significant terms are meaningless. They might want to consider alternative statistical procedures, e.g. the Johnson-Neyman method.
  • Second, the Authors cannot retain a non-significant interaction term in the model, as this may affect the estimates for the factors (Tab. 2d). They need to remove the species x treatment interaction (as they did for other non-significant terms; see top left of the same page 7).
  • The third problem I see regards all the GLMs in which species are compared. The Authors entered ‘species’ as a fixed factor when species is clearly a random factor. Entering species as a fixed factor has the effect of badly inflating the denominator degrees of freedom, making the authors’ conclusions far too permissive. They should, instead, use mixed LMs in which species is the random factor. They should also take care that the degrees of freedom are approximately equal to the number of species (not the number of trials). To do so, they can enter as a random factor the interaction between treatment and species (see the sketch after this list).
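A minimal sketch of the suggested specification, using the lme4 R package (the variable names are hypothetical):

    library(lme4)
    # species as a random factor, plus the treatment-by-species random interaction,
    # so that the effective degrees of freedom track the number of species
    m <- lmer(response ~ treatment + (1 | species) + (1 | species:treatment),
              data = dat)
    summary(m)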

Data need to be re-analyzed relying on the proper statistical procedures to confirm results and conclusions.

A more theoretical objection to the authors’ interpretation of the results (supposing that the results are confirmed by the new analyses) could emerge from the idea that male success in mating with the preferred female may reduce the probability of immediate female re-mating, and thus reduce the risk of sperm competition in the short term. As a consequence, it may not be beneficial to significantly increase the risk of losing a high-quality, already-inseminated female for a cost that will not be paid with certainty. The authors might want to consider this for discussion as well.

Lastly, I think that the scenario generated from comparative studies at the species level may be explained by phylogenetic factors other than sexual selection. Only the inclusion of phylogeny, which allows one to account for the shared history among species, in the data analyses can lead to unequivocal adaptive explanations for the observed patterns. I see the difficulty in doing this with few species, as is the case in the present study, but I would suggest that the Authors consider this future perspective as well. Moreover, a phylogenetic comparative study would be aided by the recent development of a well-resolved phylogenetic tree for the genus Poecilia (Meredith 2011).

Page 3: the authors should specify that part of the data on male aggressiveness (3 species from Table 1) also comes from previous studies, as they do for the data on deceptive male mating behavior.

Page 5: since the data on mate choice come from other studies, is it necessary to report a detailed description of the methods in this section? Perhaps the authors could refer to the already published methods and give only a brief additional description.

Page 6: how do the authors explain the complete absence of aggressive displays between the focal male and the audience male during the mate choice experiments? This seems curious considering that, in all the examined species, aggressive behaviors and dominance establishment are always observed during dyadic encounters.

In their response to my previous comments, the authors have clarified that only the data from the “Experimental phase” were used to calculate prediction accuracy. However, if I now understand the analysis procedure correctly, there are serious concerns with the approach adopted.

First, let me state what I now understand the analysis procedure to be:

  • For each subject the PD values across the 20 trials were converted to z-scores.
  • For each stimulus, the mean z-score was calculated.
  • The sign of the mean z-score for each stimulus was used to make predictions.
  • For each of the 20 trials, if the sign of the z-score on that trial was the same as the sign of the mean z-score for that stimulus, a hit (correct prediction) was assigned. In contrast, if the sign of the z-score on that trial was opposite to the sign of the mean z-score for that stimulus, a miss (incorrect prediction) was assigned.
  • For each stimulus the total hits and misses were calculated.
  • Average hits (correct predictions) for each stimulus were calculated across subjects.

If this is a correct description of the procedure, the problem is that the same data were used to determine the sign of the z-score that would be associated with a correct prediction and to determine the actual correct predictions. This will effectively guarantee a correct prediction rate above chance.

To check if this is true, I quickly generated random data and used the analysis procedure as laid out above (see MATLAB code below). Across 10,000 iterations of 100 random subjects, the average “prediction” accuracy was ~57% for each stimulus (standard deviation, 1.1%), remarkably similar to the values reported by the authors in their two studies. In this simulation, I assumed that all subjects contributed 20 trials, but in the actual data analyzed in the study, some subjects contributed fewer than 20 trials due to artifacts in the pupil measurements.
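
The reviewer’s original MATLAB code is not reproduced here; the following minimal Python sketch is an illustrative re-implementation of the procedure described above (100 subjects and 20 trials per stimulus are taken from the text; the iteration count is reduced for speed):

```python
import numpy as np

rng = np.random.default_rng(42)

n_iter, n_subj, n_trials = 1_000, 100, 20  # the review used 10,000 iterations

accuracies = np.empty(n_iter)
for i in range(n_iter):
    # Pure-noise "z-scored pupil dilation" values for one stimulus:
    # z-scoring random data within a long session leaves ~standard-normal values.
    z = rng.standard_normal((n_subj, n_trials))
    # Circular step: the sign of each subject's mean z defines the "correct"
    # prediction for that stimulus...
    pred_sign = np.sign(z.mean(axis=1, keepdims=True))
    # ...and the very same trials are then scored as hits against that sign.
    hit_rate = (np.sign(z) == pred_sign).mean(axis=1)
    accuracies[i] = hit_rate.mean()

print(f"mean 'prediction' accuracy on noise: {accuracies.mean():.3f} "
      f"(sd {accuracies.std():.3f})")  # ~0.57, despite a 0.50 chance level
```

Because the same trials both define the prediction sign and are scored against it, the simulated “accuracy” lands well above chance even though the data contain no signal at all.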

If the above description of the analysis procedure is correct, then I think the authors have provided no evidence to support pupil dilation prediction of random events, with the results reflecting circularity in the analysis procedure.

However, if the above description of the procedure is incorrect, the authors need to clarify exactly what the analysis procedure was, perhaps by providing their analysis scripts.

I think this paper is excellent and an important addition to the literature. I really like the conceptualization of a self-replicating cycle, as it illustrates that the “problem” starts with the neuron: due to one or more of a variety of insults, the neuron is negatively impacted and releases H1, which in turn activates microglia with overexpression of cytokines that may, when limited, foster repair, but that, when activation becomes chronic (as is demonstrated here with the potential of cyclic H1 release), facilitate neurotoxicity. I hope the authors intend to measure cytokine expression soon, especially IL-1 and TNF in both astrocytes and microglia, and S100B in astrocytes.

In more detail, Gilthorpe and colleagues provide novel experimental data that demonstrate a new role for a specific histone protein—the linker histone, H1—in neurodegeneration. This study, which was originally designed to identify axonal chemorepellents, actually revealed a previously unknown role for H1, as well as other novel and thought-provoking results. Fortuitously, as sometimes happens, the authors had a pleasant surprise: their results set some old dogmas on their respective ears and opened up new avenues of approach for studying the role of histones in the self-amplification of neurodegenerative cycles. In point of fact, they show that H1 is not just a nice little partner of nuclear DNA, as previously thought. H1 is released from ‘damaged’ (or leaky) neurons, kills adjacent healthy neurons, and promotes a proinflammatory profile in both microglia and astrocytes.

Interestingly, the authors’ conceptualization of a damaged neuron → H1 release → healthy neuron killing cycle does not take into account the H1-mediated proinflammatory glial response. This facet of the study opens for these investigators a new avenue they may wish to follow: the role of H1 in the stimulation of neuroinflammation with overexpression of cytokines. This is interesting, as neuronal injury has been shown to set in motion an acute phase response that activates glia, increasing their expression of cytokines (interleukin-1 and S100B), which, in turn, induce neurons to produce excess Alzheimer-related proteins such as βAPP and ApoE (favoring formation of mature Aβ/ApoE plaques), activated MAPK-p38 and hyperphosphorylated tau (favoring formation of neurofibrillary tangles), and α-synuclein (favoring formation of Lewy bodies). To date, the neuronal response shown to be responsible for stimulating glia is neuronal stress-related release of sAPP, but these H1 results from Gilthorpe and colleagues may contribute to or exacerbate the role of sAPP.


Teamflect Blog

50 Great Peer Review Examples: Sample Phrases + Scenarios

by Emre Ok, March 16, 2024 (updated August 8, 2024)


Peer review is a concept with multiple applications and definitions. Depending on your field, the definition of peer review can change greatly.

In the workplace, peer review or peer feedback is simply the input of a colleague on another peer’s performance, attitude, output, or any other performance metric.

In the academic world, peer review is the examination of an academic paper by a fellow scholar in the field.

Even in the American legal system, people are judged in front of a jury made up of their peers.

It is clear as day that peer feedback carries a lot of weight and power. The input from someone who shares the same experiences with you day in and day out is, on occasion, more meaningful than feedback from direct reports or managers.

So here are 50 peer review examples and sample peer feedback phrases that can help you practice peer-to-peer feedback more effectively!


Peer Feedback Examples: Offering Peers Constructive Criticism


One of the most difficult types of feedback to offer is constructive criticism. Whether you are a chief people officer or a junior employee, offering someone constructive criticism is a tightrope to walk.

When you are offering constructive criticism to a peer, that difficulty is doubled. People can usually take constructive criticism from above or below.

One place where criticism can really sting, however, is when it comes from someone at their own level. That is why the peer feedback phrases below can certainly be of help.

Below you will find 10 peer review example phrases that offer constructive feedback to peers:

  • “I really appreciate the effort you’ve put into this project, especially your attention to detail in the design phase. I wonder if considering alternative approaches to the user interface might enhance user engagement. Perhaps we could explore some user feedback or current trends in UI design to guide us.”
  • “Your presentation had some compelling points, particularly the data analysis section. However, I noticed a few instances where the connection between your arguments wasn’t entirely clear. For example, when transitioning from the market analysis to consumer trends, a clearer linkage could help the audience follow your thought process more effectively.”
  • “I see you’ve put a lot of work into developing this marketing strategy, and it shows promise. To address the issue with the target demographic, it might be beneficial to integrate more specific market research data. I can share a few resources on market analysis that could provide some valuable insights for this section.”
  • “You’ve done an excellent job balancing different aspects of the project, but I think there’s an opportunity to enhance the overall impact by integrating some feedback we received in the last review. For instance, incorporating more user testimonials could strengthen our case study section.”
  • “Your report is well-structured and informative. I would suggest revisiting the conclusions section to ensure that it aligns with the data presented earlier. Perhaps adding a summary of key findings before concluding would reinforce the report’s main takeaways.”
  • “In reviewing your work, I’m impressed by your analytical skills. I believe using ‘I’ statements could make your argument even stronger, as it would provide a personal perspective that could resonate more with the audience. For example, saying ‘I observed a notable trend…’ instead of ‘There is a notable trend…’ can add a personal touch.”
  • “Your project proposal is thought-provoking and innovative. To enhance it further, have you considered asking reflective questions at the end of each section? This could encourage the reader to engage more deeply with the material, fostering a more interactive and thought-provoking dialogue.”
  • “I can see the potential in your approach to solving this issue, and I believe with a bit more refinement, it could be very effective. Maybe a bit more focus on the scalability of the solution could highlight its long-term viability, which would be impressive to stakeholders.”
  • “I admire the dedication you’ve shown in tackling this challenging project. If you’re open to it, I would be happy to collaborate on some of the more complex aspects, especially the data analysis. Together, we might uncover some additional insights that could enhance our findings.”
  • “Your timely submission of the project draft is commendable. To make your work even more impactful, I suggest incorporating recent feedback we received on related projects. This could provide a fresh perspective and potentially uncover aspects we might not have considered.”

Sample Peer Review Phrases: Positive Reinforcement


Offering positive feedback to peers sits on the easier end of the feedback spectrum compared with constructive criticism.

Some questions still linger, however, such as: “How do I offer positive feedback professionally?”

To help answer that question and make your life easier when offering positive reinforcements to peers, here are 10 positive peer review examples! Feel free to take any of the peer feedback phrases below and use them in your workplace in the right context!

  • “Your ability to distill complex information into easy-to-understand visuals is exceptional. It greatly enhances the clarity of our reports.”
  • “Congratulations on surpassing this quarter’s sales targets. Your dedication and strategic approach are truly commendable.”
  • “The innovative solution you proposed for our workflow issue was a game-changer. It’s impressive how you think outside the box.”
  • “I really appreciate the effort and enthusiasm you bring to our team meetings. It sets a positive tone that encourages everyone.”
  • “Your continuous improvement in client engagement has not gone unnoticed. Your approach to understanding and addressing their needs is exemplary.”
  • “I’ve noticed significant growth in your project management skills over the past few months. Your ability to keep things on track and communicate effectively is making a big difference.”
  • “Thank you for your proactive approach in the recent project. Your foresight in addressing potential issues was key to our success.”
  • “Your positive attitude, even when faced with challenges, is inspiring. It helps the team maintain momentum and focus.”
  • “Your detailed feedback in the peer review process was incredibly helpful. It’s clear you put a lot of thought into providing meaningful insights.”
  • “The way you facilitated the last workshop was outstanding. Your ability to engage and inspire participants sparked some great ideas.”

Peer Review Examples: Feedback Phrases On Skill Development


Peer feedback on talent development is one of the most necessary forms of feedback in the workplace.

Feedback should always serve a purpose. Highlighting areas where a peer can improve their skills is a great use of peer review.

Peers have a unique perspective on each other’s daily work and aspirations, and this can quite easily be used to guide one another toward fresh avenues of skill development.

So here are 10 sample peer feedback phrases about developing new skill sets at work:

  • “Considering your interest in data analysis, I think you’d benefit greatly from the advanced Excel course we have access to. It could really enhance your data visualization skills.”
  • “I’ve noticed your enthusiasm for graphic design. Setting a goal to master a new design tool each quarter could significantly expand your creative toolkit.”
  • “Your potential in project management is evident. How about we pair you with a senior project manager for a mentorship? It could be a great way to refine your skills.”
  • “I came across an online course on persuasive communication that seems like a perfect fit for you. It could really elevate your presentation skills.”
  • “Your technical skills are a strong asset to the team. To take it to the next level, how about leading a workshop to share your knowledge? It could be a great way to develop your leadership skills.”
  • “I think you have a knack for writing. Why not take on the challenge of contributing to our monthly newsletter? It would be a great way to hone your writing skills.”
  • “Your progress in learning the new software has been impressive. Continuing to build on this momentum will make you a go-to expert in our team.”
  • “Given your interest in market research, I’d recommend diving into analytics. Understanding data trends could provide valuable insights for our strategy discussions.”
  • “You have a good eye for design. Participating in a collaborative project with our design team could offer a deeper understanding and hands-on experience.”
  • “Your ability to resolve customer issues is commendable. Enhancing your conflict resolution skills could make you even more effective in these situations.”

Peer Review Phrase Examples: Goals And Achievements


Equally important as peer review and feedback is peer recognition. Being recognized and appreciated by one’s peers is one of the best feelings someone can experience at work.

Peer feedback on one’s achievements often goes hand in hand with feedback about goals.

One of the best goal-setting techniques is to attach new goals to employee praise. That is why our next 10 peer review phrase examples are all about goals and achievements.

While these peer feedback examples may not directly align with your situation, customizing them according to context is simple enough!

  • “Your goal to increase client engagement has been impactful. Reviewing and aligning these goals quarterly could further enhance our outreach efforts.”
  • “Setting a goal to reduce project delivery times has been a great initiative. Breaking this down into smaller milestones could provide clearer pathways to success.”
  • “Your aim to improve team collaboration is commendable. Identifying specific collaboration tools and practices could make this goal even more attainable.”
  • “I’ve noticed your dedication to personal development. Establishing specific learning goals for each quarter could provide a structured path for your growth.”
  • “Celebrating your achievement in enhancing our customer satisfaction ratings is important. Let’s set new targets to maintain this positive trajectory.”
  • “Your goal to enhance our brand’s social media presence has yielded great results. Next, we could focus on increasing engagement rates to build deeper connections with our audience.”
  • “While striving to increase sales is crucial, ensuring we have measurable and realistic targets will help maintain team morale and focus.”
  • “Your efforts to improve internal communication are showing results. Setting specific objectives for team meetings and feedback sessions could further this progress.”
  • “Achieving certification in your field was a significant milestone. Now, setting a goal to apply this new knowledge in our projects could maximize its impact.”
  • “Your initiative to lead community engagement projects has been inspiring. Let’s set benchmarks to track the positive changes and plan our next steps in community involvement.”

Peer Evaluation Examples: Communication Skills


The last area of peer feedback we will be covering in this post today is peer review examples on communication skills.

Since the simple act of delivering peer review or peer feedback depends heavily on one’s communication skills, it goes without saying that this is a crucial area.

Below you will find 10 sample peer evaluation examples that you can apply to your workplace with ease.

Go over each peer review phrase and select the ones that best reflect the feedback you want to offer to your peers!

  • “Your ability to articulate complex ideas in simple terms has been a great asset. Continuously refining this skill can enhance our team’s understanding and collaboration.”
  • “The strategies you’ve implemented to improve team collaboration have been effective. Encouraging others to share their methods can foster a more collaborative environment.”
  • “Navigating the recent conflict with diplomacy and tact was impressive. Your approach could serve as a model for effective conflict resolution within the team.”
  • “Your active listening during meetings is commendable. It not only shows respect for colleagues but also ensures that all viewpoints are considered, enhancing our decision-making process.”
  • “Your adaptability in adjusting communication styles to different team members is key to our project’s success. This skill is crucial for maintaining effective collaboration across diverse teams.”
  • “The leadership you displayed in coordinating the team project was instrumental in its success. Your ability to align everyone’s efforts towards a common goal is a valuable skill.”
  • “Your presentation skills have significantly improved, effectively engaging and informing the team. Continued focus on this area can make your communication even more impactful.”
  • “Promoting inclusivity in your communication has positively influenced our team’s dynamics. This approach ensures that everyone feels valued and heard.”
  • “Your negotiation skills during the last project were key to reaching a consensus. Developing these skills further can enhance your effectiveness in future discussions.”
  • “The feedback culture you’re fostering is creating a more dynamic and responsive team environment. Encouraging continuous feedback can lead to ongoing improvements and innovation.”

Best Way To Offer Peer Feedback: Using Feedback Software!

If you are offering feedback to peers or conducting peer review, you need a performance management tool that lets you digitize, streamline, and structure those processes effectively.

To help you do just that, let us show you how you can use the best performance management software for Microsoft Teams, Teamflect, to deliver feedback to peers!

While this particular example approaches peer review in the form of direct feedback, Teamflect can also help implement peer reviews inside performance appraisals for a complete peer evaluation.

Step 1: Head over to Teamflect’s Feedback Module

While Teamflect users can exchange feedback without leaving Microsoft Teams chat with the help of customizable feedback templates, the feedback module itself serves as a hub for all the feedback given and received.

Once inside the feedback module, all you have to do is click the “New Feedback” button to start giving structured and effective feedback to your peers!


Step 2: Select a feedback template

Teamflect has an extensive library of customizable feedback templates. You can either directly pick a template that best fits the topic on which you would like to deliver feedback to your peer or create a custom feedback template specifically for peer evaluations.

Once you’ve chosen your template, you can start giving feedback right then and there!


Optional: 360-Degree Feedback

Why stop with peer review? Bring all stakeholders in the performance cycle into the feedback process with one of the most intuitive 360-degree feedback systems out there.


Request feedback about yourself or about someone else from everyone involved in their performance, including managers, direct reports, peers, and external parties.

Optional: Summarize feedback with AI

If you have more feedback on your hands than you can go through, summarize it with the help of Teamflect’s AI assistant!


What Are The Benefits of Implementing Peer Review Systems?

Peer reviews offer plenty of benefits to the individuals delivering the review, those receiving the evaluation, and the organization itself. Here are the key benefits of implementing peer feedback programs organization-wide.

1. Enhanced Learning and Understanding: Peer feedback promotes a deeper engagement with the material or project at hand. When individuals know they will be receiving and providing feedback, they have a brand new incentive to engage more thoroughly with the content.

2. Cultivation of Open Communication and Continuous Improvement: Establishing a norm where feedback is regularly exchanged fosters an environment of open communication. People become more accustomed to giving and receiving constructive criticism, reducing defensiveness, and fostering a culture where continuous improvement is the norm.

3. Multiple Perspectives Enhance Quality: Peer feedback introduces multiple viewpoints, which can significantly enhance the quality of work. Different perspectives can uncover blind spots, introduce new ideas, and challenge existing ones, leading to more refined and well-rounded outcomes.

4. Encouragement of Personal and Professional Development: Feedback from peers can play a crucial role in personal and professional growth. It can highlight areas of strength and identify opportunities for development, guiding individuals toward their full potential.


70 Peer Review Examples: Powerful Phrases You Can Use

by Surabhi, October 30, 2023

The blog is tailored for HR professionals looking to set up and improve peer review feedback within their organization. Share the article with your employees as a guide to help them understand how to craft insightful peer review feedback.

Peer review is a critical part of personal development, allowing colleagues to learn from each other and excel at their job. Crafting meaningful and impactful feedback for peers is an art. It’s not just about highlighting strengths and weaknesses; it’s about doing so in a way that motivates others. 

In this blog post, we will explore some of the most common phrases you can use to give peer feedback. Whether you want to comment on a job well done, offer constructive criticism, or provide balanced and fair feedback, these peer review examples will help you communicate your feedback with clarity and empathy.

Peer review feedback is the practice of colleagues and co-workers assessing and providing meaningful feedback on each other’s performance. It is a valuable instrument that helps organizations foster professional development, teamwork, and continuous improvement.

Peoplebox lets you conduct effective peer reviews within minutes. You can customize feedback, use tailored surveys, and seamlessly integrate it with your collaboration tools. It’s a game-changer for boosting development and collaboration in your team.


Why are Peer Reviews Important?

Here are some compelling reasons why peer review feedback is so vital:

Broader Perspective: Peer feedback offers a well-rounded view of an employee’s performance. Colleagues witness their day-to-day efforts and interactions, providing a more comprehensive evaluation compared to just a supervisor’s perspective.

Skill Enhancement: It serves as a catalyst for skill enhancement. Constructive feedback from peers highlights areas of improvement and offers opportunities for skill development.

Encourages Accountability: Peer review fosters a culture of accountability . Knowing that one’s work is subject to review by peers can motivate individuals to perform at their best consistently.

Team Cohesion: It strengthens team cohesion by promoting open and constructive communication. Teams that actively engage in peer feedback often develop a stronger sense of unity and shared purpose.

Fair and Unbiased Assessment: By involving colleagues, peer review helps ensure a fair and unbiased assessment. It mitigates the potential for supervisor bias and personal favoritism in performance evaluations .

Identifying Blind Spots: Peers can identify blind spots that supervisors may overlook. This means addressing issues at an early stage, preventing them from escalating.

Motivation and Recognition: Positive peer feedback can motivate employees and offer well-deserved recognition for their efforts. Acknowledgment from colleagues can be equally, if not more, rewarding than praise from higher-ups.

Later in this article, we will also look at best practices for giving peer feedback so you can leverage these benefits effectively.


30 Positive Peer Feedback Examples

Now that we’ve established the importance of peer review feedback, the next step is understanding how to use powerful phrases to make the most of this evaluation process. In this section, we’ll equip you with a variety of example phrases to use during peer reviews, helping you and your team approach the process with more confidence and effectiveness.


Peer Review Example on Work Quality

When it comes to recognizing excellence, quality work is often the first on the list. Here are some peer review examples highlighting the work quality:

  • “Kudos to Sarah for consistently delivering high-quality reports that never fail to impress both clients and colleagues. Her meticulous attention to detail and creative problem-solving truly set the bar high.”
  • “John’s attention to detail and unwavering commitment to excellence make his work a gold standard for the entire team. His consistently high-quality contributions ensure our projects shine.”
  • “Alexandra’s dedication to maintaining the project’s quality standards sets a commendable benchmark for the entire department. Her willingness to go the extra mile is a testament to her work ethic and quality focus.”
  • “Patrick’s dedication to producing error-free code is a testament to his commitment to work quality. His precise coding and knack for bug spotting make his work truly outstanding.”

Peer Review Examples on Competency and Job-Related Skills

Competency and job-related skills set the stage for excellence. Here’s how you can write a peer review highlighting this particular skill set:

  • “Michael’s extensive knowledge and problem-solving skills have been instrumental in overcoming some of our most challenging technical hurdles. His ability to analyze complex issues and find creative solutions is remarkable. Great job, Michael!”
  • “Emily’s ability to quickly grasp complex concepts and apply them to her work is truly commendable. Her knack for simplifying the intricate is a gift that benefits our entire team.”
  • “Daniel’s expertise in data analysis has significantly improved the efficiency of our decision-making processes. His ability to turn data into actionable insights is an invaluable asset to the team.”
  • “Sophie’s proficiency in graphic design has consistently elevated the visual appeal of our projects. Her creative skills and artistic touch add a unique, compelling dimension to our work.”

Peer Review Sample on Leadership Skills

Leadership ability extends beyond a mere title; it’s a living embodiment of vision and guidance, as seen through these exceptional examples:

  • “Under Lisa’s leadership, our team’s morale and productivity have soared, a testament to her exceptional leadership skills and hard work. Her ability to inspire, guide, and unite the team in the right direction is truly outstanding.”
  • “James’s ability to inspire and lead by example makes him a role model for anyone aspiring to be a great leader. His approachability and strong sense of ethics create an ideal leadership model.”
  • “Rebecca’s effective delegation and strategic vision have been the driving force behind our project’s success. Her ability to set clear objectives, give valuable feedback, and empower team members is truly commendable.”
  • “Victoria’s leadership style fosters an environment of trust and innovation, enabling our team to flourish. Her encouragement of creativity and openness to diverse ideas is truly inspiring.”

Feedback on Teamwork and Collaboration Skills

Teamwork is where individual brilliance becomes collective success. Here are some peer review examples highlighting teamwork:

  • “Mark’s ability to foster a collaborative environment is infectious; his team-building skills unite us all. His open-mindedness and willingness to listen to new ideas create a harmonious workspace.”
  • “Charles’s commitment to teamwork has a ripple effect on the entire department, promoting cooperation and synergy. His ability to bring out the best in the rest of the team is truly remarkable.”
  • “David’s talent for bringing diverse perspectives together enhances the creativity and effectiveness of our group projects. His ability to unite us under a common goal fosters a sense of belonging.”

Peer Review Examples on Professionalism and Work Ethics

Professionalism and ethical conduct define a thriving work culture. Here’s how you can write a peer review highlighting work ethics in performance reviews:

  • “Rachel’s unwavering commitment to deadlines and ethical work practices is a model for us all. Her dedication to punctuality and ethics contributes to a culture of accountability.”
  • “Timothy consistently exhibits the highest level of professionalism, ensuring our clients receive impeccable service. His courtesy and reliability set a standard of excellence.”
  • “Daniel’s punctuality and commitment to deadlines set a standard of professionalism we should all aspire to. His sense of responsibility is an example to us all.”
  • “Olivia’s unwavering dedication to ethical business practices makes her a trustworthy and reliable colleague. Her ethical principles create an atmosphere of trust and respect within our team, leading to a more positive work environment.”

Feedback on Mentoring and Support

Mentoring and support pave the way for future success. Check out these peer review examples focusing on mentoring:

  • “Ben’s dedication to mentoring new team members is commendable; his guidance is invaluable to our junior colleagues. His approachability and patience create an environment where learning flourishes.”
  • “David’s mentorship has been pivotal in nurturing the talents of several team members beyond his direct report, fostering a culture of continuous improvement. His ability to transfer knowledge is truly outstanding.”
  • “Laura’s patient mentorship and continuous support for her colleagues have helped elevate our team’s performance. Her constructive feedback and guidance have made a remarkable difference.”
  • “William’s dedication to knowledge sharing and mentoring is a driving force behind our team’s constant learning and growth. His commitment to others’ development is inspiring.”

Peer Review Examples on Communication Skills

Effective communication is the linchpin of harmonious collaboration. Here are some peer review examples to highlight your peer’s communication skills:

  • “Grace’s exceptional communication skills ensure clarity and cohesion in our team’s objectives. Her ability to articulate complex ideas in a straightforward manner is invaluable.”
  • “Oliver’s ability to convey complex ideas with simplicity greatly enhances our project’s success. His effective communication style fosters a productive exchange of ideas.”
  • “Aiden’s proficiency in cross-team communication ensures that our projects move forward efficiently. His ability to bridge gaps in understanding is truly commendable.”

Peer Review Examples on Time Management and Productivity

Time management and productivity are the engines that drive accomplishments. Here are some peer review examples highlighting time management:

  • “Ella’s time management is nothing short of exemplary; it sets a benchmark for us all. Her efficient task organization keeps our projects on track.”
  • “Robert’s ability to meet deadlines and manage time efficiently significantly contributes to our team’s overall productivity. His time management skills are truly remarkable.”
  • “Sophie’s time management skills are a cornerstone of her impressive productivity, inspiring us all to be more efficient. Her ability to juggle multiple tasks is impressive.”
  • “Liam’s time management skills are key to his consistently high productivity levels. His ability to organize work efficiently is an example for all of us to follow.”

Though these positive feedback examples are valuable, it’s important to recognize that there will be instances when your team needs to convey constructive or negative feedback. In the upcoming section, we’ll present 40 examples of constructive peer review feedback. Keep reading!

40 Constructive Peer Review Feedback Examples

Receiving peer review feedback, whether positive or negative, presents a valuable chance for personal and professional development. Let’s explore some examples your team can employ to provide constructive feedback , even in situations where criticism is necessary, with a focus on maintaining a supportive and growth-oriented atmosphere.

Constructive Peer Review Feedback on Work Quality

  • “I appreciate John’s meticulous attention to detail, which enhances our projects. However, I noticed a few minor typos in his recent report. To maintain an impeccable standard, I’d suggest dedicating more effort to proofreading.”
  • “Sarah’s research is comprehensive, and her insights are invaluable. Nevertheless, for the sake of clarity and brevity, I recommend distilling her conclusions to their most essential points.”
  • “Michael’s coding skills are robust, but for the sake of team collaboration, I’d suggest that he provides more detailed comments within the code to enhance readability and consistency.”
  • “Emma’s creative design concepts are inspiring, yet consistency in her chosen color schemes across projects could further bolster brand recognition.”
  • “David’s analytical skills are thorough and robust, but it might be beneficial to present data in a more reader-friendly format to enhance overall comprehension.”
  • “I’ve observed Megan’s solid technical skills, which are highly proficient. To further her growth, I recommend taking on more challenging projects to expand her expertise.”
  • “Robert’s industry knowledge is extensive and impressive. To become a more well-rounded professional, I’d suggest he focuses on honing his client relationship and communication skills.”
  • “Alice’s project management abilities are impressive, and she’s demonstrated an aptitude for handling complexity. I’d recommend she refines her risk assessment skills to excel further in mitigating potential issues.”
  • “Daniel’s presentation skills are excellent, and his reports are consistently informative. Nevertheless, there is room for improvement in terms of interpreting data and distilling it into actionable insights.”
  • “Laura’s sales techniques are effective, and she consistently meets her targets. I encourage her to invest time in honing her negotiation skills for even greater success in securing deals and partnerships.”

Peer Review Examples on Leadership Skills

  • “I’ve noticed James’s commendable decision-making skills. However, to foster a more inclusive and collaborative environment, I’d suggest he be more open to input from team members during the decision-making process.”
  • “Sophia’s delegation is efficient, and her team trusts her leadership. To further inspire the team, I’d suggest she share credit more generously and acknowledge the collective effort.”
  • “Nathan’s vision and strategic thinking are clear and commendable. Enhancing his conflict resolution skills is suggested to promote a harmonious work environment and maintain team focus.”
  • “Olivia’s accountability is much appreciated. I’d encourage her to strengthen her mentoring approach to develop the team’s potential even further and secure a strong professional legacy.”
  • “Ethan’s adaptability is an asset that brings agility to the team. Cultivating a more motivational leadership style is recommended to uplift team morale and foster a dynamic work environment.”

Peer Review Examples on Teamwork and Collaboration

  • “Ava’s collaboration is essential to the team’s success. She should consider engaging more actively in group discussions to contribute her valuable insights.”
  • “Liam’s teamwork is exemplary, but he could motivate peers further by sharing credit more openly and recognizing their contributions.”
  • “Chloe’s flexibility in teamwork is invaluable. To become an even more effective team player, she might invest in honing her active listening skills.”
  • “William’s contributions to group projects are consistently valuable. To maximize his impact, I suggest participating in inter-departmental collaborations and fostering cross-functional teamwork.”
  • “Zoe’s conflict resolution abilities create a harmonious work environment. Expanding her ability to mediate conflicts and find mutually beneficial solutions is advised to enhance team cohesion.”
  • “Noah’s punctuality is an asset to the team. To maintain professionalism consistently, he should adhere to deadlines with unwavering dedication, setting a model example for peers.”
  • “Grace’s integrity and ethical standards are admirable. To enhance professionalism further, I’d recommend that she maintain a higher level of discretion in discussing sensitive matters.”
  • “Logan’s work ethics are strong, and his commitment is evident. Striving for better communication with colleagues regarding project updates is suggested, ensuring everyone remains well-informed.”
  • “Sophie’s reliability is appreciated. Maintaining a high level of attention to confidentiality when handling sensitive information would enhance her professionalism.”
  • “Jackson’s organizational skills are top-notch. Upholding professionalism by maintaining a tidy and organized workspace is recommended.”

Peer Review Feedback Examples on Mentoring and Support

  • “Aiden provides invaluable mentoring to junior team members. He should consider investing even more time in offering guidance and support to help them navigate their professional journeys effectively.”
  • “Harper’s commendable support to peers is noteworthy. She should develop coaching skills to maximize their growth, ensuring their development matches their potential.”
  • “Samuel’s patience in teaching is a valuable asset. He should tailor support to individual learning styles to enhance their understanding and retention of key concepts.”
  • “Ella’s mentorship plays a pivotal role in the growth of colleagues. She should expand her role in offering guidance for long-term career development, helping them set and achieve their professional goals.”
  • “Benjamin’s exceptional helpfulness fosters a more supportive atmosphere where everyone can thrive. He should encourage team members to seek assistance when needed.”
  • “Mia’s communication skills are clear and effective. To cater to different audience types, she should use more varied communication channels to convey her message more comprehensively.”
  • “Lucas’s ability to articulate ideas is commendable, and his verbal communication is strong. He should polish non-verbal communication to ensure that his body language aligns with his spoken message.”
  • “Evelyn’s appreciated active listening skills create strong relationships with colleagues. She should foster stronger negotiation skills for client interactions, ensuring both parties are satisfied with the outcomes.”
  • “Jack’s presentation skills are excellent. He should elevate written communication to match the quality of verbal presentations, offering more comprehensive and well-structured documentation.”
  • “Avery’s clarity in explaining complex concepts is valued by colleagues. She should develop persuasive communication skills to enhance her ability to secure project proposals and buy-in from stakeholders.”

Feedback on Time Management and Productivity

  • “Isabella’s efficient time management skills contribute to the team’s success. She should explore time-tracking tools to further optimize her workflow and maximize her efficiency.”
  • “Henry’s remarkable productivity sets a high standard. He should maintain a balanced approach to tasks to prevent burnout and ensure sustainable long-term performance.”
  • “Luna’s impressive task prioritization and strategic time allocation should be fine-tuned with goal-setting techniques to ensure consistent productivity aligned with objectives.”
  • “Leo’s great deadline adherence is commendable. He should incorporate short breaks into the schedule to enhance productivity and focus, allowing for the consistent meeting of high standards.”
  • “Mila’s multitasking abilities are a valuable skill. She should strive to implement regular time-blocking sessions into the daily routine to further enhance time management capabilities.”

Do’s and Don’ts of Peer Review Feedback

Peer review feedback can be extremely helpful for intellectual growth and professional development. Engaging in this process with thoughtfulness and precision can have a profound impact on both the reviewer and the individual seeking feedback.

However, there are certain do’s and don’ts that must be observed to ensure that the feedback is not only constructive but also conducive to a positive and productive learning environment.


The Do’s of Peer Review Feedback:

Empathize and Relate: Put yourself in the shoes of the person receiving the feedback. Recognize the effort and intention behind their work, and frame your comments with sensitivity.

Ground Feedback in Data: Base your feedback on concrete evidence and specific examples from the work being reviewed. This not only adds credibility to your comments but also helps the recipient understand precisely where improvements are needed.

Clear and Concise Writing: Express your thoughts in a clear and straightforward manner. Avoid jargon or ambiguous language that may lead to misinterpretation.

Offer Constructive Criticism: Focus on providing feedback that can guide improvement. Instead of simply pointing out flaws, suggest potential solutions or alternatives.

Highlight Strengths: Acknowledge and commend the strengths in the work. Recognizing what’s done well can motivate the individual to build on their existing skills.

The Don’ts of Peer Review Feedback:

Avoid Ambiguity: Vague or overly general comments such as “It’s not good” do not provide actionable guidance. Be specific in your observations.

Refrain from Personal Attacks: Avoid making the feedback personal or overly critical. Concentrate on the work and its improvement, not on the individual.

Steer Clear of Subjective Opinions: Base your feedback on objective criteria and avoid opinions that may not be universally applicable.

Resist Overloading with Suggestions: While offering suggestions for improvement is important, overwhelming the recipient with a laundry list of changes can be counterproductive.

Don’t Skip Follow-Up: Once you’ve provided feedback, don’t leave the process incomplete. Follow up and engage in a constructive dialogue to ensure that the feedback is understood and applied effectively.

Remember that the art of giving peer review feedback is a valuable skill; when done right, it can foster professional growth, encourage collaboration, and inspire continuous improvement. This is where performance management software like Peoplebox comes into play.

Start Collecting Peer Review Feedback On Peoplebox 

In a world where the continuous improvement of your workforce is paramount, harnessing the potential of peer review feedback is a game-changer. Peoplebox offers a suite of powerful features that revolutionize performance management, simplifying the alignment of people with business goals and driving success. Want to experience it firsthand? Take a quick tour of our product.


Through Peoplebox, you can effortlessly establish peer reviews, customizing key aspects such as:

  • Allowing the reviewee to select their peers
  • Seeking managerial approval for chosen peers to mitigate bias
  • Determining the number of peers eligible for review, and more.


And the best part? Peoplebox lets you do all this from right within Slack.


Peer Review Feedback Template That You Can Use Right Away

Still on the fence about using software for performance reviews? Here’s a quick ready-to-use peer review template you can use to kickstart the peer review process.


Download the Free Peer Review Feedback Form here.

If you ever reconsider and are looking for a more streamlined approach to handle 360 feedback, give Peoplebox a shot!

Frequently Asked Questions

Why is peer review feedback important?

Peer review feedback provides a well-rounded view of employee performance, fosters skill enhancement, encourages accountability, strengthens team cohesion, ensures fair assessment, and identifies blind spots early on.

How does peer review feedback benefit employees?

Peer review feedback offers employees valuable insights for growth, helps them identify areas for improvement, provides recognition for their efforts, and fosters a culture of collaboration and continuous learning.

What are some best practices for giving constructive peer feedback?

Best practices include grounding feedback in specific examples, offering both praise and areas for improvement, focusing on actionable suggestions, maintaining professionalism, and ensuring feedback is clear and respectful.

What role does HR software like Peoplebox play in peer review feedback?

HR software like Peoplebox streamlines the peer review process by allowing customizable feedback, integration with collaboration tools like Slack, easy selection of reviewers, and providing templates and tools for effective feedback.

How can HR professionals promote a culture of feedback and openness in their organization?

HR professionals can promote a feedback culture by leading by example, providing training on giving and receiving feedback, recognizing and rewarding constructive feedback, creating safe spaces for communication, and fostering a culture of continuous improvement.

What is peer review?

A peer review is a collaborative evaluation process where colleagues assess each other’s work. It’s a cornerstone of professional development, enhancing accountability and shared learning. By providing constructive feedback , peers contribute to overall team improvement. Referencing peer review examples can guide effective implementation within your organization.

What should I write in a peer review?

In a peer review, you should focus on providing constructive, balanced feedback. Highlight strengths such as effective communication or leadership, and offer specific suggestions for improvement. The goal is to help peers grow professionally by addressing areas like skill development or performance gaps. Use clear and supportive language to ensure your feedback is actionable. By incorporating peer review examples, you can provide valuable insights to enhance performance.

What are some examples of peer review phrases?

Statements like ‘Your ability to articulate complex ideas is impressive’ or ‘I recommend focusing on time management to improve project delivery’ are examples of peer review phrases. These phrases help peers identify specific strengths and areas for growth. Customizing feedback to fit the context ensures it’s relevant and actionable. Exploring different peer review examples can inspire you to craft impactful feedback that drives growth.

Why is it called peer review?

It’s called peer review because the evaluation is conducted by colleagues or peers who share similar expertise or roles. This ensures that the feedback is relevant and credible, as it comes from individuals who understand the challenges and standards of the work being assessed. Analyzing peer review examples can reveal best practices for implementing this process effectively.

What are the types of peer reviews?

Peer reviews can be formal or informal. Formal reviews are typically structured, documented, and tied to performance evaluation. Informal reviews offer more frequent, real-time feedback. Both types are valuable for development. Exploring peer review examples can help you determine the best approach for your team or organization.


Understanding Peer Review in Science


Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. This article looks at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each other’s work in educational settings, in professional settings, and in the publishing world. The goal of peer review is to improve quality, define and maintain standards, and help people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

Each method has advantages and disadvantages. Anonymous reviews reduce bias but limit collaboration, while open reviews are more transparent but may increase bias.

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise: Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity: Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality: The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness: Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission : Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment : The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review : If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback : Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission : Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision : The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication : If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn't without its drawbacks.

Pros:

  • Quality assurance: Peer review helps ensure the quality and reliability of published research.
  • Error detection: The process identifies errors and flaws that the authors may have overlooked.
  • Credibility: The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development: Reviewers can learn from the work of others and enhance their own knowledge and understanding.

Cons:

  • Time-consuming: The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias: Reviewers' personal biases can affect their evaluation of a manuscript.
  • Inconsistency: Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness: Peer review does not always detect significant errors or misconduct.
  • Poaching: Some reviewers take an idea from a submission and publish it before the authors of the original research.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to provide peer review of a manuscript. Here are typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript. Don’t be mean.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.

Peer Review

Peer Review in Three Minutes from NC State University Libraries on Vimeo

A peer-reviewed or peer-refereed journal or article is one in which a group of widely acknowledged experts in a field reviews the content for scholarly soundness and academic value.

Scholarly vs. Popular Articles


Example of a Scholarly Article


Note the author's credentials, abstract, and citations in the text. These features indicate that the article is scholarly.

Scholarly articles often have abstracts, footnotes or citations, and list the author's credentials.

Learn more about the difference between scholarly and popular resources on our Evaluating Resources guide.

Example of a Popular Article


Popular articles, like this one from Scientific American, may come from a reputable publication but are not peer-reviewed. The author may or may not be an academic, but the article is written for a popular audience. There are no footnotes or citations.



Peer Review Examples (+14 Phrases to Use)


A peer review is a type of evaluative feedback. It focuses on the strengths and areas of improvement for yourself, your team members, and even the organization as a whole. This form of evaluation can benefit all parties involved, helping to build self-awareness and grow in new ways that we might not have realized before. Of course, the best examples of peer review feedback are those that are well-received and effective in the workplace, which we will go over in the next section.

As mentioned, peer review feedback is a great way to identify your strengths and weaknesses and those of others. The benefits are two-fold: it helps you grow in new ways that may have been difficult for you before, while also making sure everyone involved feels confident about their abilities moving forward.

For instance, organizations with robust feedback cultures can close gaps that hinder their performance and seize business opportunities whenever they present themselves. This dual benefit gives them a competitive advantage and a more positive workplace. Leading companies that enjoy these advantages include Cargill, Netflix, and Google.

Peer review feedback can also be a great tool for conducting your annual performance reviews. It gives managers visibility and insights that might not be possible otherwise. The feedback can help you better understand how your employees view their performance, as well as what they think the company's expectations are of them. This is especially helpful for those who work remotely: it allows managers to see things that might otherwise be missed.

For example, if an employee works from home often or telecommutes frequently, it can be more difficult for managers to get a sense of how they are doing. This is where peer review feedback comes in—if their peers notice issues that need attention, this provides the manager with valuable insights that might otherwise have gone unnoticed. Everyone must be on the same page about what exactly it is they want from these sessions and how their employees will benefit from receiving them.

A Gallup poll revealed that organizations that give their employees regular feedback have turnover rates almost 15% lower than those whose employees receive none. This statistic indicates that regular reviews, including peer reviews, are important. However, so is giving the right kind of peer review feedback.

As such, when you have a peer review session, think about some good examples of the type of feedback that might be beneficial for both parties. These would be the relevant peer review examples you want to use for your organization.

One example would be to discuss ways in which the employee's performance has been exemplary when you give them their peer review feedback forms. This conversation gives the person being reviewed an idea of how well they're doing and where their strengths lie, in the form of positive feedback.

On the other hand, it also helps them know there is room for improvement where they may not have realized it before in the form of negative feedback.

Another example would be to discuss how you might improve how the person being reviewed conducts themselves on a day-to-day basis. Again, this action can help someone realize how their performance can be improved and provide them with suggestions that they might not have thought of before.

For example, you may notice that a team member tends to talk more than is necessary during meetings or wastes time by doing unnecessary tasks when other pressing matters are at hand. This type of negative feedback would allow the person receiving it to know what areas they need to work on and how they can improve themselves.

As mentioned previously, peer reviews are a great way of giving an employee concrete suggestions for the areas in which they need improvement, as well as those where their performance is exemplary.

To ensure that your team feels valued and confident moving forward, you should give them the best examples of peer review feedback possible. The following are five examples of what constitutes good peer review feedback:

1. Use anonymity. Keeping reviews anonymous makes workers feel comfortable with the content and reassures them that no bias has entered the review process.

2. Schedule them frequently enough. A good employee experience with peer reviews involves scheduling them often enough that no one has an unwelcome surprise come annual or biannual performance appraisal time.

3. Keep them objective and constructive. Your goal is to help improve the peers you're reviewing so they can continue to do an even better job than before.

4. Have key points to work on. Ask questions such as: What is the goal? What does the company want people to get out of each session?

5. Choose the right people to give the review. Personnel familiar with the employee's work should be the ones evaluating the employee and providing peer feedback.

You can use the following positive performance appraisal phrases to recognize and coach your employees for anything from regularly scheduled peer reviews to biannual and annual appraisals:

  • "I can always count on you to..." ‍
  • "You are a dependable employee who meets all deadlines." ‍
  • "Your customer service is excellent. You make everyone feel welcome and comfortable, no matter how busy things get." ‍
  • "The accounting work that you do for our team helps us out in the long run." ‍
  • "I appreciate your helpfulness when it comes to training new employees. You always seem willing to take some time out of your day, even though you're busy with other tasks, to show them how we do things here at [COMPANY]." ‍
  • "It's so nice to see you staying on top of your work. You never miss a deadline, and that is very important here at [COMPANY]." ‍
  • "I can always count on you when I need something done immediately." ‍
  • "Your communication skills are exceptional, and I appreciate the way you always get your point across clearly." ‍
  • "You are always willing to lend an ear if someone needs help or has a question about something. You're great at being the go-to person when people need advice." ‍
  • "I appreciate your ability to anticipate our customers' needs."

Negative performance review phrases can be helpful if handled the right way and often contribute to improving the employee's performance. 

Here are some examples of effective negative performance review phrases you can use:

  • "You seem to struggle with following the company's processes. I would like to see you get better at staying on top of what needs to be done and getting it done on time." ‍
  • "I'm concerned that your work quality has slipped lately. You're still meeting deadlines, but some of your work seems rushed or incomplete. I want to make sure that you're giving everything the attention it deserves." ‍
  • "I noticed that you've been getting a lot of customer complaints lately. Is there anything going on? Maybe we can work together and come up with some solutions for how things could be better handled in the future?" ‍
  • "You seem overwhelmed right now, and it's affecting your work quality. I want to help you figure out how we can better distribute the workload so that you're not feeling like this anymore."

When giving peer review feedback to remote teams, it is essential for everyone involved that the employee being reviewed feels comfortable and respected. And whether a peer or direct report gives the remote employee a review, the most effective way to ensure this happens is by providing open communication and constructive feedback throughout the process.

However, when you work remotely, it can be difficult to get the opportunity for peer feedback. Still, there are ways of ensuring that the process remains beneficial and productive.

The following are some examples of how to go about giving effective peer review feedback when working virtually:

  • Take advantage of webcams or video conferencing so that you can see the employee's facial expressions and monitor body language during a performance review, remote or otherwise.
  • Just like with any in-person performance review, it's critical to schedule a regular time for sessions so they don't catch anyone by surprise.
  • Make the overall goal clear at both your end and theirs; this helps everyone prepare ahead of time and ensures there are no unforeseen surprises.
  • Keep the feedback objective and the criticism constructive, as this is what will allow them to improve their performance in a way they can act on immediately. Include these key points in your company's peer review templates as well.
  • Be prepared for these sessions by having a list of key points you want to cover with your peer reviewer; this helps guide the conversation while ensuring no important points are overlooked.

When employees enjoy their work, understand their goals, and know the values and competencies of the job, job satisfaction increases, along with their performance. In addition, the link between productivity and effective feedback is well established. For instance, 69% of workers said they would work harder if their efforts were recognized, according to LinkedIn.

Continuous and regularly scheduled performance appraisal feedback helps with employee development, clarifies expectations, aligns goals, and motivates staff (check out our article Peer Review Feedback to find out why peer feedback is so essential), establishing a positive workplace. Lastly, a workplace that dedicates itself to motivating people to be better will improve employee engagement and the levels of performance.

If you haven't implemented a culture for using feedback yet, there are several effective ways to go about it. One good way to kick things off is to first identify teams or some other similar organizational unit and have them experiment with the social feedback system.

While peer reviews should be given every three to four weeks, or even at the end of a project sprint, the cycles for building a strong feedback culture can be quarterly or monthly, depending on your preferences and operations.

After the three cycles are finalized, you typically have built up enough feedback information to start the organization on its path to a strong feedback culture.

Knowing these peer review feedback examples and tips on giving them to remote teams will help you become more comfortable with this type of evaluative discussion. It can be difficult at first, but remember that the benefits are worth it! And remember: when giving peer review feedback, make sure you keep each session objective. This helps ensure they're constructive and that both parties walk away feeling as though they've learned a lot from them.

Want to keep that morale sky-high during Feedback Friday and the peer review process? If so, be sure to check out Matter, with features that allow you to give public Kudos all inside Slack.

Recognition & Rewards all inside Slack or Teams

Awwards cat

Employee Recognition & Rewards all in Slack or Teams

peer review example research


What Is Peer Review? | Types & Examples

Published on 6 May 2022 by Tegan George. Revised on 2 September 2022.

Peer review, sometimes referred to as refereeing , is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other's identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers review something you've written against a set of criteria or benchmarks from an instructor, then offer constructive feedback, compliments, or guidance to help you improve your draft.


Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymised) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymised comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymised) review , both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymised) review – where the identities of the author, reviewers, and editors are all anonymised – does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimises potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymise everyone involved in the process.

Collaborative peer review

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimise back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Open peer review

Lastly, in open review, all parties know each other's identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • Then, the editor either rejects the manuscript and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.


In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarise the argument in your own words

Summarising the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organised. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticised, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the ‘compliment sandwich’, where you ‘sandwich’ your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they'd slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used in order to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference (p > .05) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the average number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
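
For readers who want to see the mechanics behind these comparisons, here is a minimal sketch in Python. It assumes simulated data (100 teens per group, drawn from normal distributions with the means and standard deviations reported above) and uses scipy's independent-samples t test; on simulated data of this size, the computed p-values will not necessarily match the ones reported in the example, so treat this purely as an illustration of the procedure.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Simulate nightly sleep hours for the three groups, using the
    # means and standard deviations reported in the example above.
    group1 = rng.normal(loc=7.8, scale=0.6, size=100)  # no phone before bed
    group2 = rng.normal(loc=7.0, scale=0.8, size=100)  # 1 hour of phone use
    group3 = rng.normal(loc=6.1, scale=1.5, size=100)  # 3 hours of phone use

    # Two independent t tests, mirroring the comparisons in the text:
    # Group 1 vs Group 2, and Group 1 vs Group 3.
    t12, p12 = stats.ttest_ind(group1, group2)
    t13, p13 = stats.ttest_ind(group1, group3)

    print(f"Group 1 vs Group 2: t = {t12:.2f}, p = {p12:.4f}")
    print(f"Group 1 vs Group 3: t = {t13:.2f}, p = {p13:.4f}")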

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarised or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

Double-blind review, in which reviewers do not know the authors' identities, is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.


How to peer review

For science to progress, research methods and findings need to be closely examined and verified, and from them a decision made on the best direction for future research. After a study has gone through peer review and been accepted for publication, scientists and the public can be confident that the study has met certain standards and that the results can be trusted.

What you will get from this course

When you have completed this course and the included quizzes, you will have gained the skills needed to evaluate another researcher's manuscript in a way that will help a journal editor make a decision about publication. Additionally, having successfully completed the quizzes will let you demonstrate that competence to the wider research community.

Topics covered

How the peer review process works

Journals use peer review both to validate the research reported in submitted manuscripts and to help inform their decisions about whether or not to publish that article in their journal.

If the editor does not immediately reject the manuscript (a "desk rejection"), then the editor will send the manuscript to two or more experts in the field to review it. The experts—called peer reviewers—will then prepare a report that assesses the manuscript, and return it to the editor. After reading the peer reviewers' reports, the editor will decide to do one of three things: reject the manuscript, accept the manuscript, or ask the authors to revise and resubmit the manuscript after responding to the peer reviewers' feedback. If the authors resubmit the manuscript, editors will sometimes ask the same peer reviewers to look over the manuscript again to see if their concerns have been addressed. This is called re-review.

Some of the problems that peer reviewers may find in a manuscript include errors in the study’s methods or analysis that raise questions about the findings, or sections that need clearer explanations so that the manuscript is easily understood. From a journal editor’s point of view, comments on the importance and novelty of a manuscript, and if it will interest the journal’s audience, are particularly useful in helping them to decide which manuscripts to publish.

Will the authors know I am a reviewer? Will I know who the authors are? 

Traditionally, peer review worked in a way we now call “closed,” where the editor and the reviewers knew who the authors were, but the authors did not know who the reviewers were. In recent years, however, many journals have begun to develop other approaches to peer review. These include:

  • Closed peer review — where the reviewers are aware of the authors' identities but the authors are never informed of the reviewers' identities.
  • Double-blind peer review —where neither author nor reviewer is aware of each other’s identities.
  • Open peer review —where authors and reviewers are aware of each other’s identity. In some journals with open peer review the reviewers’ reports are published alongside the article.

The type of peer review used by a journal should be clearly stated in the invitation to review letter you receive and policy pages on the journal website. If, after checking the journal website, you are unsure of the type of peer review used or would like clarification on the journal’s policy you should contact the journal’s editors.

Why serve as a peer reviewer?

As your career advances, you are likely to be asked to serve as a peer reviewer.

As well as supporting the advancement of science, and providing guidance on how the author can improve their paper, there are also some benefits of peer reviewing to you as a researcher:

  • Serving as a peer reviewer looks good on your CV as it shows that your expertise is recognized by other scientists. (See the supplemental material about the Web of Science Reviewer Recognition Service to learn more about getting credit for the reviews you do. Also see the supplemental material about ORCiD iDs to learn how to connect your reviews to your unique ORCiD iD.) 
  • You will get to read some of the latest science in your field well before it is in the public domain.
  • The critical thinking skills needed during peer review will help you in your own research and writing.

Who does peer review benefit?

When performed correctly peer review helps improve the clarity, robustness and reproducibility of research.

When peer reviewing, it is helpful to think from the point of view of three different groups of people:

  • Authors. Try to review the manuscript as you would like others to review your work. When you point out problems in a manuscript, do so in a way that will help the authors to improve the manuscript. Even if you recommend to the editor that the manuscript be rejected, your suggested revisions could help the authors prepare the manuscript for submission to a different journal.
  • Journal editors. Comment on the importance and novelty of the study. Editors will use your comments to assess whether the manuscript is of the right level of impact for the journal. Your comments and opinions on the paper are much more important than a simple recommendation; editors need to know why you think a paper should be published or rejected, as your reasoning will help inform their decision.
  • Readers. Identify areas that need clarification to make sure other readers can easily understand the manuscript. As a reviewer, you can also save readers' time and frustration by helping to keep unimportant or error-filled research out of the published literature.

Writing a thorough, thoughtful review usually takes several hours or more. But by taking the time to be a good reviewer, you will be providing a service to the scientific community.  

Accepting an invitation to review

Editors invite you to review as they believe that you are an expert in a certain area. They would have judged this from your previous publication record or posters and/or sessions you have contributed to at conferences. You may find that the number of invitations to review increases as you progress in your career.

There are several questions to consider before you accept an invitation to review a paper.

  • Are you qualified? The editor has asked you to review the manuscript because he or she believes you are familiar with the specific topic or research method used in the paper. It will usually be okay if you can review some, but not all, aspects of a manuscript. For example, suppose the study focused on a certain physiological process in an animal model you conduct your research on, but used a technique that you have never used. In this case, simply review the parts of the manuscript that are in your area of expertise, and tell the editor which parts you cannot review. However, if the manuscript is too far outside your area, you should decline to review it.
  • Do you have time? If you know you will not be able to review the manuscript by the deadline, then you should not accept the invitation. Sending in a review long after the deadline will delay the publication process and frustrate the editor and authors. Keep in mind that reviewing manuscripts, like research and teaching, is a valuable contribution to science, and is worth making time for whenever possible.
  • Do you have any conflicts of interest? You may have a conflict of interest if, for example:
  • The reported results could cause you to make or lose money, e.g., the authors are developing a drug that could compete with a drug you are working on.
  • The manuscript concerns a controversial question that you have strong feelings about (either agreeing or disagreeing with the authors).
  • You have strong positive or negative feelings about one of the authors, e.g., a former teacher who you admire greatly.
  • You have published papers or collaborated with one of the co-authors in recent years.

If you are not sure if you have a conflict of interest, discuss your circumstances with the editor.

Along with avoiding a conflict of interest, there are several other ethical guidelines to keep in mind as you review the manuscript. Manuscripts under review are highly confidential, so you should not discuss the manuscript – or even mention its existence – to others. One exception is if you would like to consult with a colleague about your review; in this case, you will need to ask the editor's permission. It is normally okay to ask one of your students or postdocs to help with the review. However, you should let the editor know that you are being helped, and tell your assistant about the need for confidentiality. In some cases, when the journal operates an open peer review policy, it will allow the student or postdoc to co-sign the report with you, should they wish.

It is very unethical to use information in the manuscript to make business decisions, such as buying or selling stock. Also, you should never plagiarize the content or ideas in the manuscript.


For further support

We hope that with this tutorial you have a clearer idea of how the peer review process works and feel confident in becoming a peer reviewer.

If you feel that you would like some further support with writing, reviewing, and publishing, Springer Nature offers some services which may be of help.

  • Nature Research Editing Service offers high-quality English language and scientific editing. During language editing, editors will improve the English in your manuscript to ensure the meaning is clear and identify problems that require your review. With Scientific Editing, experienced development editors will improve the scientific presentation of your research in your manuscript and cover letter, if supplied. They will also provide you with a report containing feedback on the most important issues identified during the edit, as well as journal recommendations.
  • Our affiliates American Journal Experts also provide English language editing* as well as other author services that may support you in preparing your manuscript.
  • We provide both online and face-to-face training for researchers on all aspects of the manuscript writing process.

* Please note, using an editing service is neither a requirement nor a guarantee of acceptance for publication. 


The Library Research Process, Step-by-Step


Peer Reviewed and Scholarly Articles

What are they? Peer-reviewed articles, also known as scholarly or refereed articles, are papers that describe a research study.

Why are peer-reviewed articles useful? They report on original research that has been reviewed by other experts before being accepted for publication, so you can reasonably be assured that they contain valid information.

How do you find them?  Many of the library's databases contain scholarly articles! You'll find more about searching databases below.

Watch: Peer Review in 3 Minutes

Why watch this video?

We are often told that scholarly and peer-reviewed sources are the most credible, but it's sometimes hard to understand why they are credible and why we should trust these sources more than others. This video takes an in-depth approach to explaining the peer review process.

Hot Tip: Check out the Reading Scholarly Articles page for guidance on how to read and understand a scholarly article.

Using Library Databases

What Are Library Databases? 

Databases are similar to search engines but primarily search scholarly journals, magazines, newspapers and other sources. Some databases are subject specific while others are multi-disciplinary (searching across multiple fields and content types). 

You can view our most popularly used databases on the Library's Home Page , or view a list of all of our databases organized by subject or alphabetically at  U-M Library Databases .

Popular Multidisciplinary Databases

Many students use ProQuest , JSTOR , and Google Scholar for their initial search needs. These are multi-disciplinary and not subject-specific, and they can supply a very large number of  search results.

Subject-Specific Databases

Some popular subject-specific databases include PsycINFO for psychology and psychiatry related topics and  PubMed for health sciences topics. 

Why Should You Use Library Databases?

Unlike a Google search, the Library Databases will grant you access to high quality credible sources. 

The sources you'll find in library databases include:

  • Scholarly journal articles
  • Newspaper articles
  • Theses & dissertations
  • Empirical evidence

Database Filters & Limits Most databases have Filters/Limits. You can use these to narrow down your search to the specific dates, article type, or population that you are researching.

Here is an example of limits in a database, all databases look slightly different but most have these options:

peer review example research

Keywords and Starting a Search

What are Keywords?

  • Natural language words that describe your topic 
  • Allows for a more flexible search - looks for anywhere the words appear in the record
  • Can lead to a broader search, but may yield irrelevant results

Keyword searching  is how we normally start a search. Pull out important words or phrases from your topic to find your keywords.

Tips for Searching with Keywords:

  • Use quotation marks to search for an exact phrase. Example: "climate change"
  • Use AND to combine keywords and narrow your results. Example: "climate change" AND policy
  • Use an asterisk (*) to truncate a word and match every word that begins with that root. Example: comput* will return computing, computer, compute, etc.
  • Use a question mark (?) as a single-character wildcard. Example: wom?n will find both woman and women. (Both wildcards are illustrated in the sketch after this list.)
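
If it helps to see exactly what the truncation and wildcard symbols match, the short Python sketch below mimics them with regular expressions. The mapping to regex is an illustrative assumption; each database implements its own matching rules, so check the database's help pages for the exact syntax it supports.

    import re

    # Truncation: comput* matches any word that starts with "comput".
    truncation = re.compile(r"\bcomput\w*", re.IGNORECASE)
    print(truncation.findall("computer computing compute commuter"))
    # -> ['computer', 'computing', 'compute']

    # Wildcard: wom?n matches exactly one letter in place of the "?",
    # so both "woman" and "women" match, but "womankind" does not.
    wildcard = re.compile(r"\bwom\wn\b", re.IGNORECASE)
    print(wildcard.findall("woman women womankind"))
    # -> ['woman', 'women']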

What are Subject Headings?

  • Pre-defined "controlled vocabulary" terms that describe what an item is about
  • Makes for a less flexible search - only the subject fields will be searched
  • Targeted search; results are usually more relevant to the topic, but may miss some variations

Subject Terms and/or Headings are pre-defined terms that are used to describe the content of an item. These terms are a controlled vocabulary and function similarly to hashtags on social media. Look carefully at the results from your search. If you find an article that is relevant to the topic you want to write about, take a look at the subject headings. 


More Database Recommendations

Need articles for your library research project, but not sure where to start? We recommend these top ten article databases for kicking off your research. If you can't find what you need searching in one of these top ten databases, browse the list of all library databases by subject (academic discipline) or title .

  • U-M Library Articles Search: Use Articles Search to locate scholarly and popular articles, as well as reference works and materials from open access archives.
  • ABI/INFORM Global: Indexes 3,000+ business-related periodicals (with full text for 2,000+), including the Wall Street Journal.
  • Academic OneFile: Provides indexing for over 8,000 scholarly journals, industry periodicals, general interest magazines, and newspapers.
  • Access World News [NewsBank]: Full text of 600+ U.S. newspapers and 260+ English-language newspapers from other countries worldwide.
  • CQ Researcher: Noted for its in-depth, unbiased coverage of health, social trends, criminal justice, international affairs, education, the environment, technology, and the economy.
  • Gale Health and Wellness
  • Humanities Abstracts (with Full Text): Covers 700 periodicals in art, film, journalism, linguistics, music, performing arts, philosophy, religion, history, literature, etc.
  • JSTOR: Full-text access to the archives of 2,600+ journals and 35,000+ books in the arts, humanities, social sciences, and sciences.
  • ProQuest Research Library: Indexes over 5,000 journals and magazines, academic and popular, with full text included for over 3,600.
  • PsycInfo (APA): Premier resource for surveying the literature of psychology and adjunct fields. Covers 1887-present. Produced by the APA.

Giving an effective peer review: sample framework and comments


The system of peer-reviewed journals requires that academics review papers written by other academics, that is, papers written by their peers. We have previously discussed peer review generally ( Why do the rules and conventions of academic publishing keep changing and how can researchers stay current? ) and how authors can effectively respond to peer review ( Writing effective response letters to reviewers: Tips and a template ). This article will cover the other side: being a reviewer.

Here, we'll look at the basic tenets of peer review, and we've provided a sample framework to help new reviewers give comments that will help authors strengthen their papers.

Basic tenets of peer reviewing:

There are 5 basic tenets that should be kept in mind:

  • Decline the review if you have any conflicts of interest (COIs).
  • Remember that you're advising the journal editor, not making the decision about whether to accept or reject.
  • Try to be helpful and always respectful to the author.
  • Maintain confidentiality of the paper contents.
  • Decline the review if you are too busy, or not familiar enough with the topic, to complete a proper review.

Peer reviews are intended to be impartial (unbiased), and so anyone asked to be a reviewer should consider, before accepting, whether they have any COIs. Anything that could make you, as a reviewer, consider the paper more or less favorably because of your relationship with the author is a COI. You should decline to review, or at minimum disclose to the journal editor, papers written by (a) past co-authors of yours, (b) members of your department, (c) your students or mentors, (d) personal friends, and (e) professional rivals. You should also decline if you will gain any potential financial or personal benefits from publication of the work. If you are unsure whether a conflict of interest exists, check the journal's guidelines or ask the journal editor. As examples of COI policies, Elsevier has a general factsheet on COIs and the International Committee of Medical Journal Editors provides information about peer reviewer responsibilities.

The reviewer acts as an advisor to the journal editor. Because of this, the review should be more than a simple "accept" or "reject". When writing a review, you should describe the reasons for the recommendation so that the editor can make an informed decision. It is far more important to comment on the academic content of a paper than on grammar and punctuation. However, if the language is too poor to understand the contents adequately, then alert the journal editor. See below for a sample framework that will assist you in ensuring that you've covered the most important points in your review.

The review will be sent to the author of the paper. Because of this, reviewers are in a strong position to advise the author on how the paper could be strengthened. Whether you are recommending acceptance or rejection, the author could benefit from your feedback and advice. One particular caution is when you want to suggest the authors cite your own papers—do this sparingly. The review should be intended to help the author, not the reviewer. Finally, reviews should be respectful in tone. Unfortunately, we've all seen derogatory and unhelpful reviewer comments at times, which do not help the author. Peer review should be collegial and respectful.

Reviewers receive submitted papers with the understanding that they are handling confidential communications. As such, they should not discuss the review or disclose any of its content to third parties. Reviewers also should not use their knowledge of the work they are reviewing to further their own personal interests.

Reviewers who are not able to provide a proper review, due to lack of time or lack of expertise in the area covered by the paper, should decline the review.


Sample Framework for Your Reviewer Comments

Many journals provide reviewers with a form to fill out during review, but the framework below can be used in other cases.

Describe the basic contribution of the paper. This should be a few sentences on the topic of the paper. Beginning with this helps the journal editor and lets the author know that you've understood the paper.

"This paper discusses _______________. The main contribution of the paper is ____________."

Give your recommendation. You can use one of the following sentences.

"I recommend that this paper be accepted."

"I recommend that this paper be accepted after minor revision."

"I recommend that this paper not be accepted without major revision."

"I recommend that this paper be rejected."

Give your reasons for your recommendation. Label these as "major comments". A few examples are given below.

Major comments:

  • The statistical analysis in this paper is suitable/unsuitable for….
  • In terms of experimental technique, this paper is conventional/novel, and so…
  • The Methods section does not clearly explain…
  • The results obtained will be useful in…
  • Some of the fundamental/recent papers in the field are not cited, among these…
  • I would like to see some discussion of the findings of the papers in relation to recent findings and developments in ______.

Finally, give some additional comments about the paper. This is where you can note problems with spelling and/or grammar, suggest changes to figures and tables, and make other specific comments. Label these as "minor comments". A few examples are given below.

Minor comments:

  • In several places, you've used the term _____, but it seems you mean _____.
  • In some of the figures, the legends are too small to be legible.
  • On page ____, it is stated that _____, but the paper by Smith et al. states that ______. Can you comment on this disparity?
  • Have you thought about testing this with _____________?

We hope you've found these tips useful. We currently offer support for new and experienced reviewers in a number of ways, including by translating their comments to English and by editing their English comments to ensure that the authors receiving the review have high-quality, well-worded comments that help them strengthen their manuscripts.

Also, if you have any questions about writing effective reviewer comments, please do let us know. We're happy to support you in this important academic task.



Peer review examples: 50+ effective phrases for your next review.

Are you struggling with writing effective reviews for your peers? Learn the dos and don'ts and get inspired by 50 peer review examples for coworkers.

Let's face it: giving feedback can be challenging, especially when it comes to peer reviews.

As a peer, you're in a unique position to provide constructive feedback to your colleagues. You want to help them grow and develop. But finding the right words to use is no walk in the park.

🙋 We're here to help you ensure your feedback is effective and actionable.

We collected a comprehensive peer review sample: 50+ effective review phrases to use in your next performance or skill review, helping you provide feedback that's supportive, constructive, and inspiring. You'll find peer review phrases for positive performance and constructive peer review feedback examples.

Plus, we've also included tips for giving peer review feedback (and how not to do it), supported by multiple peer feedback examples.


❓ What are peer review feedback examples?

Peer review feedback is part of an employee's development and performance process and an essential component of 360 feedback.

Performance reviews are a key part of 360-degree feedback systems and can be the difference between a happy employee and one who is just going through the motions.

Think of peer reviews as a thermometer that measures an employee's performance, skills, abilities, or attitudes by their fellow co-workers and team members.


As part of a wider performance management system, peer reviews help an organization in the following ways:

  • 🎯 Can be used as a goal-setting opportunity.
  • 🔎 Peer feedback helps identify the strengths and weaknesses of individual employees, teams, and the company as a whole.
  • 🌱 Suggestions from peers can help employees and team members develop personally and professionally.
  • 🔗 Boost employee motivation and satisfaction and strengthen trust and collaboration within the team.
  • 📈 Through peer reviews, employees can receive constructive criticism and solutions on how they can work to meet the company's expectations and contribute to its growth.

🌟 33 Positive peer review feedback examples

We structured these positive feedback samples into competency-specific examples and job performance-specific examples.

🗣️ Communication skills

  • "You effectively communicate with colleagues, customers, vendors, supervisors, and partners. You are a key driver of our high customer satisfaction scores."
  • "You are an excellent communicator, and you are adept at discussing difficult issues effectively and straight to the point."
  • "Tom has excellent communication skills and always keeps the team up-to-date on his progress, ensuring the team is always on the same page."
  • "John is an excellent mentor who is always willing to share his knowledge and experience with others, providing guidance and support when needed."
  • "Your approach to giving peer feedback is exemplary. You have a knack for delivering constructive insights in a manner that fosters growth and understanding. Your peers, including myself, value the way you phrase your feedback to be actionable and uplifting."

🤝 Teamwork & collaboration

  • "I appreciate the way you collaborate with your team and cross-functionally to find solutions to problems."
  • "You're an effective team member, as demonstrated by your willingness to help out and contribute as required."
  • "Sarah is a true team player who always helps out her colleagues. She consistently meets deadlines and produces work of a high standard."
  • "Bob is an excellent collaborator and has built strong relationships with his colleagues. He actively seeks out opportunities to share knowledge and support others on the team."

🤗 Mentoring & support

  • "I appreciate that you never make your team members feel belittled even when they ask the simplest questions. You're eager to help, and you're exceptional at mentoring when people need advice."
  • "I appreciate how Julie is always willing to share her knowledge and expertise with others. She is an excellent resource for the team and is always happy to help out when someone needs guidance."

😊 Positivity & attitude

  • "I appreciate how Sarah always brings a positive attitude to the team. She is always willing to help out and support others, and her enthusiasm is infectious."
  • "I appreciate how Maria always takes the time to build relationships with her colleagues. She is friendly and approachable, and she has a talent for bringing people together."
  • "I appreciate how you remain calm under pressure and greet customers with a smile."


🙏 Professionalism & work ethics

  • "I admire how you uphold organizational standards for inclusion, diversity, and ethics."
  • "I appreciate how John builds relationships with clients and colleagues. He is always professional and courteous, and he has a natural talent for making people feel comfortable and valued."
  • "I appreciate how David always takes a thoughtful and considered approach to his work. He is always looking for ways to improve his performance and is never satisfied with simply meeting the bare minimum."

⭐ Quality of work & performance

  • "Your copy-editing skills are excellent. You always ensure that all articles published by the content marketing team are thoroughly edited and proofed, which is very important here at (COMPANY)."
  • "You've improved XX by XYZ%, and you've streamlined the work process by doing XYZ."
  • "John has a great eye for detail and consistently produces high work quality. I appreciate the way he is always happy to lend a hand to others when needed and proactively offers ideas to improve processes."
  • "Karen is a fast learner and has a keen eye for detail, making her a valuable asset to the team."
  • "I can always count on you to give our customers the best customer experience, and I appreciate the way you go over and beyond for them."

🚀 Innovation & initiative

  • "You are always suggesting new ideas in meetings and during projects. Well done!"
  • "You constantly show initiative by developing new ways of thinking to improve projects and overall company success."
  • "Jane has been doing an excellent job with her projects, and her creativity and innovative ideas have helped move the team forward."
  • "Samantha has a creative approach to problem-solving, and I have noticed that she often comes up with unique and innovative solutions to complex challenges."

🌱 Self-improvement & learning

  • "You are constantly open to learning and ask for more training when you don't understand XYZ processes."
  • "You accept coaching when things aren't clear and apply what you learned to improve XYZ ability."
  • "David is a role model for the rest of the team with his continuous self-improvement mindset and focus on developing his skills and expertise."
  • "I appreciate how Karen is always looking for ways to improve her work and is never satisfied with the status quo. She is a great role model for the rest of the team."

💼 Leadership skills

  • "You show great leadership signs by owning up to mistakes and errors, fixing them, and communicating with others (quickly) when you're unable to meet a deadline."
  • "During our recent project, I noticed how effectively you lead the team. Your ability to listen to everyone's input, make decisions promptly, and delegate tasks was truly commendable. The team felt both supported and empowered under your guidance."
  • "Your leadership during challenging times is admirable. You remain calm, focused, and provide clarity when most needed. This not only keeps the team aligned but also instills a sense of trust and security amongst us."


😥 23 Effective negative performance peer review examples

All of the above are peer review examples for positive performance.

But we don't always have only good things to share.

So, what happens when you want to give negative feedback in cases of low or disappointing performance?

If handled right, negative feedback can improve an employee's performance. The key is giving criticism constructively.

📉 Overall employee performance

  • "While your presentations are always well-researched and insightful, they can sometimes run longer than scheduled, which affects subsequent agenda items. For future projects, consider practicing time management during meetings or working on summarizing key points more concisely."
  • "I've noticed that you often work late hours to meet deadlines. While your commitment is commendable, it's crucial to balance workload and ensure that tasks are spread out adequately. Perhaps adopting a more structured approach to project management or seeking delegation opportunities could help prevent last-minute rushes."
  • "I've observed that while you excel in your core tasks, there's occasionally a delay in responding to emails or returning calls. This sometimes causes minor setbacks in our project timelines. It might be beneficial to set aside dedicated times during the day for communication or using a tool to manage and prioritize your inbox."

🧠 Mindset & perspective

  • "You seem to focus more on what can't be done instead of offering solutions. I would like to see you develop an open mindset and work alongside our teammates on brainstorming solutions."
  • "Jane has strong ideas but could work on being more open-minded and considering the perspectives of others to create a more collaborative work environment. I highly encourage her to actively listen to others' ideas and provide constructive feedback. As a result, I think she will become a better collaborator."
  • "Lisa seems to stick to familiar routines and processes and be resistant to change. I think that she could benefit from being more open to change and new ways of doing things to encourage growth and innovation for the team. For a concrete suggestion, I would recommend for her to exchange ideas with new team members with different backgrounds or skill sets to broaden her perspective and challenge her existing ideas."
  • "I think your ideas are really creative and valuable, but I've noticed that you sometimes struggle to communicate them effectively in meetings. I think it would be helpful for you to practice presenting your ideas to a smaller group or one-on-one, and to ask for feedback from your colleagues on how you can improve your communication skills."
  • "Greg tends to be unclear or vague in his messaging, causing confusion and misunderstandings. I encourage him to practice active listening techniques such as asking questions to clarify understanding, and summarizing the conversation."
  • "I've observed challenges in your approach to communicating with remote workers. At times, there seems to be a disconnect or delay in relaying vital information, which has led to inefficiencies and misunderstandings. It might be beneficial to revisit your communication tools and strategies to ensure that everyone, regardless of their location, stays informed and aligned."
  • "I appreciate your attention to detail and your commitment to producing high-quality work, but I've noticed that you sometimes struggle to take feedback or suggestions from others. I think it would be helpful for you to practice being more open to feedback and to work on developing your collaboration skills."
  • "Frank often puts his personal goals above the team's objectives, causing conflict and tension in the workplace. He could work on being more of a team player and prioritizing the team's objectives over personal goals to avoid conflict and tension and help the team meet our goals faster. For example, I would like him to attend our team-building activities or events to help build stronger relationships within our team."

⏰ Time management & meeting deadlines

  • "I've noticed that you're having difficulty meeting your deadlines. I think it would be helpful for you to break down your tasks into smaller, more manageable pieces, and to communicate with fellow colleagues if you need more time or support to complete your work."
  • "Alex could benefit from developing better time management skills to prioritize tasks effectively and avoid delays and missed deadlines. I think that with the right time management training and resources, he will discover time saving processes."

🛠️ Task execution & quality

  • "I noticed you aren't meeting your targets. Let's get on a call in two days to go over your cold email strategy. Perhaps you can use an email verification tool to validate prospects' addresses."
  • "Jim could benefit from working on his organization skills and prioritizing his workload to avoid missed deadlines and inconvenience for the team. He could work on creating a system to better manage his workload and set reminders for important deadlines."
  • "Although he is very fast at handling customer requests, Tim is not detail-oriented and often overlooks important aspects of a project, leading to mistakes and oversights. One idea for improving his attention to detail while maintaining his fast response time could be to implement a system of double-checking or quality control."

💼 Professionalism & attitude ‍

  • ‍ "Peter could benefit from improving his professionalism in the workplace and avoiding negative or gossipy conversations that create tension. I think that focusing on more positive and constructive interactions with colleagues could help create a better work environment and work relationships."
  • "Samantha can be confrontational and abrasive, making it difficult for others to work with her. She could work on being more approachable and collaborative. One way to do so is by practicing active listening and binge more mindful of how she communicates with others."

🌱 Personal development & growth

  • "I appreciate the effort you're putting in, but I've noticed that you're struggling with certain tasks. I think it would be helpful for you to receive additional training or guidance in those areas."
  • "Sarah has great potential but there is room for improvement, especially with regards to seeking out opportunities to contribute and taking initiative on tasks. I think she could benefit from setting goals and creating a plan to take more ownership of her work."


  • "During team meetings, it would be beneficial if you could encourage other team members (especially quiet ones) to voice their opinions. When a few individuals dominate the discussions, it might be stifling innovative ideas from others."
  • "I've noticed you generally give feedback in group settings. It would be more effective and respectful to provide constructive criticism in private to avoid any unnecessary embarrassment or tension amongst the team."
  • "When receiving feedback, I've observed that you sometimes become defensive or dismissive. Truly embracing feedback can catalyze growth and development. It might be beneficial to explore methods or strategies that foster a more open and accepting attitude towards feedback."
🌱 Use your peer's feedback to create a development plan that sets the path for growth: first, set concrete professional development goals; then define the concrete steps that will make your goals a reality.


📝 How do you write a peer review: Dos & don'ts for giving feedback to peers

The following steps will help you learn how to write a peer review for your co-workers.

For each step, we included positive peer feedback examples and negative peer feedback examples.

By following these guidelines, giving quality feedback should no longer feel like an intimidating task.

1. Think about their work

Before writing your peer review, think about your colleagues' contribution to the workplace.

Then, to get you started, ask yourself the following questions:

  • What are their strengths? What are their weaknesses?
  • How can they improve?
  • What are their latest accomplishments?
  • What do I like or appreciate about them?
  • What do I wish they did less? What do I want them to do more?
  • What are their expected competencies? (In case your company uses a competency model.)

🔴 DO NOT make the peer review personal. Try to avoid phrases like "I don't like..." or "I'm not comfortable with..." when giving constructive feedback.

🟢 DO tie your comments to the goal of the peer review, not your personal preferences.

👎 "I don't really pay attention to what John does, so I can't say much about his work."

This peer review example is not helpful or constructive feedback because it doesn't provide any specific information or insights about John's work or his abilities. The feedback is vague and non-specific.

This kind of feedback is not only unhelpful, but it can also be demotivating and discouraging for John. He may feel that his contributions are not valued or recognized.

Recognition is something people need to stay motivated and engaged. The last thing you want is to disengage and demotivate your peers.

👍 "John has a great eye for detail and consistently produces high-quality work. I appreciate his ability to prioritize tasks and his willingness to help others when needed."

This peer review sample is a good peer review example. It acknowledges John's strengths and provides specific examples of his skills and abilities.

The reviewer highlights John's ability to produce high-quality work, his attention to detail, and his willingness to help others, which are all positive attributes that contribute to the team's success.

👎 "I don't like the way that Mary interacts with others on the team. She can be really abrasive and confrontational, which makes it difficult to work with her."

This peer review example is overly negative and vague, providing no specific information or insights that could help the colleague improve. It also uses emotionally charged language that can be interpreted as a personal attack rather than constructive feedback.

In contrast, a better approach is to name the specific behavior and suggest a constructive way forward, as the next example does.

👍 "I've noticed that Mary sometimes comes across as confrontational or abrasive in team meetings, which can create tension and make it difficult to collaborate effectively. I think it would be helpful for Mary to work on developing more positive and collaborative communication skills, such as active listening and empathy, to build more positive relationships with her colleagues."

This is another good peer review example because it identifies a specific, observable behavior and provides actionable feedback on how to improve it.

By focusing on specific behaviors that Mary can improve, such as active listening and empathy, the feedback is constructive and helpful. It also gives her specific strategies for growth and development in her role, which can help her continue to excel in her work.

Overall, this kind of feedback can be a powerful tool for helping colleagues to grow and develop in their roles, and for promoting a more collaborative work environment.

2. Be mindful of your colleague's feelings

While it's okay to give constructive feedback and share your honest thoughts on a peer review, you should communicate your opinions professionally without being rude or insulting.

Also, instead of constantly reiterating their weaknesses, let their strengths shine and think of solutions that could motivate them to do better.

🟢 DO be mindful of the tone of your feedback. Using harsh or judgmental language can damage relationships and create a negative work environment.

🔴 DO NOT  use condescending language when evaluating your colleague's performance.

Let's look at some peer feedback examples.

👎 "I don't believe my colleague can function effectively in this job."

👎 "I'm not really sure what Mary does around here. She seems to just be coasting and not really contributing much to the team."

👎 " Mary's work is consistently subpar and it's frustrating to work with her. She needs to work harder."

These are poor examples of peer feedback because they are overly negative and do not provide any actionable steps for the person receiving the feedback to improve their performance.

Words like "subpar" and "frustrating" can be hurtful and demotivating, and don't give any specific information on what exactly Mary needs to improve on or how to do so.

👍 "While there's room for improvement, I appreciate the effort Mary puts into her work. I think she could benefit from more training and guidance on how to prioritize tasks."

👍 "I think Mary has the potential to be a great team member, but she could benefit from improving her communication skills. I would suggest that she work on being more clear and direct in her interactions with others."

These are better examples of constructive peer feedback because they acknowledge Mary's effort and provide specific steps for improvement. The reviewer uses more positive language to acknowledge that Mary is trying, and suggests that training and guidance could help her prioritize tasks more effectively and communicate more clearly.

The positive examples are more specific, actionable, and solution-focused, and are more likely to lead to improved performance and a more positive work environment.

By focusing on specific areas for improvement and suggesting a way forward, the feedback provides Mary with a clear path to success and encourages her to continue working hard to improve her skills.

3. Explain in detail

When given a peer review form, it can be tempting to focus solely on a particular area of your co-worker's performance, but doing so won't help them in the long run.

🟢 DO share a comprehensive review: it helps your manager identify areas of improvement and helps your colleague understand how others view their overall performance at work.

🔴 DO NOT focus on a single event or project. Discuss how they operate daily and their attitudes to work.

Do they have excellent communication skills? Are they great at collaborating with people?

How do they approach brainstorming sessions or complex tasks?

🔴 DO NOT critique every tiny detail of your colleague's performance. For example, a colleague's approach to handling a difficult task may be to step away from everyone and work through it alone, which may simply differ from your own approach.

🟢 DO understand and appreciate that everyone has different working styles, which reflect their personalities and who they are.

Let's analyze some concrete peer review feedback examples.

👎 "Samantha's work is good."

👎 "Jane is a great teammate. Great work."

For the negative examples of peer review comments, the feedback is too vague. It doesn't provide enough detail to be actionable or meaningful for the recipient.

👍 "Samantha has great communication skills and is always willing to step in and help others. She excels at problem-solving and is able to stay calm under pressure."

👍 "I really appreciate Jane's ability to stay calm under pressure and help us problem-solve when things get tough. She's always willing to pitch in and go above and beyond to make sure the team succeeds, whether it's taking on extra work or providing a listening ear when someone needs to vent."

For the positive examples of peer review comments, the reviewer provides specific examples of the colleague's behavior and how it positively impacts the team. As a result, the feedback is more meaningful; the receiving peer can use it to continue being a great teammate in the future.

👎 "I can't believe how poorly Tom handled the client meeting last week. He was disorganized and unprepared, and it was clear that the client was not impressed.

This example of peer review feedback is overly negative and strictly refers to a single event. There is no indication that John always displays the same behavior. It also does not acknowledge any strengths or positive attributes that Tom may possess, which can make the feedback feel overly harsh and unfair.

👍 "I think Tom has a lot of potential, but I have noticed that he tends to struggle with giving presentations. I think it would be helpful for him to work on his preparation and public speaking skills, perhaps by attending a workshop or training session. With some additional support and training, I believe Tom could continue to grow in his role and make a positive impact on the team."

In this example, the reviewer does not refer to a single event but to a recurring behavior. By providing specific feedback and actionable steps for improvement, the feedback is more constructive and helpful for the colleague. It also focuses on growth and development rather than criticism and negativity.

This is what we call an effective peer reviewer.

4. Write clearly

Summarize what you've noticed about your co-worker's performance.

🟢 DO mention areas of improvement you've noticed and highlight areas you hope to see them work on in the future.

🔴 DO NOT beat around the bush with your answers during peer reviews.

Ensure your answers are clear, concise, and easy to understand.

👎 "Brian is fine, I guess."

This peer review doesn't provide any specific information or insights about Brian's work or his abilities.

This is a clear example of ineffective feedback. There is nothing actionable for Brian. Even if, on the surface, the reviewer did not share anything negative, there is no takeaway for the reviewee.

👍 "I've noticed that Brian has been taking on more responsibilities lately and doing a great job. I think he could benefit from more opportunities to showcase his leadership skills and contribute to larger projects."

This peer review sample is a good example of constructive feedback. It acknowledges Brian's growth and contributions to the team, and suggests opportunities for him to further develop his skills and take on more responsibility.

By acknowledging that Brian has been taking on more responsibilities and doing a great job, the feedback is specific and provides actionable steps for Brian to continue to excel in his role.

👎 "I think Mary is a good worker overall, but there are some things she could improve on. Maybe she could be more organized or something."

👍 "I have noticed that Mary tends to struggle with prioritizing her tasks and meeting deadlines. To help her improve in these areas, I think it would be helpful for her to work on creating more detailed to-do lists or setting reminders for herself. Additionally, I think Mary could benefit from some additional training or support in project management skills."

📜 Templates you can use for your next peer review

Employee peer review templates for annual performance reviews

While there are different ways to create a peer review template, we recommend using Google Docs or Microsoft Word. Not only are they easier to use, but they are free too. With these two online document creation tools, you can say goodbye to purchasing expensive peer review templates or downloading special software.

Here is our free Google Forms template you can give colleagues to send each other meaningful feedback.

Peer review feedback form

  • 🌱 Make it as easy as possible for people to give each other meaningful feedback.
  • 🧩 It's 100% customizable so you can truly make it your own.
➡️ Download your free peer review feedback form here.


You could also use Zavvy's feedback tool to collect peer reviews.

With Zavvy, you can create peer review forms that are relevant to the department or job.

For example, peer review forms for sales representatives, customer support specialists, or receptionists should focus on soft skills, whereas forms for a cybersecurity engineer or software developer might focus on technical skills.

This means that you'll need some kind of sheet that outlines your peer's competencies.


Don't forget to leave blank spaces on your peer review forms to allow the reviewers to add important yet overlooked topics.

If you're using Zavvy, you can either have reviewees choose their peers themselves - or have managers do it for them.


➡️ Facilitate feedback and growth with Zavvy

When implemented right, peer reviews can offer insights that you might never have otherwise discovered and increase an employee's performance.

Zavvy makes collecting feedback a breeze. With just a few clicks, you will have recurring feedback cycles.

  • Select the types of feedback you want to collect: any combination of self-review, downward, upward feedback, or peer reviews.
  • Customize the survey forms for each feedback type (or use one of our ready-to-use templates).
  • Define your anonymity settings (should all feedback be anonymous?).
  • Decide if you want to include a performance calibration step.
  • Select the participants for your review cycle (for example, Taktile automates feedback cycles for their new hires at the 6-week, 12-week, and 18-week marks of their new hire journeys).
  • Define the timeline for writing, nomination, and feedback-sharing tasks.
  • Double-check all the details and activate your cycle 🏁.


But, it's one thing to collect peer review feedback, and it's a different ballgame to use it to propel employee growth.

Don't leave your employees wondering what comes next.

Instead, roll out learning and development programs to improve their skills and put them on the right career path.

📅 Want to ensure a cycle of continuous development and grow your people? Book a demo today.


Commonwealth Honors College: Getting Started With Library Research


Why Databases?



Databases are collections of information. We purchase access to several databases that contain journals and magazines where you can find articles for your research.

There are two types of databases for articles:

Subject-specific: These databases gather articles from journals about specific disciplines or topics, such as Education or Art or Psychology.

  • Good for: Finding scholarly articles on very specific topics

Multidisciplinary: These databases gather articles from across multiple disciplines. A multidisciplinary database might cover a wide variety of social sciences, or it might span the arts, humanities, social sciences, and sciences.

  • Good for: Finding scholarly articles on your topic from a variety of perspectives from different disciplines

Peer-reviewed articles may also be referred to as refereed or scholarly articles.

Scholarly articles are written by researchers or experts in a field to share the results of their original research or analysis with other researchers, experts and students. These articles go through a process known as "peer review" where the article is reviewed by a group of experts in the field and revised based on peer feedback before being accepted and published by a journal.

This short video further explains what peer review is and why it's important.

  • Video: Peer Review

These databases are examples of good subject-specific databases for researching the disciplines of Art, Education, and Psychology:


ERIC: Education journal articles (EJ references) and ERIC documents (ED references), 1967-present. EDs before 1997 are requestable using the Microforms Request page and usable in the Microforms Viewing Room in the LC.

A free version of ERIC is available for all to use at this link: https://eric.ed.gov/ .

Available on campus to all, or off-campus to UMass Amherst students, staff and faculty with a UMass Amherst IT NetID (user name) and password.

These are examples of multidisciplinary databases that also have a broader focus. Social Science Premium Collection covers multiple disciplines in the social sciences, and Scopus has coverage in the arts, humanities, social sciences, and sciences. With Scopus, you can sort by citation count to see highly cited articles.

  • Scopus: an indexing and abstracting database of peer-reviewed scholarly content covering the sciences, social sciences, and arts & humanities, comparable to the Web of Science. Scopus allows for the discovery, tracking, and analysis of scholarship that includes journal articles, conference proceedings, trade magazines, book series, books and book chapters, and patents. Use Scopus to: search for documents by topic, title, author, or institutional affiliation; perform citation searches and establish citation alerts; export citations to reference management systems; view impact metrics for authors and journals; and integrate Scopus content with ORCID profiles. Available on campus to all, or off-campus to UMass Amherst students, staff and faculty with a UMass Amherst IT NetID (user name) and password.

We have more than 600 databases on a wide variety of topics. The spectrum ranges from databases that have a very specific topic to databases that are multidisciplinary.

The easiest way to find databases with articles on your research topic is to use the Databases A-Z List. Use the link below to go to the list.

You can use the following filters to find databases based on subject and format:

  • Click on the Subjects filter to narrow down to a specific subject. If you select Multidisciplinary, you will get databases that cover a wide variety of publications.
  • Click on the Types filter and select the Articles filter. This narrows down the list to databases with articles (abstract only and full-text).
  • Finally, click Search .

A-Z list interface showing the subject and format filters in use

  • You can select multiple subjects. Once you've picked one subject, you can go back and select another to add.
  • If you use the filters, make sure to click on Clear Filters before switching to another subject and/or format.
  • Try exploring different subjects to find databases that offer other disciplines' perspectives on your topic. For instance, you might want to explore psychology databases if you're researching the effects of a specific learning theory.
  • If there's a database you want to bookmark, make sure to bookmark the link from the Databases A-Z list.
  • Databases A-Z: List of databases by subject and type.

Library staff at the UMass Libraries have developed research guides by subjects, topics and collections. You can look at various guides and see what resources librarians recommend for those subjects, which includes databases where you can find articles.

  • UMass Amherst Libraries Research Guides

If the article that you want doesn't have full-text available, look for this icon in the result for the article and click on it:

UMass Full Text Finder icon

This will search our other databases to see if it's available full-text. You'll go to a page that may list several of the options if they are available:


Click on the name of the database to go directly to the article. If more than one option is listed, check the date ranges to make sure the date of your article falls within the range.

Sometimes that link will send you to the database instead of the specific article. If that happens, search for the article in the new database.

If we don't have another database that has full-text, you can submit an Interlibrary Loan (ILL) request for the article (for free!). Clicking on this link will take you to the login for our ILL system. The best part is that it will fill in the article details needed for ILL for you!

If you haven't used ILL before, please see the XXXXX page on the left for details on activating your account.

This will search Google Scholar to see if there's a full-text version available for the article.

This will search Unpaywall to see if there's a full-text version available for the article.

Unpaywall is an open database of open access content from publishers and repositories.

Google Scholar searches scholarly literature across many topics. However, we don't know what it searches - you can't tell if it's a comprehensive search of the literature. The benefit of using library databases is that you can see where the information in the database is from, such as a list of publications.

Use Our Google Scholar Link!

You want to use the Google Scholar link from the Databases A-Z list or use the link below (and use that link if you want a bookmark!)

This will allow you to search Google Scholar, and if the article is in one of our databases, you'll see a link to the article on the right and/or UMass Check for Full Text. The check for full text will do the same as the UMass icon described above.

Full text links from Google Scholar

Google Scholar Search Tips

  • You can limit the search by exact phrases, exclude specific words, or select where the searched words occur (anywhere or just in the title). You can also search by author, journal and/or specific date ranges.
  • Most of the article search tips below will work for Google Scholar!
  • Google Scholar: Use to access many UMass online journal subscriptions. Available on campus to all, or off-campus to UMass Amherst students, staff and faculty with a UMass Amherst IT NetID (user name) and password. You can access Google Scholar with UMLinks buttons from outside the UMass Amherst IP range ("off campus") by two methods: 1. Access Google Scholar through the Library web site by using this link. 2. Go to generic Google Scholar, click on "Settings," click on Library links, type in "University of Massachusetts" or "UMass Amherst" (or a few other variations), then check "University of Massachusetts Amherst - UMass Check for Full Text" and Save. You will be asked to authenticate somewhere along the way to full text.

If you know the name of the journal of the article that you want, you can use Publication Finder to see if we have electronic access to the journal. You can search for the name of the publication and limit by publication type.

Publication Finder interface

How To Search

  • If you are getting too many results, you may want to change Contains to Exact Match or Begins With to narrow the results down.
  • You can also switch from Title to ISSN and search by the ISSN for the journal if you have it. You can often find the ISSN on the publisher's page for the journal. This is helpful for journals with frequently used words in the title, such as Journal or Education .
  • If you see Full Text Delay, this means that only abstracts are available for the specified number of years.

Publication Finder journal result

  • Once you've determined that the date is available, click on the name of the database. This will bring you to the details for the publication.
  • Usually there is some way to browse by the year (often on the right or in a drop-down field in a bar under the publication's name).
  • There is often a link to click to search within that publication or sometimes a search bar to immediately search within the publication.
  • Publication Finder: Search PubFinder to see if we have electronic access to a publication by name or ISSN.
  • << Previous: Discovery Search
  • Next: Print Materials >>
  • Last Updated: Aug 27, 2024 7:31 PM
  • URL: https://guides.library.umass.edu/getstartedchc

© 2022 University of Massachusetts Amherst • Site Policies • Accessibility

University Libraries

FYEX1110 SoE


Source Evaluation: A Good Place to Start

Video: Peer Review in 3 Minutes


Using sources found in the library's databases takes some -- but NOT ALL -- of the guesswork out of determining if a source is accurate, credible, and appropriate to use in a specific context. The caveats below can help you get started. Don't worry: evaluating sources gets easier with practice and experience.

Don't forget to consider your own information needs. Is this information going to be useful to you? Is it relevant to what you're trying to create or learn? Remember, you're bringing your own background, identity, and worldview to the table, as everyone must, whenever you absorb new information. So also consider how those factors might be shaping your reaction to this text.

 
Keep these caveats in mind as you evaluate sources:

  • Reputation of the author/institution can be a flawed indicator.
  • Even good journals sometimes make mistakes.
  • Impact factor is sometimes used problematically.
  • Beginner researchers might not have expertise to judge.
  • Citation counts can be misleading.
  • Sometimes older information is still relevant.

  • << Previous: Search Strategies
  • Next: Using Sources >>
  • Last Updated: Aug 29, 2024 11:32 AM
  • URL: https://libguides.unm.edu/fyex1110-soe


Announcing the New Peer Review Framework for Research Project Grant and Fellowship Applications Submitted to the National Institutes of Health


Essential Science Conversations

  • Slides (PDF, 1MB)
  • Transcript (PDF, 96KB)

Have you heard about the initiative at the National Institutes of Health (NIH) to improve the peer review of research project grant and fellowship applications? Join us as NIH describes the steps the agency is taking to simplify its process of assessing the scientific and technical merit of applications, better identify promising scientists for training opportunities, and mitigate elements that have the potential to introduce bias in review.

This program does not offer CE credit.

Valerie Durrant, PhD


  • Open access
  • Published: 24 August 2024

User engagement in clinical trials of digital mental health interventions: a systematic review

Jack Elkes, Suzie Cro, Rachel Batchelor, Siobhan O’Connor, Ly-Mee Yu, Lauren Bell, Victoria Harris, Jacqueline Sin & Victoria Cornelius

BMC Medical Research Methodology, volume 24, article number 184 (2024)


Introduction

Digital mental health interventions (DMHIs) overcome traditional barriers, enabling wider access to mental health support and allowing individuals to manage their own treatment. How individuals engage with DMHIs impacts the intervention effect. This review determined whether the impact of user engagement on the intervention effect was assessed in Randomised Controlled Trials (RCTs) evaluating DMHIs targeting common mental disorders (CMDs).

This systematic review was registered on Prospero (CRD42021249503). RCTs published between 01/01/2016 and 17/09/2021 were included if the evaluated DMHIs were delivered by app or website, targeted patients with a CMD without non-CMD comorbidities (e.g., diabetes), and were self-guided. The databases searched were Medline, PsycInfo, Embase and CENTRAL. All data were double extracted. A meta-analysis compared intervention effect estimates when accounting for engagement and when engagement was ignored.

We identified 184 articles randomising 43,529 participants. Interventions were delivered predominantly via websites (145, 78.8%), and 140 (76.1%) articles reported engagement data. All primary analyses adopted treatment policy strategies, ignoring engagement levels. Only 19 (10.3%) articles provided additional intervention effect estimates accounting for user engagement: 2 (10.5%) conducted a complier-average-causal-effect (CACE) analysis (principal stratum strategy) and 17 (89.5%) used a less-preferred per-protocol (PP) population excluding individuals failing to meet engagement criteria (estimand strategies unclear). In the meta-analysis of PP estimates, accounting for user engagement changed the standardised effect to -0.18 (95% CI -0.32, -0.04) from -0.14 (95% CI -0.24, -0.03), with sample sizes reduced by 33%, decreasing precision; for CACE estimates the change was to -0.19 (95% CI -0.42, 0.03) from -0.16 (95% CI -0.38, 0.06), with no sample size decrease and less impact on precision.

Discussion

Many articles report user engagement metrics, but few assessed their impact on the intervention effect, missing opportunities to answer important patient-centred questions about how well DMHIs work for engaged users. Defining engagement in this area is complex, and more research is needed to find ways to categorise engagement into groups. However, the majority of articles that considered engagement in analysis used approaches most likely to induce bias.


One in four people experience a mental health problem every year [1]. However, an estimated 70% of people with mental ill health are unable to access treatment [2]. App and web-based tools, collectively digital mental health interventions (DMHIs), are low cost, scalable [3], and have potential for overcoming traditional barriers to treatment access, such as physical access (flexibility in treatment location), confidentiality (providing anonymity), and stigma [4]. In recent years, the number of available DMHIs has rapidly increased [5]; the Apple App Store alone has over 10,000 behavioural apps [6]. This rapid increase, combined with the complex nature of DMHIs, has meant safety and effectiveness regulations have lagged behind [7]. Additionally, many DMHIs are developed for commercial purposes and marketed to the public without scientific evidence [8]. The current National Institute for Health and Care Excellence (NICE) guidelines [9] for digital health technologies advocate for the use of randomised controlled trials (RCTs) to evaluate the effectiveness of digital interventions in specific conditions such as mental health. Promisingly, the number of digital interventions evaluated in RCTs over the last decade has more than doubled [10].

Many DMHIs are developed through the digitalisation of existing services, such as online self-led formats of conventional therapist-delivered treatments. However, in contrast to conventional therapist-led treatments, DMHIs offer flexible anytime access for individuals [11]. This change in delivery means existing evidence of the risk-benefit balance from structured therapist-delivered interventions is not translatable. DMHIs are potential solutions to provide more individuals with much-needed treatment access, but they are not without challenges. In 2018 the James Lind Alliance (JLA) patient priority setting group for DMHIs set out the top 10 challenges to address [12]. Overcoming these challenges is essential for DMHIs to successfully improve treatment access and health outcomes in mental health [13, 14]. One theme that emerged across the priorities was the importance of improving methods for evaluating DMHIs, including the impact of user engagement.

The impact user engagement has on DMHI efficacy is poorly understood [6, 15, 16]. Although DMHIs are widely available, user engagement with them is typically low [17]. For multi-component DMHIs (commonly including psychoeducation, cognitive exercises, and a self-monitoring diary), minimally sufficient engagement is often crucial for establishing behavioural changes and thus improved health outcomes [18]. However, achieving sustained behavioural change by engaging with DMHIs is a multidimensional construct that is challenging to assess, and the pathway for patients to achieve it is complex [19, 20]. Unlike other interventions, DMHIs are unique in that web-based or app-based interventions can capture interactions from individuals. User engagement can be measured and recorded using automatically captured indicators (e.g., pageviews, proportion of content/modules completed, or number of logins). However, the large variety of measurable indicators across different DMHIs [16, 21] further compounds the challenge of understanding pathways to sustained behaviour change.

For RCTs, the latest estimand framework in the ICH E9 R1 addendum [22] provides guidance on defining different estimands, which enables trialists to ensure the most important research questions of interest are evaluated. This includes guidance on handling post-randomisation events, such as user engagement with the DMHI, in efficacy analysis. For example, policy makers are likely to be most interested in a treatment policy estimand, which provides an assessment of the benefit received on average under the new policy of prescribing the DMHI regardless of how it is engaged with. For DMHIs engagement is typically poor, which means treatment policy estimands may underestimate the true intervention efficacy for those who engaged [23], so alternative estimands that address this may also be of interest to target. One example is the benefit received on average by individuals who would actively engage with the DMHI (a principal stratification estimand). However, to utilise available methods, post-randomisation variables need to be clearly defined; this is difficult for engagement with DMHIs because engagement is multifaceted, with many different indicators available to use.
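
As a worked illustration of the principal stratification idea (our sketch, not a formula taken from this review), the complier-average causal effect (CACE) can be written in terms of quantities estimable from a randomised trial, where Z is randomised assignment, Y is the outcome, and "engaged" stands for whatever engagement definition a trial pre-specifies:

\[
\widehat{\mathrm{CACE}} \;=\; \frac{\hat{E}[Y \mid Z = 1] \,-\, \hat{E}[Y \mid Z = 0]}{\widehat{\Pr}(\mathrm{engaged} \mid Z = 1) \,-\, \widehat{\Pr}(\mathrm{engaged} \mid Z = 0)}
\]

This identity holds under the standard instrumental-variable assumptions: randomisation of Z, monotonicity (no one engages only when assigned to control), and the exclusion restriction (assignment affects outcomes only through engagement). For self-guided DMHIs the control arm typically cannot engage at all, so the denominator reduces to the engagement rate in the intervention arm. The sensitivity of this estimator to how "engaged" is defined is precisely the difficulty described above.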

This systematic review aimed to assess the current landscape of how RCTs for DMHIs are reported and analysed. The review primarily assessed how user engagement is described, what engagement indicators are reported and how, if at all, researchers assessed the impact of user engagement on efficacy. As the number of DMHIs evaluated in RCTs is ever increasing, this review is essential to identify current practice in trial reporting to inform further research to improve the quality of future trials. The specific research aims of interest were to: (1) examine trial design and characteristics of DMHIs; (2) summarise how user engagement had been defined and measured in RCTs of DMHIs; and (3) assess how often intervention efficacy was adjusted for user engagement and the impact of user engagement on efficacy estimates.

The protocol for this systematic review was prospectively published in Prospero [24], and PRISMA guidance was followed in reporting of this review.

Study selection

We included RCTs examining the efficacy of DMHIs, excluding pilot and feasibility studies [25]. Search terms for RCT designs followed guidance from Glanville et al. [26]. We included trials of participants with common mental disorders (CMDs) as defined by Cochrane [27], excluding populations with non-CMD comorbidities, such as patients with depression and comorbid diabetes. Populations with multiple CMDs were not excluded, as there were many transdiagnostic interventions targeting overlapping symptoms of different conditions. Both trials requiring a confirmed clinical diagnosis and trials where participants self-referred were included. For consistency, included DMHIs must meet any criteria from items 1.1 (targeted communication on health information), 1.3 (client to client communication, e.g., peer forums), 1.4 (health tracking or self-monitoring) or 1.6 (access to own health information) from the WHO Classification of Digital Health Interventions [28]. DMHIs must have been delivered on a mobile app or through a web browser and been self-guided by participants, defined as an intervention where participants have full autonomy over how it is used. Search terms for interventions followed guidance from Ayiku et al. [29]. All publications must have been reported in English.

The search was performed on 17th September 2021 and included trials published between 1st January 2016 and 17th September 2021. Search terms were adapted for each database: MEDLINE, Embase, PsycINFO and Cochrane CENTRAL (see supplemental table S1 for the search strategy). Titles and abstracts were independently screened by two reviewers (JE, RB, SO, LB, LM & VH), and again at the full-text review stage. Covidence [30] was used to manage all stages, remove duplicates and resolve disagreements.

As this is a methodology review examining how user engagement was described and analysed, a risk-of-bias assessment of trial quality was not undertaken [31]. However, key CONSORT items [32] were extracted to determine adherence to reporting guidance, including reporting of a protocol or trial registration (item 23/24), planned sample size (item 7a) and amendments to the primary analysis (item 3b). For all items, self-reported data from the articles were extracted.

Data extraction

A data extraction form was developed by the lead author (JE) and reviewed by VC, SC and JS. Summary data extracted covered: trial characteristics (e.g., design and sample size); intervention and comparator descriptions (e.g., delivery method or primary function); participant demographics (e.g., age or gender); reporting of user engagement (e.g., indicators reported); and point estimates, confidence intervals and P-values of analysis results unadjusted and adjusted for user engagement. In trials with multiple arms, the first active arm mentioned was included. No restriction was applied to the control arm. The full extraction sheet, including CONSORT items, is in table S2 of the supplementary material.

The analysis was predominantly descriptive, using means and standard deviations or medians and interquartile ranges (IQRs) to describe continuous variables. Frequencies and percentages summarised categorical variables. User engagement captured through engagement indicators (e.g., pageviews and total logins) and methods to encourage user engagement (e.g., automatic notifications) were summarised descriptively. Indicator data were summarised in four categories: duration of use (e.g., length of session), frequency of use (e.g., number of logins), milestones achieved (e.g., modules completed) and communication (e.g., messages to a therapist). Descriptive summaries also covered recommended user engagement definitions (the pre-specified minimum engagement level investigators asked participants to maintain) and active user definitions (the pre-specified engagement level of most interest to investigators for intervention effects accounting for user engagement). Both were summarised by the indicators used in the definitions.

To determine the impact of user engagement on intervention efficacy, restricted maximum likelihood (REML) random-effects meta-analyses were conducted for articles that reported the intervention effect both when user engagement was accounted for and when it was not. Standardised effects were used because outcomes and measures varied between articles. These were taken directly where reported, and otherwise calculated using guidance from Cochrane [33] and Cohen's d formula for the standard deviation [34]. Articles were grouped by outcome domain (e.g., depression, anxiety or eating disorders) based on the reported primary clinical outcome used to evaluate efficacy. Analyses also grouped articles based on the analytical approach used for adjustment: those using statistical methods that retained all participants formed one group (recommended approaches), and those using statistical methods that retained only conventional per-protocol populations, i.e., excluding the data from those who did not comply, formed the other group (per-protocol approaches). All analysis was performed using Stata 17.
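
To make these estimation steps concrete, the sketch below shows how a standardised effect (Cohen's d) and a random-effects pooled estimate can be computed. This is an illustrative sketch only: the review used REML random-effects models in Stata 17, whereas this example uses the simpler DerSimonian-Laird between-study variance estimator; the helper names are our own, and the study means, SDs, effects and variances are hypothetical numbers, not data from the review.

import numpy as np

def cohens_d(mean1, mean0, sd1, sd0, n1, n0):
    """Standardised mean difference (Cohen's d) using the pooled SD."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n0 - 1) * sd0**2) / (n1 + n0 - 2))
    d = (mean1 - mean0) / sd_pooled
    # Large-sample approximation to the sampling variance of d
    var_d = (n1 + n0) / (n1 * n0) + d**2 / (2 * (n1 + n0))
    return d, var_d

def random_effects_pool(effects, variances):
    """Random-effects meta-analysis with the DerSimonian-Laird tau^2 estimator."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                              # inverse-variance (fixed-effect) weights
    theta_fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fixed) ** 2)     # Cochran's Q statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance, truncated at 0
    w_re = 1.0 / (variances + tau2)                  # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical summary statistics for one study's primary outcome
d1, v1 = cohens_d(mean1=9.1, mean0=11.3, sd1=5.0, sd0=5.2, n1=60, n0=62)

# Pool with two more hypothetical study estimates (effect, variance)
pooled, lo, hi = random_effects_pool([d1, -0.10, -0.30], [v1, 0.015, 0.040])
print(f"pooled SMD = {pooled:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")

Running the same pooling twice, once on the engagement-adjusted estimates and once on the unadjusted ones, reproduces the kind of side-by-side comparison the review reports for PP and CACE estimates.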

From a total of 6,042 articles identified, 184 were eligible and included in this review (see Fig. 1), randomising 43,529 participants. The most evaluated outcome domain was Depression, 74 (40.2%) articles, followed by Anxiety, 29 (15.8%) articles, and PTSD, 12 (6.5%) articles; see supplementary table S3 for the full list. At least 123 unique interventions were assessed; however, some interventions (n = 39) were described only in general terms, such as internet-delivered cognitive behaviour therapy for depression, so could not be distinguished as separate interventions and are excluded from the count. On average, 30.7 (SD 7.7) articles were published each year; a more detailed breakdown by outcome domain is in supplementary figures S1 and S2.

Figure 1. PRISMA flowchart for studies included in the systematic review

Extracted CONSORT items assessed trial reporting quality: 51 articles (27.7%) did not report their planned sample size and 36 articles (19.7%) did not clearly reference a trial protocol or trial registration number. Of the 133 articles that reported both the planned and actual sample size, 43 (32.3%) failed to recruit to their target. The planned analysis approach was reportedly changed in 3 (1.6%) articles, one due to changes in the intervention [ 35 ] and the others due to high attrition [ 36 , 37 ].

Most articles used “traditional” trial designs, with 170 (92.4%) opting for a parallel arm design, and the majority assessed only one new intervention (n = 134, 78.8%). Four articles (2.2%) used a factorial design, allowing the simultaneous evaluation of multiple treatments and providing statistical efficiency by reducing the number of participants required in the trial. Two articles (1.1%) in the Body Dysmorphic Disorder outcome domain reported using a crossover design. However, the first had no wash-out period; instead, those in the intervention arm were asked to stop engaging with the app after 16 days [ 38 ]. The second actually used a parallel arm design, where the control group received the intervention after 3 weeks [ 39 ]. The median delivery period for DMHIs was 56 days (IQR 42–84) post-randomisation, and the median total follow-up time for primary outcome collection was 183 days (IQR 84–365) post-randomisation.

Participants’ average age was 34.1 years (SD 11.1), and most participants were female (70.7%), see Table 1. Ethnicity data could not be extracted in 133 (72.3%) articles. Most trials required a confirmed diagnosis of a CMD, such as through a structured interview, for inclusion (n = 110, 59.8%). Symptom severity could not be extracted in 97 (52.7%) trials, but where available the most common severity (49 trials, 56.3%) was a combination of both mild and moderate. Only 12 (6.5%) articles assessed participants with severe symptomatology: depression (n = 7, 58.3%), anxiety (n = 1, 8.3%), psychological distress (n = 1, 8.3%), general fatigue (n = 1, 8.3%), post-traumatic stress disorder (n = 1, 8.3%) and psychosis (n = 1, 8.3%).

Most interventions were delivered through a website, 145 (78.8%), see Table 2. There were 76 (41.3%) trials that adapted interventions from existing in-person therapist-led interventions, and 84 (45.7%) interventions were newly developed. App-delivered interventions were more likely to be newly developed, 23 (71.9%), compared with website interventions, 57 (39.3%). The most common choice of control arm was usual care, 126 (68.5%). For articles with usual care as the control, most opted to use wait-lists, 94 (74.6%), where intervention access was provided either immediately after the intervention period, 62/94 (66.0%), or after the total follow-up period, 32/94 (34.0%).

Most articles, 136 (73.9%), reported using at least one approach to encourage participants to engage with the intervention. Methods of encouragement were automatic notifications, n = 49/136 (32.5%), contacting participants by telephone or email, n = 68/136 (45.0%), or automated feedback on homework exercises, n = 76/136 (50.3%). Most used only one method of encouragement, n = 85 (62.5%), with 6 (4.4%) articles using all three methods. Although many articles encouraged engagement, only 23.9% (n = 44) provided a recommended level of engagement to participants. Recommendations included a rate of progression through content (e.g., one module per week or a maximum of two modules per week), a specified duration of use (e.g., 1.5 h per week or 4 to 6 h per week) or milestones to complete (e.g., complete one lesson every 1–2 weeks or complete daily homework assignments); a full list is in table S5 of the supplementary material.

User engagement data captured through indicators were reported in many articles, 76.1% (n = 140), Fig. 2. Typically, this involved reporting only one indicator (n = 41, 29.3%), ranging up to eight indicators for one (0.7%) trial [ 40 ]. Across the 140 studies reporting user engagement data, indicators most commonly described the frequency of use, 150 (40.7%), followed by indicators capturing milestones achieved, 124 (33.6%); further detail is in table S4 of the supplementary material. A total of 150 unique indicators were reported across the 140 articles; the most popular measure was modules completed, 51.3% (n = 77), followed by the number of logins, 25.3% (n = 38). In website-only delivered interventions there were 102 unique indicators, compared with 41 unique indicators in app-based interventions and 7 unique indicators in interventions delivered as both an app and website.

Figure 2. Proportion of trials describing user engagement in the methods section (A) or the results section (B)

(A) How user engagement was reported in the methods section: Recommended – the participant was told how to use the intervention by the study team; Encouraged – reminders (e.g., notifications or emails) were sent to the participant; Active User – participants meeting a pre-specified engagement level set by the study team

(B) How user engagement data was reported in the results section: Reported – results describe activity for at least one engagement indicator; Analysis – results report an intervention effect where user engagement has been considered

Active user definitions, the engagement level of most interest to trial teams, were stated in the methods sections of 20.1% (n = 37) of articles. Digital components of active user definitions included setting a minimum number of modules completed (e.g., 4 out of 5 modules), a proportion of content accessed (e.g., at least 25% of pages viewed) or the total time of access (e.g., used the app for 30 min per week); a full list of active user definitions is in table S6 of the supplementary material. Of the 37 articles reporting active user definitions, 27 (14.7% of all articles) described statistical methods to perform an analysis accounting for user engagement, but only 19 (10.3%) reported intervention effect estimates.
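As a concrete illustration of how such a definition becomes an analysis variable, a hypothetical Stata sketch is below; the threshold mirrors the 4-out-of-5-modules example above, and the variable names are invented.

    * Flag participants meeting a hypothetical active user definition
    * (at least 4 of 5 modules completed); the if condition guards against
    * Stata treating missing values as large numbers
    generate active_user = (modules_completed >= 4) if !missing(modules_completed)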

All articles reporting effects from an analysis accounting for user engagement also reported effects not accounting for engagement, so were included in a meta-analysis (Table 3). All articles used a treatment policy estimand (including all participants randomised, regardless of the level of user engagement) for their primary outcome, where user engagement was not accounted for. In articles reporting an analysis accounting for user engagement, all outcome domains reported an increase in the overall effect size favouring the intervention compared with estimates from the analysis not accounting for user engagement. The largest increase in intervention efficacy was in the distress domain (n = 1), where the standardised mean effect size increased from −0.61 (95% CI −0.86 to −0.36) to −0.88 (95% CI −1.17 to −0.59).

The results comparing changes in the intervention effect by the analysis approach used (recommended versus per-protocol) are in Table 4. Of the 19 articles included in the analysis, 17 (89.5%) used a conventional per-protocol approach (i.e., excluding the data from those who did not comply) for the analysis accounting for user engagement [ 41 ]. As a consequence, the average sample size decreased to 76.9% (IQR 67.7–87.6%) of the original size; in the active arm, the average size decreased to 61.8% (IQR 38.1–75.4%). The overall standardised intervention effect increased from −0.14 (95% CI −0.24 to −0.03, n = 17), p = .01, to −0.18 (95% CI −0.32 to −0.04, n = 17), p = .01, but was also less precise. Two trials used a Complier Average Causal Effect (CACE) analysis [ 42 ], a recommended approach where its assumptions hold, with all randomised participants included in the analysis. The overall standardised intervention effect also increased in this meta-analysis, with an overall change from −0.16 (95% CI −0.38 to 0.06, n = 2), p = .16, to −0.19 (95% CI −0.42 to 0.03, n = 2), p = .09, with no decrease in sample size and slightly less impact on the precision of the estimate.

This systematic review found that in trials of DMHIs for CMDs, promisingly, many articles reported user engagement as summaries of automatically captured indicators, but the reported intervention effect rarely accounted for this. Overall, trials were not well reported: almost 20% did not reference a trial protocol and only 27% of articles had available data on ethnicity. The James Lind Alliance (JLA) patient priority group set user engagement as a research priority in 2018, and this review, covering publications between 2016 and 2021, supports the evidence that engagement data have been poorly utilised: only 10% (n = 19) of articles had available estimates to evaluate the impact of user engagement on intervention efficacy. Many (> 70%) articles reported summarised engagement data, highlighting plenty of opportunities to better utilise these data and understand the relationship between user engagement and efficacy, a question of particular interest to individuals using DMHIs who want to know the true intervention efficacy.

Many articles reported at least one method used to encourage participants to engage with the intervention; however, very few articles specified what the recommended level of engagement should be for individuals. Additionally, only a small proportion of trials assessed the impact of user engagement on intervention efficacy through active user definitions, and these were broad ranging and used a variety of different engagement indicators. This highlights the complex and challenging task of properly assessing user engagement, for which there is currently little guidance available. It also shows how difficult it is for researchers to identify what the minimum required engagement with the intervention (the active user definition) should be, due to the heterogeneity in both the individuals being treated and how the intervention is delivered (e.g., timeliness and access to other support).

Most articles performing an analysis that accounted for engagement used a conventional per-protocol population. Although the per-protocol population can be unbiased under the strong assumption that user engagement is independent of treatment allocation [ 43 ], use of this population typically causes bias in the estimated intervention effect [ 44 ] and the underlying estimand cannot be determined, i.e., it is unclear precisely what is being estimated. User engagement is a post-randomisation variable, and the estimand framework [ 22 ] suggests more appropriate strategies for handling post-randomisation events: for example, a complier average causal effect analysis [ 42 ] under the principal stratification strategy, estimated using instrumental variable regression [ 45 ] with randomised treatment allocation as the instrumental variable. Alternative statistical methods can also be used to implement the estimand framework [ 46 ], but due to the large variation in reported engagement indicators, and therefore difficulties in how engagement as a post-randomisation variable should be defined, comparisons between trials remain challenging.
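A minimal Stata sketch of the instrumental variable approach just described is below, assuming hypothetical variables: outcome (continuous endpoint), engaged (1 if the active user definition was met) and treat (randomised allocation). It illustrates the general technique only, not any included trial's analysis.

    * CACE via two-stage least squares: randomised allocation instruments
    * for observed engagement with the intervention
    ivregress 2sls outcome (engaged = treat), vce(robust)

The coefficient on engaged estimates the intervention effect among compliers while retaining all randomised participants, under the usual instrumental variable assumptions (e.g., that randomisation affects the outcome only through engagement).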

Better methods for defining user groups based on all available engagement measures, for example by using clustering algorithms that combine them, are needed. Secondly, once groups are defined, the existing statistical methods available to implement the estimand framework need to be assessed to determine the optimal approach for analysing the impact of engagement on the efficacy analysis. This is now the focus of our future work.
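To make the clustering idea concrete, a hypothetical Stata sketch is below; the engagement variables and the choice of three groups are assumptions for illustration, not a recommendation from the review.

    * Standardise engagement indicators so no single scale dominates
    foreach v of varlist logins modules_completed session_minutes {
        egen z_`v' = std(`v')
    }
    * k-means clustering into three engagement-based user groups
    cluster kmeans z_logins z_modules_completed z_session_minutes, k(3) name(engage_grp)
    tabulate engage_grp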

Future implications

The JLA priority setting partnership occurred in 2018, meaning this review of publications between 2016 and 2021 includes very few trials recruiting after 2018. Therefore, implementation of the JLA priorities cannot be assessed. However, this review has shown that user engagement data were available, indicating potential for more trials to explore engagement in efficacy analyses. An update of this systematic review should be performed for the next 5 years (2021–2026) to assess whether the issues around user engagement identified in this review have improved. More trials exploring engagement in efficacy analyses will mean the pathway of sustained behaviour change through engagement with DMHIs is better understood. Additionally, reporting of user engagement varied greatly, and although the CONSORT extension for e-health [ 47 ] outlines some detail on engagement reporting, more directed guidance is needed. Improvements should include reporting what and how many indicators were available and better guidance on how indicator data should be summarised. Trial publications also varied greatly in the quality of reported results, particularly for key demographic information such as ethnicity. CONSORT trial reporting guidance has existed since 1996, and more journals should enforce its implementation to ensure robust reporting of trials.

Finally, where data were available, participants were mostly female, of white ethnicity and young, demographics consistent with another systematic review of DMHI trials [ 48 ] and with the most recent (2014) Adult Psychiatric Morbidity Survey (APMS) regarding who is most likely to receive treatment [ 49 ]. However, the APMS 2014 also shows that individuals from black or mixed ethnicities are more likely to experience a CMD than those from white ethnicities. This supports other literature [ 50 , 51 ] and highlights the difference between those recruited into trials and those who experience a CMD but are not represented in DMHI efficacy estimates.

Strengths and limitations of the review

This systematic review assessed a wide range of outcome domains, providing an overview of all current DMHIs evaluated, from CMDs with substantial active research, such as anxiety and depression, through to CMDs with very few published results. Additionally, this review collected detailed information on engagement indicators, how they were reported and how they were utilised in the analysis of the intervention effect, providing a rich database of the typical indicators available across a wide range of DMHIs.

As the focus of this review was to assess user engagement, the review does not analyse temporal differences in when primary outcome data for the intervention effect were collected. This means the review ignores the possibility that differences in intervention effects across articles could partly be due to differences in when they were collected, assuming the intervention effect changes over time. However, comparisons of adjusted and unadjusted intervention effects were measured at the same timepoints within each article. Additionally, as very few studies reported an analysis adjusted for user engagement, there was limited data to assess the impact of user engagement on intervention efficacy in most outcome domains. Further, as most studies assessing engagement used a similar approach (the per-protocol population), a formal comparison of methods was not possible. Finally, as this review focused only on appraising how engagement was reported and the statistical methods used to analyse engagement, we do not consider the impact that loss to follow-up has on the efficacy of interventions, but must acknowledge that DMHIs typically have high drop-out rates, with very low proportions of individuals completing the intervention [ 52 ].

This review assessed the reporting of user engagement and how authors considered engagement in the efficacy analysis of digital mental health interventions. While many articles reported at least one measure of engagement, very few used these data to analyse how engagement affects intervention efficacy, making it difficult to draw conclusions on the impact of engagement. In the small proportion of articles that reported this analysis, nearly all used statistical methods at high risk of bias. There is a clear need to improve the methods used to define active users by using all available engagement measures. This will help ensure a more consistent approach to how user engagement, as a post-randomisation variable, is defined. Once these methods are established, trialists can utilise existing statistical methods to target alternative estimands, such as those under the principal stratification strategy, so that the impact of user engagement on intervention efficacy can be explored.

Data availability

The study protocol is available on PROSPERO (CRD42021249503). The datasets used are available from the corresponding author on reasonable request after the NIHR fellowship from which this project arises is completed (April 2025). Any researchers interested in using the extracted data can contact the lead author using the shared correspondence information.

NHS England. The Five Year Forward View for Mental Health. NHS England; 2016.

Henderson C, Evans-Lacko S, Thornicroft G. Mental Illness Stigma, help seeking, and Public Health Programs. Am J Public Health. 2013;103:777–80.

Muñoz RF, et al. Massive Open Online interventions: a Novel Model for delivering behavioral-health services Worldwide. Clin Psychol Sci. 2015;4:194–205.

Ferwerda M, et al. What patients think about E-health: patients’ perspective on internet-based cognitive behavioral treatment for patients with rheumatoid arthritis and psoriasis. 2013.

Koh J, Tng G, Hartanto A. Potential and pitfalls of mobile mental health apps in traditional treatment: an umbrella review. J Personalized Med. 2022;12:1376. https://doi.org/10.3390/jpm12091376

Torous J, et al. Towards a consensus around standards for smartphone apps and digital mental health. World Psychiatry. 2019;18:97–8.

Future Care Capital. NICE and MHRA will review regulation of digital mental health tools. Future Care Capital website; 2022.

Torous J, Haim A. Dichotomies in the Development and Implementation of Digital Mental Health Tools. 2018.

NICE. NICE Evidence Standards for Digital Health. NICE, https://www.nice.org.uk/ , 2019.

Koneska E, Appelbe D, Williamson P, Dodd S. Usage metrics of web-based interventions evaluated in randomized controlled trials: systematic review. 2020.

Patel S, et al. The acceptability and usability of digital health interventions for adults with depression, anxiety, and somatoform disorders: qualitative systematic review and meta-synthesis. 2020.

Hollis C, et al. Identifying research priorities for digital technology in mental health care: results of the James Lind Alliance Priority setting Partnership. Lancet Psychiatry. 2018;5:845–54.

Torous JB, et al. A hierarchical Framework for evaluation and informed decision making regarding smartphone apps for Clinical Care. Technol Mental Health. 2018;69:498–500.

Donker T, et al. Smartphones for smarter delivery of mental health programs: a systematic review. 2013.

Torous J, Nicholas J, Larsen ME, Firth J, Christensen H. Clinical review of user engagement with mental health smartphone apps: evidence, theory and improvements. Evid Based Mental Health. 2018;21:116.

Doherty K, Doherty G. Engagement in HCI: Conception, Theory and Measurement. ACM Comput Surv. 2018;51:99.

Lipschitz J, et al. Adoption of mobile apps for depression and Anxiety: cross-sectional survey study on patient interest and barriers to Engagement. JMIR Ment Health. 2019;6:e11334.

Michie S, Yardley L, West R, Patrick K, Greaves F. Developing and evaluating digital interventions to promote behavior change in health and health care: recommendations resulting from an international workshop. J Med Internet Res. 2017;19.

Haine-Schlagel R, Walsh NE. A review of parent participation engagement in child and family mental health treatment. Clin Child Fam Psychol Rev 2015.

Saleem M, et al. Understanding engagement strategies in digital interventions for mental health promotion: scoping review. JMIR Mental Health. 2021;8.

Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. 2017;7:254–67.

Committee for Medicinal Products for Human Use. ICH E9 (R1) addendum on estimands and sensitivity analysis in clinical trials to the guideline on statistical principles for clinical trials. European Medicines Agency; 2020.

Eysenbach G. The law of attrition. J Med Internet Res. 2005;7.

Elkes J. A systematic review to evaluate how user engagement is described and analysed in randomised controlled trials for digital mental health interventions. PROSPERO Int Prospective Register Syst Reviews. 2021.

Eldridge SM, et al. Defining feasibility and Pilot studies in Preparation for Randomised controlled trials: development of a conceptual Framework. PLoS ONE. 2016;11:e0150205.

Glanville J, et al. Translating the Cochrane EMBASE RCT filter from the Ovid interface to Embase.com: a case study. Health Inform Libr J. 2019;36:264–77.

Cochrane. Glossary of Cochrane Common Mental Disorders: a glossary of the definitions of common mental disorders. 2021.

World Health Organization. Classification of digital health interventions v1.0: a shared language to describe the uses of digital technology for health. Geneva: World Health Organization; 2018.

Ayiku L, et al. The NICE MEDLINE and Embase (Ovid) health apps search filters: development of validated filters to retrieve evidence about health apps. Int J Technol Assess Health Care. 2021;37:e16.

Veritas Health Innovation. Covidence systematic review software. Melbourne, Australia. www.covidence.org . 2022.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evid Based Med. 2017;22:139.

Schulz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.

Cochrane. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane. 2023.

Hedges LV, Olkin I. Statistical methods for meta-analysis. 2014.

Fitzsimmons-Craft EE, et al. Effectiveness of a digital cognitive behavior therapy-guided self-help intervention for eating disorders in college women: a cluster randomized clinical trial. 2020.

Salamanca-Sanabria A, et al. A culturally adapted cognitive behavioral internet-delivered intervention for depressive symptoms: randomized controlled trial. 2020.

Richards D, et al. Effectiveness of an internet-delivered intervention for generalized anxiety disorder in routine care: A randomised controlled trial in a student population. 2016.

Cerea S, et al. Cognitive behavioral training using a Mobile Application reduces body image-related symptoms in high-risk Female University students: a randomized controlled study. Behav Ther. 2021;52:170–82.

Glashouwer KA, Neimeijer RAM, de Koning ML, Vestjens M, Martijn C. Evaluative conditioning as a body image intervention for adolescents with eating disorders. 2018.

Milgrom J, et al. Internet cognitive behavioral therapy for women with postnatal depression: a randomized controlled trial of MumMoodBooster. 2016.

Hernán MA, Hernández-Díaz S. Beyond the intention-to-treat in comparative effectiveness research. Clin Trials. 2011;9:48–55.

Dunn G, Maracy M, Tomenson B. Estimating treatment effects from randomized clinical trials with noncompliance and loss to follow-up: the role of instrumental variable methods. 2005.

Kahan BC, White IR, Edwards M, Harhay MO. Using modified intention-to-treat as a principal stratum estimator for failure to initiate treatment. Clin Trials. 2023;20:269–75.

Ranganathan P, Pramesh CS, Aggarwal R. Common pitfalls in statistical analysis: Intention-to-treat versus per-protocol analysis. 2016.

Lipkovich I, et al. Using principal stratification in analysis of clinical trials. Stat Med. 2022;41:3837–77.

Parra CO, Daniel RM, Bartlett JW. Hypothetical estimands in clinical trials: a unification of causal inference and missing data methods. arXiv - Stat Methodol. 2021.

Eysenbach G, Group C-E. CONSORT-EHEALTH: improving and standardizing evaluation reports of web-based and mobile health interventions. J Med Internet Res. 2011;13:e126.

Sin J, et al. Digital Interventions for Screening and Treating Common Mental disorders or symptoms of Common Mental illness in adults: systematic review and Meta-analysis. J Med Internet Res. 2020;22:e20581.

McManus S, Bebbington P, Jenkins R, Brugha T, editors. Mental health and wellbeing in England: Adult Psychiatric Morbidity Survey 2014. Leeds: NHS Digital; 2016.

Iflaifel M, et al. Widening participation - recruitment methods in mental health randomised controlled trials: a qualitative study. 2023.

Coss NA, et al. Does clinical research account for diversity in deploying digital health technologies? Npj Digit Med. 2023;6:187.

Karyotaki E, et al. Predictors of treatment dropout in self-guided web-based interventions for depression: an ‘individual patient data’ meta-analysis. Psychol Med. 2015;45:2717–26.

Acknowledgements

This work was funded by the NIHR Doctoral Fellowship (NIHR301810). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. The funder had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The TMRP Health Informatics working group, which the lead author is a member of, was essential in finding members (SOC, LY, LB) to join this project and support the work.

Author information

Jacqueline Sin and Victoria Cornelius contributed equally to this work.

Authors and Affiliations

Imperial Clinical Trials Unit, Imperial College London, White City Campus, Stadium House, 68 Wood Lane, London, W12 7RH, UK

Jack Elkes, Suzie Cro & Victoria Cornelius

University of Oxford, Oxford, UK

Rachel Batchelor, Ly-Mee Yu & Victoria Harris

Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, King’s College London, London, UK

Siobhan O’Connor

Leeds Institute of Clinical Trials Research, University of Leeds, Leeds, LS2 9JT, UK

Lauren Bell

City St George’s, University of London, London, UK

Jacqueline Sin

Contributions

Conceptualization, JE, VC, SC and JS.; Methodology, JE, VC, SC and JS.; Software, JE.; Validation, JE, VC, SC, JS, SO, LB, RB, LY, and VH.; Formal Analysis, JE.; Investigation, JE, VC, SC, JS, SO, LB, RB, LY, and VH.; Resources, JE, VC, SC, and JS.; Data Curation, JE.; Writing – Original Draft, JE, VC, SC, and JS.; Writing – Reviewing & Editing, JE, VC, SC, JS, SO, LB, RB, LY, and VH.; Visualisation, JE, VC, SC, and JS.; Supervision, VC, SC, and JS.; Project Administration, JE, VC, and SC.; Funding Acquisition, JE, VC, SC, and JS.

Corresponding author

Correspondence to Jack Elkes .

Ethics declarations

Ethics approval and consent to participate

Not applicable as all data was publicly available.

Consent for publication

Not applicable as no participants were recruited for this research.

Competing interests

JE was recently a collaborator on an NIHR HTA grant (NIHR132896) for the long-term effectiveness of a video feedback intervention for parents. JE is also on the trial steering committee for a trial (NIHR302349) that is part of an NIHR Doctoral Fellowship called Restore-B. JE is also on the programme steering committee (NIHR204413) for a trial called ATTEND. SC was previously awarded funding for an NIHR advanced fellowship (NIHR300593) between September 2020 and December 2023. VC was also involved in the NIHR HTA (NIHR132896) funded trial of long-term follow-up of the video feedback intervention for parents. VC is also on the trial steering committee for a problem-solving intervention for adults with dementia and depression, a steering committee member for a trial called ADVANCE and the chair of an NIHR HTA funded data monitoring committee (NIHR132808) called BAY. No other competing interests are reported for the remaining authors (RB, SOC, LMY, LB, VH and JS).

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Elkes, J., Cro, S., Batchelor, R. et al. User engagement in clinical trials of digital mental health interventions: a systematic review. BMC Med Res Methodol 24 , 184 (2024). https://doi.org/10.1186/s12874-024-02308-0

Received : 02 May 2024

Accepted : 14 August 2024

Published : 24 August 2024

DOI : https://doi.org/10.1186/s12874-024-02308-0

  • Randomised controlled trials
  • Digital mental health interventions
  • Mental health
  • User engagement
  • Digital health
  • Systematic review
  • Meta-analysis

Defining and identifying the critical elements of operational readiness for public health emergency events: a rapid scoping review

  • René English 1 ,
  • Heather Carlson 2 ,
  • Heike Geduld 3 ,
  • Juliet Charity Yauka Nyasulu 1 ,
  • Quinette Louw 4 ,
  • Karina Berner 4 ,
  • Maria Yvonne Charumbira 4 ,
  • Michele Pappin 1 ,
  • Michael McCaul 5 ,
  • Conran Joseph 4 ,
  • Nina Gobat 2 ,
  • Linda Lucy Boulanger 2 ,
  • Nedret Emiroglu 2
  • 1 Division of Health Systems and Public Health, Department of Global Health , Stellenbosch University Faculty of Medicine and Health Sciences , Cape Town , South Africa
  • 2 Country Readiness and Strengthening Department, World Health Emergencies Programme , World Health Organization , Geneva , Switzerland
  • 3 Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences , Stellenbosch University Division of Emergency Medicine , Stellenbosch , South Africa
  • 4 Division of Physiotherapy, Department of Health and Rehabilitation Sciences , Stellenbosch University Faculty of Medicine and Health Sciences , Cape Town , South Africa
  • 5 Centre for Evidence-based Health Care, Division of Epidemiology and Biostatistics, Department of Global Health , Stellenbosch University , Cape town , South Africa
  • Correspondence to Professor René English; renglish{at}sun.ac.za

Introduction COVID-19 showed that countries must strengthen their operational readiness (OPR) capabilities to respond to an imminent pandemic threat rapidly and proactively. We conducted a rapid scoping evidence review to understand the definition and critical elements of OPR against five core sub-systems of a new framework to strengthen the global architecture for Health Emergency Preparedness Response and Resilience (HEPR).

Methods We searched MEDLINE, Embase, and Web of Science, targeted repositories, websites, and grey literature databases for publications between 1 January 2010 and 29 September 2021 in English, German, French or Afrikaans. Included sources were of any study design, reporting OPR, defined as immediate actions taken in the presence of an imminent threat, from groups who led or responded to a specified health emergency. We used prespecified and tested methods to screen and select sources, extract data, assess credibility and analyse results against the HEPR framework.

Results Of 7005 sources reviewed, 79 met the eligibility criteria, including 54 peer-reviewed publications. The majority were descriptive reports (28%) and qualitative analyses (30%) from early stages of the COVID-19 pandemic. Definitions of OPR varied while nine articles explicitly used the term ‘readiness’, others classified OPR as part of preparedness or response. Applying our working OPR definition across all sources, we identified OPR actions within all five HEPR subsystems. These included resource prepositioning for early detection, data sharing, tailored communication and interventions, augmented staffing, timely supply procurement, availability and strategic dissemination of medical countermeasures, leadership, comprehensive risk assessment and resource allocation supported by relevant legislation. We identified gaps related to OPR for research and technology-enabled manufacturing platforms.

Conclusions OPR is in an early stage of adoption. Establishing a consistent and explicit framework for OPRs within the context of existing global legal and policy frameworks can foster coherence and guide evidence-based policy and practice improvements in health emergency management.

  • Public Health

Data availability statement

Data are available on reasonable request. The rapid scoping review protocol can be publicly accessed on the Open Science Framework (OSF) platform ( https://osf.io/39q4b/ ). The datasets used and/or analysed during the scoping review are available from the corresponding author on reasonable request.

This is an open access article distributed under the terms of the Creative Commons Attribution IGO License ( CC BY 3.0 IGO ), which permits use, distribution, and reproduction in any medium, provided the original work is properly cited. In any reproduction of this article there should not be any suggestion that WHO or this article endorse any specific organization or products. The use of the WHO logo is not permitted. This notice should be preserved along with the article’s original URL.

https://doi.org/10.1136/bmjgh-2023-014379

WHAT IS ALREADY KNOWN ON THIS TOPIC

Operational readiness (OPR) has emerged as a crucial but relatively unexplored concept in the context of health emergencies.

WHAT THIS STUDY ADDS

OPR is in an early stage of adoption with variable understandings of what it entails. This study highlights a need for conceptual clarity and consistency in describing OPR to build a coherent body of evidence that can underpin policy and practice. Key OPR actions aligned with five core subsystems of Health Emergency Preparedness Response and Resilience (a global, integrated framework for health emergency management) are identified.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

Instruments to evaluate country-level preparedness under the International Health Regulations require evidence of readiness planning. The most recent global policy framework to strengthen the global architecture for health emergencies also signposts the critical role of readiness. This scoping review has provided a foundation for global expert deliberations and agreement on OPR, which is an important step forward towards a coherent body of evidence and to advance policy and practice for improved health emergency management.

Introduction

A key lesson learnt from the global and national response to COVID-19 is the critical importance of early action. COVID-19 caught many countries off guard, and the consequences of delayed responses were severe in terms of public health as well as socioeconomic impacts. To prevent and mitigate the impact of future events, countries must strengthen their capabilities for rapid mobilisation to proactively respond in anticipation of an imminent threat. To this end, operational readiness (OPR) has emerged as an important part of efforts to strengthen the global architecture for health emergency preparedness, response and resilience (HEPR). 1 HEPR, WHO’s new strategic framework, is intended to guide, inform and resource collective efforts to strengthen the key interlinked national, regional and global multisectoral capacities sitting at the intersection of health security, primary healthcare and health promotion.

In the context of the health emergency cycle, OPR arises at the intersection between preparedness planning and response. 2 By promptly mobilising specific resources and strategies in the face of a high-priority and imminent threat, countries can enhance their ability to respond swiftly and efficiently through the strategic deployment of well-defined capabilities, plans and actions tailored to the specific threat. The importance of this neglected phase in the health emergency cycle has catalysed related global policy initiatives. Instruments to evaluate country preparedness for emergency response under the International Health Regulations (IHR) require evaluation of country-level OPR planning, as seen in the Joint External Evaluation (JEE) 3.0’s Health Emergency Management Capacity, which targets risk-based plans for readiness and the existence of an emergency readiness assessment. 3 The WHO’s proposals for a strengthened HEPR architecture across the core domains of governance, finance and systems require OPR and capacities in five core subsystems: Collaborative Surveillance; Community Protection; Safe and Scalable Care; Access to Countermeasures; and Emergency Coordination, along with OPR plans in Emergency Coordination. 1 4 Currently, there is no WHO guidance on standardised emergency readiness assessments and readiness planning. To achieve the promise of strengthened OPR policy and practice, closer specification is needed of what OPR involves and how it works, and of the methodologies and approaches used to implement and operationalise it.

To underpin WHO technical products for OPR, we conducted a rapid scoping evidence review to examine the definitions and critical elements of OPR for public health emergencies caused by new or re-emerging infectious diseases and other public health threats in the context of the latest global policy frameworks for health emergency management. 5 This review is important given the absence of a standardised checklist of ‘must haves’ to inform the development of a country contingency plan in the face of an emergency.

Objectives of our review were (a) to identify how OPR has been conceptualised and defined; and (b) to elicit critical elements of OPR in the context of key global policy frameworks, such as the WHO Global Health Security Framework, HEPR and JEE 3.0. 3 4 6 Anticipating a large and diverse body of evidence, and given the need for a rapid output from this work, we conducted a rapid scoping review following well-recognised methods. 7–9 We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews 10 checklist for reporting. Our study protocol is published 5 and registered (doi:10.17605/OSF.IO/6SYAH).

Eligibility criteria

We included articles that:

Reported on OPR, defined for this review as those immediate action(s) required to preposition response actions to acute, proximal or imminent hazards and/or threats (eg, an infectious disease outbreak or a natural disaster threat), that is, an all-hazards approach 5 in the context of health emergencies, that is, disasters and major incidents (natural and otherwise), including emerging and re-emerging infectious disease threats with the potential to significantly impact a population’s health; and described actions of emergency response groups or organisations at national, regional or global levels.

Types: English, German, French or Afrikaans language peer-reviewed original articles or reviews published between 1 January 2010 and 29 September 2021, publicly available policy frameworks and programme reports, published conference reports or electronic theses, relevant grey literature and documents for which full texts or abstracts were available.

We excluded articles that:

Focused exclusively on longer-term preparedness actions (ie, an imminent threat was not explicitly defined) or response actions (ie, actions to respond to an active public health emergency), reported on contexts beyond health emergencies or did not focus on disease prevention and control.

Search strategy

We developed and ran a search structured by population (health systems/community), concept (readiness/preparedness/risk/planning) and context (emergencies/diseases/natural disasters) in the MEDLINE, Embase and Web of Science databases (see online supplemental table S1 for the detailed search strategies). We searched various targeted repositories, websites and databases for grey literature 11 (see online supplemental box S1 ). We also used forward and backward citation tracking.
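As an illustration only, a hypothetical Ovid MEDLINE-style fragment of this population–concept–context structure might look like the lines below; the strategies actually used are those in online supplemental table S1.

    1. (health adj2 (system* or communit*)).ti,ab.
    2. (readiness or preparedness or "risk assessment" or planning).ti,ab.
    3. (emergenc* or outbreak* or epidemic* or pandemic* or disaster*).ti,ab.
    4. 1 and 2 and 3
    5. limit 4 to yr="2010 - 2021"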

Selection of sources

Search outcomes were imported into Rayyan V.0.1.0 software (Rayyan Systems, Massachusetts, USA) for screening, checking of duplicates and final selection. 8 12 Our approach to citation screening aimed to balance rigour and speed, consistent with rapid reviews and adapted from the Cochrane Rapid Reviews Methods Group’s guidance for rapid reviews, 8 including guidance on addressing the methodological challenges faced during COVID-19 rapid reviews. 9

Screening occurred at three levels (title, abstract and full report). The review team agreed screening decisions upfront and, after piloting for consistency, agreed guidelines. 8 13 For piloting, two reviewers (MP and MYC) independently and in duplicate screened 100 titles and abstracts, followed by discussion with three senior authors (RE, QL and MM) to refine screening decisions. Category coding by study design and keywords for excluded articles at the title and abstract level was agreed and set in Rayyan.

After this, one reviewer (MP) screened 20% of the initially identified titles and screened abstracts to remove irrelevant reports. A second reviewer (MYC) verified excluded titles and abstracts. 8 Conflicts and uncertainties were resolved by discussion with senior authors (RE, HG or QL). To ensure that all texts could be assessed in detail against the eligibility criteria within the limited time frame of the rapid review, 9 full-text screening was independently conducted by eight reviewers (MYC, MP, KB, JCYN, CJ, QL, RE and HG) with the yield divided among them. Discrepancies were resolved through discussion.

Selection of grey literature

Grey literature search outputs were screened at two levels (title and body of the report) and recorded by one reviewer (MP). A second content expert (HG) verified the included sample. 8 9

Data extraction and management

Two reviewers (MYC and QL) extracted data from journal articles, and one (MP) from grey literature; an additional reviewer (RE) checked for accuracy in both instances. 14 Data were deductively coded in ATLAS.ti V.9 (Scientific Software Development) ( https://atlasti.com/ ) and extracted into a custom-built, pilot-tested MS Excel spreadsheet according to preset criteria. The data extraction form was revised after pilot testing and consultation with WHO and amended 14 to reflect the study authors’ affiliations and the WHO region in which the study was conducted. Uncertainties were discussed by the full review team. It was not necessary to contact the study authors.

Credibility of evidence in the included articles was assessed based on the information source and type. 8–10 Two reviewers (MP and MYC) appraised the included sources for descriptive purposes and incorporated the results narratively in the reflective summaries of the charting findings.

Data analysis and presentation

To analyse data, we (QL, RE, HG, CJ and JCYN) used qualitative thematic analysis with deductive synthesis, 15–17 against the following preidentified thematic categories: leadership, governance and coordination; country risk assessment; operational planning and coordination; contingency finance; health facility capacity and service delivery; health workforce/human resources; early warning or surveillance and health information systems; community resilience and risk communications; logistics or supply chain for access to essential medicines; WHO readiness and partner readiness. New themes were also identified. A revision of this analysis (HC and NG) used the new HEPR global architecture as an organising frame. 4

Patient and public involvement

As this study presents a scoping review of already published literature, patient and public involvement was not applicable.

Of 7005 citations identified in the database (n=6827) and grey literature (n=178) searches, we included 79 (54 peer-reviewed publications; 25 grey literature) ( figure 1 ). The study characteristics are highlighted in online supplemental tables S2A and S2B.

Figure 1. PRISMA flow diagram. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews.

Online supplemental table S2A characteristics of peer-reviewed studies on the definitions of OPR according to emergency type.

Online supplemental table S2B characteristics of grey literature publications on the definitions of OPR according to emergency type.

Definitions of OPR

Descriptions of OPR lacked clarity and consistency in definition and use. Nine primary research papers and one grey literature document provided explicit definitions of ‘readiness’ and/or ‘preparedness’ for infectious disease emergencies ( online supplemental table S3 ). 18–27 Of these, three 18 21 24 explicitly defined ‘readiness’, while the others used the term ‘preparedness’ in a way that was congruent with our working definition of OPR. The term readiness was used interchangeably with the concepts of preparedness, response and recovery. In other included articles, the concept of readiness was reflected implicitly, as per our working definition.

Some included articles suggested that preparedness indicators, using tools like the State Party Self-Assessment Annual reporting tool (SPAR), could be used to indicate gaps for the purposes of targeting OPR actions. 24 28 Others suggested that a country’s OPR and response capacity depends on the strength of its preparedness, with regular testing and updating of plans and capacities used to assess country OPR. 22 26 However, some authors noted that countries’ responses to COVID-19 highlighted an incongruence between IHR compliance scores and response performance; for example, some countries with lower IHR scores demonstrated a better ability to contain COVID-19 at the early stages of the pandemic. 21 29 A lack of recently updated and tested plans, and a lack of large-scale training and refresher courses or key actions for OPR, have been identified as reasons for inconsistency and weakness in previous responses. 23 25 Other articles identified activities taken as a result of lessons learned from similar diseases as a reason for more successful responses. 29 30 For example, rapid training and simulation exercises and leveraging specific expertise and experience were considered important in preventing or mitigating an outbreak. 20 28 31

The nature of the imminent threat, along with the proximity to the hazard, also influenced the scale and speed of OPR actions. 18–21 OPR could thus be considered the ‘operationalisation’ of hazard-specific capacities aimed at mitigating a specific, identified risk. Triggering rapid action in response to an imminent threat was noted as a way to feed back into and strengthen country capacities while effectively cutting the costs of ‘firefighting’ public health emergencies.

While preparedness and OPR were used interchangeably in the papers reviewed, the considerable body of literature dedicated to time-bound actions immediately before an event suggests they are different concepts. This observation highlights the need for a clear understanding of OPR and how it differs from preparedness. OPR actions could thus build on overall preparedness levels but consist of time-sensitive activities focused on the imminent threat (eg, ensuring that the healthcare workforce has been recently trained for an imminent threat). These activities have also focused on ensuring that overall preparedness gaps are accounted for (eg, requesting international emergency medical teams (EMTs) to be ready to deploy if EMTs are unavailable in-country). In the following section, we detail the variety of OPR actions taken in the articles included in our review, in alignment with the HEPR subsystems. 20 28 31

Critical capabilities for OPR

Collaborative surveillance

Previous emergencies highlighted the importance of a strong Early Warning System with the capacity to improve disease outbreak detection for early action on localised health events. 32–34 Strong surveillance systems at all levels, rapid feedback of results and accessibility of information were described as critical for risk management and decision-making. 33 35 36 Critical review of epidemiological data linked with planning and decision-making, to increase vigilance and real-time information sharing at all levels, was viewed as essential to communicate changes in the incidence of disease, which could signal triggers. 22 35 37–40

Key OPR actions embedded in surveillance systems included updating case definitions for consistency in identifying and reporting cases, early investigation, proactive contact tracing training for all staff and rapidly updating guidance for clinicians. 18 19 23 41–44 Measures to rapidly ensure integration of various types of surveillance and to address gaps in information collection and sharing were noted. 19 37 40 Integration of human and animal health surveillance systems was viewed as critical, as was the interoperability of surveillance systems. 38 40 The interconnectivity of surveillance systems has been stressed to ensure that actions taken and information gathered in one part of the system are made known to other parts. 22 45 For example, it was stressed that the occurrence of viral haemorrhagic fever in animals should activate enhanced surveillance. 38 The timely reconciliation of data from multiple sources was noted as challenging without an escalation in trained staff, improved communication, information technology and accessibility to more remote locations. 32 The need for epidemic data to be open and transparent for decision-making was emphasised. 46

OPR actions taken for surveillance systems in anticipation of a disease outbreak were centred around detecting gaps and providing solutions, 19 47 improving case detection via procurement of supplies, distribution of case definitions and the deployment of screening teams, 28 44 47–49 improving reporting for Integrated Disease Surveillance and Response priority diseases 28 48 and strengthening specimen transportation and analysis. 47–49 Others included increased frequency of surveillance system results 36 and rapid delivery of updated training and mechanisms for data sharing. 28 29 50 Existing systems were leveraged for COVID-19 as a novel disease 37 or the private sector engaged to provide surge capacity. 31 Other efforts centred around digitising systems to improve flexibility of use and reporting times. 32 46 Contact tracing systems were established as OPR actions, 44 51 along with quarantine or isolation options, screening and referral pathways in community settings and dedicated transfers for suspect cases. 44 47 52

OPR actions for increasing diagnostics and laboratory capacity for surveillance included prepositioning laboratory supplies in high-risk areas, which was described as key to facilitating the investigation of suspected cases (eg, specimen transportation containers, triple packages and gloves, transportation vehicles for specimens). 18 19 Electronic systems developed to improve laboratory results turnaround time, 19 the quick detection of hotspots 36 37 or digital contact tracing applications 37 were important developments implemented by countries by way of OPR actions. Lessons learnt from the digitalisation of contact tracing highlighted the importance of scaling up laboratory capacity to account for the increased demand for testing and to timeously ensure sufficient capability to test and process tests. 29 31 53 54 Mechanisms, if not available, should be rapidly instituted for sharing laboratory investigation data and establishing laboratory networks within and outside countries for timely diagnoses. 18 38

Included sources also signposted OPR actions for a collaborative approach to successful surveillance. For the rapid confirmation of novel influenza strains, for example, countries collaborated successfully with WHO collaborating centres in their region. 35 Laboratory capacity in other countries was rapidly increased through the creation of laboratory networks. 18 42 In scenarios where a neighbouring country had a disease outbreak, cross-border surveillance teams were established and the sharing of information between border countries improved, which was highlighted as a reason for the limited spillover. 19 During COVID-19, surveillance was rapidly readied at points of entry, including standard operating procedures for detected cases and awareness-raising sessions for personnel. 55 56

Community protection

Included articles highlighted key actions for rapidly involving and engaging affected communities in anticipation of an imminent threat. 22 57 These include rapidly providing updated information about the threat, including on identifying symptoms and any known public health and social measures, disseminated through numerous mechanisms and in a variety of languages to those at risk. 19 23 31 33 46 These should be adapted for all literacy levels. 58 Value was found in daily communications to build public trust. 37 Community volunteers were trained to carry out communal and door-to-door health education, 19 32 and public websites containing epidemic reports were used to keep communities informed. 46

Further recommendations highlighted risk communications and public health and social measures to be rapidly readied to contain any potential community transmission. 18 21 48 51 59–61 These communications should allow the public to gain a proper understanding of the perceived risk. 35 Other recommendations included working with local influencers to disseminate trusted information 47 and creating specialised, focused messages for high-risk populations. 26 62 Crucially, there should be strong efforts to engage vulnerable populations. 28 31 57

Plans and protocols should be in place for community-specific risk assessments to fill gaps in community OPR. 28 These assessments should focus on community perception, knowledge, preferred and accessible communication channels and existing barriers preventing community members from adopting promoted behaviours. 47 Plans should further account for resources for social security to support vulnerable communities. 40 To support this, community-based measures such as leveraging the community health workforce and community-based actors should be considered. 52 In this way, community needs and realities can be accounted for in the development of risk communication and community protection interventions. Misconceptions in the community should be identified and efforts made to dispel misinformation. 44

Some papers highlighted the early identification of vulnerable and remote population groups to ensure that their unique needs are well understood and addressed both in the design of interventions and in mitigating the impact of response interventions. 28 57 Accordingly, planning OPR should involve the input of communities, particularly organisations representing vulnerable groups, to inform community OPR. 47 Plans for response action should additionally consider secondary impacts or unintended consequences. For example, a clear lesson from COVID-19 related to the need for social security policies to mitigate the impacts of restrictive public health and social measures. 63 Policies for implementation should incorporate social security safety nets for communities, such as social health protection schemes or providing financial assistance for quarantined populations. 40 Plans should further be supported by partners. 64 Indirect health impacts should also be considered when OPR actions are implemented. 65 For example, some countries rapidly scaled up their capabilities for mental health services by implementing psychiatric hotlines 66 or providing stress management protocols. 48 Other indirect health impacts could include food insecurity; to prevent this, doorstep delivery of daily essentials 31 or provision of prepackaged meals 39 were planned.

Numerous papers highlighted the need for public health and social measures to be available rapidly and as early as possible, such as (for respiratory disease outbreaks) mask usage in public places when the risk level was high 31 36 46 52 63 and access to water, sanitation and hygiene, 44 48 with additional measures in place for individuals at risk of complications at the household level, such as using physical barriers, proper wearing of masks and environmental cleaning. 52 If non-existent, a strategy should be in place to assist in accelerating the containment of disease through imposing various public health and social measures, such as limits on local and international travel, the wearing of masks in public places, 37 social distancing, 67 bans or limits on mass gathering events 33 48 and closing educational institutions. 36 48 These measures were all implemented to varying degrees during COVID-19, with analyses finding that earlier containment efforts generally resulted in better outcomes early in the pandemic. 21 The measures taken should be weighed against the possibility of improving detection and limiting spread through other methods, such as a rapid expansion of laboratory testing. 63 Public health and social measures should additionally take into account other likely risks—for example, countries with hurricane-prone areas had to quickly revise their strategies during COVID-19 to ensure social distancing in shelters. 39 If vaccines are available, a prioritisation policy should be developed to avoid ethical and political conflicts. 23

Safe and scalable care

For health services to function during an emergency, they need a baseline of adequate staffing to perform core functions. 68 Included articles stressed OPR actions to surge additional healthcare personnel. 31 The healthcare workforce needs updated guidance on case definitions, transmission, clinical presentation, infection prevention and control (IPC), community surveillance and case management for the threat. 19 Capacity assessments can guide OPR to estimate the ability of health systems to contain the imminent threat 36 37 41 and to identify gaps. 29 36 Additional recommendations highlight that capacity modelling should integrate risks to the workforce during the response—previously, health workforce absenteeism has not always been considered in the development of staffing plans, leading to reduced response capacities. 57 When scaling up healthcare worker OPR for a threat, actions should also be taken to scale up the services that support them. 64 Health system gaps have been addressed by increasing intensive care unit bed capacity in relevant facilities, human resource training and mobilisation 20 36 48 49 63 69–71 and reducing the workload (eg, patients with mild symptoms were managed at home in isolation). 46 Referral systems and safe pathways should be established. 36 42 52

COVID-19 highlighted the importance of maintaining essential health services during an emergency, although many studies under review did not immediately prioritise this when considering OPR for the imminent threat. Measures taken proactively to maintain essential health services and to reduce stress on the health system were described, such as giving patients with chronic diseases a stockpile of medication to prevent them from coming to hospitals 31 72 and the use of telemedicine. 31 40 It was recommended to establish referral systems and safe pathways to designated local isolation facilities and to enhance case detection in healthcare facilities and the community. 47 Others emphasised learnings from responses to diseases before COVID-19 and maintaining the continuum of care. 36 40 For example, Korea created two parallel systems (a COVID-19 health system and a non-COVID-19 health system) to ensure continuity of non-COVID-19 care and diverted the flow of patients through triage centres. 36 Measures were taken to safeguard hospitals not identified as part of the response, for example, using temperature checks or encouraging the use of masks. 33 40

Included articles also noted that staff protection and welfare should be strongly included in OPR planning, for example, anticipating the provision of personal protective equipment (PPE) and supplies for staff protection. 73 74 An IPC programme should be implemented before an outbreak. 33 38 63 Prepositioning of PPE supplies in high-risk districts has been recommended to enable a more rapid response, 19 or, if the risk level is low, the availability of a regional reserve of PPE. 75 Where PPE was unavailable, production was quickly ramped up to maintain inventory before the response; 76 others who did not do this noted that they suffered shortages during the response. 46 Regular training and simulation exercises were conducted for case management teams. 19 38 Psychosocial support and other interventions necessary to support staff welfare were also emphasised. 26 40 Others quickly put legislation in place to protect healthcare workers engaged in the response from being attacked. 31

Access to countermeasures

There were fewer descriptions of OPR in this HEPR subsystem than in others. When gearing up for response, countries increased production and procurement by procuring from local industry and working with manufacturing companies to increase supply, for example, by adapting manufacturing facilities or establishing warehouses and transportation. 18 31 Numerous studies noted extreme difficulty in obtaining needed supplies, 40 46 due to limited stockpiles and a lack of finances to maintain them. 23 OPR actions for an imminent threat would focus on scaling up manufacturing plans and on ensuring that a stockpile is in place.

Prepositioning essential supplies is critical for OPR, with an adequate supply of medical equipment to the frontline identified as vital for reducing health emergency risks. 77 Additionally, measures to quickly acquire and distribute medical supplies using government-set prices, to prioritise frontline health professionals and vulnerable populations for the disbursement of medical countermeasures and to promote local manufacturing were identified. 20 Other countries described OPR actions to introduce therapeutics, diagnostics and vaccines. 37

One study identified research topics such as system OPR, knowledge, attitudes and practices of the health workforce, epidemiology of the disease at the national level, best practices at points of entry and isolation centres and infection-control measures as important to inform OPR actions. 78 Research should also support decision-making, cost-effectiveness, intervention effectiveness and the impact of these on pandemic trajectories. 50 79 Competing demands can limit the volume of research conducted, which was considered a missed opportunity. 32 Early convening of expert groups to advise government was identified as useful for managing health service responses and OPR, and their work should as far as possible be informed by evidence (eg, scenario planning). 33 Health systems researchers occupying the highest levels of oversight across sectors were said to enhance the use of evidence and data for decision-making. 36 Another paper noted regional lessons indicating that funding for research and investigations should also be in place during OPR and response. 39

Emergency coordination

We identified several critical and overarching governance-related elements that facilitated OPR within regions and countries. Lessons from OPR or responses to previous diseases have demonstrated the importance of a coordinating body at regional or national levels 19 35 36 41 42 46 48 75 78 80 81 led by high-level officials. 19 48 80 These structures should provide leadership and coordination, 42 46 62 82 guidance and action plans, 36 and communication of critical information. 48 80 Strong and skilled leadership was a notable enabler 29 32 36 54 83 and was marked by active OPR involvement of the responsible health departments, and effective coordination with multiple stakeholders as the planning or response evolved. 29 32 54 82 84 Flexibility and adaptation, particularly during OPR, were important. 32

Many included articles emphasised the timely activation of coordination mechanisms and risk assessments to inform plans. 18 19 31 34 38 47 54 69 75 83 85 86 This involved the establishment and operationalisation of intersectoral and/or interdisciplinary teams (eg, task teams, 19 33 75 80 special councils 41 42 46 and command centres 30 41 ) to provide technical expertise, 25 42 78 87 prepare and coordinate the implementation of policy decisions 32 80 87 and guide lower health system-level or governmental-level structures or actors. 28 32 88 An Incident Management System with a dedicated lead was adopted in several countries, 32 35 36 83 89 and this was further recommended in the grey literature. 72 90 91 When operationalising these aspects for an efficient and effective response, the early establishment of clear roles and responsibilities, with a clear lead, was considered vital and instrumental for later response success. 28 32 The highest levels of government should be involved, with an all-of-society and/or all-of-government approach. 32 35 69 70 79 87–89

Workforce management is key to successful coordination and response to an emergency. Actions taken include recruitment of staff from the private sector, healthcare students or retired or non-practising trained workers, 31 40 42 48 78 89 92 community health workers and community-based organisations 19 31 40 48 73 or volunteers. 19 48 89 Grey literature emphasised actions in support of cross-border response teams or surge teams, with rapid staff registration and accreditation systems, staff redeployment and reallocation, 18 72 93 and appropriate training. 18 90 94 95 Also critical were the availability of emergency medical services for immediate response and the early deployment of multidisciplinary Rapid Response Teams among high-risk groups. 23 31 53 83 87 89 Some papers emphasised prioritising actions which enable rapid deployment of these teams. 53 83

Other important factors included threat-specific contingency planning at national and subnational levels for identifying preparedness gaps and actions to work around them, thus supporting rapid detection, response and containment. 18 19 35 83 89 Contingency plans helped to prioritise targeted actions 83 as well as identify and prioritise at-risk geographic areas and vulnerable communities. 40 57 Having recently updated or tested contingency plans in place was stated as essential to enhance OPR and effective response, 25 39 96 and these should support operations and logistics, help understand organisational structures and functions, and optimise resources. 44 68 93 They should further ensure critical infrastructure for health system functioning and ensure clinical and health service-level plans are detailed and able to assist in preparing for increased patient volumes or need for critical care services. 19 68 Contingency plans should incorporate past experiences and learnings from other outbreaks, changing contexts 52 and the results of simulation exercises conducted on the preparedness and response systems. 18 19 23 Countries with similar public health emergency experiences have been found to be better prepared than those without previous experience, 63 raising the importance of practice, via simulation exercises and training, for a new imminent threat. 19 23

Furthermore, country risk and vulnerability assessments should be available and guide risk assessment activities. 19 31 35 38 39 47 52 53 57 84 They were recommended to be focused on geographical areas with particularly high assessed risks 39 52 89 and related to prevention and control strategies. 19 47 84 The assessments should be conducted to ensure that the contingency plans contain appropriate OPR actions and consider local contexts 47 68 89 and can also be used to guide the prioritisation of actions. 47 89 Risk assessments for future waves or outbreaks should also be conducted, and updated worst-case scenarios incorporated into contingency plans. 39 63

OPR needs bespoke financial planning. 22 28 70 It was recommended that contingency funds be available for OPR, 83 ring-fenced and situated within a dedicated emergency programme. 19 50 70 There should be existing emergency financial management systems which allow for rapid, transparent and efficient use of funding. 40 42 Contingency funds were emphasised as particularly important so that resources are not diverted from necessary routine programmes. 25 50 Having contingency funds in place would ensure a few key capacities: first, earmarked resources for the hazard, 22 enabling rapid activation of key surveillance and early response activities. 25 50 Second, changes which may need to occur to the financing of healthcare services are already outlined, such as creating financial protection mechanisms for discontinued outpatient services or outlining how citizens or health insurance systems pay for screening and diagnostic testing. 42 Finally, contingency funds should cover workforce surge, including staff, supplies, training and workforce management. 73

Discussion

This scoping review examined definitions and critical elements of OPR for public health emergencies. We sought to identify key actions mobilised in anticipation of an imminent threat, framed in the latest conceptualisation of a global architecture for health emergency management. From 54 peer-reviewed publications and 24 grey literature sources, we found that the concept of OPR was at an early stage of adoption. Where the term was explicitly defined, these definitions lacked coherence and consistency, and included articles that matched our working definition of OPR often did not use the term. Our analysis highlights the important need for conceptual clarity regarding OPR. We agreed on a working definition of OPR at the outset of the review as those immediate actions, rapidly mobilised or prepositioned, that are taken in the presence of an imminent threat to respond to that threat. It was also often difficult to identify where the line between preparedness, OPR and response lay. For our purposes, these distinctions are relevant in so far as they can guide early detection and timely activation of key OPR capabilities in useful and practical ways. Put simply: when a hurricane is coming, you may rapidly begin to take measures to prevent damage to your house. These could include actions such as securing loose objects, protecting windows, turning off utilities and filling tubs with water. These actions, taken before a storm, differ greatly from the years spent building and maintaining the house beforehand, ensuring the foundation is sound and the roof is in good repair. They differ further from the actions one would take during and immediately after the storm.

This review was initiated during the dynamic and fast-moving context of a pandemic in which important policy developments were advancing in parallel. To maximise the utility of this work, we reanalysed our findings to map to the HEPR framework once it became publicly available for wider discussion among WHO member states. Our analysis across the body of articles included in this review identified OPR actions that mapped to the five core subsystems considered critical to strengthening the global HEPR. Additionally, our review mostly identified national-level capabilities and provided less insight into key actions to activate subnational and local capabilities. This observation may reflect a limitation of our review, under-reporting in the literature, or a need to further develop and define OPR at these levels.

Across articles included in this review, OPR actions were identified as those that aimed to fill gaps in a country’s capacities or to prepare for an early response. In this way, a key contribution of embedding OPR in health emergency management is in institutionalising prompt action as soon as a potential signal is detected. Of note are the many actions identified for emergency coordination, including strong, high-level leadership, governance and coordination, with clarity around the roles and responsibilities of the leaders and the coordination bodies. Collaborative surveillance that allows for early detection of signals is key for OPR in terms of triggering action; this is an underdeveloped part of readiness practice. Other important areas included rapid, integrated and interoperable health information systems for purposes such as surveillance, planning and decision-making, managing operations, and monitoring country responses. The ability to rapidly plan for, mobilise and manage resources (eg, human, PPE, financial) and scale up services (eg, essential or laboratory), underpinned by supportive legislation, was also identified. Clear and strong communication at the level of the policy-maker, within the services and in the community was also identified as crucial for optimal OPR. We note gaps related to research and manufacturing platforms enabled by technology, and our analysis did not consider OPR actions at the intersection of the five subsystems, for example, the readiness of communities for early detection to support collaborative surveillance or for participation in clinical trials of novel medical countermeasures.

The review methodology has strengths and limitations. This work was done rapidly by a large team with the aim of underpinning practical technical products for OPR in health emergency risk management. A scoping review methodology was best suited to answer our research question, given the broad base of evidence. 13 As far as possible, we followed expert group recommendations on the adaptations needed in the conduct of rapid reviews. 9 14 Our initial analysis mapped key thematic categories in the HEPR Framework. 1 To align with the global policy developments that have led to the HEPR framework, we updated our analysis. In this process, we may have missed new articles that would add further insight into OPR experience. However, given the pragmatic focus of this review and the global consensus work that has followed, it is unlikely that further updates would significantly alter our key conclusions. Since this review, there has been significant progress in actions to strengthen the global HEPR architecture. A more thorough review of OPR, one for each of the subsystems, is needed to reflect the recently published breakdown of HEPR subsystems into capabilities. 4 Further, as OPR becomes ingrained in health emergency response, a review is needed to identify the optimal time frame for quickly and effectively operationalising the capabilities of each subsystem. Additionally, the purpose of this review was not to identify how OPR actions have increased resilience. Now that OPR has been defined, future research is needed to identify the OPR interventions which maximise populations’ abilities to withstand an event and increase resilience. Finally, our review does not include a body of work on anticipatory actions, which aligns well with OPR. Anticipatory actions are defined as ‘actions taken ahead of predicted hazards to prevent or reduce acute humanitarian impacts before they fully unfold’. 97 They position OPR as part of emergency management, particularly for disaster management and in humanitarian contexts. 98 These developments reflect a growing consensus on the critical importance of OPR. The essence of OPR is to mobilise early action when a threat is on the horizon. The work reported in this paper is an important step in advancing this urgent agenda. Indeed, it has set a foundation for the more substantive and coherent development of the evidence in this area, has provided input to readiness actions within the recently published IHR Benchmarks, and is informing the creation of readiness assessments and has informed a readiness course on OpenWHO. 99 100

Ethics statements

Patient consent for publication

Not applicable.

Acknowledgments

We would like to thank Professor Taryn Young for guidance regarding the methodology and Hilmar Luckoff for editing earlier versions of the paper. The rapid scoping review was commissioned by the WHO to inform an Operational Readiness Framework for the Country Readiness Strengthening Department in the World Health Emergencies Program in WHO (Reference #: 2021/1145765; Unit: MST; Cluster: QNF/SCI).


Handling editor Helen J Surana

Contributors Conceptualisation: RE, HG, QL, JCYN, NG and LLB; Data extraction: MYC and MP; Formal analysis: QL, RE, HG, CJ, JCYN, NG, HC and NE (synthesis); MYC, MP and KB (descriptive); Funding acquisition: RE; Methodology: QL, MM, KB, MP, MYC, CJ, NG and LLB; Project administration: JCYN; Software: MYC, MP, MM and KB; Source screening: MP, MYC, RE, HG, QL, KB and JCYN; Supervision: RE; Visualisation: QL and MYC; Writing–original draft preparation: KB, MYC, QL, MM, MP, CJ, RE, HG, JCYN, NG and LLB; Writing–review and editing: KB, MYC and NE; Writing–final version review: All authors have read and approved the final version of the report manuscript. RE is the nominated guarantor.

Funding This work was supported by the WHO (Reference: APW/RR/Readiness/2021/1145765). The manuscript development and publication were funded in part by the Wellcome Trust and the UK Foreign and Commonwealth Development Office under grant agreement 222037/A/20/Z and in part by the United States Agency for International Development (USAID) under grant agreement 720BHA21IO00300.

Disclaimer The authors alone are responsible for the views expressed in this article and they do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated.

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.


  • Open access
  • Published: 28 August 2024

Facilitators and barriers of midwife-led model of care at public health institutions of Dire Dawa city, Eastern Ethiopia, 2022: a qualitative study

  • Mickiale Hailu 1 ,
  • Aminu Mohammed 1 ,
  • Daniel Tadesse 1 ,
  • Neil Abdurashid 1 ,
  • Legesse Abera 1 ,
  • Samrawit Ali 2 ,
  • Yesuneh Dejene 2 ,
  • Tadesse Weldeamaniel 1 ,
  • Meklit Girma 3 ,
  • Tekleberhan Hailemariam 1 ,
  • Netsanet Melkamu 1 ,
  • Tewodros Getnet 1 ,
  • Yibekal Manaye 1 ,
  • Tariku Derese 1 ,
  • Muluken Yigezu 1 ,
  • Natnael Dechasa 1 &
  • Anteneh Atle 1  

BMC Health Services Research, volume 24, Article number: 998 (2024)


The midwife-led model of care is woman-centered and based on the premise that pregnancy and childbirth are normal life events, with the midwife playing a fundamental role in coordinating care for women and linking with other health care professionals as required. Worldwide, this model of care has contributed greatly to the reduction of maternal and child mortality. For example, the global under-5 mortality rate fell from 42 deaths per 1,000 live births in 2015 to 39 in 2018, and the neonatal mortality rate fell from 31 deaths per 1,000 live births in 2000 to 18 per 1,000 in 2018. Although this model of care has a pivotal role in reducing maternal and newborn mortality, it has faced many challenges in recent years.

To explore facilitators of and barriers to a midwife-led model of care at public health institutions in Dire Dawa, Eastern Ethiopia, in 2022.

Methodology

A qualitative study was conducted at Dire Dawa public health institutions from March 1 to April 30, 2022. Data were collected using a semi-structured in-depth interview guide, focus group discussions, and key informant interviews. A convenience sampling method was used to select study participants, and the data were analyzed thematically using the computer-assisted qualitative data analysis software Atlas.ti7. The thematic analysis followed an inductive approach with six steps: familiarization, coding, generating themes, reviewing themes, defining and naming themes, and writing up.

Two major themes were derived for facilitators of the midwife-led model of care (professional pride and good team spirit), and seven major themes were derived for barriers to the midwife-led model of care (lack of professional development, shortage of resources, unfair risk or hazard payment, limited organizational power of midwives, feeling of demoralization, absence of recognition from superiors, and lack of work-related security).

The midwifery-led model of care faces considerable challenges pertaining to the management of the healthcare service both locally and nationally. A multidisciplinary and collaborative effort is needed to solve these challenges.


Introduction

A midwife-led model of care is defined as care where “the midwife is the lead professional in the planning, organization, and delivery of care given to a woman from the initial booking to the postnatal period“ [ 1 ]. Within these models, midwives are, in partnership with the woman, the lead professional with responsibility for assessing her needs, planning her care, referring her to other professionals as appropriate, and ensuring the provision of maternity services. Most industrialized countries with the lowest maternal and infant mortality and morbidity rates are those in which midwifery is a valued and integral pillar of the maternity care system [ 2 , 3 , 4 , 5 ].

Over the past 20 years, the midwife-led model of care (MLC) has significantly lowered maternal and infant mortality across the globe. In 2018, there were 39 under-5 deaths for every 1,000 live births worldwide, down from 42 in 2015, and the neonatal mortality rate (NMR) decreased from 31 deaths per 1,000 live births in 2000 to 18 per 1,000 in 2018. The midwifery-led care approach is regarded as the gold standard of care for expectant women in many industrialized nations, including Canada, Australia, the United Kingdom, Sweden, the Netherlands, Norway, and Denmark. Evidence from those nations demonstrates that women and babies who receive midwife-led care, as opposed to alternative types of care, experience favorable maternal outcomes, fewer interventions, and lower rates of fetal loss or neonatal death [ 6 , 7 , 8 ].

In Pakistan, the MLC has been accompanied by many challenges, including political threats; a lack of diversity (midwives had no opportunities to collaborate with midwives outside their institutions); long duty hours and low remuneration; a lack of a career ladder; and a lack of socialization (the health centers are isolated from other parts of the country due to relative geographical inaccessibility, transportation issues, and a lack of infrastructure). Currently, in Pakistan, 276 women die for every 100,000 live births, and the infant mortality rate is 74 per 1,000; yet the majority of these deaths are preventable through the midwife-led care model [ 7 ].

The MLC in African countries has faced many challenges. Shortages of resources, work overload, low inter-professional collaboration between health facilities, lack of personal development, lack of a well-functioning referral system, societal challenges, family life troubles, low professional autonomy, and unmanageable workloads are the main challenges [ 8 ].

Due to the aforementioned challenges, Sub-Saharan Africa (SSA) currently experiences the highest rate of infant mortality (1 in 13) and is responsible for 86% of all maternal fatalities worldwide. As a result, it is imperative to look at MLC issues in low-income countries, which continue to account for 99% of all maternal and newborn deaths worldwide [ 8 , 9 ].

Ethiopia’s maternal mortality rate (MMR) and NMR, at 412 per 100,000 live births and 33 per 1,000 live births respectively, remain high, making Ethiopia one of the largest contributors to the global burden of maternal and newborn deaths (ranked 4th and 6th, respectively), although MLC could prevent 83% of all neonatal and maternal fatalities in an environment that supports it. In the research area, the MMR and infant mortality rate (IMR) were 150 per 100,000 live births and 67 deaths per 1,000 live births, respectively [ 10 , 11 , 12 , 13 ].

Since the Federal Ministry of Health currently views midwifery-led care as an essential tool in reducing the maternal mortality ratio and ending preventable newborn deaths, exploring the facilitators and barriers of MLC may contribute greatly to reducing maternal and newborn mortality [ 14 ]. Since no study has been done in Ethiopia or the study area regarding the facilitators and barriers of MLC, the aim of this research was to explore the facilitators and barriers of MLC in Dire Dawa City public health institutions.

In so doing, the research attempted to address the following research questions:

What were the facilitators for a midwife-led model of care at the Dire Dawa city public health institution?

What were the barriers to a midwife-led model of care at the Dire Dawa city public health institution?

Study setting and design

An institution-based qualitative study was conducted from March 1 to April 30, 2022 in Dire Dawa city. Dire Dawa is one of the federal city administrations in Ethiopia, located 515 kilometres east of Addis Ababa (the capital city). The city administration has 9 urban and 38 rural kebeles (kebeles are the smallest administrative unit in Ethiopia), with 2 government hospitals, 5 private hospitals, 15 health centers, and 33 health posts. The current metropolitan population of Dire Dawa city is 426,129, of whom 49.8% are male and 50.2% female. The total number of women in the reproductive age group (15–49 years) is 52,673, accounting for 15.4% of the total population. The city has a hot climate, with a mean temperature of 25 degrees centigrade [ 15 ].

Study population and sampling procedure

The source population for this study included all midwives who worked at Dire Dawa City public health facilities as well as key informants from relevant organizations (the focal person for the Ethiopian Midwives Association and maternal and child health (MCH) team leaders). In total, the study included 41 healthcare professionals working in Dire Dawa public health institutions, with the final sample size determined by data saturation.

From the 15 health centers and 2 governmental hospitals in the Dire Dawa city administration, 8 health centers and both governmental hospitals were selected by a non-probability purposive sampling method. In addition, a non-probability convenience sampling method was used to select midwives working in Dire Dawa city public health institutions and key informants from relevant organizations, such as the Ethiopian Midwives Association focal person and MCH team leaders. Midwives who had worked for at least six months in the institution met the inclusion criteria, while those working on a free-service (unpaid) basis were excluded from the study.

Data collection tool and procedures

Focus groups, in-depth interviews, and key informant interviews were used to collect data. A voice recorder, field notes, and a semi-structured interview guide were used to conduct the interviews. Voluntary informed written consent was obtained from the study participants before they participated in the study. In-depth interviews and focus group discussions were then held with midwives chosen from various healthcare organizations. The MCH department heads and the Dire Dawa branch of the Ethiopian Midwives Association served as the key informants. In-depth interviews (IDI) and key informant interviews (KII) took place only once per participant and lasted roughly 50–60 min; interviews were held in the midwives’ duty room. Focus group discussions (FGD) involved six to eight people and lasted 90 to 100 min. Two midwives with experience in gathering qualitative data collected the information.

Data quality control

Qualitative designs are prone to bias, so open-ended questions were used to avoid acquiescence, and two days of training were given to the data collectors on note-taking and recording with a tape recorder. For consistency and possible modification, a pretest comprising one FGD and in-depth interviews was done at non-selected health institutions of the Dire Dawa city administration. A detailed explanation of the objectives of the study was given to the study participants prior to the actual data collection. All FGDs, key informant interviews, and in-depth interviews were conducted in a quiet place.

Data analysis

Atlas.ti7, a qualitative data analysis program, was used to analyze the data thematically. An inductive approach to thematic analysis involves six steps: familiarization, coding, generation of themes, review of themes, defining and naming of themes, and writing up. The data were transcribed by repeatedly listening to the taped interviews. Participants’ verbatim statements were used to extract and describe the inductive meanings of the statements, and the data were then coded, with each code describing the concept or emotion made clear in that passage of text. We then examined the codes, searched for commonalities, and began to develop themes. To ensure the data’s accuracy and representation, the generated themes were reviewed. Themes were defined and named, and the analysis of the data was then written up.

Trustworthiness of data

Meeting standards of trustworthiness by addressing credibility, confirmability, and transferability ensures the quality of qualitative research. Data triangulation, data collection from various sites and study participants, the use of multiple data collection techniques (IDI, KII, and FGD), multiple peer reviews of the proposal, and the involvement of more than two researchers in the coding, analysis, and interpretation decisions are all methods that were used to fulfill the criteria for credibility. To increase its transferability to various contexts, the study gave details of the context, sample size and sampling method, eligibility criteria, and interview processes. To ensure confirmability, the research paths were maintained throughout the study in accordance with the work plan [ 16 , 17 ].

Background characteristics of the study participants

In this study, a total of 41 healthcare providers working in Dire Dawa public health facilities participated in the three FGDs, six KIIs, and fifteen IDIs. The participants’ years of experience ranged from one to 12 years, their ages ranged from 30 to 39 years, and their educational status ranged from diploma to master’s degree (Table 1).

As shown in Table 2, the qualitative analysis of the data yielded two major themes for facilitators of MLC and seven major themes for barriers to MLC.

Facilitators of the midwife-led model of care at public health institutions of Dire Dawa city, Eastern Ethiopia, in 2022

Professional pride

This study found that saving the lives of mothers and newborns was a strong facilitator. Specifically, it was motivational to have skills within the midwifery domain, such as managing the full continuum of care during pregnancy and labour, supporting women in having normal physiologic births, being able to handle complications, and building relationships with the women and the community, as mentioned below by one of the IDI participants.

“I am so proud since I am a midwife; nothing is more satisfying than seeing a pregnant mother give birth almost without complications. I always see their smile and happiness on their faces, especially in the postpartum period, and they warmly thank me and say, “Here is your child; he or she is yours.” They bless me a lot. Even sometimes, when they see me in the transport area, cafeteria, or other areas, they thank me warmly, and some of them also want to invite me to something else. The sum total of those things motivates me to be in this profession or to provide midwifery care.“ IDI participant.

This finding is also supported by other participants in FGD.

“We have learned and promised to work as midwives. We are proud of our profession, to help women and children’s health. The greatest motivation is that we are midwives, we love the profession, and we are contributing a great role in decreasing maternal and child mortality….” FGD discussant.

Good teamwork

The research revealed that good midwifery teamwork and good social interaction among the staff have become facilitators of MLC. FGD participants shared their experiences of working in a team.

“In our facility, all the midwives have good teamwork; we have good communication, and we share client information accurately and timely. In case a severe complication happens, we manage it as a team, and we try to cover the gap if some of our staff are absent. Further from that, we do have good social interactions in the case of wedding and funeral ceremonies and other social activities. We do have good team spirit; we work as a team in the clinical area, and we also have good social relationships. If some of our staff get sick or have other social issues, the other free staff will cover their tasks.” FGD discussant.

Another participant from an IDI also shared the same experience regarding their good teamwork and social interactions.

“As a maternal and child health team, we do have a good team spirit, not only with midwives but also with other professions. We are not restricted by the ward that we are assigned to. If there is a caseload in any unit, some midwives will volunteer to help the other team. Most of the time at night, we admit more than 3 or 4 labouring mothers at the same time. Since in our health center only one midwife is assigned at night, we always call nurses to help us. This is our routine experience.” IDI participant.

Barriers to the midwife-led model of care at public health institutions of Dire Dawa city, Eastern Ethiopia, in 2022

Lack of professional development

This study revealed that insufficient opportunities for further education and updated training were the main barriers to MLC. Even the few trainings and update courses that were actually arranged were unavailable to midwives, either because they did not meet the criteria set or because people working in administration were selected. Opportunities were not arranged even for midwives to upgrade themselves through self-sponsorship. One of the participants from an IDI narrates her opinion about opportunities for further education as follows:

“Training and updates are not sufficient; currently we are almost working with old science. For example, the new obstetrics management protocol for 2021 has been released from the ministry of health, and many things have changed there. But we did not receive any training or even announcements. Even the few trainings and update courses that were truly organized and turned in to us are unavailable since the selection criteria are not fair. As a result, we miss those trainings either because we did not meet the selection criteria or because those who work in administration are prioritized.” IDI participant.

An FGD discussant also supported this idea, mentioning that opportunities are not arranged for midwives to upgrade themselves even through self-sponsorship:

“There is almost no educational opportunity in our institution. Every year, one or two midwives may get institutional sponsorship. Midwives selected for this opportunity are those who have served for more than five to ten years. Imagine that to get this chance, every midwife is expected to serve five or more years. Not only this, even if staff want to learn or upgrade at governmental or private colleges through self-sponsored programmes, whether at night or in an extension programme, they are not cooperative. Let me share with you my personal experience. Two years ago, I started my MSc degree at Dire Dawa University in a weekend programme, and I repeatedly asked the management bodies to let me be free on weekends and to compensate at night or any time from Monday to Friday. Since they refused to accept my concern, I withdrew from the programme.“ FGD discussant.

Shortage of resources

The findings indicate that a shortage of equipment, staff, and rooms or wards was a challenge for MLC. Midwives claimed they were working with few staff, insufficient essential supplies, and a lack of advanced materials. This lack of equipment endangers both the midwives and their patients. One of the participants from an IDI narrates her opinion about the shortage of resources as follows:

“Of course there is a shortage of resources in our hospital, like gloves and personal protective devices. Even the few types of medical equipment available, like the autoclave, forceps, vacuum delivery couch, and BP apparatus, are outdated, and some of them are non-functional. If you see the BP apparatus we used in ANC, it is digital but full of false positives. When I worked in the ANC, I did not trust it and always brought the analogue one from other wards. This is the routine experience of every staff member.“ IDI participant.

Another participant from an IDI also shared the same experience regarding the crowdedness of rooms or wards.

“In our health center, there are no adequate wards or rooms. For example, the delivery ward and postnatal ward are almost in one room. Postnatal mothers and neonates do not get enough rest and sleep because of the sound of labouring mothers. Not only this, but even the antenatal care and midwifery duty rooms are also very narrow.“ IDI participant.

The study also revealed that midwifery staff were pressured to work long hours because they were understaffed, which in turn affected the quality of midwifery care. The experience of one midwife is shared as follows:

“I do not think that the management bodies understand the risk and stress that we midwives face. They do not want to consider the risk of midwives even equal to that of other disciplines, but lower than the others. For example, in our health centre, during the night, only one midwife is assigned for the next 12 hours, but if you look at the nursing department, two or more nurses are assigned at night in the emergency ward.” IDI participant.

The discussion affirms that being understaffed and lacking an adequate allocation of midwife professionals on night shifts affect labouring mothers’ ability to get sufficient midwifery care. The above narration is also supported by an FGD discussant.

“In our case, only one midwife is assigned to the labour ward during the night shift. I think this is the main challenge for midwives that needs attention. Let me share with you an experience that happened months before. While I was on night shift, two labouring mothers were fully dilated within three or four minutes of each other. It was very difficult for me to manage two labouring mothers at the same time. Immediately, I called one of my nurse friends from the emergency department to help me. If my friend had been busy, what could have happened to the labouring mothers and also to me? This is not only my experience but also the routine experience of other midwives.” FGD discussant.

Unfair risk or hazard payments

It was reported that the compensation paid for risk is lower than in other health professions. The health risks are not any less, but the remuneration system fails to capture the need to fairly compensate midwifery professionals. A narration from an FGD discussant regarding unfair payment is given below.

“Only 470 ETB is paid to midwives as risk payment, which is incomparable with the risks that midwives are facing. Contrary to that, the risk payment for nurses (in emergencies) is about 1,200 Ethiopian birr (ETB), and for anesthesia it is 1,000 ETB. I do not want to compare my profession with other disciplines, but how can the risk of midwifery not be equal to that of nursing and other professions? I do not know which professionals made such unfair decisions or on what scientific basis this calculation was done.” FGD discussant.

The above finding is also supported by an IDI participant.

“… Even though the midwifery profession is full of risks, with the current Ethiopian health care system, midwives are being paid the lowest risk payments compared to other disciplines …” IDI participant.

Limited organizational power of midwives

Midwives’ interviews reported that the limited number of senior midwifery positions in the health system has become a challenge to midwifery care, constraining the decision-making power and capability of midwives. This was compounded by limited opportunities for midwifery personnel to address their concerns to the responsible bodies, as stated by one of the key informants.

“Our staff have many concerns, especially profession-related concerns, which can affect the quality of midwifery care. Personally, as department head, I have tried to address those concerns in different management meetings at different times. But since the leadership positions are dominated by other disciplines, many of our staff concerns have not been solved yet. But let me tell you my personal prediction… If those concerns are not solved early and if this trend continues, the quality of midwifery care will be in danger.“ Key informant participant.

The above finding is also supported by another IDI participant.

“In our hospital, at every hierarchical and structural level, midwives are not well represented. That is why all of our challenges or concerns have not been solved yet. For example, as a structure in the Dire Dawa Health Office (DDHO), there is a management team related to maternal and child health. But unfortunately, those professionals working there are not midwives. I was one of three midwives chosen to meet with Dr. X (former DDHO leader) to discuss this issue. At the time, we reached an agreement that two or three midwives would be represented on that team. But since the leader resigned a few months later, the issue has not gotten a solution yet.“ IDI participant.

Feeling of demoralization

One of the main barriers to midwifery care reported by the participants during the interviews was a feeling of demoralization induced by both their clients and their supervisors. They reported having been verbally abused by their patients, which made them feel that their hard work was being undermined, as stated by an FGD participant.

“I don’t think there is any midwife who would be happy for anybody to lose their baby, or any midwife who would want a woman to die. These things are accidents, but the patient and leaders will always blame the midwife.” FGD discussant.

A narration from an IDI participant also mentioned the following:

“… If something happens, like a conflict with the patients or clients, the management is on the patient’s side. Not only that, the way in which they communicate with us is aggressive or disrespectful.” IDI participant.

Absence of recognition or motivation from superiors

This study revealed that midwives experience a loss of motivation at work due to limited support from their superiors; their efforts are used only for reporting purposes. A midwife from an FGD shared her experience as follows.

“In our scenario, until recently, the maternal and child health services have been provided in a good way. But this was not easy; it is the cumulative effort of midwives. Unfortunately, only those in managerial positions are recognized. Nothing was done for us despite our efforts. To me, our efforts are used only for reporting purposes.” FGD discussant.

This finding was also supported by IDI participants.

“Even though we have good achievements in the MCH services, there is no mechanism in place to motivate midwives. But if something happens, even a minor mistake, they are on the front lines to intimidate us or write a warning letter. Generally, their concern is a report or a numbers issue. We are tired of such scenarios.” IDI participant.

Lack of work-related security

One of the main concerns reported by the participants during the interviews was work-related security, which has become a challenge for MLC. The midwives’ work environment was marked by insecurity, especially during night shifts, when midwives faced verbal and even physical attacks, as mentioned by participants.

“In the labour ward, especially at night, we face many security-related issues. The families of labouring mothers, especially those who are young, are very aggressive. Sometimes they even want to enter the delivery room. They do not listen to what we tell them to do, but if they hear any labour sounds from their family member, they disturb the whole ward. This leads to verbal abuse, and sometimes we face physical abuse. There may be one or two security personnel at the main gate, but since the delivery ward is far from the main gate, they do not know what is happening there. When things go beyond our scope, we call the security guards. Immediately after the security guards go back, similar things continue. What makes such situations difficult to manage is that only one midwife is assigned at night, so labouring mothers do not get quality midwifery care.” IDI participant.

FGD discussants also shared their experience that their working environment is full of insecurity.

“In case any complications occur, especially at night, it is very difficult to tell the labouring mother’s family or husband unless we call security personnel. It is not only swearing that we face; they also intimidate us.” FGD discussant.

Discussions

The aim of this study was to explore facilitators of and barriers to the midwifery-led model of care at Dire Dawa public health facilities. In this study, professional pride was the main facilitator of the midwifery-led model of care. This conclusion is confirmed by another qualitative study that examined midwifery care challenges and the factors that motivate midwives to remain in their workplace, which found that a strong feeling of love for their work was the main facilitator of the midwifery-led model of care [ 9 ]. Good team spirit was another facilitator of the midwifery-led model of care in our study. This is confirmed by another study’s findings, which emphasize that building relationships with the midwives, women, and community was the driving force behind providing midwifery care [ 7 , 18 ].

The midwives in this study expressed a need for additional professional training, updates, and competence as part of their continuing professional development. Similar findings have been reported in the worldwide literature: midwives were struggling for survival due to limited in-service training opportunities to improve their knowledge and skills [ 19 ]. This phenomenon does not seem to differ between high-, middle-, and low-income settings [ 7 , 9 , 18 ], in which midwives experienced difficult work situations due to a lack of professional development to autonomously manage work tasks, which made them feel frustrated, guilty, and inadequate. This can contribute to distress and burnout, which in turn prevent midwives from providing quality care and can eventually cause them to leave the profession [ 19 ].

Shortages of resources (staff, physical space, and equipment) were the other reported barriers to midwifery care explored in this study. Participants reported working in an environment with a shortage of resources, which leads to poor patient outcomes. This finding is supported by many other studies conducted around the globe [ 20 , 21 , 22 , 23 ], and by another qualitative study emphasizing that a shortage of resources was a barrier to providing adequate midwifery care [ 19 ]. Delivery attended by skilled personnel with appropriate supplies and equipment has been found to be strongly associated with reductions in child and maternal mortality [ 24 ].

Feelings of demoralization and a lack of motivation from superiors were other barriers to midwifery care explored in this study. This finding is consistent with other studies conducted around the globe [ 19 , 25 , 26 , 28 ] and accords with another qualitative narration emphasizing that feelings of demoralization and a lack of motivation were the main challenges of midwifery care [ 22 ]. Positive support from supervisors has been demonstrated to be important for the quality of services that health workers are able to deliver. In its report on improving performance in healthcare, the World Health Organization stresses that supportive supervision can contribute to the improved performance of health workers [ 27 ].

Unfair risk payment was another challenge identified by the current study. Even though the risks health professionals face do not differ, the risk payment for midwives is very low compared to others. This finding conforms with another qualitative narration emphasizing that DRC midwives experienced the lack of an equitable remuneration system, something also confirmed to be highly problematic in other low- and middle-income settings [ 7 , 8 , 22 , 28 ], leading to serious challenges. In settings where salaries are extremely low or unpredictable, proper remuneration is seen as crucial to worker motivation and the quality of midwifery care [ 29 , 30 ].

The limited organizational power of midwives was another identified challenge of MLC. This finding is in step with other studies emphasizing that limited senior midwifery positions in the health system constrain midwives’ decision-making power and capability, compounded by limited opportunities for midwifery personnel to raise their concerns with the responsible bodies. Hence, midwives need to take control of their own situations: when midwives are included in customizing their work environments, this has been shown to improve the quality of care for women and newborns around the globe [ 8 , 15 ].

Lack of work-related security was another barrier to MLC explored in this study: the midwives’ work environment was marked by insecurity, especially during night shifts, when midwives face verbal and even physical attacks, as mentioned by participants. This finding is supported by many other studies conducted around the globe [ 22 , 23 , 25 , 31 ] and agrees with another qualitative narration emphasizing that the midwives’ work environment was insecure, especially during night shifts, due to a lack of available security personnel; midwives often felt frightened on their way to and from work [ 7 ]. For midwives to provide quality care, it is crucial to create supportive work environments by ensuring sufficient preconditions, primarily security [ 31 ].

Conclusions

The study findings contribute to a better understanding of the facilitators of and barriers to a midwifery-led model of care at Dire Dawa public health facilities. Professional pride and good team spirit were the main facilitators of the midwifery-led model of care. In contrast, insufficient professional development, shortage of resources, feelings of demoralization, lack of motivation, limited organizational power of midwives, unfair risk payment, and lack of work-related security were the main barriers to a midwifery-led model of care at Dire Dawa public health facilities. Overall, midwifery care faces considerable challenges pertaining to the management of the healthcare service both locally and nationally.

Study implications

The findings of the study have implications for midwifery care practices in Eastern Ethiopia. Addressing these areas could contribute to reducing the infant mortality rate (IMR) and maternal mortality rate (MMR).

Strengths and limitations

The first strength of the study is that the participants represented different healthcare facilities, both urban and rural, offering deeper and more varied experiences and reflections. A second strength is the use of a midwife as moderator, who understood the midwives’ situation and thereby made the participants feel more comfortable and willing to share their stories. However, focusing solely on the perspective of the midwives is a limitation.

Recommendations

To overcome the barriers to midwifery care, based on the results of this study and in accordance with the 2020 Triad Statement of the International Council of Nurses, the International Confederation of Midwives, and the World Health Organization, it is suggested that policymakers, the Ethiopian Federal Ministry of Health, the Dire Dawa health office, and regulators in Dire Dawa city and settings with similar conditions coordinate actions on the following:

To the Ethiopian Federal Ministry of Health (FMOH)

Strengthen regular and continuous educational opportunities, training, and updates for midwives, and prioritize and enforce policies that include adequate and reasonable remuneration and hazard payment for midwives. Support midwifery leadership at all levels of the health system to contribute to health policy development and decision-making.

To the Dire Dawa Health Bureau

Ensure decent working conditions and an enabling environment for midwives, including reasonable working hours, occupational safety, safe staffing levels, and merit-based opportunities for career progression. Special efforts must be made to ensure safe, respectful, and enabling workplaces for midwives working the night shift. Midwifery leaders should be involved in management bodies within an appropriate legal framework. Provide regular mentorship on the functionality of the diagnostic instruments in the respective health facilities.

To Dire Dawa public health facilities

Create an arena for dialogue and implement a more supportive leadership style at the respective health facilities. Address midwives’ professional concerns early. Ensure midwives’ representation on management bodies. Ensure the selection criteria for educational opportunities and training are fair and inclusive. Ensure the safety and security of midwives, especially those who work night shifts, and assign adequate staff (midwives and security guards) to the night shifts.

To the Ethiopian Midwifery Association

Influence different stakeholders to address midwives’ concerns, such as hazard payment and educational opportunities.

Data availability

All the datasets for this study are available from the corresponding author upon request.

Abbreviations

FGD: Focused group discussion

IDI: In-depth interview

IMR: Infant mortality rate

KII: Key informant interview

MCH: Maternal and child health

MLC: Midwife-led model of care

NMR: Neonatal mortality rate

Midwives Alliance of North America. The midwives model of care. The MANA core documents; 2020.

WHO. Midwife-led care delivers positive pregnancy and birth outcomes. The Global Health Workforce Alliance; 2020.

ICM. Midwifery led care, the first choice for all women. Netherlands; 2017.

Alba R, Franco R, Patrizia B, Maria CB, Giovanna A, Chiara F, Isabella N. The midwifery-led care model: a continuity of care model in the birth path. Acta Bio Medica: Atenei Parmensis. 2019;90(Suppl 6):41.

Dahl B, Heinonen K, Bondas TE. From midwife-dominated to midwifery-led antenatal care: a meta-ethnography. Int J Environ Res Public Health. 2020;17(23):8946.

McConville F, Lavender DT. Quality of care and midwifery services to meet the needs of women and newborns. BJOG: Int J Obstet Gynecol. 2014;121.

Shahnaz S, Jan R, Lakhani A, Sikandar R. Factors affecting the midwifery-led service provider model in Pakistan. J Asian Midwives (JAM). 2015;1(2):33–45.

Bogren M, Grahn M, Kaboru BB, Berg M. Midwives’ challenges and factors that motivate them to remain in their workplace in the Democratic Republic of Congo—an interview study. Hum Resour Health. 2020;18:1–0.

Bremnes HS, Wiig ÅK, Abeid M, Darj E. Challenges in day-to-day midwifery practice; a qualitative study from a regional referral hospital in Dar es Salaam, Tanzania. Glob Health Action. 2018;11(1):1453333.

Yigzaw T, Abebe F, Belay L, Assaye Y, Misganaw E, Kidane A, Ademie D, van Roosmalen J, Stekelenburg J, Kim YM. Quality of midwife-provided intrapartum care in Amhara regional state, Ethiopia. BMC Pregnancy Childbirth. 2017;17:1–2.

Federal Democratic Republic of Ethiopia. Mini Demographic and Health Survey 2019. Ethiopian Public Health Institute, Addis Ababa; The DHS Program, ICF, Rockville, Maryland, USA; May 2021.

Federal Democratic Republic of Ethiopia. Demographic and Health Survey 2016. Central Statistical Agency, Addis Ababa, Ethiopia; The DHS Program, ICF, Rockville, Maryland, USA; July 2017.

UNICEF for every child. Situation Analysis of children and women. Dire Dawa Administration; 2020.

Federal Ministry of Health. Midwifery care process, 2021.

Dire Dawa administration Regional Health Bureau. 2017 six months report [unpublished].

Shenton AK. Strategies for ensuring trustworthiness in qualitative research projects. Educ Inform. 2004;22(2):63–75.

Korstjens I, Moser A. Series: Practical guidance to qualitative research. Trustworthiness and publishing. Eur J Gen Pract. 2018;24(1):120–4.

Behruzi R, Hatem M, Fraser W, Goulet L, Ii M, Misago C. Facilitators and barriers in the humanization of childbirth practice in Japan. BMC Pregnancy Childbirth. 2010;10:1–8.

Adatara P, Amooba PA, Afaya A, Salia SM, Avane MA, Kuug A, Maalman RS, Atakro CA, Attachie IT, Atachie C. Challenges experienced by midwives working in rural communities in the Upper East Region of Ghana: a qualitative study. BMC Pregnancy Childbirth. 2021;21:1–8.

Roets L. Independent midwifery practice: opportunities and challenges. Afr J Phys Health Educ Recreation Dance. 2014;20(3):1209–24.

Mselle LT, Moland KM, Mvungi A, Evjen-Olsen B, Kohi TW. Why give birth in health facility? Users’ and providers’ accounts of poor quality of birth care in Tanzania. BMC Health Serv Res. 2013;13:1–2.

Bogren M, Erlandsson K, Byrskog U. What prevents midwifery quality care in Bangladesh? A focus group enquiry with midwifery students. BMC Health Serv Res. 2018;18(1):639.

Mtegha MB, Chodzaza E, Chirwa E, Kalembo FW, Zgambo M. Challenges experienced by newly qualified nurse-midwives transitioning to practice in selected midwifery settings in northern Malawi. BMC Nurs. 2022;21(1):236.

Floyd L. Helping midwives in Ghana to reduce maternal mortality. Afr J Midwifery Women’s Health. 2013;7(1):34–8.

Filby A, McConville F, Portela A. What prevents quality midwifery care? A systematic mapping of barriers in low and middle income countries from the provider perspective. PLoS ONE. 2016;11(5):e0153391.

Prytherch H, Kagoné M, Aninanya GA, Williams JE, Kakoko DC, Leshabari MT, Yé M, Marx M, Sauerborn R. Motivation and incentives of rural maternal and neonatal health care providers: a comparison of qualitative findings from Burkina Faso, Ghana and Tanzania. BMC Health Serv Res. 2013;13:1–5.

World Health Organization. The world health report 2000: health systems: improving performance. World Health Organization; 2000.

Oyetunde MO, Nkwonta CA. Quality issues in midwifery: a critical analysis of midwifery in Nigeria within the context of the International Confederation of Midwives (ICM) global standards. Int J Nurs Midwifery. 2014;6(3):40–8.

Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, Roder-DeWan S, Adeyi O, Barker P, Daelmans B, Doubova SV, English M. High-quality health systems in the Sustainable Development goals era: time for a revolution. Lancet Global Health. 2018;6(11):e1196–252.

Mathauer I, Imhoff I. Health worker motivation in Africa: the role of non-financial incentives and human resource management tools. Hum Resour Health. 2006;4:1–7.

World Health Organization. Global strategy on human resources for health: workforce 2030.

Acknowledgements

We are very grateful to Dire Dawa University for the financial support for this study and to the College of Medicine and Health Sciences for its monitoring. We also thank all study participants for their willingness to respond to our questions.

Funding

This work was funded by Dire Dawa University for data collection purposes. The Dire Dawa University College of Medicine and Health Sciences was involved in the project through monitoring and evaluation of the work from the beginning to the submission of results. However, this organization was not involved in the design, analysis, critical review of the intellectual content, or manuscript preparation, and its budget did not include publication.

Author information

Authors and Affiliations

College of Medicine and Health Sciences, Dire Dawa University, Dire Dawa, Ethiopia

Mickiale Hailu, Aminu Mohammed, Daniel Tadesse, Neil Abdurashid, Legesse Abera, Tadesse Weldeamaniel, Tekleberhan Hailemariam, Netsanet Melkamu, Tewodros Getnet, Yibekal Manaye, Tariku Derese, Muluken Yigezu, Natnael Dechasa & Anteneh Atle

College of Health Sciences, Wachemo University, Hossana, Ethiopia

Samrawit Ali & Yesuneh Dejene

College of Health Sciences, Mekelle University, Mekelle, Ethiopia

Meklit Girma

Contributions

MH developed the study proposal, served as the primary lead for study implementation and data analysis/interpretation, and was a major contributor in writing and revising all drafts of the paper. AM, DT, NA, LA, and SA supported study implementation and data analysis, and contributed to writing the initial draft of the paper. YD, TW, MG, TH, and NM supported study recruitment and contributed to writing the final draft of the paper. TG, YM, TD, MY, ND, and AA conceptualized the study, acquired funding, led protocol development, co-led study implementation and data analysis/interpretation, and were major contributors in writing and revising all drafts of the paper. All authors contributed to its content, and all authors read and approved the final manuscript.

Corresponding author

Correspondence to Mickiale Hailu.

Ethics declarations

Ethics approval and consent to participate

All methods were carried out in accordance with relevant guidelines and regulations. The institutional review board of Dire Dawa University examined and evaluated the study for its methodological approach and ethical concerns. Ethical clearance was obtained from the Dire Dawa University Institutional Review Board, and an official letter from the research affairs directorate office of Dire Dawa University was submitted to the Dire Dawa health office and distributed to the selected health institutions. Voluntary informed written consent was obtained from the study participants after the objectives of the study were explained to them, and the confidentiality of the study participants was assured throughout the study period. Participants were informed that they had the right to terminate the discussion (interview) or to decline to answer any questions they did not want to answer.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Hailu, M., Mohammed, A., Tadesse, D. et al. Facilitators and barriers of midwife-led model of care at public health institutions of dire Dawa city, Eastern Ethiopia, 2022: a qualitative study. BMC Health Serv Res 24 , 998 (2024). https://doi.org/10.1186/s12913-024-11417-x

Received : 03 September 2023

Accepted : 09 August 2024

Published : 28 August 2024

DOI : https://doi.org/10.1186/s12913-024-11417-x

Keywords

  • Continuous midwifery care model
  • Obstetric care by midwives
  • Barriers to obstetric care
  • Facilitators to obstetric care

BMC Health Services Research

ISSN: 1472-6963

  • Open access
  • Published: 28 August 2024

Using PACS for teaching radiology to undergraduate medical students

  • Mojtahedzadeh Rita 1 ,
  • Mohammadi Aeen 1 ,
  • Farnood Rajabzadeh   ORCID: orcid.org/0000-0001-6581-4716 2 &
  • Akhlaghi Saeed 3  

BMC Medical Education volume  24 , Article number:  935 ( 2024 ) Cite this article

Traditional radiology education for medical students predominantly uses textbooks, PowerPoint files, and hard-copy radiographic images, which often lack student interaction. PACS (Picture Archiving and Communication System) is a crucial tool for radiologists in viewing and reporting images, but its use in medical student training remains limited.

This study investigates the effectiveness of using PACS (Picture Archiving and Communication System) for teaching radiology to undergraduate medical students compared to traditional methods.

Fifty-three medical students were divided into a control group (25 students) receiving traditional slide-based training and an intervention group (28 students) using PACS software to view complete patient images. Pre- and post-course tests and satisfaction surveys were conducted for both groups, along with self-evaluation by the intervention group. The validity and reliability of the assessment tools were confirmed through expert review and pilot testing.

No significant difference was found between the control and intervention groups regarding gender, age, and GPA. Final multiple-choice test scores were similar (intervention: 10.89 ± 2.9; control: 10.76 ± 3.5; p  = 0.883). However, the intervention group demonstrated significantly higher improvement in the short-answer test for image interpretation (intervention: 8.8 ± 2.28; control: 5.35 ± 2.39; p  = 0.001). Satisfaction with the learning method did not significantly differ between groups (intervention: 36.54 ± 5.87; control: 39.44 ± 7.76; p  = 0.129). The intervention group reported high familiarity with PACS capabilities (75%), CT principles (71.4%), interpretation (64.3%), appropriate window selection (75%), and anatomical relationships (85.7%).

PACS-based training enhances medical students’ diagnostic and analytical skills in radiology. Further research with larger sample sizes and robust assessment methods is recommended to confirm and expand upon these results.

Introduction

Radiology is a fundamental component of basic medical education, bridging the gap between anatomy and clinical practice. Like other fields of medical education, radiology education faces the challenge of transitioning from passive learning to interactive and experiential learning [ 1 , 2 ]. With the expansion of the field, radiology education has undergone a revolution. Doctors used to carry plain films and show them using projectors or view boxes, because plain films were the main diagnostic method in radiology during the 1970s. Since the introduction of computed tomography (CT) and magnetic resonance imaging (MRI) in the late 1980s, the increase in the amount of image data associated with these imaging modalities has led to a greater demand for compatible information storage systems. The picture archiving and communication system (PACS), capable of storing, retrieving, distributing, analyzing, and digitally processing medical images, has therefore become an essential tool in clinical work today [ 3 , 4 , 5 ]. However, due to hardware and software limitations, the use of PACS in radiology education remains somewhat limited [ 6 , 7 ]. Currently, most radiology education still relies heavily on textbooks and traditional computer media such as PowerPoint or Word files, both of which lack student interaction. PACS offers advantages such as interactive image viewing, 3D reconstruction capabilities, and the ability to simulate real-life radiology practice, which traditional methods lack. These features enhance students’ understanding and interpretation of radiological images, addressing the shortcomings of conventional methods. Medical students rarely get to see whole image series in class, as a real radiologist would, and it is often a challenge for them to understand 3D anatomical images or gain a comprehensive view of diseases when they attempt to independently identify abnormal findings and formulate radiological diagnoses. According to one study, only a limited number of final-year medical students had satisfactory basic radiology interpretation skills, which necessitates the search for a more effective method of training [ 8 ].

Recent advancements in radiology teaching beyond face-to-face lecturing have been reported, including problem-based learning (discussion of a case or scenario consistent with curriculum objectives, with students’ independent research to complete subject knowledge and share findings), case-based learning (showing several radiographs of the same subject and discussing them), and team-based learning (student collaboration in learning groups) [ 8 ].

In contrast to these conventional methods, a new method was created under the concept of learning from experience. This virtual method is based on individual learning in the PACS software environment, enabling students, in the role of radiologists, to interpret and diagnose radiology in a simulation environment. All common items are shown to the student using PACS instead of selected specific images. Students are allowed to see the whole image, do basic reconstructions of the images freely, and find specific features of the image by themselves. During this process, students can access PACS and clinical information, integrating clinical knowledge and 3D reconstruction ability, essential to arriving at radiological diagnoses. PACS enables efficient archiving and transfer of medical images. Initially developed in the U.S. in the 1980s, it later expanded to Europe and Asia, including China, Japan, and Korea [ 9 ]. Iran has also implemented PACS, improving its medical imaging infrastructure with global DICOM standards.
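
As a concrete illustration of the kind of object a PACS stores and a DICOM viewer displays, the short sketch below reads a single DICOM file with the Python pydicom library and prints a few standard header attributes. The file name is hypothetical, and the snippet only illustrates the DICOM format; it is not part of the study's setup.

```python
# Minimal sketch: read one DICOM object, as stored in a PACS, with pydicom.
# The file name is hypothetical; Modality, Rows, and Columns are standard
# DICOM attributes that any viewer shows alongside the pixel data.
import pydicom

ds = pydicom.dcmread("example_study.dcm")
print("Modality:         ", ds.Modality)                                # e.g. "CT" or "MR"
print("Study description:", getattr(ds, "StudyDescription", "<none>"))  # optional attribute
print("Image size:       ", ds.Rows, "x", ds.Columns)
pixels = ds.pixel_array  # the image itself, as a numpy array
```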

The goal of this study was to compare the effectiveness of practical radiology training through traditional face-to-face interactive lectures with a virtual practical radiology training method based on individual learning in the PACS software environment for medical students.

The use of PACS in healthcare in Iran has only recently become widespread, primarily for patient management and diagnosis, and it is rarely used for educational purposes. Iran, as a country with a rapidly developing healthcare system, faces unique challenges in medical education. This study seeks to compare radiology education in Iran with the existing literature and to understand its context in relation to the region and the world. Managing medical education effectively is a significant challenge, and this research addresses it by introducing innovative teaching methods. Specifically, the current study investigates the effectiveness of using PACS in medical students’ radiology education compared to traditional methods.

Methods

The research population was the medical students of the Islamic Azad University of Mashhad during the academic year 2021–2022. The inclusion criteria were being a medical trainee student and consenting to enter the study; the exclusion criteria were having previously graduated in radiology or another medical science and having repeated the radiology course. Participation in the study was voluntary, and students were informed that it would not impact their end-of-section evaluation. After obtaining informed consent, they participated in the study. Ethical approval for this study was obtained from the Virtual University of Medical Sciences under reference number IR.VUMS.REC.1400.022. The proposal was implemented after being approved by the ethics committee and obtaining the code of ethics.

Participants

The sample size was calculated using power analysis to ensure the study had sufficient power to detect a statistically significant difference between the control and intervention groups. Assuming an effect size of 0.5, a significance level (alpha) of 0.05, and a power of 0.80, it was determined that at least 50 participants were needed. To account for potential dropouts and ensure robustness, a total of 53 students were included in the study. According to the calculated sample size, four rotations of radiology internship students were included for each of the control and intervention groups (each rotation comprising about 5–10 students). To prevent contamination between groups, the first four rotations were assigned to the control group and the next four rotations to the intervention group.
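
For readers who want to reproduce this kind of sample-size calculation, the sketch below uses the statsmodels power-analysis API with the inputs stated above (effect size 0.5, alpha 0.05, power 0.80). The choice of test is an assumption on our part: the paper does not say which design was assumed, and a standard two-sample t-test with these inputs yields roughly 64 participants per group, so the 50-participant figure presumably reflects different underlying assumptions.

```python
# Illustrative power analysis with the reported inputs (effect size 0.5,
# alpha 0.05, power 0.80), assuming a two-sample t-test design, which the
# paper does not confirm.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Two-sample t-test: about {n_per_group:.0f} participants per group")
```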

The validity of the tools used in this study was established through expert review and pilot testing. Content validity was confirmed by 10 faculty members specializing in radiology, and reliability was assessed using Cronbach’s alpha, yielding a coefficient of 0.91, indicating high internal consistency. Three tools were used: one measuring knowledge, one measuring performance, and one measuring student satisfaction in both groups (Appendix 1 ), plus a self-evaluation of PACS learning in the intervention group (Appendix 2 ). After one month of classes, the final exam was administered, combining 20 multiple-choice questions and 5 short-answer questions (description and image recognition); the scores were collected as an objective assessment. To provide a subjective assessment of radiology learning, all students were invited to complete a satisfaction questionnaire on how radiology was taught, and the students of the intervention group were also invited to complete a self-evaluation questionnaire on their PACS learning. A 5-point Likert scale was used in both researcher-made questionnaires, which were created for this study. Informed consent was obtained from each patient whose data were used in the study, ensuring they were fully aware of how their medical images would be utilized for educational purposes.

Familiarization with PACS

Before starting the study with the PACS system, students were given an introductory session that covered the basics of PACS functionality, including how to navigate the software, view and manipulate images, and use the various tools available for image analysis.

Knowledge and performance measurement tools

In the knowledge section, questions evaluated theoretical content; the performance section involved diagnosing radiographic images, with students describing the type of radiography, the pathological signs, and the final diagnosis. Multiple-choice questions and short-answer questions were used to assess knowledge and performance. The imaging used in this study included plain radiographs, computed tomography (CT) scans, and magnetic resonance imaging (MRI), chosen to cover a broad spectrum of radiological techniques relevant to the medical curriculum. For the knowledge assessment, 20 multiple-choice questions were written based on the objectives of the lesson and the blueprint, and two colleagues from the radiology department confirmed that the questions were consistent with the lesson objectives. For the performance assessment, 5 radiology images, likewise matched to the lesson objectives and the blueprint and approved by two colleagues from the radiology department as covering the lesson objectives, were provided to the students, who had to describe and diagnose the radiographs. The radiology images in both groups adequately covered the goals, but they were taught to the students in the two different ways described.

Student satisfaction questionnaire

This questionnaire aimed to determine students’ satisfaction with the educational method. It consisted of ten questions graded on a 5-point Likert scale; scores ranged from 10 to 50, with higher scores indicating greater satisfaction. Content and face validity were confirmed by 10 faculty members, and reliability was assessed with a Cronbach’s alpha of 0.91.

Student self-assessment questionnaire

This questionnaire evaluated learning from the PACS teaching method. It consisted of twelve questions graded on a 5-point Likert scale; scores ranged from 12 to 60, with higher scores indicating greater learning. Content and face validity were confirmed by 10 faculty members, and reliability was assessed with a Cronbach’s alpha of 0.91.
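
As a reference for how the reported reliability coefficient can be computed, here is a minimal sketch of Cronbach's alpha in Python. The demo matrix of Likert responses is randomly generated and purely illustrative; it is not the study's data.

```python
# Cronbach's alpha for a (respondents x items) matrix of Likert scores.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = items."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Demo only: random, uncorrelated answers give alpha near zero; real
# questionnaire data with correlated items yield higher values like 0.91.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(30, 12))  # 30 respondents, 12 items, 5-point scale
print(f"alpha = {cronbach_alpha(demo):.2f}")
```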

Implementation method in the control group

The teaching strategy involved traditional face-to-face interactive lectures using PowerPoint presentations. The practical part included demonstrating selected radiographic images on slides and discussing their interpretation.

This method aimed to develop students’ ability to diagnose and interpret radiographs through structured lectures and guided discussions. A pre-test was conducted in the first session to determine students’ initial knowledge and performance levels. The classes were held daily in person. After the theoretical part was taught with a PowerPoint presentation, radiographic images were shown to the control group for interpretation and discussion. The post-test of knowledge and performance was administered, and the education satisfaction questionnaire completed, at the end of each rotation.

Bias caused by human factors during the teaching of the two groups was controlled by standardizing the teaching materials and methods across both groups. Additionally, the instructors were blinded to the group assignments to prevent any conscious or unconscious bias in teaching and assessment.

Implementation method in the intervention group

The stages of developing the training course using PACS software and DICOM were as follows: 1) initial planning and curriculum alignment; 2) selection of relevant radiographic cases; 3) configuration of PACS workstations; 4) training faculty on the PACS software; and 5) implementation of PACS-based learning sessions for students, followed by assessment and feedback.

After the control group, the rotations of the intervention group were included in the study, and the pre-test was administered to the students of the intervention group: knowledge was assessed with multiple-choice questions and performance with short-answer questions on radiographic images. The classes were held daily in person. In the intervention group, after the students participated in the theoretical part of the course, which was similar to the control group’s and was held face-to-face, the practical part was taught virtually with Adobe Connect software, and there was no face-to-face class for radiographic images. Students were given access to the RadiAnt PACS software (installed on their personal desktops). Following the teaching of the theoretical part, and based on the goals of the radiology course for medical trainees, a number of images of the brain, lungs, bones, urinary tract, and digestive system (including radiography, CT, and MRI) were assigned to the students of the intervention group, and the images of these patients were completely at their disposal.

The computers used were personal desktops with standardized configurations; adjustments and calibrations were made to ensure all students could view images with consistent quality and brightness, replicating the clinical environment as closely as possible. The software enables students to perform basic operations on images, such as windowing, comparing different MRI sequences, and performing multiplanar reconstruction (MPR) or 3D reconstruction, exactly as a radiologist does. After studying the material and reviewing the images, the students were required to announce the completion of their study to the teacher, and they were given the opportunity to review the images, ask questions, and solve problems with the teacher in the virtual space.
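
RadiAnt is a GUI viewer, but the windowing operation it exposes is simple arithmetic. The sketch below shows that operation with pydicom and numpy; the file name and the two window presets are illustrative examples, not settings taken from the study.

```python
# Window/level ("windowing") of a CT slice: map a chosen Hounsfield-unit
# range to the 0-255 display range. File name and presets are illustrative.
import numpy as np
import pydicom

ds = pydicom.dcmread("chest_ct_slice.dcm")  # hypothetical file
hu = ds.pixel_array.astype(np.float32)
# CT pixel data are stored as raw values; apply the rescale to get HU.
hu = hu * float(getattr(ds, "RescaleSlope", 1)) + float(getattr(ds, "RescaleIntercept", 0))

def window(image: np.ndarray, center: float, width: float) -> np.ndarray:
    """Clip to [center - width/2, center + width/2] and scale to 0-255."""
    lo, hi = center - width / 2, center + width / 2
    return ((np.clip(image, lo, hi) - lo) / (hi - lo) * 255).astype(np.uint8)

lung_view = window(hu, center=-600, width=1500)      # typical lung window
soft_tissue_view = window(hu, center=40, width=400)  # typical mediastinal window
```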

The post-test of knowledge and performance was administered in the intervention group, and the education satisfaction questionnaire and the self-assessment questionnaire for PACS learning were completed at the end of each rotation.

Data analysis

The data were analyzed with SPSS-17 (IBM, USA). Central tendency and dispersion indices were used for descriptive statistics; in the analytical section, independent t-tests, paired t-tests, and chi-square tests were used to compare the data. The significance level was set at p  < 0.05.
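
The study ran these tests in SPSS; purely for illustration, the sketch below performs the same three kinds of comparison in Python with scipy on hypothetical score arrays (only the gender table uses counts actually reported in the Results). The score values are stand-ins, since the raw data are available only from the authors on request.

```python
# The three comparisons described above, run with scipy on hypothetical
# data. Means/SDs loosely follow the reported values; the gender counts
# are the ones given in the Results section.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(10.76, 3.5, size=25)      # hypothetical post-test scores
intervention = rng.normal(10.89, 2.9, size=28)

# Independent t-test: compare post-test scores between the two groups.
t_ind, p_ind = stats.ttest_ind(intervention, control)

# Paired t-test: compare one group's pre- vs post-test scores.
pre = rng.normal(6.0, 2.5, size=28)
post = pre + rng.normal(3.0, 1.5, size=28)
t_rel, p_rel = stats.ttest_rel(post, pre)

# Chi-square test on the 2x2 gender table (16/9 vs 16/12 women/men).
chi2, p_chi, dof, expected = stats.chi2_contingency(np.array([[16, 9], [16, 12]]))
print(p_ind, p_rel, p_chi)
```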

Results

A total of 53 students entered this study, 28 in the intervention group and 25 in the control group. The students were similar in terms of age, gender, and overall academic average ( p  > 0.05) (Table  1 ). The average age was 26.04 ± 3.96 in the control group and 24.29 ± 2.14 in the intervention group; the independent t-test shows that average age did not differ between the two groups ( P  = 0.060). The overall grade point average of the medical course was 15.73 in the control group and 16.01 in the intervention group, with no difference ( P  = 0.383) (Table  1 ).

The control group included 25 people (16 women, 9 men) and the intervention group 28 people (16 women, 12 men); the chi-square test shows that the two groups did not differ in terms of gender ( P  = 0.610). For the evaluation, a pre-test with two parts, a multiple-choice test and a short-answer test on the interpretation of radiology images, was given at the beginning. The same exam was repeated at the end of the one-month session (post-test), with the multiple-choice test assessing knowledge and the short-answer test assessing performance.

The independent t-test shows that the multiple-choice test scores before and after the intervention, as well as the changes in those scores, did not differ between the two groups ( P  = 0.084, P  = 0.883, P  = 0.764). The paired t-test shows that in both the case and control groups, students’ multiple-choice scores were higher after the intervention than before ( P  < 0.001, P  < 0.001) (Table  2 ). For the short-answer test on the interpretation of radiology images, the independent t-test shows that the groups did not differ before the intervention ( P  = 0.444) but did differ after it ( P  = 0.002), and the changes in the test scores differed between the two groups, being greater in the intervention group ( P  < 0.001). The paired t-test shows that in both the case and control groups, short-answer scores were higher after the intervention than before ( P  < 0.001, P  < 0.001).

The independent t-test shows no difference in satisfaction with the teaching method between the control group (39.44 ± 7.76) and the intervention group (36.54 ± 5.87) ( P  = 0.129) (Table  3 ).

The analysis of the satisfaction questionnaire in the intervention group showed that most students were satisfied with the organization (64%) and interaction (64%) of the learning activity (Table  3 ). Most students (85%) found this learning activity useful for learning radiology. More importantly, a large percentage of students stated that PACS training encouraged a personal interest in radiology (82%), and many were satisfied with the quality of learning (71%). On the self-evaluation form, the intervention group reported familiarity with the capabilities of PACS (75%), the principles of CT (71.4%) and its interpretation (64.3%), choosing the appropriate window (75%), and the location of different organs in the image (82.9%) and their adjacencies (85.7%) (Table  3 ). An evaluation of the intervention’s impact on participants’ knowledge showed significant improvements in their understanding and diagnostic skills, highlighting the effectiveness of the PACS-based training method.

Discussion

Traditional practical radiology training, which continues to be used today, provides only a cross-section of the entire routine imaging. While this teaching method may be useful in helping students manage the features of routine imaging, it may be inadequate for learning anatomy [ 10 ]. Hence, students may have difficulty interpreting images independently during clinical practice when they are expected to do so [ 11 ]. Although a variety of radiology educational models, such as problem-based learning and the use of dynamic images, can solve part of this problem, images from the actual workplace are the most ideal learning material [ 12 , 13 ]. Experiential learning theory, developed by Dewey, Kolb, and others, explains how students learn in their own way as they react to their perceptions of real experiences. This concept rests on the principle of constructivism, which is the basis of experiential learning [ 13 ].

During this study, a training course using PACS software and a DICOM viewer was developed to simulate a work environment reflecting the typical clinical work of a radiologist. The results indicated that this educational approach allows for better clinical guidance, which is necessary to help students form a holistic view of anatomy and pathology. Most importantly, this educational method helps students develop critical thinking and a systematic approach to formulating imaging interpretations and differential diagnoses, which may be partially due to the exploratory atmosphere of the experiential learning mode. Apart from the objective improvement in imaging descriptions and interpretations, students’ feedback on the self-assessment questionnaires showed subjective improvements in self-confidence, as well as in skills such as determining the order of image reading, choosing the appropriate window, and choosing the reconstruction method, which may result from free, direct activity during learning and discussion. In addition, the experiential approach allows for better interactions that increase interest in radiology [ 14 ].

To provide students with access to the RadiAnt PACS software (installed on their personal desktops), following the theoretical section and based on the objectives of the radiology course for medical trainees, a number of images from the brain, lungs, bones, urinary, and gastrointestinal systems (including radiography, CT, and MRI) were assigned to the intervention group. These patient images were fully available to them. The software enables students to perform basic operations on images, such as window adjustment, comparing different MRI sequences, and performing multiplanar reconstruction (MPR) or 3D reconstruction, exactly as a radiologist does within the PACS system.

To resolve the issue of patient confidentiality, all patient identifiers were removed from the images before they were made accessible to students. Additionally, access to PACS was restricted to ensure that students could only view and analyze the images without accessing sensitive patient information.

Undergraduate students had limited access to PACS, ensuring they could not modify or delete any content. Additional software controls were implemented to restrict access and prevent any unauthorized changes. This ensured that the integrity of the medical images was maintained, and patient care data was not compromised.
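
The paper does not describe the de-identification tooling; as one plausible sketch, the snippet below blanks common identifying tags in a DICOM file with pydicom before the file is shared for teaching. The tag list and file paths are illustrative assumptions, not the authors' actual procedure.

```python
# One possible de-identification step: blank identifying DICOM tags and
# strip private tags before sharing a file for teaching. Illustrative
# tag list and paths; not the authors' actual workflow.
import pydicom

IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName", "InstitutionName",
]

def deidentify(src: str, dst: str) -> None:
    ds = pydicom.dcmread(src)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""  # blank the identifier
    ds.remove_private_tags()                 # drop vendor-specific tags
    ds.save_as(dst)

deidentify("original_case.dcm", "teaching_case.dcm")  # hypothetical paths
```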

Our study shows the effectiveness of PACS-based training in the study of anatomical imaging. Anatomy is the basis of radiology training. In theory, reading CT and MRI images is a good way to study anatomy, because continuous scanning helps students understand the three-dimensional relationships of body parts [ 15 , 16 ]. Studies have concluded that anatomical imaging increases the quality and efficiency of teaching human anatomy [ 17 ]. However, it is difficult to discern an entire anatomical structure from a single cross-sectional image, which increases students’ confusion [ 16 ]. The results of this study provide evidence that continuous scan reading improves students’ comprehensive understanding of anatomy. Furthermore, with multiple reconstruction methods, 3D images are more comprehensively examined by students, which has been confirmed by other studies [ 18 ].

The integration of PACS in medical education has been shown to enhance the learning experience by providing students with interactive and practical tools for understanding radiological images. Recent advancements in healthcare technology acceptance highlight the importance of user-friendly interfaces and training for successful implementation [ 19 ]. Moreover, the current state of medical education in the UK emphasizes the adoption of advanced technologies like PACS to improve educational outcomes and prepare students for real-world clinical environments [ 20 ]. The utilization of big data technologies in conjunction with PACS further enhances the management and analysis of medical images, facilitating a more personalized and effective learning experience for medical students [ 21 ]. Additionally, recent market reports indicate a steady growth in the adoption of medical imaging technologies, including PACS, driven by advancements in AI and machine learning, which are poised to revolutionize medical education [ 22 ]. These developments collectively underscore the critical role of PACS in modernizing medical education and improving the quality of training for future healthcare professionals. Also, the implementation of PACS could significantly enhance radiology education by providing access to digital imaging resources that may otherwise be unavailable.

Chen et al.’s study [ 8 ] was conducted on 101 students, whereas ours included 53. Satisfaction with PACS training averaged 80% in Chen’s study and about 65% in ours. The percentage of students interested in radiology was almost similar in the two studies. Also, as in Chen’s study, there was no difference in pre-test scores between the intervention and control groups in our study, and the final scores did not differ significantly in either study; however, the scores for image interpretation, which in our study were assessed with short-answer questions on a set of selected images presented in PowerPoint, showed a significant difference in both our study and Chen’s.

In the studies of Restauri [ 6 ] and Soman [ 23 ], as in our study, PACS was used to teach medical students, but at the end of the course only a survey form was completed by the students, and the impact of using PACS on students’ ability to interpret radiology images was not assessed. In those two studies, after using PACS, students stated that they gained more confidence in interpreting images and would use PACS in the future, which was similar to the survey results in our study. It takes considerable effort to provide this kind of training. PACS and a suitable DICOM viewer are the basic software requirements, and to protect patient privacy, DICOM data were copied from PACS rather than linked to the original PACS; in this way, a PACS simulation for medical education was obtained [ 6 ]. In addition, teacher guidance is a vital element in education. A minimum of 3 instructors with experience in standard radiology training is required for a class, as team discussion is a major component of the training. In experiential courses, students need educational help both in guiding the reading of the images and in answering questions, so the teaching professors need specific work experience in a radiology department. The lack of radiology professors therefore limits the use of this training model on a larger scale. There are several limitations to the study. First, due to the limited number of supervisors, the sample size was correspondingly limited. Second, it was a single-center study. Third, due to operational constraints, some students did not answer some of the questions in the questionnaire; although the probability is very low, this could still bias the results. Fourth, although we controlled for faculty and teaching standards between the two groups, human bias is still a factor that cannot be completely avoided in practice. Fifth, although we used objective assessment measures, the study also revealed the weakness of our assessment system in radiology education: the study instruments were paper-and-pencil tests, with most questions consisting of objective items that test memory, such as multiple-choice and short-answer questions, and items testing application ability were limited. As a result, only a small part of the final test reflects the difference between the experimental training group and the control group. Other test forms, such as bedside examinations and multi-station examinations, should be used in the future for better evaluation [ 24 , 25 ]. In this study, according to the curriculum, students entered the radiology department in different numbers during different periods, and 4 rotations of students were entered into the study for each group. The exams were held at the end of the one-month section, so the exam was held in the control group and in the intervention group at different times, although we tried to make the questions the same in number and content. In the study of Chen et al. [ 8 ], the test was conducted at the end of the semester and simultaneously for the two groups. If this study is conducted with a larger number of students and in multiple centers, the results will be more valid.

Conclusions

PACS-based training is beneficial for medical students, enhancing their diagnostic and analytical skills in radiology. Further research with larger sample sizes and robust assessment methods is recommended to confirm and expand upon these results. We believe our findings suggest that PACS, which is used routinely in the diagnostic healthcare context, can also be used in medical students’ education, integrating healthcare into education.

Data availability

The demographic and clinical datasets generated and/or analyzed during the current study are available from the corresponding author (Dr. Farnood Rajabzadeh) upon reasonable request.

Abbreviations

PACS: Picture Archiving and Communication System

CT: Computed Tomography

MRI: Magnetic Resonance Imaging

GPA: Grade Point Average

DICOM: Digital Imaging and Communications in Medicine

SPSS: Statistical Package for the Social Sciences

MPR: Multiplanar Reconstruction

AI: Artificial Intelligence

Bhogal P, Booth TC, Phillips AJ, Golding SJ. Radiology in the undergraduate medical curriculum -- who, how, what, when, and where? Clin Radiol. 2012;67(12):1146–52.

Naeger DM, Webb EM, Zimmerman L, Elicker BM. Strategies for incorporating radiology into early medical school curricula. J Am Coll Radiol. 2014;11(1):74–9.

Forsberg D, Rosipko B, Sunshine JL. Factors affecting Radiologist’s PACS usage. J Digit Imaging. 2016;29(6):670–6.

Mirsadraee S, Mankad K, McCoubrie P, Roberts T, Kessel D. Radiology curriculum for undergraduate medical studies–a consensus survey. Clin Radiol. 2012;67(12):1155–61.

Arriero A, Bonomo L, Calliada F, Campioni P, Colosimo C, Cotroneo A, Cova M, Ettorre GC, Fugazzola C, Garlaschi G, Macarini L, Mascalchi M, Meloni GB, Midiri M, Mucelli RP, Rossi C, Sironi S, Torricelli P, Beomonte BZ, Zompatori M, Zuiani C. E-learning in radiology: an Italian multicentre experience. Eur J Radiol. 2012;81(12):3936–41.

Restauri N, Bang T, Hall B, Sachs P. Development and utilization of a simulation PACS in undergraduate medical education. J Am Coll Radiol. 2018;15(2):346–9.

Zafar S, Safdar S, Zafar AN. Evaluation of use of e-Learning in undergraduate radiology education: a review. Eur J Radiol. 2014;83(12):2277–87.

Chen Y, Zheng K, Ye S, Wang J, Xu L, Li Z, Meng Q, Yang J, Feng ST. Constructing an experiential education model in undergraduate radiology education by the utilization of the picture archiving and communication system (PACS). BMC Med Educ. 2019;19(1):383.

Huang HK. Twenty-five years of Picture Archiving and Communication Systems (PACS) development. Iran J Radiol. 2007;4(2):1.

Pascual TN, Chhem R, Wang SC, Vujnovic S. Undergraduate radiology education in the era of dynamism in medical curriculum: an educational perspective. Eur J Radiol. 2011;78(3):319–25.

Sendra-Portero F, Torales-Chaparro OE, Ruiz-Gomez MJ, Martinez-Morillo M. A pilot study to evaluate the use of virtual lectures for undergraduate radiology teaching. Eur J Radiol. 2013;82(5):888–93.

Zhang S, Xu J, Wang H, Zhang D, Zhang Q, Zou L. Effects of problem-based learning in Chinese radiology education: a systematic review and meta-analysis. Med (Baltim). 2018;97(9):e0069.

Yardley S, Teunissen PW, Dornan T. Experiential learning: transforming theory into practice. Med Teach. 2012;34(2):161–4.

Branstetter BF, Humphrey AL, Schumann JB. The long-term impact of preclinical education on medical students’ opinions about radiology. Acad Radiol. 2008;15(10):1331–9.

Schober A, Pieper CC, Schmidt R, Wittkowski W. Anatomy and imaging: 10 years of experience with an interdisciplinary teaching project in preclinical medical education - from an elective to a curricular course. Rofo. 2014;186(5):458–65.

Jang HW, Oh CS, Choe YH, Jang DS. Use of dynamic images in radiology education: movies of CT and MRI in the anatomy classroom. Anat Sci Educ. 2018;11(6):547–53.

Grignon B, Oldrini G, Walter F. Teaching medical anatomy: what is the role of imaging today? Surg Radiol Anat. 2016;38(2):253–60.

Loke YH, Harahsheh AS, Krieger A, Olivieri LJ. Usage of 3D models of tetralogy of Fallot for medical education: impact on learning congenital heart disease. BMC Med Educ. 2017;17(1):54.

AlQudah AA, Al-Emran M, Shaalan K. Technology acceptance in healthcare: a systematic review. Appl Sci. 2021;11(22):10537. https://doi.org/10.3390/app112210537

GMC. The state of medical education and practice in the UK; 2022.

Geroski T, Jakovljević D, Filipović N. Big Data in multiscale modelling: from medical image processing to personalized models. J Big Data. 2023;10:72. https://doi.org/10.1186/s40537-023-00763-y .

Visage Imaging. Medical imaging market size, share & growth report; 2024.

Soman S, Amorosa JK, Mueller L, Hu J, Zou L, Masand A, Cheng C, Virk J, Rama H, Tseng I, Patel K, Connolly SE. Evaluation of medical student experience using medical student created StudentPACS flash based PACS simulator tutorials for learning radiological topics. Acad Radiol. 2010;17(6):799–807.

Reddy S, Straus CM, McNulty NJ, Tigges S, Ayoob A, Randazzo W, Neutze J, Lewis P. Development of the AMSER standardized examinations in radiology for medical students. Acad Radiol. 2015;22(1):130–4.

Monticciolo DL. The ACR diagnostic radiology in-training examination: evolution, current status, and future directions. J Am Coll Radiol. 2010;7(2):132–7.

Acknowledgements

We thank Farbod Rajabzadeh for helping with data gathering, and Ladan Goshayeshi and Lena Goshayeshi for helping with editing.

Funding

This study was supported by the Smart University of Medical Sciences and Mashhad Azad University of Medical Sciences.

Author information

Authors and Affiliations

Department of e-Learning in Medical Education, School of Medicine, Center of Excellence for E- learning in Medical Education, Tehran University of Medical Sciences, Tehran, Iran

Mojtahedzadeh Rita & Mohammadi Aeen

Department of Radiology, Faculty of Medicine, Mashhad Medical Sciences, Islamic Azad University, Mashhad, Iran

Farnood Rajabzadeh

Department of Community Medicine, Mashhad University of Medical Sciences, Mashhad, Iran

Akhlaghi Saeed

Contributions

RM and FR designed the study. FR was involved in the data gathering and interpretation of the results. AM and SA performed analyses. FR wrote the first draft of the manuscript. FR and RM edited the final version of the manuscript. All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Farnood Rajabzadeh.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Ethics Committee of the Smart University of Medical Sciences (ethics code: IR.VUMS.REC.1400.022, 4/12/2021) and conformed to the ethical principles contained in the Declaration of Helsinki. For experiments involving human participants, the participants signed an informed consent form before the study.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Rita, M., Aeen, M., Rajabzadeh, F. et al. Using PACS for teaching radiology to undergraduate medical students. BMC Med Educ 24, 935 (2024). https://doi.org/10.1186/s12909-024-05919-9

Received: 05 December 2023

Accepted: 16 August 2024

Published: 28 August 2024

DOI: https://doi.org/10.1186/s12909-024-05919-9

Keywords

  • Radiology education
  • Medical students

BMC Medical Education

ISSN: 1472-6920

COMMENTS

  1. Peer Review Examples (300 Key Positive, Negative Phrases)

    Discussing negative aspects in a peer review requires tact and empathy. Try focusing on behaviors and actions rather than personal attributes, and use phrases that suggest areas for growth. For example: "While your dedication to the project is admirable, it might be beneficial to delegate some tasks to avoid burnout."

  2. What Is Peer Review?

    The most common types are: Single-blind review. Double-blind review. Triple-blind review. Collaborative review. Open review. Relatedly, peer assessment is a process where your peers provide you with feedback on something you've written, based on a set of criteria or benchmarks from an instructor.

  3. How to Write a Peer Review

    Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom. Here's how your outline might look: 1. Summary of the research and your overall impression. In your own words, summarize what the manuscript ...

  4. My Complete Guide to Academic Peer Review: Example Comments & How to

    The good news is that published papers often now include peer-review records, including the reviewer comments and authors' replies. So here are two feedback examples from my own papers: Example Peer Review: Paper 1. Quantifying 3D Strain in Scaffold Implants for Regenerative Medicine, J. Clark et al. 2020 - Available here

  5. A step-by-step guide to peer review: a template for patients and novice

    The peer review template for patients and novice reviewers (table 1) is a series of steps designed to create a workflow for the main components of peer review. A structured workflow can help a reviewer organise their thoughts and create space to engage in critical thinking. The template is a starting point for anyone new to peer review, and it ...

  6. Peer review guidance: a primer for researchers

    The peer review process is essential for evaluating the quality of scholarly works, suggesting corrections, and learning from other authors' mistakes. The principles of peer review are largely based on professionalism, eloquence, and collegiate attitude. As such, reviewing journal submissions is a privilege and responsibility for 'elite ...

  7. (PDF) A step-by-step guide to peer review: a template ...

    components of peer review. A structured workflow can help a reviewer organise their thoughts and create space to engage in critical thinking. The template is a starting point for anyone new to ...

  8. Peer Review Template

    Summary of the research and your overall impression. In your own words, summarize the main research question, claims, and conclusions of the study. Provide context for how this research fits within the existing literature. Discuss the manuscript's strengths and weaknesses and your overall recommendation. Evidence and examples.

  9. How to write a thorough peer review

    You should now have a list of comments and suggestions for a complete peer review. The full peer-review document can comprise the following sections: 1. Introduction: Mirror the article, state ...

  10. Peer Review Examples

    This paper by Amrhein et al. criticizes a paper by Bradley Efron that discusses Bayesian statistics (Efron, 2013a), focusing on a particular example that was also discussed in Efron (2013b). The example concerns a woman who is carrying twins, both male (as determined by sonogram and we ignore the possibility that gender has been observed ...
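
    For readers unfamiliar with Efron's twins example, here is a minimal sketch of the calculation the snippet alludes to, assuming the textbook numbers (a 1/3 population prior that a twin birth is identical, identical twins always of the same sex, fraternal twins each male or female independently with probability 1/2):

    \[
    P(\text{identical} \mid \text{both male})
    = \frac{P(\text{both male} \mid \text{identical})\, P(\text{identical})}{P(\text{both male})}
    = \frac{\tfrac{1}{2} \cdot \tfrac{1}{3}}{\tfrac{1}{2} \cdot \tfrac{1}{3} + \tfrac{1}{4} \cdot \tfrac{2}{3}}
    = \frac{1}{2}.
    \]

    Under these assumptions, the sonogram result raises the probability of identical twins from the 1/3 prior to 1/2; the dispute in the cited papers is over where that prior should come from.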

  11. 50 Great Peer Review Examples: Sample Phrases + Scenarios

    Here are 50+ peer review examples! Use these sample peer feedback phrases with your peers and help them grow professionally! ... To address the issue with the target demographic, it might be beneficial to integrate more specific market research data. I can share a few resources on market analysis that could provide some valuable insights for ...

  12. 70 Peer Review Examples: Powerful Phrases You Can Use

    Peer Review Examples on Professionalism and Work Ethics. "Noah's punctuality is an asset to the team. To maintain professionalism consistently, he should adhere to deadlines with unwavering dedication, setting a model example for peers." "Grace's integrity and ethical standards are admirable.

  13. Understanding Peer Review in Science

    The manuscript peer review process helps ensure scientific publications are credible and minimizes errors. Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. Take a look at the significance of peer review in scientific publications, the typical steps ...

  14. Peer review

    National Institutes of Health (NIH) Peer Review Policies and Practices. NIH resources about the regulations and processes that govern peer review, including management of conflicts of interest, applicant and reviewer responsibilities in maintaining the integrity in peer review, appeals, and more.

  15. Peer Review

    Peer review. A key convention in the publication of research is the peer review process, in which the quality and potential contribution of each manuscript is evaluated by one's peers in the scientific community. Like other scientific journals, APA journals utilize a peer review process to guide manuscript selection and publication decisions.

  16. Peer Review

    Peer Review. Peer Review in Three Minutes from NC State University Libraries on Vimeo. A peer reviewed or peer refereed journal or article is one in which a group of widely acknowledged experts in a field reviews the content for scholarly soundness and academic value.

  17. Peer Review Examples (+14 Phrases to Use)

    Peer review feedback is a form of evaluative feedback that benefits both the person being reviewed and the reviewer. Unlike typical methods, this type of feedback focuses on strengths as well as areas for improvement. It may seem challenging at first, but it gets easier with practice! This article will go over some examples of what makes good peer review feedback, along with tips on giving it ...

  18. How to Peer Review

    When peer reviewing, it is helpful to think from the point of view of three different groups of people: Authors. Try to review the manuscript as you would like others to review your work. When you point out problems in a manuscript, do so in a way that will help the authors to improve the manuscript. Even if you recommend to the editor that the ...

  19. Finding Articles

    Peer-reviewed articles, also known as scholarly or refereed articles, are papers that describe a research study. Why are peer-reviewed articles useful? They report on original research that has been reviewed by other experts before acceptance for publication, so you can reasonably be assured that they contain valid information.

  20. Peer Review Examples (With 25 Effective Peer Review Phrases)

    Here are examples of positive peer reviews phrases to use: I'm impressed with how you completed the website design task this week. I want you to keep implementing user-friendly designs and suggest areas where you think the process requires improvement. You excel at marketing products to clients. It may be beneficial for you to continue taking ...

  21. Giving an effective peer review: sample framework and comments

    Basic tenets of peer reviewing: There are 5 basic tenets that should be kept in mind: Decline the review if you have any conflicts of interest (COIs). Remember that you're advising the journal editor, not making the decision about whether to accept or reject. Try to be helpful and always respectful to the author.

  22. Peer Review Examples: 50+ Effective Phrases for Next Review

    Peer reviews on Zavvy: Questions and peer review example phrases. As part of a wider performance management system, peer reviews help an organization in the following ways: 🎯 Can be used as a goal-setting opportunity. 🔎 Peer feedback helps identify the strengths and weaknesses of individual employees, teams, and the company as a whole. 🌱 Suggestions from peers can help employees ...

  23. Commonwealth Honors College: Getting Started With Library Research

    These articles go through a process known as "peer review" where the article is reviewed by a group of experts in the field and revised based on peer feedback before being accepted and published by a journal. ... These databases are examples of good subject-specific databases for researching the disciplines of Art, Education, and Psychology ...

  24. Research Guides: FYEX1110 SoE: Evaluating Sources

    Does this publication venue conduct peer review? What is the publication process? Who edits the journal? What is the impact factor?

  25. Announcing the new peer review framework for research project grant and

    Have you heard about the initiative at the National Institutes of Health (NIH) to improve the peer review of research project grant and fellowship applications? Join us as NIH describes the steps the agency is taking to simplify its process of assessing the scientific and technical merit of applications, better identify promising scientists for ...

  26. User engagement in clinical trials of digital mental health

    Introduction Digital mental health interventions (DMHIs) overcome traditional barriers enabling wider access to mental health support and allowing individuals to manage their treatment. How individuals engage with DMHIs impacts the intervention effect. This review determined whether the impact of user engagement was assessed in the intervention effect in Randomised Controlled Trials (RCTs ...

  27. Defining and identifying the critical elements of operational readiness

    Results Of 7005 sources reviewed, 79 met the eligibility criteria, including 54 peer-reviewed publications. The majority were descriptive reports (28%) and qualitative analyses (30%) from early stages of the COVID-19 pandemic. Definitions of OPR varied: while nine articles explicitly used the term 'readiness', others classified OPR as part of preparedness or response.

  28. Facilitators and barriers of midwife-led model of care at public health

    For example, the global under-5 mortality rate fell from 42 deaths per 1,000 live births in 2015 to 39 in 2018. ... The maternal mortality ratio (MMR) and infant mortality rate (IMR) in the research area were 150 per 100,000 live births and 67 deaths per 1,000 live births, respectively [10,11,12,13].

  29. Using PACS for teaching radiology to undergraduate medical students

    Subjects. The research population was the medical students of the Islamic Azad University of Mashhad during the academic year 2021-2022. The entry criteria were being a medical trainee and consenting to enter the study; the exclusion criteria were having previously graduated in radiology or another medical science, or having repeated the radiology course ...