
How To Write An A-Grade Literature Review

3 straightforward steps (with examples) + free template.

By: Derek Jansen (MBA) | Expert Reviewed By: Dr. Eunice Rautenbach | October 2019

Quality research is about building on the existing work of others, “standing on the shoulders of giants”, as Newton put it. The literature review chapter of your dissertation, thesis or research project is where you synthesise this prior work and lay the theoretical foundation for your own research.

Long story short, this chapter is a pretty big deal, which is why you want to make sure you get it right. In this post, I’ll show you exactly how to write a literature review in three straightforward steps, so you can conquer this vital chapter (the smart way).

Overview: The Literature Review Process

  • Understanding the “why”
  • Finding the relevant literature
  • Cataloguing and synthesising the information
  • Outlining & writing up your literature review
  • Example of a literature review

But first, the “why”…

Before we unpack how to write the literature review chapter, we’ve got to look at the why. To put it bluntly, if you don’t understand the function and purpose of the literature review process, there’s no way you can pull it off well. So, what exactly is the purpose of the literature review?

Well, there are (at least) four core functions:

  • For you to gain an understanding (and demonstrate this understanding) of where the research currently stands and what the key arguments and disagreements are.
  • For you to identify the gap(s) in the literature and then use this as justification for your own research topic.
  • To help you build a conceptual framework for empirical testing (if applicable to your research topic).
  • To inform your methodological choices and help you source tried and tested questionnaires (for interviews) and measurement instruments (for surveys).

Most students understand the first point but don’t give any thought to the rest. To get the most from the literature review process, you must keep all four points front of mind as you review the literature (more on this shortly), or you’ll land up with a wonky foundation.

Okay – with the why out of the way, let’s move on to the how. As mentioned above, writing your literature review is a process, which I’ll break down into three steps:

  • Finding the most suitable literature
  • Understanding, distilling and organising the literature
  • Planning and writing up your literature review chapter

Importantly, you must complete steps one and two before you start writing up your chapter. I know it’s very tempting, but don’t try to kill two birds with one stone and write as you read. You’ll invariably end up wasting huge amounts of time re-writing and re-shaping, or you’ll just land up with a disjointed, hard-to-digest mess. Instead, you need to read first and distil the information, then plan and execute the writing.

Free Webinar: Literature Review 101

Step 1: Find the relevant literature

Naturally, the first step in the literature review journey is to hunt down the existing research that’s relevant to your topic. While you probably already have a decent base of this from your research proposal, you need to expand on this substantially in the dissertation or thesis itself.

Essentially, you need to be looking for any existing literature that potentially helps you answer your research question (or develop it, if that’s not yet pinned down). There are numerous ways to find relevant literature, but I’ll cover my top four tactics here. I’d suggest combining all four methods to ensure that nothing slips past you:

Method 1 – Google Scholar Scrubbing

Google’s academic search engine, Google Scholar, is a great starting point as it provides a good high-level view of the relevant journal articles for whatever keyword you throw at it. Most valuably, it tells you how many times each article has been cited, which gives you an idea of how credible (or at least, popular) it is. Some articles will be free to access, while others will require an account, which brings us to the next method.

Method 2 – University Database Scrounging

Generally, universities provide students with access to an online library, which provides access to many (but not all) of the major journals.

So, if you find an article using Google Scholar that requires paid access (which is quite likely), search for that article in your university’s database – if it’s listed there, you’ll have access. Note that, generally, the search engine capabilities of these databases are poor, so make sure you search for the exact article name, or you might not find it.

Method 3 – Journal Article Snowballing

At the end of every academic journal article, you’ll find a list of references. As with any academic writing, these references are the building blocks of the article, so if the article is relevant to your topic, there’s a good chance a portion of the referenced works will be too. Do a quick scan of the titles and see what seems relevant, then search for the relevant ones in your university’s database.
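If you’re comfortable with a little scripting, the snowballing step can be partly automated. The sketch below is purely illustrative (not part of the method described above): it pulls a paper’s reference list from the Semantic Scholar Graph API so you can scan the titles in bulk. The endpoint, parameters and response keys are assumptions based on the public API and should be checked against the current documentation; the DOI is a placeholder.

```python
# Illustrative sketch: fetch the reference list of a known-relevant paper so you
# can scan titles for snowballing candidates. Endpoint and field names are
# assumptions based on the public Semantic Scholar Graph API - verify before use.
import requests

def fetch_references(doi: str) -> list[dict]:
    url = f"https://api.semanticscholar.org/graph/v1/paper/DOI:{doi}/references"
    resp = requests.get(url, params={"fields": "title,year", "limit": 100})
    resp.raise_for_status()
    # Each entry wraps the cited paper under a "citedPaper" key (assumed).
    return [item.get("citedPaper", {}) for item in resp.json().get("data", [])]

if __name__ == "__main__":
    for ref in fetch_references("10.1234/example-doi"):  # placeholder DOI
        print(ref.get("year"), "-", ref.get("title"))
```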

Method 4 – Dissertation Scavenging

Similar to Method 3 above, you can leverage other students’ dissertations. All you have to do is skim through literature review chapters of existing dissertations related to your topic and you’ll find a gold mine of potential literature. Usually, your university will provide you with access to previous students’ dissertations, but you can also find a much larger selection in the following databases:

  • Open Access Theses & Dissertations
  • Stanford SearchWorks

Keep in mind that dissertations and theses are not as academically sound as published, peer-reviewed journal articles (because they’re written by students, not professionals), so be sure to check the credibility of any sources you find using this method. You can do this by assessing the citation count of any given article in Google Scholar. If you need help with assessing the credibility of any article, or with finding relevant research in general, you can chat with one of our Research Specialists .

Alright – with a good base of literature firmly under your belt, it’s time to move on to the next step.


Step 2: Log, catalogue and synthesise

Once you’ve built a little treasure trove of articles, it’s time to get reading and start digesting the information – what does it all mean?

While I present steps one and two (hunting and digesting) as sequential, in reality, it’s more of a back-and-forth tango – you’ll read a little, then have an idea, spot a new citation, or a new potential variable, and then go back to searching for articles. This is perfectly natural – through the reading process, your thoughts will develop, new avenues might crop up, and directional adjustments might arise. This is, after all, one of the main purposes of the literature review process (i.e. to familiarise yourself with the current state of research in your field).

As you’re working through your treasure chest, it’s essential that you simultaneously start organising the information. There are three aspects to this:

  • Logging reference information
  • Building an organised catalogue
  • Distilling and synthesising the information

I’ll discuss each of these below:

2.1 – Log the reference information

As you read each article, you should add it to your reference management software. I usually recommend Mendeley for this purpose (see the Mendeley 101 video below), but you can use whichever software you’re comfortable with. Most importantly, make sure you load EVERY article you read into your reference manager, even if it doesn’t seem very relevant at the time.

2.2 – Build an organised catalogue

In the beginning, you might feel confident that you can remember who said what, where, and what their main arguments were. Trust me, you won’t. If you do a thorough review of the relevant literature (as you must!), you’re going to read many, many articles, and it’s simply impossible to remember who said what, when, and in what context. Also, without the bird’s eye view that a catalogue provides, you’ll miss connections between various articles, and have no view of how the research developed over time. Simply put, it’s essential to build your own catalogue of the literature.

I would suggest using Excel to build your catalogue, as it allows you to run filters, colour code and sort – all very useful when your list grows large (which it will). How you lay your spreadsheet out is up to you, but I’d suggest you have the following columns (at minimum):

  • Author, date, title – Start with three columns containing this core information. This will make it easy for you to search for titles with certain words, order research by date, or group by author.
  • Categories or keywords – You can either create multiple columns, one for each category/theme and then tick the relevant categories, or you can have one column with keywords.
  • Key arguments/points – Use this column to succinctly convey the essence of the article, the key arguments and implications thereof for your research.
  • Context – Note the socioeconomic context in which the research was undertaken. For example, US-based, respondents aged 25-35, lower-income, etc. This will be useful for making an argument about gaps in the research.
  • Methodology – Note which methodology was used and why. Also, note any issues you feel arise due to the methodology. Again, you can use this to make an argument about gaps in the research.
  • Quotations – Note down any quoteworthy lines you feel might be useful later.
  • Notes – Make notes about anything not already covered. For example, linkages to or disagreements with other theories, questions raised but unanswered, shortcomings or limitations, and so forth.

If you’d like, you can try out our free catalogue template here (see screenshot below).

Excel literature review template
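If you’d rather script your catalogue than maintain it by hand, the same structure translates directly into a dataframe. This is a minimal sketch only, assuming Python with pandas; the column names mirror the list above and the sample row is invented.

```python
# Minimal sketch of the catalogue as a pandas DataFrame - illustrative only;
# the sample row is made up.
import pandas as pd

catalogue = pd.DataFrame([
    {
        "author": "Smith & Jones",
        "date": 2018,
        "title": "Social media use and self-esteem",
        "keywords": "body image; adolescents",
        "key_arguments": "Negative association between usage time and self-esteem.",
        "context": "US-based, respondents aged 13-18, lower-income",
        "methodology": "Cross-sectional survey (n=412)",
        "quotations": "",
        "notes": "Contradicts later longitudinal work; small sample.",
    },
])

# The same filtering and sorting you'd do in Excel:
recent = catalogue[catalogue["date"] >= 2015].sort_values("date")
surveys = catalogue[catalogue["methodology"].str.contains("survey", case=False)]
print(recent[["author", "date", "title"]])
```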

2.3 – Digest and synthesise

Most importantly, as you work through the literature and build your catalogue, you need to synthesise all the information in your own mind – how does it all fit together? Look for links between the various articles and try to develop a bigger picture view of the state of the research. Some important questions to ask yourself are:

  • What answers does the existing research provide to my own research questions?
  • Which points do the researchers agree (and disagree) on?
  • How has the research developed over time?
  • Where do the gaps in the current research lie?

To help you develop a big-picture view and synthesise all the information, you might find mind mapping software such as Freemind useful. Alternatively, if you’re a fan of physical note-taking, investing in a large whiteboard might work for you.

Mind mapping is a useful way to plan your literature review.

Step 3: Outline and write it up!

Once you’re satisfied that you have digested and distilled all the relevant literature in your mind, it’s time to put pen to paper (or rather, fingers to keyboard). There are two steps here – outlining and writing:

3.1 – Draw up your outline

Having spent so much time reading, it might be tempting to just start writing up without a clear structure in mind. However, it’s critically important to decide on your structure and develop a detailed outline before you write anything. Your literature review chapter needs to present a clear, logical and easy-to-follow narrative – and that requires some planning. Don’t try to wing it!

Naturally, you won’t always follow the plan to the letter, but without a detailed outline, you’re more than likely going to end up with a disjointed pile of waffle, and then you’re going to spend a far greater amount of time re-writing, hacking and patching. The adage, “measure twice, cut once” is very suitable here.

In terms of structure, the first decision you’ll have to make is whether you’ll lay out your review thematically (into themes) or chronologically (by date/period). The right choice depends on your topic, research objectives and research questions, which we discuss in this article.

Once that’s decided, you need to draw up an outline of your entire chapter in bullet point format. Try to get as detailed as possible, so that you know exactly what you’ll cover where, how each section will connect to the next, and how your entire argument will develop throughout the chapter. Also, at this stage, it’s a good idea to allocate rough word count limits for each section, so that you can identify word count problems before you’ve spent weeks or months writing!
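If it helps to see the arithmetic, here is a tiny, purely illustrative sketch of allocating a word budget across an outline; the chapter length and section weights are invented for the example.

```python
# Illustrative word-budget allocation for a literature review outline.
# The total and the section weights are invented; adjust to your own outline.
CHAPTER_WORDS = 8000

outline = {
    "Introduction": 0.10,
    "Theme 1": 0.25,
    "Theme 2": 0.25,
    "Theme 3": 0.20,
    "Conceptual framework": 0.12,
    "Conclusion and research gap": 0.08,
}

for section, share in outline.items():
    print(f"{section:<30} ~{round(CHAPTER_WORDS * share)} words")
```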

PS – check out our free literature review chapter template…

3.2 – Get writing

With a detailed outline at your side, it’s time to start writing up (finally!). At this stage, it’s common to feel a bit of writer’s block and find yourself procrastinating under the pressure of finally having to put something on paper. To help with this, remember that the objective of the first draft is not perfection – it’s simply to get your thoughts out of your head and onto paper, after which you can refine them. The structure might change a little, the word count allocations might shift and shuffle, and you might add or remove a section – that’s all okay. Don’t worry about all this on your first draft – just get your thoughts down on paper.


Once you’ve got a full first draft (however rough it may be), step away from it for a day or two (longer if you can) and then come back at it with fresh eyes. Pay particular attention to the flow and narrative – does it all fit together and flow smoothly from one section to the next? Now’s the time to improve the linkage from each section to the next, tighten up the writing to be more concise, trim down the word count and sand it down into a more digestible read.

Once you’ve done that, give your writing to a friend or colleague who is not a subject matter expert and ask them if they understand the overall discussion. The best way to assess this is to ask them to explain the chapter back to you. This technique will give you a strong indication of which points were clearly communicated and which weren’t. If you’re working with Grad Coach, this is a good time to have your Research Specialist review your chapter.

Finally, tighten it up and send it off to your supervisor for comment. Some might argue that you should be sending your work to your supervisor sooner than this (indeed your university might formally require this), but in my experience, supervisors are extremely short on time (and often patience), so, the more refined your chapter is, the less time they’ll waste on addressing basic issues (which you know about already) and the more time they’ll spend on valuable feedback that will increase your mark-earning potential.

Literature Review Example

In the video below, we unpack an actual literature review so that you can see how all the core components come together in reality.

Let’s Recap

In this post, we’ve covered how to research and write up a high-quality literature review chapter. Let’s do a quick recap of the key takeaways:

  • It is essential to understand the WHY of the literature review before you read or write anything. Make sure you understand the 4 core functions of the process.
  • The first step is to hunt down the relevant literature. You can do this using Google Scholar, your university database, the snowballing technique and by reviewing other dissertations and theses.
  • Next, you need to log all the articles in your reference manager, build your own catalogue of literature and synthesise all the research.
  • Following that, you need to develop a detailed outline of your entire chapter – the more detail the better. Don’t start writing without a clear outline (on paper, not in your head!)
  • Write up your first draft in rough form – don’t aim for perfection. Remember, done beats perfect.
  • Refine your second draft and get a layman’s perspective on it. Then tighten it up and submit it to your supervisor.

Literature Review Course

Psst… there’s more!

This post is an extract from our bestselling short course, Literature Review Bootcamp. If you want to work smart, you don't want to miss this.

38 Comments

Phindile Mpetshwa

Thank you very much. This page is an eye opener and easy to comprehend.

Yinka

This is awesome!

I wish I come across GradCoach earlier enough.

But all the same I’ll make use of this opportunity to the fullest.

Thank you for this good job.

Keep it up!

Derek Jansen

You’re welcome, Yinka. Thank you for the kind words. All the best writing your literature review.

Renee Buerger

Thank you for a very useful literature review session. Although I am doing most of the steps…it being my first masters an Mphil is a self study and one not sure you are on the right track. I have an amazing supervisor but one also knows they are super busy. So not wanting to bother on the minutae. Thank you.

You’re most welcome, Renee. Good luck with your literature review 🙂

Sheemal Prasad

This has been really helpful. Will make full use of it. 🙂

Thank you Gradcoach.

Tahir

Really agreed. Admirable effort

Faturoti Toyin

thank you for this beautiful well explained recap.

Tara

Thank you so much for your guide of video and other instructions for the dissertation writing.

It is instrumental. It encouraged me to write a dissertation now.

Lorraine Hall

Thank you the video was great – from someone that knows nothing thankyou

araz agha

an amazing and very constructive way of presetting a topic, very useful, thanks for the effort,

Suilabayuh Ngah

It is timely

It is very good video of guidance for writing a research proposal and a dissertation. Since I have been watching and reading instructions, I have started my research proposal to write. I appreciate to Mr Jansen hugely.

Nancy Geregl

I learn a lot from your videos. Very comprehensive and detailed.

Thank you for sharing your knowledge. As a research student, you learn better with your learning tips in research

Uzma

I was really stuck in reading and gathering information but after watching these things are cleared thanks, it is so helpful.

Xaysukith thorxaitou

Really helpful, Thank you for the effort in showing such information

Sheila Jerome

This is super helpful thank you very much.

Mary

Thank you for this whole literature writing review.You have simplified the process.

Maithe

I’m so glad I found GradCoach. Excellent information, Clear explanation, and Easy to follow, Many thanks Derek!

You’re welcome, Maithe. Good luck writing your literature review 🙂

Anthony

Thank you Coach, you have greatly enriched and improved my knowledge

Eunice

Great piece, so enriching and it is going to help me a great lot in my project and thesis, thanks so much

Stephanie Louw

This is THE BEST site for ANYONE doing a masters or doctorate! Thank you for the sound advice and templates. You rock!

Thanks, Stephanie 🙂

oghenekaro Silas

This is mind blowing, the detailed explanation and simplicity is perfect.

I am doing two papers on my final year thesis, and I must stay I feel very confident to face both headlong after reading this article.

thank you so much.

if anyone is to get a paper done on time and in the best way possible, GRADCOACH is certainly the go to area!

tarandeep singh

This is very good video which is well explained with detailed explanation

uku igeny

Thank you excellent piece of work and great mentoring

Abdul Ahmad Zazay

Thanks, it was useful

Maserialong Dlamini

Thank you very much. the video and the information were very helpful.

Suleiman Abubakar

Good morning scholar. I’m delighted coming to know you even before the commencement of my dissertation which hopefully is expected in not more than six months from now. I would love to engage my study under your guidance from the beginning to the end. I love to know how to do good job

Mthuthuzeli Vongo

Thank you so much Derek for such useful information on writing up a good literature review. I am at a stage where I need to start writing my one. My proposal was accepted late last year but I honestly did not know where to start

SEID YIMAM MOHAMMED (Technic)

Like the name of your YouTube implies you are GRAD (great,resource person, about dissertation). In short you are smart enough in coaching research work.

Richie Buffalo

This is a very well thought out webpage. Very informative and a great read.

Adekoya Opeyemi Jonathan

Very timely.

I appreciate.

Norasyidah Mohd Yusoff

Very comprehensive and eye opener for me as beginner in postgraduate study. Well explained and easy to understand. Appreciate and good reference in guiding me in my research journey. Thank you

Maryellen Elizabeth Hart

Thank you. I requested to download the free literature review template, however, your website wouldn’t allow me to complete the request or complete a download. May I request that you email me the free template? Thank you.



LITERATURE REVIEW SOFTWARE FOR BETTER RESEARCH


“Litmaps is a game changer for finding novel literature... it has been invaluable for my productivity.... I also got my PhD student to use it and they also found it invaluable, finding several gaps they missed”

Varun Venkatesh

Austin Health, Australia


As a full-time researcher, Litmaps has become an indispensable tool in my arsenal. The Seed Maps and Discover features of Litmaps have transformed my literature review process, streamlining the identification of key citations while revealing previously overlooked relevant literature, ensuring no crucial connection goes unnoticed. A true game-changer indeed!

Ritwik Pandey

Doctoral Research Scholar – Sri Sathya Sai Institute of Higher Learning


Using Litmaps for my research papers has significantly improved my workflow. Typically, I start with a single paper related to my topic. Whenever I find an interesting work, I add it to my search. From there, I can quickly cover my entire Related Work section.

David Fischer

Research Associate – University of Applied Sciences Kempten

“It's nice to get a quick overview of related literature. Really easy to use, and it helps getting on top of the often complicated structures of referencing”

Christoph Ludwig

Technische Universität Dresden, Germany

“This has helped me so much in researching the literature. Currently, I am beginning to investigate new fields and this has helped me hugely”

Aran Warren

Canterbury University, NZ

“I can’t live without you anymore! I also recommend you to my students.”

Professor at The Chinese University of Hong Kong

“Seeing my literature list as a network enhances my thinking process!”

Katholieke Universiteit Leuven, Belgium

“Incredibly useful tool to get to know more literature, and to gain insight in existing research”

KU Leuven, Belgium

“As a student just venturing into the world of lit reviews, this is a tool that is outstanding and helping me find deeper results for my work.”

Franklin Jeffers

South Oregon University, USA

“Any researcher could use it! The paper recommendations are great for anyone and everyone”

Swansea University, Wales

“This tool really helped me to create good bibtex references for my research papers”

Ali Mohammed-Djafari

Director of Research at LSS-CNRS, France

“Litmaps is extremely helpful with my research. It helps me organize each one of my projects and see how they relate to each other, as well as to keep up to date on publications done in my field”

Daniel Fuller

Clarkson University, USA

As a person who is an early researcher and identifies as dyslexic, I can say that having research articles laid out in the date vs cite graph format is much more approachable than looking at a standard database interface. I feel that the maps Litmaps offers lower the barrier of entry for researchers by giving them the connections between articles spaced out visually. This helps me orientate where a paper is in the history of a field. Thus, new researchers can look at one of Litmap's "seed maps" and have the same information as hours of digging through a database.

Baylor Fain

Postdoctoral Associate – University of Florida



How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic .

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions

Introduction


When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.

Literature review guide


Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

Download Word doc Download Google doc

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search. For example, for a study of how social media affects body image among Generation Z, your keyword list might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators to help narrow down your search.
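If you want your searches to be reproducible, you can build the Boolean string programmatically. The snippet below is only an illustration, assuming Python; the keyword groups echo the example list above, and exact operator syntax varies between databases, so check each database’s help pages.

```python
# Illustrative only: build a Boolean query from keyword groups - synonyms OR-ed
# within a group, groups AND-ed together. Exact syntax varies by database.
keyword_groups = [
    ["social media", "Instagram", "TikTok"],
    ["body image", "self-esteem", "self-perception"],
    ["adolescents", "teenagers", "Generation Z"],
]

query = " AND ".join(
    "(" + " OR ".join(f'"{term}"' for term in group) + ")"
    for group in keyword_groups
)
print(query)
# ("social media" OR "Instagram" OR "TikTok") AND ("body image" OR ...) AND (...)
```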

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using. Click on either button below to download.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.


To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge. In the social media and body image example above, for instance, your reading might reveal that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting. Not a language expert? Check out Scribbr’s professional proofreading services !

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.

Open Google Slides Download PowerPoint

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved August 26, 2024, from https://www.scribbr.com/dissertation/literature-review/


Revolutionize Your Research with Jenni AI

Literature Review Generator

Welcome to Jenni AI, the ultimate tool for researchers and students. Our AI Literature Review Generator is designed to assist you in creating comprehensive, high-quality literature reviews, enhancing your academic and research endeavors. Say goodbye to writer's block and hello to seamless, efficient literature review creation.


Loved by over 3 million academics


Endorsed by Academics from Leading Institutions

Join the Community of Scholars Who Trust Jenni AI


Elevate Your Research Toolkit

Discover the Game-Changing Features of Jenni AI for Literature Reviews

Advanced AI Algorithms

Jenni AI utilizes cutting-edge AI technology to analyze and suggest relevant literature, helping you stay on top of current research trends.


Idea Generation

Overcome writer's block with AI-generated prompts and ideas that align with your research topic, helping to expand and deepen your review.

Citation Assistance

Get help with proper citation formats to maintain academic integrity and attribute sources correctly.


Our Pledge to Academic Integrity

At Jenni AI, we are deeply committed to the principles of academic integrity. We understand the importance of honesty, transparency, and ethical conduct in the academic community. Our tool is designed not just to assist in your research, but to do so in a way that respects and upholds these fundamental values.

How it Works

Start by creating your account on Jenni AI. The sign-up process is quick and user-friendly.

Define Your Research Scope

Enter the topic of your literature review to guide Jenni AI’s focus.

Citation Guidance

Receive assistance in citing sources correctly, maintaining the academic standard.

Easy Export

Export your literature review to LaTeX, HTML, or .docx formats

Interact with AI-Powered Suggestions

Use Jenni AI’s suggestions to structure your literature review, organizing it into coherent sections.

What Our Users Say

Discover how Jenni AI has made a difference in the lives of academics just like you


I thought AI writing was useless. Then I found Jenni AI, the AI-powered assistant for academic writing. It turned out to be much more advanced than I ever could have imagined. Jenni AI = ChatGPT x 10.


Charlie Cuddy

@sonofgorkhali

Love this use of AI to assist with, not replace, writing! Keep crushing it @Davidjpark96 💪


Waqar Younas, PhD

@waqaryofficial

4/9 Jenni AI's Outline Builder is a game-changer for organizing your thoughts and structuring your content. Create detailed outlines effortlessly, ensuring your writing is clear and coherent. #OutlineBuilder #WritingTools #JenniAI


I started with Jenni-who & Jenni-what. But now I can't write without Jenni. I love Jenni AI and am amazed to see how far Jenni has come. Kudos to http://Jenni.AI team.


Jenni is perfect for writing research docs, SOPs, study projects presentations 👌🏽


Stéphane Prud'homme

http://jenni.ai is awesome and super useful! thanks to @Davidjpark96 and @whoisjenniai fyi @Phd_jeu @DoctoralStories @WriteThatPhD

Frequently asked questions

What exactly does Jenni AI do?

Is Jenni AI suitable for all academic disciplines?

Is there a trial period or a free version available?

How does Jenni AI help with writer's block?

Can Jenni AI write my literature review for me?

How often is the literature database updated in Jenni AI?

How user-friendly is Jenni AI for those not familiar with AI tools?

Jenni AI: Standing Out From the Competition

In a sea of online proofreaders, Jenni AI stands out. Here’s how we compare to other tools on the market:

Feature: Advanced AI-Powered Assistance
  • Jenni AI: Uses state-of-the-art AI technology to provide relevant literature suggestions and structural guidance.
  • Competitors: May rely on simpler algorithms, resulting in less dynamic or comprehensive support.

Feature: User-Friendly Interface
  • Jenni AI: Designed for ease of use, making it accessible for users with varying levels of tech proficiency.
  • Competitors: Interfaces can be complex or less intuitive, posing a challenge for some users.

Feature: Transparent and Flexible Pricing
  • Jenni AI: Offers a free trial and clear, flexible pricing plans suitable for different needs.
  • Competitors: Pricing structures can be opaque or inflexible, with fewer user options.

Feature: Unparalleled Customization
  • Jenni AI: Offers highly personalized suggestions and adapts to your specific research needs over time.
  • Competitors: Often provide generic suggestions that may not align closely with individual research topics.

Feature: Comprehensive Literature Access
  • Jenni AI: Provides access to a vast and up-to-date range of academic literature, ensuring comprehensive research coverage.
  • Competitors: Some may have limited access to current or diverse research materials, restricting the scope of literature reviews.

Ready to Transform Your Research Process?

Don't wait to elevate your research. Sign up for Jenni AI today and discover a smarter, more efficient way to handle your academic literature reviews.

What Is Semantic Scholar?

Semantic Scholar is a free, AI-powered research tool for scientific literature, based at Ai2.

Academia Insider

Best Websites To Download Research Papers For Free: Beyond Sci-Hub

Navigating the vast ocean of academic research can be daunting, especially when you’re on a quest for specific research papers without the constraints of paywalls. Fortunately, the digital age has ushered in an era of accessible knowledge, with various platforms offering free downloads of scholarly articles.

In this article, we explore some of the best websites that provide researchers, students, and academicians with free access to a plethora of research papers across diverse fields, ensuring that knowledge remains within everyone’s reach.

Best Websites To Download Research Papers For Free

Platforms and key features:

  • Google Scholar – Hosts diverse academic papers; free access to many scholarly articles; links to open-access resources.
  • ResearchGate – Combines social networking with research; direct downloads of open-access papers; allows requests for papers from authors.
  • ScienceOpen – Open-access article repository; direct download of free PDFs; search using keywords, DOI, or journals.
  • Directory of Open Access Journals (DOAJ) – Extensive open-access journal repository; free download of scholarly articles; advanced search by keywords, publisher, language.
  • PubMed – Focus on medicine and life sciences; lists open-access and subscription articles; free full-text links and integration with Unpaywall.
  • Sci-Hub – Free access to paywalled articles; uses DOI for article retrieval; legal and ethical considerations.

Google Scholar

As a researcher, you might find Google Scholar to be a repository brimming with academic papers covering a broad span of domains like the social sciences, computer science, and the humanities, including:

  • Journal articles
  • Conference papers
Unlike other websites to download research papers, Google Scholar provides free access to a vast collection of scholarly literature, making it one of the best websites to download research papers.

Not every article is available in full PDF format directly; however, Google Scholar often links to other open access resources like DOAJ (Directory of Open Access Journals) and open-access repositories where you can directly download papers.

For instance, if you’re searching for a specific 2023 research paper in mathematics, you can use Google Scholar to locate the paper and check if it’s available for free download either on the platform itself or through links to various open access sources.

In many cases, Google Scholar integrates with tools like Unpaywall and Open Access Button, which are browser extensions that help you find free versions of paywalled articles.

These extensions often redirect you to open-access content, including those on platforms like Sci-Hub and Library Genesis, although it’s crucial to be aware of the legal and ethical implications of using such services.
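For readers who prefer scripting, the same lookup these browser extensions perform can be done directly against Unpaywall’s public REST API, which only returns links to legal open-access copies. The sketch below is a rough illustration; the endpoint and JSON fields are assumptions worth confirming against Unpaywall’s documentation, and the DOI shown is a placeholder.

```python
# Hedged sketch: ask Unpaywall for a legal open-access copy of an article by DOI.
# Endpoint and response fields are assumptions - check the Unpaywall API docs.
import requests

def find_oa_copy(doi: str, email: str) -> str | None:
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": email})
    resp.raise_for_status()
    best = resp.json().get("best_oa_location")  # None when no OA copy is known
    if best:
        return best.get("url_for_pdf") or best.get("url")
    return None

if __name__ == "__main__":
    print(find_oa_copy("10.1234/example-doi", "you@example.org"))  # placeholders
```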

ResearchGate

ResearchGate is a unique platform that blends social networking with academic research, making it an essential tool for researchers and scientists across various disciplines.


Here, you have access to a digital library of millions of research papers, spanning fields from computer science to social sciences and beyond.

When you’re on ResearchGate, downloading a research paper is relatively straightforward, especially if it’s open access. Many researchers upload the full PDF of their work, providing free access to their peer-reviewed articles.

If the research paper you’re interested in isn’t available for direct download, ResearchGate offers a unique feature: you can request a copy directly from the author.

This approach not only gets you the paper but also potentially opens a line of communication with leading experts in your field.

It’s important to note that ResearchGate isn’t just a repository; it’s a platform for discovery and connection. You can:

  • Follow specific researchers
  • Join discussions, and
  • Receive notifications about new research in your domain.

While it doesn’t have the controversial direct download links like Sci-Hub or Library Genesis, ResearchGate offers a more ethical and legal route to accessing academic papers. 

ScienceOpen

ScienceOpen is a comprehensive repository that hosts a multitude of open-access research articles across various fields, from the social sciences to computer science. 

The process of downloading a research paper on ScienceOpen is remarkably straightforward. Since it’s an open-access platform, most of the papers are available to download as PDFs without any cost.

This means you can access high-quality, peer-reviewed academic research without encountering paywalls that are often a barrier in many other scientific platforms.

For instance, if you’re delving into the latest 2023 scientific papers in mathematics, ScienceOpen can be your go-to source. You can easily search for research papers by browsing through the various open access journals featured on the site.

The direct download feature simplifies access to these papers, making it convenient for you to obtain the research you need.

Directory of Open Access Journals (DOAJ)

The Directory of Open Access Journals (DOAJ) is an extensive digital library of open-access, peer-reviewed journals, covering a wide array of subjects from humanities to nuclear science.

When you’re navigating DOAJ, you’ll discover that it’s not just a platform to download research papers; it’s a gateway to a world of academic research.


Each journal article listed is freely accessible, meaning you can download these scholarly articles without any cost or subscription.

The process is simple: search for research papers using specific keywords, subjects, or even DOAJ’s advanced search functionality, which includes filters such as language and the year of publication.

For example, if you’re delving into the latest developments in scientific research in 2023, DOAJ allows you to refine your search to the most recent publications.

Once you find a relevant research paper, you can easily access the full text in PDF format through a direct download link. This is particularly useful for accessing high-quality, open-access research papers that are not always readily available on other platforms like Sci-Hub or Library Genesis.

PubMed

PubMed hosts millions of research articles, primarily in the fields of medicine and life sciences, but also encompassing a broad range of scientific research.

When you’re on PubMed, you can search for research papers using:

  • Authors, or
  • Specific journal names.

While PubMed lists both open-access and subscription-based journal articles, it offers a unique feature for accessing papers for free.

If you’re looking for a particular research paper, say in the domain of computer science or social sciences from 2023, you can directly access its abstract on PubMed. For open access articles, a free full-text link is often available, allowing you to download the research paper in PDF format.

PubMed integrates with tools like Unpaywall and the Open Access Button. These browser extensions help you find open-access versions of the articles you’re interested in, bypassing the paywalls that often restrict access to scholarly literature.

While PubMed itself doesn’t provide direct download links for all articles, its connection with these tools and various open access repositories ensures that you, as a researcher, have greater access to scientific papers.
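If you search PubMed often, the same searches can be scripted against NCBI’s public E-utilities. The sketch below is illustrative only; the endpoint and parameters reflect my understanding of the E-utilities esearch service and should be verified against the current NCBI documentation.

```python
# Illustrative sketch: search PubMed via NCBI E-utilities and print article links.
# Endpoint and parameters are assumptions - verify against NCBI's documentation.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query: str, max_results: int = 20) -> list[str]:
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": max_results}
    resp = requests.get(ESEARCH, params=params)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]  # PubMed IDs (PMIDs)

if __name__ == "__main__":
    for pmid in search_pubmed('"body image" AND "social media"'):
        print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")
```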

Sci-Hub (with Caution)

Sci-Hub, often dubbed the ‘Pirate Bay of Science,’ has been a game-changer in the scientific community since its inception by Alexandra Elbakyan in 2011.

It operates as a controversial, yet widely used platform providing free access to millions of research papers and academic articles that are typically locked behind paywalls.

As a researcher, you might find Sci-Hub an intriguing, albeit contentious, tool for accessing scholarly literature.

When you’re looking to download a research paper from Sci-Hub, the process is relatively straightforward. Say you need a journal article on computer science or a groundbreaking study in social sciences from 2023; you just need to have the DOI (Digital Object Identifier) of the paper.

By entering this DOI into Sci-Hub’s search bar, the website bypasses publisher paywalls, offering you direct download links to PDF versions of the articles.


It’s crucial to note that while Sci-Hub provides access to a vast repository of scientific research, its legality is under constant scrutiny. The platform operates via various proxy links and has been the subject of numerous legal battles with publishers and academic institutions.

Nevertheless, Sci-Hub remains a popular go-to for researchers and scientists globally, especially those without access to university libraries or digital archives.

While it opens doors to a wealth of knowledge, users should be aware of the ethical and legal implications of using such a service in their respective countries.

Wrapping Up: You Can Get Free Academic Papers 

The digital landscape offers a wealth of resources for accessing academic research without financial barriers. The platforms we share here provide an invaluable service to the scholarly community, democratising access to knowledge and fostering intellectual growth.

Whether you’re a seasoned researcher or a curious student, these websites bridge the gap between you and the vast world of academic literature, ensuring that the pursuit of knowledge remains an inclusive and equitable journey for all. Remember to consider the legal and ethical aspects when using these resources.


Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of universities. Although he secured funding for his own research, he left academia to help others with his YouTube channel all about the inner workings of academia and how to make it work for you.



21 Legit Research Databases for Free Journal Articles in 2024


Written by Scribendi

Has this ever happened to you? While looking for websites for research, you come across a research paper site that claims to connect academics to a peer-reviewed article database for free.

Intrigued, you search for keywords related to your topic, only to discover that you must pay a hefty subscription fee to access the service. After the umpteenth time being duped, you begin to wonder if there's even such a thing as free journal articles.

Subscription fees and paywalls are often the bane of students and academics, especially those at small institutions who don't provide access to many free article directories and repositories.

Whether you're working on an undergraduate paper, a PhD dissertation, or a medical research study, we want to help you find tools to locate and access the information you need to produce well-researched, compelling, and innovative work.

Below, we discuss why peer-reviewed articles are superior and list out the best free article databases to use in 2024.

Download Our Free Research Database Roundup PDF

Why Peer-Reviewed Scholarly Journal Articles Are More Authoritative

Peer-Reviewed Articles

Determining what sources are reliable can be challenging. Peer-reviewed scholarly journal articles are the gold standard in academic research. Reputable academic journals have a rigorous peer-review process.

The peer review process provides accountability to the academic community, as well as to the content of the article. The peer review process involves qualified experts in a specific (often very specific) field performing a review of an article's methods and findings to determine things like quality and credibility.

Peer-reviewed articles can be found in peer-reviewed article databases and research databases; if you know a database of journals is reliable, that offers some reassurance about the reliability of a free article it indexes. Peer review is often double-blind, meaning that the author removes all identifying information and, likewise, does not know the identity of the reviewers. This helps reviewers maintain objectivity and impartiality so they can judge an article on its merit.

Where to Find Peer-Reviewed Articles

Peer-reviewed articles can be found in a variety of research databases. Below is a list of some of the major databases you can use to find peer-reviewed articles and other sources in disciplines spanning the humanities, sciences, and social sciences.

What Are Open Access Journals?

An open access (OA) journal is a journal whose content can be accessed without payment. This provides scholars, students, and researchers with free journal articles. OA journals use alternate methods of funding to cover publication costs so that articles can be published without having to pass those publication costs on to the reader.


Some of these funding models include standard funding methods like advertising, public funding, and author payment models, where the author pays a fee in order to publish in the journal. There are OA journals that have non-peer-reviewed academic content, as well as journals that focus on dissertations, theses, and papers from conferences, but the main focus of OA is peer-reviewed scholarly journal articles.

The internet has certainly made it easier to access research articles and other scholarly publications without needing access to a university library, and OA takes another step in that direction by removing financial barriers to academic content.

Choosing Wisely

Features of Legitimate OA Journals

There are several things to look for when trying to decide whether a free OA journal is legitimate:

Mission statement —The mission statement for an OA journal should be available on their website.

Publication history —Is the journal well established? How long has it been available?

Editorial board —Who are the members of the editorial board, and what are their credentials?

Indexing —Can the journal be found in a reliable database?

Peer review —What is the peer review process? Does the journal allow enough time in the process for a reliable assessment of quality?

Impact factor —What is the average number of times the journal is cited over a two-year period?

Features of Illegitimate OA Journals

There are predatory publications that take advantage of the OA format, and they are something to be wary of. Here are some things to look out for:

Contact information —Is contact information provided? Can it be verified?

Turnaround —If the journal makes dubious claims about the amount of time from submission to publication, it is likely unreliable.

Editorial board —Much like determining legitimacy, looking at the editorial board and their credentials can help determine illegitimacy.

Indexing —Can the journal be found in any scholarly databases?

Peer review —Is there a statement about the peer review process? Does it fit what you know about peer review?

How to Find Scholarly Articles

Identify Keywords

Keywords are included in an article by the author. Keywords are an excellent way to find content relevant to your research topic or area of interest. In academic searches, much like you would on a search engine, you can use keywords to navigate through what is available to find exactly what you're looking for.

Authors provide keywords that will help you easily find their article when researching a related topic, often including general terms to accommodate broader searches, as well as some more specific terms for those with a narrower scope. Keywords can be used individually or in combination to refine your scholarly article search.

Narrow Down Results

Sometimes, search results can be overwhelming, and searching for free articles on a journal database is no exception, but there are multiple ways to narrow down your results. A good place to start is discipline.

What category does your topic fall into (psychology, architecture, machine learning, etc.)? You can also narrow down your search with a year range if you're looking for articles that are more recent.

A Boolean search can be incredibly helpful. This entails placing operators such as AND between two keywords when both must appear in your results, or NOT before a keyword you want excluded from the results.
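To make the operators concrete, here is a minimal sketch (Python, standard library only) of combining keywords with Boolean operators and URL-encoding the result before appending it to a database's search URL; the example endpoint is purely illustrative and not tied to any particular service.

```python
from urllib.parse import quote_plus

# Both terms joined by AND must appear; NOT filters a term out of the results.
query = '("machine learning" AND education) NOT kindergarten'

# URL-encode the query so it can be appended safely to a search URL.
# The base URL below is a hypothetical placeholder, not a real endpoint.
search_url = "https://example-database.org/search?q=" + quote_plus(query)
print(search_url)
```

Many of the databases listed below accept a similar operator syntax, though the exact syntax and field names vary by service.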

Consider Different Avenues

If you're not having luck using keywords in your search for free articles, you may still be able to find what you're looking for by changing your tactics. Casting a wider net sometimes yields positive results, so it may be helpful to try searching by subject if keywords aren't getting you anywhere.

You can search for a specific publisher to see if they have OA publications in the academic journal database. And, if you know more precisely what you're looking for, you can search for the title of the article or the author's name.

Determining the Credibility of Scholarly Sources

Ensuring that sources are both credible and reliable is crucial to academic research. Use these strategies to help evaluate the usefulness of scholarly sources:

  • Peer Review : Look for articles that have undergone a rigorous peer-review process. Peer-reviewed articles are typically vetted by experts in the field, ensuring the accuracy of the research findings.
Tip: To determine whether an article has undergone rigorous peer review, review the journal's editorial policies, which are often available on the journal's website. Look for information about the peer-review process, including the criteria for selecting reviewers, the process for handling conflicts of interest, and any transparency measures in place.
  • Publisher Reputation : Consider the reputation of the publisher. Established publishers, such as well-known academic journals, are more likely to adhere to high editorial standards and publishing ethics.
  • Author Credentials : Evaluate the credentials and expertise of the authors. Check their affiliations, academic credentials, and past publications to assess their authority in the field.
  • Citations and References : Examine the citations and references provided in the article. A well-researched article will cite credible sources to support its arguments and findings. Verify the accuracy of the cited sources and ensure they are from reputable sources.
  • Publication Date : Consider the publication date of the article. While older articles may still be relevant, particularly in certain fields, it is best to prioritize recent publications for up-to-date research and findings.
  • Journal Impact Factor : Assess the journal's impact factor or other metrics that indicate its influence and reputation within the academic community. Higher impact factor journals are generally considered more prestigious and reliable. 
Tip: Journal Citation Reports (JCR), produced by Clarivate Analytics, is a widely used source for impact factor data. You can access JCR through academic libraries or directly from the Clarivate Analytics website if you have a subscription.
  • Peer Recommendations : Seek recommendations from peers, mentors, or professors in your field. They can provide valuable insights and guidance on reputable sources and journals within your area of study.
  • Cross-Verification : Cross-verify the information presented in the article with other credible sources. Compare findings, methodologies, and conclusions with similar studies to ensure consistency and reliability.

By employing these strategies, researchers can confidently evaluate the credibility and reliability of scholarly sources, ensuring the integrity of their research contributions in an ever-evolving landscape.

The Top 21 Free Online Journal and Research Databases

Navigating OA journals, research article databases, and academic websites trying to find high-quality sources for your research can really make your head spin. What constitutes a reliable database? What is a useful resource for your discipline and research topic? How can you find and access full-text, peer-reviewed articles?

Fortunately, we're here to help. Having covered some of the ins and outs of peer review, OA journals, and how to search for articles, we have compiled a list of the top 21 free online journals and the best research databases. This list of databases is a great resource to help you navigate the wide world of academic research.

These databases provide a variety of free sources, from abstracts and citations to full-text, peer-reviewed OA journals. With databases covering specific areas of research and interdisciplinary databases that provide a variety of material, these are some of our favorite free databases, and they're totally legit!

1. CORE

CORE is a multidisciplinary aggregator of OA research. CORE has the largest collection of OA articles available. It allows users to search more than 219 million OA articles. While most of these link to the full-text article on the original publisher's site, or to a PDF available for download, five million records are hosted directly on CORE.

CORE's mission statement is a simple and straightforward commitment to offering OA articles to anyone, anywhere in the world. They also host communities that are available for researchers to join and an ambassador community to enhance their services globally. In addition to a straightforward keyword search, CORE offers advanced search options to filter results by publication type, year, language, journal, repository, and author.

CORE's user interface is easy to use and navigate. Search results can be sorted based on relevance or recency, and you can search for relevant content directly from the results screen.

Collection : 219,537,133 OA articles

Other Services : Additional services are available from CORE, with extras that are geared toward researchers, repositories, and businesses. There are tools for accessing raw data, including an API that provides direct access to data, datasets that are available for download, and FastSync for syncing data content from the CORE database.

CORE has a recommender plug-in that suggests relevant OA content in the database while conducting a search and a discovery feature that helps you discover OA versions of paywalled articles. Other features include tools for managing content, such as a dashboard for managing repository output and the Repository Edition service to enhance discoverability.

Good Source of Peer-Reviewed Articles : Yes

Advanced Search Options : Language, author, journal, publisher, repository, DOI, year
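To illustrate the raw-data access mentioned above, here is a minimal sketch of a keyword search against CORE's v3 REST API; the endpoint, parameters, and response fields are written from memory and may have changed, and the API key placeholder is hypothetical (a free key can be requested from CORE).

```python
import json
import urllib.request
from urllib.parse import quote_plus

API_KEY = "YOUR_CORE_API_KEY"  # hypothetical placeholder; request a free key from CORE
query = quote_plus('"open access" AND "literature review"')

# Search the works index (v3 endpoint as remembered; verify against the current CORE docs).
url = f"https://api.core.ac.uk/v3/search/works?q={query}&limit=5"
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_KEY}"})

with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# Print each hit's title and, where CORE knows one, a direct download link.
for work in data.get("results", []):
    print(work.get("title"), "->", work.get("downloadUrl"))
```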

2. ScienceOpen

Functioning as a research and publishing network, ScienceOpen offers OA to more than 74 million articles in all areas of science. Although you do need to register to view the full text of articles, registration is free. The advanced search function is highly detailed, allowing you to find exactly the research you're looking for.

The Berlin- and Boston-based company was founded in 2013 to "facilitate open and public communications between academics and to allow ideas to be judged on their merit, regardless of where they come from." Search results can be exported for easy integration with reference management systems.

You can also bookmark articles for later research. There are extensive networking options, including your ScienceOpen profile, a forum for interacting with other researchers, the ability to track your usage and citations, and an interactive bibliography. Users can review articles and contribute their knowledge and insight within the community.

Collection : 74,560,631

Other Services : None

Advanced Search Options :   Content type, source, author, journal, discipline

3. Directory of Open Access Journals

A multidisciplinary, community-curated directory, the Directory of Open Access Journals (DOAJ) gives researchers access to high-quality peer-reviewed journals. It has archived more than two million articles from 17,193 journals, allowing you to either browse by subject or search by keyword.

The site was launched in 2003 with the aim of increasing the visibility of OA scholarly journals online. Content on the site covers subjects from science, to law, to fine arts, and everything in between. DOAJ has a commitment to "increase the visibility, accessibility, reputation, usage and impact of quality, peer-reviewed, OA scholarly research journals globally, regardless of discipline, geography or language."

Information about the journal is available with each search result. Abstracts are also available in a collapsible format directly from the search screen. The scholarly article website is somewhat simple, but it is easy to navigate. There are 16 principles of transparency and best practices in scholarly publishing that clearly outline DOAJ policies and standards.

Collection : 6,817,242

Advanced Search Options :   Subject, journal, year
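DOAJ also exposes a public JSON API that is convenient for scripted searches; the sketch below assumes the unauthenticated article-search endpoint and response layout as I recall them, so check the API documentation on doaj.org before relying on the details.

```python
import json
import urllib.request
from urllib.parse import quote

# Search DOAJ's article index for a phrase (endpoint path assumed from memory).
query = quote('"systematic literature review"')
url = f"https://doaj.org/api/search/articles/{query}?pageSize=5"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Each result is expected to wrap article metadata in a "bibjson" record.
for result in data.get("results", []):
    bib = result.get("bibjson", {})
    journal = bib.get("journal", {}).get("title", "unknown journal")
    print(bib.get("title"), "-", journal)
```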

4. Education Resources Information Center

The Education Resources Information Center (ERIC) of the Institute of Education Sciences allows you to search by topic for material related to the field of education. Links lead to other sites, where you may have to purchase the information, but you can limit your search to full-text articles only. You can also restrict results to peer-reviewed sources.

The service primarily indexes journals, gray literature (such as technical reports, white papers, and government documents), and books. All sources of material on ERIC go through a formal review process prior to being indexed. ERIC's selection policy is available as a PDF on their website.

The ERIC website has an extensive FAQ section to address user questions. This includes categories like general questions, peer review, and ERIC content. There are also tips for advanced searches, as well as general guidance on the best way to search the database. ERIC is an excellent database for content specific to education.

Collection : 1,292,897

Advanced Search Options : Boolean

5. arXiv e-Print Archive

The arXiv e-Print Archive is run by Cornell University Library and curated by volunteer moderators, and it now offers OA to more than one million e-prints.

There are advisory committees for all eight subjects available on the database. With a stated commitment to an "emphasis on openness, collaboration, and scholarship," the arXiv e-Print Archive is an excellent STEM resource.

The interface is not as user-friendly as some of the other databases available, but it is otherwise a straightforward math and science resource; the website also hosts a blog with news and updates. There are simple and advanced search options, and, in addition to conducting searches for specific topics and articles, users can browse content by subject. The arXiv e-Print Archive clearly states that they do not peer review the e-prints in the database.

Collection : 1,983,891

Good Source of Peer-Reviewed Articles : No

Advanced Search Options :   Subject, date, title, author, abstract, DOI
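arXiv also provides a free public API that returns search results as an Atom feed, which is handy for scripted searches of its e-prints; this is a minimal sketch using only the Python standard library and arXiv's documented search_query syntax (field prefixes such as all: combined with AND).

```python
import urllib.request
import xml.etree.ElementTree as ET

# Ask the arXiv API for records matching both terms anywhere in the metadata.
url = ("http://export.arxiv.org/api/query"
       "?search_query=all:%22graph+neural+network%22+AND+all:chemistry"
       "&start=0&max_results=5")

with urllib.request.urlopen(url) as resp:
    feed = resp.read()

# The response is an Atom feed; each <entry> element is one e-print.
ns = {"atom": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(feed)
for entry in root.findall("atom:entry", ns):
    title = entry.find("atom:title", ns).text.strip()
    link = entry.find("atom:id", ns).text.strip()
    print(title, "->", link)
```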

6. Social Science Research Network

The Social Science Research Network (SSRN) is a collection of papers from the social sciences community. It is a highly interdisciplinary platform used to search for scholarly articles related to 67 social science topics. SSRN has a variety of research networks for the various topics available through the free scholarly database.

The site offers more than 700,000 abstracts and more than 600,000 full-text papers. There is not yet a specific option to search for only full-text articles, but, because most of the papers on the site are free access, it's not often that you encounter a paywall. There is currently no option to search for only peer-reviewed articles.

You must become a member to use the services, but registration is free and enables you to interact with other scholars around the world. SSRN is "passionately committed to increasing inclusion, diversity and equity in scholarly research," and they encourage and discuss the use of inclusive language in scholarship whenever possible.

Collection : 1,058,739 abstracts; 915,452 articles

Advanced Search Options : Term, author, date, network

7. Public Library of Science

Public Library of Science (PLOS) is a big player in the world of OA science. Publishing 12 OA journals, the nonprofit organization is committed to facilitating openness in academic research. According to the site, "all PLOS content is at the highest possible level of OA, meaning that scientific articles are immediately and freely available to anyone, anywhere."

PLOS outlines four fundamental goals that guide the organization: break boundaries, empower researchers, redefine quality, and open science. All PLOS journals are peer-reviewed, and all 12 journals uphold rigorous ethical standards for research, publication, and scientific reporting.

PLOS does not offer advanced search options. Content is organized by topic into research communities that users can browse through, in addition to options to search for both articles and journals. The PLOS website also has resources for peer reviewers, including guidance on becoming a reviewer and on how to best participate in the peer review process.

Collection : 12 journals

Advanced Search Options : None

8. OpenDOAR

OpenDOAR, or the Directory of Open Access Repositories, is a comprehensive resource for finding free OA journals and articles. Using Google Custom Search, OpenDOAR combs through OA repositories around the world and returns relevant research in all disciplines.

The repositories it searches through are assessed and categorized by OpenDOAR staff to ensure they meet quality standards. Inclusion criteria for the database include requirements for OA content, global access, and categorically appropriate content, in addition to various other quality assurance measures. OpenDOAR has metadata, data, content, preservation, and submission policies for repositories, in addition to two OA policy statements regarding minimum and optimum recommendations.

This database allows users to browse and search repositories, which can then be selected, and articles and data can be accessed from the repository directly. As a repository database, much of the content on the site is geared toward the support of repositories and OA standards.

Collection : 5,768 repositories

Other Services : OpenDOAR offers a variety of additional services. Given the nature of the platform, services are primarily aimed at repositories and institutions, and there is a marked focus on OA in general. Sherpa services are OA archiving tools for authors and institutions.

They also offer various resources for OA support and compliance regarding standards and policies. The publication router matches publications and publishers with appropriate repositories.

There are also services and resources from JISC for repositories for cost management, discoverability, research impact, and interoperability, including ORCID consortium membership information. Additionally, a repository self-assessment tool is available for members.

Advanced Search Options :   Name, organization name, repository type, software name, content type, subject, country, region

9. Bielefeld Academic Search Engine

The Bielefeld Academic Search Engine (BASE) is operated by the Bielefeld University Library in Germany, and it offers more than 240 million documents from more than 8,000 sources. Sixty percent of its content is OA, and you can filter your search accordingly.

BASE has rigorous inclusion requirements for content providers regarding quality and relevance, and they maintain a list of content providers for the sake of transparency, which can be easily found on their website. BASE has a fairly elegant interface. Search results can be organized by author, title, or date.

From the search results, items can be selected and exported, added to favorites, emailed, and searched in Google Scholar. There are basic and advanced search features, with the advanced search offering numerous options for refining search criteria. There is also a feature on the website that saves recent searches without additional steps from the user.

Collection : 276,019,066 documents; 9,286 content providers

Advanced Search Options :   Author, subject, year, content provider, language, document type, access, terms of reuse


10. Digital Library of the Commons Repository

Run by Indiana University, the Digital Library of the Commons (DLC) Repository is a multidisciplinary journal repository that allows users to access thousands of free and OA articles from around the world. You can browse by document type, date, author, title, and more or search for keywords relevant to your topic.

DLC also offers the Comprehensive Bibliography of the Commons, an image database, and a keyword thesaurus for enhanced search parameters. The repository includes books, book chapters, conference papers, journal articles, surveys, theses and dissertations, and working papers. The DLC advanced search features drop-down menus of search types with built-in Boolean search options.

Searches can be sorted by relevance, title, date, or submission date in ascending or descending order. Abstracts are included in selected search results, with access to full texts available, and citations can be exported from the same page. Additionally, the image database search includes tips for better search results.

Collection : 10,784

Advanced Search Options :   Author, date, title, subject, sector, region, conference

11. CIA World Factbook

The CIA World Factbook is a little different from the other resources on this list in that it is not an online journal directory or repository. It is, however, a useful free online research database for academics in a variety of disciplines.

All the information is free to access, and it provides facts about every country in the world, which are organized by category and include information about history, geography, transportation, and much more. The World Factbook can be searched by country or region, and there is also information about the world's oceans.

This site contains resources related to the CIA as an organization rather than being a scientific journal database specifically. The site has a user interface that is easy to navigate. The site also provides a section for updates regarding changes to what information is available and how it is organized, making it easier to interact with the information you are searching for.

Collection : 266 countries

12. Paperity

Paperity boasts its status as the "first multidisciplinary aggregator of OA journals and papers." Their focus is on helping you avoid paywalls while connecting you to authoritative research. In addition to providing readers with easy access to thousands of journals, Paperity seeks to help authors reach their audiences and help journals increase their exposure to boost readership.

Paperity has journal articles for every discipline, and the database offers more than a dozen advanced search options, including the length of the paper and the number of authors. There is even an option to include, exclude, or exclusively search gray papers.

Paperity is available for mobile, with both a mobile site and the Paperity Reader, an app that is available for both Android and Apple users. The database is also available on social media. You can interact with Paperity via Twitter and Facebook, and links to their social media are available on their homepage, including their Twitter feed.

Collection : 8,837,396

Advanced Search Options : Title, abstract, journal title, journal ISSN, publisher, year of publication, number of characters, number of authors, DOI, author, affiliation, language, country, region, continent, gray papers

13. dblp Computer Science Bibliography

The dblp Computer Science Bibliography is an online index of major computer science publications. dblp was founded in 1993, though until 2010 it was a university-specific database at the University of Trier in Germany. It is currently maintained by the Schloss Dagstuhl – Leibniz Center for Informatics.

Although it provides access to both OA articles and those behind a paywall, you can limit your search to only OA articles. The site indexes more than three million publications, making it an invaluable resource in the world of computer science. dblp entries are color-coded based on the type of item.

dblp has an extensive FAQ section, so questions that might arise about topics like the database itself, navigating the website, or the data on dblp, in addition to several other topics, are likely to be answered. The website also hosts a blog and has a section devoted to website statistics.

Collection : 5,884,702

14. EconBiz

EconBiz is a great resource for economic and business studies. A service of the Leibniz Information Centre for Economics, it offers access to full texts online, with the option of searching for OA material only. Their literature search is performed across multiple international databases.

EconBiz has an incredibly useful research skills section, with resources such as Guided Walk, a service to help students and researchers navigate searches, evaluate sources, and correctly cite references; the Research Guide EconDesk, a help desk to answer specific questions and provide advice to aid in literature searches; and the Academic Career Kit for what they refer to as Early Career Researchers.

Other helpful resources include personal literature lists, a calendar of events for relevant calls for papers, conferences, and workshops, and an economics terminology thesaurus to help in finding keywords for searches. To stay up-to-date with EconBiz, you can sign up for their newsletter.

Collection : 1,075,219

Advanced Search Options :   Title, subject, author, institution, ISBN/ISSN, journal, publisher, language, OA only

15. BioMed Central

BioMed Central provides OA research from more than 300 peer-reviewed journals. While originally focused on biology and medicine, BioMed Central has branched out to include journals that cover a broader range of disciplines, with the aim of offering a single platform of OA articles for a variety of research needs. You can browse these journals by subject or title, or you can search all articles for your required keyword.

BioMed Central has a commitment to peer-reviewed sources and to the peer review process itself, continually seeking to help and improve the peer review process. They're "committed to maintaining high standards through full and stringent peer review."

Additionally, the website includes resources to assist and support editors as part of their commitment to providing high-quality, peer-reviewed OA articles.

Collection : 507,212

Other Services : BMC administers the International Standard Randomised Controlled Trial Number (ISRCTN) registry. While initially designed for registering clinical trials, since its creation in 2000, the registry has broadened its scope to include other health studies as well.

The registry is recognized by the International Committee of Medical Journal Editors, as well as the World Health Organization (WHO), and it meets the requirements established by the WHO International Clinical Trials Registry Platform.

The study records included in the registry are all searchable and free to access. The ISRCTN registry "supports transparency in clinical research, helps reduce selective reporting of results and ensures an unbiased and complete evidence base."

Advanced Search Options :   Author, title, journal, list

16. JURN

A multidisciplinary search engine, JURN provides links to various scholarly websites, articles, and journals that are free to access or OA. Covering the fields of the arts, humanities, business, law, nature, science, and medicine, JURN has indexed almost 5,000 repositories to help you find exactly what you're looking for.

Search features are enhanced by Google, but searches are filtered through their index of repositories. JURN seeks to reach a wide audience, with their search engine tailored to researchers from "university lecturers and students seeking a strong search tool for OA content" and "advanced and ambitious students, age 14-18" to "amateur historians and biographers" and "unemployed and retired lecturers."

That being said, JURN is very upfront about its limitations. They admit to not being a good resource for educational studies, social studies, or psychology, and conference archives are generally not included due to frequently unstable URLs.

Collection : 5,064 indexed journals

Other Services : JURN has a browser add-on called UserScript. This add-on allows users to integrate the JURN database directly into Google Search. When performing a search through Google, the add-on creates a link that sends the search directly to JURN CSE. JURN CSE is a search service that is hosted by Google.

Clicking the link from the Google Search bar will run your search through the JURN database from the Google homepage. There is also an interface for a DuckDuckGo search box; while this search engine has an emphasis on user privacy, for smaller sites that may be indexed by JURN, DuckDuckGo may not provide the same depth of results.

Advanced Search Options :   Google search modifiers

17. Dryad

Dryad is a digital repository of curated, OA scientific research data. Launched in 2009, it is run by a not-for-profit membership organization, with a community of institutional and publisher members for whom their services have been designed. Members include institutions such as Stanford, UCLA, and Yale, as well as publishers like Oxford University Press and Wiley.

Dryad aims to "promote a world where research data is openly available, integrated with the scholarly literature, and routinely reused to create knowledge." It is free to access for the search and discovery of data. Their user experience is geared toward easy self-depositing, supports Creative Commons licensing, and provides DOIs for all their content.

Note that there is a publishing charge associated if you wish to publish your data in Dryad.  When searching datasets, they are accompanied by author information and abstracts for the associated studies, and citation information is provided for easy attribution.

Collection : 44,458

Advanced Search Options : No

18. EThOS

Run by the British Library, the E-Theses Online Service (EThOS) allows you to search over 500,000 doctoral theses in a variety of disciplines. All of the doctoral theses available on EThOS have been awarded by higher education institutions in the United Kingdom.

Although some full texts are behind paywalls, you can limit your search to items available for immediate download, either directly through EThOS or through an institution's website. More than half of the records in the database provide access to full-text theses.

EThOS notes that they do not hold all records for all institutions, but they strive to index as many doctoral theses as possible, and the database is constantly expanding, with approximately 3,000 new records added and 2,000 new full-text theses available every month. The availability of full-text theses is dependent on multiple factors, including their availability in the institutional repository and the level of repository development.

Collection : 500,000+

Advanced Search Options : Abstract, author's first name, author's last name, awarding body, current institution, EThOS ID, year, language, qualifications, research supervisor, sponsor/funder, keyword, title

19. PubMed

PubMed is a research platform well-known in the fields of science and medicine. It was created and developed by the National Center for Biotechnology Information (NCBI) at the National Library of Medicine (NLM). It has been available since 1996 and offers access to "more than 33 million citations for biomedical literature from MEDLINE, life science journals, and online books."

While PubMed does not provide full-text articles directly, and many full-text articles may be behind paywalls or require subscriptions to access them, when articles are available from free sources, such as through PubMed Central (PMC), those links are provided with the citations and abstracts that PubMed does provide.

PMC, which was established in 2000 by the NLM, is a free full-text archive that includes more than 6,000,000 records. PubMed records link directly to corresponding PMC results. PMC content is provided by publishers and other content owners, digitization projects, and authors directly.

Collection : 33,000,000+

Advanced Search Options : Author's first name, author's last name, identifier, corporation, date completed, date created, date entered, date modified, date published, MeSH, book, conflict of interest statement, EC/RN number, editor, filter, grant number, page number, pharmacological action, volume, publication type, publisher, secondary source ID, text, title, abstract, transliterated title
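For scripted PubMed searches, NCBI's E-utilities offer a free HTTP interface; the sketch below first collects matching PMIDs with ESearch and then fetches basic citation details with ESummary. It is a minimal example and omits the API key and rate-limit handling NCBI recommends for heavier use.

```python
import json
import urllib.request
from urllib.parse import urlencode

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# Step 1: ESearch returns the PubMed IDs (PMIDs) that match a Boolean query.
params = urlencode({"db": "pubmed", "term": "telemedicine AND diabetes",
                    "retmax": 5, "retmode": "json"})
with urllib.request.urlopen(f"{BASE}/esearch.fcgi?{params}") as resp:
    pmids = json.load(resp)["esearchresult"]["idlist"]

# Step 2: ESummary returns citation-level metadata for those PMIDs.
params = urlencode({"db": "pubmed", "id": ",".join(pmids), "retmode": "json"})
with urllib.request.urlopen(f"{BASE}/esummary.fcgi?{params}") as resp:
    summaries = json.load(resp)["result"]

for pmid in pmids:
    record = summaries[pmid]
    print(pmid, "-", record.get("title"), f"({record.get('pubdate')})")
```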

20. Semantic Scholar

A unique and easy-to-use resource, Semantic Scholar defines itself not just as a research database but also as a "search and discovery tool." Semantic Scholar harnesses the power of artificial intelligence to efficiently sort through millions of science-related papers based on your search terms.

Through this singular application of machine learning, Semantic Scholar expands search results to include topic overviews based on your search terms, with the option to create an alert for or further explore the topic. It also provides links to related topics.

In addition, search results produce "TLDR" summaries in order to provide concise overviews of articles and enhance your research by helping you to navigate quickly and easily through the available literature to find the most relevant information. According to the site, although some articles are behind paywalls, "the data [they] have for those articles is limited," so you can expect to receive mostly full-text results.

Collection : 203,379,033

Other Services : Semantic Scholar supports multiple popular browsers. Content can be accessed through both mobile and desktop versions of Firefox, Microsoft Edge, Google Chrome, Apple Safari, and Opera.

Additionally, Semantic Scholar provides browser extensions for both Chrome and Firefox, so AI-powered scholarly search results are never more than a click away. The mobile interface includes an option for Semantic Swipe, a new way of interacting with your research results.

There are also beta features that can be accessed as part of the Beta Program, which will provide you with features that are being actively developed and require user feedback for further improvement.

Advanced Search Options : Field of study, date range, publication type, author, journal, conference, PDF
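Semantic Scholar's data is also reachable through its free Academic Graph API, which is useful when you want search results, including open-access PDF links where they are known, inside a script; the sketch below calls the public paper-search endpoint without an API key, so it is subject to fairly strict rate limits.

```python
import json
import urllib.request
from urllib.parse import urlencode

# Search the Semantic Scholar Academic Graph API for papers on a topic.
params = urlencode({
    "query": "peer review bias",
    "limit": 5,
    "fields": "title,year,openAccessPdf",
})
url = f"https://api.semanticscholar.org/graph/v1/paper/search?{params}"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# openAccessPdf is only present when a free full-text PDF is known to the service.
for paper in data.get("data", []):
    pdf = (paper.get("openAccessPdf") or {}).get("url", "no OA PDF listed")
    print(paper.get("year"), "-", paper.get("title"), "->", pdf)
```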

21. Zenodo

Zenodo, powered by the European Organization for Nuclear Research (CERN), was launched in 2013. Taking its name from Zenodotus, the first librarian of the ancient library of Alexandria, Zenodo is a tool "built and developed by researchers, to ensure that everyone can join in open science." Zenodo accepts all research from every discipline in any file format.

However, Zenodo also curates uploads and promotes peer-reviewed material that is available through OA. A DOI is assigned to everything that is uploaded to Zenodo, making research easily findable and citable. You can sort by keyword, title, journal, and more and download OA documents directly from the site.

While there are closed access and restricted access items in the database, the vast majority of research is OA material. Search results can be filtered by access type, making it easy to view the free articles available in the database.

Collection : 2,220,000+

Advanced Search Options : Access, file type, keywords
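Zenodo's records can likewise be queried over its public REST API, which makes it easy to filter programmatically for openly accessible items; the sketch below searches the records endpoint and assumes the response layout (hits.hits) and the access_right filter that Zenodo's API has used, so verify both against the current developer documentation.

```python
import json
import urllib.request
from urllib.parse import urlencode

# Search Zenodo for openly accessible records matching a keyword query.
params = urlencode({
    "q": "literature review software",
    "access_right": "open",  # assumed filter name for open-access records
    "size": 5,
})
url = f"https://zenodo.org/api/records?{params}"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Each hit carries record metadata, including the title and the DOI Zenodo assigned.
for hit in data.get("hits", {}).get("hits", []):
    meta = hit.get("metadata", {})
    print(meta.get("title"), "-", meta.get("doi", "no DOI listed"))
```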

Check out our roundup of free research databases as a handy one-page PDF.

How to Find Peer-Reviewed Articles

There are a lot of free scholarly articles available from various sources. The internet is a big place. So how do you go about finding peer-reviewed articles when conducting your research? It's important to make sure you are using reputable sources.

The first thing to consider is the person or people who wrote the article. Checking out the author can give you some initial insight into how much you can trust what you're reading. Looking into the publication information of your sources can also indicate whether the article is reliable.

Aspects of the article, such as subject and audience, tone, and format, are other things you can look at when evaluating whether the article you're using is valid, reputable, peer-reviewed material. So, let's break that down into various components so you can assess your research to ensure that you're using quality articles and conducting solid research.

Check the Author

Peer-reviewed articles are written by experts or scholars with experience in the field or discipline they're writing about. The research in a peer-reviewed article has to pass a rigorous evaluation process, so it's a foregone conclusion that the author(s) of a peer-reviewed article should have experience or training related to that research.

When evaluating an article, take a look at the author's information. What credentials does the author have to indicate that their research has scholarly weight behind it? Finding out what type of degree the author has—and what that degree is in—can provide insight into what kind of authority the author is on the subject.

Something else that might lend credence to the author's scholarly role is their professional affiliation. A look at what organization or institution they are affiliated with can tell you a lot about their experience or expertise. Where were they trained, and who is verifying their research?

Identify Subject and Audience

The ultimate goal of a study is to answer a question. Scholarly articles are also written for scholarly audiences, especially articles that have gone through the peer review process. This means that the author is trying to reach experts, researchers, academics, and students in the field or topic the research is based on.

Think about the question the author is trying to answer by conducting this research, why, and for whom. What is the subject of the article? What question has it set out to answer? What is the purpose of finding the information? Is the purpose of the article of importance to other scholars? Is it original content?

Research should also be approached analytically. Is the methodology sound? Is the author using an analytical approach to evaluate the data that they have obtained? Are the conclusions they've reached substantiated by their data and analysis? Answering these questions can reveal a lot about the article's validity.

Format Matters

Reliable articles from peer-reviewed sources have certain format elements to be aware of. The first is an abstract. An abstract is a short summary or overview of the article. Does the article have an abstract? It's unlikely that you're reading a peer-reviewed article if it doesn't. Peer-reviewed journals will also have a word count range. If an article seems far too short or incredibly long, that may be reason to doubt it.

Another feature of reliable articles is the sections the information is divided into. Peer-reviewed research articles will have clear, concise sections that appropriately organize the information. This might include a literature review, methodology, results (in the case of research articles), and a conclusion.

One of the most important sections is the references or bibliography. This is where the researcher lists all the sources of their information. A peer-reviewed source will have a comprehensive reference section.

An article that has been written to reach an academic community will have an academic tone. The language that is used, and the way this language is used, is important to consider. If the article is riddled with grammatical errors, confusing syntax, and casual language, it almost definitely didn't make it through the peer review process.

Also consider the use of terminology. Every discipline is going to have standard terminology or jargon that can be used and understood by other academics in the discipline. The language in a peer-reviewed article is going to reflect that.

If the author is going out of their way to explain simple terms, or terms that are standard to the field or discipline, it's unlikely that the article has been peer reviewed, as this is something that the author would be asked to address during the review process.

Publication

The source of the article will be a very good indicator of the likelihood that it was peer reviewed. Where was the article published? Was it published alongside other academic articles in the same discipline? Is it a legitimate and reputable scholarly publication?

A trade publication or newspaper might be legitimate or reputable, but it is not a scholarly source, and it will not have been subject to the peer review process. Scholarly journals are the best resource for peer-reviewed articles, but it's important to remember that not all scholarly journals are peer reviewed.

It's helpful to look at a scholarly source's website, as peer-reviewed journals will have a clear indication of the peer review process. University libraries, institutional repositories, and reliable databases (and now you have a list of legit ones) can also help provide insight into whether an article comes from a peer-reviewed journal.


Common Research Mistakes to Avoid

Research is a lot of work. Even with high standards and good intentions, it's easy to make mistakes. Perhaps you searched for access to scientific journals for free and found the perfect peer-reviewed sources, but you forgot to document everything, and your references are a mess. Or, you only searched for free online articles and missed out on a ground-breaking study that was behind a paywall.

Whether your research is for a degree or to get published or to satisfy your own inquisitive nature, or all of the above, you want all that work to produce quality results. You want your research to be thorough and accurate.

To have any hope of contributing to the literature on your research topic, your results need to be high quality. You might not be able to avoid every potential mistake, but here are some that are both common and easy to avoid.

Sticking to One Source

One of the hallmarks of good research is a healthy reference section. Using a variety of sources gives you a better answer to your question. Even if all of the literature is in agreement, looking at various aspects of the topic may provide you with an entirely different picture than you would have if you looked at your research question from only one angle.

Not Documenting Every Fact

As you conduct your research, do yourself a favor and write everything down. Everything you include in your paper or article that you got from another source is going to need to be added to your references and cited.

It's important, especially if your aim is to conduct ethical, high-quality research, that all of your research has proper attribution. If you don't document as you go, you could end up making a lot of work for yourself if the information you don't write down is something that later, as you write your paper, you really need.

Using Outdated Materials

Academia is an ever-changing landscape. What was true in your academic discipline or area of research ten years ago may have since been disproven. If fifteen studies have come out since the article that you're using was published, it's more than a little likely that you're going to be basing your research on flawed or dated information.

If the information you're basing your research on isn't as up-to-date as possible, your research won't be of quality or able to stand up to any amount of scrutiny. You don't want all of your hard work to be for naught.

Relying Solely on Open Access Journals

OA is a great resource for conducting academic research. There are high-quality journal articles available through OA, and that can be very helpful for your research. But, just because you have access to free articles, that doesn't mean that there's nothing to be found behind a paywall.

Just as dismissing high-quality peer-reviewed articles because they are OA would be limiting, not exploring any paid content at all is equally short-sighted. If you're seeking to conduct thorough and comprehensive research, exploring all of your options for quality sources is going to be to your benefit.

Digging Too Deep or Not Deep Enough

Research is an art form, and it involves a delicate balance of information. If you conduct your research using only broad search terms, you won't be able to answer your research question well, or you'll find that your research provides information that is closely related to your topic but, ultimately, your findings are vague and unsubstantiated.

On the other hand, if you delve deeply into your research topic with specific searches and turn up too many sources, you might have a lot of information that is adjacent to your topic but without focus and perhaps not entirely relevant. It's important to answer your research question concisely but thoroughly.

Different Types of Scholarly Articles

Different types of scholarly articles have different purposes. An original research article, also called an empirical article, is the product of a study or an experiment. This type of article seeks to answer a question or fill a gap in the existing literature.

Research articles will have a methodology, results, and a discussion of the findings of the experiment or research and typically a conclusion.

Review articles overview the current literature and research and provide a summary of what the existing research indicates or has concluded. This type of study will have a section for the literature review, as well as a discussion of the findings of that review. Review articles will have a particularly extensive reference or bibliography section.

Theoretical articles draw on existing literature to create new theories or conclusions, or look at current theories from a different perspective, to contribute to the foundational knowledge of the field of study.

10 Tips for Navigating Journal Databases

1. Use the right academic journal database for your search, be that interdisciplinary or specific to your field. Or both!
2. If it's an option, set the search results to return only peer-reviewed sources.
3. Start by using search terms that are relevant to your topic without being overly specific.
4. Try synonyms, especially if your keywords aren't returning the desired results.
5. Even if you've found some good articles, try searching using different terms.
6. Explore the advanced search features of the database(s).
7. Learn to use Booleans (AND, OR, NOT) to expand or narrow your results.
8. Once you've gotten some good results from a more general search, try narrowing your search.
9. Read through abstracts when trying to find articles relevant to your research.
10. Keep track of your research and use citation tools. It'll make life easier when it comes time to compile your references.

7 Frequently Asked Questions

1. How Do I Get Articles for Free?

Free articles can be found through free online academic journals, OA databases, or other databases that include OA journals and articles. These resources allow you to access free papers online so you can conduct your research without getting stuck behind a paywall.

Academics don't receive payment for the articles they contribute to journals. There are often, in fact, publication fees that scholars pay in order to publish. This is one of the funding structures that allows OA journals to provide free content so that you don't have to pay fees or subscription costs to access journal articles.

2. How Do I Find Journal Articles?

Journal articles can be found in databases and institutional repositories that can be accessed at university libraries. However, online research databases that contain OA articles are the best resource for getting free access to journal articles that are available online.

Peer-reviewed journal articles are the best to use for academic research, and there are a number of databases where you can find peer-reviewed OA journal articles. Once you've found a useful article, you can look through the references for the articles the author used to conduct their research, and you can then search online databases for those articles, too.

3. How Do I Find Peer-Reviewed Articles?

Peer-reviewed articles can be found in reputable scholarly peer-reviewed journals. High-quality journals and journal articles can be found online using academic search engines and free research databases. These resources are excellent for finding OA articles, including peer-reviewed articles.

OA articles are articles that can be accessed for free. While some scholarly search engines and databases include articles that aren't peer reviewed, there are also some that provide only peer-reviewed articles, and databases that include non-peer-reviewed articles often have advanced search features that enable you to select "peer review only." The database will return results that are exclusively peer-reviewed content.

4. What Are Research Databases?

A research database is a list of journals, articles, datasets, and/or abstracts that allows you to easily search for scholarly and academic resources and conduct research online. There are databases that are interdisciplinary and cover a variety of topics.

For example, Paperity might be a great resource for a chemist as well as a linguist, and there are databases that are more specific to a certain field. So, while ERIC might be one of the best educational databases available for OA content, it's not going to be one of the best databases for finding research in the field of microbiology.

5. How Do I Find Scholarly Articles for Specific Fields?

There are interdisciplinary research databases that provide articles in a variety of fields, as well as research databases that provide articles that cater to specific disciplines. Additionally, a journal repository or index can be a helpful resource for finding articles in a specific field.

When searching an interdisciplinary database, there are frequently advanced search features that allow you to narrow the search results down so that they are specific to your field. Selecting "psychology" in the advanced search features will return psychology journal articles in your search results. You can also try databases that are specific to your field.

If you're searching for law journal articles, many law reviews are OA. If you don't know of any databases specific to history, visiting a journal repository or index and searching "history academic journals" can return a list of journals specific to history and provide you with a place to begin your research.

6. Are Peer-Reviewed Articles Really More Legitimate?

The short answer is yes, peer-reviewed articles are more legitimate resources for academic research. The peer review process provides legitimacy, as it is a rigorous review of the content of an article that is performed by scholars and academics who are experts in their field of study. The review provides an evaluation of the quality and credibility of the article.

Non-peer-reviewed articles are not subject to a review process and do not undergo the same level of scrutiny. This means that non-peer-reviewed articles are unlikely, or at least not as likely, to meet the same standards that peer-reviewed articles do.

7. Are Free Article Directories Legitimate?

Yes! As with anything, some databases are going to be better for certain requirements than others. But, a scholarly article database being free is not a reason in itself to question its legitimacy.

Free scholarly article databases can provide access to abstracts, scholarly article websites, journal repositories, and high-quality peer-reviewed journal articles. The internet has a lot of information, and it's often challenging to figure out what information is reliable. 

Research databases and article directories are great resources to help you conduct your research. Our list of the best research paper websites is sure to provide you with sources that are totally legit.


About the Author

Scribendi Editing and Proofreading

Scribendi's in-house editors work with writers from all over the globe to perfect their writing. They know that no piece of writing is complete without a professional edit, and they love to see a good piece of writing transformed into a great one. Scribendi's in-house editors are unrivaled in both experience and education, having collectively edited millions of words and obtained numerous degrees. They love consuming caffeinated beverages, reading books of various genres, and relaxing in quiet, dimly lit spaces.



The Directory of Open Access Journals


Find open access journals & articles.

DOAJ in Numbers

80 languages

134 countries represented

13,713 journals without APCs

20,854 journals

10,457,979 article records


About the Directory

DOAJ is a unique and extensive index of diverse open access journals from around the world, driven by a growing community, and is committed to ensuring quality content is freely available online for everyone.

DOAJ is committed to keeping its services free of charge, including being indexed, and its data freely available.


free review research paper

Reference management. Clean and simple.

The top list of academic search engines


1. Google Scholar


Academic search engines have become the number one resource to turn to in order to find research papers and other scholarly sources. While classic academic databases like Web of Science and Scopus are locked behind paywalls, Google Scholar and others can be accessed free of charge. In order to help you get your research done fast, we have compiled the top list of free academic search engines.

Google Scholar is the clear number one when it comes to academic search engines. It's the power of Google searches applied to research papers and patents. It not only lets you find research papers for all academic disciplines for free but also often provides links to full-text PDF files.

  • Coverage: approx. 200 million articles
  • Abstracts: only a snippet of the abstract is available
  • Related articles: ✔
  • References: ✔
  • Cited by: ✔
  • Links to full text: ✔
  • Export formats: APA, MLA, Chicago, Harvard, Vancouver, RIS, BibTeX

Search interface of Google Scholar
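Google Scholar has no official public API, so if you want to script searches (for example, to pull candidate papers into a spreadsheet), one option is the third-party scholarly package. A minimal sketch follows; the field names reflect the package's current output and may change, and heavy automated use may be rate-limited by Google.

```python
from scholarly import scholarly  # third-party package: pip install scholarly

# Search Google Scholar and print the first few hits.
# Note: scholarly scrapes Google Scholar, so use it sparingly.
results = scholarly.search_pubs("remote work and employee productivity")

for _ in range(3):
    pub = next(results)
    bib = pub.get("bib", {})
    print(bib.get("pub_year", "n.d."), "-", bib.get("title", "(no title)"))
```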

2. BASE

BASE is hosted at Bielefeld University in Germany. That is also where its name stems from (Bielefeld Academic Search Engine).

  • Coverage: approx. 136 million articles (contains duplicates)
  • Abstracts: ✔
  • Related articles: ✘
  • References: ✘
  • Cited by: ✘
  • Export formats: RIS, BibTeX

Search interface of Bielefeld Academic Search Engine aka BASE

3. CORE

CORE is an academic search engine dedicated to open-access research papers. For each search result, a link to the full-text PDF or full-text web page is provided.

  • Coverage: approx. 136 million articles
  • Links to full text: ✔ (all articles in CORE are open access)
  • Export formats: BibTeX

Search interface of the CORE academic search engine

4. Science.gov

Science.gov is a fantastic resource as it bundles and offers free access to search results from more than 15 U.S. federal agencies. There is no need anymore to query all those resources separately!

  • Coverage: approx. 200 million articles and reports
  • Links to full text: ✔ (available for some databases)
  • Export formats: APA, MLA, RIS, BibTeX (available for some databases)

Search interface of Science.gov

5. Semantic Scholar

Semantic Scholar is the new kid on the block. Its mission is to provide more relevant and impactful search results using AI-powered algorithms that find hidden connections and links between research topics.

  • Coverage: approx. 40 million articles
  • Export formats: APA, MLA, Chicago, BibTeX

Search interface of Semantic Scholar
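Unlike Google Scholar, Semantic Scholar also exposes a free public API (the Academic Graph API), which makes it easy to script literature searches. A minimal sketch, assuming the current /graph/v1/paper/search endpoint and its default, unauthenticated rate limits:

```python
import requests

# Search Semantic Scholar's Academic Graph API for papers on a topic.
url = "https://api.semanticscholar.org/graph/v1/paper/search"
params = {
    "query": "transformer models for information extraction",
    "fields": "title,year,abstract,externalIds",
    "limit": 5,
}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

for paper in response.json().get("data", []):
    print(paper.get("year", "n.d."), "-", paper.get("title", "(no title)"))
```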

6. Baidu Scholar

Although Baidu Scholar's interface is in Chinese, its index contains research papers in English as well as Chinese.

  • Coverage: no detailed statistics available, approx. 100 million articles
  • Abstracts: only snippets of the abstract are available
  • Export formats: APA, MLA, RIS, BibTeX

Search interface of Baidu Scholar

7. RefSeek

RefSeek searches more than one billion documents from academic and organizational websites. Its clean interface makes it especially easy to use for students and new researchers.

  • Coverage: no detailed statistics available, approx. 1 billion documents
  • Abstracts: only snippets of the article are available
  • Export formats: not available

Search interface of RefSeek

Get the most out of academic search engines

Consider using a reference manager like Paperpile to save, organize, and cite your references. Paperpile integrates with Google Scholar and many popular databases, so you can save references and PDFs directly to your library using the Paperpile buttons:


Frequently asked questions about academic search engines

Google Scholar is an academic search engine, and it is the clear number one when it comes to academic search engines. It's the power of Google searches applied to research papers and patents. It not only lets you find research papers for all academic disciplines for free, but also often provides links to full-text PDF files.

Semantic Scholar is a free, AI-powered research tool for scientific literature developed at the Allen Institute for AI. Semantic Scholar was publicly released in 2015 and uses advances in natural language processing to provide summaries for scholarly papers.

BASE, as its name suggests, is an academic search engine. It is hosted at Bielefeld University in Germany, and that's where its name stems from (Bielefeld Academic Search Engine).

CORE is an academic search engine dedicated to open-access research papers. For each search result, a link to the full-text PDF or full-text web page is provided.

Science.gov is a fantastic resource as it bundles and offers free access to search results from more than 15 U.S. federal agencies. There is no need anymore to query all those resources separately!



5 literature review tools to ace your research (+2 bonus tools)

Sucheth


Your literature review is the lore behind your research paper. It comes in two forms, systematic and scoping, both of which round up the previously published works in your research area that led you to write and finish your own.

A literature review is vital as it provides the reader with a critical overview of the existing body of knowledge, frames your methodology, and points to opportunities for applying the research.


Some steps to follow while writing your review:

  • Pick an accessible topic for your paper
  • Do thorough research and gather evidence surrounding your topic
  • Read and take notes diligently, or use a tool like ChatPDF to help with this
  • Create a rough structure for your review
  • Synthesize your notes and write the first draft
  • Edit and proofread your literature review

To make your workload a little lighter, there are many AI-powered literature review tools. These can help you find academic articles and answer questions about a research paper.

Best literature review tools to improve research workflow

A literature review is one of the most critical yet tedious stages in composing a research paper. Many students find it an uphill task since it requires extensive reading and careful organization.

Using some of the best literature review tools listed here, you can make your life easier by overcoming some of the existing challenges in literature reviews. From collecting and classifying to analyzing and publishing research outputs, these tools help you with your literature review and improve your productivity without additional effort or expenses.

1. SciSpace

SciSpace is an AI platform for academic research that helps you find research papers and answer questions about them. You can discover, read, and understand research papers with SciSpace, making it an excellent platform for literature reviews. Featuring a repository of over 270 million research papers, it comes with an AI research assistant called Copilot that offers explanations, summaries, and answers as you read.


Find academic articles through AI

SciSpace has a dedicated literature review tool that finds scientific articles when you search for a question. Based on semantic search, it shows the research papers relevant to your subject. You can then gather quick insights for all the papers displayed in your search results, such as methodology and dataset, and figure out which papers are relevant to your research.

Identify relevant articles faster

Abstracts are not always enough to determine whether a paper is relevant to your research question. For starters, you can ask questions to your AI research assistant, SciSpace Copilot to explore the content and better understand the article. Additionally, use the summarize feature to quickly review the methodology and results of a paper and decide if it is worth reading in detail.

Quickly skim through the paper and focus on the most relevant information with the summarize and brainstorm questions features on SciSpace Copilot

Learn in your preferred language

A big barrier non-native English speakers face while conducting a literature review is that a significant portion of scientific literature is published in English. But with SciSpace Copilot, you can review, interact, and learn from research papers in any language you prefer — presently, it supports 75+ languages. The AI will answer questions about a research paper in your mother tongue.

Read and understand scientific literature in over 75 languages with SciSpace Copilot

Integrates with Zotero

Many researchers use Zotero to create a library and manage research papers. SciSpace lets you import your scientific articles directly from Zotero into your SciSpace library and use Copilot to comprehend your research papers. You can also highlight key sections, add notes to the PDF as you read, and even turn helpful explanations and answers from Copilot into notes for future review.

Understand math and complex concepts quickly

Come across complex mathematical equations or difficult concepts? Simply highlight the text or select the formula or table, and Copilot will provide an explanation or breakdown of the same in an easy-to-understand manner. You can ask follow-up questions if you need further clarification.

Understand math and tables in research papers

Discover new papers to read without leaving

Highlight phrases or sentences in your research paper to get suggestions for related papers in the field and save time on literature reviews. You can also use the 'Trace' feature to move across and discover connected papers, authors, topics, and more.

Find related papers quickly

SciSpace Copilot is now available as a Chrome extension , allowing you to access its features directly while you browse scientific literature anywhere across the web.


Get citation-backed answers

When you're conducting a literature review, you want credible information with proper references. SciSpace Copilot ensures that every piece of information it provides is backed by a direct reference, boosting transparency, accuracy, and trustworthiness.

Ask a question related to the paper you're delving into. Every response from Copilot comes with a clickable citation. This citation leads you straight to the section of the PDF from which the answer was extracted.

By seamlessly integrating answers with citations, SciSpace Copilot assures you of the authenticity and relevance of the information you receive.

2. Mendeley

Mendeley Citation Manager is a free web and desktop application. It helps simplify your citation management workflow significantly. Here are some ways you can speed up your referencing game with Mendeley.

Generate citations and bibliographies

Easily add references from your Mendeley library to your Word document, change your citation style, and create a bibliography, all without leaving your document.

Retrieve references

It allows you to access your references quickly. Search for a term, and it will return results by referencing the year, author, or source.

Add sources to your Mendeley library by dragging a PDF into Mendeley Reference Manager. Mendeley will automatically extract the PDF's metadata and create a library entry.

Read and annotate documents

It helps you highlight and comment across multiple PDFs while keeping them all in one place using Mendeley Notebook. Notebook pages are not tied to a reference and let you quote from many PDFs.

3. Zotero

A big part of many literature review workflows, Zotero is a free, open-source tool for managing citations that works as a plug-in in your browser. It helps you gather the information you need, cite your sources, attach PDFs, notes, and images to your citations, and create bibliographies.

Import research articles to your database

Search for research articles on a keyword, and add relevant results to your database. Then, select the articles you are most interested in, and import them into Zotero.

Add bibliography in a variety of formats

With Zotero, you don’t have to scramble for different bibliography formats. Simply use the Zotero-Word plug-in to insert in-text citations and generate a bibliography.

Share your research

You can save a paper and sync it with an online library to easily share your research for group projects. Zotero can be used to create your database and decrease the time you spend formatting citations.
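If Zotero is already part of your workflow, you can also pull items out of your library programmatically via Zotero's web API, for example to build a reading list or export metadata into a review matrix. The sketch below uses the third-party pyzotero wrapper; the library ID and API key are placeholders you would generate in your own Zotero account settings.

```python
from pyzotero import zotero  # third-party package: pip install pyzotero

# Placeholders -- create these under Settings > Feeds/API in your Zotero account.
LIBRARY_ID = "1234567"
API_KEY = "your-zotero-api-key"

zot = zotero.Zotero(LIBRARY_ID, "user", API_KEY)

# Fetch the five most recently added top-level items in the library.
for item in zot.top(limit=5):
    data = item["data"]
    print(data.get("date", "n.d."), "-", data.get("title", "(untitled)"))
```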

4. Sysrev

Sysrev is an AI tool for article review that facilitates screening, collaboration, and data extraction from academic publications, abstracts, and PDF documents using machine learning. The platform is free and supports public and Open Access projects only.

Some of the features of Sysrev include:

Group labels

Group labels can be a powerful concept for creating database tables from documents. When exported and re-imported, each group label creates a new table. To make labels for a project, go into the manage -> labels section of the project.

Group labels enable project managers to pull table information from documents. It makes it easier to communicate review results for specific articles.

Track reviewer performance

Sysrev's label counting tool provides filtering and visualization options for keeping track of the distribution of labels throughout the project's progress. Project managers can check their projects at any point to track progress and the reviewer's performance.

Tool for concordance

The Sysrev tool for concordance allows project administrators and reviewers to perform analysis on their labels. Concordance is measured by calculating the number of times users agree on the labels they have extracted.
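Concordance here is essentially inter-reviewer agreement. As a rough, hypothetical illustration of the idea (not Sysrev's actual implementation), simple percent agreement between reviewers' screening labels can be computed like this:

```python
from itertools import combinations

# Toy data: each reviewer's include/exclude decisions, keyed by article ID.
labels = {
    "reviewer_a": {"art1": "include", "art2": "exclude", "art3": "include"},
    "reviewer_b": {"art1": "include", "art2": "include", "art3": "include"},
}

# Percent agreement: share of shared articles both reviewers labelled identically.
for r1, r2 in combinations(labels, 2):
    shared = set(labels[r1]) & set(labels[r2])
    agreed = sum(labels[r1][a] == labels[r2][a] for a in shared)
    print(f"{r1} vs {r2}: {agreed / len(shared):.0%} agreement")
```

More robust agreement statistics (such as Cohen's kappa) correct for chance agreement, but the idea is the same: quantify how consistently reviewers apply the labels.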

5. Colandr

Colandr is a free, open-source, web-based analysis and screening tool used as an AI for academic research. It was designed to ease collaboration across various stages of the systematic review procedure. The tool can be a little complex to use, so here are the steps involved in working with Colandr.

Create a review

The first step to using Colandr is setting up an organized review project. This is helpful to librarians who are assisting researchers with systematic reviews.

The planning stage sets the review's objectives along with the research questions. Any reviewer can view the details of the planning stage; however, they can only be modified by the review's author.

Citation screening/import

In this phase, users can upload their results from database searches. Colandr also offers an automated deduplication system.

Full-text screening

During full-text screening, the system in Colandr learns the combinations of terms and expressions that are most useful for identifying relevant articles. If an article is selected, it is moved to the final step.

Data extraction/export

Colandr data extraction is more efficient than the manual method. It creates the form fields for data extraction during the planning stage of the review procedure. Users can decide to revisit or modify the form for data extraction after completing the initial screening.

Bonus literature review tools

6. SRDR+

SRDR+ is a web-based tool for extracting and managing systematic review or meta-analysis data. It is open and has a searchable archive of systematic reviews and their data.

7. Plot Digitizer

Plot Digitizer is an efficient tool for extracting information from graphs and images, equipped with many features that facilitate data extraction. The program comes with a free online application, which is adequate to extract data quickly.

Final thoughts

Writing a literature review is not easy. It’s a time-consuming process, which can become tiring at times. The literature review tools mentioned in this blog do an excellent job of maximizing your efforts and helping you write literature reviews much more efficiently. With them, you can breathe a sigh of relief and give more time to your research.

As you dive into your literature review, don’t forget to use SciSpace ResearchGPT to streamline the process. It facilitates your research and helps you explore key findings, summary, and other components of the paper easily.

Frequently Asked Questions (FAQs)

1. What is RRL in research?

RRL stands for Review of Related Literature and is sometimes used interchangeably with ‘Literature Review.’ An RRL is a body of studies relevant to the topic being researched. These studies may be in the form of journal articles, books, reports, and other similar documents. A review of related literature is used to support an argument or theory being made by the researcher, as well as to provide information on how others have approached the same topic.

2. What are a few software tools available for literature review?

• SciSpace Discover

• Mendeley

• Zotero

• Sysrev

• Colandr

• SRDR+

3. How to generate an online literature review?

The SciSpace Discover tool, which offers an excellent repository of millions of peer-reviewed articles and resources, will help you create a literature review easily. You may find relevant information by utilizing the filter option, checking its credibility, tracing related topics and articles, and citing in widely accepted formats with a single click.

4. What does it mean to synthesize literature?

To synthesize literature is to take the main points and ideas from a number of sources and present them in a new way. The goal is to create a new piece of writing that pulls together the most important elements of all the sources you read. Make recommendations based on them, and connect them to the research.

5. Should we write an abstract for a literature review?

Abstracts, particularly for the literature review section, are not required. However, an abstract for the research paper, on the whole, is useful for summarizing the paper and letting readers know what to expect from it. It can also be used to summarize the main points of the paper so that readers have a better understanding of the paper's content before they read it.

6. How do you evaluate the quality of a literature review?

• Whether it is clear and well-written.

• Whether the information is current and up to date.

• Whether it covers all of the relevant sources on the topic.

• Whether it provides enough evidence to support its conclusions.

7. Is a literature review mandatory?

Yes. A literature review is a mandatory part of any research project. It is a critical step in the process that allows you to establish the scope of your research and provide a background for the rest of your work.

8. What are the sources for a literature review?

• Reports

• Theses

• Conference proceedings

• Company reports

• Some government publications

• Journals

• Books

• Newspapers

• Articles by professional associations

• Indexes

• Databases

• Catalogues

• Encyclopaedias

• Dictionaries

• Bibliographies

• Citation indexes

• Statistical data from government websites

9. What is the difference between a systematic review and a literature review?

A systematic review is a form of research that uses a rigorous method to generate knowledge from both published and unpublished data. A literature review, on the other hand, is a critical summary of an area of research within the context of what has already been published.


How and Where to Find Research Papers for Literature Reviews

The literature review is an integral part of the research process. Finding the correct research papers for a literature review can be a daunting task, especially for early career researchers. This is more so in the digital age, where the sheer quantum of research available can drown researchers who attempt to sift through case studies, journals, online platforms, repositories, and databases. Regardless of whether you are just starting a career in research or are a veteran in the field, looking for relevant sources for a literature review can be time-consuming and frustrating.

In this article, we will provide valuable tips and explore various resources to understand how to find research papers relevant to literature reviews efficiently.


Academic Databases and Search Engines

To master how to find research papers, start with academic databases and search engines. Platforms such as Google Scholar, ResearchGate, and Scopus are indispensable for accessing a diverse array of scholarly articles. Enhance your search effectiveness by using advanced search options, employing specific keywords, and exploring related terms. Understanding and using the subject headings or descriptors provided by these databases is crucial for honing in on the most relevant papers quickly.

Reference Lists and Citation Networks

An effective strategy for finding research papers lies within the reference lists of the papers you already have. These lists can be gateways to additional, highly relevant sources. Similarly, investigating citation networks—observing which papers have cited key articles in your field—can unveil contemporary studies and emerging perspectives.
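This kind of citation chasing can also be scripted. The sketch below uses the free OpenAlex API to list works that cite a given paper ("forward" chasing); the work ID shown is a placeholder, so replace it with the OpenAlex ID of a key article in your field, and note that field names follow OpenAlex's current schema.

```python
import requests

# Placeholder OpenAlex work ID for a key paper in your field.
KEY_WORK_ID = "W2741809807"

# Ask OpenAlex for works that cite the key paper, most-cited first.
url = "https://api.openalex.org/works"
params = {
    "filter": f"cites:{KEY_WORK_ID}",
    "per-page": 10,
    "sort": "cited_by_count:desc",
}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

for work in response.json().get("results", []):
    print(work.get("publication_year", "n.d."), "-", work.get("display_name"))
```

Backward chasing works the same way in reverse: each OpenAlex record also lists the works it references, which you can walk to recover the reference list of a paper you do not have to hand.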

Accessing University Libraries

University libraries are a primary source of information on where you can find research papers. They offer access to a wealth of research papers and journals, including databases like JSTOR and ScienceDirect, for those seeking free resources. Library catalogs are instrumental in finding papers by title, author, or subject, and librarians can provide expert navigation through these resources.

Engaging with Online Academic Platforms

Platforms such as ResearchGate and Academia.edu, aside from being academic social networks, are valuable for finding research papers. They facilitate access to scholarly articles and enable researchers to share their work and connect with peers. Repositories like arXiv, bioRxiv, and SSRN provide early access to preprints across various disciplines, broadening your research scope.

Networking through Professional Associations and Conferences

For insights on where to find research papers, tap into professional associations and conferences. These platforms often grant members access to specialized publications and maintain online libraries of scholarly work. Conferences are also a goldmine for obtaining preprints or drafts of papers and for networking with fellow researchers.

Institutional Repositories

Institutional repositories are a go-to resource for finding research papers. These digital collections, hosted by academic institutions, offer open access to a variety of research outputs. Use keywords and subject categories to navigate these repositories for a rich selection of freely available scholarly material.
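Many institutional repositories also expose their metadata through the OAI-PMH protocol, so you can harvest records in bulk rather than browsing page by page. A minimal sketch, assuming a hypothetical repository endpoint (replace the base URL with your institution's actual OAI-PMH address):

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint -- replace with your repository's real base URL.
BASE_URL = "https://repository.example.edu/oai"
DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

# Harvest the first page of Dublin Core records from the repository.
response = requests.get(
    BASE_URL,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=60,
)
response.raise_for_status()

root = ET.fromstring(response.content)
for title in root.iter(f"{DC}title"):
    print(title.text)
```

Real harvests are paginated with resumption tokens, but even this first page is often enough to scan a repository's recent output for relevant titles.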

Government Reports and Policy Documents

Government agencies and research institutes are sometimes overlooked but can be significant sources of research papers. Their published reports and policy documents often include references to pertinent studies, providing valuable insights for your literature review.

By applying these expert strategies, you can streamline your search for relevant research papers. Remember, the quest is not just about where to find scientific articles; it’s about adopting a systematic and informed approach to locate the best resources. Utilizing targeted keywords, keeping abreast of the latest research, and exploring various sources will immensely enhance the quality of your literature reviews.


R Discovery is a literature search and research reading app that uses your interests to instantly create personalized reading feeds. Researchers can stay updated on the latest, most relevant content from its continually expanding library of 115M+ research articles sourced from trusted aggregators like CrossRef, Unpaywall, PubMed, PubMed Central and OpenAlex, as well as prestigious publishing houses like Springer Nature, JAMA, IOP, Taylor & Francis, NEJM, BMJ, Karger, SAGE, Emerald Publishing and more. The top-rated app in its space, R Discovery's carefully curated features give you the power to choose what, where, and how you read research.

Try the app for free or upgrade to R Discovery Prime, which unlocks unlimited access to premium features that let you listen to research on the go, read in your language, invite collaborators, auto-sync with top reference managers, use multiple feeds, and more. It's like having the world of research at your fingertips! Choose a simpler, smarter way to find and read research: get R Discovery Prime now at just US $39 a year!



Free Paper Publication

ISRDO does not charge any article submission or processing fees.

ISRDO is the world's leading open-access, peer-reviewed publication platform, with highly qualified reviewers.

Duration from Manuscript Submission to First Editorial Decision: 5 to 8 days

Duration from First Editorial Decision to Manuscript Acceptance: 10 to 14 days

Duration from Manuscript Acceptance to Publication: 1 to 5 days

Cover 200+ Subjects

in the fields of Agriculture, Veterinary Science, Arts, Humanities, Social Science, Biology, Life Science, Business, Management, Accounting, Science, Engineering, Technology, Environment, Earth and Physical Science, Medicine and Pharmacology

AI-Assisted Peer Review Process

Reduce time to publication with ISRDO Evaluate: incoming manuscript submissions receive insight and evaluation assisted by AI and natural language processing, integrated directly into the manuscript evaluation workflow.


What we offer

  • Publication of high-impact research from top academics and institutions around the world.
  • Publishing to ISRDO ensures the widest access and impact for your work.
  • All the benefits of traditional peer review - detailed, thoughtful and constructive improvement in your work.
  • 7 to 10 days before the first decision (for all subjects).
  • Publish quickly without compromising the quality of reviews.
  • ISRDO offers a multitude of tools and services to help you better publish your work


ISRDO publishes the following journals:

  • Scientific Research Journal of Science, Engineering and Technology (ISSN 2584-0584): 46 subjects, 41 board members
  • Scientific Research Journal of Medical and Health Science (ISSN 2584-1521): 45 subjects, 24 board members
  • Scientific Research Journal of Environment, Earth and Physical Science (ISSN 2584-0614): 21 subjects, 14 board members
  • Scientific Research Journal of Business, Management and Accounting (ISSN 2584-0592): 16 subjects, 15 board members
  • Scientific Research Journal of Arts, Humanities and Social Science (ISSN 2584-0622): 32 subjects, 18 board members
  • Scientific Research Journal of Agriculture and Veterinary Science (ISSN 2584-1416): 14 subjects
  • Scientific Research Journal of Biology and Life Science (ISSN 2584-0606): 19 subjects, 16 board members








Research: How to Build Consensus Around a New Idea

  • Devon Proudfoot
  • Wayne Johnson


Strategies for overcoming the disagreements that can stymie innovation.

Previous research has found that new ideas are seen as risky and are often rejected. New research suggests that this rejection can be due to people’s lack of shared criteria or reference points when evaluating a potential innovation’s value. In a new paper, the authors find that the more novel the idea, the more people differ in their perception of its value. They also found that disagreement itself can make people view ideas as risky and make them less likely to support them, regardless of how novel the idea is. To help teams get on the same page when it comes to new ideas, they suggest gathering information about evaluators’ reference points and developing criteria that can lead to more focused discussions.

Picture yourself in a meeting where a new idea has just been pitched, representing a major departure from your company’s standard practices. The presenter is confident about moving forward, but their voice is quickly overtaken by a cacophony of opinions from firm opposition to enthusiastic support. How can you make sense of the noise? What weight do you give each of these opinions? And what does this disagreement say about the idea?


  • DP Devon Proudfoot is an Associate Professor of Human Resource Studies at Cornell’s ILR School. She studies topics related to diversity and creativity at work.
  • Wayne Johnson is a researcher at the Utah Eccles School of Business. He focuses on evaluations and decisions about new information, including persuasion regarding creative ideas and belief change.


Wordle Review No. 1,165

Scroll down for hints and conversation about the puzzle for August 27, 2024.


By New York Times Games

Welcome to The Wordle Review. Be warned: This page contains spoilers for today’s puzzle. Solve Wordle first , or scroll at your own risk.

Wordle is released at midnight in your time zone. In order to accommodate all time zones, there will be two Wordle Reviews live every day, dated based on Eastern Standard Time. If you find yourself on the wrong review, check the number of your puzzle, and go to this page to find the corresponding review.

To avoid spoiling the game for others, make sure you are posting a comment about Wordle 1,165.


Open the comments section for more hints, scores, and conversation from the Wordle community.

Today’s Difficulty

The difficulty of each puzzle is determined by averaging the number of guesses provided by a small panel of testers who are paid to solve each puzzle in advance to help us catch any issues and inconsistencies.

Today’s average difficulty is 4.6 guesses out of 6, or moderately challenging.

For more in-depth analysis, visit our friend, WordleBot .

Today’s Word


Today’s word is CROWN, a noun. According to Webster’s New World College Dictionary, it refers to “a garland or wreath worn on the head as a sign of honor, victory, etc.”

Our Featured Artist

Roche is an illustrator, sculptor and painter from France who lives in Marseille. Soon after graduating from Gobelins Paris, where they studied animation, they directed a film called “Couchée” for French TV. They worked as an art director at Buck, a design agency in Los Angeles, and their art has been exhibited at galleries in Europe and the United States, including Leiminspace in Los Angeles and Barney Savage Gallery in New York. Roche received the Young Gun Award for illustration in 2020.

Further Reading

See the archive for past and future posts.

If you solved for a word different from what was featured today, please refresh your page.

Join the conversation on social media! Use the hashtag #wordlereview to chat with other solvers.

Leave any thoughts you have in the comments! Please follow community guidelines:

Be kind. Comments are moderated for civility.

Having a technical issue? Use the help button in the settings menu of the Games app.

See the Wordle Glossary for information on how to talk about Wordle.

Want to talk about Spelling Bee? Check out our Spelling Bee Forum .

Want to talk about Connections? Check out our Connections Companion .

Trying to go back to the puzzle ?

Open access | Published: 24 August 2024

A scoping review of large language model based approaches for information extraction from radiology reports

  • Daniel Reichenpfader (ORCID: orcid.org/0000-0002-8052-3359)
  • Henning Müller (ORCID: orcid.org/0000-0001-6800-9878)
  • Kerstin Denecke (ORCID: orcid.org/0000-0001-6691-396X)

npj Digital Medicine, volume 7, Article number: 222 (2024)

Subjects: Computer science, Institutions, Medical imaging

Radiological imaging is a globally prevalent diagnostic method, yet the free text contained in radiology reports is not frequently used for secondary purposes. Natural Language Processing can provide structured data retrieved from these reports. This paper provides a summary of the current state of research on Large Language Model (LLM) based approaches for information extraction (IE) from radiology reports. We conduct a scoping review that follows the PRISMA-ScR guideline. Queries of five databases were conducted on August 1st 2023. Among the 34 studies that met inclusion criteria, only pre-transformer and encoder-based models are described. External validation shows a general performance decrease, although LLMs might improve generalizability of IE approaches. Reports related to CT and MRI examinations, as well as thoracic reports, prevail. Most common challenges reported are missing validation on external data and augmentation of the described methods. Different reporting granularities affect the comparability and transparency of approaches.

Introduction

In contemporary medicine, diagnostic tests, particularly various forms of radiological imaging, are vital for informed decision-making 1 . Radiologists create semi-structured, free-text radiology reports for imaging examinations by dictation, following a personal or institutional schema to organize the information they contain. Structured reporting, on the other hand, which is used in only a few institutions and for specific cases, offers a way to enhance the automatic analysis of reports by defining standardized report layouts and contents.

Despite the potential benefits of structured reporting in radiology, its implementation often encounters resistance due to the possible temporary increase in radiologists’ workload, rendering its integration into clinical practice challenging 2 . Natural language processing (NLP) can provide the means to make structured information available while maintaining existing documentation procedures. NLP is defined as a “tract of artificial intelligence and linguistics, devoted to making computers understand the statements or words written in human languages” 3 . Applied to radiology reports, NLP-related methods can extract clinically relevant information. Specifically, information extraction (IE) provides techniques to use this clinical information for secondary purposes, such as prediction, quality assurance or research.

IE, a subfield within NLP, involves extracting pertinent information from free text. Subtasks include named entity recognition (NER), relation extraction (RE), and template filling. These subtasks are realized using heuristic-based methods, machine learning-based techniques (e.g., support vector machines or Naïve Bayes), and deep learning-based methods 4 . Within the field of deep learning, a new class of models has recently emerged, namely large language models (LLMs).
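To make the NER subtask concrete, here is a minimal sketch using the Hugging Face transformers library; the model checkpoint name and the example sentence are placeholders rather than artifacts of any included study.

```python
# Minimal NER sketch using the Hugging Face transformers library.
# The checkpoint name is a hypothetical placeholder; any token-classification
# model fine-tuned on (clinical) NER data could be substituted.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="some-org/clinical-ner-model",  # hypothetical checkpoint
    aggregation_strategy="simple",        # merge word pieces into entity spans
)

report_sentence = "There is a 12 mm nodule in the right upper lobe."
for entity in ner(report_sentence):
    # Each prediction contains an entity group, the matched text span and a score.
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```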

LLMs are “deep learning models with a huge number of parameters trained in an unsupervised way on large volumes of text” 5 . These models typically exceed one million parameters and have proven highly effective in information extraction tasks. The transformer architecture, introduced in 2017, serves as the foundation for most contemporary LLMs and comprises two distinct architectural blocks: the encoder and the decoder. Both blocks apply an innovative approach to creating contextualized word embeddings, called attention 6 . Before the “age of transformers” that continues today, recurrent neural network (RNN)-based LLMs were regarded as state-of-the-art for creating contextualized word embeddings. ELMo, a language model based on a bidirectional Long Short Term Memory (BiLSTM) network 7 , is an example thereof. Noteworthy transformer-based LLMs include encoder-based models like BERT (2018) 8 , decoder-based models like GPT-3 (2020) 9 and GPT-4 (2023) 10 , as well as models applying both encoder and decoder blocks, e.g., Megatron-LM (2019) 11 . Models continue to evolve, being trained on expanding datasets and consistently surpassing the performance benchmarks established by previous state-of-the-art models. The question arises as to how these new models shape IE applied to radiology reports.
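As a minimal illustration of what an encoder-based LLM provides, the sketch below loads a publicly available BERT checkpoint with the Hugging Face transformers library and produces one contextualized vector per input token; the example sentences are invented.

```python
# Sketch: contextualized token embeddings from an encoder-based LLM (BERT).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "No focal mass lesion is seen.",
    "The mass effect displaces the midline.",
]

with torch.no_grad():
    for text in sentences:
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model(**inputs)
        # last_hidden_state holds one context-dependent vector per token,
        # shaped (batch_size, sequence_length, hidden_size).
        print(text, tuple(outputs.last_hidden_state.shape))
```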

Regarding existing literature concerning IE from radiology reports, several reviews are available, although these sources either miss current developments or only focus on a specific aspect or clinical domain, see Table 1 . The application of NLP to radiology reports for IE has already been subject to two systematic reviews in 2016 12 and 2021 13 . While the former is not freely available, the latter searches only Google Scholar and includes only one study based on LLMs. Davidson et al. focused on comparing the quality of studies applying NLP-related methods to radiology reports 14 . More recent reviews include a specific scoping review on the application of NLP to reports specifically related to breast cancer 15 , the extraction of cancer concepts from clinical notes 16 , and a systematic review on BERT-based NLP applications in radiology without a specific focus on information extraction 17 .

As LLMs have only recently gained strong momentum, a research gap exists: no overview of LLM-based approaches for IE from radiology reports is available. With this scoping review, we therefore intend to answer the following research question:

What is the state of research regarding information extraction from free-text radiology reports based on LLMs?

Specifically, we are interested in the subquestions that arise from the posed research question:

RQ.01 - Performance: What is the performance of LLMs for information extraction from radiology reports?

RQ.02 - Training and Modeling: Which models are used and how is the pre-training and fine-tuning process designed?

RQ.03 - Use cases: Which modalities and anatomical regions do the analyzed reports correspond to?

RQ.04 - Data and annotation: How much data was used to train the model, how was the annotation process designed and is the data publicly available?

RQ.05 - Challenges: What are open challenges and common limitations of existing approaches?

The objective of this scoping review is to answer the above-mentioned questions, provide an overview of recent developments, identify key trends and highlight future research by identifying outstanding challenges and limitations of current approaches.

Study selection

As shown in Fig. 1 , the systematic search yielded 1,237 records, retrieved from five databases. After removing duplicate records and records published before 2018, 374 records (title, abstract) were screened for eligibility. The screening process resulted in the exclusion of 302 records. The remaining 72 records were sought for full-text retrieval, of which 68 could be retrieved. During data extraction, 43 papers were excluded for not fulfilling the inclusion criteria, which had not been apparent from the information provided in the abstract.

Figure 1: Querying of five databases resulted in a total of 1,237 sources of evidence eligible for screening. This number was reduced to 374 after deduplication and removal based on publication year. Eventually, 34 studies were included in this review after completion of the screening process.

Within the cited references of included papers, nine additional papers fulfilling all inclusion criteria were identified. Therefore, following the above-mentioned methodology, 34 records in total were included in this review.

Study characteristics

In the following, we organize the extracted information according to the structure of the extraction table, which in turn reflects the defined research questions. This review covers studies that were published between 01/01/2018 and 01/08/2023. The earliest study included was published in 2019. Eight included studies were published in 2020, and the topic peaked with eleven studies published in 2021; eight studies from 2022 and six from the first half of 2023 were also included.

Based on the corresponding author’s address, 15 out of 34 papers are located in the USA, followed by six in China and three each in the UK and Germany. Other countries include Austria ( n  = 1), Canada ( n  = 2), Japan ( n  = 2), Spain ( n  = 1) and The Netherlands ( n  = 1) (Table 2 ).

Extracted information

This section describes the NLP task, the extracted entities, the information model development process and the data normalization strategies of the included studies.

Extracted concepts encompass various entities, attributes, and relations. These concepts relate to abnormalities 18 , 19 , 20 , anatomical information 21 , breast-cancer related concepts 22 , clinical findings 23 , 24 , 25 , devices 26 , diagnoses 27 , 28 , observations 29 , pathological concepts 30 , protected health information (PHI) 31 , recommendations 32 , scores (TI-RADS 33 , tumor response categories 34 ), spatial expressions 35 , 36 , 37 , staging-related information 38 , 39 , and stroke phenotypes 40 . Several papers extract various concepts, e.g., ref. 41 .

Studies solely describing document-level single-label classification were excluded from this review. Two studies apply document-level multi-class classification. Document-level multi-label classification is described in nine studies (26%), of which three classify more than two classes per entity. The majority of the included studies ( n  = 21, 62%) describe NER methods; ten studies additionally apply RE methods. These studies encompass sequence-labeling and span-labeling approaches. Question answering (QA)-based methods are described in two studies, see Fig. 2 .

Figure 2: The circles contain the absolute number of studies per task. NER: named entity recognition; RE: relation extraction; ML-CL: binary multi-label classification; MC-CL: multi-class classification; QA: question answering.

The number of extracted concepts (including entities, attributes, and relations) ranges from one entity in both papers describing multi-class classification 33 , 34 up to 64 entities described in a NER-based study 30 .

Three studies base their information model on clinical guidelines, namely the Response evaluation criteria in solid tumors 42 and the TNM Classification of Malignant Tumors (TNM) staging system 43 . Development by domain experts ( n  = 2), references to previous studies ( n  = 3), regulations of the Health Insurance Portability and Accountability Act 44 ( n  = 1), the Stanza radiology model 45 ( n  = 1) and references to previously developed schemes ( n  = 2) are other foundations for information model development. One study provides detailed information about the development process of the information model as supplementary information 19 . One study reports development of their information model based on the RadLex terminology 46 , another based on the National Cancer Institute Thesaurus 47 . 21 studies (62%) do not report any details regarding the development of the information model.

Out of the 34 included studies, only three describe methods to structure and/or normalize extracted information. While Torres-Lopez et al. apply rule-based methods to structure extracted data based on entity positions and combinations 30 , Sugimoto et al. additionally apply rule-based normalization based on a concept table 24 . Datta et al. describe a hybrid approach to normalize extracted entities by first generating concept candidates with BM25, a ranking algorithm, and then choosing the best equivalent with a BERT-based classifier 48 .
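A minimal sketch of such a hybrid normalization step is shown below, assuming the rank_bm25 package for candidate generation; the concept table is invented and the BERT-based re-ranking step described in the study is only indicated by a placeholder.

```python
# Sketch of hybrid normalization: BM25 candidate generation followed by re-ranking.
# Assumes the rank_bm25 package; the concept table is a toy example.
from rank_bm25 import BM25Okapi

concept_table = ["pulmonary nodule", "pleural effusion", "pneumothorax"]
bm25 = BM25Okapi([concept.split() for concept in concept_table])

extracted_entity = "small nodule in the lung"
candidates = bm25.get_top_n(extracted_entity.split(), concept_table, n=2)

# In the described approach, a BERT-based classifier would then choose the best
# candidate; here a trivial placeholder simply takes the top-ranked BM25 hit.
best_match = candidates[0]
print(candidates, "->", best_match)
```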

Regarding the distribution of annotated entities within the datasets, only one study reports on having conducted measures to counteract class imbalance 19 . Another study reports on not having used F1 score as a performance measure, as the F1 score is not suited when class imbalances are present 27 . Four studies (12%) report coarse entity distributions and seven studies (21%) describe granular entity distributions.

In the following, details regarding the reported model architectures and implementations are described, including base models, (further) pre-training and fine-tuning methods, hyperparameters, performance measures, external validation and hardware details.

For an overview of applied model architectures, see Table 3 . 28 out of 34 papers (82%) describe at least one transformer-based architecture, while the remaining six studies apply various adaptions of the Bidirectional Long Short-Term Memory (Bi-LSTM) architecture. Out of the 28 studies that describe transformer-based architectures, 27 are based on the BERT architecture 8 and one is based on the ERNIE architecture 49 . Eight studies (24%) describe further pre-training of a BERT-based, pre-trained model on in-house data. Eighteen studies (53%) use a BERT-based, pre-trained model without further pre-training. One study applies pre-training to other layers than the LLM. Two studies do not provide any details regarding the architecture of the BERT models. One study combines both BERT- and BiLSTM-based architectures 28 . Out of six studies that describe only BiLSTM-based architectures, two studies apply pre-training of word vectors based on word2vec 50 . 31 studies (91%) provide sufficient details about the fine-tuning process. Three studies do not provide details 24 , 39 , 51 .

Reported performance measures vary between included studies, including traditional measures like precision, recall, and accuracy as well as different variations of the F1 score (micro, macro, averaged, weighted, pooled). The performance of studies reporting an F1-score variant (including micro, macro, pooled, generalized, exact-match and weighted F1) is compared in Table 4 . If a study describes multiple models, the score of the best model was chosen. If two or more datasets are compared, the higher score was chosen. If applicable, the result of external validation is also presented. 22 studies (65%) report having conducted statistical tests, including cross-validation, the McNemar test, the Mann-Whitney U test and the Tukey-Kramer test.
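To illustrate why these variants are hard to compare, the following sketch computes micro-, macro- and weighted F1 with scikit-learn on the same invented, imbalanced predictions; the three values diverge noticeably.

```python
# Sketch: micro-, macro- and weighted F1 computed on the same toy predictions.
from sklearn.metrics import f1_score

# Imbalanced toy labels: class 0 dominates, class 2 is rare.
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 0, 0, 0, 1, 1, 1, 0, 2]

for average in ("micro", "macro", "weighted"):
    print(average, "F1:", round(f1_score(y_true, y_pred, average=average), 3))
```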

Hyperparameters used to train the models (e.g., learning rate, batch size, embedding dimensions) are described in 28 studies (82%), albeit with varying degrees of detail. Six studies (18%) do not report any details on hyperparameters. Seven studies (21%) describe a validation of their algorithm on training data from an external institution. Seven studies (21%) include details about hardware and computational resources spent during the training process.

In this section, we describe the study characteristics related to data sets, encompassing number of reports, data splits, modalities, anatomic regions, origin, language, and ethics approval.

Data set size used for fine-tuning ranges from 50 to 10,155 reports. The amount of external validation data ranges from 10% to 31% of the amount of data used for fine-tuning. For further pre-training of transformer-based architectures, 50,000 up to 3.8 million reports are used. Jantscher et al. additionally use the content of a public clinical knowledge platform ( DocCheck Flexicon 52 ) 53 . Zhang et al. only report the amount of data (3 GB) 54 . Jaiswal et al. performed further pre-training on the complete MIMIC-CXR corpus 29 . Two studies that described pre-training of word embeddings for Bi-LSTM-based architectures used 3.3 million and 317,130 reports, respectively 24 , 32 .

Data splits vary widely; the majority of studies ( n  = 23, 68%) divide their data into three sets, namely train-, validation- and test-set, with the most common split being 80/10/10, respectively. This split variation is reported in eight studies (24%). Seven studies (21%) use two sets only, four studies (12%) apply cross-validation-based methods.
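As a minimal sketch of the most common 80/10/10 split, the code below uses scikit-learn; the report texts and labels are placeholders.

```python
# Sketch: 80/10/10 train/validation/test split with scikit-learn.
from sklearn.model_selection import train_test_split

reports = [f"report_{i}" for i in range(100)]  # placeholder report texts
labels = [i % 2 for i in range(100)]           # placeholder labels

# First split off 20%, then divide that portion equally into validation and test.
train, rest, y_train, y_rest = train_test_split(
    reports, labels, test_size=0.2, random_state=42, stratify=labels)
val, test, y_val, y_test = train_test_split(
    rest, y_rest, test_size=0.5, random_state=42, stratify=y_rest)

print(len(train), len(val), len(test))  # 80 10 10
```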

19 studies (56%) describe the timeframe within which reports had been extracted. Dada et al. report the longest timeframe of 22 years, using reports between 1999 and 2021 for further pre-training 41 . The shortest timeframe reported is less than one year (2020–2021) 26 .

Several studies are based on publicly available datasets: MIMIC-CXR 55 was used once 29 while MIMIC 56 was used by two studies 40 , 57 . MIMIC-III 58 was used by six studies (18%) 37 , 40 , 48 , 57 , 59 , 60 . The Indiana chest X-ray collection 61 was used twice 35 , 36 . For external validation, MIMIC-II was applied by Mithun et al. 62 and MIMIC-CXR by Lau et al. 23 . While some of these studies use the datasets as-is, some perform additional annotation. Other studies use data from hospitals, hospital networks, other tertiary care institutions, medical big data companies, research centers, care centers or university research repositories.

Figures 3 and 4 show the frequencies of modalities and anatomical regions, respectively. Note that frequencies were counted on study-level and not weighted by the number of reports.

Figure 3: The diagram shows absolute numbers of mentioned modalities. Several studies use reports obtained from multiple modalities. Other modalities include positron emission tomography-computed tomography (PET-CT) ( n  = 1) and ultrasound ( n  = 2). Three studies did not explicitly mention associated modalities. Abbreviations: CT, computed tomography; MRI, magnetic resonance imaging.

Figure 4: The diagram shows absolute numbers of mentioned anatomical regions. Several studies use reports corresponding to multiple anatomical regions. Other anatomical regions include the heart, abdomen, pelvis, "all body regions", nose, thyroid ( n  = 1 each) and breast ( n  = 2). Four studies did not explicitly mention associated anatomical regions.

Report language was inferred from the location of the corresponding author’s institution: most studies use English reports ( n  = 21, 62%), followed by Chinese ( n  = 6, 18%), German ( n  = 4, 12%), Japanese ( n  = 2, 6%) and Spanish ( n  = 1). The corresponding author of one study is located in the Netherlands, but the study uses data from an Indian hospital 62 .

19 studies (56%) explicitly state that the endeavor was approved by either a national committee or agency ( n  = 3, 9%) or a local institutional or hospital review board or committee ( n  = 15, 44%). One study reports approval only for in-house data, but not for the external validation set from another institution 33 .

Annotation process

28 studies (82%) describe an exclusively manual annotation process. Five studies (15%) explicitly state that each report was annotated by two persons independently. Lau et al. use annotated data to train a classifier that supports the annotation process by proposing only documents that contain potential annotations 32 . Two studies use tools for automated annotation with manual correction and review 29 , 31 . Lybarger et al. do not provide details on their augmentation of an existing dataset 21 ; three others do not report details as they either extract information available in the hospital information system 33 or exclusively use existing annotated datasets 36 , 59 .

Annotation tagging schemes mentioned include IOB(2), BISO and BIOES (short for beginning, inside, outside, start, end). The number of involved annotators ranges from one to five; roles include clinical coordinators, radiologists, radiology residents, medical and graduate students, medical informatics engineers, neurologists, neuro-radiologists, surgeons, radiological technologists and internists. Existing annotation guidelines are reported by three studies, while four studies mention that instructions exist but do not provide details. 23 studies (68%) do not provide information regarding annotation guidelines.
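To illustrate one such tagging scheme, the sketch below converts token-level entity spans into IOB2 tags; the sentence and the span offsets are invented.

```python
# Sketch: converting token-level entity spans into IOB2 tags.
def to_iob2(tokens, entity_spans):
    """entity_spans: list of (start_token, end_token_exclusive, label) tuples."""
    tags = ["O"] * len(tokens)
    for start, end, label in entity_spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

tokens = "There is a small nodule in the right upper lobe .".split()
# Invented annotation: tokens 3-4 form a FINDING, tokens 7-9 an ANATOMY entity.
spans = [(3, 5, "FINDING"), (7, 10, "ANATOMY")]
print(list(zip(tokens, to_iob2(tokens, spans))))
```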

Inter-annotator-agreement (IAA) is reported by 23 (68%) studies. Measures include F1 score variants ( n  = 8, 24%), Cohen kappa ( n  = 7, 21%), Fleiss kappa ( n  = 19, 56%) and the intraclass correlation coefficient ( n  = 1). IAA results are reported by 16 studies (47%) and range, for Cohen kappa, from 81% to 93.7%. Eleven studies (32%) mention the tool used for annotation, including Brat 23 , 37 , 39 , 48 , 53 , 60 , Doccano 34 , TagEditor 30 , Talen 46 and two self-developed tools 19 , 63 .
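A minimal sketch of one common IAA measure, Cohen kappa, computed with scikit-learn on invented document-level labels from two annotators:

```python
# Sketch: inter-annotator agreement (Cohen kappa) between two annotators.
from sklearn.metrics import cohen_kappa_score

# Invented document-level labels assigned independently by two annotators.
annotator_a = ["nodule", "normal", "nodule", "effusion", "normal", "nodule"]
annotator_b = ["nodule", "normal", "normal", "effusion", "normal", "nodule"]

print("Cohen kappa:", round(cohen_kappa_score(annotator_a, annotator_b), 2))
```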

Data and source code availability

Five studies (15%) state that data is available upon request. One study claims availability, although there is no data present in the referenced online repository 57 . One study published its dataset in a GitHub repository 35 . One study only uses annotations provided within a dataset with credentialed access 59 . The remaining 22 studies (65%) do not mention whether data is available or not. Regarding source code availability, ten studies (29%) claim their code to be available. The remaining 24 studies (71%) do not mention whether the source code is available or not.

Challenges and limitations

Various aspects related to limitations and challenges are described. The most commonly mentioned limitation is that studies use only data from a single institution 21 , 22 , 24 , 30 , 36 , 51 , 53 . Similarly, multiple studies mention validation on external or multi-institutional data as a future research direction 19 , 26 , 59 . Two studies mention the need for semantic enrichment or normalization of extracted information 48 , 54 .

Many studies report intentions to augment their described approaches to other report types 21 , 28 , 30 , 37 , other report sections 22 , to include other or more data sources 35 , 39 , 54 or entities 32 , 62 , body parts 46 , clinical contexts 34 or modalities 35 , 53 , 59 .

Additional limitations include the application to only a single modality or clinical area 21 , 46 , 53 , small dataset size 27 , 32 , 54 , technical limitations 27 , 63 , no negation detection 35 , 62 , few extracted entities 24 , 28 or result degradation upon evaluation on external data 19 or more recent reports 25 . Missing interpretability is mentioned by two studies 28 , 41 .

Performance measures reported in Table 4 cannot be compared due to differences in datasets, the number of extracted concepts and the heterogeneity of applied performance measures. External validation, performed by six studies, generally shows lower performance of the algorithm when applied to external data, i.e., data from a source different from the one used for training. The largest performance drop of 35% (overall F1 score) was reported in a Bi-LSTM-based study performing multi-label binary classification of only three entities on the document level 62 . In contrast, Torres-Lopez et al. extracted a total of 64 entities with a performance drop of only 3.16% (F1 score), although without providing details on their model architecture. The smallest performance drop amounts to only 0.74% (micro F1) for extracting seven entities based on a further pre-trained model 46 . However, it cannot be assumed that further pre-training increases model generalizability and therefore performance.

Upon analysis of performance, several inconsistencies between included studies impair comparability: First, there is no standardized measure or best practice to assess model performance for information extraction. Although the F1 score is generally the most frequently applied and best-known measure, many variations exist, including micro, macro, exact- and inexact-match scores, the weighted F1 score and 1-Margin F1 scores. However, Zaman et al. argue that the macro-averaged F1 score and overall accuracy are not suited as performance measures when class imbalances are present 27 . For the same reason, the F1 score is only used to assess binary classification and not multi-class classification by Wood et al. 19 .

While 22 studies apply some variation of cross-validation to assess model performance, 12 studies apply simple split validation methods. Singh et al. show that, for small datasets, simple split validation yields significantly different performance estimates compared to cross-validation 64 .
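A minimal sketch contrasting the two strategies with scikit-learn; the classifier and the toy data are placeholders.

```python
# Sketch: simple split vs. 5-fold cross-validation on a small toy dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=120, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Single 80/20 split: the estimate depends heavily on which samples form the test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("single-split accuracy:", round(clf.fit(X_tr, y_tr).score(X_te, y_te), 3))

# 5-fold cross-validation: the estimate is averaged over folds and usually more stable.
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", round(scores.mean(), 3))
```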

Specific statistical tests to compare the performance of different models include DeLong’s test to compare areas under the ROC curve 19 , 27 , the Tukey-Kramer method for multiple comparison analysis 46 and the McNemar test to compare the agreement between two models 22 . However, the appropriateness of each test method remains unclear, as shown by Demner et al. 65 .

In general, equations showing how performance metrics are computed should always be included in the manuscript to improve understandability, e.g., as done in refs. 22 and 30 . To improve the comparability of studies, scores for each class as well as a reasonable aggregated score over all classes should be reported.
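For reference, the standard definitions underlying most of the reported scores can be written per class c in terms of true positives (TP), false positives (FP) and false negatives (FN); micro-F1 pools the counts over all classes, whereas macro-F1 averages the per-class scores:

```latex
% Standard per-class definitions and two common aggregation variants.
\mathrm{Precision}_c = \frac{TP_c}{TP_c + FP_c}, \qquad
\mathrm{Recall}_c = \frac{TP_c}{TP_c + FN_c}, \qquad
F1_c = \frac{2 \cdot \mathrm{Precision}_c \cdot \mathrm{Recall}_c}{\mathrm{Precision}_c + \mathrm{Recall}_c}

F1_{\mathrm{micro}} = \frac{2 \sum_{c} TP_c}{2 \sum_{c} TP_c + \sum_{c} FP_c + \sum_{c} FN_c}, \qquad
F1_{\mathrm{macro}} = \frac{1}{|C|} \sum_{c \in C} F1_c
```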

This review identified only encoder-based or pre-transformer architectures and no generative models, such as GPT-4 (released in March 2023). The majority of the described models is based on the encoder-only BERT architecture, first described by Devlin et al. 8 . We see several possible reasons: First, while having been available since 2018 66 , generative models first needed time to become established as a technology to be investigated and applied in the healthcare sector. Second, early generative models might have demonstrated poor performance due to their relatively small size and lack of domain-specific data for pre-training 67 . Third, poor performance might also be related to model hallucinations: Farquhar et al. define hallucination as “answering unreliably or without necessary information” 68 . Hallucinations include, among others, the provision of wrong answers due to erroneous training data, lying in pursuit of a reward, or errors related to reasoning and generalization 68 . In contrast, encoder-only models like the BERT architecture cannot hallucinate as they provide only context-aware embeddings of input data; the actual NLP task (e.g., sequence labeling, classification or regression) is performed by a relatively simple, downstream neural network, rendering this architecture more transparent and verifiable than generative models.

An advantage of LLMs is their capability to be customized to a specific language or general domain (e.g., medicine): First, a base version of the model is trained using a large amount of unlabeled data: This process is called pre-training. The concept of transfer-learning enables researchers to further customize a pre-trained model to a more specific domain (e.g., clinical domain, another language or from a certain hospital). This is also referred to as further pre-training. The process of training the model to perform a particular NLP task (e.g., classification) based on labeled data is called fine-tuning. These definitions (pre-training, further pre-training, transfer learning and fine-tuning) tend to be confused by authors or replaced by other term variants, e.g., “supervised learning”. However, it is imperative to use clear and concise language to distinguish between the concepts mentioned above.
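To make these distinctions concrete, the following is a minimal sketch using the Hugging Face transformers library; the label count and the data are illustrative, and the training loops themselves are omitted.

```python
# Sketch: further pre-training vs. fine-tuning of a pre-trained encoder (BERT).
# The label count is illustrative; training loops are omitted.
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForTokenClassification,
    AutoTokenizer,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# 1) Pre-training: "bert-base-uncased" was already pre-trained on general text.
# 2) Further pre-training: continue masked-language-model training on unlabeled,
#    in-house radiology reports to adapt the model to the clinical domain.
mlm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
#    ... train mlm_model with a masked-language-modeling objective on raw reports ...

# 3) Fine-tuning: attach a task-specific head (here: token classification for NER)
#    and train it on labeled, annotated reports.
ner_model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)  # illustrative label count
#    ... train ner_model on annotated reports (supervised fine-tuning) ...
```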

Seven included studies apply further pre-training as defined above. The effect of further pre-training depends on various factors, including specifications of the input model used or amount and quality of the data used for further pre-training. Interestingly, further pre-training of a pre-trained model to another language was not reported.

In contrast to the traditional further pre-training described above, Jaiswal et al. show how BERT-based models can achieve higher performance when little data is available by using contrastive pre-training 29 . The authors claim that their model achieves better results than conventional transformers when the number of annotated reports is limited.

Only two studies solve the task of information extraction based on extractive question answering 41 , 59 . Extractive question answering was already described in the original BERT paper 8 : Instead of generating a pooled embedding of the input text or one embedding per input token, a BERT model fine-tuned for question answering takes a question (together with the input text) and outputs the start and end tokens of the text span that contains the answer to the posed question - this is also possible if no answer or multiple answers are contained within the text, as shown by Zhang et al. 69 .
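A minimal sketch of extractive question answering with the transformers question-answering pipeline; the model checkpoint name and the report snippet are placeholders.

```python
# Sketch: extractive question answering over a radiology report snippet.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="some-org/extractive-qa-model",  # hypothetical checkpoint
)

report = "Impression: 8 mm nodule in the right lower lobe, unchanged from prior exam."
result = qa(question="Where is the nodule located?", context=report)

# The pipeline returns the answer span, its character offsets and a confidence score.
print(result["answer"], result["start"], result["end"], round(result["score"], 3))
```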

The most common modalities for which reports of findings were used in the included studies are CT ( n  = 16), MRI ( n  = 15) and X-ray ( n  = 14). CT reports appear to be the most common source when using in-house data. According to data provided by the Organisation for Economic Cooperation and Development (OECD), the availability of CT scanners and MRI machines has increased steadily during the past decades. Furthermore, there has been a general upward trend in the number of CT and MRI examinations performed worldwide 70 . CT exams are fast and cheap compared to MRI.

The most common anatomical regions studied are thorax ( n  = 17) and brain ( n  = 8). There might be different reasons for this distribution. First, chest X-Ray is one of the most frequently performed imaging examinations. Second, six studies used reports obtained from MIMIC datasets, including thorax X-Ray, brain MRI and babygram examinations. Two studies used thorax X-Ray reports obtained from publicly available datasets. Furthermore, a report on the annual exposure from medical imaging in Switzerland shows that the thorax region is the third most common anatomical region of CT procedures (11.8%), preceded by abdomen and thorax (16.4%) and abdomen only (17.7%) 71 .

We identified several aspects that showed different interpretations in the included studies. One of the major ambiguities discovered is the clear definition of the terms test set and validation set: some studies use these two very distinct terms interchangeably. However, agreement is needed on which set is used during parameter optimization of a model and which set is used for evaluation of the final model. Furthermore, studies report either the number of sentences or the number of documents, hindering comparability. It also remains unclear whether the stated dataset size includes documents without annotation or annotated data only. Report language is never explicitly stated.

Regarding annotation, it becomes apparent that there is no standard for the annotation and IAA process: the recommended number of annotators and their backgrounds, the number of reports, the number of reconciliation rounds and, especially, the IAA calculation methods all differ widely in the included papers.

Good practices observed in the included papers include reporting of descriptive annotation statistics 35 and conducting complexity analysis of the report corpus 29 , 34 : These complexity metrics include e.g., unique n-gram counts, lexical diversity as measured with the Yule 1 score and the Type-Token-Ratio, as reported in ref. 46 . Wood et al. highlight the importance of splitting data on patient-level instead of report level 19 .
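A minimal sketch of two simple corpus complexity metrics (the type-token ratio and unique bigram counts); the toy corpus is invented and Yule's measure is omitted.

```python
# Sketch: simple corpus complexity metrics (type-token ratio, unique bigram count).
corpus = [
    "no acute intracranial abnormality",
    "no acute abnormality of the chest",
]

tokens = [tok for report in corpus for tok in report.split()]
bigrams = {bg for report in corpus
           for bg in zip(report.split(), report.split()[1:])}

type_token_ratio = len(set(tokens)) / len(tokens)
print(f"tokens: {len(tokens)}, TTR: {type_token_ratio:.2f}, unique bigrams: {len(bigrams)}")
```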

Last, we want to highlight interesting approaches: Fine et al. first use structured reports for fine-tuning and then apply the resulting model on unstructured reports 34 . Jaiswal et al. introduce three novel data augmentation techniques before fine-tuning their model based on contrastive learning 29 . Pérez-Díez et al. developed a randomization algorithm to substitute detected entities with synthetic alternatives to disguise undetected personal information 31 .

The mentioned challenges and limitations are manifold and diverse. Ten papers in total address the topic of generalizing to data from other institutions. Another challenge lies in the limitations of each study, be it a limited number of entities or, usually, a single modality and clinical domain. Every included study is based on a pre-defined information model and fine-tuned on annotated data. This means that, as of August 2023, no truly generalized approach for IE had been described in the identified literature.

Upon interpretation of the above-mentioned results, several limitations of this review can be mentioned. First, the definition of information extraction proved to be challenging. We defined information extraction as a collective term for the NLP tasks of document-level multi-label classification (including binary or multiple classes for each label), NER (including RE), as well as question answering approaches. We excluded binary classification on the document level. A narrow definition of IE would possibly include only NER and RE, whereas the widest definition would also include binary document classification. With our approach, we wanted to ensure a balanced level of task complexity.

Furthermore, the definition of an LLM was also unclear. In the protocol for this review, LLMs are defined as “deep learning models with more than one million parameters, trained on unlabeled text data” 72 . Although BiLSTM-based architectures are themselves not pre-trained on large volumes of unlabeled text, the applied pre-trained word embeddings like fastText and word2vec motivated the inclusion of these architectures in this review. An additional argument for including BiLSTM-based architectures is ELMo, a BiLSTM-based architecture with ~13M parameters that is referred to as one of the first LLMs. However, we decided not to include BiGRU-based architectures, as information on their parameter count was usually not available. A narrower definition would only include transformer-based architectures with billions of parameters. This definition seems to have recently reached consensus among researchers and in industry. As of the time of submission in June 2024, LLMs tend to be defined even more narrowly, only including generative models based on autoregressive sampling 73 . This might be because generative models are currently the most common and frequent model architecture. On the contrary, a wider definition would also potentially include BiGRU-based, CNN-based and other architectures. It also remains subject to discussion whether summarization can be regarded as information extraction; for this study, summarization was not included, potentially missing studies of interest, e.g., ref. 74 . Likewise, image-to-text report generation was excluded.

Regarding the search strategy, we decided not to include numerous model names in order to keep the complexity of the search term low. Instead, we initially only included the terms transformers and BERT . Eventually, only two search dimensions were used because otherwise, the number of search results would have been too small. To minimize the number of missed studies, the reference lists of included studies were additionally searched, which led to nine additionally included studies that were not covered by the search strategy. Nevertheless, our search strategy was not exhaustive: studies that used terms related to transformation or structuring of reports, e.g., refs. 75 , 76 , were missed as these terms were not part of the search strategy.

No generative models and therefore no approaches based on generative models (including few-, single- or zero-shot learning) are included in the search results. This might be due to the fact that generative models only started to become widely accessible with the release of ChatGPT in November 2022. Only later did open-source alternatives become available. However, due to the sensitive nature of patient data, the utilization of publicly serviced models, e.g., GPT-4, is restricted by data protection rules. Until the cut-off time of this review, state-of-the-art open-source generative models, e.g., Llama 2 (70B), still required vast computational resources, restricting the possibilities of on-premise deployment within hospital infrastructures. Furthermore, early studies might so far only be published without peer review (e.g., on arXiv), excluding them from this review, e.g., ref. 77 . As no search updates were performed for this review, arXiv papers that were later peer-reviewed were also not included, e.g., ref. 78 . Relevant papers published in the ACL Anthology were also not included, potentially missing papers describing generative approaches, e.g., by Agrawal et al. 79 and Kartchner et al. 80 . Sources that did not mention “information extraction”, “named entity recognition” or “relation extraction” in the title or abstract and were not referred to by other papers were also not included, e.g., ref. 81 .

Given the diverse nature of the included studies alongside discrepancies in both the quality and quantity of reported data, a comprehensive analysis of the extracted information was deemed impossible. Future systematic reviews could enhance this comparison by refining the research question and subquestions to a more specific scope. In line with the protocol for this scoping review, a purely descriptive presentation of findings was conducted.

Another potential limitation is the fact that data extraction was performed by one author (DR) only. However, prior to data extraction, two studies were extracted by two authors, and the resulting information was compared. This led to the addition of six aspects to the original data extraction table, including details on hardware specification, hyperparameters, ethical approval, dataset timeframe and class imbalance measures.

Last, we want to highlight that this scoping review strictly adheres to the PRISMA-ScR and PRISMA-S guidelines. Our search strategy of five databases resulted in over 1200 primary search results, minimizing the risk of missing relevant studies. This risk was further minimized by carefully choosing a balanced definition of both IE and LLMs. As only peer-reviewed studies were taken into account, a certain study quality was furthermore ensured.

Due to the current rapid technical progress, we summarize the latest developments regarding LLMs in general, their application in medicine, as well as developments with regard to this review’s topic. We give an overview of studies published outside the scope of our review (published after August 1st 2023) as well as of the application of LLMs in clinical domains and tasks different from IE from radiology reports.

As of June 2024, the majority of recently published LLMs, be they commercial or open-source, are generative models based on the decoder block of the original transformer architecture. Two development strategies can be observed to increase model performance: The first strategy simply increases the number of model parameters (and therefore model size), which also increases the demand for training data. The second strategy optimizes existing models through techniques such as model pruning, quantization or distillation, as shown by Rohanian et al. 82 . Recent models include the Gemini family (2024) 83 , the T5 family 84 , Llama 3 (2024) 85 and Mixtral (2024) 86 . Moreover, research has increasingly been focusing on developing domain-specific models, e.g., Meditron, Med-PaLM 2, or Med-Gemini for the healthcare domain 87 , 88 , 89 .

In the broad clinical domain, these recent, generative LLMs show impressive capabilities, partly outperforming clinicians in test settings regarding, e.g., medical summary generation 90 , prediction of clinical outcomes 91 and answering of clinical questions 92 . Dagdelen et al. have recently demonstrated that, in the context of structured information extraction from scientific texts, even generative models require a few hundred training examples to effectively extract and organize information using the open-source model Llama-2 93 .

For the specific topic of structured IE from radiology reports, several papers and pre-prints have been published since August 2023. In general, it becomes apparent that resource-demanding generative models do not seem to show better results than encoder-based approaches, as shown by the following studies: When applying the open-source model Vicuna 94 to binary-label 13 concepts at the document level of radiology reports, Mukherjee et al. showed only moderate to substantial agreement with existing, less resource-demanding approaches 95 . Document-level binary labeling was also investigated by Adams et al., who compared GPT-4 to a BERT-based model further pre-trained on German medical documents 75 . In this comparison, the smaller, open-source model 96 outperformed GPT-4 for five out of nine concepts. The authors also tested GPT-4 on English radiology reports, however without providing detailed performance measures. Similarly, Hu et al. used ChatGPT as a commercial platform to extract eleven concepts from radiology reports without further fine-tuning or provision of examples 97 . The results show that ChatGPT is inferior to a previously described approach (BERT-based multiturn question answering 98 ) as well as to a rule-based approach (averaged F1 scores: 0.88, 0.91, 0.93, respectively). Mallio et al. qualitatively compared several closed-source generative LLMs for structured reporting, although without clear results 99 . Additionally, several key gaps remain with the application of the above-mentioned generative models. For example, closed-source models continue to grow larger, requiring increasing amounts of scarce hardware resources and training data. Moreover, although large generative models currently show the best performance, they are less explainable than, e.g., the encoder-based architectures prevalent in this review’s results 100 .

Generative models and encoder-based models each offer unique advantages and disadvantages. Yang et al. show that generative models might excel at generalizing to external data by applying in-context learning 101 . Generative models are by design able to aggregate information and might therefore be more suitable for extracting more complex concepts. Open-source models are also becoming more efficient and compact, as seen in recent advancements such as the Phi-3 model family 102 . However, generative models are usually computationally intensive and require substantial resources for training and deployment. While generative models still face issues regarding hallucination, this behavior might be mitigated by combining LLMs with knowledge graphs, as introduced by Gilbert et al. 103 .

On the other hand, encoder-based models, such as BERT, are highly effective at understanding and generating bidirectional contextual embeddings of input data, which makes them particularly strong in tasks requiring precise comprehension or annotation of text, such as extractive question answering or NER. They tend to be more resource-efficient during inference compared to generative models. However, encoder-based models often struggle with generating coherent text, a task where generative models excel. Additionally, while encoder-based models can be fine-tuned for specific tasks, they may not generalize as well as generative models. Moreover, research and industry currently focus on the development of generative models, as the last encoder-based architecture was published in 2021 104 . In summary, while generative models currently offer flexibility and powerful aggregation capabilities, encoder-based models provide efficiency and precision.

In this review, we provide a comprehensive overview of recent studies on LLM-based information extraction from radiology reports, published between January 2018 and August 2023. No generative model architectures for IE from radiology reports were described in the literature. After August 2023, generative models have become more common, though they tend not to show a performance increase compared to pre-transformer and encoder-based architectures. According to the included studies, pre-transformer and encoder-based models show promising results, although comparison is hindered by different performance score calculation methods and vastly different datasets and tasks. LLMs might improve the generalizability of IE methods, although external validation is performed in only seven studies. The majority of studies used pre-trained LLMs without further pre-training on their own data. So far, research has focused on IE from reports related to CT and MRI examinations and most frequently on reports related to the thorax region. We recognize a lack of publicly available datasets. Furthermore, a lack of standardization of the annotation process results in potential differences regarding data quality. The source code is made available by only ten studies, limiting the reproducibility of the described methods. The most common challenges reported are missing validation on external data and augmentation of the described methods to other clinical domains, report types, concepts, modalities and anatomical regions.

We conclude by highlighting the need to facilitate comparability of studies and to review generative AI-based approaches. We therefore plan to develop a reporting framework for clinical application of NLP methods. This need is confirmed by Davidson et al. who also state that available guidelines are limited 14 ; journal-specific guidelines already exist 105 . Considering the periodical publication of larger, more capable generative models, transparent and verifiable reporting of all aspects described in this review is essential to compare and identify successful approaches. We furthermore suggest future research to focus on the optimization and standardization of annotation processes to develop few-shot prompts. Currently, the correlation between annotation quality, quantity and model performance is unknown. Last, we recommend the development and publication of standardized, multilingual datasets to foster external validation of models.

This scoping review was conducted according to the JBI Manual for evidence synthesis and adheres to the PRISMA extension for scoping reviews (PRISMA-ScR). Regarding methodological details, we refer to the published protocol for this review 72 . In this section, we give an overview on the applied methodology and explain the adaptations made to the protocol. The completed PRISMA-ScR checklist is provided in Supplementary Table 1 .

Search strategy

The search strategy comprised three steps: First, a preliminary search was conducted by searching two databases (Google Scholar and PubMed), using keywords related to this review’s research question. Based on the results, a list of relevant search and index terms was retrieved, which in turn served as a basis for the iterative development of the full search query.

During search query development, different combinations of terms and dimensions of the research topic were combined to build query combinations that were run on PubMed. The inclusion of only two dimensions, “radiology” and “information extraction”, showed the best balance regarding the quantity and quality of results and was therefore chosen as the final search query.

Second, a systematic search was carried out using the final version of the search query. The PubMed-based query was adapted to meet syntactical requirements of the other four databases, comprising IEEE Xplore, ACM Digital Library, Web of Science Core Collection and Embase. The systematic search was conducted on 01/08/2023, and included all sources of evidence (SOE) since database inception. No additional limits, restrictions, or filters were applied. The full query for each database as well as a completed PRISMA-S extension checklist are shown in Supplementary Table 2 and Supplementary Table 3 . Third, reference lists of included studies were manually checked for additional sources of evidence and included if fulfilling all inclusion criteria. No search updates were performed.

Inclusion criteria

Inclusion criteria were discussed among and agreed on by all three authors. No separation was made between exclusion and inclusion criteria; reports were included upon fulfillment of all the following six aspects:

C.01: The full-text SOE is retrievable.

C.02: The SOE was published after 31/12/2017.

C.03: The SOE is published in a peer-reviewed journal or conference proceeding.

C.04: The SOE describes original research, excluding reviews, comments, patents and white papers.

C.05: The SOE describes the application of NLP methods for the purpose of IE from free-text radiology reports.

C.06: The described approach is LLM-based (defined as deep learning models with more than one million parameters, trained on unlabeled text data).

Screening and data extraction

Record screening was performed by two authors (KD, DR), using the online platform Rayyan 106 . To improve alignment regarding inclusion criteria between reviewers, a first batch of 25 records was screened individually. Two conflicting decisions were discussed and clarified, leading to the consensus that BiLSTM-based architectures might also classify as LLMs and should therefore be included. To validate this change, a second batch of 25 records was screened and compared. Three conflicting decisions helped to clarify that, when an LLM-based architecture is not explicitly stated in the title or abstract, the record should still be marked as included to maximize the overall recall of relevant papers.

Upon clarification of the inclusion criteria, each remaining record (title, abstract) was screened twice. After completion of the screening process, conflicts (comprising differing decisions or records marked as “maybe”) were resolved by including all records that are marked at least once as “included”.

After screening, records were sought for full-text retrieval. Data extraction was performed by one author (DR). During the extraction phase, reports were ex post excluded when a violation of inclusion criteria became apparent from the full-text. Reference lists of included papers were screened for further reports to include. Changes to the published protocol for this review are documented in Supplementary Table 4 , including its description, reason, and date.

Data availability

The complete list of extracted documents for all queried databases as well as the completed data extraction table are available in the OSF repository, see https://doi.org/10.17605/OSF.IO/RWU5M .

Code availability

For data screening, the publicly available online platform rayyan.ai was used (free plan), see https://www.rayyan.ai .

Müskens, J. L. J. M., Kool, R. B., Van Dulmen, S. A. & Westert, G. P. Overuse of diagnostic testing in healthcare: a systematic review. BMJ Qual. Saf. 31 , 54–63 (2022).

Nobel, J. M., Van Geel, K. & Robben, S. G. F. Structured reporting in radiology: a systematic review to explore its potential. Eur. Radiol. 32 , 2837–2854 (2022).

Khurana, D., Koli, A., Khatter, K. & Singh, S. Natural language processing: state of the art, current trends and challenges. Multimed. Tools Appl. 82 , 3713–3744 (2023).

Jurafsky, D. & Martin, J. H. Speech and Language Processing. An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition (Pearson Education, 2024).

Birhane, A., Kasirzadeh, A., Leslie, D. & Wachter, S. Science in the age of large language models. Nat. Rev. Phys. 5 , 277–280 (2023).

Vaswani, A. et al. Attention is all you need. In Advances in Neural Information Processing Systems , Vol. 30 (Curran Associates, Inc., 2017).

Peters, M. E. et al. Deep contextualized word representations. arXiv:1802.05365 (2018).

Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Burstein, J., Doran, C. & Solorio, T. (eds.) In Proc. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , 4171–4186 (Association for Computational Linguistics, Minneapolis, Minnesota, 2019).

Brown, T. et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems , vol. 33, 1877–1901 (Curran Associates, Inc., 2020).

OpenAI et al. GPT-4 Technical Report. arXiv:2303.08774 (2023).

Shoeybi, M. et al. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. arXiv:1909.08053 (2020).

Pons, E., Braun, L. M. M., Hunink, M. G. M. & Kors, J. A. Natural language processing in radiology: a systematic review. Radiology 279 , 329–343 (2016).

Casey, A. et al. A systematic review of natural language processing applied to radiology reports. BMC Med. Inform. Decis. Mak. 21 , 179 (2021).

Davidson, E. M. et al. The reporting quality of natural language processing studies: systematic review of studies of radiology reports. BMC Med. Imaging 21 , 142 (2021).

Saha, A., Burns, L. & Kulkarni, A. M. A scoping review of natural language processing of radiology reports in breast cancer. Front. Oncol. 13 , 1160167 (2023).

Gholipour, M., Khajouei, R., Amiri, P., Hajesmaeel Gohari, S. & Ahmadian, L. Extracting cancer concepts from clinical notes using natural language processing: a systematic review. BMC Bioinform. 24 , 405 (2023).

Gorenstein, L., Konen, E., Green, M. & Klang, E. Bidirectional encoder representations from transformers in radiology: a systematic review of natural language processing applications. J. Am. Coll. Radiol. 21 , 914–941 (2024).

Wood, D. A. et al. Automated labelling using an attention model for radiology reports of MRI scans (ALARM). In Arbel, T. et al. (eds.) Proceedings of the Third Conference on Medical Imaging with Deep Learning , vol. 121 of Proceedings of Machine Learning Research , 811–826 (PMLR, 2020-07-06/2020-07-08).

Wood, D. A. et al. Deep learning to automate the labelling of head MRI datasets for computer vision applications. Eur. Radiol. 32 , 725–736 (2022).

Li, Z. & Ren, J. Fine-tuning ERNIE for chest abnormal imaging signs extraction. J. Biomed. Inform. 108 , 103492 (2020).

Lybarger, K., Damani, A., Gunn, M., Uzuner, O. Z. & Yetisgen, M. Extracting radiological findings with normalized anatomical information using a span-based BERT relation extraction model. AMIA Jt. Summits Transl. Sci. Proc. 2022 , 339–348 (2022).

Kuling, G., Curpen, B. & Martel, A. L. BI-RADS BERT and using section segmentation to understand radiology reports. J. Imaging 8 , 131 (2022).

Lau, W., Lybarger, K., Gunn, M. L. & Yetisgen, M. Event-based clinical finding extraction from radiology reports with pre-trained language model. J. Digit. Imaging 36 , 91–104 (2023).

Sugimoto, K. et al. End-to-end approach for structuring radiology reports. Stud. Health Technol. Inform. 270 , 203–207 (2020).

PubMed   Google Scholar  

Zhang, Y. et al. Using recurrent neural networks to extract high-quality information from lung cancer screening computerized tomography reports for inter-radiologist audit and feedback quality improvement. JCO Clin. Cancer Inform. 7 , e2200153 (2023).

Tejani, A. S. et al. Performance of multiple pretrained BERT models to automate and accelerate data annotation for large datasets. Radiol. Artif. Intell. 4 , e220007 (2022).

Zaman, S. et al. Automatic diagnosis labeling of cardiovascular MRI by using semisupervised natural language processing of text reports. Radiol. Artif. Intell. 4 , e210085 (2022).

Liu, H. et al. Use of BERT (bidirectional encoder representations from transformers)-based deep learning method for extracting evidences in chinese radiology reports: Development of a computer-aided liver cancer diagnosis framework. J. Med. Internet Res. 23 , e19689 (2021).

Jaiswal, A. et al. RadBERT-CL: factually-aware contrastive learning for radiology report classification. In Proc. Machine Learning for Health , 196–208 (PMLR, 2021).

Torres-Lopez, V. M. et al. Development and validation of a model to identify critical brain injuries using natural language processing of text computed tomography reports. JAMA Netw. Open 5 , e2227109 (2022).

Pérez-Díez, I., Pérez-Moraga, R., López-Cerdán, A., Salinas-Serrano, J. M. & la Iglesia-Vayá, M. De-identifying Spanish medical texts - named entity recognition applied to radiology reports. J. Biomed. Semant. 12 , 6 (2021).

Lau, W., Payne, T. H., Uzuner, O. & Yetisgen, M. Extraction and analysis of clinically important follow-up recommendations in a large radiology dataset. AMIA Summits Transl. Sci. Proc. 2020 , 335–344 (2020).

Santos, T. et al. A fusion NLP model for the inference of standardized thyroid nodule malignancy scores from radiology report text. Annu. Symp. Proc. AMIA Symp. 2021 , 1079–1088 (2021).

Fink, M. A. et al. Deep learning–based assessment of oncologic outcomes from natural language processing of structured radiology reports. Radiol. Artif. Intell. 4 , e220055 (2022).

Datta, S. et al. Understanding spatial language in radiology: representation framework, annotation, and spatial relation extraction from chest X-ray reports using deep learning. J. Biomed. Inform. 108 , 103473 (2020).

Datta, S. & Roberts, K. Spatial relation extraction from radiology reports using syntax-aware word representations. AMIA Jt. Summits Transl. Sci. Proc. 2020 , 116–125 (2020).

Datta, S. & Roberts, K. A Hybrid deep learning approach for spatial trigger extraction from radiology reports. In Proc. Third International Workshop on Spatial Language Understanding , 50–55 (Association for Computational Linguistics, Online, 2020).

Zhang, H. et al. A novel deep learning approach to extract Chinese clinical entities for lung cancer screening and staging. BMC Med. Inform. Decis. Mak. 21 , 214 (2021).

Hu, D. et al. Automatic extraction of lung cancer staging information from computed tomography reports: Deep learning approach. JMIR Med. Inform. 9 , e27955 (2021).

Datta, S., Khanpara, S., Riascos, R. F. & Roberts, K. Leveraging spatial information in radiology reports for ischemic stroke phenotyping. AMIA Jt. Summits Transl. Sci. Proc. 2021 , 170–179 (2021).

Dada, A. et al. Information extraction from weakly structured radiological reports with natural language queries. Eur. Radiol. 34 , 330–337 (2023).

Eisenhauer, E. et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur. J. Cancer 45 , 228–247 (2009).

Article   CAS   PubMed   Google Scholar  

Rosen, R. D. & Sapra, A. TNM Classification. In StatPearls (StatPearls Publishing, 2023).

University of California Berkeley. HIPAA PHI: definition of PHI and List of 18 Identifiers. https://cphs.berkeley.edu/hipaa/hipaa18.html# (2023).

Stanford NLP Group. Stanfordnlp/stanza. Stanford NLP (2024).

Sugimoto, K. et al. Extracting clinical terms from radiology reports with deep learning. J. Biomed. Inform. 116 , 103729 (2021).

US National Institutes of Health. NationalCancer Institute. NCI Thesaurus. https://ncit.nci.nih.gov/ncitbrowser/ .

Datta, S., Godfrey-Stovall, J. & Roberts, K. RadLex normalization in radiology reports. AMIA Annu. Symp. Proc. 2020 , 338–347 (2021).

Zhang, Z. et al. ERNIE: Enhanced Language Representation with Informative Entities In Proc. 57th Annual Meeting of the Association for Computational Linguistics , pages 1441–1451, Florence, Italy. Association for Computational Linguistics (2019).

Mikolov, T., Chen, K., Corrado, G. & Dean, J. Efficient Estimation of Word Representations in Vector Space 1301.3781 (2013).

Huang, X., Chen, H. & Yan, J. D. Study on structured method of Chinese MRI report of nasopharyngeal carcinoma. BMC Med. Inform. Decis. Mak. 21 , 203 (2021).

DocCheck. DocCheck Flexicon. https://flexikon.doccheck.com/de/Hauptseite (2024).

Jantscher, M. et al. Information extraction from German radiological reports for general clinical text and language understanding. Sci. Rep. 13 , 2353 (2023).

Article   CAS   PubMed   PubMed Central   Google Scholar  

Zhang, X. et al. Extracting comprehensive clinical information for breast cancer using deep learning methods. Int. J. Med. Inform. 132 , 103985 (2019).

Johnson, A. E. W. et al. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data 6 , 317 (2019).

Moody, G. B. & Mark, R. G. The MIMIC Database (1992).

Datta, S. & Roberts, K. Weakly supervised spatial relation extraction from radiology reports. JAMIA Open 6 , ooad027 (2023).

Johnson, A. E. W. et al. MIMIC-III, a freely accessible critical care database. Sci. Data 3 , 160035 (2016).

Datta, S. & Roberts, K. Fine-grained spatial information extraction in radiology as two-turn question answering. Int. J. Med. Inform. 158 , 104628 (2022).

Datta, S. et al. Rad-SpatialNet: a frame-based resource for fine-grained spatial relations in radiology reports. In Calzolari, N. et al . (eds.) Proc. Twelfth Language Resources and Evaluation Conference , 2251–2260 (European Language Resources Association, Marseille, France, 2020).

Demner-Fushman, D. et al. Preparing a collection of radiology examinations for distribution and retrieval. J. Am. Med. Inform. Assoc. 23 , 304–310 (2016).

Mithun, S. et al. Clinical concept-based radiology reports classification pipeline for lung carcinoma. J. Digit. Imaging 36 , 812–826 (2023).

Bressem, K. K. et al. Highly accurate classification of chest radiographic reports using a deep learning natural language model pre-trained on 3.8 million text reports. Bioinformatics 36 , 5255–5261 (2021).

Singh, V. et al. Impact of train/test sample regimen on performance estimate stability of machine learning in cardiovascular imaging. Sci. Rep. 11 , 14490 (2021).

Demler, O. V., Pencina, M. J. & D’Agostino, R. B. Misuse of DeLong test to compare AUCs for nested models. Stat. Med. 31 , 2577–2587 (2012).

Radford, A., Narasimhan, K., Salimans, T. & Sutskever, I. Improving language understanding by generative pre-training (2018).

Thirunavukarasu, A. J. et al. Large language models in medicine. Nat. Med. 29 , 1930–1940 (2023).

Farquhar, S., Kossen, J., Kuhn, L. & Gal, Y. Detecting hallucinations in large language models using semantic entropy. Nature 630 , 625–630 (2024).

Zhang, Y. & Xu, Z. BERT for question answering on SQuAD 2.0 (2019).

OECD. Diagnostic technologies (2023).

Viry, A. et al. Annual exposure of the Swiss population from medical imaging in 2018. Radiat. Prot. Dosim. 195 , 289–295 (2021).

Reichenpfader, D., Müller, H. & Denecke, K. Large language model-based information extraction from free-text radiology reports: a scoping review protocol. BMJ Open 13 , e076865 (2023).

Shanahan, M., McDonell, K. & Reynolds, L. Role play with large language models. Nature 623 , 493–498 (2023).

Liang, S. et al. Fine-tuning BERT Models for Summarizing German Radiology Findings. In Naumann, T., Bethard, S., Roberts, K. & Rumshisky, A. (eds.) Proc. 4th Clinical Natural Language Processing Workshop , 30–40 (Association for Computational Linguistics, Seattle, WA, 2022).

Adams, L. C. et al. Leveraging GPT-4 for post hoc transformation of free-text radiology reports into structured reporting: a multilingual feasibility study. Radiology 307 , e230725 (2023).

Nowak, S. et al. Transformer-based structuring of free-text radiology report databases. Eur. Radiol. 33 , 4228–4236 (2023).

Košprdić, M., Prodanović, N., Ljajić, A., Bašaragin, B. & Milošević, N. From zero to hero: harnessing transformers for biomedical named entity recognition in zero- and few-shot contexts 2305.04928 (2023).

Smit, A. et al. Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT. In Webber, B., Cohn, T., He, Y. & Liu, Y. (eds.) Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP) , 1500–1519 (Association for Computational Linguistics, Online, 2020).

Agrawal, M., Hegselmann, S., Lang, H., Kim, Y. & Sontag, D. Large language models are few-shot clinical information extractors. In Goldberg, Y., Kozareva, Z. & Zhang, Y. (eds.) Proc. Conference on Empirical Methods in Natural Language Processing , 1998–2022 (Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 2022).

Kartchner, D., Ramalingam, S., Al-Hussaini, I., Kronick, O. & Mitchell, C. Zero-shot information extraction for clinical meta-analysis using large language models. In Demner-fushman, D., Ananiadou, S. & Cohen, K. (eds.) Proc. 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks , 396–405 (Association for Computational Linguistics, Toronto, Canada, 2023).

Jupin-Delevaux, É. et al. BERT-based natural language processing analysis of French CT reports: application to the measurement of the positivity rate for pulmonary embolism. Res. Diagn. Interv. Imaging 6 , 100027 (2023).

Rohanian, O., Nouriborji, M., Kouchaki, S. & Clifton, D. A. On the effectiveness of compact biomedical transformers. Bioinformatics 39 , btad103 (2023).

Gemini Team, Google. Gemini: a family of highly capable multimodal models. https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf (2024).

Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer 1910.10683 (2023).

Llama-3. Meta (2024).

Jiang, A. Q. et al. Mixtral of experts 2401.04088 (2024).

Chen, Z. et al. MEDITRON-70B: scaling medical pretraining for large language models 2311.16079 (2023).

Singhal, K. et al. Towards expert-level medical question answering with large language models 2305.09617 (2023).

Saab, K. et al. Capabilities of Gemini models in medicine 2404.18416 (2024).

Van Veen, D. et al. Adapted large language models can outperform medical experts in clinical text summarization. Nat. Med. 30 , 1134–1142 (2024).

Jiang, L. Y. et al. Health system-scale language models are all-purpose prediction engines. Nature 619 , 357–362 (2023).

Singhal, K. et al. Large language models encode clinical knowledge. Nature 620 , 172–180 (2023).

Dagdelen, J. et al. Structured information extraction from scientific text with large language models. Nat. Commun. 15 , 1418 (2024).

Zheng, L. et al. Judging LLM-as-a-judge with MT-bench and chatbot arena. Adv. Neural Inf. Process Syst. 36 , 46595–46623 (2023).

Google Scholar  

Mukherjee, P., Hou, B., Lanfredi, R. B. & Summers, R. M. Feasibility of using the privacy-preserving large language model Vicuna for labeling radiology reports. Radiology 309 , e231147 (2023).

Bressem, K. K. et al. MEDBERT.de: a comprehensive German BERT model for the medical domain. Expert Syst. Appl. 237 , 121598 (2024).

Hu, D., Liu, B., Zhu, X., Lu, X. & Wu, N. Zero-shot information extraction from radiological reports using ChatGPT. Int. J. Med. Inform. 183 , 105321 (2024).

Hu, D., Li, S., Zhang, H., Wu, N. & Lu, X. Using natural language processing and machine learning to preoperatively predict lymph node metastasis for non–small cell lung cancer with electronic medical records: development and validation study. JMIR Med. Inform. 10 , e35475 (2022).

Mallio, C. A., Sertorio, A. C., Bernetti, C. & Beomonte Zobel, B. Large language models for structured reporting in radiology: performance of GPT-4, ChatGPT-3.5, Perplexity and Bing. La Radiol. Med. 128 , 808–812 (2023).

Zhao, H. et al. Explainability for large language models: a survey. ACM Trans. Intell. Syst. Technol. 15 , 1–38 (2024).

Article   CAS   Google Scholar  

Yang, H. et al. Unveiling the generalization power of fine-tuned large language models. In Proc. of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (eds Duh, K., Gomez, H. & Bethard, S.) 884–899 (Association for Computational Linguistics, Mexico City, Mexico, 2024). https://doi.org/10.18653/v1/2024.naacl-long.51 .

Abdin, M. et al. Phi-3 technical report: a highly capable language model locally on your phone 2404.14219 (2024).

Gilbert, S., Kather, J. N. & Hogan, A. Augmented non-hallucinating large language models as medical information curators. npj Digital Med. 7 , 1–5 (2024).

He, P., Liu, X., Gao, J. & Chen, W. DeBERTa: decoding-enhanced BERT with disentangled attention 2006.03654 (2021).

Kakarmath, S. et al. Best practices for authors of healthcare-related artificial intelligence manuscripts. NPJ Digit. Med. 3 , 134 (2020).

Rayyan - AI Powered Tool for Systematic Literature Reviews (2021).

Si, Y., Wang, J., Xu, H. & Roberts, K. Enhancing clinical concept extraction with contextual embeddings. J. Am. Med. Inform. Assoc. 26 , 1297–1304 (2019).

Liu, Y. et al. RoBERTa: a robustly optimized BERT pretraining approach 1907.11692 (2019).

Lee, J. et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36 , 1234–1240 (2020).

Alsentzer, E. et al. Publicly Available Clinical BERT Embeddings. In Rumshisky, A., Roberts, K., Bethard, S. & Naumann, T. (eds.) Proc. 2nd Clinical Natural Language Processing Workshop , 72–78 (Association for Computational Linguistics, Minneapolis, Minnesota, USA, 2019).

Deepset. German BERT. https://huggingface.co/bert-base-german-cased (2019).

Gu, Y. et al. Domain-specific language model pretraining for biomedical natural language processing. ACM Trans. Comput. Healthc. 3 , 2:1–2:23 (2021).

Sanh, V., Debut, L., Chaumond, J. & Wolf, T. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter 1910.01108 (2020).

Cui, Y., Che, W., Liu, T., Qin, B. & Yang, Z. Pre-training with whole word masking for Chinese BERT. IEEE/ACM Trans. Audio, Speech, Lang. Process. 29 , 3504–3514 (2021).

Peng, Y., Yan, S. & Lu, Z. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proc. of the 18th BioNLP Workshop and Shared Task (eds Demner-Fushman, D., Cohen, K. B., Ananiadou, S. & Tsujii, J.) 58–65 (Association for Computational Linguistics, Florence, Italy, 2019). https://doi.org/10.18653/v1/W19-5006 .

Chan, B., Schweter, S. & Möller, T. German’s next language model. In Proc. of the 28th International Conference on Computational Linguistics (eds Scott, D., Bel, N. & Zong, C.) 6788–6796 (International Committee on Computational Linguistics, Barcelona, Spain (Online), 2020). https://doi.org/10.18653/v1/2020.coling-main.598 .

Shrestha, M. Development of a Language Model for the Medical Domain . Ph.D. thesis (Rhine-Waal University of Applied Sciences, 2021).

The MultiBERTs: BERT reproductions for robustness analysis. In Sellam, T. et al. (eds.) ICLR 2022 (2022).

Wu, S. & He, Y. Enriching pre-trained language model with entity information for relation classification. In Proc. of the 28th ACM International Conference on Information and Knowledge Management , 2361–2364 (Association for Computing Machinery, New York, NY, USA, 2019). https://doi.org/10.1145/3357384.3358119 .

Beltagy, I., Lo, K. & Cohan, A. SciBERT: a pretrained language model for scientific text. In Proc. Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , (eds Inui, K., Jiang, J., Ng, V. & Wan, X.), 3615–3620 (Association for Computational Linguistics, Hong Kong, China, 2019).

Eberts, M. & Ulges, A. Span-based joint entity and relation extraction with transformer pre-training. In ECAI 2020 , 2006–2013 (IOS Press, 2020).

Yang, Z. et al. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems vol. 32 (Curran Associates, Inc., 2019).

Download references

Acknowledgements

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. We thank Cornelia Zelger for her support during the search query definition process.

Author information

Authors and affiliations

Institute for Patient-Centered Digital Health, Bern University of Applied Sciences, Biel/Bienne, Switzerland

Daniel Reichenpfader & Kerstin Denecke

Faculty of Medicine, University of Geneva, Geneva, Switzerland

Daniel Reichenpfader

Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland

Henning Müller

Informatics Institute, HES-SO Valais-Wallis, Sierre, Switzerland

Henning Müller

Contributions

D.R. conceptualized the study, defined the methodology (including the search strategy), performed the database searches and managed the screening process. D.R. also performed the data extraction and authored the original draft. K.D. reviewed and edited the manuscript and participated in the screening process. H.M. provided supervision and contributed to reviewing and editing the manuscript.

Corresponding author

Correspondence to Daniel Reichenpfader.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplemental material

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Reichenpfader, D., Müller, H. & Denecke, K. A scoping review of large language model based approaches for information extraction from radiology reports. npj Digit. Med. 7 , 222 (2024). https://doi.org/10.1038/s41746-024-01219-0


Received: 21 February 2024

Accepted: 09 August 2024

Published: 24 August 2024

DOI: https://doi.org/10.1038/s41746-024-01219-0



