Because assignment to the treatment and control groups is fixed in advance, the experiment can be repeated at any time by a different experimenter with identical results;
high authenticity due to the natural setting.
In a quasi-experiment, the experimenter can only define the independent variable (e.g., the participants' age in the example) before the experiment begins. They cannot actively intervene in the experiment or in the allocation of participants, because both are fixed in advance.
Because randomization is missing, confounding variables arise that impair the experiment's internal validity. Internal validity refers to the quality of your research design and asks whether an experiment actually measures what it is supposed to measure.
Confounding variables are variables that, alongside the independent variable, also influence the dependent variable. Besides age, for example, schooling or general intelligence can affect the ability to concentrate.
Internal validity must be viewed critically for the following reasons:
In a quasi-experiment, there is no random allocation to the experimental groups. Treatment and control groups are predetermined; allocation to a group is based on certain attributed characteristics (e.g., male or female).
The quasi-experiment has high authenticity. The criterion of objectivity is fully met: the same results can be obtained with any other experimenter.
The internal validity of the quasi-experiment is problematic, because the experimenter has no influence on possible confounding variables. See our article on the quasi-experiment for more details on internal validity.
Genau, L. (2022, November 11). Was du über das Quasi-Experiment wissen musst – mit Beispiel. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.de/methodik/quasi-experiment/
Statistics By Jim
By Jim Frost
A quasi experimental design is a method for identifying causal relationships that does not randomly assign participants to the experimental groups. Instead, researchers use a non-random process. For example, they might use an eligibility cutoff score or preexisting groups to determine who receives the treatment.
Quasi-experimental research is a design that closely resembles experimental research but is different. The term “quasi” means “resembling,” so you can think of it as a cousin to actual experiments. In these studies, researchers can manipulate an independent variable — that is, they change one factor to see what effect it has. However, unlike true experimental research, participants are not randomly assigned to different groups.
Researchers typically use a quasi-experimental design because they can't randomize due to practical or ethical concerns.
Quasi-experimental designs also come in handy when researchers want to study the effects of naturally occurring events, like policy changes or environmental shifts, where they can’t control who is exposed to the treatment.
Quasi-experimental designs occupy a unique position in the spectrum of research methodologies, sitting between observational studies and true experiments. This middle ground offers a blend of both worlds, addressing some limitations of purely observational studies while navigating the constraints often accompanying true experiments.
A significant advantage of quasi-experimental research over purely observational and correlational studies is that it addresses the issue of directionality: determining which variable is the cause and which is the effect. In quasi-experiments, an intervention typically occurs during the investigation, and researchers record outcomes before and after it, increasing confidence that the intervention causes the observed changes.
However, it’s crucial to recognize its limitations as well. Controlling confounding variables is a larger concern for a quasi-experimental design than a true experiment because it lacks random assignment.
In sum, quasi-experimental designs offer a valuable research approach when random assignment is not feasible, providing a more structured and controlled framework than observational studies while acknowledging and attempting to address potential confounders.
Quasi-experimental studies use various methods, depending on the scenario.
This design uses naturally occurring events or changes to create the treatment and control groups. Researchers compare outcomes between those whom the event affected and those it did not affect. Analysts use statistical controls to account for confounders that the researchers must also measure.
Natural experiments are related to observational studies, but they allow for a clearer causality inference because the external event or policy change provides both a form of quasi-random group assignment and a definite start date for the intervention.
For example, in a natural experiment utilizing a quasi-experimental design, researchers study the impact of a significant economic policy change on small business growth. The policy is implemented in one state but not in neighboring states. This scenario creates an unplanned experimental setup, where the state with the new policy serves as the treatment group, and the neighboring states act as the control group.
Researchers are primarily interested in small business growth rates but need to record various confounders that can impact growth rates. Hence, they record state economic indicators, investment levels, and employment figures. By recording these metrics across the states, they can include them in the model as covariates and control them statistically. This method allows researchers to estimate differences in small business growth due to the policy itself, separate from the various confounders.
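One common way to analyze a natural experiment like this is a difference-in-differences comparison, which uses the control states' trend to estimate what would have happened without the policy. The sketch below uses made-up growth rates purely for illustration:

```python
# Minimal difference-in-differences sketch with hypothetical growth rates.
# The policy state is the treatment group; neighboring states are controls.
treat_pre, treat_post = 2.1, 4.8      # small-business growth (%) before/after policy
control_pre, control_post = 2.0, 2.9  # same periods, neighboring states

# The control-state trend estimates the counterfactual: what the treatment
# state's growth would have done had the policy never been enacted.
counterfactual_change = control_post - control_pre
observed_change = treat_post - treat_pre

policy_effect = observed_change - counterfactual_change
print(f"Estimated policy effect: {policy_effect:.1f} percentage points")
```

A full analysis would put the measured covariates (economic indicators, investment levels, employment figures) into a regression model rather than this two-by-two comparison, but the core subtraction is the same.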
This method involves matching existing groups that are similar but not identical. Researchers attempt to find groups that are as equivalent as possible, particularly for factors likely to affect the outcome.
For instance, researchers use a nonequivalent groups quasi-experimental design to evaluate the effectiveness of a new teaching method in improving students’ mathematics performance. A school district considering the teaching method is planning the study. Students are already divided into schools, preventing random assignment.
The researchers matched two schools with similar demographics, baseline academic performance, and resources. The school using the traditional methodology is the control, while the other uses the new approach. Researchers are evaluating differences in educational outcomes between the two methods.
They perform a pretest to identify differences between the schools that might affect the outcome and include them as covariates to control for confounding. They also record outcomes before and after the intervention to have a larger context for the changes they observe.
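The pretest-as-covariate idea can be sketched numerically. The ANCOVA-style adjustment below shifts each school's posttest mean to a common pretest level; the class scores are hypothetical:

```python
# Hypothetical math scores for two matched schools (ANCOVA-style sketch).
school_a = {"pre": [61, 64, 58, 66, 63], "post": [74, 78, 70, 80, 76]}  # new method
school_b = {"pre": [55, 58, 52, 60, 57], "post": [66, 70, 62, 72, 68]}  # traditional

def mean(xs):
    return sum(xs) / len(xs)

def within(data):
    # Within-group cross-products of pretest and posttest deviations.
    mp, mq = mean(data["pre"]), mean(data["post"])
    sxy = sum((x - mp) * (y - mq) for x, y in zip(data["pre"], data["post"]))
    sxx = sum((x - mp) ** 2 for x in data["pre"])
    return sxy, sxx

sxy_a, sxx_a = within(school_a)
sxy_b, sxx_b = within(school_b)
slope = (sxy_a + sxy_b) / (sxx_a + sxx_b)  # pooled slope of post on pre

grand_pre = mean(school_a["pre"] + school_b["pre"])
# Adjust each school's posttest mean to the common pretest level.
adj_a = mean(school_a["post"]) - slope * (mean(school_a["pre"]) - grand_pre)
adj_b = mean(school_b["post"]) - slope * (mean(school_b["pre"]) - grand_pre)
print(f"adjusted difference: {adj_a - adj_b:.2f}")
```

With these invented numbers, a raw 8-point gap between the schools shrinks to about half a point once pretest differences are accounted for, which is exactly why the pretest covariate matters.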
This process assigns subjects to a treatment or control group based on a predetermined cutoff point (e.g., a test score). The analysis primarily focuses on participants near the cutoff point, as they are likely similar except for the treatment received. By comparing participants just above and below the cutoff, the design controls for confounders that vary smoothly around the cutoff.
For example, in a regression discontinuity quasi-experimental design focusing on a new medical treatment for depression, researchers use depression scores as the cutoff point. Individuals with depression scores just above a certain threshold are assigned to receive the latest treatment, while those just below the threshold do not receive it. This method creates two closely matched groups: one that barely qualifies for treatment and one that barely misses out.
By comparing the mental health outcomes of these two groups over time, researchers can assess the effectiveness of the new treatment. The assumption is that the only significant difference between the groups is whether they received the treatment, thereby isolating its impact on depression outcomes.
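The near-cutoff comparison can be sketched as follows. The depression and outcome scores are invented, and a real analysis would fit regression lines on each side of the cutoff rather than comparing raw means:

```python
# Sketch of a sharp regression-discontinuity comparison (hypothetical data).
# Each tuple: (depression score at screening, symptom score 6 months later).
# Patients scoring at or above 60 received the new treatment.
patients = [
    (55, 52), (57, 54), (58, 51), (59, 53),  # just below cutoff: untreated
    (60, 45), (61, 44), (62, 47), (64, 46),  # at/above cutoff: treated
]
CUTOFF, BANDWIDTH = 60, 5  # compare only patients within 5 points of the cutoff

near = [(s, y) for s, y in patients if abs(s - CUTOFF) < BANDWIDTH]
treated = [y for s, y in near if s >= CUTOFF]
untreated = [y for s, y in near if s < CUTOFF]

# Negative effect = lower symptom scores among the treated group.
effect = sum(treated) / len(treated) - sum(untreated) / len(untreated)
print(f"Estimated treatment effect near the cutoff: {effect:.1f} points")
```

Restricting the comparison to a narrow bandwidth is what justifies treating the two groups as comparable: a patient scoring 59 and one scoring 60 are assumed to differ only in whether they crossed the threshold.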
Accounting for confounding variables is a challenging but essential task for a quasi-experimental design.
In a true experiment, the random assignment process equalizes confounders across the groups to nullify their overall effect. It’s the gold standard because it works on all confounders, known and unknown.
Unfortunately, the lack of random assignment can allow differences between the groups to exist before the intervention. These confounding factors might ultimately explain the results rather than the intervention.
Consequently, researchers must use other methods to equalize the groups roughly using matching and cutoff values or statistically adjust for preexisting differences they measure to reduce the impact of confounders.
A key strength of quasi-experiments is their frequent use of “pre-post testing.” This approach involves measuring participants before the intervention to check for preexisting differences between groups that could affect the study’s outcome. By identifying these variables early and including them as covariates, researchers can more effectively control potential confounders in their statistical analysis.
Additionally, researchers frequently track outcomes before and after the intervention to better understand the context for changes they observe.
Statisticians consider these methods to be less effective than randomization. Hence, quasi-experiments fall somewhere in the middle when it comes to internal validity, or how well the study can identify causal relationships versus mere correlation. They’re more conclusive than correlational studies but not as solid as true experiments.
In conclusion, quasi-experimental designs offer researchers a versatile and practical approach when random assignment is not feasible. This methodology bridges the gap between controlled experiments and observational studies, providing a valuable tool for investigating cause-and-effect relationships in real-world settings. Researchers can address ethical and logistical constraints by understanding and leveraging the different types of quasi-experimental designs while still obtaining insightful and meaningful results.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings. Boston, MA: Houghton Mifflin.
The prefix quasi means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.
Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.
Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A nonequivalent groups design , then, is a between-subjects design in which participants have not been randomly assigned to conditions.
Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students and a control group consisting of another class of third-grade students. This would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.
Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.
In a pretest-posttest design , the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.
If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of history . Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of maturation . Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.
Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean . This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect. A closely related concept—and an extremely important one in psychological research—is spontaneous remission . This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001). 
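Regression to the mean is easy to demonstrate by simulation. In the sketch below, each student's test score is a stable true ability plus independent test-day noise; selecting the lowest scorers and retesting them, with no intervention at all, shows their average rising:

```python
import random

random.seed(42)
N = 1000
# Each student has a stable true ability; each test adds independent noise.
true_ability = [random.gauss(100, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in true_ability]
test2 = [a + random.gauss(0, 10) for a in true_ability]

# Select the 100 students who scored worst on test 1 -- no training given.
lowest = sorted(range(N), key=lambda i: test1[i])[:100]
mean1 = sum(test1[i] for i in lowest) / 100
mean2 = sum(test2[i] for i in lowest) / 100
print(f"selected group, test 1: {mean1:.1f}")
print(f"selected group, test 2: {mean2:.1f}")  # noticeably higher, no treatment
```

The improvement comes entirely from the noise component averaging out on the second test, not from any treatment, which is exactly the trap a pretest-posttest design falls into when participants are selected for extreme scores.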
Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952). But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate without receiving psychotherapy. This suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here:
http://psychclassics.yorku.ca/Eysenck/psychotherapy.htm
Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980). They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.
In a classic 1952 article, researcher Hans Eysenck pointed out the shortcomings of the simple pretest-posttest design for evaluating the effectiveness of psychotherapy.
A variant of the pretest-posttest design is the interrupted time-series design. A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979). Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.
Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.
Figure 7.5 A Hypothetical Interrupted Time-Series Design
The top panel shows data that suggest that the treatment caused a reduction in absences. The bottom panel shows data that suggest that it did not.
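The advantage of multiple measurements can be sketched with the bottom-panel scenario, in which the treatment does nothing. The weekly absence counts below are invented to mimic that panel:

```python
# Hypothetical weekly absence counts (bottom-panel scenario):
# attendance-taking begins after week 7, but absences just fluctuate.
absences = [8, 5, 9, 6, 8, 7, 9,   # weeks 1-7 (before treatment)
            5, 8, 6, 9, 7, 6, 8]   # weeks 8-14 (after treatment)

pre, post = absences[:7], absences[7:]

# A single pre/post comparison (week 7 vs. week 8) looks like a big drop...
print("week 7 vs week 8 drop:", pre[-1] - post[0])

# ...but averaging the full series reveals ordinary week-to-week variation.
print("mean before:", sum(pre) / len(pre))
print("mean after:", sum(post) / len(post))
```

The lone week-7-versus-week-8 comparison suggests a drop of 4 absences, while the pre- and post-treatment means are nearly identical, which is the point of collecting many measurements on each side of the interruption.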
A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does not receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve more than participants who do not receive the treatment.
Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.
Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi-experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.
Discussion: Imagine that a group of obese children is recruited for a study in which their weight is measured, then they participate for 3 months in a program that encourages them to be more active, and finally their weight is measured again. Explain how each of the following might affect the results:
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin.
Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16 , 319–324.
Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66, 139–146.
Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy . Baltimore, MD: Johns Hopkins University Press.
Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Much like a true experiment, quasi-experimental research tries to demonstrate a cause-and-effect link between an independent and a dependent variable. Unlike a true experiment, however, a quasi-experiment does not rely on random assignment: subjects are sorted into groups based on non-random criteria.
“Quasi” means “resembling.” Although the independent variable is manipulated, individuals are not randomly allocated to conditions or orders of conditions. As a result, quasi-experimental research is research that appears to be experimental but is not.
The directionality problem is avoided in quasi-experimental research because the independent variable is manipulated before the dependent variable is measured. However, because individuals are not randomly assigned, there are likely to be additional differences across conditions.
As a result, in terms of internal validity, quasi-experiments fall somewhere between correlational research and true experiments.
The key component of a true experiment is randomly assigned groups: each person has an equal chance of being placed in the experimental group (which receives the manipulation) or the control group (which does not).
Simply put, a quasi-experiment is not a true experiment, because it lacks randomly assigned groups. Why are randomly assigned groups so crucial, given that they are the only distinction between quasi-experimental and true experimental research?
Let’s use an example. Suppose we want to discover how a new psychological therapy affects depressed patients. In a true experiment, you would randomly split the psych ward in half, with one half receiving the new psychotherapy and the other half receiving the standard depression treatment.
The physicians would then compare the outcomes of the new treatment with those of the standard treatment to see whether it is more effective. The doctors, however, are unlikely to agree to this true experiment, since they believe it is unethical to give one group the new treatment while withholding it from the other.
A quasi-experimental study is useful in this case. Instead of allocating patients at random, you identify pre-existing groups within the hospital: some therapists are eager to try the new treatment, while others prefer to stick to the standard approach.
These pre-existing groups can be used to compare the symptom development of individuals who received the novel therapy with those who received the normal course of treatment, even though the groups weren’t chosen at random.
If other extraneous variables can be ruled out or accounted for, you can be reasonably confident that any substantial differences between the groups are attributable to the treatment.
As mentioned before, quasi-experimental research entails manipulating an independent variable without randomly assigning people to conditions or sequences of conditions. Nonequivalent groups designs, pretest-posttest designs, and regression discontinuity designs are a few of the essential types.
Quasi-experimental research designs are a type of research design that is similar to experimental designs but doesn’t give full control over the independent variable(s) like true experimental designs do.
In a quasi-experimental design, the researcher manipulates or observes an independent variable, but participants are not assigned to groups at random. Instead, participants are grouped based on characteristics they already have, such as age, gender, or prior exposure to a certain stimulus.
Because assignment is not random, it is harder to draw conclusions about cause and effect than in a true experiment. However, quasi-experimental designs are still useful when randomization is not possible or ethical.
The true experimental design may be impossible to accomplish or just too expensive, especially for researchers with few resources. Quasi-experimental designs enable you to investigate an issue by utilizing data that has already been paid for or gathered by others (often the government).
Because quasi-experiments often use real-world data, they tend to have higher external validity than most true experiments; and because they allow better control of confounding variables than other non-experimental studies, they have higher internal validity than other non-experimental research (though lower than true experiments).
Quasi-experimental research is a quantitative research method. It involves numerical data collection and statistical analysis. Quasi-experimental research compares groups with different circumstances or treatments to find cause-and-effect links.
It draws statistical conclusions from quantitative data. Qualitative data can enhance quasi-experimental research by revealing participants’ experiences and opinions, but quantitative data is the method’s foundation.
There are many different sorts of quasi-experimental designs. Three of the most popular are described below: nonequivalent groups designs, regression discontinuity designs, and natural experiments.
However, because they couldn’t afford to offer the program to everyone who qualified, they used a random lottery to distribute slots.
Researchers were able to investigate the program’s impact by using enrollees as the treatment group and those who were eligible but did not win the lottery as the comparison group.
QuestionPro can be a useful tool in quasi-experimental research because it includes features that can assist you in designing and analyzing your research study. Here are some ways in which QuestionPro can help in quasi-experimental research:
Randomize participants, collect data over time, analyze data, and collaborate with your team.
With QuestionPro, you have access to a mature market research platform and tool that helps you collect and analyze the insights that matter most. With InsightsHub, the unified hub for data management, you can organize, explore, search, and discover your research data in one consolidated repository.
Appinio Research · 19.12.2023 · 37min read
Ever wondered how researchers uncover cause-and-effect relationships in the real world, where controlled experiments are often elusive? Quasi-experimental design holds the key. In this guide, we'll unravel the intricacies of quasi-experimental design, shedding light on its definition, purpose, and applications across various domains. Whether you're a student, a professional, or simply curious about the methods behind meaningful research, join us as we delve into the world of quasi-experimental design, making complex concepts simple and embarking on a journey of knowledge and discovery.
Quasi-experimental design is a research methodology used to study the effects of independent variables on dependent variables when full experimental control is not possible or ethical. It falls between controlled experiments, where variables are tightly controlled, and purely observational studies, where researchers have little control over variables. Quasi-experimental design mimics some aspects of experimental research but lacks randomization.
The primary purpose of quasi-experimental design is to investigate cause-and-effect relationships between variables in real-world settings. Researchers use this approach to answer research questions, test hypotheses, and explore the impact of interventions or treatments when they cannot employ traditional experimental methods. Quasi-experimental studies aim to maximize internal validity and make meaningful inferences while acknowledging practical constraints and ethical considerations.
It's essential to understand the distinctions between Quasi-Experimental and Experimental Design to appreciate the unique characteristics of each approach:
A quasi-experimental design is particularly valuable in several situations:
Quasi-experimental design plays a vital role in research for several reasons:
By embracing the challenges and opportunities of quasi-experimental design, researchers can contribute valuable insights to their respective fields and drive positive changes in the real world.
In quasi-experimental design, it's essential to grasp the fundamental concepts underpinning this research methodology. Let's explore these key concepts in detail.
The independent variable (IV) is the factor you aim to study or manipulate in your research. Unlike controlled experiments, where you can directly manipulate the IV, quasi-experimental design often deals with naturally occurring variables. For example, if you're investigating the impact of a new teaching method on student performance, the teaching method is your independent variable.
The dependent variable (DV) is the outcome or response you measure to assess the effects of changes in the independent variable. Continuing with the teaching method example, the dependent variable would be the students' academic performance, typically measured using test scores, grades, or other relevant metrics.
While quasi-experimental design lacks the luxury of randomly assigning participants to control and experimental groups, you can still establish comparison groups to make meaningful inferences. Control groups consist of individuals who do not receive the treatment, while comparison groups are exposed to different levels or variations of the treatment. These groups help researchers gauge the effect of the independent variable.
In quasi-experimental design, it's common practice to collect data both before and after implementing the independent variable. The initial data (pre-test) serves as a baseline, allowing you to measure changes over time (post-test). This approach helps assess the impact of the independent variable more accurately. For instance, if you're studying the effectiveness of a new drug, you'd measure patients' health before administering the drug (pre-test) and afterward (post-test).
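The core of a pre-test/post-test analysis is the set of paired differences. A minimal Python sketch, using invented scores for eight hypothetical participants:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical pre-test and post-test scores for the same eight participants
pre = [62, 70, 55, 68, 74, 60, 66, 71]
post = [68, 75, 58, 72, 80, 63, 70, 76]

# Paired differences: post minus pre for each participant
diffs = [b - a for a, b in zip(pre, post)]
mean_change = mean(diffs)

# Paired t-statistic: mean difference divided by its standard error
t_stat = mean_change / (stdev(diffs) / sqrt(len(diffs)))

print(round(mean_change, 2), round(t_stat, 2))
```

In practice you would also compute a p-value for the t-statistic and check the test's assumptions, but the paired-difference idea is the same.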
Internal validity is crucial for establishing a cause-and-effect relationship between the independent and dependent variables. However, in a quasi-experimental design, several threats can compromise internal validity. These threats include:
Understanding these threats is essential for designing and conducting Quasi-Experimental studies that yield valid and reliable results.
In traditional experimental designs, randomization is a powerful tool for ensuring that groups are equivalent at the outset of a study. However, quasi-experimental design often involves non-randomization due to the nature of the research. This means that participants are not randomly assigned to treatment and control groups. Instead, researchers must employ various techniques to minimize biases and ensure that the groups are as similar as possible.
For example, if you are conducting a study on the effects of a new teaching method in a real classroom setting, you cannot randomly assign students to the treatment and control groups. Instead, you might use statistical methods to match students based on relevant characteristics such as prior academic performance or socioeconomic status. This matching process helps control for potential confounding variables, increasing the validity of your study.
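A rough sketch of that matching step, using nearest-neighbor matching on two invented covariates (prior GPA and a socioeconomic index); all IDs and numbers are made up:

```python
# Hypothetical records: (student_id, prior_gpa, ses_index)
treatment = [(1, 3.2, 0.7), (2, 2.8, 0.4), (3, 3.6, 0.9)]
pool = [(10, 3.1, 0.6), (11, 2.0, 0.2), (12, 3.5, 0.8),
        (13, 2.9, 0.5), (14, 3.9, 1.0)]

def distance(a, b):
    # Euclidean distance over the two matching covariates
    return ((a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2) ** 0.5

matched = {}
available = list(pool)
for t in treatment:
    best = min(available, key=lambda c: distance(t, c))
    matched[t[0]] = best[0]
    available.remove(best)  # match without replacement

print(matched)
```

Real studies would standardize the covariates first and often match on propensity scores instead; this only illustrates the idea of building a comparable control group.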
In quasi-experimental design, researchers employ various approaches to investigate causal relationships and study the effects of independent variables when complete experimental control is challenging. Let's explore these types of quasi-experimental designs.
The One-Group Posttest-Only Design is one of the simplest forms of quasi-experimental design. In this design, a single group is exposed to the independent variable, and data is collected only after the intervention has taken place. Unlike controlled experiments, there is no comparison group. This design is useful when researchers cannot administer a pre-test or when it is logistically difficult to do so.
Example: Suppose you want to assess the effectiveness of a new time management seminar. You offer the seminar to a group of employees and measure their productivity levels immediately afterward to determine if there's an observable impact.
Similar to the One-Group Posttest-Only Design, this approach includes a pre-test measure in addition to the post-test. Researchers collect data both before and after the intervention. By comparing the pre-test and post-test results within the same group, you can gain a better understanding of the changes that occur due to the independent variable.
Example: If you're studying the impact of a stress management program on participants' stress levels, you would measure their stress levels before the program (pre-test) and after completing the program (post-test) to assess any changes.
The Non-Equivalent Groups Design involves multiple groups, but they are not randomly assigned. Instead, researchers must carefully match or control for relevant variables to minimize biases. This design is particularly useful when random assignment is not possible or ethical.
Example: Imagine you're examining the effectiveness of two teaching methods in two different schools. You can't randomly assign students to the schools, but you can carefully match them based on factors like age, prior academic performance, and socioeconomic status to create equivalent groups.
Time Series Design is an approach where data is collected at multiple time points before and after the intervention. This design allows researchers to analyze trends and patterns over time, providing valuable insights into the sustained effects of the independent variable.
Example: If you're studying the impact of a new marketing campaign on product sales, you would collect sales data at regular intervals (e.g., monthly) before and after the campaign's launch to observe any long-term trends.
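Sketching that example with invented monthly sales figures, the simplest check is whether the post-campaign level rises and stays above the pre-campaign baseline:

```python
from statistics import mean

# Hypothetical monthly sales (units), six months before and after a campaign
pre_campaign = [120, 118, 125, 122, 119, 124]
post_campaign = [131, 135, 138, 140, 137, 142]

baseline = mean(pre_campaign)
lift = mean(post_campaign) - baseline

# A crude "sustained effect" check: every post-campaign month beats the baseline
sustained = all(m > baseline for m in post_campaign)

print(round(lift, 2), sustained)
```

A real time series analysis would also model trend and seasonality rather than comparing raw means, but this shows the before/after logic.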
Regression Discontinuity Design is employed when participants are assigned to different groups based on a specific cutoff score or threshold. This design is often used in educational and policy research to assess the effects of interventions near a cutoff point.
Example: Suppose you're evaluating the impact of a scholarship program on students' academic performance. Students who score just above a certain GPA threshold receive the scholarship, while those just below do not. Because students near the cutoff are otherwise very similar, comparing the two groups helps assess the program's effectiveness at that point.
Propensity Score Matching is a technique used to create comparable treatment and control groups in non-randomized studies. Researchers calculate propensity scores based on participants' characteristics and match individuals in the treatment group to those in the control group with similar scores.
Example: If you're studying the effects of a new medication on patient outcomes, you would use propensity scores to match patients who received the medication with those who did not but have similar health profiles.
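As a toy sketch of that process: compute each person's probability of receiving the treatment from their characteristics, then pair treated and untreated individuals with similar scores. The logistic coefficients below are invented; in practice they are estimated from the data (e.g., via logistic regression):

```python
import math

# Hypothetical patients: (id, age, severity); 'treated' took the medication
treated = [(1, 55, 0.80), (2, 62, 0.60)]
untreated = [(10, 54, 0.70), (11, 30, 0.20), (12, 61, 0.65), (13, 70, 0.90)]

def propensity(age, severity):
    # Toy logistic model with made-up coefficients
    z = -6.0 + 0.08 * age + 2.0 * severity
    return 1.0 / (1.0 + math.exp(-z))

def score(p):
    return propensity(p[1], p[2])

pairs = {}
pool = list(untreated)
for t in treated:
    best = min(pool, key=lambda c: abs(score(c) - score(t)))
    pairs[t[0]] = best[0]
    pool.remove(best)  # match without replacement

print(pairs)
```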
The Interrupted Time Series Design involves collecting data at multiple time points before and after the introduction of an intervention. However, in this design, the intervention occurs at a specific point in time, allowing researchers to assess its immediate impact.
Example: Let's say you're analyzing the effects of a new traffic management system on traffic accidents. You collect accident data before and after the system's implementation to observe any abrupt changes right after its introduction.
Each of these quasi-experimental designs offers unique advantages and is best suited to specific research questions and scenarios. Choosing the right design is crucial for conducting robust and informative studies.
Quasi-experimental design offers a valuable research approach, but like any methodology, it comes with its own set of advantages and disadvantages. Let's explore these in detail.
Quasi-experimental design presents several advantages that make it a valuable tool in research:
These advantages make quasi-experimental design an attractive choice for researchers facing practical or ethical constraints in their studies.
However, quasi-experimental design also comes with its share of challenges and disadvantages:
Despite these disadvantages, quasi-experimental design remains a valuable research tool when used judiciously and with a keen awareness of its limitations. Researchers should carefully consider their research questions and the practical constraints they face before choosing this approach.
Conducting a Quasi-Experimental study requires careful planning and execution to ensure the validity of your research. Let's dive into the essential steps you need to follow when conducting such a study.
The first step in any research endeavor is clearly defining your research questions and objectives. This involves identifying the independent variable (IV) and the dependent variable (DV) you want to study. What is the specific relationship you want to explore, and what do you aim to achieve with your research?
Choosing the right quasi-experimental design is crucial for achieving your research objectives. Select a design that aligns with your research questions and the available data. Consider factors such as the feasibility of implementing the design and the ethical considerations involved.
Selecting the right participants is a critical aspect of Quasi-Experimental research. The participants should represent the population you want to make inferences about, and you must address ethical considerations, including informed consent.
Data collection is a crucial step in Quasi-Experimental research. You must adhere to a consistent and systematic process to gather relevant information before and after the intervention or treatment.
Once you've collected your data, it's time to analyze it using appropriate statistical techniques. The choice of analysis depends on your research questions and the type of data you've gathered.
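As an illustration of what such a test computes, here is a hand-rolled Welch's t-statistic comparing post-test scores of two hypothetical groups (all numbers invented; a statistics package would also give you degrees of freedom and a p-value):

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical post-test scores for a treatment and a comparison group
treatment = [78, 82, 88, 75, 90, 85]
comparison = [70, 74, 69, 77, 72, 71]

def welch_t(a, b):
    # Welch's t-statistic: difference in means over its standard error,
    # without assuming equal variances in the two groups
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(treatment, comparison)
print(round(t, 2))
```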
With the analysis complete, you can interpret the results to draw meaningful conclusions about the relationship between the independent and dependent variables.
Based on your analysis and interpretation of the results, draw conclusions about the research questions and objectives you set out to address.
By following these steps meticulously, you can conduct a rigorous and informative Quasi-Experimental study that advances knowledge in your field of research.
Quasi-experimental design finds applications in a wide range of research domains, including business-related and market research scenarios. Below, we delve into some detailed examples of how this research methodology is employed in practice:
Suppose a company wants to evaluate the effectiveness of a new marketing strategy aimed at boosting sales. Conducting a controlled experiment may not be feasible due to the company's existing customer base and the challenge of randomly assigning customers to different marketing approaches. In this scenario, a quasi-experimental design can be employed.
In the context of human resources and employee development, organizations often seek to evaluate the impact of training programs. A randomized controlled trial (RCT) with random assignment may not be practical or ethical, as some employees may need specific training more than others. Instead, a quasi-experimental design can be employed.
In economics and public policy, researchers often examine the effects of tax policy changes on economic behavior. Conducting a controlled experiment in such cases is practically impossible. Therefore, a quasi-experimental design is commonly employed.
These examples illustrate how quasi-experimental design can be applied in various research contexts, providing valuable insights into the effects of independent variables in real-world scenarios where controlled experiments are not feasible or ethical. By carefully selecting comparison groups and controlling for potential biases, researchers can draw meaningful conclusions and inform decision-making processes.
Publishing your Quasi-Experimental research findings is a crucial step in contributing to the academic community's knowledge. We'll explore the essential aspects of reporting and publishing your Quasi-Experimental research effectively.
When preparing your research paper, it's essential to adhere to a well-structured format to ensure clarity and comprehensibility. Here are key elements to include:
Ethical reporting is paramount in Quasi-Experimental research. Ensure that you adhere to ethical standards, including:
When reporting your Quasi-Experimental research, watch out for common pitfalls that can diminish the quality and impact of your work:
To enhance the transparency and reproducibility of your Quasi-Experimental research, consider adhering to established reporting guidelines, such as:
By following these reporting guidelines and maintaining the highest ethical standards, you can contribute to the advancement of knowledge in your field and ensure the credibility and impact of your Quasi-Experimental research findings.
Conducting a Quasi-Experimental study can be fraught with challenges that may impact the validity and reliability of your findings. We'll take a look at some common challenges and provide strategies on how you can address them effectively.
Challenge: Selection bias occurs when non-randomized groups differ systematically in ways that affect the study's outcome. This bias can undermine the validity of your research, as it implies that the groups are not equivalent at the outset of the study.
Addressing Selection Bias:
Challenge: History effects refer to external events or changes over time that influence the study's results. These external factors can confound your research by introducing variables you did not account for.
Addressing History Effects:
Challenge: Maturation effects occur when participants naturally change or develop throughout the study, independent of the intervention. These changes can confound your results, making it challenging to attribute observed effects solely to the independent variable.
Addressing Maturation Effects:
Challenge: Regression to the mean is the tendency for extreme scores on a variable to move closer to the mean upon retesting. This can create the illusion of an intervention's effectiveness when, in reality, it's a natural statistical phenomenon.
Addressing Regression to the Mean:
Challenge: Attrition refers to the loss of participants over the course of your study, while mortality is the permanent loss of participants. High attrition rates can introduce biases and affect the representativeness of your sample.
Addressing Attrition and Mortality:
Challenge: Testing effects occur when the mere act of testing or assessing participants affects their subsequent performance. This phenomenon can lead to changes in the dependent variable that are unrelated to the independent variable.
Addressing Testing Effects:
By proactively addressing these common challenges, you can enhance the validity and reliability of your Quasi-Experimental study, making your findings more robust and trustworthy.
Quasi-experimental design is a powerful tool that helps researchers investigate cause-and-effect relationships in real-world situations where strict control is not always possible. By understanding the key concepts, types of designs, and how to address challenges, you can conduct robust research and contribute valuable insights to your field. Remember, quasi-experimental design bridges the gap between controlled experiments and purely observational studies, making it an essential approach in various fields, from business and market research to public policy and beyond. So, whether you're a researcher, student, or decision-maker, the knowledge of quasi-experimental design empowers you to make informed choices and drive positive changes in the world.
Introducing Appinio , the real-time market research platform that transforms the world of quasi-experimental design. Imagine having the power to conduct your own market research in minutes, obtaining actionable insights that fuel your data-driven decisions. Appinio takes care of the research and tech complexities, freeing you to focus on what truly matters for your business.
Here's why Appinio stands out:
Discover the concept of quasi-experiment, its various types, real-world examples, and how QuestionPro aids in conducting these studies.
Quasi-experimental research designs have gained significant recognition in the scientific community due to their unique ability to study cause-and-effect relationships in real-world settings. Unlike true experiments, quasi-experiments lack random assignment of participants to groups, making them more practical and ethical in certain situations. In this article, we will delve into the concept, applications, and advantages of quasi-experiments, shedding light on their relevance and significance in the scientific realm.
Quasi-experimental research designs are research methodologies that resemble true experiments but lack the randomized assignment of participants to groups. In a true experiment, researchers randomly assign participants to either an experimental group or a control group, allowing for a comparison of the effects of an independent variable on the dependent variable. However, in quasi-experiments, this random assignment is often not possible or ethically permissible, leading to the adoption of alternative strategies.
There are several types of quasi-experimental designs used to study causal relationships in specific contexts. Some common types include:
This design involves selecting pre-existing groups that differ in some key characteristics and comparing their responses to the independent variable. Although the researcher does not randomly assign the groups, they can still examine the effects of the independent variable.
This design utilizes a cutoff point or threshold to determine which participants receive the treatment or intervention. It assumes that participants on either side of the cutoff are similar in all other aspects, except for their exposure to the independent variable.
This design involves measuring the dependent variable multiple times before and after the introduction of an intervention or treatment. By comparing the trends in the dependent variable, researchers can infer the impact of the intervention.
Natural experiments take advantage of naturally occurring events or circumstances that mimic the random assignment found in true experiments. Researchers identify situations in which participants are exposed to different conditions, without manipulating anything themselves.
Quasi-experimental research designs find applications in various fields, ranging from education to public health and beyond. One significant advantage of quasi-experiments is their feasibility in real-world settings where randomization is not always possible or ethical.
Ethical concerns often arise in research when randomizing participants to different groups could potentially deny individuals access to beneficial treatments or interventions. In such cases, quasi-experimental designs provide an ethical alternative, allowing researchers to study the impact of interventions without depriving anyone of potential benefits.
Let’s explore a few examples of quasi-experimental designs to understand their application in different contexts.
Determining the effectiveness of math apps in supplementing math classes.
Imagine a study aiming to determine the effectiveness of math apps in supplementing traditional math classes in a school. Randomly assigning students to different groups might be impractical or disrupt the existing classroom structure. Instead, researchers can select two comparable classes, one receiving the math app intervention and the other continuing with traditional teaching methods. By comparing the performance of the two groups, researchers can draw conclusions about the app’s effectiveness.
To conduct a quasi-experiment study like the one mentioned above, researchers can utilize QuestionPro, an advanced research platform that offers comprehensive survey and data analysis tools. With QuestionPro, researchers can design surveys to collect data, analyze results, and gain valuable insights for their quasi-experimental research.
QuestionPro’s powerful features, such as random assignment of participants, survey branching, and data visualization, enable researchers to efficiently conduct and analyze quasi-experimental studies. The platform provides a user-friendly interface and robust reporting capabilities, empowering researchers to gather data, explore relationships, and draw meaningful conclusions.
In some cases, researchers can leverage natural experiments to examine causal relationships.
Consider a study evaluating the effectiveness of teaching modern leadership techniques in start-up businesses. Instead of artificially assigning businesses to different groups, researchers can observe those that naturally adopt modern leadership techniques and compare their outcomes to those of businesses that have not implemented such practices.
Quasi-experimental designs offer several advantages over true experiments, making them valuable tools in research:
Their main limitation, however, is the lack of random assignment: without it, confounding variables may affect the results, and researchers must carefully consider potential alternative explanations for observed effects.
Quasi-experimental designs encompass various approaches, including nonequivalent group designs, interrupted time series designs, and natural experiments. Each design offers unique advantages and limitations, providing researchers with versatile tools to explore causal relationships in different contexts.
Researchers interested in studying the impact of a public health campaign aimed at reducing smoking rates may take advantage of a natural experiment. By comparing smoking rates in a region that has implemented the campaign to a similar region that has not, researchers can examine the effectiveness of the intervention.
Quasi-experiments and true experiments differ primarily in their ability to randomly assign participants to groups. While true experiments provide a higher level of control, quasi-experiments offer practical and ethical alternatives in situations where randomization is not feasible or desirable.
In a true experiment investigating the effects of a new medication on a specific condition, researchers would randomly assign participants to either the experimental group, which receives the medication, or the control group, which receives a placebo. In a quasi-experiment, researchers might instead compare patients who voluntarily choose to take the medication to those who do not, examining the differences in outcomes between the two groups.
Quasi-experimental research designs play a vital role in scientific inquiry by allowing researchers to investigate cause-and-effect relationships in real-world settings. These designs offer practical and ethical alternatives to true experiments, making them valuable tools in various fields of study. With their versatility and applicability, quasi-experimental designs continue to contribute to our understanding of complex phenomena.
When you wish to explain any complex data, it’s always advised to break it down into simpler visuals or stories. This is where Mind the Graph comes in. It is a platform that helps researchers and scientists to turn their data into easy-to-understand and dynamic stories, helping the audience understand the concepts better. Sign Up now to explore the library of scientific infographics.
Quasi-Experimental Research Design – Types, Methods
Quasi-experimental design is a research method that seeks to evaluate the causal relationships between variables, but without the full control over the independent variable(s) that is available in a true experimental design.
In a quasi-experimental design, the researcher uses an existing group of participants that is not randomly assigned to the experimental and control groups. Instead, the groups are selected based on pre-existing characteristics or conditions, such as age, gender, or the presence of a certain medical condition.
There are several types of quasi-experimental designs that researchers use to study causal relationships between variables. Here are some of the most common types:
This design involves selecting two groups of participants that are similar in every way except for the independent variable(s) that the researcher is testing. One group receives the treatment or intervention being studied, while the other group does not. The two groups are then compared to see if there are any significant differences in the outcomes.
This design involves collecting data on the dependent variable(s) over a period of time, both before and after an intervention or event. The researcher can then determine whether there was a significant change in the dependent variable(s) following the intervention or event.
This design involves measuring the dependent variable(s) before and after an intervention or event, but without a control group. This design can be useful for determining whether the intervention or event had an effect, but it does not allow for control over other factors that may have influenced the outcomes.
This design involves selecting participants based on a specific cutoff point on a continuous variable, such as a test score. Participants on either side of the cutoff point are then compared to determine whether the intervention or event had an effect.
This design involves studying the effects of an intervention or event that occurs naturally, without the researcher’s intervention. For example, a researcher might study the effects of a new law or policy that affects certain groups of people. This design is useful when true experiments are not feasible or ethical.
Here are some data analysis methods that are commonly used in quasi-experimental designs:
This method involves summarizing the data collected during a study using measures such as mean, median, mode, range, and standard deviation. Descriptive statistics can help researchers identify trends or patterns in the data, and can also be useful for identifying outliers or anomalies.
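A minimal sketch of these summaries with Python's standard library, using invented scores:

```python
from statistics import mean, median, mode, stdev

# Hypothetical post-intervention scores from a single group
scores = [70, 74, 74, 78, 80, 84, 100]

summary = {
    "mean": mean(scores),
    "median": median(scores),
    "mode": mode(scores),
    "range": max(scores) - min(scores),
    "stdev": round(stdev(scores), 2),  # sample standard deviation
}
print(summary)
```

Note how the outlier (100) pulls the mean above the median, which is exactly the kind of anomaly descriptive statistics help surface.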
This method involves using statistical tests to determine whether the results of a study are statistically significant. Inferential statistics can help researchers make generalizations about a population based on the sample data collected during the study. Common statistical tests used in quasi-experimental designs include t-tests, ANOVA, and regression analysis.
This method is used to reduce bias in quasi-experimental designs by matching participants in the intervention group with participants in the control group who have similar characteristics. This can help to reduce the impact of confounding variables that may affect the study’s results.
This method is used to compare the difference in outcomes between two groups over time. Researchers can use this method to determine whether a particular intervention has had an impact on the target population over time.
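In its simplest two-group, two-period form, the difference-in-differences estimate is plain arithmetic on four group means. A toy sketch with invented numbers:

```python
# Hypothetical mean outcomes (e.g., average test scores) for two groups,
# measured before and after an intervention given only to the treated group
treated_pre, treated_post = 60.0, 72.0
control_pre, control_post = 58.0, 63.0

# Change in the treated group, minus the change that occurred
# anyway in the control group (the counterfactual trend)
did = (treated_post - treated_pre) - (control_post - control_pre)

print(did)
```

This estimate relies on the "parallel trends" assumption: absent the intervention, both groups would have changed by the same amount.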
This method is used to examine the impact of an intervention or treatment over time by comparing data collected before and after the intervention or treatment. This method can help researchers determine whether an intervention had a significant impact on the target population.
This method is used to compare the outcomes of participants who fall on either side of a predetermined cutoff point. This method can help researchers determine whether an intervention had a significant impact on the target population.
Here are the general steps involved in conducting a quasi-experimental design:
Here are some examples of real-world quasi-experimental designs:
Here are some applications of quasi-experimental design:
Here are some situations where quasi-experimental designs may be appropriate:
The purpose of quasi-experimental design is to investigate the causal relationship between two or more variables when it is not feasible or ethical to conduct a randomized controlled trial (RCT). Quasi-experimental designs attempt to emulate the randomized controlled trial by approximating the comparison between an intervention group and a control group as closely as possible.
The key purpose of quasi-experimental design is to evaluate the impact of an intervention, policy, or program on a targeted outcome while controlling for potential confounding factors that may affect the outcome. Quasi-experimental designs aim to answer questions such as: Did the intervention cause the change in the outcome? Would the outcome have changed without the intervention? And was the intervention effective in achieving its intended goals?
Quasi-experimental designs are useful in situations where randomized controlled trials are not feasible or ethical. They provide researchers with an alternative method to evaluate the effectiveness of interventions, policies, and programs in real-life settings. Quasi-experimental designs can also help inform policy and practice by providing valuable insights into the causal relationships between variables.
Overall, the purpose of quasi-experimental design is to provide a rigorous method for evaluating the impact of interventions, policies, and programs while controlling for potential confounding factors that may affect the outcome.
IResearchNet
Quasi-experimental design definition.
Quasi-experimental designs are most often used in natural (nonlaboratory) settings over longer periods and usually include an intervention or treatment. Consider, for example, a study of the effect of a motivation intervention on class attendance and enjoyment in students. When an intact group such as a classroom is singled out for an intervention, randomly assigning each person to experimental conditions is not possible. Rather, the researcher gives one classroom the motivational intervention (intervention group) and the other classroom receives no intervention (comparison group). The researcher uses two classrooms that are as similar as possible in background (e.g., same age, racial composition) and that have comparable experiences within the class (e.g., type of class, meeting time) except for the intervention. In addition, the researcher gives participants in both conditions (comparison and motivation intervention) pretest questionnaires to assess attendance, enjoyment, and other related variables before the intervention. After the intervention is administered, the researcher measures attendance and enjoyment of the class. The researcher can then determine if students in the motivation intervention group enjoyed and attended class more than the students in the comparison group did.
How should results from this hypothetical study be interpreted? When interpreting the results of quasi-experimental designs that lack random assignment of participants to conditions, investigators must be cautious about drawing conclusions about causality because of potential confounds in the setting. In the previous hypothetical example, for instance, the course material in the intervention group might have become more engaging while the comparison group started to cover a more mundane topic, and that difference could have produced the changes in class enjoyment and attendance. However, if the intervention group and comparison group had similar pretest scores and comparable classroom experiences, then changes in posttest scores suggest that the motivation intervention influenced class attendance and enjoyment.
Quasi-experiments are most useful when conducting research in settings where random assignment is not possible because of ethical considerations or constraining situational factors. In consequence, such designs are more prevalent in studies conducted in natural settings, thereby increasing the real-world applicability of the findings. Such studies are not, however, true experiments, and thus the lack of control over assignment of participants to conditions renders causal conclusions suspect.
Last updated: 6 February 2023. Reviewed by Miroslav Damyanov.
Quasi-experimental design is commonly used in medical informatics (a field that uses digital information to improve patient care), where researchers generally apply it to evaluate the effectiveness of a treatment – perhaps a type of antibiotic or psychotherapy, or an educational or policy intervention.
Even though quasi-experimental design has been used for some time, it is less widely understood than the randomized controlled trial. Read on to learn the ins and outs of this research design.
A quasi-experimental design is used when it's not logistically feasible or ethical to conduct randomized, controlled trials. As its name suggests, a quasi-experimental design is almost a true experiment. However, researchers don't randomly select elements or participants in this type of research.
Researchers prefer to apply quasi-experimental design when there are ethical or practical concerns. Let's look at these two reasons more closely.
In some situations, randomly assigning participants would be unethical. For instance, providing public healthcare to one group while withholding it from another purely for research purposes is unethical. A quasi-experimental design instead examines the relationship between pre-existing groups, avoiding the harm of withholding care.
Randomized controlled trials are not always the best approach. For instance, it may be impractical to recruit and screen a very large pool of participants without using an existing attribute to guide data collection.
Recruiting participants and properly designing a data-collection attribute to make the research a true experiment requires a lot of time and effort, and can be expensive if you don’t have a large funding stream.
A quasi-experimental design allows researchers to take advantage of previously collected data and use it in their study.
Quasi-experimental research design is common in medical research, but any researcher can use it for research that raises practical and ethical concerns. Here are a few examples of quasi-experimental designs used by different researchers:
A school wanted to supplement its math classes with a math app. To select the best app, the school decided to conduct demo tests on two apps before selecting the one they will purchase.
Since every grade had two math teachers, each teacher used one of the two apps for three months. They then gave the students the same math exams and compared the results to determine which app was most effective.
This simple study is a quasi-experiment since the school didn't randomly assign its students to the applications. They used a pre-existing class structure to conduct the study since it was impractical to randomly assign the students to each app.
A hypothetical quasi-experimental study was conducted in an economically developing country in a mid-sized city.
Five start-ups in the textile industry and five in the tech industry participated in the study. The leaders attended a six-week workshop on leadership style, team management, and employee motivation.
After a year, the researchers assessed the performance of each start-up company to determine growth. The results indicated that the tech start-ups were further along in their growth than the textile companies.
The basis of quasi-experimental research is a non-randomized subject-selection process. This study didn't use random criteria to determine which start-up companies participated. The results may therefore seem straightforward, but factors beyond the variables the researchers measured may explain the differences in growth between the companies.
In a study to determine the economic impact of government reforms in an economically developing country, the government decided to test whether creating reforms directed at small businesses or luring foreign investments would spur the most economic development.
The government selected two cities with similar population demographics and sizes. In one of the cities, they implemented specific policies that would directly impact small businesses, and in the other, they implemented policies to attract foreign investment.
After five years, they collected end-of-year economic growth data from both cities. They looked at elements like local GDP growth, unemployment rates, and housing sales.
The study used a non-randomized selection process to determine which cities would participate in the research. Because the researchers used pre-existing groups based on prior research in each city rather than random assignment, variables outside their control could still play a crucial role in each city's growth.
Some advantages of quasi-experimental designs are:
Researchers can manipulate variables to help them meet their study objectives.
It offers high external validity, making it suitable for real-world applications, specifically in social science experiments.
Integrating this methodology into other research designs is easier, especially in true experimental research. This cuts down on the time needed to determine your outcomes.
Despite the pros that come with a quasi-experimental design, there are several disadvantages associated with it, including the following:
It has lower internal validity, since researchers do not have full control over differences between the comparison and intervention groups or between time periods – differences in the people, places, or times involved. It can be challenging to determine whether all relevant variables were accounted for, or how those that were affected the results.
There is the risk of inaccurate data since the research design borrows information from other studies.
There is the possibility of bias since researchers select baseline elements and eligibility.
There are three distinct types of quasi-experimental designs: nonequivalent group design, regression discontinuity, and natural experiments.
Nonequivalent group design is a hybrid of experimental and quasi-experimental methods, used to leverage the best qualities of the two. Like a true experiment, it compares a treatment group with a comparison group; unlike a true experiment, it uses pre-existing groups believed to be comparable rather than random assignment – and that missing randomization is what makes the design quasi-experimental.
Researchers usually ensure that no confounding variables impact them throughout the grouping process. This makes the groupings more comparable.
A small study was conducted to determine whether after-school programs result in better grades. Researchers selected two existing groups of students – one enrolled in the new program, the other not – and then compared the results of the two groups.
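One simple way to make such pre-existing groups more comparable is to match each treated unit to the control unit with the closest covariate value. As a hypothetical sketch, a single pretest score stands in here for a full propensity score, and all names and numbers are invented:

```python
# Simplified one-covariate nearest-neighbor matching sketch.
# Real propensity-score matching first estimates P(treatment | covariates);
# here we match directly on a single hypothetical pretest score.

def match_nearest(treated, controls):
    """Pair each treated unit with the control whose score is closest.

    treated, controls: lists of (id, score) tuples (hypothetical data).
    Controls are matched with replacement for simplicity.
    """
    pairs = []
    for tid, tscore in treated:
        cid, _ = min(controls, key=lambda c: abs(c[1] - tscore))
        pairs.append((tid, cid))
    return pairs

treated = [("t1", 71), ("t2", 55)]
controls = [("c1", 52), ("c2", 70), ("c3", 90)]
print(match_nearest(treated, controls))  # [('t1', 'c2'), ('t2', 'c1')]
```

After matching, the outcome comparison is restricted to the matched pairs, which removes some (though never all) of the imbalance between the groups.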
This type of quasi-experimental design estimates the impact of a specific treatment or intervention using a criterion known as a "cutoff" that assigns treatment according to eligibility.
Researchers assign participants on one side of the cutoff (say, just above it) to the treatment group. Participants just above and just below the cutoff differ only negligibly, so later differences in their outcomes can be attributed to the treatment.
Students must achieve a minimum score to be enrolled in certain US high schools. Because the cutoff score used to determine eligibility is essentially arbitrary, researchers can assume that students who only just fail to reach it and those who barely pass are very similar, so any later disparity between the two groups can be attributed to the difference in the schools these students attend.
Researchers can then compare the long-term outcomes of these two groups of students to estimate the effect of attending those schools, and the findings can inform enrollment policy.
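A minimal sketch of this regression-discontinuity comparison, assuming hypothetical (score, outcome) data and using a simple difference of means within a bandwidth around the cutoff (real analyses typically fit local regressions on each side instead):

```python
# Sharp regression-discontinuity sketch: compare units just above and
# just below a cutoff, within a small bandwidth (hypothetical data).

def rdd_estimate(records, cutoff, bandwidth):
    """records: list of (running_score, outcome) pairs.
    Returns mean(outcome just above cutoff) - mean(outcome just below)."""
    above = [y for x, y in records if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in records if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

# Hypothetical (entrance score, later achievement) pairs around a cutoff of 60:
data = [(57, 40), (58, 42), (59, 41), (60, 50), (61, 52), (62, 51)]
print(rdd_estimate(data, cutoff=60, bandwidth=3))  # 10.0
```

The choice of bandwidth is a judgment call: narrower bands make the groups more comparable but leave fewer observations.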
Unlike laboratory and field experiments, where researchers themselves assign subjects to different groups, in a natural experiment assignment to the treatment group arises from nature or from an external event or situation (for example, a lottery or a policy change).
However, even when that assignment is effectively random, this design cannot be called a true experiment, because the researchers merely observe it: they can exploit the naturally occurring assignment despite having no control over the independent variable.
An example of a natural experiment is the 2008 Oregon Health Study.
Oregon intended to allow more low-income people to participate in Medicaid.
Since they couldn't afford to cover every person who qualified for the program, the state used a random lottery to allocate program slots.
Researchers assessed the program's effectiveness by treating the lottery winners as the treatment group, while those who didn't win the lottery were considered the control group.
There are several differences between a quasi-experiment and a true experiment:
Participants in true experiments are randomly assigned to the treatment or control group, while participants in a quasi-experiment are not assigned randomly.
In a quasi-experimental design, the control and treatment groups differ in unknown or unknowable ways, apart from the experimental treatments that are carried out. Therefore, the researcher should try as much as possible to control these differences.
Quasi-experimental designs have several "competing hypotheses," which compete with experimental manipulation to explain the observed results.
Quasi-experiments tend to have lower internal validity (the degree of confidence in the research outcomes) than true experiments, but they may offer higher external validity (whether findings can be extended to other contexts) as they involve real-world interventions instead of controlled interventions in artificial laboratory settings.
Despite the distinct difference between true and quasi-experimental research designs, these two research methodologies share the following aspects:
Both study methods subject participants to some form of treatment or conditions.
Researchers have the freedom to measure some of the outcomes of interest.
Researchers can test whether the differences in the outcomes are associated with the treatment.
Imagine you wanted to study the effects of junk food on obese people. Here's how you would do this as a true experiment and a quasi-experiment:
In a true experiment, some participants would eat junk foods, while the rest would be in the control group, adhering to a regular diet. At the end of the study, you would record the health and discomfort of each group.
This kind of experiment would raise ethical concerns since the participants assigned to the treatment group are required to eat junk food against their will throughout the experiment. This calls for a quasi-experimental design.
In quasi-experimental research, you would start by finding out which participants want to try junk food and which prefer to stick to a regular diet. This allows you to assign these two groups based on subject choice.
In this case, you didn't assign participants to a particular group yourself, which resolves the ethical concern. The trade-off is that, because participants self-selected, the results must be interpreted with more caution than those of a true experiment.
Quasi-experimental designs are used when researchers cannot, or should not, use randomization when evaluating their intervention.
Some of the characteristics of a quasi-experimental design are:
Researchers don't randomly assign participants into groups, but study their existing characteristics and assign them accordingly.
Researchers study the participants in pre- and post-testing to determine the progress of the groups.
Quasi-experimental design is ethical since it doesn’t involve offering or withholding treatment at random.
Quasi-experimental design encompasses a broad range of non-randomized intervention studies. This design is employed when it is not ethical or logistically feasible to conduct randomized controlled trials. Researchers typically employ it when evaluating policy or educational interventions, or in medical or therapy scenarios.
You can use two-group tests, time-series analysis, and regression analysis to analyze data in a quasi-experiment design. Each option has specific assumptions, strengths, limitations, and data requirements.
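As an illustration of the two-group test option, here is a minimal Welch's t statistic computed with the Python standard library. The scores are hypothetical, and a full analysis would also compute degrees of freedom and a p-value:

```python
# Minimal two-group comparison: Welch's t statistic from raw scores
# (standard library only; the scores below are hypothetical).
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

intervention = [12, 14, 15, 13, 16]
comparison = [10, 11, 9, 12, 10]
print(round(welch_t(intervention, comparison), 2))  # 4.13
```

Welch's version is a reasonable default for quasi-experiments because it does not assume the two groups have equal variances, an assumption that non-randomized groups often violate.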
Here’s a table that summarizes the similarities and differences between an experimental and a quasi-experimental study design:
| | Experimental Study (a.k.a. Randomized Controlled Trial) | Quasi-Experimental Study |
|---|---|---|
| Objective | Evaluate the effect of an intervention or a treatment | Evaluate the effect of an intervention or a treatment |
| How participants are assigned to groups | Random assignment | Non-random assignment (participants are assigned according to their own choosing or that of the researcher) |
| Is there a control group? | Yes | Not always (although, if present, a control group provides better evidence for the study results) |
| Is there any room for confounding? | No (although post-randomization confounding can still be a concern) | Yes (however, statistical techniques can be used to study causal relationships in quasi-experiments) |
| Level of evidence | At the highest level in the hierarchy of evidence | One level below the experimental study in the hierarchy of evidence |
| Advantages | Minimizes bias and confounding | Can be used where an experiment is not ethically or practically feasible; can work with smaller sample sizes than randomized trials |
| Limitations | High cost (generally requires a large sample size); ethical limitations; generalizability issues; sometimes practically infeasible | Lower in the hierarchy of evidence, as losing the power of randomization makes the study more susceptible to bias and confounding |
A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention. The intervention can be a training program, a policy change or a medical treatment.
Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn’t is not randomized. Instead, the intervention can be assigned to participants according to their choosing or that of the researcher, or by using any method other than randomness.
Having a control group is not required, but if present, it provides a higher level of evidence for the relationship between the intervention and the outcome.
(For more information, I recommend my other article: Understand Quasi-Experimental Design Through an Example.)
Examples of quasi-experimental designs include the nonequivalent groups design, the pretest-posttest design, regression discontinuity, and natural experiments.
An experimental design is a randomized study design used to evaluate the effect of an intervention. In its simplest form, the participants will be randomly divided into 2 groups:
Randomization ensures that each participant has the same chance of receiving the intervention. Its objective is to equalize the 2 groups, and therefore, any observed difference in the study outcome afterwards will only be attributed to the intervention – i.e. it removes confounding.
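The random assignment step can be sketched in a few lines. The participant IDs are hypothetical, and the seed exists only to make the demo repeatable:

```python
# Random assignment sketch: shuffle the roster, then split it in half,
# giving every participant the same chance of receiving the intervention.
import random

def randomize(participants, seed=None):
    rng = random.Random(seed)           # seed only for a repeatable demo
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]     # (intervention group, control group)

intervention, control = randomize(["p1", "p2", "p3", "p4", "p5", "p6"], seed=1)
print(intervention, control)
```

Because every ordering of the roster is equally likely, any pre-existing characteristic is, on average, balanced across the two groups.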
(For more information, I recommend my other article: Purpose and Limitations of Random Assignment.)
Examples of experimental designs include randomized controlled trials in their various forms, such as parallel-group and crossover trials.
Although many statistical techniques can be used to deal with confounding in a quasi-experimental study, in practice, randomization is still the best tool we have to study causal relationships.
Another problem with quasi-experiments is the natural progression of the disease or condition under study: when studying the effect of an intervention over time, one should account for natural changes, because these can be mistaken for changes in the outcome caused by the intervention. Having a well-chosen control group helps deal with this issue.
So, if losing the element of randomness seems like an unwise step down in the hierarchy of evidence, why would we ever want to do it?
This is what we’re going to discuss next.
The issue with randomization is that it is not always achievable.
So here are some cases where using a quasi-experimental design makes more sense than using an experimental one:
Defne Çobanoğlu
According to the Cambridge Dictionary, the word quasi is “used to show that something is almost, but not completely, the thing described.” And as the name suggests, quasi-experiments are almost experiments because of the way they are conducted. What actually differentiates this type of experiment from true experimental research is the way the subjects are divided.
In a true experiment, sample groups are assigned to an experimental group and to a treatment group randomly. However, there are some studies in which the use of random assignment would not be possible because that would be unethical or impractical. These studies follow a quasi-experimental research design. Let us see exactly what is a quasi-experimental design and give some examples.
Quasi-experimental research is a type of experiment in which the researcher does not randomly assign subjects. Instead, unlike in a true experiment, subjects are assigned to groups based on non-random criteria. The researchers may manipulate an independent variable and observe the effect on a dependent variable, but they cannot randomly assign participants to the groups being studied.
The reason may be practicality or ethics: you cannot deliberately deprive someone of treatment or intentionally harm them. As a consequence, quasi-experimental research can suggest cause-and-effect relationships, but not with the confidence that true experimental research can.
It is now quite clear that in quasi-experimental research, researchers do not randomly assign people to control or study groups; instead, various other criteria divide the participants. The resulting types are called nonequivalent group design, regression discontinuity, and natural experiments. Here is an explanation of each type, with some examples:
In true experimental research, the only systematic difference between the two groups is the variable under study. In a quasi-experimental approach, the groups may differ in more than one way, because you cannot divide participants equally and randomly – this is what makes the groups nonequivalent. It is the most popular type of quasi-experimental design because it fits the widest range of situations.
Let us say a school has implemented a new teaching method for its students, and as a researcher you want to know whether it has a positive effect. Since you cannot split the school randomly in half as you would in a true experimental design, you can use pre-existing groups, such as another school that does not use this method.
Afterward, you can run the study and see whether there is a major difference in student outcomes. However, confounding differences between the two groups could affect the results. To minimize them, researchers would need to control for factors such as prior academic performance, student demographics, or teaching experience in their analysis.
In regression discontinuity, the researcher does not randomly assign participants to a treatment and control group. Instead, this type of experiment relies on a natural threshold or dividing point: only people on one side of the threshold receive the treatment. Because participants just above and just below the threshold are nearly alike, differences in their outcomes can plausibly be attributed to the treatment, which provides a good starting point.
A good example of regression discontinuity would be researching the impact of giving financial aid to students who have more than a 3.0 GPA. Only the students whose scores are higher would receive financial aid, and students whose scores are just below 3.0 or similar would be included in the study as a second group.
Afterward, the next step is to compare the two groups' outcomes (e.g., graduation rates, job placements, or incomes) to estimate the effect of the financial aid program. This is a good example of a quasi-experimental research design that requires little interference from the researcher.
Normally, in a true experiment, researchers assign people to either a control group or a treatment group. In a natural experiment, by contrast, an external circumstance ("nature") assigns people to the treatment group in a random or quasi-random way. Natural experiments do not qualify as actual experiments because they are observational.
Suppose a birth-control shot is made available to low-income villages in developing countries, and more villages want to receive the treatment for free than the available stock can cover. In that scenario, the experts can hold a random lottery to distribute the medicine.
Experts could then investigate the program's impact by using the enrolled villages as the treatment group and the villages that qualified but were not picked as the comparison group.
Although true experiments have higher internal validity, there are good reasons to conduct a quasi-experimental design instead. Deliberately withholding or imposing a treatment can be unethical: if there is a cure for an illness, you cannot randomly decide who receives it. But if some other circumstance already determines who gets the medicine, that gives you a place to start.
Secondly, conducting a true experiment could be unfeasible, too expensive, or too much work for it to be practical. If the researchers do not have enough funding or experimental subjects, a quasi-experiment could be helpful to do the research. And there are different approaches the researcher can take in an experiment like this.
When doing any kind of research, it is good to start by going through existing data, as someone may have already done a similar study. This gives you prior knowledge of what to expect, and it is quite an affordable option.
Researchers can build online surveys to collect data from study participants in a short amount of time. They can also send periodic surveys to keep collecting data as time passes. It is a very effortless and affordable option, and the participants can answer questions anytime, anywhere.
Quasi-experimental designs have various pros and cons compared to other types of studies, and it is up to the researchers to decide whether to go with a true or a quasi-experimental design. It is important to remember that even if you want a true experiment, you may be unable to run one for a variety of reasons. Now, let us go through some of the advantages and disadvantages.
✅Quasi-experimental designs often involve real-world situations instead of artificial laboratory settings, therefore, have higher external validity.
✅Higher internal validity than other non-experimental research types, since it allows you to control for confounding variables better than those studies do.
✅Because researchers choose the control or comparison groups rather than randomizing them, studies can be more controlled, targeted, and efficient, focusing on the populations of interest.
✅Allows studies in areas where experimenting would be unethical or impractical.
✅When working on a tight budget, a quasi-experiment lets you draw conclusions without paying as much for the study.
❌Lack of randomization makes it more challenging, or even impossible, to rule out confounding variables and their effect on the relationship that the research is about.
❌The use of secondary data already collected for other purposes can be inaccurate, incomplete, or difficult to access.
❌Quasi-experimental studies aren’t as effective in establishing causality.
❌Because a quasi-experimental design often borrows information from other experimental methods, there’s a chance that the data is not complete or accurate.
In conclusion, quasi-experimental research is a type of experiment with its own advantages and disadvantages. It is an option when a true experiment is not possible for one reason or another, and online surveys and secondary data collection are good methods to use within this type of experiment.
Chapter 7: Nonexperimental Research
The prefix quasi means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). [1] Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.
Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.
Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A nonequivalent groups design , then, is a between-subjects design in which participants have not been randomly assigned to conditions.
Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students and a control group consisting of another class of third-grade students. This design would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.
Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.
In a pretest-posttest design, the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.
If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of history. Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of maturation. Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.
Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean. This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect. A closely related concept—and an extremely important one in psychological research—is spontaneous remission. This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001). [2]
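The selection effect described above is easy to reproduce in simulation. The sketch below uses a toy two-component score model (an assumption for illustration, not from the chapter): each observed score is a stable "true ability" plus independent luck on each testing occasion. The bottom 10% of first-test scorers, the students who would be given the "special training program," are simply retested with no treatment at all:

```python
import random
random.seed(1)

# Toy model (assumed): observed score = stable ability + per-occasion luck.
ability = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [a + random.gauss(0, 15) for a in ability]
test2 = [a + random.gauss(0, 15) for a in ability]

# Select only the lowest scorers on test 1 and retest them, untreated.
cutoff = sorted(test1)[len(test1) // 10]                 # bottom 10%
selected = [(t1, t2) for t1, t2 in zip(test1, test2) if t1 <= cutoff]

mean1 = sum(t1 for t1, _ in selected) / len(selected)
mean2 = sum(t2 for _, t2 in selected) / len(selected)
print(f"selected group, test 1 mean: {mean1:.1f}")
print(f"selected group, test 2 mean: {mean2:.1f}")       # higher, with no treatment
```

Because the selected group's first scores include unusually bad luck, their second scores are pulled back toward the population mean even though nothing was done to them.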
Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
Does Psychotherapy Work?
Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952). [3] But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate without receiving psychotherapy. This parallel suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here: Classics in the History of Psychology.
Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980). [4] They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.
A variant of the pretest-posttest design is the interrupted time-series design. A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this one is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979). [5] Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.
Figure 7.3 shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 7.3 shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 7.3 shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.
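The logic of Figure 7.3 can be sketched with invented absence counts (hypothetical numbers chosen to mimic the two panels, not the actual figure data):

```python
# Seven weeks before and seven weeks after the instructor begins taking
# attendance; numbers are invented for illustration.
worked     = [6, 5, 7, 4, 6, 5, 8,  1, 2, 0, 1, 3, 1, 2]   # sustained drop
not_worked = [6, 5, 7, 4, 6, 5, 8,  4, 6, 5, 7, 6, 5, 8]   # normal variation

def mean(xs):
    return sum(xs) / len(xs)

for label, series in [("worked", worked), ("did not work", not_worked)]:
    print(f"treatment {label}: "
          f"{mean(series[:7]):.2f} -> {mean(series[7:]):.2f} absences/week")

# A single-point comparison of Week 7 vs. Week 8 shows a "drop" in BOTH
# series (8 -> 1 and 8 -> 4); only the full series reveals that the second
# drop is ordinary week-to-week variation.
```

This is the advantage the chapter describes: multiple measurements on each side of the interruption let normal fluctuation be distinguished from a sustained change.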
A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does not receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve more than participants who do not receive the treatment.
Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this change in attitude could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.
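The logic of this combined design is essentially a difference-in-differences comparison. A minimal sketch, with invented attitude scores (higher = more negative toward drugs):

```python
# Hypothetical numbers for the two-school antidrug example.
treatment = {"pre": 3.1, "post": 4.6}   # school exposed to the antidrug program
control   = {"pre": 3.0, "post": 3.4}   # similar school, no program

change_treatment = treatment["post"] - treatment["pre"]
change_control   = control["post"] - control["pre"]

# History and maturation should shift both schools by about the same amount,
# so the treatment effect is estimated by the EXCESS change in the treatment
# school (a difference-in-differences).
effect_estimate = change_treatment - change_control
print(f"change in treatment school: {change_treatment:.1f}")
print(f"change in control school:   {change_control:.1f}")
print(f"estimated treatment effect: {effect_estimate:.1f}")
```

If a shared event (history) had moved both schools equally, it would appear in both change scores and cancel out of the estimate; an event affecting only one school would not, which is the residual confounding the paragraph notes.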
Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi-experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.
Figure 7.3 image description: Two line graphs charting the number of absences per week over 14 weeks. The first 7 weeks are without treatment and the last 7 weeks are with treatment. In the first line graph, there are between 4 and 8 absences each week. After the treatment, the absences drop to 0 to 3 each week, which suggests the treatment worked. In the second line graph, there is no noticeable change in the number of absences per week after the treatment, which suggests the treatment did not work.
Nonequivalent groups design: A between-subjects design in which participants have not been randomly assigned to conditions.
Pretest-posttest design: A design in which the dependent variable is measured once before the treatment is implemented and once after it is implemented.
History: A category of alternative explanations for differences between scores, such as events that happened between the pretest and posttest, unrelated to the study.
Maturation: An alternative explanation that refers to how the participants might have changed between the pretest and posttest in ways that they were going to anyway because they are growing and learning.
Regression to the mean: The statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion.
Spontaneous remission: The tendency for many medical and psychological problems to improve over time without any form of treatment.
Interrupted time-series design: A set of measurements taken at intervals over a period of time that are interrupted by a treatment.
Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Quasi-experimental study designs, often described as nonrandomized, pre-post intervention studies, are common in the medical informatics literature. Yet little has been written about the benefits and limitations of the quasi-experimental approach as applied to informatics studies. This paper outlines a relative hierarchy and nomenclature of quasi-experimental study designs that is applicable to medical informatics intervention studies. In addition, the authors performed a systematic review of two medical informatics journals, the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI), to determine the number of quasi-experimental studies published and how the studies are classified on the above-mentioned relative hierarchy. They hope that future medical informatics studies will implement higher level quasi-experimental study designs that yield more convincing evidence for causal links between medical informatics interventions and outcomes.
Quasi-experimental studies encompass a broad range of nonrandomized intervention studies. These designs are frequently used when it is not logistically feasible or ethical to conduct a randomized controlled trial. Two examples follow. In the first, a hospital introduces a new order-entry system and wishes to study the impact of this intervention on the number of medication-related adverse events before and after the intervention. In the second, an informatics technology group introduces a pharmacy order-entry system aimed at decreasing pharmacy costs; the intervention is implemented, and pharmacy costs before and after the intervention are measured.
In medical informatics, the quasi-experimental, sometimes called the pre-post intervention, design often is used to evaluate the benefits of specific interventions. The increasing capacity of health care institutions to collect routine clinical data has led to the growing use of quasi-experimental study designs in the field of medical informatics as well as in other medical disciplines. However, little is written about these study designs in the medical literature or in traditional epidemiology textbooks. 1,2,3 In contrast, the social sciences literature is replete with examples of ways to implement and improve quasi-experimental studies. 4,5,6
In this paper, we review the different pretest-posttest quasi-experimental study designs, their nomenclature, and the relative hierarchy of these designs with respect to their ability to establish causal associations between an intervention and an outcome. The example of a pharmacy order-entry system aimed at decreasing pharmacy costs will be used throughout this article to illustrate the different quasi-experimental designs. We discuss limitations of quasi-experimental designs and offer methods to improve them. We also perform a systematic review of four years of publications from two informatics journals to determine the number of quasi-experimental studies, classify these studies into their application domains, determine whether the potential limitations of quasi-experimental studies were acknowledged by the authors, and place these studies into the above-mentioned relative hierarchy.
The authors reviewed articles and book chapters on the design of quasi-experimental studies. 4,5,6,7,8,9,10 Most of the reviewed articles referenced two textbooks that were then reviewed in depth. 4,6
Key advantages and disadvantages of quasi-experimental studies, as they pertain to the study of medical informatics, were identified. The potential methodological flaws of quasi-experimental medical informatics studies, which have the potential to introduce bias, were also identified. In addition, a summary table outlining a relative hierarchy and nomenclature of quasi-experimental study designs is described. In general, the higher the design is in the hierarchy, the greater the internal validity that the study traditionally possesses because the evidence of the potential causation between the intervention and the outcome is strengthened. 4
We then performed a systematic review of four years of publications from two informatics journals. First, we determined the number of quasi-experimental studies. We then classified these studies on the above-mentioned hierarchy. We also classified the quasi-experimental studies according to their application domain. The categories of application domains employed were based on categorization used by Yearbooks of Medical Informatics 1992–2005 and were similar to the categories of application domains employed by Annual Symposiums of the American Medical Informatics Association. 11 The categories were (1) health and clinical management; (2) patient records; (3) health information systems; (4) medical signal processing and biomedical imaging; (5) decision support, knowledge representation, and management; (6) education and consumer informatics; and (7) bioinformatics. Because the quasi-experimental study design has recognized limitations, we sought to determine whether authors acknowledged the potential limitations of this design. Examples of acknowledgment included mention of lack of randomization, the potential for regression to the mean, the presence of temporal confounders and the mention of another design that would have more internal validity.
All original scientific manuscripts published between January 2000 and December 2003 in the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI) were reviewed. One author (ADH) reviewed all the papers to identify the number of quasi-experimental studies. Other authors (ADH, JCM, JF) then independently reviewed all the studies identified as quasi-experimental. The three authors then convened as a group to resolve any disagreements in study classification, application domain, and acknowledgment of limitations.
What Is a Quasi-experiment?
Quasi-experiments are studies that aim to evaluate interventions but that do not use randomization. Similar to randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. Quasi-experimental studies can use both preintervention and postintervention measurements as well as nonrandomly selected control groups.
Using this basic definition, it is evident that many published studies in medical informatics utilize the quasi-experimental design. Although the randomized controlled trial is generally considered to have the highest level of credibility with regard to assessing causality, in medical informatics, researchers often choose not to randomize the intervention for one or more reasons: (1) ethical considerations, (2) difficulty of randomizing subjects, (3) difficulty of randomizing by location (e.g., by ward), and (4) small available sample size. Each of these reasons is discussed below.
Ethical considerations typically will not allow random withholding of an intervention with known efficacy. Thus, if the efficacy of an intervention has not been established, a randomized controlled trial is the design of choice to determine efficacy. But if the intervention under study incorporates an accepted, well-established therapeutic intervention, or if the intervention has either questionable efficacy or safety based on previously conducted studies, then the ethical issues of randomizing patients are sometimes raised. In the area of medical informatics, it is often believed prior to an implementation that an informatics intervention will likely be beneficial and thus medical informaticians and hospital administrators are often reluctant to randomize medical informatics interventions. In addition, there is often pressure to implement the intervention quickly because of its believed efficacy, thus not allowing researchers sufficient time to plan a randomized trial.
For medical informatics interventions, it is often difficult to randomize the intervention to individual patients or to individual informatics users. So while this randomization is technically possible, it is underused and thus compromises the eventual strength of concluding that an informatics intervention resulted in an outcome. For example, randomly allowing only half of medical residents to use pharmacy order-entry software at a tertiary care hospital is a scenario that hospital administrators and informatics users may not agree to for numerous reasons.
Similarly, informatics interventions often cannot be randomized to individual locations. Using the pharmacy order-entry system example, it may be difficult to randomize use of the system to only certain locations in a hospital or portions of certain locations. For example, if the pharmacy order-entry system involves an educational component, then people may apply the knowledge learned to nonintervention wards, thereby potentially masking the true effect of the intervention. When a design using randomized locations is employed successfully, the locations may be different in other respects (confounding variables), and this further complicates the analysis and interpretation.
In situations where it is known that only a small sample size will be available to test the efficacy of an intervention, randomization may not be a viable option. Randomization is beneficial because on average it tends to evenly distribute both known and unknown confounding variables between the intervention and control group. However, when the sample size is small, randomization may not adequately accomplish this balance. Thus, alternative design and analytical methods are often used in place of randomization when only small sample sizes are available.
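The small-sample point can be illustrated by simulation. The sketch below (a hypothetical binary confounder with 30% prevalence, such as ICU admission; all numbers are assumptions) estimates how unevenly simple randomization distributes a confounder between two arms at different sample sizes:

```python
import random
random.seed(0)

def mean_imbalance(n, trials=500):
    """Average absolute between-arm difference in the prevalence of a
    binary confounder under simple 50/50 randomization of n patients."""
    diffs = []
    for _ in range(trials):
        confounder = [random.random() < 0.3 for _ in range(n)]  # 30% prevalence
        arm = [random.random() < 0.5 for _ in range(n)]
        a = [c for c, g in zip(confounder, arm) if g]
        b = [c for c, g in zip(confounder, arm) if not g]
        if a and b:  # skip the rare trial where one arm is empty
            diffs.append(abs(sum(a) / len(a) - sum(b) / len(b)))
    return sum(diffs) / len(diffs)

print(f"mean confounder imbalance, n = 20:   {mean_imbalance(20):.3f}")
print(f"mean confounder imbalance, n = 2000: {mean_imbalance(2000):.3f}")
```

With 20 patients, the two arms typically differ by well over ten percentage points in confounder prevalence; with 2,000 patients, the imbalance shrinks by roughly an order of magnitude, which is the balancing property randomization relies on.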
The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet one important requirement of causality since the intervention precedes the measurement of the outcome. Another requirement is that the outcome can be demonstrated to vary statistically with the intervention. Unfortunately, statistical association does not imply causality, especially if the study is poorly designed. Thus, in many quasi-experiments, one is most often left with the question: “Are there alternative explanations for the apparent causal association?” If these alternative explanations are credible, then the evidence of causation is less convincing. These rival hypotheses, or alternative explanations, arise from principles of epidemiologic study design.
Shadish et al. 4 outline nine threats to internal validity, listed in the table below. Internal validity is defined as the degree to which observed changes in outcomes can be correctly inferred to be caused by an exposure or an intervention. In quasi-experimental studies of medical informatics, we believe that the methodological principles that most often result in alternative explanations for the apparent causal effect include (a) difficulty in measuring or controlling for important confounding variables, particularly unmeasured confounding variables, which can be viewed as a subset of the selection threat in the table; and (b) results being explained by the statistical principle of regression to the mean. Each of these latter two principles is discussed in turn.
Threats to Internal Validity
1. Ambiguous temporal precedence: Lack of clarity about whether the intervention occurred before the outcome
2. Selection: Systematic differences over conditions in respondent characteristics that could also cause the observed effect
3. History: Events occurring concurrently with the intervention could cause the observed effect
4. Maturation: Naturally occurring changes over time could be confused with a treatment effect
5. Regression: When units are selected for their extreme scores, they will often have less extreme subsequent scores, an occurrence that can be confused with an intervention effect
6. Attrition: Loss of respondents can produce artifactual effects if that loss is correlated with the intervention
7. Testing: Exposure to a test can affect scores on subsequent exposures to that test
8. Instrumentation: The nature of a measurement may change over time or conditions
9. Interactive effects: The impact of an intervention may depend on the level of another intervention
Adapted from Shadish et al. 4
An inability to sufficiently control for important confounding variables arises from the lack of randomization. A variable is a confounding variable if it is associated with the exposure of interest and is also associated with the outcome of interest; the confounding variable leads to a situation where a causal association between a given exposure and an outcome is observed as a result of the influence of the confounding variable. For example, in a study aiming to demonstrate that the introduction of a pharmacy order-entry system led to lower pharmacy costs, there are a number of important potential confounding variables (e.g., severity of illness of the patients, knowledge and experience of the software users, other changes in hospital policy) that may have differed in the preintervention and postintervention time periods (see the confounding example below). In a multivariable regression, the first confounding variable could be addressed with severity of illness measures, but the second confounding variable would be difficult if not nearly impossible to measure and control. In addition, potential confounding variables that are unmeasured or immeasurable cannot be controlled for in nonrandomized quasi-experimental study designs and can only be properly controlled by the randomization process in randomized controlled trials.
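The pharmacy-cost example can be sketched numerically. In the simulation below (all coefficients invented), the order-entry system truly saves money, but postintervention patients are sicker; the naive pre/post comparison therefore points the wrong way until severity is included in a multivariable regression:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Hypothetical scenario: the system truly SAVES 10 cost units per patient,
# but patients in the postintervention period are sicker on average.
post = (rng.random(n) < 0.5).astype(float)   # 0 = pre period, 1 = post period
severity = rng.normal(5 + post, 2)           # confounder: sicker after go-live
cost = 100 + 20 * severity - 10 * post + rng.normal(0, 5, n)

# Naive pre/post comparison is confounded by severity.
naive = cost[post == 1].mean() - cost[post == 0].mean()

# Adjusting for severity in a multivariable regression recovers the effect.
X = np.column_stack([np.ones(n), post, severity])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)

print(f"naive pre/post difference: {naive:+.1f}")     # roughly +10: apparent rise
print(f"severity-adjusted effect:  {beta[1]:+.1f}")   # roughly -10: true saving
```

This mirrors the text's point with its limits: adjustment works only for confounders that are measured, so an unmeasured confounder (such as user experience) would still bias the adjusted estimate.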
Example of confounding. To get the true effect of the intervention of interest, we need to control for the confounding variable.
Another important threat to establishing causality is regression to the mean. 12,13,14 This widespread statistical phenomenon can result in wrongly concluding that an effect is due to the intervention when in reality it is due to chance. The phenomenon was first described in 1886 by Francis Galton, who measured the adult height of children and their parents. He noted that when the average height of the parents was greater than the mean of the population, the children tended to be shorter than their parents, and conversely, when the average height of the parents was shorter than the population mean, the children tended to be taller than their parents.
In medical informatics, what often triggers the development and implementation of an intervention is a rise in the rate above the mean or norm. For example, increasing pharmacy costs and adverse events may prompt hospital informatics personnel to design and implement pharmacy order-entry systems. If this rise in costs or adverse events is really just an extreme observation that is still within the normal range of the hospital's pharmaceutical costs (i.e., the mean pharmaceutical cost for the hospital has not shifted), then the statistical principle of regression to the mean predicts that these elevated rates will tend to decline even without intervention. However, often informatics personnel and hospital administrators cannot wait passively for this decline to occur. Therefore, hospital personnel often implement one or more interventions, and if a decline in the rate occurs, they may mistakenly conclude that the decline is causally related to the intervention. In fact, an alternative explanation for the finding could be regression to the mean.
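This trigger-on-an-extreme pattern is easy to simulate. The sketch below (an invented monthly cost series whose true mean never shifts) "intervenes" whenever a month's cost is extreme and then does nothing at all:

```python
import random
random.seed(3)

# Hypothetical monthly pharmacy costs (in $ millions) fluctuating around a
# stable mean of 1.0 -- the underlying mean never changes.
months = [random.gauss(1.0, 0.1) for _ in range(600)]

# Policy: "intervene" in any month whose cost exceeds 1.2, then compare that
# month with the following month. No real intervention is applied.
pairs = [(months[i], months[i + 1])
         for i in range(len(months) - 1) if months[i] > 1.2]

before = sum(b for b, _ in pairs) / len(pairs)
after = sum(a for _, a in pairs) / len(pairs)
print(f"{len(pairs)} trigger months")
print(f"mean cost in trigger months:  {before:.3f}")
print(f"mean cost in the month after: {after:.3f}")   # lower, by regression alone
```

An administrator who implemented a system in each trigger month would see costs fall the next month and could mistakenly credit the intervention, exactly the alternative explanation the text describes.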
In the social sciences literature, quasi-experimental studies are divided into four study design groups 4,6 : (A) designs without control groups, (B) designs that use a control group but no pretest, (C) designs that use control groups and pretests, and (D) interrupted time-series designs.
There is a relative hierarchy within these categories of study designs, with category D studies being sounder than categories C, B, or A in terms of establishing causality. Thus, if feasible from a design and implementation point of view, investigators should aim to design studies that fall into the higher rated categories. Shadish et al. 4 discuss 17 possible designs, with seven designs falling into category A, three designs in category B, six designs in category C, and one major design in category D. In our review, we determined that most medical informatics quasi-experiments could be characterized by 11 of the 17 designs, with six study designs in category A, one in category B, three in category C, and one in category D, because the other study designs were not used or feasible in the medical informatics literature. Thus, for simplicity, we have summarized the 11 study designs most relevant to medical informatics research in the table below.
Relative Hierarchy of Quasi-experimental Designs
A. Quasi-experimental designs without control groups
   1. The one-group posttest-only design: X O1
   2. The one-group pretest-posttest design: O1 X O2
   3. The one-group pretest-posttest design using a double pretest: O1 O2 X O3
   4. The one-group pretest-posttest design using a nonequivalent dependent variable: (O1a, O1b) X (O2a, O2b)
   5. The removed-treatment design: O1 X O2 O3 (X removed) O4
   6. The repeated-treatment design: O1 X O2 (X removed) O3 X O4
B. Quasi-experimental designs that use a control group but no pretest
   1. Posttest-only design with nonequivalent groups:
      Intervention group: X O1
      Control group: O2
C. Quasi-experimental designs that use control groups and pretests
   1. Untreated control group design with dependent pretest and posttest samples:
      Intervention group: O1a X O2a
      Control group: O1b O2b
   2. Untreated control group design with dependent pretest and posttest samples using a double pretest:
      Intervention group: O1a O2a X O3a
      Control group: O1b O2b O3b
   3. Untreated control group design with dependent pretest and posttest samples using switching replications:
      Intervention group: O1a X O2a O3a
      Control group: O1b O2b X O3b
D. Interrupted time-series design
   1. Multiple pretest and posttest observations spaced at equal intervals of time: O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

O = Observational Measurement; X = Intervention Under Study. Time moves from left to right.
The nomenclature and relative hierarchy were used in the systematic review of four years of JAMIA and the IJMI. Similar to the relative hierarchy in the evidence-based literature that ranks randomized controlled trials, cohort studies, case-control studies, and case series, the hierarchy in ▶ is not absolute: in some cases it may be infeasible to perform a higher level study, and there may be instances where an A6 design establishes stronger causality than a B1 design. 15, 16, 17
The One-Group Posttest-Only Design
Here, X is the intervention and O is the outcome variable (this notation is continued throughout the article). In this study design, an intervention (X) is implemented and a posttest observation (O1) is taken. For example, X could be the introduction of a pharmacy order-entry intervention and O1 could be the pharmacy costs following the intervention. This design is the weakest of the quasi-experimental designs discussed in this article. Without any pretest observations or a control group, there are multiple threats to internal validity. Unfortunately, this study design is often used in medical informatics when new software is introduced, since it may be difficult to obtain pretest measurements due to time, technical, or cost constraints.
The One-Group Pretest-Posttest Design
This is a commonly used study design. A single pretest measurement is taken (O1), an intervention (X) is implemented, and a posttest measurement is taken (O2). In this instance, period O1 frequently serves as the “control” period. For example, O1 could be pharmacy costs prior to the intervention, X could be the introduction of a pharmacy order-entry system, and O2 could be the pharmacy costs following the intervention. Including a pretest provides some information about what the pharmacy costs would have been had the intervention not occurred.
The One-Group Pretest-Posttest Design Using a Double Pretest
The advantage of this study design over A2 is that adding a second pretest prior to the intervention helps provide evidence to refute regression to the mean and confounding as alternative explanations for any observed association between the intervention and the posttest outcome. For example, in a study where a pharmacy order-entry system led to lower pharmacy costs (O3 lower than both O1 and O2), if both preintervention measurements of pharmacy costs (O1 and O2) were elevated, it is less likely that the lower O3 is due to confounding or regression to the mean. Similarly, extending this design by increasing the number of postintervention measurements can also help provide evidence against confounding and regression to the mean as alternative explanations for observed associations.
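The double-pretest logic can be sketched as a simple check. This is a minimal illustration with invented cost figures, not an analysis from any reviewed study:

```python
# Sketch of the double-pretest (A3) check; all figures are invented.
def consistent_decline(o1: float, o2: float, o3: float) -> bool:
    """True if the posttest (O3) is below BOTH pretests (O1 and O2).

    Two elevated pretests make regression to the mean a less plausible
    explanation for a lower posttest than a single pretest would.
    """
    return o3 < o1 and o3 < o2

# Pharmacy costs (arbitrary units): two elevated pretests, lower posttest.
print(consistent_decline(120.0, 118.0, 100.0))  # True: decline looks real
# A posttest between the two pretests is consistent with random fluctuation.
print(consistent_decline(140.0, 105.0, 115.0))  # False
```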
The One-Group Pretest-Posttest Design Using a Nonequivalent Dependent Variable
This design involves the inclusion of a nonequivalent dependent variable (b) in addition to the primary dependent variable (a). Variables a and b should assess similar constructs; that is, the two measures should be affected by similar factors and confounding variables, except for the effect of the intervention. Variable a is expected to change because of the intervention X, whereas variable b is not. Taking our example, variable a could be pharmacy costs and variable b could be the length of stay of patients. If our informatics intervention is aimed at decreasing pharmacy costs, we would expect to observe a decrease in pharmacy costs but not in the average length of stay of patients. However, a number of important confounding variables, such as severity of illness and knowledge of software users, might affect both outcome measures. Thus, if the average length of stay did not change following the intervention but pharmacy costs did, the data are more convincing than if pharmacy costs alone were measured.
The Removed-Treatment Design
This design adds a third posttest measurement (O3) to the one-group pretest-posttest design and then removes the intervention before a final measure (O4) is made. The advantage of this design is that it allows one to test hypotheses about the outcome in the presence of the intervention and in the absence of the intervention. Thus, if one predicts a decrease in the outcome between O1 and O2 (after implementation of the intervention), then one would predict an increase in the outcome between O3 and O4 (after removal of the intervention). One caveat is that if the intervention is thought to have persistent effects, then O4 needs to be measured after these effects are likely to have disappeared. For example, a study would be more convincing if it demonstrated that pharmacy costs decreased after pharmacy order-entry system introduction (O2 and O3 less than O1) and that when the order-entry system was removed or disabled, the costs increased (O4 greater than O2 and O3 and closer to O1). In addition, there are often ethical issues in this design in terms of removing an intervention that may be providing benefit.
The Repeated-Treatment Design
The advantage of this design is that it demonstrates reproducibility of the association between the intervention and the outcome. For example, the association is more likely to be causal if one demonstrates that a pharmacy order-entry system results in decreased pharmacy costs when it is first introduced and again when it is reintroduced following an interruption of the intervention. As for design A5, the assumption must be made that the effect of the intervention is transient, which is most often applicable to medical informatics interventions. Because subjects may serve as their own controls in this design, greater statistical efficiency can be achieved with fewer subjects.
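The repeated-treatment pattern can be summarized as a reproducibility check on the O1 X O2 removeX O3 X O4 sequence. A minimal sketch with invented figures:

```python
# Sketch of the repeated-treatment (A6) reproducibility check; figures invented.
def effect_reproduced(o1: float, o2: float, o3: float, o4: float) -> bool:
    """True if the outcome falls after EACH introduction of the intervention:
    O2 < O1 (first introduction) and O4 < O3 (reintroduction after removal).
    """
    return o2 < o1 and o4 < o3

# Pharmacy costs: drop, partial rebound after removal, drop again on reintroduction.
print(effect_reproduced(120.0, 100.0, 115.0, 98.0))  # True
# No drop after reintroduction weakens the causal claim.
print(effect_reproduced(120.0, 100.0, 95.0, 99.0))   # False
```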
Posttest-Only Design with Nonequivalent Groups
An intervention X is implemented for one group and compared to a second group. The use of a comparison group helps address certain threats to validity, including by allowing statistical adjustment for confounding variables. Because the two groups in this study design may not be equivalent (assignment to the groups is not by randomization), confounding may exist. For example, suppose that a pharmacy order-entry intervention was instituted in the medical intensive care unit (MICU) and not the surgical intensive care unit (SICU). O1 would be pharmacy costs in the MICU after the intervention and O2 would be pharmacy costs in the SICU after the intervention. The absence of a pretest makes it difficult to know whether a change has occurred in the MICU, and the absence of pretest measurements comparing the SICU to the MICU makes it difficult to know whether differences between O1 and O2 are due to the intervention or to other differences between the two units (confounding variables).
Quasi-experimental Designs That Use Control Groups and Pretests
The reader should note that in all the studies in this category, the intervention is not randomized; the control groups chosen are comparison groups. Obtaining pretest measurements on both the intervention and control groups allows one to assess the initial comparability of the groups. The assumption is that the more similar the intervention and control groups are at pretest, the smaller the likelihood that important confounding variables differ between the two groups.
Untreated Control Group Design with Dependent Pretest and Posttest Samples
The use of both a pretest and a comparison group makes it easier to avoid certain threats to validity. However, because the two groups are nonequivalent (assignment to the groups is not by randomization), selection bias may exist. Selection bias exists when selection results in differences in unit characteristics between conditions that may be related to outcome differences. For example, suppose that a pharmacy order-entry intervention was instituted in the MICU and not the SICU. If preintervention pharmacy costs in the MICU (O1a) and SICU (O1b) are similar, it is less likely that important confounding variables differ between the two units. If MICU postintervention costs (O2a) are lower than MICU preintervention costs (O1a), but SICU costs (O1b and O2b) are similar, this suggests that the observed outcome may be causally related to the intervention.
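The comparison underlying this design amounts to a difference-in-differences estimate: the change in the intervention unit minus the change in the control unit. A minimal sketch with invented MICU/SICU figures:

```python
# Difference-in-differences sketch for the C1 design; all figures are invented.
def diff_in_diff(o1a: float, o2a: float, o1b: float, o2b: float) -> float:
    """Change in the intervention group (O2a - O1a) minus change in the
    control group (O2b - O1b). A large negative value suggests the
    intervention, not a trend shared by both units, drove the drop."""
    return (o2a - o1a) - (o2b - o1b)

# MICU (intervention) costs fall from 120k to 100k; SICU (control) barely moves.
print(diff_in_diff(120_000, 100_000, 118_000, 117_000))  # -19000
```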
Untreated Control Group Design with Dependent Pretest and Posttest Samples Using a Double Pretest
In this design, the pretests are administered at two different times. The main advantage is that it controls for potentially different time-varying confounding effects in the intervention and comparison groups. In our example, measuring at points O1 and O2 would allow assessment of preintervention time-dependent changes in pharmacy costs (e.g., due to differences in the experience of residents) in both the intervention and control groups, and of whether these changes were similar or different.
Untreated Control Group Design with Dependent Pretest and Posttest Samples Using Switching Replications
With this study design, the researcher administers the intervention at a later time to a group that initially served as a nonintervention control. The advantage of this design over design C2 is that it demonstrates reproducibility in two different settings. The design is not limited to two groups; in fact, the results have greater validity if the intervention effect is replicated in different groups at multiple times. In the example of a pharmacy order-entry system, one could intervene in the MICU and then, at a later time, in the SICU. This design is often very applicable to medical informatics, where new technology and new software are often introduced or made available gradually.
Interrupted Time-Series Designs
An interrupted time-series design is one in which a string of consecutive observations equally spaced in time is interrupted by the imposition of a treatment or intervention. The advantage of this design is that, with multiple measurements both pre- and postintervention, it is easier to address and control for confounding and regression to the mean. In addition, the analysis is statistically more robust: one can detect changes in the slope or intercept attributable to the intervention, in addition to changes in mean values. 18 A change in intercept could represent an immediate effect, while a change in slope could represent a gradual effect of the intervention on the outcome. In the example of a pharmacy order-entry system, O1 through O5 could represent monthly pharmacy costs before the intervention and O6 through O10 monthly pharmacy costs following the introduction of the pharmacy order-entry system. Interrupted time-series designs can be further strengthened by incorporating many of the design features mentioned in the other categories (such as removal of the treatment, inclusion of a nondependent outcome variable, or the addition of a control group).
The results of the systematic review are in ▶ . In the four-year period of JAMIA publications that the authors reviewed, 25 quasi-experimental studies among 22 articles were published. Of these 25, 15 studies were of category A, five studies were of category B, two studies were of category C, and no studies were of category D. Although there were no studies of category D (interrupted time-series analyses), three of the studies classified as category A had data collected that could have been analyzed as an interrupted time-series analysis. Nine of the 25 studies (36%) mentioned at least one of the potential limitations of the quasi-experimental study design. In the four-year period of IJMI publications reviewed by the authors, nine quasi-experimental studies among eight manuscripts were published. Of these nine, five studies were of category A, one of category B, one of category C, and two of category D. Two of the nine studies (22%) mentioned at least one of the potential limitations of the quasi-experimental study design.
Systematic Review of Four Years of Quasi-designs in JAMIA and the IJMI
Study | Journal | Informatics Topic Category | Quasi-experimental Design | Limitation of Quasi-design Mentioned in Article |
---|---|---|---|---|
Staggers and Kobus | JAMIA | 1 | Counterbalanced study design | Yes |
Schriger et al. | JAMIA | 1 | A5 | Yes |
Patel et al. | JAMIA | 2 | A5 (study 1, phase 1) | No |
Patel et al. | JAMIA | 2 | A2 (study 1, phase 2) | No |
Borowitz | JAMIA | 1 | A2 | No |
Patterson and Harasym | JAMIA | 6 | C1 | Yes |
Rocha et al. | JAMIA | 5 | A2 | Yes |
Lovis et al. | JAMIA | 1 | Counterbalanced study design | No |
Hersh et al. | JAMIA | 6 | B1 | No |
Makoul et al. | JAMIA | 2 | B1 | Yes |
Ruland | JAMIA | 3 | B1 | No |
DeLusignan et al. | JAMIA | 1 | A1 | No |
Mekhjian et al. | JAMIA | 1 | A2 (study design 1) | Yes |
Mekhjian et al. | JAMIA | 1 | B1 (study design 2) | Yes |
Ammenwerth et al. | JAMIA | 1 | A2 | No |
Oniki et al. | JAMIA | 5 | C1 | Yes |
Liederman and Morefield | JAMIA | 1 | A1 (study 1) | No |
Liederman and Morefield | JAMIA | 1 | A2 (study 2) | No |
Rotich et al. | JAMIA | 2 | A2 | No |
Payne et al. | JAMIA | 1 | A1 | No |
Hoch et al. | JAMIA | 3 | A2 | No |
Laerum et al. | JAMIA | 1 | B1 | Yes |
Devine et al. | JAMIA | 1 | Counterbalanced study design | |
Dunbar et al. | JAMIA | 6 | A1 | |
Lenert et al. | JAMIA | 6 | A2 | |
Koide et al. | IJMI | 5 | D4 | No |
Gonzalez-Hendrich et al. | IJMI | 2 | A1 | No |
Anantharaman and Swee Han | IJMI | 3 | B1 | No |
Chae et al. | IJMI | 6 | A2 | No |
Lin et al. | IJMI | 3 | A1 | No |
Mikulich et al. | IJMI | 1 | A2 | Yes |
Hwang et al. | IJMI | 1 | A2 | Yes |
Park et al. | IJMI | 1 | C2 | No |
Park et al. | IJMI | 1 | D4 | No |
JAMIA = Journal of the American Medical Informatics Association; IJMI = International Journal of Medical Informatics.
In addition, three studies from JAMIA were based on a counterbalanced design. A counterbalanced design, sometimes referred to as a Latin-square arrangement, is a higher order study design than the other designs in category A. In this design, all subjects receive all the interventions, but the order of intervention assignment is not random. 19 It can only be used when the intervention is compared against some existing standard, for example, when a new PDA-based order-entry system is compared to an existing computer terminal-based order-entry system. The counterbalanced design is a within-participants design in which the order of the interventions is varied (e.g., one group is given software A followed by software B, and another group is given software B followed by software A). It is typically used when the available sample size is small, preventing the use of randomization, and it also allows investigators to study the potential effect of the ordering of the informatics intervention.
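Order assignment in a counterbalanced design can be sketched with a simple Latin-square rotation, in which each intervention appears exactly once in every row (group) and every column (period). This is a generic illustration, not the assignment scheme used by any of the reviewed studies:

```python
# Latin-square rotation for counterbalanced intervention orders (illustrative).
def latin_square(interventions: list[str]) -> list[list[str]]:
    """Rotate the intervention list so each intervention appears exactly
    once per row (group) and once per column (period)."""
    n = len(interventions)
    return [[interventions[(row + col) % n] for col in range(n)]
            for row in range(n)]

# Two order-entry systems yield the familiar AB / BA arrangement.
for group, order in enumerate(latin_square(["PDA", "terminal"]), start=1):
    print(f"group {group}: {' then '.join(order)}")
```

With more than two interventions the same rotation still yields a valid Latin square, though fully balancing carry-over effects would call for a balanced Latin square.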
Although quasi-experimental study designs are ubiquitous in the medical informatics literature, as evidenced by 34 studies in the past four years of the two informatics journals, little has been written about the benefits and limitations of the quasi-experimental approach. As we have outlined in this paper, a relative hierarchy and nomenclature of quasi-experimental study designs exist, with some designs being more likely than others to permit causal interpretations of observed associations. Strengths and limitations of a particular study design should be discussed when presenting data collected in the setting of a quasi-experimental study. Future medical informatics investigators should choose the strongest design that is feasible given the particular circumstances.
Dr. Harris was supported by NIH grants K23 AI01752-01A1 and R01 AI60859-01A1. Dr. Perencevich was supported by a VA Health Services Research and Development Service (HSR&D) Research Career Development Award (RCD-02026-1). Dr. Finkelstein was supported by NIH grant RO1 HL71690.