
Organizing Your Social Sciences Research Paper: Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible . In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following :

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper . You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods . The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design. Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design. New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle: initially an exploratory stance is adopted, in which an understanding of the problem is developed and plans are made for some form of interventionary strategy. The intervention is then carried out [the "action" in action research], during which pertinent observations are collected in various forms. New interventional strategies are carried out, and this cyclic process repeats until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research. Thousand Oaks, CA: Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide. New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction. Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research. Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research. London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice. Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

What do these studies tell you?

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
What these studies don't tell you?

  • A single or small number of cases offers little basis for establishing reliability or for generalizing the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges. Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J., Gabrielle Durepos, and Elden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research. Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods. Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
What do these studies tell you?

  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic selection of subjects and the equivalency of the groups being compared.
What these studies don't tell you?

  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • To establish causation, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable changes first and, therefore, to establish which variable is the actual cause and which is the actual effect.
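
The nonspuriousness condition is easier to see with a toy simulation. The sketch below is illustrative only (the variable names and numbers are invented for this example): two variables are strongly correlated solely because both are driven by a third, confounding variable, and the association disappears once the confounder is subtracted out.

```python
import random

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
z = [rng.gauss(0, 1) for _ in range(1000)]    # the confounding "third variable"
x = [zi + rng.gauss(0, 0.5) for zi in z]      # driven by z, not by y
y = [zi + rng.gauss(0, 0.5) for zi in z]      # driven by z, not by x

spurious = corr(x, y)                          # strong empirical association, but not causal
adjusted = corr([xi - zi for xi, zi in zip(x, z)],
                [yi - zi for yi, zi in zip(y, z)])  # near zero once z is removed
```

Here the association between x and y satisfies the empirical association condition but fails nonspuriousness, which is why all three conditions must hold before causation is inferred.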

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing. Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice. Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kuhn. “Causal-Comparative Design.” In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction. Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity. Using a quantitative framework, a cohort study notes statistical occurrence within a specialized subgroup, united by the same or similar characteristics relevant to the research problem being investigated, rather than within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined simply by the state of being a part of the study in question (and being monitored for the outcome). Dates of entry into and exit from the study are individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
What do these studies tell you?

  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
What these studies don't tell you?

  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
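
The rate-based data that open cohort studies produce can be sketched as a small calculation. In this illustrative example (the participants and follow-up spans are invented), each participant has an individually defined entry and exit date, so events are divided by accumulated person-time rather than by a fixed headcount:

```python
def incidence_rate(cohort):
    """Events per unit of person-time in an open cohort.

    cohort: list of (entry_time, exit_time, had_event) tuples. Entry and
    exit differ per participant, so the denominator is person-time, not
    a constant population size.
    """
    person_time = sum(t_out - t_in for t_in, t_out, _ in cohort)
    events = sum(1 for _, _, had_event in cohort if had_event)
    return events / person_time

# Three hypothetical participants followed for 5, 4, and 2 years:
cohort = [(0, 5, True), (2, 6, False), (1, 3, True)]
rate = incidence_rate(cohort)   # 2 events per 11 person-years
```

In a closed cohort, by contrast, the denominator could simply be the fixed number of participants, since no one enters after the defining start point.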

Healy, P. and D. Devane. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D., editor. Cohort Analysis. 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. “Study Design IV: Cohort Studies.” Evidence-Based Dentistry 7 (2003): 51-52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods. Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and groups selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

What do these studies tell you?

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
What these studies don't tell you?

  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
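
The 'snapshot' logic of a cross-sectional design can be sketched as a prevalence estimate. This is an illustrative example only; the records and the field name are invented:

```python
# Hypothetical cross-sectional survey: one record per subject, all
# collected at a single point in time.
sample = [
    {"id": 1, "has_outcome": True},
    {"id": 2, "has_outcome": False},
    {"id": 3, "has_outcome": True},
    {"id": 4, "has_outcome": False},
    {"id": 5, "has_outcome": False},
]

def prevalence(records):
    """Proportion of the sample exhibiting the outcome at the survey moment."""
    return sum(r["has_outcome"] for r in records) / len(records)

point_prevalence = prevalence(sample)   # 2 of 5 subjects have the outcome
```

Because every record comes from the same moment, the result says nothing about incidence (new cases over time), sequence of events, or what caused the outcome, which is exactly the limitation noted above.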

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences. Herman J. Adèr and Gideon J. Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-143; Bourque, Linda B. “Cross-Sectional Design.” In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods. Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Barratt, Helen and Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009; Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

What do these studies tell you?

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a pre-cursor to more quantitative research designs with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
What these studies don't tell you?

  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics. Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. PowerPoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis. (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; Swatzell, K. and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007): 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities. London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

What do these studies tell you?

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
What these studies don't tell you?

  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.
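
The randomization requirement described above can be sketched as a simple assignment procedure. This is a minimal illustration, not a complete experimental protocol; the seed value and group names are arbitrary choices for the example:

```python
import random

def randomize(subjects, seed=42):
    """Randomly assign subjects to treatment and control groups.

    Random assignment, rather than self-selection, is what lets an
    experimental design attribute group differences to the manipulated
    independent variable instead of pre-existing differences.
    """
    pool = list(subjects)
    random.Random(seed).shuffle(pool)   # seeded for reproducibility
    half = len(pool) // 2
    return pool[:half], pool[half:]     # (treatment, control)

treatment, control = randomize(range(20))
```

The treatment group then receives the manipulation (the independent variable) while the control group does not, and both are measured on the same dependent variable, as in the classic design described above.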

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods. Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences. 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. SlideShare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

Exploratory research is intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • A well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
What do these studies tell you?

  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
What these studies don't tell you?

  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; exploratory studies provide insight, but not definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research. Albert J. Mills, Gabrielle Durepos, and Elden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment, as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that document what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

What do these studies tell you?

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you?

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including, for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods. Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods. Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard and Melvin E. Page. A Short Guide to Writing about History. 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn 1980): 52-58; Gall, Meredith. Educational Research: An Introduction. Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to findings that are difficult to interpret and/or meaningless.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
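
The core pooling step that a meta-analysis performs can be sketched with a fixed-effect, inverse-variance calculation. This is a generic illustration, not part of the original guide, and the study effect sizes and variances below are hypothetical:

```python
import math

def fixed_effect_pool(effects, variances):
    """Pool study effect sizes using inverse-variance (fixed-effect) weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Three hypothetical studies: effect estimates and their sampling variances.
# More precise studies (smaller variance) receive proportionally more weight.
effects = [0.30, 0.50, 0.40]
variances = [0.04, 0.09, 0.02]
pooled, se = fixed_effect_pool(effects, variances)
```

Note that a fixed-effect model assumes the studies estimate one common effect; when heterogeneity is substantial, as cautioned above, a random-effects model, or no pooling at all, may be more defensible.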

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis. 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson, and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior, Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis. Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation. Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences. Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research. Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice. New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study offers useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors as they occur over and over again is time consuming, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research. The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies. New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods. Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has virtually limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses a sample size large enough to represent a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Ivankova, Nataliya V. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses, recognize and avoid hidden problems in prior studies, and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provide reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods. David A. Buchanan and Alan Bryman, editors. (Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians. Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, and James Thomas, editors. Introduction to Systematic Reviews. 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research." Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews. New York: Continuum, 2003.

  • Last Updated: Jul 30, 2024 10:20 AM
  • URL: https://libguides.usc.edu/writingguide

Chapter 5 Research Design

Research design is a comprehensive plan for data collection in an empirical research project. It is a “blueprint” for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: (1) the data collection process, (2) the instrument development process, and (3) the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process (which is often loosely called “research design”) is introduced in this chapter and described in further detail in Chapters 9-12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods, such as laboratory experiments and survey research, are aimed at theory (or hypothesis) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. Quantitative and qualitative methods refer to the type of data being collected (quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth) and analyzed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes, joint use of qualitative and quantitative data may help generate unique insights into a complex social phenomenon that are not available from either type of data alone, and hence, mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key Attributes of a Research Design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity, also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesized independent variable, and not by variables extraneous to the research context. Causality requires three conditions: (1) covariation of cause and effect (i.e., if the cause happens, then the effect also happens; and if the cause does not happen, the effect does not happen), (2) temporal precedence (the cause must precede the effect in time), and (3) no plausible alternative explanation (or spurious correlation). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect influenced the expected cause rather than the reverse. Although higher in internal validity compared to other methods, laboratory experiments are by no means immune to threats to internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity.

External validity or generalizability refers to whether the observed associations can be generalized from the sample to the population (population validity), or to other people, organizations, contexts, or times (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalized to the population of financial firms (population validity) or to other firms within the United States (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalizability than laboratory experiments, where artificially contrived treatments and strong control over extraneous variables render the findings less generalizable to real-life settings in which treatments and extraneous variables cannot be controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1.

Figure 5.1. Internal and external validity. [Figure not reproduced: it plots common research designs along two axes, internal validity (horizontal) and external validity (vertical). The designs shown include single and multiple case studies, ethnography, field experiments, cross-sectional and longitudinal field surveys, simulations, single and multiple lab experiments, and mathematical proofs. Designs combining reasonable degrees of both validities fall within a "cone of validity."]

Some researchers claim that there is a tradeoff between internal and external validity: higher external validity can come only at the cost of internal validity and vice-versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validities. Personally, I prefer research designs that have reasonable degrees of both internal and external validities, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of designs is ultimately a matter of their personal preference and competence, and the level of internal and external validity they desire.

Construct validity examines how well a given measurement scale is measuring the theoretical construct that it is expected to measure. Many constructs used in social science research such as empathy, resistance to change, and organizational learning are difficult to define, much less measure. For instance, construct validity must assure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypothesis testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable to such analyses. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.
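As a small illustration of checking a test's assumptions before drawing conclusions, the sketch below (hypothetical scores, standard library only) applies a rough equal-variance check to decide between the pooled (Student's) and Welch forms of the two-sample t statistic:

```python
import statistics

def t_statistic(a, b):
    """Two-sample t statistic, choosing the pooled vs. Welch form
    based on a rough equal-variance check (variance ratio < 4)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    if max(va, vb) / min(va, vb) < 4:
        # Variances roughly equal: pooled (Student's) standard error.
        sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
        se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    else:
        # Clearly unequal variances: Welch's standard error.
        se = (va / na + vb / nb) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

treatment = [5.1, 5.8, 6.2, 5.9, 6.5, 6.1, 5.7, 6.3]
control   = [4.8, 5.0, 5.2, 4.9, 5.3, 5.1, 4.7, 5.4]
print(f"t = {t_statistic(treatment, control):.2f}")
```

Applying the pooled form when variances differ wildly is exactly the kind of assumption violation that undermines statistical conclusion validity, even when the arithmetic itself is performed correctly.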


Blueprints for Academic Research Projects

Today's post is written by Dr. Ben Ellway, the founder of www.academic-toolkit.com. Ben completed his Ph.D. at The University of Cambridge and created the Research Design Canvas, a multipurpose tool for learning about academic research and designing a research project.

Based on requests from students for examples of completed research canvases, Ben created the Research Model Builder Canvas.

This canvas modifies the original questions in the nine building blocks to enable students to search for key information in a journal article and then reassemble it on the canvas to form a research model — a single-page visual summary of the journal article which captures how the research was designed and conducted. 

Ben’s second book, Building Research Models, explains how to use the Research Model Builder Canvas to become a more confident and competent reader of academic journal articles, while simultaneously building research models to use as blueprints to guide the design of your own project.

Ben has created a template for Stormboard based on this tool and this is his brief guide on how to begin using it.

Starting with a blank page can be daunting

The Research Design Canvas brings together the key building blocks of academic research on a single page and provides targeted questions to help you design your own project. However, starting with a blank page can be a daunting prospect! 

Academic research is complex as it involves multiple components, so designing and conducting your own project can be overwhelming, especially if you lack confidence in making decisions or are confused about how the components of a project fit together. It is much easier to start a complex task and long process such as designing a research project when you have an existing research model or ‘blueprint’ to work from. 

Starting with a ‘blueprint’ — tailored to your topic area — is much easier

Using the Research Model Builder Canvas, you can transform a journal article in your topic into a research model or blueprint — a single-page visualization of how a project was designed and conducted. 

The research model — and equally importantly the process of building it — will improve your understanding of academic research, and will also provide you with a personalized learning resource for your Thesis. You can use the research model as a blueprint to refer to specific decisions and their justification, and how components of research fit together, to help you begin to build your own project. 

Obviously, each project is unique so you’ll be using the blueprint as a guide rather than as a ‘cookie cutter’ solution. Seeing the components of a completed research project together on a single page (which  you  produced from a ten or twenty-page journal article) — is a very powerful learning resource to have on your academic research journey.

Build research models on Stormboard 

If you prefer to work digitally rather than with paper and pen, you can use the Research Model Builder Canvas Template in Stormboard. 

By using the Stormboard template, you’ll be able to identify key content and points from the journal article and then quickly summarize these on digital sticky notes. You can easily edit the sticky notes to rearrange, delete, or expand upon the ideas and points. You can then refer back to the permanent visual research model you created, share it with fellow students, or discuss it with your supervisors.

What are the building blocks of the research model?

The template has nine building blocks. 

The original questions in the building blocks of the research design canvas are modified in the research model builder canvas. They are designed to help you locate the most important points, decisions, and details in a journal article.  


A brief introduction to the purpose of each building block is provided below to help you familiarize yourself with the research model you will build.

Phenomenon / Problem

What does the research focus on? What were the main ‘things’ investigated and discussed in the journal article? Did the research involve a real-world problem?

What area (or areas) of past literature are identified and introduced? Which sources are especially important?

Observations & Arguments 

What are the most crucial points made by the authors in their analysis of past research? What evidence, issues, and themes are the focus of the literature review? Is a gap in past research identified? 

Research Questions / Hypotheses 

What are the research questions and/or hypotheses? How are they justified? If none are stated, what line of investigation is pursued?  

Theory & Concepts 

Does the research involve a theoretical or conceptual component? If so, what are the key concepts / theory? What role do they play in the research?  

Methodology / Design / Methods  

What methods and data were used? How are the decisions justified? 

Sample / Context 

What sampling method is used? Is the research context important?

Contributions

What contribution(s) do the authors claim that their research makes? Is the value-add more academically or practically-oriented? Are real-world stakeholders and the implications for them mentioned? 

Philosophical Assumptions / Research Paradigm 

These are not usually mentioned or discussed in journal articles. Indeed, this building block can be confusing if you are not familiar with research philosophy or are confused by its seemingly abstract focus. If you understand these ideas, can you identify any implicit assumptions or a research paradigm in the article?

Compare two research models to appreciate the diversity of research

The easiest way to increase your appreciation of the different types and ways of conducting academic research is to build  multiple  research models. 

Start by building two models. Compare and contrast them. Which decisions and aspects are similar and which are different? What can you learn from each research model and how can this help you when designing your own research and Thesis? 

Building research models will help you to appreciate the diversity in the different types of research conducted in your topic area.

Transforming a ten or twenty-page journal article into a single-page visual summary is a powerful way to learn about how academic research is designed and conducted — and also what a completed research project looks like. 

The Stormboard template makes the process of building research models easy, and the ability to save, edit, and share them ensures that you’ll be able to refer back to these blueprints at various stages throughout your research journey and Thesis writing process. 

When you get confused, become stuck, or feel overwhelmed by the complexity of academic research, you can fall back on the research models you created to guide you and get you back on track. Good luck!

Are you interested in trying the Research Model Builder Canvas? Sign up for a free trial now!


Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Instead, it is useful for generating insight into a research problem by describing its characteristics, and it is often used as a precursor to other research design types.
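With no variables manipulated, the analysis in a descriptive design often reduces to summarising what was observed. A minimal sketch, assuming hypothetical Likert-scale responses (1 = strongly disagree, 5 = strongly agree) to one item from the smartphone-addiction survey:

```python
import statistics

# Hypothetical responses to "I feel anxious when I cannot check my phone."
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5, 4, 3, 5, 4, 2, 5, 4, 4]

mean_score = statistics.mean(responses)
median_score = statistics.median(responses)
share_agree = sum(r >= 4 for r in responses) / len(responses)

print(f"mean = {mean_score:.2f}, median = {median_score}")
print(f"{share_agree:.0%} agree or strongly agree")
```

Output like this describes how widespread the issue appears to be in the sample; it says nothing about the causes of the behaviour or its relationship to other variables.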

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions , and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which will look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine whether there is a causal relationship between two or more variables . With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling extraneous variables, and measure the resulting change in the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
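Random assignment itself is straightforward to implement. The sketch below (hypothetical simulated outcomes, in the spirit of the fertiliser example) shuffles units into two equal groups and then estimates the treatment effect as a difference in group means:

```python
import random
import statistics

random.seed(7)

# Hypothetical pool of 24 experimental units (e.g., plant plots).
units = list(range(24))

# Random assignment: shuffle, then split, so every unit has an
# equal chance of landing in either condition.
random.shuffle(units)
treatment_group = units[:12]
control_group = units[12:]

def grow(effect):
    """Simulated growth outcome: noisy baseline plus treatment effect."""
    return random.gauss(10, 2) + effect

treated = [grow(3.0) for _ in treatment_group]
controls = [grow(0.0) for _ in control_group]

diff = statistics.mean(treated) - statistics.mean(controls)
print(f"estimated treatment effect = {diff:.1f}")
```

Because assignment is random, any systematic difference between the groups' outcomes can, on average, be attributed to the treatment rather than to pre-existing differences between the groups.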

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design , multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design , a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services.


Research Design

  • First Online: 01 January 2013

  • Pradip Kumar Sahu

A research design is the blueprint of the different steps to be undertaken during a research process, starting with the formulation of the hypothesis and ending with the drawing of inferences. The research design clearly explains the different steps to be taken during a research program to reach its objective. It is nothing but advance planning of the methods to be adopted at the various steps of the research, keeping in view the objective of the research, the availability of resources, time, etc. As such, various questions need to be clarified before the research design is meticulously formulated.


Author information

Authors and Affiliations

Department of Agricultural Statistics, Bidhan Chandra Krishi Viswavidyalaya, Mohanpur, West Bengal, India

Pradip Kumar Sahu


Copyright information

© 2013 Springer India

About this chapter

Sahu, P.K. (2013). Research Design. In: Research Methodology: A Guide for Researchers In Agricultural Science, Social Science and Other Related Fields. Springer, India. https://doi.org/10.1007/978-81-322-1020-7_4


Published : 21 January 2013

Publisher Name : Springer, India

Print ISBN : 978-81-322-1019-1

Online ISBN : 978-81-322-1020-7


A practical guide to test blueprinting

Taylor & Francis

Joseph P Grande, Mayo Clinic - Rochester

Abstract and Figures

Miller's pyramid with sample behavioral objectives and suitable methods of assessment.

Discover the world's research

  • 25+ million members
  • 160+ million publication pages
  • 2.3+ billion citations
  • Hussein Abdellatif

Amira Ebrahim Alsemeh

  • Tarek Khamis
  • Mohamed-Rachid Boulassel

Mila Nu  Nu Htay

  • Ganesh Kamath

Soumendra Sahoo

  • BMC Med Educ
  • Reza Khorammakan
  • Seyed Hadi Roudbari
  • Ahmad Ghadami
  • EUR J DENT EDUC

Kamran Ali

  • Tim Wilkinson

Lionel Green-Thompson

  • David A Cook

Steven J Durning

  • Christopher R. Stephenson
  • Matt Lineberry

Neville Gabriel Chiavaroli

  • Adam Wineland
  • Marcela Pellegrini Peçanha
  • Ana Lúcia Cabulon
  • Débora Aparecida Rodrigueiro
  • Marta Wey Vieira
  • SunitaY Patil

Manasi Gosavi

  • HemaB Bannur
  • Ashwini Ratnakar
  • Malcolm Cox

David M Irby

  • Mohsen Tavakol

Reg Dennick

  • Xiaohui Zhao

Keith Dowd

  • Steven V. Angus
  • T Robert Vu
  • Andrew J Halvorsen

Furman S McDonald

  • Recruit researchers
  • Join for free
  • Login Email Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google Welcome back! Please log in. Email · Hint Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google No account? Sign up


September 8, 2023


A guide to 'big team science' creates a blueprint for research collaboration on a large scale

by Concordia University


Scientific research depends on collaboration between researchers and institutions. But over the past decade, there has been a surge of large-scale research projects involving extraordinarily large numbers of researchers, from dozens to hundreds, all working on a common project.

Examples of this trend include ManyBabies , centered on infant cognition and development, and ManyManys , focused on comparative cognition and behavior across animal taxa. These kinds of projects, known as big team science (BTS), benefit from pooled human and material resources and draw on diverse data sets to gird studies with a robustness not found in smaller ones.

However, as exciting as these projects can be, they can also be monsters to manage. Communication, team building, governance, authorship and credit are just some of the issues BTS team leaders must negotiate, along with the logistical difficulties involved in working across languages, cultures and time zones.

Fortunately, a group of BTS veterans, including two Concordia researchers, has published a how-to guide to help their fellow academics build their own projects. The article, published in the Royal Society Open Science journal, is based on expertise gained over multiple BTS projects, and it provides a road map for best practices and overcoming challenges.

"BTS is a new way of conducting research where many researchers come together to answer a common question that is crucial to their field," says Nicolás Alessandroni, a postdoctoral fellow at Concordia's Infant Research Lab. Alessandroni works under the direction of Krista Byers-Heinlein, a professor of psychology and the Concordia University Research Chair in Bilingualism and Open Science. The two co-authored the article with researchers at Stanford University, the University of British Columbia and the University of Manitoba.

"This is important because research has traditionally been conducted in a siloed manner, where individual teams from one institution work with small, limited samples."

Building up step by step

The authors acknowledge that no two projects are alike, and there is no one model to creating a BTS study. But success can be achieved by applying a common approach.

They suggest starting by identifying whether a research community believes the project is necessary. Consensus buy-in gets things rolling.

Once a project is a go, the authors outline how team leaders can work together to share data and collaborate on writing. The guide is designed to provide a path forward for researchers and to smooth over differences that are almost inevitable when the number of collaborators reaches three or even four digits. It touches on topics ranging from governance and codes of ethics to designated writing teams and authorship protocols.

"The beauty of big team science is that anyone can join, from undergraduates to faculty," says Alessandroni. "There are all these different experiences that coalesce around BTS, and this provides an opportunity to integrate many perspectives into a project . You can have some people who are very seasoned researchers and others who are young students willing to collaborate and embrace this new way of doing science."

Open to all

He admits that BTS projects are not easy to manage, but they do have clear strengths and benefits.

"Its very definition relates to important values in science: transparency, collaboration, accessibility, equity, diversity and inclusion—it touches on many important topics that have been disregarded in the practice of science traditionally. In many ways, it overlaps with the concept of open science , where data is shared openly and publications are available in open-access journals and repositories, making knowledge available without charging readers."

Alessandroni notes that universities will have to adapt to accommodate changes in the way research is supported.

"Institutions worldwide can help foster BTS collaborations by devising new workflows, policies and incentive structures," he says. "Naturally, this would involve important changes to the academic ecosystem, so there is much to discuss."

Journal information: Royal Society Open Science

Provided by Concordia University


TutorsIndia


The Research Proposal – The Blueprint Of Your Dissertation

As a student pursuing a Masters or PhD, you have to undergo rigorous research. Well, what is research? It is a scientific and organized evaluation undertaken to gather new knowledge. Through streamlined research you arrive at a relevant solution to your problem, and the domain of knowledge expands greatly. But there is one important step to perform before conducting research: the research proposal. Writing a research proposal is a pivotal step in the dissertation process.


The Significance Of A Research Proposal

All proposals show that the student has identified a research topic and has reviewed enough of the literature to know what other researchers have already found about it. The research question, which is the heart of the proposal, is then formed, and a pertinent methodology is devised to answer it.

  • The dissertation proposal is the key that opens the door to the research journey.
  • Whatever is important for the dissertation, the research proposal should encompass it.
  • The importance of getting engaged in the research work is highlighted in the research proposal.
  • Planning is done diligently in the research proposal.
  • The proposal should provide justification for approval by showing that the research contributes to or extends existing knowledge.
  • A well-formed research proposal keeps the researcher from second-guessing alternatives once the research commences.
  • The proposal gives information on how the data will be gathered, handled, and interpreted.

A Typical Research Proposal Has The Following Format

  • Introduction
  • Literature Review
  • Research Methodology

Common Mistakes In Dissertation Proposal

  • Not giving the appropriate context to develop the research question.
  • Not citing important studies.
  • Not staying on track of the research question.
  • Not forming a comprehensible and compelling argument for the proposed research.
  • Beating about the bush and emphasizing minor concerns.
  • Not maintaining a strong sense of direction.

What Is The Student’s Contribution To The Research Proposal?

  • The student has to prove that he/she is contributing something new to the domain.
  • The student should choose a viable topic with regard to data, financing, materials and supervisors.
  • The student proposes that he/she will complete the research within the expected time.
  • The student takes into consideration all ethical issues.
  • The student gets approval from all pertinent bodies.

Will the student be able to carry out independent research? This is the question the research proposal answers. The research proposal brings forth a problem and examines pertinent research endeavors. What steps are required to solve the problem? This, too, is answered by the research proposal, which goes a step further in describing how the data will be collected and evaluated.

Overall, the research proposal answers the questions what, why, where, whom and when. The dissertation proposal helps you concentrate on your research aims, get a clear idea of the study's significance and the need for it, elucidate the methods, and forecast issues and results. Eventually, it plans alternative solutions and assistance, if needed.




NIH Blueprint Overview

The Blueprint Mission

The NIH Blueprint for Neuroscience Research aims to accelerate transformative discoveries in brain function in health, aging, and disease. Blueprint is a collaborative framework that includes the NIH Office of the Director together with NIH Institutes and Centers that support research on the nervous system. By pooling resources and expertise, Blueprint identifies cross-cutting areas of research and confronts challenges too large for any single Institute or Center. Since its inception in 2004, Blueprint has supported the development of new research tools, training opportunities, and resources to assist neuroscientists. 

In addition to supporting cross-cutting neuroscience activities like research training , workforce diversity , and  therapeutic development , Blueprint also funds research initiatives. Topics have ranged from transforming our understanding of dynamic neuroimmune interactions to enhancing our fundamental knowledge of interoception, supporting the development of innovative tools and technologies to monitor and manipulate biomolecular condensates, and more. To learn about both current and past areas of research, visit the Blueprint Research Initiatives page . 

Blueprint Grand Challenges

In 2009, the Blueprint Grand Challenges were launched to catalyze research with the potential to transform our basic understanding of the brain and our approaches to treating brain disorders.

The Human Connectome Project (HCP) is an ambitious effort to map all the connections within the human brain. Beginning in 2010, Blueprint awarded $40 million to two major research consortia which took complementary approaches to deciphering the brain’s complex wiring diagram. In five years, this highly coordinated effort mapped the connections of 1,200 healthy adults paired with behavioral assessments and GWAS results, resulting in the publication of over 100 papers. The MRI scanner system developed by  HCP scientists was 4-8 times more powerful than conventional systems, providing ten-fold faster imaging times and better spatial resolution than ever before. Building on the success of the Connectome Project, in 2014 Blueprint authorized funds to expand the age range of normal subjects to include both young people and older adults. The Connectome Coordination Facility, funded by Blueprint in 2015, maintains a central data repository for HCP data and offers advice to the research community regarding data collection strategies and harmonization. 

The Grand Challenge on Chronic Neuropathic Pain supported research to understand the changes in the nervous system that cause acute, temporary pain to become chronic. The initiative has supported multi-investigator projects partnering researchers in the pain field with researchers in the neuroplasticity field. Starting in 2010, Blueprint funded 10 R01 grants investigating various models, mechanisms, and plasticity in the transition to chronic pain, resulting in more than 80 publications related to pain and neural plasticity.

The Blueprint Neurotherapeutics Network (BPN) helps small labs develop new drugs for nervous system disorders. BPN provides research funding, plus access to millions of dollars’ worth of services and expertise to assist in every step of the drug development process, from laboratory studies to preparation for clinical trials. Since 2010, project teams across the U.S. have received funding to pursue drugs for conditions ranging from vision loss to neurodegenerative disease to depression. A hallmark of the program is that the research institution retains the intellectual property rights. Now in its eighth year, BPN has awarded 22 grants resulting in 1 Phase 1 clinical trial, 5 licensed programs, and several successful partnerships with industry.

The BRAIN Initiative ®

April 2013 marked the beginning of the  Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative , a coordinated effort among public and private institutions and agencies aimed at revolutionizing our understanding of the human brain. NIH has a large role in this effort. Blueprint was one of the inaugural sponsors of the BRAIN Initiative by investing $10 million in 2014 on initial high priority research areas and continues to partner with NIH BRAIN and invest in BRAIN research.  

Historic Blueprint Resources 

Since 2004, Blueprint has supported the development of new resources , tools, and opportunities for neuroscientists. From fiscal years 2007 to 2009, Blueprint focused on three major themes of neuroscience - neurodegeneration, neurodevelopment, and neuroplasticity. These efforts enabled unique funding opportunities and training programs, and helped establish new resources that continue to be available to researchers and the public. Some of these resources include:

  • The  Gene Expression Nervous System Atlas (GENSAT)  and the  Cre Driver Network  are projects that have developed, characterized and continue to distribute transgenic mouse lines (GFP reporters and Cre drivers) to serve as tools for research on the central nervous system. Over 100 lines are available from the Cre driver network and over 1400 (GFP and Cre) lines are available from GENSAT.
  • The  Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC)  triad of services include a resources registry, data commons, and cloud-based virtual machine with popular neuroimaging software pre-installed. These services help researchers save time, meet data sharing requirements, and leverage cloud-based computing on increasingly larger data sets. 
  • The  Neuroscience Information Framework (NIF)  is an online portal to neuroscience information that includes a customized search engine, a curated registry of resources and direct access to more than 100 databases.
  • The  NIH Toolbox for Assessment of Neurological and Behavioral Function  is a set of integrated tools for measuring neurologic and behavioral function, and for generating data that can be used and compared across diverse clinical studies.
  • The  NIH Blueprint Enhancing Neuroscience Diversity through Undergraduate Research Experiences (ENDURE)  supports undergraduates from underrepresented groups in a two-year neuroscience research program and encourages matriculation into PhD programs.

Download the NIH Blueprint Overview Flyer (pdf, 1215 KB) .

Blueprint of a Proposal

Trying to make sense of proposal preparation, review and submission at the UW?

This course introduces participants to the UW processes, concepts and terminology that will help get you started in the right direction.

Through discussion, hands-on exercises, annotated online resources and in class handouts, we will cover:

  • Proposal Process Policies and Procedures
  • Roles & Responsibilities
  • Where to find critical information needed for proposal preparation
  • Proposal best practices

Anyone involved in the preparation or review of sponsored programs proposals to external sponsors, especially those new to the process.

Research Administration Certificate

Course Materials

Class Slides , Reading the FOA (online exercise)

Related Learning

SAGE: Creating and Submitting eGC1s

Introduction to Sponsored Project Budgets

Workshop: Preparing Sponsored Project Budgets

SAGE: Budget

SAGE: Creating NIH Proposals in Grant Runner

Course #: 1001
Topic: Submit Proposal
Lifecycle: Plan/Propose
Type: Classroom Course
Length: 2.5 hrs.
Prerequisites:
Frequency: Fall, Winter, Spring

CORE [email protected] 206.616.0804

University of Washington Office of Research


What is a blueprint of a research paper?


A blueprint of a research paper is a kind of outline, except less formal and with more information.

Here are some links that I found very helpful:

http://www.teachervision.fen.com/research-papers/writing/2123.html?detoured=1

http://www.suite101.com/content/writing-a-research-paper-a191693

I hope you found this information helpful!


How do you keep a research paper from being biased?

You can keep a research paper from being biased by presenting the facts. You can also research both sides and present them in your paper.

Is a background paper the same as a research paper?

A "background" paper refers to a person's background and includes the past actions or past dealings. A research paper refers to facts about something that has been chosen as the topic of research.

Is a survey research paper the same as a persuasive research paper?

Definitely not. While a survey research paper discusses what people overall think (that is the research you would be doing), a persuasive research paper researches something and then tells the reader what they ought to do based on that research. For example, if you researched going green, in a survey research paper you might say how most people in X place do X thing, but in a persuasive research paper you would say people in X place should do Z thing instead.

Should you use contractions in a research paper?

No — never use contractions in a research paper; write the words out in full.

Are you supposed to use information from outline to do research paper?

Yes, you should use the information from the outline to do the research paper.


Sultan Qaboos Univ Med J, v.16(4); 2016 Nov

Perceptions of the Use of Blueprinting in a Formative Theory Assessment in Pharmacology Education

This study aimed to assess perceptions of the use of a blueprint in a pharmacology formative theory assessment.

This study took place from October 2015 to February 2016 at a medical college in Gujarat, India. Faculty from the Department of Pharmacology used an internal syllabus to prepare an assessment blueprint. A total of 12 faculty members prepared learning objectives and categorised cognitive domain levels by consensus. Learning objectives were scored according to clinical importance and marks were distributed according to proportional weighting. A three-dimensional test specification table of syllabus content, assessment tools and cognitive domains was prepared. Based on this table, a theory paper was created and administered to 126 pharmacology students. Feedback was then collected from the faculty members and students using a 5-point Likert scale.

The majority of faculty members agreed that using a blueprint ensured proper weighting of marks for important topics (90.00%), aligned questions with learning objectives (80.00%), distributed questions according to clinical importance (100.00%) and minimised inter-examiner variations in selecting questions (90.00%). Few faculty members believed that use of the blueprint created too many easy questions (10.00%) or too many difficult questions (10.00%). Most students felt that the paper had a uniform distribution of questions from the syllabus (90.24%), appropriately weighted important topics (77.23%), was well organised (79.67%) and tested in-depth subject knowledge (74.80%).

These findings indicate that blueprinting should be an integral part of written assessments in pharmacology education.

Advances in Knowledge

  • - This study prepared a blueprint for a formative theory assessment in pharmacology. A blueprint aligns an assessment with learning objectives and distributes questions according to weighting based on clinical importance and the core learning objectives of the syllabus.
  • - The findings of this study indicated that the majority of the faculty members and students had positive perceptions of the use of blueprinting in the formation of the assessment.

Application to Patient Care

  • - A blueprint is an important approach to stimulating deep learning among medical students, thus indirectly influencing future patient care and clinical practice.
  • - Positive attitudes towards scientific research among undergraduate medical students are likely to enhance the quality of future patient care.

Written examinations are the most commonly employed method to assess knowledge in medical education and are used to test recall abilities as well as higher-order cognitive functions, such as the interpretation of data and problem-solving skills. 1 , 2 Valid assessment methods are necessary to determine whether students have learned the required information. 3 Content validity gauges the extent to which an assessment covers a representative sample of the material which should be assessed; for example, if examination questions cover the learning objectives of the syllabus, the examination is considered to have content validity. 4 In contrast, construct validity covers all aspects of subject knowledge such as application, data gathering and interpretation as a collection of interrelated components, which together allow for the assessment to make sense. 5

Construct under-representation and construct irrelevance variance are two important challenges to construct validity. If an examiner creates a test with overly easy or difficult questions, asks well-known questions or uses unnecessarily complex language, it would mean that there is construct irrelevance variance, which can lead to the inflation or deflation of test scores. 6 , 7 Content under-representation in a paper can occur due to the inadequate weighting of marks for clinically relevant topics, unequal distribution of course content across the assessment or examiner bias, such as a tendency to focus on popular topics. 7 Moreover, the teachers who deliver lectures are usually not the ones who create the assessments; this can reduce the content and construct validity of an assessment, which in turn can lead to a mistrust in the assessment system on the part of the students. 7 , 8 Content imbalance may also result in students focusing less on key areas of learning during revision. 7 For an assessment paper to be valid, it should match course content, have proportional weighting of content according to clinical importance, consist of questions which are neither overly difficult nor easy and have multiple tools to determine various types of information. 7

Blueprinting can be defined as the creation of a template to determine the content of a test; it lists the number and type of questions across the course content, with learning objectives and relative weighting given to each topic. 7 , 8 A blueprint provides a systematic multi-step approach to an assessment, defining the purpose (e.g. formative/summative and written/practical) and scope (e.g. for undergraduate or postgraduate students) of the test in order to subsequently determine content and method of assessment. Based on the content, learning objectives and their domains are identified and different assessment tools are chosen, such as short answer questions (SAQs), essay questions (EQs) or multiple choice questions. 7 , 8 The content of the assessment is then proportionally weighted as per clinical importance, learning domains and methods of assessment. The total weighting of the number of items to be included in the assessment is decided and a three-dimensional (3D) table of test specifications is prepared to align content, learning domains and assessment tools, as well as prepare individual questions. 7 , 8

Blueprinting is increasingly used in the field of medical education worldwide. 9 , 10 In the UK, assessments created using a blueprint are considered essential to enable future doctors to meet mandatory standards; the assessments are prepared in such a way that students who have not met important learning outcomes are not able to graduate. 10 Previous research indicates that blueprinting optimises student assessment, making a positive impact and helping them to focus on key areas in an examination, thereby improving performance. 11 – 13 This approach can reduce inter-individual variability by providing guidance to examiners; moreover, set question papers are usually more valid and reliable than those created without a blueprint. 7 However, studies conducted on the use of blueprinting in India are scarce.

At the Gujarat Medical Education & Research Society (GMERS) Medical College in Gotri, Vadodara, Gujarat, India, a five-year undergraduate medical education programme results in a Bachelor of Medicine and Bachelor of Surgery (MBBS). Pharmacology education is mainly taught to second-year MBBS students for a total of three semesters of six months’ duration each. The assessment system consists of one summative and two formative examinations, with two written papers in the summative examination and one written paper in one of the formative examinations. Each paper is scored out of 58 marks, although students now receive up to 40 marks due to optional questions. The papers include constructed-response open-ended questions including EQs (4 marks each), short EQs (3 marks each) and SAQs (2 marks each); the maximum number of marks for EQs, short EQs and SAQs are 20, 24 and 14, respectively. In order to pass, a student must get at least 50% on the examination. Traditionally, the assessment content is determined by a paper setter who selects the questions according to the syllabus and question format. This study aimed to prepare a blueprint for a written theory paper and to analyse subsequent feedback on its use in a formative examination by pharmacology faculty members and students.

This study was conducted between October 2015 and February 2016 at the Department of Pharmacology of GMERS Medical College. In order to familiarise departmental faculty members with the concept of preparing a theory paper via blueprinting, a pilot test consisting of 16 SAQs worth two marks each was administered to 12 members of the faculty. A half-day interactive session was then conducted in three sessions: the first focused on assessment methods and tools, the second on validity and reliability in assessments and the third on the purpose and implementation of blueprinting, including the weighting of topics and assessment methods and how to prepare tables of test specifications. Faculty members were then re-tested using the same SAQs in order to determine the learning outcomes of the interactive session.

The syllabus of the first formative theory examination for second-year MBBS students was used to prepare a blueprint for the written paper, consisting of seven topics: general pharmacology; the autonomic nervous system; the peripheral nervous system; the respiratory system; the gastrointestinal tract; autacoid-related drugs; and drugs affecting blood and blood formation. A literature review of the undergraduate regulations, vision documents, essential drug lists and national health programmes of the Medical Council of India as well as pharmacology textbooks and previous test papers was undertaken to determine learning objectives for each topic. 14 – 20 Each learning objective was categorised into either recall or reasoning domain categories as per Miller’s pyramid of competence. 21 Learning objectives and cognitive domain categorisations were then discussed by the same 12 faculty members who had participated in the pilot test. Each faculty member scored learning objectives individually according to clinical importance, with a score of 3 indicating high importance, a score of 2 indicating moderate importance and a score of 1 reflecting little/no clinical importance. 7

Mean scores for each learning objective were calculated using a Microsoft Excel spreadsheet, Version 2010 (Microsoft Corp., Redmond, Washington, USA). Differences were resolved via consensus. Based on the learning objective scores, total scores were calculated for each topic as well as the overall syllabus. Proportional weighting was calculated for each topic by dividing the topic score by the total syllabus score. 3 , 7 Table 1 shows the learning objectives, total scores, proportional weighting and mark distribution calculations. Based on the literature review and faculty consensus, a total of 292 learning objectives were identified. The total syllabus score was 663 and the maximum number of marks on the theory test paper was 58, with marks in each topic weighted proportionally.

Learning objectives, mean total scores, proportional weighting and distribution of marks per syllabus topic

Syllabus topic                            | Learning objectives | Mean total score | Proportional weighting | Distribution of marks
General pharmacology                      | 87                  | 201              | 0.30                   | 17.40 (17)
Autonomic nervous system                  | 70                  | 163              | 0.25                   | 14.50 (15)
Peripheral nervous system                 | 24                  | 45               | 0.07                   | 4.06 (4)
Gastrointestinal tract                    | 21                  | 44               | 0.07                   | 4.06 (4)
Respiratory tract                         | 17                  | 43               | 0.06                   | 3.48 (4)
Autacoid-related drugs                    | 31                  | 71               | 0.11                   | 6.38 (6)
Drugs affecting blood and blood formation | 42                  | 96               | 0.14                   | 8.12 (8)
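
The weighting arithmetic behind Table 1 can be sketched as follows (a minimal illustration using the published topic scores; the topic names and the two-decimal rounding of weights follow the table):

```python
# A minimal sketch of the proportional weighting described above.
# Topic scores are taken from Table 1; the paper carries 58 marks in total.
TOTAL_MARKS = 58

topic_scores = {
    "General pharmacology": 201,
    "Autonomic nervous system": 163,
    "Peripheral nervous system": 45,
    "Gastrointestinal tract": 44,
    "Respiratory tract": 43,
    "Autacoid-related drugs": 71,
    "Drugs affecting blood and blood formation": 96,
}

syllabus_total = sum(topic_scores.values())  # 663, as reported

for topic, score in topic_scores.items():
    weight = round(score / syllabus_total, 2)  # proportional weighting, 2 d.p.
    marks = weight * TOTAL_MARKS               # e.g. 0.30 * 58 = 17.40
    print(f"{topic}: weight {weight:.2f}, marks {marks:.2f}")
```

Running this reproduces the weighting and raw-mark columns of Table 1; the integer allocations in the final column (17, 15, 4, 4, 4, 6 and 8) round these values so that they total 58.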

In the next phase, a 3D table of test specifications for the distribution of marks was prepared by aligning content, assessment tools and cognitive domain categories. Table 2 shows the 3D table of test specifications, representing the mark distribution for each topic according to assessment tool (e.g. EQ, short EQ or SAQ) and cognitive domain category (i.e. recall or reasoning). In terms of assessment tools, EQs were used only to assess reasoning abilities (20 marks; 100.00%). Short EQs were more often used to assess reasoning (15 marks; 62.50%) rather than recall (9 marks; 37.50%). In comparison, SAQs were more frequently used to test recall (8 marks; 57.14%) rather than reasoning (6 marks; 42.86%). Overall, the distribution of recall (17 marks; 29.31%) to reasoning (41 marks; 70.69%) marks was approximately 30:70.

Table of test specifications showing the distribution of marks for each topic according to assessment tool and cognitive domain category

Syllabus topic                            | EQs (Reasoning) | Short EQs (Recall) | Short EQs (Reasoning) | SAQs (Recall) | SAQs (Reasoning) | Total
General pharmacology                      | 8 (2)           | 3 (1)              | 3 (1)                 | 2 (1)         | 2 (1)            | 18 (6)
Autonomic nervous system                  | 4 (1)           | 3 (1)              | 3 (1)                 | 2 (1)         | 2 (1)            | 14 (5)
Peripheral nervous system                 | -               | -                  | 3 (1)                 | -             | -                | 3 (1)
Gastrointestinal tract                    | -               | -                  | 3 (1)                 | 2 (1)         | -                | 5 (2)
Respiratory system                        | 4 (1)           | -                  | -                     | -             | -                | 4 (1)
Autacoid-related drugs                    | 4 (1)           | -                  | -                     | 2 (1)         | -                | 6 (2)
Drugs affecting blood and blood formation | -               | 3 (1)              | 3 (1)                 | -             | 2 (1)            | 8 (3)

Values are marks allotted, with the number of questions in parentheses.

EQs = essay questions; SAQs = short answer questions.
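
The recall-to-reasoning split quoted above can be checked directly from the table (a minimal sketch; the cell values are the per-tool, per-domain marks from Table 2):

```python
# Check of the recall/reasoning balance implied by Table 2.
# Marks per (assessment tool, cognitive domain) cell, read from the table.
cells = {
    ("EQ", "reasoning"): 20,
    ("short EQ", "recall"): 9,
    ("short EQ", "reasoning"): 15,
    ("SAQ", "recall"): 8,
    ("SAQ", "reasoning"): 6,
}

total = sum(cells.values())  # 58 marks in the paper
recall = sum(m for (tool, domain), m in cells.items() if domain == "recall")
reasoning = total - recall

print(f"recall: {recall}/{total} marks ({recall / total:.2%})")      # 17/58 (29.31%)
print(f"reasoning: {reasoning}/{total} marks ({reasoning / total:.2%})")  # 41/58 (70.69%)
```

This reproduces the approximately 30:70 recall-to-reasoning distribution reported in the text.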

Based on the blueprint, a paper setter prepared individual test questions following good practices. 7 The questions were framed using directive verbs from the revised Bloom’s taxonomy of learning domains to assess the appropriate cognitive domain according to the blueprint and to give students clear directions for their responses. 22 The verb “remember” was used to frame recall questions, while the verbs “understand”, “apply” and “analyse” were used for reasoning questions. 22 The verbs “evaluate” and “create” were not used. The paper setter selected questions of high, moderate and no/little clinical importance in a ratio of 60:30:10. The paper was then administered to a total of 126 second-year pharmacology students.
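
The verb-to-domain mapping described above can be sketched as a small lookup table (illustrative only; the function name is ours, not the study's, but the verb assignments follow the text):

```python
# Mapping of revised Bloom's taxonomy directive verbs to the blueprint's
# two cognitive domain categories, as described in the study.
DOMAIN_BY_VERB = {
    "remember": "recall",
    "understand": "reasoning",
    "apply": "reasoning",
    "analyse": "reasoning",
}
# "evaluate" and "create" were not used in this assessment.

def domain_for(verb):
    """Return the cognitive domain category for a directive verb."""
    return DOMAIN_BY_VERB[verb.lower()]

print(domain_for("Apply"))     # reasoning
print(domain_for("remember"))  # recall
```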

The 12 faculty members were requested to provide feedback regarding their perceptions of the quality and use of the theory assessment paper designed using blueprinting. They were provided with two previous assessment papers for comparison. Feedback was collected using closed- and open-ended questionnaires. For the former, participants were given 14 statements and asked to respond on a 5-point Likert scale using the following responses: strongly disagree, disagree, neutral, agree and strongly agree. In the open-ended questionnaire, they were asked to describe benefits and difficulties in the preparation and implementation of the blueprint as well as the feasibility of this approach for future implementation in formative assessments. The perceptions of the students regarding the paper were determined one week after the assessment. A questionnaire consisting of nine statements was distributed and the students were asked to respond on the previously described 5-point Likert scale.

For the pilot test, the SAQ scores of the faculty members were presented as means and standard deviations and compared using a paired t-test. Results from the feedback questionnaires were presented as percentages. Strongly disagree and disagree responses were merged, as were the agree and strongly agree responses. Statistical analysis was performed using GraphPad Prism, Version 6.0 (GraphPad Software Inc., La Jolla, California, USA). A P value of <0.050 was considered statistically significant.
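
The comparison of pre- and post-session scores described above is a standard paired t-test. A minimal sketch using only the standard library is shown below; the per-faculty scores are hypothetical placeholders, since the study reports only group means (7.72 and 19.29 out of 32), so the resulting t value will not match the published t = 10.23.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Return the paired t statistic and degrees of freedom."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Hypothetical pre- and post-session SAQ scores for 12 faculty members
# (illustrative only; the study's raw scores are not published).
before = [4, 6, 10, 3, 12, 8, 5, 9, 14, 7, 6, 8]
after  = [18, 20, 25, 14, 27, 19, 16, 22, 30, 17, 15, 19]

t, df = paired_t(before, after)
print(f"t = {t:.2f}, df = {df}")  # df = 11, as in the study (n = 12)
```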

This study received ethical approval from the Institutional Human Ethics Committee of GMERS Medical College, Gotri (IHEC #101/2015/Pharmacology-16). Informed consent was obtained from the faculty and undergraduate medical students before data collection.

Among the 12 faculty members, the mean test scores before and after the interactive session were 7.72 ± 4.19 and 19.29 ± 5.29, respectively, out of a maximum score of 32 (paired t-test t-value: 10.23; degree of freedom: 11; P <0.001). In total, 10 out of 12 faculty members gave their opinions of the blueprint (response rate: 83.33%). The two remaining faculty members were transferred before the formative assessment was conducted. All of the faculty members agreed that the blueprint ensured a uniform distribution of questions across the syllabus topics, helped to maintain a balance between questions in the recall and reasoning domains and assured the distribution of questions according to clinical importance. In addition, the majority agreed that the blueprint resulted in the adequate weighting of important topics (90.00%), aligned questions with learning objectives (80.00%), ensured well-organised theory test papers (70.00%), tested in-depth subject knowledge (60.00%) and minimised inter-examiner variations in selecting questions (90.00%). Few faculty members believed that the blueprint created too many easy questions (10.00%) or too many difficult questions (10.00%) [ Figure 1 ].

Figure 1: Perceptions of faculty members regarding the use of a blueprint in the formation of a pharmacology theory assessment paper (N = 10). [Image not available.]

*One faculty member did not respond to these statements.

The majority of faculty members (90.00%) believed that a blueprint should be incorporated into the design of future examinations and all of them agreed that the blueprint should be an integral part of theory paper framing. Moreover, the majority of faculty members thought that blueprints should be prepared for the entire syllabus (90.00%) and that there was a need to change teaching schedules as per the blueprint (90.00%) [ Figure 2 ]. Table 3 summarises faculty responses to open-ended questions about their perceptions of the blueprint and assessment paper.

Figure 2: Perceptions of faculty members regarding the feasibility of future implementation of a blueprint in pharmacology education (N = 10). [Image not available.]

Summary of faculty responses to open-ended questions regarding the use of a blueprint in the formation of a pharmacology theory assessment paper (N = 10)


A total of 123 out of 126 students provided feedback regarding the formative assessment paper (response rate: 97.62%). The majority of students believed that the distribution of questions was uniform and covered each topic (90.24%) and allowed for proper weighting of clinically important topics (77.23%). In addition, the paper was generally perceived to be well organised (79.67%). Most of the students believed that all of the questions were from the defined syllabus (90.24%) and that the paper tested in-depth knowledge of the subject (74.80%). Overall, few students believed that there were too many easy questions (12.20%) or too many difficult questions (9.75%); indeed, only a minority of the students felt that the paper was exhausting/lengthy (26.83%) or stressful (17.08%) [ Figure 3 ].

Figure 3: Perceptions of students regarding the use of a blueprint in the formation of a pharmacology theory assessment paper (N = 123). [Image not available.]

An assessment drives, directs and influences learning and is a tool for educational improvement. 7 , 23 Students successfully learn when a relationship exists between teaching, assessment and results. In an assessment, every question format (e.g. EQs, short EQs and SAQs) has advantages and disadvantages; using a variety of formats helps to counter the possible bias associated with one individual format. 1 A blueprint helps to ensure the balance between different format questions by specifying the content, learning domains across the syllabus and assessment tools of an examination in a rational manner. 7 , 8 In the current study, a blueprint was used to create a written assessment paper for pharmacology students; learning objectives were identified and syllabus topics were marked using a proportional weighting system based on clinical importance to ensure content validity. Moreover, the paper assessed not only knowledge recall, but also understanding, application and critical analysis abilities using the interrelated cognitive domain of reasoning, thus ensuring construct validity. 5

In the current study, the majority of faculty members and students agreed that the use of a blueprint in the formation of a pharmacology formative assessment paper resulted in a uniform distribution of questions and adequate weighting of clinically important topics and testing of in-depth subject knowledge. Faculty members also agreed that the test paper struck an appropriate balance between questions in the recall and reasoning domains and that the questions were aligned with the learning objectives of the syllabus. These findings suggest that the test paper created through the blueprint was able to avoid construct under-representation. Previous research has also indicated that the use of a blueprint in assessment ensures coverage of all aspects of the curriculum, learning objectives and educational domains. 5 , 24 , 25 Very few faculty members and students in the current study believed that the assessment contained questions which were too easy or too difficult; this suggests that the use of a blueprint resulted in a rational paper and avoided construct irrelevance variance. Moreover, the blueprint seemed to make the assessment more ‘fair’ in the eyes of both faculty members and students. The experience of an authentic assessment seems to be a motivating factor; among medical students, a fair assessment has been reported to stimulate a deep approach to learning when tailored to curricular objectives. 26 This may subsequently affect clinical practice and patient care once the students have graduated.

Earlier reports in the literature have provided evidence that blueprints make it easier for an examiner to select questions, set papers according to accepted norms and standards, test higher-order cognitive domains and create well-organised theory papers. 7 , 24 These findings were also reflected in the current study during faculty responses to open-ended questions concerning the benefits of blueprinting. However, certain challenges to the use of blueprinting were also identified; many faculty members stated that the preparation of a blueprint was time-consuming and that it was occasionally difficult to reach a consensus. Nevertheless, despite these barriers, the majority of faculty members found blueprinting to be a promising approach for preparing assessment papers and indicated that they would use this approach in future examinations and change their teaching patterns accordingly. Overall, the use of a blueprint in a formative assessment paper had a positive impact on faculty perceptions; it is therefore realistic to recommend the inclusion of this approach as a valid tool to frame theory papers for summative assessment. Nevertheless, the data from this study would be more valuable if they were combined with evidence of increased academic achievement due to the blueprint; unfortunately, the study was limited in this regard as the students’ performance could not be compared to students in previous years due to the differences in syllabus.

According to the perceptions of both pharmacology faculty members and students, the use of blueprinting in the formation of a written formative assessment paper was found to result in high content and construct validity. As this approach helps to align the content, cognitive domains and assessment tools of a paper in a rational way, it should be implemented as an integral part of the framing of theory assessments in future.

ACKNOWLEDGEMENTS

The authors wish to thank all of the faculty members and students who participated in this study. In addition, they are grateful to Dr. Neena Doshi of the GMERS Medical College, Gotri, for critically reviewing the manuscript before publication.

CONFLICT OF INTEREST

The authors declare no conflicts of interest.

No funding was received for this study.



Making and using blueprint paper

Blueprints use the cyanotype process invented by the astronomer John Herschel in 1842. The paper is coated with a solution of two soluble iron(III) salts. The two iron salts do not react with each other in the dark, but when they are exposed to ultraviolet light the iron(III) ammonium citrate becomes an iron(II) salt. The iron(II) ion reacts with the potassium ferricyanide to form an insoluble blue compound, blue iron(III) ferrocyanide, also known as Prussian blue.

Student Sheet

In this practical I will be:

  • Carrying out an experiment to produce Blueprint paper.
  • Producing an image or diagram on my Blueprint paper.
  • Investigating the process of producing Blueprints and the role UV light plays.

Introduction:

While on a school trip, you saw that some renovation work was being carried out by some builders. On a table were the Blueprints for the building. You realise that the shades of white and blue would be perfect for a piece of art you are currently working on. However, before you can use these shades, you need to understand how they are made. You decide to investigate further…   

  • 1 beaker (250 cm 3  )
  • 2 beakers (100 cm 3 )
  • 1 measuring cylinder (100 cm 3 )
  • 1 glass stirring rod
  • 1 plastic tray
  • 1 wash bottle containing distilled water
  • 20 sheets of (or access to) plain A4 paper; avoid shiny or very absorbent papers
  • 2 weighing boats (or gallipots)
  • Potassium hexacyanoferrate(III) – labelled “Substance A – Irritant ” (low hazard)
  • Ammonium iron(III) citrate – labelled “Substance B” (low hazard)
  • 1 drying line with 2 bulldog clips (or string and pegs)
  • Digital balance
  • Paper towelling
  • Disposable gloves
  • Newspaper (to cover the work area)

Making the blueprint paper

Wear gloves and goggles. 

  • Get two 100 cm 3 beakers, a measuring cylinder and a stirring rod. Mark one beaker A and the other B.
  • Weigh 5 g of Substance A into the beaker marked A.
  • Now weigh 9 g of Substance B into the beaker marked B. 

Use the measuring cylinder to measure 50 cm 3 of water and pour the water into beaker A. 

  • Stir carefully until all the crystals have dissolved.
  • Now measure out another 50 cm 3 of water and pour into beaker B.
  • Stir carefully with a clean glass rod until all the crystals have dissolved.
  • Carry out the following mixing, coating and drying steps in a dark part of the lab. 
  • Mix the two liquids together, and pour them into a tray. Move the tray gently to get the liquid to cover the base of the tray properly.
  • Place a piece of white A4 paper onto the liquid just long enough to get it damp, not wet. Lift the paper out of the tray by the two corners nearest to you, allow the excess solution to drip into the tray, then place it wet side up onto some newspaper on a desk. 

Your paper will turn greenish blue. Hang it up using the string line and pegs to dry in a dark part of the laboratory, or store it lying flat in a dark drawer.
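
As a rough arithmetic check on the quantities above (a sketch only; the molar mass used is for potassium hexacyanoferrate(III), K3[Fe(CN)6], and ammonium iron(III) citrate has no fixed formula, so only a mass concentration is given for it):

```python
# Concentrations of the two coating solutions from the recipe above.
# Molar mass is approximate and applies to Substance A only.
MOLAR_MASS_A = 329.25  # g/mol, potassium hexacyanoferrate(III)

mass_a_g = 5.0    # Substance A per beaker
mass_b_g = 9.0    # Substance B per beaker
volume_cm3 = 50   # water per beaker

conc_a = mass_a_g * 1000 / volume_cm3  # 100 g/dm3
conc_b = mass_b_g * 1000 / volume_cm3  # 180 g/dm3

print(f"Substance A: {conc_a:.0f} g/dm3 (~{conc_a / MOLAR_MASS_A:.2f} mol/dm3)")
print(f"Substance B: {conc_b:.0f} g/dm3")
```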

  • Why do you think you have to wear gloves and goggles?
  • What does dissolve mean?
  • Why do you think the mixing has to be carried out in a dark place?
  • Why do think the experiment will not work if the paper is wet?

Making the blueprints

Wear disposable plastic gloves

  • When dry place your prepared paper under another piece of paper to keep it away from the sun.
  • Place the package by the window so the light can fall on it.
  • Remove the protecting piece of paper and place an object on the surface.
  • Leave it in the light for about 1–5 minutes. Longer exposure leaves a shadow; shorter exposure times produce a sharper image. 
  • When you think it has gone blue enough, take the object off the paper. The covered parts will still be green.
  • Wash the paper with water to wash away the green chemicals and leave the blue behind.
  • Hang your blueprint up to dry out.
  • Why does your prepared blueprint paper need to be kept in the dark?
  • Does the paper change colour quickly when it is exposed to the light?
  • What does the washing do to the paper?
  • Why do you have to wash your hands at the end?

Going further:

Try a range of different types of paper to see if the paper type makes a difference to exposure time, depth of exposure, etc.

If you can get some old black and white negatives try using those on the blueprint paper. You will have to experiment with exposure times.

Describe how the blueprint paper is similar and how different it is to photographic developing with a film. Research the chemicals used in photography.

Blueprints use the cyanotype process invented by the astronomer John Herschel in 1842. The paper is coated with a solution of two soluble iron(III) salts - potassium hexacyanoferrate(III) (potassium ferricyanide) and iron(III) ammonium citrate.


A blueprint starts out as a black ink sketch on clear plastic or tracing paper. The ink sketch is laid on top of a sheet of blueprint paper and exposed to ultraviolet light or sunlight. Where the light strikes the paper, it turns blue. The black ink prevents the area under the drawing from turning blue. After exposure to UV light, the water-soluble chemicals are washed off the blueprint, leaving a white (or whatever colour the paper is) drawing on a blue background. The resulting blueprint is light-stable and as permanent as the substrate upon which it is printed.

Teacher and Technician Sheet

In this practical students will:

  • Produce Blueprint paper.
  • Create an image or diagram on Blueprint paper.
  • Investigate the process of producing Blueprints and the role UV light plays.

Introduction: 

(The topic could start with a group discussion during which teachers introduce the following ideas, especially the words in bold.)

A blueprint is an old term for a reproduction of a technical drawing of an object, such as an architectural or engineering design. Blueprints were made by a contact process using light-sensitive sheets. The process was important because it allowed the rapid and accurate reproduction of design documents. The name comes from the light lines on a blue background, forming a negative of the original. 

Paper was frequently used but for more durable prints linen was sometimes used. Sadly, over time the linen prints would shrink slightly, so later imitation vellum and polyester film were used instead. Nowadays drawings are produced on computer, printed, and then photocopied.

Blueprint paper is soaked with chemicals that change when visible or ultraviolet (UV) light falls on them. Objects placed on the dried blueprint paper block the visible or UV light, so the areas underneath, untouched by the light, stay unchanged.

Where the visible or UV light can get to the paper, an intense blue colour develops. The blue colour will not wash out of the paper, but the greenish colour left under the object will. This leaves a white image of the object on a blue background. It is possible to investigate the effects of differing exposure times, or of screening with different materials.

(To make the process easier for the students and safer the two solutions can be made up in the dark and stored in dark bottles.)

(This practical can be done with pupils working as individuals or in groups of two. Groups of two allows for good discussion between the pupils. Teachers can use the questions set as the stimulus for discussion and the answers can be used as a group report, article, presentation, poster or talk.)

Curriculum range:

Suitable for middle school or lower secondary students; it links with:

  • ask questions and develop a line of enquiry based on observations of the real world, alongside prior knowledge and experience; 
  • use appropriate techniques, apparatus, and materials during fieldwork and laboratory work, paying attention to health and safety; 
  • make and record observations and measurements using a range of methods for different investigations; and evaluate the reliability of methods and suggest possible improvements; 
  • present observations and data using appropriate methods, including tables and graphs; 
  • interpret observations and data, including identifying patterns and using observations, measurements and data to draw conclusions; 
  • present reasoned explanations, including explaining data in relation to predictions and hypotheses; 
  • the concept of a pure substance; 
  • mixtures, including dissolving. 

Hazard warnings: 

  • Potassium hexacyanoferrate(III) – skin/eye irritant (Cat 2); respiratory irritant (STOT SE 3)
  • Ammonium iron(III) citrate – skin/eye irritant (Cat 2)

Good practice requires that exposure be kept to a minimum and that students wear suitable gloves. Students with impaired respiratory function may incur further disability if excessive concentrations of particulates are inhaled, so good ventilation is required. 

In addition, contact with strong acids causes the release of highly toxic hydrogen cyanide. This is not likely to be an issue but care should be taken on disposal to ensure that the drain/sink does not have acid already present.

Ammonium iron(III) citrate is slightly hazardous as an irritant through skin or eye contact.

Wear safety glasses. Wear disposable gloves.

For a group of students:

  • 1 beaker (250 cm³)
  • 2 glass stirring rods
  • 20 sheets of plain A4 paper, or access to a supply (avoid shiny or very absorbent papers)
  • 10 g potassium hexacyanoferrate(III) – labelled “Substance A – Irritant”
  • 15 g ammonium iron(III) citrate – labelled “Substance B – Irritant”
  • 1 digital balance

Technical notes:

If available, use a fume cupboard to hang string lines in, ready to peg the paper up to dry, and close any blinds near it.

It is possible to dry the prepared sheets more quickly by using radiators and/or hairdryers if available, but otherwise this practical would have to be carried out over two lessons to allow for drying time.

The number of sheets that can be hung out to dry is limited by the amount of space available.

Laminating sheets can be drawn on and placed onto the prepared sheets before placing in bright light to leave an imprint on the paper.

An alternative is to get pupils to use image editing software to produce a negative of their choice that can then be printed out on transparency film.

The paper may stain yellow and dry yellow, but it will still change colour when exposed to bright light and develop a blueprint when washed with water.

This practical works well in normal daylight with the internal lights switched off. 


Good results can be obtained using ordinary A4 paper and using laminating sheets to draw on.

Any shadows will also be printed onto the paper, so place the paper where it is in direct light and lies flat.

The amount of chemicals used and solution produced could be halved and still cover about 10 sides of A4.

The hazards are minimal assuming the expected level of behaviour from students.

Making and using blueprint paper: student sheet

Making and using blueprint paper: teacher sheet.



5 Research design

Research design is a comprehensive plan for data collection in an empirical research project. It is a ‘blueprint’ for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: the data collection process, the instrument development process, and the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process—which is often loosely called ‘research design’—is introduced in this chapter and is described in further detail in Chapters 9–12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods , such as laboratory experiments and survey research, are aimed at theory (or hypothesis) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. Quantitative and qualitative methods refer to the type of data being collected—quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth—and analysed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes, the joint use of qualitative and quantitative data may help generate unique insight into a complex social phenomenon that is not available from either type of data alone, and hence, mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key attributes of a research design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity , also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesised independent variable, and not by variables extraneous to the research context. Causality requires three conditions: covariation of cause and effect (i.e., if the cause happens, then the effect also happens; if the cause does not happen, the effect does not happen), temporal precedence (the cause must precede the effect in time), and no spurious correlation (there is no plausible alternative explanation for the change). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect influenced the expected cause rather than the reverse. Although higher in internal validity than other methods, laboratory experiments are by no means immune to threats to internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity.

External validity or generalisability refers to whether the observed associations can be generalised from the sample to the population (population validity), or to other people, organisations, contexts, or time (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalised to the population of financial firms (population validity) or to other firms within the United States (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalisability than laboratory experiments where treatments and extraneous variables are more controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1.

Internal and external validity

Some researchers claim that there is a trade-off between internal and external validity—higher external validity can come only at the cost of internal validity and vice versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validity. Personally, I prefer research designs that have reasonable degrees of both internal and external validity, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of design is ultimately a matter of personal preference and competence, and of the level of internal and external validity they desire.

Construct validity examines how well a given measurement scale is measuring the theoretical construct that it is expected to measure. Many constructs used in social science research such as empathy, resistance to change, and organisational learning are difficult to define, much less measure. For instance, construct validity must ensure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypotheses testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable for such analysis. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.

Different types of validity in scientific research

Improving internal and external validity

The best research designs are those that can ensure high levels of internal and external validity. Such designs would guard against spurious correlations, inspire greater faith in hypothesis testing, and ensure that results drawn from a small sample are generalisable to the population at large. Controls are required to ensure the internal validity (causality) of research designs, and can be accomplished in five ways: manipulation, elimination, inclusion, statistical control, and randomisation.

In manipulation , the researcher manipulates the independent variables in one or more levels (called ‘treatments’), and compares the effects of the treatments against a control group where subjects do not receive the treatment. Treatments may include a new drug or different dosage of drug (for treating a medical condition), a teaching style (for students), and so forth. This type of control is achieved in experimental or quasi-experimental designs, but not in non-experimental designs such as surveys. Note that if subjects cannot distinguish adequately between different levels of treatment manipulations, their responses across treatments may not be different, and manipulation would fail.

The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socioeconomic status. In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female). This technique allows for greater generalisability, but also requires substantially larger samples. In statistical control , extraneous variables are measured and used as covariates during the statistical testing process.
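One simple form of this idea—comparing treatment and control outcomes within each level of a measured extraneous variable, then averaging the within-level effects—can be sketched as follows. The data and variable names here are hypothetical, and in practice covariates are usually handled with regression rather than stratification:

```python
from collections import defaultdict

# Hypothetical records: (extraneous variable, group, outcome score).
records = [
    ("F", "treatment", 78), ("F", "control", 70),
    ("F", "treatment", 82), ("F", "control", 74),
    ("M", "treatment", 66), ("M", "control", 60),
    ("M", "treatment", 70), ("M", "control", 62),
]

# Group outcomes by level of the extraneous variable, then by condition,
# so gender differences cannot masquerade as treatment effects.
by_stratum = defaultdict(lambda: defaultdict(list))
for gender, group, score in records:
    by_stratum[gender][group].append(score)

def mean(xs):
    return sum(xs) / len(xs)

# Estimate the treatment effect within each stratum, then average.
effects = [
    mean(groups["treatment"]) - mean(groups["control"])
    for groups in by_stratum.values()
]
adjusted_effect = mean(effects)
print(adjusted_effect)  # → 7.5
```

Here the within-stratum effects (8.0 for "F", 7.0 for "M") are averaged, so any baseline difference between the two strata drops out of the estimate.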

Finally, the randomisation technique is aimed at cancelling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature. Two types of randomisation are: random selection , where a sample is selected randomly from a population, and random assignment , where subjects selected in a non-random manner are randomly assigned to treatment groups.

Randomisation also ensures external validity, allowing inferences drawn from the sample to be generalised to the population from which the sample is drawn. Note that random assignment is mandatory when random selection is not possible because of resource or access constraints. However, generalisability across populations is harder to ascertain since populations may differ on multiple dimensions and you can only control for a few of those dimensions.
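The two randomisation types can be sketched in a few lines (the population and sample sizes are arbitrary, chosen only for illustration):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population of 1,000 numbered subjects.
population = list(range(1000))

# Random selection: draw a sample of 100 subjects from the population.
# This supports external validity (generalising back to the population).
sample = random.sample(population, 100)

# Random assignment: split the sample into treatment and control groups.
# This supports internal validity by cancelling out random extraneous
# effects across the two groups.
random.shuffle(sample)
treatment_group = sample[:50]
control_group = sample[50:]

assert len(treatment_group) == len(control_group) == 50
assert set(treatment_group).isdisjoint(control_group)
```

Note that the two steps are independent: a study may randomly assign subjects who were recruited non-randomly (e.g., volunteers), which preserves internal validity but weakens claims of population validity.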

Popular research designs

As noted earlier, research designs can be classified into two categories—positivist and interpretive—depending on the goal of the research. Positivist designs are meant for theory testing, while interpretive designs are meant for theory building. Positivist designs seek generalised patterns based on an objective view of reality, while interpretive designs seek subjective interpretations of social phenomena from the perspectives of the subjects involved. Some popular examples of positivist designs include laboratory experiments, field experiments, field surveys, secondary data analysis, and case research, while examples of interpretive designs include case research, phenomenology, and ethnography. Note that case research can be used for theory building or theory testing, though not at the same time. Not all techniques are suited for all kinds of scientific research. Some techniques such as focus groups are best suited for exploratory research, others such as ethnography are best for descriptive research, and still others such as laboratory experiments are ideal for explanatory research. Following are brief descriptions of some of these designs. Additional details are provided in Chapters 9–12.

Experimental studies are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the ‘treatment group’) but not to another group (‘control group’), and observing how the mean effects vary between subjects in these two groups. For instance, if we design a laboratory experiment to test the efficacy of a new drug in treating a certain ailment, we can get a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control groups), administer the drug to subjects in the treatment group, but only give a placebo (e.g., a sugar pill with no medicinal value) to subjects in the control group. More complex designs may include multiple treatment groups, such as low versus high dosage of the drug or combining drug administration with dietary interventions. In a true experimental design , subjects must be randomly assigned to each group. If random assignment is not followed, then the design becomes quasi-experimental . Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organisation where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, but those from field experiments tend to be stronger in external validity. Experimental data is analysed using quantitative statistical techniques. 
The primary strength of the experimental design is its strong internal validity, due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalisability, since real life is often more complex (i.e., involves more extraneous variables) than contrived lab settings. Furthermore, if the researcher does not identify relevant extraneous variables ex ante and control for them, the lack of controls may hurt internal validity and lead to spurious correlations.
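The logic of a true experimental design—random assignment followed by a comparison of mean effects across groups—can be sketched in a small simulation. The effect size, noise levels, and sample sizes below are all hypothetical:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical subjects, each with an unobserved baseline severity score.
subjects = [random.gauss(50, 10) for _ in range(200)]

# Random assignment: shuffle, then split into treatment and control.
random.shuffle(subjects)
treatment, control = subjects[:100], subjects[100:]

TRUE_EFFECT = -8  # assumed: the drug reduces severity by 8 points

# Outcomes: the treatment group receives the drug; the control group
# receives a placebo. Both include measurement noise.
treated_outcomes = [s + TRUE_EFFECT + random.gauss(0, 5) for s in treatment]
control_outcomes = [s + random.gauss(0, 5) for s in control]

def mean(xs):
    return sum(xs) / len(xs)

# The difference in group means estimates the causal effect, because
# random assignment balances extraneous variables across the groups.
estimated_effect = mean(treated_outcomes) - mean(control_outcomes)
print(round(estimated_effect, 1))
```

With these sample sizes the estimate lands close to the assumed −8 effect; shrinking the groups or inflating the baseline variance makes the estimate noticeably noisier, which is one way to see why sample size matters for statistical conclusion validity.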

Field surveys are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through a structured interview. In cross-sectional field surveys , independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire), while in longitudinal field surveys , dependent variables are measured at a later point in time than the independent variables. The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories. However, because of their non-temporal nature, internal validity (cause-effect relationships) is difficult to infer, and surveys may be subject to respondent biases (e.g., subjects may provide a ‘socially desirable’ response rather than their true response), which further hurts internal validity.

Secondary data analysis is an analysis of data that has previously been collected and tabulated by other sources. Such data may include data from government agencies such as employment statistics from the U.S. Bureau of Labor Statistics or development statistics by country from the United Nations Development Program, data collected by other researchers (often used in meta-analytic studies), or publicly available third-party data, such as financial data from stock markets or real-time auction data from eBay. This is in contrast to most other research designs, where collecting primary data is part of the researcher’s job. Secondary data analysis may be an effective means of research where primary data collection is too costly or infeasible, and secondary data is available at a level of analysis suitable for answering the researcher’s questions. The limitations of this design are that the data might not have been collected in a systematic or scientific manner and may hence be unsuitable for scientific research; that, since the data was collected for a presumably different purpose, it may not adequately address the research questions of interest; and that internal validity is problematic if the temporal precedence between cause and effect is unclear.

Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents. Case studies can be positivist in nature (for hypotheses testing) or interpretive (for theory building). The strength of this research method is its ability to discover a wide variety of social, cultural, and political factors potentially related to the phenomenon of interest that may not be known in advance. Analysis tends to be qualitative in nature, but heavily contextualised and nuanced. However, interpretation of findings may depend on the observational and integrative ability of the researcher, lack of control may make it difficult to establish causality, and findings from a single case site may not be readily generalised to other case sites. Generalisability can be improved by replicating and comparing the analysis in other case sites in a multiple case design .

Focus group research is a type of research that involves bringing in a small group of subjects (typically six to ten people) at one location, and having them discuss a phenomenon of interest for a period of one and a half to two hours. The discussion is moderated and led by a trained facilitator, who sets the agenda and poses an initial set of questions for participants, makes sure that the ideas and experiences of all participants are represented, and attempts to build a holistic understanding of the problem situation based on participants’ comments and experiences. Internal validity cannot be established due to lack of controls and the findings may not be generalised to other settings because of the small sample size. Hence, focus groups are not generally used for explanatory or descriptive research, but are more suited for exploratory research.

Action research assumes that complex social phenomena are best understood by introducing interventions or ‘actions’ into those phenomena and observing the effects of those actions. In this method, the researcher is embedded within a social context such as an organisation and initiates an action—such as new organisational procedures or new technologies—in response to a real problem such as declining profitability or operational bottlenecks. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may cause the desired change. The researcher then observes the results of that action, modifying it as necessary, while simultaneously learning from the action and generating theoretical insights about the target problem and interventions. The initial theory is validated by the extent to which the chosen action successfully solves the target problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from all other research methods, and hence, action research is an excellent method for bridging research and practice. This method is also suited for studying unique social problems that cannot be replicated outside that context, but it is also subject to researcher bias and subjectivity, and the generalisability of findings is often restricted to the context where the study was conducted.

Ethnography is an interpretive research design inspired by anthropology that emphasises that a research phenomenon must be studied within the context of its culture. The researcher is deeply immersed in a certain culture over an extended period of time—eight months to two years—and during that period, engages, observes, and records the daily life of the studied culture, and theorises about the evolution and behaviours in that culture. Data is collected primarily via observational techniques, formal and informal interaction with participants in that culture, and personal field notes, while data analysis involves ‘sense-making’. The researcher must narrate her experience in great detail so that readers may experience that same culture without necessarily being there. The advantages of this approach are its sensitivity to the context, the rich and nuanced understanding it generates, and minimal respondent bias. However, this is also an extremely time- and resource-intensive approach, and findings are specific to a given culture and less generalisable to other cultures.

Selecting research designs

Given the above multitude of research designs, which design should researchers choose for their research? Generally speaking, researchers tend to select those research designs that they are most comfortable with and feel most competent to handle, but ideally, the choice should depend on the nature of the research phenomenon being studied. In the preliminary phases of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a certain research problem, a focus group (for an individual unit of analysis) or a case study (for an organisational unit of analysis) is an ideal strategy for exploratory research. As one delves further into the research domain, but finds that there are no good theories to explain the phenomenon of interest and wants to build a theory to fill in the unmet gap in that area, interpretive designs such as case research or ethnography may be useful designs. If competing theories exist and the researcher wishes to test these different theories or integrate them into a larger theory, positivist designs such as experimental design, survey research, or secondary data analysis are more appropriate.

Regardless of the specific research design chosen, the researcher should strive to collect quantitative and qualitative data using a combination of techniques such as questionnaires, interviews, observations, documents, or secondary data. For instance, even in a highly structured survey questionnaire, intended to collect quantitative data, the researcher may leave some room for a few open-ended questions to collect qualitative data that may generate unexpected insights not otherwise available from structured quantitative data alone. Likewise, while case research employs mostly face-to-face interviews to collect qualitative data, the potential and value of collecting quantitative data should not be ignored. As an example, in a study of organisational decision-making processes, the case interviewer can record numeric quantities such as how many months it took to make certain organisational decisions, how many people were involved in that decision process, and how many decision alternatives were considered, which can provide valuable insights not otherwise available from interviewees’ narrative responses. Irrespective of the specific research design employed, the goal of the researcher should be to collect as much and as diverse data as possible that can help generate the best possible insights about the phenomenon of interest.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


medRxiv

A One Health Investigation into H5N1 Avian Influenza Virus Epizootics on Two Dairy Farms


Background In early April 2024 we studied two Texas dairy farms which had suffered incursions of H5N1 highly pathogenic avian influenza virus (HPAIV) the previous month.

Methods We employed molecular assays, cell and egg culture, Sanger and next generation sequencing to isolate and characterize viruses from multiple farm specimens (cow nasal swab, milk specimens, fecal slurry, and a dead bird).

Results We detected H5N1 HPAIV in 64% (9/14) of milk specimens, 2.6% (1/39) of cattle nasal swab specimens, and none of 17 cattle worker nasopharyngeal swab specimens. We cultured and characterized virus from eight H5N1-positive specimens. Sanger and next-generation sequencing revealed the viruses were closely related to other recent Texas epizootic H5N1 strains of clade 2.3.4.4b. Our isolates had multiple mutations associated with increased spillover potential. Surprisingly, we detected SARS-CoV-2 in a nasal swab from a sick cow. Additionally, 14.3% (2/14) of the farm workers who donated sera were recently symptomatic and had elevated neutralizing antibodies against a related H5N1 strain.

Conclusions While our sampling was limited, these data offer additional insight into the large H5N1 HPAIV epizootic which thus far has impacted at least 96 cattle farms in twelve US states. Due to fears that research might damage dairy businesses, studies like this one have been few. We need to find ways to work with dairy farms in collecting more comprehensive epidemiological data that are necessary for the design of future interventions against H5N1 HPAIV on cattle farms.
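The prevalence figures in the Results follow directly from the reported counts; the sketch below is just that arithmetic, with the counts taken from the abstract:

```python
# Recomputing the prevalence figures reported in the abstract.
# Counts are taken directly from the text; only the arithmetic is shown.
samples = {
    "milk specimens": (9, 14),
    "cattle nasal swabs": (1, 39),
    "recently symptomatic workers with elevated titers": (2, 14),
}

for name, (positive, total) in samples.items():
    pct = 100 * positive / total
    print(f"{name}: {positive}/{total} = {pct:.1f}%")
# milk specimens: 9/14 = 64.3%  (reported as 64%)
# cattle nasal swabs: 1/39 = 2.6%
# recently symptomatic workers with elevated titers: 2/14 = 14.3%
```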

Competing Interest Statement

The authors have declared no competing interest.

Funding Statement

This project was supported in part by the Agriculture and Food Research Initiative Competitive Grant from the American Rescue Plan Act, award number 2023-70432-39558, through USDA APHIS, and by Professor Gregory C. Gray's startup funding from the University of Texas Medical Branch. The findings and conclusions in this presentation are those of the authors and should not be construed to represent any official USDA or US Government determination or policy.

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

The details of the IRB/oversight body that provided approval or exemption for the research described are given below:

This research was approved by the University of Texas Medical Branch IRB, Protocol 23-0085



Subject Area

  • Epidemiology

COMMENTS

  1. A Research Blueprint

    The best blueprint for research, of course, is a flexible document that can never be complete. As we proceed to study published works, their reference notes and bibliographies will expose us to new materials. The manuscripts we use will point us to other documents. New record collections, long in private hands, continue to surface.

  2. Organizing Your Social Sciences Research Paper

    Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. ... The research helps contextualize already known information about a research problem, thereby facilitating ways to ...

  3. Chapter 5 Research Design

    Research design is a comprehensive plan for data collection in an empirical research project. It is a "blueprint" for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: (1) the data collection process, (2) the instrument development process, and (3) the sampling process.

  4. Study designs: Part 1

    The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on "study designs," we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

  5. Blueprints for Academic Research Projects

    It is much easier to start a complex task and long process such as designing a research project when you have an existing research model or 'blueprint' to work from. Starting with a 'blueprint' — tailored to your topic area — is much easier. Using the Research Model Builder Canvas, you can transform a journal article in your topic ...

  6. 6.1: Introduction- Building with a Blueprint

    One way to assess the validity of a theoretical explanation is to understand the research design. Research design is an action plan that guides researchers in providing evidence to support their theory. Another way to think of research design is as a blueprint. When building a house, it is necessary to first create a plan that will provide the ...

  7. What Is Research Design? 8 Types + Examples

Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data. Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs. Research designs for qualitative studies include phenomenological ...

  8. Research Design

    Abstract. A research design is the blueprint of the different steps to be undertaken starting with the formulation of the hypothesis to drawing inference during a research process. The research design clearly explains the different steps to be taken during a research program to reach the objective of a particular research.

  9. (PDF) A practical guide to test blueprinting

    A test blueprint describes the key elements of a test, including the content to be covered, the amount of emphasis allocated to each content area, and other important features.

  10. A guide to 'big team science' creates a blueprint for research

    The guide is designed to provide a path forward for researchers and to smooth over differences that are almost inevitable when the number of collaborators reaches three or even four digits. It ...

  11. Blueprinting Evaluation Evidence: Data Sources and Methods

    What Is Known. Data should serve the information needs of stakeholders (utility) while also being accurate, feasible, and fair/ethical. 1 Consider an adopt, adapt, and/or author (3A's) approach to data collection. Can you adopt data collection using existing surveys, performance data, and examination scores? Can you adapt an available tool to include items specific to your evaluation focus and ...

  12. A Blueprint for Your Research Paper by Michele Oliver on Prezi

A Blueprint for Your Research Paper by Michele Oliver on Prezi.

  13. The Research Proposal

    The research proposal also goes a step beyond in collecting and evaluating the data. Overall, the questions what, why, where, whom and when are provided answers by the research proposal. The dissertation proposal assists you in concentrating on your research aims, get a clear idea about the significance and the needs, elucidate on the methods ...

  14. NIH Blueprint Overview

    The NIH Blueprint for Neuroscience Research aims to accelerate transformative discoveries in brain function in health, aging, and disease. Blueprint is a collaborative framework that includes the NIH Office of the Director together with NIH Institutes and Centers that support research on the nervous system. By pooling resources and expertise ...

  15. Blueprint of a Proposal

    Blueprint of a Proposal. Trying to make sense of proposal preparation, review and submission at the UW? This course introduces participants to the UW processes, concepts and terminology that will help get you started in the right direction. Through discussion, hands-on exercises, annotated online resources and in class handouts, we will cover:

  16. What is a blueprint of a research paper?

A blueprint of a research paper is a kind of an outline, except less formal and with more information. ... Research design is a blueprint or plan for research work and research method is an action ...


  18. Perceptions of the Use of Blueprinting in a Formative Theory Assessment

    Previous research has also indicated that the use of a blueprint in assessment ensures coverage of all aspects of the curriculum, learning objectives and educational domains.5,24,25 Very few faculty members and students in the current study believed that the assessment contained questions which were too easy or too difficult; this suggests that ...






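Several of the snippets above (item 3 in particular) define a research design as a blueprint that must specify at least three processes: data collection, instrument development, and sampling. As a loose, hypothetical illustration, that specification can be written down as a minimal structure; all field names and values here are invented for the example:

```python
from dataclasses import dataclass

# Illustrative only: the three processes a research design must specify,
# per the definition quoted above. All values are hypothetical.
@dataclass
class ResearchDesign:
    data_collection: str         # how evidence will be gathered
    instrument_development: str  # how measures will be built and validated
    sampling: str                # who or what will be observed, and how chosen

design = ResearchDesign(
    data_collection="structured survey with two open-ended items",
    instrument_development="adapted questionnaire, pilot-tested on 20 respondents",
    sampling="stratified random sample of 200 firms",
)
print(design.sampling)  # stratified random sample of 200 firms
```

Writing the design down this explicitly, even informally, makes it easy to check that none of the three processes has been left unspecified before data collection begins.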