Quantitative Data Analysis: A Comprehensive Guide

By: Ofem Eteng | Published: May 18, 2022

A healthcare giant successfully introduces the most effective drug dosage through rigorous statistical modeling, saving countless lives. A marketing team predicts consumer trends with uncanny accuracy, tailoring campaigns for maximum impact.

These trends and dosages are not just any numbers but are a result of meticulous quantitative data analysis. Quantitative data analysis offers a robust framework for understanding complex phenomena, evaluating hypotheses, and predicting future outcomes.

In this blog, we’ll walk through the concept of quantitative data analysis, the steps required, its advantages, and the methods and techniques that are used in this analysis. Read on!

What is Quantitative Data Analysis?

Quantitative data analysis is a systematic process of examining, interpreting, and drawing meaningful conclusions from numerical data. It involves the application of statistical methods, mathematical models, and computational techniques to understand patterns, relationships, and trends within datasets.

Quantitative data analysis methods typically work with algorithms, mathematical analysis tools, and software to gain insights from the data, answering questions such as how many, how often, and how much. Data for quantitative data analysis is usually collected from close-ended surveys, questionnaires, polls, etc. The data can also be obtained from sales figures, email click-through rates, number of website visitors, and percentage revenue increase. 

Quantitative Data Analysis vs Qualitative Data Analysis

When we talk about data, we tend to think about patterns, relationships, and connections between datasets – in short, analyzing the data. Broadly, data analysis falls into two types: Quantitative Data Analysis and Qualitative Data Analysis.

Quantitative data analysis revolves around numerical data and statistics, which are suitable for anything that can be counted or measured. In contrast, qualitative data analysis deals with description and subjective information – things that can be observed but not measured.

Let us differentiate between Quantitative Data Analysis and Qualitative Data Analysis for a better understanding.

| Quantitative Data Analysis | Qualitative Data Analysis |
|---|---|
| Numerical data – statistics, counts, metrics, measurements | Text data – customer feedback, opinions, documents, notes, audio/video recordings |
| Close-ended surveys, polls, and experiments | Open-ended questions, descriptive interviews |
| Answers "What?", "How much?", and "Why?" (to a certain extent) | Answers "How?", "Why?", and "What are individual experiences and motivations?" |
| Statistical programming software like R, Python, SAS; data visualization tools like Tableau, Power BI | NVivo, Atlas.ti for qualitative coding; word processors, highlighters, mind maps, and visual canvases |
| Best used for large sample sizes for quick answers | Best used for small-to-medium sample sizes for descriptive insights |

Data Preparation Steps for Quantitative Data Analysis

Quantitative data has to be gathered and cleaned before proceeding to the analysis stage. Below are the steps to prepare data for quantitative analysis:

  • Step 1: Data Collection

Before beginning the analysis process, you need data. Data for quantitative analysis is typically collected through rigorous quantitative research methods such as closed-ended surveys, questionnaires, polls, and experiments.

  • Step 2: Data Cleaning

Once the data is collected, begin the data cleaning process by scanning the entire dataset for duplicates, errors, and omissions. Keep a close eye out for outliers (data points that differ significantly from the majority of the dataset), because they can skew your analysis results if they are not removed.

This data-cleaning process ensures data accuracy, consistency, and relevance before analysis.
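To make this concrete, here is a minimal cleaning sketch in Python with pandas. The file name, column name, and the 3-standard-deviation outlier cutoff are all hypothetical placeholders; adapt them to your own dataset and outlier policy.

```python
import pandas as pd

# Hypothetical survey export; the file and column names are placeholders.
df = pd.read_csv("survey_responses.csv")

# Remove exact duplicates and rows with missing values (omissions).
df = df.drop_duplicates().dropna()

# Flag outliers: values more than 3 standard deviations from the mean.
scores = df["response_score"]
z_scores = (scores - scores.mean()) / scores.std()
df = df[z_scores.abs() <= 3]

print(f"{len(df)} clean rows remaining")
```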

  • Step 3: Data Analysis and Interpretation

Now that you have collected and cleaned your data, it is time to carry out the quantitative analysis. There are two methods of quantitative data analysis, which we will discuss in the next section.

However, if you have data from multiple sources, collecting and cleaning it can be a cumbersome task. This is where Hevo Data steps in. With Hevo, extracting, transforming, and loading data from source to destination becomes a seamless task, eliminating the need for manual coding. This not only saves valuable time but also enhances the overall efficiency of data analysis and visualization, empowering users to derive insights quickly and with precision.

Hevo is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources & load data to the destinations but also transform & enrich your data, & make it analysis-ready.

Start for free now!

Now that you are familiar with what quantitative data analysis is and how to prepare your data for analysis, the focus will shift to the purpose of this article, which is to describe the methods and techniques of quantitative data analysis.

Methods and Techniques of Quantitative Data Analysis

Broadly, quantitative data analysis employs two techniques to extract meaningful insights from datasets. The first is descriptive statistics, which summarizes and portrays the essential features of a dataset, such as the mean, median, and standard deviation.

Inferential statistics, the second method, extrapolates insights and predictions from a sample dataset to make broader inferences about an entire population, such as hypothesis testing and regression analysis.

An in-depth explanation of both methods is provided below:

  • Descriptive Statistics
  • Inferential Statistics

1) Descriptive Statistics

Descriptive statistics, as the name implies, is used to describe a dataset. It helps you understand the details of your data by summarizing it and finding patterns within the specific sample. Descriptive statistics provide absolute numbers obtained from a sample, but they do not necessarily explain the rationale behind those numbers, and they are mostly used for analyzing single variables. The methods used in descriptive statistics include the following (a short code sketch follows the list):

  • Mean: This calculates the numerical average of a set of values.
  • Median: This is the midpoint of a set of values when the numbers are arranged in numerical order.
  • Mode: This is the most commonly occurring value in a dataset.
  • Percentage: This expresses how a value or group of respondents relates to a larger group of respondents.
  • Frequency: This indicates the number of times a value occurs in a dataset.
  • Range: This is the difference between the highest and lowest values in a dataset.
  • Standard Deviation: This indicates how dispersed a set of numbers is, that is, how close the numbers are to the mean.
  • Skewness: This indicates how symmetrical a range of numbers is, showing whether they cluster into a smooth bell curve in the middle of the graph or skew towards the left or right.
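To make these measures concrete, here is a short Python sketch that computes each of them for a small, made-up sample; pandas and SciPy are assumed to be available, and the numbers are purely illustrative.

```python
import pandas as pd
from scipy import stats

values = pd.Series([55, 61, 65, 68, 72, 72, 78, 80, 85, 90])  # made-up sample

print("Mean:", values.mean())                    # numerical average
print("Median:", values.median())                # midpoint of the ordered values
print("Mode:", values.mode().tolist())           # most common value(s)
print("Frequency of 72:", (values == 72).sum())  # how often a value occurs
print("Range:", values.max() - values.min())     # difference between extremes
print("Std dev:", values.std())                  # dispersion around the mean
print("Skewness:", stats.skew(values))           # symmetry of the distribution
```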

2) Inferential Statistics

In quantitative analysis, the goal is to turn raw numbers into meaningful insights. Descriptive statistics explains the details of a specific dataset, but it does not explain the motives or causes behind the numbers; hence the need for further analysis using inferential statistics.

Inferential statistics aim to make predictions or highlight possible outcomes from the analyzed data obtained from descriptive statistics. They are used to generalize results and make predictions between groups, show relationships that exist between multiple variables, and are used for hypothesis testing that predicts changes or differences.

There are various statistical analysis methods used within inferential statistics; a few are discussed below, followed by a short code sketch.

  • Cross Tabulations: Cross tabulation or crosstab is used to show the relationship that exists between two variables and is often used to compare results by demographic groups. It uses a basic tabular form to draw inferences between different data sets and contains data that is mutually exclusive or has some connection with each other. Crosstabs help understand the nuances of a dataset and factors that may influence a data point.
  • Regression Analysis: Regression analysis estimates the relationship between a set of variables. It shows the correlation between a dependent variable (the variable or outcome you want to measure or predict) and any number of independent variables (factors that may impact the dependent variable). The purpose of regression analysis is therefore to estimate how one or more variables might affect the dependent variable, in order to identify trends and patterns, make predictions, and forecast possible future trends. There are many types of regression analysis, and the model you choose will be determined by the type of data you have for the dependent variable. The types of regression analysis include linear regression, non-linear regression, binary logistic regression, etc.
  • Monte Carlo Simulation: Monte Carlo simulation, also known as the Monte Carlo method, is a computerized technique of generating models of possible outcomes and showing their probability distributions. It considers a range of possible outcomes and then tries to calculate how likely each outcome will occur. Data analysts use it to perform advanced risk analyses to help forecast future events and make decisions accordingly.
  • Analysis of Variance (ANOVA): This is used to test the extent to which two or more groups differ from each other. It compares the mean of various groups and allows the analysis of multiple groups.
  • Factor Analysis: A large number of variables can be reduced into a smaller number of factors using the factor analysis technique. It works on the principle that multiple separate observable variables correlate with each other because they are all associated with an underlying construct. It helps in reducing large datasets into smaller, more manageable samples.
  • Cohort Analysis: Cohort analysis can be defined as a subset of behavioral analytics that operates from data taken from a given dataset. Rather than looking at all users as one unit, cohort analysis breaks down data into related groups for analysis, where these groups or cohorts usually have common characteristics or similarities within a defined period.
  • MaxDiff Analysis: This is a quantitative data analysis method used to gauge customers' purchase preferences and to determine which parameters rank higher than others in the process.
  • Cluster Analysis: Cluster analysis is a technique used to identify structures within a dataset. It aims to sort data points into groups that are internally similar and externally different; that is, data points within a cluster resemble each other and differ from data points in other clusters.
  • Time Series Analysis: This is a statistical technique used to identify trends and cycles over time. It is simply the measurement of the same variables at different times, like weekly and monthly email sign-ups, to uncover trends, seasonality, and cyclic patterns. By doing this, the data analyst can forecast how variables of interest may fluctuate in the future.
  • SWOT Analysis: This is a quantitative data analysis method that assigns numerical values to indicate the strengths, weaknesses, opportunities, and threats of an organization, product, or service, giving a clearer picture of the competition and fostering better business strategies.
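As a hedged illustration of two of these techniques, the sketch below builds a cross tabulation and fits a simple linear regression on a small invented dataset; the column names and values are hypothetical, and pandas plus SciPy are assumed.

```python
import pandas as pd
from scipy import stats

# Invented data: region and conversion for the crosstab,
# ad spend and revenue for the regression.
df = pd.DataFrame({
    "region":    ["North", "South", "North", "South", "North", "South"],
    "converted": ["yes", "no", "yes", "yes", "no", "no"],
    "ad_spend":  [10, 12, 15, 18, 22, 25],
    "revenue":   [40, 44, 52, 60, 68, 75],
})

# Cross tabulation: counts for two categorical variables.
print(pd.crosstab(df["region"], df["converted"]))

# Simple linear regression: revenue (dependent) vs ad_spend (independent).
result = stats.linregress(df["ad_spend"], df["revenue"])
print(f"slope={result.slope:.2f}, intercept={result.intercept:.2f}, r={result.rvalue:.3f}")
```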

How to Choose the Right Method for your Analysis?

Choosing between descriptive statistics and inferential statistics can often be confusing. Consider the following factors before choosing the right method for your quantitative data analysis:

1. Type of Data

The first consideration in data analysis is understanding the type of data you have. Different statistical methods have specific requirements based on these data types, and using the wrong method can render results meaningless. The choice of statistical method should align with the nature and distribution of your data to ensure meaningful and accurate analysis.

2. Your Research Questions

When deciding on statistical methods, it’s crucial to align them with your specific research questions and hypotheses. The nature of your questions will influence whether descriptive statistics alone, which reveal sample attributes, are sufficient or if you need both descriptive and inferential statistics to understand group differences or relationships between variables and make population inferences.

Pros and Cons of Quantitative Data Analysis

Pros

1. Objectivity and Generalizability:

  • Quantitative data analysis offers objective, numerical measurements, minimizing bias and personal interpretation.
  • Results can often be generalized to larger populations, making them applicable to broader contexts.

Example: A study using quantitative data analysis to measure student test scores can objectively compare performance across different schools and demographics, leading to generalizable insights about educational strategies.

2. Precision and Efficiency:

  • Statistical methods provide precise numerical results, allowing for accurate comparisons and predictions.
  • Large datasets can be analyzed efficiently with the help of computer software, saving time and resources.

Example: A marketing team can use quantitative data analysis to precisely track click-through rates and conversion rates on different ad campaigns, quickly identifying the most effective strategies for maximizing customer engagement.

3. Identification of Patterns and Relationships:

  • Statistical techniques reveal hidden patterns and relationships between variables that might not be apparent through observation alone.
  • This can lead to new insights and understanding of complex phenomena.

Example: A medical researcher can use quantitative analysis to pinpoint correlations between lifestyle factors and disease risk, aiding in the development of prevention strategies.

Cons

1. Limited Scope:

  • Quantitative analysis focuses on quantifiable aspects of a phenomenon, potentially overlooking important qualitative nuances, such as emotions, motivations, or cultural contexts.

Example: A survey measuring customer satisfaction with numerical ratings might miss key insights about the underlying reasons for their satisfaction or dissatisfaction, which could be better captured through open-ended feedback.

2. Oversimplification:

  • Reducing complex phenomena to numerical data can lead to oversimplification and a loss of richness in understanding.

Example: Analyzing employee productivity solely through quantitative metrics like hours worked or tasks completed might not account for factors like creativity, collaboration, or problem-solving skills, which are crucial for overall performance.

3. Potential for Misinterpretation:

  • Statistical results can be misinterpreted if not analyzed carefully and with appropriate expertise.
  • The choice of statistical methods and assumptions can significantly influence results.

This blog discusses the steps, methods, and techniques of quantitative data analysis. It also gives insights into the methods of data collection, the type of data one should work with, and the pros and cons of such analysis.

Gain a better understanding of data analysis with these essential reads:

  • Data Analysis and Modeling: 4 Critical Differences
  • Exploratory Data Analysis Simplified 101
  • 25 Best Data Analysis Tools in 2024

Carrying out successful data analysis requires prepping the data and making it analysis-ready. That is where Hevo steps in.

Want to give Hevo a try? Sign up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You may also have a look at the amazing Hevo pricing, which will assist you in selecting the best plan for your requirements.

Share your experience of understanding Quantitative Data Analysis in the comment section below! We would love to hear your thoughts.

Ofem Eteng is a seasoned technical content writer with over 12 years of experience. He has held pivotal roles such as System Analyst (DevOps) at Dagbs Nigeria Limited and Full-Stack Developer at Pedoquasphere International Limited. He specializes in data science, data analytics and cutting-edge technologies, making him an expert in the data industry.


The Beginner's Guide to Statistical Analysis | 5 Steps & Examples

Statistical analysis means investigating trends, patterns, and relationships using quantitative data. It is an important research tool used by scientists, governments, businesses, and other organizations.

To draw valid conclusions, statistical analysis requires careful planning from the very start of the research process. You need to specify your hypotheses and make decisions about your research design, sample size, and sampling procedure.

After collecting data from your sample, you can organize and summarize the data using descriptive statistics. Then, you can use inferential statistics to formally test hypotheses and make estimates about the population. Finally, you can interpret and generalize your findings.

This article is a practical introduction to statistical analysis for students and researchers. We'll walk you through the steps using two research examples. The first investigates a potential cause-and-effect relationship, while the second investigates a potential correlation between variables.

Table of contents

  • Step 1: Write your hypotheses and plan your research design
  • Step 2: Collect data from a sample
  • Step 3: Summarize your data with descriptive statistics
  • Step 4: Test hypotheses or make estimates with inferential statistics
  • Step 5: Interpret your results
  • Other interesting articles

Step 1: Write your hypotheses and plan your research design

To collect valid data for statistical analysis, you first need to specify your hypotheses and plan out your research design.

Writing statistical hypotheses

The goal of research is often to investigate a relationship between variables within a population. You start with a prediction, and use statistical analysis to test that prediction.

A statistical hypothesis is a formal way of writing a prediction about a population. Every research prediction is rephrased into null and alternative hypotheses that can be tested using sample data.

While the null hypothesis always predicts no effect or no relationship between variables, the alternative hypothesis states your research prediction of an effect or relationship.

Hypotheses about an effect (experimental example):

  • Null hypothesis: A 5-minute meditation exercise will have no effect on math test scores in teenagers.
  • Alternative hypothesis: A 5-minute meditation exercise will improve math test scores in teenagers.

Hypotheses about a relationship (correlational example):

  • Null hypothesis: Parental income and GPA have no relationship with each other in college students.
  • Alternative hypothesis: Parental income and GPA are positively correlated in college students.

Planning your research design

A research design is your overall strategy for data collection and analysis. It determines the statistical tests you can use to test your hypothesis later on.

First, decide whether your research will use a descriptive, correlational, or experimental design. Experiments directly influence variables, whereas descriptive and correlational studies only measure variables.

  • In an experimental design, you can assess a cause-and-effect relationship (e.g., the effect of meditation on test scores) using statistical tests of comparison or regression.
  • In a correlational design, you can explore relationships between variables (e.g., parental income and GPA) without any assumption of causality using correlation coefficients and significance tests.
  • In a descriptive design, you can study the characteristics of a population or phenomenon (e.g., the prevalence of anxiety in U.S. college students) using statistical tests to draw inferences from sample data.

Your research design also concerns whether you’ll compare participants at the group level or individual level, or both.

  • In a between-subjects design, you compare the group-level outcomes of participants who have been exposed to different treatments (e.g., those who performed a meditation exercise vs those who didn't).
  • In a within-subjects design, you compare repeated measures from participants who have participated in all treatments of a study (e.g., scores from before and after performing a meditation exercise).
  • In a mixed (factorial) design, one variable is altered between subjects and another is altered within subjects (e.g., pretest and posttest scores from participants who either did or didn't do a meditation exercise).
Example: Experimental research design
First, you'll take baseline test scores from participants. Then, your participants will undergo a 5-minute meditation exercise. Finally, you'll record participants' scores from a second math test. In this experiment, the independent variable is the 5-minute meditation exercise, and the dependent variable is the math test score from before and after the intervention.

Example: Correlational research design
In a correlational study, you test whether there is a relationship between parental income and GPA in graduating college students. To collect your data, you will ask participants to fill in a survey and self-report their parents' incomes and their own GPA.

Measuring variables

When planning a research design, you should operationalize your variables and decide exactly how you will measure them.

For statistical analysis, it’s important to consider the level of measurement of your variables, which tells you what kind of data they contain:

  • Categorical data represents groupings. These may be nominal (e.g., gender) or ordinal (e.g. level of language ability).
  • Quantitative data represents amounts. These may be on an interval scale (e.g. test score) or a ratio scale (e.g. age).

Many variables can be measured at different levels of precision. For example, age data can be quantitative (8 years old) or categorical (young). If a variable is coded numerically (e.g., level of agreement from 1–5), it doesn’t automatically mean that it’s quantitative instead of categorical.

Identifying the measurement level is important for choosing appropriate statistics and hypothesis tests. For example, you can calculate a mean score with quantitative data, but not with categorical data.

In a research study, along with measures of your variables of interest, you’ll often collect data on relevant participant characteristics.

| Variable | Type of data |
|---|---|
| Age | Quantitative (ratio) |
| Gender | Categorical (nominal) |
| Race or ethnicity | Categorical (nominal) |
| Baseline test scores | Quantitative (interval) |
| Final test scores | Quantitative (interval) |
| Parental income | Quantitative (ratio) |
| GPA | Quantitative (interval) |


Step 2: Collect data from a sample

In most cases, it's too difficult or expensive to collect data from every member of the population you're interested in studying. Instead, you'll collect data from a sample.

Statistical analysis allows you to apply your findings beyond your own sample as long as you use appropriate sampling procedures. You should aim for a sample that is representative of the population.

Sampling for statistical analysis

There are two main approaches to selecting a sample.

  • Probability sampling: every member of the population has a chance of being selected for the study through random selection.
  • Non-probability sampling: some members of the population are more likely than others to be selected for the study because of criteria such as convenience or voluntary self-selection.

In theory, for highly generalizable findings, you should use a probability sampling method. Random selection reduces several types of research bias, like sampling bias, and ensures that data from your sample is actually typical of the population. Parametric tests can be used to make strong statistical inferences when data are collected using probability sampling.

But in practice, it's rarely possible to gather the ideal sample. While non-probability samples are more at risk of biases like self-selection bias, they are much easier to recruit and collect data from. Non-parametric tests are more appropriate for non-probability samples, but they result in weaker inferences about the population.

If you want to use parametric tests for non-probability samples, you have to make the case that:

  • your sample is representative of the population you’re generalizing your findings to.
  • your sample lacks systematic bias.

Keep in mind that external validity means that you can only generalize your conclusions to others who share the characteristics of your sample. For instance, results from Western, Educated, Industrialized, Rich and Democratic (WEIRD) samples (e.g., college students in the US) aren't automatically applicable to all non-WEIRD populations.

If you apply parametric tests to data from non-probability samples, be sure to elaborate on the limitations of how far your results can be generalized in your discussion section.

Create an appropriate sampling procedure

Based on the resources available for your research, decide on how you’ll recruit participants.

  • Will you have resources to advertise your study widely, including outside of your university setting?
  • Will you have the means to recruit a diverse sample that represents a broad population?
  • Do you have time to contact and follow up with members of hard-to-reach groups?

Example: Sampling (experimental study)
Your participants are self-selected by their schools. Although you're using a non-probability sample, you aim for a diverse and representative sample.

Example: Sampling (correlational study)
Your main population of interest is male college students in the US. Using social media advertising, you recruit senior-year male college students from a smaller subpopulation: seven universities in the Boston area.

Calculate sufficient sample size

Before recruiting participants, decide on your sample size either by looking at other studies in your field or by using statistics. A sample that's too small may be unrepresentative of the population, while a sample that's too large will be more costly than necessary.

There are many sample size calculators online. Different formulas are used depending on whether you have subgroups or how rigorous your study should be (e.g., in clinical research). As a rule of thumb, a minimum of 30 units or more per subgroup is necessary.

To use these calculators, you have to understand and input these key components:

  • Significance level (alpha): the risk of rejecting a true null hypothesis that you are willing to take, usually set at 5%.
  • Statistical power: the probability of your study detecting an effect of a certain size if there is one, usually 80% or higher.
  • Expected effect size: a standardized indication of how large the expected result of your study will be, usually based on other similar studies.
  • Population standard deviation: an estimate of the population parameter based on a previous study or a pilot study of your own.
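As a sketch, here is how a required sample size might be computed in Python with statsmodels' power analysis for a two-group comparison. The effect size, alpha, and power values below simply echo the conventional defaults described above; they are not recommendations for any particular study.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of an independent-samples t test.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # expected standardized effect size, e.g. from similar studies
    alpha=0.05,       # significance level: 5% risk of rejecting a true null hypothesis
    power=0.8,        # 80% chance of detecting the effect if it exists
)
print(f"Approximately {n_per_group:.0f} participants needed per group")
```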

Step 3: Summarize your data with descriptive statistics

Once you've collected all of your data, you can inspect them and calculate descriptive statistics that summarize them.

Inspect your data

There are various ways to inspect your data, including the following:

  • Organizing data from each variable in frequency distribution tables.
  • Displaying data from a key variable in a bar chart to view the distribution of responses.
  • Visualizing the relationship between two variables using a scatter plot.

By visualizing your data in tables and graphs, you can assess whether your data follow a skewed or normal distribution and whether there are any outliers or missing data.

A normal distribution means that your data are symmetrically distributed around a center where most values lie, with the values tapering off at the tail ends.

Mean, median, mode, and standard deviation in a normal distribution

In contrast, a skewed distribution is asymmetric and has more values on one end than the other. The shape of the distribution is important to keep in mind because only some descriptive statistics should be used with skewed distributions.

Extreme outliers can also produce misleading statistics, so you may need a systematic approach to dealing with these values.

Calculate measures of central tendency

Measures of central tendency describe where most of the values in a data set lie. Three main measures of central tendency are often reported:

  • Mode: the most popular response or value in the data set.
  • Median: the value in the exact middle of the data set when ordered from low to high.
  • Mean: the sum of all values divided by the number of values.

However, depending on the shape of the distribution and level of measurement, only one or two of these measures may be appropriate. For example, many demographic characteristics can only be described using the mode or proportions, while a variable like reaction time may not have a mode at all.

Calculate measures of variability

Measures of variability tell you how spread out the values in a data set are. Four main measures of variability are often reported:

  • Range: the highest value minus the lowest value of the data set.
  • Interquartile range: the range of the middle half of the data set.
  • Standard deviation: the average distance between each value in your data set and the mean.
  • Variance: the square of the standard deviation.

Once again, the shape of the distribution and level of measurement should guide your choice of variability statistics. The interquartile range is the best measure for skewed distributions, while standard deviation and variance provide the best information for normal distributions.
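For a concrete picture, this small sketch computes all four measures of variability on an invented set of scores; NumPy and SciPy are assumed.

```python
import numpy as np
from scipy import stats

data = np.array([62, 65, 68, 70, 72, 75, 78, 81, 85, 90])  # made-up scores

print("Range:", data.max() - data.min())  # highest minus lowest value
print("IQR:", stats.iqr(data))            # spread of the middle 50%
print("Std dev:", data.std(ddof=1))       # sample standard deviation
print("Variance:", data.var(ddof=1))      # square of the standard deviation
```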

Example: Descriptive statistics (experimental study)
Using your table, you should check whether the units of the descriptive statistics are comparable for pretest and posttest scores. For example, are the variance levels similar across the groups? Are there any extreme values? If there are, you may need to identify and remove extreme outliers in your data set or transform your data before performing a statistical test.

| | Pretest scores | Posttest scores |
|---|---|---|
| Mean | 68.44 | 75.25 |
| Standard deviation | 9.43 | 9.88 |
| Variance | 88.96 | 97.96 |
| Range | 36.25 | 45.12 |
| N | 30 | 30 |

From this table, we can see that the mean score increased after the meditation exercise, and the variances of the two scores are comparable. Next, we can perform a statistical test to find out if this improvement in test scores is statistically significant in the population.

Example: Descriptive statistics (correlational study)
After collecting data from 653 students, you tabulate descriptive statistics for annual parental income and GPA.

It’s important to check whether you have a broad range of data points. If you don’t, your data may be skewed towards some groups more than others (e.g., high academic achievers), and only limited inferences can be made about a relationship.

| | Parental income (USD) | GPA |
|---|---|---|
| Mean | 62,100 | 3.12 |
| Standard deviation | 15,000 | 0.45 |
| Variance | 225,000,000 | 0.16 |
| Range | 8,000–378,000 | 2.64–4.00 |
| N | 653 | 653 |

Step 4: Test hypotheses or make estimates with inferential statistics

A number that describes a sample is called a statistic, while a number describing a population is called a parameter. Using inferential statistics, you can make conclusions about population parameters based on sample statistics.

Researchers often use two main methods (simultaneously) to make inferences in statistics.

  • Estimation: calculating population parameters based on sample statistics.
  • Hypothesis testing: a formal process for testing research predictions about the population using samples.

You can make two types of estimates of population parameters from sample statistics:

  • A point estimate: a value that represents your best guess of the exact parameter.
  • An interval estimate: a range of values that represent your best guess of where the parameter lies.

If your aim is to infer and report population characteristics from sample data, it’s best to use both point and interval estimates in your paper.

You can consider a sample statistic a point estimate for the population parameter when you have a representative sample (e.g., in a wide public opinion poll, the proportion of a sample that supports the current government is taken as the population proportion of government supporters).

There’s always error involved in estimation, so you should also provide a confidence interval as an interval estimate to show the variability around a point estimate.

A confidence interval uses the standard error and the z score from the standard normal distribution to convey where you’d generally expect to find the population parameter most of the time.
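As a sketch, a 95% confidence interval for a sample mean can be computed like this, using the normal (z) approximation described above; the sample values are invented and SciPy is assumed.

```python
import numpy as np
from scipy import stats

sample = np.array([68, 71, 74, 69, 77, 72, 75, 70, 73, 76])  # made-up scores

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean

# 95% interval from the standard normal distribution, centred on the point estimate.
low, high = stats.norm.interval(0.95, loc=mean, scale=sem)
print(f"point estimate: {mean:.2f}, 95% CI: ({low:.2f}, {high:.2f})")
```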

Hypothesis testing

Using data from a sample, you can test hypotheses about relationships between variables in the population. Hypothesis testing starts with the assumption that the null hypothesis is true in the population, and you use statistical tests to assess whether the null hypothesis can be rejected or not.

Statistical tests determine where your sample data would lie on an expected distribution of sample data if the null hypothesis were true. These tests give two main outputs:

  • A test statistic tells you how much your data differs from the null hypothesis of the test.
  • A p value tells you the likelihood of obtaining your results if the null hypothesis is actually true in the population.

Statistical tests come in three main varieties:

  • Comparison tests assess group differences in outcomes.
  • Regression tests assess cause-and-effect relationships between variables.
  • Correlation tests assess relationships between variables without assuming causation.

Your choice of statistical test depends on your research questions, research design, sampling method, and data characteristics.

Parametric tests

Parametric tests make powerful inferences about the population based on sample data. But to use them, some assumptions must be met, and only some types of variables can be used. If your data violate these assumptions, you can perform appropriate data transformations or use alternative non-parametric tests instead.

A regression models the extent to which changes in a predictor variable result in changes in an outcome variable (or variables).

  • A simple linear regression includes one predictor variable and one outcome variable.
  • A multiple linear regression includes two or more predictor variables and one outcome variable.

Comparison tests usually compare the means of groups. These may be the means of different groups within a sample (e.g., a treatment and control group), the means of one sample group taken at different times (e.g., pretest and posttest scores), or a sample mean and a population mean.

  • A t test is for exactly 1 or 2 groups when the sample is small (30 or less).
  • A z test is for exactly 1 or 2 groups when the sample is large.
  • An ANOVA is for 3 or more groups.

The z and t tests have subtypes based on the number and types of samples and the hypotheses:

  • If you have only one sample that you want to compare to a population mean, use a one-sample test.
  • If you have paired measurements (within-subjects design), use a dependent (paired) samples test.
  • If you have completely separate measurements from two unmatched groups (between-subjects design), use an independent (unpaired) samples test.
  • If you expect a difference between groups in a specific direction, use a one-tailed test.
  • If you don't have any expectations for the direction of a difference between groups, use a two-tailed test.

The only parametric correlation test is Pearson's r. The correlation coefficient (r) tells you the strength of a linear relationship between two quantitative variables.

However, to test whether the correlation in the sample is strong enough to be important in the population, you also need to perform a significance test of the correlation coefficient, usually a t test, to obtain a p value. This test uses your sample size to calculate how much the correlation coefficient differs from zero in the population.

You use a dependent-samples, one-tailed t test to assess whether the meditation exercise significantly improved math test scores. The test gives you:

  • a t value (test statistic) of 3.00
  • a p value of 0.0028

Although Pearson’s r is a test statistic, it doesn’t tell you anything about how significant the correlation is in the population. You also need to test whether this sample correlation coefficient is large enough to demonstrate a correlation in the population.

A t test can also determine how significantly a correlation coefficient differs from zero based on sample size. Since you expect a positive correlation between parental income and GPA, you use a one-sample, one-tailed t test. The t test gives you:

  • a t value of 3.08
  • a p value of 0.001
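Here is a hedged sketch of how both tests could be run in Python with SciPy (a reasonably recent version, since the alternative argument is used to encode the one-tailed hypotheses). The score and income arrays are fabricated stand-ins, so the output will not reproduce the exact t and p values quoted above.

```python
import numpy as np
from scipy import stats

# Experimental study: dependent-samples (paired), one-tailed t test.
pretest  = np.array([60, 72, 65, 70, 58, 75, 68, 71, 66, 74])  # made-up scores
posttest = np.array([66, 78, 70, 77, 64, 80, 75, 79, 72, 81])  # made-up scores
t_stat, p_value = stats.ttest_rel(posttest, pretest, alternative="greater")
print(f"paired t test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Correlational study: Pearson's r with a one-tailed significance test.
income = np.array([40, 55, 60, 62, 70, 85, 90, 100])         # made up, in $1000s
gpa    = np.array([2.8, 3.0, 3.1, 3.0, 3.3, 3.4, 3.6, 3.5])  # made up
r, p = stats.pearsonr(income, gpa, alternative="greater")
print(f"Pearson correlation: r = {r:.2f}, p = {p:.4f}")
```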


Step 5: Interpret your results

The final step of statistical analysis is interpreting your results.

Statistical significance

In hypothesis testing, statistical significance is the main criterion for forming conclusions. You compare your p value to a set significance level (usually 0.05) to decide whether your results are statistically significant or non-significant.

Statistically significant results are considered unlikely to have arisen solely due to chance. There is only a very low chance of such a result occurring if the null hypothesis is true in the population.

Example: Interpret your results (experimental study)
You compare your p value of 0.0028 to your significance threshold of 0.05. With a p value under this threshold, you can reject the null hypothesis. This means that you believe the meditation intervention, rather than random factors, directly caused the increase in test scores.

Example: Interpret your results (correlational study)
You compare your p value of 0.001 to your significance threshold of 0.05. With a p value under this threshold, you can reject the null hypothesis. This indicates a statistically significant correlation between parental income and GPA in male college students.

Note that correlation doesn’t always mean causation, because there are often many underlying factors contributing to a complex variable like GPA. Even if one variable is related to another, this may be because of a third variable influencing both of them, or indirect links between the two variables.

Effect size

A statistically significant result doesn’t necessarily mean that there are important real life applications or clinical outcomes for a finding.

In contrast, the effect size indicates the practical significance of your results. It's important to report effect sizes along with your inferential statistics for a complete picture of your results. You should also report interval estimates of effect sizes if you're writing an APA style paper.

Example: Effect size (experimental study)
With a Cohen's d of 0.72, there's medium to high practical significance to your finding that the meditation exercise improved test scores.

Example: Effect size (correlational study)
To determine the effect size of the correlation coefficient, you compare your Pearson's r value to Cohen's effect size criteria.

Decision errors

Type I and Type II errors are mistakes made in research conclusions. A Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s false.

You can aim to minimize the risk of these errors by selecting an optimal significance level and ensuring high power. However, there's a trade-off between the two errors, so a fine balance is necessary.

Frequentist versus Bayesian statistics

Traditionally, frequentist statistics emphasizes null hypothesis significance testing and always starts with the assumption of a true null hypothesis.

However, Bayesian statistics has grown in popularity as an alternative approach in the last few decades. In this approach, you use previous research to continually update your hypotheses based on your expectations and observations.

Bayes factor compares the relative strength of evidence for the null versus the alternative hypothesis rather than making a conclusion about rejecting the null hypothesis or not.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

Statistics

  • Student's t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval

Methodology

  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hostile attribution bias
  • Affect heuristic


Quantitative Data Analysis 101

The lingo, methods and techniques, explained simply.

By: Derek Jansen (MBA)  and Kerryn Warren (PhD) | December 2020

Quantitative data analysis is one of those things that often strikes fear in students. It's totally understandable – quantitative analysis is a complex topic, full of daunting lingo, like medians, modes, correlation and regression. Suddenly we're all wishing we'd paid a little more attention in math class…

The good news is that while quantitative data analysis is a mammoth topic, gaining a working understanding of the basics isn't that hard, even for those of us who avoid numbers and math. In this post, we'll break quantitative analysis down into simple, bite-sized chunks so you can approach your research with confidence.


Overview: Quantitative Data Analysis 101

  • What (exactly) is quantitative data analysis?
  • When to use quantitative analysis
  • How quantitative analysis works

The two “branches” of quantitative analysis

  • Descriptive statistics 101
  • Inferential statistics 101
  • How to choose the right quantitative methods
  • Recap & summary

What is quantitative data analysis?

Despite being a mouthful, quantitative data analysis simply means analysing data that is numbers-based – or data that can be easily “converted” into numbers without losing any meaning.

For example, category-based variables like gender, ethnicity, or native language could all be “converted” into numbers without losing meaning – for example, English could equal 1, French 2, etc.

This contrasts against qualitative data analysis, where the focus is on words, phrases and expressions that can't be reduced to numbers. If you're interested in learning about qualitative analysis, check out our post and video here.

What is quantitative analysis used for?

Quantitative analysis is generally used for three purposes.

  • Firstly, it's used to measure differences between groups. For example, the popularity of different clothing colours or brands.
  • Secondly, it's used to assess relationships between variables. For example, the relationship between weather temperature and voter turnout.
  • And third, it's used to test hypotheses in a scientifically rigorous way. For example, a hypothesis about the impact of a certain vaccine.

Again, this contrasts with qualitative analysis, which can be used to analyse people's perceptions and feelings about an event or situation. In other words, things that can't be reduced to numbers.

How does quantitative analysis work?

Well, since quantitative data analysis is all about analysing numbers, it's no surprise that it involves statistics. Statistical analysis methods form the engine that powers quantitative analysis, and these methods can vary from pretty basic calculations (for example, averages and medians) to more sophisticated analyses (for example, correlations and regressions).

Sounds like gibberish? Don’t worry. We’ll explain all of that in this post. Importantly, you don’t need to be a statistician or math wiz to pull off a good quantitative analysis. We’ll break down all the technical mumbo jumbo in this post.


As I mentioned, quantitative analysis is powered by statistical analysis methods. There are two main "branches" of statistical methods that are used – descriptive statistics and inferential statistics. In your research, you might only use descriptive statistics, or you might use a mix of both, depending on what you're trying to figure out. In other words, depending on your research questions, aims and objectives. I'll explain how to choose your methods later.

So, what are descriptive and inferential statistics?

Well, before I can explain that, we need to take a quick detour to explain some lingo. To understand the difference between these two branches of statistics, you need to understand two important words: population and sample.

First up, population. In statistics, the population is the entire group of people (or animals, or organisations, or whatever) that you're interested in researching. For example, if you were interested in researching Tesla owners in the US, then the population would be all Tesla owners in the US.

However, it's extremely unlikely that you're going to be able to interview or survey every single Tesla owner in the US. Realistically, you'll likely only get access to a few hundred, or maybe a few thousand owners using an online survey. This smaller group of accessible people whose data you actually collect is called your sample.

So, to recap – the population is the entire group of people you're interested in, and the sample is the subset of the population that you can actually get access to. In other words, the population is the full chocolate cake, whereas the sample is a slice of that cake.

So, why is this sample-population thing important?

Well, descriptive statistics focus on describing the sample, while inferential statistics aim to make predictions about the population, based on the findings within the sample. In other words, we use one group of statistical methods – descriptive statistics – to investigate the slice of cake, and another group of methods – inferential statistics – to draw conclusions about the entire cake. There I go with the cake analogy again…

With that out the way, let’s take a closer look at each of these branches in more detail.

Descriptive statistics vs inferential statistics

Branch 1: Descriptive Statistics

Descriptive statistics serve a simple but critically important role in your research – to describe your data set – hence the name. In other words, they help you understand the details of your sample. Unlike inferential statistics (which we'll get to soon), descriptive statistics don't aim to make inferences or predictions about the entire population – they're purely interested in the details of your specific sample.

When you're writing up your analysis, descriptive statistics are the first set of stats you'll cover, before moving on to inferential statistics. But, that said, depending on your research objectives and research questions, they may be the only type of statistics you use. We'll explore that a little later.

So, what kind of statistics are usually covered in this section?

Some common statistical tests used in this branch include the following:

  • Mean – this is simply the mathematical average of a range of numbers.
  • Median – this is the midpoint in a range of numbers when the numbers are arranged in numerical order. If the data set has an odd number of values, the median is the number right in the middle of the set. If it has an even number of values, the median is the midpoint between the two middle numbers.
  • Mode – this is simply the most commonly occurring number in the data set.
  • Standard deviation – this indicates how dispersed a range of numbers is; in other words, how close all the numbers are to the mean. In cases where most of the numbers are quite close to the average, the standard deviation will be relatively low. Conversely, in cases where the numbers are scattered all over the place, the standard deviation will be relatively high.
  • Skewness – as the name suggests, skewness indicates how symmetrical a range of numbers is. In other words, do they tend to cluster into a smooth bell curve shape in the middle of the graph, or do they skew to the left or right?

Feeling a bit confused? Let’s look at a practical example using a small data set.

Descriptive statistics example data

On the left-hand side is the data set. This details the bodyweight of a sample of 10 people. On the right-hand side, we have the descriptive statistics. Let’s take a look at each of them.

First, we can see that the mean weight is 72.4 kilograms. In other words, the average weight across the sample is 72.4 kilograms. Straightforward.

Next, we can see that the median is very similar to the mean (the average). This suggests that this data set has a reasonably symmetrical distribution (in other words, a relatively smooth, centred distribution of weights, clustered towards the centre).

In terms of the mode, there is no mode in this data set. This is because each number is present only once, so there cannot be a "most common number". If there were two people who both weighed 65 kilograms, for example, then the mode would be 65.

Next up is the standard deviation. A value of 10.6 indicates that there's quite a wide spread of numbers. We can see this quite easily by looking at the numbers themselves, which range from 55 to 90 – quite a stretch from the mean of 72.4.

And lastly, the skewness of -0.2 tells us that the data is very slightly negatively skewed. This makes sense since the mean and the median are slightly different.

As you can see, these descriptive statistics give us some useful insight into the data set. Of course, this is a very small data set (only 10 records), so we can’t read into these statistics too much. Also, keep in mind that this is not a list of all possible descriptive statistics – just the most common ones.
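If you'd like to produce this kind of summary yourself, here is a small Python sketch; the ten body weights are invented to loosely echo the example above (so they won't match its statistics exactly), and pandas plus SciPy are assumed.

```python
import pandas as pd
from scipy import stats

# Ten invented body weights in kilograms, loosely echoing the example above.
weights = pd.Series([55, 60, 63, 68, 71, 73, 76, 80, 88, 90])

print("Mean:", weights.mean())
print("Median:", weights.median())
# Note: when every value appears only once, pandas reports *all* values as modes,
# which corresponds to the "no meaningful mode" situation described above.
print("Mode:", weights.mode().tolist())
print("Std dev:", round(weights.std(), 1))
print("Skewness:", round(stats.skew(weights), 2))
```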

But why do all of these numbers matter?

While these descriptive statistics are all fairly basic, they’re important for a few reasons:

  • Firstly, they help you get both a macro and micro-level view of your data. In other words, they help you understand both the big picture and the finer details.
  • Secondly, they help you spot potential errors in the data – for example, if an average is way higher than you’d expect, or responses to a question are highly varied, this can act as a warning sign that you need to double-check the data.
  • And lastly, these descriptive statistics help inform which inferential statistical techniques you can use, as those techniques depend on the skewness (in other words, the symmetry and normality) of the data.

Simply put, descriptive statistics are really important, even though the statistical techniques used are fairly basic. All too often at Grad Coach, we see students skimming over the descriptives in their eagerness to get to the more exciting inferential methods, and then ending up with some very flawed results.

Don’t be a sucker – give your descriptive statistics the love and attention they deserve!

Examples of descriptive statistics

Branch 2: Inferential Statistics

As I mentioned, while descriptive statistics are all about the details of your specific data set – your sample – inferential statistics aim to make inferences about the population. In other words, you'll use inferential statistics to make predictions about what you'd expect to find in the full population.

What kind of predictions, you ask? Well, there are two common types of predictions that researchers try to make using inferential stats:

  • Firstly, predictions about differences between groups – for example, height differences between children grouped by their favourite meal or gender.
  • And secondly, relationships between variables – for example, the relationship between body weight and the number of hours a week a person does yoga.

In other words, inferential statistics (when done correctly) allow you to connect the dots and make predictions about what you expect to see in the real-world population, based on what you observe in your sample data. For this reason, inferential statistics are used for hypothesis testing – in other words, to test hypotheses that predict changes or differences.

Inferential statistics are used to make predictions about what you’d expect to find in the full population, based on the sample.

Of course, when you’re working with inferential statistics, the composition of your sample is really important. In other words, if your sample doesn’t accurately represent the population you’re researching, then your findings won’t necessarily be very useful.

For example, if your population of interest is a mix of 50% male and 50% female , but your sample is 80% male , you can’t make inferences about the population based on your sample, since it’s not representative. This area of statistics is called sampling, but we won’t go down that rabbit hole here (it’s a deep one!) – we’ll save that for another post .

What statistics are usually used in this branch?

There are many, many different statistical analysis methods within the inferential branch and it’d be impossible for us to discuss them all here. So we’ll just take a look at some of the most common inferential statistical methods so that you have a solid starting point.

First up are t-tests. T-tests compare the means (the averages) of two groups of data to assess whether they're statistically significantly different. In other words, is the gap between the two group means bigger than you'd expect from chance alone?

This type of testing is very useful for understanding just how similar or different two groups of data are. For example, you might want to compare the mean blood pressure between two groups of people – one that has taken a new medication and one that hasn’t – to assess whether they are significantly different.
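As a rough illustration (not the blog's own example data), here's what that blood pressure comparison could look like in Python using scipy – the group sizes, means and spread are made-up assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical systolic blood pressure readings (mmHg) -- simulated,
# not real trial data.
medication = rng.normal(loc=122, scale=10, size=40)  # took the new medication
control = rng.normal(loc=130, scale=10, size=40)     # didn't

t_stat, p_value = stats.ttest_ind(medication, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below the conventional 0.05 threshold would suggest the
# difference in mean blood pressure is unlikely to be down to chance alone.
```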

Kicking things up a level, we have ANOVA, which stands for "analysis of variance". This test is similar to a t-test in that it compares the means of various groups, but ANOVA allows you to analyse multiple groups, not just two. So it's basically a t-test on steroids…
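Here's a similar sketch for a one-way ANOVA in Python, comparing three hypothetical groups – the group labels and scores are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical exam scores under three teaching methods -- made-up data.
method_a = rng.normal(70, 8, 30)
method_b = rng.normal(74, 8, 30)
method_c = rng.normal(78, 8, 30)

f_stat, p_value = stats.f_oneway(method_a, method_b, method_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value says at least one group mean differs; post-hoc tests
# are needed to pin down which groups differ from which.
```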

Next, we have correlation analysis. This type of analysis assesses the relationship between two variables. In other words, if one variable increases, does the other variable also increase, decrease or stay the same? For example, if the average temperature goes up, do average ice cream sales increase too? We'd intuitively expect some sort of relationship between these two variables, but correlation analysis allows us to measure that relationship scientifically.
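Sticking with the temperature and ice cream example, here's a minimal Python sketch using scipy's Pearson correlation – the daily figures are made up for illustration.

```python
from scipy import stats

# Hypothetical daily observations: average temperature (°C) and ice cream sales.
temperature = [18, 21, 24, 26, 29, 31, 33, 35]
sales = [120, 135, 160, 175, 210, 230, 250, 270]

r, p_value = stats.pearsonr(temperature, sales)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# r close to +1 indicates a strong positive linear relationship:
# as temperature rises, sales tend to rise too.
```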

Lastly, we have regression analysis – this is quite similar to correlation in that it assesses the relationship between variables, but it goes a step further: it models how one variable changes as another changes, which helps you investigate (though not prove) cause and effect. In other words, does one variable actually drive the other to move, or do they just happen to move together thanks to some other force? Just because two variables correlate doesn't necessarily mean that one causes the other.
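And here's a minimal sketch of a simple linear regression on the same made-up data, using scipy's linregress. Note the caveat in the comments: the fitted line quantifies the relationship but doesn't by itself establish causation.

```python
from scipy import stats

# Same made-up data: can temperature be used to *predict* sales?
temperature = [18, 21, 24, 26, 29, 31, 33, 35]
sales = [120, 135, 160, 175, 210, 230, 250, 270]

result = stats.linregress(temperature, sales)
print(f"sales = {result.slope:.1f} * temperature + {result.intercept:.1f}")
print(f"R^2 = {result.rvalue ** 2:.2f}")
# The fitted line quantifies how sales change per degree of temperature,
# but on its own it cannot prove that temperature *causes* the change.
```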

Stats overload…

I hear you. To make this all a little more tangible, let’s take a look at an example of a correlation in action.

Here’s a scatter plot demonstrating the correlation (relationship) between weight and height. Intuitively, we’d expect there to be some relationship between these two variables, which is what we see in this scatter plot. In other words, the results tend to cluster together in a diagonal line from bottom left to top right.

Sample correlation

As I mentioned, these are just a handful of inferential techniques – there are many, many more. Importantly, each statistical method has its own assumptions and limitations.

For example, some methods only work with normally distributed (parametric) data, while other methods are designed specifically for non-parametric data. And that’s exactly why descriptive statistics are so important – they’re the first step to knowing which inferential techniques you can and can’t use.

Remember that every statistical method has its own assumptions and limitations,  so you need to be aware of these.

How to choose the right analysis method

To choose the right statistical methods, you need to think about two important factors:

  • The type of quantitative data you have (specifically, level of measurement and the shape of the data). And,
  • Your research questions and hypotheses

Let’s take a closer look at each of these.

Factor 1 – Data type

The first thing you need to consider is the type of data you’ve collected (or the type of data you will collect). By data types, I’m referring to the four levels of measurement – namely, nominal, ordinal, interval and ratio. If you’re not familiar with this lingo, check out the video below.

Why does this matter?

Well, because different statistical methods and techniques require different types of data. This is one of the “assumptions” I mentioned earlier – every method has its assumptions regarding the type of data.

For example, some techniques work with categorical data (for example, yes/no type questions, or gender or ethnicity), while others work with continuous numerical data (for example, age, weight or income) – and, of course, some work with multiple data types.

If you try to use a statistical method that doesn’t support the data type you have, your results will be largely meaningless . So, make sure that you have a clear understanding of what types of data you’ve collected (or will collect). Once you have this, you can then check which statistical methods would support your data types here .

If you haven’t collected your data yet, you can work in reverse and look at which statistical method would give you the most useful insights, and then design your data collection strategy to collect the correct data types.

Another important factor to consider is the shape of your data . Specifically, does it have a normal distribution (in other words, is it a bell-shaped curve, centred in the middle) or is it very skewed to the left or the right? Again, different statistical techniques work for different shapes of data – some are designed for symmetrical data while others are designed for skewed data.

This is another reminder of why descriptive statistics are so important – they tell you all about the shape of your data.
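As a quick illustration of using descriptives to check shape, here's a minimal Python sketch (the ages are made-up numbers) that computes skewness and runs a Shapiro-Wilk normality test – one common, though not the only, way to check the normality assumption.

```python
from scipy import stats

# Hypothetical survey variable -- e.g. respondents' ages (made-up values).
ages = [21, 22, 23, 23, 24, 25, 26, 27, 29, 31, 34, 38, 45, 52, 61]

print("Skewness:", round(float(stats.skew(ages)), 2))  # positive -> right-skewed

stat, p = stats.shapiro(ages)  # Shapiro-Wilk test of normality
print(f"Shapiro-Wilk p = {p:.3f}")
# A small p-value suggests the data deviate from a normal distribution,
# nudging you toward non-parametric techniques (e.g. Mann-Whitney U
# rather than an independent t-test) for the inferential stage.
```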

Factor 2: Your research questions

The next thing you need to consider is your specific research questions, as well as your hypotheses (if you have some). The nature of your research questions and research hypotheses will heavily influence which statistical methods and techniques you should use.

If you’re just interested in understanding the attributes of your sample (as opposed to the entire population), then descriptive statistics are probably all you need. For example, if you just want to assess the means (averages) and medians (centre points) of variables in a group of people.

On the other hand, if you aim to understand differences between groups or relationships between variables and to infer or predict outcomes in the population, then you’ll likely need both descriptive statistics and inferential statistics.

So, it's really important to get very clear about your research aims and research questions, as well as your hypotheses, before you start looking at which statistical techniques to use.

Never shoehorn a specific statistical technique into your research just because you like it or have some experience with it. Your choice of methods must align with all the factors we’ve covered here.

Time to recap…

You’re still with me? That’s impressive. We’ve covered a lot of ground here, so let’s recap on the key points:

  • Quantitative data analysis is all about analysing number-based data (which includes categorical and numerical data) using various statistical techniques.
  • The two main branches of statistics are descriptive statistics and inferential statistics. Descriptives describe your sample, whereas inferentials make predictions about what you'll find in the population.
  • Common descriptive statistical methods include the mean (average), median, standard deviation and skewness.
  • Common inferential statistical methods include t-tests, ANOVA, correlation and regression analysis.
  • To choose the right statistical methods and techniques, you need to consider the type of data you're working with, as well as your research questions and hypotheses.



Part II: Data Analysis Methods in Quantitative Research


We started this module with levels of measurement as a way to categorize our data. Data analysis is directed toward answering the original research question and achieving the study purpose (or aim). Now, we are going to delve into two main statistical analyses to describe our data and make inferences about our data:

Descriptive Statistics and Inferential Statistics.

Descriptive Statistics:

Before you panic, we will not be going into statistical analyses very deeply. We want to simply get a good overview of some of the types of general statistical analyses so that it makes some sense to us when we read results in published research articles.

Descriptive statistics summarize or describe the characteristics of a data set. This is a method of simply organizing and describing our data. Why? Because data that are not organized in some fashion are super difficult to interpret.

Let's say our sample is golden retrievers (population: "canines"). Our descriptive statistics tell us more about the sample:

  • 57% of our sample is male, 43% female
  • The mean age is 4 years
  • Mode is 6 years
  • Median age is 5.5 years


Let’s explore some of the types of descriptive statistics.

Frequency Distributions: A frequency distribution describes the number of observations for each possible value of a measured variable. The values are arranged from lowest to highest, with a count of how many times each value occurred.

For example, if 18 students have pet dogs, dog ownership has a frequency of 18.

We might also ask what other types of pets students have – maybe cats, fish, and hamsters. We find that 2 students have hamsters, 9 have fish, and 1 has a cat.

Listed this way, it is very difficult to draw any meaningful interpretation from the various pets, yes?

Now, let’s take those same pets and place them in a frequency distribution table.                          

Type of Pet | Frequency
Dog | 18
Fish | 9
Hamsters | 2
Cat | 1

As we can now see, this is much easier to interpret.
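If you wanted to tally these frequencies in code rather than by hand, here is a minimal sketch using Python's standard library (the pet list below simply reconstructs the counts from the table above):

```python
from collections import Counter

# Reconstruct the raw observations behind the table above:
# one entry per student, per pet.
pets = ["Dog"] * 18 + ["Fish"] * 9 + ["Hamster"] * 2 + ["Cat"]

frequency = Counter(pets)
for pet, count in frequency.most_common():  # most_common() sorts by count
    print(f"{pet:<8} {count}")
```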

Let's say that we want to know how many books our sample population of students has read in the last year. We collect our data and find this:

Number of Books | Frequency (how many students read that number of books)
13 | 1
12 | 6
11 | 18
10 | 58
9 | 99
8 | 138
7 | 99
6 | 56
5 | 21
4 | 8
3 | 2
2 | 1
1 | 0

We can then take that table and plot it out on a frequency distribution graph. This makes it much easier to see how the numbers are dispersed. Easier on the eyes, yes?

[Histogram: frequency distribution of books read]
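As a minimal sketch, the same frequency distribution could be drawn with Python's matplotlib; the dictionary below simply re-enters the table above:

```python
import matplotlib.pyplot as plt

# Frequency distribution from the table above: books read -> student count.
books = {13: 1, 12: 6, 11: 18, 10: 58, 9: 99, 8: 138, 7: 99,
         6: 56, 5: 21, 4: 8, 3: 2, 2: 1, 1: 0}

plt.bar(list(books.keys()), list(books.values()))
plt.xlabel("Number of books read in the last year")
plt.ylabel("Number of students")
plt.title("Frequency distribution of books read")
plt.show()
```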

Here's another example showing symmetrical, positively skewed, and negatively skewed distributions:

[Image: symmetrical vs skewed distributions – from "Understanding Descriptive Statistics" by Sarang Narkhede, Towards Data Science]

Correlation: Relationships between two research variables are called correlations. Remember, correlation is not cause-and-effect. Correlations simply measure the extent of the relationship between two variables. To measure correlation in descriptive statistics, the statistical analysis called Pearson's correlation coefficient (r) is often used. You do not need to know how to calculate this for this course, but do remember that analysis test because you will often see it in published research articles. There really are no set guidelines on what measurement constitutes a "strong" or "weak" correlation, as it really depends on the variables being measured.

However, possible values for correlation coefficients range from -1.00 through 0.00 to +1.00. A value of +1 means the two variables are perfectly positively correlated (as one goes up, the other goes up), a value of -1 means they are perfectly negatively correlated (as one goes up, the other goes down), and a value of r = 0 means the two variables are not linearly related.

Often, the data will be presented on a scatter plot. Here, we can view the data, and there appears to be a straight-line (linear) trend between height and weight. The association (or correlation) is positive – that is, weight increases with height. The Pearson correlation coefficient in this case was r = 0.56.


A type I error is made by rejecting a null hypothesis that is true. In other words, there was no real difference, but the researcher concluded that there was one.

A type II error is made by accepting the null hypothesis when, in fact, it is false. In other words, there actually was a difference, but the researcher concluded there was not.

Hypothesis Testing Procedures: In a general sense, testing a hypothesis follows a systematic methodology. Remember, a hypothesis is an educated guess about the outcome. If we set the tests up incorrectly, we might get results that are invalid – and sometimes this is super difficult to get right. A central purpose of inferential statistics is to test hypotheses. The general procedure is:

  • Selecting a statistical test. Lots of factors go into this, including levels of measurement of the variables.
  • Specifying the level of significance. Usually 0.05 is chosen.
  • Computing a test statistic. Lots of software programs to help with this.
  • Determining degrees of freedom ( df ). This refers to the number of observations free to vary about a parameter. Computing this is easy (but you don’t need to know how for this course).
  • Comparing the test statistic to a theoretical value. Theoretical values exist for all test statistics, which is compared to the study statistics to help establish significance.

Some of the common inferential statistics you will see include:

Comparison tests: Comparison tests look for differences among group means. They can be used to test the effect of a categorical variable on the mean value of some other characteristic.

T-tests are used when comparing the means of precisely two groups (e.g., the average heights of men and women). ANOVA and MANOVA tests are used when comparing the means of more than two groups (e.g., the average heights of children, teenagers, and adults).

  • t-tests (compare differences between two groups) – either a paired t-test (example: What is the effect of two different test prep programs on the average exam scores of students from the same class?) or an independent t-test (example: What is the difference in average exam scores between students from two different schools?) – both variants are sketched in code below
  • analysis of variance (ANOVA, which compares differences across three or more groups) (example: What is the difference in average pain levels among post-surgical patients given three different painkillers?) or MANOVA (which compares differences across three or more groups on two or more outcomes) (example: What is the effect of flower species on petal length, petal width, and stem length?)
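To make the paired versus independent distinction concrete, here is a minimal sketch in Python using scipy. The scores are simulated, made-up numbers, not data from any real class or school.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Paired t-test: the *same* students measured before and after a
# (hypothetical) test prep programme.
before = rng.normal(65, 10, 25)
after = before + rng.normal(4, 5, 25)  # simulated improvement
t, p = stats.ttest_rel(before, after)
print(f"paired:      t = {t:.2f}, p = {p:.4f}")

# Independent t-test: students from two *different* schools.
school_1 = rng.normal(68, 10, 30)
school_2 = rng.normal(72, 10, 30)
t, p = stats.ttest_ind(school_1, school_2)
print(f"independent: t = {t:.2f}, p = {p:.4f}")
```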

Correlation tests: Correlation tests check whether variables are related without hypothesizing a cause-and-effect relationship.

  • Pearson r (measures the strength and direction of the relationship between two variables) (example: How are latitude and temperature related?)

Nonparametric tests: Non-parametric tests don’t make as many assumptions about the data, and are useful when one or more of the common statistical assumptions are violated. However, the inferences they make aren’t as strong as with parametric tests.

  • chi-squared (χ²) test (measures differences in proportions). Chi-square tests are often used to test hypotheses about categorical data: the chi-square statistic compares the size of any discrepancies between the expected results and the actual results, given the size of the sample and the number of variables in the relationship. For example, a shop could apply a chi-square test to determine which type of candy is most popular and make sure its shelves are well stocked, or a scientist studying the offspring of cats could test the likelihood of certain genetic traits being passed to a litter of kittens – see the sketch below.
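Here is a minimal sketch of a chi-square test in Python with scipy, using a made-up candy-preference contingency table (the store labels and counts are hypothetical).

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: candy preference by store location.
#            Chocolate  Gummies  Hard candy
observed = [[45,        30,      25],   # Store A
            [30,        40,      30]]   # Store B

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# A small p-value indicates the preference proportions differ between
# stores by more than chance alone would explain.
```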

Inferential Versus Descriptive Statistics Summary Table

Inferential Statistics | Descriptive Statistics
Used to make conclusions about the population by applying analytical tools to the sample data. | Used to describe the characteristics of the data.
Hypothesis testing is the main tool used. | Measures of central tendency and measures of dispersion are the main tools used.
Used to make inferences about an unknown population. | Used to describe the characteristics of a known sample or population.
Common measures include t-tests, ANOVA, the chi-squared test, etc. | Common measures include variance, range, mean, median, etc.

Statistical Significance Versus Clinical Significance

Finally, when it comes to statistical significance in hypothesis testing, the conventional probability threshold in nursing is <0.05. A p-value (probability) is a statistical measurement used to validate a hypothesis against the measured data in the study. Meaning, it measures the likelihood that the observed results were actually due to the intervention, or whether the results were just due to chance. The p-value, in measuring the probability of obtaining the observed results, assumes the null hypothesis is true.

The lower the p-value, the greater the statistical significance of the observed difference.

In the example earlier about our diabetic patients receiving online diet education, let's say we had p = 0.03. Would that be a statistically significant result?

If you answered yes, you are correct!

What if our result was p = 0.8?

Not significant. Good job!

That's pretty straightforward, right? Below 0.05, significant. At 0.05 or above, not significant.

Could we have significance clinically even if we do not have statistically significant results? Yes. Let’s explore this a bit.

Statistical hypothesis testing provides little information for interpretation purposes. It's pretty mathematical and we can still get it wrong. Additionally, attaining statistical significance does not really state whether a finding is clinically meaningful. With a large enough sample, even a very tiny relationship may be statistically significant. Clinical significance, by contrast, is the practical importance of research. Meaning, we need to ask what the palpable effects may be on the lives of patients or on healthcare decisions.

Remember, hypothesis testing cannot prove anything. It also cannot tell us much other than "yeah, it's probably likely that there would be some change with this intervention". Hypothesis testing tells us the likelihood that the outcome was due to an intervention or influence and not just chance. Also, as nurses and clinicians, we are not concerned only with a group of people – we are concerned with the individual, holistic level. The goal of evidence-based practice is to use the best evidence for decisions about specific individual needs.


Additionally, begin your Discussion section. What are the implications for practice? Is there little evidence or a lot? Would you recommend additional studies? If so, what type of study would you recommend, and why?


  • Were all the important results discussed?
  • Did the researchers discuss any study limitations and their possible effects on the credibility of the findings? In discussing limitations, were key threats to the study’s validity and possible biases reviewed? Did the interpretations take limitations into account?
  • What types of evidence were offered in support of the interpretation, and was that evidence persuasive? Were results interpreted in light of findings from other studies?
  • Did the researchers make any unjustifiable causal inferences? Were alternative explanations for the findings considered? Were the rationales for rejecting these alternatives convincing?
  • Did the interpretation consider the precision of the results and/or the magnitude of effects?
  • Did the researchers draw any unwarranted conclusions about the generalizability of the results?
  • Did the researchers discuss the study’s implications for clinical practice or future nursing research? Did they make specific recommendations?
  • If yes, are the stated implications appropriate, given the study’s limitations and the magnitude of the effects as well as evidence from other studies? Are there important implications that the report neglected to include?
  • Did the researchers mention or assess clinical significance? Did they make a distinction between statistical and clinical significance?
  • If clinical significance was examined, was it assessed in terms of group-level information (e.g., effect sizes) or individual-level results? How was clinical significance operationalized?


Data Analysis in Quantitative Research

Living reference work entry by Yong Moon Jung, first published online 28 December 2017.

Quantitative data analysis serves as part of an essential process of evidence-making in health and social sciences. It is adopted for any type of research question and design, whether descriptive, explanatory, or causal. However, compared with its qualitative counterpart, quantitative data analysis has less flexibility. Conducting quantitative data analysis requires a prerequisite understanding of statistical knowledge and skills. It also requires rigor in the choice of appropriate analysis model and in the interpretation of the analysis outcomes. Basically, the choice of appropriate analysis techniques is determined by the type of research question and the nature of the data. In addition, different analysis techniques require different assumptions about the data. This chapter provides introductory guides for readers to assist them with informed decision-making in choosing the correct analysis models. To this end, it begins with a discussion of the levels of measurement: nominal, ordinal, and scale. Some commonly used analysis techniques in univariate, bivariate, and multivariate data analysis are then presented with practical examples. Example analysis outcomes are produced with SPSS (Statistical Package for the Social Sciences).



Citation: Jung, Y.M. (2018). Data Analysis in Quantitative Research. In: Liamputtong, P. (ed.) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-2779-6_109-1


Data Interpretation: Definition and Steps with Examples

Data interpretation is the process of collecting data from one or more sources, analyzing it using appropriate methods, & drawing conclusions.

A good data interpretation process is key to making your data usable. It will help you make sure you’re drawing the correct conclusions and acting on your information.

No matter what, data is everywhere in the modern world. Organizations fall into two groups: those drowning in data or not using it appropriately, and those benefiting from it.

In this blog, you will learn the definition of data interpretation and its primary steps and examples.

What is Data Interpretation?

Data interpretation is the process of reviewing data and arriving at relevant conclusions using various analytical research methods. Data analysis assists researchers in categorizing, manipulating, and summarizing data to answer critical questions.


In business terms, the interpretation of data is the execution of various processes. This process analyzes and revises data to gain insights and recognize emerging patterns and behaviors. These conclusions will assist you as a manager in making an informed decision based on numbers while having all of the facts at your disposal.

Importance of Data Interpretation

Raw data is useless unless it's interpreted. Data interpretation is important to businesses and people alike, because well-interpreted data supports informed decisions.

Make better decisions

Any decision is based on the information that is available at the time. People used to think that many diseases were caused by bad blood, which was one of the four humors. So, the solution was to get rid of the bad blood. We now know that things like viruses, bacteria, and immune responses can cause illness and can act accordingly.

In the same way, when you know how to collect and understand data well, you can make better decisions. You can confidently choose a path for your organization or even your life instead of working with assumptions.

The most important thing is to follow a transparent process to reduce mistakes and decision fatigue.

Find trends and take action

Another practical use of data interpretation is to get ahead of trends before they reach their peak. Some people have made a living by researching industries, spotting trends, and then making big bets on them.


With the proper data interpretations and a little bit of work, you can catch the start of trends and use them to help your business or yourself grow. 

Better resource allocation

The last benefit of data interpretation we will discuss is the ability to use people, tools, money, etc., more efficiently. For example, if you know via strong data interpretation that a market is underserved, you'll go after it with more energy and win.

In the same way, you may find out that a market you thought was a good fit is actually bad. This could be because the market is too big for your products to serve, there is too much competition, or something else.

No matter what, you can move the resources you need faster and better to get better results.

What are the steps in interpreting data?

Here are some steps to interpreting data correctly.

Gather the data

The very first step in data interpretation is gathering all relevant data. You can do this by first visualizing it in a bar chart, graph, or pie chart. The aim of this step is to analyze the data accurately and without bias. Now is the time to recall how you conducted your research.

Here are two questions that will help you understand this better:

  • Were there any flaws or changes that occurred during the data collection process?
  • Have you saved any observational notes or indicators?

You can proceed to the next stage when you have all of your data.

Develop your discoveries

This is a summary of your findings. Here, you thoroughly examine the data to identify trends, patterns, or behavior. If you are researching a group of people using a sample population, this is the section where you examine behavioral patterns. You can compare these deductions to previous data sets, similar data sets, or general hypotheses in your industry. This step’s goal is to compare these deductions before drawing any conclusions.

Draw conclusions

After you've developed your findings from your data sets, you can draw conclusions based on the trends you've discovered. Your conclusions should address the questions that prompted your research. If they do not, ask why; that may prompt additional research or new questions.


Give recommendations

The data interpretation procedure comes to a close with this stage. Every research conclusion must include a recommendation. As recommendations are a summary of your findings and conclusions, they should be brief. There are only two options for recommendations: you can either recommend a course of action or suggest additional research.

Data interpretation examples

Here are two examples of data interpretations to help you understand it better:

Let's say your users fall into four age groups, so a company can see which age group likes its content or product. Based on bar charts or pie charts, it can develop a marketing strategy to reach uninvolved groups or an outreach strategy to grow its core user base.
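As a rough sketch of this example in code, the snippet below uses Python's pandas to compute each age group's share of a user base – the group labels and counts are invented purely for illustration.

```python
import pandas as pd

# Hypothetical user records -- the age-group labels and counts are
# made-up assumptions, not real product data.
users = pd.DataFrame({
    "age_group": ["18-24"] * 420 + ["25-34"] * 610 + ["35-44"] * 230 + ["45+"] * 90
})

# Share of the user base in each age group, as percentages.
share = users["age_group"].value_counts(normalize=True).sort_index() * 100
print(share.round(1))
# Dominant groups show where the core user base sits; under-represented
# groups are candidates for a targeted outreach campaign.
```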

Another example of data interpretation is businesses' use of a recruitment CRM. They use it to find candidates, track their progress, and manage the entire hiring process, then interpret that data to determine how they can better automate their workflow.

Overall, data interpretation is an essential part of data-driven decision-making. It should be performed regularly as part of an iterative interpretation process. Investors, developers, and sales and acquisition professionals can benefit from routine data interpretation. It is what you do with those insights that determines the success of your business.

Contact QuestionPro experts if you need assistance conducting research or creating a data analysis. We can walk you through the process and help you make the most of your data.


How to analyse and interpret data

You’ve run your survey, have your data and now it is time to make sense of the information you have collected and understand what the data is telling you.

Start your analysis early

Considering the analysis at all stages of the research process makes it much easier to make sense of your findings. For example, earlier in the series we discussed the importance of planning your study in terms of scale, e.g. talking to enough of the right people/employers so you can be confident in the accuracy of your results. We also advised that questions are written in such a way that enables easy interpretation of the answers.

Once the survey is underway you should be able to look at question summaries on survey platforms like Survey Monkey, which will indicate the direction and strength of findings. You can use these to:

  • Check data quality:  At some questions you might have ‘don’t know’ answer options, but if these attract too high a level of response, you’re not going to get much definitive insight. If this is the case, you should consider re-wording your question and adjusting your answer options.

Similarly, if you have allowed an ‘Other (specify)’ response option to closed questions, you should check how often this is being used. Remember that if you have a lot of free text answers, you’re going to need to manually group these into themes which will be a time-consuming process. It’s worth scanning through these at an early stage during the fieldwork period to see if you can add an extra pre-defined response option to save more work further down the line.

  • Discuss emerging / interim findings:  Looking at results early can also give you a head start on analysis and identify themes that are developing strongly. This will help set an overall framework for you to fill in with more detail later and help shape more formal analysis.
  • Inform your more formal / detailed analysis outputs:  If you have the ability to produce data outputs like tabulations in-house then looking at question summaries early on can help shape your final data-set. A look through early findings might identify a group of interest you might have not previously considered. For example, in a survey about first semester satisfaction and likelihood to drop out, one question might report a high level of dissatisfaction among students with how their course is delivered. It would be a good idea to track these students though the survey and compare their answers to other questions against those who are satisfied with course delivery to try and explain why they are dissatisfied.

Top tip: Try and make your data as user friendly as possible to minimise extra data cleaning at the end which is likely to distract you from actually interpreting and understanding what the data is telling you

Organising your final data set

You’ll want to organise your data so it doesn’t feel overwhelming, but also so it allows you to do some more detailed analysis. As well as looking at question summaries / frequencies, you’re likely to want to understand how answers differ depending on the type of (prospective) student, graduate or employer.

Sticking with the course satisfaction example. As well as understanding the total percentage of students likely to drop out, you’ll want to understand how this sentiment varies by different types of student based on characteristics like their gender or ethnicity, whether they are a home / international student etc.

You will also want to understand whether there is a specific type of student driving the overall finding. For example, if a high proportion of your survey responses are from international students who are more likely to say they want to drop out, then the overall figure reporting that are likely to drop out will also be high.

Running cross-tabulations – crossing each question against different groups of interest – is an effective way of organising your data to do this sort of analysis. On platforms like Survey Monkey, it is possible to filter results by certain groups or compare groups by questions and present this in cross-tabulations. If you're able to export your survey data into Excel, then you can also replicate these sorts of outputs using pivot tables (a code-based sketch follows the pointers below).

If you have the skills in-house to tabulate your data, you may find the following pointers useful:

  • For questions that use rating scales, think about how to summarise this data, for example, creating a summary row of ‘agree’/‘strongly agree’ grouped together and ‘disagree’/’strongly disagree’ grouped together
  • Think about combining certain questions in one data table, where a series of questions will be more compelling to report if combined. For example, a question capturing levels of loneliness with another that captures likelihood to drop out of their course.
  • For numerical questions, think about showing the average to avoid having to manually calculate this at the analysis stage
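As flagged above, if you can export your survey data out of your platform, a cross-tabulation can also be produced in a few lines of code. The sketch below uses Python's pandas library on a small made-up extract – the column names and answer labels are assumptions for illustration, not a real data set.

```python
import pandas as pd

# Hypothetical survey extract: one row per responding student.
# Column names and answer labels are assumptions for illustration.
df = pd.DataFrame({
    "student_type": ["Home", "Home", "International", "International",
                     "Home", "International", "Home", "International"],
    "likely_to_drop": ["No", "No", "Yes", "No", "No", "Yes", "Yes", "No"],
})

# Cross-tabulate drop-out intention by student type, as row percentages.
table = pd.crosstab(df["student_type"], df["likely_to_drop"], normalize="index") * 100
print(table.round(1))
```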

Top tip: Remind yourself of the research aims and objectives when organising your data. This will help focus your mind and ensure you have all the data you need to answer the key questions

Interpreting data

The best way to conduct quantitative analysis is by taking a methodical approach and where possible, involving at least one other person so you can talk through your respective interpretations of the findings, challenge one another, and agree on a coherent narrative.

  • Look through the question summaries. Read them and let them sink in – you need a little undisturbed thinking time. If you are working as a group, split up the questions and assign each colleague a section.
  • Think about what questions you need to answer to fulfil the research brief. Set yourself some clear follow-up questions – if you were your stakeholder(s) what questions would you ask next? What hypotheses do you have about what might be going on?
  • Use data tables to answer these questions. Be selective and let your questions dictate what you look at. If your data tables do not answer these questions, think about how they could be restructured so they do answer them.
  • Look out for differences by groups of interest – which ones are the most important to pull out?
  • Plan your report around answering the research questions. Using your data as evidence, find the ‘story’.
  • If you are working in a group, at this stage come together to discuss the overall story and fine tune the narrative. Challenge each other and check that there are no contradictions / there is an agreed message.

Types of question

How you ask your questions will determine the sort of data you collect and the type of analysis you can conduct. Think about how you will use your survey data at the end and what this means for how you ask your questions. Some common question types include:

  • Scales with labels or numbers, for example 'very good, fairly good, neither good nor poor, fairly poor, very poor' or a scale of 1 to 5 with 1 being 'very good' and 5 being 'very poor'. Scales should always be balanced, with the same number of 'positive' options as 'negative' options, and the two ends of the scale should be genuine polar opposites (e.g. 'very good' and 'very poor' rather than 'excellent' at one end and 'very poor' at the other).
  • Open questions, where a respondent answers in their own words. These are best used when you don't have a good idea of what the answer might be or when you want to collect quotes, for example, 'What have you enjoyed most about your time at university?'
  • Closed questions, where an answer is selected from a pre-determined list. Avoid introducing response bias by rotating the order in which response options appear, so the same answer doesn't always sit at the top of the list. Make sure important answers aren't missing from the list – add an 'Other (specify)' option as a safety net – and that two or more contradictory answers can't both be selected.
  • Ranking, used to find the order of preference for items on a list. This type of question is most useful for differentiating between items when everything is obviously either a 'good thing' or a 'bad thing'. The list should be limited to 7 or 8 items – for longer lists, asking for 1st, 2nd and 3rd preferences would be a better option.

Top Tip: Consider limiting the number of open questions you include in your survey – firstly they are more burdensome for the respondent to answer and secondly, they are more time consuming for you to analyse.

Do:

  • Work out what your overall 'story' is before you start to write it down
  • Consider rebasing some questions so that the story makes more 'sense' – often the findings will be more powerful if rebased to be on 'all respondents' even if the question was asked only of a subset
  • Consider what your key variables are – what 'clever' things might you be able to do with a handful of questions to really add value

Don't:

  • Report questions in question order or feel obliged to report every question
  • Get waylaid by 'interesting' detail – focus on the main findings first
  • Be scared of reporting the obvious

Top tip: Don’t underestimate the thinking time needed when conducting quantitative analysis. Build in time to do your own thinking, as well as time to discuss your thinking with colleagues.

A few final pointers on format, accessibility and length

As part of planning the project, you will have already decided how you are going to run your survey –online, telephone, or indeed a mix of these.

Remember, how you run your survey can influence how questions are answered. For example, you want to ask an applicant a closed question – Why did you shortlist this university in your application? When you display this question in an online survey, the applicant will see a list of response options to choose from. However, unless you read out these options in the equivalent telephone survey, they will not 'have sight' of them. Their exposure to the list of response options differs depending on how they are surveyed, which may result in different answers being given.

Make sure you cater for all needs, for example large font or high contrast colours when completing an online survey for those with visual impairments or advance sight of the questions for those completing a telephone survey but are hard of hearing.

Typically, the shorter the questionnaire, the better. The longer a questionnaire is, the higher the dropout rate. As a rule of thumb, it is good practice to keep an online questionnaire to no more than 10 minutes, and a telephone questionnaire to 15 minutes.

Top tip: If you are running an online survey you should think about how each question will look visually. Make sure questions display correctly for all devices including mobile phones. Don’t let bad formatting put respondents off completing the survey!


An Overview of the Fundamentals of Data Management, Analysis, and Interpretation in Quantitative Research

  • PMID: 36868925
  • DOI: 10.1016/j.soncn.2023.151398

Objectives: To provide an overview of three consecutive stages involved in the processing of quantitative research data (ie, data management, analysis, and interpretation) with the aid of practical examples to foster enhanced understanding.

Data sources: Published scientific articles, research textbooks, and expert advice were used.

Conclusion: Typically, a considerable amount of numerical research data is collected that require analysis. On entry into a data set, data must be carefully checked for errors and missing values, and then variables must be defined and coded as part of data management. Quantitative data analysis involves the use of statistics. Descriptive statistics help summarize the variables in a data set to show what is typical for a sample. Measures of central tendency (ie, mean, median, mode), measures of spread (standard deviation), and parameter estimation measures (confidence intervals) may be calculated. Inferential statistics aid in testing hypotheses about whether or not a hypothesized effect, relationship, or difference is likely true. Inferential statistical tests produce a value for probability, the P value. The P value informs about whether an effect, relationship, or difference might exist in reality. Crucially, it must be accompanied by a measure of magnitude (effect size) to help interpret how small or large this effect, relationship, or difference is. Effect sizes provide key information for clinical decision-making in health care.
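
The distinction the abstract draws between a P value and an effect size is easy to demonstrate in code. Below is a minimal Python sketch with invented scores for two groups; the numbers are purely illustrative, and equal group sizes are assumed so the pooled standard deviation for Cohen's d simplifies:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two independent groups
control = np.array([52, 48, 55, 50, 47, 53, 49, 51])
treated = np.array([58, 54, 60, 57, 55, 59, 56, 61])

# Inferential test: the P value speaks to whether a difference may exist
t, p = stats.ttest_ind(treated, control)

# Effect size (Cohen's d): how large the difference actually is
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd

print(f"t = {t:.2f}, P = {p:.4f}, Cohen's d = {d:.2f}")
```

Reporting both quantities together, as the abstract recommends, tells the reader not only whether an effect is likely real but also whether it is large enough to matter.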

Implications for nursing practice: Developing capacity in the management, analysis, and interpretation of quantitative research data can have a multifaceted impact in enhancing nurses' confidence in understanding, evaluating, and applying quantitative evidence in cancer nursing practice.

Keywords: Data analysis; Data management; Empirical research; Interpretation; Quantitative studies; Statistics.

Copyright © 2023 The Authors. Published by Elsevier Inc. All rights reserved.


Interpretation and display of research results

Dilip Kumar Kulkarni

Department of Anaesthesiology and Intensive Care, Nizam's Institute of Medical Sciences, Hyderabad, Telangana, India. Indian J Anaesth. 2016 Sep; 60(9).

It is important to properly collect, code, clean and edit data before interpreting and displaying research results. Computers play a major role in the different phases of research, from the conceptual, design and planning phases through data collection, data analysis and publication. The main objective of data display is to summarise the characteristics of a dataset and to make the data more comprehensible and meaningful. Data are usually presented in tables and graphs, depending on the type of data. This helps not only in understanding the behaviour of the data but also in choosing which statistical tests to apply.

INTRODUCTION

Collection of data and display of results is very important in any study. The data of an experimental study, observational study or survey should be collected in a properly designed format for documentation, taking into consideration the design of the study and its different end points. Usually data are collected in the proforma of the study. The recorded data should be stored carefully both on paper and in electronic form, for example in Excel sheets or databases.

The data are usually classified into qualitative and quantitative [Table 1]. Qualitative data are further divided into two categories: unordered qualitative data, such as blood groups (A, B, O, AB); and ordered qualitative data, such as severity of pain (mild, moderate, severe). Quantitative data are numerical and fall into two categories: discrete quantitative data, such as the internal diameter of an endotracheal tube; and continuous quantitative data, such as blood pressure.[1]

Examples of types of data and display of data


Data coding is needed so that data recorded in categories can be used easily in statistical analysis with a computer. Coding assigns a unique number to each possible response. A few statistical packages can analyse categorical data directly, but assigning a number to each category generally makes the data easier to analyse. When the data are analysed and reported, the appropriate label needs to be assigned back to each numerical value to make it meaningful. Codes such as 1/0 for yes/no have the added advantage that the variable's 1/0 values can be analysed directly. A record of the coding scheme should be stored for later reference. The same can be done for ordered categorical data, converting it into numerical ordinal data; for example, coding the pain severities mild, moderate and severe as 1, 2 and 3 respectively.
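
In a statistical language this coding step takes only a few lines. Below is a minimal sketch in Python with pandas, using a small hypothetical data frame; the column names and category labels are illustrative, not taken from the article:

```python
import pandas as pd

# Hypothetical responses as they might arrive from a questionnaire
df = pd.DataFrame({
    "smoker": ["yes", "no", "no", "yes"],
    "pain":   ["mild", "severe", "moderate", "mild"],
})

# Binary yes/no coded as 1/0
df["smoker_code"] = df["smoker"].map({"yes": 1, "no": 0})

# Ordered categories coded so that the ordering is preserved
pain_codes = {"mild": 1, "moderate": 2, "severe": 3}
df["pain_code"] = df["pain"].map(pain_codes)

# Keep a record of the coding scheme so labels can be restored in reports
print(pain_codes)
print(df)
```

Keeping the mapping dictionaries alongside the data is one simple way to preserve the record of codes that the article recommends.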

PROCESS OF DATA CHECKING, CLEANING AND EDITING

In clinical research, errors occur despite a properly designed study, careful data entry and efforts to prevent them. Data cleaning and editing are carried out to identify and correct these errors, so that the study results will be accurate.[2]

Data entry errors involving, for example, sex or dates, as well as double entries and clearly impossible results, should be corrected without question. Data editing can be done in three phases, namely screening, diagnosing and editing [Figure 1].


Process of data checking, cleaning and editing in three phases

Screening phase

During screening of the data, it is possible to identify odd data points, excess data, double entries, outliers and unexpected results. Screening methods include checking the questionnaires, data validation, browsing the Excel sheets and data tables, and graphical methods for observing the data distribution.
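
As a concrete illustration, the sketch below screens a small hypothetical pandas data frame for double entries and outliers; the variable names and the Tukey 1.5 × IQR fence are illustrative choices rather than methods prescribed by the article:

```python
import pandas as pd

# Illustrative data; 'sbp' is systolic blood pressure in mm Hg
df = pd.DataFrame({"id":  [1, 2, 2, 3, 4],
                   "sbp": [120, 118, 118, 410, 125]})  # 410 looks like a typo

# Double entries: rows that exactly duplicate an earlier row
print(df[df.duplicated()])

# Outlier screen using Tukey's 1.5 * IQR fences
q1, q3 = df["sbp"].quantile([0.25, 0.75])
iqr = q3 - q1
flagged = df[(df["sbp"] < q1 - 1.5 * iqr) | (df["sbp"] > q3 + 1.5 * iqr)]
print(flagged)  # the 410 mm Hg entry is passed to the diagnostic phase
```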

Diagnostic phase

The nature of the flagged data can be assessed in this phase. The entries may turn out to be truly normal values, true errors, outliers or unexpected but genuine results.

Treatment phase

Once the nature of a flagged entry is identified, editing can be done by correcting it, deleting it or leaving the data unchanged.

Abnormal data points usually have to be corrected or deleted.[2] However, some authors advocate including these data points in the analysis.[3] If extreme data points are deleted, they should be reported as “excluded from analysis”.[4]

ROLE OF COMPUTERS IN RESEARCH

Computers play a major role in scientific research; they can perform analytic tasks with high speed, accuracy and consistency. Their role can be described across the different phases of the research process.[5]

Role of computer in conceptual phase

The conceptual phase consists of formulating the research problem, surveying the literature, building the theoretical framework and developing the hypothesis. Computers are useful for searching the literature, and references can be stored in an electronic database.

Role of computers in design and planning phase

This phase consists of preparing the research design and determining the sample design, population size, research variables and sampling plan, reviewing the research plan and running a pilot study. The role of computers in these processes is almost indispensable.

Role of computers in data collection phase

The data obtained from subjects are stored on computers as word-processor files, Excel spreadsheets or statistical software data files, or come from the data centres of hospital information management systems (data warehouses). If the data are stored in electronic format, checking them becomes easier. Thus, computers help in data entry, data editing and data management, including follow-up actions. Examples of editors are WordPad, the SPSS data editor and other word processors.

Role of computers in data analysis

This phase mainly consists of statistical analysis of the data and interpretation of the results. Software such as Minitab (Minitab Inc., USA), SPSS (IBM Corp., New York) and NCSS (NCSS LLC, Kaysville, Utah, USA), as well as spreadsheets, is widely used.

Role of computer in research publication

Research articles, papers, theses and dissertations are typed and stored in word-processing software on computers, from which they can easily be published in different electronic formats.[5]

DATA DISPLAY AND DESCRIPTION OF RESEARCH DATA

Data display and description is an important part of any research project; it helps in understanding the distribution of the data and in detecting errors, missing values and outliers. Ultimately, the data should become more comprehensible and meaningful.

Tables are commonly used for describing both qualitative and quantitative data, while graphs are useful for visualising the data and understanding its variations and trends. Qualitative data are usually described using bar or pie charts. Histograms, polygons or box plots are used to represent quantitative data.[1]

Qualitative data

Tabulation of qualitative data

Qualitative observations are categorised into different categories. The category frequency is simply the number of observations within that category. The category relative frequency is calculated by dividing the number of observations in the category by the total number of observations. The percentage for a category, computed by multiplying the relative frequency by 100, is more commonly used to describe qualitative data.[6, 7]
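
These quantities are straightforward to compute directly. Here is a minimal pandas sketch using made-up pain-severity observations (the data values are purely illustrative):

```python
import pandas as pd

# Illustrative pain-severity observations
pain = pd.Series(["mild", "mild", "moderate", "severe", "moderate", "mild"])

freq = pain.value_counts()                     # category frequency
rel_freq = pain.value_counts(normalize=True)   # relative frequency
percent = rel_freq * 100                       # percentage

print(pd.DataFrame({"count": freq, "relative": rel_freq, "percent": percent}))
```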

The classification of 30 patients in a group by severity of postoperative pain is presented in Table 2. The frequency table for these data, computed using the NCSS software,[8] is shown in Table 3.

The classification of post-operative pain in patients


The frequency table for the variable pain


Graphical display of qualitative data

Qualitative data are commonly displayed using bar graphs and pie charts.[9]

Bar graphs display the frequency, relative frequency or percentage of each category on the vertical or horizontal axis of the graph [Figure 2]. Pie charts depict the same information as slices of a complete circle, with the area of each slice proportional to the frequency, relative frequency or percentage of that category [Figure 3].
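
Both displays can be generated from the same frequency counts. The following matplotlib sketch, reusing the made-up pain data from the earlier example, is one way to draw them:

```python
import matplotlib.pyplot as plt
import pandas as pd

pain = pd.Series(["mild", "mild", "moderate", "severe", "moderate", "mild"])
counts = pain.value_counts()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
counts.plot.bar(ax=ax1, title="Bar graph", ylabel="Frequency")
counts.plot.pie(ax=ax2, title="Pie chart", autopct="%1.0f%%", ylabel="")
plt.tight_layout()
plt.show()
```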


The bar graph generated by computer using NCSS software for the variable pain


The Pie graph generated by computer using NCSS software for the variable pain

Quantitative data

Tabulation of quantitative data

Quantitative data are usually presented as a frequency distribution or relative frequency distribution rather than as percentages. The data are divided into different classes; the upper and lower limits, or the width, of the classes depend upon the size of the data and can easily be adjusted.

The frequency distribution and relative frequency distribution table can be constructed in the following manner:

  • The quantitative data are divided into a number of classes, and the lower and upper limits of the classes are defined.
  • The width of the class intervals is calculated by dividing the difference between the overall upper and lower limits by the total number of classes.
  • The class frequency is the number of observations that fall in a class.
  • The relative class frequency is calculated by dividing the class frequency by the total number of observations (see the sketch after this list).
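
A minimal Python sketch of this procedure is shown below. The twenty blood pressure readings and the choice of five equal-width classes are purely illustrative (the article's Table 4 uses 20 classes between 86 and 186 mm Hg):

```python
import numpy as np
import pandas as pd

# Illustrative systolic blood pressure readings (mm Hg)
sbp = np.array([112, 126, 134, 118, 141, 122, 158,  98, 131, 127,
                145, 121, 109, 136, 151, 124, 119, 139, 129, 143])

n_classes = 5
edges = np.linspace(sbp.min(), sbp.max(), n_classes + 1)  # equal-width classes

classes = pd.cut(pd.Series(sbp), bins=edges, include_lowest=True)
freq = classes.value_counts().sort_index()   # class frequency
rel_freq = freq / len(sbp)                   # relative class frequency

print(pd.DataFrame({"frequency": freq, "relative": rel_freq}))
```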

An example frequency table for the systolic blood pressure of 60 patients undergoing craniotomy is shown in Table 4. The number of classes was 20, and the lower and upper limits were 86 mm Hg and 186 mm Hg respectively.

Frequency tabulation of systolic blood pressure in sixty patients (unit is mm Hg)


Graphical description of quantitative data

The frequency distribution is usually depicted in histograms. The count or frequency is plotted along the vertical axis, and the horizontal axis represents the data values. The normality of a distribution can be assessed visually from a histogram. A frequency histogram constructed for the systolic blood pressure dataset, from the frequency table in Table 4, is shown in Figure 4.


The frequency histogram for the data set of systolic blood pressure (BP), for which the frequency table is constructed in Table 4

A box plot shows the spread of the observations in a single group around a centre value. The distribution pattern and extreme values can be easily viewed in a box plot. A box plot constructed for the systolic blood pressure dataset of Table 4 is shown in Figure 5.


Box plot is constructed from data of Table 4

Polygon construction is similar to that of a histogram; however, it is a line graph connecting the data points at the midpoints of the class intervals. The polygon is simpler and outlines the data pattern clearly[8] [Figure 6].


A frequency polygon constructed from data of Table 4 in NCSS software
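
For completeness, here is one way to draw all three displays with matplotlib, reusing the made-up blood pressure readings from the earlier sketch; the bin count and styling are illustrative choices:

```python
import matplotlib.pyplot as plt
import numpy as np

# Same illustrative systolic BP readings as in the earlier sketch
sbp = np.array([112, 126, 134, 118, 141, 122, 158,  98, 131, 127,
                145, 121, 109, 136, 151, 124, 119, 139, 129, 143])

fig, axes = plt.subplots(1, 3, figsize=(12, 3))

# Histogram: frequency against class intervals
counts, edges, _ = axes[0].hist(sbp, bins=5, edgecolor="black")
axes[0].set(title="Histogram", xlabel="Systolic BP (mm Hg)", ylabel="Frequency")

# Box plot: centre, spread and extreme values in one display
axes[1].boxplot(sbp)
axes[1].set(title="Box plot", ylabel="Systolic BP (mm Hg)")

# Frequency polygon: a line through the class-interval midpoints
midpoints = (edges[:-1] + edges[1:]) / 2
axes[2].plot(midpoints, counts, marker="o")
axes[2].set(title="Frequency polygon", xlabel="Systolic BP (mm Hg)",
            ylabel="Frequency")

plt.tight_layout()
plt.show()
```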

It is often necessary to summarise quantitative data further, for example for hypothesis testing. The most important element of a dataset is its location, which is measured by the mean, median and mode. The other parameters are variability (range, interquartile range, standard deviation and variance) and the shape of the distribution (normality, skewness and kurtosis). These will be discussed in detail in the next chapter.
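
As a preview, all of these summary measures can be obtained in a few lines. The sketch below uses NumPy and SciPy on the same made-up readings; it assumes SciPy 1.9 or later for the keepdims argument of scipy.stats.mode:

```python
import numpy as np
from scipy import stats

sbp = np.array([112, 126, 134, 118, 141, 122, 158,  98, 131, 127,
                145, 121, 109, 136, 151, 124, 119, 139, 129, 143])

print("mean:", sbp.mean())                    # location
print("median:", np.median(sbp))
print("mode:", stats.mode(sbp, keepdims=False).mode)
print("range:", sbp.max() - sbp.min())        # variability
print("IQR:", np.percentile(sbp, 75) - np.percentile(sbp, 25))
print("SD:", sbp.std(ddof=1))                 # sample standard deviation
print("skewness:", stats.skew(sbp))           # shape
print("kurtosis:", stats.kurtosis(sbp))
```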

The proper design of the research methodology is an important step from the conceptual phase to the conclusion phase, and computers play an invaluable role from the beginning to the end of a study. Data collection, storage and management are vital for any study, and data display and interpretation help in understanding the behaviour of the data and in checking the assumptions for statistical analysis.


The Importance of Qualitative Data Analysis in Research: A Comprehensive Guide

August 29th, 2024

Qualitative data analysis, in essence, is the systematic examination of non-numerical information to uncover patterns, themes, and insights.

This process is crucial in various fields, from product development to business process improvement.

Key Highlights

  • Defining qualitative data analysis and its importance
  • Comparing qualitative and quantitative research methods
  • Exploring key approaches: thematic, grounded theory, content analysis
  • Understanding the qualitative data analysis process
  • Reviewing CAQDAS tools for efficient analysis
  • Ensuring rigor through triangulation and member checking
  • Addressing challenges and ethical considerations
  • Examining future trends in qualitative research

Introduction to Qualitative Data Analysis

Qualitative data analysis is a sophisticated process of examining non-numerical information to extract meaningful insights.

It’s not just about reading through text; it’s about diving deep into the nuances of human experiences, opinions, and behaviors.

This analytical approach is crucial in various fields, from product development to process improvement, and even in understanding complex social phenomena.


Importance of Qualitative Research Methods

The importance of qualitative research methods cannot be overstated. In my work with companies like 3M, Dell, and Intel, I’ve seen how qualitative analysis can uncover insights that numbers alone simply can’t reveal.

These methods allow us to understand the ‘why’ behind the ‘what’, providing context and depth to our understanding of complex issues.

Whether it’s improving a manufacturing process or developing a new product, qualitative research methods offer a rich, nuanced perspective that’s invaluable for informed decision-making.

Comparing Qualitative vs Quantitative Analysis

While both qualitative and quantitative analyses are essential tools in a researcher’s arsenal, they serve different purposes.

Quantitative analysis, which I’ve extensively used in Six Sigma projects, deals with numerical data and statistical methods.

It’s excellent for measuring, ranking, and categorizing phenomena. On the other hand, qualitative analysis focuses on the rich, contextual data that can’t be easily quantified.

It’s about understanding meanings, experiences, and perspectives.


Key Approaches in Qualitative Data Analysis

Explore essential techniques like thematic analysis, grounded theory, content analysis, and discourse analysis.

Understand how each approach offers unique insights into qualitative data interpretation and theory building.

Thematic Analysis Techniques

Thematic analysis is a cornerstone of qualitative data analysis. It involves identifying patterns or themes within qualitative data.

In my workshops on Statistical Thinking and Business Process Charting, I often emphasize the power of thematic analysis in uncovering underlying patterns in complex datasets.

This approach is particularly useful when dealing with interview transcripts or open-ended survey responses.

The key is to immerse yourself in the data, code it systematically, and then step back to see the broader themes emerge.

Grounded Theory Methodology

Grounded theory is another powerful approach in qualitative data analysis. Unlike methods that start with a hypothesis, grounded theory allows theories to emerge from the data itself.

I’ve found this particularly useful in projects where we’re exploring new territory without preconceived notions.

It’s a systematic yet flexible approach that can lead to fresh insights and innovative solutions.

The iterative nature of grounded theory, with its constant comparison of data, aligns well with the continuous improvement philosophy of Six Sigma.

Content Analysis Strategies

Content analysis is a versatile method that can be both qualitative and quantitative.

In my experience working with diverse industries, content analysis has been invaluable in making sense of large volumes of textual data.

Whether it’s analyzing customer feedback or reviewing technical documentation, content analysis provides a structured way to categorize and quantify qualitative information.

The key is to develop a robust coding framework that captures the essence of your research questions.
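
Once a coding framework is in place, the quantitative side of content analysis often amounts to counting coded segments. Here is a minimal Python sketch; the codes and responses are invented for illustration:

```python
import pandas as pd

# Hypothetical coded feedback: each segment has been assigned a code
coded = pd.DataFrame({
    "response_id": [1, 1, 2, 3, 3, 4],
    "code": ["delivery delay", "packaging", "delivery delay",
             "product quality", "packaging", "delivery delay"],
})

# Quantify the qualitative codes: frequency and share of coded segments
summary = coded["code"].value_counts().to_frame("count")
summary["percent"] = 100 * summary["count"] / summary["count"].sum()
print(summary)
```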

Discourse Analysis Approaches

Discourse analysis takes a deeper look at language use and communication practices.

It’s not just about what is said, but how it’s said and in what context. In my work on improving communication processes within organizations, discourse analysis has been a powerful tool.

It helps uncover underlying assumptions, power dynamics, and cultural nuances that might otherwise go unnoticed.

This approach is particularly useful when dealing with complex organizational issues or when trying to understand stakeholder perspectives in depth.

Image: Integrations of Different Qualitative Data Analysis Approaches

The Qualitative Data Analysis Process

Navigate through data collection, coding techniques, theme development, and interpretation. Learn how to transform raw qualitative data into meaningful insights through systematic analysis.

Data collection methods (interviews, focus groups, observation)

The foundation of any good qualitative analysis lies in robust data collection. In my experience, a mix of methods often yields the best results.

In-depth interviews provide individual perspectives, focus groups offer insights into group dynamics, and observation allows us to see behaviors in their natural context.

When working on process improvement projects, I often combine these methods to get a comprehensive view of the situation.

The key is to align your data collection methods with your research questions and the nature of the information you’re seeking.

Qualitative Data Coding Techniques

Coding is the heart of qualitative data analysis. It’s the process of labeling and organizing your qualitative data to identify different themes and the relationships between them.

In my workshops, I emphasize the importance of developing a clear, consistent coding system.

This might involve open coding to identify initial concepts, axial coding to make connections between categories, and selective coding to integrate and refine the theory.

The goal is to transform raw data into meaningful, analyzable units.

Developing Themes and Patterns

Once your data is coded, the next step is to look for overarching themes and patterns. This is where the analytical magic happens.

It’s about stepping back from the details and seeing the bigger picture. In my work with companies like Motorola and HP, I’ve found that visual tools like mind maps or thematic networks can be incredibly helpful in this process.

They allow you to see connections and hierarchies within your data that might not be immediately apparent in text form.

Data Interpretation and Theory Building

The final step in the qualitative data analysis process is interpretation and theory building.

This is where you bring together your themes and patterns to construct a coherent narrative or theory that answers your research questions.

It’s crucial to remain grounded in your data while also being open to new insights. In my experience, the best interpretations often challenge our initial assumptions and lead to innovative solutions.

Tools and Software for Qualitative Analysis

Discover the power of CAQDAS in streamlining qualitative data analysis workflows. Explore popular tools like NVivo, ATLAS.ti, and MAXQDA for efficient data management and analysis.

Overview of CAQDAS (Computer Assisted Qualitative Data Analysis Software)

Computer Assisted Qualitative Data Analysis Software (CAQDAS) has revolutionized the way we approach qualitative analysis.

These tools streamline the coding process, help manage large datasets, and offer sophisticated visualization options.

As someone who’s seen the evolution of these tools over the past two decades, I can attest to their transformative power.

They allow researchers to handle much larger datasets and perform more complex analyses than ever before.

Popular Tools: NVivo, ATLAS.ti, MAXQDA

Among the most popular CAQDAS tools are NVivo, ATLAS.ti, and MAXQDA.

Each has its strengths, and the choice often depends on your specific needs and preferences. NVivo, for instance, offers robust coding capabilities and is excellent for managing multimedia data.

ATLAS.ti is known for its intuitive interface and powerful network view feature. MAXQDA stands out for its mixed methods capabilities, blending qualitative and quantitative approaches seamlessly.

Ensuring Rigor in Qualitative Data Analysis

Implement strategies like data triangulation, member checking, and audit trails to enhance credibility. Understand the importance of reflexivity in maintaining objectivity throughout the research process.

Data triangulation methods

Ensuring rigor in qualitative analysis is crucial for producing trustworthy results.

Data triangulation is a powerful method for enhancing the credibility of your findings. It involves using multiple data sources, methods, or investigators to corroborate your results.

In my Six Sigma projects, I often employ methodological triangulation, combining interviews, observations, and document analysis to get a comprehensive view of a process or problem.

Member Checking for Validity

Member checking is another important technique for ensuring the validity of your qualitative analysis.

This involves taking your findings back to your participants to confirm that they accurately represent their experiences and perspectives.

In my work with various organizations, I’ve found that this not only enhances the credibility of the research but also often leads to new insights as participants reflect on the findings.

Creating an Audit Trail

An audit trail is essential for demonstrating the rigor of your qualitative analysis.

It’s a detailed record of your research process, including your raw data, analysis notes, and the evolution of your coding scheme.

Practicing Reflexivity

Reflexivity is about acknowledging and critically examining your own role in the research process. As researchers, we bring our own biases and assumptions to our work.

Practicing reflexivity involves constantly questioning these assumptions and considering how they might be influencing our analysis.

Challenges and Best Practices in Qualitative Data Analysis

Address common hurdles such as data saturation, researcher bias, and ethical considerations. Learn best practices for conducting rigorous and ethical qualitative research in various contexts.

Dealing with data saturation

One of the challenges in qualitative research is knowing when you’ve reached data saturation – the point at which new data no longer brings new insights.

In my experience, this requires a balance of systematic analysis and intuition. It’s important to continuously review and compare your data as you collect it.

In projects I’ve led, we often use data matrices or summary tables to track emerging themes and identify when we’re no longer seeing new patterns emerge.
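
One lightweight way to build such a tracking table is to log which themes each new interview contributes. The following sketch is purely illustrative; the interviews and theme labels are invented:

```python
# Hypothetical record of the themes coded in each successive interview
observations = [
    ("interview_1", ["workload", "training"]),
    ("interview_2", ["workload", "communication"]),
    ("interview_3", ["training", "communication"]),
    ("interview_4", ["workload"]),  # contributes nothing new
]

seen = set()
for interview, themes in observations:
    new = set(themes) - seen       # themes not seen in any earlier interview
    seen |= set(themes)
    print(f"{interview}: {len(new)} new theme(s) {sorted(new)}")
```

Several consecutive interviews contributing no new themes is one practical signal that saturation may be near.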

Overcoming Researcher Bias

Researcher bias is an ever-present challenge in qualitative analysis. Our own experiences and preconceptions can inadvertently influence how we interpret data.

To overcome this, I advocate for a combination of strategies. Regular peer debriefing sessions, where you discuss your analysis with colleagues, can help uncover blind spots.

Additionally, actively seeking out negative cases or contradictory evidence can help challenge your assumptions and lead to more robust findings.

Ethical Considerations in Qualitative Research

Ethical considerations are paramount in qualitative research, given the often personal and sensitive nature of the data.

Protecting participant confidentiality, ensuring informed consent, and being transparent about the research process are all crucial.

In my work across various industries and cultures, I’ve learned the importance of being sensitive to cultural differences and power dynamics.

It’s also vital to consider the potential impact of your research on participants and communities.

Ethical qualitative research is not just about following guidelines, but about constantly reflecting on the implications of your work.

The Future of Qualitative Data Analysis

As we look to the future of qualitative data analysis, several exciting trends are emerging.

The increasing use of artificial intelligence and machine learning in qualitative analysis tools promises to revolutionize how we handle large datasets.

We’re also seeing a growing interest in visual and sensory methods of data collection and analysis, expanding our understanding of qualitative data beyond text.

In conclusion, mastering qualitative data analysis is an ongoing journey. It requires a combination of rigorous methods, creative thinking, and ethical awareness.

As we move forward, the field will undoubtedly continue to evolve, but its fundamental importance in research and decision-making will remain constant.

For those willing to dive deep into the complexities of qualitative data, the rewards in terms of insights and understanding are immense.



The Roslin Institute

Global genomics and animal breeding group

Led by Professor Georgios Banos, the group's research focuses on developing and applying computational and statistical methods for the analysis of large farm-animal datasets, in order to unravel the genetic background of economically important traits and to use biomarkers as predictors of animal health, fitness, longevity, robustness and welfare.

We also develop methods to improve the accuracy of genomic evaluations and to optimize breeding strategies that enhance animal productivity and resilience while maintaining genetic diversity.


Newspaper Reference Data on Development and Ecological Change in Tanzania, 1965-1985

Description

This dataset contains reference information for newspaper articles on development policies and planned sustainability communities instituted in Tanzania between 1965 and 1985. It also contains quantitative content analysis data on the frequency and distribution of articles across key concepts and ideologies about societal reorganization, and on the political, ecological, and economic transformations of Tanzania’s immediate post-independence era. The dataset does not provide the articles themselves, as the historical articles are copyrighted material.

Creating a dataset of newspaper reference information is valuable for various research and practical purposes. In relation to Tanzania’s Ujamaa development policy of the mid-1960s to mid-1980s, the dataset offers insights into historical, social, and political contexts that enable researchers to track the evolution of news coverage and public discourse over time. It supports analysis of the evolution of media perspectives on the Ujamaa policies and communities, allowing trends and patterns to be identified in public discourse on topics of interest or historical significance. The dataset also contains reference information on news articles covering inter-state conflicts, tribal politics, and social, economic, environmental, and diplomatic issues related to Tanzania at that time. It can help researchers quickly locate news articles in the archives of news companies or major third-party digital repositories such as ProQuest, supporting academic research and media literacy education, and it can be used to develop search algorithms and natural language processing models that enhance information retrieval in digital archives. Finally, the dataset provides analyzed data from a content analysis aimed at deducing eco-conscious best practices (or lessons) for approaching present and future green transitions and sustainability in Africa, using the Ujamaa development experiment as a case study.

The Newspaper_Reference_Data.zip consists of the following files:

  • Newspaper_Reference_Data_Master_2024.xlsx: all the reference information for newspaper articles and coding, as well as the input data variables and analysis results for the archival search and related content analysis.
  • Data_Description_2024.pdf: describes the data, rationale, and value of the newspaper sources referenced in the dataset.
  • Methodology_Description_2024.pdf: describes the quantitative content analysis method conducted on the newspaper sources.

Steps to reproduce

Included in the data folder is a PDF document that outlines the content analysis methodology in greater detail. To conduct content analysis on the newspaper references, follow these general steps:

  • Gain access to the relevant newspapers through authorized databases or subscriptions, as indicated in the data description.
  • Retrieve the articles listed in the dataset while complying with copyright laws.
  • Use the specified coding schemes and keywords to categorize and analyze the content as outlined in the description.

Note that the actual news articles are not included due to copyright restrictions and must be accessed through the authorized sources mentioned; this ensures compliance with legal and ethical standards and allows accurate replication of the analysis.
