
Statistics By Jim

Making statistics intuitive

One-Tailed and Two-Tailed Hypothesis Tests Explained

By Jim Frost

Choosing whether to perform a one-tailed or a two-tailed hypothesis test is one of the methodology decisions you might need to make for your statistical analysis. This choice can have critical implications for the types of effects it can detect, the statistical power of the test, and potential errors.

In this post, you’ll learn about the differences between one-tailed and two-tailed hypothesis tests and their advantages and disadvantages. I include examples of both types of statistical tests. In my next post, I cover the decision between one and two-tailed tests in more detail.

What Are Tails in a Hypothesis Test?

First, we need to cover some background material to understand the tails in a test. Typically, hypothesis tests take all of the sample data and convert it to a single value, which is known as a test statistic. You’re probably already familiar with some test statistics. For example, t-tests calculate t-values. F-tests, such as ANOVA, generate F-values. The chi-square test of independence and some distribution tests produce chi-square values. All of these values are test statistics. For more information, read my post about Test Statistics.

These test statistics follow a sampling distribution. Probability distribution plots display the probabilities of obtaining test statistic values when the null hypothesis is correct. On a probability distribution plot, the shaded area under the curve represents the probability that a value will fall within that range.

The graph below displays a sampling distribution for t-values. The two shaded regions cover the two-tails of the distribution.

Plot that displays the critical regions in the two tails of the distribution.

Keep in mind that this t-distribution assumes that the null hypothesis is correct for the population. Consequently, the peak (most likely value) of the distribution occurs at t=0, which represents the null hypothesis in a t-test. Typically, the null hypothesis states that there is no effect. As t-values move further away from zero, they represent larger effect sizes. When the null hypothesis is true for the population, obtaining samples that exhibit a large apparent effect becomes less likely, which is why the probabilities taper off for t-values further from zero.

Related posts : How t-Tests Work and Understanding Probability Distributions

Critical Regions in a Hypothesis Test

In hypothesis tests, critical regions are ranges of the distributions where the values represent statistically significant results. Analysts define the size and location of the critical regions by specifying both the significance level (alpha) and whether the test is one-tailed or two-tailed.

Consider the following two facts:

  • The significance level is the probability of rejecting a null hypothesis that is correct.
  • The sampling distribution for a test statistic assumes that the null hypothesis is correct.

Consequently, to represent the critical regions on the distribution for a test statistic, you merely shade the appropriate percentage of the distribution. For the common significance level of 0.05, you shade 5% of the distribution.

Related posts : Significance Levels and P-values and T-Distribution Table of Critical Values

Two-Tailed Hypothesis Tests

Two-tailed hypothesis tests are also known as nondirectional and two-sided tests because you can test for effects in both directions. When you perform a two-tailed test, you split the significance level percentage between both tails of the distribution. In the example below, I use an alpha of 5% and the distribution has two shaded regions of 2.5% (2 * 2.5% = 5%).

When a test statistic falls in either critical region, your sample data are sufficiently incompatible with the null hypothesis that you can reject it for the population.
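As a rough illustration of how the shaded critical regions translate into numbers, the sketch below computes the two cutoffs for a t-distribution with SciPy. It is not from the original article, and the 20 degrees of freedom are an arbitrary choice for illustration.

# Illustrative sketch (not from the article): two-tailed critical regions
# for alpha = 0.05, with each tail holding alpha/2 = 2.5% of the distribution.
from scipy import stats

alpha = 0.05
df = 20  # assumed degrees of freedom, purely for illustration

lower_crit = stats.t.ppf(alpha / 2, df)      # left-tail cutoff, about -2.086
upper_crit = stats.t.ppf(1 - alpha / 2, df)  # right-tail cutoff, about +2.086

def in_critical_region(t_value):
    # A t-value at or beyond either cutoff falls in a shaded region,
    # so the result is statistically significant.
    return t_value <= lower_crit or t_value >= upper_crit

print(lower_crit, upper_crit, in_critical_region(-2.5))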

In a two-tailed test, the generic null and alternative hypotheses are the following:

  • Null : The effect equals zero.
  • Alternative :  The effect does not equal zero.

The specifics of the hypotheses depend on the type of test you perform because you might be assessing means, proportions, or rates.

Example of a two-tailed 1-sample t-test

Suppose we perform a two-sided 1-sample t-test where we compare the mean strength (4.1) of parts from a supplier to a target value (5). We use a two-tailed test because we care whether the mean is greater than or less than the target value.

To interpret the results, simply compare the p-value to your significance level. If the p-value is less than the significance level, you know that the test statistic fell into one of the critical regions, but which one? Just look at the estimated effect. In the output below, the t-value is negative, so we know that the test statistic fell in the critical region in the left tail of the distribution, indicating the mean is less than the target value. Now we know this difference is statistically significant.

Statistical output from a two-tailed 1-sample t-test.

We can conclude that the population mean for part strength is less than the target value. However, the test had the capacity to detect a positive difference as well. You can also assess the confidence interval. With a two-tailed hypothesis test, you’ll obtain a two-sided confidence interval. The confidence interval tells us that the population mean is likely to fall between 3.372 and 4.828. This range excludes the target value (5), which is another indicator of significance.
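For readers who want to reproduce this kind of analysis, here is a minimal sketch of a two-tailed 1-sample t-test in Python with SciPy. The strength values are invented for illustration; only the target value of 5 comes from the example above, and the output will not match the article's figures.

# Hypothetical data; only the target value (5) comes from the example above.
import numpy as np
from scipy import stats

target = 5
strengths = np.array([4.3, 3.9, 4.0, 4.4, 3.8, 4.1, 4.5, 3.7, 4.2, 4.1])

# Two-tailed test by default: it can detect means above or below the target.
t_stat, p_two_tailed = stats.ttest_1samp(strengths, popmean=target)

# Two-sided 95% confidence interval for the population mean.
ci_low, ci_high = stats.t.interval(0.95, df=len(strengths) - 1,
                                   loc=strengths.mean(), scale=stats.sem(strengths))

print(f"t = {t_stat:.3f}, two-tailed p = {p_two_tailed:.4f}")
print(f"95% CI: ({ci_low:.3f}, {ci_high:.3f})")  # significant if the CI excludes 5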

Advantages of two-tailed hypothesis tests

You can detect both positive and negative effects. Two-tailed tests are standard in scientific research where discovering any type of effect is usually of interest to researchers.

One-Tailed Hypothesis Tests

One-tailed hypothesis tests are also known as directional and one-sided tests because you can test for effects in only one direction. When you perform a one-tailed test, the entire significance level percentage goes into the extreme end of one tail of the distribution.

In the examples below, I use an alpha of 5%. Each distribution has one shaded region of 5%. When you perform a one-tailed test, you must determine whether the critical region is in the left tail or the right tail. The test can detect an effect only in the direction that has the critical region. It has absolutely no capacity to detect an effect in the other direction.

In a one-tailed test, you have two options for the null and alternative hypotheses, which correspond to where you place the critical region.

You can choose either of the following sets of generic hypotheses:

  • Null : The effect is less than or equal to zero.
  • Alternative : The effect is greater than zero.

Plot that displays a single critical region for a one-tailed test.

  • Null : The effect is greater than or equal to zero.
  • Alternative : The effect is less than zero.

Plot that displays a single critical region in the left tail for a one-tailed test.

Again, the specifics of the hypotheses depend on the type of test you perform.

Notice how for both possible null hypotheses the tests can’t distinguish between zero and an effect in a particular direction. For example, in the example directly above, the null hypothesis lumps “the effect is zero” and “the effect is greater than zero” into a single category. That test can’t differentiate between zero and greater than zero.

Example of a one-tailed 1-sample t-test

Suppose we perform a one-tailed 1-sample t-test. We’ll use a similar scenario as before where we compare the mean strength of parts from a supplier (102) to a target value (100). Imagine that we are considering a new parts supplier. We will use them only if the mean strength of their parts is greater than our target value. There is no need for us to differentiate between whether their parts are equally strong or less strong than the target value—either way we’d just stick with our current supplier.

Consequently, we’ll choose the alternative hypothesis that states the mean difference is greater than zero (Population mean – Target value > 0). The null hypothesis states that the difference between the population mean and target value is less than or equal to zero.

Statistical output for a one-tailed 1-sample t-test.

To interpret the results, compare the p-value to your significance level. If the p-value is less than the significance level, you know that the test statistic fell into the critical region. For this study, the statistically significant result supports the notion that the population mean is greater than the target value of 100.

Confidence intervals for a one-tailed test are similarly one-sided. You’ll obtain either an upper bound or a lower bound. In this case, we get a lower bound, which indicates that the population mean is likely to be greater than or equal to 100.631. There is no upper limit to this range.

A lower bound matches our goal of determining whether the new parts are stronger than our target value. The fact that the lower bound (100.631) is higher than the target value (100) indicates that these results are statistically significant.

This test is unable to detect a negative difference even when the sample mean represents a very negative effect.
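Here is a matching sketch for the one-tailed version, again with made-up data; only the target value of 100 is taken from the example above. The alternative='greater' argument requires a reasonably recent SciPy release, and the one-sided lower bound is computed by hand.

# Hypothetical data; only the target value (100) comes from the example above.
import numpy as np
from scipy import stats

target = 100
strengths = np.array([103, 101, 104, 99, 102, 105, 100, 103, 102, 101])

# One-tailed test: the alternative is that the mean exceeds the target.
t_stat, p_one_tailed = stats.ttest_1samp(strengths, popmean=target,
                                         alternative='greater')

# One-sided 95% lower confidence bound for the population mean.
df = len(strengths) - 1
lower_bound = strengths.mean() - stats.t.ppf(0.95, df) * stats.sem(strengths)

print(f"t = {t_stat:.3f}, one-tailed p = {p_one_tailed:.4f}")
print(f"95% lower bound: {lower_bound:.3f}")  # significant if the bound exceeds 100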

Advantages and disadvantages of one-tailed hypothesis tests

One-tailed tests have more statistical power to detect an effect in one direction than a two-tailed test with the same design and significance level. One-tailed tests occur most frequently for studies where one of the following is true:

  • Effects can exist in only one direction.
  • Effects can exist in both directions but the researchers only care about an effect in one direction. There is no drawback to failing to detect an effect in the other direction. (Not recommended.)

The disadvantage of one-tailed tests is that they have no statistical power to detect an effect in the other direction.

As part of your pre-study planning process, determine whether you’ll use the one- or two-tailed version of a hypothesis test. To learn more about this planning process, read 5 Steps for Conducting Scientific Studies with Statistical Analyses .

This post explains the differences between one-tailed and two-tailed statistical hypothesis tests. How these forms of hypothesis tests function is clear and based on mathematics. However, there is some debate about when you can use one-tailed tests. My next post explores this decision in much more depth and explains the different schools of thought and my opinion on the matter— When Can I Use One-Tailed Hypothesis Tests .

If you’re learning about hypothesis testing and like the approach I use in my blog, check out my Hypothesis Testing book! You can find it at Amazon and other retailers.

Cover image of my Hypothesis Testing: An Intuitive Guide ebook.


Reader Interactions


June 26, 2022 at 12:14 pm

Hi, can you help me figure out the null and alternative hypotheses for the following statement? Someone claimed that the real average expenditure on beverages by the general public is at least $10.


February 19, 2022 at 6:02 am

Thank you for the thorough explanation. I’m still struggling to wrap my mind around the t-table and the relation between the alpha values for one- or two-tail probability and the confidence levels at the bottom. (I’m understanding it so wrongly that for me it should be the opposite: one-tail 0.05 should correspond to a 95% CI and two-tail 0.025 should correspond to 95%, because then you’ve got the 2.5% on each side.) In my mind, if I picture the one-tail diagram with an alpha of 0.05, I see the remaining 95% inside the diagram, but for one tail I only see a 90% CI paired with a 5% alpha... where did the other 5% go? I tried to understand when you said we should just double the alpha for a one-tail probability in order to find the CI, but I still can’t picture it. I have been trying to understand this. If you only have one tail and there is 0.05, shouldn’t the rest be on the other side? Why is it then 90%? I know I’m missing a point and I can’t figure it out, and it’s so frustrating...


February 23, 2022 at 10:01 pm

The alpha is the total shaded area. So, if the alpha = 0.05, you know that 5% of the distribution is shaded. The number of tails tells you how to divide the shaded areas. Is it all in one region (1-tailed) or do you split the shaded regions in two (2-tailed)?

So, for a one-tailed test with an alpha of 0.05, the 5% shading is all in one tail. If alpha = 0.10, then it’s 10% on one side. If it’s two-tailed, then you need to split that 10% in two: 5% in each tail. Hence, the 5% in a one-tailed test is the same as a two-tailed test with an alpha of 0.10 because that test has the same 5% on one side (but there’s another 5% in the other tail).

It’s similar for CIs. However, for CIs, you shade the middle rather than the extremities. I write about that in one of my articles about hypothesis testing and confidence intervals.

I’m not sure if I’m answering your question or not.


February 17, 2022 at 1:46 pm

I ran a post hoc Dunnett’s test (alpha = 0.05) after a significant ANOVA test in Proc Mixed using SAS. I want to determine if the means for the treatments (t1, t2, t3) are significantly less than the mean for the control (p = pathogen). The code for the Dunnett’s test is: LSmeans trt / diff=controll (‘P’) adjust=dunnett CL plot=control; I think the lower-bound one-tailed test is the correct test to run, but I’m not 100% sure, and I’m finding conflicting information online. In the output table for the Dunnett’s test, the mean differences between the control and the treatments are t1 = 9.8, t2 = 64.2, and t3 = 56.5. The control mean estimate is 90.5. The adjusted p-values by treatment are t1 (p = 0.5734), t2 (p = 0.0154), and t3 (p = 0.0245). The adjusted lower-bound confidence limits, in order from t1 to t3, are -38.8, 13.4, and 7.9. The adjusted upper bound for all tests is infinity. The graphical output for the Dunnett’s test in SAS is difficult to understand for those of us who are beginner SAS users. All treatments appear as a vertical line below the horizontal line for the control at 90.5, with t2 and t3 in the shaded area; for treatment 1 the shaded area is above the line for the control. Looking at just the output table, I would say that t2 and t3 are significantly lower than the control. I guess I would like to know if my interpretation of the outputs is correct, that treatments 2 and 3 are statistically significantly lower than the control? Or should I have used an upper-bound one-tailed test instead?


November 10, 2021 at 1:00 am

Thanks Jim. Please help me understand how a two tailed testing can be used to minimize errors in research


July 1, 2021 at 9:19 am

Hi Jim, Thanks for posting such a thorough and well-written explanation. It was extremely useful to clear up some doubts.


May 7, 2021 at 4:27 pm

Hi Jim, I followed your instructions for the Excel add-in. Thank you. I am very new to statistics and sort of enjoy it as I enter week number two in my class. I am to determine whether each of three scenarios calls for a one- or two-tailed test and why. The problem is stated:

30% of mole biopsies are unnecessary. Last month at his clinic, 210 out of 634 had benign biopsy results. Is there enough evidence to reject the dermatologist’s claim?

Part two, the wording changes to “more than of 30% of biopsies,” and part three, the wording changes to “less than 30% of biopsies…”

I am not asking for the problem to be solved for me, but I cannot seem to find the direction needed. I know the elements I am dealing with are = 30%, greater than 30%, and less than 30%, along with 210 and 634. I just don’t know what to do with the information. I can’t seem to find an example of a similar problem to work with.

May 9, 2021 at 9:22 pm

As I detail in this post, a two-tailed test tells you whether an effect exists in either direction. Or, is it different from the null value in either direction. For the first example, the wording suggests you’d need a two-tailed test to determine whether the population proportion is ≠ 30%. Whenever you just need to know ≠, it suggests a two-tailed test because you’re covering both directions.

For part two, because it’s in one direction (greater than), you need a one-tailed test. Same for part three but it’s less than. Look in this blog post to see how you’d construct the null and alternative hypotheses for these cases. Note that you’re working with a proportion rather than the mean, but the principles are the same! Just plug your scenario and the concept of proportion into the wording I use for the hypotheses.

I hope that helps!


April 11, 2021 at 9:30 am

Hello Jim, great website! I am using a statistics program (SPSS) that does NOT compute one-tailed t-tests. I am trying to compare two independent groups and have justifiable reasons why I only care about one direction. Can I do the following? Use SPSS for two-tailed tests to calculate the t & p values. Then report the p-value as p/2 when it is in the predicted direction (e.g , SPSS says p = .04, so I report p = .02), and report the p-value as 1 – (p/2) when it is in the opposite direction (e.g., SPSS says p = .04, so I report p = .98)? If that is incorrect, what do you suggest (hopefully besides changing statistics programs)? Also, if I want to report confidence intervals, I realize that I would only have an upper or lower bound, but can I use the CI’s from SPSS to compute that? Thank you very much!

April 11, 2021 at 5:42 pm

Yes, for p-values, that’s absolutely correct for both cases.

For confidence intervals, if you take one endpoint of a two-sided CI, it becomes a one-sided bound with half the error rate of the two-sided interval (in other words, a higher confidence level).

Consequently, to obtain a one-sided bound with your desired confidence level, you need to take your desired significance level (e.g., 0.05) and double it. Then subtract it from 1. So, if you’re using a significance level of 0.05, double that to 0.10 and then subtract from 1 (1 – 0.10 = 0.90). 90% is the confidence level you want to use for the two-sided interval. After obtaining the two-sided CI, use one of the endpoints depending on the direction of your hypothesis (i.e., upper or lower bound). That produces the one-sided bound with the confidence level that you want. For our example, we calculated a 95% one-sided bound.
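A small sketch of that arithmetic in Python with SciPy (not part of the original reply; the sample values are made up): request a 90% two-sided interval and keep the endpoint that matches your alternative.

# Hypothetical sample, purely to illustrate the doubling rule described above.
import numpy as np
from scipy import stats

x = np.array([12.1, 11.8, 12.5, 12.0, 12.3, 11.9, 12.4, 12.2])
alpha = 0.05                      # desired one-sided error rate
two_sided_conf = 1 - 2 * alpha    # so request a 90% two-sided interval

low, high = stats.t.interval(two_sided_conf, df=len(x) - 1,
                             loc=x.mean(), scale=stats.sem(x))

lower_bound_95 = low   # use this endpoint when the alternative is "greater than"
upper_bound_95 = high  # use this endpoint when the alternative is "less than"
print(lower_bound_95, upper_bound_95)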


March 3, 2021 at 8:27 am

Hi Jim. I used the one-tailed(right) statistical test to determine an anomaly in the below problem statement: On a daily basis, I calculate the (mapped_%) in a common field between two tables.

The way I used the t-test is: on any particular day, I calculate the sample_mean, S.D., and sample_count (n=30) for the last 30 days including the current day. My null hypothesis is H0 (pop. mean) = 95 and H1 > 95 (alternative hypothesis). So, I calculate the t-stat based on the sample_mean, pop. mean, sample S.D., and n. I then choose the t-crit value for 0.05 from my t-distribution table for dof (n-1). On the current day, if my abs(t-stat) > t-crit, then I reject the null hypothesis and I say the mapped_pct on that day has passed the t-test.

I get some weird results here, where if my mapped_pct is as low as 6%-8% in all the past 30 days, the t-test still gets a “pass” result. Could you help on this? If my hypothesis needs to be changed.

I would basically look for the mapped_pct >95, if it worked on a static trigger. How can I use the t-test effectively in this problem statement?


December 18, 2020 at 8:23 pm

Hello Dr. Jim, I am wondering if there is evidence in one of your books or other source you could provide, which supports that it is OK not to divide alpha level by 2 in one-tailed hypotheses. I need the source for supporting evidence in a Portfolio exercise and couldn’t find one.

I am grateful for your reply and for your statistics knowledge sharing!


November 27, 2020 at 10:31 pm

If I did a one-directional (one-tailed) F-test ANOVA and wanted to calculate a confidence interval for each individual group's mean (3 groups), would I use a one-tailed or two-tailed t within my confidence interval?

November 29, 2020 at 2:36 am

Hi Bashiru,

F-tests for ANOVA will always be one-tailed for the reasons I discuss in this post. To learn more, read my post about F-tests in ANOVA.

For the differences between my groups, I would not use t-tests because the family-wise error rate quickly grows out of hand. To learn more about how to compare group means while controlling the familywise error rate, read my post about using post hoc tests with ANOVA. Typically, these are two-sided intervals, but you'd be able to use one-sided ones.


November 26, 2020 at 10:51 am

Hi Jim, I had a question about the formulation of the hypotheses. When you want to test whether a beta = 1 or a beta = 0, what will be the null hypothesis? I’m having trouble finding out, because in most cases beta = 0 is the null hypothesis, but in this case you want to test if beta = 0. So I’m having my doubts: can it in this case be the alternative hypothesis, or is it still the null hypothesis?

Kind regards, Noa

November 27, 2020 at 1:21 am

Typically, the null hypothesis represents no effect or no relationship. As an analyst, you’re hoping that your data have enough evidence to reject the null and favor the alternative.

Assuming you’re referring to beta as in regression coefficients, zero represents no relationship. Consequently, beta = 0 is the null hypothesis.

You might hope that beta = 1, but you don’t usually include that in your alternative hypotheses. The alternative hypothesis usually states that it does not equal no effect. In other words, there is an effect but it doesn’t state what it is.

There are some exceptions to the above but I’m writing about the standard case.


November 22, 2020 at 8:46 am

Your articles are a help to intro to econometrics students. Keep up the good work! More power to you!


November 6, 2020 at 11:25 pm

Hello Jim. Can you help me with these please?

Write the null and alternative hypothesis using a 1-tailed and 2-tailed test for each problem. (In paragraph and symbols)

A teacher wants to know if there is a significant difference in the performance in MAT C313 between her morning and afternoon classes.

It is known that in our university canteen, the average waiting time for a customer to receive and pay for his/her order is 20 minutes. Additional personnel has been added and now the management wants to know if the average waiting time had been reduced.

November 8, 2020 at 12:29 am

I cover how to write the hypotheses for the different types of tests in this post. So, you just need to figure which type of test you need to use. In your case, you want to determine whether the mean waiting time is less than the target value of 20 minutes. That’s a 1-sample t-test because you’re comparing a mean to a target value (20 minutes). You specifically want to determine whether the mean is less than the target value. So, that’s a one-tailed test. And, you’re looking for a mean that is “less than” the target.

So, go to the one-tailed section in the post and look for the hypotheses for the effect being less than. That’s the one with the critical region on the left side of the curve.

Now, you need include your own information. In your case, you’re comparing the sample estimate to a population mean of 20. The 20 minutes is your null hypothesis value. Use the symbol mu μ to represent the population mean.

You put all that together and you get the following:

Null: μ ≥ 20
Alternative: μ < 20

You can use H0 to denote the null hypothesis and H1 or HA to denote the alternative hypothesis if that’s what you’ve been using in class.


October 17, 2020 at 12:11 pm

I was just wondering if you could please help with clarifying what the hypotheses would be for, say, income of gamblers and age of gamblers. I am struggling to find which means would be compared.

October 17, 2020 at 7:05 pm

Those are both continuous variables, so you’d use either correlation or regression for them. For both of those analyses, the hypotheses are the following:

Null : The correlation or regression coefficient equals zero (i.e., there is no relationship between the variables) Alternative : The coefficient does not equal zero (i.e., there is a relationship between the variables.)

When the p-value is less than your significance level, you reject the null and conclude that a relationship exists.


October 17, 2020 at 3:05 am

I was asked to choose and justify the choice between a one-tailed and a two-tailed test for dummy variables. How do I do that, and what does it mean?

October 17, 2020 at 7:11 pm

I don’t have enough information to answer your question. A dummy variable is also known as an indicator variable, which is a binary variable that indicates the presence or absence of a condition or characteristic. If you’re using this variable in a hypothesis test, I’d presume that you’re using a proportions test, which is based on the binomial distribution for binary data.

Choosing between a one-tailed or two-tailed test depends on subject area issues and, possibly, your research objectives. Typically, use a two-tailed test unless you have a very good reason to use a one-tailed test. To understand when you might use a one-tailed test, read my post about when to use a one-tailed hypothesis test .


October 16, 2020 at 2:07 pm

In your one-tailed example, Minitab describes the hypotheses as “Test of mu = 100 vs > 100”. Any idea why Minitab says the null is “=” rather than “= or less than”? No ASCII character for it?

October 16, 2020 at 4:20 pm

I’m not entirely sure even though I used to work there! I know we had some discussions about how to represent that hypothesis but I don’t recall the exact reasoning. I suspect that it has to do with the conclusions that you can draw. Let’s focus on the failing to reject the null hypothesis. If the test statistic falls in that region (i.e., it is not significant), you fail to reject the null. In this case, all you know is that you have insufficient evidence to say it is different than 100. I’m pretty sure that’s why they use the equal sign because it might as well be one.

Mathematically, I think using ≤ is more accurate, which you can really see when you look at the distribution plots. That’s why I phrase the hypotheses using ≤ or ≥ as needed. However, in terms of the interpretation, the “less than” portion doesn’t really add anything of importance. You can conclude that its equal to 100 or greater than 100, but not less than 100.


October 15, 2020 at 5:46 am

Thank you so much for your timely feedback. It helps a lot

October 14, 2020 at 10:47 am

How can I use a one-tailed test at 5% alpha on this problem?

A manufacturer of cellular phone batteries claims that when fully charged, the mean life of his product lasts for 26 hours with a standard deviation of 5 hours. Mr X, a regular distributor, randomly picked and tested 35 of the batteries. His test showed that the average life of his sample is 25.5 hours. Is there a significant difference between the average life of all the manufacturer’s batteries and the average battery life of his sample?

October 14, 2020 at 8:22 pm

I don’t think you’d want to use a one-tailed test. The goal is to determine whether the sample is significantly different than the manufacturer’s population average. You’re not saying significantly greater than or less than, which would be a one-tailed test. As phrased, you want a two-tailed test because it can detect a difference in either direction.

It sounds like you need to use a 1-sample t-test to test the mean. During this test, enter 26 as the test mean. The procedure will tell you if the sample mean of 25.5 hours is significantly different from that test mean. Similarly, you’d need a one-variance test to determine whether the sample standard deviation is significantly different from the test value of 5 hours.

For both of these tests, compare the p-value to your alpha of 0.05. If the p-value is less than this value, your results are statistically significant.


September 22, 2020 at 4:16 am

Hi Jim, I didn’t quite get when to use a two-tailed test versus a one-tailed test. Will you please explain?

September 22, 2020 at 10:05 pm

I have a complete article dedicated to that: When Can I Use One-Tailed Tests .

Basically, start with the assumption that you’ll use a two-tailed test but then consider scenarios where a one-tailed test can be appropriate. I talk about all of that in the article.

If you have questions after reading that, please don’t hesitate to ask!


July 31, 2020 at 12:33 pm

Thank you so so much for this webpage.

I have two scenarios that I need some clarification. I will really appreciate it if you can take a look:

So I have several materials for which I know when they are tested after production. My hypothesis is that the earlier they are tested after production, the higher the mean value I should expect; the later they are tested after production, the lower the mean value. Since this is more like a “greater or lesser” situation, I should use one tail. Is that the correct approach?

On the other hand, I have several mixes of materials for which I don’t know when they are tested after production. I only know the mean values of the test, and I only want to know whether one mean value is truly higher or lower than the other; I guess I want to know if they are significantly different. Should I use two tails for this? If they are not significantly different, I can judge based on the mean values of the test alone, and if they are significantly different, then I will need to do another type of analysis. Also, when I get my p-value for two tails, should I compare it to 0.025 or 0.05 if my significance level is 0.05?

Thank you so much again.

July 31, 2020 at 11:19 pm

For your first, if you absolutely know that the mean must be lower the later the material is tested, that it cannot be higher, that would be a situation where you can use a one-tailed test. However, if that’s not a certainty, you’re just guessing, use a two-tail test. If you’re measuring different items at the different times, use the independent 2-sample t-test. However, if you’re measuring the same items at two time points, use the paired t-test. If it’s appropriate, using the paired t-test will give you more statistical power because it accounts for the variability between items. For more information, see my post about when it’s ok to use a one-tailed test .

For the mix of materials, use a two-tailed test because the effect truly can go either direction.

Always compare the p-value to your full significance level regardless of whether it’s a one or two-tailed test. Don’t divide the significance level in half.


June 17, 2020 at 2:56 pm

Is it possible that we reach opposite conclusions if we use the critical value method versus the p-value method? Secondly, if we perform a one-tailed test and use the p-value method to evaluate H0, do we need to convert the two-tailed significance value into a one-tailed significance value? Can that be done just by dividing it by 2?

June 18, 2020 at 5:17 pm

The p-value method and critical value method will always agree as long as you’re not changing anything about the methodology.

If you’re using statistical software, you don’t need to make any adjustments. The software will do that for you.

However, if you calculating it by hand, you’ll need to take your significance level and then look in the table for your test statistic for a one-tailed test. For example, you’ll want to look up 5% for a one-tailed test rather than a two-tailed test. That’s not as simple as dividing by two. In this article, I show examples of one-tailed and two-tailed tests for the same degrees of freedom. The t critical value for the two-tailed test is +/- 2.086 while for the one-sided test it is 1.725. It is true that probability associated with those critical values doubles for the one-tailed test (2.5% -> 5%), but the critical value itself is not half (2.086 -> 1.725). Study the first several graphs in this article to see why that is true.

For the p-value, you can take a two-tailed p-value and divide by 2 to determine the one-sided p-value. However, if you’re using statistical software, it does that for you.
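To make those numbers concrete, here is a quick SciPy check (not part of the original reply). The comment does not state the degrees of freedom; 20 is assumed here because it reproduces the quoted 2.086 and 1.725.

from scipy import stats

df = 20  # assumed; the value that yields the critical values quoted above

two_tailed_crit = stats.t.ppf(1 - 0.05 / 2, df)  # about 2.086 (alpha = 0.05, two-tailed)
one_tailed_crit = stats.t.ppf(1 - 0.05, df)      # about 1.725 (alpha = 0.05, one-tailed)

print(round(two_tailed_crit, 3), round(one_tailed_crit, 3))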


June 11, 2020 at 3:46 pm

Hello Jim, if you have the time I’d be grateful if you could shed some clarity on this scenario:

“A researcher believes that aromatherapy can relieve stress but wants to determine whether it can also enhance focus. To test this, the researcher selected a random sample of students to take an exam in which the average score in the general population is 77. Prior to the exam, these students studied individually in a small library room where a lavender scent was present. If students in this group scored significantly above the average score in general population [is this one-tailed or two-tailed hypothesis?], then this was taken as evidence that the lavender scent enhanced focus.”

Thank you for your time if you do decide to respond.

June 11, 2020 at 4:00 pm

It’s unclear from the information provided whether the researchers used a one-tailed or two-tailed test. It could be either. A two-tailed test can detect effects in both directions, so it could definitely detect an average group score above the population score. However, you could also detect that effect using a one-tailed test if it was set up correctly. So, there’s not enough information in what you provided to know for sure. It could be either.

However, that’s irrelevant to answering the question. The tricky part, as I see it, is that you’re not entirely sure about why the scores are higher. Are they higher because the lavender scent increased concentration or are they higher because the subjects have lower stress from the lavender? Or, maybe it’s not even related to the scent but some other characteristic of the room or testing conditions in which they took the test. You just know the scores are higher but not necessarily why they’re higher.

I’d say that, no, it’s not necessarily evidence that the lavender scent enhanced focus. There are competing explanations for why the scores are higher. Also, it would be best to do this as an experiment with a control and treatment group where subjects are randomly assigned to either group. That process helps establish causality rather than just correlation and helps rule out competing explanations for why the scores are higher.

By the way, I spend a lot of time on these issues in my Introduction to Statistics ebook .


June 9, 2020 at 1:47 pm

If a left-tailed test has an alpha value of 0.05, how will you find the value in the table?


April 19, 2020 at 10:35 am

Hi Jim, my question is in regard to the results in the table in your example of the one-sample t (two-tailed) test above. What about the p-value? The p-value listed is .018. I’m assuming that is compared to an alpha of 0.025, correct?

In regression analysis, when I get a test statistic for the predictive variable of -2.099 and a p-value of 0.039. Am I comparing the p-value to an alpha of 0.025 or 0.05? Now if I run a Bootstrap for coefficients analysis, the results say the sig (2-tail) is 0.098. What are the critical values and alpha in this case? I’m trying to reconcile what I am seeing in both tables.

Thanks for your help.

April 20, 2020 at 3:24 am

Hi Marvalisa,

For one-tailed tests, you don’t need to divide alpha in half. If you can tell your software to perform a one-tailed test, it’ll do all the calculations necessary so you don’t need to adjust anything. So, if you’re using an alpha of 0.05 for a one-tailed test and your p-value is 0.04, it is significant. The procedures adjust the p-values automatically and it all works out. So, whether you’re using a one-tailed or two-tailed test, you always compare the p-value to the alpha with no need to adjust anything. The procedure does that for you!

The exception would be if for some reason your software doesn’t allow you to specify that you want to use a one-tailed test instead of a two-tailed test. Then, you divide the p-value from a two-tailed test in half to get the p-value for a one tailed test. You’d still compare it to your original alpha.

For regression, the same thing applies. If you want to use a one-tailed test for a coefficient, just divide the p-value in half if you can’t tell the software that you want a one-tailed test. The default is two-tailed. If your software has the option for one-tailed tests for any procedure, including regression, it’ll adjust the p-value for you. So, in the normal course of things, you won’t need to adjust anything.
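For software that only reports two-tailed p-values, the conversion described in this thread amounts to a couple of lines. This is a sketch of the rule, not SPSS- or package-specific code.

def one_tailed_p(two_tailed_p, effect_in_predicted_direction):
    # Halve the p-value when the estimated effect points in the predicted
    # direction; otherwise the one-tailed p-value is 1 minus half of it.
    if effect_in_predicted_direction:
        return two_tailed_p / 2
    return 1 - two_tailed_p / 2

print(one_tailed_p(0.04, True))   # 0.02
print(one_tailed_p(0.04, False))  # 0.98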


March 26, 2020 at 12:00 pm

Hey Jim, for a one-tailed hypothesis test with a .05 confidence level, should I use a 95% confidence interval or a 90% confidence interval? Thanks

March 26, 2020 at 5:05 pm

You should use a one-sided 95% confidence interval. One-sided CIs have either an upper OR lower bound but remain unbounded on the other side.


March 16, 2020 at 4:30 pm

This is not applicable to the subject but… When performing tests of equivalence, we look at the confidence interval of the difference between two groups, and we perform two one-sided t-tests for equivalence..


March 15, 2020 at 7:51 am

Thanks for this illustrative blogpost. I had a question on one of your points though.

By definition of H1 and H0, a two-sided alternate hypothesis is that there is a difference in means between the test and control. Not that anything is ‘better’ or ‘worse’.

Just because we observed a negative result in your example, does not mean we can conclude it’s necessarily worse, but instead just ‘different’.

Therefore while it enables us to spot the fact that there may be differences between test and control, we cannot make claims about directional effects. So I struggle to see why they actually need to be used instead of one-sided tests.

What’s your take on this?

March 16, 2020 at 3:02 am

Hi Dominic,

If you’ll notice, I carefully avoid stating better or worse because in a general sense you’re right. However, given the context of a specific experiment, you can conclude whether a negative value is better or worse. As always in statistics, you have to use your subject-area knowledge to help interpret the results. In some cases, a negative value is a bad result. In other cases, it’s not. Use your subject-area knowledge!

I’m not sure why you think that you can’t make claims about directional effects? Of course you can!

As for why you shouldn’t use one-tailed tests for most cases, read my post When Can I Use One-Tailed Tests . That should answer your questions.


May 10, 2019 at 12:36 pm

Your website is absolutely amazing Jim, you seem like the nicest guy for doing this and I like how there’s no ulterior motive, (I wasn’t automatically signed up for emails or anything when leaving this comment). I study economics and found econometrics really difficult at first, but your website explains it so clearly its been a big asset to my studies, keep up the good work!

May 10, 2019 at 2:12 pm

Thank you so much, Jack. Your kind words mean a lot!


April 26, 2019 at 5:05 am

Hi Jim, I really need your help now please.

One-tailed and two-tailed hypothesis tests: are they the same, or is one twice or half the other, or are they unrelated?

April 26, 2019 at 11:41 am

Hi Anthony,

I describe how the hypotheses are different in this post. You’ll find your answers.


February 8, 2019 at 8:00 am

Thank you for your blog Jim, I have a Statistics exam soon and your articles let me understand a lot!

February 8, 2019 at 10:52 am

You’re very welcome! I’m happy to hear that it’s been helpful. Best of luck on your exam!


January 12, 2019 at 7:06 am

Hi Jim, when you say the target value is 5, do you mean the population mean is 5 and we are trying to validate it with the help of the sample mean 4.1 using hypothesis tests? If so, how can we state a population parameter as 5 when it is almost impossible to measure a population parameter? Please clarify.

January 12, 2019 at 6:57 pm

When you set a target for a one-sample test, it’s based on a value that is important to you. It’s not a population parameter or anything like that. The example in this post uses a case where we need parts that are stronger on average than a value of 5. We derive the value of 5 by using our subject area knowledge about what is required for a situation. Given our product knowledge for the hypothetical example, we know it should be 5 or higher. So, we use that in the hypothesis test and determine whether the population mean is greater than that target value.

When you perform a one-sample test, a target value is optional. If you don’t supply a target value, you simply obtain a confidence interval for the range of values that the parameter is likely to fall within. But, sometimes there is meaningful number that you want to test for specifically.

I hope that clarifies the rationale behind the target value!


November 15, 2018 at 8:08 am

I understand that in Psychology a one tailed hypothesis is preferred. Is that so

November 15, 2018 at 11:30 am

No, there’s no overall preference for one-tailed hypothesis tests in statistics. That would be a study-by-study decision based on the types of possible effects. For more information about this decision, read my post: When Can I Use One-Tailed Tests?


November 6, 2018 at 1:14 am

I’m grateful to you for the explanations on One tail and Two tail hypothesis test. This opens my knowledge horizon beyond what an average statistics textbook can offer. Please include more examples in future posts. Thanks

November 5, 2018 at 10:20 am

Thank you. I will search it as well.

Stan Alekman

November 4, 2018 at 8:48 pm

Jim, what is the difference between the central and non-central t-distributions w/respect to hypothesis testing?

November 5, 2018 at 10:12 am

Hi Stan, this is something I will need to look into. I know central t-distribution is the common Student t-distribution, but I don’t have experience using non-central t-distributions. There might well be a blog post in that–after I learn more!


November 4, 2018 at 7:42 pm

this is awesome.



Institute for Digital Research and Education

FAQ: What are the differences between one-tailed and two-tailed tests?

When you conduct a test of statistical significance, whether it is from a correlation, an ANOVA, a regression or some other kind of test, you are given a p-value somewhere in the output.  If your test statistic is symmetrically distributed, you can select one of three alternative hypotheses. Two of these correspond to one-tailed tests and one corresponds to a two-tailed test.  However, the p-value presented is (almost always) for a two-tailed test.  But how do you choose which test?  Is the p-value appropriate for your test? And, if it is not, how can you calculate the correct p-value for your test given the p-value in your output?  

What is a two-tailed test?

First let’s start with the meaning of a two-tailed test. If you are using a significance level of 0.05, a two-tailed test allots half of your alpha to testing the statistical significance in one direction and half of your alpha to testing statistical significance in the other direction. This means that .025 is in each tail of the distribution of your test statistic. When using a two-tailed test, regardless of the direction of the relationship you hypothesize, you are testing for the possibility of the relationship in both directions. For example, we may wish to compare the mean of a sample to a given value x using a t-test. Our null hypothesis is that the mean is equal to x. A two-tailed test will test both if the mean is significantly greater than x and if the mean is significantly less than x. The mean is considered significantly different from x if the test statistic is in the top 2.5% or bottom 2.5% of its probability distribution, resulting in a p-value less than 0.05.

What is a one-tailed test?

Next, let’s discuss the meaning of a one-tailed test.  If you are using a significance level of .05, a one-tailed test allots all of your alpha to testing the statistical significance in the one direction of interest.  This means that .05 is in one tail of the distribution of your test statistic. When using a one-tailed test, you are testing for the possibility of the relationship in one direction and completely disregarding the possibility of a relationship in the other direction.  Let’s return to our example comparing the mean of a sample to a given value x using a t-test.  Our null hypothesis is that the mean is equal to x . A one-tailed test will test either if the mean is significantly greater than x or if the mean is significantly less than x , but not both. Then, depending on the chosen tail, the mean is significantly greater than or less than x if the test statistic is in the top 5% of its probability distribution or bottom 5% of its probability distribution, resulting in a p-value less than 0.05.  The one-tailed test provides more power to detect an effect in one direction by not testing the effect in the other direction. A discussion of when this is an appropriate option follows.   

When is a one-tailed test appropriate?

Because the one-tailed test provides more power to detect an effect, you may be tempted to use a one-tailed test whenever you have a hypothesis about the direction of an effect. Before doing so, consider the consequences of missing an effect in the other direction.  Imagine you have developed a new drug that you believe is an improvement over an existing drug.  You wish to maximize your ability to detect the improvement, so you opt for a one-tailed test. In doing so, you fail to test for the possibility that the new drug is less effective than the existing drug.  The consequences in this example are extreme, but they illustrate a danger of inappropriate use of a one-tailed test.

So when is a one-tailed test appropriate? If you consider the consequences of missing an effect in the untested direction and conclude that they are negligible and in no way irresponsible or unethical, then you can proceed with a one-tailed test. For example, imagine again that you have developed a new drug. It is cheaper than the existing drug and, you believe, no less effective. In testing this drug, you are only interested in testing whether it is less effective than the existing drug. You do not care if it is significantly more effective. You only wish to show that it is not less effective. In this scenario, a one-tailed test would be appropriate.

When is a one-tailed test NOT appropriate?

Choosing a one-tailed test for the sole purpose of attaining significance is not appropriate.  Choosing a one-tailed test after running a two-tailed test that failed to reject the null hypothesis is not appropriate, no matter how "close" to significant the two-tailed test was.  Using statistical tests inappropriately can lead to invalid results that are not replicable and highly questionable–a steep price to pay for a significance star in your results table!   

Deriving a one-tailed test from two-tailed output

The default among statistical packages performing tests is to report two-tailed p-values.  Because the most commonly used test statistic distributions (standard normal, Student’s t) are symmetric about zero, most one-tailed p-values can be derived from the two-tailed p-values.   

Below, we have the output from a two-sample t-test in Stata.  The test is comparing the mean male score to the mean female score.  The null hypothesis is that the difference in means is zero.  The two-sided alternative is that the difference in means is not zero.  There are two one-sided alternatives that one could opt to test instead: that the male score is higher than the female score (diff  > 0) or that the female score is higher than the male score (diff < 0).  In this instance, Stata presents results for all three alternatives.  Under the headings Ha: diff < 0 and Ha: diff > 0 are the results for the one-tailed tests. In the middle, under the heading Ha: diff != 0 (which means that the difference is not equal to 0), are the results for the two-tailed test. 

Two-sample t test with equal variances
------------------------------------------------------------------------------
   Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
    male |      91    50.12088    1.080274    10.30516    47.97473    52.26703
  female |     109    54.99083    .7790686    8.133715    53.44658    56.53507
---------+--------------------------------------------------------------------
combined |     200      52.775    .6702372    9.478586    51.45332    54.09668
---------+--------------------------------------------------------------------
    diff |            -4.869947    1.304191              -7.441835   -2.298059
------------------------------------------------------------------------------
Degrees of freedom: 198

             Ho: mean(male) - mean(female) = diff = 0

     Ha: diff < 0            Ha: diff != 0             Ha: diff > 0
      t = -3.7341              t = -3.7341              t = -3.7341
  P < t = 0.0001           P > |t| = 0.0002           P > t = 0.9999

Note that the test statistic, -3.7341, is the same for all of these tests. The two-tailed p-value is P > |t|. This can be rewritten as P(> 3.7341) + P(< -3.7341). Because the t-distribution is symmetric about zero, these two probabilities are equal: P > |t| = 2 * P(< -3.7341). Thus, we can see that the two-tailed p-value is twice the one-tailed p-value for the alternative hypothesis that (diff < 0). The other one-tailed alternative hypothesis has a p-value of P(> -3.7341) = 1 - P(< -3.7341) = 1 - 0.0001 = 0.9999. So, depending on the direction of the one-tailed hypothesis, its p-value is either 0.5*(two-tailed p-value) or 1 - 0.5*(two-tailed p-value) if the test statistic is symmetrically distributed about zero.

In this example, the two-tailed p-value suggests rejecting the null hypothesis of no difference. Had we opted for the one-tailed test of (diff > 0), we would fail to reject the null because of our choice of tails. 
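As a cross-check (not part of the original FAQ), the three Stata p-values above can be reproduced directly from the reported test statistic with SciPy, since the t-distribution with 198 degrees of freedom is symmetric about zero.

from scipy import stats

t_stat, df = -3.7341, 198  # values reported in the Stata output above

p_less = stats.t.cdf(t_stat, df)                # Ha: diff < 0  -> about 0.0001
p_two_sided = 2 * stats.t.sf(abs(t_stat), df)   # Ha: diff != 0 -> about 0.0002
p_greater = stats.t.sf(t_stat, df)              # Ha: diff > 0  -> about 0.9999

print(p_less, p_two_sided, p_greater)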

The output below is from a regression analysis in Stata.  Unlike the example above, only the two-sided p-values are presented in this output.

      Source |       SS       df       MS              Number of obs =     200
-------------+------------------------------           F(  2,   197) =   46.58
       Model |  7363.62077     2  3681.81039           Prob > F      =  0.0000
    Residual |  15572.5742   197  79.0486001           R-squared     =  0.3210
-------------+------------------------------           Adj R-squared =  0.3142
       Total |   22936.195   199  115.257261           Root MSE      =  8.8909

------------------------------------------------------------------------------
       socst |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     science |   .2191144   .0820323     2.67   0.008     .0573403    .3808885
        math |   .4778911   .0866945     5.51   0.000     .3069228    .6488594
       _cons |   15.88534   3.850786     4.13   0.000     8.291287    23.47939
------------------------------------------------------------------------------

For each regression coefficient, the tested null hypothesis is that the coefficient is equal to zero.  Thus, the one-tailed alternatives are that the coefficient is greater than zero and that the coefficient is less than zero. To get the p-value for the one-tailed test of the variable science having a coefficient greater than zero, you would divide the .008 by 2, yielding .004 because the effect is going in the predicted direction. This is P(>2.67). If you had made your prediction in the other direction (the opposite direction of the model effect), the p-value would have been 1 – .004 = .996.  This is P(<2.67). For all three p-values, the test statistic is 2.67. 



Hypothesis Testing for Means & Proportions


Hypothesis Testing: Upper-, Lower-, and Two-Tailed Tests


The procedure for hypothesis testing is based on the ideas described above. Specifically, we set up competing hypotheses, select a random sample from the population of interest and compute summary statistics. We then determine whether the sample data supports the null or alternative hypotheses. The procedure can be broken down into the following five steps.  

  • Step 1. Set up hypotheses and select the level of significance α.

H0: Null hypothesis (no change, no difference);

H1: Research hypothesis (investigator's belief); α = 0.05

 

Upper-tailed, Lower-tailed, Two-tailed Tests

The research or alternative hypothesis can take one of three forms. An investigator might believe that the parameter has increased, decreased or changed. For example, an investigator might hypothesize:  

  • H1: μ > μ0, where μ0 is the comparator or null value (e.g., μ0 = 191 in our example about weight in men in 2006) and an increase is hypothesized; this type of test is called an upper-tailed test;
  • H1: μ < μ0, where a decrease is hypothesized; this is called a lower-tailed test; or
  • H1: μ ≠ μ0, where a difference is hypothesized; this is called a two-tailed test.

The exact form of the research hypothesis depends on the investigator's belief about the parameter of interest and whether it has possibly increased, decreased or is different from the null value. The research hypothesis is set up by the investigator before any data are collected.

 

  • Step 2. Select the appropriate test statistic.  

The test statistic is a single number that summarizes the sample information.   An example of a test statistic is the Z statistic computed as follows:
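The formula image did not survive in this copy. For a one-sample test of a mean, the Z statistic the text presumably refers to is the usual large-sample statistic, with sample mean X̄, null value μ0, sample standard deviation s, and sample size n:

Z = \frac{\bar{X} - \mu_0}{s / \sqrt{n}}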

When the sample size is small, we will use t statistics (just as we did when constructing confidence intervals for small samples). As we present each scenario, alternative test statistics are provided along with conditions for their appropriate use.

  • Step 3.  Set up decision rule.  

The decision rule is a statement that tells under what circumstances to reject the null hypothesis. The decision rule is based on specific values of the test statistic (e.g., reject H 0 if Z > 1.645). The decision rule for a specific test depends on 3 factors: the research or alternative hypothesis, the test statistic and the level of significance. Each is discussed below.

  • The decision rule depends on whether an upper-tailed, lower-tailed, or two-tailed test is proposed. In an upper-tailed test the decision rule has investigators reject H 0 if the test statistic is larger than the critical value. In a lower-tailed test the decision rule has investigators reject H 0 if the test statistic is smaller than the critical value.  In a two-tailed test the decision rule has investigators reject H 0 if the test statistic is extreme, either larger than an upper critical value or smaller than a lower critical value.
  • The exact form of the test statistic is also important in determining the decision rule. If the test statistic follows the standard normal distribution (Z), then the decision rule will be based on the standard normal distribution. If the test statistic follows the t distribution, then the decision rule will be based on the t distribution. The appropriate critical value will be selected from the t distribution again depending on the specific alternative hypothesis and the level of significance.  
  • The third factor is the level of significance. The level of significance which is selected in Step 1 (e.g., α =0.05) dictates the critical value.   For example, in an upper tailed Z test, if α =0.05 then the critical value is Z=1.645.  

The following figures illustrate the rejection regions defined by the decision rule for upper-, lower- and two-tailed Z tests with α=0.05. Notice that the rejection regions are in the upper, lower and both tails of the curves, respectively. The decision rules are written below each figure.

Rejection Region for Upper-Tailed Z Test (H 1 : μ > μ 0 ) with α=0.05

The decision rule is: Reject H 0 if Z > 1.645.

 

 

Critical values of Z for upper-tailed tests:

α        Z
0.10     1.282
0.05     1.645
0.025    1.960
0.010    2.326
0.005    2.576
0.001    3.090
0.0001   3.719

Standard normal distribution with lower tail at -1.645 and alpha=0.05

Rejection Region for Lower-Tailed Z Test (H 1 : μ < μ 0 ) with α =0.05

The decision rule is: Reject H 0 if Z < -1.645.

Critical values of Z for lower-tailed tests:

α        Z
0.10     -1.282
0.05     -1.645
0.025    -1.960
0.010    -2.326
0.005    -2.576
0.001    -3.090
0.0001   -3.719

Standard normal distribution with two tails

Rejection Region for Two-Tailed Z Test (H 1 : μ ≠ μ 0 ) with α =0.05

The decision rule is: Reject H 0 if Z < -1.960 or if Z > 1.960.

Critical values of Z for two-tailed tests:

α        Z
0.20     1.282
0.10     1.645
0.05     1.960
0.010    2.576
0.001    3.291
0.0001   3.819
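If statistical software is available, the critical values in the three tables above can be reproduced directly from the standard normal distribution. A minimal sketch, assuming scipy is installed:

```python
# Sketch: reproduce the tabled Z critical values with the standard normal distribution.
from scipy.stats import norm

alpha = 0.05
print(round(norm.ppf(1 - alpha), 3))      # upper-tailed critical value:  1.645
print(round(norm.ppf(alpha), 3))          # lower-tailed critical value: -1.645
print(round(norm.ppf(1 - alpha / 2), 3))  # two-tailed critical values: +/-1.96
```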

The complete table of critical values of Z for upper, lower and two-tailed tests can be found in the table of Z values to the right in "Other Resources."

Critical values of t for upper, lower and two-tailed tests can be found in the table of t values in "Other Resources."

  • Step 4. Compute the test statistic.  

Here we compute the test statistic by substituting the observed sample data into the test statistic identified in Step 2.

  • Step 5. Conclusion.  

The final conclusion is made by comparing the test statistic (which is a summary of the information observed in the sample) to the decision rule. The final conclusion will be either to reject the null hypothesis (because the sample data are very unlikely if the null hypothesis is true) or not to reject the null hypothesis (because the sample data are not very unlikely).  

If the null hypothesis is rejected, then an exact significance level is computed to describe the likelihood of observing the sample data assuming that the null hypothesis is true. The exact level of significance is called the p-value and it will be less than the chosen level of significance if we reject H 0 .

Statistical computing packages provide exact p-values as part of their standard output for hypothesis tests. In fact, when using a statistical computing package, the steps outlined above can be abbreviated. The hypotheses (Step 1) should always be set up in advance of any analysis and the significance criterion should also be determined (e.g., α = 0.05). Statistical computing packages will produce the test statistic (usually reporting the test statistic as t) and a p-value. The investigator can then determine statistical significance using the following rule: if p < α, then reject H 0 .

 

 

  • Step 1. Set up hypotheses and determine level of significance

H 0 : μ = 191;  H 1 : μ > 191;  α = 0.05

The research hypothesis is that weights have increased, and therefore an upper tailed test is used.

  • Step 2. Select the appropriate test statistic.

Because the sample size is large (n > 30), the appropriate test statistic is Z = (x̄ - μ 0 ) / (s/√n).

  • Step 3. Set up decision rule.  

In this example, we are performing an upper tailed test (H 1 : μ> 191), with a Z test statistic and selected α =0.05.   Reject H 0 if Z > 1.645.

We now substitute the sample data into the formula for the test statistic identified in Step 2, which gives Z = 2.38.

We reject H 0 because 2.38 > 1.645. We have statistically significant evidence at α = 0.05 to show that the mean weight in men in 2006 is more than 191 pounds.

Because we rejected the null hypothesis, we now approximate the p-value, which is the likelihood of observing the sample data if the null hypothesis is true. An alternative definition of the p-value is the smallest level of significance where we can still reject H 0 . In this example, we observed Z = 2.38 and for α = 0.05, the critical value was 1.645. Because 2.38 exceeded 1.645 we rejected H 0 . In our conclusion we reported a statistically significant increase in mean weight at a 5% level of significance. Using the table of critical values for upper-tailed tests, we can approximate the p-value. If we select α = 0.025, the critical value is 1.960, and we still reject H 0 because 2.38 > 1.960. If we select α = 0.010, the critical value is 2.326, and we still reject H 0 because 2.38 > 2.326. However, if we select α = 0.005, the critical value is 2.576, and we cannot reject H 0 because 2.38 < 2.576. Therefore, the smallest α where we still reject H 0 is 0.010. This is the p-value. A statistical computing package would produce a more precise p-value, which would be in between 0.005 and 0.010. Here we are approximating the p-value and would report p < 0.010.
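For comparison, the exact p-value for the observed Z = 2.38 can be computed from the standard normal distribution rather than approximated from the table. A minimal sketch, assuming scipy is installed:

```python
# Sketch: exact upper-tailed p-value for the observed Z = 2.38.
from scipy.stats import norm

p_value = norm.sf(2.38)      # P(Z > 2.38) when H0 is true
print(round(p_value, 4))     # about 0.0087, between 0.005 and 0.010 as approximated above
```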

In all tests of hypothesis, there are two types of errors that can be committed. The first is called a Type I error and refers to the situation where we incorrectly reject H 0 when in fact it is true. This is also called a false positive result (as we incorrectly conclude that the research hypothesis is true when in fact it is not). When we run a test of hypothesis and decide to reject H 0 (e.g., because the test statistic exceeds the critical value in an upper tailed test) then either we make a correct decision because the research hypothesis is true or we commit a Type I error. The different conclusions are summarized in the table below. Note that we will never know whether the null hypothesis is really true or false (i.e., we will never know which row of the following table reflects reality).

Table - Conclusions in Test of Hypothesis

                     Do Not Reject H 0       Reject H 0
H 0 is True          Correct Decision        Type I Error
H 0 is False         Type II Error           Correct Decision

In the first step of the hypothesis test, we select a level of significance, α, and α= P(Type I error). Because we purposely select a small value for α, we control the probability of committing a Type I error. For example, if we select α=0.05, and our test tells us to reject H 0 , then there is a 5% probability that we commit a Type I error. Most investigators are very comfortable with this and are confident when rejecting H 0 that the research hypothesis is true (as it is the more likely scenario when we reject H 0 ).

When we run a test of hypothesis and decide not to reject H 0 (e.g., because the test statistic is below the critical value in an upper tailed test) then either we make a correct decision because the null hypothesis is true or we commit a Type II error. Beta (β) represents the probability of a Type II error and is defined as follows: β=P(Type II error) = P(Do not Reject H 0 | H 0 is false). Unfortunately, we cannot choose β to be small (e.g., 0.05) to control the probability of committing a Type II error because β depends on several factors including the sample size, α, and the research hypothesis. When we do not reject H 0 , it may be very likely that we are committing a Type II error (i.e., failing to reject H 0 when in fact it is false). Therefore, when tests are run and the null hypothesis is not rejected we often make a weak concluding statement allowing for the possibility that we might be committing a Type II error. If we do not reject H 0 , we conclude that we do not have significant evidence to show that H 1 is true. We do not conclude that H 0 is true.


 The most common reason for a Type II error is a small sample size.
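One way to see this is by simulation. The sketch below uses hypothetical numbers (numpy assumed available): it draws repeated samples from a population where H 0 is actually false and counts how often an upper-tailed Z test fails to reject, showing the Type II error rate shrinking as the sample size grows.

```python
# Sketch: simulated Type II error rate of an upper-tailed Z test at several sample sizes.
# The population values (true mean 52 vs. null value 50, sigma 10) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
mu_null, mu_true, sigma, z_crit = 50.0, 52.0, 10.0, 1.645

for n in (10, 50, 200):
    sample_means = rng.normal(mu_true, sigma / np.sqrt(n), size=10_000)
    z = (sample_means - mu_null) / (sigma / np.sqrt(n))
    beta = np.mean(z <= z_crit)   # failing to reject H0 even though it is false
    print(n, round(beta, 3))      # beta drops as n increases
```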


One- and Two-Tailed Tests

In the previous example, you tested a research hypothesis that predicted not only that the sample mean would be different from the population mean but that it would be different in a specific direction—it would be lower. This test is called a directional or one‐tailed test because the region of rejection is entirely within one tail of the distribution.

Some hypotheses predict only that one value will be different from another, without additionally predicting which will be higher. The test of such a hypothesis is nondirectional or two‐tailed because an extreme test statistic in either tail of the distribution (positive or negative) will lead to the rejection of the null hypothesis of no difference.

Suppose that you suspect that a particular class's performance on a proficiency test is not representative of those people who have taken the test. The national mean score on the test is 74.

The research hypothesis is:

The mean score of the class on the test is not 74.

Or in notation: H a : μ ≠ 74

The null hypothesis is:

The mean score of the class on the test is 74.

In notation: H 0 : μ = 74

As in the last example, you decide to use a 5 percent probability level for the test. Both tests have a region of rejection, then, of 5 percent, or 0.05. In this example, however, the rejection region must be split between both tails of the distribution—0.025 in the upper tail and 0.025 in the lower tail—because your hypothesis specifies only a difference, not a direction, as shown in Figure 1(a). You will reject the null hypotheses of no difference if the class sample mean is either much higher or much lower than the population mean of 74. In the previous example, only a sample mean much lower than the population mean would have led to the rejection of the null hypothesis.

Figure 1. Comparison of (a) a two‐tailed test and (b) a one‐tailed test, at the same probability level (95 percent).


The decision of whether to use a one‐ or a two‐tailed test is important because a test statistic that falls in the region of rejection in a one‐tailed test may not do so in a two‐tailed test, even though both tests use the same probability level. Suppose the class sample mean in your example was 77, and its corresponding z ‐score was computed to be 1.80. Table 2 in "Statistics Tables" shows the critical z ‐scores for a probability of 0.025 in either tail to be –1.96 and 1.96. In order to reject the null hypothesis, the test statistic must be either smaller than –1.96 or greater than 1.96. It is not, so you cannot reject the null hypothesis. Refer to Figure 1(a).

Suppose, however, you had a reason to expect that the class would perform better on the proficiency test than the population, and you did a one‐tailed test instead. For this test, the rejection region of 0.05 would be entirely within the upper tail. The critical z ‐value for a probability of 0.05 in the upper tail is 1.65. (Remember that Table 2 in "Statistics Tables" gives areas of the curve below z ; so you look up the z ‐value for a probability of 0.95.) Your computed test statistic of z = 1.80 exceeds the critical value and falls in the region of rejection, so you reject the null hypothesis and say that your suspicion that the class was better than the population was supported. See Figure 1(b).

In practice, you should use a one‐tailed test only when you have good reason to expect that the difference will be in a particular direction. A two‐tailed test is more conservative than a one‐tailed test because a two‐tailed test takes a more extreme test statistic to reject the null hypothesis.
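The same comparison can be made with exact p-values instead of table lookups. A minimal sketch for the class example with z = 1.80, assuming scipy is available:

```python
# Sketch: one- vs. two-tailed p-values for the observed z = 1.80.
from scipy.stats import norm

z = 1.80
print(round(2 * norm.sf(abs(z)), 3))  # two-tailed p ~ 0.072 -> fail to reject at 0.05
print(round(norm.sf(z), 3))           # one-tailed p ~ 0.036 -> reject at 0.05
```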


S.3.2 Hypothesis Testing (P-Value Approach)

The P -value approach involves determining "likely" or "unlikely" by determining the probability — assuming the null hypothesis was true — of observing a more extreme test statistic in the direction of the alternative hypothesis than the one observed. If the P -value is small, say less than (or equal to) \(\alpha\), then it is "unlikely." And, if the P -value is large, say more than \(\alpha\), then it is "likely."

If the P -value is less than (or equal to) \(\alpha\), then the null hypothesis is rejected in favor of the alternative hypothesis. And, if the P -value is greater than \(\alpha\), then the null hypothesis is not rejected.

Specifically, the four steps involved in using the P -value approach to conducting any hypothesis test are:

  • Specify the null and alternative hypotheses.
  • Using the sample data and assuming the null hypothesis is true, calculate the value of the test statistic. Again, to conduct the hypothesis test for the population mean μ , we use the t -statistic \(t^*=\frac{\bar{x}-\mu}{s/\sqrt{n}}\) which follows a t -distribution with n - 1 degrees of freedom.
  • Using the known distribution of the test statistic, calculate the P -value : "If the null hypothesis is true, what is the probability that we'd observe a more extreme test statistic in the direction of the alternative hypothesis than we did?" (Note how this question is equivalent to the question answered in criminal trials: "If the defendant is innocent, what is the chance that we'd observe such extreme criminal evidence?")
  • Set the significance level, \(\alpha\), the probability of making a Type I error to be small — 0.01, 0.05, or 0.10. Compare the P -value to \(\alpha\). If the P -value is less than (or equal to) \(\alpha\), reject the null hypothesis in favor of the alternative hypothesis. If the P -value is greater than \(\alpha\), do not reject the null hypothesis.

Example S.3.2.1

Mean GPA

In our example concerning the mean grade point average, suppose that our random sample of n = 15 students majoring in mathematics yields a test statistic t * equaling 2.5. Since n = 15, our test statistic t * has n - 1 = 14 degrees of freedom. Also, suppose we set our significance level α at 0.05 so that we have only a 5% chance of making a Type I error.

Right Tailed

The P -value for conducting the right-tailed test H 0 : μ = 3 versus H A : μ > 3 is the probability that we would observe a test statistic greater than t * = 2.5 if the population mean \(\mu\) really were 3. Recall that probability equals the area under the probability curve. The P -value is therefore the area under a t n - 1 = t 14 curve and to the right of the test statistic t * = 2.5. It can be shown using statistical software that the P -value is 0.0127. The graph depicts this visually.

t-distribution graph showing the right tail beyond a t value of 2.5

The P -value, 0.0127, tells us it is "unlikely" that we would observe such an extreme test statistic t * in the direction of H A if the null hypothesis were true. Therefore, our initial assumption that the null hypothesis is true must be incorrect. That is, since the P -value, 0.0127, is less than \(\alpha\) = 0.05, we reject the null hypothesis H 0 : μ = 3 in favor of the alternative hypothesis H A : μ > 3.

Note that we would not reject H 0 : μ = 3 in favor of H A : μ > 3 if we lowered our willingness to make a Type I error to \(\alpha\) = 0.01 instead, as the P -value, 0.0127, is then greater than \(\alpha\) = 0.01.

Left Tailed

In our example concerning the mean grade point average, suppose that our random sample of n = 15 students majoring in mathematics instead yields a test statistic t* equaling -2.5. The P-value for conducting the left-tailed test H 0 : μ = 3 versus H A : μ < 3 is the probability that we would observe a test statistic less than t* = -2.5 if the population mean μ really were 3. The P-value is therefore the area under a t n - 1 = t 14 curve and to the left of the test statistic t* = -2.5. It can be shown using statistical software that the P-value is 0.0127. The graph depicts this visually.

t distribution graph showing left tail below t value of -2.5

The P -value, 0.0127, tells us it is "unlikely" that we would observe such an extreme test statistic t * in the direction of H A if the null hypothesis were true. Therefore, our initial assumption that the null hypothesis is true must be incorrect. That is, since the P -value, 0.0127, is less than α = 0.05, we reject the null hypothesis H 0 : μ = 3 in favor of the alternative hypothesis H A : μ < 3.

Note that we would not reject H 0 : μ = 3 in favor of H A : μ < 3 if we lowered our willingness to make a Type I error to α = 0.01 instead, as the P -value, 0.0127, is then greater than \(\alpha\) = 0.01.

Two-Tailed

In our example concerning the mean grade point average, suppose again that our random sample of n = 15 students majoring in mathematics yields a test statistic t* equaling -2.5. The P-value for conducting the two-tailed test H 0 : μ = 3 versus H A : μ ≠ 3 is the probability that we would observe a test statistic less than -2.5 or greater than 2.5 if the population mean μ really were 3. That is, the two-tailed test requires taking into account the possibility that the test statistic could fall into either tail (hence the name "two-tailed" test). The P-value is, therefore, the area under a t n - 1 = t 14 curve to the left of -2.5 and to the right of 2.5. It can be shown using statistical software that the P-value is 0.0127 + 0.0127, or 0.0254. The graph depicts this visually.

t-distribution graph of two tailed probability for t values of -2.5 and 2.5

Note that the P -value for a two-tailed test is always two times the P -value for either of the one-tailed tests. The P -value, 0.0254, tells us it is "unlikely" that we would observe such an extreme test statistic t * in the direction of H A if the null hypothesis were true. Therefore, our initial assumption that the null hypothesis is true must be incorrect. That is, since the P -value, 0.0254, is less than α = 0.05, we reject the null hypothesis H 0 : μ = 3 in favor of the alternative hypothesis H A : μ ≠ 3.

Note that we would not reject H 0 : μ = 3 in favor of H A : μ ≠ 3 if we lowered our willingness to make a Type I error to α = 0.01 instead, as the P -value, 0.0254, is then greater than \(\alpha\) = 0.01.
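The three p-values quoted in these examples (0.0127, 0.0127, and 0.0254) can be reproduced from the t distribution with 14 degrees of freedom. A minimal sketch, assuming scipy is available:

```python
# Sketch: reproducing the GPA-example p-values from the t distribution (df = 14).
from scipy.stats import t

df = 14
print(round(t.sf(2.5, df), 4))       # right-tailed, t* =  2.5 -> about 0.0127
print(round(t.cdf(-2.5, df), 4))     # left-tailed,  t* = -2.5 -> about 0.0127
print(round(2 * t.sf(2.5, df), 4))   # two-tailed             -> about 0.0254
```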

Now that we have reviewed the critical value and P -value approach procedures for each of the three possible hypotheses, let's look at three new examples — one of a right-tailed test, one of a left-tailed test, and one of a two-tailed test.

The good news is that, whenever possible, we will take advantage of the test statistics and P -values reported in statistical software, such as Minitab, to conduct our hypothesis tests in this course.


How to Identify a Left Tailed Test vs. a Right Tailed Test

In statistics, we use hypothesis tests to determine whether some claim about a population parameter is true or not.

Whenever we perform a hypothesis test, we always write a null hypothesis and an alternative hypothesis , which take the following forms:

H 0 (Null Hypothesis): The population parameter is =, ≤, or ≥ some value

H A (Alternative Hypothesis): The population parameter is <, >, or ≠ some value

There are three different types of hypothesis tests:

  • Two-tailed test: The alternative hypothesis contains the “≠” sign
  • Left-tailed test: The alternative hypothesis contains the “<” sign
  • Right-tailed test: The alternative hypothesis contains the “>” sign

Notice that we only have to look at the sign in the alternative hypothesis to determine the type of hypothesis test.


The following examples show how to identify left-tailed and right-tailed tests in practice.

Example: Left-Tailed Test

Suppose it’s assumed that the average weight of a certain widget produced at a factory is 20 grams. However, one inspector believes the true average weight is less than 20 grams.

To test this, he weighs a simple random sample of 20 widgets and obtains the following information:

  • n = 20 widgets
  • x = 19.8 grams
  • s = 3.1 grams

He then performs a hypothesis test using the following null and alternative hypotheses:

H 0 (Null Hypothesis): μ ≥ 20 grams

H A (Alternative Hypothesis): μ < 20 grams

The test statistic is calculated as:

  • t  = ( x – µ) / (s/√ n )
  • t = (19.8 - 20) / (3.1/√20) = -0.2885

According to the t-Distribution table, the t critical value at α = .05 and n-1 = 19 degrees of freedom is -1.729.

Since the test statistic is not less than this value, the inspector fails to reject the null hypothesis. He does not have sufficient evidence to say that the true mean weight of widgets produced at this factory is less than 20 grams.
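A short sketch of this left-tailed test, computing both the test statistic and the critical value from the summary statistics above (scipy assumed available):

```python
# Sketch: the widget example as a left-tailed one-sample t-test from summary statistics.
import math
from scipy.stats import t

n, x_bar, s, mu_0, alpha = 20, 19.8, 3.1, 20, 0.05
t_stat = (x_bar - mu_0) / (s / math.sqrt(n))   # about -0.289
t_crit = t.ppf(alpha, n - 1)                   # about -1.729
print(round(t_stat, 3), round(t_crit, 3), t_stat < t_crit)   # False: fail to reject H0
```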

Example: Right-Tailed Test

Suppose it’s assumed that the average height of a certain species of plant is 10 inches tall. However, one botanist claims the true average height is greater than 10 inches.

To test this claim, she goes out and measures the height of a simple random sample of 15 plants and obtains the following information:

  • n = 15 plants
  • x = 11.4 inches
  • s = 2.5 inches

She then performs a hypothesis test using the following null and alternative hypotheses:

H 0 (Null Hypothesis): μ ≤ 10 inches

H A (Alternative Hypothesis): μ > 10 inches

The test statistic is calculated as:

  • t = ( x – µ) / (s/√ n )
  • t = (11.4 - 10) / (2.5/√15) = 2.169

According to the t-Distribution table , the t critical value at α = .05 and n-1 = 14 degrees of freedom is 1.761 .

Since the test statistic is greater than this value, the botanist can reject the null hypothesis. She has sufficient evidence to say that the true mean height for this species of plant is greater than 10 inches.
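The same conclusion can be reached with the exact p-value instead of the critical value. A minimal sketch, assuming scipy is available:

```python
# Sketch: the plant-height example as a right-tailed t-test, using the p-value.
import math
from scipy.stats import t

n, x_bar, s, mu_0 = 15, 11.4, 2.5, 10
t_stat = (x_bar - mu_0) / (s / math.sqrt(n))   # about 2.169
p_value = t.sf(t_stat, n - 1)                  # right-tailed p-value, a bit under 0.025
print(round(t_stat, 3), round(p_value, 4))     # p < 0.05, so reject H0 as above
```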


One and Two Tailed Tests

Suppose we have a null hypothesis H 0 and an alternative hypothesis H 1 . We consider the distribution given by the null hypothesis and perform a test to determine whether or not the null hypothesis should be rejected in favour of the alternative hypothesis.

There are two different types of tests that can be performed. A one-tailed test looks for an increase or decrease in the parameter, whereas a two-tailed test looks for any change in the parameter (either an increase or a decrease).

We can perform the test at any level (usually 1%, 5% or 10%). For example, performing the test at a 5% level means that there is a 5% chance of wrongly rejecting H 0 .

If we perform the test at the 5% level and decide to reject the null hypothesis, we say "there is significant evidence at the 5% level to suggest the hypothesis is false".

One-Tailed Test

We choose a critical region. In a one-tailed test, the critical region will have just one part (the red area below). If our sample value lies in this region, we reject the null hypothesis in favour of the alternative.

Suppose we are looking for a definite decrease. Then the critical region will be to the left. Note, however, that in the one-tailed test the value of the parameter can be as high as you like.

Suppose we are given that X has a Poisson distribution and we want to carry out a hypothesis test on the mean, λ, based upon a sample observation of 3.

Suppose the hypotheses are: H 0 : λ = 9;  H 1 : λ < 9

We want to test if it is "reasonable" for the observed value of 3 to have come from a Poisson distribution with parameter 9. So what is the probability that a value as low as 3 has come from a Po(9)?

P(X ≤ 3) = 0.0212 (this has come from a Poisson table)

The probability is less than 0.05, so there is less than a 5% chance that a value this low would have come from a Po(9) distribution. We therefore reject the null hypothesis in favour of the alternative at the 5% level.

However, the probability is greater than 0.01, so we would not reject the null hypothesis in favour of the alternative at the 1% level.
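The tail probability used here can also be computed directly rather than read from a Poisson table. A minimal sketch, assuming scipy is available:

```python
# Sketch: probability of observing a value as low as 3 from a Poisson distribution with mean 9.
from scipy.stats import poisson

p_lower = poisson.cdf(3, mu=9)   # P(X <= 3) when X ~ Po(9)
print(round(p_lower, 4))         # about 0.0212: reject H0 at the 5% level, but not at the 1% level
```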

Two-Tailed Test

In a two-tailed test, we are looking for either an increase or a decrease. So, for example, H 0 might be that the mean is equal to 9 (as before). This time, however, H 1 would be that the mean is not equal to 9. In this case, therefore, the critical region has two parts:

Let's test the parameter p of a Binomial distribution at the 10% level.

Suppose a coin is tossed 10 times and we get 7 heads. We want to test whether or not the coin is fair. If the coin is fair, p = 0.5 . Put this as the null hypothesis:

H 0 : p = 0.5;  H 1 : p ≠ 0.5

Now, because the test is 2-tailed, the critical region has two parts. Half of the critical region is to the right and half is to the left. So the critical region contains both the top 5% of the distribution and the bottom 5% of the distribution (since we are testing at the 10% level).

If H 0 is true, X ~ Bin(10, 0.5).

If the null hypothesis is true, what is the probability that X is 7 or above? P(X ≥ 7) = 1 - P(X ≤ 6) = 1 - 0.8281 = 0.1719

Is this in the critical region? No, because the probability that X is at least 7 is not less than 0.05 (5%), which is what we need it to be.

So there is not significant evidence at the 10% level to reject the null hypothesis.
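The binomial tail probability in this example can be checked in software as well. A minimal sketch, assuming scipy is available:

```python
# Sketch: upper-tail probability of 7 or more heads in 10 tosses of a fair coin.
from scipy.stats import binom

p_upper = binom.sf(6, n=10, p=0.5)   # P(X >= 7) = 1 - P(X <= 6)
print(round(p_upper, 4))             # about 0.1719, well above the 0.05 required in this tail
```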


Hypothesis Testing | A Step-by-Step Guide with Easy Examples

Published on November 8, 2019 by Rebecca Bevans . Revised on June 22, 2023.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics . It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

  • State your research hypothesis as a null hypothesis (H 0 ) and an alternate hypothesis (H a or H 1 ).
  • Collect data in a way designed to test the hypothesis.
  • Perform an appropriate statistical test .
  • Decide whether to reject or fail to reject your null hypothesis.
  • Present the findings in your results and discussion section.

Though the specific details might vary, the procedure you will use when testing a hypothesis will always follow some version of these steps.


After developing your initial research hypothesis (the prediction that you want to investigate), it is important to restate it as a null (H o ) and alternate (H a ) hypothesis so that you can test it mathematically.

The alternate hypothesis is usually your initial hypothesis that predicts a relationship between variables. The null hypothesis is a prediction of no relationship between the variables you are interested in.

  • H 0 : Men are, on average, not taller than women. H a : Men are, on average, taller than women.


For a statistical test to be valid , it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in.

There are a variety of statistical tests available, but they are all based on the comparison of within-group variance (how spread out the data is within a category) versus between-group variance (how different the categories are from one another).

If the between-group variance is large enough that there is little or no overlap between groups, then your statistical test will reflect that by showing a low p -value . This means it is unlikely that the differences between these groups came about by chance.

Alternatively, if there is high within-group variance and low between-group variance, then your statistical test will reflect that with a high p -value. This means it is likely that any difference you measure between groups is due to chance.

Your choice of statistical test will be based on the type of variables and the level of measurement of your collected data .

Your statistical test will give you:

  • an estimate of the difference in average height between the two groups.
  • a p -value showing how likely you are to see this difference if the null hypothesis of no difference is true.

Based on the outcome of your statistical test, you will have to decide whether to reject or fail to reject your null hypothesis.

In most cases you will use the p -value generated by your statistical test to guide your decision. And in most cases, your predetermined level of significance for rejecting the null hypothesis will be 0.05 – that is, when there is a less than 5% chance that you would see these results if the null hypothesis were true.

In some cases, researchers choose a more conservative level of significance, such as 0.01 (1%). This minimizes the risk of incorrectly rejecting the null hypothesis ( Type I error ).

The results of hypothesis testing will be presented in the results and discussion sections of your research paper , dissertation or thesis .

In the results section you should give a brief summary of the data and a summary of the results of your statistical test (for example, the estimated difference between group means and associated p -value). In the discussion , you can discuss whether your initial hypothesis was supported by your results or not.

In the formal language of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You will probably be asked to do this in your statistics assignments.

However, when presenting research results in academic papers we rarely talk this way. Instead, we go back to our alternate hypothesis (in this case, the hypothesis that men are on average taller than women) and state whether the result of our test did or did not support the alternate hypothesis.

If your null hypothesis was rejected, this result is interpreted as "supported the alternate hypothesis." If you failed to reject the null hypothesis, the result is interpreted as "did not support the alternate hypothesis."

These are superficial differences; you can see that they mean the same thing.

You might notice that we don’t say that we reject or fail to reject the alternate hypothesis . This is because hypothesis testing is not designed to prove or disprove anything. It is only designed to test whether a pattern we measure could have arisen spuriously, or by chance.

If we reject the null hypothesis based on our research (i.e., we find that it is unlikely that the pattern arose by chance), then we can say our test lends support to our hypothesis . But if the pattern does not pass our decision rule, meaning that it could have arisen by chance, then we say the test is inconsistent with our hypothesis .

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.



11.4: One- and Two-Tailed Tests


Learning Objectives

  • Define Type I and Type II errors
  • Interpret significant and non-significant differences
  • Explain why the null hypothesis should not be accepted when the effect is not significant

In the James Bond case study, Mr. Bond was given \(16\) trials on which he judged whether a martini had been shaken or stirred. He was correct on \(13\) of the trials. From the binomial distribution, we know that the probability of being correct \(13\) or more times out of \(16\) if one is only guessing is \(0.0106\). Figure \(\PageIndex{1}\) shows a graph of the binomial distribution. The red bars show the values greater than or equal to \(13\). As you can see in the figure, the probabilities are calculated for the upper tail of the distribution. A probability calculated in only one tail of the distribution is called a "one-tailed probability."


A slightly different question can be asked of the data: "What is the probability of getting a result as extreme or more extreme than the one observed?" Since the chance expectation is \(8/16\), a result of \(3/16\) is equally as extreme as \(13/16\). Thus, to calculate this probability, we would consider both tails of the distribution. Since the binomial distribution is symmetric when \(\pi =0.5\), this probability is exactly double the probability of \(0.0106\) computed previously. Therefore, \(p = 0.0212\). A probability calculated in both tails of a distribution is called a "two-tailed probability" (see Figure \(\PageIndex{2}\)).

Should the one-tailed or the two-tailed probability be used to assess Mr. Bond's performance? That depends on the way the question is posed. If we are asking whether Mr. Bond can tell the difference between shaken or stirred martinis, then we would conclude he could if he performed either much better than chance or much worse than chance. If he performed much worse than chance, we would conclude that he can tell the difference, but he does not know which is which. Therefore, since we are going to reject the null hypothesis if Mr. Bond does either very well or very poorly, we will use a two-tailed probability.

On the other hand, if our question is whether Mr. Bond is better than chance at determining whether a martini is shaken or stirred, we would use a one-tailed probability. What would the one-tailed probability be if Mr. Bond were correct on only \(3\) of the \(16\) trials? Since the one-tailed probability is the probability of the right-hand tail, it would be the probability of getting \(3\) or more correct out of \(16\). This is a very high probability and the null hypothesis would not be rejected.

The null hypothesis for the two-tailed test is \(\pi =0.5\). By contrast, the null hypothesis for the one-tailed test is \(\pi \leq 0.5\). Accordingly, we reject the two-tailed hypothesis if the sample proportion deviates greatly from \(0.5\) in either direction. The one-tailed hypothesis is rejected only if the sample proportion is much greater than \(0.5\). The alternative hypothesis in the two-tailed test is \(\pi \neq 0.5\). In the one-tailed test it is \(\pi > 0.5\).
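Both probabilities quoted for Mr. Bond can be reproduced from the binomial distribution. A minimal sketch, assuming scipy is available:

```python
# Sketch: one- and two-tailed probabilities for 13 correct calls out of 16 under pure guessing.
from scipy.stats import binom

p_one_tailed = binom.sf(12, n=16, p=0.5)   # P(X >= 13), about 0.0106
p_two_tailed = 2 * p_one_tailed            # distribution is symmetric when p = 0.5, about 0.0212
print(round(p_one_tailed, 4), round(p_two_tailed, 4))
```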

You should always decide whether you are going to use a one-tailed or a two-tailed probability before looking at the data. Statistical tests that compute one-tailed probabilities are called one-tailed tests; those that compute two-tailed probabilities are called two-tailed tests. Two-tailed tests are much more common than one-tailed tests in scientific research because an outcome signifying that something other than chance is operating is usually worth noting. One-tailed tests are appropriate when it is not important to distinguish between no effect and an effect in the unexpected direction. For example, consider an experiment designed to test the efficacy of a treatment for the common cold. The researcher would only be interested in whether the treatment was better than a placebo control. It would not be worth distinguishing between the case in which the treatment was worse than a placebo and the case in which it was the same because in both cases the drug would be worthless.

Some have argued that a one-tailed test is justified whenever the researcher predicts the direction of an effect. The problem with this argument is that if the effect comes out strongly in the non-predicted direction, the researcher is not justified in concluding that the effect is not zero. Since this is unrealistic, one-tailed tests are usually viewed skeptically if justified on this basis alone.


Difference Between One-tailed and Two-tailed Test


To test a hypothesis, a test statistic is required, which follows a known distribution. In a test, the probability density curve is divided into two regions: the region of acceptance and the region of rejection. The region of rejection is called the critical region.

In the field of research and experiments, it pays to know the difference between one-tailed and two-tailed tests, as they are quite commonly used in the process.

Comparison Chart

Basis of Comparison | One-tailed Test | Two-tailed Test
Meaning | A statistical hypothesis test in which the alternative hypothesis has only one end is known as a one-tailed test. | A significance test in which the alternative hypothesis has two ends is called a two-tailed test.
Hypothesis | Directional | Non-directional
Region of rejection | Either left or right | Both left and right
Determines | Whether there is a relationship between variables in a single direction. | Whether there is a relationship between variables in either direction.
Result | Greater or less than a certain value. | Greater or less than a certain range of values.
Sign in alternative hypothesis | > or < | ≠

Definition of One-tailed Test

One-tailed test alludes to the significance test in which the region of rejection appears on one end of the sampling distribution. It indicates that the estimated test parameter is greater or less than the critical value. When the sample tested falls in the region of rejection, i.e. either the left or the right side as the case may be, it leads to the acceptance of the alternative hypothesis rather than the null hypothesis. It is primarily applied in the chi-square test, which ascertains the goodness of fit.

In this statistical hypothesis test, all of the critical region, corresponding to α, is placed in one of the two tails. A one-tailed test can be:

  • Left-tailed test : When the population parameter is believed to be lower than the assumed one, the hypothesis test carried out is the left-tailed test.
  • Right-tailed test : When the population parameter is supposed to be greater than the assumed one, the statistical test conducted is a right-tailed test.

Definition of Two-tailed Test

The two-tailed test is described as a hypothesis test in which the region of rejection, or the critical area, is on both ends of the distribution. It determines whether the sample tested falls within or outside a certain range of values. Therefore, the alternative hypothesis is accepted in place of the null hypothesis if the calculated value falls in either of the two tails of the probability distribution.

In this test, α is bifurcated into two equal parts, placing half on each side, i.e. it considers the possibility of both a positive and a negative effect. It is performed to see whether the estimated parameter is above or below the assumed parameter, so extreme values in either direction work as evidence against the null hypothesis.

Key Differences Between One-tailed and Two-tailed Test

The fundamental differences between the one-tailed and two-tailed test are explained below in points:

  • One-tailed test, as the name suggests, is the statistical hypothesis test in which the alternative hypothesis has a single end. On the other hand, the two-tailed test implies the hypothesis test wherein the alternative hypothesis has dual ends.
  • In the one-tailed test, the alternative hypothesis is represented directionally. Conversely, the two-tailed test is a non-directional hypothesis test.
  • In a one-tailed test, the region of rejection is either on the left or the right of the sampling distribution. On the contrary, in a two-tailed test the region of rejection is on both sides of the sampling distribution.
  • A one-tailed test is used to ascertain whether there is any relationship between variables in a single direction, i.e. left or right. As against this, the two-tailed test is used to identify whether or not there is any relationship between variables in either direction.
  • In a one-tailed test, the calculated test statistic is compared with a single critical value, to see whether it is more or less than that value. In a two-tailed test, the result is judged by whether it falls within or outside the range bounded by two critical values.
  • When the alternative hypothesis has a '≠' sign, a two-tailed test is performed. In contrast, when the alternative hypothesis has a '>' or '<' sign, a one-tailed test is carried out.

To sum up, we can say that the basic difference between one-tailed and two-tailed tests lies in direction: if the research hypothesis entails the direction of the relationship or difference, then a one-tailed test is applied, but if the research hypothesis does not specify the direction, we use a two-tailed test.



An Introduction to Statistics: Understanding Hypothesis Testing and Statistical Errors

Priya Ranganathan

1 Department of Anesthesiology, Critical Care and Pain, Tata Memorial Hospital, Mumbai, Maharashtra, India

2 Department of Surgical Oncology, Tata Memorial Centre, Mumbai, Maharashtra, India

The second article in this series on biostatistics covers the concepts of sample, population, research hypotheses and statistical errors.

How to cite this article

Ranganathan P, Pramesh CS. An Introduction to Statistics: Understanding Hypothesis Testing and Statistical Errors. Indian J Crit Care Med 2019;23(Suppl 3):S230–S231.

Two papers quoted in this issue of the Indian Journal of Critical Care Medicine report the results of studies that aim to prove that a new intervention is better than (superior to) an existing treatment. In the ABLE study, the investigators wanted to show that transfusion of fresh red blood cells would be superior to standard-issue red cells in reducing 90-day mortality in ICU patients. 1 The PROPPR study was designed to prove that transfusion of a lower ratio of plasma and platelets to red cells would be superior to a higher ratio in decreasing 24-hour and 30-day mortality in critically ill patients. 2 These studies are known as superiority studies (as opposed to noninferiority or equivalence studies which will be discussed in a subsequent article).

SAMPLE VERSUS POPULATION

A sample represents a group of participants selected from the entire population. Since studies cannot be carried out on entire populations, researchers choose samples, which are representative of the population. This is similar to walking into a grocery store and examining a few grains of rice or wheat before purchasing an entire bag; we assume that the few grains that we select (the sample) are representative of the entire sack of grains (the population).

The results of the study are then extrapolated to generate inferences about the population. We do this using a process known as hypothesis testing. This means that the results of the study may not always be identical to the results we would expect to find in the population; i.e., there is the possibility that the study results may be erroneous.

HYPOTHESIS TESTING

A clinical trial begins with an assumption or belief, and then proceeds to either prove or disprove this assumption. In statistical terms, this belief or assumption is known as a hypothesis. Counterintuitively, what the researcher believes in (or is trying to prove) is called the “alternate” hypothesis, and the opposite is called the “null” hypothesis; every study has a null hypothesis and an alternate hypothesis. For superiority studies, the alternate hypothesis states that one treatment (usually the new or experimental treatment) is superior to the other; the null hypothesis states that there is no difference between the treatments (the treatments are equal). For example, in the ABLE study, we start by stating the null hypothesis—there is no difference in mortality between groups receiving fresh RBCs and standard-issue RBCs. We then state the alternate hypothesis—There is a difference between groups receiving fresh RBCs and standard-issue RBCs. It is important to note that we have stated that the groups are different, without specifying which group will be better than the other. This is known as a two-tailed hypothesis and it allows us to test for superiority on either side (using a two-sided test). This is because, when we start a study, we are not 100% certain that the new treatment can only be better than the standard treatment—it could be worse, and if it is so, the study should pick it up as well. One tailed hypothesis and one-sided statistical testing is done for non-inferiority studies, which will be discussed in a subsequent paper in this series.

STATISTICAL ERRORS

There are two possibilities to consider when interpreting the results of a superiority study. The first possibility is that there is truly no difference between the treatments but the study finds that they are different. This is called a Type-1 error or false-positive error or alpha error. This means falsely rejecting the null hypothesis.

The second possibility is that there is a difference between the treatments and the study does not pick up this difference. This is called a Type 2 error or false-negative error or beta error. This means falsely accepting the null hypothesis.

The power of the study is the ability to detect a difference between groups and is the converse of the beta error; i.e., power = 1-beta error. Alpha and beta errors are finalized when the protocol is written and form the basis for sample size calculation for the study. In an ideal world, we would not like any error in the results of our study; however, we would need to do the study in the entire population (infinite sample size) to be able to get a 0% alpha and beta error. These two errors enable us to do studies with realistic sample sizes, with the compromise that there is a small possibility that the results may not always reflect the truth. The basis for this will be discussed in a subsequent paper in this series dealing with sample size calculation.

Conventionally, type 1 or alpha error is set at 5%. This means, that at the end of the study, if there is a difference between groups, we want to be 95% certain that this is a true difference and allow only a 5% probability that this difference has occurred by chance (false positive). Type 2 or beta error is usually set between 10% and 20%; therefore, the power of the study is 90% or 80%. This means that if there is a difference between groups, we want to be 80% (or 90%) certain that the study will detect that difference. For example, in the ABLE study, sample size was calculated with a type 1 error of 5% (two-sided) and power of 90% (type 2 error of 10%) (1).
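The link between sample size, alpha, and power can also be explored by simulation. The sketch below uses hypothetical event rates (30% versus 25% mortality) and hypothetical group sizes, not the actual ABLE or PROPPR design values, and assumes numpy and scipy are available:

```python
# Sketch: simulation-based power estimate for a two-sided comparison of two mortality proportions.
# Event rates and sample sizes are hypothetical, chosen only to illustrate the idea.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p_control, p_treatment, alpha, sims = 0.30, 0.25, 0.05, 5_000

for n_per_group in (500, 1000, 2000):
    x1 = rng.binomial(n_per_group, p_control, size=sims)
    x2 = rng.binomial(n_per_group, p_treatment, size=sims)
    p1, p2 = x1 / n_per_group, x2 / n_per_group
    pooled = (x1 + x2) / (2 * n_per_group)
    se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
    z = (p1 - p2) / se
    power = np.mean(2 * norm.sf(np.abs(z)) < alpha)   # share of simulated trials that reject H0
    print(n_per_group, round(power, 2))               # power (1 - beta) grows with sample size
```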

Table 1 gives a summary of the two types of statistical errors with an example.

Statistical errors

(a) Types of statistical errors

                                     Study concludes H 0 is true                              Study concludes H 0 is false
Null hypothesis is actually true     Correct results!                                         Falsely rejecting the null hypothesis - Type I error
Null hypothesis is actually false    Falsely accepting the null hypothesis - Type II error    Correct results!

(b) Possible statistical errors in the ABLE trial

                                                                   Study finds no difference in mortality between groups receiving fresh RBCs and standard-issue RBCs    Study finds a difference in mortality between the groups
Truth: there is no difference in mortality between the groups     Correct results!                                         Falsely rejecting the null hypothesis - Type I error
Truth: there is a difference in mortality between the groups      Falsely accepting the null hypothesis - Type II error    Correct results!

In the next article in this series, we will look at the meaning and interpretation of ‘ p ’ value and confidence intervals for hypothesis testing.



Difference Between One-Tailed and Two-Tailed Tests


One-tailed and two-tailed tests are two ways of framing the relationship between statistical variables. A one-tailed test checks for an effect in a single direction (either an increase or a decrease), while a two-tailed test checks for an effect in either direction.

One-Tailed Test

A one-tailed test is based on a uni-directional hypothesis, where the area of rejection lies on only one side of the sampling distribution. It determines whether the population parameter is greater than (or, alternatively, less than) a predefined value, and it uses a single critical value to test the data.


Null Hypothesis (H0): θ = θ0

Alternative Hypothesis (H1): θ > θ0 (right-tailed) or θ < θ0 (left-tailed)

Test Statistic: Depending on the type of test and the distribution, the test statistic is computed ( Z -score for normal distribution).

Decision Rule: If the test statistic falls in the critical region, reject the null hypothesis in favor of the alternative hypothesis.

Example: Effect of students' participation in a coding competition on their fear level.

  • H0: Participation in the coding competition has no significant effect on students' fear level.
  • H1: Participation in the coding competition decreases students' fear level.

The main intention is to check for a decrease in fear level when students participate in the coding competition, so the rejection region lies in the left tail.
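
As a rough sketch, here is how such a left-tailed (one-tailed) test might look in Python with SciPy; the fear-level scores and the baseline mean of 6 are purely illustrative assumptions:

from scipy.stats import ttest_1samp

# Illustrative fear-level scores (0-10 scale) measured after the competition
fear_after = [4, 5, 3, 6, 4, 5, 2, 4, 5, 3]
baseline_mean = 6  # assumed average fear level without the competition

# Left-tailed test: H1 is that the mean fear level is LESS than the baseline
t_stat, p_value = ttest_1samp(fear_after, popmean=baseline_mean, alternative='less')
print(f"t = {t_stat:.3f}, one-tailed p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: fear levels appear lower after the competition.")
else:
    print("Fail to reject H0.")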

Two-Tailed Test

A two-tailed test is also called a nondirectional test. It checks whether the sample statistic is significantly greater than or significantly less than the hypothesized value, without specifying a direction in advance, and it is commonly the default form of null hypothesis testing.

Null Hypothesis (H0): θ = θ0

Alternative Hypothesis (H1): θ ≠ θ0

Test Statistic: Compute the test statistic as appropriate for the distribution ( Z -score for normal distribution).

Decision Rule: If the test statistic falls in either tail of the distribution’s critical region, reject the null hypothesis in favor of the alternative hypothesis.

Example: Effect of a new bill on farmers' loans.

  • H0: The new bill has no significant effect on farmers' loans.
  • H1: The new bill has a significant effect on farmers' loans (in either direction).

The new bill could either increase or decrease farmers' loans, so a two-tailed test is appropriate.
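
A matching two-tailed sketch in Python; the loan amounts and the historical mean of 50 are illustrative assumptions, not data from any real study:

from scipy.stats import ttest_1samp

# Illustrative post-bill loan amounts (in thousands) for a sample of farmers
loans_after_bill = [48, 53, 55, 47, 52, 58, 49, 51, 54, 50]
historical_mean = 50  # assumed average loan amount before the bill

# Two-tailed test (the default): H1 is that the mean has changed in either direction
t_stat, p_value = ttest_1samp(loans_after_bill, popmean=historical_mean)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: the bill appears to have changed average loan amounts.")
else:
    print("Fail to reject H0: no significant change detected.")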

Difference Between One and Two-Tailed Test:

  • One-tailed test: the alternative hypothesis is directional (right-tailed or left-tailed). Two-tailed test: the alternative hypothesis is non-directional.
  • For a one-tailed test, the alternative hypothesis uses either the > or the < sign. For a two-tailed test, the alternative hypothesis uses the ≠ sign.
  • When the alternative hypothesis specifies a direction, we use a one-tailed test. If no direction is given, we use a two-tailed test.
  • In a one-tailed test, the critical region lies entirely on one side (right or left) of the sampling distribution. In a two-tailed test, the critical region is given by the portions of the area lying in both tails of the probability curve of the test statistic.
  • In a one-tailed test, the entire alpha (e.g., 5%) is placed in a single tail (left or right). A two-tailed test splits alpha in half between the two tails.
  • The rejection region is on either the left or the right side of the sampling distribution for a one-tailed test, and on both sides for a two-tailed test.
  • A one-tailed test checks the relationship between variables in a single direction. A two-tailed test checks the relationship in either direction.
  • A one-tailed test is used to check whether one mean is greater (or less) than another mean. A two-tailed test is used to check whether two means differ from one another in either direction.


Left Tailed Test or Right Tailed Test ? How to Decide


In a hypothesis test , you have to decide if a claim is true or not. Before you can figure out if you have a left tailed test or right tailed test, you have to make sure you have a single tail to begin with. A tail in hypothesis testing refers to the tail at either end of a distribution curve.


Basic Hypothesis Testing Steps


  • Decide if you have a one-tailed test or a two-tailed test ( How to decide if a hypothesis test is a one-tailed test or a two-tailed test ). If you have a two-tailed test, you don’t need to worry about whether it’s a left tailed or right tailed test (because it’s both!).
  • Find out if it’s a left tailed test or right tailed test (see below).

If you can sketch a graph, you can figure out which tail is in your test.

Example question: You are testing the hypothesis that the drop out rate is more than 75% (>75%). Is this a left-tailed test or a right-tailed test?

Step 1: Write your null hypothesis statement and your alternate hypothesis statement. This step is key to drawing the right graph, so if you aren’t sure about writing a hypothesis statement, see: How to State the Null Hypothesis.

Step 2: Draw a normal distribution curve.

Step 3: Shade in the related area under the normal distribution curve. The area under the curve represents 100%, so shade the area accordingly. The number line runs from left to right, so lower percentages sit on the left and the region greater than 75% falls in the right tail.


The shaded area in the sketch represents the region greater than 75%. Left tailed test or right tailed test? You can clearly see that it is a right-tailed test, because the shaded area is on the right.

That’s it!

Hypothesis tests can be three different types:

  • Right tailed test.
  • Left tailed test.
  • Two tailed test .

The right tailed test and the left tailed test are examples of one-tailed tests . They are called “one tailed” tests because the rejection region (the area where you would reject the null hypothesis ) is only in one tail. The two tailed test is called a two tailed test because the rejection region can be in either tail.


What is a Right Tailed Test?

A right tailed test (sometimes called an upper test) is where your hypothesis statement contains a greater than (>) symbol. In other words, the inequality points to the right. For example, you might be comparing the life of batteries before and after a manufacturing change. If you want to know if the battery life is greater than the original (let's say 90 hours), your hypothesis statements might be: Null hypothesis: no change or a decrease (H0: μ ≤ 90). Alternate hypothesis: battery life has increased (H1: μ > 90).

The important factor here is that the alternate hypothesis (H 1 ) determines if you have a right tailed test, not the null hypothesis .

Right Tailed Test Example.

A high-end computer manufacturer sets the retail cost of their computers based on the manufacturing cost, which is $1800. However, the company thinks there are hidden costs and that the average cost to manufacture the computers is actually much more. The company randomly selects 40 computers from its facilities and finds that the mean cost to produce a computer is $1950 with a standard deviation of $500. Run a hypothesis test to see if this thought is true.

Step 1: Write your hypothesis statements (see: How to state the null hypothesis): H0: μ ≤ 1800; H1: μ > 1800.

Step 2: Compute the test statistic: z = (x̄ − μ) / (s / √n) = (1950 − 1800) / (500 / √40) ≈ 1.897.

Step 3: Choose an alpha level . No alpha is mentioned in the question, so use the standard (0.05). 1 – 0.05 = .95 Look up that value (.95) in the middle of the z-table. The area corresponds to a z-value of 1.645. That means you would reject the null hypothesis if your test statistic is greater than 1.645.*

1.897 is greater than 1.645, so you can reject the null hypothesis .

* Not sure how I got 1.645? The left hand half of the curve is 50%, so you look up 45% in the “right of the mean” table on this site (50% + 45% = 95%).


This z-table shows the area to the right of the mean , so you’re actually looking up .45, not .95. That’s because half of the area (.5) is not actually showing on the table.
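
For readers who prefer code to tables, here is a short sketch of the same calculation in Python with SciPy, reusing the numbers from the example above:

from math import sqrt
from scipy.stats import norm

sample_mean, hypothesized_mean = 1950, 1800
sample_sd, n = 500, 40
alpha = 0.05

# Right-tailed z test statistic
z = (sample_mean - hypothesized_mean) / (sample_sd / sqrt(n))

critical_value = norm.ppf(1 - alpha)  # about 1.645
p_value = norm.sf(z)                  # area to the right of z

print(f"z = {z:.3f}, critical value = {critical_value:.3f}, p = {p_value:.4f}")
if z > critical_value:
    print("Reject H0: the mean manufacturing cost appears to exceed $1800.")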



P-Value And Statistical Significance: What It Is & Why It Matters


The p-value in statistics quantifies the evidence against a null hypothesis. A low p-value suggests data is inconsistent with the null, potentially favoring an alternative hypothesis. Common significance thresholds are 0.05 or 0.01.


Hypothesis testing

When you perform a statistical test, a p-value helps you determine the significance of your results in relation to the null hypothesis.

The null hypothesis (H0) states no relationship exists between the two variables being studied (one variable does not affect the other). It states the results are due to chance and are not significant in supporting the idea being investigated. Thus, the null hypothesis assumes that whatever you try to prove did not happen.

The alternative hypothesis (Ha or H1) is the one you would believe if the null hypothesis is concluded to be untrue.

The alternative hypothesis states that the independent variable affected the dependent variable, and the results are significant in supporting the theory being investigated (i.e., the results are not due to random chance).

What a p-value tells you

A p-value, or probability value, is a number describing how likely it is that your data would have occurred by random chance (i.e., that the null hypothesis is true).

The level of statistical significance is often expressed as a p-value between 0 and 1.

The smaller the p -value, the less likely the results occurred by random chance, and the stronger the evidence that you should reject the null hypothesis.

Remember, a p-value doesn’t tell you if the null hypothesis is true or false. It just tells you how likely you’d see the data you observed (or more extreme data) if the null hypothesis was true. It’s a piece of evidence, not a definitive proof.

Example: Test Statistic and p-Value

Suppose you’re conducting a study to determine whether a new drug has an effect on pain relief compared to a placebo. If the new drug has no impact, your test statistic will be close to the one predicted by the null hypothesis (no difference between the drug and placebo groups), and the resulting p-value will be close to 1. It may not be precisely 1 because real-world variations may exist. Conversely, if the new drug indeed reduces pain significantly, your test statistic will diverge further from what’s expected under the null hypothesis, and the p-value will decrease. The p-value will never reach zero because there’s always a slim possibility, though highly improbable, that the observed results occurred by random chance.

P-value interpretation

The significance level (alpha) is a set probability threshold (often 0.05), while the p-value is the probability you calculate based on your study or analysis.

A p-value less than or equal to your significance level (typically ≤ 0.05) is statistically significant.

A p-value less than or equal to a predetermined significance level (often 0.05 or 0.01) indicates a statistically significant result, meaning the observed data provide strong evidence against the null hypothesis.

This suggests the effect under study likely represents a real relationship rather than just random chance.

For instance, if you set α = 0.05, you would reject the null hypothesis if your p -value ≤ 0.05. 

It indicates strong evidence against the null hypothesis, as there would be less than a 5% probability of seeing results this extreme if the null hypothesis were correct (i.e., if the results were due to random chance alone).

Therefore, we reject the null hypothesis and accept the alternative hypothesis.

Example: Statistical Significance

Upon analyzing the pain relief effects of the new drug compared to the placebo, the computed p-value is less than 0.01, which falls well below the predetermined alpha value of 0.05. Consequently, you conclude that there is a statistically significant difference in pain relief between the new drug and the placebo.

What does a p-value of 0.001 mean?

A p-value of 0.001 is highly statistically significant beyond the commonly used 0.05 threshold. It indicates strong evidence of a real effect or difference, rather than just random variation.

Specifically, a p-value of 0.001 means there is only a 0.1% chance of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is correct.

Such a small p-value provides strong evidence against the null hypothesis, leading to rejecting the null in favor of the alternative hypothesis.

A p-value greater than the significance level (typically p > 0.05) is not statistically significant and means the data do not provide sufficient evidence against the null hypothesis.

This means we fail to reject the null hypothesis and do not accept the alternative hypothesis. You should note that you cannot accept the null hypothesis; we can only reject it or fail to reject it.

Note : when the p-value is above your threshold of significance,  it does not mean that there is a 95% probability that the alternative hypothesis is true.


How do you calculate the p-value ?

Most statistical software packages like R, SPSS, and others automatically calculate your p-value. This is the easiest and most common way.

Online resources and tables are available to estimate the p-value based on your test statistic and degrees of freedom.

These tables help you understand how often you would expect to see your test statistic under the null hypothesis.
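
As an illustration, a p-value can also be computed directly from a test statistic in Python with SciPy; the t statistic and degrees of freedom below are made-up values, not results from the examples in this article:

from scipy.stats import t, norm

t_stat, df = 2.3, 24  # hypothetical t statistic and degrees of freedom

p_one_tailed = t.sf(t_stat, df)           # area in one tail beyond the statistic
p_two_tailed = 2 * t.sf(abs(t_stat), df)  # both tails, for a two-sided test
print(f"one-tailed p = {p_one_tailed:.4f}, two-tailed p = {p_two_tailed:.4f}")

# The same idea works for a z statistic using the standard normal distribution
z_stat = 1.96
print(f"two-tailed p for z = {z_stat}: {2 * norm.sf(abs(z_stat)):.4f}")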

Understanding the Statistical Test:

Different statistical tests are designed to answer specific research questions or hypotheses. Each test has its own underlying assumptions and characteristics.

For example, you might use a t-test to compare means, a chi-squared test for categorical data, or a correlation test to measure the strength of a relationship between variables.

Be aware that the number of independent variables you include in your analysis can influence the magnitude of the test statistic needed to produce the same p-value.

This factor is particularly important to consider when comparing results across different analyses.

Example: Choosing a Statistical Test

If you’re comparing the effectiveness of just two different drugs in pain relief, a two-sample t-test is a suitable choice for comparing these two groups. However, when you’re examining the impact of three or more drugs, it’s more appropriate to employ an Analysis of Variance ( ANOVA) . Utilizing multiple pairwise comparisons in such cases can lead to artificially low p-values and an overestimation of the significance of differences between the drug groups.

How to report

A statistically significant result cannot prove that a research hypothesis is correct (which implies 100% certainty).

Instead, we may state our results “provide support for” or “give evidence for” our research hypothesis (as there is still a slight probability that the results occurred by chance and the null hypothesis was correct – e.g., less than 5%).

Example: Reporting the results

In our comparison of the pain relief effects of the new drug and the placebo, we observed that participants in the drug group experienced a significant reduction in pain ( M = 3.5; SD = 0.8) compared to those in the placebo group ( M = 5.2; SD  = 0.7), resulting in an average difference of 1.7 points on the pain scale (t(98) = -9.36; p < 0.001).

The 6th edition of the APA style manual (American Psychological Association, 2010) states the following on the topic of reporting p-values:

“When reporting p values, report exact p values (e.g., p = .031) to two or three decimal places. However, report p values less than .001 as p < .001.

The tradition of reporting p values in the form p < .10, p < .05, p < .01, and so forth, was appropriate in a time when only limited tables of critical values were available.” (p. 114)

  • Do not use 0 before the decimal point for the statistical value p as it cannot be greater than 1. In other words, write p = .001 instead of p = 0.001.
  • Please pay attention to issues of italics ( p is always italicized) and spacing (either side of the = sign).
  • p = .000 (as outputted by some statistical packages such as SPSS) is impossible and should be written as p < .001.
  • The opposite of significant is “nonsignificant,” not “insignificant.”

Why is the p -value not enough?

A lower p-value  is sometimes interpreted as meaning there is a stronger relationship between two variables.

However, statistical significance only means that the observed data would be unlikely (e.g., less than a 5% chance) if the null hypothesis were true; it says nothing by itself about the size of the effect.

To understand the strength of the difference between the two groups (control vs. experimental) a researcher needs to calculate the effect size .
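
One common effect-size measure is Cohen's d, the difference between group means divided by the pooled standard deviation. A minimal sketch, using the summary statistics from the drug-versus-placebo reporting example above (the group sizes of 50 are an assumption):

from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d based on the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Summary statistics: drug group vs. placebo group (pain scores); n = 50 per group is assumed
d = cohens_d(mean1=3.5, sd1=0.8, n1=50, mean2=5.2, sd2=0.7, n2=50)
print(f"Cohen's d = {d:.2f}")  # a large negative value: the drug group reports much less pain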

When do you reject the null hypothesis?

In statistical hypothesis testing, you reject the null hypothesis when the p-value is less than or equal to the significance level (α) you set before conducting your test. The significance level is the probability of rejecting the null hypothesis when it is true. Commonly used significance levels are 0.01, 0.05, and 0.10.

Remember, rejecting the null hypothesis doesn’t prove the alternative hypothesis; it just suggests that the alternative hypothesis may be plausible given the observed data.

The p -value is conditional upon the null hypothesis being true but is unrelated to the truth or falsity of the alternative hypothesis.

What does p-value of 0.05 mean?

If your p-value is less than or equal to 0.05 (the significance level), you would conclude that your result is statistically significant. This means the evidence is strong enough to reject the null hypothesis in favor of the alternative hypothesis.

Are all p-values below 0.05 considered statistically significant?

No, not all p-values below 0.05 are considered statistically significant. The threshold of 0.05 is commonly used, but it’s just a convention. Statistical significance depends on factors like the study design, sample size, and the magnitude of the observed effect.

A p-value below 0.05 means there is evidence against the null hypothesis, suggesting a real effect. However, it’s essential to consider the context and other factors when interpreting results.

Researchers also look at effect size and confidence intervals to determine the practical significance and reliability of findings.

How does sample size affect the interpretation of p-values?

Sample size can impact the interpretation of p-values. A larger sample size provides more reliable and precise estimates of the population, leading to narrower confidence intervals.

With a larger sample, even small differences between groups or effects can become statistically significant, yielding lower p-values. In contrast, smaller sample sizes may not have enough statistical power to detect smaller effects, resulting in higher p-values.

Therefore, a larger sample size increases the chances of finding statistically significant results when there is a genuine effect, making the findings more trustworthy and robust.

Can a non-significant p-value indicate that there is no effect or difference in the data?

No, a non-significant p-value does not necessarily indicate that there is no effect or difference in the data. It means that the observed data do not provide strong enough evidence to reject the null hypothesis.

There could still be a real effect or difference, but it might be smaller or more variable than the study was able to detect.

Other factors like sample size, study design, and measurement precision can influence the p-value. It’s important to consider the entire body of evidence and not rely solely on p-values when interpreting research findings.

Can P values be exactly zero?

While a p-value can be extremely small, it cannot technically be exactly zero. When a p-value is reported as p = 0.000, the actual p-value is too small for the software to display. This is often interpreted as strong evidence against the null hypothesis. For p-values less than 0.001, report them as p < .001.

Further Information

  • P-values and significance tests (Khan Academy)
  • Hypothesis testing and p-values (Khan Academy)
  • Wasserstein, R. L., Schirm, A. L., & Lazar, N. A. (2019). Moving to a world beyond "p < 0.05".
  • Criticism of using the p < 0.05 threshold.
  • Publication manual of the American Psychological Association
  • Statistics for Psychology Book Download

Bland, J. M., & Altman, D. G. (1994). One and two sided tests of significance: Authors’ reply.  BMJ: British Medical Journal ,  309 (6958), 874.

Goodman, S. N., & Royall, R. (1988). Evidence and scientific research.  American Journal of Public Health ,  78 (12), 1568-1574.

Goodman, S. (2008, July). A dirty dozen: twelve p-value misconceptions . In  Seminars in hematology  (Vol. 45, No. 3, pp. 135-140). WB Saunders.

Lang, J. M., Rothman, K. J., & Cann, C. I. (1998). That confounded P-value.  Epidemiology (Cambridge, Mass.) ,  9 (1), 7-8.




How to Perform T-Tests in Python (One- and Two-Sample)



In this post, you’ll learn how to perform t-tests in Python using the popular SciPy library . T-tests are used to test for statistical significance and can be hugely advantageous when working with smaller sample sizes.

By the end of this tutorial, you’ll have learned the following:

  • What the different t-tests are and when they should be applied
  • How to perform a one-sample t-test and a two-sample t-test in Python
  • How to interpret the results from your statistical tests


Understanding the T-Test

The t-test, often referred to as the Student's t-test, dates back to the early 20th century. William Sealy Gosset, an English statistician working for the Guinness Brewery in Dublin, introduced the concept. Because the brewery was working with small sample sizes and was under strict orders of confidentiality, Gosset published his findings under the pseudonym "Student". His seminal work, "The Probable Error of a Mean," laid the groundwork for what we now know as Student's t-test.

This leads us to one of the primary benefits of the t-test: the t-test is able to make reliable inferences about a population using a small sample size . Let’s explore how this works by discussing the theory behind the t-test in the following section.

Understanding the Student’s T-Test

Statistical tests are used to make inferences about population parameters. For example, a test lets us check whether or not the average test score for a given group of students is 70%. The t-test works in two different ways:

  • The one-sample t-test allows us to test whether or not the population mean is equal to some value
  • The two-sample t-test allows us to test whether or not two population means are equal

Let’s explore these in a little more depth.

Understanding the One-Sample T-Test

The one-sample t-test is used to test the null hypothesis that the population mean inferred from a sample is equal to some given value; in symbols, H0: μ = μ0.

There are actually three different alternative hypotheses:

  • Two-tailed : The population mean is not equal to some given value
  • Left-tailed : The population mean is less than some given value
  • Right-tailed : The population mean is greater than some given value

We can use the following formula to calculate our test statistic: t = (x̄ − μ0) / (s / √n), where:

  • x̄: the sample mean
  • μ0: the hypothesized population mean
  • s: the sample standard deviation
  • n: the sample size

We then need to calculate the p-value using degrees of freedom equal to n – 1. If the p-value is less than your chosen significance level, we can reject the null hypothesis and say that the means differ.

Understanding the Two-Sample T-Test

The two-sample t-test is used to test whether two population means are equal (or if they differ in a significant way). In this case, the null hypothesis assumes that the two population means are equal.

When we sample two different groups, we are almost guaranteed that their sample means will differ. But the t-test allows us to test whether or not this difference is different in a statistically significant way.

Similar to the one-sample t-test, there are three different alternative hypotheses:

  • Two-tailed : The two means are not equal
  • Left-tailed : Population mean #1 is less than population mean #2
  • Right-tailed : Population mean #1 is greater than population mean #2

A common form of the two-sample t statistic is: t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2), where:

  • x̄1 and x̄2 are the sample means of the two groups.
  • s1² and s2² are the sample variances of the two groups.
  • n1 and n2 are the sample sizes of the two groups.

We then need to calculate the p-value using degrees of freedom equal to (n1 + n2 − 2). If the p-value is less than your chosen significance level, we can reject the null hypothesis and say that the means differ.

Requirements for the Student T-Test

Both types of t-tests follow a key set of assumptions, including:

  • Observations should be independent of one another
  • The data should be relatively normally distributed
  • The samples should have approximately equal variances (this only applies to the two-sample t-test)
  • The samples were collected using random sampling

It’s easy to test for these assumptions using Python (and I have included links to tutorials covering how to do this). Let’s take a look at example walkthroughs of how to conduct both of these tests in Python.

Perform a One-Sample T-Test in Python

In this section, you’ll learn how to conduct a one-sample t-test in Python. Suppose you are a teacher and have just given a test. You know that the population mean for this test is 85% and you want to see whether the score of the class is significantly different from this population mean.

Let’s start by importing our required function, ttest_1samp() from SciPy and defining our data:
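
A minimal version of that setup might look like the following; the exam scores are illustrative values chosen so that the class average sits close to 85%:

from scipy.stats import ttest_1samp

# Illustrative exam scores for the class (percentages)
scores = [88, 83, 86, 84, 87, 85, 82, 89, 84, 86]

# The known population mean we are testing against
population_mean = 85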

In the code block above, we first imported our required library. We then defined our sample as a list of values and defined our population mean as its own variable.

We can now pass these values into the function, as shown below:
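
Continuing the sketch above:

t_stat, p_value = ttest_1samp(scores, population_mean)

print(f"t-statistic: {t_stat:.3f}")
print(f"p-value: {p_value:.3f}")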

The function returns a test statistic and the corresponding p-value. We can print these values out using f-strings to simplify the labeling , as shown above.

Finally, we can write a simple if-else statement to evaluate whether or not our sample mean is significantly different from the population mean:
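
One way this check might be written:

alpha = 0.05
if p_value < alpha:
    print("Reject the null hypothesis: the class mean differs from the population mean.")
else:
    print("Fail to reject the null hypothesis: no significant difference detected.")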

By running this if-else statement, we can see that our test indicates there is no significant difference in the exam scores.

In order to calculate the different one-sample t-test alternative hypotheses, we can use the alternative= parameter:

  • alternative='two-sided' is the default value, checking for a two-sided alternative hypothesis
  • alternative='less' checks whether the provided mean is less than the population mean
  • alternative='greater' checks whether the provided mean is greater than the population mean

Now that you have a strong understanding of how to perform a one-sample t-test, let’s dive into the exciting world of two-sample t-tests!

Perform a Two-Sample T-Test in Python

A two-sample t-test is used to test whether the means of two samples are equal. The test requires that both samples be normally distributed, have similar variances, and be independent of one another.

Imagine that we want to compare the test scores of two different classes. This is the perfect example of when to use a t-test. Let’s begin by running a two-tailed test, which only evaluates whether or not the two means are equal. It begins with the null hypothesis, which states that the two means are equal.

Let’s take a look at how we can run a two-tailed t-test in Python:
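
A minimal sketch, assuming illustrative score lists for the two classes (chosen so that class 2 clearly scores higher):

from scipy.stats import ttest_ind

# Illustrative test scores for two classes
class1_scores = [72, 75, 68, 70, 74, 71, 69, 73, 70, 72]
class2_scores = [85, 88, 84, 90, 87, 86, 89, 83, 88, 85]

t_stat, p_value = ttest_ind(class1_scores, class2_scores)
print(f"t-statistic: {t_stat:.3f}, p-value: {p_value:.5f}")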

We can see that the ttest_ind() function returns both a test statistic and a p-value. We can run a simple if-else statement to check whether or not we can reject or fail to reject the null hypothesis:
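
For example:

if p_value < 0.05:
    print("Reject the null hypothesis: the class means differ significantly.")
else:
    print("Fail to reject the null hypothesis.")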

We can see that there is a significant difference between the two sets of scores. However, the two-tailed test doesn’t tell us in which direction.

In order to do this, we need to use a right- or left-tailed two-sample t-test. To do this in SciPy, we use the alternative= parameter. By default, this is set to 'two-sided' . However, we can modify this to either 'less' or 'greater' , if we want to evaluate whether or not the mean for one sample is less than or greater than another.

Let’s see how we can check if the mean of class 2 is significantly higher than that of class 1:
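
Using the same illustrative data, a right-tailed version of the test might look like this:

# H1: the mean of class 2 is GREATER than the mean of class 1
t_stat, p_value = ttest_ind(class2_scores, class1_scores, alternative='greater')
print(f"t-statistic: {t_stat:.3f}, one-tailed p-value: {p_value:.5f}")

if p_value < 0.05:
    print("Reject the null hypothesis: class 2 scored significantly higher than class 1.")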

Because our p-value is less than our defined value of 0.05, we can say that the mean of class 2 is higher with statistical significance.

In conclusion, this comprehensive guide has equipped you with the knowledge and practical skills to perform t-tests in Python using the SciPy library. T-tests are invaluable tools for assessing statistical significance, particularly when working with smaller sample sizes.

Throughout this tutorial, you’ve gained insights into:

  • The different types of t-tests and their applications.
  • How to conduct one-sample and two-sample t-tests in Python.
  • Interpretation of results obtained from statistical tests.

Remember that t-tests come with certain assumptions, and it’s crucial to validate them before applying these tests to your data. Python provides tools to check these assumptions, ensuring the robustness and reliability of your statistical analyses.

To learn more about these functions, check out the official documentation for the one-sample t-test and for the two-sample t-test in SciPy.




One-Tailed Test Explained: Definition and Example


A one-tailed test is a statistical test in which the critical area of a distribution is one-sided so that it is either greater than or less than a certain value, but not both. If the sample being tested falls into the one-sided critical area, the alternative hypothesis will be accepted instead of the null hypothesis.

Financial analysts use the one-tailed test to test an investment or portfolio hypothesis.

Key Takeaways

  • A one-tailed test is a statistical hypothesis test set up to show that the sample mean would be higher or lower than the population mean, but not both.
  • When using a one-tailed test, the analyst is testing for the possibility of the relationship in one direction of interest and completely disregarding the possibility of a relationship in another direction.
  • Before running a one-tailed test, the analyst must set up a null and alternative hypothesis and establish a probability value (p-value).

A basic concept in inferential statistics is hypothesis testing. Hypothesis testing is run to determine whether a claim is true or not, given a population parameter. A test that is conducted to show whether the mean of the sample is significantly greater than or significantly less than the mean of a population is considered a two-tailed test. When the testing is set up to show that the sample mean would be higher or lower than the population mean, it is referred to as a one-tailed test. The one-tailed test gets its name from testing the area under one of the tails (sides) of a normal distribution, although the test can also be used in other, non-normal distributions.

Before the one-tailed test can be performed, null and alternative hypotheses must be established. A null hypothesis is a claim that the researcher hopes to reject. An alternative hypothesis is the claim supported by rejecting the null hypothesis.

A one-tailed test is also known as a directional hypothesis or directional test.

Example of the One-Tailed Test

Let's say an analyst wants to prove that a portfolio manager outperformed the S&P 500 index, which returned 16.91% in a given year. They may set up the null (H 0 ) and alternative (H a ) hypotheses as:

H 0 : μ ≤ 16.91

H a : μ > 16.91

The null hypothesis is the measurement that the analyst hopes to reject. The alternative hypothesis is the claim made by the analyst that the portfolio manager performed better than the S&P 500. If the outcome of the one-tailed test results in rejecting the null, the alternative hypothesis will be supported. On the other hand, if the outcome of the test fails to reject the null, the analyst may carry out further analysis and investigation into the portfolio manager’s performance.

The region of rejection is on only one side of the sampling distribution in a one-tailed test. To determine how the portfolio’s return on investment compares to the market index, the analyst must run an upper-tailed significance test in which extreme values fall in the upper tail (right side) of the normal distribution curve. The one-tailed test conducted in the upper or right tail area of the curve will show the analyst how much higher the portfolio return is than the index return and whether the difference is significant.

1%, 5% or 10%

The most common significance levels (alpha values) used in a one-tailed test.

Determining Significance in a One-Tailed Test

To determine how significant the difference in returns is, a significance level must be specified. The significance level is conventionally denoted by alpha (α), while p stands for the p-value, the probability calculated from the data. The level of significance is the probability of incorrectly concluding that the null hypothesis is false. The significance level used in a one-tailed test is typically 1%, 5%, or 10%, although any other probability threshold can be used at the discretion of the analyst or statistician. The p-value is calculated with the assumption that the null hypothesis is true. The lower the p-value, the stronger the evidence that the null hypothesis is false.

If the resulting p-value is less than 5%, the difference between both observations is statistically significant, and the null hypothesis is rejected. Following our example above, if the p-value = 0.03, or 3%, then the analyst can be 97% confident that the portfolio returns did not equal or fall below the return of the market for the year. They will, therefore, reject H 0  and support the claim that the portfolio manager outperformed the index. The probability calculated in only one tail of a distribution is half the probability of a two-tailed distribution if similar measurements were tested using both hypothesis testing tools.

When using a one-tailed test, the analyst is testing for the possibility of the relationship in one direction of interest and completely disregarding the possibility of a relationship in another direction. Using our example above, the analyst is interested in whether a portfolio’s return is greater than the market’s. In this case, they do not need to statistically account for a situation in which the portfolio manager underperformed the S&P 500 index. For this reason, a one-tailed test is only appropriate when it is not important to test the outcome at the other end of a distribution.
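
As a rough sketch of how such an upper-tailed test could be run in Python, assuming a hypothetical list of the portfolio's annual returns and using the 16.91% benchmark from the example above:

from scipy.stats import ttest_1samp

# Hypothetical annual portfolio returns (%) over several years
portfolio_returns = [18.2, 21.5, 15.9, 19.8, 22.1, 17.4, 20.3]
benchmark = 16.91  # the index return used as the hypothesized mean

# Upper-tailed test: H1 is that the mean portfolio return exceeds the benchmark
t_stat, p_value = ttest_1samp(portfolio_returns, popmean=benchmark, alternative='greater')
print(f"t = {t_stat:.3f}, one-tailed p = {p_value:.4f}")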

How Do You Determine If It Is a One-Tailed or Two-Tailed Test?

A one-tailed test looks for an increase or decrease in a parameter. A two-tailed test looks for change, which could be a decrease or an increase.

What Is a One-Tailed T Test Used for?

A one-tailed T-test checks for the possibility of a one-direction relationship but does not consider a directional relationship in another direction.

When Should a Two-Tailed Test Be Used?

You would use a two-tailed test when you want to test your hypothesis in both directions.

University of Southern California. " FAQ: What Are the Differences Between One-Tailed and Two-Tailed Tests? "


What is a p-value?

I'll answer that question, explain a statistical test you might not have heard of, and introduce you to my new obsession: bridge.


After leaving my last job, I started playing a lot of contract bridge (or “contact bridge” as it’s called in my family, as games tend to get heated). I recently participated in a regional bridge tournament. I was inspired to write this post about p -values when I found myself in what I thought was a very unlikely situation during a bridge session.

A crash course in bridge

I could talk bridge all day, but I know it’s not for everyone. Given that, I’ll try to minimize the background for this article. Bridge is a trick-taking card game played with a standard deck of 52 cards. There are four players seated around the table: North, East, South, and West. North and South are partners, and East and West are partners. Each hand is preceded by an auction in which the players bid on 1) how many tricks they think they can win, and 2) which suit is the trump suit. For example, if North-South wins the auction with a bid of 4 hearts, then they claim that they will take 6 + 4 = 10 tricks with hearts as trump. (You add 6 because there are 52/4 = 13 total tricks, so it doesn’t make sense to proclaim that you’ll take fewer than half.)


The weird thing about bridge (well, one weird thing) is that only three of the four players play each hand. One person from the team that wins the auction plays the hand. That person is called the “declarer.” 1 The declarer’s partner is called the “dummy,” and the dummy’s hand is placed face up after the opening lead by the defense. The declarer then plays both hands, concealing their own. All eyes are on you as the declarer, so that is the most exciting and stressful position to be in. For concreteness, suppose that North is the declarer. This means that South, North’s partner, is the dummy, while East and West play defense. Defense always plays the first card, and they lead into the dummy. Therefore, East would lead in this case. As soon as East plays their first card, South reveals their hand, and then play continues clockwise around the compass (with North playing both North’s and South’s hands). After all 13 tricks are played, you count how many tricks the declarer took and award points according to how they fared against their bid.

My unlikely experience

So what does this have to do with p -values? As I said above, bridge is most exciting when you are the declarer. In one of my games during the recent tournament, my team played 24 hands, and I noticed that I was starting to get bored. It turns out that I was the declarer on only two of the 24 hands! Two out of 24 is 8.3%, which seemed like a very small percentage to me. Since North, East, South, and West are just names, you expect each player should declare 25% of the time in the long run. This made me wonder if chance was to blame, or rather if something about the bidding habits of the people at the table skewed the results.

Statistics gives us a way of understanding how unlikely experiences like my bridge game are. Let's assume that North really should play 25% of hands. In that case, the proportion of hands that North plays out of N = 24 random deals is approximately normally distributed. This is a consequence of the central limit theorem . The mean of that normal distribution is π = .25, and the standard deviation is σ = √(π(1 − π) / N) = √(.25 × .75 / 24) ≈ .0884.

Knowing this, we can compute the z -score of the sample proportion p = 2/24 = 8.3%: z = (p − π) / σ = (.083 − .25) / .0884 ≈ −1.889.

This means that the observed proportion of 2/24 hands is almost two standard deviations below the mean of 6/24 hands. The next step is to convert this z -score into a p -value . The p -value is the area under the standard normal (i.e., bell-shaped) curve to the left of z = -1.889. You can compute it in Excel with the function NORM.S.DIST(-1.889, 1). The 1 means that you want the cumulative probability (i.e., area) and not the height of the curve at that point. Entering that function returns a p-value of about .0297.
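
The same area can be computed in Python, for anyone following along with SciPy rather than Excel (a one-line sketch):

from scipy.stats import norm

p_value = norm.cdf(-1.889)  # area under the standard normal curve to the left of z
print(round(p_value, 4))    # roughly 0.03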


The next question is, how do we interpret the p -value? Before getting to that, it’s best to explain the context in which p -values most often arise. Hypothesis testing is the branch of statistical inference wherein you collect samples from a population to test an assumption about the population. In this case, the null hypothesis is that the true proportion of hands with North as declarer is 25%. The alternative hypothesis—based on our observation—is that North actually declares less than 25% of the time. A good real-life example is a clinical trial for a drug. In that case, you have a treatment and control group. The null hypothesis is that there is no difference in outcomes (measured however you want) between those two groups. The alternative hypothesis is that the treatment group fares better than the control group. In either case, the p -value is interpreted as follows:

Assuming that the null hypothesis is true, there is a p% chance that a random sample of this size would be as extreme as this sample.

Applied to the bridge case: Assuming that North declares on 25% of hands, there is a 2.97% chance that North would declare two or fewer times in a game with 24 hands. The 24 hands part is critical; the standard deviation of the sampling distribution depends on N , the number of hands. The more hands you play, the more unlikely it would be for North to declare in fewer than 8.3% of hands. In a hypothesis test, you typically have some cutoff—called α—that you compare to the p- value. For example, if α = .01, then you would say that a p -value less than 1% is just too unlikely to believe. The only possible conclusion is that your assumption—the null hypothesis—must be false. Depending on the context, α can be .001, .01, .05, or sometimes .1.

There are a few common misconceptions about the p -value. The first is that it depends on α. As we saw, you can define the p -value without any reference to α, which is simply your line in the sand for what constitutes “beyond a reasonable doubt.” Another common misconception is that the p -value is the probability that the null hypothesis is true. That’s not right. All the p -value tells you is the probability of observing your sample, if the null hypothesis is true . Finally, some people claim that the smaller the p -value, the stronger the effect (in a treatment/control scenario). This is also not true. The p -value just measures how unlikely it is to observe a sample. Suppose you knew that a drug reduced someone’s cholesterol by an average of 15 mg/dL. If you repeatedly ran controlled experiments, you could make the p -value as small as you want by increasing the sizes of the treatment and control groups (thereby reducing variability). However, the magnitude of the effect will always be 15 mg/dL, on average, regardless of the p -value.

Another perspective: goodness of fit

With a p -value of under 3%, it’s tempting to argue that declaring on two out of 24 hands is too rare to attribute to chance. 2 However, I didn’t share all the details of my game. Of the 24 hands, I (North) declared twice, East five times, South eight times, and West nine times. As discussed, each player expects to be declarer on approximately 25% of hands. There’s another statistical test— χ -square goodness of fit —which can be used to analyze the distribution of categorical variables. This test works by comparing the observed counts of each possible value of a variable to the expected counts (6 each for North, South, East, and West in our case). To avoid positive and negative differences offsetting, each error is squared. The squared differences are then divided by the expected count and added together. (This is not unlike linear regression, which I discussed in this previous post .) The result is a single number called the χ -square statistic. This number is always positive, and it captures how far away the distribution of a variable is from what is expected. Looking at the table below, we see that χ -square = 5.0 in this case.

Player   Observed   Expected   (O − E)²/E
North         2          6         2.67
East          5          6         0.17
South         8          6         0.67
West          9          6         1.50
Total        24         24         5.0

To compute a p -value, we need to know what type of distribution χ -square follows. The central limit theorem says that, for each player, the square root of the (O − E)²/E column in the table above converges to the standard normal as N increases. 3 This makes χ -square the sum of squares of normally distributed variables. Shockingly, such sums follow a neat distribution called the χ -square distribution (hence the name of the statistic/test). Once you know that, then the calculation of the p -value and its interpretation are exactly the same. To calculate the p-value, you can type

=1-CHISQ.DIST(5, 3, 1)

in Excel. 5 is the value of the χ -square statistic, 3 is the degrees of freedom (number of players − 1), and 1 means cumulative, as with NORM.S.DIST. We have to subtract the value from 1 because now the extreme side is the right tail, not the left tail.
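
The same goodness-of-fit calculation can be sketched in Python with SciPy, passing in the observed declarer counts (expected counts default to an even split):

from scipy.stats import chisquare

observed = [2, 5, 8, 9]  # North, East, South, West
result = chisquare(observed)  # expected counts default to the mean, 6 per player
print(result.statistic, result.pvalue)  # chi-square = 5.0, p is roughly 0.17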


Which test is right?

Notice that the p -value from the χ -square test is much larger: 17.2%. This number is large enough that we would not be able to reject the null hypothesis, which is that each player plays 25% of the hands. So, on the one hand, the first test says that it is very unlikely (about 3 out of 100) that North would play zero, one, or two out of 24 hands. On the other hand, the χ -square test says that an arrangement as uneven as (N, E, S, W) = (2, 5, 8, 9) is not all that rare: it would happen more than one in six times that you play 24 hands.

This begs the question, which test is right? The short answer is that they're both right. They're just answering different questions. In the first case, I zoomed in on the experience of North. From North's perspective, it's true that it's very rare to declare two or fewer times out of 24 hands. However, declaring is a zero-sum game (well, I guess a 24-sum game): if any of the other seats declare more times than expected, that means someone has to declare fewer times than expected. The χ -square test considers all four hands together. When you do that, the arrangement (2, 5, 8, 9) is seen to be not that pathological. In fact, I played six times during the tournament, so the chances are good that I would see some declarer distribution with p = .172 at least once. Reflecting on the tournament now, I can't remember the number of times I declared in any of the other sessions. You could argue that I engaged in " p hacking" by focusing on the one game in which I declared only twice. p hacking is a dishonest (yet common) approach to statistical analysis in which you take multiple samples until you get a p -value that supports the conclusion you want to draw.

To close, I want to clarify one thing about the χ -square statistic. It reduces the deviation from the expected counts for all players into a single number. For example, (3, 3, 9, 9) has a higher χ -square value of 6.0 (thus, a lower p -value), even though no one declared two or fewer times. If you wanted to get the exact probability of someone declaring two or fewer times, you would apply the multinomial distribution , which models the different ways you can add up four numbers to get 24. This is tricky, though, because it’s difficult to systematically list the arrangements (N, E, S, W) that have a minimum value of 0, 1, or 2 in one of the hands. 4 Instead, I simulated 500 sessions of 24 hands and noted 1) the percentage of sessions that each player played two or fewer hands, and 2) the percentage of sessions that each of 0, 1, 2, 3, 4, 5, and 6 was the minimum number of hands declared by any player. (The minimum can’t be greater than 6 because the average is 6.) As you see below, the minimum number of hands played was two or fewer in 16.8% of the 500 sessions, which is slightly less than the χ -square test. I re-ran this experiment another ~30 times, and the average was closer to 15.6%.


That’s all for today. Thank you for reading and indulging my bridge obsession. Please subscribe and share if you enjoyed this. Most importantly, please let me know if you’re looking for a bridge partner.

The declarer is the first person on the declaring team to mention the ultimate trump suit in the auction. For example, if North opens the bidding with 1 heart, they would play the hand if North-South ends up winning the auction with hearts as the trump suit. This is true even if South makes the final bid of 4 hearts.

Some possible explanations: the hands aren’t random, our bidding or our opponents bidding is unusual, etc.

Briefly, if you convert the counts to frequencies, you get a binomial distribution, and then it’s easier to see how to apply CLT. If you want more rigor, don’t read blogs. (Just kidding… here’s a proof .)

The “ stars and bars ” method tells you that there are 2,925 (= 27 choose 3) ways that N-E-S-W can share declarer in 24 hands. There are probably several hundred that have at least one 0, 1, or 2 in one of the positions. I was not up to the task of enumerating them.



COMMENTS

  1. One-Tailed and Two-Tailed Hypothesis Tests Explained

    Two-tailed hypothesis tests are also known as nondirectional and two-sided tests because you can test for effects in both directions. When you perform a two-tailed test, you split the significance level percentage between both tails of the distribution. In the example below, I use an alpha of 5% and the distribution has two shaded regions of 2. ...

  2. Two-Tailed Hypothesis Tests: 3 Example Problems

    To test this, she can perform a one-tailed hypothesis test with the following null and alternative hypotheses: H 0 (Null Hypothesis): μ = 10 inches; H A (Alternative Hypothesis): μ ≠ 10 inches; This is an example of a two-tailed hypothesis test because the alternative hypothesis contains the not equal "≠" sign. The botanist believes ...

  3. What Is a Two-Tailed Test? Definition and Example

    Two-Tailed Test: A two-tailed test is a statistical test in which the critical area of a distribution is two-sided and tests whether a sample is greater than or less than a certain range of values ...

  4. Two Tailed Test: Definition, Examples

    A two tailed test tells you that you're finding the area in the middle of a distribution. In other words, your rejection region (the place where you would reject the null hypothesis) is in both tails. For example, let's say you were running a z test with an alpha level of 5% (0.05). In a one tailed test, the entire 5% would be in a single tail.

  5. One- and two-tailed tests

    In coin flipping, the null hypothesis is a sequence of Bernoulli trials with probability 0.5, yielding a random variable X which is 1 for heads and 0 for tails, and a common test statistic is the sample mean (of the number of heads) ¯. If testing for whether the coin is biased towards heads, a one-tailed test would be used - only large numbers of heads would be significant.

  6. Two Sample t-test: Definition, Formula, and Example

    A two-sample t-test always uses the following null hypothesis: H 0: μ 1 = μ 2 (the two population means are equal) The alternative hypothesis can be either two-tailed, left-tailed, or right-tailed: H 1 (two-tailed): μ 1 ≠ μ 2 (the two population means are not equal) H 1 (left-tailed): μ 1 < μ 2 (population 1 mean is less than population ...

  7. FAQ: What are the differences between one-tailed and two-tailed tests?

    A two-tailed test will test both if the mean is significantly greater than x and if the mean significantly less than x. ... So, depending on the direction of the one-tailed hypothesis, its p-value is either .5*(two-tailed p-value) or 1-.5*(two-tailed p-value) if the test statistic symmetrically distributed about zero.

  8. One-tailed and two-tailed tests (video)

    A one tailed test does not leave more room to conclude that the alternative hypothesis is true. The benefit (increased certainty) of a one tailed test doesn't come free, as the analyst must know "something more", which is the direction of the effect, compared to a two tailed test. Show more...

  9. Hypothesis testing: One-tailed and two-tailed tests

    The alternate hypothesis for a two-sided t-test would simply state that the mean blood pressure for the medication group is different than the placebo group, but it wouldn't specify if medication would raise or lower the mean blood pressure. Typically, researchers choose to use two-sided t-tests, since they usually don't know how a ...

  10. Hypothesis Testing

    So if the alternative hypothesis is written with a ≠ sign, we are going to perform a 2-tailed test, because the true value could be either more than 100 or less than 100, which makes it 2-tailed. So, after stating the null and alternative hypotheses, it's time to move to step 2, which is: 2. Choose the level of significance (α).

  11. One Tailed Test or Two in Hypothesis Testing: How ...

    The two red tails are the alpha level, divided by two (i.e. α/2). An alpha level (sometimes just called a "significance level") is used in hypothesis tests; it is the probability of making the wrong decision when the null hypothesis is true. A one-tailed test has the entire alpha level (here, 5%) in one tail (either the left or the right tail).

  12. Hypothesis Testing: Upper-, Lower, and Two Tailed Tests

    The procedure for hypothesis testing is based on the ideas described above. Specifically, we set up competing hypotheses, select a random sample from the population of interest and compute summary statistics (a critical-value sketch of the decision rule appears after this list). ... In a two-tailed test the decision rule has investigators reject H₀ if the test statistic is extreme, either larger than an upper ...

  13. One- and Two-Tailed Tests

    This test is called a directional or one‐tailed test because the region of rejection is entirely within one tail of the distribution. Some hypotheses predict only that one value will be different from another, without additionally predicting which will be higher. The test of such a hypothesis is nondirectional or two‐tailed because an ...

  14. S.3.2 Hypothesis Testing (P-Value Approach)

    Two-Tailed. In our example concerning the mean grade point average, suppose again that our random sample of n = 15 students majoring in mathematics instead yields a test statistic t* equal to -2.5. The P-value for conducting the two-tailed test H₀: μ = 3 versus Hₐ: μ ≠ 3 is the probability that we would observe a test statistic less than -2.5 or greater than 2.5 if the population mean ... (this calculation is sketched in Python after this list).

  15. How to Identify a Left Tailed Test vs. a Right Tailed Test

    There are three different types of hypothesis tests: Two-tailed test: The alternative hypothesis contains the "≠" sign. Left-tailed test: The alternative hypothesis contains the "<" sign. Right-tailed test: The alternative hypothesis contains the ">" sign. Notice that we only have to look at the sign in the alternative hypothesis ...

  16. One and Two Tailed Tests

    A one-tailed test looks for an increase or decrease in the parameter whereas a two-tailed test looks for any change in the parameter (which can be any change: increase or decrease). We can perform the test at any level (usually 1%, 5% or 10%). For example, performing the test at a 5% level means that there is a 5% chance of wrongly rejecting H₀.

  17. Hypothesis Testing

    Step 2: Collect data. For a statistical test to be valid, it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in. Hypothesis testing example.

  18. 11.4: One- and Two-Tailed Tests

    The one-tailed hypothesis is rejected only if the sample proportion is much greater than 0.5. The alternative hypothesis in the two-tailed test is π ≠ 0.5. In the one-tailed test it is π > 0.5. You should always decide whether you are going to use a one-tailed or a two-tailed probability before looking at the data.

  19. Difference Between One-tailed and Two-tailed Test

    A one-tailed test, as the name suggests, is a statistical hypothesis test in which the alternative hypothesis has a single end. A two-tailed test, on the other hand, is a hypothesis test in which the alternative hypothesis has dual ends. In the one-tailed test, the alternative hypothesis is stated directionally.

  20. An Introduction to Statistics: Understanding Hypothesis Testing and

    HYPOTHESIS TESTING. A clinical trial begins with an assumption or belief, and then proceeds to either prove or disprove this assumption. In statistical terms, this belief or assumption is known as a hypothesis. Counterintuitively, what the researcher believes in (or is trying to prove) is called the "alternate" hypothesis, and the opposite ...

  21. Difference Between One-Tailed and Two-Tailed Tests

    For a right-tailed test, the critical region lies in the upper tail; for a left-tailed test, it lies in the lower tail. Test statistic: depending on the type of test and the distribution, the test statistic is computed (a z-score for a normal distribution). Decision rule: if the test statistic falls in the critical region, reject the null hypothesis in favor of the alternative hypothesis. Example: the effect of students' participation in a coding competition on their fear ...

  22. Left Tailed Test or Right Tailed Test ? How to Decide

    The right tailed test and the left tailed test are examples of one-tailed tests. They are called "one tailed" tests because the rejection region (the area where you would reject the null hypothesis) is only in one tail. The two tailed test is called a two tailed test because the rejection region is split between both tails.

  23. Choosing One-Tailed vs Two-Tailed Tests in BI

    When delving into Business Intelligence (BI), hypothesis testing is a cornerstone of data analysis, providing insights and guiding decision-making. Choosing between a one-tailed and two-tailed ...

  24. Understanding P-Values and Statistical Significance

    In a one-tailed test, the entire significance level is allocated to one tail of the distribution. For example, if you are using a significance level of 0.05 (5%), you would reject the null hypothesis if your test statistic falls in the 5% tail on either the right (for a right-tailed test) or the left (for a left-tailed test) end of the distribution.

  25. How to Perform T-Tests in Python (One- and Two-Sample)

    The formula for the pooled two-sample t-test can be written as t = (x̄₁ - x̄₂) / √( s_p² (1/n₁ + 1/n₂) ), where the pooled variance is s_p² = ((n₁ - 1)s₁² + (n₂ - 1)s₂²) / (n₁ + n₂ - 2); x̄₁ and x̄₂ are the sample means of the two groups, s₁² and s₂² are the sample variances of the two groups, and n₁ and n₂ are the sample sizes of the two groups. We then need to calculate the p-value using degrees of freedom equal to n₁ + n₂ - 2 (see the Python sketch after this list).

  26. One-Tailed Test Explained: Definition and Example

    One-Tailed Test: A one-tailed test is a statistical test in which the critical area of a distribution is one-sided so that it is either greater than or less than a certain value, but not both. If ...

  27. What is a p-value?

    Hypothesis testing is the branch of statistical inference wherein you collect samples from a population to test an assumption about the population. In this case, the null hypothesis is that the true proportion of hands with North as declarer is 25%. The alternative hypothesis—based on our observation—is that North actually declares less ...
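
Below are a few minimal Python sketches, added for illustration, that work through some of the entries above. The data and parameter values in them are invented, and the SciPy calls assume a reasonably recent SciPy.

For entry 2, a sketch of the botanist's two-tailed one-sample test of H₀: μ = 10 inches versus Hₐ: μ ≠ 10 inches:

    from scipy import stats

    # Hypothetical plant heights in inches (not from the original article).
    heights = [9.2, 10.1, 10.4, 9.7, 10.8, 9.9, 10.5, 9.4, 10.2, 9.8]

    # ttest_1samp runs a two-tailed one-sample t-test by default, so the
    # p-value covers both tails of the t-distribution around t = 0.
    result = stats.ttest_1samp(heights, popmean=10)
    print(result.statistic, result.pvalue)

A one-tailed version of the same test only needs alternative='greater' or alternative='less' in the same call.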
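
For entry 5, a contrast between a one-tailed and a two-tailed test of a coin, using SciPy's exact binomial test on a made-up count of heads:

    from scipy import stats

    heads, flips = 61, 100  # hypothetical data: 61 heads in 100 flips

    # One-tailed: is the coin biased towards heads?
    one_tailed = stats.binomtest(heads, flips, p=0.5, alternative='greater')

    # Two-tailed: is the coin biased in either direction?
    two_tailed = stats.binomtest(heads, flips, p=0.5, alternative='two-sided')

    print(one_tailed.pvalue, two_tailed.pvalue)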
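
For entry 7, a quick numeric check of the "one-tailed p-value is half the two-tailed p-value" relationship for a symmetric test statistic, here a t-statistic that happens to fall in the hypothesized direction:

    from scipy import stats

    t_stat, df = 2.1, 20  # hypothetical test statistic and degrees of freedom

    two_tailed = 2 * stats.t.sf(abs(t_stat), df)  # area in both tails
    one_tailed = stats.t.sf(t_stat, df)           # area in the upper tail only

    print(two_tailed)  # two-tailed p-value
    print(one_tailed)  # half of the two-tailed p-value here

Had the statistic fallen in the opposite direction, the one-tailed p-value would instead be 1 - .5*(two-tailed p-value), as the entry notes.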
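
For entry 12, a sketch of the two-tailed decision rule in terms of critical values of a t-distribution; the significance level and degrees of freedom are invented:

    from scipy import stats

    alpha, df = 0.05, 14  # hypothetical significance level and degrees of freedom

    # Two-tailed test: split alpha between the two tails, giving a lower and an
    # upper critical value; reject H0 if the test statistic lies beyond either one.
    upper = stats.t.ppf(1 - alpha / 2, df)
    lower = -upper
    print(lower, upper)

    # Right-tailed test: all of alpha sits in the upper tail, so the single
    # critical value is smaller than the two-tailed upper critical value.
    print(stats.t.ppf(1 - alpha, df))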
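
For entry 14, the two-tailed p-value described there, computed with SciPy using df = n - 1 = 14:

    from scipy import stats

    t_star, df = -2.5, 14

    # P(T <= -2.5) + P(T >= 2.5) for a t-distribution with 14 degrees of freedom
    p_two_tailed = 2 * stats.t.sf(abs(t_star), df)
    print(p_two_tailed)  # roughly 0.025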
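
For entry 25, a minimal two-sample t-test with SciPy; the two groups are invented, and equal_var=True gives the pooled version that matches the formula and the n₁ + n₂ - 2 degrees of freedom noted in that entry:

    from scipy import stats

    group1 = [14.2, 15.1, 13.8, 14.9, 15.4, 14.4]  # hypothetical data
    group2 = [13.1, 13.9, 12.8, 13.5, 14.0, 13.3]

    # Pooled (equal-variance) two-sample t-test; two-tailed by default.
    t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)
    print(t_stat, p_value)

For a one-tailed version, pass alternative='greater' or alternative='less'.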