The F-test of overall significance indicates whether your linear regression model provides a better fit to the data than a model that contains no independent variables. In this post, I look at how the F-test of overall significance fits in with other regression statistics, such as R-squared. While R-squared measures how well your model fits the data, the F-test provides a formal hypothesis test of that fit.

F-tests are flexible statistical tests that you can use in a wide variety of settings. F-tests can evaluate multiple model terms simultaneously, which allows them to compare the fits of different linear models. In contrast, t-tests can evaluate just one term at a time.

Read my blog post about how F-tests work in ANOVA.

To calculate the F-test of overall significance, your statistical software just needs to include the proper terms in the two models that it compares. The overall F-test compares the model that you specify to the model with no independent variables. This type of model is also known as an intercept-only model.

The F-test for overall significance has the following two hypotheses:

- The null hypothesis states that the model with no independent variables fits the data as well as your model.
- The alternative hypothesis says that your model fits the data better than the intercept-only model.
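To make the comparison concrete, here is a minimal sketch in pure Python that fits a one-predictor model by ordinary least squares and forms the overall F-statistic from the sums of squares of the two models. The data and variable names are made up for illustration; statistical software does this for you.

```python
# Hypothetical data (made up for illustration): y vs a single predictor x.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

n = len(y)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Fit the one-predictor model by ordinary least squares.
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
slope = sxy / sxx
intercept = mean_y - slope * mean_x
fitted = [intercept + slope * xi for xi in x]

# The intercept-only model predicts the mean of y for every observation,
# so its residual sum of squares is the total sum of squares.
ss_total = sum((yi - mean_y) ** 2 for yi in y)
ss_resid = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
ss_model = ss_total - ss_resid

k = 1  # number of independent variables in the specified model
f_value = (ss_model / k) / (ss_resid / (n - k - 1))
print(f_value)
```

The same ratio generalizes to multiple predictors: the numerator degrees of freedom are the number of independent variables k, and the denominator degrees of freedom are n − k − 1.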

In statistical output, you can find the overall F-test in the ANOVA table. An example is below.

## Interpreting the Overall F-test of Significance

Compare the p-value for the F-test to your significance level. If the p-value is less than the significance level, your sample data provide sufficient evidence to conclude that your regression model fits the data better than the model with no independent variables.

This finding is good news because it means that the independent variables in your model improve the fit!

Generally speaking, if none of your independent variables are statistically significant, the overall F-test is also not statistically significant. Occasionally, the tests can produce conflicting results. This disagreement can occur because the F-test of overall significance assesses all of the coefficients jointly whereas the t-test for each coefficient examines them individually. For example, the overall F-test can find that the coefficients are significant *jointly* while the t-tests can fail to find significance *individually*.

These conflicting test results can be hard to understand, but think about it this way. The F-test sums the predictive power of all independent variables and determines that it is unlikely that *all* of the coefficients equal zero. However, it’s possible that each variable isn’t predictive enough on its own to be statistically significant. In other words, your sample provides sufficient evidence to conclude that your model is significant, but not enough to conclude that any individual variable is significant.

**Related post**: How to Interpret Regression Coefficients and their P-values.

## Additional Ways to Interpret the F-test of Overall Significance

If you have a statistically significant overall F-test, you can draw several other conclusions.

For the model with no independent variables, the intercept-only model, all of the model’s predictions equal the mean of the dependent variable. Consequently, if the overall F-test is statistically significant, your model’s predictions are an improvement over using the mean.

R-squared measures the strength of the relationship between your model and the dependent variable. However, it is not a formal test for the relationship. The F-test of overall significance is the hypothesis test for this relationship. If the overall F-test is significant, you can conclude that R-squared does not equal zero, and the correlation between the model and dependent variable is statistically significant.
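That connection can be written down directly: with n observations and k independent variables, the overall F-statistic is F = (R²/k) / ((1 − R²)/(n − k − 1)), so a significant F-test is equivalent to rejecting the hypothesis that R² equals zero. A quick sketch with made-up numbers:

```python
# Hypothetical values for illustration only: not from any real analysis.
r_squared = 0.65   # model R-squared
n = 30             # number of observations
k = 3              # number of independent variables

# Overall F-statistic expressed in terms of R-squared.
f_value = (r_squared / k) / ((1 - r_squared) / (n - k - 1))
print(f_value)  # roughly 16.1
```

As R-squared approaches zero, the F-value approaches zero; as R-squared grows, the F-value grows without bound, which matches the idea that higher F-values indicate a bigger improvement over the intercept-only model.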

It’s fabulous if your regression model is statistically significant! However, check your residual plots to determine whether the results are trustworthy! And, learn how to choose the correct regression model!

If you’re learning regression, check out my Regression Tutorial!

**Note: I wrote a different version of this post that appeared elsewhere. I’ve completely rewritten and updated it for my blog site.**

Duc-Anh Luong says

Hi Jim,

Thank you so much for your interesting and easily understandable post. However, I have a question about the conflict between the overall F-test and the t-tests for each predictor. What should we do if the t-test for some of the predictors is non-significant? Should we remove these predictors and fit the model again?

Many thanks,

Duc Anh

Jim Frost says

Hi Duc-Anh,

Frequently you do remove an independent variable from a model if it is not statistically significant. There are some exceptions to this rule. If you believe that theoretical considerations suggest that the variable should be in the model despite an insignificant p-value, you could consider leaving it in. Additionally, if it is a variable that you are specifically testing in an experiment, you would leave it in to demonstrate the test results.

But, yes, frequently you would consider removing the predictor from the model if it is not statistically significant. Your dataset provides insufficient evidence to conclude that there is a relationship between that predictor and the response.

One more point, be sure to check the residual plots. There might be a curvilinear relationship.

I hope this helps,

Jim

Duc-Anh Luong says

Hi Jim,

Thank you so much for your reply. If we keep one or more predictors that are not statistically significant based on one of the exceptions you mentioned in the previous comment, how can we interpret the results?

Best regards,

Duc Anh

Jim Frost says

Hi again Duc Anh,

It depends on why you leave the predictor in the model. If you’re leaving it in the model because it’s the specific term you are testing for your experiment, then you state that you have insufficient evidence to conclude that there is a relationship between this variable and the response.

However, if you’re leaving the variable in for theoretical reasons, that’s what you should state. The variable wasn’t statistically significant but theory/other studies suggest it belongs in the model. You might even investigate possible reasons for why it is not significant, such as a small sample size, noisy data, a fluky sample, etc. Even though you suspect the variable belongs in the model, your sample still provides insufficient evidence to conclude that the relationship exists. You really have to make sure you have a good strong reason for this approach and state clearly why you are doing so.

I hope this helps,

Jim

Duc-Anh Luong says

Hi Jim,

Thank you so much for your very specific response. I think that is very true when we interpret the model parameters. How about when we use a model with one or more non-statistically-significant variables to make predictions? Sorry for my stupid questions!

Best regards,

Duc Anh

Jim Frost says

Hi Duc Anh,

I was referring to the case where you leave a predictor in the model when it is not significant. If you’re using the model to make predictions, you have the additional consideration of the precision of the predictions. Leaving an insignificant predictor in the model might reduce the precision.

What you want to do is to compare the predicted R-squared and width of the prediction intervals between the model with the insignificant predictors and the model with only significant predictors. Read my post about using regression to make predictions for more information!

And, there really is no such thing as a stupid question! 🙂

Jim

Aasia says

Hi Jim. I want to know, in the ANOVA table of a regression analysis, if the p-value is significant, is there still any limit for the F-value? What if it comes out as big as 300 or 450?

Jim Frost says

Hi Aasia, an F-value is the ratio of two variances. Theoretically, there is no limit to the F-value. In terms of the explained variance, the better your model is compared to the intercept-only model, the higher the F-value. However, for a specific model with a given number of degrees of freedom in the numerator and denominator, higher F-values occur less frequently.

I hope this helps! Thanks for the great question!

Kate says

Hi Jim,

I’m trying to interpret the results of a general linear model I have run. I have two factors – treatment and date (where the same experiment was repeated on different dates). Both give a significant p value but one has a much higher F value (136 compared to 8). Does that mean the factor with the higher F value is having a greater effect?

Jim Frost says

Hi Kate, that’s a great question. In a nutshell, no, the higher F-value doesn’t indicate a greater effect. I write about how to identify the most important variables in your model. I talk about it in the regression context, but you can apply some of the principles to ANOVA as well. I think that post will help you with this issue.

Nara says

Hi Jim,

If the overall regression model is not significant and none of the predictors are significant, how should I interpret the F-value and R-squared?

Jim Frost says

Hi Nara, unfortunately, when the overall F-test is not significant and none of the predictors are significant, you really have no evidence of any relationships between your model and the response variable.

In terms of how to interpret the F-value, that's the test statistic for F-tests. The test uses this statistic to calculate the p-value. The F-value is the ratio of two variances. For this type of test, the ratio is the variance explained by your model divided by the unexplained (residual) variance. As the F-value increases for this test, it indicates that your model is doing better compared to the intercept-only model. When the F-value reaches a critical value, you can reject the null hypothesis. I've written about how the F-test works in one-way ANOVA. That post shows how F-values are converted to p-values. That's a different use of the F-test, but the ideas are very much the same. You just change the variances that are included in the ratio.

As for the R-squared: because your F-test of overall significance is not statistically significant, you have insufficient evidence to conclude that your R-squared is greater than zero. The R-squared value in your analysis might not equal zero, but that's probably just due to chance correlations rather than a true explanation of the population variance.

Dawn says

Jim,

Thank you for a great post! I have a question. If my F-value was found to not be significant (p=.069), do I still interpret the individual t-values in the coefficients table? If I do, then there are two variables that are significant (p<.05). I have been searching and having difficulty finding an answer to this!

Thank You

Dawn

Jim Frost says

Hi Dawn, yes, the results of the different tests can disagree. Despite the insignificant F-test, you can still conclude that your two variables are statistically significant. I’d guess that either you’re leaving insignificant variables in the model and/or those two variables are close to the significance level.

CORDELIA says

Hello Jim,

I enjoy every bit of your lectures here. Please, what does it imply when the F-test is statistically insignificant?

Jim Frost says

Hi Cordelia,

I’m glad that you find these to be helpful!

If your F-test of overall significance is NOT significant, then you fail to reject the null hypothesis. For this test, you can interpret it in several equivalent ways.

You can say that you have insufficient evidence to conclude that your model explains the variation in the dependent variable any better than just using the mean of the dependent variable.

Or, you can say that you have insufficient evidence to conclude that the R-squared is significantly greater than zero.

In short, your model is not explaining the variability in the dependent variable to a statistically significant degree.

I hope this helps!

Olusola says

Please give a lecture on the Wald chi-square test compared to the F-test.

Emmanuel Nkant says

Thank you very much, very concise, a real pleasure to follow your teaching.

Jim Frost says

Thank you, Emmanuel!

Emily says

Thank you for this! I have a question regarding this. Can I use the F test to pick which model fits my data the best? I have tried several different models, and the F test for some of them is significant. Since I have multiple significant models, how do I choose which one to use? Do I choose the one with the lowest significant F test? The p-values for the individuals vary between the models, so picking which model to use really affects the results of my analysis.

Jim Frost says

Hi Emily,

Determining the best model for your data can be complicated. It involves looking at more than one statistic, such as the F-test. I've written a blog post about how to choose the best model, which goes over a variety of things you should check. That post should answer a lot of your questions. But, don't hesitate to ask if you have more!

Also, if the p-values and coefficients change dramatically depending on the variables that you include in the model, your model might have multicollinearity (correlated independent variables). This issue can make identifying the best model more difficult. You might want to read about it in my post about multicollinearity.

I hope this helps!

Honey Shandilya says

Hello sir, my regression model has an R-squared value of 0.392 and a p-value of 0.00029, which is good, but none of my independent variables have good coefficients or are significant. So, what should I conclude from this? Help me, sir.

Jim Frost says

Hi,

If that p-value is for the overall F-test, it suggests that your model is statistically significant. It is unusual to see such a low p-value for the overall F-test and not have any of the independent variables also be significant. It's hard to know for sure, but it's possible that your model has multicollinearity (correlated independent variables). This problem can make significant variables appear to be insignificant. However, multicollinearity does not affect R-squared or the overall F-test, which might explain why that p-value is still significant while the others are not.

To learn if this problem affects your model, read my post about multicollinearity.

I hope this helps!

Hifza says

When I ran the regression, I used 1 dependent and 2 independent variables. After running the regression, my results are F = 8.385337, F significance = 0.106549, R-squared = 0.893450, and p-value = 0.0027062. So please tell me, according to these results, what is the interpretation of the R-squared and the model significance as per the probability of the F-test?

I mean, what is the actual conclusion of the interpretation? Please guide me.

Jim Frost says

Hi Hifza,

Usually you don’t need to interpret the F-value itself. Your statistical software uses the F-value to calculate the p-value, which is what you should focus on. (I don’t know what the F significance value refers to.)

Because your p-value for the overall F-test (0.002) is less than the typical significance level of 0.05, we can conclude that your model explains the variability of the dependent variable around its mean better than using the mean itself. As for how it relates to the R-squared, you have sufficient evidence to reject the null hypothesis that your model’s R-squared equals zero. By most standards, you have a nice and high R-squared value. Fortunately, it all suggests that you have a good model–at least according to the statistics. However, be sure to check the residual plots as well!

Usually, you also look at the p-values for the specific independent variables to see which ones are significant on their own.

I hope this helps. Best of luck with your analysis!

Yamila says

Hi Jim,

When running the linear model in RStudio I get this results.

```
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  11063.069   7305.303   1.514   0.1353
mpg           -114.743     79.172  -1.449   0.1526
rep78          710.879    322.117   2.207   0.0312 *
headroom      -725.636    416.660  -1.742   0.0868 .
trunk           70.113    103.633   0.677   0.5013
weight           4.034      1.532   2.634   0.0108 *
length         -84.390     43.982  -1.919   0.0599 .
turn          -207.480    131.497  -1.578   0.1200
displacement    16.630      8.995   1.849   0.0695 .
gear_ratio    1642.587   1061.150   1.548   0.1270
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 2178 on 59 degrees of freedom
  (5 observations deleted due to missingness)
Multiple R-squared: 0.5149,  Adjusted R-squared: 0.4409
F-statistic: 6.958 on 9 and 59 DF,  p-value: 9.131e-07
```

The p-value of 9.131e-07 is not within the threshold, right? But my R-squared and F-statistic look good and high. I am having a hard time interpreting this result. Could you help me?

Jim Frost says

Hi Yamila,

Your p-value is written in scientific notation. You need to move the decimal point 7 places to the left. Your p-value is actually 0.0000009131. That’s extremely low and your model is statistically significant.
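A quick way to see this, sketched in Python (purely illustrative):

```python
# Scientific notation: 9.131e-07 means 9.131 × 10^-7, i.e., move the
# decimal point 7 places to the left.
p_value = float("9.131e-07")
print(f"{p_value:.10f}")  # prints 0.0000009131
print(p_value < 0.05)     # prints True: statistically significant
```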

Jackson says

Hi Jim,

I have values for F-statistics ranging from 39.39 to 69.81 for 6 different models with their respective p-values all <0.0001. What would you make of such information?

Jim Frost says

Hi Jackson,

Typically, you don’t interpret the F-values directly. Instead, you can use the p-values. Because your p-values are less than all common levels of significance, your models are statistically significant. This post tells you what a statistically significant model means.