Nonlinear regression analysis cannot calculate P values for the independent variables in your model. Why not? And, what do you use instead? Those are the topics of this blog post.
Nonlinear regression is an excellent statistical analysis when you need maximum flexibility for fitting curves to your data. However, just as there are sound reasons for no R-squared values in nonlinear regression, there are valid reasons why there are no P values for the coefficient estimates.
Why Are P Values Possible in Linear Regression?
The question above is probably not one that you’ve asked.
P values for the independent variables in linear regression are a valuable statistical tool that seems quite natural. In linear regression, a P value indicates whether the relationship between an independent variable and the dependent variable is statistically significant while controlling for the other variables in the model. For more information, read my post about interpreting P values and regression coefficients.
However, you need to understand why P values are possible in linear regression before you can figure out why they are impossible to calculate for nonlinear regression.
The key point to understand is that a linear regression model is a highly restricted model form. In a linear regression equation, every term is either the constant or a parameter multiplied by an independent variable (IV). Then, you build the equation only by adding the terms together. These rules limit the form to just one type:
Dependent variable = constant + parameter * IV + … + parameter * IV
Because of these restrictions, you end up with a consistent form that makes it possible to create a single hypothesis test that is appropriate for all parameter estimates in all linear regression models. Regardless of what an independent variable measures, if the parameter is zero, the value of that term equals zero (0 * IV = 0). This condition indicates that the independent variable has no relationship with the dependent variable because it literally adds nothing to the dependent variable in the equation.
Given the consistent form, the following hypothesis test is valid for all terms in all linear regression models. βi represents the parameter value for an independent variable.
- H0: βi = 0
- HA: βi ≠ 0
The P value for each term measures the amount of evidence against the null hypothesis that the parameter (coefficient) equals zero. If the P value is less than your significance level, reject the null hypothesis and conclude that the parameter does not equal zero. Changes in the independent variable are related to changes in the dependent variable.
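To make this concrete, here is a minimal sketch of the t-test behind those P values, worked out by hand on simulated data. The data, the true slope of 2, and the variable names are all illustrative assumptions, not from any particular study.

```python
import numpy as np
from scipy import stats

# Simulate data with a known relationship: y = 3 + 2*x + noise.
rng = np.random.default_rng(42)
n = 100
x = rng.uniform(0, 10, n)
y = 3.0 + 2.0 * x + rng.normal(0, 1.5, n)

# Fit y = b0 + b1*x by ordinary least squares.
X = np.column_stack([np.ones(n), x])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# Standard errors of the parameter estimates.
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# t-statistic and two-sided P value for H0: beta_i = 0.
t_stat = beta / se
p_values = 2 * stats.t.sf(np.abs(t_stat), dof)

print(f"slope estimate: {beta[1]:.3f}, P value: {p_values[1]:.3g}")
```

Notice that the test statistic is the same recipe for every term in every linear model: estimate divided by standard error, compared against zero. That universality is exactly what nonlinear regression lacks.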
Why Are P Values Incalculable in Nonlinear Regression?
Conversely, nonlinear regression models can take on a virtually infinite number of forms. There are almost no restrictions on how you can use parameters in a nonlinear regression equation. On the positive side, this freedom is precisely what gives nonlinear regression its unmatched curve-fitting abilities.
However, because there is an incredibly diverse array of potential model forms, it’s impossible to devise a single hypothesis test for all parameters. Instead, the null hypothesis value of each parameter depends on the nonlinear function, the parameter’s location in it, and the research question.
What can you use instead of P values? You’ll need to use your knowledge of both the research area and the nonlinear function to identify the parameter value that corresponds to the null hypothesis. Then, assess the parameter estimates, and particularly the confidence interval of the estimate, to determine whether the variable is statistically significant. If the confidence interval of the estimate excludes the null value, you can conclude that the parameter is statistically significant.
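As a sketch of this approach, the example below fits an exponential decay function and judges the decay-rate parameter by its confidence interval. The function, the simulated data, and the choice of zero as the null value (zero decay means no relationship in this particular function) are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy import stats

def decay(x, a, b):
    """Exponential decay: a is the starting level, b the decay rate."""
    return a * np.exp(-b * x)

# Simulate data from a known decay curve plus noise.
rng = np.random.default_rng(7)
x = np.linspace(0, 5, 50)
y = decay(x, 10.0, 0.8) + rng.normal(0, 0.3, x.size)

# Fit the curve and pull the parameter covariance matrix.
popt, pcov = curve_fit(decay, x, y, p0=[1.0, 0.1])
se = np.sqrt(np.diag(pcov))

# 95% confidence interval for the decay rate b. The null value here
# is b = 0 (no decay) -- chosen from the function's meaning, not from
# a universal rule as in linear regression.
dof = x.size - len(popt)
t_crit = stats.t.ppf(0.975, dof)
lower, upper = popt[1] - t_crit * se[1], popt[1] + t_crit * se[1]

print(f"decay rate: {popt[1]:.3f}, 95% CI: [{lower:.3f}, {upper:.3f}]")
```

If the interval excludes the null value, you have evidence of a statistically significant effect. The key difference from linear regression is that you, not the software, must decide which null value is meaningful for each parameter.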
For examples of nonlinear functions, see my post about the differences between linear and nonlinear regression.
To learn about when to use nonlinear regression, read the following:
- How to Choose Between Linear and Nonlinear Regression
- Curve Fitting using Linear and Nonlinear Regression