Nonlinear regression is an extremely flexible analysis that can fit almost any curve in your data. R-squared seems like a very intuitive way to assess the goodness-of-fit of a regression model. Unfortunately, the two just don’t go together: R-squared is invalid for nonlinear regression.

Some statistical software calculates R-squared for these models even though it is statistically incorrect, so it’s important to understand why you should not trust R-squared for models that are not linear. In this post, I highlight research showing how assessing nonlinear regression with R-squared causes serious problems and leads you astray.

## Why Is R-squared Valid for Only Linear Regression?

In my post about how to interpret R-squared, I explain that R-squared is the following proportion:

R-squared = Explained variance / Total variance

Furthermore, the variances always add up in a particular way:

Explained variance + Error variance = Total variance.

This arrangement produces an R-squared that is always between 0% and 100%.

That all makes sense, right? For linear models, this works out as you expect.

However, this math works out correctly only for linear regression models. In nonlinear regression, the underlying assumption breaks down: explained variance and error variance DO NOT add up to the total variance! The result is that R-squared isn’t necessarily between 0% and 100%. There are other problems with it as well.
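You can verify this breakdown yourself. The sketch below (pure NumPy, with made-up data) checks the variance decomposition for a linear fit, where the identity holds, and for a nonlinear exponential curve fit by least squares on the log scale, where it generally does not; the data and models are hypothetical illustrations, not from the study discussed here:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 5, 30)
y = np.exp(0.3 * x + rng.normal(0, 0.2, x.size))  # made-up, exponential-ish data

def decomposition(y, y_hat):
    """Return (explained, error, total) sums of squares."""
    ss_explained = np.sum((y_hat - y.mean()) ** 2)
    ss_error = np.sum((y - y_hat) ** 2)
    ss_total = np.sum((y - y.mean()) ** 2)
    return ss_explained, ss_error, ss_total

# Linear model fit by ordinary least squares: the identity holds.
b1, b0 = np.polyfit(x, y, 1)
exp_lin, err_lin, tot_lin = decomposition(y, b0 + b1 * x)
print(exp_lin + err_lin - tot_lin)  # ~0, up to rounding error

# Nonlinear curve y = exp(a + b*x), fit by OLS on log(y):
# the identity generally fails in the original units of y.
b1n, b0n = np.polyfit(x, np.log(y), 1)
exp_nl, err_nl, tot_nl = decomposition(y, np.exp(b0n + b1n * x))
print(exp_nl + err_nl - tot_nl)  # generally nonzero
```

The discrepancy in the nonlinear case is exactly why an R-squared computed from these sums of squares no longer behaves like a proportion.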

This problem completely undermines R-squared in the context of nonlinear regression.

Keep in mind that I’m referring specifically to nonlinear models. R-squared *is* valid for linear models that use polynomials to model curvature. If you’re not clear about the difference between these two types of models, read my post to learn how to distinguish between linear and nonlinear regression.

## Specific Problems of Using R-squared with Nonlinear Regression

The general mathematical framework for R-squared doesn’t work out correctly when the regression model is not linear. Despite this issue, most statistical software still calculates R-squared for nonlinear models. This questionable practice can cause problems for you. Let’s see the ramifications!

Spiess and Neumeyer* performed a simulation study to look at the effect of using R-squared to assess the goodness-of-fit for models that are not linear. Their study ran thousands of simulations and found that R-squared leads you to draw false conclusions about which nonlinear models are best.

If you use R-squared for nonlinear models, their study indicates you will experience the following problems:

- R-squared is consistently high for both excellent and appalling models.
- R-squared does not always increase as models improve.
- If you use R-squared to pick the best model, it identifies the correct model only 28-43% of the time.

Taken together, these findings show that R-squared can’t differentiate between good and bad nonlinear models. It just doesn’t work. The authors go on to criticize the continuing practice of statistical software calculating R-squared for nonlinear regression:

> In the field of biochemical and pharmacological literature there is a reasonably high occurrence in the use of R² as the basis of arguing against or in favor of a certain model. . . . Additionally, almost all of the commercially available statistical software packages calculate R² values for nonlinear fits, which is bound to unintentionally corroborate its frequent use. . . . As a result from this work, we would like to advocate that R² should not be reported or demanded in pharmacological and biochemical literature when discussing nonlinear data analysis.

If your statistical software calculates R-squared for nonlinear models, don’t trust it!

There are other goodness-of-fit measures you can use for regression models that are not linear. For instance, you can use the standard error of the regression (S). For this statistic, smaller values represent better models.
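As a rough sketch of how S can be computed for a nonlinear fit, here is an example using SciPy’s `curve_fit`; the exponential model and the noisy data are made up purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """Hypothetical nonlinear model: exponential growth."""
    return a * np.exp(b * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 40)
y = model(x, 2.0, 0.4) + rng.normal(0, 0.5, x.size)  # made-up noisy data

# Fit the nonlinear model and compute the residuals.
params, _ = curve_fit(model, x, y, p0=(1.0, 0.1))
residuals = y - model(x, *params)

# S = sqrt(SSE / (n - p)), where p is the number of fitted parameters.
n, p = x.size, len(params)
s = np.sqrt(np.sum(residuals ** 2) / (n - p))
print(f"Standard error of the regression S = {s:.3f}")
```

Smaller S means the observations fall closer to the fitted curve, measured in the units of the response variable, so S can be compared across candidate nonlinear models fit to the same data.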

If you’re learning regression, check out my Regression Tutorial!

### Reference

Spiess, Andrej-Nikolai, and Natalie Neumeyer. An evaluation of R² as an inadequate measure for nonlinear models in pharmacological and biochemical research: a Monte Carlo approach. *BMC Pharmacology.* 2010; 10: 6.

Dennis says

Dear Jim,

thank you a lot for your effort to make this huge universe of statistics accessible and understandable to all! So far it has helped me a lot through my studies. Now I am finally writing my bachelor thesis, and I was wondering if you could help me find a reference which states that using R-squared in linear regression modeling with curve fitting is valid and robust? From this post I understand that it’s allowed. Nonetheless, I need a reference to cite for my thesis and couldn’t find anything as helpful as your post on this topic.

Jim Frost says

Hi Dennis,

I don’t have a specific journal article as a reference. However, any textbook that covers regression will support using R-squared for linear models. It’s a completely standard and acceptable goodness-of-fit measure for linear models.

Darshan says

Thanks for your response, Jim! I started using the standard error of the regression (S) for nonlinear fits. I was looking for other research articles that might have used it. I could only find one article: https://www.ncbi.nlm.nih.gov/pubmed/8289285 . Since S is essentially RMSE (root mean squared error), this paper is using S for comparing 3D protein models.

Are you aware of any other research paper or statistics textbook that has used or explains the use of S for nonlinear fits, especially for biological data?

Darshan says

Hi Jim,

Thanks for your wonderful post! I was aware that R² cannot be used for reporting goodness of fit for nonlinear regression, but I was unable to find a better statistic to report in research articles until I read this article.

Is it a good idea to report the ‘standard error of the regression, S’ alone in research articles to show the goodness of fit?

I saw your other post (http://statisticsbyjim.com/regression/standard-error-regression-vs-r-squared/) where you explain why to use S over R² and recommend reporting both for the goodness of fit of the model. For nonlinear fits, would it be a good option to report R² and S together?

Jim Frost says

Hi Darshan,

Thanks so much for your kind words! I really appreciate them!

For linear regression, I don’t think reporting S alone is a good idea because most people expect R-squared. I would particularly include both R-squared and S if you’re primarily using your model to make predictions. If you’re not making predictions, then reporting S isn’t as important, although I still like it!

I definitely would not report R-sq for nonlinear regression. R-sq is misleading for nonlinear models and can lead to incorrect conclusions (as the other blog post talks about). If you’re trying to publish in a journal and they require R-sq, then I guess go ahead and report it, but that would be a misguided policy on their part. In general, don’t report R-sq for nonlinear regression. You might need to explain why because it appears that many aren’t aware of the problems!