Interpreting Regression Output
Check assumptions
    To test the normality of the residuals
    To test for autocorrelation of the residuals
    To test the linearity of the model
    Homoskedasticity
    Multicollinearity
Estimation of the model
    To estimate a predetermined model
    To select a model for estimation
Check for outliers, leverage points, validity
    Outliers
    Leverage points
    Influential observations
    To validate the model
All-possible-subsets regression does exactly what the name implies: it regresses Y on all possible subsets of the X variables. If there are h potential regressors, there will be 2^{h} regressions to perform. This is not much of a problem for modern computers. However, it will give a lot of output to sort through.
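As an illustration, the enumeration can be sketched in Python on simulated data (the variable names and the use of plain R² as the ranking criterion are illustrative, not JMP's internals):

```python
# All-possible-subsets sketch: regress y on every subset of h regressors.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n, h = 50, 3                              # 3 regressors -> 2^3 = 8 subsets
X = rng.normal(size=(n, h))
y = 2 + 1.5 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

results = {}
for k in range(h + 1):
    for subset in combinations(range(h), k):
        # Design matrix: intercept column plus the chosen regressors.
        A = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        results[subset] = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

print(len(results))                       # 2^h = 8 regressions
best = max(results, key=results.get)
print(best)    # (0, 1, 2): plain R-squared always favors the full model
```

Because R² never decreases when a regressor is added, real all-subsets procedures rank the candidate models with criteria such as adjusted R² rather than plain R²; this is part of the sorting problem the text mentions.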
The other three methods use shortcuts to avoid having to regress all possible subsets. They usually work well, but they can miss the best model. When deciding whether to add a regressor to the model or to delete one from it, these methods judge the regressor's contribution to the model as a whole using criteria such as the partial t statistic, the partial F statistic, or the significance level (a).
Forward addition starts with only the intercept and then performs h regressions with the intercept and each regressor one at a time. The regressor that contributes the most to the explanation of Y is added to the model. The next step is to perform h-1 regressions with the intercept, the first variable, and each of the h-1 remaining regressors to find the second most important variable to add to the model. The process is repeated until none of the remaining regressors has a significant contribution to the model, given the regressors that are already in the model.
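The steps above can be sketched as a loop (a minimal illustration on simulated data; the F-to-enter threshold and variable names are assumptions, not JMP's defaults):

```python
# Forward addition sketch: start with the intercept only, and at each
# step add the regressor with the largest partial F statistic, stopping
# when no candidate passes the (assumed) F-to-enter threshold.
import numpy as np

def sse(A, y):
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return r @ r

rng = np.random.default_rng(1)
n, h = 60, 4
X = rng.normal(size=(n, h))
y = 1 + 2.0 * X[:, 1] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=n)

selected, remaining = [], list(range(h))
F_in = 4.0                        # illustrative F-to-enter threshold
while remaining:
    A_cur = np.column_stack([np.ones(n)] + [X[:, j] for j in selected])
    sse_cur = sse(A_cur, y)
    # Try each remaining regressor and record its partial F statistic.
    best_j, best_F = None, 0.0
    for j in remaining:
        A_try = np.column_stack([A_cur, X[:, j]])
        sse_try = sse(A_try, y)
        F = (sse_cur - sse_try) / (sse_try / (n - A_try.shape[1]))
        if F > best_F:
            best_j, best_F = j, F
    if best_F < F_in:
        break                     # no remaining regressor contributes enough
    selected.append(best_j)
    remaining.remove(best_j)

print(selected)   # X1 (the strongest regressor) enters first; X3 also enters
```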
Stepwise regression is similar to forward addition except that after each variable has been added to the model the t statistics of the regressors in the model are examined to see whether any of them should now be dropped from the model. The criterion for adding a variable to the model should be more stringent than the criterion for keeping a variable in the model. That is, t_{in} > t_{out}, or F_{in} > F_{out}, or a_{in} < a_{out}. Otherwise, the process could get stuck in a loop of adding a variable and then removing it, only to add it in again.
Stepwise regression is performed by clicking on Analyze on the Top Menu, then clicking on Fit Model. After selecting a Y variable (highlight the variable on the left and click on the Y button) and a group of X variables (highlight the Xs on the left and click on the Add button), click on the triangle in the upper-right hand corner of the window (in the box labeled "Personality:"). From the options that appear, select Stepwise. The Fit Stepwise window will appear. Click on the Go button, then on the Make Model button. (If you want to see the individual steps in the regressor selection process, you can click on the Step button repeatedly instead of using the Go button.) A new Stepped Model window will appear that is similar to the Fit Model window. Click on Run Model in this window. Your regression results will appear after about a second.
Test normality of the residuals
How to do it:
Save the residuals. From the Fit Least Squares screen, click on the red triangle beside Response, then click on Save Columns, then Residuals. Return to your data table; you should see a new column of residuals in it.
From the Top Menu Bar, click on Analyze, then Distribution, then Fit Distribution. The Fit Distribution dialog box will appear. Choose the column of residuals as Y and click OK. The Distribution report window will appear. Click on the second pop-up menu (red triangle), then click on Fit Distribution, then highlight Normal. JMP will add a section to the Distribution window that provides information about the fitted normal distribution. Click on the pop-up menu on the Fitted Normal bar. The W test statistic for the Shapiro-Wilk test and its p value will appear in the Fitted Normal section. (If the sample size is greater than 2000, the Kolmogorov-Smirnov-Lilliefors statistic will appear instead of the Shapiro-Wilk W.) The null hypothesis of this test is that the data do come from a normal distribution. Small p values indicate that the hypothesis of normality of the residuals should be rejected.
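Outside JMP, the same Shapiro-Wilk test can be run with SciPy (the residuals here are simulated for illustration):

```python
# Shapiro-Wilk normality test on a column of residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
residuals = rng.normal(size=100)   # stand-in for the saved residual column

W, p = stats.shapiro(residuals)
# The null hypothesis is that the residuals come from a normal
# distribution; a small p value would reject normality.
print(W, p)
```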
What happens if the residuals are not normal:
The estimates of the regression coefficients are still unbiased, and they still have the smallest variances among linear unbiased estimators. However, we can no longer use the Z, t, and F distributions to test the coefficients or the model as a whole.
What to do about nonnormal residuals:
Sometimes a transformation of the Y values will create normally distributed residuals. You might try replacing Y with Y raised to some power (positive or negative, less than one or greater than one), with e^{Y}, or with ln Y. Transforming Y may create other problems, such as nonlinearity or heteroskedasticity.
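A sketch of building a few candidate transformations of Y (the values are made up; which transformation helps, if any, depends on your data):

```python
# Candidate transformations of Y; each transformed column would replace
# Y in the regression, after which the residuals are re-tested.
import numpy as np

y = np.array([0.8, 1.6, 3.1, 6.4, 12.7, 25.3])   # made-up right-skewed Y

candidates = {
    "Y^0.5": np.sqrt(y),   # powers less than one compress large values
    "ln Y":  np.log(y),    # stronger compression of large values
    "Y^2":   y ** 2,       # powers greater than one stretch large values
    "1/Y":   1.0 / y,      # a negative power reverses the ordering
    "e^Y":   np.exp(y),
}
for name, t_y in candidates.items():
    print(name, np.round(t_y, 2))
```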
To check for autocorrelation of the residuals
Autocorrelation of the residuals occurs when the residuals are correlated with lagged values of themselves; that is, when e_{t} tends to be correlated with e_{t-s}. The Durbin-Watson statistic tests for correlation between e_{t} and e_{t-1}, which is called serial correlation.
The Durbin-Watson statistic will
be near 2.0 if there is no autocorrelation.
If the statistic is near 0.0, there is evidence of positive autocorrelation (positive residuals tend to be followed by positive residuals, and negative residuals tend to be followed by negative residuals).
On the other hand, if the statistic
is near 4, there is evidence of negative autocorrelation (positive residuals
tend to be followed by negative residuals, and vice versa).
Note that the Durbin-Watson statistic
should not be used when one of the regressors is a lagged value of the
regressand.
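The statistic itself is simple to compute from the residuals, as this sketch with simulated series shows:

```python
# Durbin-Watson: DW = sum (e_t - e_{t-1})^2 / sum e_t^2, which ranges
# from 0 to 4 and is near 2 when there is no serial correlation.
import numpy as np

def durbin_watson(e):
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(3)
white = rng.normal(size=500)         # uncorrelated residuals

# Positively autocorrelated residuals: an AR(1) series with rho = 0.9.
ar = np.zeros(500)
for t in range(1, 500):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()

dw_white = durbin_watson(white)      # close to 2
dw_ar = durbin_watson(ar)            # well below 2
print(round(dw_white, 2), round(dw_ar, 2))
```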
How to do it:
In the Fit Least Squares report
window, click the pop-up menu (red triangle) beside the Response, then
click on Row Diagnostics, then click Durbin Watson Test to place a check
beside it. The Durbin-Watson statistic will appear near the bottom
of the Fit Least Squares window. To get the p value for the
DW statistic, click on the pop-up menu on the Durbin-Watson bar, and then
click Significance P Value. The p value is for the DW statistic
under the null hypothesis that there is no autocorrelation among the residuals.
This p value is the probability of finding a smaller DW statistic in a new random sample if there is no autocorrelation. When DW = 0.00, 2.00, and 4.00, the p values will be 0.0, 0.5, and 1.0, respectively. If we have no prior reason to believe that the autocorrelation should be positive or negative, then we should use a two-tailed rejection region here. Using a = .05, we would reject the null hypothesis of no autocorrelation whenever the p value < 0.025 or the p value > 0.975. Using a = .05 with a left-tailed test for positive autocorrelation, we would reject the null hypothesis whenever the p value < 0.05. Using a = .05 with a right-tailed test for negative autocorrelation, we would reject the null hypothesis whenever the p value > 0.95.
What happens if there is autocorrelation
among the residuals:
Essentially, it is as if there
were fewer observations in the sample than there really are. Because
the data are not independent of each other, a sample with 200 autocorrelated
observations does not have as much information in it as a sample of 200
uncorrelated observations.
The regression coefficients are unbiased, but the estimates of the variances will be biased. When there is positive autocorrelation, s² will be too small (systematically smaller than the true variance), so it will be too easy to reject the null hypothesis. A researcher who thinks he is using a = 5% could actually be using a = 15%.
What to do about autocorrelated
residuals:
One approach is first differencing: try replacing Y with ΔY = Y_{t} - Y_{t-1} and each X with ΔX = X_{t} - X_{t-1}. In this form b_{0} should be close to zero. Another approach is two-stage least squares (2SLS).
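First differencing itself is a one-line operation; a sketch with made-up series (you would then refit the regression on the differenced columns):

```python
# Replace each series with its period-to-period changes before refitting.
import numpy as np

y = np.array([10.0, 12.0, 15.0, 14.0, 18.0])
x = np.array([1.0, 2.0, 4.0, 3.5, 5.0])

dy = np.diff(y)   # Y_t - Y_{t-1}
dx = np.diff(x)   # X_t - X_{t-1}
print(dy)         # [ 2.  3. -1.  4.]
print(dx)
```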
Check
the linearity of the model
How to check linearity:
In the Fit Least Squares report
window, look at the graph labeled Residual by Predicted Plot.
If the points in the graph trace
out a U shaped pattern or an inverted U, there are nonlinear effects that
you have not incorporated into your model.
What to do about nonlinearity:
Try different transformations of
Y or the Xs.
Y = β_{0} + β_{1}•X_{1} + β_{2}•X_{2} + β_{3}•X_{3} + . . . + e
The estimates of the β_{i} (denoted by b_{i}) are given in the Parameter Estimates section of the Fit Least Squares report window. "Term" indicates the coefficient, with "Intercept" indicating b_{0}. "Estimate" gives the value of b_{i}. "Std Error" gives the standard error (standard deviation) of the estimate b_{i}. "t Ratio" gives the test statistic t for a test of the null hypothesis that β_{i} = 0. If this hypothesis is true, then X_{i} has no effect on Y and can be deleted from the regression model. "Prob>|t|" gives the p value for a two-tailed test of the null hypothesis that β_{i} = 0. Small values in this column indicate that the corresponding X variable really does have an effect on Y.
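For illustration, here is how the "t Ratio" and "Prob>|t|" columns follow from "Estimate" and "Std Error" (all of the numbers below are hypothetical):

```python
# Computing "t Ratio" and "Prob>|t|" from "Estimate" and "Std Error".
from scipy import stats

b_i = 1.84        # hypothetical "Estimate" for some regressor X_i
se_i = 0.62       # hypothetical "Std Error"
df = 26           # hypothetical error degrees of freedom, n - k - 1

t_ratio = b_i / se_i                        # "t Ratio"
p_value = 2 * stats.t.sf(abs(t_ratio), df)  # "Prob>|t|", two-tailed

print(round(t_ratio, 2))   # 2.97
print(round(p_value, 4))
```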
Whereas the t statistics in the Parameter Estimates section test the importance of individual X variables, the Analysis of Variance section of the Fit Least Squares report window provides a test of the overall equation. The F statistic reported in the last column, and its p value below it, test the null hypothesis that all β_{i}, except for the intercept β_{0}, are equal to 0. To reject the null hypothesis, the F ratio must be significantly larger than 1.0. Sometimes multicollinearity among the Xs can result in small t ratios while the F is large. This indicates that some combination of the Xs can explain Y, but it is impossible to tell exactly which Xs are responsible for the correlation.
The fundamental variance in the equation is estimated by the Mean Squared Error (MSE), which is found under the "Mean Square" column in the "Error" row in the Analysis of Variance section of the Fit Least Squares report window. The MSE estimates the variance of the error term, e.
The "RSquare" and "RSquare Adj" are the coefficient of determination and the adjusted coefficient of determination. They estimate the fraction of the overall variance of Y (as measured by the C. Total, or Corrected Total, Sum of Squares in the Analysis of Variance section) that can be explained by changes in the Xs (as measured by the Model Sum of Squares in the Analysis of Variance section). A low R² indicates that there are some important variables besides the ones you included in your model that contribute to the variation in Y. (It may be impossible to measure these variables.)
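Both statistics follow directly from the Analysis of Variance sums of squares (the numbers below are hypothetical):

```python
# R-squared and adjusted R-squared from the ANOVA sums of squares.
ss_model = 420.0    # hypothetical Model Sum of Squares
ss_error = 180.0    # hypothetical Error Sum of Squares
n, k = 30, 3        # observations and regressors (excluding intercept)

ss_total = ss_model + ss_error          # C. Total Sum of Squares
r2 = ss_model / ss_total
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(round(r2, 3))       # 0.7
print(round(r2_adj, 3))   # 0.665
```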
Outliers: an outlier is a point that has an unusually large error, e. (e = Y - Y^, where Y^ is the predicted value of Y for the same observation. The ^ would ordinarily be written over the Y, and it would be read "Y-hat". Y^ = b_{0} + b_{1}•X_{1} + b_{2}•X_{2} + b_{3}•X_{3} and Y = b_{0} + b_{1}•X_{1} + b_{2}•X_{2} + b_{3}•X_{3} + e.) A good graph for detecting outliers is the Residual by Predicted Plot. (If this plot does not appear in your Fit Least Squares report window, click on the red pop-up menu by the Response bar, click on Row Diagnostics, then click on Plot Residual by Predicted.) Points that lie well above or below the rest of the data are outliers.
Leverage points: a leverage point is an observation that has an unusual value for one of its Xs. A good graph for detecting leverage points is the Leverage Plot near the top of the Fit Least Squares report window. There is a Leverage Plot for each regressor, X. The slope of the red line through each plot is the slope estimate, b. Points that are far to one side of the average X value are able to exert strong influence over b. If there is a single point at an unusual value of X, it will exert more influence over b than would, say, one of three points at that value of X.
Influential observations: an influential observation is any observation that, if omitted from the regression, would have a large effect on the parameter estimates of the model. Often, influential observations are outliers or leverage points. One commonly used measure of influence is the PRESS (prediction sum of squares) statistic:
How to calculate the PRESS:
In the Fit Least Squares report
window, click the pop-up menu (red triangle) beside the Response, then
click on Row Diagnostics, then click Press to place a check beside it.
The PRESS statistic will be reported immediately below the graph labeled
Residual by Predicted Plot.
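PRESS can also be computed by hand from a single fit: with the hat matrix H = A(AᵀA)⁻¹Aᵀ, the leave-one-out residual for observation i is e_i / (1 - h_ii), and PRESS is the sum of their squares. A sketch with simulated data:

```python
# PRESS via the hat-matrix shortcut (no refitting required).
import numpy as np

rng = np.random.default_rng(4)
n = 40
x = rng.normal(size=n)
y = 3 + 2 * x + rng.normal(scale=0.5, size=n)

A = np.column_stack([np.ones(n), x])              # intercept + one X
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
e = y - A @ coef                                  # ordinary residuals
h = np.einsum('ij,jk,ik->i', A, np.linalg.inv(A.T @ A), A)  # diag of H
press = np.sum((e / (1 - h)) ** 2)

sse = e @ e
print(press > sse)   # True: PRESS always exceeds the ordinary SSE
```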
What does lack of validity do?
Lack of validity can result from outliers, influential observations, or leverage points. These are occasional data points that have a significant influence on the results. Lack of validity means that the results you found may be due to these unusual data points and may not hold in other samples or in the population as a whole.
What to do about lack of validity?
If the outliers, influential observations, or leverage points can be identified, you might be able to detect something unusual about those observations. This could lead you to include a new variable to accommodate these data points; sometimes they reveal a new variable that is relevant to your model. Sometimes they may simply reveal that one of the researchers observed data differently than the others did, or that your observations on Friday afternoons were not as good as those on other days.
Proof that the variation in the square root of Y
is less than the variation in Y when Y > 1/4.
The key point is whether the slope of the square-root relationship
is greater than 1 or less than 1. If the slope is greater than 1,
the transformation increases the spread among the Y values, but if the
slope is less than 1, it decreases the spread. So we need to find
the values of Y for which the slope of Y^{0.5} is less than 1.
d(Y^{0.5})/dY = 0.5•Y^{-0.5}. We set this slope equal to 1 and solve for Y:
0.5•Y^{-0.5} = 1
Y^{-0.5} = 2
Y^{0.5} = 0.5
Y = 0.5² = 0.25
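A quick numeric check of this result (plain Python; the slope function just evaluates the derivative above):

```python
# The slope of sqrt(Y) is 0.5 / sqrt(Y), which drops below 1 exactly at
# Y = 0.25; above that point the transformation shrinks gaps between
# nearby Y values, and below it the gaps are stretched.
import math

def slope(y):
    return 0.5 / math.sqrt(y)

print(slope(0.25))            # exactly 1.0
print(slope(4.0) < 1.0)       # True: spread is compressed here
print(slope(0.01) > 1.0)      # True: below 1/4 the spread is stretched

# Gap between two nearby Y values before and after the transformation:
y1, y2 = 4.0, 4.1
print(math.sqrt(y2) - math.sqrt(y1) < y2 - y1)   # True
```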
See? There really is a reason to study calculus. Now go
impress your spouse with it.
created by James R. Frederick, March
28, 2001
copyright 2001 James R. Frederick