Two variables are perfectly collinear when movement in one can be entirely explained by movement in the other, and vice versa, even though the absolute change in each variable may differ.
Examples of perfectly positively collinear variables include age and experience, or sales and taxes paid, whereas an example of perfectly negatively collinear variables is games won versus games lost.
Pearson’s correlation coefficient ranges from -1 to +1. A coefficient close to +1 indicates a strong positive relationship, while one close to -1 indicates a strong negative relationship; a coefficient of zero indicates no linear relationship between the two variables. Two variables are perfectly collinear when their Pearson’s correlation coefficient is exactly -1 (perfectly negatively correlated) or exactly +1 (perfectly positively correlated).
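The two examples above can be checked numerically. The sketch below uses hypothetical data (the specific ages, the 21-year offset, and the 30-game season are illustrative assumptions, not from the text) to show that such pairs produce Pearson coefficients of exactly +1 and -1:

```python
import numpy as np

# Hypothetical data: experience is assumed to be age minus a constant,
# so the two variables move together perfectly (r = +1).
age = np.array([25.0, 30.0, 35.0, 40.0, 45.0])
experience = age - 21.0

# Games won versus games lost in an assumed 30-game season (r = -1).
games_won = np.array([10.0, 12.0, 15.0, 18.0, 20.0])
games_lost = 30.0 - games_won

r_pos = np.corrcoef(age, experience)[0, 1]
r_neg = np.corrcoef(games_won, games_lost)[0, 1]
print(round(r_pos, 6))   # 1.0
print(round(r_neg, 6))   # -1.0
```

Any exact linear transformation `b = m * a + c` yields a coefficient of +1 when `m > 0` and -1 when `m < 0`, which is why the size of the constant offset is irrelevant.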
The problem with estimating an Ordinary Least Squares (OLS) linear regression when two or more explanatory variables are perfectly collinear is that statistical software cannot distinguish the separate effect of each variable, so the coefficient estimates are not uniquely defined. A model containing two perfectly collinear variables is said to exhibit perfect collinearity; when the linear relationship involves more than two variables, it is referred to as multicollinearity. The remedy is therefore to remove one or more of the offending variables to maintain the integrity of the model.
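A minimal sketch of why the software fails, using simulated data (the sample size, coefficients, and the age/experience relationship are illustrative assumptions): with two perfectly collinear regressors the design matrix loses a column of rank, so the OLS normal equations have no unique solution, and dropping one of the pair restores full rank:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
age = rng.uniform(22, 60, n)
experience = age - 21.0                      # perfectly collinear with age
y = 3.0 + 0.5 * age + rng.normal(0, 1, n)    # simulated outcome

# Design matrix: intercept plus both collinear regressors (3 columns).
X = np.column_stack([np.ones(n), age, experience])

# Rank is 2, not 3: experience = age - 21*intercept, so one column is
# a linear combination of the others and OLS cannot separate them.
print(np.linalg.matrix_rank(X))       # 2

# Removing one of the collinear variables restores full column rank,
# and the OLS estimates become uniquely defined.
X_fixed = np.column_stack([np.ones(n), age])
print(np.linalg.matrix_rank(X_fixed)) # 2 (full rank for 2 columns)
beta, *_ = np.linalg.lstsq(X_fixed, y, rcond=None)
```

In practice, regression packages either refuse to run, report wildly unstable coefficients, or silently drop one of the collinear columns for you; removing it yourself keeps the model interpretable.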