Unravelling Linearity: My Journey in Regression Modeling


Blessing Angus

Posted on April 14, 2024


Imagine a detective board filled with clues – features or independent variables – that might help solve a case (the dependent variable). Linearity states that the relationship between these clues and the outcome we're trying to predict should be linear, like a straight line on a graph. If this assumption isn't met, our model's predictions could be skewed. A curved line, for instance, might suggest a more complex relationship.
[Figure: graphs showing linear vs. non-linear relationships]
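To see the contrast concretely, here's a quick sketch on synthetic data that plots a linear trend next to a curved one:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 100)
y_linear = 2 * x + rng.normal(0, 2, 100)       # straight-line trend plus noise
y_curved = 0.5 * x**2 + rng.normal(0, 2, 100)  # quadratic trend plus noise

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(x, y_linear, alpha=0.6)
ax1.set_title("Linear relationship")
ax2.scatter(x, y_curved, alpha=0.6)
ax2.set_title("Non-linear relationship")
plt.tight_layout()
plt.show()
```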

My Case: Unveiling Linearity in My Data

Here's where my detective work began. I wanted to build a regression model that would predict the price of houses using a housing dataset. To ensure the validity of my model, I needed to verify the linearity assumption between the independent and dependent variables. This involved employing a combination of diagnostic techniques, including visual inspection (residual plot) and statistical tests (Rainbow Test). Let's dive into the code snippets and diagnostic plots to gain a deeper understanding of this validation process.

Imports
I imported the following libraries: pandas, matplotlib, scipy, statsmodels, and sklearn.
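In code, the imports look roughly like this (the exact submodules depend on what each step needs):

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.diagnostic import linear_rainbow
from sklearn.model_selection import train_test_split
```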

Cleaning and Feature Engineering
During the data cleaning process, I encountered a pivotal challenge with one of my predictors: the "location" column had over 800 unique values! One-hot-encoding this variable directly would create a nightmare: a massive explosion in the number of features, inviting the curse of dimensionality and crippling the model's ability to learn.

To tackle this, I implemented a group_location function that grouped infrequent locations into an "Other" category. This approach condensed the number of categorical values, mitigating the adverse effects of high dimensionality and facilitating smoother model training.
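In sketch form, the function looks something like this (assuming the cleaned DataFrame is called df; the frequency threshold of 10 is just for illustration):

```python
def group_location(df, column="location", min_count=10):
    """Lump locations that appear fewer than `min_count` times into 'Other'."""
    counts = df[column].value_counts()
    rare = set(counts[counts < min_count].index)
    df[column] = df[column].where(~df[column].isin(rare), "Other")
    return df

# Collapse the 800+ unique locations into a manageable set of categories
df = group_location(df)
```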

Fitting The Model - An OLS Model
After defining the dependent and independent variables and adding an intercept term, I instantiated and fitted the model. I then split the data for evaluation, generated predictions, and calculated the residuals.
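In outline, the workflow looks something like this (column names such as "price" are placeholders for the actual dataset, and the split happens before fitting here, which is the conventional order):

```python
# One-hot-encode the categorical predictors (including the grouped location)
# Assumed target column name: "price"
X = pd.get_dummies(df.drop(columns=["price"]), drop_first=True).astype(float)
y = df["price"]

# Add the intercept term
X = sm.add_constant(X)

# Hold out a test set for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Instantiate and fit the OLS model
model = sm.OLS(y_train, X_train).fit()

# Predictions and residuals on the held-out data
y_pred = model.predict(X_test)
residuals = y_test - y_pred
```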

Residual Plot
The residual plot shows a slight curvature, with the residuals becoming increasingly positive or negative at higher fitted values. The curvature hints at some remaining non-linearity, while the growing spread suggests a potential violation of the homoscedasticity assumption: the variance of the errors may not be constant across the range of predicted prices.
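A residual plot of this kind takes only a few lines of matplotlib:

```python
plt.scatter(y_pred, residuals, alpha=0.5)
plt.axhline(y=0, color="red", linestyle="--")
plt.xlabel("Fitted values (predicted price)")
plt.ylabel("Residuals")
plt.title("Residuals vs. Fitted Values")
plt.show()
```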

Rainbow Test
The Rainbow test checks whether a linear model fitted on a subsample of the data performs significantly better than one fitted on the full sample. A high p-value (typically > 0.05) suggests there is no evidence against linearity, meaning the linear model is an appropriate fit for the data.

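Running the test is a single call in statsmodels, which compares the full-sample fit against a fit on a subsample of the data:

```python
# Null hypothesis: the relationship is linear
stat, p_value = linear_rainbow(model)
print(f"Rainbow test statistic: {stat:.4f}")
print(f"p-value: {p_value:.4f}")
```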

While the test statistic (0.9555) is an F statistic close to 1, indicating that the subsample fit is no better than the full-sample fit, the result should be read with some caution given the pattern observed in the residuals.

The high p-value (0.9104) means we fail to reject the null hypothesis of linearity based on the test alone. However, the visual evidence from the residual plot suggests that further investigation is warranted.

Conclusion
While the Rainbow test didn't statistically reject linearity, the non-random pattern in the residual plot hints at a potential issue with homoscedasticity. This calls for further investigation!

To address this, I plan to explore data transformations, investigate alternative models, and perform additional diagnostics.
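For example, a log transform of the target is a common first step for right-skewed prices; a refit along these lines would be one way to start:

```python
import numpy as np

# Refit on log-transformed prices; a log target often stabilizes the error
# variance and straightens out mildly curved relationships
log_model = sm.OLS(np.log1p(y_train), X_train).fit()

# Predictions come back on the log scale, so invert the transform
y_pred_log = np.expm1(log_model.predict(X_test))
log_residuals = y_test - y_pred_log
```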

How do you approach validating assumptions in your regression models? Share your strategies, insights, or questions in the comments below – I'm curious to hear your perspective!
