Logistic Regression!!!!

Adityaberi (adityaberi8)

Posted on April 23, 2020

We're going to start with the definition of regression and then we'll parse out what logistic regression is. As a general term, regression is a statistical process for estimating the relationships among variables, and it is often used to make a prediction about some outcome. Linear regression is one type of regression, used when you have a continuous target variable: for instance, predicting the number of umbrellas sold from the amount of rainfall. The equation for linear regression is y = mx + b. So that's linear regression. But let's come back to logistic regression.

Logistic regression is a form of regression where the target variable, the thing you're trying to predict, is binary: just zero or one, true or false, or anything like that. Why do we need two different algorithms for regression? Why won't linear regression work for a binary target variable? Imagine a plot where we're using one x feature along the x axis to predict a binary y outcome. If we use linear regression for a binary target like this, there is no best-fit line that makes any sense. Linear regression will try to fit a line through all of the data, and it will end up predicting negative values and values over one, which is impossible for a binary outcome (see the sketch below).
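To see the problem concretely, here is a minimal sketch with made-up data (not the dataset used later in this post): fitting an ordinary least-squares line to a 0/1 target and asking it to predict outside the observed range gives values below 0 and above 1.

import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up example: one feature x and a binary outcome y
x = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

line = LinearRegression().fit(x, y)

# Predictions for x = -2 and x = 12 fall below 0 and above 1,
# which makes no sense for a binary outcome
print(line.predict(np.array([[-2], [12]])))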

Logistic regression is built on a logistic or sigmoid curve, which has the S shape you see below. Its output always stays between zero and one, which makes it a much better fit for a binary classification problem. So what does the equation look like for logistic regression? It takes the linear regression equation for a line, mx + b, and tucks it into a negative exponent of e. The full equation is 1 / (1 + e^-(mx + b)), and that is what creates the sigmoid S curve that makes logistic regression a good fit for binary classification problems.

[Figure: the logistic (sigmoid) curve]
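The curve is easy to reproduce in code. Here is a minimal sketch of that function in Python (the sigmoid helper and its defaults m = 1, b = 0 are just illustrative), showing that the output always lands strictly between 0 and 1:

import numpy as np

def sigmoid(x, m=1.0, b=0.0):
    # 1 / (1 + e^-(mx + b)): the logistic function built on top of the line mx + b
    return 1 / (1 + np.exp(-(m * x + b)))

print(sigmoid(np.array([-10, -1, 0, 1, 10])))
# close to 0 for large negative inputs, 0.5 at 0, close to 1 for large positive inputs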

So when to use Logistic Regression and when not to!!
WHEN TO USE
Binary target variable
Transparency is important, or you are interested in the significance of predictors
Fairly well-behaved data
Need a quick initial benchmark

WHEN NOT TO USE
Continuous target variable
Massive data
Performance is the only thing that matters

Let's build a Logistic Regression model that predicts whether a user will purchase the product or not.

Importing Libraries

import pandas as pd 
import numpy as np 
import matplotlib.pyplot as plt
# load the dataset
dataset = pd.read_csv('...\\User_Data.csv')


Now, to predict whether a user will purchase the product or not, we need to find the relationship between Age and Estimated Salary. User ID and Gender are not useful factors for this, so they are left out.

# input: Age and Estimated Salary
x = dataset.iloc[:, [2, 3]].values

# output: Purchased
y = dataset.iloc[:, 4].values

Splitting the dataset into train and test sets: 75% of the data is used to train the model and 25% is used to test its performance.

from sklearn.model_selection import train_test_split

xtrain, xtest, ytrain, ytest = train_test_split(
    x, y, test_size=0.25, random_state=0)

Now, it is very important to perform feature scaling here because the Age and Estimated Salary values lie in very different ranges. If we don't scale the features, the Estimated Salary values (which are much larger) will dominate the Age values, distorting the learned coefficients and slowing down the solver.

from sklearn.preprocessing import StandardScaler

sc_x = StandardScaler()
xtrain = sc_x.fit_transform(xtrain)   # fit the scaler on the training data only
xtest = sc_x.transform(xtest)         # apply the same scaling to the test data

print(xtrain[0:10, :])


Output :
[[ 0.58164944 -0.88670699]
 [-0.60673761  1.46173768]
 [-0.01254409 -0.5677824 ]
 [-0.60673761  1.89663484]
 [ 1.37390747 -1.40858358]
 [ 1.47293972  0.99784738]
 [ 0.08648817 -0.79972756]
 [-0.01254409 -0.24885782]
 [-0.21060859 -0.5677824 ]
 [-0.21060859 -0.19087153]]

Here you can see that the Age and Estimated Salary values have been scaled and are now on a comparable scale (roughly zero mean and unit variance). Hence, each feature will contribute more equally to the decision making, i.e., to finalizing the hypothesis.
Finally, we are training our Logistic Regression model.

from sklearn.linear_model import LogisticRegression 
classifier = LogisticRegression(random_state = 0) 
classifier.fit(xtrain, ytrain)
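Since the "when to use" list above mentioned transparency and the significance of predictors, it is worth noting that the fitted model is easy to inspect. A minimal sketch using the classifier trained above (the printed values will depend on your data):

print(classifier.coef_)        # weights learned for scaled Age and Estimated Salary
print(classifier.intercept_)   # the bias term of the decision boundary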

After training the model, it's time to use it to make predictions on the test data.

y_pred = classifier.predict(xtest)
Let's test the performance of our model with a confusion matrix.

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(ytest, y_pred)
print("Confusion Matrix : \n", cm)
Confusion Matrix : 
 [[65  3]
 [ 8 24]]

Out of 100 test samples:
TruePositive + TrueNegative = 24 + 65 = 89
FalsePositive + FalseNegative = 3 + 8 = 11
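As a quick sanity check, the accuracy reported in the next step can be derived by hand from these counts (scikit-learn lays out the binary confusion matrix as [[TN, FP], [FN, TP]]):

import numpy as np

cm = np.array([[65, 3],
               [8, 24]])          # [[TN, FP], [FN, TP]]
tn, fp, fn, tp = cm.ravel()

accuracy = (tp + tn) / cm.sum()   # (24 + 65) / 100
print(accuracy)                   # 0.89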
Performance measure - Accuracy

from sklearn.metrics import accuracy_score

print("Accuracy : ", accuracy_score(ytest, y_pred))

Output :
Accuracy :  0.89

I will soon be publishing this article on Geeksforgeeks
