In this tutorial we will use multiple linear regression to predict health insurance costs for individuals based on multiple factors (age, gender, BMI, number of children, smoking status, and geographic region).
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# read the csv file
insurance_df = pd.read_csv('insurance.csv')
# check if there are any null values
insurance_df.isnull().sum()
# Check the dataframe info
insurance_df.info()
We can group our data with the .groupby() function
# Grouping by region to see any relationship between region and charges
# Seems like the southeast region has the highest charges and body mass index
df_region = insurance_df.groupby(by='region').mean()
df_region
We can check the values in a column with the .unique() method, which returns an array of the unique values.
# Check unique values in the 'sex' column
insurance_df['sex'].unique()
We must convert all string-based data to numerical data, or we will encounter an error later when we convert everything to float32 format.
# convert categorical variable to numerical
insurance_df['sex'] = insurance_df['sex'].apply(lambda x: 0 if x == 'female' else 1)
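The 'smoker' column is also string-based, and the histograms and correlation matrix later treat it as numeric, so presumably it is converted the same way. A minimal sketch, assuming the column holds 'yes'/'no' values:

# convert 'smoker' to numerical (assumes the values are 'yes'/'no')
insurance_df['smoker'] = insurance_df['smoker'].apply(lambda x: 1 if x == 'yes' else 0)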
Next we convert the 'region' column into a matrix of numerical indicator (dummy) variables.
region_dummies = pd.get_dummies(insurance_df['region'], drop_first = True)
Now that we have turned the region string column into a matrix of indicator columns, we need to concatenate that matrix onto the end of the dataframe and then remove the original region column.
insurance_df = pd.concat([insurance_df, region_dummies], axis = 1)
Rows and columns can be deleted with the .drop() method.
# Let's drop the original 'region' column
insurance_df.drop(['region'], axis = 1, inplace = True)
Now that we have converted all of our columns to numerical data, let’s create a series of histograms for each parameter.
insurance_df[['age', 'sex', 'bmi', 'children', 'smoker', 'charges']].hist(bins = 30, figsize = (20,20), color = 'r')
Our data is now in a format that lets us use Seaborn to plot a regression line directly, without any machine learning. Let’s go ahead and do that.
Here is the linear regression for Age
sns.regplot(x = 'age', y = 'charges', data = insurance_df)
plt.show()
Here is the linear regression for BMI
sns.regplot(x = 'bmi', y = 'charges', data = insurance_df)
plt.show()
We can create a correlation matrix and then convert that to a heatmap to read it more easily
corr = insurance_df.corr()
corr
# resize heatmap so it is legible
plt.figure(figsize = (10,10))
sns.heatmap(corr, annot = True)
And from this correlation heatmap we can see that the factor most strongly correlated with insurance cost is whether or not a person is a smoker.
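If you would rather read the numbers than the heatmap, one quick way (a minimal sketch) is to sort the correlations against the charges column:

# correlations of every feature with charges, strongest first
corr['charges'].sort_values(ascending=False)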
Now let’s shape our data into training and testing data sets. Let’s start by separating our independent variables from our dependent variables.
X = insurance_df.drop(columns = ['charges'])
y = insurance_df['charges']
And then we can review our new variables
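For example, a quick way to review them (a minimal sketch):

# shapes and a preview of the independent variables and the target
print(X.shape, y.shape)
X.head()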
The documentation states that we must convert all numbers to float32 format for regression analysis, so let’s do that.
X = np.array(X).astype('float32')
y = np.array(y).astype('float32')
After converting to float32, let’s reshape y so that it is a single column (a 2-D array), which the scaler below expects.
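A minimal sketch of that reshape (y currently has shape (n,), and StandardScaler works on 2-D arrays):

# reshape y from (n,) to (n, 1) so it has a single column
y = y.reshape(-1, 1)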
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
random_state here controls the shuffling applied to the data before applying the split. Pass an int for reproducible output across multiple function calls.
In short, the reason we must scale our data is so that all of our features use roughly the same scale. If a feature has a variance that is orders of magnitude larger than the others, it might dominate the objective function and make the estimator unable to learn from the other features correctly.
This is not necessary for single feature linear regression because there is only one feature. However this IS required for multiple linear regression.
Data that has been scaled like this is often referred to as normalized (or standardized) data.
# scaling the data before feeding the model
from sklearn.preprocessing import StandardScaler, MinMaxScaler

scaler_x = StandardScaler()
X_train = scaler_x.fit_transform(X_train)
X_test = scaler_x.transform(X_test)

scaler_y = StandardScaler()
y_train = scaler_y.fit_transform(y_train)
y_test = scaler_y.transform(y_test)
Note that we are not using SageMaker Algorithms yet. This is a standard SK-Learn model.
# using linear regression model
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

regression_model_sklearn = LinearRegression()
regression_model_sklearn.fit(X_train, y_train)
With the call to .fit() we have fit the regression line to the training data.
regression_model_sklearn now contains the trained parameters
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
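If you want to look at those trained parameters directly, scikit-learn exposes them as attributes on the fitted model:

# one learned coefficient per feature, plus the intercept
print(regression_model_sklearn.coef_)
print(regression_model_sklearn.intercept_)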
Now we can score the model. For regression, .score() returns R² (the coefficient of determination) on the test data.
regression_model_sklearn_accuracy = regression_model_sklearn.score(X_test, y_test)
regression_model_sklearn_accuracy
So we have achieved an R² score of about 0.78 on the test data.
Now we can feed in our X_test data that we set aside earlier to get an array of y predictions.
y_predict = regression_model_sklearn.predict(X_test)
y_predict
We get an array back; however, all of these numbers look a little small for insurance costs, don’t they? Remember that earlier we normalized this data by scaling it down.
When we use our scaler’s inverse_transform method, we get numbers back in the range we would expect.
y_predict_orig = scaler_y.inverse_transform(y_predict)
y_predict_orig
We can now calculate some of the metrics that we covered in Ncoughlin: Regression Metrics and KPI’s
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# bring the test targets back to the original dollar scale so they are comparable to y_predict_orig
y_test_orig = scaler_y.inverse_transform(y_test)

# n = number of observations in the test set, k = number of independent variables
n = X_test.shape[0]
k = X_test.shape[1]

RMSE = float(format(np.sqrt(mean_squared_error(y_test_orig, y_predict_orig)), '.3f'))
MSE = mean_squared_error(y_test_orig, y_predict_orig)
MAE = mean_absolute_error(y_test_orig, y_predict_orig)
r2 = r2_score(y_test_orig, y_predict_orig)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
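And a quick way to print the results (a minimal sketch):

# display the regression metrics computed above
print('RMSE =', RMSE)
print('MSE =', MSE)
print('MAE =', MAE)
print('R2 =', r2)
print('Adjusted R2 =', adj_r2)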