February 20, 2023
In this tutorial, we'll learn about linear regression and how to implement it in Python. First, we'll explore a sample machine learning problem, and then we'll develop a model to make predictions. (This tutorial assumes some familiarity with Python syntax and data cleaning.)
The Problem
The dataset that we'll be examining is the Automobile Data Set from the UCI Machine Learning Repository. This dataset contains information on various car characteristics, including body style and engine type, among many others.
Imagine that we're taking on the role of a data analyst at an auto insurance company. We've been tasked with ranking cars in terms of their "riskiness," a measure of how likely a car is to get into an accident and therefore require the driver to use their insurance. Riskiness isn't something we know about a car just by looking at it, so we need to use other qualities that we can see and measure.
To solve our problem, we'll turn to a machine learning model that can convert our data into useful predictions. There are several machine learning models that we could use, but we'll turn our attention to linear regression.
The Linear Regression Model
Before we begin the analysis, we'll examine the linear regression model to understand how it can help solve our problem. A linear regression model with a single feature looks like the following:
$$Y = \beta_0 + \beta_1 X_1 + \epsilon$$
$Y$ represents the outcome that we want to predict. In our example, it's car riskiness. $X_1$ here is a "feature" or "predictor", which represents a car attribute that we want to use to predict the outcome. $X$ and $Y$ are things we observe and collect data on. Below, we show a visualization of the linear regression above:
$\beta_1$ represents the "slope", or how the outcome $Y$ changes when the feature $X$ changes. $\beta_0$ represents the "intercept", which would be the average value of the outcome when the feature is 0. $\epsilon$ represents the "error" left over that isn't explained by the feature $X$, visualized by the red lines. These values, $\beta_0$, $\beta_1$, and $\epsilon$, are called parameters, and we need to calculate them from the data.
We can also add more predictors to the model by including another parameter $\beta_2$ associated with the additional feature. For example, adding a second feature would result in a model that looks like this:
$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$$
We could calculate these parameters by hand, but it would be more efficient to use Python to create our linear regression model.
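To make that concrete, here's a minimal sketch of the "by hand" calculation for the single-feature case, using NumPy's least-squares routine on some made-up numbers (the x and y arrays below are purely illustrative and not from our dataset):
import numpy as np
# Toy data: a single feature x and an outcome y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
# Build a design matrix with a column of 1s for the intercept
design = np.column_stack([np.ones_like(x), x])
# Solve for beta_0 and beta_1 by ordinary least squares
(beta_0, beta_1), *_ = np.linalg.lstsq(design, y, rcond=None)
print(beta_0, beta_1)
Libraries like scikit-learn handle this calculation for us, which is the route we'll take below.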
Checking The Data
The first step in creating a machine learning model is to examine the data! We'll load in the pandas library so that we can read in the Automobile Data Set, which is stored as a .csv file.
import pandas as pd
vehicles = pd.read_csv("vehicles.csv")
print(vehicles.columns)
[1] Index(['symboling', 'normalized_losses', 'make', 'fuel_type', 'aspiration', 'num_of_doors', 'body_style', 'drive_wheels', 'engine_location', 'wheel_base', 'length', 'width', 'height', 'curb_weight', 'engine_type', 'num_of_cylinders', 'engine_size', 'fuel_system', 'bore', 'stroke', 'compression_ratio', 'horsepower', 'peak_rpm', 'city_mpg', 'highway_mpg', 'price'], dtype='object')
For this tutorial, we'll use the engine_size and horsepower columns as the features in our linear regression model. Our intuition here is that as engine size increases, the car becomes more powerful and capable of higher speeds. These higher speeds might lead to more accidents, which lead to higher "riskiness".
The column that captures this "riskiness" is the symboling column. The symboling column ranges from -3 to 3, where the higher the value, the riskier the car.
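Before we build anything, it's worth a quick look at how this outcome is distributed. Assuming the data has been loaded as above, one simple check is:
# Count how many cars fall into each symboling value, from -3 to 3
print(vehicles["symboling"].value_counts().sort_index())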
Realistically, the process of choosing features for a linear regression model is done more by trial and error. We've picked engine size based on an intuition, but it would be better to try to improve our predictions based on this initial model.
The Solution
We can quickly create linear regressions using the scikit-learn Python library. Linear regressions are contained in the LinearRegression class, so we'll import everything we need below:
from sklearn.linear_model import LinearRegression
model = LinearRegression()
We've imported the LinearRegression class and stored an instance of it in the model variable. The next step is to divide the data into a training set and a test set. We'll use the training set to estimate the parameters of the linear regression, and we'll use the test set to check how well the model predicts the riskiness of cars it hasn't seen before.
import math
# Calculate how many rows 80% of the data would be
nrows = math.floor(vehicles.shape[0] * 0.8)
# Divide the data using this calculation
training = vehicles.iloc[:nrows]
test = vehicles.iloc[nrows:]
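As an aside, scikit-learn also offers a helper for this kind of split. A minimal sketch, assuming we want the same 80/20 proportions (the variable names and the random_state value here are arbitrary choices for illustration):
from sklearn.model_selection import train_test_split
# Unlike the manual slice above, this shuffles the rows before splitting
training_alt, test_alt = train_test_split(vehicles, test_size=0.2, random_state=42)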
In the manual split above, we've dedicated 80% of the data to the training set and the remaining 20% to the test set. Now that we have a training set, we can give the features and outcome to our model object to estimate the parameters of the linear regression. This is also referred to as model fitting.
X = training[["engine_size", "horsepower"]]
Y = training["symboling"]
model.fit(X, Y)
The fit() method takes in the features and the outcome and uses them to estimate the model parameters. After these parameters are estimated, we have a usable model!
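If we want to see the estimated parameters themselves, scikit-learn stores them as attributes on the fitted model. The coefficients line up with the features in the order we passed them in:
# Slopes for engine_size and horsepower, followed by the intercept
print(model.coef_)
print(model.intercept_)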
Model Performance
We can try to predict the values of the symboling column in the test set and see how the model performs.
import numpy as np
predictions = model.predict(test[["engine_size", "horsepower"]])
mae = np.mean(np.abs(test["symboling"] - predictions))
After running the fit() method on the training data, we can call the predict() method on new data containing the same columns. Using these predictions, we can calculate the mean absolute error (MAE). The MAE describes how far the model predictions are from the actual symboling values on average.
print(mae)
[1] 1.7894647963388066
The model has an average test error of about 1.79. This is a solid start, but we might be able to improve the error by including more features or using a different model.
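For example, one quick experiment (with curb_weight chosen arbitrarily from the available columns, purely as an illustration) is to refit the model with a third feature and see whether the test error drops:
# Refit with an extra feature and recompute the average test error
extra_features = ["engine_size", "horsepower", "curb_weight"]
model.fit(training[extra_features], training["symboling"])
new_predictions = model.predict(test[extra_features])
new_mae = np.mean(np.abs(test["symboling"] - new_predictions))
print(new_mae)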
Next Steps
In this tutorial, we learned about the linear regression model, and we used it to predict car riskiness based on engine size and horsepower. Linear regression is one of the most commonly used data science tools because it fits well with human intuition: we can see how changes in the predictors produce proportional changes in the outcome. We examined the data, built a model in Python, and used this model to produce predictions. This process is at the core of the machine learning workflow and is essential knowledge for any data scientist.
If you'd like to learn more about linear regression and add it to your machine learning skill set, Dataquest has a full course covering the topic in our Data Scientist in Python Career Path.

About the author
Christian Pascual
Christian is a PhD student studying biostatistics in California. He enjoys making statistics and programming more accessible to a wider audience. Outside of school, he enjoys going to the gym, language learning, and woodworking.