Top 5 metrics for evaluating regression models

In my previous posts, I have covered some regression models (simple linear regression, polynomial regression) and classification models (k-nearest neighbors, support vector machines). However, I haven’t discussed in depth the different ways to evaluate these models. Without proper metrics, not only can you not confidently claim the accuracy of your models, but you also cannot compare different models to pick the most accurate one.

In this post, I want to focus on some of the most popular metrics that are used to evaluate regression models. These metrics are (in no particular order):

  • Explained Variance Score (EVS)
  • Mean Absolute Error (MAE)
  • Mean Squared Error (MSE)
  • R Squared Score (R2 Score)
  • Adjusted R Squared Score

All of these metrics (except for the adjusted R2 score) were calculated in my post about implementing a polynomial regression model.
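As a quick preview, the first four metrics are available directly in scikit-learn’s metrics module; the adjusted R2 score is not built in, but it follows directly from the R2 score. A minimal sketch, with made-up y_true and y_pred arrays and an assumed feature count p:

    from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                                 mean_squared_error, r2_score)

    y_true = [3.0, 5.0, 7.5, 9.0]   # actual values (made up for illustration)
    y_pred = [2.8, 5.3, 7.1, 9.4]   # model predictions (made up for illustration)

    print(explained_variance_score(y_true, y_pred))
    print(mean_absolute_error(y_true, y_pred))
    print(mean_squared_error(y_true, y_pred))

    r2 = r2_score(y_true, y_pred)
    print(r2)

    # Adjusted R2 penalizes R2 for the number of features p given n samples.
    n, p = len(y_true), 2  # p = 2 is an assumed feature count
    print(1 - (1 - r2) * (n - 1) / (n - p - 1))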

Continue…

Implementing Support Vector Machine (SVM) algorithm in Python

As you have probably noticed by now, there are several machine learning algorithms at your disposal. In my previous post, I covered a very popular classification algorithm called K-Nearest Neighbors. In today’s post, I will cover another very common and powerful classification algorithm called Support Vector Machine (SVM).

What is SVM and how does it work?

Just like KNN, SVM is a supervised learning model, which means that it learns from the training set that we feed it. It can be used for both classification and regression problems, but it’s mostly used for classification. In this post, we will focus on using SVM for classification.

SVM works by picking support vectors and then using them to define a decision boundary that separates points into different classes. The decision boundary is more formally known as a hyperplane; points on different sides of the hyperplane belong to different classes. However, the same set of points can often be separated by numerous hyperplanes, so how do you decide which one to select? That’s where the support vectors come into the picture.
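To make this concrete, here is a minimal sketch of fitting an SVM classifier with scikit-learn’s SVC; the Iris dataset is used purely for illustration:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = SVC(kernel='linear')
    clf.fit(X_train, y_train)

    print(clf.support_vectors_)       # the support vectors that define the hyperplane
    print(clf.score(X_test, y_test))  # accuracy on the held-out test set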

Continue…

Implementing k-nearest neighbors in Python

Last time, we looked into one of the simplest classification algorithms in machine learning called binomial logistic regression. In this post, I am going to cover another common classification algorithm called K Nearest Neighbors, otherwise known as KNN.

To recap, we have mostly discussed regression models such as simple and multivariate linear regression and polynomial regression, which are used for predicting a quantity. On the other hand, classification models are used for predicting a category, such as yes/no, will buy a car/scooter/truck, will turn pink/green/red, etc.
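As a preview of what we will build, here is a minimal sketch using scikit-learn’s KNeighborsClassifier; the Iris dataset is used purely for illustration:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Each point is classified by a majority vote of its 5 nearest neighbors.
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))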

Continue…

Implementing a Binomial Logistic Regression model in Python


So far, we have only discussed regression modelling. However, there is another type of modelling called classification modelling. The primary difference between regression models and classification models is that while regression models are used to predict a quantity, classification models are used to predict a category.

For example, in my post on simple linear regression, we tried to predict soda sales from the day’s temperature. Total sales of soda (our label) is a quantitative value, and hence we used a regression model. In today’s example, we are going to predict whether someone will purchase soda or not by looking at the day’s temperature. Here we have two categories: whether the customer will purchase soda or not. This makes our label (dependent variable) categorical and suitable for logistic regression. Just as there were different variations of the linear regression model, we also have different types of logistic regression models.
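To make the example concrete, here is a minimal sketch with scikit-learn’s LogisticRegression; the temperatures and purchase labels are made up for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[14], [18], [22], [26], [30], [34]])  # day's temperature
    y = np.array([0, 0, 0, 1, 1, 1])                    # 1 = purchased soda, 0 = did not

    clf = LogisticRegression()
    clf.fit(X, y)

    print(clf.predict([[28]]))        # predicted class for a 28-degree day
    print(clf.predict_proba([[28]]))  # probability of each class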


Continue…

Implementing a Polynomial Regression Model in Python

So far, we have looked at two types of linear regression models and how to implement them in Python using scikit-learn. To recap, we began with a simple linear regression (SLR) model, where we have one independent variable (feature) and one dependent variable (label). We then expanded it slightly to a more general use case where we had multiple independent variables and one dependent variable. We called it a multivariate linear regression model.

Both of these models result in a straight line (or a plane in higher dimensions), which is very convenient but a bit too simplistic for the real world. Most real-world problems cannot be easily modeled by a simple or multivariate linear regression model. For them, you need a non-linear model such as a polynomial regression model.

A polynomial regression model can be represented by an equation of this form:

y = β0 + β1x + β2x^2 + … + βnx^n

A polynomial regression model is a type of linear regression model, which can be confusing to some. The reason is that while the fitted curve is nonlinear in the independent variable, the model is still linear in its coefficients, which is all the estimation procedure requires. In fact, polynomial regression is a special case of multivariate linear regression.

How can I implement polynomial regression model?

Implementing a polynomial regression model is slightly different than implementing a simple or multivariate linear regression model. You still use the linear regression model, but before you do that, you have to construct polynomial features from your independent variables, as the sketch below shows.
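In scikit-learn terms, that means combining PolynomialFeatures with LinearRegression. A minimal sketch, with made-up data that is roughly quadratic:

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    X = np.array([[1], [2], [3], [4], [5]])    # a single feature (made up for illustration)
    y = np.array([1.2, 4.1, 9.3, 15.8, 25.1])  # roughly quadratic in X

    poly = PolynomialFeatures(degree=2)
    X_poly = poly.fit_transform(X)  # columns: 1, x, x^2

    model = LinearRegression()
    model.fit(X_poly, y)
    print(model.predict(poly.transform([[6]])))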

Here are the steps we are going to follow as usual:

  • Exploring the dataset
  • Splitting the dataset into training and testing set
  • Building the model
  • Evaluating the model

Continue…

Setting up Apache Spark on an AWS EC2 instance

I am currently learning Apache Spark and how to use it for in-memory analytics as well as machine learning (ML). Scikit-learn is a great library for ML, but when you want to deploy an ML model in production to analyze billions of rows (‘big data’), you want to be working with a technology or framework, such as Hadoop, that supports distributed computing.

Apache Spark is an open-source engine built on top of Hadoop that provides a significant improvement over native Hadoop MapReduce operations due to its support for in-memory computing. Spark also has a very nice API available in Scala, Java, Python and R, which makes it easy to use. Of course, I will be focusing on Python since that’s the language I am most familiar with.
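To give a taste of the Python API, here is a tiny PySpark sketch; it assumes Spark is already installed, which is exactly what the rest of this post sets up:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName('example').getOrCreate()

    df = spark.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'value'])
    df.filter(df.id > 1).show()  # executed as a distributed computation

    spark.stop()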

Moreover, when working with a distributed computing system, you want to make sure that it’s running on a cloud platform such as AWS, Azure or Google Cloud, which allows you to scale your cluster flexibly. For example, if you had to quickly analyze billions of rows, you could spin up a bunch of EC2 instances with Spark running and run your ML models on the cluster. After you are done, you can easily terminate your session.

In this blog post, I will be showing you how to spin up a free instance of AWS Elastic Compute Cloud (EC2) and install Spark on it. Let’s get started!

Continue…

Implementing a Multivariate Linear Regression model in Python

Earlier, I wrote about how to implement a simple linear regression (SLR) model in python. SLR is probably the easiest model to implement among the most popular machine learning algorithms. In this post, we are going to take it one step further and instead of working with just one independent variable, we will be working with multiple independent variables. Such a model is called a multivariate linear regression (MLR) model.

How does the model work?

A multivariate linear model can be described by a linear equation consisting of multiple independent variables.

For example:

y = β0 + β1x1 + β2x2 + … + βnxn

In this equation, the β (beta) terms are the coefficients, the x terms are the independent variables, and y is the dependent variable.

An SLR model is a simplified version of an MLR model where there is only one x. Linear regression models use a technique called Ordinary Least Squares (OLS) to find the optimum values for the betas. OLS works by calculating the error, which is the difference between the predicted value and the actual value, and squaring it. The goal is to find the betas that minimize the sum of the squared errors.
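Here is a minimal sketch of fitting an MLR model with scikit-learn’s LinearRegression; the two-feature dataset is made up for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[1, 2], [2, 1], [3, 4], [4, 3], [5, 5]])  # two features per sample
    y = np.array([8, 7, 18, 17, 25])

    model = LinearRegression()
    model.fit(X, y)  # OLS finds the betas that minimize the sum of squared errors

    print(model.intercept_, model.coef_)  # b0 and [b1, b2]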

If you want to learn more about SLR and OLS, I highly recommend this visual explanation.

Continue…

Implementing Simple Linear Regression Model in Python

So far, I have discussed some of the theory behind machine learning algorithms and shown you how to perform vital steps when it comes to data preprocessing such as feature scaling and feature encoding. We are now ready to start with the simplest machine learning algorithm which is simple linear regression (SLR).

Remember, back in school, you would collect data for one of your science lab experiments and then use it to predict some values by plotting your data in Microsoft Excel and drawing a line of best fit through it? That line of best fit is the outcome of an SLR model. An SLR model is a linear model which assumes that two variables (the independent and dependent variables) exhibit a linear relationship. A linear model with multiple independent variables is called a multivariate linear regression model.

How does the model work?

Since an SLR model assumes a linear relationship, we know the line of best fit is described by a linear equation of the form y = mx + b, where y is the dependent variable, x is the independent variable, m is the slope and b is the y-intercept. Alternatively, m and b are also known as betas. The key to finding a good SLR model is to find the values for these betas that give you the most accurate predictions.

An SLR model uses a technique called Ordinary Least Squares (OLS) to find the optimum values for the betas. OLS works by calculating the error, which is the difference between the predicted value and the actual value, and squaring it. The goal is to find the betas that minimize the sum of the squared errors.
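Here is a minimal sketch with scikit-learn’s LinearRegression; the temperature and sales numbers are made up for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[20], [25], [30], [35], [40]])  # day's temperature (independent variable)
    y = np.array([100, 150, 210, 260, 310])       # soda sales (dependent variable)

    model = LinearRegression()
    model.fit(X, y)  # OLS finds the slope (m) and intercept (b)

    print(model.coef_[0], model.intercept_)  # the fitted betas
    print(model.predict([[28]]))             # predicted sales for a 28-degree day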

If you want to learn more about SLR and OLS, I highly recommend this visual explanation.

Continue…

Feature scaling in Python using scikit-learn

In my previous post, I explained the importance of feature encoding and how to do it in Python using scikit-learn. In this post, we are going to talk about another component of the preprocessing step in applying machine learning models: feature scaling. Very rarely will you be dealing with features that share the same scale. What do I mean by that? For example, let’s look at the famous wine dataset, which can be found here. This dataset contains several features, such as alcohol content, malic acid and color intensity, which describe a type of wine. Focusing on just these three features, we can see that they do not share the same scale: alcohol content is measured in alcohol/volume, whereas malic acid is measured in g/l.

Why is feature scaling important?

If we were to leave the features as they are and feed them to a machine learning algorithm, we may get incorrect predictions. This is because many algorithms, such as SVM, k-nearest neighbors and logistic regression, expect features to be scaled. If the features are not scaled, your machine learning algorithm might assign more weight to one feature than another solely because of its larger magnitude.
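As a preview, here is a minimal sketch using scikit-learn’s StandardScaler, which standardizes each feature to zero mean and unit variance; the wine dataset mentioned above ships with scikit-learn:

    from sklearn.datasets import load_wine
    from sklearn.preprocessing import StandardScaler

    X, y = load_wine(return_X_y=True)

    scaler = StandardScaler()
    X_scaled = scaler.fit_transform(X)  # each column now has mean 0 and std 1

    print(X_scaled.mean(axis=0).round(2))
    print(X_scaled.std(axis=0).round(2))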

Continue…

Feature encoding in Python using scikit-learn


A key step in applying machine learning models to your data is feature encoding, and in this post, we are going to discuss what that consists of and how we can do it in Python using scikit-learn.

Not all the fields in your dataset will be numerical. Many times you will have at least one non-numerical feature, which is also known as a categorical feature. For example, your dataset might have a feature called ‘ethnicity’ to describe the ethnicity of employees at a company. Similarly, you can also have a categorical dependent variable if you are dealing with a classification problem where your dataset is used to predict a class instead of a number (regression).

For example, let’s look at a famous machine learning dataset called Iris. This dataset has 4 numerical features: sepal length, sepal width, petal length and petal width. The output is the species, which can be one of three classes: setosa, versicolor and virginica.
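As a preview, here is a minimal sketch using scikit-learn’s LabelEncoder to turn the species names into integer codes:

    from sklearn.preprocessing import LabelEncoder

    species = ['setosa', 'versicolor', 'virginica', 'setosa']

    encoder = LabelEncoder()
    encoded = encoder.fit_transform(species)

    print(encoded)           # [0 1 2 0]
    print(encoder.classes_)  # the category behind each integer code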

Continue…