# Setting up Apache Spark on an AWS EC2 instance

I am currently learning Apache Spark and how to use it for in-memory analytics as well as machine learning (ML). Scikit-learn is a great library for ML, but when you want to deploy an ML model in production to analyze billions of rows (‘big data’), you want to be working with a technology or framework, such as Hadoop, that supports distributed computing.

Apache Spark is an open-source engine built on top of Hadoop that provides a significant improvement over native Hadoop MapReduce operations thanks to its support for in-memory computing. Spark also has a very nice API available in Scala, Java, Python and R, which makes it easy to use. Of course, I will be focusing on Python since that’s the language I am most familiar with.

Moreover, when working with a distributed computing system, you want to make sure it’s running on a cloud platform such as AWS, Azure or Google Cloud, which allows you to scale your cluster flexibly. For example, if you had to quickly analyze billions of rows, you could spin up a bunch of EC2 instances running Spark and run your ML models on the cluster. Once you are done, you can easily terminate your session.

In this blog post, I will be showing you how to spin up a free AWS Elastic Compute Cloud (EC2) instance and install Spark on it. Let’s get started!

# Implementing a Multivariate Linear Regression model in python

Earlier, I wrote about how to implement a simple linear regression (SLR) model in python. SLR is probably the easiest model to implement among the most popular machine learning algorithms. In this post, we are going to take it one step further and instead of working with just one independent variable, we will be working with multiple independent variables. Such a model is called a multivariate linear regression (MLR) model.

### How does the model work?

A multivariate linear model can be described by a linear equation consisting of multiple independent variables.

For example:

y = β0 + β1x1 + β2x2 + … + βnxn

In this equation, the βs (betas) are the coefficients, the xs are the independent variables and y is the dependent variable.

An SLR model is a simplified version of an MLR model where there is only one x. Linear regression models use a technique called Ordinary Least Squares (OLS) to find the optimal values for the betas. OLS consists of calculating the error, which is the difference between the predicted value and the actual value, and then squaring it. The goal is to find the betas that minimize the sum of the squared errors.
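As a sketch of what OLS does in the multivariate case, we can recover known betas with NumPy’s least-squares solver. The data and coefficient values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical data generated from y = 1 + 2*x1 + 3*x2 (no noise)
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 2.0, 1.0, 3.0])
y = 1 + 2 * x1 + 3 * x2

# Design matrix with a column of ones so the first beta is the intercept
X = np.column_stack([np.ones_like(x1), x1, x2])

# OLS: find the betas that minimize the sum of squared errors
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # approximately [1. 2. 3.]
```

Because the made-up data is noise-free, OLS recovers the original coefficients exactly; with real data, the betas would be the best linear fit rather than an exact match.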

# Implementing Simple Linear Regression Model in Python

So far, I have discussed some of the theory behind machine learning algorithms and shown you how to perform vital steps when it comes to data preprocessing such as feature scaling and feature encoding. We are now ready to start with the simplest machine learning algorithm which is simple linear regression (SLR).

Remember, back in school, you would collect data for one of your science lab experiments and then use it to predict some values by plotting the data in Microsoft Excel and drawing a line of best fit through it? That line of best fit is the outcome of an SLR model. An SLR model is a linear model which assumes that two variables (the independent and dependent variables) exhibit a linear relationship. A linear model with multiple independent variables is called a multiple linear regression model.

### How does the model work?

Since an SLR model assumes a linear relationship, we know the line of best fit is described by a linear equation of the form y = mx + b, where y is the dependent variable, x is the independent variable, m is the slope and b is the y-intercept. Together, m and b are also known as the betas. The key to finding a good SLR model is to find the values for these betas that give you the most accurate predictions.

An SLR model uses a technique called Ordinary Least Squares (OLS) to find the optimal values for the betas. OLS consists of calculating the error, which is the difference between the predicted value and the actual value, and then squaring it. The goal is to find the betas that minimize the sum of the squared errors.
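For SLR, OLS has a well-known closed-form solution: the slope is the covariance of x and y divided by the variance of x, and the intercept follows from the means. Here is a minimal sketch using made-up data generated from y = 2x + 1:

```python
# Hypothetical data generated from y = 2x + 1
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]

x_mean = sum(x) / len(x)
y_mean = sum(y) / len(y)

# Closed-form OLS estimates: slope m and intercept b
m = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / \
    sum((xi - x_mean) ** 2 for xi in x)
b = y_mean - m * x_mean

print(m, b)  # 2.0 1.0
```

Since the data contains no noise, the fitted betas match the generating equation exactly.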

# Feature scaling in python using scikit-learn

In my previous post, I explained the importance of feature encoding and how to do it in Python using scikit-learn. In this post, we are going to talk about another component of the preprocessing step in applying machine learning models: feature scaling. Very rarely will you be dealing with features that share the same scale. What do I mean by that? For example, let’s look at the famous wine dataset, which can be found here. This dataset contains several features, such as alcohol content, malic acid and color intensity, which describe a type of wine. Focusing on just these three features, we can see that they do not share the same scale: alcohol content is measured in alcohol/volume, whereas malic acid is measured in g/l.

### Why is feature scaling important?

If we were to leave the features as they are and feed them to a machine learning algorithm, we may get inaccurate predictions. This is because many algorithms, such as SVM, K-nearest neighbors and logistic regression, expect features to be scaled. If the features are not scaled, your machine learning algorithm might assign more weight to one feature than another solely because of its larger magnitude.
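To make this concrete, here is a minimal sketch of standardization, which is what scikit-learn’s StandardScaler does: rescale each feature to zero mean and unit variance. The wine values below are made up for illustration:

```python
def standardize(values):
    """Rescale values to zero mean and unit variance (z-scores)."""
    mean = sum(values) / len(values)
    # Population standard deviation
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

alcohol = [12.8, 13.5, 14.1]    # measured in % alcohol/volume
malic_acid = [1.5, 2.1, 5.6]    # measured in g/l

# After scaling, both features live on the same scale
print(standardize(alcohol))
print(standardize(malic_acid))
```

After scaling, a distance-based algorithm like K-nearest neighbors treats a one-unit change in either feature as equally significant.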

# Feature encoding in python using scikit-learn

A key step in applying machine learning models to your data is feature encoding, and in this post, we are going to discuss what that consists of and how to do it in Python using scikit-learn.

Not all the fields in your dataset will be numerical. Many times you will have at least one non-numerical feature, which is also known as a categorical feature. For example, your dataset might have a feature called ‘ethnicity’ to describe the ethnicity of employees at a company. Similarly, you can also have a categorical dependent variable if you are dealing with a classification problem where your dataset is used to predict a class instead of a number (regression).

For example, let’s look at a famous machine learning dataset called Iris. This dataset has 4 numerical features: sepal length, sepal width, petal length and petal width. The output is a type of species which can be one of these three classes: setosa, versicolor and virginica.
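As a sketch of what encoding does (scikit-learn’s LabelEncoder and OneHotEncoder automate this), here is how the Iris species labels could be mapped to integers and to one-hot vectors. The tiny sample below is made up for illustration:

```python
species = ['setosa', 'versicolor', 'virginica', 'setosa']

# Label encoding: map each class name to an integer
classes = sorted(set(species))
to_int = {c: i for i, c in enumerate(classes)}
labels = [to_int[s] for s in species]
print(labels)  # [0, 1, 2, 0]

# One-hot encoding: one binary column per class, so no class
# looks "bigger" than another to the model
one_hot = [[1 if to_int[s] == i else 0 for i in range(len(classes))]
           for s in species]
print(one_hot)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

One-hot encoding matters for independent variables because plain integer labels would impose an artificial ordering (virginica > setosa) that the model might learn.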

# Python api for getting market and financial data from IEX

Most of you have probably heard of IEX: The Investors Exchange. IEX is the exchange started by Brad Katsuyama, the protagonist of Michael Lewis’s famous book Flash Boys (review). Just last year, IEX scored a major win when the SEC approved its application to register as a national securities exchange. As time passes, IEX continues to gain more and more market share.

Just like any other exchange, one of IEX’s most valuable assets is the market data generated by all the trading. However, unlike other exchanges, IEX makes its data available to the public for free via a web API. On February 22, 2017, IEX wrote a blog post announcing the release of its web API. Since then, IEX has made quite a few enhancements and added support for newer datasets as well.

As of today, some of the data that IEX provides includes:

• pricing data (latest trade and quote data as well as summary data going back up to 5 years),
• reference data,
• news data,
• earnings data, and
• financial data.
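As a sketch of how this data can be pulled over HTTP with the requests library, the URL below follows the pattern of IEX’s public API at the time of writing (base URL https://api.iextrading.com/1.0 with a /stock/{symbol}/quote endpoint); treat the exact URL as an assumption, since the API may change:

```python
def quote_url(symbol, base='https://api.iextrading.com/1.0'):
    """Build the URL for the latest quote of a given ticker symbol."""
    return '{}/stock/{}/quote'.format(base, symbol.lower())

print(quote_url('AAPL'))  # https://api.iextrading.com/1.0/stock/aapl/quote

# Fetching the data would then look like:
#   import requests
#   quote = requests.get(quote_url('AAPL')).json()
```

The response is plain JSON, so no API key or special client library is needed to start exploring the data.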

# Understanding sets in python

As I learn more and more about python’s different data types, I find myself surprised that not enough people use (or even know about) sets. At my job, I am often taking some data and transforming it. Once it is transformed, I have to analyze how the data may have changed, and sets are great for such comparisons.

In this post, I will cover how to create sets and show some examples on how to use them.

### What is a set?

A set is an unordered collection of unique items in python. Sets are sort of like lists, except that they only contain unique items and don’t maintain order. They also support a number of helpful operations of their own.
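For example, here is a quick sketch of the kind of before/after comparison mentioned above, using made-up data:

```python
before = {'alice', 'bob', 'carol'}
after = {'bob', 'carol', 'dave'}

added = after - before       # items introduced by the transformation
removed = before - after     # items dropped by the transformation
unchanged = before & after   # items present in both

print(added)      # {'dave'}
print(removed)    # {'alice'}
print(unchanged)  # {'bob', 'carol'}
```

Doing the same comparison with lists would require nested loops or repeated membership checks; with sets it is one operator each.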

# Understanding list, set and dict comprehensions

Just a few days ago, you were having a good time with your friends, counting down to 2017. A few days have passed and you are left with a typical cold, snowy day in January. You are busy writing code for a high-profile project at work. Suddenly, a situation arises where you need to create a new list from an existing one. You code it like you always have:

```python
>>> old = ['adam', 'mike', 'olga']
>>> new = []
>>> for name in old:
...     new.append(name + ' last')
...
>>> new
['adam last', 'mike last', 'olga last']
```

But then you realize that one of your 5 new year’s resolutions is to start using list comprehensions! You have heard about them but were always a little intimidated by them. You were also not really sure of their point.

If you have ever tried to learn a new language (not a programming language), you know that we always think in our native language before translating it to the new language. This can lead to you forming sentences that don’t make sense in the new language but are perfectly normal in your native one. For example, in a lot of languages, you ‘open’ an electronic gadget such as a fan, an AC unit or a cell phone. When you say that in English, it means to literally open up the gadget rather than turn it on.

The same is true for programming languages. As we pick up new languages, such as python, we use our prior knowledge of programming in another language (q, java, c++ etc.) and translate that to python. Many times, your code will work, but it won’t be ‘pretty’ or fast. In python terms, your code won’t be ‘pythonic’.

In this post, I would like to cover some python idioms that can be very helpful. These idioms will:

1. Speed up your code, and
2. Set you apart from beginners

Let’s begin!
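To preview where we are headed, the loop from the opening example collapses into a single list comprehension, and set and dict comprehensions follow the same pattern:

```python
old = ['adam', 'mike', 'olga']

# List comprehension: build the new list in one expression
new = [name + ' last' for name in old]
print(new)  # ['adam last', 'mike last', 'olga last']

# Set and dict comprehensions use the same syntax with {} braces
lengths = {len(name) for name in old}        # {4}
by_name = {name: len(name) for name in old}  # {'adam': 4, 'mike': 4, 'olga': 4}
```

Beyond being shorter, the comprehension says what the new list *is* rather than how to build it, which is exactly the ‘pythonic’ way of thinking described above.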

Note: All examples are written in python 2.

Update: Thanks to Diane and my other readers for pointing out some errors in my examples!

# Getting started with regex in python

I have been wanting to learn regex. Not just because it reminds me of q a bit, but because it’s actually very useful and can be used within numerous languages, including python. In case you don’t know, regex stands for regular expression. A regex is “a sequence of characters that define a search pattern.” The most popular use case is searching strings for a pattern.

I was looking at videos from this year’s PyCon and stumbled upon a video of Trey Hunner conducting a regex workshop at PyCon 2016. If you are looking to learn regex, I encourage you to watch the video. You can also find most of the material mentioned in the video on this website.

After I watched the video, I did some practice on my own which I wanted to share here.
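As an example of the kind of practice involved, here is a small sketch using python’s built-in re module to search a string for a pattern. The sample text and the deliberately simplified email pattern are made up for illustration:

```python
import re

text = 'Contact us at support@example.com or sales@example.org.'

# A (very) simplified email pattern: word characters, @, domain, dot, TLD
pattern = r'\w+@\w+\.\w+'

# findall returns every non-overlapping match in the string
emails = re.findall(pattern, text)
print(emails)  # ['support@example.com', 'sales@example.org']
```

Real email validation is far messier than this pattern, but it shows the core workflow: write a pattern once, then let the regex engine do the searching.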