{
"cells": [
{
"cell_type": "markdown",
"id": "3c81d7d2",
"metadata": {},
"source": [
"# Introduction to Machine Learning \n",
" Compiled by: \n",
" Ibrahim O. Alabi and Evi Ofekeze (Computing PhD, Data Science, Boise State University) "
]
},
{
"cell_type": "markdown",
"id": "fd2c41b2",
"metadata": {
"tags": []
},
"source": [
"## Learning Objectives\n",
"\n",
"1. Understand the goals and main concepts of a Machine Learning Algorithm\n",
"2. Prepare a SnowEx dataset for Machine Learning\n",
"3. Understand the fundamental types of and techniques in Machine Learning\n",
"4. Implement Machine Learning with a SnowEx dataset\n",
"5. How to deploy a Machine Learning Model\n",
"\n",
"## What is Machine Learning?\n",
"\n",
"Machine Learning simply means building algorithms or computer models using data. The goal is to use these \"trained\" computer models to make decisions. \n",
"\n",
"Here is a general definition;\n",
"\n",
"> Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed (Arthur Samuel, 1959)\n",
"\n",
"Over the years, ML algorithms have achieved great success in a wide variety of fields. Its success stories include disease diagnostics, image recognition, self-driving cars, spam detectors, and handwritten digit recognition. In this tutorial we will train a model using a SnowEx dataset.\n",
"\n",
"### Binary Example\n",
"\n",
"Suppose we want to build a computer model that returns 1 if the input is a prime number and 0 otherwise. This model may be represented as follows;\n",
"\n",
"$$Y = F(X)$$\n",
"\n",
"Where:\n",
"\n",
" * $X \\to$ the number entered, also called a feature\n",
" * $Y \\to$ the outcome we want to predict\n",
" * $F \\to$ the model that gets the job done\n",
"\n",
"Contrary to classical programming where we program the function, in Machine Learning we learn the function by training the algorithm with data. \n",
"\n",
"\n",
"\n",
"\n",
"$\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad$ image by [Kpaxs on Twitter](https://twitter.com/Kpaxs/status/1163058544402411520)\n",
"\n",
"Machine Learning is useful when the function cannot be programmed or the relationship between the features and outcome is unknown. \n",
"\n",
"## SnowEx Data\n",
"\n",
"Now we will import data from the 2017 SnowEx Airborne Campaign. In Machine Learning terminologies, it contains the following features;\n",
"\n",
"- phase\n",
"- coherence\n",
"- amplitude\n",
"- incidence angle\n",
"\n",
"and outcome\n",
"\n",
"- snow depth\n",
"\n",
"The goal is to use the data to learn the computer model $F$ so that\n",
"\n",
"snow depth = $F$(phase, coherence, amplitude, incidence angle) \n",
"\n",
"Once $F$ is learned, it can be used to predict snow depth given the features."
]
},
{
"cell_type": "markdown",
"id": "3ca5f55a",
"metadata": {},
"source": [
"## Load Dataset\n",
"\n",
"Note that this dataset has been cleaned in a separate notebook, and it is available for anyone interested.\n",
"\n",
"Load libraries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "667716e4",
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"import numpy as np\n",
"import pandas as pd\n",
"import seaborn as sns\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7170953",
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import MinMaxScaler\n",
"from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_percentage_error, mean_absolute_error"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cf939ccd",
"metadata": {},
"outputs": [],
"source": [
"dataset = pd.read_csv(\"./data/snow_depth_data.csv\")\n",
"dataset.info()\n",
"dataset.head()"
]
},
{
"cell_type": "markdown",
"id": "0ff0fdc0",
"metadata": {},
"source": [
"## Train and Test Sets\n",
"\n",
"For the algorithm to learn the relationship pattern between the feature(s) and the outcome variable, it has to be exposed to examples. The dataset containing the examples for training a learning machine is called the *train set* ($\\mathcal{D}^{(tr)}$). \n",
"\n",
"On the other hand, the accuracy of an algorithm is measured on how well it predicts the outcome of observations it has not seen before. The dataset containing the observations not used in training the ML algorithm is called the *test set* ($\\mathcal{D}^{(te)}$). \n",
"\n",
"In practice, we divide our dataset into train and test sets, train the algorithm on the train set and evaluate its performance on the test set."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ddf81d5",
"metadata": {},
"outputs": [],
"source": [
"train_dataset = dataset.sample(frac=0.8, random_state=0)\n",
"test_dataset = dataset.drop(train_dataset.index)"
]
},
{
"cell_type": "markdown",
"id": "c2b133df",
"metadata": {},
"source": [
"### Inspect the Data\n",
"\n",
"**Visualization**\n",
"\n",
"Before modelling, it is always a good idea to visualize our dataset. With visualization, we gain insights into the relationships between the variables and the shape of the distribution of each variable. For this data, we shall look into the scatterplot matrix."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b42fb49",
"metadata": {},
"outputs": [],
"source": [
"sns.pairplot(train_dataset[['snow_depth','amplitude', 'coherence', 'phase', 'inc_ang']], diag_kind='kde');"
]
},
{
"cell_type": "markdown",
"id": "512a136b",
"metadata": {},
"source": [
"Each panel (excluding the main diagonal) of the scatterplot matrix is a scatterplot for a pair of variables whose identities are given by the corresponding row and column labels. The main diagonal is density plot for each variable. None of the features have a linear relationship with *snow_depth*. This may indicate that a linear model might not be the best option."
]
},
{
"cell_type": "markdown",
"id": "b823fd07",
"metadata": {},
"source": [
"**Descriptive Statistics**\n",
"\n",
"- count: the size of the training set\n",
"- mean: arithmetic mean\n",
"- std: sample standard deviation\n",
"- min: minimum value\n",
"- 25%: 25% of the values fall below this number\n",
"- 50%: 50$^{th}$ percentile also called the median. 50% of the values fall below this number and 50% of the values fall above this number\n",
"- 75%: 75% of the values fall below this number\n",
"- max: maximum value"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8aa6cdf2",
"metadata": {},
"outputs": [],
"source": [
"train_dataset.describe().transpose()"
]
},
{
"cell_type": "markdown",
"id": "b76d2bfa-0b5b-4dc3-8ea2-ae7f2aa37c84",
"metadata": {},
"source": [
"**Feature Engineering**\n",
"\n",
"Feature engineering means different things to different data scientists. However, in this tutorial, we will define feature engineering as the art of manipulating and\n",
"transforming data into a format that optimally represents the underlying problem. In other words, any transformation/cleaning you do between data collection and model fitting may be called **feature engineering**.\n",
"\n",
"**Types of Feature Engineering**\n",
"\n",
"1. Feature improvement (normalization, missing value imputation, etc.).\n",
"2. Feature construction (creating new features from original data).\n",
"3. Feature selection (hypthesis testing, information gain from tree-based models).\n",
"4. Feature extraction (Principal Component Analysis (PCA), Singular Value Decomposition (SVD), etc.)."
]
},
{
"cell_type": "markdown",
"id": "29e80682",
"metadata": {},
"source": [
"### Normalization\n",
"\n",
"The features and outcome are measured in different units and hence are on different scales. Machine Learning algorithms are very sensitive and important information can be lost if they are not on the same scale. A simple way to address this is to project all the variables onto the same scale, in a process known as **Normalization**.\n",
"\n",
"Normalization simply consists of transforming the variables such that all values are in the unit interval [0, 1]. With Normalization, if $X_j$ is one of the variables, and we have $n$ observations $X_{1j}, X_{2j}, \\cdots, X_{nj}$, then the normalized version of $X_{ij}$ is given by:\n",
"\n",
"$$\n",
"\\tilde{X}{ij} = \\frac{X_{ij} - \\min X_j}{\\max X_j - \\min X_j}\n",
"$$\n",
"\n",
"**Note:** whether the outcome variable should be normalized or not is an issue of preference. In this tutorial, we will not normalize the output variable. If you wish to normalize yours, you have to transform your predictions back to their original scales. "
]
},
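{
"cell_type": "markdown",
"id": "1a2b3c4d",
"metadata": {},
"source": [
"As a quick sanity check on the formula above, the following sketch (using toy numbers, not the SnowEx data) confirms that the manual min-max computation matches sklearn's `MinMaxScaler`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a2b3c4e",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.preprocessing import MinMaxScaler\n",
"\n",
"## toy feature column (illustrative values only)\n",
"X_toy = np.array([[2.0], [4.0], [6.0], [10.0]])\n",
"\n",
"## manual normalization: (x - min) / (max - min)\n",
"manual = (X_toy - X_toy.min()) / (X_toy.max() - X_toy.min())\n",
"\n",
"## the same computation via MinMaxScaler\n",
"scaled = MinMaxScaler().fit_transform(X_toy)\n",
"\n",
"print(np.allclose(manual, scaled))  ## True"
]
},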
{
"cell_type": "code",
"execution_count": null,
"id": "19993714",
"metadata": {},
"outputs": [],
"source": [
"scaler = MinMaxScaler()\n",
"scaled_train_features = scaler.fit_transform(train_dataset.drop(columns=['snow_depth']))\n",
"scaled_test_features = scaler.transform(test_dataset.drop(columns=['snow_depth'])) ## fit_transform != transform. \n",
" ## transform uses the parameters of fit_transform\n",
" ## DO NOT TRANSFORM YOUR TEST SET SEPARATELY, USE PARAMETERS LEARNED FROM TRAINING SET!!!!"
]
},
{
"cell_type": "markdown",
"id": "91f58304",
"metadata": {},
"source": [
"### Separate Features from Labels\n",
"\n",
"Our last step for preparing the data for Machine Learning is to separate the features from the outcome."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a0195d2f",
"metadata": {},
"outputs": [],
"source": [
"train_X, train_y = scaled_train_features, train_dataset['snow_depth']\n",
"test_X, test_y = scaled_test_features, train_dataset['snow_depth']"
]
},
{
"cell_type": "markdown",
"id": "c5f75d9e",
"metadata": {},
"source": [
"## Why Estimate $F$?\n",
"\n",
"We estimate $F$ for two main reasons;\n",
"\n",
"1. Prediction: in this case, the features $X$ are available, but there is no explicit rule for obtaining the outcome $Y$.\n",
"2. Inference: in practice, we are sometimes interested in how changing the input $X$ effects $Y$. Inference can tell us which features are significantly associated with a particular outcome and the nature of the relationship between $X$ and $Y$.\n",
" \n",
"\n",
"## How Do We Estimate $F$?\n",
"\n",
"### Machine Learning Algorithms\n",
"\n",
"Machine learning algorithms can be categorized based on different criteria. In this tutorial, our categorization will be based on the amount and type of supervision needed during the training process. Based on this criterion, there are four major categories; supervised learning, unsupervised learning, semisupervised learning, and reinforcement learning. We shall limit our definition to the first two;\n",
"\n",
"* **Supervised Learning**: this refers to tasks where we have a specific outcome to predict. That is, every observation of the features has a corresponding outcome. An example of a supervised learning task is predicting snow depth based on some influencing features.\n",
"\t \n",
"* **Unsupervised Learning**: this refers to tasks where we have no outcome to predict. Here, rather than predict an outcome, we seek to understand the relationship between the features or between the observations or to detect anomalous observations. Considering the example above, we assume the snow depth variable does not exist and we either understand the relationship between the features or between the observations based on the features.\n",
"\n",
"\n",
"It is worth noting that a variable can either be categorical or continuous. For now, let's focus on the nature of the outcome variable. In *Supervised Learning* parlance, if the outcome variable is categorical, we have a *classification* task and if continuous, we are in the presence of a *regression* task. Categorical implies that the variable is made of distinct categories (e.g. blood types: A, AB, O) and continuous implies that the variable is measured (e.g. snow depth). For the rest of this tutorial, we will focus on *Supervised Learning* tasks with a special concentration on regression tasks.\n",
"\n",
"\n",
"## Machine Learning Installation Guide\n",
"\n",
"For this notebook to run successfully, you have to install the following packages;\n",
"\n",
"1. TensorFlow\n",
"2. Keras\n",
"\n",
"**Tensorflow**: it is an end-to-end open source platform for machine learning. It makes it easy to train and deploy machine learning models. TensorFlow was created at Google and supports many of their large-scale Machine Learning applications. Read more at [About TensorFlow](https://www.tensorflow.org/).\n",
"\n",
"**Keras:** Keras is a deep learning Application Programming Interface (API) written in Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on enabling fast experimentation. Read more at [About Keras](https://keras.io/about/).\n",
"\n",
"Other packages needed come pre-installed in Anaconda.\n",
"\n",
"You only need to install TensorFlow as Keras comes as a submodule in TensorFlow. To install TensorFlow, launch the **Jupyter Lab Terminal** and type the following command:\n",
"\n",
" $\\texttt{C:\\Users\\Default> pip install tensorflow}$\n",
"\n",
"After installation, you will be able to continue with the tutorial. For this tutorial, both Tensorflow and Keras used are version 2.8\n",
"\n",
"\n",
"**Most Machine Learning techniques can be characterised as either *parametric* or *non-parametric*.**\n",
"\n",
"**Parametric:** The parametric approach simplifies the problem of estimating $F$ to a parameter estimation problem. The disadvantage is that we assume a particular shape of $F$ that may not match the true shape of $F$. The major advantage of using parametric approach that when inference is the goal we can understand how changing $X_1, X_2, \\cdots, X_k$ effects $Y$. A common parametric method is the Linear Regression.\n",
"\n",
"**Non-parametric:** Non-parametric approaches do not assume any shape for $F$. Instead, they try to estimate $F$ that gets as close to the data points as possible. The disadvantage of non-parametric approaches is that they require extensive training observations to estimate $F$ accurately. Common examples of non-parametric methods include Random Forest, Neural Networks, and support vector machines.\n",
"\n",
"In this tutorial, we will focus on non-parametric approaches with a specific concentration on Random Forest and Neural networks.\n",
"\n",
"## Random Forest\n",
"\n",
"Random forest is a simple but powerful algorithm that handles classification and regression tasks. The structural building blocks of Random Forest are decision trees; we will begin our explanation with decision trees. When we use decision trees for regression tasks, we call them regression trees; when we use decision trees for classification tasks, we call them classification trees. Subsequently, we use “regression trees” instead of “decision trees” because we will solve a regression problem in this tutorial.\n",
"\n",
"### Regression Trees\n",
"\n",
"Consider a dataset $\\mathcal{D}_n = \\left\\lbrace (\\textbf{x}_1, y_1), (\\textbf{x}_2, y_2), \\cdots, (\\textbf{x}_n, y_n) \\right\\rbrace$ where $\\textbf{x}_i^\\top \\equiv ({x}_{i1}, {x}_{i2}, \\cdots, {x}_{ik})$ denotes the $k$-dimensional vector of features, and $y_i$ represents the corresponding outcome (continuous in this case).\n",
"\n",
"The fundamental concept underlying regression trees is to split the feature space (set of possible values for the features) into two sub-regions. The sub-regions are further divided into two until a stopping criterion is reached. The predicted value of a given observation is the arithmetic mean of the training observations in the region it belongs.\n",
"\n",
"\n",
"## Performance Evaluation\n",
"\n",
"Each time we estimate the true outcome ($Y$) using a trained ML algorithm ($F(X)$), the discrepancy between the observed and predicted must be quantified. The question is, how do we quantify this discrepancy? This brings the notion of **loss function**. \n",
"\n",
"*Loss Function* $\\mathcal{L} (\\cdot,\\cdot)$ is a bivariate function that quantifies the loss (error) we sustain from predicting $Y$ with $F(X)$. Put another way, **loss function** quantifies how close the prediction $F(X)$ is to the ground truth $Y$.\n",
"\n",
"* Regression Loss Function\n",
"\n",
"There exists quite a number of ways for which the loss of a regression problem may be quantified. We now illustrate two of them;\n",
"\n",
"1. \n",
"\n",
"$$\n",
"\\mathcal{L} (Y,F(X)) = (Y - F(X))^2\n",
"$$\n",
"\n",
"This is popularly known as the *squared error loss* and it is simply the square of the difference between the observed and the predicted values. The loss is squared so that the function reaches its minimum (convex).\n",
"\n",
"2.\n",
"\n",
"$$\n",
"\\mathcal{L} (Y,F(X)) = |Y - F(X)|\n",
"$$\n",
"\n",
"Another way to quantify regression loss is by taking the absolute value of the difference between the observed ($Y$) and the predicted ($F(X)$) values. This is called the L1 loss.\n",
"\n",
"It is worth noting that the *loss function* as defined above corresponds to a single observation. However, in practice, we want to quantify the loss over the entire dataset and this is where the notion of **empirical risk** comes in. Loss quantified over the entire dataset is called the *empirical risk*. Our goal in ML is to develop an algorithm such that the *empirical risk* is as minimum as possible. *Empirical risk* is also called the *cost function* or the *objective function* we want to minimize.\n",
"\n",
"* Regression Empirical Risk\n",
"\n",
"$$\n",
"\\widehat{\\mathcal{R}}_n(F) = \\frac{1}{n}\\sum_{i=1}^n{\\mathcal{L}(Y_i, F(X_i))}\n",
"$$\n",
"\n",
"The empirical risk corresponding to the squared error loss is called \"mean squared error\", while the empirical risk corresponding to the L1 loss is called \"mean absolute error\". Other Regression Loss functions can be found at [Keras: losses](https://keras.io/api/losses/)"
]
},
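{
"cell_type": "markdown",
"id": "2b3c4d5e",
"metadata": {},
"source": [
"To make the loss and empirical-risk definitions concrete, here is a small sketch (with made-up values for the observed and predicted outcomes) that computes the two empirical risks by hand and checks them against sklearn's `mean_squared_error` and `mean_absolute_error`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b3c4d5f",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.metrics import mean_squared_error, mean_absolute_error\n",
"\n",
"## toy observed outcomes and predictions (illustrative values only)\n",
"y_true = np.array([1.2, 0.7, 2.0, 1.5])\n",
"y_pred = np.array([1.0, 0.9, 1.8, 1.7])\n",
"\n",
"## empirical risk under the squared error loss: mean of (Y - F(X))^2\n",
"mse_manual = np.mean((y_true - y_pred) ** 2)\n",
"\n",
"## empirical risk under the L1 loss: mean of |Y - F(X)|\n",
"mae_manual = np.mean(np.abs(y_true - y_pred))\n",
"\n",
"print(np.isclose(mse_manual, mean_squared_error(y_true, y_pred)))  ## True\n",
"print(np.isclose(mae_manual, mean_absolute_error(y_true, y_pred)))  ## True"
]
},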
{
"cell_type": "markdown",
"id": "40b4a6a4",
"metadata": {},
"source": [
"## Random Forest Setup\n",
"\n",
"Load Random forest Libraries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f591b43-9a94-4b8f-87c5-e82a48b3d55b",
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import Pipeline\n",
"from sklearn.ensemble import RandomForestRegressor\n",
"from sklearn.tree import DecisionTreeRegressor, plot_tree\n",
"from sklearn.model_selection import train_test_split, GridSearchCV"
]
},
{
"cell_type": "markdown",
"id": "bae31b29-cd52-4824-a501-62646bf4467d",
"metadata": {},
"source": [
"### Regression Tree\n",
"\n",
"The sklearn documentation for regression tree can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ecde4f3-40ee-456d-9a7e-d3e04a389e55",
"metadata": {},
"outputs": [],
"source": [
"tree = DecisionTreeRegressor(random_state=0, max_depth=2) ## random_state=0 for reproducible results\n",
"tree.fit(train_dataset.drop(columns='snow_depth'), \n",
" train_dataset['snow_depth'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2c38812-482c-4f56-8d66-7f2efb1ca4e1",
"metadata": {},
"outputs": [],
"source": [
"features = list(train_dataset.columns[:-1])\n",
"\n",
"plt.figure(figsize = (20, 8))\n",
"plot_tree(tree, \n",
" filled = True, \n",
" fontsize = 12,\n",
" node_ids = True,\n",
" rounded = True,\n",
" proportion=True,\n",
" feature_names= features)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "4cd2928e-4a64-440c-8fb5-03b6392f31c7",
"metadata": {},
"source": [
"### Now let's explain the regression tree above\n",
"\n",
"The root node (node #0) assigns values of **incidence angle** lesser than or equal to 0.818 to the right branch and values of **incidence angle** greater than 0.818 to the left node. Observations with incidence angle lesser than or equal to 0.818 are further partitioned based on **phase** values. Overall, the regression tree splits the observations into four disjoint groups or regions ($R_i$) of the feature space.\n",
"\n",
"\\begin{align*}\n",
"R_1 &= \\left\\lbrace X \\ | \\ \\text{inc_angle} \\le 0.818,\\ \\text{phase} \\le -8.423 \\right\\rbrace \\qquad \\quad \\hat{y}_{R_1} = 0.712 \\\\\n",
"R_2 &= \\left\\lbrace X \\ | \\ \\text{inc_angle} \\le 0.818,\\ \\text{phase} > -8.423 \\right\\rbrace \\qquad \\quad \\hat{y}_{R_2} = 1.174 \\\\\n",
"R_3 &= \\left\\lbrace X \\ | \\ \\text{inc_angle} > 0.818,\\ \\text{inc_angle} \\le 0.835 \\right\\rbrace \\qquad \\hat{y}_{R_3} = 1.861 \\\\\n",
"R_4 &= \\left\\lbrace X \\ | \\ \\text{inc_angle} > 0.818,\\ \\text{inc_angle} > 0.835 \\right\\rbrace \\qquad \\hat{y}_{R_4} = 2.238 \\\\\n",
"\\end{align*}\n",
"\n",
"where $\\hat{y}_{R_i}$ is the mean of the respective regions. The regions $R_i, \\ i = 1,2,3, 4$ are called *terminal nodes* or *leaves*, the points where the feature space ($X$) is partitioned are called *internal nodes* and the line segments connecting the nodes are referred to as *branches*. Generally, to know which region an observation belongs to, we ask series of question(s) starting from the root node until we get to the terminal node and the value predicted for the observation is mean of all training observations in that region. Mathematically we write;\n",
" \\begin{equation}\n",
" \\hat{y} = \\hat{f}(\\text{x}) = \\frac{1}{|R_j|} \\sum_{i = 1}^{n} y_i 1(\\text{x}_i \\in R_j). \n",
" \\end{equation}\n",
" \n",
"Here, if the $i$th observation belongs to region $R_j$, the indicator function equals 1, else, it equals zero.\n",
"\n",
"* Root node: node #0.\n",
"* Internal nodes: node #1 and node #4.\n",
"* Terminal nodes: node #2, node #3, node #5, and node #6.\n",
"\n",
"We have mentioned the recursive splitting of observations into regions, the natural question that comes to mind is, how do we construct the regions? Our goal is to find regions such that the expected loss is minimized. To perform the recursive binary splitting, we consider all features in the feature space and all possible cut-point(s) for the individual features. The feature and cut-point leading to the greatest reduction in expected loss are chosen for splitting. This process is then continued until a stopping criterion is reached. A stopping criterion could be until no region contains more than five observations. In sklearn, the stopping criterion is controlled by $\\texttt{max_depth}$ and $\\texttt{min_samples_split}$."
]
},
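{
"cell_type": "markdown",
"id": "3c4d5e6f",
"metadata": {},
"source": [
"The prediction rule above (predict the mean of the training outcomes in the leaf) can be verified directly. The following self-contained sketch uses toy data (not the SnowEx data) and `tree.apply`, which returns the leaf each observation lands in:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3c4d5e70",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.tree import DecisionTreeRegressor\n",
"\n",
"rng = np.random.default_rng(0)\n",
"X_toy = rng.uniform(size=(100, 2))              ## toy features\n",
"y_toy = 3 * X_toy[:, 0] + rng.normal(size=100)  ## toy outcome\n",
"\n",
"toy_tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X_toy, y_toy)\n",
"\n",
"## leaf id for each training observation\n",
"leaves = toy_tree.apply(X_toy)\n",
"\n",
"## the tree's prediction for observation 0 ...\n",
"pred = toy_tree.predict(X_toy[[0]])[0]\n",
"\n",
"## ... equals the mean outcome of the training observations in its leaf\n",
"leaf_mean = y_toy[leaves == leaves[0]].mean()\n",
"\n",
"print(np.isclose(pred, leaf_mean))  ## True"
]
},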
{
"cell_type": "markdown",
"id": "8180f76d-626f-4139-a6c7-f728f19e512a",
"metadata": {},
"source": [
"### How to make predictions\n",
"\n",
"An example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d321cf7d-cda3-496f-ae61-5230fd9fbba1",
"metadata": {},
"outputs": [],
"source": [
"sample = test_dataset.loc[[0], ['amplitude', 'coherence', 'phase', 'inc_ang']]\n",
"sample"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "147cf348-5a9d-4e4b-b526-676f81c11dde",
"metadata": {},
"outputs": [],
"source": [
"tree.predict(sample)"
]
},
{
"cell_type": "markdown",
"id": "33a32961-e474-4af5-8ed6-709bf7959740",
"metadata": {},
"source": [
"* The incidence angle is less than 0.818 and the phase is less than -8.423. This observation falls in region one ($R_1$), so our prediction is 0.712."
]
},
{
"cell_type": "markdown",
"id": "28bd9afd-e29a-47f6-a7a3-14e2bf22cd79",
"metadata": {
"tags": []
},
"source": [
"### Feature Importance\n",
"\n",
"The total amount by which the empirical risk is decreased due to splits over a given predictor is documented. The larger the value, the more important the variable. [Sklearn's](https://scikit-learn.org/stable/) implementation of variable importance normalizes the variable importance so that they add up to 1. Mathematically, feature importance is computed as:\n",
"\n",
"$$I_j= \\sum_j w_jC_j - w_{left(j)}C_{left(j)} - w_{right(j)}C_{right(j)}$$\n",
"\n",
"* $I_j$= importance of variable $j$\n",
"* $C_j$= the MSE of node $j$ \n",
"* $w_j$= percentage of the observation reaching node $j$"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "06be186d-d7fb-4bf4-84ca-c92cdb868cda",
"metadata": {},
"outputs": [],
"source": [
"## compute inc_ang importance\n",
"\n",
"node0=0.396 - 0.799*0.156 - 0.201*0.07\n",
"node4=0.201*0.07 - 0.026*0.043 - 0.175*0.056\n",
"imp_inc_ang=node0+node4\n",
"\n",
"## compute phase importance\n",
"\n",
"imp_phase=0.799*0.156 - 0.432*0.061 - 0.151*0.367 \n",
"print('Incidence Angle Importance: {}, Phase Importance: {}'.format(imp_inc_ang,imp_phase))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f5646a13-1376-492c-8769-cae72aee7e56",
"metadata": {},
"outputs": [],
"source": [
"feature_impotanceRT = pd.DataFrame({\n",
" 'Unormalized Importance': tree.tree_.compute_feature_importances(normalize=False),\n",
" 'Normalized Importance': tree.feature_importances_\n",
"}, index=features)\n",
"feature_impotanceRT\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3427c1e0-8686-4255-8e90-f33acac8703b",
"metadata": {},
"outputs": [],
"source": [
"feature_impotanceRT.plot(kind = 'bar')\n",
"plt.title('Regression Tree Feature Importance')\n",
"plt.xlabel('Feature')\n",
"plt.ylabel('Importance Score')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "f89de23a-7512-45eb-ab02-1e26fa0af5d6",
"metadata": {},
"source": [
"### Exercise:\n",
"\n",
"Compute the normalized importance manually and compare it with sklearn's output above."
]
},
{
"cell_type": "markdown",
"id": "8370d9da-b62c-496b-bb8c-083ca636ac3b",
"metadata": {},
"source": [
"## Random Forest\n",
"\n",
"> If you want to go fast, go alone. If you want to go far, go together (African Proverb).\n",
"\n",
"Regression trees suffer from high variance, that is, they have a habit of overfitting the training observations. A straightforward way to reduce the variance is to employ the Random Forest (RF) algorithm. RF is an ensemble learning technique that can be used for both classification and regression. The term “ensemble learning” means combining multiple algorithms to obtain an improved version of the constituting algorithms. RF does this using the concept of Bagging.\n",
"\n",
"* **Bagging:** Bagging consists of two words; Bootstrap and Aggregation. Bootstrap is a popular resampling technique where random sub-samples (with replacement) are created from the training data. The individual bootstrapped sub-sample is called a “Bag.” Aggregation, however, means combining the results from machines trained on the respective bags. We could say, given our training data, we generate $B$ bootstrapped training sub-samples from the original training data, then we train an algorithm on the respective bootstrapped sub-sample to obtain $\\hat{F}^1 (\\text{x}), \\hat{F}^2 (\\text{x}), \\cdots , \\hat{F}^B (\\text{x})$. The final predicted value is the average of all the individual predictions and it is given by:\n",
"\\begin{equation}\n",
"\\hat{F}_{\\text{bag}} (\\text{x}) = \\frac{1}{B} \\sum_{b = 1}^{B} \\hat{F}^{b} (\\text{x}).\n",
"\\end{equation}\n",
"\n",
"This is bagging! In the RF algorithm, regression trees are built on the different bootstrapped samples and the result is a forest of regression trees, hence the \"forest\" in Random Forest. \n",
"\n",
"* **What makes the forest random?**\n",
"\n",
"Recall that when training a regression tree, splitting is done at each internal node, however, in the RF algorithm, when building the individual regression trees, only a random sample of the total features is considered for optimal splitting and this is what makes the forest random! In essence, when we combine bagging with random feature selection at each internal node of the constituting regression trees, we are said to have constructed a **Random Forest** learning machine."
]
},
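{
"cell_type": "markdown",
"id": "4d5e6f70",
"metadata": {},
"source": [
"The bagging equation above can be sketched directly: draw $B$ bootstrap sub-samples, fit a regression tree on each, and average the per-tree predictions. A minimal illustration on toy data (not the SnowEx data):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d5e6f71",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.tree import DecisionTreeRegressor\n",
"\n",
"rng = np.random.default_rng(0)\n",
"X_toy = rng.uniform(size=(200, 2))\n",
"y_toy = 2 * X_toy[:, 0] + X_toy[:, 1] + rng.normal(scale=0.1, size=200)\n",
"\n",
"B = 25  ## number of bootstrap bags\n",
"preds = []\n",
"for b in range(B):\n",
"    ## bootstrap: draw n observations with replacement\n",
"    idx = rng.integers(0, len(X_toy), size=len(X_toy))\n",
"    tree_b = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_toy[idx], y_toy[idx])\n",
"    preds.append(tree_b.predict(X_toy[:5]))\n",
"\n",
"## aggregation: the bagged prediction is the average over the B trees\n",
"bagged = np.mean(preds, axis=0)\n",
"print(bagged.shape)  ## (5,)"
]
},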
{
"cell_type": "markdown",
"id": "ec706415-66ea-42bc-95af-5fb72d615a16",
"metadata": {},
"source": [
"### Training Random Forest\n",
"\n",
"Sklearn's documentation for Random Forest Regression can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html). The documentation can also be gotten directly from your Jupyter Notebook using the `help` function as shown below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7bb2fe08-9d78-4028-8a3f-a108dadb49b2",
"metadata": {},
"outputs": [],
"source": [
"#help(RandomForestRegressor)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88f6c5d2-1674-4f47-a524-745c1e1ccac1",
"metadata": {},
"outputs": [],
"source": [
"rf = RandomForestRegressor(n_estimators=2,max_depth=2,random_state=0)\n",
"\n",
"rf.fit(train_dataset.drop(columns='snow_depth'), \n",
" train_dataset['snow_depth'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1496f8b8-5c3b-49be-aff9-6077a9c3418c",
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize = (25, 8))\n",
"\n",
"plt.subplot(121)\n",
"plot_tree(rf.estimators_[0], \n",
" filled = True, \n",
" fontsize = 12,\n",
" node_ids = True,\n",
" rounded = True,\n",
" proportion=True, \n",
" feature_names= features)\n",
"\n",
"plt.subplot(122)\n",
"plot_tree(rf.estimators_[1], \n",
" filled = True, \n",
" fontsize = 12,\n",
" node_ids = True,\n",
" rounded = True,\n",
" proportion=True, \n",
" feature_names= features)\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "3baf3717-a696-4619-aa89-0c040392528c",
"metadata": {},
"source": [
"### Making Predictions"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c96c0df3-83dc-4cf1-9151-1073749469d9",
"metadata": {},
"outputs": [],
"source": [
"sample"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5a1975bb-48fe-4c64-8fc5-5221d0a3c6be",
"metadata": {},
"outputs": [],
"source": [
"## Overall predictions\n",
"\n",
"rf.predict(sample)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b57b2b7-9c5c-445e-8ab1-c3b618321df4",
"metadata": {},
"outputs": [],
"source": [
"## Prediction from the first tree\n",
"\n",
"rf.estimators_[0].predict(sample)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e00462cf-3a49-4fa8-b943-abd711c2bed5",
"metadata": {},
"outputs": [],
"source": [
"## Prediction from the second tree\n",
"\n",
"rf.estimators_[1].predict(sample)"
]
},
{
"cell_type": "markdown",
"id": "906ecafc-1d53-468b-a84b-0ba7e33efb21",
"metadata": {},
"source": [
"Using the same approach as above, the first regression tree predicts 0.7072, and the second regression tree predicts 0.7065. Hence, the final prediction is;\n",
"\n",
"$$\\frac{0.7072+0.7065}{2} = 0.70685$$"
]
},
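{
"cell_type": "markdown",
"id": "5e6f7081",
"metadata": {},
"source": [
"This averaging is exactly what `RandomForestRegressor.predict` does for regression. A self-contained check on toy data (illustrative values only, not the SnowEx data):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e6f7082",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.ensemble import RandomForestRegressor\n",
"\n",
"rng = np.random.default_rng(0)\n",
"X_toy = rng.uniform(size=(100, 3))\n",
"y_toy = X_toy.sum(axis=1) + rng.normal(scale=0.1, size=100)\n",
"\n",
"toy_rf = RandomForestRegressor(n_estimators=2, max_depth=2, random_state=0).fit(X_toy, y_toy)\n",
"\n",
"## average of the individual trees' predictions ...\n",
"per_tree = np.mean([t.predict(X_toy[[0]]) for t in toy_rf.estimators_], axis=0)\n",
"\n",
"## ... matches the forest's prediction\n",
"print(np.isclose(per_tree[0], toy_rf.predict(X_toy[[0]])[0]))  ## True"
]
},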
{
"cell_type": "markdown",
"id": "6f004f12-682a-4aad-8b69-3af5e64f5ff4",
"metadata": {},
"source": [
"### Feature Importance\n",
"\n",
"The total amount by which the squared error loss is decreased due to splits over a given predictor is documented and averaged over all trees in the forest. The larger the value, the more important the variable."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41850105-da24-4fb4-88bb-069cec95199f",
"metadata": {},
"outputs": [],
"source": [
"feature_impotanceRF = pd.Series(rf.feature_importances_, index=features)\n",
"feature_impotanceRF.plot(kind = 'bar')\n",
"plt.xlabel('Feature')\n",
"plt.ylabel('Importance Score')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "7c1584d8-a3c0-4853-a9d4-46061e41d874",
"metadata": {},
"source": [
"### Exercise:\n",
"\n",
"Compute the random forest's normalized importance manually and compare it with sklearn's output above."
]
},
{
"cell_type": "markdown",
"id": "9f116aaf-3db0-40a1-9823-f00c83287e44",
"metadata": {},
"source": [
"### Putting it all together\n",
"\n",
"We'll use Sklearn's [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) to preprocess and fit our model in one shot."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "55b4649f-5078-4655-b859-2232bdfbb000",
"metadata": {},
"outputs": [],
"source": [
"rf_pipe = Pipeline([\n",
" ('Scaler', MinMaxScaler()),\n",
" ('regressor', RandomForestRegressor())\n",
"])\n",
"\n",
"rf_pipe.fit(train_dataset.drop(columns='snow_depth'), \n",
" train_dataset['snow_depth'])"
]
},
{
"cell_type": "markdown",
"id": "dc5d7715",
"metadata": {
"tags": []
},
"source": [
"## Neural Networks\n",
"\n",
"Neural networks are learning machines inspired by the functionality of biological neurons. They are not models of the brain, because there is no proven evidence that the brain operates the same way neural networks learn representations. However, some of the concepts were inspired by understanding the brain. Every NN consists of three distinct layers; \n",
"\n",
"1. Input layer \n",
"2. Hidden layer(s) and \n",
"3. Output layer. \n",
"\n",
"To understand the functioning of these layers, we begin by exploring the building blocks of neural networks; a single neuron or perceptron.\n",
"\n",
"### A perceptron or Single Neuron\n",
"\n",
"Each layer in a NN consists of small individual units called neurons (usually represented with a circle). A neuron receives inputs from other neurons, performs some mathematical operations, and then produces an output. Each neuron in the input layer represents a feature. In essence, the number of neurons in the input layer equals the number of features. Each neuron in the input layer is connected to every neuron in the hidden layer. The number of neurons in the hidden layer is not fixed, it is problem dependent and it is often determined via cross-validation or validation set approach in practice. \n",
"\n",
"\n",
"\n",
"The configuration of the hidden layer(s) controls the complexity of the network. A NN may contain more than one hidden layer. A NN with more than one hidden layer is called a **deep neural network** while that with a single hidden layer is a **shallow neural network**. The final layer of a neural network is called the output layer and it holds the predicted outcomes of observations passed into the input layer. Every inter-neuron connection has an associated weight, these weights are what the neural network learns during the training process. When connections between the layers do not form a loop, such networks are called **feedforward neural networks** and if otherwise, they are called **recurrent neural networks**. In this tutorial, we shall concentrate on the feed forward neural networks.\n",
"\n",
"\n",
"\n",
"#### How the perceptron works\n",
"\n",
"Consider a dataset $\\mathcal{D}_n = \\left\\lbrace (\\textbf{x}_1, y_1), (\\textbf{x}_2, y_2), \\cdots, (\\textbf{x}_n, y_n) \\right\\rbrace$ where $\\textbf{x}_i^\\top \\equiv ({x}_{i1}, {x}_{i2}, \\cdots, {x}_{ik})$ denotes the $k$-dimensional vector of features, and $y_i$ represents the corresponding outcome. Given a set of input fed into the network through the input layer, the output of a neuron in the hidden layer is\n",
"\n",
"$$\n",
" f(\\textbf{x}_i;\\textbf{w}) = g(w_0 + \\textbf{w}^\\top \\textbf{x}_i),\n",
"$$\n",
"\n",
"where $\\textbf{w} = (w_1, w_2, \\cdots, w_k)^\\top$ is a vector of weights and $w_0$ is the bias term associated with the neuron. The weights can be thought of as the slopes in a linear regression and the bias as the intercept. The function $g(\\cdot)$ is known as the activation function and it is used to introduce non-linearity into the network.\n",
"\n",
"\n",
"\n",
"$\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad$ A Perceptron\n",
"\n",
"There exists a number of activation functions in practice, the commonly used ones are\n",
"\n",
"1. Sigmoid or Logistic activation function $$g(z) = \\frac{1}{1 + e^{-z}}.$$\n",
"2. Hyperbolic Tangent activation function $$g(z) = \\tanh (z) = \\frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}.$$\n",
"3. Rectified Linear Unit activation function $$g(z) = \\max(0,z).$$"
]
},
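{
"cell_type": "markdown",
"id": "9b2d4f6a-1e3c-45a7-8c9d-0f1e2a3b4c5d",
"metadata": {},
"source": [
"A single neuron's computation can be written out numerically. A minimal sketch with hypothetical weights and inputs (not values learned from data), using the sigmoid activation:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"x = np.array([0.5, -1.0, 2.0])   # k = 3 input features (hypothetical)\n",
"w = np.array([0.2, 0.4, -0.1])   # weights (hypothetical)\n",
"w0 = 0.3                         # bias\n",
"\n",
"z = w0 + w @ x                   # linear combination: z = -0.2\n",
"output = 1 / (1 + np.exp(-z))    # sigmoid activation g(z)\n",
"print(round(output, 4))          # 0.4502\n",
"```"
]
},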
{
"cell_type": "markdown",
"id": "2887e834",
"metadata": {},
"source": [
"### Visualizing Activation Functions"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8fff65bb",
"metadata": {},
"outputs": [],
"source": [
"def Sigmoid(z):\n",
" \"\"\"\n",
" A function that performs the sigmoid transformation\n",
" \n",
" Arguments:\n",
" ---------\n",
" -* z: array/list of numbers to activate\n",
" \n",
" Returns:\n",
" --------\n",
" -* logistic: the transformed/activated version of the array\n",
" \"\"\"\n",
" \n",
" logistic = 1/(1+ np.exp(-z))\n",
" return logistic\n",
" \n",
"\n",
"def Tanh(z):\n",
" \"\"\"\n",
" A function that performs the hyperbolic tangent transformation\n",
" \n",
" Arguments:\n",
" ---------\n",
" -* z: array/list of numbers to activate\n",
" \n",
" Returns:\n",
" --------\n",
" -* hyp: the transformed/activated version of the array\n",
" \"\"\"\n",
" \n",
" hyp = np.tanh(z)\n",
" return hyp\n",
"\n",
"\n",
"def ReLu(z):\n",
" \"\"\"\n",
" A function that performs the hyperbolic tangent transformation\n",
" \n",
" Arguments:\n",
" ---------\n",
" -* z: array/list of numbers to activate\n",
" \n",
" Returns:\n",
" --------\n",
" -* points: the transformed/activated version of the array\n",
" \"\"\"\n",
" \n",
" points = np.where(z < 0, 0, z)\n",
" return points\n",
"\n",
"z = np.linspace(-10,10)\n",
"fa = plt.figure(figsize=(16,5))\n",
"\n",
"plt.subplot(1,3,1)\n",
"plt.plot(z,Sigmoid(z),color=\"red\", label=r'$\\frac{1}{1 + e^{-z}}$')\n",
"plt.grid(True, which='both')\n",
"plt.xlabel('z')\n",
"plt.ylabel('g(z)', fontsize=15)\n",
"plt.title(\"Sigmoid Activation Function\")\n",
"plt.legend(loc='best',fontsize = 22)\n",
"\n",
"\n",
"plt.subplot(1,3,2)\n",
"plt.plot(z,Tanh(z),color=\"red\", label=r'$\\tanh (z)$')\n",
"plt.grid(True, which='both')\n",
"plt.xlabel('z')\n",
"plt.ylabel('g(z)', fontsize=15)\n",
"plt.title(\"Hyperbolic Tangent Activation Function\")\n",
"plt.legend(loc='best',fontsize = 18)\n",
"\n",
"plt.subplot(1,3,3)\n",
"plt.plot(z,ReLu(z),color=\"red\", label=r'$\\max(0,z)$')\n",
"plt.grid(True, which='both')\n",
"plt.xlabel('z')\n",
"plt.ylabel('g(z)', fontsize=15)\n",
"plt.title(\"Rectified Linear Unit Activation Function\")\n",
"plt.legend(loc='best', fontsize = 18)"
]
},
{
"cell_type": "markdown",
"id": "5f928ad1",
"metadata": {},
"source": [
"### Families of Neural Networks\n",
"\n",
"1. Feedforward Neural Networks: often used for structured data.\n",
"2. Convolutional Neural Networks: the gold standard for computer vision.\n",
"3. Transfer Learning: reusing knowledge from previous tasks on new tasks.\n",
"4. Recurrent Neural Networks: well suited for sequence data such as texts, time series, drawing generation.\n",
"5. Encoder-Decoders: commonly used for but not limited to machine translation.\n",
"6. Generative Adversarial Networks: usually used for 3D modelling for video games, animation, etc.\n",
"7. Graph Neural Networks: usually used to work with graph data."
]
},
{
"cell_type": "markdown",
"id": "aa1696df-e325-450c-9f0c-7dc7e0996546",
"metadata": {},
"source": [
"## Neural Network Setup\n",
"\n",
"Load libraries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3406e044-55f1-4380-b816-df474debb269",
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf # Keras is part of standard TensorFlow library (Keras is available as a submodule in the TensorFlow library)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e41c6a4e-5e1f-4e47-a221-5965910045a4",
"metadata": {},
"outputs": [],
"source": [
"### Check the version of Tensorflow and Keras\n",
"print(\"TensorFlow version ==>\", tf.__version__) \n",
"print(\"Keras version ==>\",tf.keras.__version__)"
]
},
{
"cell_type": "markdown",
"id": "4c54107c",
"metadata": {
"tags": []
},
"source": [
"### Feedforward Neural Network\n",
"\n",
"* **Specify Architecture**\n",
"\n",
"Here, we shall include three hidden layers with 1000, 512, and 256 neurons respectively. Layers added using the \"$\\texttt{network.add($\\cdots$)}$\" command."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "546c2382",
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(1000) ## For reproducible results\n",
"network = tf.keras.models.Sequential() # Specify layers in their sequential order\n",
"# inputs are 4 dimensions (4 dimensions = 4 features)\n",
"# Dense = Fully Connected. \n",
"# First hidden layer has 1000 neurons with relu activations.\n",
"# Second hidden layer has 512 neurons with relu activations\n",
"# Third hidden layer has 256 neurons with Sigmoid activations\n",
"network.add(tf.keras.layers.Dense(1000, activation='relu' ,input_shape=(train_X.shape[1],)))\n",
"network.add(tf.keras.layers.Dense(512, activation='relu')) # sigmoid, tanh\n",
"network.add(tf.keras.layers.Dense(256, activation='sigmoid'))\n",
"# Output layer uses no activation with 1 output neurons\n",
"network.add(tf.keras.layers.Dense(1)) # Output layer"
]
},
{
"cell_type": "markdown",
"id": "d143433f",
"metadata": {},
"source": [
"* **Compile**\n",
"\n",
"The learning rate controls how the weights and bias are optimized. A discussion of the optimization procedure for Neural Networks can be found in Appendix B. If the learning rate is too small the training may get stuck, while if it is too big the trained model may be unreliable."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cbd2a1a8",
"metadata": {},
"outputs": [],
"source": [
"opt = tf.keras.optimizers.Adam(learning_rate=0.0001)\n",
"network.compile(optimizer = opt, loss='mean_squared_error', \n",
" metrics = [tf.keras.metrics.RootMeanSquaredError()])"
]
},
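{
"cell_type": "markdown",
"id": "4c8e2a1f-7b9d-43e5-a2f6-1c3d5e7f9a0b",
"metadata": {},
"source": [
"The effect of the learning rate described above can be seen in a one-parameter sketch: minimizing $f(w) = w^2$ (gradient $2w$) with different values of $\\eta$. The values below are illustrative, not tuned for any real model.\n",
"\n",
"```python\n",
"# minimize f(w) = w**2, whose gradient is 2*w\n",
"def descend(eta, steps=20, w=1.0):\n",
"    for _ in range(steps):\n",
"        w = w - eta * 2 * w   # gradient step: w <- w - eta * grad\n",
"    return w\n",
"\n",
"print(descend(0.01))   # too small: barely moves toward the minimum at 0\n",
"print(descend(0.4))    # suitable: converges to ~0\n",
"print(descend(1.1))    # too large: diverges\n",
"```"
]
},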
{
"cell_type": "markdown",
"id": "ab0b7e2e",
"metadata": {},
"source": [
"* **Print Architecture**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c4c4ba",
"metadata": {},
"outputs": [],
"source": [
"network.summary()"
]
},
{
"cell_type": "markdown",
"id": "d9ac327d",
"metadata": {},
"source": [
"Number of weights connecting the input and the first hidden layer = (1000 $\\times$ 4) + 1000(bias) = 5000 \n",
"\n",
"* 4 = $\\texttt{number of features}$\n",
"* 1000 = $\\texttt{number of nodes in the first hidden layer}$\n",
"\n",
"Number of weights connecting the first hidden layer and the second hidden layer = (1000 $\\times$ 512) + 512(bias) = 512512 \n",
"\n",
"* 512 = $\\texttt{number of nodes in the second hidden layer}$\n",
"* 1000 = $\\texttt{number of nodes in the first hidden layer}$\n",
"\n",
"Number of weights connecting the second hidden layer and the third hidden layer = (512 $\\times$ 256) + 256(bias) = 131328 \n",
"\n",
"* 256 = $\\texttt{number of nodes in the third hidden layer}$\n",
"* 512 = $\\texttt{number of nodes in the second hidden layer}$\n",
"\n",
"Number of weights connecting the third (last) hidden and the output layers = (256 $\\times$ 1) + 1(bias) = 257 \n",
"\n",
"* 256 = $\\texttt{number of nodes in the third hidden layer}$\n",
"* 1 = $\\texttt{number of nodes in the output layer}$"
]
},
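{
"cell_type": "markdown",
"id": "2e6a9c4d-5f1b-48a3-9d7e-3b5c7a9e1f2d",
"metadata": {},
"source": [
"These counts follow a simple rule that we can check with plain Python: a dense layer with $n$ inputs and $m$ units has $n \\times m$ weights plus $m$ biases.\n",
"\n",
"```python\n",
"def dense_params(n_inputs, n_units):\n",
"    # weights connecting every input to every unit, plus one bias per unit\n",
"    return n_inputs * n_units + n_units\n",
"\n",
"layer_shapes = [(4, 1000), (1000, 512), (512, 256), (256, 1)]\n",
"counts = [dense_params(n, m) for n, m in layer_shapes]\n",
"print(counts)       # [5000, 512512, 131328, 257]\n",
"print(sum(counts))  # 649097 total trainable parameters\n",
"```"
]
},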
{
"cell_type": "markdown",
"id": "cd2b134a",
"metadata": {},
"source": [
"* **Fit Model**\n",
"\n",
"Here we find the neural network weights using the data in the training set. An epoch is when an entire batch of the training set has been used. The number of times we iterate over the dataset is known as the number of **epochs**. The validation split is the fraction of the training set that will be used to evaluate the loss at the end of each epoch."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7fa3f0a",
"metadata": {},
"outputs": [],
"source": [
"start_time = time.time()\n",
"# NOTE: Change to verbose=1 for per-epoch output\n",
"history =network.fit(train_X, train_y, epochs=50, validation_split = 0.2, verbose=0)\n",
"print('Total time taken (mins): ', round(float((time.time()-start_time)/60),2))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3813bf3",
"metadata": {},
"outputs": [],
"source": [
"plt.plot(history.history['loss'], label='Training loss')\n",
"plt.plot(history.history['val_loss'], label='Validation loss')\n",
"plt.xlabel('Epoch')\n",
"plt.ylabel('Error')\n",
"plt.legend()\n",
"plt.grid(True)"
]
},
{
"cell_type": "markdown",
"id": "9f3b8c66",
"metadata": {},
"source": [
"### Prediction\n",
"\n",
"We have used Machine Learning to build two computer models $F$ with SnowEx data. Now we will use the computer models to estimate snow depth for a given set of features (phase, coherence, amplitude, incidence angle)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "14dbdda1",
"metadata": {},
"outputs": [],
"source": [
"## Random Forest\n",
"\n",
"yhat_RF = rf_pipe.predict(test_dataset.drop(columns=['snow_depth']))\n",
"\n",
"\n",
"## Feed Forward Neural Network\n",
"yhat_dnn = network.predict(test_X).flatten() \n",
"\n",
"\n",
"## True Snow Depth (Test Set)\n",
"observed = test_dataset[\"snow_depth\"].values\n",
"\n",
"\n",
"## Put Observed and Predicted (Linear Regression and DNN) in a Dataframe\n",
"prediction_df = pd.DataFrame({\"Observed\": observed,\n",
" \"RF\":yhat_RF, \"DNN\":yhat_dnn})"
]
},
{
"cell_type": "markdown",
"id": "edfeaf14",
"metadata": {},
"source": [
"### Check Performance"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc18682a-44ab-4531-8ab1-c4171f3b5444",
"metadata": {},
"outputs": [],
"source": [
"def evaluate_model(y_true, y_pred, model_name, dp = 2):\n",
" RMSE = np.sqrt(mean_squared_error(y_true=y_true, y_pred=y_pred)).round(dp)\n",
" MAE = mean_absolute_error(y_true=y_true, y_pred=y_pred).round(dp)\n",
" MAPE = mean_absolute_percentage_error(y_true=y_true, y_pred=y_pred).round(dp)*100\n",
" RSQ = r2_score(y_true=y_true, y_pred=y_pred).round(dp)*100\n",
" \n",
" score_df = pd.DataFrame({\n",
" model_name: [RMSE, MAE, MAPE, RSQ]\n",
" }, index = ['RMSE(in)', 'MAE(in)', 'MAPE(%)', 'RSQ(%)'])\n",
" \n",
" return score_df"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3caee6f5-bc66-4e38-b055-c6447db0c5b1",
"metadata": {},
"outputs": [],
"source": [
"RF_evaluation=evaluate_model(prediction_df.RF, prediction_df.Observed, 'Random Forest')\n",
"NN_evaluation=evaluate_model(prediction_df.DNN, prediction_df.Observed, 'Neural Network')\n",
"\n",
"comparison=pd.concat([RF_evaluation, NN_evaluation], axis=1)\n",
"comparison"
]
},
{
"cell_type": "markdown",
"id": "6e834f74",
"metadata": {},
"source": [
"### Visualize Performance"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "274fe94f",
"metadata": {},
"outputs": [],
"source": [
"fa = plt.figure(figsize=(16,5))\n",
"plt.subplot(1,2,1)\n",
"plt.scatter(prediction_df['Observed'],prediction_df['RF'])\n",
"plt.xlabel('True Values [snow_depth]', fontsize=15)\n",
"plt.ylabel('Predictions [snow_depth]', fontsize=15)\n",
"plt.title(\"Random Forest\")\n",
"\n",
"\n",
"plt.subplot(1,2,2)\n",
"plt.scatter(prediction_df['Observed'],prediction_df['DNN'])\n",
"plt.xlabel('True Values [snow_depth]', fontsize=15)\n",
"plt.ylabel('Predictions [snow_depth]', fontsize=15)\n",
"plt.title(\"Deep Neural Network\")\n"
]
},
{
"cell_type": "markdown",
"id": "415d4650",
"metadata": {},
"source": [
"### Visualize Error"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e6b714d",
"metadata": {},
"outputs": [],
"source": [
"RF_error = prediction_df['Observed'] - prediction_df['RF']\n",
"DNN_error = prediction_df['Observed'] - prediction_df['DNN']\n",
"\n",
"fa = plt.figure(figsize=(16,5))\n",
"\n",
"plt.subplot(1,2,1)\n",
"RF_error.hist()\n",
"plt.xlabel('Error', fontsize=15)\n",
"plt.ylabel('Frequency', fontsize=15)\n",
"plt.title(\"Random Forest\")\n",
"\n",
"plt.subplot(1,2,2)\n",
"DNN_error.hist()\n",
"plt.xlabel('Error', fontsize=15)\n",
"plt.ylabel('Frequency', fontsize=15)\n",
"plt.title(\"Deep Neural Network\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "24597f9c",
"metadata": {},
"source": [
"### Main Challenges of Machine Learning\n",
"\n",
"1. Bad Data (insufficient training data, nonrepresentative training data, irrelevant features)\n",
"2. Bad Algorithm (overfitting the training the data, underfitting the training data)"
]
},
{
"cell_type": "markdown",
"id": "f0ed3d18",
"metadata": {},
"source": [
"### Improving your Model\n",
"\n",
"Here are some ways to improve your machine learning models;\n",
"\n",
"1. Regularization (early stopping, dropout, etc)\n",
"2. Hyperparameter Optimization"
]
},
{
"cell_type": "markdown",
"id": "72cb972e-7e51-4454-8b87-b19f4210144d",
"metadata": {},
"source": [
"### Hyperparameter Optimization\n",
"\n",
"Hyperameters are parameters that are a step above other model parameters because their values have to be specified before fitting the model. They are not learned during the training process. In practice, different values are often \"tuned\" to obtain the optimal value, called hyperparameter optimization. Hyperparameter optimization is often done via a cross-validation (CV) or validation set approach. \n",
"\n",
"* In the validation set approach, we hold out a portion of the training set and use the held-out portion to test the performance of different hyperparameters.\n",
"* Cross-validation: the training data is divided into *k* disjoint sets. The idea is to use one of the k folds as test set and the remaining k − 1 folds as training set. The algorithm then changes the test set until every fold has served as test set. Performance metric is computed at each iteration and averaged in the end.Sklearn's implementation of grid search cross validation can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html).\n",
"\n",
"\n",
"\n",
"\n",
"$\\qquad \\qquad \\qquad \\qquad$ Source [Machine Learning Bookcamp](https://www.manning.com/books/machine-learning-bookcamp?query=machine)\n",
"\n",
"* **Random Forest Cross-validation**"
]
},
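{
"cell_type": "markdown",
"id": "8d1f3b5a-2c4e-46d8-b9a0-7e5f3a1c9b2d",
"metadata": {},
"source": [
"The fold rotation described above can be sketched with sklearn's `KFold` on a handful of hypothetical indices; every observation lands in the test fold exactly once.\n",
"\n",
"```python\n",
"import numpy as np\n",
"from sklearn.model_selection import KFold\n",
"\n",
"kf = KFold(n_splits=3, shuffle=True, random_state=0)\n",
"seen_as_test = []\n",
"for train_idx, test_idx in kf.split(np.arange(12)):\n",
"    assert len(train_idx) + len(test_idx) == 12   # each split partitions the data\n",
"    seen_as_test.extend(test_idx.tolist())\n",
"\n",
"assert sorted(seen_as_test) == list(range(12))    # each index tested exactly once\n",
"```"
]
},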
{
"cell_type": "code",
"execution_count": null,
"id": "e5a60e41-b0fc-44f3-8f86-164f9c64d0bc",
"metadata": {},
"outputs": [],
"source": [
"rf_pipe.get_params() #list of hyperparameters you may tune"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fb42adf2-7ea7-48f9-b146-7e0b8c8b1d35",
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"parameter_grid = {'regressor__min_samples_split': [2,3],\n",
" 'regressor__n_estimators': [50, 100], \n",
" 'regressor__max_depth': [3, 5]}\n",
"\n",
"rf_pipeline_CV = GridSearchCV(rf_pipe, parameter_grid, verbose=1, cv=3) ## set njobs = -1 to use all processors. cv defaults to 5\n",
"rf_pipeline_CV.fit(train_dataset.drop(columns=['snow_depth']), \n",
" train_dataset['snow_depth'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1531814e-a505-4574-ae35-8581210d812d",
"metadata": {},
"outputs": [],
"source": [
"print('Best Parameters are: {}'.format(rf_pipeline_CV.best_params_))\n",
"print()\n",
"pred=rf_pipeline_CV.best_estimator_.predict(test_dataset.drop(columns=['snow_depth']))\n",
"evaluate_model(pred, prediction_df.Observed, 'Random Forest (CV)')"
]
},
{
"cell_type": "markdown",
"id": "97d0ecda-9329-4994-9b77-e35d9c45e09a",
"metadata": {},
"source": [
"* **Neural Network Cross-validation**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "291b6d54-83d3-48cd-b228-94d766739a2b",
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(1000) ## For reproducible results\n",
"network = tf.keras.models.Sequential() # Specify layers in their sequential order\n",
"# inputs are 4 dimensions (4 dimensions = 4 features)\n",
"# Dense = Fully Connected. \n",
"# First hidden layer has 1000 neurons with relu activations.\n",
"# Second hidden layer has 512 neurons with relu activations\n",
"# Third hidden layer has 256 neurons with Sigmoid activations\n",
"network.add(tf.keras.layers.Dense(1000, activation='relu' ,input_shape=(train_X.shape[1],)))\n",
"network.add(tf.keras.layers.Dense(512, activation='relu')) # sigmoid, tanh\n",
"network.add(tf.keras.layers.Dense(256, activation='sigmoid'))\n",
"# Output layer uses no activation with 1 output neurons\n",
"network.add(tf.keras.layers.Dense(1)) # Output layer"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fcb7856d-c5f9-48a8-8b4f-48c4c7c9b862",
"metadata": {},
"outputs": [],
"source": [
"from tensorflow.keras.wrappers.scikit_learn import KerasRegressor\n",
"\n",
"\n",
"def ANN(hidden_node1 = 512, hidden_node2 = 256, activation = 'relu', optimizer = 'adam'):\n",
" tf.random.set_seed(100)\n",
" model = tf.keras.models.Sequential()\n",
" model.add(tf.keras.layers.Dense(units = hidden_node1, activation = activation))\n",
" model.add(tf.keras.layers.Dropout(0.5))\n",
" model.add(tf.keras.layers.Dense(units = hidden_node2, activation = activation))\n",
" model.add(tf.keras.layers.Dropout(0.35))\n",
" model.add(tf.keras.layers.Dense(units = 1))\n",
" \n",
" model.compile(loss = 'mean_squared_error',\n",
" optimizer = optimizer,\n",
" metrics = [tf.keras.metrics.RootMeanSquaredError()])\n",
" return model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7c1ef78-e8b4-4ae0-90e7-ce1dd4bbea15",
"metadata": {},
"outputs": [],
"source": [
"ann_pipeline = Pipeline([\n",
" ('Scaler', MinMaxScaler()),\n",
" ('regressor', KerasRegressor(build_fn=ANN, batch_size = 32, epochs = 10, validation_split = 0.2, verbose = 0))\n",
"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fb79f85d-a105-455c-ae10-4a6058bab427",
"metadata": {},
"outputs": [],
"source": [
"param_grid = {'regressor__optimizer': ['adam', 'rmsprop'],\n",
" #'regressor__epochs': [5, 10],\n",
" 'regressor__hidden_node1': [50, 75],\n",
" 'regressor__hidden_node2': [50, 75],\n",
" #'regressor__activation': ['tanh', 'sigmoid']\n",
" }\n",
"\n",
"ann_cv = GridSearchCV(ann_pipeline, param_grid=param_grid, verbose=0, cv=2)\n",
"ann_cv.fit(train_dataset.drop(columns=['snow_depth']).values, \n",
" train_dataset['snow_depth'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dd9a1b91-5a57-4be0-8df0-a2ede0f5df02",
"metadata": {},
"outputs": [],
"source": [
"print('Best Parameters are: {}'.format(ann_cv.best_params_))\n",
"print()\n",
"pred=ann_cv.best_estimator_.predict(test_dataset.drop(columns=['snow_depth']))\n",
"evaluate_model(pred, prediction_df.Observed, 'ANN (CV)')"
]
},
{
"cell_type": "markdown",
"id": "63e1f1bc-4ea7-4711-9776-e7d5df2aab4f",
"metadata": {
"tags": []
},
"source": [
"## Deploying Machine Learning Models\n",
"\n",
"Now, our model lives in a Jupyter Notebook. Once we close our Jupyter Notebook or restart our kernel, the model disappears. How do we take our model out of our Jupyter Notebook? Model deployment is mainly concerned with putting optimal models into use/production. Steps for deploying our machine learning models:\n",
"\n",
"1. Save the model (e.g. using pickle)\n",
"2. Serve the model: this is the process of making the model available to others. This is usually done using web services. A simple way to to implement a web service in Python is to use [Flask](https://flask.palletsprojects.com/en/2.1.x/).\n",
"3. Deployment: We don't run production services on personal machines, we need special services for that. Sample services include; Amazon Web Services, Google Cloud, Microsoft Azure, and Digital Ocean. These services are used to deploy a web service.\n",
"\n",
"Other approaches for model deployments are:\n",
"\n",
"* TensorFlow Lite: TensorFlow Lite is a lightweight alternative to “full” TensorFlow. It is a serverless approach that makes deploying models with AWS Lambda faster and simpler.\n",
"* Kubernetes: it is a powerful but uneasy tool for model deployment. More information can be found [here](https://kubernetes.io/)."
]
},
{
"cell_type": "markdown",
"id": "a54e3f06-4388-4cf9-b1d7-fcf765fcd47a",
"metadata": {},
"source": [
"### Save Models\n",
"\n",
"* Deep Learning"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c026b7ed-2d9c-485d-ad03-7384e93e6d55",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.listdir()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0147d8d0-361e-47ba-a478-98bf18209f7b",
"metadata": {},
"outputs": [],
"source": [
"network.save('SWE_Prediction_DNN')\n",
"\n",
"## To load model, use;\n",
"reloaded_dnn = tf.keras.models.load_model('SWE_Prediction_DNN')"
]
},
{
"cell_type": "markdown",
"id": "60ada484-3bec-47dc-9c16-94e52e20d6e8",
"metadata": {},
"source": [
"* Random Forest"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "844fc4c9-e7f8-4c9a-8b82-0f8ef438f36e",
"metadata": {},
"outputs": [],
"source": [
"import pickle\n",
"\n",
"f_out = open('SWE_Prediction_RF.bin', 'wb')\n",
"pickle.dump(rf_pipe, f_out)\n",
"f_out.close()\n",
"\n",
"\n",
"## To load model, use;\n",
"f_in=open('SWE_Prediction_RF.bin', 'rb')\n",
"reloaded_rf = pickle.load(f_in)\n",
"f_in.close()"
]
},
{
"cell_type": "markdown",
"id": "5db4f9a7",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
"## Your Turn \n",
"\n",
"Using the same dataset, perform hyperparameter optimization for a range of more comprehensive hyperparameter values and compare the performance of the two models. "
]
},
{
"cell_type": "markdown",
"id": "ca50aef5",
"metadata": {},
"source": [
"## Acknowledgements\n",
"\n",
"* Many thanks to HP Marshall for allowing me present this tutorial. \n",
"* Many thanks to e-Science institute and all organizing members for allowing me deploy my tutorial."
]
},
{
"cell_type": "markdown",
"id": "90c9c2c7",
"metadata": {},
"source": [
"## Reference\n",
"\n",
"1. [Santiago on Twitter](https://twitter.com/svpino)\n",
"2. [A Taxonomy of Big Data for Optimal Predictive Machine Learning and Data Mining by Ernest Fokoue](https://arxiv.org/abs/1501.00604)\n",
"3. [An Introduction to Statistical Learning with Applications in R](https://link.springer.com/book/10.1007%2F978-1-4614-7138-7) (available online for free)\n",
"4. [Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems 2nd Edition](https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646)\n",
"5. [MIT 6.S191: Introduction to Deep Learning](https://www.youtube.com/channel/UCtslD4DGH6PKyG_1gFAX7sg)\n",
"6. [Intro to Deep Learning (ML Tech Talks)](https://www.youtube.com/watch?v=AhE8RhPGH1A&t=2685s)\n",
"7. [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python)\n",
"8. [Machine Learning Bookcamp](https://www.manning.com/books/machine-learning-bookcamp?query=machine)"
]
},
{
"cell_type": "markdown",
"id": "5049f18c",
"metadata": {},
"source": [
"## Appendix \n",
"\n",
"### Neural Network Weights\n",
"\n",
"Building up a NN with at least one hidden layer requires only repeating the mathematical operation illustrated for the perceptron for every neuron in the hidden layer(s). Using the feed-forward image displayed above:\n",
"\n",
"$$z_j = w_{0,j}^{(1)} + \\textbf{w}_j^{\\top(1)}\\textbf{x}_i$$\n",
"\n",
"The final output (predicted value) is;\n",
"\n",
"$$\\hat{y}_i =h(w_{0,j}^{(2)} + \\textbf{w}_j^{\\top(2)}g(\\textbf{z}_i))$$\n",
"\n",
"where $\\textbf{w}_j^{(1)}$ is the vector of weights connecting the input layer to the $j$th neuron of the hidden layer, $\\textbf{w}_j^{(2)}$ is the vector of weights connecting the $j$th neuron in the hidden layer to the output neuron, and $h(\\cdot)$ is the activation function applied to the output layer (if any).\n",
"\n",
"### Training a Neural Network (Finding Optimal Weights for Prediction)\n",
"\n",
"A neural network is trained using the gradient descent approach.\n",
"\n",
"**Gradient Decent**: the gradient descent algorithm is as follows:\n",
"\n",
"1. Initialize weights and biases with random numbers.\n",
"2. Loop until convergence:\n",
" 1. Compute the gradient; $\\frac{\\partial \\widehat{\\mathcal{R}}_n(\\textbf{w})}{\\partial \\textbf{w}}$\n",
" 2. Updated weights; $\\textbf{w} \\leftarrow \\textbf{w} - \\eta \\frac{\\partial \\widehat{\\mathcal{R}}_n(\\textbf{w})}{\\partial \\textbf{w}}$\n",
"3. Return the weights\n",
"\n",
"This is repeated for the bias and every single weight. The $\\eta$ in step 2(B) of the gradient descent algorithm is called the learning rate. It is a measure of how fast we descend the hill (since we are moving from large to small errors). Large learning rates may cause divergence and very small learning rates may take too many iterations before convergence. is performed. In neural networks, the gradient ($\\frac{\\partial \\widehat{\\mathcal{R}}_n(\\textbf{w})}{\\partial \\textbf{w}}$) is computed using the Backpropagation approach. The details of backpropagation is beyond the scope of this tutorial.\n"
]
}
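,
{
"cell_type": "markdown",
"id": "5a7c9e1b-3d5f-42a8-8e6c-1f9b7d5a3c2e",
"metadata": {},
"source": [
"The loop above can be sketched in NumPy for a linear model $\\hat{y} = \\textbf{X}\\textbf{w}$ with squared error loss, where the gradient is $\\frac{2}{n}\\textbf{X}^\\top(\\textbf{X}\\textbf{w} - \\textbf{y})$. The data here are hypothetical toy values, not the SnowEx set.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.RandomState(0)\n",
"X = rng.normal(size=(100, 2))\n",
"true_w = np.array([2.0, -3.0])\n",
"y = X @ true_w\n",
"\n",
"w = np.zeros(2)                                # step 1: initialize weights\n",
"eta = 0.1                                      # learning rate\n",
"for _ in range(500):                           # step 2: loop until convergence\n",
"    grad = (2.0 / len(y)) * X.T @ (X @ w - y)  # 2A: compute the gradient\n",
"    w = w - eta * grad                         # 2B: update the weights\n",
"\n",
"print(np.round(w, 4))                          # step 3: recovers ~[2.0, -3.0]\n",
"```"
]
}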
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}