{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Classification\n",
"\n",
"###### COMP4670/8600 - Statistical Machine Learning - Tutorial"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Textbook Questions\n",
"These questions are hand-picked both to be of reasonable difficulty and to demonstrate what you are expected to be able to solve. In Bishop, each question is labelled $\\star$, $\\star\\star$, or $\\star\\star\\star$ to rate its difficulty.\n",
"\n",
"- **Question 4.4**: If you are unfamiliar with Lagrange multipliers, look at Appendix E of the textbook. (Difficulty $\\star$, simple algebraic derivation)\n",
"- **Question 4.5**: (Difficulty $\\star$, simple algebraic derivation)\n",
"- **Question 1.24**: Note that in the equation $L_{kj}=1-I_{kj}$, $I$ is the identity matrix, so if $k=j$ then $I_{kj}=1$ and $L_{kj}=1-1=0$. (Difficulty $\\star\\star$, requires good understanding of the formulation of how to minimise expected loss)\n",
"- **Question 1.25**: This requires calculus of variations (used much more later in the course), which is in Appendix D, specifically the Euler-Lagrange result. Assume that everything is continuous and continuously differentiable so that you can bring the differentiation inside the integral sign. (Difficulty $\\star$, simple extension of proof in textbook to multiple target variables)\n",
"- **Question 4.9**: First state the likelihood. When maximising it, what constraints need to be imposed? Given such constraints, use Lagrange multipliers to derive the results. (Difficulty $\\star$, simple algebraic derivation)\n",
"- **Question 4.10**: For the covariance matrix, you should be able to derive the result using only identities from [Sam Roweis' Matrix Identities](https://cs.nyu.edu/~roweis/notes/matrixid.pdf). Note you can use the cyclic property of the trace on $$(x_n-\\mu_k)^T\\Sigma^{-1}(x_n-\\mu_k)$$ as it is a $1 \\times 1$ matrix (a scalar). (Difficulty $\\star\\star$, covariance matrix derivation requires uncommon identities)\n",
"- **Question 4.11**: (Difficulty $\\star\\star$, short derivation but requires understanding what the question setup allows you to apply)\n",
"- **Question 4.12**: (Difficulty $\\star$, simple algebraic derivation)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this lab we will build, train, and test a logistic regression classifier.\n",
"\n",
"### Assumed knowledge:\n",
"\n",
"- Optimisation in Python (lab)\n",
"- Regression (lab)\n",
"- Binary classification with logistic regression (lectures)\n",
"\n",
"### After this lab, you should be comfortable with:\n",
"\n",
"- Implementing logistic regression\n",
"- Practical binary classification problems"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import scipy.optimize as opt\n",
"\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The data set\n",
"\n",
"We will be working with the census-income dataset, which shows income levels for people in the 1994 US Census. We will predict whether a person has $\\leq \\$50000$ or $> \\$50000$ income per year.\n",
"\n",
"The data are included with this notebook as `04-dataset.tsv`, a text file in which the entries in each row are delimited by tab characters. Download the data from the [course website](https://machlearn.gitlab.io/sml2020/tutorials/04-dataset.tsv).\n",
"Load the data into a NumPy array called `data` using `numpy.genfromtxt`:\n",
"\n",
"```python\n",
" numpy.genfromtxt(filename)\n",
"```\n",
"\n",
"The column names are given in the variable `columns` below.\n",
"The `income` column contains the targets; the remaining columns form the features we will use to predict `income`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"columns = ['income', 'age', 'education', 'private-work', 'married', 'capital-gain', 'capital-loss', 'hours-per-week']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_raw = np.genfromtxt(\"04-dataset.tsv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Recap - Binary classification\n",
"\n",
"The idea behind this lab is that for each person, we want to\n",
"predict whether their income is above the threshold of $\\$50,000$ or not,\n",
"based on other recorded information about them: `age, education,...`.\n",
"\n",
"As per usual, for the $n^\\text{th}$ row, the first entry is the target $t_n$, and the rest\n",
"forms the data vector $\\mathbf{x}_n$.\n",
"\n",
"We have two classes: $C_1$, representing incomes $> \\$50,000$, which corresponds to\n",
"a target of $t_n = 1$, and $C_2$, representing incomes $\\leq \\$50,000$, corresponding to\n",
"a target of $t_n = 0$. Our objective is to learn a discriminative function $f_{\\mathbf{w}}(\\mathbf{x})$,\n",
"parametrised by a weight vector $\\mathbf{w}$ that\n",
"predicts which income class the person is in, based on the data given.\n",
"\n",
"We assume that each observation $(t_n, \\mathbf{x}_n)$ is drawn i.i.d. from\n",
"some unknown underlying probability distribution.\n",
"We will construct a likelihood function that indicates \"What is the likelihood of this particular\n",
"weight vector $\\mathbf{w}$ having generated the observed training data $\\left\\{(t_n, \\mathbf{x}_n)\\right\\}_{n=1}^N$\".\n",
"\n",
"## Recap - Feature map, basis function\n",
"\n",
"Some classes are not linearly separable (we cannot draw a line such that all of one class is on one side,\n",
"and all of the other class is on the other side). But by applying many fixed non-linear \n",
"transformations to the inputs $\\mathbf{x}_n$ first, for some suitable choice\n",
"of transformation $\\phi$ the result will usually be linearly separable\n",
"(See week 3, pg 342 of the lecture slides).\n",
"\n",
"We let\n",
"$$\n",
"\\mathbf{\\phi}_n := \\phi(\\mathbf{x}_n)\n",
"$$\n",
"\n",
"and work in this feature space rather than the input space.\n",
"For the case of two classes, we could guess that the target is a linear combination of the features,\n",
"$$\n",
"\\hat{t}_n = \\mathbf{w}^T \\mathbf{\\phi}_n\n",
"$$\n",
"but $\\mathbf{w}^T \\mathbf{\\phi}_n$ is a real number, and we want $\\hat{t}_n \\in \\{0,1\\}$.\n",
"We could threshold the result,\n",
"$$\n",
"\\hat{t}_n =\n",
"\\begin{cases}\n",
"1 & \\mathbf{w}^T \\mathbf{\\phi}_n \\geq 0 \\\\\n",
"0 & \\mathbf{w}^T \\mathbf{\\phi}_n < 0\n",
"\\end{cases}\n",
"$$\n",
"but the discontinuity makes it impossible to define a sensible gradient. \n",
"\n",
"## Recap - Logistic Regression\n",
"\n",
"(We assume that the classes are already linearly separable, and use our input space as our feature space.\n",
"We also assume the data is i.i.d).\n",
"\n",
"Instead of using a hard threshold like above, in logistic regression\n",
"we can use the sigmoid function $\\sigma(a)$\n",
"$$\n",
"\\sigma(a) := \\frac{1}{1 + e^{-a}}\n",
"$$\n",
"which has the intended effect of \"squishing\" the real line into the open interval $(0,1)$.\n",
"This gives a smooth version of the threshold function above, which we can differentiate.\n",
"The numbers it returns can be interpreted as a probability of the estimated target $\\hat{t}$ belonging\n",
"to a class $C_i$ given the element $\\phi$ of feature space. In the case of two classes, we define\n",
"\n",
"\\begin{align}\n",
"p(C_1 | \\phi ) &:= \\sigma (\\mathbf{w}^T \\phi) \\\\\n",
"p(C_2 | \\phi ) &:= 1 - p(C_1 | \\phi)\n",
"\\end{align}\n",
"\n",
"\n",
"The likelihood function $p(\\mathbf{t} | \\mathbf{w}, \\mathbf{x})$ is what we want to maximise as a function\n",
"of $\\mathbf{w}$. Since $\\mathbf{x}$ is fixed, we usually write the likelihood function as $p(\\mathbf{t} | \\mathbf{w})$.\n",
"\n",
"$$\n",
"\\begin{align}\n",
"p(\\mathbf{t} | \\mathbf{w})\n",
"&= \\prod_{n=1}^N p(t_n | \\mathbf{w}) \\\\\n",
"&= \\prod_{n=1}^N \n",
"\\begin{cases}\n",
"p(C_1 | \\phi_n) & t_n = 1 \\\\\n",
"p(C_2 | \\phi_n) & t_n = 0\n",
"\\end{cases}\n",
"\\end{align}\n",
"$$\n",
"Note that, writing $y_n := p(C_1 | \\phi_n)$,\n",
"$$\n",
"\\begin{cases}\n",
" y_n & t_n = 1 \\\\\n",
"1 - y_n & t_n = 0\n",
"\\end{cases}\n",
"= y_n^{t_n} (1-y_n)^{1-t_n}\n",
"$$\n",
"as if $t_n = 1$, then $y_n^1 (1-y_n)^{1-1} = y_n$ and if $t_n = 0$ then $y_n^0 (1-y_n)^{1-0} = 1-y_n$.\n",
"This is why we use the encoding in which $t_n=1$ corresponds to $C_1$ and $t_n=0$ corresponds to $C_2$.\n",
"Hence, our likelihood function is \n",
"$$\n",
"p(\\mathbf{t} | \\mathbf{w}) = \\prod_{n=1}^N y_n^{t_n} (1-y_n)^{1-t_n}, \\quad y_n = \\sigma(\\mathbf{w}^T \\phi_n)\n",
"$$\n",
"This function is quite unpleasant to differentiate directly, but we note that $p(\\mathbf{t} | \\mathbf{w})$\n",
"is maximised when $\\log p(\\mathbf{t} | \\mathbf{w})$ is maximised.\n",
"\\begin{align}\n",
"\\log p(\\mathbf{t} | \\mathbf{w}) \n",
"&= \\log \\prod_{n=1}^N y_n^{t_n} (1-y_n)^{1-t_n} \\\\\n",
"&= \\sum_{n=1}^N \\log \\left( y_n^{t_n} (1-y_n)^{1-t_n} \\right) \\\\\n",
"&= \\sum_{n=1}^N \\left( t_n \\log y_n + (1-t_n) \\log (1-y_n) \\right)\n",
"\\end{align}\n",
"This is maximised when $- \\log p(\\mathbf{t} | \\mathbf{w})$ is minimised, which gives us our error function:\n",
"$$\n",
"E(\\mathbf{w}) := - \\sum_{n=1}^N \\left( t_n \\log y_n + (1-t_n) \\log (1-y_n) \\right)\n",
"$$\n",
"We can then take the derivative of this, which gives us\n",
"$$\n",
"\\nabla_\\mathbf{w} E(\\mathbf{w}) = \\sum_{n=1}^N (y_n - t_n) \\phi_n\n",
"$$\n",
"\n",
"(Note: we also usually divide the error by the number of data points to obtain the average error. The error\n",
"shouldn't grow tenfold just because ten times as much data is available, so dividing by the\n",
"number of data points reflects that.)\n",
"\n",
"### **Exercise**\n",
"Take the derivative of $E(\\mathbf{w})$, and show that it is equal to the above. Note that the derivative doesn't have any sigmoid functions. (Hint: Use the identity $\\sigma'(a) = \\sigma(a) \\left( 1- \\sigma(a) \\right)$ to simplify).\n",
"\n",
"## Recap - $L_2$ regularisation, Gaussian prior\n",
"\n",
"To help avoid overfitting, we can add a penalty term to the cost function of the form \n",
"$\\frac{\\lambda}{2} ||\\mathbf{w}||^2$. By tweaking the value of $\\lambda$, we can indicate how\n",
"much to penalise large entries in the weight vector $\\mathbf{w}$. Don't forget to take the regularisation term into\n",
"account when you compute the corresponding gradient $\\nabla_\\mathbf{w} E(\\mathbf{w})$.\n",
"\n",
"### **Exercise**\n",
"Take the derivative of $E(\\mathbf{w})$ again, accounting for the added regularisation term.\n"
]
},
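{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick numerical sanity check (not part of the required implementation), we can verify the identity $\\sigma'(a) = \\sigma(a)\\left(1 - \\sigma(a)\\right)$ used in the exercise above. This is a minimal sketch; the helper name `sigmoid` is our own choice:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def sigmoid(a):\n",
"    # Logistic sigmoid, mapping the real line into (0, 1)\n",
"    return 1.0 / (1.0 + np.exp(-a))\n",
"\n",
"a = np.linspace(-5, 5, 101)\n",
"h = 1e-6\n",
"# Centred finite difference vs the closed-form derivative\n",
"numeric = (sigmoid(a + h) - sigmoid(a - h)) / (2 * h)\n",
"analytic = sigmoid(a) * (1 - sigmoid(a))\n",
"print(np.max(np.abs(numeric - analytic)))  # should be close to zero\n",
"```"
]
},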
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explain logistic regression (10 minutes)\n",
"\n",
"Find a partner in your lab (or groups of 3). Take turns to explain the topics above to each other, without referring to the lab sheet. Be as precise as possible, by writing down the relevant equations.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Classification with logistic regression\n",
"\n",
"Implement binary classification using logistic regression and $L_2$ regularisation. Make sure you write good quality code with comments and docstrings where appropriate.\n",
"\n",
"Use ```scipy.optimize.fmin_bfgs``` to optimise your cost function. ```fmin_bfgs``` takes the cost function to be optimised, an initial guess for the parameters, and a tuple of extra arguments passed on to the cost function:\n",
"\n",
"```python\n",
"    scipy.optimize.fmin_bfgs(cost_function, initial_guess, args=())\n",
"```\n",
"\n",
"Following the equations from the lectures, implement three functions:\n",
"\n",
"- `grad(w, X, t, a)`, which calculates the gradient of the cost function,\n",
"- `train(X, t, a)`, which returns the learned weight vector, and\n",
"- `test(w, X)`, which returns predicted class probabilities,\n",
"\n",
"where \n",
"* $w$ is a weight vector, \n",
"* $X$ is a matrix of examples, \n",
"* $t$ is a vector of labels/targets, \n",
"* $a$ is the regularisation weight. \n",
"\n",
"(We would use $\\lambda$ for the regularisation weight, but `a` is easier to type than `lambda`, and\n",
"`lambda` is a reserved keyword in Python.)\n",
"\n",
"See below for expected usage.\n",
"\n",
"We add an extra column of ones to represent the bias term.\n",
"\n",
"## Note\n",
"\n",
"* You should use 80% of the data as your training set, and 20% of the data as your test set.\n",
"* You also may want to normalise the data beforehand. If the magnitude of $\\mathbf{w}^T \\phi_n$\n",
"is very large, the gradient of $\\sigma(\\mathbf{w}^T \\phi_n)$ will be very near zero, which can\n",
"cause convergence issues during numerical minimisation. If each element in a particular column is\n",
"multiplied by a scalar (say, all elements of the `age` column), the result is essentially the same\n",
"as stretching the space in which the data lives. The model will be proportionally stretched too,\n",
"but its behaviour will not fundamentally change. So by normalising each column, we can avoid\n",
"issues related to numerical convergence."
]
},
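{
"cell_type": "markdown",
"metadata": {},
"source": [
"The preprocessing described in the note above can be sketched as follows. This is only an illustration on synthetic data; the variable names (`X`, `t`, `X_train`, ...) are our own conventions, and you should adapt the idea to the real `data` array:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"# Synthetic stand-in for the real data: targets in column 0, features after\n",
"toy = rng.normal(size=(100, 8))\n",
"toy[:, 0] = (toy[:, 0] > 0).astype(float)\n",
"\n",
"t, X = toy[:, 0], toy[:, 1:]\n",
"\n",
"# Standardise each feature column (zero mean, unit variance)\n",
"X = (X - X.mean(axis=0)) / X.std(axis=0)\n",
"\n",
"# Shuffle, then split 80% train / 20% test\n",
"idx = rng.permutation(len(t))\n",
"split = int(0.8 * len(t))\n",
"X_train, t_train = X[idx[:split]], t[idx[:split]]\n",
"X_test, t_test = X[idx[split:]], t[idx[split:]]\n",
"print(X_train.shape, X_test.shape)  # (80, 7) (20, 7)\n",
"```"
]
},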
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assert data_raw.shape[1] == len(columns)\n",
"data = np.concatenate([data_raw, np.ones((data_raw.shape[0], 1))], axis=1) # add a column of ones\n",
"data.shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# replace this with your solution, add and remove code and markdown cells as appropriate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# replace this with your solution, add and remove code and markdown cells as appropriate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# replace this with your solution, add and remove code and markdown cells as appropriate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# replace this with your solution, add and remove code and markdown cells as appropriate"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Performance measure\n",
"\n",
"There are many ways to compute the performance of a binary classifier. The key concept is the idea of a confusion matrix:\n",
"\n",
"| | | Label | |\n",
"|:-------------:|:--:|:-----:|:--:|\n",
"| | | 0 | 1 |\n",
"|**Prediction**| 0 | TN | FN |\n",
"| | 1 | FP | TP |\n",
"\n",
"where\n",
"* TP - true positive\n",
"* FP - false positive\n",
"* FN - false negative\n",
"* TN - true negative\n",
"\n",
"Implement three functions:\n",
"\n",
"- `confusion_matrix(y_true, y_pred)`, which returns the confusion matrix as a list of lists given a list of true labels and a list of predicted labels;\n",
"- `accuracy(cm)`, which takes a confusion matrix and returns the accuracy; and\n",
"- `balanced_accuracy(cm)`, which takes a confusion matrix and returns the balanced accuracy.\n",
"\n",
"The accuracy is defined as $\\frac{TP + TN}{n}$, where $n$ is the total number of examples. The balanced accuracy is defined as $\\frac{1}{2}\\left(\\frac{TP}{P} + \\frac{TN}{N}\\right)$, where $P$ and $N$ are the total number of positive and negative examples respectively."
]
},
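{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the two measures concrete, here is a small worked example using made-up counts (TP = 10, TN = 80, FP = 5, FN = 5), computed directly from the definitions above rather than from your implementation:\n",
"\n",
"```python\n",
"TP, TN, FP, FN = 10, 80, 5, 5\n",
"n = TP + TN + FP + FN  # total number of examples\n",
"P = TP + FN            # actual positives\n",
"N = TN + FP            # actual negatives\n",
"\n",
"accuracy = (TP + TN) / n\n",
"balanced_accuracy = 0.5 * (TP / P + TN / N)\n",
"print(accuracy, balanced_accuracy)  # 0.9 vs roughly 0.80\n",
"```\n",
"\n",
"The gap between the two comes from the class imbalance: most examples here are negative, so plain accuracy is dominated by performance on the negative class."
]
},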
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# replace this with your solution, add and remove code and markdown cells as appropriate"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Accuracy vs balanced accuracy\n",
"\n",
"What is the purpose of balanced accuracy? When might you prefer it to accuracy?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Answer\n",
"*--- replace this with your solution, add and remove code and markdown cells as appropriate ---*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Putting everything together\n",
"\n",
"Consider the following code, which trains on all the examples, predicts on the training set, and then computes the accuracy and balanced accuracy. Discuss the results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# replace this with your solution, add and remove code and markdown cells as appropriate"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Answer\n",
"*--- replace this with your solution, add and remove code and markdown cells as appropriate ---*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Looking back at the prediction task\n",
"\n",
"Based on your results, what feature of the dataset is most useful for determining the income level? What feature is least useful? Why?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# replace this with your solution, add and remove code and markdown cells as appropriate"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}