2024-06-15 03:38:29 +08:00
parent 76b634c95b
commit e881b78f3d
40 changed files with 4289 additions and 1 deletion


@@ -0,0 +1,160 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Optional Lab: Brief Introduction to Python and Jupyter Notebooks\n",
"Welcome to the first optional lab! \n",
"Optional labs are available to:\n",
"- provide information - like this notebook\n",
"- reinforce lecture material with hands-on examples\n",
"- provide working examples of routines used in the graded labs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Goals\n",
"In this lab, you will:\n",
"- Get a brief introduction to Jupyter notebooks\n",
"- Take a tour of Jupyter notebooks\n",
"- Learn the difference between markdown cells and code cells\n",
"- Practice some basic python\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The easiest way to become familiar with Jupyter notebooks is to take the tour available above in the Help menu:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<figure>\n",
" <center> <img src=\"./images/C1W1L1_Tour.PNG\" alt='missing' width=\"400\" ></center>\n",
"</figure>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Jupyter notebooks have two types of cells that are used in this course. Cells such as this which contain documentation called `Markdown Cells`. The name is derived from the simple formatting language used in the cells. You will not be required to produce markdown cells. Its useful to understand the `cell pulldown` shown in graphic below. Occasionally, a cell will end up in the wrong mode and you may need to restore it to the right state:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<figure>\n",
" <img src=\"./images/C1W1L1_Markdown.PNG\" alt='missing' width=\"400\" >\n",
"<figure/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The other type of cell is the `code cell` where you will write your code:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This is a code cell\n"
]
}
],
"source": [
"#This is a 'Code' Cell\n",
"print(\"This is a code cell\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Python\n",
"You can write your code in the code cells. \n",
"To run the code, select the cell and either\n",
"- hold the shift-key down and hit 'enter' or 'return'\n",
"- click the 'run' arrow above\n",
"<figure>\n",
" <img src=\"./images/C1W1L1_Run.PNG\" width=\"400\" >\n",
"<figure/>\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Print statement\n",
"Print statements will generally use the python f-string style. \n",
"Try creating your own print in the following cell. \n",
"Try both methods of running the cell."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"f strings allow you to embed variables right in the strings! 23\n"
]
}
],
"source": [
"# print statements\n",
"variable = \"right in the strings!\"\n",
"var='23'\n",
"print(f\"f strings allow you to embed variables {variable} {var}\")"
]
},
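{
"cell_type": "markdown",
"metadata": {},
"source": [
"f-strings can also evaluate expressions and apply format specifiers inside the braces. The cell below is an illustrative addition (not part of the original lab); the variable name `temperature` is just an example.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative f-string examples: expressions and format specifiers inside the braces\n",
"temperature = 21.5678\n",
"print(f\"2 + 3 = {2 + 3}\")\n",
"print(f\"temperature rounded to two decimals: {temperature:.2f}\")"
]
},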
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Congratulations!\n",
"You now know how to find your way around a Jupyter Notebook."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long


@@ -0,0 +1,352 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Optional Lab: Cost Function \n",
"<figure>\n",
" <center> <img src=\"./images/C1_W1_L3_S2_Lecture_b.png\" style=\"width:1000px;height:200px;\" ></center>\n",
"</figure>\n",
"\n"
],
"id": "39510563895119eb"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Goals\n",
"In this lab you will:\n",
"- you will implement and explore the `cost` function for linear regression with one variable. \n"
],
"id": "a160c6cc32746122"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"In this lab we will make use of: \n",
"- NumPy, a popular library for scientific computing\n",
"- Matplotlib, a popular library for plotting data\n",
"- local plotting routines in the lab_utils_uni.py file in the local directory"
],
"id": "ab2b5e324c46a0ea"
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"%matplotlib widget\n",
"import matplotlib.pyplot as plt\n",
"from lab_utils_uni import plt_intuition, plt_stationary, plt_update_onclick, soup_bowl\n",
"plt.style.use('./deeplearning.mplstyle')"
],
"id": "f025d8d8eea42714"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Problem Statement\n",
"\n",
"You would like a model which can predict housing prices given the size of the house. \n",
"Let's use the same two data points as before the previous lab- a house with 1000 square feet sold for \\\\$300,000 and a house with 2000 square feet sold for \\\\$500,000.\n",
"\n",
"\n",
"| Size (1000 sqft) | Price (1000s of dollars) |\n",
"| -------------------| ------------------------ |\n",
"| 1 | 300 |\n",
"| 2 | 500 |\n"
],
"id": "a0b38aaf53fecdb4"
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"x_train = np.array([1.0, 2.0]) #(size in 1000 square feet)\n",
"y_train = np.array([300.0, 500.0]) #(price in 1000s of dollars)"
],
"id": "4a5bab31ff88bda5"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Computing Cost\n",
"The term 'cost' in this assignment might be a little confusing since the data is housing cost. Here, cost is a measure how well our model is predicting the target price of the house. The term 'price' is used for housing data.\n",
"\n",
"The equation for cost with one variable is:\n",
" $$J(w,b) = \\frac{1}{2m} \\sum\\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)})^2 \\tag{1}$$ \n",
" \n",
"where \n",
" $$f_{w,b}(x^{(i)}) = wx^{(i)} + b \\tag{2}$$\n",
" \n",
"- $f_{w,b}(x^{(i)})$ is our prediction for example $i$ using parameters $w,b$. \n",
"- $(f_{w,b}(x^{(i)}) -y^{(i)})^2$ is the squared difference between the target value and the prediction. \n",
"- These differences are summed over all the $m$ examples and divided by `2m` to produce the cost, $J(w,b)$. \n",
">Note, in lecture summation ranges are typically from 1 to m, while code will be from 0 to m-1.\n"
],
"id": "2fde323fe15bd227"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code below calculates cost by looping over each example. In each loop:\n",
"- `f_wb`, a prediction is calculated\n",
"- the difference between the target and the prediction is calculated and squared.\n",
"- this is added to the total cost."
],
"id": "23fa56cab00e1046"
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"def compute_cost(x, y, w, b): \n",
" \"\"\"\n",
" Computes the cost function for linear regression.\n",
" \n",
" Args:\n",
" x (ndarray (m,)): Data, m examples \n",
" y (ndarray (m,)): target values\n",
" w,b (scalar) : model parameters \n",
" \n",
" Returns\n",
" total_cost (float): The cost of using w,b as the parameters for linear regression\n",
" to fit the data points in x and y\n",
" \"\"\"\n",
" # number of training examples\n",
" m = x.shape[0] \n",
" \n",
" cost_sum = 0 \n",
" for i in range(m): \n",
" f_wb = w * x[i] + b \n",
" cost = (f_wb - y[i]) ** 2 \n",
" cost_sum = cost_sum + cost \n",
" total_cost = (1 / (2 * m)) * cost_sum \n",
"\n",
" return total_cost"
],
"id": "1166da6c92b7cac0"
},
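{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check (an illustrative addition, not part of the original lab), the cell below evaluates `compute_cost` at $w=200$ and $b=100$, the values found in the previous lab. For this two-point data set the predictions match the targets exactly, so the cost should be 0."
],
"id": "3f1a9c2d4e5b6780"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check: with w=200 and b=100 the model fits both points exactly, so the cost is 0\n",
"print(f\"Cost at w=200, b=100: {compute_cost(x_train, y_train, 200, 100)}\")"
],
"id": "7b8c9d0e1f2a3b4c"
},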
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cost Function Intuition"
],
"id": "efaf37cff0b92f69"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img align=\"left\" src=\"./images/C1_W1_Lab02_GoalOfRegression.PNG\" style=\" width:380px; padding: 10px; \" /> Your goal is to find a model $f_{w,b}(x) = wx + b$, with parameters $w,b$, which will accurately predict house values given an input $x$. The cost is a measure of how accurate the model is on the training data.\n",
"\n",
"The cost equation (1) above shows that if $w$ and $b$ can be selected such that the predictions $f_{w,b}(x)$ match the target data $y$, the $(f_{w,b}(x^{(i)}) - y^{(i)})^2 $ term will be zero and the cost minimized. In this simple two point example, you can achieve this!\n",
"\n",
"In the previous lab, you determined that $b=100$ provided an optimal solution so let's set $b$ to 100 and focus on $w$.\n",
"\n",
"<br/>\n",
"Below, use the slider control to select the value of $w$ that minimizes cost. It can take a few seconds for the plot to update."
],
"id": "a4c0cec8b6d37318"
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "7d57d1ccadbf42d6bc17488114bbf179",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Canvas(toolbar=Toolbar(toolitems=[('Home', 'Reset original view', 'home', 'home'), ('Back', 'Back to previous …"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plt_intuition(x_train,y_train)"
],
"id": "7cd5334a6770f526"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The plot contains a few points that are worth mentioning.\n",
"- cost is minimized when $w = 200$, which matches results from the previous lab\n",
"- Because the difference between the target and pediction is squared in the cost equation, the cost increases rapidly when $w$ is either too large or too small.\n",
"- Using the `w` and `b` selected by minimizing cost results in a line which is a perfect fit to the data."
],
"id": "30972f54ab8f1d94"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cost Function Visualization- 3D\n",
"\n",
"You can see how cost varies with respect to *both* `w` and `b` by plotting in 3D or using a contour plot. \n",
"It is worth noting that some of the plotting in this course can become quite involved. The plotting routines are provided and while it can be instructive to read through the code to become familiar with the methods, it is not needed to complete the course successfully. The routines are in lab_utils_uni.py in the local directory."
],
"id": "6c6395d1ef2abecf"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Larger Data Set\n",
"It is instructive to view a scenario with a few more data points. This data set includes data points that do not fall on the same line. What does that mean for the cost equation? Can we find $w$, and $b$ that will give us a cost of 0? "
],
"id": "e8329d184d8344fc"
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"x_train = np.array([1.0, 1.7, 2.0, 2.5, 3.0, 3.2])\n",
"y_train = np.array([250, 300, 480, 430, 630, 730,])"
],
"id": "b0c0aed371b5ca39"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the contour plot, click on a point to select `w` and `b` to achieve the lowest cost. Use the contours to guide your selections. Note, it can take a few seconds to update the graph. "
],
"id": "2bc747abba448d65"
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d5cceda8ce34448290f1c2cac01ee091",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Canvas(toolbar=Toolbar(toolitems=[('Home', 'Reset original view', 'home', 'home'), ('Back', 'Back to previous …"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plt.close('all') \n",
"fig, ax, dyn_items = plt_stationary(x_train, y_train)\n",
"updater = plt_update_onclick(fig, ax, x_train, y_train, dyn_items)"
],
"id": "9159d3905b017257"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Above, note the dashed lines in the left plot. These represent the portion of the cost contributed by each example in your training set. In this case, values of approximately $w=209$ and $b=2.4$ provide low cost. Note that, because our training examples are not on a line, the minimum cost is not zero."
],
"id": "5f4c1ac7bb9a78d"
},
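{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative check (not part of the original lab), the cell below evaluates `compute_cost` at the approximate minimum $w=209$, $b=2.4$ mentioned above. Because these six points do not lie on a single line, the cost is not zero even at these near-optimal values."
],
"id": "4d5e6f7a8b9c0d1e"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The training points are not collinear, so even near-optimal parameters leave a non-zero cost\n",
"print(f\"Cost at w=209, b=2.4: {compute_cost(x_train, y_train, 209, 2.4):0.2f}\")"
],
"id": "5e6f7a8b9c0d1e2f"
},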
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Convex Cost surface\n",
"The fact that the cost function squares the loss ensures that the 'error surface' is convex like a soup bowl. It will always have a minimum that can be reached by following the gradient in all dimensions. In the previous plot, because the $w$ and $b$ dimensions scale differently, this is not easy to recognize. The following plot, where $w$ and $b$ are symmetric, was shown in lecture:"
],
"id": "763c2e6ab47d9587"
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "b9ea89764a324a98893ec2d8d2689e77",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Canvas(toolbar=Toolbar(toolitems=[('Home', 'Reset original view', 'home', 'home'), ('Back', 'Back to previous …"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"soup_bowl()"
],
"id": "13af7fc972d655b3"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Congratulations!\n",
"You have learned the following:\n",
" - The cost equation provides a measure of how well your predictions match your training data.\n",
" - Minimizing the cost can provide optimal values of $w$, $b$."
],
"id": "850e586e9601ef7e"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [],
"id": "67ad1bd9bfe55fd"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
},
"toc-autonumbering": false
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long


@@ -0,0 +1,159 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Optional Lab: Brief Introduction to Python and Jupyter Notebooks\n",
"Welcome to the first optional lab! \n",
"Optional labs are available to:\n",
"- provide information - like this notebook\n",
"- reinforce lecture material with hands-on examples\n",
"- provide working examples of routines used in the graded labs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Goals\n",
"In this lab, you will:\n",
"- Get a brief introduction to Jupyter notebooks\n",
"- Take a tour of Jupyter notebooks\n",
"- Learn the difference between markdown cells and code cells\n",
"- Practice some basic python\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The easiest way to become familiar with Jupyter notebooks is to take the tour available above in the Help menu:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<figure>\n",
" <center> <img src=\"./images/C1W1L1_Tour.PNG\" alt='missing' width=\"400\" ></center>\n",
"</figure>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Jupyter notebooks have two types of cells that are used in this course. Cells such as this which contain documentation called `Markdown Cells`. The name is derived from the simple formatting language used in the cells. You will not be required to produce markdown cells. Its useful to understand the `cell pulldown` shown in graphic below. Occasionally, a cell will end up in the wrong mode and you may need to restore it to the right state:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<figure>\n",
" <img src=\"./images/C1W1L1_Markdown.PNG\" alt='missing' width=\"400\" >\n",
"<figure/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The other type of cell is the `code cell` where you will write your code:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This is code cell\n"
]
}
],
"source": [
"#This is a 'Code' Cell\n",
"print(\"This is code cell\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Python\n",
"You can write your code in the code cells. \n",
"To run the code, select the cell and either\n",
"- hold the shift-key down and hit 'enter' or 'return'\n",
"- click the 'run' arrow above\n",
"<figure>\n",
" <img src=\"./images/C1W1L1_Run.PNG\" width=\"400\" >\n",
"<figure/>\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Print statement\n",
"Print statements will generally use the python f-string style. \n",
"Try creating your own print in the following cell. \n",
"Try both methods of running the cell."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"f strings allow you to embed variables right in the strings!\n"
]
}
],
"source": [
"# print statements\n",
"variable = \"right in the strings!\"\n",
"print(f\"f strings allow you to embed variables {variable}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Congratulations!\n",
"You now know how to find your way around a Jupyter Notebook."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,33 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Ungraded Lab - Examples of Material that will be covered in this course\n",
"Work in Progress"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,398 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Optional Lab: Model Representation\n",
"\n",
"<figure>\n",
" <img src=\"./images/C1_W1_L3_S1_Lecture_b.png\" style=\"width:600px;height:200px;\">\n",
"</figure>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Goals\n",
"In this lab you will:\n",
"- Learn to implement the model $f_{w,b}$ for linear regression with one variable"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notation\n",
"Here is a summary of some of the notation you will encounter. \n",
"\n",
"|General <img width=70/> <br /> Notation <img width=70/> | Description<img width=350/>| Python (if applicable) |\n",
"|: ------------|: ------------------------------------------------------------||\n",
"| $a$ | scalar, non bold ||\n",
"| $\\mathbf{a}$ | vector, bold ||\n",
"| **Regression** | | | |\n",
"| $\\mathbf{x}$ | Training Example feature values (in this lab - Size (1000 sqft)) | `x_train` | \n",
"| $\\mathbf{y}$ | Training Example targets (in this lab Price (1000s of dollars)).) | `y_train` \n",
"| $x^{(i)}$, $y^{(i)}$ | $i_{th}$Training Example | `x_i`, `y_i`|\n",
"| m | Number of training examples | `m`|\n",
"| $w$ | parameter: weight, | `w` |\n",
"| $b$ | parameter: bias | `b` | \n",
"| $f_{w,b}(x^{(i)})$ | The result of the model evaluation at $x^{(i)}$ parameterized by $w,b$: $f_{w,b}(x^{(i)}) = wx^{(i)}+b$ | `f_wb` | \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"In this lab you will make use of: \n",
"- NumPy, a popular library for scientific computing\n",
"- Matplotlib, a popular library for plotting data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('./deeplearning.mplstyle')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Problem Statement\n",
"<img align=\"left\" src=\"./images/C1_W1_L3_S1_trainingdata.png\" style=\" width:380px; padding: 10px; \" /> \n",
"\n",
"As in the lecture, you will use the motivating example of housing price prediction. \n",
"This lab will use a simple data set with only two data points - a house with 1000 square feet(sqft) sold for \\\\$300,000 and a house with 2000 square feet sold for \\\\$500,000. These two points will constitute our *data or training set*. In this lab, the units of size are 1000 sqft and the units of price are $1000's of dollars.\n",
"\n",
"| Size (1000 sqft) | Price (1000s of dollars) |\n",
"| -------------------| ------------------------ |\n",
"| 1.0 | 300 |\n",
"| 2.0 | 500 |\n",
"\n",
"You would like to fit a linear regression model (shown above as the blue straight line) through these two points, so you can then predict price for other houses - say, a house with 1200 sqft.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please run the following code cell to create your `x_train` and `y_train` variables. The data is stored in one-dimensional NumPy arrays."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# x_train is the input variable (size in 1000 square feet)\n",
"# y_train in the target (price in 1000s of dollars)\n",
"x_train = np.array([1.0, 2.0])\n",
"y_train = np.array([300.0, 500.0])\n",
"print(f\"x_train = {x_train}\")\n",
"print(f\"y_train = {y_train}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">**Note**: The course will frequently utilize the python 'f-string' output formatting described [here](https://docs.python.org/3/tutorial/inputoutput.html) when printing. The content between the curly braces is evaluated when producing the output."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Number of training examples `m`\n",
"You will use `m` to denote the number of training examples. Numpy arrays have a `.shape` parameter. `x_train.shape` returns a python tuple with an entry for each dimension. `x_train.shape[0]` is the length of the array and number of examples as shown below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# m is the number of training examples\n",
"print(f\"x_train.shape: {x_train.shape}\")\n",
"m = x_train.shape[0]\n",
"print(f\"Number of training examples is: {m}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One can also use the Python `len()` function as shown below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# m is the number of training examples\n",
"m = len(x_train)\n",
"print(f\"Number of training examples is: {m}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training example `x_i, y_i`\n",
"\n",
"You will use (x$^{(i)}$, y$^{(i)}$) to denote the $i^{th}$ training example. Since Python is zero indexed, (x$^{(0)}$, y$^{(0)}$) is (1.0, 300.0) and (x$^{(1)}$, y$^{(1)}$) is (2.0, 500.0). \n",
"\n",
"To access a value in a Numpy array, one indexes the array with the desired offset. For example the syntax to access location zero of `x_train` is `x_train[0]`.\n",
"Run the next code block below to get the $i^{th}$ training example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"i = 0 # Change this to 1 to see (x^1, y^1)\n",
"\n",
"x_i = x_train[i]\n",
"y_i = y_train[i]\n",
"print(f\"(x^({i}), y^({i})) = ({x_i}, {y_i})\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plotting the data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can plot these two points using the `scatter()` function in the `matplotlib` library, as shown in the cell below. \n",
"- The function arguments `marker` and `c` show the points as red crosses (the default is blue dots).\n",
"\n",
"You can also use other functions in the `matplotlib` library to display the title and labels for the axes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot the data points\n",
"plt.scatter(x_train, y_train, marker='x', c='r')\n",
"# Set the title\n",
"plt.title(\"Housing Prices\")\n",
"# Set the y-axis label\n",
"plt.ylabel('Price (in 1000s of dollars)')\n",
"# Set the x-axis label\n",
"plt.xlabel('Size (1000 sqft)')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model function\n",
"\n",
"<img align=\"left\" src=\"./images/C1_W1_L3_S1_model.png\" style=\" width:380px; padding: 10px; \" > As described in lecture, the model function for linear regression (which is a function that maps from `x` to `y`) is represented as \n",
"\n",
"$$ f_{w,b}(x^{(i)}) = wx^{(i)} + b \\tag{1}$$\n",
"\n",
"The formula above is how you can represent straight lines - different values of $w$ and $b$ give you different straight lines on the plot. <br/> <br/> <br/> <br/> <br/> \n",
"\n",
"Let's try to get a better intuition for this through the code blocks below. Let's start with $w = 100$ and $b = 100$. \n",
"\n",
"**Note: You can come back to this cell to adjust the model's w and b parameters**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"w = 100\n",
"b = 100\n",
"print(f\"w: {w}\")\n",
"print(f\"b: {b}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's compute the value of $f_{w,b}(x^{(i)})$ for your two data points. You can explicitly write this out for each data point as - \n",
"\n",
"for $x^{(0)}$, `f_wb = w * x[0] + b`\n",
"\n",
"for $x^{(1)}$, `f_wb = w * x[1] + b`\n",
"\n",
"For a large number of data points, this can get unwieldy and repetitive. So instead, you can calculate the function output in a `for` loop as shown in the `compute_model_output` function below.\n",
"> **Note**: The argument description `(ndarray (m,))` describes a Numpy n-dimensional array of shape (m,). `(scalar)` describes an argument without dimensions, just a magnitude. \n",
 **Note**:">
"> **Note**: `np.zeros(n)` will return a one-dimensional numpy array with $n$ entries \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def compute_model_output(x, w, b):\n",
" \"\"\"\n",
" Computes the prediction of a linear model\n",
" Args:\n",
" x (ndarray (m,)): Data, m examples \n",
" w,b (scalar) : model parameters \n",
" Returns\n",
" y (ndarray (m,)): target values\n",
" \"\"\"\n",
" m = x.shape[0]\n",
" f_wb = np.zeros(m)\n",
" for i in range(m):\n",
" f_wb[i] = w * x[i] + b\n",
" \n",
" return f_wb"
]
},
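{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside (an illustrative addition, not required for this lab), NumPy broadcasting can compute the same model output without an explicit loop; `w * x + b` applies the formula to every element of `x` at once."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Vectorized alternative to compute_model_output: broadcasting evaluates w*x + b for every element\n",
"tmp_f_wb_vectorized = w * x_train + b\n",
"print(tmp_f_wb_vectorized)"
]
},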
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's call the `compute_model_output` function and plot the output.."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tmp_f_wb = compute_model_output(x_train, w, b,)\n",
"\n",
"# Plot our model prediction\n",
"plt.plot(x_train, tmp_f_wb, c='b',label='Our Prediction')\n",
"\n",
"# Plot the data points\n",
"plt.scatter(x_train, y_train, marker='x', c='r',label='Actual Values')\n",
"\n",
"# Set the title\n",
"plt.title(\"Housing Prices\")\n",
"# Set the y-axis label\n",
"plt.ylabel('Price (in 1000s of dollars)')\n",
"# Set the x-axis label\n",
"plt.xlabel('Size (1000 sqft)')\n",
"plt.legend()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, setting $w = 100$ and $b = 100$ does *not* result in a line that fits our data. \n",
"\n",
"### Challenge\n",
"Try experimenting with different values of $w$ and $b$. What should the values be for a line that fits our data?\n",
"\n",
"#### Tip:\n",
"You can use your mouse to click on the triangle to the left of the green \"Hints\" below to reveal some hints for choosing b and w."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<details>\n",
"<summary>\n",
" <font size='3', color='darkgreen'><b>Hints</b></font>\n",
"</summary>\n",
" <p>\n",
" <ul>\n",
" <li>Try $w = 200$ and $b = 100$ </li>\n",
" </ul>\n",
" </p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prediction\n",
"Now that we have a model, we can use it to make our original prediction. Let's predict the price of a house with 1200 sqft. Since the units of $x$ are in 1000's of sqft, $x$ is 1.2.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"w = 200 \n",
"b = 100 \n",
"x_i = 1.2\n",
"cost_1200sqft = w * x_i + b \n",
"\n",
"print(f\"${cost_1200sqft:.0f} thousand dollars\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Congratulations!\n",
"In this lab you have learned:\n",
" - Linear regression builds a model which establishes a relationship between features and targets\n",
" - In the example above, the feature was house size and the target was house price\n",
" - for simple linear regression, the model has two parameters $w$ and $b$ whose values are 'fit' using *training data*.\n",
" - once a model's parameters have been determined, the model can be used to make predictions on novel data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
},
"toc-autonumbering": false
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,307 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Optional Lab: Cost Function \n",
"<figure>\n",
" <center> <img src=\"./images/C1_W1_L3_S2_Lecture_b.png\" style=\"width:1000px;height:200px;\" ></center>\n",
"</figure>\n",
"\n"
],
"id": "7d3391ce25e5ba89"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Goals\n",
"In this lab you will:\n",
"- you will implement and explore the `cost` function for linear regression with one variable. \n"
],
"id": "1e010f719c5d086f"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"In this lab we will make use of: \n",
"- NumPy, a popular library for scientific computing\n",
"- Matplotlib, a popular library for plotting data\n",
"- local plotting routines in the lab_utils_uni.py file in the local directory"
],
"id": "13698dcbd73ca0d9"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"%matplotlib widget\n",
"import matplotlib.pyplot as plt\n",
"from lab_utils_uni import plt_intuition, plt_stationary, plt_update_onclick, soup_bowl\n",
"plt.style.use('./deeplearning.mplstyle')"
],
"id": "32c06d217d8bb153"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Problem Statement\n",
"\n",
"You would like a model which can predict housing prices given the size of the house. \n",
"Let's use the same two data points as before the previous lab- a house with 1000 square feet sold for \\\\$300,000 and a house with 2000 square feet sold for \\\\$500,000.\n",
"\n",
"\n",
"| Size (1000 sqft) | Price (1000s of dollars) |\n",
"| -------------------| ------------------------ |\n",
"| 1 | 300 |\n",
"| 2 | 500 |\n"
],
"id": "68e608da30aff630"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x_train = np.array([1.0, 2.0]) #(size in 1000 square feet)\n",
"y_train = np.array([300.0, 500.0]) #(price in 1000s of dollars)"
],
"id": "8f289b3bf82b01f0"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Computing Cost\n",
"The term 'cost' in this assignment might be a little confusing since the data is housing cost. Here, cost is a measure how well our model is predicting the target price of the house. The term 'price' is used for housing data.\n",
"\n",
"The equation for cost with one variable is:\n",
" $$J(w,b) = \\frac{1}{2m} \\sum\\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)})^2 \\tag{1}$$ \n",
" \n",
"where \n",
" $$f_{w,b}(x^{(i)}) = wx^{(i)} + b \\tag{2}$$\n",
" \n",
"- $f_{w,b}(x^{(i)})$ is our prediction for example $i$ using parameters $w,b$. \n",
"- $(f_{w,b}(x^{(i)}) -y^{(i)})^2$ is the squared difference between the target value and the prediction. \n",
"- These differences are summed over all the $m$ examples and divided by `2m` to produce the cost, $J(w,b)$. \n",
">Note, in lecture summation ranges are typically from 1 to m, while code will be from 0 to m-1.\n"
],
"id": "ab44dcb0c92cfdc"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code below calculates cost by looping over each example. In each loop:\n",
"- `f_wb`, a prediction is calculated\n",
"- the difference between the target and the prediction is calculated and squared.\n",
"- this is added to the total cost."
],
"id": "a63a74f54bb59b2d"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def compute_cost(x, y, w, b): \n",
" \"\"\"\n",
" Computes the cost function for linear regression.\n",
" \n",
" Args:\n",
" x (ndarray (m,)): Data, m examples \n",
" y (ndarray (m,)): target values\n",
" w,b (scalar) : model parameters \n",
" \n",
" Returns\n",
" total_cost (float): The cost of using w,b as the parameters for linear regression\n",
" to fit the data points in x and y\n",
" \"\"\"\n",
" # number of training examples\n",
" m = x.shape[0] \n",
" \n",
" cost_sum = 0 \n",
" for i in range(m): \n",
" f_wb = w * x[i] + b \n",
" cost = (f_wb - y[i]) ** 2 \n",
" cost_sum = cost_sum + cost \n",
" total_cost = (1 / (2 * m)) * cost_sum \n",
"\n",
" return total_cost"
],
"id": "60ea841f910b8fb8"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cost Function Intuition"
],
"id": "b1c901ba1558365"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img align=\"left\" src=\"./images/C1_W1_Lab02_GoalOfRegression.PNG\" style=\" width:380px; padding: 10px; \" /> Your goal is to find a model $f_{w,b}(x) = wx + b$, with parameters $w,b$, which will accurately predict house values given an input $x$. The cost is a measure of how accurate the model is on the training data.\n",
"\n",
"The cost equation (1) above shows that if $w$ and $b$ can be selected such that the predictions $f_{w,b}(x)$ match the target data $y$, the $(f_{w,b}(x^{(i)}) - y^{(i)})^2 $ term will be zero and the cost minimized. In this simple two point example, you can achieve this!\n",
"\n",
"In the previous lab, you determined that $b=100$ provided an optimal solution so let's set $b$ to 100 and focus on $w$.\n",
"\n",
"<br/>\n",
"Below, use the slider control to select the value of $w$ that minimizes cost. It can take a few seconds for the plot to update."
],
"id": "954affed3a8238a1"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt_intuition(x_train,y_train)"
],
"id": "1b2e52672dee209"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The plot contains a few points that are worth mentioning.\n",
"- cost is minimized when $w = 200$, which matches results from the previous lab\n",
"- Because the difference between the target and pediction is squared in the cost equation, the cost increases rapidly when $w$ is either too large or too small.\n",
"- Using the `w` and `b` selected by minimizing cost results in a line which is a perfect fit to the data."
],
"id": "63966209b0e61188"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cost Function Visualization- 3D\n",
"\n",
"You can see how cost varies with respect to *both* `w` and `b` by plotting in 3D or using a contour plot. \n",
"It is worth noting that some of the plotting in this course can become quite involved. The plotting routines are provided and while it can be instructive to read through the code to become familiar with the methods, it is not needed to complete the course successfully. The routines are in lab_utils_uni.py in the local directory."
],
"id": "8e2feee1b673a863"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Larger Data Set\n",
"It's use instructive to view a scenario with a few more data points. This data set includes data points that do not fall on the same line. What does that mean for the cost equation? Can we find $w$, and $b$ that will give us a cost of 0? "
],
"id": "454fc752f58e3ee"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x_train = np.array([1.0, 1.7, 2.0, 2.5, 3.0, 3.2])\n",
"y_train = np.array([250, 300, 480, 430, 630, 730,])"
],
"id": "dc3d7ddc773fc308"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the contour plot, click on a point to select `w` and `b` to achieve the lowest cost. Use the contours to guide your selections. Note, it can take a few seconds to update the graph. "
],
"id": "7685a3371c85960d"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.close('all') \n",
"fig, ax, dyn_items = plt_stationary(x_train, y_train)\n",
"updater = plt_update_onclick(fig, ax, x_train, y_train, dyn_items)"
],
"id": "9132ad0f6d362e2f"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Above, note the dashed lines in the left plot. These represent the portion of the cost contributed by each example in your training set. In this case, values of approximately $w=209$ and $b=2.4$ provide low cost. Note that, because our training examples are not on a line, the minimum cost is not zero."
],
"id": "78bf73a9e76b7764"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Convex Cost surface\n",
"The fact that the cost function squares the loss ensures that the 'error surface' is convex like a soup bowl. It will always have a minimum that can be reached by following the gradient in all dimensions. In the previous plot, because the $w$ and $b$ dimensions scale differently, this is not easy to recognize. The following plot, where $w$ and $b$ are symmetric, was shown in lecture:"
],
"id": "136c6383bf6097f5"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"soup_bowl()"
],
"id": "2a02853e7204d759"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Congratulations!\n",
"You have learned the following:\n",
" - The cost equation provides a measure of how well your predictions match your training data.\n",
" - Minimizing the cost can provide optimal values of $w$, $b$."
],
"id": "1f57a3241189a2cc"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [],
"id": "9b162b4e8ebec360"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
},
"toc-autonumbering": false
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,558 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Optional Lab: Gradient Descent for Linear Regression\n",
"\n",
"<figure>\n",
" <center> <img src=\"./images/C1_W1_L4_S1_Lecture_GD.png\" style=\"width:800px;height:200px;\" ></center>\n",
"</figure>"
],
"id": "5c7e9de1670dfc91"
},
{
"cell_type": "markdown",
"id": "da452e68",
"metadata": {},
"source": [
"## Goals\n",
"In this lab, you will:\n",
"- automate the process of optimizing $w$ and $b$ using gradient descent."
]
},
{
"cell_type": "markdown",
"id": "6f6d4021",
"metadata": {},
"source": [
"## Tools\n",
"In this lab, we will make use of: \n",
"- NumPy, a popular library for scientific computing\n",
"- Matplotlib, a popular library for plotting data\n",
"- plotting routines in the lab_utils.py file in the local directory"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef4d610a",
"metadata": {},
"outputs": [],
"source": [
"import math, copy\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('./deeplearning.mplstyle')\n",
"from lab_utils_uni import plt_house_x, plt_contour_wgrad, plt_divergence, plt_gradients"
]
},
{
"cell_type": "markdown",
"id": "c571a7b4",
"metadata": {},
"source": [
"<a name=\"toc_40291_2\"></a>\n",
"# Problem Statement\n",
"\n",
"Let's use the same two data points as before - a house with 1000 square feet sold for \\\\$300,000 and a house with 2000 square feet sold for \\\\$500,000.\n",
"\n",
"| Size (1000 sqft) | Price (1000s of dollars) |\n",
"| ----------------| ------------------------ |\n",
"| 1 | 300 |\n",
"| 2 | 500 |\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26dd9666",
"metadata": {},
"outputs": [],
"source": [
"# Load our data set\n",
"x_train = np.array([1.0, 2.0]) #features\n",
"y_train = np.array([300.0, 500.0]) #target value"
]
},
{
"cell_type": "markdown",
"id": "2b7851bd",
"metadata": {},
"source": [
"<a name=\"toc_40291_2.0.1\"></a>\n",
"### Compute_Cost\n",
"This was developed in the last lab. We'll need it again here."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c6ffb7d",
"metadata": {},
"outputs": [],
"source": [
"#Function to calculate the cost\n",
"def compute_cost(x, y, w, b):\n",
" \n",
" m = x.shape[0] \n",
" cost = 0\n",
" \n",
" for i in range(m):\n",
" f_wb = w * x[i] + b\n",
" cost = cost + (f_wb - y[i])**2\n",
" total_cost = 1 / (2 * m) * cost\n",
"\n",
" return total_cost"
]
},
{
"cell_type": "markdown",
"id": "fd4be849",
"metadata": {},
"source": [
"<a name=\"toc_40291_2.1\"></a>\n",
"## Gradient descent summary\n",
"So far in this course, you have developed a linear model that predicts $f_{w,b}(x^{(i)})$:\n",
"$$f_{w,b}(x^{(i)}) = wx^{(i)} + b \\tag{1}$$\n",
"In linear regression, you utilize input training data to fit the parameters $w$,$b$ by minimizing a measure of the error between our predictions $f_{w,b}(x^{(i)})$ and the actual data $y^{(i)}$. The measure is called the $cost$, $J(w,b)$. In training you measure the cost over all of our training samples $x^{(i)},y^{(i)}$\n",
"$$J(w,b) = \\frac{1}{2m} \\sum\\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)})^2\\tag{2}$$ "
]
},
{
"cell_type": "markdown",
"id": "6061233c",
"metadata": {},
"source": [
"\n",
"In lecture, *gradient descent* was described as:\n",
"\n",
"$$\\begin{align*} \\text{repeat}&\\text{ until convergence:} \\; \\lbrace \\newline\n",
"\\; w &= w - \\alpha \\frac{\\partial J(w,b)}{\\partial w} \\tag{3} \\; \\newline \n",
" b &= b - \\alpha \\frac{\\partial J(w,b)}{\\partial b} \\newline \\rbrace\n",
"\\end{align*}$$\n",
"where, parameters $w$, $b$ are updated simultaneously. \n",
"The gradient is defined as:\n",
"$$\n",
"\\begin{align}\n",
"\\frac{\\partial J(w,b)}{\\partial w} &= \\frac{1}{m} \\sum\\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)})x^{(i)} \\tag{4}\\\\\n",
" \\frac{\\partial J(w,b)}{\\partial b} &= \\frac{1}{m} \\sum\\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)}) \\tag{5}\\\\\n",
"\\end{align}\n",
"$$\n",
"\n",
"Here *simultaniously* means that you calculate the partial derivatives for all the parameters before updating any of the parameters."
]
},
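{
"cell_type": "markdown",
"id": "b3f1d2a7",
"metadata": {},
"source": [
"The sketch below is an illustrative addition (not part of the original lab) using hypothetical toy values. It shows what the simultaneous update means in code: both gradients are computed from the *current* $(w,b)$ before either parameter is changed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4e2f3b8",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch of one simultaneous update step (toy, hypothetical values)\n",
"alpha_demo = 0.01\n",
"w_demo, b_demo = 0.0, 0.0\n",
"dj_dw_demo, dj_db_demo = -650.0, -400.0  # hypothetical gradients at (w_demo, b_demo)\n",
"w_new = w_demo - alpha_demo * dj_dw_demo  # both updates use the gradients computed above...\n",
"b_new = b_demo - alpha_demo * dj_db_demo  # ...not gradients recomputed after w has changed\n",
"w_demo, b_demo = w_new, b_new\n",
"print(f\"w_demo: {w_demo}, b_demo: {b_demo}\")"
]
},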
{
"cell_type": "markdown",
"id": "6cfb9401",
"metadata": {},
"source": [
"<a name=\"toc_40291_2.2\"></a>\n",
"## Implement Gradient Descent\n",
"You will implement batch gradient descent algorithm for one feature. You will need three functions. \n",
"- `compute_gradient` implementing equation (4) and (5) above\n",
"- `compute_cost` implementing equation (2) above (code from previous lab)\n",
"- `gradient_descent`, utilizing compute_gradient and compute_cost\n",
"\n",
"Conventions:\n",
"- The naming of python variables containing partial derivatives follows this pattern,$\\frac{\\partial J(w,b)}{\\partial b}$ will be `dj_db`.\n",
"- w.r.t is With Respect To, as in partial derivative of $J(wb)$ With Respect To $b$.\n"
]
},
{
"cell_type": "markdown",
"id": "f9b6ad38",
"metadata": {},
"source": [
"<a name=\"toc_40291_2.3\"></a>\n",
"### compute_gradient\n",
"<a name='ex-01'></a>\n",
"`compute_gradient` implements (4) and (5) above and returns $\\frac{\\partial J(w,b)}{\\partial w}$,$\\frac{\\partial J(w,b)}{\\partial b}$. The embedded comments describe the operations."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "857066af",
"metadata": {},
"outputs": [],
"source": [
"def compute_gradient(x, y, w, b): \n",
" \"\"\"\n",
" Computes the gradient for linear regression \n",
" Args:\n",
" x (ndarray (m,)): Data, m examples \n",
" y (ndarray (m,)): target values\n",
" w,b (scalar) : model parameters \n",
" Returns\n",
" dj_dw (scalar): The gradient of the cost w.r.t. the parameters w\n",
" dj_db (scalar): The gradient of the cost w.r.t. the parameter b \n",
" \"\"\"\n",
" \n",
" # Number of training examples\n",
" m = x.shape[0] \n",
" dj_dw = 0\n",
" dj_db = 0\n",
" \n",
" for i in range(m): \n",
" f_wb = w * x[i] + b \n",
" dj_dw_i = (f_wb - y[i]) * x[i] \n",
" dj_db_i = f_wb - y[i] \n",
" dj_db += dj_db_i\n",
" dj_dw += dj_dw_i \n",
" dj_dw = dj_dw / m \n",
" dj_db = dj_db / m \n",
" \n",
" return dj_dw, dj_db"
]
},
{
"cell_type": "markdown",
"id": "dccbb458",
"metadata": {},
"source": [
"<br/>"
]
},
{
"cell_type": "markdown",
"id": "42358679",
"metadata": {},
"source": [
"<img align=\"left\" src=\"./images/C1_W1_Lab03_lecture_slopes.PNG\" style=\"width:340px;\" > The lectures described how gradient descent utilizes the partial derivative of the cost with respect to a parameter at a point to update that parameter. \n",
"Let's use our `compute_gradient` function to find and plot some partial derivatives of our cost function relative to one of the parameters, $w_0$.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01d52fb1",
"metadata": {},
"outputs": [],
"source": [
"plt_gradients(x_train,y_train, compute_cost, compute_gradient)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "20269810",
"metadata": {},
"source": [
"Above, the left plot shows $\\frac{\\partial J(w,b)}{\\partial w}$ or the slope of the cost curve relative to $w$ at three points. On the right side of the plot, the derivative is positive, while on the left it is negative. Due to the 'bowl shape', the derivatives will always lead gradient descent toward the bottom where the gradient is zero.\n",
" \n",
"The left plot has fixed $b=100$. Gradient descent will utilize both $\\frac{\\partial J(w,b)}{\\partial w}$ and $\\frac{\\partial J(w,b)}{\\partial b}$ to update parameters. The 'quiver plot' on the right provides a means of viewing the gradient of both parameters. The arrow sizes reflect the magnitude of the gradient at that point. The direction and slope of the arrow reflects the ratio of $\\frac{\\partial J(w,b)}{\\partial w}$ and $\\frac{\\partial J(w,b)}{\\partial b}$ at that point.\n",
"Note that the gradient points *away* from the minimum. Review equation (3) above. The scaled gradient is *subtracted* from the current value of $w$ or $b$. This moves the parameter in a direction that will reduce cost."
]
},
{
"cell_type": "markdown",
"id": "09cde02a",
"metadata": {},
"source": [
"<a name=\"toc_40291_2.5\"></a>\n",
"### Gradient Descent\n",
"Now that gradients can be computed, gradient descent, described in equation (3) above can be implemented. The details of the implementation are described in the comments. Below, you will utilize this function to find optimal values of $w$ and $b$ on the training data."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ca792dcb",
"metadata": {},
"outputs": [],
"source": [
"def gradient_descent(x, y, w_in, b_in, alpha, num_iters, cost_function, gradient_function): \n",
" \"\"\"\n",
" Performs batch gradient descent to fit w,b. Updates w,b by taking \n",
" num_iters gradient steps with learning rate alpha\n",
" \n",
" Args:\n",
" x (ndarray (m,)) : Data, m examples \n",
" y (ndarray (m,)) : target values\n",
" w_in,b_in (scalar): initial values of model parameters \n",
" alpha (float): Learning rate\n",
" num_iters (int): number of iterations to run gradient descent\n",
" cost_function: function to call to produce cost\n",
" gradient_function: function to call to produce gradient\n",
" \n",
" Returns:\n",
" w (scalar): Updated value of parameter after running gradient descent\n",
" b (scalar): Updated value of parameter after running gradient descent\n",
" J_history (List): History of cost values\n",
" p_history (list): History of parameters [w,b] \n",
" \"\"\"\n",
" \n",
" w = copy.deepcopy(w_in) # avoid modifying global w_in\n",
" # An array to store cost J and w's at each iteration primarily for graphing later\n",
" J_history = []\n",
" p_history = []\n",
" b = b_in\n",
" w = w_in\n",
" \n",
" for i in range(num_iters):\n",
" # Calculate the gradient and update the parameters using gradient_function\n",
" dj_dw, dj_db = gradient_function(x, y, w , b) \n",
"\n",
" # Update Parameters using equation (3) above\n",
" b = b - alpha * dj_db \n",
" w = w - alpha * dj_dw \n",
"\n",
" # Save cost J at each iteration\n",
" if i<100000: # prevent resource exhaustion \n",
" J_history.append( cost_function(x, y, w , b))\n",
" p_history.append([w,b])\n",
" # Print cost every at intervals 10 times or as many iterations if < 10\n",
" if i% math.ceil(num_iters/10) == 0:\n",
" print(f\"Iteration {i:4}: Cost {J_history[-1]:0.2e} \",\n",
" f\"dj_dw: {dj_dw: 0.3e}, dj_db: {dj_db: 0.3e} \",\n",
" f\"w: {w: 0.3e}, b:{b: 0.5e}\")\n",
" \n",
" return w, b, J_history, p_history #return w and J,w history for graphing"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af449437",
"metadata": {},
"outputs": [],
"source": [
"# initialize parameters\n",
"w_init = 0\n",
"b_init = 0\n",
"# some gradient descent settings\n",
"iterations = 10000\n",
"tmp_alpha = 1.0e-2\n",
"# run gradient descent\n",
"w_final, b_final, J_hist, p_hist = gradient_descent(x_train ,y_train, w_init, b_init, tmp_alpha, \n",
" iterations, compute_cost, compute_gradient)\n",
"print(f\"(w,b) found by gradient descent: ({w_final:8.4f},{b_final:8.4f})\")"
]
},
{
"cell_type": "markdown",
"id": "f659147c",
"metadata": {},
"source": [
"<img align=\"left\" src=\"./images/C1_W1_Lab03_lecture_learningrate.PNG\" style=\"width:340px; padding: 15px; \" > \n",
"Take a moment and note some characteristics of the gradient descent process printed above. \n",
"\n",
"- The cost starts large and rapidly declines as described in the slide from the lecture.\n",
"- The partial derivatives, `dj_dw`, and `dj_db` also get smaller, rapidly at first and then more slowly. As shown in the diagram from the lecture, as the process nears the 'bottom of the bowl' progress is slower due to the smaller value of the derivative at that point.\n",
"- progress slows though the learning rate, alpha, remains fixed"
]
},
{
"cell_type": "markdown",
"id": "a2366fd8",
"metadata": {},
"source": [
"### Cost versus iterations of gradient descent \n",
"A plot of cost versus iterations is a useful measure of progress in gradient descent. Cost should always decrease in successful runs. The change in cost is so rapid initially, it is also helpful in this case to view a plot that does not include the initial iterations."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ceb161ee",
"metadata": {},
"outputs": [],
"source": [
"# plot cost versus iteration \n",
"fig, (ax1, ax2) = plt.subplots(1, 2, constrained_layout=True, figsize=(12,4))\n",
"ax1.plot(J_hist)\n",
"ax2.plot(1000 + np.arange(len(J_hist[1000:])), J_hist[1000:])\n",
"ax1.set_title(\"Cost vs. iteration\"); ax2.set_title(\"Cost vs. iteration (tail)\")\n",
"ax1.set_ylabel('Cost') ; ax2.set_ylabel('Cost') \n",
"ax1.set_xlabel('iteration step') ; ax2.set_xlabel('iteration step') \n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "205bd40f",
"metadata": {},
"source": [
"### Predictions\n",
"Now that you have discovered the optimal values for the parameters $w$ and $b$, you can now use the model to predict housing values based on our learned parameters. As expected, the predicted values are nearly the same as the training values for the same housing. Further, the value not in the prediction is in line with the expected value."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54271146",
"metadata": {},
"outputs": [],
"source": [
"print(f\"1000 sqft house prediction {w_final*1.0 + b_final:0.1f} Thousand dollars\")\n",
"print(f\"1200 sqft house prediction {w_final*1.2 + b_final:0.1f} Thousand dollars\")\n",
"print(f\"2000 sqft house prediction {w_final*2.0 + b_final:0.1f} Thousand dollars\")"
]
},
{
"cell_type": "markdown",
"id": "b7081853",
"metadata": {},
"source": [
"<a name=\"toc_40291_2.6\"></a>\n",
"## Plotting\n",
"You can show the progress of gradient descent during its execution by plotting the cost over iterations on a contour plot of the cost(w,b). "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6c9e24f",
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots(1,1, figsize=(12, 6))\n",
"plt_contour_wgrad(x_train, y_train, p_hist, ax)"
]
},
{
"cell_type": "markdown",
"id": "1d3bda0a",
"metadata": {},
"source": [
"Above, the contour plot shows the $cost(w,b)$ over a range of $w$ and $b$. Cost levels are represented by the rings. Overlayed, using red arrows, is the path of gradient descent. Here are some things to note:\n",
"- The path makes steady (monotonic) progress toward its goal.\n",
"- initial steps are much larger than the steps near the goal."
]
},
{
"cell_type": "markdown",
"id": "9d2f0d6b",
"metadata": {},
"source": [
"**Zooming in**, we can see that final steps of gradient descent. Note the distance between steps shrinks as the gradient approaches zero."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e58742e",
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots(1,1, figsize=(12, 4))\n",
"plt_contour_wgrad(x_train, y_train, p_hist, ax, w_range=[180, 220, 0.5], b_range=[80, 120, 0.5],\n",
" contours=[1,5,10,20],resolution=0.5)"
]
},
{
"cell_type": "markdown",
"id": "66a68a52",
"metadata": {},
"source": [
"<a name=\"toc_40291_2.7.1\"></a>\n",
"### Increased Learning Rate\n",
"\n",
"<figure>\n",
" <img align=\"left\", src=\"./images/C1_W1_Lab03_alpha_too_big.PNG\" style=\"width:340px;height:240px;\" >\n",
"</figure>\n",
"In the lecture, there was a discussion related to the proper value of the learning rate, $\\alpha$ in equation(3). The larger $\\alpha$ is, the faster gradient descent will converge to a solution. But, if it is too large, gradient descent will diverge. Above you have an example of a solution which converges nicely.\n",
"\n",
"Let's try increasing the value of $\\alpha$ and see what happens:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01c2c046",
"metadata": {},
"outputs": [],
"source": [
"# initialize parameters\n",
"w_init = 0\n",
"b_init = 0\n",
"# set alpha to a large value\n",
"iterations = 10\n",
"tmp_alpha = 8.0e-1\n",
"# run gradient descent\n",
"w_final, b_final, J_hist, p_hist = gradient_descent(x_train ,y_train, w_init, b_init, tmp_alpha, \n",
" iterations, compute_cost, compute_gradient)"
]
},
{
"cell_type": "markdown",
"id": "84d9cb5e",
"metadata": {},
"source": [
"Above, $w$ and $b$ are bouncing back and forth between positive and negative with the absolute value increasing with each iteration. Further, each iteration $\\frac{\\partial J(w,b)}{\\partial w}$ changes sign and cost is increasing rather than decreasing. This is a clear sign that the *learning rate is too large* and the solution is diverging. \n",
"Let's visualize this with a plot."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd87463e",
"metadata": {},
"outputs": [],
"source": [
"plt_divergence(p_hist, J_hist,x_train, y_train)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "1af14beb",
"metadata": {},
"source": [
"Above, the left graph shows $w$'s progression over the first few steps of gradient descent. $w$ oscillates from positive to negative and cost grows rapidly. Gradient Descent is operating on both $w$ and $b$ simultaneously, so one needs the 3-D plot on the right for the complete picture."
]
},
{
"cell_type": "markdown",
"id": "17283374",
"metadata": {},
"source": [
"\n",
"## Congratulations!\n",
"In this lab you:\n",
"- delved into the details of gradient descent for a single variable.\n",
"- developed a routine to compute the gradient\n",
"- visualized what the gradient is\n",
"- completed a gradient descent routine\n",
"- utilized gradient descent to find parameters\n",
"- examined the impact of sizing the learning rate"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "852c45c0",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"dl_toc_settings": {
"rndtag": "40291"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc-autonumbering": false
},
"nbformat": 4,
"nbformat_minor": 5
}

(12 binary image files added in this commit; content not shown)

View File

@@ -0,0 +1,112 @@
"""
lab_utils_common.py
functions common to all optional labs, Course 1, Week 2
"""
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('./deeplearning.mplstyle')
dlblue = '#0096ff'; dlorange = '#FF9300'; dldarkred='#C00000'; dlmagenta='#FF40FF'; dlpurple='#7030A0';
dlcolors = [dlblue, dlorange, dldarkred, dlmagenta, dlpurple]
dlc = dict(dlblue = '#0096ff', dlorange = '#FF9300', dldarkred='#C00000', dlmagenta='#FF40FF', dlpurple='#7030A0')
##########################################################
# Regression Routines
##########################################################
#Function to calculate the cost
def compute_cost_matrix(X, y, w, b, verbose=False):
"""
    Computes the cost for linear regression using matrix operations
Args:
X (ndarray (m,n)): Data, m examples with n features
y (ndarray (m,)) : target values
w (ndarray (n,)) : model parameters
b (scalar) : model parameter
verbose : (Boolean) If true, print out intermediate value f_wb
Returns
cost: (scalar)
"""
m = X.shape[0]
# calculate f_wb for all examples.
f_wb = X @ w + b
# calculate cost
total_cost = (1/(2*m)) * np.sum((f_wb-y)**2)
if verbose: print("f_wb:")
if verbose: print(f_wb)
return total_cost
def compute_gradient_matrix(X, y, w, b):
"""
Computes the gradient for linear regression
Args:
X (ndarray (m,n)): Data, m examples with n features
y (ndarray (m,)) : target values
w (ndarray (n,)) : model parameters
b (scalar) : model parameter
Returns
      dj_db (scalar):       The gradient of the cost w.r.t. the parameter b.
      dj_dw (ndarray (n,)): The gradient of the cost w.r.t. the parameters w.
"""
m,n = X.shape
f_wb = X @ w + b
e = f_wb - y
dj_dw = (1/m) * (X.T @ e)
dj_db = (1/m) * np.sum(e)
return dj_db,dj_dw
# Loop version of multi-variable compute_cost
def compute_cost(X, y, w, b):
"""
compute cost
Args:
X (ndarray (m,n)): Data, m examples with n features
y (ndarray (m,)) : target values
w (ndarray (n,)) : model parameters
b (scalar) : model parameter
Returns
cost (scalar) : cost
"""
m = X.shape[0]
cost = 0.0
for i in range(m):
f_wb_i = np.dot(X[i],w) + b #(n,)(n,)=scalar
cost = cost + (f_wb_i - y[i])**2
cost = cost/(2*m)
return cost
def compute_gradient(X, y, w, b):
"""
Computes the gradient for linear regression
Args:
X (ndarray (m,n)): Data, m examples with n features
y (ndarray (m,)) : target values
w (ndarray (n,)) : model parameters
b (scalar) : model parameter
Returns
      dj_db (scalar):             The gradient of the cost w.r.t. the parameter b.
      dj_dw (ndarray Shape (n,)): The gradient of the cost w.r.t. the parameters w.
"""
m,n = X.shape #(number of examples, number of features)
dj_dw = np.zeros((n,))
dj_db = 0.
for i in range(m):
err = (np.dot(X[i], w) + b) - y[i]
for j in range(n):
dj_dw[j] = dj_dw[j] + err * X[i,j]
dj_db = dj_db + err
dj_dw = dj_dw/m
dj_db = dj_db/m
return dj_db,dj_dw

View File

@@ -0,0 +1,398 @@
"""
lab_utils_uni.py
routines used in Course 1, Week 2, Labs 1-3, dealing with a single variable (univariate)
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from matplotlib.gridspec import GridSpec
from matplotlib.colors import LinearSegmentedColormap
from ipywidgets import interact
from lab_utils_common import compute_cost
from lab_utils_common import dlblue, dlorange, dldarkred, dlmagenta, dlpurple, dlcolors
plt.style.use('./deeplearning.mplstyle')
n_bin = 5
dlcm = LinearSegmentedColormap.from_list(
'dl_map', dlcolors, N=n_bin)
##########################################################
# Plotting Routines
##########################################################
def plt_house_x(X, y,f_wb=None, ax=None):
''' plot house with aXis '''
if not ax:
fig, ax = plt.subplots(1,1)
ax.scatter(X, y, marker='x', c='r', label="Actual Value")
ax.set_title("Housing Prices")
ax.set_ylabel('Price (in 1000s of dollars)')
ax.set_xlabel(f'Size (1000 sqft)')
if f_wb is not None:
ax.plot(X, f_wb, c=dlblue, label="Our Prediction")
ax.legend()
def mk_cost_lines(x,y,w,b, ax):
''' makes vertical cost lines'''
cstr = "cost = (1/m)*("
ctot = 0
label = 'cost for point'
addedbreak = False
for p in zip(x,y):
f_wb_p = w*p[0]+b
c_p = ((f_wb_p - p[1])**2)/2
c_p_txt = c_p
ax.vlines(p[0], p[1],f_wb_p, lw=3, color=dlpurple, ls='dotted', label=label)
label='' #just one
cxy = [p[0], p[1] + (f_wb_p-p[1])/2]
ax.annotate(f'{c_p_txt:0.0f}', xy=cxy, xycoords='data',color=dlpurple,
xytext=(5, 0), textcoords='offset points')
cstr += f"{c_p_txt:0.0f} +"
if len(cstr) > 38 and addedbreak is False:
cstr += "\n"
addedbreak = True
ctot += c_p
ctot = ctot/(len(x))
cstr = cstr[:-1] + f") = {ctot:0.0f}"
ax.text(0.15,0.02,cstr, transform=ax.transAxes, color=dlpurple)
##########
# Cost lab
##########
def plt_intuition(x_train, y_train):
w_range = np.array([200-200,200+200])
tmp_b = 100
w_array = np.arange(*w_range, 5)
cost = np.zeros_like(w_array)
for i in range(len(w_array)):
tmp_w = w_array[i]
cost[i] = compute_cost(x_train, y_train, tmp_w, tmp_b)
@interact(w=(*w_range,10),continuous_update=False)
def func( w=150):
f_wb = np.dot(x_train, w) + tmp_b
fig, ax = plt.subplots(1, 2, constrained_layout=True, figsize=(8,4))
fig.canvas.toolbar_position = 'bottom'
mk_cost_lines(x_train, y_train, w, tmp_b, ax[0])
plt_house_x(x_train, y_train, f_wb=f_wb, ax=ax[0])
ax[1].plot(w_array, cost)
cur_cost = compute_cost(x_train, y_train, w, tmp_b)
ax[1].scatter(w,cur_cost, s=100, color=dldarkred, zorder= 10, label= f"cost at w={w}")
ax[1].hlines(cur_cost, ax[1].get_xlim()[0],w, lw=4, color=dlpurple, ls='dotted')
ax[1].vlines(w, ax[1].get_ylim()[0],cur_cost, lw=4, color=dlpurple, ls='dotted')
ax[1].set_title("Cost vs. w, (b fixed at 100)")
ax[1].set_ylabel('Cost')
ax[1].set_xlabel('w')
ax[1].legend(loc='upper center')
fig.suptitle(f"Minimize Cost: Current Cost = {cur_cost:0.0f}", fontsize=12)
plt.show()
# this is the 2D cost curve with interactive slider
def plt_stationary(x_train, y_train):
# setup figure
fig = plt.figure( figsize=(9,8))
#fig = plt.figure(constrained_layout=True, figsize=(12,10))
fig.set_facecolor('#ffffff') #white
fig.canvas.toolbar_position = 'top'
#gs = GridSpec(2, 2, figure=fig, wspace = 0.01)
gs = GridSpec(2, 2, figure=fig)
ax0 = fig.add_subplot(gs[0, 0])
ax1 = fig.add_subplot(gs[0, 1])
ax2 = fig.add_subplot(gs[1, :], projection='3d')
ax = np.array([ax0,ax1,ax2])
#setup useful ranges and common linspaces
w_range = np.array([200-300.,200+300])
b_range = np.array([50-300., 50+300])
b_space = np.linspace(*b_range, 100)
w_space = np.linspace(*w_range, 100)
# get cost for w,b ranges for contour and 3D
tmp_b,tmp_w = np.meshgrid(b_space,w_space)
z=np.zeros_like(tmp_b)
for i in range(tmp_w.shape[0]):
for j in range(tmp_w.shape[1]):
z[i,j] = compute_cost(x_train, y_train, tmp_w[i][j], tmp_b[i][j] )
if z[i,j] == 0: z[i,j] = 1e-6
w0=200;b=-100 #initial point
### plot model w cost ###
f_wb = np.dot(x_train,w0) + b
mk_cost_lines(x_train,y_train,w0,b,ax[0])
plt_house_x(x_train, y_train, f_wb=f_wb, ax=ax[0])
### plot contour ###
CS = ax[1].contour(tmp_w, tmp_b, np.log(z),levels=12, linewidths=2, alpha=0.7,colors=dlcolors)
ax[1].set_title('Cost(w,b)')
ax[1].set_xlabel('w', fontsize=10)
ax[1].set_ylabel('b', fontsize=10)
ax[1].set_xlim(w_range) ; ax[1].set_ylim(b_range)
cscat = ax[1].scatter(w0,b, s=100, color=dlblue, zorder= 10, label="cost with \ncurrent w,b")
chline = ax[1].hlines(b, ax[1].get_xlim()[0],w0, lw=4, color=dlpurple, ls='dotted')
cvline = ax[1].vlines(w0, ax[1].get_ylim()[0],b, lw=4, color=dlpurple, ls='dotted')
ax[1].text(0.5,0.95,"Click to choose w,b", bbox=dict(facecolor='white', ec = 'black'), fontsize = 10,
transform=ax[1].transAxes, verticalalignment = 'center', horizontalalignment= 'center')
#Surface plot of the cost function J(w,b)
ax[2].plot_surface(tmp_w, tmp_b, z, cmap = dlcm, alpha=0.3, antialiased=True)
ax[2].plot_wireframe(tmp_w, tmp_b, z, color='k', alpha=0.1)
plt.xlabel("$w$")
plt.ylabel("$b$")
ax[2].zaxis.set_rotate_label(False)
ax[2].xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax[2].yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax[2].zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax[2].set_zlabel("J(w, b)\n\n", rotation=90)
plt.title("Cost(w,b) \n [You can rotate this figure]", size=12)
ax[2].view_init(30, -120)
return fig,ax, [cscat, chline, cvline]
#https://matplotlib.org/stable/users/event_handling.html
class plt_update_onclick:
def __init__(self, fig, ax, x_train,y_train, dyn_items):
self.fig = fig
self.ax = ax
self.x_train = x_train
self.y_train = y_train
self.dyn_items = dyn_items
self.cid = fig.canvas.mpl_connect('button_press_event', self)
def __call__(self, event):
if event.inaxes == self.ax[1]:
ws = event.xdata
bs = event.ydata
cst = compute_cost(self.x_train, self.y_train, ws, bs)
# clear and redraw line plot
self.ax[0].clear()
f_wb = np.dot(self.x_train,ws) + bs
mk_cost_lines(self.x_train,self.y_train,ws,bs,self.ax[0])
plt_house_x(self.x_train, self.y_train, f_wb=f_wb, ax=self.ax[0])
            # remove lines and re-add on contour plot and 3d plot
for artist in self.dyn_items:
artist.remove()
a = self.ax[1].scatter(ws,bs, s=100, color=dlblue, zorder= 10, label="cost with \ncurrent w,b")
b = self.ax[1].hlines(bs, self.ax[1].get_xlim()[0],ws, lw=4, color=dlpurple, ls='dotted')
c = self.ax[1].vlines(ws, self.ax[1].get_ylim()[0],bs, lw=4, color=dlpurple, ls='dotted')
d = self.ax[1].annotate(f"Cost: {cst:.0f}", xy= (ws, bs), xytext = (4,4), textcoords = 'offset points',
bbox=dict(facecolor='white'), size = 10)
#Add point in 3D surface plot
e = self.ax[2].scatter3D(ws, bs,cst , marker='X', s=100)
self.dyn_items = [a,b,c,d,e]
self.fig.canvas.draw()
def soup_bowl():
""" Create figure and plot with a 3D projection"""
fig = plt.figure(figsize=(8,8))
#Plot configuration
ax = fig.add_subplot(111, projection='3d')
ax.xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.zaxis.set_rotate_label(False)
ax.view_init(45, -120)
#Useful linearspaces to give values to the parameters w and b
w = np.linspace(-20, 20, 100)
b = np.linspace(-20, 20, 100)
#Get the z value for a bowl-shaped cost function
z=np.zeros((len(w), len(b)))
j=0
for x in w:
i=0
for y in b:
z[i,j] = x**2 + y**2
i+=1
j+=1
#Meshgrid used for plotting 3D functions
W, B = np.meshgrid(w, b)
#Create the 3D surface plot of the bowl-shaped cost function
ax.plot_surface(W, B, z, cmap = "Spectral_r", alpha=0.7, antialiased=False)
ax.plot_wireframe(W, B, z, color='k', alpha=0.1)
ax.set_xlabel("$w$")
ax.set_ylabel("$b$")
ax.set_zlabel("$J(w,b)$", rotation=90)
ax.set_title("$J(w,b)$\n [You can rotate this figure]", size=15)
plt.show()
def inbounds(a,b,xlim,ylim):
xlow,xhigh = xlim
ylow,yhigh = ylim
ax, ay = a
bx, by = b
if (ax > xlow and ax < xhigh) and (bx > xlow and bx < xhigh) \
and (ay > ylow and ay < yhigh) and (by > ylow and by < yhigh):
return True
return False
def plt_contour_wgrad(x, y, hist, ax, w_range=[-100, 500, 5], b_range=[-500, 500, 5],
contours = [0.1,50,1000,5000,10000,25000,50000],
resolution=5, w_final=200, b_final=100,step=10 ):
b0,w0 = np.meshgrid(np.arange(*b_range),np.arange(*w_range))
z=np.zeros_like(b0)
for i in range(w0.shape[0]):
for j in range(w0.shape[1]):
z[i][j] = compute_cost(x, y, w0[i][j], b0[i][j] )
CS = ax.contour(w0, b0, z, contours, linewidths=2,
colors=[dlblue, dlorange, dldarkred, dlmagenta, dlpurple])
ax.clabel(CS, inline=1, fmt='%1.0f', fontsize=10)
ax.set_xlabel("w"); ax.set_ylabel("b")
ax.set_title('Contour plot of cost J(w,b), vs b,w with path of gradient descent')
w = w_final; b=b_final
ax.hlines(b, ax.get_xlim()[0],w, lw=2, color=dlpurple, ls='dotted')
ax.vlines(w, ax.get_ylim()[0],b, lw=2, color=dlpurple, ls='dotted')
base = hist[0]
for point in hist[0::step]:
edist = np.sqrt((base[0] - point[0])**2 + (base[1] - point[1])**2)
if(edist > resolution or point==hist[-1]):
if inbounds(point,base, ax.get_xlim(),ax.get_ylim()):
plt.annotate('', xy=point, xytext=base,xycoords='data',
arrowprops={'arrowstyle': '->', 'color': 'r', 'lw': 3},
va='center', ha='center')
base=point
return
def plt_divergence(p_hist, J_hist, x_train,y_train):
x=np.zeros(len(p_hist))
y=np.zeros(len(p_hist))
v=np.zeros(len(p_hist))
for i in range(len(p_hist)):
x[i] = p_hist[i][0]
y[i] = p_hist[i][1]
v[i] = J_hist[i]
fig = plt.figure(figsize=(12,5))
plt.subplots_adjust( wspace=0 )
gs = fig.add_gridspec(1, 5)
fig.suptitle(f"Cost escalates when learning rate is too large")
#===============
# First subplot
#===============
ax = fig.add_subplot(gs[:2], )
# Print w vs cost to see minimum
fix_b = 100
w_array = np.arange(-70000, 70000, 1000)
cost = np.zeros_like(w_array)
for i in range(len(w_array)):
tmp_w = w_array[i]
cost[i] = compute_cost(x_train, y_train, tmp_w, fix_b)
ax.plot(w_array, cost)
ax.plot(x,v, c=dlmagenta)
ax.set_title("Cost vs w, b set to 100")
ax.set_ylabel('Cost')
ax.set_xlabel('w')
ax.xaxis.set_major_locator(MaxNLocator(2))
#===============
# Second Subplot
#===============
tmp_b,tmp_w = np.meshgrid(np.arange(-35000, 35000, 500),np.arange(-70000, 70000, 500))
z=np.zeros_like(tmp_b)
for i in range(tmp_w.shape[0]):
for j in range(tmp_w.shape[1]):
z[i][j] = compute_cost(x_train, y_train, tmp_w[i][j], tmp_b[i][j] )
ax = fig.add_subplot(gs[2:], projection='3d')
ax.plot_surface(tmp_w, tmp_b, z, alpha=0.3, color=dlblue)
ax.xaxis.set_major_locator(MaxNLocator(2))
ax.yaxis.set_major_locator(MaxNLocator(2))
ax.set_xlabel('w', fontsize=16)
ax.set_ylabel('b', fontsize=16)
ax.set_zlabel('\ncost', fontsize=16)
plt.title('Cost vs (b, w)')
# Customize the view angle
ax.view_init(elev=20., azim=-65)
ax.plot(x, y, v,c=dlmagenta)
return
# draw derivative line
# y = m*(x - x1) + y1
def add_line(dj_dx, x1, y1, d, ax):
x = np.linspace(x1-d, x1+d,50)
y = dj_dx*(x - x1) + y1
ax.scatter(x1, y1, color=dlblue, s=50)
ax.plot(x, y, '--', c=dldarkred,zorder=10, linewidth = 1)
xoff = 30 if x1 == 200 else 10
ax.annotate(r"$\frac{\partial J}{\partial w}$ =%d" % dj_dx, fontsize=14,
xy=(x1, y1), xycoords='data',
xytext=(xoff, 10), textcoords='offset points',
arrowprops=dict(arrowstyle="->"),
horizontalalignment='left', verticalalignment='top')
def plt_gradients(x_train,y_train, f_compute_cost, f_compute_gradient):
#===============
# First subplot
#===============
fig,ax = plt.subplots(1,2,figsize=(12,4))
# Print w vs cost to see minimum
fix_b = 100
w_array = np.linspace(-100, 500, 50)
w_array = np.linspace(0, 400, 50)
cost = np.zeros_like(w_array)
for i in range(len(w_array)):
tmp_w = w_array[i]
cost[i] = f_compute_cost(x_train, y_train, tmp_w, fix_b)
ax[0].plot(w_array, cost,linewidth=1)
ax[0].set_title("Cost vs w, with gradient; b set to 100")
ax[0].set_ylabel('Cost')
ax[0].set_xlabel('w')
# plot lines for fixed b=100
for tmp_w in [100,200,300]:
fix_b = 100
dj_dw,dj_db = f_compute_gradient(x_train, y_train, tmp_w, fix_b )
j = f_compute_cost(x_train, y_train, tmp_w, fix_b)
add_line(dj_dw, tmp_w, j, 30, ax[0])
#===============
# Second Subplot
#===============
tmp_b,tmp_w = np.meshgrid(np.linspace(-200, 200, 10), np.linspace(-100, 600, 10))
U = np.zeros_like(tmp_w)
V = np.zeros_like(tmp_b)
for i in range(tmp_w.shape[0]):
for j in range(tmp_w.shape[1]):
U[i][j], V[i][j] = f_compute_gradient(x_train, y_train, tmp_w[i][j], tmp_b[i][j] )
X = tmp_w
Y = tmp_b
n=-2
color_array = np.sqrt(((V-n)/2)**2 + ((U-n)/2)**2)
ax[1].set_title('Gradient shown in quiver plot')
Q = ax[1].quiver(X, Y, U, V, color_array, units='width', )
ax[1].quiverkey(Q, 0.9, 0.9, 2, r'$2 \frac{m}{s}$', labelpos='E',coordinates='figure')
ax[1].set_xlabel("w"); ax[1].set_ylabel("b")

47
week1/data.txt Normal file
View File

@@ -0,0 +1,47 @@
2104,3,399900
1600,3,329900
2400,3,369000
1416,2,232000
3000,4,539900
1985,4,299900
1534,3,314900
1427,3,198999
1380,3,212000
1494,3,242500
1940,4,239999
2000,3,347000
1890,3,329999
4478,5,699900
1268,3,259900
2300,4,449900
1320,2,299900
1236,3,199900
2609,4,499998
3031,4,599000
1767,3,252900
1888,2,255000
1604,3,242900
1962,4,259900
3890,3,573900
1100,3,249900
1458,3,464500
2526,3,469000
2200,3,475000
2637,3,299900
1839,2,349900
1000,1,169900
2040,4,314900
3137,3,579900
1811,4,285900
1437,3,249900
1239,3,229900
2132,4,345000
4215,4,549000
2162,4,287000
1664,2,368500
2238,3,329900
2567,4,314000
1200,3,299000
852,2,179900
1852,4,299900
1203,3,239500

124
week1/deeplearning.mplstyle Normal file
View File

@@ -0,0 +1,124 @@
# see https://matplotlib.org/stable/tutorials/introductory/customizing.html
lines.linewidth: 4
lines.solid_capstyle: butt
legend.fancybox: true
# Verdana" for non-math text,
# Cambria Math
#Blue (Crayon-Aqua) 0096FF
#Dark Red C00000
#Orange (Apple Orange) FF9300
#Black 000000
#Magenta FF40FF
#Purple 7030A0
axes.prop_cycle: cycler('color', ['0096FF', 'FF9300', 'FF40FF', '7030A0', 'C00000'])
#axes.facecolor: f0f0f0 # grey
axes.facecolor: ffffff # white
axes.labelsize: large
axes.axisbelow: true
axes.grid: False
axes.edgecolor: f0f0f0
axes.linewidth: 3.0
axes.titlesize: x-large
patch.edgecolor: f0f0f0
patch.linewidth: 0.5
svg.fonttype: path
grid.linestyle: -
grid.linewidth: 1.0
grid.color: cbcbcb
xtick.major.size: 0
xtick.minor.size: 0
ytick.major.size: 0
ytick.minor.size: 0
savefig.edgecolor: f0f0f0
savefig.facecolor: f0f0f0
#figure.subplot.left: 0.08
#figure.subplot.right: 0.95
#figure.subplot.bottom: 0.07
#figure.facecolor: f0f0f0 # grey
figure.facecolor: ffffff # white
## ***************************************************************************
## * FONT *
## ***************************************************************************
## The font properties used by `text.Text`.
## See https://matplotlib.org/api/font_manager_api.html for more information
## on font properties. The 6 font properties used for font matching are
## given below with their default values.
##
## The font.family property can take either a concrete font name (not supported
## when rendering text with usetex), or one of the following five generic
## values:
## - 'serif' (e.g., Times),
## - 'sans-serif' (e.g., Helvetica),
## - 'cursive' (e.g., Zapf-Chancery),
## - 'fantasy' (e.g., Western), and
## - 'monospace' (e.g., Courier).
## Each of these values has a corresponding default list of font names
## (font.serif, etc.); the first available font in the list is used. Note that
## for font.serif, font.sans-serif, and font.monospace, the first element of
## the list (a DejaVu font) will always be used because DejaVu is shipped with
## Matplotlib and is thus guaranteed to be available; the other entries are
## left as examples of other possible values.
##
## The font.style property has three values: normal (or roman), italic
## or oblique. The oblique style will be used for italic, if it is not
## present.
##
## The font.variant property has two values: normal or small-caps. For
## TrueType fonts, which are scalable fonts, small-caps is equivalent
## to using a font size of 'smaller', or about 83%% of the current font
## size.
##
## The font.weight property has effectively 13 values: normal, bold,
## bolder, lighter, 100, 200, 300, ..., 900. Normal is the same as
## 400, and bold is 700. bolder and lighter are relative values with
## respect to the current weight.
##
## The font.stretch property has 11 values: ultra-condensed,
## extra-condensed, condensed, semi-condensed, normal, semi-expanded,
## expanded, extra-expanded, ultra-expanded, wider, and narrower. This
## property is not currently implemented.
##
## The font.size property is the default font size for text, given in points.
## 10 pt is the standard value.
##
## Note that font.size controls default text sizes. To configure
## special text sizes tick labels, axes, labels, title, etc., see the rc
## settings for axes and ticks. Special text sizes can be defined
## relative to font.size, using the following values: xx-small, x-small,
## small, medium, large, x-large, xx-large, larger, or smaller
font.family: sans-serif
font.style: normal
font.variant: normal
font.weight: normal
font.stretch: normal
font.size: 8.0
font.serif: DejaVu Serif, Bitstream Vera Serif, Computer Modern Roman, New Century Schoolbook, Century Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times New Roman, Times, Palatino, Charter, serif
font.sans-serif: Verdana, DejaVu Sans, Bitstream Vera Sans, Computer Modern Sans Serif, Lucida Grande, Geneva, Lucid, Arial, Helvetica, Avant Garde, sans-serif
font.cursive: Apple Chancery, Textile, Zapf Chancery, Sand, Script MT, Felipa, Comic Neue, Comic Sans MS, cursive
font.fantasy: Chicago, Charcoal, Impact, Western, Humor Sans, xkcd, fantasy
font.monospace: DejaVu Sans Mono, Bitstream Vera Sans Mono, Computer Modern Typewriter, Andale Mono, Nimbus Mono L, Courier New, Courier, Fixed, Terminal, monospace
## ***************************************************************************
## * TEXT *
## ***************************************************************************
## The text properties used by `text.Text`.
## See https://matplotlib.org/api/artist_api.html#module-matplotlib.text
## for more information on text properties
#text.color: black

(12 binary image files added, including week1/images/C1W1L1_Run.PNG; content not shown)

112
week1/lab_utils_common.py Normal file
View File

@@ -0,0 +1,112 @@
"""
lab_utils_common.py
functions common to all optional labs, Course 1, Week 2
"""
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('deeplearning.mplstyle')
dlblue = '#0096ff'; dlorange = '#FF9300'; dldarkred='#C00000'; dlmagenta='#FF40FF'; dlpurple='#7030A0';
dlcolors = [dlblue, dlorange, dldarkred, dlmagenta, dlpurple]
dlc = dict(dlblue = '#0096ff', dlorange = '#FF9300', dldarkred='#C00000', dlmagenta='#FF40FF', dlpurple='#7030A0')
##########################################################
# Regression Routines
##########################################################
#Function to calculate the cost
def compute_cost_matrix(X, y, w, b, verbose=False):
"""
    Computes the cost for linear regression using matrix operations
Args:
X (ndarray (m,n)): Data, m examples with n features
y (ndarray (m,)) : target values
w (ndarray (n,)) : model parameters
b (scalar) : model parameter
verbose : (Boolean) If true, print out intermediate value f_wb
Returns
cost: (scalar)
"""
m = X.shape[0]
# calculate f_wb for all examples.
f_wb = X @ w + b
# calculate cost
total_cost = (1/(2*m)) * np.sum((f_wb-y)**2)
if verbose: print("f_wb:")
if verbose: print(f_wb)
return total_cost
def compute_gradient_matrix(X, y, w, b):
"""
Computes the gradient for linear regression
Args:
X (ndarray (m,n)): Data, m examples with n features
y (ndarray (m,)) : target values
w (ndarray (n,)) : model parameters
b (scalar) : model parameter
Returns
      dj_db (scalar):       The gradient of the cost w.r.t. the parameter b.
      dj_dw (ndarray (n,)): The gradient of the cost w.r.t. the parameters w.
"""
m,n = X.shape
f_wb = X @ w + b
e = f_wb - y
dj_dw = (1/m) * (X.T @ e)
dj_db = (1/m) * np.sum(e)
return dj_db,dj_dw
# Loop version of multi-variable compute_cost
def compute_cost(X, y, w, b):
"""
compute cost
Args:
X (ndarray (m,n)): Data, m examples with n features
y (ndarray (m,)) : target values
w (ndarray (n,)) : model parameters
b (scalar) : model parameter
Returns
cost (scalar) : cost
"""
m = X.shape[0]
cost = 0.0
for i in range(m):
f_wb_i = np.dot(X[i],w) + b #(n,)(n,)=scalar
cost = cost + (f_wb_i - y[i])**2
cost = cost/(2*m)
return cost
def compute_gradient(X, y, w, b):
"""
Computes the gradient for linear regression
Args:
X (ndarray (m,n)): Data, m examples with n features
y (ndarray (m,)) : target values
w (ndarray (n,)) : model parameters
b (scalar) : model parameter
Returns
      dj_db (scalar):             The gradient of the cost w.r.t. the parameter b.
      dj_dw (ndarray Shape (n,)): The gradient of the cost w.r.t. the parameters w.
"""
m,n = X.shape #(number of examples, number of features)
dj_dw = np.zeros((n,))
dj_db = 0.
for i in range(m):
err = (np.dot(X[i], w) + b) - y[i]
for j in range(n):
dj_dw[j] = dj_dw[j] + err * X[i,j]
dj_db = dj_db + err
dj_dw = dj_dw/m
dj_db = dj_db/m
return dj_db,dj_dw

398
week1/lab_utils_uni.py Normal file
View File

@@ -0,0 +1,398 @@
"""
lab_utils_uni.py
routines used in Course 1, Week 2, Labs 1-3, dealing with a single variable (univariate)
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from matplotlib.gridspec import GridSpec
from matplotlib.colors import LinearSegmentedColormap
from ipywidgets import interact
from lab_utils_common import compute_cost
from lab_utils_common import dlblue, dlorange, dldarkred, dlmagenta, dlpurple, dlcolors
plt.style.use('deeplearning.mplstyle')
n_bin = 5
dlcm = LinearSegmentedColormap.from_list(
'dl_map', dlcolors, N=n_bin)
##########################################################
# Plotting Routines
##########################################################
def plt_house_x(X, y,f_wb=None, ax=None):
''' plot house with aXis '''
if not ax:
fig, ax = plt.subplots(1,1)
ax.scatter(X, y, marker='x', c='r', label="Actual Value")
ax.set_title("Housing Prices")
ax.set_ylabel('Price (in 1000s of dollars)')
ax.set_xlabel(f'Size (1000 sqft)')
if f_wb is not None:
ax.plot(X, f_wb, c=dlblue, label="Our Prediction")
ax.legend()
def mk_cost_lines(x,y,w,b, ax):
''' makes vertical cost lines'''
cstr = "cost = (1/m)*("
ctot = 0
label = 'cost for point'
addedbreak = False
for p in zip(x,y):
f_wb_p = w*p[0]+b
c_p = ((f_wb_p - p[1])**2)/2
c_p_txt = c_p
ax.vlines(p[0], p[1],f_wb_p, lw=3, color=dlpurple, ls='dotted', label=label)
label='' #just one
cxy = [p[0], p[1] + (f_wb_p-p[1])/2]
ax.annotate(f'{c_p_txt:0.0f}', xy=cxy, xycoords='data',color=dlpurple,
xytext=(5, 0), textcoords='offset points')
cstr += f"{c_p_txt:0.0f} +"
if len(cstr) > 38 and addedbreak is False:
cstr += "\n"
addedbreak = True
ctot += c_p
ctot = ctot/(len(x))
cstr = cstr[:-1] + f") = {ctot:0.0f}"
ax.text(0.15,0.02,cstr, transform=ax.transAxes, color=dlpurple)
##########
# Cost lab
##########
def plt_intuition(x_train, y_train):
w_range = np.array([200-200,200+200])
tmp_b = 100
w_array = np.arange(*w_range, 5)
cost = np.zeros_like(w_array)
for i in range(len(w_array)):
tmp_w = w_array[i]
cost[i] = compute_cost(x_train, y_train, tmp_w, tmp_b)
@interact(w=(*w_range,10),continuous_update=False)
def func( w=150):
f_wb = np.dot(x_train, w) + tmp_b
fig, ax = plt.subplots(1, 2, constrained_layout=True, figsize=(8,4))
fig.canvas.toolbar_position = 'bottom'
mk_cost_lines(x_train, y_train, w, tmp_b, ax[0])
plt_house_x(x_train, y_train, f_wb=f_wb, ax=ax[0])
ax[1].plot(w_array, cost)
cur_cost = compute_cost(x_train, y_train, w, tmp_b)
ax[1].scatter(w,cur_cost, s=100, color=dldarkred, zorder= 10, label= f"cost at w={w}")
ax[1].hlines(cur_cost, ax[1].get_xlim()[0],w, lw=4, color=dlpurple, ls='dotted')
ax[1].vlines(w, ax[1].get_ylim()[0],cur_cost, lw=4, color=dlpurple, ls='dotted')
ax[1].set_title("Cost vs. w, (b fixed at 100)")
ax[1].set_ylabel('Cost')
ax[1].set_xlabel('w')
ax[1].legend(loc='upper center')
fig.suptitle(f"Minimize Cost: Current Cost = {cur_cost:0.0f}", fontsize=12)
plt.show()
# this is the 2D cost curve with interactive slider
def plt_stationary(x_train, y_train):
# setup figure
fig = plt.figure( figsize=(9,8))
#fig = plt.figure(constrained_layout=True, figsize=(12,10))
fig.set_facecolor('#ffffff') #white
fig.canvas.toolbar_position = 'top'
#gs = GridSpec(2, 2, figure=fig, wspace = 0.01)
gs = GridSpec(2, 2, figure=fig)
ax0 = fig.add_subplot(gs[0, 0])
ax1 = fig.add_subplot(gs[0, 1])
ax2 = fig.add_subplot(gs[1, :], projection='3d')
ax = np.array([ax0,ax1,ax2])
#setup useful ranges and common linspaces
w_range = np.array([200-300.,200+300])
b_range = np.array([50-300., 50+300])
b_space = np.linspace(*b_range, 100)
w_space = np.linspace(*w_range, 100)
# get cost for w,b ranges for contour and 3D
tmp_b,tmp_w = np.meshgrid(b_space,w_space)
z=np.zeros_like(tmp_b)
for i in range(tmp_w.shape[0]):
for j in range(tmp_w.shape[1]):
z[i,j] = compute_cost(x_train, y_train, tmp_w[i][j], tmp_b[i][j] )
if z[i,j] == 0: z[i,j] = 1e-6
w0=200;b=-100 #initial point
### plot model w cost ###
f_wb = np.dot(x_train,w0) + b
mk_cost_lines(x_train,y_train,w0,b,ax[0])
plt_house_x(x_train, y_train, f_wb=f_wb, ax=ax[0])
### plot contour ###
CS = ax[1].contour(tmp_w, tmp_b, np.log(z),levels=12, linewidths=2, alpha=0.7,colors=dlcolors)
ax[1].set_title('Cost(w,b)')
ax[1].set_xlabel('w', fontsize=10)
ax[1].set_ylabel('b', fontsize=10)
ax[1].set_xlim(w_range) ; ax[1].set_ylim(b_range)
cscat = ax[1].scatter(w0,b, s=100, color=dlblue, zorder= 10, label="cost with \ncurrent w,b")
chline = ax[1].hlines(b, ax[1].get_xlim()[0],w0, lw=4, color=dlpurple, ls='dotted')
cvline = ax[1].vlines(w0, ax[1].get_ylim()[0],b, lw=4, color=dlpurple, ls='dotted')
ax[1].text(0.5,0.95,"Click to choose w,b", bbox=dict(facecolor='white', ec = 'black'), fontsize = 10,
transform=ax[1].transAxes, verticalalignment = 'center', horizontalalignment= 'center')
#Surface plot of the cost function J(w,b)
ax[2].plot_surface(tmp_w, tmp_b, z, cmap = dlcm, alpha=0.3, antialiased=True)
ax[2].plot_wireframe(tmp_w, tmp_b, z, color='k', alpha=0.1)
plt.xlabel("$w$")
plt.ylabel("$b$")
ax[2].zaxis.set_rotate_label(False)
ax[2].xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax[2].yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax[2].zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax[2].set_zlabel("J(w, b)\n\n", rotation=90)
plt.title("Cost(w,b) \n [You can rotate this figure]", size=12)
ax[2].view_init(30, -120)
return fig,ax, [cscat, chline, cvline]
#https://matplotlib.org/stable/users/event_handling.html
class plt_update_onclick:
def __init__(self, fig, ax, x_train,y_train, dyn_items):
self.fig = fig
self.ax = ax
self.x_train = x_train
self.y_train = y_train
self.dyn_items = dyn_items
self.cid = fig.canvas.mpl_connect('button_press_event', self)
def __call__(self, event):
if event.inaxes == self.ax[1]:
ws = event.xdata
bs = event.ydata
cst = compute_cost(self.x_train, self.y_train, ws, bs)
# clear and redraw line plot
self.ax[0].clear()
f_wb = np.dot(self.x_train,ws) + bs
mk_cost_lines(self.x_train,self.y_train,ws,bs,self.ax[0])
plt_house_x(self.x_train, self.y_train, f_wb=f_wb, ax=self.ax[0])
            # remove lines and re-add on contour plot and 3d plot
for artist in self.dyn_items:
artist.remove()
a = self.ax[1].scatter(ws,bs, s=100, color=dlblue, zorder= 10, label="cost with \ncurrent w,b")
b = self.ax[1].hlines(bs, self.ax[1].get_xlim()[0],ws, lw=4, color=dlpurple, ls='dotted')
c = self.ax[1].vlines(ws, self.ax[1].get_ylim()[0],bs, lw=4, color=dlpurple, ls='dotted')
d = self.ax[1].annotate(f"Cost: {cst:.0f}", xy= (ws, bs), xytext = (4,4), textcoords = 'offset points',
bbox=dict(facecolor='white'), size = 10)
#Add point in 3D surface plot
e = self.ax[2].scatter3D(ws, bs,cst , marker='X', s=100)
self.dyn_items = [a,b,c,d,e]
self.fig.canvas.draw()
def soup_bowl():
""" Create figure and plot with a 3D projection"""
fig = plt.figure(figsize=(8,8))
#Plot configuration
ax = fig.add_subplot(111, projection='3d')
ax.xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.zaxis.set_rotate_label(False)
ax.view_init(45, -120)
#Useful linearspaces to give values to the parameters w and b
w = np.linspace(-20, 20, 100)
b = np.linspace(-20, 20, 100)
#Get the z value for a bowl-shaped cost function
z=np.zeros((len(w), len(b)))
j=0
for x in w:
i=0
for y in b:
z[i,j] = x**2 + y**2
i+=1
j+=1
#Meshgrid used for plotting 3D functions
W, B = np.meshgrid(w, b)
#Create the 3D surface plot of the bowl-shaped cost function
ax.plot_surface(W, B, z, cmap = "Spectral_r", alpha=0.7, antialiased=False)
ax.plot_wireframe(W, B, z, color='k', alpha=0.1)
ax.set_xlabel("$w$")
ax.set_ylabel("$b$")
ax.set_zlabel("$J(w,b)$", rotation=90)
ax.set_title("$J(w,b)$\n [You can rotate this figure]", size=15)
plt.show()
def inbounds(a,b,xlim,ylim):
xlow,xhigh = xlim
ylow,yhigh = ylim
ax, ay = a
bx, by = b
if (ax > xlow and ax < xhigh) and (bx > xlow and bx < xhigh) \
and (ay > ylow and ay < yhigh) and (by > ylow and by < yhigh):
return True
return False
def plt_contour_wgrad(x, y, hist, ax, w_range=[-100, 500, 5], b_range=[-500, 500, 5],
contours = [0.1,50,1000,5000,10000,25000,50000],
resolution=5, w_final=200, b_final=100,step=10 ):
b0,w0 = np.meshgrid(np.arange(*b_range),np.arange(*w_range))
z=np.zeros_like(b0)
for i in range(w0.shape[0]):
for j in range(w0.shape[1]):
z[i][j] = compute_cost(x, y, w0[i][j], b0[i][j] )
CS = ax.contour(w0, b0, z, contours, linewidths=2,
colors=[dlblue, dlorange, dldarkred, dlmagenta, dlpurple])
ax.clabel(CS, inline=1, fmt='%1.0f', fontsize=10)
ax.set_xlabel("w"); ax.set_ylabel("b")
ax.set_title('Contour plot of cost J(w,b), vs b,w with path of gradient descent')
w = w_final; b=b_final
ax.hlines(b, ax.get_xlim()[0],w, lw=2, color=dlpurple, ls='dotted')
ax.vlines(w, ax.get_ylim()[0],b, lw=2, color=dlpurple, ls='dotted')
base = hist[0]
for point in hist[0::step]:
edist = np.sqrt((base[0] - point[0])**2 + (base[1] - point[1])**2)
if(edist > resolution or point==hist[-1]):
if inbounds(point,base, ax.get_xlim(),ax.get_ylim()):
plt.annotate('', xy=point, xytext=base,xycoords='data',
arrowprops={'arrowstyle': '->', 'color': 'r', 'lw': 3},
va='center', ha='center')
base=point
return
def plt_divergence(p_hist, J_hist, x_train,y_train):
x=np.zeros(len(p_hist))
y=np.zeros(len(p_hist))
v=np.zeros(len(p_hist))
for i in range(len(p_hist)):
x[i] = p_hist[i][0]
y[i] = p_hist[i][1]
v[i] = J_hist[i]
fig = plt.figure(figsize=(12,5))
plt.subplots_adjust( wspace=0 )
gs = fig.add_gridspec(1, 5)
fig.suptitle(f"Cost escalates when learning rate is too large")
#===============
# First subplot
#===============
ax = fig.add_subplot(gs[:2], )
# Print w vs cost to see minimum
fix_b = 100
w_array = np.arange(-70000, 70000, 1000)
cost = np.zeros_like(w_array)
for i in range(len(w_array)):
tmp_w = w_array[i]
cost[i] = compute_cost(x_train, y_train, tmp_w, fix_b)
ax.plot(w_array, cost)
ax.plot(x,v, c=dlmagenta)
ax.set_title("Cost vs w, b set to 100")
ax.set_ylabel('Cost')
ax.set_xlabel('w')
ax.xaxis.set_major_locator(MaxNLocator(2))
#===============
# Second Subplot
#===============
tmp_b,tmp_w = np.meshgrid(np.arange(-35000, 35000, 500),np.arange(-70000, 70000, 500))
z=np.zeros_like(tmp_b)
for i in range(tmp_w.shape[0]):
for j in range(tmp_w.shape[1]):
z[i][j] = compute_cost(x_train, y_train, tmp_w[i][j], tmp_b[i][j] )
ax = fig.add_subplot(gs[2:], projection='3d')
ax.plot_surface(tmp_w, tmp_b, z, alpha=0.3, color=dlblue)
ax.xaxis.set_major_locator(MaxNLocator(2))
ax.yaxis.set_major_locator(MaxNLocator(2))
ax.set_xlabel('w', fontsize=16)
ax.set_ylabel('b', fontsize=16)
ax.set_zlabel('\ncost', fontsize=16)
plt.title('Cost vs (b, w)')
# Customize the view angle
ax.view_init(elev=20., azim=-65)
ax.plot(x, y, v,c=dlmagenta)
return
# draw derivative line
# y = m*(x - x1) + y1
def add_line(dj_dx, x1, y1, d, ax):
x = np.linspace(x1-d, x1+d,50)
y = dj_dx*(x - x1) + y1
ax.scatter(x1, y1, color=dlblue, s=50)
ax.plot(x, y, '--', c=dldarkred,zorder=10, linewidth = 1)
xoff = 30 if x1 == 200 else 10
ax.annotate(r"$\frac{\partial J}{\partial w}$ =%d" % dj_dx, fontsize=14,
xy=(x1, y1), xycoords='data',
xytext=(xoff, 10), textcoords='offset points',
arrowprops=dict(arrowstyle="->"),
horizontalalignment='left', verticalalignment='top')
def plt_gradients(x_train,y_train, f_compute_cost, f_compute_gradient):
#===============
# First subplot
#===============
fig,ax = plt.subplots(1,2,figsize=(12,4))
# Print w vs cost to see minimum
fix_b = 100
w_array = np.linspace(-100, 500, 50)
w_array = np.linspace(0, 400, 50)
cost = np.zeros_like(w_array)
for i in range(len(w_array)):
tmp_w = w_array[i]
cost[i] = f_compute_cost(x_train, y_train, tmp_w, fix_b)
ax[0].plot(w_array, cost,linewidth=1)
ax[0].set_title("Cost vs w, with gradient; b set to 100")
ax[0].set_ylabel('Cost')
ax[0].set_xlabel('w')
# plot lines for fixed b=100
for tmp_w in [100,200,300]:
fix_b = 100
dj_dw,dj_db = f_compute_gradient(x_train, y_train, tmp_w, fix_b )
j = f_compute_cost(x_train, y_train, tmp_w, fix_b)
add_line(dj_dw, tmp_w, j, 30, ax[0])
#===============
# Second Subplot
#===============
tmp_b,tmp_w = np.meshgrid(np.linspace(-200, 200, 10), np.linspace(-100, 600, 10))
U = np.zeros_like(tmp_w)
V = np.zeros_like(tmp_b)
for i in range(tmp_w.shape[0]):
for j in range(tmp_w.shape[1]):
U[i][j], V[i][j] = f_compute_gradient(x_train, y_train, tmp_w[i][j], tmp_b[i][j] )
X = tmp_w
Y = tmp_b
n=-2
color_array = np.sqrt(((V-n)/2)**2 + ((U-n)/2)**2)
ax[1].set_title('Gradient shown in quiver plot')
Q = ax[1].quiver(X, Y, U, V, color_array, units='width', )
ax[1].quiverkey(Q, 0.9, 0.9, 2, r'$2 \frac{m}{s}$', labelpos='E',coordinates='figure')
ax[1].set_xlabel("w"); ax[1].set_ylabel("b")