240615
BIN  plt/arctanx&x.png  (Normal file)
Binary file not shown. After Width: | Height: | Size: 28 KiB

29  plt/arctanx&x.py  (Normal file)
@@ -0,0 +1,29 @@
import numpy as np
import matplotlib.pyplot as plt

# Define the range of x
x = np.linspace(-3, 3, 400)

# Compute the y values
y_arctan = np.arctan(x)
y_linear = x

# Plot the figure
plt.figure(figsize=(8, 6))
plt.plot(x, y_arctan, label='y = arctan(x)', color='blue')
plt.plot(x, y_linear, label='y = x', color='red', linestyle='--')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plot of y = arctan(x) and y = x')

# Get the current axes
ax = plt.gca()

# Set the line width of the x- and y-axis spines
ax.spines['bottom'].set_linewidth(2)
ax.spines['left'].set_linewidth(2)

plt.legend()
plt.grid(True)  # keep this line if grid lines are wanted
plt.savefig('arctanx&x.png')
plt.show()
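Note: the script calls plt.savefig(...) before plt.show(), so the PNG above is written even if the window is closed immediately. If only the file output is wanted (for example on a display-less machine), one option — a sketch, not part of the committed script — is to select a non-interactive backend before pyplot is imported:

import matplotlib
matplotlib.use('Agg')  # non-interactive backend: savefig() still writes the file, show() does nothing
import matplotlib.pyplot as plt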
							
								
								
									
52  plt/style.mplstyle  (Normal file)
@@ -0,0 +1,52 @@
# Line properties
lines.linewidth: 4
lines.solid_capstyle: butt

# Legend properties
legend.fancybox: true

# Color and cycle properties
axes.prop_cycle: cycler('color', ['#0096FF', '#FF9300', '#FF40FF', '#7030A0', '#C00000'])
axes.facecolor: '#ffffff'  # white
axes.labelsize: large
axes.axisbelow: true
axes.grid: False
axes.edgecolor: '#f0f0f0'
axes.linewidth: 3.0
axes.titlesize: x-large

# Patch properties
patch.edgecolor: '#f0f0f0'
patch.linewidth: 0.5

# SVG properties
svg.fonttype: path

# Grid properties
grid.linestyle: '-'
grid.linewidth: 1.0
grid.color: '#cbcbcb'

# Ticks properties
xtick.major.size: 0
xtick.minor.size: 0
ytick.major.size: 0
ytick.minor.size: 0

# Savefig properties
savefig.edgecolor: '#f0f0f0'
savefig.facecolor: '#f0f0f0'

# Figure properties
figure.facecolor: '#ffffff'  # white

# Font properties
font.family: sans-serif
font.style: normal
font.variant: normal
font.weight: normal
font.stretch: normal
font.size: 8.0

# Text properties
text.color: black
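The notebooks modified later in this commit apply a style sheet with `plt.style.use('./deeplearning.mplstyle')`; the new plt/style.mplstyle can be loaded the same way. A minimal sketch, assuming the working directory is the repository root:

import matplotlib.pyplot as plt

# Apply the custom style committed above; the relative path is resolved
# against the current working directory.
plt.style.use('./plt/style.mplstyle')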
@@ -2,3 +2,4 @@ numpy
pandas
pillow
matplotlib
+ipywidgets
@@ -9,7 +9,8 @@
    "<figure>\n",
    " <img src=\"./images/C1_W1_L3_S1_Lecture_b.png\"   style=\"width:600px;height:200px;\">\n",
    "</figure>"
-   ]
+   ],
+   "id": "d5947faf528d757"
  },
  {
   "cell_type": "markdown",
@@ -18,7 +19,8 @@
    "## Goals\n",
    "In this lab you will:\n",
    "- Learn to implement the model $f_{w,b}$ for linear regression with one variable"
-   ]
+   ],
+   "id": "428fc5f273aad770"
  },
  {
   "cell_type": "markdown",
@@ -39,7 +41,8 @@
    "|  $w$  |  parameter: weight                                 | `w`    |\n",
    "|  $b$           |  parameter: bias                                           | `b`    |     \n",
    "| $f_{w,b}(x^{(i)})$ | The result of the model evaluation at $x^{(i)}$ parameterized by $w,b$: $f_{w,b}(x^{(i)}) = wx^{(i)}+b$  | `f_wb` | \n"
-   ]
+   ],
+   "id": "7b23d466e8a13579"
  },
  {
   "cell_type": "markdown",
@@ -49,7 +52,8 @@
    "In this lab you will make use of: \n",
    "- NumPy, a popular library for scientific computing\n",
    "- Matplotlib, a popular library for plotting data"
-   ]
+   ],
+   "id": "d6756737db18e1c"
  },
  {
   "cell_type": "code",
@@ -60,7 +64,8 @@
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "plt.style.use('./deeplearning.mplstyle')"
-   ]
+   ],
+   "id": "792fae4c5017098a"
  },
  {
   "cell_type": "markdown",
@@ -78,14 +83,16 @@
    "| 2.0               | 500                      |\n",
    "\n",
    "You would like to fit a linear regression model (shown above as the blue straight line) through these two points, so you can then predict price for other houses - say, a house with 1200 sqft.\n"
-   ]
+   ],
+   "id": "f8e92dd846374938"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Please run the following code cell to create your `x_train` and `y_train` variables. The data is stored in one-dimensional NumPy arrays."
-   ]
+   ],
+   "id": "d877986e09640e0f"
  },
  {
   "cell_type": "code",
@@ -108,14 +115,16 @@
    "y_train = np.array([300.0, 500.0])\n",
    "print(f\"x_train = {x_train}\")\n",
    "print(f\"y_train = {y_train}\")"
-   ]
+   ],
+   "id": "5306a3aeca0c549a"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ">**Note**: The course will frequently utilize the python 'f-string' output formatting described [here](https://docs.python.org/3/tutorial/inputoutput.html) when printing. The content between the curly braces is evaluated when producing the output."
-   ]
+   ],
+   "id": "1f2af3b208024cc1"
  },
  {
   "cell_type": "markdown",
@@ -123,7 +132,8 @@
   "source": [
    "### Number of training examples `m`\n",
    "You will use `m` to denote the number of training examples. Numpy arrays have a `.shape` parameter. `x_train.shape` returns a python tuple with an entry for each dimension. `x_train.shape[0]` is the length of the array and number of examples as shown below."
-   ]
+   ],
+   "id": "8e8c52a75e71157d"
  },
  {
   "cell_type": "code",
@@ -144,14 +154,16 @@
    "print(f\"x_train.shape: {x_train.shape}\")\n",
    "m = x_train.shape[0]\n",
    "print(f\"Number of training examples is: {m}\")"
-   ]
+   ],
+   "id": "eca4f6257ac4de85"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One can also use the Python `len()` function as shown below."
-   ]
+   ],
+   "id": "a0f74fc14328cbb1"
  },
  {
   "cell_type": "code",
@@ -170,7 +182,8 @@
    "# m is the number of training examples\n",
    "m = len(x_train)\n",
    "print(f\"Number of training examples is: {m}\")"
-   ]
+   ],
+   "id": "314bdb01cd4b140b"
  },
  {
   "cell_type": "markdown",
@@ -182,7 +195,8 @@
    "\n",
    "To access a value in a Numpy array, one indexes the array with the desired offset. For example the syntax to access location zero of `x_train` is `x_train[0]`.\n",
    "Run the next code block below to get the $i^{th}$ training example."
-   ]
+   ],
+   "id": "7954c480ba221f81"
  },
  {
   "cell_type": "code",
@@ -203,14 +217,16 @@
    "x_i = x_train[i]\n",
    "y_i = y_train[i]\n",
    "print(f\"(x^({i}), y^({i})) = ({x_i}, {y_i})\")"
-   ]
+   ],
+   "id": "b6a441b1c3984a36"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Plotting the data"
-   ]
+   ],
+   "id": "34065d7bc1038bf0"
  },
  {
   "cell_type": "markdown",
@@ -220,7 +236,8 @@
    "- The function arguments `marker` and `c` show the points as red crosses (the default is blue dots).\n",
    "\n",
    "You can use other functions in the `matplotlib` library to set the title and labels to display"
-   ]
+   ],
+   "id": "e89956448ccd316d"
  },
  {
   "cell_type": "code",
@@ -248,7 +265,8 @@
    "# Set the x-axis label\n",
    "plt.xlabel('Size (1000 sqft)')\n",
    "plt.show()"
-   ]
+   ],
+   "id": "c1b08142e0243c78"
  },
  {
   "cell_type": "markdown",
@@ -265,7 +283,8 @@
    "Let's try to get a better intuition for this through the code blocks below. Let's start with $w = 100$ and $b = 100$. \n",
    "\n",
    "**Note: You can come back to this cell to adjust the model's w and b parameters**"
-   ]
+   ],
+   "id": "3d4f63fe1b74df05"
  },
  {
   "cell_type": "code",
@@ -286,7 +305,8 @@
    "b = 100\n",
    "print(f\"w: {w}\")\n",
    "print(f\"b: {b}\")"
-   ]
+   ],
+   "id": "6ef1590ead23e422"
  },
  {
   "cell_type": "markdown",
@@ -301,7 +321,8 @@
    "For a large number of data points, this can get unwieldy and repetitive. So instead, you can calculate the function output in a `for` loop as shown in the `compute_model_output` function below.\n",
    "> **Note**: The argument description `(ndarray (m,))` describes a Numpy n-dimensional array of shape (m,). `(scalar)` describes an argument without dimensions, just a magnitude.  \n",
    "> **Note**: `np.zero(n)` will return a one-dimensional numpy array with $n$ entries   \n"
-   ]
+   ],
+   "id": "2c9b74a9dafeb729"
  },
  {
   "cell_type": "code",
@@ -324,14 +345,16 @@
    "        f_wb[i] = w * x[i] + b\n",
    "        \n",
    "    return f_wb"
-   ]
+   ],
+   "id": "b6247cda89575683"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's call the `compute_model_output` function and plot the output.."
-   ]
+   ],
+   "id": "ab0afd05a817d94f"
  },
  {
   "cell_type": "code",
@@ -366,7 +389,8 @@
    "plt.xlabel('Size (1000 sqft)')\n",
    "plt.legend()\n",
    "plt.show()"
-   ]
+   ],
+   "id": "95d34926475c6344"
  },
  {
   "cell_type": "markdown",
@@ -379,7 +403,8 @@
    "\n",
    "#### Tip:\n",
    "You can use your mouse to click on the green \"Hints\" below to reveal some hints for choosing b and w."
-   ]
+   ],
+   "id": "a1b84ae24243cb1b"
  },
  {
   "cell_type": "markdown",
@@ -394,7 +419,8 @@
    "        <li>Try $w = 200$ and $b = 100$ </li>\n",
    "    </ul>\n",
    "    </p>"
-   ]
+   ],
+   "id": "ee76a723f8f5dbd5"
  },
  {
   "cell_type": "markdown",
@@ -402,7 +428,8 @@
   "source": [
    "### Prediction\n",
    "Now that we have a model, we can use it to make our original prediction. Let's predict the price of a house with 1200 sqft. Since the units of $x$ are in 1000's of sqft, $x$ is 1.2.\n"
-   ]
+   ],
+   "id": "7f423cd19a7ba591"
  },
  {
   "cell_type": "code",
@@ -424,7 +451,8 @@
    "cost_1200sqft = w * x_i + b    \n",
    "\n",
    "print(f\"${cost_1200sqft:.0f} thousand dollars\")"
-   ]
+   ],
+   "id": "9cdc794cbcf34c22"
  },
  {
   "cell_type": "markdown",
@@ -436,14 +464,16 @@
    "     - In the example above, the feature was house size and the target was house price\n",
    "     - for simple linear regression, the model has two parameters $w$ and $b$ whose values are 'fit' using *training data*.\n",
    "     - once a model's parameters have been determined, the model can be used to make predictions on novel data."
-   ]
+   ],
+   "id": "4c8ad73f0d6f18f2"
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
-   "source": []
+   "source": [],
+   "id": "b3eb2771a91f081b"
  }
 ],
 "metadata": {
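The hunks above show the lab's `compute_model_output` only as fragments (`f_wb[i] = w * x[i] + b` ... `return f_wb`), together with the note that `np.zeros(n)` (spelled `np.zero` in the cell text) allocates the output array. A sketch of the function those fragments imply — the diff does not show the full committed cell:

import numpy as np

def compute_model_output(x, w, b):
    # Evaluate f_wb[i] = w * x[i] + b for each of the m examples,
    # matching the fragments visible in the hunks above.
    m = x.shape[0]
    f_wb = np.zeros(m)
    for i in range(m):
        f_wb[i] = w * x[i] + b
    return f_wb

# e.g. compute_model_output(np.array([1.0, 2.0]), 200, 100) -> array([300., 500.]),
# which reproduces the training targets for the hinted w = 200, b = 100.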
@@ -6,11 +6,11 @@
   "source": [
    "# Optional  Lab: Cost Function \n",
    "<figure>\n",
-    "    <center> <img src=\"./images/C1_W1_L3_S2_Lecture_b.png\"  style=\"width:1000px;height:200px;\" ></center>\n",
+    "    <center> <img src=\"https://i.wolves.top/picgo/202408030514986.png\"  style=\"width:1000px;height:200px;\" ></center>\n",
    "</figure>\n",
    "\n"
   ],
-   "id": "39510563895119eb"
+   "id": "2eb3fe737905ed4d"
  },
  {
   "cell_type": "markdown",
@@ -20,7 +20,7 @@
    "In this lab you will:\n",
    "- you will implement and explore the `cost` function for linear regression with one variable. \n"
   ],
-   "id": "a160c6cc32746122"
+   "id": "1fcf9bd45e9feda9"
  },
  {
   "cell_type": "markdown",
@@ -32,7 +32,7 @@
    "- Matplotlib, a popular library for plotting data\n",
    "- local plotting routines in the lab_utils_uni.py file in the local directory"
   ],
-   "id": "ab2b5e324c46a0ea"
+   "id": "7d157dcf2c8adeb1"
  },
  {
   "cell_type": "code",
@@ -46,7 +46,7 @@
    "from lab_utils_uni import plt_intuition, plt_stationary, plt_update_onclick, soup_bowl\n",
    "plt.style.use('./deeplearning.mplstyle')"
   ],
-   "id": "f025d8d8eea42714"
+   "id": "520a04274f8b8984"
  },
  {
   "cell_type": "markdown",
@@ -63,7 +63,7 @@
    "| 1                 | 300                      |\n",
    "| 2                  | 500                      |\n"
   ],
-   "id": "a0b38aaf53fecdb4"
+   "id": "8a119e41544f0e89"
  },
  {
   "cell_type": "code",
@@ -74,7 +74,7 @@
    "x_train = np.array([1.0, 2.0])           #(size in 1000 square feet)\n",
    "y_train = np.array([300.0, 500.0])           #(price in 1000s of dollars)"
   ],
-   "id": "4a5bab31ff88bda5"
+   "id": "f2abb1684508d059"
  },
  {
   "cell_type": "markdown",
@@ -94,7 +94,7 @@
    "- These differences are summed over all the $m$ examples and divided by `2m` to produce the cost, $J(w,b)$.  \n",
    ">Note, in lecture summation ranges are typically from 1 to m, while code will be from 0 to m-1.\n"
   ],
-   "id": "2fde323fe15bd227"
+   "id": "aa7329fc4c914325"
  },
  {
   "cell_type": "markdown",
@@ -105,7 +105,7 @@
    "- the difference between the target and the prediction is calculated and squared.\n",
    "- this is added to the total cost."
   ],
-   "id": "23fa56cab00e1046"
+   "id": "d3a36b72897daea3"
  },
  {
   "cell_type": "code",
@@ -138,7 +138,7 @@
    "\n",
    "    return total_cost"
   ],
-   "id": "1166da6c92b7cac0"
+   "id": "2c1741a176659c0a"
  },
  {
   "cell_type": "markdown",
@@ -146,13 +146,13 @@
   "source": [
    "## Cost Function Intuition"
   ],
-   "id": "efaf37cff0b92f69"
+   "id": "949501e9bdaccf82"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "<img align=\"left\" src=\"./images/C1_W1_Lab02_GoalOfRegression.PNG\"    style=\" width:380px; padding: 10px;  \" /> Your goal is to find a model $f_{w,b}(x) = wx + b$, with parameters $w,b$,  which will accurately predict house values given an input $x$. The cost is a measure of how accurate the model is on the training data.\n",
+    "<img align=\"left\" src=\"https://i.wolves.top/picgo/202408030515720.png\"    style=\" width:380px; padding: 10px;  \" /> Your goal is to find a model $f_{w,b}(x) = wx + b$, with parameters $w,b$,  which will accurately predict house values given an input $x$. The cost is a measure of how accurate the model is on the training data.\n",
    "\n",
    "The cost equation (1) above shows that if $w$ and $b$ can be selected such that the predictions $f_{w,b}(x)$ match the target data $y$, the $(f_{w,b}(x^{(i)}) - y^{(i)})^2 $ term will be zero and the cost minimized. In this simple two point example, you can achieve this!\n",
    "\n",
@@ -161,7 +161,7 @@
    "<br/>\n",
    "Below, use the slider control to select the value of $w$ that minimizes cost. It can take a few seconds for the plot to update."
   ],
-   "id": "a4c0cec8b6d37318"
+   "id": "44df7c59f9760dc9"
  },
  {
   "cell_type": "code",
@@ -186,7 +186,7 @@
   "source": [
    "plt_intuition(x_train,y_train)"
   ],
-   "id": "7cd5334a6770f526"
+   "id": "22cd3ef9d4ec0ad6"
  },
  {
   "cell_type": "markdown",
@@ -197,7 +197,7 @@
    "- Because the difference between the target and pediction is squared in the cost equation, the cost increases rapidly when $w$ is either too large or too small.\n",
    "- Using the `w` and `b` selected by minimizing cost results in a line which is a perfect fit to the data."
   ],
-   "id": "30972f54ab8f1d94"
+   "id": "375ff8495648a3fc"
  },
  {
   "cell_type": "markdown",
@@ -208,7 +208,7 @@
    "You can see how cost varies with respect to *both* `w` and `b` by plotting in 3D or using a contour plot.   \n",
    "It is worth noting that some of the plotting in this course can become quite involved. The plotting routines are provided and while it can be instructive to read through the code to become familiar with the methods, it is not needed to complete the course successfully. The routines are in lab_utils_uni.py in the local directory."
   ],
-   "id": "6c6395d1ef2abecf"
+   "id": "11ced4ee637d0bc2"
  },
  {
   "cell_type": "markdown",
@@ -217,7 +217,7 @@
    "### Larger Data Set\n",
    "It is instructive to view a scenario with a few more data points. This data set includes data points that do not fall on the same line. What does that mean for the cost equation? Can we find $w$, and $b$ that will give us a cost of 0? "
   ],
-   "id": "e8329d184d8344fc"
+   "id": "965277e5ec9d8183"
  },
  {
   "cell_type": "code",
@@ -228,7 +228,7 @@
    "x_train = np.array([1.0, 1.7, 2.0, 2.5, 3.0, 3.2])\n",
    "y_train = np.array([250, 300, 480,  430,   630, 730,])"
   ],
-   "id": "b0c0aed371b5ca39"
+   "id": "e7497ddb4e257caf"
  },
  {
   "cell_type": "markdown",
@@ -236,7 +236,7 @@
   "source": [
    "In the contour plot, click on a point to select `w` and `b` to achieve the lowest cost. Use the contours to guide your selections. Note, it can take a few seconds to update the graph. "
   ],
-   "id": "2bc747abba448d65"
+   "id": "7d477a675cb9fca2"
  },
  {
   "cell_type": "code",
@@ -263,7 +263,7 @@
    "fig, ax, dyn_items = plt_stationary(x_train, y_train)\n",
    "updater = plt_update_onclick(fig, ax, x_train, y_train, dyn_items)"
   ],
-   "id": "9159d3905b017257"
+   "id": "d93ed50102bca74e"
  },
  {
   "cell_type": "markdown",
@@ -271,7 +271,7 @@
   "source": [
    "Above, note the dashed lines in the left plot. These represent the portion of the cost contributed by each example in your training set. In this case, values of approximately $w=209$ and $b=2.4$ provide low cost. Note that, because our training examples are not on a line, the minimum cost is not zero."
   ],
-   "id": "5f4c1ac7bb9a78d"
+   "id": "27a1d1e4a0575e15"
  },
  {
   "cell_type": "markdown",
@@ -280,7 +280,7 @@
    "### Convex Cost surface\n",
    "The fact that the cost function squares the loss ensures that the 'error surface' is convex like a soup bowl. It will always have a minimum that can be reached by following the gradient in all dimensions. In the previous plot, because the $w$ and $b$ dimensions scale differently, this is not easy to recognize. The following plot, where $w$ and $b$ are symmetric, was shown in lecture:"
   ],
-   "id": "763c2e6ab47d9587"
+   "id": "3e066cb5f7dfd29c"
  },
  {
   "cell_type": "code",
@@ -305,7 +305,7 @@
   "source": [
    "soup_bowl()"
   ],
-   "id": "13af7fc972d655b3"
+   "id": "f9545acc0a1322e0"
  },
  {
   "cell_type": "markdown",
@@ -316,7 +316,7 @@
    " - The cost equation provides a measure of how well your predictions match your training data.\n",
    " - Minimizing the cost can provide optimal values of $w$, $b$."
   ],
-   "id": "850e586e9601ef7e"
+   "id": "653d254837760414"
  },
  {
   "cell_type": "code",
@@ -324,7 +324,7 @@
   "metadata": {},
   "outputs": [],
   "source": [],
-   "id": "67ad1bd9bfe55fd"
+   "id": "cc3b485e8def457"
  }
 ],
 "metadata": {
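`compute_cost` is likewise visible only down to its `return total_cost`, but the surrounding text fixes the computation: square the difference between each prediction and target, sum over the $m$ examples, and divide by `2m`. A sketch consistent with that description and with the cost equation quoted above — not the committed cell itself:

import numpy as np

def compute_cost(x, y, w, b):
    # J(w,b) = (1 / 2m) * sum over i of (w * x[i] + b - y[i])**2,
    # per the description in the hunks above; x, y are 1-D numpy arrays.
    m = x.shape[0]
    total_cost = 0.0
    for i in range(m):
        f_wb = w * x[i] + b
        total_cost += (f_wb - y[i]) ** 2
    return total_cost / (2 * m)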
@@ -7,10 +7,10 @@
    "# Optional Lab: Gradient Descent for Linear Regression\n",
    "\n",
    "<figure>\n",
-    "    <center> <img src=\"./images/C1_W1_L4_S1_Lecture_GD.png\"  style=\"width:800px;height:200px;\" ></center>\n",
+    "    <center> <img src=\"https://i.wolves.top/picgo/202408041059271.png\"  style=\"width:800px;height:200px;\" ></center>\n",
    "</figure>"
   ],
-   "id": "51d539b8933c08f0"
+   "id": "64d8b1775dd1fa3a"
  },
  {
   "cell_type": "markdown",
@@ -20,7 +20,7 @@
    "In this lab, you will:\n",
    "- automate the process of optimizing $w$ and $b$ using gradient descent."
   ],
-   "id": "3c5af2ac34433447"
+   "id": "99b50273cf2c810a"
  },
  {
   "cell_type": "markdown",
@@ -32,7 +32,7 @@
    "- Matplotlib, a popular library for plotting data\n",
    "- plotting routines in the lab_utils.py file in the local directory"
   ],
-   "id": "7a45aef1427d3296"
+   "id": "4502238878e707b7"
  },
  {
   "cell_type": "code",
@@ -43,10 +43,10 @@
    "import math, copy\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
-    "plt.style.use('./deeplearning.mplstyle')\n",
+    "plt.style.use('deeplearning.mplstyle')\n",
    "from lab_utils_uni import plt_house_x, plt_contour_wgrad, plt_divergence, plt_gradients"
   ],
-   "id": "a1fce6f64e2718a1"
+   "id": "3e02dd559bcc5bce"
  },
  {
   "cell_type": "markdown",
@@ -62,7 +62,7 @@
    "| 1               | 300                      |\n",
    "| 2               | 500                      |\n"
   ],
-   "id": "1c0984e19ab9e6b4"
+   "id": "a52a5b79363891a4"
  },
  {
   "cell_type": "code",
@@ -74,7 +74,7 @@
    "x_train = np.array([1.0, 2.0])   #features\n",
    "y_train = np.array([300.0, 500.0])   #target value"
   ],
-   "id": "bf19c8fde61dfcfd"
+   "id": "5f14f59dcc4f7e1e"
  },
  {
   "cell_type": "markdown",
@@ -84,7 +84,7 @@
    "### Compute_Cost\n",
    "This was developed in the last lab. We'll need it again here."
   ],
-   "id": "b1780bff7ac8aa73"
+   "id": "7057888cd71b77c1"
  },
  {
   "cell_type": "code",
@@ -105,7 +105,7 @@
    "\n",
    "    return total_cost"
   ],
-   "id": "fac0212ae8afd849"
+   "id": "52ab101b90f4d2c3"
  },
  {
   "cell_type": "markdown",
@@ -118,7 +118,7 @@
    "In linear regression, you utilize input training data to fit the parameters $w$,$b$ by minimizing a measure of the error between our predictions $f_{w,b}(x^{(i)})$ and the actual data $y^{(i)}$. The measure is called the $cost$, $J(w,b)$. In training you measure the cost over all of our training samples $x^{(i)},y^{(i)}$\n",
    "$$J(w,b) = \\frac{1}{2m} \\sum\\limits_{i = 0}^{m-1} (f_{w,b}(x^{(i)}) - y^{(i)})^2\\tag{2}$$ "
   ],
-   "id": "ae55d8d0930866a3"
+   "id": "71fc10831d272645"
  },
  {
   "cell_type": "markdown",
@@ -142,7 +142,7 @@
    "\n",
    "Here *simultaniously* means that you calculate the partial derivatives for all the parameters before updating any of the parameters."
   ],
-   "id": "760cfdd6f3f8c687"
+   "id": "60baf048109b0868"
  },
  {
   "cell_type": "markdown",
@@ -159,7 +159,7 @@
    "- The naming of python variables containing partial derivatives follows this pattern,$\\frac{\\partial J(w,b)}{\\partial b}$  will be `dj_db`.\n",
    "- w.r.t is With Respect To, as in partial derivative of $J(wb)$ With Respect To $b$.\n"
   ],
-   "id": "e3d4ec60b7ca63f6"
+   "id": "a3d60bc28ec4ab6c"
  },
  {
   "cell_type": "markdown",
@@ -170,7 +170,7 @@
    "<a name='ex-01'></a>\n",
    "`compute_gradient`  implements (4) and (5) above and returns $\\frac{\\partial J(w,b)}{\\partial w}$,$\\frac{\\partial J(w,b)}{\\partial b}$. The embedded comments describe the operations."
   ],
-   "id": "d2b095ba2d1b3f1a"
+   "id": "f775b4e5b10ecad4"
  },
  {
   "cell_type": "code",
@@ -206,7 +206,7 @@
    "        \n",
    "    return dj_dw, dj_db"
   ],
-   "id": "c60c94ca2ba78bea"
+   "id": "ee90c07f78acea2"
  },
  {
   "cell_type": "markdown",
@@ -214,7 +214,7 @@
   "source": [
    "<br/>"
   ],
-   "id": "4e7b10ba866408c9"
+   "id": "19087465bb1bb266"
  },
  {
   "cell_type": "markdown",
@@ -223,7 +223,7 @@
    "<img align=\"left\" src=\"./images/C1_W1_Lab03_lecture_slopes.PNG\"   style=\"width:340px;\" > The lectures described how gradient descent utilizes the partial derivative of the cost with respect to a parameter at a point to update that parameter.   \n",
    "Let's use our `compute_gradient` function to find and plot some partial derivatives of our cost function relative to one of the parameters, $w_0$.\n"
   ],
-   "id": "6a7ec85f8dc84e49"
+   "id": "d7ec2e7ed2eda8c"
  },
  {
   "cell_type": "code",
@@ -245,7 +245,7 @@
    "plt_gradients(x_train,y_train, compute_cost, compute_gradient)\n",
    "plt.show()"
   ],
-   "id": "92ccd2b1189c2e4f"
+   "id": "b6ce17e4e7a6d49c"
  },
  {
   "cell_type": "markdown",
@@ -256,7 +256,7 @@
    "The left plot has fixed $b=100$. Gradient descent will utilize both $\\frac{\\partial J(w,b)}{\\partial w}$ and $\\frac{\\partial J(w,b)}{\\partial b}$ to update parameters. The 'quiver plot' on the right provides a means of viewing the gradient of both parameters. The arrow sizes reflect the magnitude of the gradient at that point. The direction and slope of the arrow reflects the ratio of $\\frac{\\partial J(w,b)}{\\partial w}$ and $\\frac{\\partial J(w,b)}{\\partial b}$ at that point.\n",
    "Note that the gradient points *away* from the minimum. Review equation (3) above. The scaled gradient is *subtracted* from the current value of $w$ or $b$. This moves the parameter in a direction that will reduce cost."
   ],
-   "id": "e2c8ab9d96d48ec0"
+   "id": "b0e2daeddf6ed7cc"
  },
  {
   "cell_type": "markdown",
@@ -266,7 +266,7 @@
    "###  Gradient Descent\n",
    "Now that gradients can be computed,  gradient descent, described in equation (3) above can be implemented below in `gradient_descent`. The details of the implementation are described in the comments. Below, you will utilize this function to find optimal values of $w$ and $b$ on the training data."
   ],
-   "id": "9a8470ca87962ea"
+   "id": "28097a7c403638d0"
  },
  {
   "cell_type": "code",
@@ -321,7 +321,7 @@
    " \n",
    "    return w, b, J_history, p_history #return w and J,w history for graphing"
   ],
-   "id": "289c570d8e6d0b9f"
+   "id": "596f5f893cabdacb"
  },
  {
   "cell_type": "code",
@@ -358,20 +358,20 @@
    "                                                    iterations, compute_cost, compute_gradient)\n",
    "print(f\"(w,b) found by gradient descent: ({w_final:8.4f},{b_final:8.4f})\")"
   ],
-   "id": "b78236dad287b179"
+   "id": "bff23449aa3de468"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "<img align=\"left\" src=\"./images/C1_W1_Lab03_lecture_learningrate.PNG\"  style=\"width:340px; padding: 15px; \" > \n",
+    "<img align=\"left\" src=\"https://i.wolves.top/picgo/202408041100217.png\"  style=\"width:340px; padding: 15px; \" > \n",
    "Take a moment and note some characteristics of the gradient descent process printed above.  \n",
    "\n",
    "- The cost starts large and rapidly declines as described in the slide from the lecture.\n",
    "- The partial derivatives, `dj_dw`, and `dj_db` also get smaller, rapidly at first and then more slowly. As shown in the diagram from the lecture, as the process nears the 'bottom of the bowl' progress is slower due to the smaller value of the derivative at that point.\n",
    "- progress slows though the learning rate, alpha, remains fixed"
   ],
-   "id": "ac49cab25475792b"
+   "id": "9df6b2fd3473f9f8"
  },
  {
   "cell_type": "markdown",
@@ -380,7 +380,7 @@
    "### Cost versus iterations of gradient descent \n",
    "A plot of cost versus iterations is a useful measure of progress in gradient descent. Cost should always decrease in successful runs. The change in cost is so rapid initially, it is useful to plot the initial decent on a different scale than the final descent. In the plots below, note the scale of cost on the axes and the iteration step."
   ],
-   "id": "84928145fa5d834e"
+   "id": "4dcde8c8bc9c91c7"
  },
  {
   "cell_type": "code",
@@ -408,7 +408,7 @@
    "ax1.set_xlabel('iteration step')  ;  ax2.set_xlabel('iteration step') \n",
    "plt.show()"
   ],
-   "id": "3fb63a2e448607fa"
+   "id": "9ab3e30b473e57d6"
  },
  {
   "cell_type": "markdown",
@@ -417,7 +417,7 @@
    "### Predictions\n",
    "Now that you have discovered the optimal values for the parameters $w$ and $b$, you can now use the model to predict housing values based on our learned parameters. As expected, the predicted values are nearly the same as the training values for the same housing. Further, the value not in the prediction is in line with the expected value."
   ],
-   "id": "877b4268d61cbd1e"
+   "id": "e557c848a8191c07"
  },
  {
   "cell_type": "code",
@@ -439,7 +439,7 @@
    "print(f\"1200 sqft house prediction {w_final*1.2 + b_final:0.1f} Thousand dollars\")\n",
    "print(f\"2000 sqft house prediction {w_final*2.0 + b_final:0.1f} Thousand dollars\")"
   ],
-   "id": "e1840d7ee6522d2b"
+   "id": "f492269c6a06d60e"
  },
  {
   "cell_type": "markdown",
@@ -449,7 +449,7 @@
    "## Plotting\n",
    "You can show the progress of gradient descent during its execution by plotting the cost over iterations on a contour plot of the cost(w,b). "
   ],
-   "id": "622ef2db1d43a509"
+   "id": "d2ecb7dc0ac32a6f"
  },
  {
   "cell_type": "code",
@@ -471,7 +471,7 @@
    "fig, ax = plt.subplots(1,1, figsize=(12, 6))\n",
    "plt_contour_wgrad(x_train, y_train, p_hist, ax)"
   ],
-   "id": "9338e55a8dd87ceb"
+   "id": "f758c238eb415c01"
  },
  {
   "cell_type": "markdown",
@@ -481,7 +481,7 @@
    "- The path makes steady (monotonic) progress toward its goal.\n",
    "- initial steps are much larger than the steps near the goal."
   ],
-   "id": "31e1cbcb0421b6e7"
+   "id": "d4fa08759d4c0681"
  },
  {
   "cell_type": "markdown",
@@ -489,7 +489,7 @@
   "source": [
    "**Zooming in**, we can see that final steps of gradient descent. Note the distance between steps shrinks as the gradient approaches zero."
   ],
-   "id": "90e4ccbd6599f90a"
+   "id": "e763eb94c55dd87"
  },
  {
   "cell_type": "code",
@@ -512,7 +512,7 @@
    "plt_contour_wgrad(x_train, y_train, p_hist, ax, w_range=[180, 220, 0.5], b_range=[80, 120, 0.5],\n",
    "            contours=[1,5,10,20],resolution=0.5)"
   ],
-   "id": "871c3cbed53e2e1d"
+   "id": "c5a157bda2c18e2c"
  },
  {
   "cell_type": "markdown",
@@ -522,13 +522,13 @@
    "### Increased Learning Rate\n",
    "\n",
    "<figure>\n",
-    " <img align=\"left\", src=\"./images/C1_W1_Lab03_alpha_too_big.PNG\"   style=\"width:340px;height:240px;\" >\n",
+    " <img align=\"left\", src=\"https://i.wolves.top/picgo/202408041100502.png\"   style=\"width:340px;height:240px;\" >\n",
    "</figure>\n",
    "In the lecture, there was a discussion related to the proper value of the learning rate, $\\alpha$ in equation(3). The larger $\\alpha$ is, the faster gradient descent will converge to a solution. But, if it is too large, gradient descent will diverge. Above you have an example of a solution which converges nicely.\n",
    "\n",
    "Let's try increasing the value of  $\\alpha$ and see what happens:"
   ],
-   "id": "fe6cbc15bef486f6"
+   "id": "f24d64e2c8fee91c"
  },
  {
   "cell_type": "code",
@@ -563,7 +563,7 @@
    "w_final, b_final, J_hist, p_hist = gradient_descent(x_train ,y_train, w_init, b_init, tmp_alpha, \n",
    "                                                    iterations, compute_cost, compute_gradient)"
   ],
-   "id": "904051e5b17f7611"
+   "id": "b8c0fc11602dd2b7"
  },
  {
   "cell_type": "markdown",
@@ -572,7 +572,7 @@
    "Above, $w$ and $b$ are bouncing back and forth between positive and negative with the absolute value increasing with each iteration. Further, each iteration $\\frac{\\partial J(w,b)}{\\partial w}$ changes sign and cost is increasing rather than decreasing. This is a clear sign that the *learning rate is too large* and the solution is diverging. \n",
    "Let's visualize this with a plot."
   ],
-   "id": "3e128d8848ed786d"
+   "id": "8b57dc7379bdedc"
  },
  {
   "cell_type": "code",
@@ -594,7 +594,7 @@
    "plt_divergence(p_hist, J_hist,x_train, y_train)\n",
    "plt.show()"
   ],
-   "id": "525859aea07d903"
+   "id": "624c656455504a91"
  },
  {
   "cell_type": "markdown",
@@ -602,7 +602,7 @@
   "source": [
    "Above, the left graph shows $w$'s progression over the first few steps of gradient descent. $w$ oscillates from positive to negative and cost grows rapidly. Gradient Descent is operating on both $w$ and $b$ simultaneously, so one needs the 3-D plot on the right for the complete picture."
   ],
-   "id": "f188bcb1dbebb9d"
+   "id": "aaeedde0bcfa1065"
  },
  {
   "cell_type": "markdown",
@@ -618,15 +618,7 @@
    "- utilized gradient descent to find parameters\n",
    "- examined the impact of sizing the learning rate"
   ],
-   "id": "1d1832515c0982de"
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [],
-   "id": "2b544fe91032e577"
+   "id": "5eff9781d2060439"
  }
 ],
 "metadata": {
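`compute_gradient` appears above only as `... return dj_dw, dj_db`, while the text states it returns $\frac{\partial J(w,b)}{\partial w}$ and $\frac{\partial J(w,b)}{\partial b}$ for the cost $J(w,b)$ of equation (2). Differentiating that cost gives the averages below; a sketch of what such a routine computes, not the committed cell:

def compute_gradient(x, y, w, b):
    # For J(w,b) = (1/2m) * sum((w*x[i] + b - y[i])**2):
    #   dj_dw = (1/m) * sum((w*x[i] + b - y[i]) * x[i])
    #   dj_db = (1/m) * sum( w*x[i] + b - y[i])
    # x, y are 1-D numpy arrays of m examples.
    m = x.shape[0]
    dj_dw = 0.0
    dj_db = 0.0
    for i in range(m):
        err = (w * x[i] + b) - y[i]
        dj_dw += err * x[i]
        dj_db += err
    return dj_dw / m, dj_db / m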
@@ -9,7 +9,8 @@
    "<figure>\n",
    " <img src=\"./images/C1_W1_L3_S1_Lecture_b.png\"   style=\"width:600px;height:200px;\">\n",
    "</figure>"
-   ]
+   ],
+   "id": "7fae6127b91f846d"
  },
  {
   "cell_type": "markdown",
@@ -18,7 +19,8 @@
    "## Goals\n",
    "In this lab you will:\n",
    "- Learn to implement the model $f_{w,b}$ for linear regression with one variable"
-   ]
+   ],
+   "id": "558caa26f894e501"
  },
  {
   "cell_type": "markdown",
@@ -39,7 +41,8 @@
    "|  $w$  |  parameter: weight,                                 | `w`    |\n",
    "|  $b$           |  parameter: bias                                           | `b`    |     \n",
    "| $f_{w,b}(x^{(i)})$ | The result of the model evaluation at $x^{(i)}$ parameterized by $w,b$: $f_{w,b}(x^{(i)}) = wx^{(i)}+b$  | `f_wb` | \n"
-   ]
+   ],
+   "id": "387f93949917f1d2"
  },
  {
   "cell_type": "markdown",
@@ -49,7 +52,8 @@
    "In this lab you will make use of: \n",
    "- NumPy, a popular library for scientific computing\n",
    "- Matplotlib, a popular library for plotting data"
-   ]
+   ],
+   "id": "26fe2d14cedb9a1a"
  },
  {
   "cell_type": "code",
@@ -60,7 +64,8 @@
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "plt.style.use('./deeplearning.mplstyle')"
-   ]
+   ],
+   "id": "e7a25e396c4b3d2d"
  },
  {
   "cell_type": "markdown",
@@ -78,14 +83,16 @@
    "| 2.0               | 500                      |\n",
    "\n",
    "You would like to fit a linear regression model (shown above as the blue straight line) through these two points, so you can then predict price for other houses - say, a house with 1200 sqft.\n"
-   ]
+   ],
+   "id": "a9cebb19cb18409e"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Please run the following code cell to create your `x_train` and `y_train` variables. The data is stored in one-dimensional NumPy arrays."
-   ]
+   ],
+   "id": "514bee6257b3de45"
  },
  {
   "cell_type": "code",
@@ -99,14 +106,16 @@
    "y_train = np.array([300.0, 500.0])\n",
    "print(f\"x_train = {x_train}\")\n",
    "print(f\"y_train = {y_train}\")"
-   ]
+   ],
+   "id": "a2b5841f412550ba"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ">**Note**: The course will frequently utilize the python 'f-string' output formatting described [here](https://docs.python.org/3/tutorial/inputoutput.html) when printing. The content between the curly braces is evaluated when producing the output."
-   ]
+   ],
+   "id": "8845118d4044e7df"
  },
  {
   "cell_type": "markdown",
@@ -114,7 +123,8 @@
   "source": [
    "### Number of training examples `m`\n",
    "You will use `m` to denote the number of training examples. Numpy arrays have a `.shape` parameter. `x_train.shape` returns a python tuple with an entry for each dimension. `x_train.shape[0]` is the length of the array and number of examples as shown below."
-   ]
+   ],
+   "id": "9435bc31d71a55a1"
  },
  {
   "cell_type": "code",
@@ -126,14 +136,16 @@
    "print(f\"x_train.shape: {x_train.shape}\")\n",
    "m = x_train.shape[0]\n",
    "print(f\"Number of training examples is: {m}\")"
-   ]
+   ],
+   "id": "3042542073a8dee"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One can also use the Python `len()` function as shown below."
-   ]
+   ],
+   "id": "681e2c43a225a085"
  },
  {
   "cell_type": "code",
@@ -144,7 +156,8 @@
    "# m is the number of training examples\n",
    "m = len(x_train)\n",
    "print(f\"Number of training examples is: {m}\")"
-   ]
+   ],
+   "id": "29b77e357217cb68"
  },
  {
   "cell_type": "markdown",
@@ -156,7 +169,8 @@
    "\n",
    "To access a value in a Numpy array, one indexes the array with the desired offset. For example the syntax to access location zero of `x_train` is `x_train[0]`.\n",
    "Run the next code block below to get the $i^{th}$ training example."
-   ]
+   ],
+   "id": "a09b9410eb2bdb4f"
  },
  {
   "cell_type": "code",
@@ -169,14 +183,16 @@
    "x_i = x_train[i]\n",
    "y_i = y_train[i]\n",
    "print(f\"(x^({i}), y^({i})) = ({x_i}, {y_i})\")"
-   ]
+   ],
+   "id": "6c3c658d4f75db4"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Plotting the data"
-   ]
+   ],
+   "id": "a49af67439b97d82"
  },
  {
   "cell_type": "markdown",
@@ -186,7 +202,8 @@
    "- The function arguments `marker` and `c` show the points as red crosses (the default is blue dots).\n",
    "\n",
    "You can also use other functions in the `matplotlib` library to display the title and labels for the axes."
-   ]
+   ],
+   "id": "b15cbcf6b0ca004c"
  },
  {
   "cell_type": "code",
@@ -203,7 +220,8 @@
    "# Set the x-axis label\n",
    "plt.xlabel('Size (1000 sqft)')\n",
    "plt.show()"
-   ]
+   ],
+   "id": "24298cc22c6eae4b"
  },
  {
   "cell_type": "markdown",
@@ -220,7 +238,8 @@
    "Let's try to get a better intuition for this through the code blocks below. Let's start with $w = 100$ and $b = 100$. \n",
    "\n",
    "**Note: You can come back to this cell to adjust the model's w and b parameters**"
-   ]
+   ],
+   "id": "1650263e8dd4d997"
  },
  {
   "cell_type": "code",
@@ -232,7 +251,8 @@
    "b = 100\n",
    "print(f\"w: {w}\")\n",
    "print(f\"b: {b}\")"
-   ]
+   ],
+   "id": "d14be934509bf334"
  },
  {
   "cell_type": "markdown",
@@ -247,7 +267,8 @@
    "For a large number of data points, this can get unwieldy and repetitive. So instead, you can calculate the function output in a `for` loop as shown in the `compute_model_output` function below.\n",
    "> **Note**: The argument description `(ndarray (m,))` describes a Numpy n-dimensional array of shape (m,). `(scalar)` describes an argument without dimensions, just a magnitude.  \n",
    "> **Note**: `np.zero(n)` will return a one-dimensional numpy array with $n$ entries   \n"
-   ]
+   ],
+   "id": "7d6df6dd84468603"
  },
  {
   "cell_type": "code",
@@ -270,14 +291,16 @@
    "        f_wb[i] = w * x[i] + b\n",
    "        \n",
    "    return f_wb"
-   ]
+   ],
+   "id": "1fcf5af9a7d85129"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's call the `compute_model_output` function and plot the output.."
-   ]
+   ],
+   "id": "47d041104e3c39d6"
  },
  {
   "cell_type": "code",
@@ -301,7 +324,8 @@
    "plt.xlabel('Size (1000 sqft)')\n",
    "plt.legend()\n",
    "plt.show()"
-   ]
+   ],
+   "id": "736ec583c1cf5d67"
  },
  {
   "cell_type": "markdown",
@@ -314,7 +338,8 @@
    "\n",
    "#### Tip:\n",
    "You can use your mouse to click on the triangle to the left of the green \"Hints\" below to reveal some hints for choosing b and w."
-   ]
+   ],
+   "id": "7dec7d8a8efef44"
  },
  {
   "cell_type": "markdown",
@@ -329,7 +354,8 @@
    "        <li>Try $w = 200$ and $b = 100$ </li>\n",
    "    </ul>\n",
    "    </p>"
-   ]
+   ],
+   "id": "5ba15f56d2568e04"
  },
  {
   "cell_type": "markdown",
@@ -337,7 +363,8 @@
   "source": [
    "### Prediction\n",
    "Now that we have a model, we can use it to make our original prediction. Let's predict the price of a house with 1200 sqft. Since the units of $x$ are in 1000's of sqft, $x$ is 1.2.\n"
-   ]
+   ],
+   "id": "2ae9a0f157f44afb"
  },
  {
   "cell_type": "code",
@@ -351,7 +378,8 @@
    "cost_1200sqft = w * x_i + b    \n",
    "\n",
    "print(f\"${cost_1200sqft:.0f} thousand dollars\")"
-   ]
+   ],
+   "id": "7604756c838dc53a"
  },
  {
   "cell_type": "markdown",
@@ -363,14 +391,16 @@
    "     - In the example above, the feature was house size and the target was house price\n",
    "     - for simple linear regression, the model has two parameters $w$ and $b$ whose values are 'fit' using *training data*.\n",
    "     - once a model's parameters have been determined, the model can be used to make predictions on novel data."
-   ]
+   ],
+   "id": "b9411842a79a5c"
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
-   "source": []
+   "source": [],
+   "id": "6854e2f089d03b0c"
  }
 ],
 "metadata": {