[ML Notes] Linear Regression: Gradient Descent

For the linear model $f(\boldsymbol{x})$ and cost function $J(\boldsymbol{w})$ described previously,

$$f(\boldsymbol{x}) = \boldsymbol{w}^\mathrm{T} \boldsymbol{x} \tag{1}$$

$$J(\boldsymbol{w}) = \frac{1}{2} \sum_{i=1}^m \big( y^{(i)} - f(\boldsymbol{x}^{(i)}) \big)^2 \tag{2}$$

minimizing $J(\boldsymbol{w})$ with gradient descent is an alternative way to compute $\boldsymbol{w}$. Here each $w_j$ is updated iteratively, where $\alpha$ is the learning rate:

$$w_j := w_j - \alpha \frac{\partial}{\partial w_j} J(\boldsymbol{w}) \tag{3}$$
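The update rule $(3)$ can be sketched on a toy one-dimensional objective; the objective $J(w) = (w - 3)^2$ and the learning rate below are illustrative assumptions, not part of the model above:

```python
# Minimal sketch of update rule (3) on a toy objective.
# J(w) = (w - 3)^2, so dJ/dw = 2 * (w - 3); the minimum is at w = 3.
alpha = 0.1          # learning rate (assumed value)
w = 0.0              # initial guess
for _ in range(100):
    grad = 2 * (w - 3)   # dJ/dw at the current w
    w = w - alpha * grad # rule (3)
print(round(w, 4))       # converges toward 3
```

Each step moves $w$ against the gradient; with a small enough $\alpha$ the iterates approach the minimizer.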

From equation $(2)$, for a single training example $(\boldsymbol{x}, y)$ we have

$$\begin{aligned} \frac{\partial}{\partial w_j} J(\boldsymbol{w}) &= \frac{\partial}{\partial w_j} \frac{1}{2} \big(y - f(\boldsymbol{x})\big)^2 \\ &= \big(y - f(\boldsymbol{x})\big) \cdot \frac{\partial}{\partial w_j} \big(y - f(\boldsymbol{x})\big) \\ &= \big(y - f(\boldsymbol{x})\big) \cdot \frac{\partial}{\partial w_j} \bigg(y - \sum_{k=0}^n w_k x_k\bigg) \\ &= \big(f(\boldsymbol{x}) - y\big) x_j \end{aligned} \tag{4}$$

Substituting $(4)$ into $(3)$ gives the update rule for a single example (the stochastic gradient descent, or LMS, update):

$$w_j := w_j - \alpha \big(f(\boldsymbol{x}) - y\big) x_j \tag{5}$$
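A hedged sketch of the single-example update $(5)$; the synthetic data, seed, and learning rate below are assumptions for illustration. The feature vector includes a leading $1$ for the intercept term $x_0$:

```python
import numpy as np

def sgd_step(w, x, y, alpha):
    """One stochastic gradient descent step, rule (5), for linear regression."""
    error = w @ x - y             # f(x) - y
    return w - alpha * error * x  # updates every w_j at once

rng = np.random.default_rng(0)
w = np.zeros(2)
# synthetic noiseless data from the true model y = 1 + 2*x1 (assumed for the demo)
for _ in range(2000):
    x1 = rng.uniform(-1, 1)
    x = np.array([1.0, x1])      # x_0 = 1 absorbs the intercept
    y = 1.0 + 2.0 * x1
    w = sgd_step(w, x, y, alpha=0.1)
print(w)  # approaches [1, 2]
```

Because each step uses only one example, the parameters move after every sample rather than after a full pass over the data.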

Summing the gradient over all $m$ training examples instead yields the batch gradient descent update:

$$w_j := w_j - \alpha \sum_{i=1}^m \big(f(\boldsymbol{x}^{(i)}) - y^{(i)}\big) x_j^{(i)} \tag{6}$$

or, in vector form, updating all components of $\boldsymbol{w}$ at once:

$$\boldsymbol{w} := \boldsymbol{w} - \alpha \sum_{i=1}^m \big(f(\boldsymbol{x}^{(i)}) - y^{(i)}\big) \boldsymbol{x}^{(i)} \tag{7}$$
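The vectorized update $(7)$ maps directly onto a matrix expression: stacking the examples into a design matrix $X$, the whole sum is `X.T @ (X @ w - y)`. A sketch under assumed synthetic data and hyperparameters:

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, iters=1000):
    """Batch gradient descent, rule (7).

    X: (m, n+1) design matrix with a leading column of ones; y: (m,).
    """
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        errors = X @ w - y            # f(x^(i)) - y^(i) for all i
        w = w - alpha * X.T @ errors  # sums over samples, all coordinates at once
    return w

rng = np.random.default_rng(0)
m = 100
x1 = rng.uniform(-1, 1, m)
X = np.column_stack([np.ones(m), x1])  # column of ones absorbs the intercept
y = 1.0 + 2.0 * x1                     # noiseless data from y = 1 + 2*x1 (assumed)
w = batch_gradient_descent(X, y)
print(w)  # close to [1, 2]
```

Note that the gradient here is a sum, not a mean, over the $m$ examples, matching $(7)$; with a sum, $\alpha$ must shrink as $m$ grows to keep the step size stable.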