Linear Regression
Updated: 2020-12-31
Hypothesis
$$h_\theta(x) = \theta^T x$$
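With the usual convention that $x_0 = 1$, this inner product expands to

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n$$

so $\theta_0$ plays the role of the intercept.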
Parameter
$\theta$
Cost function
$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta\left(x^{(i)}\right) - y^{(i)}\right)^2$$
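A minimal NumPy sketch of this cost function (the name `cost` and the toy data are illustrative; `X` is assumed to carry a leading column of ones so that $\theta^T x$ includes the intercept):

```python
import numpy as np

def cost(theta, X, y):
    """J(theta): half the mean squared error over m training examples."""
    m = len(y)
    residuals = X @ theta - y             # h_theta(x^(i)) - y^(i), all i at once
    return (residuals @ residuals) / (2 * m)

# X has a leading column of ones so theta[0] acts as the intercept
X = np.array([[1., 0.], [1., 1.], [1., 2.]])
y = np.array([1., 3., 5.])
print(cost(np.array([1., 2.]), X, y))     # 0.0: theta = (1, 2) fits y = 1 + 2x exactly
```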
Goal
$$\min_{\theta}\, J(\theta)$$
Gradient Descent: simultaneously update all $\theta_j$
$$\theta_j := \theta_j - \alpha\,\frac{\partial}{\partial \theta_j} J(\theta)$$
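For this cost the partial derivatives work out to $\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)}$, so the whole gradient is a single matrix product. A minimal sketch continuing the assumptions above (`alpha` and `iters` are illustrative hyperparameters):

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, iters):
    """Batch gradient descent on the linear regression cost."""
    m = len(y)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / m   # full gradient in one product
        theta = theta - alpha * grad       # simultaneous update of all theta_j
    return theta

X = np.array([[1., 0.], [1., 1.], [1., 2.]])
y = np.array([1., 3., 5.])
print(gradient_descent(X, y, np.zeros(2), alpha=0.1, iters=5000))
# converges toward [1., 2.]
```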
Regularization
$$J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}\left(h_\theta\left(x^{(i)}\right) - y^{(i)}\right)^2 + \lambda\sum_{j=1}^{n}\theta_j^2\right]$$
where $\lambda$ is the regularization parameter; the penalty sum starts at $j = 1$, so the bias term $\theta_0$ is not regularized.
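A sketch of the regularized cost under the same assumptions as the snippets above; `theta[0]` is excluded from the penalty, matching the sum starting at $j = 1$:

```python
def cost_regularized(theta, X, y, lam):
    """Regularized J(theta); the bias theta[0] is left out of the penalty."""
    m = len(y)
    residuals = X @ theta - y
    penalty = lam * (theta[1:] @ theta[1:])   # lambda * sum_{j=1}^{n} theta_j^2
    return ((residuals @ residuals) + penalty) / (2 * m)
```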