In regularized linear regression, the cost function is augmented with a regularization term.

For Ridge Regression, the cost function is:

J(θ) = (1/2m) ∑ (hθ(x(i)) - y(i))² + (λ/2m) ∑ θj²

For Lasso Regression, it is:

J(θ) = (1/2m) ∑ (hθ(x(i)) - y(i))² + (λ/m) ∑ |θj|

Here, m is the number of training examples, hθ(x(i)) is the hypothesis for the i-th example, y(i) is the corresponding actual output, θj are the model parameters (the first sum runs over the m examples, the second over the feature weights j = 1, …, n), and λ is the regularization parameter. The regularization term penalizes large coefficients, thereby reducing the risk of overfitting; Ridge uses a squared (L2) penalty, while Lasso uses an absolute-value (L1) penalty.
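To make the two cost functions concrete, here is a minimal NumPy sketch that evaluates them directly from the formulas above. The function names (`ridge_cost`, `lasso_cost`), the example data, and the choice to exclude the intercept θ0 from the penalty (a common convention) are illustrative assumptions, not something specified in the text.

```python
import numpy as np

def ridge_cost(theta, X, y, lam):
    """Ridge cost: (1/2m) * sum of squared errors + (lam/2m) * sum of squared weights.

    Assumes X already includes a leading column of ones for the intercept,
    and that theta[0] (the intercept) is left out of the penalty, as is common.
    """
    m = len(y)
    residuals = X @ theta - y                      # h_theta(x(i)) - y(i) for every example
    mse_term = (residuals @ residuals) / (2 * m)   # (1/2m) * sum of squared errors
    penalty = (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    return mse_term + penalty

def lasso_cost(theta, X, y, lam):
    """Lasso cost: (1/2m) * sum of squared errors + (lam/m) * sum of absolute weights."""
    m = len(y)
    residuals = X @ theta - y
    mse_term = (residuals @ residuals) / (2 * m)
    penalty = (lam / m) * np.sum(np.abs(theta[1:]))
    return mse_term + penalty

# Tiny usage example with made-up numbers
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5]])  # first column is the bias term
y = np.array([1.0, 2.0, 3.0])
theta = np.array([0.1, 0.9])
print(ridge_cost(theta, X, y, lam=0.1))
print(lasso_cost(theta, X, y, lam=0.1))
```

Increasing λ in either function raises the price of large weights, pulling the fitted coefficients toward zero; with λ = 0 both reduce to the ordinary unregularized cost.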