Conference paper
Margin maximizing loss functions
Saharon Rosset, Ji Zhu, et al.
NeurIPS 2003
The Lasso achieves variance reduction and variable selection by solving an ℓ1-regularized least squares problem. Huang (2003) claims that 'there always exists an interval of regularization parameter values such that the corresponding mean squared prediction error for the Lasso estimator is smaller than for the ordinary least square estimator'. The result is correct, but the proof given in Huang (2003) is not. This paper presents a corrected proof of the claim, which exposes and exploits some fundamental properties of the Lasso.
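The claimed behavior is easy to probe numerically. Below is a minimal sketch, not taken from the paper, that compares the Lasso's test-set mean squared prediction error against ordinary least squares across a grid of regularization values, using scikit-learn's Lasso and LinearRegression; the synthetic sparse data-generating model, the λ grid, and the train/test split are all illustrative assumptions, and the interval where the Lasso wins will vary with the problem.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic sparse regression problem: few true nonzero coefficients,
# so the l1 penalty can trade a little bias for a variance reduction.
rng = np.random.default_rng(0)
n, p, k = 100, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0
y = X @ beta + rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# OLS baseline prediction error on held-out data.
ols = LinearRegression().fit(X_tr, y_tr)
mse_ols = np.mean((ols.predict(X_te) - y_te) ** 2)

# Sweep the regularization parameter; the claim is that some interval of
# values yields a smaller mean squared prediction error than OLS.
for lam in [0.001, 0.01, 0.05, 0.1, 0.5, 1.0]:
    lasso = Lasso(alpha=lam, max_iter=10000).fit(X_tr, y_tr)
    mse = np.mean((lasso.predict(X_te) - y_te) ** 2)
    print(f"lambda={lam:<6} Lasso MSE={mse:.3f}  OLS MSE={mse_ols:.3f}")
```

On a sparse problem like this, intermediate λ values typically beat OLS, while λ near 0 recovers the OLS fit and very large λ over-shrinks; this is an empirical illustration of the claim, not its proof.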