Aditya Malik, Nalini Ratha, et al.
CAI 2024
We present an algorithm for finding an s-sparse vector x that minimizes the squared error ||y − Φx||_2^2, where Φ satisfies the restricted isometry property (RIP) with isometry constant δ_2s < 1/3. Our algorithm, called GraDeS (Gradient Descent with Sparsification), iteratively updates x as x ← H_s(x + (1/γ)·Φ^T(y − Φx)), where γ > 1 and H_s sets all but the s largest-magnitude coordinates to zero. GraDeS converges to the correct solution in a constant number of iterations. The condition δ_2s < 1/3 is the most general for which a near-linear-time algorithm is known. In comparison, the best condition under which a polynomial-time algorithm is known is δ_2s < √2 − 1. Our Matlab implementation of GraDeS outperforms previously proposed algorithms such as Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered cases where L1-regularized regression (Lasso) fails but GraDeS finds the correct solution.
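The update rule in the abstract can be sketched in a few lines of NumPy. This is an illustrative implementation of the stated iteration, not the authors' Matlab code; the default values of gamma and n_iter below are arbitrary choices for demonstration, and convergence in practice depends on Φ satisfying the RIP condition described above.

```python
import numpy as np

def grades(y, Phi, s, gamma=3.0, n_iter=50):
    """Sketch of GraDeS: gradient descent with hard-threshold sparsification.

    Repeats x <- H_s(x + (1/gamma) * Phi^T (y - Phi x)), where H_s keeps
    only the s largest-magnitude coordinates and zeroes the rest.
    gamma and n_iter are illustrative defaults, not values from the paper.
    """
    n = Phi.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        # gradient step on the squared error, scaled by 1/gamma
        x = x + Phi.T @ (y - Phi @ x) / gamma
        # H_s: zero out all but the s largest-magnitude coordinates
        keep = np.argsort(np.abs(x))[-s:]
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x
```

Each iteration costs one multiplication by Φ and one by Φ^T plus a selection of the top-s entries, which is what makes the per-iteration work near-linear in the size of Φ.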
Leonid Karlinsky, Joseph Shtok, et al.
CVPR 2019
Erik Altman, Jovan Blanusa, et al.
NeurIPS 2023
Pavel Klavík, A. Cristiano I. Malossi, et al.
Philos. Trans. R. Soc. A