How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis. Shuai Zhang, Meng Wang, et al. ICLR 2022.
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge. Geng Yuan, Xiaolong Ma, et al. NeurIPS 2021.
When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? Lijie Fan, Sijia Liu, et al. NeurIPS 2021.
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks. Shuai Zhang, Meng Wang, et al. NeurIPS 2021.
Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD. Chen Fan, Parikshit Ram, et al. NeurIPS 2021.
Rate-improved Inexact Augmented Lagrangian Method for Constrained Nonconvex Optimization. Zichong Li, Pin-Yu Chen, et al. AISTATS 2021.