TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up. Yifan Jiang, Shiyu Chang, et al. NeurIPS 2021.
IA-RED^2: Interpretability-Aware Redundancy Reduction for Vision Transformers. Bowen Pan, Rameswar Panda, et al. NeurIPS 2021.
Data-Efficient Double-Win Lottery Tickets from Robust Pre-training. Tianlong Chen, Zhenyu Zhang, et al. ICML 2022.