Shuli Jiang, Swanand Ravindra Kadhe, et al. "Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks." NeurIPS 2023.
Chulin Xie, Yunhui Long, et al. "Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks." CCS 2023.
Pradip Bose, Jennifer Dworak, et al. "2nd Workshop on Data Integrity and Secure Cloud Computing (DISCC)." MICRO 2023.
Jiajin Zhang, Hanqing Chao, et al. "Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation." MICCAI 2023.
Karan Bhanot, Dennis Wei, et al. "Adversarial Auditing of Machine Learning Models under Compound Shift." ESANN 2023.
Hannah Kim, Celia Cintas, et al. "Spatially Constrained Adversarial Attack Detection and Localization in the Representation Space of Optical Flow Networks." IJCAI 2023.
Tanya Leah Akumu, Celia Cintas, et al. "TRAD: Task-agnostic Representation of the Activation Space in Deep Neural Networks." IJCAI 2023.
Ching-yun Ko, Pin-Yu Chen, et al. "On Robustness-Accuracy Characterization of Large Language Models using Synthetic Datasets." ICML 2023.