Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents. Ivoline Ngong, Swanand Ravindra Kadhe, et al. NeurIPS 2024.
Split, Unlearn, Merge: Leveraging Data Attributes for More Effective Unlearning in LLMs. Swanand Ravindra Kadhe, Farhan Ahmed, et al. ICML 2024.
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks. Shuli Jiang, Swanand Ravindra Kadhe, et al. NeurIPS 2023.
FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs. Swanand Ravindra Kadhe, Anisa Halimi, et al. NeurIPS 2023.
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models. Nathalie Baracaldo Angel, Farhan Ahmed, et al. S&P 2023.
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model. Nathalie Baracaldo Angel, Kevin Eykholt, et al. NeurIPS 2022.
Federated Unlearning: How to Efficiently Erase a Client in FL? Anisa Halimi, Swanand Ravindra Kadhe, et al. ICML 2022.
Using Large Language Models to Protect Information Search in Multi-Domain Operations. Dinesh Verma, David Beymer, et al. SPIE DCS 2024.
Robust Learning Protocol for Federated Tumor Segmentation Challenge. Ambrish Rawat, Giulio Zizzo, et al. MICCAI 2022.