Publication: KES-IDT 2024 (conference paper)

Improving Membership Inference Attacks against Classification Models

Abstract

Artificial intelligence systems are prevalent in everyday life, with use cases in retail, manufacturing, healthcare, and many other fields. With the rise in AI adoption, associated risks have been identified, including privacy risks to the people whose data was used to train models. Assessing the privacy risks of machine learning models is crucial to making informed decisions on whether to use, deploy, or share a model. A common approach to privacy risk assessment is to run one or more attacks against the model and measure their success rate. We present a novel framework for improving the accuracy of membership inference attacks against classification models. Our framework takes advantage of the ensemble approach, generating many specialized attack models for different subsets of the data. We show that this approach achieves better performance than either a single attack model or an attack model per class label, on both classical and language classification tasks.
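
To make the ensemble idea concrete, the sketch below is illustrative only and is not the paper's implementation: it partitions records by clustering the target model's confidence vectors and trains one binary attack model per cluster, routing each query record to the attack model for its cluster. The clustering choice, the scikit-learn estimators, and all function and parameter names are assumptions made for illustration.

# Illustrative sketch (not the paper's method): an ensemble of membership
# inference attack models, each specialized to one subset of the data.
# Assumes every subset contains both member and non-member training records.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier


def train_attack_ensemble(confidences, membership_labels, n_subsets=8, seed=0):
    """Cluster the target model's confidence vectors and fit one attack model per cluster."""
    clusterer = KMeans(n_clusters=n_subsets, random_state=seed, n_init=10)
    subset_ids = clusterer.fit_predict(confidences)

    attack_models = {}
    for sid in range(n_subsets):
        mask = subset_ids == sid
        model = RandomForestClassifier(n_estimators=100, random_state=seed)
        model.fit(confidences[mask], membership_labels[mask])
        attack_models[sid] = model
    return clusterer, attack_models


def infer_membership(clusterer, attack_models, confidences):
    """Route each query record to its subset's specialized attack model."""
    subset_ids = clusterer.predict(confidences)
    scores = np.empty(len(confidences))
    for sid, model in attack_models.items():
        mask = subset_ids == sid
        if mask.any():
            # Probability that each record was part of the target model's training set.
            scores[mask] = model.predict_proba(confidences[mask])[:, 1]
    return scores

In this sketch the subsets are defined by clustering confidence vectors, which is only one possible way to specialize the ensemble; the paper's comparison against a single attack model and against one attack model per class label suggests that the choice of partitioning is the key design decision.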