Qinyi Chen, Jason Cheuk Nam Liang, et al.
NeurIPS 2024
Recent advances in large reasoning models (LRMs) have enabled strong multi-step reasoning capabilities. However, existing machine unlearning algorithms are tailored to standard language modeling and fail to address the unique challenges posed by LRMs. In this work, we present the first systematic study of LRM unlearning and reveal that conventional unlearning methods often overlook critical information leakage in reasoning traces, even when final answers are successfully removed. To address this, we propose Reasoning-aware Representation Misdirection for Unlearning (R^2MU), a method that suppresses sensitive reasoning traces while preserving the model’s general reasoning ability. Our experiments demonstrate that R^2MU significantly reduces reasoning trace leakage and achieves strong performance across both reasoning and safety benchmarks, including WMDP, StrongReject, JBB-Behaviors, and WildJailbreak, on state-of-the-art models such as DeepSeek-R1-Distill-LLaMA-8B and DeepSeek-R1-Distill-Qwen-14B. To the best of our knowledge, R^2MU is the first principled approach to both expose and mitigate reasoning trace leakage in LRM unlearning while preserving reasoning ability.
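To make the representation-misdirection idea concrete, the sketch below shows one plausible form of an RMU-style unlearning loss extended to reasoning-trace tokens: hidden states for forget-set answers and their reasoning traces are pushed toward a fixed random control vector, while retain-set hidden states are kept close to those of a frozen reference model. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name, layer choice, scaling constant, and loss weights are hypothetical.

```python
# Hypothetical sketch of a reasoning-aware, RMU-style unlearning loss.
# All names, shapes, and constants are illustrative assumptions,
# not the R^2MU reference implementation.
import torch
import torch.nn.functional as F

def rmu_style_loss(h_forget, h_reason, h_retain, h_retain_frozen,
                   control_vec, alpha=1.0, beta=1.0):
    """Misdirect forget-set and reasoning-trace activations; preserve retain-set ones.

    h_forget, h_reason, h_retain : (batch, seq, dim) hidden states from an
        intermediate layer of the model being unlearned.
    h_retain_frozen              : the same retain-set activations from a frozen
        copy of the original model.
    control_vec                  : (dim,) fixed random vector used as the
        misdirection target.
    """
    # Push forget-set answer activations toward the random control vector.
    forget_loss = F.mse_loss(h_forget, control_vec.expand_as(h_forget))
    # Also misdirect reasoning-trace activations, so sensitive intermediate
    # reasoning is suppressed rather than only the final answer.
    reason_loss = F.mse_loss(h_reason, control_vec.expand_as(h_reason))
    # Keep retain-set activations close to the frozen model to preserve
    # general (reasoning) ability.
    retain_loss = F.mse_loss(h_retain, h_retain_frozen)
    return forget_loss + alpha * reason_loss + beta * retain_loss

# Toy usage with random tensors standing in for layer activations.
dim = 4096
u = torch.randn(dim)
u = 6.5 * u / u.norm()  # scaled random control vector (scale is an assumption)
h_f, h_r, h_k = (torch.randn(2, 16, dim) for _ in range(3))
loss = rmu_style_loss(h_f, h_r, h_k, h_k.detach(), u)
```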
Yuya Jeremy Ong, Jay Pankaj Gala, et al.
IEEE CISOSE 2024
Henrik Nolte, Miriam Rateike, et al.
FAccT 2025
Ivoline Ngong, Swanand Ravindra Kadhe, et al.
NeurIPS 2024