Irene Ko, Sihui Dai, et al.
NeurIPS 2024
Large Language Models (LLMs) are increasingly acting as autonomous agents, with function calling (FC) enabling them to invoke specific tools for tasks. While prior research has primarily focused on improving FC capabilities, little attention has been given to the robustness of these agents to real-world perturbations. We introduce a benchmark assessing FC robustness in two key areas: resilience to naturalistic query variations, and stability of function calling when the toolkit expands with semantically related tools. Evaluating best-performing FC models on a carefully expanded subset of the Berkeley Function Calling Leaderboard (BFCL), we identify critical weaknesses in existing evaluation methodologies and highlight areas for improvement in real-world agentic deployments.
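The two robustness axes described in the abstract lend themselves to simple pass/fail scoring. Below is a minimal sketch of how such checks might be computed, assuming a hypothetical FCModel callable that maps a query and a toolkit of function schemas to the (function name, arguments) call the model emits; the benchmark's actual interfaces and metrics are not specified here.

```python
from typing import Callable

# Hypothetical stand-in for an FC model: maps a user query and a toolkit
# of function schemas to the (function name, arguments) call it emits.
FCModel = Callable[[str, list[dict]], tuple[str, dict]]

def query_robustness(model: FCModel, toolkit: list[dict],
                     query: str, paraphrases: list[str]) -> float:
    """Fraction of naturalistic query variations for which the model
    produces the same function call as for the original query."""
    reference = model(query, toolkit)
    matches = sum(model(p, toolkit) == reference for p in paraphrases)
    return matches / len(paraphrases)  # assumes at least one paraphrase

def toolkit_stability(model: FCModel, toolkit: list[dict],
                      distractors: list[dict], query: str) -> bool:
    """True if the model still selects the same tool after the toolkit
    is expanded with semantically related (distractor) tools."""
    before, _ = model(query, toolkit)
    after, _ = model(query, toolkit + distractors)
    return before == after

# Toy model for illustration only: picks the first tool whose name
# appears in the query. A real FC model would be an LLM call.
def toy_model(query: str, toolkit: list[dict]) -> tuple[str, dict]:
    for tool in toolkit:
        if tool["name"] in query.lower():
            return tool["name"], {}
    return toolkit[0]["name"], {}

toolkit = [{"name": "get_weather"}, {"name": "get_time"}]
distractors = [{"name": "get_weather_history"}]
print(query_robustness(toy_model, toolkit, "get_weather in Paris",
                       ["what's the get_weather like in Paris?"]))
print(toolkit_stability(toy_model, toolkit, distractors, "get_weather now"))
```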
Nafis Neehal, Bowen Wang, et al.
NAACL 2025
Henrik Nolte, Miriam Rateike, et al.
FAccT 2025
George Kour, Itay Nakash, et al.
ACL 2025