Sarath Swaminathan, Nathaniel Park, et al.
NeurIPS 2025
With the rapid adoption of Large Language Models (LLMs), LLM-adapters have become increasingly common, providing lightweight specialization of large-scale models. Serving hundreds or thousands of these adapters on a single GPU allows request aggregation, increasing throughput, but may also cause request starvation if GPU memory limits are exceeded. To address this issue, this study focuses on determining the joint configuration of concurrent and parallel adapters that maximizes GPU throughput without inducing starvation, given heterogeneous adapter and traffic properties. We propose a data-driven ML approach leveraging interpretable models to tackle this caching problem and introduce the first Digital Twin capable of reproducing an LLM-adapter serving system, enabling efficient training data generation. Experiments with the vLLM framework and LoRA adapters show that the Digital Twin reproduces throughput within 5.1% of real results, while the ML approach predicts optimal numbers of concurrent and parallel adapters with an error of at most 7.2% under heterogeneous, real-world workloads.
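To illustrate the kind of pipeline the abstract describes, below is a minimal, hypothetical sketch: a toy stand-in for the Digital Twin generates (configuration, workload) → throughput samples, an interpretable regressor is fit on them, and the model is then used to pick the numbers of concurrent and parallel adapters with the best predicted throughput. All names, memory constants, sweep ranges, and the choice of a decision-tree regressor are assumptions for illustration, not the paper's released code or its actual Digital Twin.

```python
# Hypothetical sketch (not the paper's code): generate training data from a toy
# stand-in for the Digital Twin, fit an interpretable model, and pick the
# (concurrent, parallel) adapter configuration with the best predicted throughput.
import itertools
import random

from sklearn.tree import DecisionTreeRegressor  # one example of an interpretable model

GPU_MEMORY_GB = 80.0      # assumed GPU memory budget
ADAPTER_MEMORY_GB = 0.5   # assumed per-adapter footprint (illustrative only)


def toy_digital_twin(num_concurrent: int, num_parallel: int, request_rate: float):
    """Very rough stand-in for a Digital Twin of an adapter-serving system.

    Returns (throughput in requests/s, starved). It only captures the qualitative
    trade-off from the abstract: keeping more adapters resident improves request
    aggregation until memory pressure causes starvation.
    """
    memory_used = num_concurrent * num_parallel * ADAPTER_MEMORY_GB
    if memory_used > GPU_MEMORY_GB:
        return 0.0, True  # over the memory limit: requests starve
    aggregation_gain = min(num_concurrent * num_parallel, request_rate)
    throughput = aggregation_gain * (1.0 - memory_used / (2 * GPU_MEMORY_GB))
    return throughput + random.gauss(0, 0.1), False


# 1) Generate training data by sweeping configurations in the (toy) twin.
configs = list(itertools.product(range(1, 65, 4), range(1, 9)))  # (concurrent, parallel)
rates = [10.0, 50.0, 100.0]                                      # requests/s workloads
X, y = [], []
for (c, p), r in itertools.product(configs, rates):
    thr, starved = toy_digital_twin(c, p, r)
    if not starved:  # keep only feasible (non-starving) configurations
        X.append([c, p, r])
        y.append(thr)

# 2) Fit an interpretable model mapping (config, workload) -> throughput.
model = DecisionTreeRegressor(max_depth=6).fit(X, y)

# 3) For a new workload, pick the configuration with the best predicted throughput.
target_rate = 75.0
best = max(configs, key=lambda cp: model.predict([[cp[0], cp[1], target_rate]])[0])
print(f"Predicted best (concurrent, parallel) adapters at {target_rate} req/s: {best}")
```

In the paper's setting, the toy simulator above would be replaced by the Digital Twin of the vLLM/LoRA serving stack, which is what makes generating enough training data for the interpretable model practical.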
Giovanni De Felice, Arianna Casanova Flores, et al.
NeurIPS 2025
Seetharami Seelam, Apoorve Mohan, et al.
ISCA 2023
Ramon Nartallo-Kaluarachchi, Robert Manson Sawko, et al.
NeurIPS 2025