In the rapidly evolving landscape of Large Language Models (LLMs), overcoming low GPU cluster utilization (as low as 20-30% in traditional setups) is crucial for serving these models efficiently in Kubernetes. This talk will share insights from deploying and serving LLMs using MIG partitions and Dynamic Resource Allocation (DRA). Our experiments showed that the optimal MIG partition size depends on the specific LLM and its load, highlighting both the necessity and the feasibility of using DRA to scale model-serving instances vertically.
We'll showcase deploying the open-source vLLM framework in Kubernetes, focusing on scaling vLLM instances under increasing load while maximizing GPU utilization. Attendees will gain practical knowledge on selecting effective MIG partitions for different models and on using DRA to optimize their model-serving systems.
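As a rough illustration of the deployment pattern the talk discusses, the sketch below uses the official Kubernetes Python client to launch a vLLM serving pod pinned to a MIG slice via a static resource limit. The MIG profile name, image tag, and model are illustrative assumptions, not details from the talk; with DRA, the fixed limit would be replaced by a ResourceClaim so the slice size can be adjusted per model and load.

```python
# Minimal sketch: launch a vLLM serving pod on a MIG slice with the
# Kubernetes Python client. MIG profile, image, and model are assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="vllm-mig-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="vllm",
                image="vllm/vllm-openai:latest",
                args=["--model", "facebook/opt-6.7b"],  # placeholder model
                ports=[client.V1ContainerPort(container_port=8000)],
                resources=client.V1ResourceRequirements(
                    # Static MIG slice request; under DRA this would become a
                    # reference to a ResourceClaim instead of a fixed limit.
                    limits={"nvidia.com/mig-3g.40gb": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```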
Yue Zhu, Chen Wang, et al.
MASCOTS 2024
Haoran Qiu, Weichao Mao, et al.
DSN 2024
Haoran Qiu, Weichao Mao, et al.
USENIX ATC 2023
Apoorve Mohan, Matthew Sheard
NVIDIA GTC 2022