Apoorve Mohan, Matthew Sheard
NVIDIA GTC 2022
In the rapidly evolving landscape of Large Language Models (LLMs), overcoming low GPU cluster utilization (as low as 20-30% in traditional setups) is crucial for serving these models efficiently in Kubernetes. This talk shares insights from deploying and serving LLMs using Multi-Instance GPU (MIG) partitions and Dynamic Resource Allocation (DRA). Our experiments show that the optimal MIG partition size depends on the specific LLM and its load, highlighting both the necessity and the feasibility of using DRA to vertically scale model-serving instances.
We'll showcase deploying the open-source vLLM framework in Kubernetes, focusing on scaling vLLM instances under increasing load while maximizing GPU utilization. Attendees will gain practical knowledge on selecting effective MIG partitions for different models and on using DRA to optimize their model-serving systems.
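As a rough illustration of the setup described above, the sketch below shows a DRA ResourceClaim requesting a specific MIG profile, referenced by a vLLM serving pod. This is a hypothetical config fragment, not material from the talk: the device class name, the CEL attribute path, and the `3g.40gb` profile are assumptions that depend on the installed NVIDIA DRA driver and GPU model.

```yaml
# Hypothetical sketch only: attribute and device-class names vary
# by NVIDIA DRA driver version; adjust to your cluster.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: vllm-mig-claim
spec:
  devices:
    requests:
    - name: mig-slice
      deviceClassName: mig.nvidia.com        # assumed device class
      selectors:
      - cel:
          # Request a 3g.40gb MIG slice (assumed attribute path)
          expression: device.attributes["gpu.nvidia.com"].profile == "3g.40gb"
---
apiVersion: v1
kind: Pod
metadata:
  name: vllm-server
spec:
  containers:
  - name: vllm
    image: vllm/vllm-openai:latest
    resources:
      claims:
      - name: gpu                            # binds the claim below
  resourceClaims:
  - name: gpu
    resourceClaimName: vllm-mig-claim
```

Because the claim is a separate object from the pod, a controller could swap in a claim for a larger or smaller MIG profile as load changes, which is the kind of vertical scaling the abstract refers to.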
Archit Patke, Christian Pinto, et al.
ICS 2025
Darya Kaviani, Sijun Tan, et al.
RWC 2025
Pranjal Gupta, Karan Bhukar, et al.
ICPE 2025