Marcelo Amaral, Tatsuhiro Chiba, et al.
CLOUD 2022
The emergence of Multi-Instance GPU (MIG) technology enables us to run smaller machine learning models on partitions of a GPU rather than the entire device, thus improving utilization and reducing energy consumption, albeit with potential performance trade-offs. Meanwhile, the growing energy demands of GPU-equipped data centers motivate the development of online partitioning and scheduling schemes that not only ensure fast job processing but also achieve high energy efficiency. However, achieving energy–tardiness efficiency with manageable algorithmic complexity in large-scale scheduling remains a major challenge, due to the coupled objectives of choosing GPU partitions and scheduling jobs onto the resulting heterogeneous slices. To address this challenge, we propose SMART-MIG, a parallel computing system that combines Mean-Field Multi-Agent Reinforcement Learning (MF-MARL) for large-scale MIG repartitioning with tailored heuristic algorithms for job scheduling. We demonstrate that the complexity of the repartitioning component remains constant even as the number of jobs and GPUs increases. We also establish theoretical lower bounds on energy consumption and tardiness to rigorously benchmark system performance. Finally, extensive experiments show that SMART-MIG improves energy–tardiness efficiency by 18% compared to its static partitioning counterpart, while being only 27% above the theoretical lower bound on energy consumption.
Archit Patke, Christian Pinto, et al.
ICS 2025
Darya Kaviani, Sijun Tan, et al.
RWC 2025
Pranjal Gupta, Karan Bhukar, et al.
ICPE 2025