Haoran Qiu, Weichao Mao, et al.
ASPLOS 2024
Different from traditional Large Language Model (LLM) serving that colocates the prefill and decode stages on the same GPU, disaggregated serving dedicates distinct GPUs to prefill and decode workloads. Once the prefill GPU completes its task, the KV cache must be transferred to the decode GPU. While existing works have proposed various KV cache transfer paths across different memory and storage tiers, there remains a lack of systematic benchmarking that compares their performance and energy efficiency. Meanwhile, although optimization techniques such as frequency scaling have been utilized for disaggregated serving, their performance and energy implications have not been rigorously benchmarked. In this paper, we fill this research gap by re-evaluating prefill-decode disaggregation under different KV transfer mediums and optimization strategies. Specifically, we include a new colocated serving baseline and evaluate disaggregated setups under different KV cache transfer paths. Through GPU profiling using dynamic voltage and frequency scaling (DVFS), we identify and compare the performance-energy Pareto frontiers across all setups to evaluate the potential energy savings enabled by disaggregation. Our results show that performance benefits from prefill-decode disaggregation are not guaranteed and depend on the request load and KV transfer mediums. In addition, stage-wise independent frequency scaling enabled by disaggregation does not lead to energy savings, due to the inherently higher energy consumption of disaggregated serving.
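The abstract's core methodological step, identifying performance-energy Pareto frontiers from a DVFS frequency sweep, can be sketched as follows. This is a minimal illustration, not the paper's code: each sample is an assumed (latency, energy) pair measured at one GPU frequency setting, and lower is better on both axes.

```python
# Hypothetical sketch: extract the performance-energy Pareto frontier
# from a DVFS sweep. Each sample is (latency_ms, energy_joules) at one
# frequency setting; a point is on the frontier if no other point is at
# least as good on both axes and strictly better on one.

def pareto_frontier(samples):
    frontier = []
    for lat, eng in sorted(samples):  # ascending latency (ties: lower energy first)
        # Every point already kept has latency <= lat, so this point
        # survives only if it strictly improves on energy.
        if not frontier or eng < frontier[-1][1]:
            frontier.append((lat, eng))
    return frontier

# Example sweep across five hypothetical frequency settings:
sweep = [(12.0, 95.0), (15.0, 80.0), (14.0, 90.0), (20.0, 70.0), (18.0, 85.0)]
print(pareto_frontier(sweep))
# → [(12.0, 95.0), (14.0, 90.0), (15.0, 80.0), (20.0, 70.0)]
```

Comparing the frontiers of colocated and disaggregated setups (rather than single operating points) is what lets the paper ask whether stage-wise frequency scaling ever dominates the colocated baseline.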
Timothy Chainer, Liz Hulihan, et al.
ARPA-E COOLERCHIPS Kickoff Meeting 2023
Jose Manuel Bernabé Murcia, Eduardo Canovas Martinez, et al.
MobiSec 2024
Sahil Suneja, Yufan Zhuang, et al.
ACM TOSEM