Title: AQUA: Network-Accelerated Memory Offloading for LLMs in Scale-Up GPU Domains (Appearing at ASPLOS 2025)

Link to the paper: https://arxiv.org/pdf/2407.21255

Abstract: Inference on large language models (LLMs) is constrained by GPU memory capacity. A sudden increase in the number of inference requests to a cloud-hosted LLM can deplete GPU memory, leading to contention among multiple prompts for limited resources. Modern LLM serving engines deal with the challenge of limited GPU memory using admission control, which causes them to be unresponsive during request bursts. We propose that preemptive scheduling of prompts in time slices is essential for ensuring responsive LLM inference, especially under conditions of high load and limited GPU memory. However, preempting prompt inference incurs a high paging overhead, which reduces inference throughput. We present Aqua, a GPU memory management framework that significantly reduces the overhead of paging inference state, achieving both responsive and high-throughput inference even under bursty request patterns. We evaluate Aqua by hosting several state-of-the-art large generative ML models of different modalities on servers with 8 Nvidia H100 80 GB GPUs. Aqua improves the responsiveness of LLM inference by 20X compared to the state of the art and improves LLM inference throughput on a single long prompt by 4X.
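
To make the abstract's core idea concrete, here is a minimal Python sketch of preemptive, time-sliced scheduling with paging of inference state. It is purely illustrative and not Aqua's implementation or API: the Request, swap_in, swap_out, and run_slice names are invented for this sketch, and the decode step is a placeholder. The point Aqua addresses is that the swap steps below are normally expensive; per the title, Aqua accelerates this offloading over the scale-up network connecting the GPUs.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Request:
        req_id: int
        remaining_steps: int     # decode iterations left (stand-in for real work)
        kv_on_gpu: bool = True   # whether this request's inference state is resident

    def swap_out(req: Request) -> None:
        # Page the request's KV cache out of GPU memory; in Aqua's setting this
        # offload is accelerated over the scale-up GPU fabric.
        req.kv_on_gpu = False

    def swap_in(req: Request) -> None:
        # Page the KV cache back into GPU memory before resuming decoding.
        req.kv_on_gpu = True

    def run_slice(req: Request, steps: int) -> None:
        # Placeholder for running `steps` decode iterations of this request.
        req.remaining_steps -= steps

    def schedule(requests: deque, slice_steps: int = 32) -> None:
        # Round-robin time slicing: every admitted request makes progress each
        # cycle, so bursts degrade latency gracefully instead of stalling new
        # prompts behind admission control.
        while requests:
            req = requests.popleft()
            if not req.kv_on_gpu:
                swap_in(req)             # paging overhead is paid here
            run_slice(req, slice_steps)
            if req.remaining_steps <= 0:
                continue                 # finished; its GPU memory is freed
            swap_out(req)                # preempt to make room for the next request
            requests.append(req)

    # Example: four concurrent prompts sharing one GPU's memory in time slices.
    schedule(deque(Request(i, remaining_steps=100) for i in range(4)))

The sketch shows the responsiveness/throughput trade-off the abstract describes: time slicing keeps every prompt responsive, but each preemption pays a swap cost, which is the overhead Aqua's memory management is designed to reduce.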

Bio: Abhishek is a PhD candidate at Cornell University, where he is advised by Rachee Singh. His research builds systems for machine learning on scale-up GPU domains by developing algorithms for their control and data planes. His work spans both the current generation of scale-up domains, such as NVLink, and next-generation scale-up domains based on silicon photonics. His research is supported by the Bowers CIS LinkedIn fellowship for the 2024-25 academic year.