Matt Cohen, Monodeep Kar, et al.
ISSCC 2026
Transformer-based Large Language Models (LLMs) demand large weight capacity, efficient compute, and high-throughput access to large amounts of dynamic memory. These challenges present significant opportunities for algorithmic and hardware innovation, including Analog AI accelerators. In this paper, we describe recent progress on Phase Change Memory-based hardware and architectural designs that address the challenges of LLM inference.
Laura Bégon-Lours, Mattia Halter, et al.
MRS Spring Meeting 2023
Yayue Hou, Hsinyu Tsai, et al.
DATE 2025
Ying Zhou, Gi-Joon Nam, et al.
DAC 2023