Yichen Xu, Baoqi Zhu, et al.
VLSI Technology and Circuits 2026
We introduce a novel run-time method that significantly reduces the accuracy loss incurred when quantizing BERT-like models to 8-bit integers. Existing quantization methods either modify the training procedure or require an additional calibration step that adjusts parameters using a selected held-out dataset. Our method exploits quantization without the need for either adjustment. We present results on several NLP tasks demonstrating the effectiveness of the technique.
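The abstract does not spell out the method itself. As a point of reference for the setting it targets (run-time 8-bit quantization that needs neither retraining nor a calibration dataset), here is a minimal sketch using PyTorch's stock dynamic-quantization API as a stand-in. The checkpoint name is an illustrative assumption, and this is not the authors' technique.

```python
# Minimal sketch of run-time (dynamic) int8 quantization of a BERT-like
# model. Uses PyTorch's stock dynamic-quantization API as a stand-in;
# it is NOT the authors' method. The checkpoint name is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # illustrative BERT-like checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Dynamic quantization stores Linear weights as int8 and quantizes
# activations on the fly at inference time, so it requires neither a
# modified training procedure nor a held-out calibration dataset.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("Dynamic int8 quantization needs no calibration set.",
                   return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.shape)  # e.g. torch.Size([1, 2]) for a 2-class head
```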