Hybrid 8-bit floating point (HFP8) training and inference for deep neural networks. Xiao Sun, Jungwook Choi, et al. NeurIPS 2019. Conference paper.
DLFloat: A 16-b Floating Point Format Designed for Deep Learning Training and Inference. Ankur Agrawal, Bruce Fleischer, et al. ARITH 2019. Conference paper.
Accumulation bit-width scaling for ultra-low precision training of deep networks. Charbel Sakr, Naigang Wang, et al. ICLR 2019. Conference paper.
Innovative Practices on Cybersecurity of Hardware Semiconductor Devices. Alfred L. Crouch, Peter Levin, et al. VTS 2019. Conference paper.
Training deep neural networks with 8-bit floating point numbers. Naigang Wang, Jungwook Choi, et al. NeurIPS 2018. Conference paper.
A Scalable Multi-TeraOPS Core for AI Training and Inference. Sunil Shukla, Bruce Fleischer, et al. IEEE Solid-State Circuits Letters (SSC-L), 2018. Journal paper.
A Scalable Multi-TeraOPS Deep Learning Processor Core for AI Training and Inference. Bruce Fleischer, Sunil Shukla, et al. VLSI Circuits 2018. Conference paper.
Novel IC sub-threshold IDDQ signature and its relationship to aging during high voltage stress. Franco Stellari, Naigang Wang, et al. ESSDERC 2018. Conference paper.
High-Q magnetic inductors for high efficiency on-chip power conversion. Naigang Wang, Bruce Doris, et al. IEDM 2016. Conference paper.
An 82%-efficient multiphase voltage-regulator 3D interposer with on-chip magnetic inductors. Kevin Tien, Noah Sturcken, et al. VLSI Circuits 2015. Conference paper.
Very Low Precision Floating Point Representation For Deep Learning Acceleration. Patent CNZL201910352202.5, granted 13 Mar 2023.