Han Wang

Han Wang is a member of the model optimization team at AMD, focused on low‑precision (fully‑quantized) training and inference on AMD hardware. His research and development work centers on techniques to close the accuracy gap between FP32/BF16 and quantized datatypes. He is also a contributor to several open‑source projects, including Torchtitan, vLLM, xDiT, and AITER.

Posts by Han Wang