Guanchen Li
Guanchen Li is a Software Development Engineer at AMD, where he focuses on large-scale model optimization and efficiency-oriented AI systems. His work centers on LLM compression methods—including structural pruning, sparsity, and quantization—to improve inference speed, reduce memory footprint, and enhance deployment efficiency on modern hardware.
He received his M.S. degree in Computer Technology from the University of Science and Technology Beijing and his B.S. degree in Information Management and Information Systems from the Capital University of Economics and Business. His research interests span model compression, inference acceleration, and large-scale training systems.
Guanchen has published several peer-reviewed papers at leading NLP and ML conferences, including NeurIPS, NAACL, COLING, and ICML.