Posts by Tiffany Mintz
Modernizing Taichi Lang to LLVM 20 for MI325X GPU Acceleration
- 04 December 2025
Our first Taichi Lang blog introduced you to Taichi Lang on AMD’s MI210 and MI250X GPUs. That version of Taichi was limited by its dependence on outdated versions of LLVM. We have modernized Taichi to LLVM 20 to take advantage of the latest advances in LLVM’s code generation capabilities. This modernization also allows us to make Taichi available on newer AMD Instinct GPUs, the MI300X and MI325X. As with our previous blog, we provide a guide for understanding Taichi and walk you through installing Taichi, as well as writing and executing a Taichi program.
Accelerating Parallel Programming in Python with Taichi Lang on AMD GPUs
- 31 July 2025
Taichi Lang is an open-source, imperative, parallel programming language for high-performance numerical computation. It is embedded in Python and uses just-in-time (JIT) compiler frameworks (e.g., LLVM) to offload compute-intensive Python code to native GPU or CPU instructions. The language has broad applications spanning real-time physical simulation, numerical computation, augmented reality, artificial intelligence, vision and robotics, visual effects in films and games, general-purpose computing, and much more [1].
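As a quick illustration of the programming model described above (a minimal sketch, not taken from the post itself), a Taichi program decorates a Python function as a kernel, and Taichi JIT-compiles it so the outermost loop runs in parallel on the selected backend:

```python
import taichi as ti

# Pick a GPU backend if one is available; Taichi falls back to the CPU otherwise.
ti.init(arch=ti.gpu)

n = 1_000_000
x = ti.field(dtype=ti.f32, shape=n)

@ti.kernel
def fill_squares():
    # The outermost for loop in a Taichi kernel is automatically parallelized.
    for i in range(n):
        x[i] = float(i) * float(i)

fill_squares()
print(x[0], x[1], x[2])  # 0.0 1.0 4.0
```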
Triton Inference Server with vLLM on AMD GPUs
- 08 January 2025
Triton Inference Server is an open-source platform designed to streamline AI inferencing. It supports the deployment, scaling, and inference of trained AI models from various machine learning and deep learning frameworks, including TensorFlow, PyTorch, and vLLM, making it adaptable for diverse AI workloads. It is designed to work across multiple environments, including cloud, data centers, and edge devices.