AI - Applications & Models

Reproduce AMD's MLPerf Training v5.0 Submission Result with Instinct™ GPUs
Follow this step-by-step guide to reproduce AMD's MLPerf Training v5.0 submission with Instinct™ GPUs using ROCm.

AMD’s MLPerf Training Debut: Optimizing LLM Fine-Tuning with Instinct™ GPUs
Explore the techniques we used to improve training performance on MI300X and MI325X in our MLPerf Training v5.0 submission.

High-Throughput BERT-L Pre-Training on AMD Instinct™ GPUs: A Practical Guide
Learn how to optimize BERT-L training with mixed precision and Flash Attention v2 on AMD Instinct GPUs — follow our tested MLPerf-compliant step-by-step guide.

Scale LLM Inference with Multi-Node Infrastructure
Learn how to horizontally scale LLM inference using open-source tools on MI300X, with vLLM, nginx, Prometheus, and Grafana.

AMD Integrates llm-d on AMD Instinct MI300X Cluster For Distributed LLM Serving

Step-Video-T2V Inference with xDiT on AMD Instinct MI300X GPUs
Learn how to accelerate text-to-video generation using Step-Video-T2V, a 30B-parameter T2V model, on AMD MI300X GPUs with ROCm, enabling scalable, high-fidelity video generation from text.

DataFrame Acceleration: hipDF and hipDF.pandas on AMD GPUs
This blog post demonstrates how hipDF significantly enhances and accelerates data manipulation, aggregation, and transformation tasks on AMD hardware using ROCm.

CuPy and hipDF on AMD: The Basics and Beyond
Learn how to deploy CuPy and hipDF on AMD GPUs, explore their high-performance computing advantages, and apply both in a detailed example: optimizing an investment portfolio allocation with the Markowitz model.

Power Up Qwen 3 with AMD Instinct: A Developer’s Day 0 Quickstart
Explore the power of Alibaba's Qwen3 models on AMD Instinct™ MI300X and MI325X GPUs - available from Day 0 with seamless SGLang and vLLM integration.

Reinforcement Learning from Human Feedback on AMD GPUs with verl and ROCm Integration
Deploy verl on AMD GPUs for fast, scalable RLHF training, with ROCm optimizations, Docker scripts, and strong throughput and convergence results.

Shrink LLMs, Boost Inference: INT4 Quantization on AMD GPUs with GPTQModel
Learn how to compress LLMs with GPTQModel and run them efficiently on AMD GPUs using INT4 quantization - reducing memory use, shrinking model size, and enabling fast inference.

Power Up Llama 4 with AMD Instinct: A Developer’s Day 0 Quickstart
Explore the power of Meta’s Llama 4 multimodal models on AMD Instinct™ MI300X and MI325X GPUs - available from Day 0 with seamless vLLM integration