Recent Posts - Page 16
Introducing Instella-Long: A Fully Open Language Model with Long-Context Capability
Learn about Instella-Long: AMD’s open 3B language model supporting 128K context, trained on MI300X GPUs, outperforming peers on long-context benchmarks.
AMD ROCm: Powering the World's Fastest Supercomputers
Discover how ROCm drives the world's top supercomputers, from El Capitan to Frontier, and why it's shaping the future of scalable, open, and sustainable HPC.
LLM Quantization with Quark on AMD GPUs: Accuracy and Performance Evaluation
Learn how to use Quark to apply FP8 quantization to LLMs on AMD GPUs, and evaluate accuracy and performance using vLLM and SGLang on AMD MI300X GPUs.
ROCm Revisited: Getting Started with HIP
New to HIP? This blog introduces the HIP runtime API, covering its key concepts, installation, and practical code examples that showcase its functionality.
The ROCm Revisited Series
We present our ROCm Revisited Series. Discover ROCm's role in leading-edge supercomputing and its growing ecosystem, from HIP to developer tools, powering AI, HPC, and data science across multi-GPU and cluster systems.
ROCm Revisited: Evolution of the High-Performance GPU Computing Ecosystem
Learn how ROCm evolved to support HPC, AI, and containerized workloads with modern tools, libraries, and deployment options.
AMD’s MLPerf Training Debut: Optimizing LLM Fine-Tuning with Instinct™ GPUs
Explore the techniques we used to improve the training performance on MI300X and MI325X in our MLPerf Training 5.0 submission.
Reproduce AMD's MLPerf Training v5.0 Submission Result with Instinct™ GPUs
Follow this step-by-step guide to reproduce AMD's MLPerf Training v5.0 submission with Instinct GPUs using ROCm.
High-Throughput BERT-L Pre-Training on AMD Instinct™ GPUs: A Practical Guide
Learn how to optimize BERT-L training with mixed precision and Flash Attention v2 on AMD Instinct GPUs — follow our tested MLPerf-compliant step-by-step guide.
Scale LLM Inference with Multi-Node Infrastructure
Learn how to horizontally scale LLM inference using open-source tools on MI300X, with vLLM, nginx, Prometheus, and Grafana.
HIP 7.0 Is Coming: What You Need to Know to Stay Ahead
Get ready for HIP 7.0: explore key API changes that boost CUDA compatibility and streamline portable GPU development, and start preparing your code today.
ROCm Runfile Installer Is Here!
Overview of the ROCm Runfile Installer introduced in ROCm 6.4, which provides a single, complete package for driver and ROCm installation without internet connectivity.