Posts by Nico Holmberg

Reproducing the AMD MLPerf Inference v6.0 Submission Result

MLPerf Inference v6.0 marked AMD’s fourth round of submissions to MLPerf Inference. This blog provides a step-by-step guide to reproducing AMD’s results on different vendor systems.

Read more ...


AMD Instinct™ GPUs MLPerf Inference v6.0 Submission

The results for the MLPerf Inference v6.0 benchmark were released on April 1, 2026. In this round, AMD showcased the performance of the MI355X system, as well as the capability and versatility of the ROCm software stack.

Read more ...


Scaling AI Inference Performance with vLLM on AMD Instinct MI355X GPUs

Today, we are excited to share Large Language Model (LLM) inference performance with vLLM on AMD Instinct™ MI355X GPUs. Whether you are a startup, an enterprise, or a hyperscaler, the AMD open software ecosystem with Instinct MI355X GPUs delivers consistent, high-performance inference at scale, outperforming Nvidia Blackwell B200 GPUs as concurrency grows. For real-world users, this performance translates directly into better user experience and lower cost in production environments.

Read more ...