Posts tagged Robotics
Training a Robotic Arm Using MuJoCo and JAX on AMD Hardware with ROCm™
- 31 March 2026
Training a robotic arm to pick up an object and place it somewhere else may sound straightforward, but teaching a robot to do this reliably in the real world is one of the harder problems in robotics. Traditional approaches rely on hand-tuned motion planning and carefully scripted control logic, both of which are brittle and time-consuming to maintain as environments change.
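To make that concrete, here is a minimal sketch (not taken from the post) of the MJX pattern such training pipelines build on: MuJoCo's JAX backend lets you `jit` and `vmap` the physics step, so thousands of simulated environments advance in parallel on a single GPU. It assumes a ROCm-enabled JAX install; the toy hinge model and batch size are illustrative.

```python
import jax
import jax.numpy as jnp
import mujoco
from mujoco import mjx

XML = """
<mujoco>
  <worldbody>
    <body pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.05" fromto="0 0 0 0.5 0 0"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
mjx_model = mjx.put_model(model)     # copy the model onto the accelerator
mjx_data = mjx.make_data(mjx_model)

# Broadcast one mjx.Data instance into a batch of environments with
# different initial joint angles.
init_qpos = jnp.linspace(-0.5, 0.5, 4096).reshape(-1, model.nq)
batch = jax.vmap(lambda q: mjx_data.replace(qpos=q))(init_qpos)

# jit + vmap compile the physics step into one kernel that advances all
# 4096 environments in parallel on the GPU.
step = jax.jit(jax.vmap(mjx.step, in_axes=(None, 0)))
for _ in range(100):
    batch = step(mjx_model, batch)
```

A reinforcement learning loop then only has to wrap this batched step with a policy network and a reward, which is what makes the simulate-then-train approach tractable compared with scripted control.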
Edge-to-Cloud Robotics with AMD ROCm: From Data Collection to Real-Time Inference
- 23 March 2026
This blog walks through a full edge-to-cloud robotics AI solution, built entirely on the AMD ecosystem and the Hugging Face LeRobot framework. In case you are not familiar with it, LeRobot is an open-source platform from Hugging Face that provides pre-trained models, datasets, and tools for real-world robotics, built on PyTorch.
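As a hedged sketch of the basic LeRobot workflow the post builds on (module paths and dataset keys vary across LeRobot releases), the snippet below pulls a dataset and a pretrained policy from the Hugging Face Hub and queries the policy for an action. On ROCm builds of PyTorch, AMD GPUs appear under the `"cuda"` device name.

```python
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

dataset = LeRobotDataset("lerobot/pusht")  # community dataset hosted on the Hub
policy = DiffusionPolicy.from_pretrained("lerobot/diffusion_pusht")
policy.to(device).eval()
policy.reset()  # clears the policy's internal observation/action queues

# Feed one recorded observation through the policy to get a motor command.
sample = dataset[0]
obs = {
    "observation.image": sample["observation.image"].unsqueeze(0).to(device),
    "observation.state": sample["observation.state"].unsqueeze(0).to(device),
}
with torch.no_grad():
    action = policy.select_action(obs)
print(action.shape)
```

The same `select_action` call is what runs at the edge during real-time inference; the cloud side of the pipeline is where datasets like this are collected and policies are trained.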
Digital Twins on AMD: Building Robotic Simulations Using Edge AI PCs
- 09 February 2026
Digital twins are becoming a core tool in robotics, automation, and intelligent systems. They provide a virtual representation of a physical system, allowing developers to validate robot behaviors, test motion strategies, and generate datasets before deploying anything in the real world.
Building Robotics Applications with Ryzen AI and ROS 2
- 09 February 2026
This blog showcases how to deploy power-efficient Ryzen AI perception models with ROS 2, the Robot Operating System. We use the Ryzen AI Max+ 395 (Strix Halo) platform, which pairs an efficient Ryzen AI NPU with an iGPU, and rely on the Ryzen AI CVML Library to deploy supported models on it. All of the code is available on GitHub in the AMD Ryzers repository and was originally presented at ROSCon’25.
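For readers new to ROS 2, the shape of such a perception node looks roughly like the rclpy sketch below (not the post's code; see the AMD Ryzers repository for the real implementation). The `run_cvml_inference` helper is a hypothetical stand-in for the Ryzen AI CVML Library call, whose actual API is documented by AMD.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String


def run_cvml_inference(msg: Image) -> list:
    """Hypothetical stand-in for the Ryzen AI CVML Library inference call."""
    return []


class PerceptionNode(Node):
    def __init__(self):
        super().__init__("ryzen_ai_perception")
        # Subscribe to camera frames and republish model outputs.
        self.sub = self.create_subscription(
            Image, "/camera/image_raw", self.on_image, 10
        )
        self.pub = self.create_publisher(String, "/detections", 10)

    def on_image(self, msg: Image) -> None:
        detections = run_cvml_inference(msg)  # NPU-accelerated model goes here
        out = String()
        out.data = str(detections)
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(PerceptionNode())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```

Keeping inference inside a plain ROS 2 node like this is what lets the NPU-offloaded model slot into an existing robot graph without changing any other nodes.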
Fine-tuning Robotics Vision Language Action Models with AMD ROCm and LeRobot
- 14 July 2025
This blog showcases training and deploying robotics policy models on AMD Instinct™ GPUs using ROCm with Hugging Face’s LeRobot framework. Recent advances in Vision-Language-Action models (VLAs) represent a breakthrough in robotics AI: these models combine computer vision, language understanding, and robotic control into unified architectures that can process visual observations, understand task descriptions, and generate precise motor commands.
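The fine-tuning side follows the loop pattern from LeRobot's training examples. The sketch below is a hedged approximation, not the post's code: module paths and the `forward` return format vary by release, the action-chunk timestamps are an assumption matching ACT's default 100-step chunks at the Aloha datasets' 50 fps, and ACT stands in here for the VLA policies the post fine-tunes. On ROCm, PyTorch exposes AMD Instinct GPUs under the `"cuda"` device name.

```python
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.policies.act.modeling_act import ACTPolicy

device = torch.device("cuda")

# 100-step action chunks at 50 fps (assumed dataset frame rate).
delta_timestamps = {"action": [i / 50 for i in range(100)]}
dataset = LeRobotDataset(
    "lerobot/aloha_sim_transfer_cube_human", delta_timestamps=delta_timestamps
)
loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)

policy = ACTPolicy.from_pretrained("lerobot/act_aloha_sim_transfer_cube_human")
policy.train().to(device)
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-5)

for batch in loader:
    # Move tensor fields to the GPU; drop non-tensor metadata.
    batch = {k: v.to(device) for k, v in batch.items() if isinstance(v, torch.Tensor)}
    loss = policy.forward(batch)["loss"]  # policies return a dict carrying the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the loop is plain PyTorch, the same script runs unchanged on CUDA and ROCm installs, which is the point the post demonstrates at Instinct scale.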