Posts by Juho Kerttula

Fine-Tune LLMs for Proteins with AMD Enterprise AI Suite

Want to teach a large language model to understand protein sequences? ROCm has got you covered.

Read more ...


AMD Inference Microservice (AIM): Production-Ready Inference on AMD Instinct™ GPUs

As generative AI models continue to grow in scale, context length, and operational complexity, enterprises face an increasingly hard challenge: deploying and operating inference reliably, efficiently, and at production scale. Running LLMs or multimodal models on real workloads requires more than high-performance GPUs. It requires reproducible deployments, predictable performance, seamless orchestration, and an operational framework that teams can trust.

Read more ...