Posts by Douglas Jia

Transforming Words into Motion: A Guide to Video Generation with an AMD GPU

This blog introduces advancements in text-to-video generation built on enhancements to the Stable Diffusion model, and demonstrates how to generate videos from text prompts on an AMD GPU using Alibaba’s ModelScopeT2V model.
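
For a flavor of the approach, here is a minimal text-to-video sketch using Hugging Face diffusers; the checkpoint and settings are illustrative and not necessarily those used in the post.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# ModelScopeT2V weights published on the Hugging Face Hub (illustrative checkpoint)
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")  # a ROCm build of PyTorch exposes AMD GPUs via the "cuda" device

# Generate a short clip from a text prompt and write it to a video file
result = pipe("a panda surfing a wave", num_inference_steps=25)
video_path = export_to_video(result.frames[0])  # on older diffusers versions, use result.frames
print(video_path)
```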

Read more ...


Inferencing with AI2’s OLMo model on an AMD GPU

In this blog, we will show you how to generate text using AI2’s OLMo model on an AMD GPU.
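
As a quick sketch of what that looks like with Hugging Face transformers (the checkpoint name and generation settings are assumptions, not necessarily the ones the post uses):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-1B-hf"  # illustrative OLMo checkpoint in transformers-native format
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("Language models are", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```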

Read more ...


Instruction fine-tuning of StarCoder with PEFT on multiple AMD GPUs

In this blog, we will show you how to fine-tune the StarCoder base model on AMD GPUs with an instruction-answer pair dataset so that it can follow instructions to generate code and answer questions. We will also show you how to use parameter-efficient fine-tuning (PEFT) to minimize the computational cost of fine-tuning.
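
To give a sense of the PEFT side, here is a minimal LoRA setup with the peft library; the base checkpoint, ranks, and target modules are illustrative choices, and the dataset and trainer wiring are omitted:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# A small StarCoder variant, used here only to keep the sketch lightweight
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")

lora_config = LoraConfig(
    r=16,              # rank of the low-rank update matrices
    lora_alpha=32,     # scaling factor applied to the adapter output
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["c_attn", "c_proj", "q_attn"],  # attention projections in StarCoder blocks
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

With this in place, training updates only a tiny fraction of the parameters, which is what keeps multi-GPU fine-tuning affordable.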

Read more ...


GPU Unleashed: Training Reinforcement Learning Agents with Stable Baselines3 on an AMD GPU in a Gymnasium Environment

This blog delves into the fundamentals of deep reinforcement learning and guides you through a practical code example that uses an AMD GPU to train a Deep Q-Network (DQN) policy in a Gymnasium environment.
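
A bare-bones version of that training loop with Stable Baselines3 might look like the following; the environment and hyperparameters are illustrative:

```python
import gymnasium as gym
from stable_baselines3 import DQN

env = gym.make("CartPole-v1")  # a classic control task; the post may use a different environment

# device="cuda" selects the AMD GPU when PyTorch is built for ROCm
model = DQN("MlpPolicy", env, learning_rate=1e-3, buffer_size=50_000, device="cuda", verbose=1)
model.learn(total_timesteps=100_000)

# Roll out the learned policy
obs, info = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```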

Read more ...


Efficient image generation with Stable Diffusion models and ONNX Runtime using AMD GPUs

In this blog, we show you how to use pre-trained Stable Diffusion models to generate images from text (text-to-image), transform existing visuals (image-to-image), and restore damaged pictures (inpainting) on AMD GPUs using ONNX Runtime.
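
As a taste, the text-to-image path can be sketched with Hugging Face Optimum's ONNX Runtime integration; the model ID and execution provider here are assumptions (the ROCm provider requires an onnxruntime build with ROCm support):

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

# export=True converts the PyTorch weights to ONNX on first load
pipe = ORTStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    export=True,
    provider="ROCMExecutionProvider",  # assumes onnxruntime was built with ROCm enabled
)

image = pipe("an astronaut riding a horse on Mars", num_inference_steps=30).images[0]
image.save("astronaut.png")
```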

Read more ...


LLM distributed supervised fine-tuning with JAX

In this article, we review the process of fine-tuning a Bidirectional Encoder Representations from Transformers (BERT)-based large language model (LLM) using JAX for a text classification task. We explore techniques for parallelizing this fine-tuning across multiple AMD GPUs, then evaluate the model’s performance on a holdout dataset. Specifically, we fine-tune a BERT-base-cased transformer model on a General Language Understanding Evaluation (GLUE) benchmark dataset.
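
The parallelization pattern at the heart of that procedure is data parallelism via jax.pmap. The toy model below is a stand-in for BERT, just to show the shape of the idea: replicate the parameters, shard the batch across devices, and average gradients with a collective:

```python
from functools import partial

import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model standing in for the BERT classifier
    preds = x @ params["w"] + params["b"]
    return jnp.mean((preds - y) ** 2)

@partial(jax.pmap, axis_name="batch")  # one replica per visible AMD GPU
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across devices so every replica applies the same update
    grads = jax.lax.pmean(grads, axis_name="batch")
    return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

n_dev = jax.local_device_count()
params = {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}
params = jax.device_put_replicated(params, jax.local_devices())

# Leading axis of the data equals the device count: each device gets one shard
x = jnp.ones((n_dev, 32, 8))
y = jnp.ones((n_dev, 32, 1))
params = train_step(params, x, y)
```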

Read more ...


Pre-training a large language model with Megatron-DeepSpeed on multiple AMD GPUs

In this blog, we show you how to pre-train a GPT-3 model using the Megatron-DeepSpeed framework on multiple AMD GPUs. We also demonstrate how to run text-generation inference with your pre-trained model.
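
Pre-training itself is driven by Megatron-DeepSpeed's launcher scripts, but once a checkpoint has been converted to Hugging Face format (the Megatron-DeepSpeed repository includes checkpoint-conversion tools), the inference step can be sketched as follows; the checkpoint path is a placeholder:

```python
from transformers import pipeline

# "./converted-gpt-checkpoint" is a hypothetical path to a converted checkpoint
generator = pipeline("text-generation", model="./converted-gpt-checkpoint", device=0)
print(generator("The future of AI hardware is", max_new_tokens=50)[0]["generated_text"])
```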

Read more ...


Efficient image generation with Stable Diffusion models and AITemplate using AMD GPUs

Stable Diffusion has emerged as a groundbreaking advancement in the field of image generation, empowering users to translate text descriptions into captivating visual output.

Read more ...


Efficient deployment of large language models with Text Generation Inference on AMD GPUs

Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs) efficiently. TGI is tailored for popular open-source LLMs, such as Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and T5. Its optimizations include tensor parallelism, token streaming using Server-Sent Events (SSE), continuous batching, and optimized transformers code. TGI also offers a robust feature set: quantization, safetensors, watermarking (to determine whether text was generated by a language model), logits warping, and support for custom prompt generation and fine-tuning.
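
Once a TGI server is running (for example, via its official Docker image), querying it from Python takes only a few lines with huggingface_hub; the endpoint URL and parameters below are illustrative:

```python
from huggingface_hub import InferenceClient

# Assumes a TGI server is already serving a model at this (illustrative) local endpoint
client = InferenceClient("http://localhost:8080")

# stream=True yields tokens as they are produced, using Server-Sent Events under the hood
for token in client.text_generation(
    "Explain tensor parallelism in one paragraph.", max_new_tokens=100, stream=True
):
    print(token, end="", flush=True)
```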

Read more ...