AI - Applications & Models

Automatic mixed precision in PyTorch using AMD GPUs
As models increase in size, the time and memory needed to train them, and consequently the cost, also increase. Any measure that reduces training time and memory usage is therefore highly beneficial, and this is where Automatic Mixed Precision (AMP) comes in. In this blog, we discuss the basics of AMP, how it works, and how it can improve training efficiency on AMD GPUs.
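As a rough illustration of the AMP pattern this post covers, here is a minimal sketch using PyTorch's `torch.autocast` with gradient scaling. The model, data, and hyperparameters are placeholders, not the blog's code; on AMD GPUs the same API is used via ROCm, where the device type is still reported as `"cuda"`.

```python
# Minimal AMP sketch (illustrative only): run forward/loss in lower
# precision under autocast, and use a gradient scaler so float16
# gradients do not underflow. Falls back to bfloat16 on CPU.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16

model = torch.nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.MSELoss()
# GradScaler rescales the loss before backward; disabled on CPU,
# where it becomes a passthrough.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(8, 16, device=device)
target = torch.randn(8, 4, device=device)

for _ in range(3):
    optimizer.zero_grad()
    # Ops inside autocast run in lower precision where it is safe.
    with torch.autocast(device_type=device, dtype=amp_dtype):
        loss = loss_fn(model(x), target)
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()                # adjusts the scale factor
```

Because autocast keeps numerically sensitive ops (such as reductions) in float32 while running matrix multiplies in half precision, this typically cuts activation memory and speeds up training without changing the model code itself.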

Large language model inference optimizations on AMD GPUs
LLM inference optimizations on AMD Instinct™ GPUs

Building a decoder transformer model on AMD GPU(s)

Question-answering Chatbot with LangChain on an AMD GPU

Music Generation With MusicGen on an AMD GPU

Efficient image generation with Stable Diffusion models and ONNX Runtime using AMD GPUs

Simplifying deep learning: A guide to PyTorch Lightning

Two-dimensional images to three-dimensional scene mapping using NeRF on an AMD GPU

Using LoRA for efficient fine-tuning: Fundamental principles
This blog demonstrates how to use LoRA to efficiently fine-tune a Llama model on AMD GPUs with ROCm.

Fine-tune Llama 2 with LoRA: Customizing a large language model for question-answering

Fine-tune Llama model with LoRA: Customizing a large language model for question-answering
This blog demonstrates how to use LoRA to efficiently fine-tune a Llama model for question-answering on a single AMD GPU with ROCm.
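The core LoRA idea behind these fine-tuning posts can be sketched in a few lines: freeze the pretrained weight matrix and learn only a low-rank update B·A. This is a minimal illustration, not the blogs' code, and the `LoRALinear` class name, rank, and scaling are hypothetical choices.

```python
# Hypothetical LoRA sketch: wrap a frozen Linear layer and train only
# a rank-r update, so r*(in+out) parameters are trained instead of in*out.
import torch

class LoRALinear(torch.nn.Module):
    def __init__(self, base: torch.nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        in_f, out_f = base.in_features, base.out_features
        self.A = torch.nn.Parameter(torch.randn(r, in_f) * 0.01)
        # B starts at zero, so the wrapped layer initially matches the base.
        self.B = torch.nn.Parameter(torch.zeros(out_f, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(torch.nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # only a small fraction of weights is trainable
```

Because the frozen base weights never receive gradients, optimizer state and gradient memory shrink accordingly, which is what makes fine-tuning large models feasible on a single GPU.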

Pre-training BERT using Hugging Face & TensorFlow on an AMD GPU