AI - Applications & Models
Using LoRA for efficient fine-tuning: Fundamental principles
This blog demonstrates how to use LoRA to efficiently fine-tune a Llama model on AMD GPUs with ROCm.
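The core idea behind LoRA, as covered in the entry above, is to freeze the pretrained weight matrix W and train only a low-rank pair of matrices A and B, so the effective weight becomes W + (α/r)·B·A. A minimal NumPy sketch of that principle (dimensions, scaling, and initialization are illustrative assumptions, not code from the blog):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16

# Frozen pretrained weight (d_out x d_in); values are illustrative.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors: B starts at zero so that B @ A == 0
# and training begins exactly from the pretrained behavior.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x, W, A, B, alpha, r):
    """y = W x + (alpha / r) * B A x; only A and B would receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Before any training, the adapted output equals the base output.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), W @ x)

# Parameter count: full fine-tuning updates W.size values,
# while LoRA trains only the two small factors.
full = W.size                 # 8 * 8 = 64
adapter = A.size + B.size     # 2 * 8 + 8 * 2 = 32
```

The saving grows with model size: for a d×d weight, the adapter costs 2·r·d parameters instead of d², which is why ranks like r = 8 make single-GPU fine-tuning of Llama-scale models practical.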
Fine-tune Llama 2 with LoRA: Customizing a large language model for question-answering
This blog demonstrates how to use LoRA to efficiently fine-tune the Llama 2 model on a single AMD GPU with ROCm.
Pre-training BERT using Hugging Face & TensorFlow on an AMD GPU
Pre-training BERT using Hugging Face & PyTorch on an AMD GPU
Accelerating XGBoost with Dask using multiple AMD GPUs
LLM distributed supervised fine-tuning with JAX
Pre-training a large language model with Megatron-DeepSpeed on multiple AMD GPUs
Efficient deployment of large language models with Text Generation Inference on AMD GPUs
Efficient image generation with Stable Diffusion models and AITemplate using AMD GPUs