Posts by Vara Lakshmi Bayanagari

Panoptic segmentation and instance segmentation with Detectron2 on AMD GPUs

This blog gives an overview of Detectron2 and walks through inference with the segmentation pipelines in its core library on an AMD GPU.
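As a rough feel for what that inference looks like, here is a minimal sketch that runs a panoptic segmentation model from the Detectron2 model zoo with DefaultPredictor. The config name and input path are illustrative placeholders, not taken from the post.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Load a panoptic FPN config and its pretrained weights from the model zoo
# (config chosen for illustration; the post may use a different model).
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml")

predictor = DefaultPredictor(cfg)             # runs on the GPU PyTorch reports (e.g. an AMD GPU under ROCm)
outputs = predictor(cv2.imread("input.jpg"))  # "input.jpg" is a placeholder path
panoptic_seg, segments_info = outputs["panoptic_seg"]
print(f"{len(segments_info)} segments predicted")
```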

Read more ...


Training a Neural Collaborative Filtering (NCF) Recommender on an AMD GPU

Collaborative filtering is a type of item recommendation in which new items are suggested to a user based on their past interactions. Neural Collaborative Filtering (NCF) is a recommendation system that uses a neural network to model the user-item interaction function: it optimizes this collaborative function, learned by the network, and uses it to rank recommended items for the user.
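As a rough illustration of that idea (not the architecture used in the post), a minimal NCF-style model in PyTorch concatenates user and item embeddings and passes them through an MLP that scores the interaction:

```python
import torch
import torch.nn as nn

class NCF(nn.Module):
    """Minimal NCF-style model: user/item embeddings fed through an MLP
    that predicts an interaction score."""
    def __init__(self, num_users, num_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, user_ids, item_ids):
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # predicted interaction probability

# Toy usage: score two user-item pairs.
model = NCF(num_users=1000, num_items=5000)
scores = model(torch.tensor([0, 1]), torch.tensor([10, 20]))
```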

Read more ...


PyTorch C++ Extension on AMD GPU

This blog demonstrates how to use the PyTorch C++ extension with an example and discusses its advantages over regular PyTorch modules. The experiments were carried out on AMD GPUs with ROCm 5.7.0 software. For more information about supported GPUs and operating systems, see System Requirements (Linux).
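As a minimal illustration of the mechanism (not the example from the post), torch.utils.cpp_extension.load_inline can JIT-compile a small C++ function and expose it to Python:

```python
import torch
from torch.utils.cpp_extension import load_inline

# A toy C++ operator; the post's actual example will differ.
cpp_source = """
torch::Tensor scaled_add(torch::Tensor a, torch::Tensor b, double alpha) {
    return a + alpha * b;
}
"""

ext = load_inline(
    name="scaled_add_ext",
    cpp_sources=cpp_source,
    functions=["scaled_add"],  # pybind11 bindings are generated automatically
    verbose=True,
)

x, y = torch.randn(4), torch.randn(4)
print(ext.scaled_add(x, y, 0.5))
```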

Read more ...


Total body segmentation using MONAI Deploy on an AMD GPU

Medical Open Network for Artificial Intelligence (MONAI) is an open-source organization that provides PyTorch implementations of state-of-the-art medical imaging models, ranging from classification and segmentation to image generation. Catering to the needs of researchers, clinicians, and other domain contributors, MONAI offers three end-to-end workflow tools across its lifecycle: MONAI Core, MONAI Label, and MONAI Deploy.

Read more ...


Two-dimensional images to three-dimensional scene mapping using NeRF on an AMD GPU

This tutorial explains the fundamentals of NeRF and its implementation in PyTorch. The code used in this tutorial is inspired by Mason McGough’s Colab notebook and is implemented on an AMD GPU.
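One of those fundamentals is the positional encoding applied to sampled ray coordinates; a minimal PyTorch version (illustrative, not code from the notebook) looks like this:

```python
import torch

def positional_encoding(x: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    """Map each coordinate to [sin(2^k x), cos(2^k x)] for k = 0..num_freqs-1,
    which lets the NeRF MLP represent high-frequency scene detail."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype, device=x.device)
    angles = x[..., None] * freqs                                 # (..., dims, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                              # (..., dims * 2 * num_freqs)

# Example: encode a batch of 3D sample points.
points = torch.rand(1024, 3)
print(positional_encoding(points).shape)  # torch.Size([1024, 60])
```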

Read more ...


Pre-training BERT using Hugging Face & TensorFlow on an AMD GPU

This blog explains an end-to-end process for pre-training the Bidirectional Encoder Representations from Transformers (BERT) base model from scratch, using Hugging Face libraries with a TensorFlow backend, on an English text corpus (WikiText-103-raw-v1).
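A hedged outline of such a pipeline with a TensorFlow backend might look like the sketch below; the dataset split, pretrained tokenizer (standing in for one trained from scratch), and hyperparameters are placeholders, not the post's settings.

```python
import tensorflow as tf
from datasets import load_dataset
from transformers import (BertConfig, BertTokenizerFast, TFBertForMaskedLM,
                          DataCollatorForLanguageModeling)

# WikiText-103-raw-v1 as the English corpus.
dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenized = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                        batched=True, remove_columns=["text"])

model = TFBertForMaskedLM(BertConfig())  # randomly initialized BERT-base
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15,
                                           return_tensors="np")
train_ds = model.prepare_tf_dataset(tokenized, batch_size=16, shuffle=True,
                                    collate_fn=collator)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4))  # the model's built-in MLM loss is used
model.fit(train_ds, epochs=1)
```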

Read more ...


Pre-training BERT using Hugging Face & PyTorch on an AMD GPU

This blog explains an end-to-end process for pre-training the Bidirectional Encoder Representations from Transformers (BERT) base model from scratch, using Hugging Face libraries with a PyTorch backend, on an English text corpus (WikiText-103-raw-v1).
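A comparable hedged sketch with a PyTorch backend uses the Hugging Face Trainer for the masked-language-modeling objective (again with placeholder hyperparameters rather than the post's):

```python
from datasets import load_dataset
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # stand-in for a from-scratch tokenizer
tokenized = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                        batched=True, remove_columns=["text"])

model = BertForMaskedLM(BertConfig())  # randomly initialized BERT-base
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-mlm", per_device_train_batch_size=16, max_steps=1000),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()  # runs on the available GPU (e.g. an AMD GPU with a ROCm build of PyTorch)
```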

Read more ...