Posts tagged Memory
Reading AMDGCN ISA
- 13 May 2024
For an application developer, it is often helpful to read the Instruction Set Architecture (ISA) for the GPU architecture used to perform the application's computations. Understanding the instructions in the code regions of interest can help with debugging and with optimizing the application's performance.
C++17 parallel algorithms and HIPSTDPAR
- 18 April 2024
The C++17 standard added the concept of parallel algorithms to the pre-existing C++ Standard Library. The parallel versions of algorithms like std::transform maintain the same signature as the regular serial versions, except for the addition of an extra parameter specifying the execution policy to use. This flexibility allows users who are already using the C++ Standard Library algorithms to take advantage of multi-core architectures by introducing only minimal changes to their code.
Affinity part 2 - System topology and controlling affinity
- 16 April 2024
In Part 1 of the Affinity blog series, we looked at the importance of setting affinity for High Performance Computing (HPC) workloads. In this blog post, our goals are the following:
Affinity part 1 - Affinity, placement, and order
- 16 April 2024
Modern hardware architectures are increasingly complex, with multiple sockets, many cores in each Central Processing Unit (CPU), Graphics Processing Units (GPUs), memory controllers, Network Interface Cards (NICs), etc. Peripherals such as GPUs or memory controllers will often be local to a CPU socket. Such designs present interesting challenges in optimizing memory access times, data transfer times, etc. Depending on how the system is built, how its hardware components are connected, and the workload being run, it may be advantageous to use the resources of the system in a specific way. In this article, we will discuss the role of affinity, placement, and order in improving performance for High Performance Computing (HPC) workloads. A short case study is also presented to familiarize you with performance considerations on a node in the Frontier supercomputer. In a follow-up article, we also aim to equip you with the tools you need to understand your system's hardware topology and set up affinity for your application accordingly.
Sparse matrix vector multiplication - part 1
- 03 November 2023
Note: This blog was previously part of the AMD lab notes blog series.
Finite difference method - Laplacian part 4
- 18 July 2023
Note: This blog was previously part of the AMD lab notes blog series.
Register pressure in AMD CDNA™2 GPUs
- 17 May 2023
Note: This blog was previously part of the AMD lab notes blog series.
Finite difference method - Laplacian part 3
- 11 May 2023
Note: This blog was previously part of the AMD lab notes blog series.
Introduction to profiling tools for AMD hardware
- 12 April 2023
Note: This blog was previously part of the AMD lab notes blog series.
AMD Instinct™ MI200 GPU memory space overview
- 09 March 2023
Note: This blog was previously part of the AMD lab notes blog series.
Finite difference method - Laplacian part 2
- 04 January 2023
Note: This blog was previously part of the AMD lab notes blog series.
AMD matrix cores
- 14 November 2022
Note: This blog was previously part of the AMD lab notes blog series.