Posts tagged Scientific computing

Programming AMD GPUs with Julia

Julia is a high-level, general-purpose, dynamic programming language that automatically compiles to efficient native code via LLVM and supports multiple platforms. With LLVM comes support for programming GPUs, including AMD GPUs.

Read more ...


Accelerating XGBoost with Dask using multiple AMD GPUs

XGBoost is an optimized library for distributed gradient boosting and has become one of the leading machine learning libraries for regression and classification problems. For a deeper dive into how gradient boosting works, we recommend reading Introduction to Boosted Trees. A minimal sketch of the multi-GPU Dask workflow follows the link below.

Read more ...
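
The post covers training XGBoost through its Dask interface across multiple GPUs. As a rough, hedged sketch of that workflow (not code from the post): the cluster size, synthetic data, and parameters below are illustrative assumptions, and the exact GPU-related parameters depend on your XGBoost version and build.

```python
"""Minimal sketch: distributed XGBoost training with Dask across multiple GPUs."""
from dask.distributed import Client, LocalCluster
import dask.array as da
import xgboost as xgb


def main():
    # Start one Dask worker per GPU on this node (4 is an assumption here).
    # In practice each worker is pinned to a single device, e.g. via
    # HIP_VISIBLE_DEVICES on ROCm systems.
    cluster = LocalCluster(n_workers=4, threads_per_worker=1)
    client = Client(cluster)

    # Synthetic regression data, chunked so it is partitioned across workers.
    X = da.random.random((100_000, 50), chunks=(10_000, 50))
    y = da.random.random(100_000, chunks=10_000)

    # DaskDMatrix keeps the training data distributed across the workers.
    dtrain = xgb.dask.DaskDMatrix(client, X, y)

    # The GPU histogram tree method runs the boosting rounds on the devices;
    # exact parameter names vary with the XGBoost version and build.
    output = xgb.dask.train(
        client,
        {"tree_method": "gpu_hist", "objective": "reg:squarederror"},
        dtrain,
        num_boost_round=100,
    )

    booster = output["booster"]
    print("trained rounds:", booster.num_boosted_rounds())

    client.close()
    cluster.close()


if __name__ == "__main__":
    main()
```

The same pattern scales beyond a single node by pointing `Client` at a distributed Dask scheduler instead of a `LocalCluster`; the post itself goes into the AMD-GPU-specific setup.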


Sparse matrix vector multiplication - part 1

Note: This blog was previously part of the AMD lab notes blog series.

Read more ...


Jacobi Solver with HIP and OpenMP offloading

Note: This blog was previously part of the AMD lab notes blog series.

Read more ...


Finite difference method - Laplacian part 4

Note: This blog was previously part of the AMD lab notes blog series.

Read more ...


GPU-aware MPI with ROCm

Note: This blog was previously part of the AMD lab notes blog series.

Read more ...


Finite difference method - Laplacian part 3

Note: This blog was previously part of the AMD lab notes blog series.

Read more ...


Finite difference method - Laplacian part 2

Note: This blog was previously part of the AMD lab notes blog series.

Read more ...


AMD matrix cores

Note: This blog was previously part of the AMD lab notes blog series.

Read more ...