OpenCL is short for "Open Computing Language". It is a programming framework that can be used across a variety of platforms, primarily for accelerated computing. Because of this versatility across multiple platforms, it is mostly referred to as a cross-platform computing language. You can write a program with OpenCL and run it on a range of devices such as CPUs, GPUs and FPGAs.
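To give a feel for what OpenCL code looks like, here is a tiny illustrative kernel that adds two vectors (a sketch for illustration only, not part of the setup steps below; the file name is hypothetical). The same kernel source can be compiled at run time by the OpenCL driver of whichever device you target:
/* vector_add.cl - an illustrative OpenCL C kernel.
   Each work item adds one pair of elements; the host program,
   not shown here, selects the device and launches the kernel. */
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *result)
{
    size_t i = get_global_id(0); /* index of this work item */
    result[i] = a[i] + b[i];
}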
In this guide, I will focus only on GPUs. I have been working with both NVIDIA and AMD GPUs, and I'll show you how to get them working with OpenCL in the easiest possible manner.
I used Ubuntu for the host system, but the Docker portion is applicable to every other Linux distribution as well.
Prerequisites
- NVIDIA/AMD graphics card
- Ubuntu Linux 20.04.2 LTS desktop/server 64-bit
- Docker (for application-specific usage)
With that out of the way, let's get into the details!
Setting up OpenCL for NVIDIA GPUs
First, I'll show how to make sure OpenCL works on your main Ubuntu desktop/server. Once that is done, I'll show how to run a Docker container for the same purpose on an NVIDIA GPU.
On a fresh Ubuntu system, you first need to install the proprietary NVIDIA driver and CUDA. The latter ensures that the OpenCL framework comes bundled along with it. Finally, you install clinfo, a program that verifies OpenCL is installed correctly and reports the OpenCL specification of your NVIDIA GPU in detail. Let's see how:
Use the ubuntu-drivers devices command to get the name of the recommended driver:
$ ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001C8Csv00001025sd00001265bc03sc00i00
vendor : NVIDIA Corporation
model : GP107M [GeForce GTX 1050 Ti Mobile]
driver : nvidia-driver-460 - distro non-free recommended
driver : nvidia-driver-418-server - distro non-free
driver : nvidia-driver-390 - distro non-free
driver : nvidia-driver-450-server - distro non-free
driver : nvidia-driver-465 - distro non-free
driver : nvidia-driver-460-server - distro non-free
driver : xserver-xorg-video-nouveau - distro free builtin
Notice above that the recommended driver is nvidia-driver-460.
Now, let's install the recommended driver along with CUDA and the clinfo package mentioned earlier in this section:
sudo apt install nvidia-driver-460 nvidia-cuda-toolkit clinfo
Once all three packages above are installed, reboot your Ubuntu desktop/server. After the reboot, run clinfo to check the OpenCL configuration:
$ clinfo
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 9.1.84
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer
Platform Extensions function suffix NV
Platform Name NVIDIA CUDA
Number of devices 1
Device Name GeForce GTX 1050 Ti
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 390.143
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Topology (NV) PCI-E, 01:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 6
Max clock frequency 1620MHz
Compute Capability (NV) 6.1
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Preferred work group size multiple 32
Warp size (NV) 32
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 4236312576 (3.945GiB)
Error Correction support No
Max memory allocation 1059078144 (1010MiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Global Memory cache type Read/Write
Global Memory cache size 98304 (96KiB)
Global Memory cache line size 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 16384x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 16
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max number of constant args 9
Max constant buffer size 65536 (64KiB)
Max size of kernel argument 4352 (4.25KiB)
Queue properties
Out-of-order execution Yes
Profiling Yes
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Kernel execution timeout (NV) Yes
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 2
printf() buffer size 1048576 (1024KiB)
Built-in kernels
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) NVIDIA CUDA
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) Success [NV]
clCreateContext(NULL, ...) [default] Success [NV]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) Invalid device type for platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.2.11
ICD loader Profile OpenCL 2.1
Note that the Platform Name here only reads "NVIDIA CUDA", even though CUDA and OpenCL are different from each other.
That's it! You can now run OpenCL applications with your NVIDIA GPU on your host system!
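If you also want a quick programmatic check in addition to clinfo, a minimal host program along the following lines should work. This is only a sketch: it assumes the OpenCL headers and the ICD loader development library are available (for example via the opencl-headers and ocl-icd-opencl-dev packages), and the file name is just an example. Compile it with gcc check_opencl.c -lOpenCL -o check_opencl:
/* check_opencl.c - a minimal sketch that lists OpenCL platforms
   and GPU devices, similar in spirit to clinfo. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* Ask the ICD loader for up to 8 platforms */
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }

    for (cl_uint i = 0; i < num_platforms; i++) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", i, name);

        /* List the GPU devices exposed by this platform */
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint j = 0; j < num_devices; j++) {
            char dev_name[256];
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME, sizeof(dev_name), dev_name, NULL);
            printf("  GPU %u: %s\n", j, dev_name);
        }
    }
    return 0;
}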
OpenCL on Docker for NVIDIA GPUs
Now that OpenCL is up and running on the bare-metal system, let's see how to set it up inside a Docker container!
Install the NVIDIA Container Runtime
Here, you additionally need to install the nvidia-container-runtime package.
To be able to install it, you first need to add the repository details. Make sure Curl is installed on your system if it isn't already:
sudo apt install curl
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt update
sudo apt install nvidia-container-runtime
Creating the Dockerfile
You need to replicate everything you did on the host system into a new image, so that you can launch your custom OpenCL applications in a container (more on that later).
Create a new directory for your NVIDIA GPU OpenCL project and move into it:
mkdir nvidia-opencl
cd nvidia-opencl
Use your favorite text editor (Vim/Nano or any other) to create the following Dockerfile and save it:
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y upgrade \
&& apt-get install -y \
apt-utils \
unzip \
tar \
curl \
xz-utils \
ocl-icd-libopencl1 \
opencl-headers \
clinfo \
;
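# Register the NVIDIA OpenCL ICD so the ICD loader can find the driver library injected by the NVIDIA container runtime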
RUN mkdir -p /etc/OpenCL/vendors && \
echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
Building the Dockerfile
So now that you have the necessary Dockerfile to get started, let's build it. I'm naming the image nvidia-opencl:
docker build -t nvidia-opencl .
Launch the OpenCL Container
Based on the new image that you just built, it's time to launch the new OpenCL container!
First, permit your Linux username on the local machine to connect to the X windows display with the following command:
xhost +local:username
With the following command, you can now directly enter the local container's shell based on the new image you just created:
docker run --rm -it --gpus all -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvidia-opencl
Verify your OpenCL configuration on Docker
Now that you are inside the container shell, you can run the clinfo command to verify your OpenCL configuration just like you did on the bare-metal host system:
# clinfo
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 9.1.84
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer
Platform Extensions function suffix NV
Platform Name NVIDIA CUDA
Number of devices 1
Device Name GeForce GTX 1050 Ti
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 390.143
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Topology (NV) PCI-E, 01:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 6
Max clock frequency 1620MHz
Compute Capability (NV) 6.1
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Preferred work group size multiple 32
Warp size (NV) 32
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 4236312576 (3.945GiB)
Error Correction support No
Max memory allocation 1059078144 (1010MiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Global Memory cache type Read/Write
Global Memory cache size 98304 (96KiB)
Global Memory cache line size 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 16384x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 16
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max number of constant args 9
Max constant buffer size 65536 (64KiB)
Max size of kernel argument 4352 (4.25KiB)
Queue properties
Out-of-order execution Yes
Profiling Yes
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Kernel execution timeout (NV) Yes
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 2
printf() buffer size 1048576 (1024KiB)
Built-in kernels (n/a)
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) NVIDIA CUDA
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) Success [NV]
clCreateContext(NULL, ...) [default] Success [NV]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) Invalid device type for platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.2.11
ICD loader Profile OpenCL 2.1
#
So what does this mean? It means you can run any OpenCL application of your choice from within this container. All you have to do is adapt the Dockerfile accordingly, and that's it.
You can also work with Python applications that require an OpenCL backend. Check out my earlier coverage on that topic, which can serve as a very handy companion to this article, and have fun playing with the Dockerfiles.
Setting up OpenCL for AMD GPUs
First, I'll show how to make sure OpenCL works on your main Ubuntu desktop/server. Once that is done, I'll show how to run a Docker container for the same purpose on an AMD GPU.
On a fresh Ubuntu system, you first need to download the AMDGPU driver from the AMD support page. For a future-proof configuration, once you have obtained the installation archive (tar.xz), you only need to install OpenCL for both legacy and newer AMD GPUs.
Finally, you install clinfo, the program that verifies OpenCL is installed correctly and reports the OpenCL specification of your AMD GPU in detail. The whole process, however, can be a bit trickier than expected. Let's see how.
Navigate to the AMD support page and download the relevant driver with Curl. Make sure Curl is installed:
sudo apt install curl
curl -e https://drivers.amd.com/drivers/linux -O https://drivers.amd.com/drivers/linux/amdgpu-pro-21.10-1247438-ubuntu-20.04.tar.xz
Installation, anomalies and their workarounds
Extract the archive:
tar -Jxvf amdgpu-pro-21.10-1247438-ubuntu-20.04.tar.xz
Move into the new directory:
cd amdgpu-pro-21.10-1247438-ubuntu-20.04
Now, install OpenCL for both legacy and newer GPUs:
./amdgpu-install --opencl=legacy,rocr --headless --no-dkms
For a complete overview of its usage, you can run ./amdgpu-install -h to learn how the script basically works, much like the command's man entry. The --headless option specifies OpenCL support only, while --no-dkms tells it not to install the amdgpu-dkms and amdgpu-dkms-firmware packages into the kernel. You do not need them.
For quite some time now, even when you specify the --no-dkms option, the script does not bother to comply and proceeds to install these unnecessary packages anyway. To add to that, if you let amdgpu-dkms install and modify the kernel configuration, the system can refuse to reboot or shut down afterwards. This happened to me after receiving a kernel update from the Ubuntu repositories.
In such a scenario, here is what I did:
I manually installed the following packages, present inside the extracted directory, with dpkg -i package-name.deb in the order listed below:
amdgpu-pin_21.10-1247438_all.deb
amdgpu-core_21.10-1247438_all.deb
amdgpu-pro-core_21.10-1247438_all.deb
libdrm-amdgpu-common_1.0.0-1247438_all.deb
libdrm2-amdgpu_2.4.100-1247438_amd64.deb
libdrm-amdgpu-amdgpu1_2.4.100-1247438_amd64.deb
hsakmt-roct-amdgpu_1.0.9-1247438_amd64.deb
hsa-runtime-rocr-amdgpu_1.3.0-1247438_amd64.deb
comgr-amdgpu-pro_2.0.0-1247438_amd64.deb
hip-rocr-amdgpu-pro_21.10-1247438_amd64.deb
ocl-icd-libopencl1-amdgpu-pro_21.10-1247438_amd64.deb
clinfo-amdgpu-pro_21.10-1247438_amd64.deb
opencl-rocr-amdgpu-pro_21.10-1247438_amd64.deb
libllvm11.0-amdgpu_11.0-1247438_amd64.deb
This way, you can bypass amdgpu-dkms and amdgpu-dkms-firmware and leave the kernel untouched. Also note that I downloaded the older 21.10 driver even though a newer 21.30 version is available. The reason is that the latter fails to recognize the Radeon VII GPU and shows an "HSA Error" when you run clinfo later:
HSA Error: Incompatible kernel and userspace, Vega 20 [Radeon VII] disabled. Upgrade amdgpu.
After addressing these anomalies, I was able to get clinfo to report the GPU correctly.
Install the clinfo package
Install the clinfo package just like you did earlier for the NVIDIA GPU:
sudo apt install clinfo
$ clinfo
Number of platforms 1
Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.
Platform Version OpenCL 2.0 AMD-APP (3246.0)
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_amd_event_callback
Platform Extensions function suffix AMD
Platform Name AMD Accelerated Parallel Processing
Number of devices 1
Device Name gfx906:sramecc-:xnack-
Device Vendor Advanced Micro Devices, Inc.
Device Vendor ID 0x1002
Device Version OpenCL 2.0
Driver Version 3246.0 (HSA1.1,LC)
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Board Name (AMD) Vega 20 [Radeon VII]
Device Topology (AMD) PCI-E, 0a:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 60
SIMD per compute unit (AMD) 4
SIMD width (AMD) 16
SIMD instruction width (AMD) 1
Max clock frequency 1801MHz
Graphics IP (AMD) 9.0
Device Partition (core)
Max number of sub-devices 60
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 1024x1024x1024
Max work group size 256
Preferred work group size (AMD) 256
Max work group size (AMD) 1024
Preferred work group size multiple 64
Wavefront width (AMD) 64
Preferred / native vector sizes
char 4 / 4
short 2 / 2
int 1 / 1
long 1 / 1
half 1 / 1 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals No
Infinity and NANs No
Round to nearest No
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 17163091968 (15.98GiB)
Global free memory (AMD) 16760832 (15.98GiB)
Global memory channels (AMD) 128
Global memory banks per channel (AMD) 4
Global memory bank width (AMD) 256 bytes
Error Correction support No
Max memory allocation 14588628168 (13.59GiB)
Unified memory for Host and Device No
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 0 bytes
Global 0 bytes
Local 0 bytes
Max size for global variable 14588628168 (13.59GiB)
Preferred total size of global vars 17163091968 (15.98GiB)
Global Memory cache type Read/Write
Global Memory cache size 16384 (16KiB)
Global Memory cache line size 64 bytes
Image support Yes
Max number of samplers per kernel 26287
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 8192 images
Base address alignment for 2D image buffers 256 bytes
Pitch alignment for 2D image buffers 256 pixels
Max 2D image size 16384x16384 pixels
Max 3D image size 16384x16384x8192 pixels
Max number of read image args 128
Max number of write image args 8
Max number of read/write image args 64
Max number of pipe args 16
Max active pipe reservations 16
Max pipe packet size 1703726280 (1.587GiB)
Local memory type Local
Local memory size 65536 (64KiB)
Local memory syze per CU (AMD) 65536 (64KiB)
Local memory banks (AMD) 32
Max number of constant args 8
Max constant buffer size 14588628168 (13.59GiB)
Preferred constant buffer size (AMD) 16384 (16KiB)
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution No
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 262144 (256KiB)
Max size 8388608 (8MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Number of P2P devices (AMD) 0
P2P devices (AMD) <printDeviceInfo:147: get number of CL_DEVICE_P2P_DEVICES_AMD : error -30>
Profiling timer resolution 1ns
Profiling timer offset since Epoch (AMD) 0ns (Thu Jan 1 05:30:00 1970)
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Thread trace supported (AMD) No
Number of async queues (AMD) 8
Max real-time compute queues (AMD) 8
Max real-time compute units (AMD) 60
printf() buffer size 4194304 (4MiB)
Built-in kernels (n/a)
Device Extensions cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [AMD]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx906:sramecc-:xnack-
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx906:sramecc-:xnack-
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx906:sramecc-:xnack-
So, now you can run OpenCL applications with your AMD GPU on your host system!
OpenCL on Docker for AMD GPUs
How about doing the same through Docker containers? Let's see how it contrasts with the NVIDIA GPU workflow.
Creating the Dockerfile
Create a new directory for your AMD GPU OpenCL project and move into it:
mkdir amd-opencl
cd amd-opencl
Use your favorite text editor (Vim/Nano or any other) to create the following Dockerfile and save it:
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y upgrade \
&& apt-get install -y \
initramfs-tools \
apt-utils \
unzip \
tar \
curl \
xz-utils \
ocl-icd-libopencl1 \
opencl-headers \
clinfo \
;
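# Download the AMD driver archive and run its installer for the OpenCL components (headless, without DKMS)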
ARG AMD_DRIVER=amdgpu-pro-21.10-1247438-ubuntu-20.04.tar.xz
ARG AMD_DRIVER_URL=https://drivers.amd.com/drivers/linux
RUN mkdir -p /tmp/opencl-driver-amd
WORKDIR /tmp/opencl-driver-amd
RUN curl --referer $AMD_DRIVER_URL -O $AMD_DRIVER_URL/$AMD_DRIVER; \
tar -Jxvf $AMD_DRIVER; \
cd amdgpu-pro-*; \
./amdgpu-install --opencl=legacy,rocr --headless --no-dkms -y; \
rm -rf /tmp/opencl-driver-amd;
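# Register the AMD OpenCL ICD with the OpenCL ICD loader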
RUN mkdir -p /etc/OpenCL/vendors && \
echo "libamdocl64.so" > /etc/OpenCL/vendors/amdocl64.icd
RUN ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1 /usr/lib/libOpenCL.so
WORKDIR /
I had to add the initramfs-tools package since the amdgpu-dkms and amdgpu-dkms-firmware packages would still get installed. I kept it this way because, in this case, the reboot and shutdown issues I mentioned earlier are irrelevant for containers.
Alternatively, you could still use the dpkg -i method in the Dockerfile.
Building the Dockerfile
So now that you have the necessary Dockerfile to get started, let's build it. I'm naming the image amd-opencl:
docker build -t amd-opencl .
Add your username to the video & render groups
For the AMD GPU Docker container to work flawlessly, it is better to also add your username to the video and render groups:
sudo usermod -a -G video $LOGNAME
sudo usermod -a -G render $LOGNAME
Launch the OpenCL Container
Based on the new image that you just built, it's time to launch the new OpenCL container!
Permit your Linux username on the local machine to connect to the X windows display with the following command:
xhost +local:username
With the following command, you can now directly enter the local container's shell based on the new image just created:
docker run --rm -it --device=/dev/kfd --device=/dev/dri --group-add video --group-add render -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY amd-opencl
Verify your OpenCL configuration on Docker
Now that you are inside the container shell, you can run the clinfo command to verify your OpenCL configuration just like you did on the bare-metal host system:
# clinfo
Platform Name AMD Accelerated Parallel Processing
Number of devices 1
Device Name gfx906:sramecc-:xnack-
Device Vendor Advanced Micro Devices, Inc.
Device Vendor ID 0x1002
Device Version OpenCL 2.0
Driver Version 3246.0 (HSA1.1,LC)
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Board Name (AMD) Device 66af
Device Topology (AMD) PCI-E, 0a:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 60
SIMD per compute unit (AMD) 4
SIMD width (AMD) 16
SIMD instruction width (AMD) 1
Max clock frequency 1801MHz
Graphics IP (AMD) 9.0
Device Partition (core)
Max number of sub-devices 60
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 1024x1024x1024
Max work group size 256
Preferred work group size (AMD) 256
Max work group size (AMD) 1024
Preferred work group size multiple 64
Wavefront width (AMD) 64
Preferred / native vector sizes
char 4 / 4
short 2 / 2
int 1 / 1
long 1 / 1
half 1 / 1 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals No
Infinity and NANs No
Round to nearest No
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 17163091968 (15.98GiB)
Global free memory (AMD) 16760832 (15.98GiB)
Global memory channels (AMD) 128
Global memory banks per channel (AMD) 4
Global memory bank width (AMD) 256 bytes
Error Correction support No
Max memory allocation 14588628168 (13.59GiB)
Unified memory for Host and Device No
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 0 bytes
Global 0 bytes
Local 0 bytes
Max size for global variable 14588628168 (13.59GiB)
Preferred total size of global vars 17163091968 (15.98GiB)
Global Memory cache type Read/Write
Global Memory cache size 16384 (16KiB)
Global Memory cache line size 64 bytes
Image support Yes
Max number of samplers per kernel 26287
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 8192 images
Base address alignment for 2D image buffers 256 bytes
Pitch alignment for 2D image buffers 256 pixels
Max 2D image size 16384x16384 pixels
Max 3D image size 16384x16384x8192 pixels
Max number of read image args 128
Max number of write image args 8
Max number of read/write image args 64
Max number of pipe args 16
Max active pipe reservations 16
Max pipe packet size 1703726280 (1.587GiB)
Local memory type Local
Local memory size 65536 (64KiB)
Local memory syze per CU (AMD) 65536 (64KiB)
Local memory banks (AMD) 32
Max number of constant args 8
Max constant buffer size 14588628168 (13.59GiB)
Preferred constant buffer size (AMD) 16384 (16KiB)
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution No
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 262144 (256KiB)
Max size 8388608 (8MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Number of P2P devices (AMD) 0
P2P devices (AMD) <printDeviceInfo:147: get number of CL_DEVICE_P2P_DEVICES_AMD : error -30>
Profiling timer resolution 1ns
Profiling timer offset since Epoch (AMD) 0ns (Thu Jan 1 00:00:00 1970)
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Thread trace supported (AMD) No
Number of async queues (AMD) 8
Max real-time compute queues (AMD) 8
Max real-time compute units (AMD) 60
printf() buffer size 4194304 (4MiB)
Built-in kernels (n/a)
Device Extensions cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [AMD]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx906:sramecc-:xnack-
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx906:sramecc-:xnack-
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx906:sramecc-:xnack-
#
And that's how you can run OpenCL applications inside an AMD GPU container!
Note that the xhost command used for both the NVIDIA and AMD GPU containers is necessary every time you want to run them from a new terminal.
Bonus Tips
If you happen to own multiple GPUs on a single system and want to be specific about running the containers, you can do that as well. Read on.
NVIDIA GPUs
Based on how clinfo reports NVIDIA GPU information, the GPUs are identified on Docker as 0, 1, 2 and so on. So, say you have three NVIDIA GPUs and want the container to see only GPU 0 (the first one), the corresponding command would have to be revised as:
docker run --rm -it --gpus 0 -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvidia-opencl
AMD GPUs
Similarly, based on how clinfo reports AMD GPU information, the GPUs are identified on Docker as /dev/dri/card0, /dev/dri/card1, /dev/dri/card2 and so on. So, say you have three AMD GPUs and want the container to see only the first, use the following command instead:
docker run --rm -it --device=/dev/kfd --device=/dev/dri/card0 --device=/dev/dri/renderD128 --group-add video --group-add render -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY amd-opencl
As per the above command, note that renderD128 corresponds to card0, both of which relate to the first AMD GPU. Along the same lines, renderD129 would correspond to card1 for the second AMD GPU, and so on. The "renderD" value is incremental, so for the third GPU it would be renderD130, corresponding to card2. You can look up these mappings in detail by running the ls -l /dev/dri/by-path command.
Personal notes
For the last seven years, I've been actively involved in research that focuses on harnessing the computational power of Graphics Processing Units (GPUs) to understand biological phenomena.
For more than a year now, I've been working on Dockerizing CellModeller, the primary research software I work with to study multicellularity, while also contributing to its development.
Even though the AMD GPU containerization process can be a bit tedious and tricky, I still liked the way it works without the additional runtime package that NVIDIA GPU containers require.
For the entire endeavour, the following references were extremely helpful:
Congleton, N., 2020. Install OpenCL For The AMDGPU Open Source Drivers On Debian and Ubuntu. [online] LinuxConfig.org. Available at: https://linuxconfig.org/install-opencl-for-the-amdgpu-open-source-drivers-on-debian-and-ubuntu [Accessed 23 June 2021].
My heartfelt thanks to all three authors!
There are so many applications out there in the accelerated computing domain that need OpenCL running on the backend for both GPU vendors. One good example is Folding@home and its specific GPU requirements.
Do let me know your thoughts about this intriguing adventure with OpenCL, GPUs, Linux and finally, Docker! If you have any feedback or suggestions, please let me know in the comment section below.