It's Going To Take More Time To Get Vega Compute Support With The Mainline Kernel
This weekend I wrote about how the AMDKFD discrete GPU support should be in place for the next kernel cycle, Linux 4.17. This will allow discrete Radeon GPUs to have ROCm working off the mainline kernel for OpenCL/compute support, but for 4.17 it's unlikely that RX Vega GPUs will have working compute support.
Following that article this weekend, AMD's Felix Kuehling confirmed on the mailing list they are up for the challenge of trying to get the AMDKFD dGPU support ironed out in decent shape for Linux 4.17. "Yes, sounds great. I think I should be able to get userptr support done in time for 4.17, so that should get it into pretty good shape for running ROCm on an upstream kernel on Fiji and Polaris GPUs."
But as you can see, only Fiji and Polaris GPUs are mentioned for the Linux 4.17 target. Curious, I asked about the situation. The explanation is that Fiji (R9 Fury series) and Polaris (RX 400/500 series) are where they have been focusing their testing and will be the generations with the best support. Older hardware like Hawaii and Tonga should "more or less" work, but there is the possibility of corner-case issues with those GPUs being tested less at the moment.
Unfortunately, the latest-generation Vega GPUs will take longer before they are expected to have ROCm working off the mainline kernel. He explained that Vega "requires some significant changes to common code: 64-bit doorbells, different PM4 packet formats, different ways of allocating doorbells to queues due to engine-specific doorbell routing."
But the good news is that they are committed to getting that mainline ROCm code working for Vega 10, and as part of that effort they will be working on a memory manager for Vega based upon the Heterogeneous Memory Management (HMM) code that was recently added to the mainline Linux kernel.