Some More Radeon Vega Frontier Edition Linux ROCm OpenCL Benchmarks


    Phoronix: Some More Radeon Vega Frontier Edition Linux ROCm OpenCL Benchmarks

    A Phoronix reader allowed me to access his Radeon Vega Frontier Edition system when checking on the ROCm OpenCL benchmark and uploaded the data to OpenBenchmarking.org...

    http://www.phoronix.com/scan.php?pag...FE-More-OpenCL

  • #2
    Did anyone try Vega FE on Arch yet?

  • #3
    2MB page support is already in amd-staging and targeting 4.14 for upstream.
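For readers wondering where their own machine stands, a minimal sketch (the 4.14 cutoff is from the comment above; this only compares version numbers and cannot confirm whether the amd-staging patches themselves are present):

```shell
# Sketch: check whether the running kernel is at or past 4.14,
# the release the 2MB page support is targeting upstream.
kver=$(uname -r)
major=${kver%%.*}
rest=${kver#*.}
minor=${rest%%.*}
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 14 ]; }; then
    echo "kernel $kver: new enough for the upstream 2MB page work"
else
    echo "kernel $kver: predates 4.14; you'd need amd-staging"
fi
```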

  • #4
    Originally posted by agd5f View Post
    ... and targeting 4.14 for upstream.

    Excellent... however AFAIK it is not yet in the released ROCm or 17.30 kernels.

  • #5
    Solid results. Also, thanks AMD for giving us the opportunity to get CUDA code running on AMD cards - it really opens up the field of use.

  • #6
    I'm happy to allow Michael access to my Vega machine again in the future if there's any desire to run more tests under new kernels or drivers or whatever.

  • #7
    Here in Australia the Vega 64 is practically the same cost as the Vega FE, or at least projections and inflation strongly indicate it will be. Quite a sad case. So much for 8GB HBM2 being significantly cheaper than 16GB.

    Not that I really care; I'm not going to get a Vega now given that they are clearly a MINING card and NOT a gaming card.

              • #8
                wanted to try rocm, but it's really a PITA to build this thing, especially since you require a special kernel (currently 4.11), for which it isn't trivial to even merge the bugfix releases, nevermind even newer kernel versions (4.12).
                Also it's a huge repository which includes special versions of llvm and clang.
                Did I mention that I hate bundled packages? :/

                Would love to finally have OpenCL support in darktable, but seems like I have to wait another long time.
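As an aside on the darktable point: whether an application like darktable sees OpenCL at all comes down to which ICDs are registered with the loader. A small sketch to list them (assumes the conventional /etc/OpenCL/vendors location used by the standard ICD loader):

```shell
# Sketch: list the OpenCL ICDs registered on this system, to see whether
# a ROCm (or any other) OpenCL implementation is visible to applications.
found=0
if [ -d /etc/OpenCL/vendors ]; then
    for icd in /etc/OpenCL/vendors/*.icd; do
        [ -e "$icd" ] || continue
        found=1
        # Each .icd file names the vendor library the loader will dlopen.
        printf '%s -> %s\n' "$icd" "$(cat "$icd")"
    done
fi
[ "$found" -eq 1 ] || echo "no OpenCL ICDs registered"
```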

  • #9
    Originally posted by Berniyh View Post
    I wanted to try ROCm, but it's really a PITA to build this thing, especially since it requires a special kernel (currently 4.11), for which it isn't trivial to even merge the bugfix releases, never mind newer kernel versions (4.12).

    We are in the process of adding the Kernel Compatibility Layer and DKMS support from AMDGPU-PRO to the ROCm stack - we almost made it for 1.6 (you can see the /drivers/gpu/drm/amd/amdkcl folder) but not quite. That will allow you to install just the driver modules rather than replacing the kernel. Internal trees have now moved to 4.12, and we will be tracking upstream much more closely now that we have finished syncing up all of the driver variants (all-open, hybrid and ROCm).

    Felix is also making progress on getting the latest ROC kernel code upstreamed:

    https://lists.freedesktop.org/archiv...st/011984.html

    Originally posted by Berniyh View Post
    It's also a huge repository which includes special versions of LLVM and Clang.
    Did I mention that I hate bundled packages? :/

    We supply prebuilt binaries as well, IIRC. Or are you saying you want to build it yourself but want it to be easier?

    As long as LLVM is on a six-month release cycle and we have customers expecting faster-than-that response time and progress, we are going to have to operate out of tree, aren't we? Or is there an easier solution I am missing?
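Once the DKMS packaging described above ships, it can be verified from the user side with a short sketch (assumes the stock dkms tool; "amdgpu" as the module name is an assumption based on the current driver):

```shell
# Sketch: with DKMS-based packaging, the driver modules would appear in
# `dkms status` instead of requiring a replacement kernel.
if command -v dkms >/dev/null 2>&1; then
    dkms status | grep -i amdgpu || echo "no amdgpu DKMS module registered"
else
    echo "dkms not installed"
fi
```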

  • #10
    Originally posted by bridgman View Post
    We supply prebuilt binaries as well, IIRC. Or are you saying you want to build it yourself but want it to be easier?

    As long as LLVM is on a six-month release cycle and we have customers expecting faster-than-that response time and progress, we are going to have to operate out of tree, aren't we? Or is there an easier solution I am missing?

    I think most people would prefer at least having the option of just using the upstream libs without needing a special version. Even if it's slower/less optimized, they could always then try to grab the special version if that's an issue. Is that the plan eventually? Will LLVM 6 work OOTB?
    Last edited by smitty3268; 08-11-2017, 07:32 PM.
