AMD Ryzen 7 2700X Linux Performance Boosted By Updated BIOS/AGESA


  • #41
    Originally posted by shmerl View Post
    What exactly changed though?
    It looks like power management was causing the Raven Ridge GPU to downclock to 200 MHz in certain situations, which led to stuttering in some games.

    Perhaps something similar impacted the CPU side. But maybe it was something completely different. Maybe working on Spectre mitigations meant they didn't have time for the usual rounds of optimization before release, and they had to push it out with suboptimal cache timings.

    We're left guessing because it's distributed as a binary blob and there's no commercial benefit to AMD to change that or issue more details. :-/
    Last edited by GreenReaper; 25 April 2018, 10:08 AM.


    • #42
      Originally posted by trivialfis
      Hi all. I want to assemble a workstation primarily for running machine learning tasks and get rid of Nvidia toolchains. Do you have any suggestions for a list of hardware I can buy? It's just a personal device; an industrial-level GPU is not planned.
      Seems like the ROCm stack should be easier to get running with a Polaris GPU, which is also a fair bit cheaper than Vega.

      You should read up on some instructions for enabling ROCm support and building OpenMI on your distro + machine learning framework of choice, before you pull the trigger on any hardware purchases. In many ways, the hardware is the easy part.
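
A minimal sketch of that pre-purchase sanity check, in Python. The paths assume the standard /opt/rocm install location; the module name `amdgpu` is the usual kernel driver for these GPUs, but verify against your distro's docs.

```python
# Hedged sketch: does this machine's software side look ready for ROCm?
# Assumes the conventional /opt/rocm install prefix and the amdgpu driver.
import os
import shutil

def _module_loaded(name):
    """Check /proc/modules for a loaded kernel module (Linux only)."""
    try:
        with open("/proc/modules") as f:
            return any(line.startswith(name + " ") for line in f)
    except OSError:
        return False

def rocm_checklist():
    """Return a dict of named checks -> pass/fail booleans."""
    return {
        "rocminfo available": (
            shutil.which("rocminfo") is not None
            or os.path.exists("/opt/rocm/bin/rocminfo")
        ),
        "amdgpu kernel module loaded": _module_loaded("amdgpu"),
    }

for name, ok in rocm_checklist().items():
    print(("OK  " if ok else "MISS"), name)
```

Running this on the target distro before buying a card tells you whether the runtime side is even discoverable; the actual framework build is a separate step.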

      Also, go for the big-memory version of whatever GPU you get. For instance, don't get an RX 580 with only 4 GB; go for the 8 GB version. More memory means bigger batches, which improves performance.
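
The memory-vs-batch-size tradeoff can be put in back-of-envelope terms. The per-sample cost and overhead below are illustrative assumptions, not measurements:

```python
# Rough estimate: how many samples fit per batch, given GPU memory.
# per_sample_mb and overhead_gb are assumed numbers for illustration.

def max_batch_size(gpu_mem_gb, per_sample_mb, overhead_gb=1.0):
    """Largest batch that fits in GPU memory.

    per_sample_mb: approximate activations + gradients per sample (assumed).
    overhead_gb: memory reserved for weights, optimizer state, and the runtime.
    """
    free_mb = (gpu_mem_gb - overhead_gb) * 1024
    return max(0, int(free_mb // per_sample_mb))

# An 8 GB RX 580 vs. the 4 GB variant, assuming ~50 MB per sample:
print(max_batch_size(8, 50))  # 143
print(max_batch_size(4, 50))  # 61
```

With those assumed numbers, the 8 GB card fits well over twice the batch of the 4 GB one, since the fixed overhead eats proportionally more of the smaller card.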


      • #43
        Originally posted by MaxToTheMax View Post
        You can, however, use OpenCL on NVIDIA hardware and avoid a dependency on CUDA at least.
        I've run Caffe/OpenCL on Intel GPUs. It's not in the same ballpark as a discrete GPU, but still better than CPU training.

        BTW, I don't see the point of buying an Nvidia GPU and then circumventing the optimized CUDA/cuDNN codepath. I think the point was to avoid paying money to Nvidia. All deep learning frameworks already have excellent Nvidia support, so you're not affecting that.
        Last edited by coder; 26 April 2018, 01:55 AM.


        • #44
          One reason might be to use the best hardware currently, or that which is now available to you, but not to constrain a future change in hardware.


          • #45
            Originally posted by GreenReaper View Post
            One reason might be to use the best hardware currently, or that which is now available to you, but not to constrain a future change in hardware.
            Sure, if you're writing new code.

            But someone who's just getting involved with deep learning will surely use an existing framework, in which case they might as well use a build that has optimal support for their hardware. If using Nvidia, that means cuBLAS and cuDNN. If using AMD, that means OpenMI. If using Intel... I've not kept up with their efforts, but they presumably have a comparable set of libraries. Even if you can use the OpenCL backend on all of these, it won't be as well optimized.

            Then, if you change hardware, just use a different backend; your own code should be unaffected. Deep learning frameworks serve the same portability role as APIs like OpenCL, but at a higher level. The only catch is picking a framework with good support for whatever hardware you have or might want to use, and that's a moving target.
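
The "swap the backend, keep your code" idea can be sketched as a thin dispatch layer. All names here are hypothetical; real frameworks do this internally:

```python
# Minimal sketch of the portability argument: user code talks to one API,
# and the backend is swapped underneath. Both backends here are stand-ins
# computing elementwise products in plain Python.

BACKENDS = {
    "cuda":   lambda x, y: [a * b for a, b in zip(x, y)],  # would call cuDNN
    "opencl": lambda x, y: [a * b for a, b in zip(x, y)],  # would call an OpenCL kernel
}

def multiply(x, y, backend="cuda"):
    """User-facing op: identical results regardless of backend."""
    return BACKENDS[backend](x, y)

# Switching hardware = switching one string; user code is unchanged.
print(multiply([1, 2], [3, 4], backend="cuda"))    # [3, 8]
print(multiply([1, 2], [3, 4], backend="opencl"))  # [3, 8]
```

The user-facing call never changes; only the string naming the backend does, which is exactly the framework-level portability being described.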


            • #46
              I assume it set tighter memory timings.


              • #47
                From a quick googling, it seems like 0505 did that too? If so, I guess it's some other tweak.