AMD Ryzen 7 2700X Linux Performance Boosted By Updated BIOS/AGESA


  • aliquis
    replied
    When googling, it seems like 0505 is 1.0.0.2a too? If so, I guess it's some other tweaking.



  • aliquis
    replied
    I assume it set tighter memory timings.



  • coder
    replied
    Originally posted by GreenReaper View Post
    One reason might be to use the best hardware currently on offer, or whatever is available to you now, without constraining a future change of hardware.
    Sure, if you're writing new code.

    But someone who's just getting involved with deep learning will surely use an existing framework, in which case they might as well use a build that has optimal support for their hardware. If using Nvidia, that means cuBLAS and cuDNN. If using AMD, that means MIOpen. If using Intel... I've not kept up with their efforts, but they presumably have a comparable set of libraries. Even if you can use the OpenCL backend on all of these, it won't be as well optimized.

    Then, if you change hardware, just use a different backend. Your own code should be unaffected. Deep learning frameworks serve the same role of providing portability as APIs like OpenCL, but at a higher level. The only catch is picking a framework with good support for whatever hardware you have or might want to use, and that's a moving target.
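
    For a concrete feel of that portability, here's a minimal sketch assuming PyTorch as the framework (other frameworks work much the same way): the training code stays identical, and only the device selection changes when you swap hardware. The ROCm build of PyTorch exposes AMD GPUs through the same "cuda" device API, so even that line stays the same.

        # Minimal sketch, assuming PyTorch; the ROCm build maps AMD GPUs onto
        # the same "cuda" device API, so this code is identical across vendors.
        import torch
        import torch.nn as nn

        # Pick whatever accelerator the installed backend can see, else fall back to CPU.
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        model = nn.Linear(128, 10).to(device)              # toy model for illustration
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()

        # Dummy batch; real code would come from a DataLoader.
        x = torch.randn(32, 128, device=device)
        y = torch.randint(0, 10, (32,), device=device)

        # The training step is the same regardless of which backend executes it.
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        print(f"device={device}, loss={loss.item():.4f}")

    Swapping from an Nvidia card to an AMD card (or to CPU-only) then means installing a different build of the framework, not rewriting the model.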



  • GreenReaper
    replied
    One reason might be to use the best hardware currently on offer, or whatever is available to you now, without constraining a future change of hardware.



  • coder
    replied
    Originally posted by MaxToTheMax View Post
    You can, however, use OpenCL on NVIDIA hardware and avoid a dependency on CUDA at least.
    I've run Caffe/OpenCL on Intel GPUs. It's not in the same ballpark as a discrete GPU, but still better than CPU training.

    BTW, I don't see the point of buying an Nvidia GPU and then circumventing the optimized CUDA/cuDNN code path. I think the point was to avoid paying money to Nvidia in the first place. All deep learning frameworks already have excellent Nvidia support, so you're not affecting that.
    Last edited by coder; 26 April 2018, 01:55 AM.



  • coder
    replied
    Originally posted by trivialfis
    Hi all. I want to assemble a workstation primarily for running machine learning tasks and get rid of Nvidia toolchains. Do you have any suggestions for a list of hardware I can buy? It's just a personal device; an industrial-level GPU is not planned.
    Seems like the ROCm stack should be easier to get running with a Polaris GPU, and those are also running a fair bit cheaper than Vega.

    You should read up on the instructions for enabling ROCm support and building MIOpen on your distro + machine learning framework of choice before you pull the trigger on any hardware purchases. In many ways, the hardware is the easy part.

    Also, go for the larger-memory version of whichever GPU you get. For instance, don't get an RX 580 with only 4 GB - go for the 8 GB version. More memory = bigger batches, which improves performance.
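
    To put a rough number on the "more memory = bigger batches" point, here's a minimal sketch (assuming a PyTorch build that can actually see the GPU, whether CUDA or ROCm, and a purely hypothetical toy model): it keeps doubling the batch size until the card runs out of memory, which shows how much headroom 8 GB buys over 4 GB.

        # Minimal sketch, assuming PyTorch with a working GPU backend (CUDA or ROCm).
        # Doubles the batch size until the GPU runs out of memory.
        import torch
        import torch.nn as nn

        assert torch.cuda.is_available(), "no GPU backend detected"
        device = torch.device("cuda")
        props = torch.cuda.get_device_properties(0)
        print(f"{torch.cuda.get_device_name(0)}: {props.total_memory / 2**30:.1f} GiB VRAM")

        # Hypothetical toy model; a stand-in for whatever network you actually train.
        model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 1000)).to(device)

        batch = 32
        while True:
            try:
                x = torch.randn(batch, 4096, device=device)
                model(x).sum().backward()        # forward + backward, like a real training step
                torch.cuda.synchronize()
                print(f"batch {batch}: fits")
                batch *= 2
            except RuntimeError:                 # out-of-memory surfaces as a RuntimeError
                print(f"batch {batch}: out of memory")
                break
            finally:
                torch.cuda.empty_cache()         # release cached blocks between attempts

    On the 4 GB card the doubling will generally stop a step or two earlier than on the 8 GB card; that difference is the batch-size headroom the extra memory buys.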



  • GreenReaper
    replied
    Originally posted by shmerl View Post
    What exactly changed though?
    It looks like power management was causing the Raven Ridge GPU to downclock to 200 MHz in certain situations, leading to stuttering in some games.

    Perhaps something similar impacted the CPU side. But maybe it was something completely different. Maybe working on Spectre mitigations meant they didn't have time for the usual rounds of optimization before release, and they had to push it out with suboptimal cache timings.

    We're left guessing because it's distributed as a binary blob and there's no commercial benefit to AMD to change that or issue more details. :-/
    Last edited by GreenReaper; 25 April 2018, 10:08 AM.
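
    For anyone who wants to check whether they're actually hitting that downclock, here's a small sketch that reads the current shader-clock state from sysfs (assuming the amdgpu driver and that the iGPU is card0; the exact path can differ per system):

        # Minimal sketch, assuming the amdgpu driver exposes pp_dpm_sclk and the
        # Raven Ridge iGPU is card0; adjust the path for your system.
        import time

        SCLK = "/sys/class/drm/card0/device/pp_dpm_sclk"

        def current_sclk():
            # The file lists the available DPM states; the active one is marked with '*'.
            with open(SCLK) as f:
                for line in f:
                    if "*" in line:
                        return line.strip()
            return "unknown"

        # Poll for a bit, e.g. while a game is stuttering.
        for _ in range(10):
            print(current_sclk())
            time.sleep(1)

    If it sits pinned at the lowest state (around 200 MHz) under load, that's presumably the power-management behaviour the BIOS update addresses.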



  • Dr.Diesel
    replied
    This board uses the SupremeFX 8-Channel High Definition Audio CODEC S1220; does the audio work under recent kernels? Apparently it's got tweaked firmware and has some issues.



  • oleid
    replied
    Non-Nvidia for GPGPU on Linux? There are ports of TensorFlow and other frameworks to AMD. Not OpenCL, but their portable version of CUDA (HIP?). If I were you, I'd check their state, find benchmarks, and decide based on those.



  • Kendji
    replied
    Impressive improvements from a BIOS update.

