17-Way NVIDIA Binary vs. AMD Open-Source Linux 4.6 / Mesa Git Driver Tests

  • bridgman
    replied
    Originally posted by Ansla
    I guess a valid question would be whether CUDA is easier to implement than OpenCL. After all, AMD has been working on Clover for several years now, while the ROC stack was announced less than a year ago, if I remember correctly. Or was it just that AMD could invest more resources into ROC than into Clover?
    Yeah, ROC was built on the HSA stack, which had a decently sized team working on it. Clover had a somewhat smaller team, if you'll pardon the understatement.

    Moving the Catalyst OpenCL stack onto ROC and opening it up should give us the best of both worlds.

    Off the top of my head, I would say that the CUDA/HCC model is harder from a compiler POV while OpenCL is harder from a runtime POV. Something like that.
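    To illustrate the runtime side of that trade-off, here is a minimal, hypothetical OpenCL host-side sketch (not from this thread, error handling omitted): the kernel ships as source text and the driver's runtime has to find a device, build the program, and create the kernel object before anything executes, whereas in the CUDA/HCC single-source model that compilation happens offline in the compiler.

    #include <CL/cl.h>
    #include <cstdio>

    // Kernel shipped as source: the OpenCL runtime compiles it on the user's machine.
    static const char *kSource =
        "__kernel void vec_add(__global const float *a, __global const float *b,"
        "                      __global float *c) {"
        "    int i = get_global_id(0);"
        "    c[i] = a[i] + b[i];"
        "}";

    int main() {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

        // This is the part the driver's runtime has to get right: building the
        // program and producing a kernel object at run time.
        cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
        clBuildProgram(program, 1, &device, NULL, NULL, NULL);
        cl_kernel kernel = clCreateKernel(program, "vec_add", NULL);

        // ... buffer creation, clSetKernelArg and clEnqueueNDRangeKernel would follow ...
        std::printf("kernel compiled by the OpenCL runtime, not by an offline compiler\n");

        clReleaseKernel(kernel);
        clReleaseProgram(program);
        clReleaseContext(ctx);
        return 0;
    }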



  • chrisb
    replied
    Originally posted by Michael

    PTS has support for determining perf-per-dollar, but it's unfortunately difficult to do well for cards long after launch... Since there are many different cards out there for the same model, do I use the old reference price? The average current price? The lowest price? With so many variables for anything besides launch pricing, I don't tend to do it.
    IMHO a decent ballpark for old cards would be the lowest Buy It Now price on eBay for a popular "used" (i.e. working) card from a premium manufacturer of each model, or maybe the average of the n lowest. That figure would be pretty representative of the actual price most of us would pay if we bought that old card right now. You could automate it with the eBay API.
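    As a rough sketch of that pricing rule (assuming the listing prices have already been fetched, e.g. via the eBay Finding API, which is not shown here), the "average of the n lowest Buy It Now prices" figure and the resulting perf-per-dollar number could be computed like this; all prices and the benchmark score are made-up placeholders:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Average of the n lowest observed prices; a hypothetical helper, not part of PTS.
    double ballpark_price(std::vector<double> prices, std::size_t n_lowest) {
        std::sort(prices.begin(), prices.end());
        std::size_t n = std::min(n_lowest, prices.size());
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i) sum += prices[i];
        return n ? sum / n : 0.0;
    }

    int main() {
        // Placeholder "Buy It Now" listings for one card model, in USD.
        std::vector<double> listings = {104.50, 89.99, 97.00, 120.00, 92.50};
        double price = ballpark_price(listings, 3);  // average of the 3 lowest
        double score = 1234.0;                       // placeholder benchmark result
        std::printf("ballpark price: $%.2f  perf-per-dollar: %.2f\n", price, score / price);
        return 0;
    }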



  • Ansla
    replied
    I guess a valid question would be whether CUDA is easier to implement than OpenCL. After all, AMD has been working on Clover for several years now, while the ROC stack was announced less than a year ago, if I remember correctly. Or was it just that AMD could invest more resources into ROC than into Clover?



  • bridgman
    replied
    Originally posted by Qaridarium
    Dude... AMD is really doing it wrong if this is the result: "let the CUDA paths run instead".
    Really. We have an open source, open-standards solution that lets you port from CUDA to portable C++ that runs on multiple platforms today, and you're saying that's wrong? What is your rationale? Are you saying we should force people to only use OpenCL?

    Originally posted by Qaridarium
    Why not just open source the Vulkan and OpenCL implementations from the AMDGPU-PRO stack?
    In my understanding AMD plans to do this anyway, so why is there no timeline to actually do it?
    We're doing that, but the ROC stack is available and open source today. What's wrong with telling people about it?

    You've watched enough open-sourcing efforts over the years that you should understand how they work - the work required is harder to estimate accurately than for most projects, so the timeline ends up being "when it's done" more often than not.

    If you are aware of other companies providing and delivering on accurate timelines for major open-sourcing efforts please let me know.

    Originally posted by Qaridarium
    The situation right now is really very confusing; AMD could do better, and why do I think so?
    They still try the old way of developing stuff: make it run, make it perfect, and then release it.
    Instead, they should just drop the code open source in public and let the community help them finalize it.
    Sure, no problem. We'll just drop all the third-party proprietary code out in public and let their lawyers help to finalize it (or us). No thank you, I certainly don't want to be finalized...

    Remember that all the components we're talking about are shared code, generally written first for some other OS using toolchains and internal info from those other OSes. "Dropping the code in public and finalizing later" is not an option in those cases.
    Last edited by bridgman; 02 June 2016, 01:12 PM.



  • bridgman
    replied
    Originally posted by Qaridarium
    what does SHOC/ROC stack mean?
    SHOC is the Scalable HeterOgeneous Computing benchmark - the first set of tests Michael ran for this article. It's interesting because it includes both OpenCL and CUDA paths for most of the tests.

    ROC is Radeon Open Compute, which started with the HSA stack, then added dGPU support, the ability to run without IOMMUv2, plus the HCC single-binary C++17 compiler and the HIP porting tool. HIP helps you convert CUDA code into portable C++ code that can run through either the AMD or NVidia toolchains.

    We don't have enough OpenCL support on the all-open stack to run all the SHOC tests yet, AFAIK (you would need the hybrid driver for now), but ROC will let the CUDA paths run instead.
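    For context, a minimal sketch of what HIP-ported code looks like (a generic vector-add example, not something from SHOC): the hipify tool rewrites CUDA API calls and kernel launches into their HIP equivalents, and the resulting source builds with either the AMD or the NVIDIA toolchain. The exact spellings below follow the public HIP runtime API and are meant as an illustration only.

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Same kernel syntax as CUDA; blockIdx/blockDim/threadIdx work unchanged under HIP.
    __global__ void vec_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

        float *da, *db, *dc;
        hipMalloc(reinterpret_cast<void **>(&da), bytes);        // was cudaMalloc
        hipMalloc(reinterpret_cast<void **>(&db), bytes);
        hipMalloc(reinterpret_cast<void **>(&dc), bytes);
        hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);  // was cudaMemcpy
        hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

        // was: vec_add<<<blocks, 256>>>(da, db, dc, n);
        hipLaunchKernelGGL(vec_add, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);

        hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
        std::printf("hc[0] = %f\n", hc[0]);  // expect 3.0

        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }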



  • Luke_Wolf
    replied
    Originally posted by juno
    Absolutely. If you told this to somebody some years ago, he would have called you crazy.
    What's going to be more interesting is where we'll be a year or two after they've caught up with all the various specs beyond OpenGL and have really had time to optimize the stack. I wouldn't be entirely surprised to see a role reversal where Mesa becomes the performance king.



  • juno
    replied
    Well, the launch price doesn't say much about the average price over the whole life cycle of the product.
    If one competitor is early with a new generation, its launch price will be higher and then stabilise at a lower level for years once the competition is out.

    It is really hard to do such a comparison. Launch prices don't help; you have to consider the actual street prices, and those fluctuate a lot.

    Originally posted by FireBurn
    Wow, I'm really impressed at how well AMD's open driver is competing against the nVidia blob
    Absolutely. If you told this to somebody some years ago, he would have called you crazy.
    Last edited by juno; 01 June 2016, 07:58 AM.



  • Azpegath
    replied
    Originally posted by Michael

    PTS has support for determining perf-per-dollar, but it's unfortunately difficult to do well for cards long after launch... Since there are many different cards out there for the same model, do I use the old reference price? The average current price? The lowest price? With so many variables for anything besides launch pricing, I don't tend to do it.
    I understand that it would be a bit non-deterministic for older cards... but for me it would be OK just to have a ballpark figure based on launch price, like "which cards was this card supposed to compete against when it came out?" Mainly, I'm thinking about Nvidia vs. AMD generations.



  • Michael
    replied
    Originally posted by Azpegath
    Performance per watt is interesting, but could you add performance per USD? I'm certain you've done that before, but it's still quite interesting to see how much value we get per krona (or USD, as you crazy Americans like to call it).
    PTS has support for determining perf-per-dollar, but it's unfortunately difficult to do well for cards long after launch... Since there are many different cards out there for the same model, do I use the old reference price? The average current price? The lowest price? With so many variables for anything besides launch pricing, I don't tend to do it.



  • Azpegath
    replied
    Performance per watt is interesting, but could you add performance per USD? I'm certain you've done that before, but it's still quite interesting to see how much value we get per krona (or USD, as you crazy Americans like to call it).

