AMD Releases ROCm 4.2 Compute Stack

  • coder
    replied
    Originally posted by Qaridarium
    because of the ISA war with Intel, games always used the next Intel-only version of the ISA...
    1. I doubt game developers can afford to lock out people with Intel CPUs 1-2 generations old. So, it's not going to be the very latest ISA revision.
    2. Intel didn't add any new instructions to mainstream desktop CPUs between Skylake and Rocket Lake. And Ryzens supported basically all the Skylake instructions.

    Originally posted by Qaridarium
    but AMD/ATI did find a hole in the Intel/Nvidia defence: the gaming consoles of Sony/MS, because developers there were used to programming at a low level and optimizing the game for exactly the console's hardware. This meant it was the only way to sell gaming GPUs and CPUs without going into the "Nvidia, the way it's meant to be played" walled-garden fight with Nvidia or an ISA war against Intel.
    and this war is still going on, but because the PlayStation/Xbox are based on AMD hardware, nearly all game developers have become used to supporting AMD hardware in the end.
    There are two problems with this theory:
    1. Playstation uses its own graphics API. So, games have to be ported across APIs when moving between PCs and Playstation consoles. A lot of the optimizations might therefore get lost.
    2. The people doing console ports are rarely the game's original developers. Game companies contract out this work. I heard this from a very good authority on the matter, when I asked about the very thing you're claiming.

    Originally posted by Qaridarium
    this means that from AMD's side this is a strategic decision to fight Nvidia/Intel, and because of this AMD sells the chips very cheaply to Sony/MS...
    You're just guessing. I'm sure you don't have a good source on any of it.

    Originally posted by Qaridarium
    for AMD, the game consoles are the way to stay in business even in times when no one buys AMD hardware on the PC market.

    as I wrote at the top of my post, even if AMD makes ZERO money on gaming consoles, it is of great importance to have this position, because without it AMD would have gone bankrupt a long time ago.
    Man, you're really contradicting yourself. So, if AMD doesn't make console chips for profit, then how can the revenue be enough to keep them alive? But, if they're doing it so that PC games will be better optimized for their GPUs, then why do you say they don't care about PC gaming?

    It sounds to me like you need to get your story straight. It would help if you could cite some facts to bolster your case.

  • smitty3268
    replied
    Originally posted by coder View Post
    It would be interesting to know how much profit AMD is making off each XBox and PS5 sold. I'm guessing it's not nearly as much as their dGPUs, but I honestly can't say.
    It's well known that the console market is a high-volume but low-profit-per-chip market. That's the primary reason Nvidia and Intel were never particularly interested in fighting AMD to get it. Margins on their own chips are going to be far higher than what they're getting paid by Sony and Microsoft.

    Originally posted by coder View Post
    I've heard many people claim that AMD is handling the production of the XBox and PS5 APUs, but I wonder if this is an assumption or based on any evidence. Even if it's true, AMD is operating by contract, which means they're subject to certain supply obligations. Therefore, I'm sure they're not just at liberty to prioritize their own products ahead of MS and Sony's.
    It is public knowledge that MS/Sony don't pay TSMC directly, and they are paying AMD a lot of money. So their chips come out of AMD's share of TSMC production. It's all defined by contract though. I assume they are contracted to provide a certain # of working chips per month and it's up to AMD to figure out how many wafers that requires (or maybe it's just a certain number of wafers that is required, I'm not sure). You can be certain that AMD is trying to prioritize their own chips as much as possible since they'll make far more profit on those.
    Last edited by smitty3268; 15 May 2021, 08:39 PM.

  • coder
    replied
    Originally posted by Qaridarium
    no, PC gaming is not important to AMD compared to the PlayStation 5
    It would be interesting to know how much profit AMD is making off each XBox and PS5 sold. I'm guessing it's not nearly as much as their dGPUs, but I honestly can't say.

    Edit: if you're interested, a quick search turned up these discussions on the topic. Indeed there are better places than here to discuss such things.


    Originally posted by Qaridarium
    at TSMC, AMD did in fact prioritize PlayStation 5 chips over PC gaming chips... now they're only starting to make PC chips again because the console gaming market is already filled up.
    What's your source on that?

    I've heard many people claim that AMD is handling the production of the XBox and PS5 APUs, but I wonder if this is an assumption or based on any evidence. Even if it's true, AMD is operating by contract, which means they're subject to certain supply obligations. Therefore, I'm sure they're not just at liberty to prioritize their own products ahead of MS and Sony's.

    Originally posted by Qaridarium
    right... but does it make a difference ?
    Insofar as you're claiming Linux gamers must be using something like 48 GB graphics cards, it does.

    Originally posted by Qaridarium
    Linux people have much better hardware than Windows gamers...
    I'll agree with you on this point. That's what such data would suggest. The next question is how many Linux gamers are using Steam.

    Originally posted by Qaridarium
    I am sure that in the long run AMD will lose customers as soon as Intel or whatever company does gaming cards with features like SR-IOV...
    You do make a good case for it.
    Last edited by coder; 15 May 2021, 08:36 PM.

  • coder
    replied
    Originally posted by Qaridarium
    that's just your opinion; I find the user "Coder" very interesting...
    Thanks man. When we stay on topic, I find you have some interesting perspectives. Your points and questions often make me think, and that's a good thing.

    Originally posted by Qaridarium
    many times I simply don't respond to something because the only thing I could write is: yes, you are right...
    Right. If I don't reply to a point in a post, then it's usually because I agree. Sometimes I'll say I agree, if I'm already replying, but not always.

    If I agree with a whole post (or most of it and don't strongly disagree with any part), I just use the "like" button.

  • coder
    replied
    Originally posted by vegabook View Post
    Oh brother. Not another looooong winded trawl through all the same issues,
    I don't like to repeat myself. So, chances are that it's not simply a regurgitation of the same points.

    Anyway, if you've lost interest in the discussion, what people usually do is just let it go and walk away. The fact that you need to toss some shade on your way out tells me that you're still too emotionally invested in your position to consider the matter dispassionately.

    Originally posted by vegabook View Post
    probably peppered with your "butthurt" projections.
    I'm sure at least 9 out of 10 independent observers would read your first post and conclude nothing less. In fact, it's so bad that you even had to write that little fan fiction piece about Raja Koduri, which is a pretty epic level of butthurt.

    Originally posted by vegabook View Post
    Sorry dude I'm done I've made all my points can't be bothered to read this.
    I understand that it's asking a lot for someone to really look at facts that run somewhat counter to one's preferred narrative. I thought you were up to the challenge, but maybe you need to cool off a bit.

    You certainly don't have to read anything I write, but it's basic decorum to read the posts you're replying to.

    Originally posted by vegabook View Post
    Try to be more concise man you're not that interesting.
    You're clearly invested in the issue. I don't pretend you're interested in me. I do presume you're interested in the issue you posted about.

    I don't apologize for taking on your points, fully, logically, and thoroughly. That I would even bother to do so could be taken as a sign of respect. And on the point of respect, I do respect your decision and your frustration. If you want to talk about it, that's what we're here for. If not, that's also fine.

  • vegabook
    replied
    Originally posted by coder View Post
    This point we agree on. They should've taken a more expansive view of Linux-based device support [...]
    Oh brother. Not another looooong winded trawl through all the same issues, probably peppered with your "butthurt" projections. Sorry dude I'm done I've made all my points can't be bothered to read this. Try to be more concise man you're not that interesting.

  • coder
    replied
    Originally posted by Qaridarium
    I think about compute... OpenCL is already gone... Blender removed it and others will follow
    It's not.

    Blender is not a representative example. Its core devs were always Nvidia fanboys and neither wrote nor maintained the OpenCL backend. Their new reworking effort was just a convenient excuse to get rid of it.

    Originally posted by Qaridarium
    Vulkan-Compute/WebGPU is the way to go
    Vulkan's precision standards are too low for some compute users, and the API is incredibly complex. That doesn't mean it can't be used, but it's just not equivalent to OpenCL.
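
    To give a rough sense of that complexity gap, here's a minimal OpenCL dispatch sketch (the kernel, sizes, and scale factor are made up for illustration, and error checking is omitted). A comparable Vulkan compute dispatch would additionally need instance/device creation, a SPIR-V shader module, a pipeline, descriptor sets, and command buffer recording, typically running to several hundred lines:

    ```cpp
    // Minimal sketch: dispatching a trivial OpenCL kernel from C++.
    // Error checking is omitted for brevity; the kernel and sizes are
    // illustrative, not anything from this thread.
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    static const char* kSrc =
        "__kernel void scale(__global float* v, float s) {"
        "  v[get_global_id(0)] *= s;"
        "}";

    int main() {
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, nullptr);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);

        cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
        cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, nullptr, nullptr);

        // Build the kernel from source at runtime -- no offline pipeline step.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
        clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
        cl_kernel k = clCreateKernel(prog, "scale", nullptr);

        std::vector<float> host(1024, 1.0f);
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    host.size() * sizeof(float), host.data(), nullptr);
        float s = 2.0f;
        clSetKernelArg(k, 0, sizeof(buf), &buf);
        clSetKernelArg(k, 1, sizeof(s), &s);

        size_t global = host.size();
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, host.size() * sizeof(float),
                            host.data(), 0, nullptr, nullptr);
        printf("host[0] = %f\n", host[0]);  // expect 2.0
        return 0;
    }
    ```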

    WebGPU has yet to prove itself.

    Originally posted by Qaridarium
    but AMD spent a lot of money on ROCm, HIP (fake CUDA), and OpenCL...
    You need to distinguish ROCm from OpenCL and HiP. ROCm is just the framework that they fit into.
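
    To make the layering concrete: ROCm is the platform (kernel driver, runtime, math libraries), while HiP and OpenCL are programming APIs that sit on top of it. And the "fake cuda" jab works because HiP's API is a near 1:1 rename of CUDA's, which is what the hipify porting tools exploit. A minimal sketch (the kernel and sizes are made up for illustration):

    ```cpp
    // Minimal HIP sketch: the API deliberately mirrors CUDA almost 1:1
    // (cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy, same <<<>>> launch),
    // which is what makes mechanical porting via "hipify" possible.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    __global__ void scale(float* v, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] *= s;
    }

    int main() {
        const int n = 1024;
        float host[n];
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        float* dev = nullptr;
        hipMalloc(&dev, n * sizeof(float));
        hipMemcpy(dev, host, n * sizeof(float), hipMemcpyHostToDevice);

        scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);  // grid, block -- as in CUDA

        hipMemcpy(host, dev, n * sizeof(float), hipMemcpyDeviceToHost);
        hipFree(dev);
        printf("host[0] = %f\n", host[0]);  // expect 2.0
        return 0;
    }
    ```

    As far as I understand it, both this and an OpenCL program land on the same ROCm driver stack underneath; the API choice is a porting/ergonomics decision, not a different stack.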

    Originally posted by Qaridarium
    also, some market distinctions don't make a lot of sense in today's reality...
    They do in terms of who is willing to pay how much, and for which features.

    Cloud and AI have grown like crazy. AMD has virtually been left out of AI. And I think their focus on HPC is because it's a definite market with customers willing to pay serious money. That makes it hard to prioritize a disorganized and nebulous bunch of Linux people who want to do all sorts of things with their cheap gaming cards.

    Originally posted by Qaridarium
    "Gamers" as AMD and the world see it is today more PlayStation 5/Xbox and smartphones...
    I wish you'd take just a minute to think about these things, before you type them. PC gaming is obviously very important to them, as you can see if you just visit their Radeon web site.

    Originally posted by Qaridarium
    to have an average of 8 GB VRAM compared to the 1 GB of Windows people, you need a lot of people with 12 GB or 16 GB or 24 GB or even 48 GB of VRAM for the statistic to come out to an average of 8 GB...
    Not if "average" is actually the median. And, for game developers, median is the statistic that's most useful.
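
    Here's a quick sketch of that distinction, with made-up numbers rather than actual Steam survey data: a single 48 GB outlier drags the mean way up, while the median stays at the typical value:

    ```cpp
    // Made-up VRAM sample (GB) to show mean vs. median: one 48 GB outlier
    // drags the mean up, while the median just reports the typical user.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> vram = {4, 6, 8, 8, 8, 8, 12, 48};

        double mean = 0;
        for (double v : vram) mean += v;
        mean /= vram.size();

        std::sort(vram.begin(), vram.end());
        // Even-sized sample: median is the mean of the two middle values.
        double median = (vram[vram.size() / 2 - 1] + vram[vram.size() / 2]) / 2.0;

        printf("mean = %.2f GB, median = %.2f GB\n", mean, median);
        // mean = 12.75 GB, median = 8.00 GB -- no army of 48 GB cards is
        // needed for the reported figure to sit at 8 GB, if it's a median.
        return 0;
    }
    ```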

  • coder
    replied
    Originally posted by vegabook View Post
    I think it's fair to summarise things as follows. I believe AMD has screwed up on its ROCm strategy,
    This point we agree on. They should've taken a more expansive view of Linux-based device support and appreciated that success in GPU computing meant making sure that AMD's GPU computing ecosystem had the broadest reach. That means implementing and maintaining support on all the hardware.

    I fear one obstacle they faced is that AMD is/was too Windows-centric. For all I know, DirectCompute works fine, on all these GPUs. Maybe some people at AMD thought that was enough for most non Big-Iron GPU compute users.

    Originally posted by vegabook View Post
    and that ROCm is now holed below the waterline.
    Why do you say that? I don't see anything about it that makes it a less-suitable vehicle for moving forward.

    What I think happened is that the whole ROCm strategy got off to a late, slow start. Remember that they futzed around with HSA for a while. And by the time ROCm was finally starting to solidify, resources got diverted for the whole HiP nonsense and trying to catch up in AI & HPC framework support.

    Meanwhile, there's been wave after wave of new APUs and GPUs that need drivers & feature support. RDNA was a big change, so that probably meant a lot of work. And stuff like ray tracing and MI100's new Matrix cores would mean a lot of work further up the software stack. With all that going on, it's hard to keep focus on rebuilding the whole driver stack & its userspace counterparts.

    Originally posted by vegabook View Post
    You're prepared to be more patient
    Am I? Just because I'm trying to look at it from AMD's perspective doesn't mean I'm any more willing to wait. I'm going to use whatever I need, in order to achieve my goals.

    Originally posted by vegabook View Post
    We simply, fundamentally, disagree, and that's fine.
    I'm not really sure about what, other than that you seem more emotive about this whole situation.

  • vegabook
    replied
    Originally posted by coder View Post
    As I've already implied (and I think quite directly stated, in other threads), I get it. AMD is ignoring the "developer pipeline" problem. [...]
    I think it's fair to summarise things as follows. I believe AMD has screwed up on its ROCm strategy, and that ROCm is now holed below the waterline. You're prepared to be more patient and allow the firm more leeway to see if what it's doing can still bear fruit.

    We simply, fundamentally, disagree, and that's fine.
    Last edited by vegabook; 15 May 2021, 06:20 AM.

  • coder
    replied
    Originally posted by vegabook View Post
    "Jetson/robotics": Who cares about the reason? The point is anybody can pick up a cheap device and start learning CUDA, the same CUDA that will then run on a huge DGX GPU. No can do on AMD.
    As I've already implied (and I think quite directly stated, in other threads), I get it. AMD is ignoring the "developer pipeline" problem. That's the thing that I think has us all in the most collective disbelief.

    But that's not the point you seemed to make, in the post I was replying to. In that case, it sounded like you were saying: "if Nvidia can do all this for a $100 device, why can't AMD for a $700 GPU." I think there's a good reason for that, which I stated.

    Also, I'm sure you well know that software development costs are virtually uncorrelated with the costs of the hardware it runs on.

    Originally posted by vegabook View Post
    your hindsight thing, ATI etc.: it's obvious that hindsight is the ONLY way to look at how the management of any company has performed.
    No, it's not fair. We have the luxury of seeing how decisions played out, but you can't truly judge the quality of decisions on the basis of information that wasn't available to the decision-makers at the time. You have to ask the question: did they make the best decision they could, given the circumstances and what they knew. The answer might be "yes", even if the outcome was bad (i.e. if the bad outcome was due to unlikely or unforeseeable events or developments). I'm not saying they made the best decisions, but you're being too harsh. AMD was in a difficult position, and I understand their incentives to follow the bigger and more definite payoffs of HPC and deep learning.

    Originally posted by vegabook View Post
    That's exactly how the entire market works, you know, hindsight on results that have already happened.
    Actually, no. The market values companies based on their current and expected performance. It doesn't care much about past mistakes or why the current situation is what it is. Whether a company is awesome, its competitors are terrible, or it just got lucky and is succeeding in spite of a bad call pretty much amount to the same thing, in the eyes of the market.

    Originally posted by vegabook View Post
    More generally, your fawning desire to forgive AMD
    Maybe get an ice pack for that sore ass, dude. The mere fact that I'm willing to look at the situation from AMD's perspective doesn't mean that I like where we ended up or that I'm fawning on anyone.

    Honestly, I'm as disappointed as you. I wanted AMD to be better, but the larger they grow, the more they start to look like Intel (in the bad ways, I mean). And, to my great surprise, Intel is ending up as the lone hold-out for OpenCL. And they have open source drivers for all their stuff, on launch day no less!

    And, in all sincerity, I'm sorry for your wasted time and effort on ROCm, and I wish you better fortunes in your future pursuits.

    Originally posted by vegabook View Post
    certainly most actual users of AMD cards trying to do ROCm are not gonna agree with you.
    For reasons outside my control, the only serious use I've made of OpenCL happened to be on Intel. I expected to be doing more with AMD, but never got that far.

    In full disclosure, I've also done a fair bit with Nvidia, back when they were the only game in AI, via deep learning frameworks. Also wrote a little CUDA, in 2013, and wasn't impressed vs. OpenCL.
    Last edited by coder; 15 May 2021, 05:42 AM.
