Vulkan 1.0 Released: What You Need To Know About This Cross-Platform, High-Performance Graphics API


  • SystemCrasher
    replied
    Originally posted by bridgman View Post
    Yeah, but the fact that something is experimental today doesn't mean it will be experimental forever
    "Experimental" could end by either being promoted to "mainline/default" or being chopped away. TBH I've got impression AMD devs are rather up for later.

    AFAIK we have been REALLY FREAKIN' CLEAR that the upstream restrictions on breaking userspace do not apply to packaged binary drivers (eg amdgpu hybrid)
    Distros have every option to load modules the way they see fit by fiddling with blacklists/HWDB/aliases/etc. Most use udev, which allows plenty of strange things, and distros are responsible for the consistency of their updates anyway. I do not really get why two modules able to support the same HW should be a big deal. There are other things supported by two different modules, e.g. some Realtek Wi-Fi cards (one module just works and another is a "proper" mac80211 rewrite). I also do not get why loading a different kernel module counts as "breakage". Most of the time the decision is made by udev; it is not up to the kernel to decide, and if usermode loads a module it can't handle, there is nothing the kernel can do about it. At most I can imagine a strange combo where one builds both modules into the kernel image (uhm, is that supported at all?), but the kernel already allows even stranger configurations. If kernel devs are that worried about breakage, building a kernel without /proc BREAKS usermode, and without ELF support getting usermode started could be a bit tricky :P. So I'm not really sure why the kernel needs an option to disable support for "older" parts at all, and especially why it should be disabled by default. Are there some unique failure modes AMD or kernel devs are afraid of?
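
    For illustration, overriding which module binds is only a couple of lines of modprobe configuration. A minimal sketch (the file name is made up), assuming both drivers are built as modules, amdgpu's experimental older-ASIC support is compiled in, and no pre-GCN card in the box still needs radeon:

        # Hypothetical /etc/modprobe.d/prefer-amdgpu.conf (file name made up)
        # Keep radeon from auto-loading so the experimental amdgpu module
        # picks the card up instead.
        blacklist radeon

    On most distros the initramfs would also need to be regenerated afterwards so the blacklist takes effect at early boot.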

    that amdgpu support for earlier HW had been enabled by default from day one. Alex and I both said multiple times that initial amdgpu development had been done on CI, and that hybrid development was continuing on it.
    But the fate of this code in the long run was not clear. I.e. it sounded like it was there only because newer parts weren't available yet, and once they arrived the old code could be chopped away, since it would no longer be needed to test the driver. I can't remember anyone saying it was going to stay, and especially not that it was going to become the new default at some point. Maybe I've missed something once more, though.

    Yeah, even I forget sometimes that "GCN" means "Graphics Core Next", ie just the shader core. There's no rigid pattern but generally you won't see big core changes happen in the same generation as big uncore changes.
    [...]
    The kernel driver cares about uncore and scheduling/dispatching but not about ISA. Take the above list with a grain of salt, it's a 90-second brain dump.
    Verrrrry good explanation; it explains why things happen this way. I guess it could be good to put it somewhere near http://xorg.freedesktop.org/wiki/RadeonFeature (though this wiki seems to be a "private club", so it is up to the devs).

    Yes, all drivers that support multiple HW generations have to do that. The issue is that by breaking between SI and CI we were avoiding a big chunk of duplication. We can't remove the code from radeon because it's still needed for NI, but adding NI to amdgpu starts to get stupid.
    Hmm, NI in AMDGPU? Sounds funny. I can imagine NI owners would be happy, though it could look a bit unexpected unless one gets this idea about core/uncore/ISA.

    Yeah, IIRC that's still an option, but doing something like that at the same time as all the other changes we are making gets impractical. Agree that, if nothing else, that could make us feel a bit better about the bloat from adding SI to amdgpu.
    Now I can at least understand why SI support in AMDGPU looks not so exciting from a technical point of view.

    Leave a comment:


  • drSeehas
    replied
    Originally posted by SystemCrasher View Post
    ... So it seems the "GCN" term mostly refers to the new shader core design ...
    What does the C in "GCN" mean?

    Leave a comment:


  • azari
    replied
    Originally posted by bridgman View Post
    ^^^ Imagine detailed post here ^^^

    Unapproved again. Sigh. Maybe this is why people use twitter.
    I feel your pain; I made this post in the afternoon (EST), and it only showed up just now. I actually made it before our exchange in the other thread.

    The worst thing is that it doesn't show anywhere that it's "awaiting moderation" or anything; the post just doesn't show up and you have to assume it worked.

    Leave a comment:


  • bridgman
    replied
    ^^^ Imagine detailed post here ^^^

    Unapproved again. Sigh. Maybe this is why people use twitter.

    Leave a comment:


  • bridgman
    replied
    Originally posted by SystemCrasher View Post
    Yeah, and overall I bet it could be better, i.e. AMD could issue simple and clear communications in their press releases, outlining what AMD is up to, so Michael and everybody else gets it right. I wonder if you're allowed to take part in creating press releases, at least the Linux & open drivers related parts? Somehow you do it right, and AMD should value such a "PR person"; just looking at your forum rating is enough to get the idea. Not to mention it could be good for Michael to ask you about actual plans before drawing conclusions.
    Normally I'm at least peripherally involved but this was an exception...

    Originally posted by SystemCrasher View Post
    Because...
    1) AMDGPU lacks support for anything but VI by default. So most distros would only enable it for VI, as long as it is called "experimental".
    2) IIRC, "experimental" code is also disabled by default, reinforcing the idea in 1) further.
    3) "Experimental" could mean anything, e.g. one does not have to be surprised if experimental code is removed at some point, etc. I wonder what the REAL plans are about it? It seems at this point AMD devs are rather considering making it the unified base for all GCN-based GPUs or something like that?
    4) Radeon on its own was not meant to deal with Catalyst's user-mode parts, and since Vulkan appears to be part of those efforts, it became extremely unobvious how it could be supported at all on older parts.

    This is the logic chain I've followed, and looking at this thread it seems I'm not at all unique in this :P.
    Yeah, but the fact that something is experimental today doesn't mean it will be experimental forever. Unless/until we can switch the defaults, we have to focus upstream effort on one driver, and describing amdgpu as experimental is the only way we could do that.

    That said, your point #3 (that "experimental" could be interpreted as "might go away") is certainly something that never occurred to me. I think of experimental as midway between "nothing" and "mainstream", ie might get rewritten but not likely to go away completely. Maybe that's wrong.

    AFAIK we have been very clear that the upstream restrictions on breaking userspace do not apply to packaged binary drivers (eg amdgpu hybrid) and that amdgpu support for earlier HW had been enabled by default from day one in our hybrid driver builds. I guess we should have repeated all that when we were talking about Vulkan on Linux rather than just saying that Vulkan would be delivered as part of the hybrid driver.

    Originally posted by SystemCrasher View Post
    So it seems "GCN" term mostly refers to new shader core design rather than something else. Btw, are there some reasonable "technical" description on how these things evolved?
    Yeah, even I forget sometimes that "GCN" means "Graphics Core Next", ie just the shader core. There's no rigid pattern but generally you won't see big core changes happen in the same generation as big uncore changes. One useful way to think about it is to split the chip into three parts -- ISA (shader core), scheduling/dispatching (CP and more, maybe include graphics pipeline here), uncore (bus, memory, display, UVD, VCE, etc.):

    - r300 (first generation 3xx) had new shader core
    - r4xx was mostly uncore, eg transition to PCIE (although some of those parts were actually given 3xx numbers)
    - r5xx was mostly scheduling/dispatching (many threads, big register file to let idle threads stay resident, decoupling pixel from vertex shaders)
    - r6xx was new shader core, new graphics pipeline
    - r7xx was mostly uncore plus "more of everything"
    - r8xx/Evergreen was mostly new graphics pipeline
    - r9xx/NI was fairly big (Cayman at least) - uncore plus scheduling/dispatching (addition of compute rings) plus shader core
    - SI/gfx6 was new shader core - lots of other things changed eg memory hierarchy, just not things kernel driver cared much about
    - CI/gfx7 was uncore plus scheduling/dispatching (separate MEC blocks for async compute)
    - VI/gfx8 was significant scheduling/dispatching, eg HW virtualization, CWSR etc... plus HBM for Fiji

    The kernel driver cares about uncore and scheduling/dispatching but not about shader core. Take the above list with a grain of salt though, it's a 90-second brain dump. Every generation typically includes changes in all areas (eg ISA changes a bit every generation) but some areas change more than others.
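
    Purely as an illustration of that split (not actual driver code; every identifier below is invented), a tiny C table of which block implementations each recent generation uses. The kernel driver keys mostly off the dispatch and uncore columns, while the ISA column matters mainly to the userspace shader compiler:

        /* Purely illustrative sketch; names are made up, not amdgpu code. */
        #include <stdio.h>

        struct gpu_gen {
                const char *name;
                const char *shader_isa; /* matters mostly to the userspace compiler */
                const char *dispatch;   /* CP/scheduling - kernel driver cares */
                const char *uncore;     /* memory, display, UVD/VCE - kernel driver cares */
        };

        static const struct gpu_gen gens[] = {
                { "SI/gfx6", "new GCN core", "NI-like",           "NI-like"            },
                { "CI/gfx7", "GCN, revised", "new (MEC blocks)",  "new"                },
                { "VI/gfx8", "GCN, revised", "new (virt., CWSR)", "new (HBM on Fiji)"  },
        };

        int main(void)
        {
                for (unsigned i = 0; i < sizeof(gens) / sizeof(gens[0]); i++)
                        printf("%-8s isa:%-14s dispatch:%-18s uncore:%s\n",
                               gens[i].name, gens[i].shader_isa,
                               gens[i].dispatch, gens[i].uncore);
                return 0;
        }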

    Originally posted by SystemCrasher View Post
    Catalyst has been doing something like this for a while, no?
    Yes, all drivers that support multiple HW generations have to do that. The issue is that by breaking between SI and CI we were avoiding a big chunk of duplication. We can't remove the code from radeon because it's still needed for NI, but adding NI to amdgpu starts to get stupid.

    Originally posted by SystemCrasher View Post
    And looking at AMDGPU, I wonder if AMD devs have considered the idea of splitting it into a "core" module and "part-specific" submodules, with the core loading just the submodules needed by the GPU(s) in the actual system, reducing the footprint of code in flight. Though it could make things more complicated, and this is kind of a random idea; I haven't checked if there is something inherently preventing this approach.
    Yeah, IIRC that's still an option, but doing something like that at the same time as all the other changes we are making gets impractical. Agree that, if nothing else, that could make us feel a bit better about the bloat from adding SI to amdgpu.
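
    Purely for illustration of what such a core/submodule split might look like (a hypothetical sketch with invented names, not how amdgpu is actually structured), a thin core PCI driver could ask modprobe for just the family-specific piece at probe time:

        /* Hypothetical sketch only; module and symbol names are invented and
         * this is not the real amdgpu layout. It just shows a thin "core" PCI
         * driver pulling in one per-family submodule when the GPU is probed. */
        #include <linux/module.h>
        #include <linux/kmod.h>
        #include <linux/pci.h>

        static int core_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        {
                /* Pretend the (omitted) id_table stashed a family string such as
                 * "gfx7" or "gfx8" in driver_data. */
                const char *family = (const char *)id->driver_data;

                /* Ask modprobe to load only that family's submodule; the
                 * submodule would then register its ring/GMC/display function
                 * tables with this core. */
                return request_module("amdgpu_%s", family);
        }

        static struct pci_driver hypothetical_core_driver = {
                .name  = "amdgpu_core",   /* made-up name */
                /* .id_table listing the supported ASICs omitted for brevity */
                .probe = core_probe,
        };
        module_pci_driver(hypothetical_core_driver);

        MODULE_LICENSE("GPL");

    The catch is that the core/submodule boundary becomes an internal interface that has to stay stable, which is extra work on top of everything else that is changing right now.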
    Last edited by bridgman; 24 February 2016, 11:55 PM.

    Leave a comment:


  • haagch
    replied
    Maybe Michael could simply make a new article, an update to the driver status.
    Intel is presumably working on Ivy Bridge/Haswell support, but I have heard nothing at all about what the status is or when it will reach conformance.
    And then what AMD's status is and if there is an ETA for the alpha/beta driver.

    Leave a comment:


  • SystemCrasher
    replied
    Originally posted by bridgman View Post
    The problem is that three things got garbled together in the article:
    Yeah, and overall I bet it could be better, i.e. AMD could issue simple and clear communications in their press releases, outlining what AMD is up to, so Michael and everybody else gets it right. I wonder if you're allowed to take part in creating press releases, at least the Linux & open drivers related parts? Somehow you do it right, and AMD should value such a "PR person"; just looking at your forum rating is enough to get the idea. Not to mention it could be good for Michael to ask you about actual plans before drawing conclusions.

    Unfortunately it seems that a lot of people read that comment and started posting "OMG AMD is only going to support Vulkan on VI".
    Because...
    1) AMDGPU lacks support for anything but VI by default. So most distros would only enable it for VI, as long as it is called "experimental".
    2) IIRC, "experimental" code is also disabled by default, reinforcing the idea in 1) further.
    3) "Experimental" could mean anything, e.g. one does not have to be surprised if experimental code is removed at some point, etc. I wonder what the REAL plans are about it? It seems at this point AMD devs are rather considering making it the unified base for all GCN-based GPUs or something like that?
    4) Radeon on its own was not meant to deal with Catalyst's user-mode parts, and since Vulkan appears to be part of those efforts, it became extremely unobvious how it could be supported at all on older parts.

    This is the logic chain I've followed, and looking at this thread it seems I'm not at all unique in this :P.

    (SI is basically Cayman with a new shader core), and it probably takes longer to make production-ready, but it is easier to support over time.
    So it seems "GCN" term mostly refers to new shader core design rather than something else. Btw, are there some reasonable "technical" description on how these things evolved?

    We are currently leaning towards just doing (a) and holding our noses w.r.t. impact on driver size & complexity.
    Catalyst has been doing something like this for a while, no? And looking at AMDGPU, I wonder if AMD devs have considered the idea of splitting it into a "core" module and "part-specific" submodules, with the core loading just the submodules needed by the GPU(s) in the actual system, reducing the footprint of code in flight. Though it could make things more complicated, and this is kind of a random idea; I haven't checked if there is something inherently preventing this approach.

    Leave a comment:


  • azari
    replied
    Originally posted by bridgman View Post
    Our assumption was that we probably would have to do (a) eventually anyways, but depending on time / effort / resources it might be worth doing (b) first in order to get support into user's hands more quickly. We are currently leaning towards just doing (a) and holding our noses w.r.t. impact on driver size & complexity.
    If AMD wasn't selling brand new GCN 1.0 cards in the R9/R7 300 product line, option (b) would be an easier pill to swallow, but since that's not the case, I'd have to agree that option (a) is preferable long-term. Even though I know you guys aren't technically ditching the old radeon kernel driver, it's probably still not going to get as much love as the new stack over time; one of the big benefits of a shared kernel driver for Mesa and Catalyst is that any bugs AMD finds with specific motherboards that require workarounds in the driver will benefit users of both the open-source and proprietary stacks.

    I've actually been facing an issue like this with an old Llano APU for years; it works perfectly with Catalyst but locks up on the open-source driver. I know I probably should've filed a bug back then, but I basically settled for Catalyst. Now that I know Catalyst is going to be amdgpu-only going forward, I realize it's a more pressing concern, since this machine will suddenly either have to stop getting updates or the open-source driver will need to be fixed. Hopefully problems like this will go away with the shared stack for future GPUs, as there won't be a fragmented userbase on the kernel side.

    By the way, you mentioned "leaning" towards (a), but I got the impression from the posts on this page that option (a) was already well in progress.

    Leave a comment:


  • SystemCrasher
    replied

    Guest
    A relatively simple ISA is simple/fast to parse and execute in HW, allows low-level optimizations, and happens to be a quite efficient use of "silicon estate" and bus bandwidth. It works. It does not interfere with existing software and does not assume any programming language on its own. So it got used a lot. CPUs and GPUs are entirely different tradeoffs though. While they have something in common, GPUs are more like a bunch of SIMD-like ALUs; they are much less suited for high-speed flow control. This implies SIMD-style high-speed number crunching. CPUs target being efficient on general-purpose programs instead.

    Actually, nothing prevents high-level languages from emitting a mix of CPU and GPU code, doing some synchronization, etc., and I guess it is much easier to do in SW and it is going to work on existing HW. Btw, there were attempts to execute higher-level languages directly on CPUs, yet somehow it never worked reasonably well. There was the Oberon CPU, but most programs aren't written in Oberon to begin with. Most ARMs are able to execute Java bytecodes "directly", but somehow JIT and later AOT compilation prevailed; these days, e.g., Android did JIT in early versions and AOT compilation in later versions. So virtually nobody uses the strange, undocumented ARM Jazelle extension. Somehow, it seems the idea of compact, language-agnostic opcodes has worked better to date.

    Leave a comment:


  • bridgman
    replied
    .... and the message is blocked again. Wondering if it happens every time my post includes the string "Michael"?

    EDIT: nope, that's not it.

    Leave a comment:
