Right, but generally the better user experience comes from being open source and integrated into upstream.
It Looks Like AMDGPU DC (DAL) Will Not Be Accepted In The Linux Kernel
Originally posted by bridgman View Post
We will also still be lighting up hardware with "DC the code" whether or not it is upstream at the moment.
Originally posted by krelian View Post
@bridgman Late to the party, but have you guys ever considered building some DSLs and abstracting the tricky bits with source translation at compile time, rather than a runtime abstraction layer? Maybe with a PEG/packrat parser and a good templating engine -- something as simple as awk can work wonders. That way you would still be reasonably confident that things will work on Linux, while both making the upstream maintainers happy and reducing the burden of conformance/validation testing.
I'm a long-time game programmer with a technical background. I've worked on lots of different engines over the years, and it's been my observation that almost everyone has been doing abstraction layers wrong, myself included. Extra layers of fine-grained abstraction end up being a nightmare to maintain and can cause performance issues.
Anyway, I've recently had quite a bit of success building our physically based shading and material pipeline with a custom DSL that generates backends for HLSL, GLSL, PSSL and MSL. We're currently using the same idea to build and generate backends for our particle rendering system, allowing us to abstract rendering all the way from OpenGL ES 2.0 profiles, where geometry is generated and draw calls are batched on the CPU, up to pipelines where everything is done on the GPU.
In both cases we're generating orders of magnitude more code (much of it high-performance and very readable -- code generation doesn't have to be a mess) while only having to maintain the stuff written in the DSL plus the DSL front end and code-generation templates. We're literally generating several hundred thousand LoC for all the various combinations and different platforms from a base of around 8,000 LoC.
I got the ideas a few years back from a talk by Alan Kay entitled "Programming and Scaling". Look it up.
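For anyone curious what the compile-time source-translation idea looks like in miniature, here is a toy sketch: one DSL-level shader definition, with per-backend type tables and templates generating GLSL and HLSL text at build time. Every name here (SCALAR_TYPES, TEMPLATES, generate) is illustrative, not taken from any real engine or from the pipeline described above.

```python
# Toy sketch of compile-time source translation: one DSL-level
# description, several generated shading-language backends.
# All names here are illustrative, not from any real pipeline.

# Per-backend mapping from DSL type names to native type names.
SCALAR_TYPES = {
    "glsl": {"float4": "vec4", "float3": "vec3"},
    "hlsl": {"float4": "float4", "float3": "float3"},
}

# Per-backend function templates (identical here, but they could
# diverge per target, e.g. for entry-point semantics).
TEMPLATES = {
    "glsl": "{ret} {name}() {{ return {body}; }}",
    "hlsl": "{ret} {name}() {{ return {body}; }}",
}

def generate(backend, name, ret_type, body):
    """Translate one DSL-level function into the target language."""
    types = SCALAR_TYPES[backend]
    # Rewrite DSL type names into the backend's native type names.
    for dsl_type, native in types.items():
        body = body.replace(dsl_type, native)
    return TEMPLATES[backend].format(
        ret=types[ret_type], name=name, body=body)

# One definition in the DSL...
dsl_body = "float4(albedo, 1.0)"
# ...many generated backends.
glsl = generate("glsl", "shade", "float4", dsl_body)
hlsl = generate("hlsl", "shade", "float4", dsl_body)
```

A real version would parse the DSL properly (a PEG parser rather than string replacement) and emit whole files per platform, but the maintenance win is the same: one source of truth, N generated backends.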
bridgman Thank you for the breakdown and clarification. I actually read all 14 pages. My panic didn't fade until your explanation on page 13 that, basically, "nothing has changed and this is all a miscommunication". I can admit my disappointment as well, but I'm still planning on going Zen Summit Ridge and Vega HBM2. I'll be honest, looking for the end of the tunnel on support can get depressing. Still rocking Sabayon/Gentoo, but I have no AMDGPU-PRO driver since it ships as .deb or .rpm only. I had considered going Nvidia right before my R9 Nano, and I have been an AMD purist since the AMD K6 days and an ATI purist since the Rage 128. AMDGPU was what gave me hope to continue and purchase an R9 Nano. Since this is all going the way of open source eventually, is there a way we can get a non-packaged version we could download to compile and install on our vanilla systems?
Originally posted by Sonadow View Post
That implies the use of AMDGPU-PRO, doesn't it? What if I do not want to use AMDGPU-PRO, but instead stay with a vanilla kernel + libdrm + Mesa stack? What should I expect to lose?
What does it mean to you?
Originally posted by Darksurf View Post
Still rocking Sabayon/Gentoo, but have no AMDGPU-PRO driver due to .deb or .rpm only. I myself had considered going Nvidia right before my R9 Nano, and I have been an AMD purist since the AMD K6 days and an ATI purist since the Rage 128. AMDGPU was what gave me hope to continue and purchase an R9 Nano. Since this is all going the way of open source eventually, is there a way we can get a non-packaged version we could download to compile and install on our vanilla systems?
Originally posted by bridgman View Post
Maybe we are interpreting "lighting up" differently.
Originally posted by bridgman View Post
To me that means "we just got first silicon back from the fab, months before launch, and need to start bringing up the driver and testing".
What does it mean to you?
1) Works on the latest shipping version of the kernel (not git!) on launch day or shortly after launch, with KMS
2) Can output at up to 1440p over HDMI
3) Can hook onto the modesetting DDX driver
4) Can talk with the latest libdrm and Mesa to hardware-accelerate a typical DE (like GNOME or Plasma)
#1, #2 and #3 are about getting a usable display. #4 is about actually using the hardware, and not falling back to software and llvmpipe to get the desktop drawn onscreen. I consider these four aspects the bare minimum of expectations for any driver; if all four requirements are met, I deem the hardware as being successfully lit up.
Anything else, like HDMI audio, FreeSync, etc., is a bonus and icing on the cake that I can afford to wait longer for or live without for an extended period of time.
Last edited by Sonadow; 09 December 2016, 01:50 PM.
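As an aside, requirement #1 (a working KMS display path) can be sanity-checked from userspace by reading DRM connector state out of sysfs. A minimal sketch, assuming the standard /sys/class/drm layout (the root path is parameterized here so the function itself does not depend on real hardware):

```python
# Minimal sketch: enumerate DRM connectors and their hotplug status
# via sysfs, as a quick check that a KMS driver has bound to the GPU.
from pathlib import Path

def drm_connectors(sysfs_root="/sys/class/drm"):
    """Return {connector_name: status} for every DRM connector found."""
    result = {}
    root = Path(sysfs_root)
    if not root.is_dir():
        return result  # no DRM subsystem (or the given root is absent)
    for entry in sorted(root.iterdir()):
        # Only connector nodes (e.g. card0-HDMI-A-1) expose a status file;
        # bare device nodes like card0 do not.
        status = entry / "status"
        if status.is_file():
            result[entry.name] = status.read_text().strip()
    return result

# Example: print every connector the kernel knows about.
for name, state in drm_connectors().items():
    print(f"{name}: {state}")
```

A "connected" status on at least one connector is a decent hint that KMS is up; it says nothing about #4 (acceleration), which needs a working Mesa stack on top.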
Originally posted by Sonadow View Post
That implies the use of AMDGPU-PRO, doesn't it? What if I do not want to use AMDGPU-PRO, but instead stay with a vanilla kernel + libdrm + Mesa stack? What should I expect to lose?
And just for the record: Could people please read and re-read all of @bridgman's posts before posting?
THREAD SUMMARY:
- On behalf of the Linux kernel DRM subsystem maintainers, Dave Airlie notes that they have prior experience with merging a HAL, which in turn makes them very skeptical about ever doing it again.
-- So even in light of the DRM maintainers' policy of accepting good-enough code and then chipping away at it, merging the DC-the-HAL code in its present state would be a BIG mistake, since it is all but guaranteed to cause major headaches down the road.
ermo's personal observation #1: The above seems like a clearly engineering-driven decision, one which accurately reflects the experience/culture/philosophy/policy of the Linux kernel dev model given the available DRM subsystem dev resources.
- On behalf of AMD, bridgman notes that AMD respects this and that the developers assigned to DC-the-project will continue to work on refactoring the cross-platform DC-the-HAL-code to be more in line with what the Linux kernel DRM devs will accept.
-- In the meantime, and purely for business reasons, AMD will have to keep bringing up yet-to-be-introduced new hardware on the evolving DC-the-HAL code, because this code is shared between Linux (AMDGPU-PRO) and Windows (Radeon Software Crimson), and because it makes little business sense to delay new hardware support for the 99% of its customers who run Windows due to the very specific technical requirements of the platform (Linux) that represents the remaining 1% of its customers.
ermo's personal observation #2: bridgman considers the reaction to the article a bit of a storm in a teacup when in fact the ongoing RFC process is pretty much business as usual.
bridgman and airlied: I hope the above is a sufficiently accurate representation of your positions? Feel free to correct/expand on this post as warranted.
Originally posted by libv View Post
Dave is such a massive two-faced, hypocritical wanker.
He's one of the main people who worked real hard and applied all sorts of bullshit tricks to kill the proper open source driver for ATI. He's been very busy making sure that the ATI way of doing things got implemented. And now he is trying to act like the big hero with RADV and now this, because, guess what, ATI is doing things the ATI way.
Why is this guy given any credit whatsoever?
Originally posted by Fixnix View Post
This constant war between blind Linux idealists and GPU driver guys affects users, gamers and game developers the worst, and no one cares. I just want my goddamn $400+ GPU to work out of the box like it does on Windows …