Either way, congrats to AMD. They've fulfilled their promise and the new AMD GPU was supported on Linux right at launch, nearly a first in AMD's history. Not to mention the open-source drivers were clearly impressive; in some games and GpuTest benchmarks they outperformed the "Pro" stack and were very competitive.
I think everyone following Linux GPU driver development has to admit AMD did a really good job, and the state of their Linux drivers has improved greatly.
AMD Radeon RX 480 On Linux
Originally posted by cde1 View Post
Is it? I removed the cover to try to find the origin of my problem. It turns out the fan has a plastic base that is screwed to the card in three places; however, this base itself is attached to the motor's PCB at only two points.
Originally posted by efikkan View Post
Most rational people want something that actually works well. When people have to run a firmware blob anyway, a kernel blob is not that big of a deal.
GTX 1060 will be available in 1-2 weeks, so stay tuned.
Originally posted by Kano View Post
Well, "running" is a vague description; somewhere it "runs". But I see no difference from a closed-source firmware stored in the hardware itself. It just has something to do with the Debian or FSF definition of non-free. But current hardware needs that, so take it or use legacy hardware...
The difference is that Debian does not prevent the user from going there, while the FSF likes to prevent the user from going there. So it is "the option is not a problem" vs. "the option is a problem".
The FSF likes to endorse distros that have the maximum possible prevention at the time, with no blobby options at all. So I guess if someone in the future makes a distro that even prevents the user from running any blob-only software, the FSF will start to endorse only those distros and drop the ones currently endorsed.
Last edited by dungeon; 04 July 2016, 02:04 AM.
Originally posted by efikkan View Post
Most rational people want something that actually works well. When people have to run a firmware blob anyway, a kernel blob is not that big of a deal.
GTX 1060 will be available in 1-2 weeks, so stay tuned.
I just want you to understand that when a comment is that stupid, it really annoys people.
Originally posted by efikkan View Post
Most rational people want something that actually works well. When people have to run a firmware blob anyway, a kernel blob is not that big of a deal.
GTX 1060 will be available in 1-2 weeks, so stay tuned.
Developers just need to stop using nonstandard OpenGL.
Last edited by fuzz; 02 July 2016, 11:39 AM.
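To make the point concrete: "standard OpenGL" means testing for an extension before relying on it rather than assuming a particular vendor's driver ships it. Below is a minimal, hypothetical sketch (the `has_extension` helper and the extension strings are my own illustration, not real driver output) of the check a portable code path would do against the space-separated list that `glGetString(GL_EXTENSIONS)` returns on older GL versions:

```python
# Hedged sketch: test for an extension before using it, instead of
# assuming a vendor-specific feature is present on every driver.
def has_extension(extensions_string, name):
    """extensions_string mimics the space-separated list returned by
    glGetString(GL_EXTENSIONS); name is the exact extension token."""
    return name in extensions_string.split()

# Pretend this string came from the driver (illustrative only):
exts = "GL_ARB_vertex_buffer_object GL_NV_shader_buffer_load"
print(has_extension(exts, "GL_NV_shader_buffer_load"))  # True: vendor extension present
print(has_extension(exts, "GL_ARB_compute_shader"))     # False: take a fallback path
```

Code that skips this check and hardwires one vendor's extensions is exactly what breaks on the other vendor's otherwise-conformant driver.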
Originally posted by Qaridarium
The reference design of the 480 is very good. It has a relatively large exhaust output at the left end thanks to the small display connectors (no big DVI), and it even has an air intake hole at the top right of the card, which makes it a good CrossFire card in small cases. In fact, it is a high-end design.
So I fixed it with some cyanoacrylate; now the third arm is at the same level as the other two and the card is working fine. Still a disappointment: I'm not a fan of their design, and I'm also unsure how this defect was not caught in QA, considering the violent noise the card made.
Last edited by cde1; 02 July 2016, 08:24 AM.
Well, "running" is a vague description; somewhere it "runs". But I see no difference from a closed-source firmware stored in the hardware itself. It just has something to do with the Debian or FSF definition of non-free. But current hardware needs that, so take it or use legacy hardware...
Originally posted by efikkan View Post
Most rational people want something that actually works well. When people have to run a firmware blob anyway, a kernel blob is not that big of a deal.
Originally posted by pszilard View Post
So the AMDGPU-PRO stack will always go through the HSAIL phase while the ROC stack does (will) support both?
Originally posted by pszilard View Post
How does the intermediate step impact code generation/optimizations? In particular (admittedly I'm not very familiar with the clang internals), I wonder how these two paths affect the late optimizer stages and their ability to generate lean code and avoid irreversible transformations in intermediate phases (which AFAIK were/are partly the reason for the previous OpenCL compilers' awful register management, and something NVIDIA's compiler also suffered from badly in the past)?
Having two levels of IR (one source-oriented and one target-oriented) is one solution, but even that gets less than ideal when either the source or the target is changing rapidly. It's probably fair to say that the source changes less (some variant of C++) and the target changes more (OMG) these days. The other complication is the usual huge gap between processor speeds and memory latencies, which is managed pretty well in single-threaded environments (caches tend to be big enough for working sets these days) but becomes more complex in highly parallel implementations, where you need to fit (#threads × working set of registers & heavily used variables) into RF+cache or performance plummets.
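The register-pressure point above can be made concrete with some back-of-envelope arithmetic. This is only a sketch under assumed numbers (the 256 KiB register file size and per-thread register counts are illustrative, not figures for any real GPU), showing why a compiler that wastes registers directly caps how many threads can stay resident:

```python
# Sketch: how many threads can keep their register working set resident.
# If this count drops too low, the hardware cannot hide memory latency
# with other threads and performance plummets -- hence the emphasis on
# lean register allocation in the late optimizer stages.
def max_resident_threads(register_file_bytes, regs_per_thread, bytes_per_reg=4):
    # Each thread permanently holds regs_per_thread 32-bit registers.
    return register_file_bytes // (regs_per_thread * bytes_per_reg)

rf = 256 * 1024                        # assume 256 KiB of registers per compute unit
print(max_resident_threads(rf, 32))    # lean code, 32 regs/thread  -> 2048 threads
print(max_resident_threads(rf, 128))   # register-hungry code       -> only 512
```

A 4× increase in per-thread register use cuts the resident thread count by 4×, which is exactly the kind of irreversible loss the quoted question about register management is worried about.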
Originally posted by pszilard View Post
BTW, is there a roadmap for the two stacks, in particular the ROC + Lightning _stable_ release?
- get it running and enabled as an option (done),
- live with it and finish solutions for the downsides of direct-to-ISA, like portability of compiled code and HSAIL dependencies in tools (in progress),
- change the HCC default to the Lightning path when the first two tasks are completed.
Each of the first two steps is pretty time-consuming, although the third is quick and easy.
OpenCL basically tracks the Lightning roadmap, so it is using HSAIL today (probably adding an option for the Lightning path early, if it's not there already) and changing the default to Lightning when it is felt to be ready.
Last edited by bridgman; 01 July 2016, 10:49 PM.