AMD Radeon RX 480 On Linux

  • SystemCrasher
    replied
    Either way, congrats to AMD. They've fulfilled their promise and the new AMD GPU got supported in Linux right at launch day. Nearly the first time in AMD's history, eh? Not to mention the open-source drivers were kicking ass for sure. In some games and in GpuTest they outperformed the "pro" thing and were very competitive.

    I think everyone following Linux GPU driver development has to admit AMD did a really good job and that the state of their Linux drivers has improved greatly.

  • Passso
    replied
    Originally posted by cde1 View Post

    Is it? I removed the cover to try to find the origin of my problem. It turns out the fan has a plastic base that is screwed to the card in three places. However, this base itself is only attached to the motor's PCB at two points.
    Yes it is, for any blind fanboy who did not buy it.

  • Ansla
    replied
    Originally posted by efikkan View Post
    Most rational people want something that actually works well. When people have to run a firmware blob anyway, a kernel blob is not that big of a deal.
    GTX 1060 will be available in 1-2 weeks, so stay tuned.
    Firmware blobs don't taint your kernel. Binary-only userspace (Steam games) doesn't either. Only the NVIDIA driver does that.
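    For what it's worth, this is easy to verify: the kernel exposes its taint mask in /proc/sys/kernel/tainted, and bit 0 is the proprietary-module ("P") flag that an out-of-tree binary driver sets, while loaded firmware blobs leave it alone. A minimal sketch in Python, assuming a standard /proc layout:

    Code:
    # Read the kernel taint bitmask and report whether a proprietary
    # module (taint bit 0, the "P" flag) has been loaded.
    with open("/proc/sys/kernel/tainted") as f:
        taint = int(f.read().strip())

    if taint == 0:
        print("Kernel is not tainted.")
    elif taint & 1:
        print("Kernel is tainted by a proprietary module (e.g. a binary GPU driver).")
    else:
        print(f"Kernel is tainted for other reasons (mask {taint:#x}), "
              "but not by a proprietary module.")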

  • dungeon
    replied
    Originally posted by Kano View Post
    Well "running" is a vague description, somewhere it "runs". But I see no diff to a closed source firmware which was stored in the hardware itself. It has just something to do with the Debian or FSF definition of non-free. But current hardware needs that, so take it or use legacy hardware...
    Whatever software is published only as a blob is non-free. But Debian's approach is simply to ship blob firmware disabled by default (so it suits both worlds: leave it disabled and you are in world one, enable it and you are in world two), while FSF-endorsed kernel distros prevent users from using those blobs at all and terminate any sniff of blob marketing in those distros.

    The difference is that Debian does not prevent the user from going there, while the FSF likes to prevent the user from going there. So it is "the option is not a problem" vs "the option is a problem".

    The FSF likes to endorse distros which have the maximum possible prevention at the time, with no blobby option at all, so I guess if someone further in the future makes a distro which even prevents the user from running any blob-only software, the FSF will start to endorse only those distros and drop the ones that are currently endorsed.
    Last edited by dungeon; 04 July 2016, 02:04 AM.

  • artivision
    replied
    Originally posted by efikkan View Post
    Most rational people want something that actually works well. When people have to run a firmware blob anyway, a kernel blob is not that big of a deal.
    GTX 1060 will be available in 1-2 weeks, so stay tuned.
    We never said that Nvidia must give us an open graphics driver like RadeonSI, so their comments versus Linus's middle finger over patents and the best closed graphics driver are irrelevant. We wanted an open kernel driver like amdgpu to cover open power management, slots and other problems. Imagine if you bought an Intel CPU and could only run it at 1 GHz, and then Intel told you they would give you documents to code the overclocking yourself.

    I just want you to understand when a comment is that stupid, and that it really annoys people.

  • fuzz
    replied
    Originally posted by efikkan View Post
    Most rational people want something that actually works well. When people have to run a firmware blob anyway, a kernel blob is not that big of a deal.
    GTX 1060 will be available in 1-2 weeks, so stay tuned.
    The open-source drivers run without kernel version problems or DKMS compilation issues and work out of the box. No hassle. No mess. This works best. It's the holy grail of "it just works!"

    Developers just need to stop using non-standard OpenGL.
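    One practical way to do that is to query what the driver actually advertises instead of assuming vendor-specific behaviour. A rough sketch, assuming glxinfo from mesa-utils is installed and using GL_ARB_buffer_storage purely as an example extension:

    Code:
    import subprocess

    def has_gl_extension(name: str) -> bool:
        """Check whether the active GLX/OpenGL driver advertises an extension."""
        out = subprocess.run(["glxinfo"], capture_output=True,
                             text=True, check=True).stdout
        return name in out

    # Example: prefer a standard path when the extension is missing.
    if has_gl_extension("GL_ARB_buffer_storage"):
        print("Persistent-mapped buffers available.")
    else:
        print("Extension not advertised; fall back to the core-profile path.")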
    Last edited by fuzz; 02 July 2016, 11:39 AM.

  • cde1
    replied
    Originally posted by Qaridarium

    The reference design of the 480 is very good. It has a relatively big exhaust opening on the left-hand side due to the small display connectors (no big DVI), and it even has an air intake hole in the top right of the card, which makes it a good CrossFire card in small cases. In fact it is a high-end design.
    Is it? I removed the cover to try to find the origin of my problem. It turns out the fan has a plastic base that is screwed to the card in three places. However, this base itself is only attached to the motor's PCB at two points. On my card the third "arm" of the plastic base was broken (which is easy, as it is not attached to the PCB), so the whole fan was really only held to the card by two screws, leaving one side protruding, rattling against the cover extremely loudly, and barely spinning.

    So I fixed it with some cyanoacrylate; now the third arm is level with the other two and the card is working fine. Still a disappointment: I'm not a fan of their design, and I'm also unsure how this defect was not caught in QA, considering the violent noise the card made.
    Last edited by cde1; 02 July 2016, 08:24 AM.

  • Kano
    replied
    Well "running" is a vague description, somewhere it "runs". But I see no diff to a closed source firmware which was stored in the hardware itself. It has just something to do with the Debian or FSF definition of non-free. But current hardware needs that, so take it or use legacy hardware...

  • bridgman
    replied
    Originally posted by efikkan View Post
    Most rational people want something that actually works well. When people have to run a firmware blob anyway, a kernel blob is not that big of a deal.
    Sure, except of course nobody is actually "running a firmware blob"; they are loading hardware microcode into the GPU at power-up.
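    If anyone wants to see what that microcode actually is, the kernel module declares the firmware images it will upload, and modinfo can list them. A small sketch, assuming the amdgpu module is present (module name and firmware directory may vary by distro and kernel):

    Code:
    import subprocess

    # List the firmware (GPU microcode) files declared by the amdgpu kernel module.
    # These images are uploaded to the GPU at initialization; the CPU never runs them.
    fw = subprocess.run(["modinfo", "-F", "firmware", "amdgpu"],
                        capture_output=True, text=True, check=True).stdout.split()

    for name in sorted(fw):
        print(f"/lib/firmware/{name}")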

  • bridgman
    replied
    Originally posted by pszilard View Post
    So the AMDGPU-PRO stack will always go through the HSAIL phase while the ROC stack does (will) support both?
    Sorry, when I talked about OpenCL moving to ROCm and Lightning I meant "OpenCL in the AMDGPU-PRO stack".

    Originally posted by pszilard View Post
    How does the intermediate step impact code generation/optimizations? In particular (admittedly I'm not very familiar with the clang internals) I wonder how these two paths affect the late optimizer stages and their ability to generate lean code and avoid irreversible transformations in intermediate phases (which AFAIK were/are partly the reason for the previous OpenCL compilers' awful register management, and something that NVIDIA's compiler also suffered from badly in the past)?
    That's pretty much the big hairy question of the compiler world AFAICS. Winning with toolchains seems to be simple in principle - identify the perfect IR from both code and dev tool perspective, implement toolchains around that IR, profit - but in practice there seem to be conflicting pressures on the IR from the code (complex & non-flat is better) and dev tool (simple and flat is better) perspective and the representations on both sides of the IR keep changing over time.

    Having two levels of IR (one source-oriented and one target-oriented) is one solution but even that gets less than ideal when either source or target are changing rapidly. It's probably fair to say that source changes less (some variant of C++) and target changes more (OMG) these days. The other complication is the usual huge gap between processor speeds and memory latencies, which is managed pretty well in single-thread environments (caches tend to be big enough for working sets these days) but which becomes more complex in highly parallel implementations where you need to fit (#threads x working set of registers & heavily used variables) into RF+cache or performance plummets.
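    To make that last point concrete, here is a rough back-of-the-envelope sketch of the register-pressure side of that trade-off, using GCN-like numbers (64 KiB vector register file per SIMD, 64-wide wavefronts, 32-bit registers, at most 10 waves per SIMD) purely as assumptions:

    Code:
    # Rough occupancy estimate: how many wavefronts fit in one SIMD's register file
    # for a given per-thread register count. All hardware numbers are assumptions.
    RF_BYTES_PER_SIMD = 64 * 1024   # vector register file per SIMD
    WAVE_SIZE = 64                  # threads per wavefront
    REG_BYTES = 4                   # one 32-bit VGPR
    MAX_WAVES = 10                  # cap on resident waves per SIMD

    regs_per_lane = RF_BYTES_PER_SIMD // (WAVE_SIZE * REG_BYTES)  # 256 registers/lane

    for vgprs_per_thread in (24, 64, 128, 200):
        waves = min(MAX_WAVES, regs_per_lane // vgprs_per_thread)
        print(f"{vgprs_per_thread:3d} VGPRs/thread -> {waves:2d} waves/SIMD "
              f"({waves * WAVE_SIZE} threads available to hide latency)")

    The drop from 10 resident waves at 24 registers per thread to 2 at 128 is the "performance plummets" cliff: fewer in-flight waves means far less latency hiding.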

    Originally posted by pszilard View Post
    BTW, is there a roadmap for the two stacks, in particular the ROC + Lightning _stable_ release?
    Roadmap for ROC+Lightning is pretty simple:

    - get it running and enabled as an option (done),
    - live with it and finish solutions for the downsides of direct-to-ISA like portability of compiled code, and HSAIL dependencies in tools (in process)
    - change HCC default to Lightning path when first two tasks are completed

    Each of the first two steps is pretty time-consuming, although the third is quick and easy.

    OpenCL basically tracks the Lightning roadmap, so it is using HSAIL today (probably adding an option for the Lightning path early, if it isn't there already) and will change the default to Lightning when it is felt to be ready.
    Last edited by bridgman; 01 July 2016, 10:49 PM.
