No, AMD Will Not Be Opening Up Its Firmware/Microcode


  • #81
    This must be a good silly-season article, to spark the same discussion over and over again, judging by the number of posts. For me a driver just has to work correctly, no matter whether it needs non-free parts or not. As long as it is allowed to ship those files (look at Broadcom b43 for the opposite case), all is fine. If a driver requires firmware that is not in the linux-firmware git tree, then that should be discussed. It is nonsense to praise firmware stored inside the hardware over firmware loaded from disk. A graphics card certainly needs a (small) part to provide basic VESA/GOP functions directly, but the rest can be loaded later. Every device that does not need to initialize during POST can receive its firmware when the driver is loaded. The number of open-source firmware images for common devices like DVB adapters must be very small, if there is even one, but nobody seems to discuss this. So better to focus on more important things...



    • #82
      Is this where the cool kids hang out?



      • #83
        bridgman,

        So if I am understanding this correctly before asking a question. Correct me if I misunderstand.
        fglrx/xf86-video-amdgpu? = closed-source blob from the GL implementation down to microcode/firmware
        RadeonHD/xf86-video-radeonhd = FOSS, except for NVRAM-type microcode blobs
        radeon/xf86-video-ati/mesa/kernel modesetting = FOSS down to NVRAM-type microcode (i.e. register interactions at the hardware level), prior to the 4/5xxx series cards
        intel/xf86-video-modesetting/i915/i965 = very much closed at the microcode/firmware level until after the GMA 45xx series, when firmware-level documents started being published for possible reverse engineering.

        So my question is: why would the politicos at AMD direct their engineers (like yourself?) to move the DRM'd 'copyrighted materials' into the microcode, when they could just stick to the whole '(in)security by obscurity' model that the fglrx layer has been following for a while now?

        They seem to have you guys FOSS'ing the GL layer and part of the kernel layer while keeping everything else under tight wraps. The whole xf86-video-ati/xf86-video-radeonhd debacle seems to centre on the DRM'ing of the firmware implementation rather than just the hardware board spec level. Which seems quite odd: even if the firmware were yours to do with as you want, there could still be hardware quirks you would never find out about, short of reverse engineering the board (oh my, the lawsuits?).

        Also to libv
        (partialsarcasm)
        Why not just reverse engineer the hardware using a good old oscilloscope and make your own spec sheets? From there you could write a generic init of the hardware for that generation of GPU. Oh, and after that, somehow test it on each GPU of that generation. Then find a way to flash it using the many flash tools out there. Oh, and then make it feasible for a noobish end user to do, or for a distro to automate. Seems easier than complaining to some giant corporation's NDA'd engineer who seems to want, but is not allowed, to help you. (endpartialsarcasm)

        P.S. first post ever >.>



        • #84
          Originally posted by Qaridarium

          What a pile of bullshit... you cannot solve every problem with "more cores"; some problems cannot scale with more cores.

          And for this kind of problem, his point is true.
          Just because your usual consumer software is trash doesn't mean the same applies to all other software. It is true that you can't offload some tasks onto multiple cores, but that is either because of bad file formats, bad API design, or because the application wasn't developed with multithreading in mind.



          • #85
            Hi,

            Not sure if this is the right place to discuss it.

            I'm not on my home PC, so I might be slightly off here.

            The quirks list where my card was added is in drivers/gpu/drm/radeon/radeon_device.c:

            static struct radeon_px_quirk radeon_px_quirk_list[] = {
            ...
            { PCI_VENDOR_ID_ATI, 0xXXXX, 0xXXXX, 0xXXXX, RADEON_PX_QUIRK_DISABLE_PX },
            ...
            };

            This entry disables runtime power management (PX) on the matching card (RADEON_PX_QUIRK_DISABLE_PX).

            You can add your own device and rebuild the kernel. You need the PCI IDs (obtained via lspci -v -n). If this fixes your issue, send a patch upstream.

            --Coder

            Originally posted by RussianNeuroMancer View Post
            Ok, I don't have dynpm issues here; dynpm works for me. However, runpm is broken for me too, on an Acer 7560G with 6620G+6650M. I didn't even research this properly, because I thought the motherboard was broken in some funky way.

            Here is what I saw on my laptop:
            When manual switching via vgaswitcheroo was implemented and became available in regular kernels, it worked for me and was able to power on the dGPU properly.
            Some time later, maybe when runpm shipped, I am not sure, it stopped working, and I even filed a bug report about this on freedesktop.org, which turned out to be probably invalid.
            Now it gets interesting: it turned out to be invalid because Alex asked me to check fglrx, and fglrx wasn't able to power on the disabled dGPU either! That worked earlier with a previous fglrx (as with earlier vgaswitcheroo), but since my motherboard was replaced once, and the new motherboard shipped with BIOS 2.0 which is not even published on Acer's web site, I conclude that the issue is more likely on the motherboard side (hardware or BIOS) than in both the kernel and fglrx.

            Now I don't even know what to think about this.



            • #86
              Originally posted by bridgman View Post
              Luc, one more time... I never even *saw* your proposal until ~9 months after the project started. We have discussed this multiple times.

              I wrote the proposal that was approved and funded, with input from GPU-side architects plus some CPU-side people also named John (presumably their input was based on your proposal). At a high level the two proposals were pretty much the same (enough for SUSE to be told their proposal was going ahead), but they actually differed in a number of ways (3D focus, use of AtomBIOS, AMD hiring developers, AMD sanitizing & releasing docs, etc.) that would only be obvious to someone who *really* cared about those details.

              Obviously if I had *known* there were two different proposals I would have discussed that with you at the start, but I only found out about the detailed SUSE proposal much later, and didn't find out that SUSE had been told their proposal was accepted until well into 2008, during a call with your VP. It didn't seem like anybody screwed up, just the kind of unfortunate disconnect that sometimes happens if you have enough people involved.

              I don't remember what my first thought was when I found out (other than "oh crap") but "no wonder Luc was angry all the time" had to have been right up there. You can continue to paint me as some kind of super-villain for as long as you want but it's been almost 10 years... probably time to give it up.
              Jon,

              I never bought that excuse. Whether you had seen our proposal or not, you knew from the start that SuSE was an equal partner in this, and that AMD paid for just half of our development time. You knew from the start that we were making technical decisions on our own, ignoring ATI as much as we could, but dead set on providing the best technical solution for both the open-source world and AMD for this hardware. We definitely never were contracting slaves, and your claim that you had that epiphany in May 2008 at the SuSE office in Prague, coincidentally after seriously getting your ass kicked by SuSE and AMD management, is just not very credible. The fact that you continued your games afterwards (like that new AtomBIOS interpreter that you, totally out of character, offered us on your own, only to see Red Hat release KMS code with it 1.5 hours later; and that's just the most egregious of a whole set) also seriously undermines your statement there.

              You played a double game from at least September 2007, and you were always on the handbrake in the two preceding months as well. Since you were, and still are, an ATI employee, you stuck to the party line. My feeling is that we only got those docs out of you because AMD management forced you/ATI to release something to stick to the proposal and the contracts that were in place between AMD and SuSE. From where I sat, you were little help with the questions we had on top of the information we got from the register docs and from looking at AtomBIOS. At best you appeared to be either misleading or stalling. Then there was this whole nonsense about AtomBIOS "scripts" and "legacy" C code, and your statement that Alex was unable to understand the structure of the RadeonHD driver. Those were plain lies. And you know what, the few times we had a constructive Thursday call were when just us three SuSE guys, the TPMs from SuSE and AMD, and only Alex were on the call. All of a sudden we were able to talk constructively about technical things.

              Your statement here and your statement in May 2008 only reinforce my view that you were working toward a perpendicular goal from the start (my instincts told me this the first time you were in the same phone conference, but back then my peers at SuSE still did not want to believe me). It also does nothing to acquit you in this whole story. You played a pivotal and rather Machiavellian role in creating and/or sustaining the current situation of severely non-free and technically inferior drivers.

              But the thing is, we were prepared for you and ATI, and we were able to deal with you and ATI (otherwise no driver would even have made it into the open, for instance). What we were unable to deal with was that the people from Red Hat and some community members sided with you so easily. The likes of Dave Airlie, Benjamin Herrenschmidt, Adam Jackson, Daniel Stone, Matthew Garrett, Shawn Starr (and I am forgetting quite a few names here), and even Alex, all knew very well what they were doing. This went from bad technical solutions (from AtomBIOS functions, to PLL calculations, to i2c block usage, to DVI hotplug: the fork clearly followed the ATI party line and actively chose technically inferior solutions), to shit-throwing that would make US presidential candidates blush, to abuse of power (which culminated in the radeonhd repo being vandalized, but there is a ton of other examples too). They were all very well aware of what they were doing.

              If these guys had been morally and intellectually honest, then we could have had (at worst) very minimal dependence on firmware and microcode today, and we would have had the start of a Vulkan implementation six months ago.

              You know, I had an epiphany too, a few years ago. I was wrong about one thing in this whole AMD story: I was wrong about AtomBIOS.

              I had promised Egbert in July 2007 that AtomBIOS would be dead within two years. I was proven very wrong, and the above partially explains why. A bigger part of the reason is of course that AMD lost the internal AMD/ATI power struggle (which heavily influenced the above, and which the above was also influencing). But when I am honest, I cannot blame any of those reasons. I know that I am to blame for the continued existence of AtomBIOS. I had the means to kill it, with fire. And all I would have had to do was simply embrace it, and claim that it was the best thing since boiled water. Then the same people I listed above would have gone and created a properly open implementation on top of the (then still rather simplistic) avivo driver. I sadly do not work/think like this, and a part of me regrets missing that unique opportunity.



              • #87
                Originally posted by notanoob View Post
                (partialsarcasm)
                Why not just reverse engineer the hardware using a good old oscilloscope and make your own spec sheets? From there you could write a generic init of the hardware for that generation of GPU. Oh, and after that, somehow test it on each GPU of that generation. Then find a way to flash it using the many flash tools out there. Oh, and then make it feasible for a noobish end user to do, or for a distro to automate. Seems easier than complaining to some giant corporation's NDA'd engineer who seems to want, but is not allowed, to help you. (endpartialsarcasm)
                Notanoob. (seriously) If there is one person who, repeatedly and consistently, went and proved that there are logical, manageable and productive ways of getting rid of binary blobs in the graphics world, then it is me. And all I did was follow my instincts, and apply logic and some elbow grease. In doing so I have, time and time again, created the circumstances and the prerequisites for the careers of a large portion of the people currently employed in Linux graphics today. I am continuously amazed at how much crap I personally tend to get for this, though (and I really was kidding myself with Mali/Lima that it would be different that time round).



                • #88
                  Originally posted by tigerroast View Post
                  But why not though?
                  "Of course there's no reason for it, it's just our policy." (unknown)



                  • #89
                    Originally posted by karolherbst View Post

                    Just because your usual consumer software is trash doesn't mean the same applies to all other software. It is true that you can't offload some tasks onto multiple cores, but that is either because of bad file formats, bad API design, or because the application wasn't developed with multithreading in mind.
                    You're wrong. There are problems that aren't parallelizable. This has nothing to do with API design or file formats.



                    • #90
                      Originally posted by karolherbst View Post
                      If you can't scale vertically (higher freqs) you simply scale horizontally (more cores), so how does this help with anything you just said?
                      I don't get the discussion. The original assumption by MaxToTheMax was that the fabs are reaching technical/physical limits, and that therefore scaling horizontally is not possible anymore either.

