AMD's Raven Ridge Botchy Linux Support Appears Worse With Some Motherboards/BIOS


  • #61
    Originally posted by bridgman View Post

    Huh? It "couldn't test OS-specific paths" because they weren't there - most of the code was common (albeit at the price of being abstracted away from the OS code).



    You quoted a lot of text so not sure what "that" refers to... but guessing you mean "isn't the fact that most of the common code development is done on other OSes the root of the problem?".

    If so, then the answer is "what did you expect?". The whole idea behind using DC in the Linux driver was to leverage testing done on other OSes with much larger market shares and hence much larger engineering budgets.
    No, you're not understanding my point. DC had a lot of abstraction layers that implemented a well-tested interface... Bugs in that case are not likely to be in the interface, but rather in the abstraction layers. The OS-specific paths still seem to implement the same interface. That interface is still gonna be tested, but rather than hitting buggy abstraction layers, calls hit OS paths instead, which are themselves well tested.

    EDIT: And this is where the comparison to fglrx comes in: by at least one account I can remember, it was a spaghetti bowl of abstraction of enormous size. A completely incomprehensible mess. Nobody wants that situation on Linux. It's obviously AMD's development style though, because DC's initial release showed the same thing.
    Last edited by duby229; 19 February 2018, 06:13 PM.

    Comment


    • #62
      Originally posted by zoomblab View Post
      From what you're saying, you had to let go of reusable and tested code that was supposedly working on multiple other systems, in order to please the Linux developers.
      Effectively yes, although it was anticipated and not unreasonable.

      The Linux kernel model is based around the idea of community maintenance rather than vendor commitments to maintain specific code, and as a consequence code needs to be maintainable by community developers from the start, even in cases where there is an obvious vendor commitment and where "maintainable by the vendor" might lead to a different architecture than "maintainable by the community". In the first case the biggest gain comes from having the code shared across OSes/platforms with minimal changes, but in the second case the biggest gain comes from "having the code look like the rest of Linux".

      Originally posted by zoomblab View Post
      Looks like you were forced to create a fork of your code. That must have been really disturbing for your engineers.
      It was more disturbing for their managers - the engineers working specifically on Linux had a pretty good idea what to expect.

      Originally posted by zoomblab View Post
      Do you think that melding process bore any benefits for the code on the other platforms as well?
      Certainly any bugs that were found & fixed in the early code would benefit other platforms as well, and once the code had been partially refactored it became possible for community developers (particularly Dave) to spend time in the code and find/fix issues. I don't have a good handle on what percentage of those changes could be carried back to common code vs "finding issues in the code we had just been required to implement", but off the top of my head maybe half of the changes would be broadly useful.

      Overall I think the exercise was beneficial - it's just the timing that was really awkward.
      Last edited by bridgman; 19 February 2018, 06:56 PM.

      Comment


      • #63
        Originally posted by duby229 View Post
        No, you're not understanding my point. DC had a lot of abstraction layers that implemented a well-tested interface... Bugs in that case are not likely to be in the interface, but rather in the abstraction layers. The OS-specific paths still seem to implement the same interface. That interface is still gonna be tested, but rather than hitting buggy abstraction layers, calls hit OS paths instead, which are themselves well tested.
        Still not understanding - the abstraction layers ARE the interfaces.
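
        To make the terminology concrete, here is a rough userspace sketch of the pattern (hypothetical names, nothing like the actual DC source): the shared code calls OS services only through a table of function pointers, and each OS supplies its own implementations behind that table - so the table is the interface, and the per-OS code behind it is the abstraction layer.

        /* abstraction_sketch.c - illustration only, not real DC code. */
        #include <stdio.h>
        #include <stdlib.h>

        struct os_services {
            void *(*mem_alloc)(size_t size);   /* a Linux build would wrap kzalloc() */
            void  (*mem_free)(void *ptr);      /* ... kfree() */
            void  (*log)(const char *msg);     /* ... DRM_INFO() */
        };

        /* One per-OS implementation; plain libc stands in for "an OS" here. */
        static void *libc_alloc(size_t size)   { return calloc(1, size); }
        static void  libc_free(void *ptr)      { free(ptr); }
        static void  libc_log(const char *msg) { fprintf(stderr, "dc: %s\n", msg); }

        static const struct os_services libc_services = {
            .mem_alloc = libc_alloc,
            .mem_free  = libc_free,
            .log       = libc_log,
        };

        /* "Shared" code: identical on every OS, never calls OS APIs directly. */
        static void shared_display_init(const struct os_services *os)
        {
            void *state = os->mem_alloc(64);
            os->log(state ? "display core initialized" : "allocation failed");
            os->mem_free(state);
        }

        int main(void)
        {
            shared_display_init(&libc_services);
            return 0;
        }

        Swap the libc implementations for kernel wrappers and the shared function does not change - that is the part which gets tested on every OS.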

        Comment


        • #64
          bridgman
          Thank you

          Comment


          • #65
            Originally posted by bridgman View Post

            Still not understanding - the abstraction layers ARE the interfaces.
            Which you said are not tested with a Linux kernel. That's the point.

            Comment


            • #66
              Originally posted by duby229 View Post
              Which you said are not tested with a Linux kernel. That's the point.
              I said nothing of the sort.

              What I said was that when we replace shared code with Linux-specific code, that portion of the code now just gets "Linux" testing rather than benefitting from testing done on all the other platforms as well. The idea of using shared code was to be able to leverage testing done on all platforms (some with bigger market shares and hence bigger engineering budgets), i.e. with testing from 100% of the market rather than just <Linux %>.
              Last edited by bridgman; 19 February 2018, 07:07 PM.

              Comment


              • #67
                Originally posted by agd5f View Post
                Not exactly true. There are physical requirements for various HDMI versions. The OEM has to validate the port for the version of HDMI they want to support. The driver checks the connector tables in the BIOS provided by the OEM to determine what connectors are present and what they support.
                For some months now, the Linux driver has checked those tables and refuses to utilize HDMI 2.0 features offered by the GPU/APU, even if they work just fine and the manufacturer was merely too lazy to "validate" the port or to update its BIOS image.

                Meanwhile the Windows drivers don't give a damn about that BIOS table, and HDMI 2.0 works as advertised for the affected GPUs.

                This is clearly (like with ACPI parsing for so many years) another case of "Linux kernel developers want to do 'the right thing' according to the written specification", with bad results for Linux users, because neither the Windows drivers nor the firmware follow that specification.
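
                Roughly, the disputed check boils down to something like the following (a simplified, self-contained sketch with made-up names, not the actual amdgpu code):

                /* hdmi_gate_sketch.c - illustration only; field names invented. */
                #include <stdbool.h>
                #include <stdio.h>

                #define HDMI14_MAX_TMDS_CLOCK_KHZ 340000  /* above this needs HDMI 2.0 (6 GHz) */

                struct bios_connector_caps {
                    bool hdmi_6g_en;  /* flag from the OEM's BIOS connector table */
                };

                /* Return true if the requested mode may be driven on this connector. */
                static bool mode_allowed(const struct bios_connector_caps *caps,
                                         unsigned int tmds_clock_khz)
                {
                    /* The disputed behaviour: even when the GPU/APU supports HDMI 2.0,
                     * the mode is rejected because the BIOS table does not mark the
                     * port as validated for 6 GHz operation. */
                    if (tmds_clock_khz > HDMI14_MAX_TMDS_CLOCK_KHZ && !caps->hdmi_6g_en)
                        return false;
                    return true;
                }

                int main(void)
                {
                    struct bios_connector_caps lazy_oem = { .hdmi_6g_en = false };

                    /* 4K @ 60 Hz 8-bit RGB needs a ~594 MHz TMDS clock. */
                    printf("4K60 allowed: %s\n",
                           mode_allowed(&lazy_oem, 594000) ? "yes" : "no");
                    return 0;
                }

                What the silicon can actually do never enters into it - the single BIOS flag decides.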

                Comment


                • #68
                  Originally posted by Spazturtle View Post
                  All the people testing it and saying HDMI 2.0 works are on Windows, so I guess the Windows driver is just ignoring the connector table.
                  This is exactly what happens. It has been discussed in https://bugs.freedesktop.org/show_bug.cgi?id=102820 - and I guess for the next few years people using amdgpu will need to revert the commit named "drm/amd/display: Block 6Ghz timing if SBIOS set HDMI_6G_en to 0" from their local kernels in order to have working HDMI 2.0 support. Because neither the GPU board manufacturers nor Microsoft will bother to change their "not spec compliant" implementations.
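
                  For anyone wanting to do that locally: find the commit in your kernel tree by its subject line and revert it (the hash differs between trees, so look it up first):

                  git log --oneline --grep="Block 6Ghz timing" -- drivers/gpu/drm/amd
                  git revert <commit hash found above>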

                  Comment


                  • #69
                    Originally posted by Spazturtle View Post
                    They do have HDMI 2.0, even if they don't list it. The port on the board is literally just that, the physical port; it is wired to the APU, so if the APU supports HDMI 2.0 then the port is an HDMI 2.0 port.

                    With AM4, which HDMI or DisplayPort version is supported is dictated by the APU alone; the chipset on the motherboard has nothing to do with it.

                    Originally posted by agd5f View Post

                    Not exactly true. There are physical requirements for various HDMI versions. The OEM has to validate the port for the version of HDMI they want to support. The driver checks the connector tables in the BIOS provided by the OEM to determine what connectors are present and what they support.
                    agd5f is incorrect. HDMI 2.0 has worked flawlessly on all AM4 motherboards that have been tested thus far. Proof:
                    (UPDATE! LINUX USERS, PLEASE READ THE WARNING AT THE BOTTOM OF THE POST. WINDOWS USERS REMAIN UNAFFECTED.) Fantastic news for Raven Ridge! Thanks to an adventurous redditor, it turns out that in at least one case HDMI 2.0 capability is, in fact, not dictated by the chipset but the processor...

                    Comment


                    • #70
                      Originally posted by Hifihedgehog View Post
                      agd5f is incorrect. HDMI 2.0 has worked flawlessly on all AM4 motherboards that have been tested thus far. Proof:
                      https://smallformfactor.net/forum/th...gathread.6709/
                      Except, as dwagner says above, Linux actually checks the BIOS tables.

                      Comment
