
The Significant Corporate Importance & Pressure Around Mesa Open-Source Linux 3D Drivers


  • #31
    Originally posted by mdedetrich View Post
    [...] Linux being a monolithic kernel means that it doesn't have stable ABIs for things like graphics card drivers, which matters when, you know, you buy a new graphics card and need the latest driver, but for obvious reasons you don't want to be forced onto the latest Linux kernel (which tends to be less tested than older, more stable versions).

    This problem is less of an issue in locked-down systems where the vendor has almost complete control over the hardware (e.g. Steam Deck, Android phones), but you can't avoid it with laptops/desktops where the hardware configuration can be anything.
    Except most computers are laptops, and most laptops never receive GPU upgrades, so this can't be the major reason for lagging Linux adoption.



    • #32
      Originally posted by jabl View Post
      Don't know about SPECViewPerf but at least SPECCPU requires the benchmark sources to be unmodified, to prevent vendors from cheating. I would guess there are similar rules for SPECViewPerf as well. If so, no you can't patch SPECViewPerf and submit scores obtained with the patched version.
      But zlib is not part of the sources of SPECViewPerf; it's part of its binary distribution. Deleting libz.so.* from SPECViewPerf works around the issue, just as not shipping it in the first place or statically linking it would have. "Patching" in this sense has nothing to do with modifying the sources of SPECViewPerf. Even if SPECViewPerf fixes this internally, it still won't help with the old releases of SPECViewPerf that are in circulation.

      Sometimes you have to deal with this kind of cr*p from closed source apps.



      • #33
        Originally posted by Serafean View Post
        Surprise surprise, the guys discussing in the MR know what they're talking about.

        Vendoring zlib would result in distros unvendoring it anyway, making the issue come back. (not to mention the CVE resolution nightmare)
        Reporting the issue to ViewPerf: using LD_LIBRARY_PATH in a launcher script while bundling system libraries is a good way to end up in hell. (At $dayjob I fought very hard against this approach.)
        While at the same time dancing around the issue that updating something shouldn't break something else.

        This is a shitshow with no good solution.
        I'm sure distros introducing their own flavour of insanity is an absolutely new thing... wait.



        • #34
          Originally posted by zboszor View Post

          But zlib is not part of the sources of SPECViewPerf; it's part of its binary distribution. Deleting libz.so.* from SPECViewPerf works around the issue, just as not shipping it in the first place or statically linking it would have. "Patching" in this sense has nothing to do with modifying the sources of SPECViewPerf. Even if SPECViewPerf fixes this internally, it still won't help with the old releases of SPECViewPerf that are in circulation.

          Sometimes you have to deal with this kind of cr*p from closed source apps.
          By 'similar rules' I meant rules prohibiting modifying the package, details of which obviously would depend on how it's distributed (say, if it's distributed as a binary package then a rule saying you cannot modify the sources is kind of pointless). So deleting libz.so could very well be prohibited.



          • #35
            Originally posted by zboszor View Post

            But zlib is not part of the sources of SPECViewPerf; it's part of its binary distribution. Deleting libz.so.* from SPECViewPerf works around the issue, just as not shipping it in the first place or statically linking it would have. "Patching" in this sense has nothing to do with modifying the sources of SPECViewPerf. Even if SPECViewPerf fixes this internally, it still won't help with the old releases of SPECViewPerf that are in circulation.

            Sometimes you have to deal with this kind of cr*p from closed source apps.
            The funny thing is that had SPECViewPerf actually statically linked zlib, this issue wouldn't exist. The fact that they shipped a few dynamic libraries and then hacked library paths with their product -- because using package managers is a no-no (I understand why... way too many formats to reasonably support) -- is why an unexpected version gets pulled into the symbol table.
            I remember having to delete random .so files from the Steam runtime for exactly the same reasons.

            I mean, they're basically trying to Flatpak themselves without Flatpak.



            • #36
              Originally posted by blacknova View Post

              I'm sure distros introducing their own flavour of insanity is an absolutely new thing... wait.
              Distros unvendoring dependencies is IMO a good thing. It allows for a manageable system, where you can say "all known fixes applied" and actually trust it to be true.
              Every installed product vendoring a dependency adds another party that must update. So instead of updating the system to a fixed (for instance) openssl* and thus making the entire system "fixed", you must wait & hope everyone vendoring openssl updates their product (and potentially inspect the binary/build system to check they actually did update openssl).
              Another bonus is that as the end user you can replace the implementation of the .so without the binary ever knowing**. (eg: gnutls has a .so that implements parts of the openssl ABI. So you could replace openssl with gnutls without any program knowing. AFAIK WolfSSL does the same.)
              It's a question of what you expect from the system: it doing whatever other people want, or what you want; or whether you expect a system at all (instead of a bunch of haphazardly thrown-together binaries).

              *replace openssl with zlib in this case.
              ** funnily enough, this is exactly what SPECViewPerf did to Mesa, and broke it



              • #37
                Mesa is another domino fallen from the corporate control structure. Today, Linux rules the internet, mobile and server space. I think Mesa improved so much while mobile devices gained graphics capabilities on par with the other proprietary codes out there for the desktop. I am so glad to see that the future of software will be controlled by developers with their technical perspectives instead of the skewed interests of corporate penny counters.



                • #38
                  Originally posted by mdedetrich View Post
                  The whole point behind Apple and their devices is that they are vertically integrated, inclusive of hardware, and part of that strategy is that Apple deprecates/removes older stuff and expects developers to play along, which is in stark contrast to Windows, where they bend over backwards for developers. A lot of the problems you mention are a result of developers not updating their software, which has its own pros and cons.
                  Reasonable in isolation, but not in the real world. That's like arguing that it's defunct publishers' fault for not putting out patches for their books when "eyes 2.0" breaks them. Forget EOL, defunct software publishers and abandonware are an unavoidable reality, and some pieces of software are cultural artifacts.

                  Originally posted by mdedetrich View Post
                  I should also remind you that in other cases, i.e. iPhones, Apple has historically had much better device support than any other competitor (i.e. Android); iPhones are known for having a minimum of 5 years of updates, whereas until recently most Android phones were lucky to get more than 2-3 years unless they were Google phones.
                  I don't think we should conflate support between hardware and OS and support between OS and software.

                  Remember that Apple has always been good about Hardware-to-OS support periods within an ISA. (The Macintosh Plus is compatible all the way from System 3.0 to 7.5.5 (January 1986 to whatever month Mac OS 7.6 came out in 1997), not that 7.5.5 will perform well on an original 68000. My hand-me-down MacBook from 2009 came with OS X 10.6 and was supported all the way up to a 2020 security update for 10.13.)

                  If Apple were more willing to commit to longer guaranteed support windows on OS-to-Software compatibility, I wouldn't have as big a problem with ISA changes breaking Hardware-to-OS compatibility.


                  Originally posted by mdedetrich View Post
                  It's not as clear-cut as you are painting it to be, see https://news.ycombinator.com/item?id=39726292 (and FYI, I work with the JVM as part of my full-time job; I am a Scala/Java developer). It seems like Apple was expecting developers to run applications in a more privileged mode, seeing as the JVM is a JIT, but what counts as an application is less clear-cut with the JVM due to the concept of jars conflating libraries with executables/executable code.

                  This is also a pretty bad example because, ironically, the JVM being a virtual machine, a newer version of the JVM can be released which deals with this change in macOS. That's one of the fundamental reasons the JVM is so popular: it abstracts over the OS, so if the OS makes changes like this (it's not just Mac that has broken the JVM in some way), then Oracle and/or any of the other vendors can just release a new version of the JVM, and you don't need to modify the jars/classfiles at all.

                  EDIT: Reading the specifics, it does seem like in this case the macOS change may have to be reverted; either that, or Apple needs to specify a way to propagate the JIT executable mode all the way down to the jars being executed.
                  Which is why I specifically included "and they apparently just treated it like any other minor update, not running it through a preview channel first."



                  • #39
                    The road to Hell is paved with expedient solutions.

                    Accepting expedient solutions sets a precedent that leads to expedient solutions in general being acceptable, so the exception becomes the rule and standard practice.

                    This is probably not the hill to die on, but at what point should one say, "Thus far and no further"?

                    I don't have an answer, but it is something project leads should be thinking about.



                    • #40
                      Originally posted by JustK View Post

                      Except most computers are laptops and most laptops never receive GPU upgrades, so this can't be the major reason for lacking linux adoption.
                      You still have to deal with the problem where, if you buy a new laptop, you are forced to use the newest kernel version to get GPU driver support. And to be frank, I was specifically talking about the Linux desktop.

