The Significant Corporate Importance & Pressure Around Mesa Open-Source Linux 3D Drivers
-
Originally posted by jabl View Post
Don't know about SPECViewPerf, but at least SPECCPU requires the benchmark sources to be unmodified, to prevent vendors from cheating. I would guess there are similar rules for SPECViewPerf as well. If so, no, you can't patch SPECViewPerf and submit scores obtained with the patched version.
Sometimes you have to deal with this kind of cr*p from closed source apps.
- Likes 3
-
Originally posted by Serafean View Post
Surprise surprise, the guys discussing in the MR know what they're talking about.
Vendoring zlib would result in distros unvendoring it anyway, making the issue come back (not to mention the CVE-resolution nightmare).
Reporting the issue to viewperf: using LD_LIBRARY_PATH in a launcher script while bundling system libraries is a good way to end up in hell. (At $dayjob I fought very hard against this approach.)
All while dancing around the issue that updating one thing shouldn't break another.
This is a shitshow with no good solution.
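The LD_LIBRARY_PATH failure mode described above can be sketched in a few lines of shell. This is a sandboxed illustration only: the directory layout and the empty `libz.so.1` file are fabricated, not the real SPECViewPerf install tree.

```shell
set -eu
# A bundle that vendors zlib and prepends its lib/ dir to the library
# search path, the way such launcher scripts typically do:
APP=$(mktemp -d)
mkdir -p "$APP/lib"
touch "$APP/lib/libz.so.1"   # stand-in for a stale vendored zlib
export LD_LIBRARY_PATH="$APP/lib:${LD_LIBRARY_PATH:-}"
# From here on, every library the process loads -- including Mesa, which
# itself depends on the system zlib -- resolves libz.so.1 to the bundled
# copy first. If that copy is older than what Mesa was built against,
# Mesa breaks through no fault of its own.
echo "resolver searches first: ${LD_LIBRARY_PATH%%:*}"
```

The point of the sketch: the launcher changes resolution for the *whole process tree*, not just the app's own libraries, which is why bundling system libraries this way is so hazardous.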
- Likes 1
-
Originally posted by zboszor View Post
But zlib is not part of the sources of SPECViewPerf; it's part of its binary distribution. Deleting libz.so.* from SPECViewPerf works around the issue, just as SPECViewPerf not shipping it in the first place, or statically linking it, would have. "Patching" in this sense has nothing to do with modifying the sources of SPECViewPerf. Even if SPECViewPerf fixes this internally, it still won't help with the old releases of SPECViewPerf already in circulation.
Sometimes you have to deal with this kind of cr*p from closed source apps.
I remember having to delete random .so files from the Steam runtime for exactly the same reason.
I mean, they're basically trying to Flatpak themselves without Flatpak.
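The deletion workaround is essentially a one-liner. A sandboxed sketch, with a hypothetical install tree standing in for the actual bundle layout:

```shell
set -eu
# Fake install tree that vendors zlib alongside other bundled libraries:
APP=$(mktemp -d)
mkdir -p "$APP/lib"
touch "$APP/lib/libz.so.1" "$APP/lib/libz.so.1.2.11" "$APP/lib/libGLEW.so"
# The workaround: remove only the vendored zlib, so the dynamic linker
# falls through to the distro's libz, which has the symbols new Mesa needs.
rm -f "$APP"/lib/libz.so*
ls "$APP/lib"
```

After doing this to a real bundle, running `ldd` on its binary would show `libz.so.1` resolving to the system copy instead of the vendored one, while the rest of the bundled libraries stay untouched.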
- Likes 3
-
Originally posted by blacknova View Post
I'm sure distros introducing their own flavour of insanity is an absolutely new thing... wait.
Every installed product that vendors a dependency adds another party that must ship updates. So instead of updating the system to a fixed (for instance) openssl* and thus making the entire system "fixed", you must wait and hope that everyone vendoring openssl updates their product (and potentially inspect the binary/build system to check that they actually did update openssl).
Another bonus is that, as the end user, you can replace the implementation of the .so without the binary ever knowing**. (E.g. GnuTLS has a .so that implements parts of the OpenSSL ABI, so you could replace openssl with gnutls without any program knowing. AFAIK wolfSSL does the same.)
It's a question of what you expect from the system: doing whatever other people want, or doing what you want; or whether you expect a system at all (instead of a bunch of haphazardly thrown-together binaries).
* Replace openssl with zlib in this case.
** Funnily enough, this is exactly what SPECViewPerf did to Mesa, and it broke Mesa.
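The "inspect the binary" step mentioned above is straightforward on ELF systems. A sketch using standard tooling, assuming a glibc distro with binutils installed; `/bin/ls` stands in for whatever binary you want to audit:

```shell
# List the sonames a binary declares it needs (its DT_NEEDED entries):
readelf -d /bin/ls | grep NEEDED
# And see which file each dependency actually resolves to at load time --
# this is where a vendored copy pulled in via LD_LIBRARY_PATH would show
# up in place of the system library:
ldd /bin/ls
```

For a vendored crypto or compression library specifically, grepping the `ldd` output for its soname tells you in one glance whether the system copy or the bundled copy wins.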
- Likes 4
-
Mesa is another domino fallen from the corporate control structure. Today, Linux rules the internet, mobile, and server space. I think Mesa improved so much while mobile devices gained graphics capabilities on par with the proprietary desktop offerings out there. I am so glad to see that the future of software will be steered by developers with their technical perspectives instead of the skewed interests of corporate penny counters.
- Likes 2
-
Originally posted by mdedetrich View Post
The whole point behind Apple and their devices is that they are vertically integrated, hardware included, and part of that (and hence their strategy) is that Apple deprecates/removes older stuff and expects developers to play along, in stark contrast to Windows, where they bend over backwards for developers. A lot of the problems you mention are a result of developers not updating their software, which has its own pros and cons.
Originally posted by mdedetrich View Post
I should also remind you that in other cases, i.e. iPhones, Apple has historically had much better device support than any other competitor (i.e. Android). iPhones are known for having a minimum of 5 years of updates, whereas until recently most Android phones were lucky to get more than 2-3 years unless they were Google phones.
Remember that Apple has always been good about hardware-to-OS support periods within an ISA. (The Macintosh Plus is compatible all the way from System 3.0 to 7.5.5, January 1986 to whenever Mac OS 7.6 came out in 1997, not that 7.5.5 will perform well on an original 68000. My hand-me-down MacBook from 2009 came with OS X 10.6 and was supported all the way up to a 2020 security update for 10.13.)
If Apple were more willing to commit to longer guaranteed support windows on OS-to-Software compatibility, I wouldn't have as big a problem with ISA changes breaking Hardware-to-OS compatibility.
Originally posted by mdedetrich View Post
It's not as clear-cut as you are painting it to be; see https://news.ycombinator.com/item?id=39726292 (and FYI, I work with the JVM as part of my full-time job; I am a Scala/Java developer). It seems Apple was expecting developers to run applications in a more privileged mode, seeing as the JVM is a JIT, but what counts as an application is less clear-cut with the JVM, because jars conflate libraries with executables/executable code.
This is also a pretty bad example, because ironically, the JVM being a virtual machine, a newer version of the JVM can be released that deals with this change in macOS. That's one of the fundamental reasons the JVM is so popular: it abstracts over the OS, so when the OS makes changes like this (it's not just macOS that has broken the JVM in some way), Oracle and/or any of the other vendors can just release a new JVM version and you don't need to modify the jars/classfiles at all.
EDIT: Reading the specifics, it does seem like in this case the macOS change may have to be reverted; either that, or Apple needs to specify a way to propagate the JIT executable mode all the way down to the jars being executed.
- Likes 2
-
The road to Hell is paved with expedient solutions.
Accepting expedient solutions sets a precedent that leads to expedient solutions in general being acceptable, so the exception becomes the rule and standard practice.
This is probably not the hill to die on, but at what point should one say, "Thus far and no further"?
I don't have an answer, but it is something project leads should be thinking about.
- Likes 5
-
Originally posted by JustK View Post
Except most computers are laptops, and most laptops never receive GPU upgrades, so this can't be the major reason for lacking Linux adoption.