Google Makes Linux Apps On Chrome OS Official
Originally posted by jacob
Of course this is all true but you are missing my point: the graphics drivers ALREADY WORK as reliably as ethernet or usb on MacOS and Windows but not on Linux. That's the whole problem.
nVidia drivers have always worked as reliably for me on Linux as on Windows (usually rock-solid, but with some versions causing problems), while AMD drivers (going all the way back to the ATi Rage 128) have been stable on Linux but had a tendency to hang the system on Windows (across multiple machines and multiple independently installed "newest at the time" Windows versions, from Windows 98SE onward).
Heck, with the aforementioned ATi Rage 128, it was one of the straws that broke the camel's back and drove me to switch my best machine at the time over to Linux, because Windows+eMule would invariably BSOD the system with the latest drivers available, while Linux+aMule on the exact same hardware ran without an issue.

Last edited by ssokolow; 17 May 2018, 06:55 PM.
Originally posted by makam
However, do keep in mind that single-point also means a single point of failure.
Originally posted by ssokolow
I beg to differ.
nVidia drivers have always worked as reliably for me on Linux as on Windows (usually rock-solid, but with some versions causing problems), while AMD drivers (going all the way back to the ATi Rage 128) have been stable on Linux but had a tendency to hang the system on Windows (across multiple machines and multiple independently installed "newest at the time" Windows versions, from Windows 98SE onward).
Originally posted by ssokolow
Heck, with the aforementioned ATi Rage 128, it was one of the straws that broke the camel's back and drove me to switch my best machine at the time over to Linux, because Windows+eMule would invariably BSOD the system with the latest drivers available, while Linux+aMule on the exact same hardware ran without an issue.
Originally posted by ssokolow
Which is a good thing in this case, because it maximizes the chances that it will fail when the developers do QA testing.

Originally posted by jacob
Not at all. APIs don't somehow break at runtime or become unreliable, and don't need redundancy. For example, all threads, processes, containers, VMs etc. in Linux are ultimately created by the clone() system call (with various wrappers on top of it). That's a single point of entry. It's predictable, documented, tested, and is the One True Way to achieve that particular purpose. Linux wouldn't be any better or more reliable if it had a dozen different versions of it, all incompatible, and let each app developer devise and implement his very own syscall to create processes because "choice". On the contrary, it would be an unmanageable mess and an OS that no serious developer would consider working on.

It's exactly the same for configuration, except that back in the day Unix never bothered to do it right (honestly, there is very, very little that Unix ever did right), and so we are historically stuck with a poor man's substitute where the OS has basically no support whatsoever for what is nevertheless an essential feature. Yet because it was Unix, we post-rationalise it and try to convince ourselves that it's in fact a good thing and that it should be that way. It shouldn't. There is absolutely nothing good about a programming language that relies on GOTO for its control flow; there is absolutely nothing good about an OS that hardcodes 640k as a maximum RAM size; there is absolutely nothing good about the "UGO" permissions model when ACLs have existed since MULTICS; and, by the same token, there is absolutely nothing good about an OS that has no One True configuration framework and instead lets everyone dump arbitrary config files all over the place.
Just the thought of the number of text parsers (written in C, no less, with all its memory management and buffer limit handling "features") running with root privileges gives me shivers.
As for GPU drivers, here is my opinion/experience on what goes from best to worst:
1) AMD APU
2) AMD CPU + AMD discrete GPU
3) AMD APU + AMD discrete GPU
4) Any CPU without an integrated GPU + an NVIDIA GPU
5) Any Intel/AMD, Intel/NVIDIA or AMD/NVIDIA hybrid GPU setup.
Other than a full AMD hybrid setup, avoid hybrid setups. If you are buying a laptop, my advice is to buy one with a strong APU.
Originally posted by makam
That is fine from the point of view of a developer. But it is not fine from the point of view of an average Linux tinkerer, whose system would then become less robust against modification failures.
Having more default configurations doesn't magically make developers better coders... they may test a few more configurations, but it won't change whether they're writing their code in a robust way.
On the other hand, having fewer default configurations allows them to release something that's more thoroughly tested as well as allowing people who offer alternatives a more solid grasp of what they need to ensure compatibility with.
At the theoretical level, you're essentially arguing against the fundamental design precepts involved in writing code that lends itself well to CI testing.
Originally posted by makam
As for GPU drivers, here is my opinion/experience on what goes from best to worst:
1) AMD APU
2) AMD CPU + AMD discrete GPU
3) AMD APU + AMD discrete GPU
4) Any CPU without an integrated GPU + an NVIDIA GPU
5) Any Intel/AMD, Intel/NVIDIA or AMD/NVIDIA hybrid GPU setup.
Other than a full AMD hybrid setup, avoid hybrid setups. If you are buying a laptop, my advice is to buy one with a strong APU.
Originally posted by ssokolow
Where would a desktop with an Intel CPU (for me_cleaner) and a discrete AMD GPU (for the open-source drivers) fit into that list? (What I plan to buy when my pre-UEFI, pre-PSP 65W AMD CPU dies.)