Google Makes Linux Apps On Chrome OS Official


  • #91
    Originally posted by ssokolow View Post



    Because graphics are much more complicated than network drivers. They're halfway between network drivers, which just push Ethernet frames without caring much about their contents, and Wine, which has to emulate a massively complex OS API.

    As-is, OpenGL is a big, complex, stateful API. It's no wonder that different applications exercise it in different ways, triggering different bugs in divergent implementations.

    If every driver were using Vulkan for everything and OpenGL were implemented as a single GL-on-Vulkan layer shared between all drivers (similar to how TCP/UDP/etc. and IP/IPX/etc. are implemented as layers on top of the network driver), then the world would be a better place and I'd agree with your expectations.
    Of course this is all true, but you are missing my point: the graphics drivers ALREADY WORK as reliably as Ethernet or USB on macOS and Windows, but not on Linux. That's the whole problem.



    • #92
      However, do keep in mind that a single point also means a single point of failure.



      • #93
        Originally posted by jacob View Post

        Of course this is all true but you are missing my point: the graphics drivers ALREADY WORK as reliably as ethernet or usb on MacOS and Windows but not on Linux. That's the whole problem.
        I beg to differ.

        NVIDIA drivers have always worked as reliably for me on Linux as on Windows (usually rock-solid, but with some versions causing problems), while AMD drivers (going all the way back to the ATi Rage 128) have been stable on Linux but had a tendency to hang the system on Windows, across multiple machines and multiple independently-installed "newest at the time" Windows versions from Windows 98SE onward.

        Heck, the aforementioned ATi Rage 128 was one of the straws that broke the camel's back and drove me to switch my best machine at the time over to Linux: Windows+eMule would invariably BSOD the system with the latest drivers available, while Linux+aMule on the exact same hardware ran without issue.
        Last edited by ssokolow; 17 May 2018, 06:55 PM.



        • #94
          Originally posted by makam View Post
          However, do keep in mind that a single point also means a single point of failure.
          Which is a good thing in this case, because it maximizes the chances that it will fail when the developers do QA testing.



          • #95
            Originally posted by makam View Post
            However, do keep in mind that a single point also means a single point of failure.
            Not at all. APIs don't somehow break at runtime or become unreliable, and they don't need redundancy. For example, all threads, processes, containers, VMs etc. in Linux are ultimately created by the clone() system call (with various wrappers on top of it). That's a single point of entry. It's predictable, documented, tested, and is the One True Way to achieve that particular purpose. Linux wouldn't be any better or more reliable if it had a dozen different versions of it, all incompatible, and let each app developer devise and implement his very own syscall to create processes because "choice". On the contrary, it would be an unmanageable mess and an OS that no serious developer would consider working on.

            It's exactly the same for configuration, except that back in the day Unix never bothered to do it right (honestly, there is very, very little that Unix ever did right), and so we are historically stuck with a poor man's substitute where the OS has basically no support whatsoever for what is nevertheless an essential feature. Yet because it was Unix, we post-rationalise it and try to convince ourselves that it's in fact a good thing and that it should be that way. It shouldn't.

            There is absolutely nothing good about a programming language that relies on GOTO for its control flow; there is absolutely nothing good about an OS that hardcodes 640k as a maximum RAM size; there is absolutely nothing good about the "UGO" permissions model when ACLs have existed since MULTICS; and, by the same token, there is absolutely nothing good about an OS that has no One True configuration framework and instead lets everyone dump arbitrary config files all over the place. Just the thought of the number of text parsers (written in C, no less, with all its memory management and buffer limit handling "features") running with root privileges gives me shivers.
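
            As a minimal sketch of that single entry point (an illustration added here, not part of the original post): with just SIGCHLD in its flags, clone() behaves like fork(), while thread libraries pass CLONE_VM, CLONE_THREAD and related flags to the very same syscall to get threads instead.

            Code:
            /* clone_demo.c -- build with: gcc clone_demo.c -o clone_demo */
            #define _GNU_SOURCE
            #include <sched.h>      /* clone(), CLONE_* flags */
            #include <signal.h>     /* SIGCHLD */
            #include <stdio.h>
            #include <stdlib.h>
            #include <sys/wait.h>
            #include <unistd.h>

            static int child_fn(void *arg)
            {
                printf("child: pid=%d, arg=%s\n", getpid(), (const char *)arg);
                return 0;
            }

            int main(void)
            {
                /* clone() wants an explicit child stack; fork() and
                   pthread_create() are wrappers that set this up for us. */
                const size_t stack_size = 1024 * 1024;
                char *stack = malloc(stack_size);
                if (!stack) { perror("malloc"); return 1; }

                /* SIGCHLD and no sharing flags: behaves like fork().
                   Stacks grow downward on x86, so pass the top of the buffer. */
                pid_t pid = clone(child_fn, stack + stack_size, SIGCHLD, (void *)"hello");
                if (pid == -1) { perror("clone"); return 1; }

                waitpid(pid, NULL, 0);
                free(stack);
                return 0;
            }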



            • #96
              Originally posted by ssokolow View Post

              I beg to differ.

              NVIDIA drivers have always worked as reliably for me on Linux as on Windows (usually rock-solid, but with some versions causing problems), while AMD drivers (going all the way back to the ATi Rage 128) have been stable on Linux but had a tendency to hang the system on Windows, across multiple machines and multiple independently-installed "newest at the time" Windows versions from Windows 98SE onward.
              I have had NVIDIA GPUs in the past and the drivers were a constant source of headaches. Kernel panics at least once a week, suspend/resume permanently broken, distro kernel updates that might or might not work, and of course I didn't even dare think about such things as Wayland. Admittedly I didn't use Windows on the same machines, so I can't compare the experience, but based on how (badly) it worked on Linux, NVIDIA is now on my personal blacklist.

              Originally posted by ssokolow View Post
              Heck, the aforementioned ATi Rage 128 was one of the straws that broke the camel's back and drove me to switch my best machine at the time over to Linux: Windows+eMule would invariably BSOD the system with the latest drivers available, while Linux+aMule on the exact same hardware ran without issue.
              I have a hybrid Intel/AMD configuration at the moment and it's no panacea on Linux. If all you are concerned about is desktop apps, it's mostly fine, but try to run the simplest game and all hell breaks loose. AFAICT that's not the case when running Windows 10 on the same laptop. Not that I would switch to Win10 for that, of course.



              • #97
                Originally posted by ssokolow View Post

                Which is a good thing in this case, because it maximizes the chances that it will fail when the developers do QA testing.
                Originally posted by jacob
                Not at all. APIs don't somehow break at runtime or become unreliable, and they don't need redundancy. For example, all threads, processes, containers, VMs etc. in Linux are ultimately created by the clone() system call (with various wrappers on top of it). That's a single point of entry. [...]
                That is fine from the point of view of a developer. But it is not fine from the point of view of an average Linux tinkerer, whose system would then become less resilient to failed modifications.

                As for GPU drivers here is my opinion/experience on what goes from best to worst:
                1) AMD APU
                2) AMD CPU + AMD discrete GPU
                3) AMD APU + AMD discrete GPU
                4) Any CPU without an integrated GPU + an NVIDIA GPU
                5) Any Intel/AMD, Intel/NVIDIA or AMD/NVIDIA hybrid GPU setup.

                Other than a full-AMD hybrid setup, avoid hybrid setups. If you are buying a laptop, my advice is to buy one with a strong APU.



                • #98
                  Originally posted by makam View Post
                  That is fine from the point of view of a developer. But it is not fine from the point of view of an average Linux tinkerer, whose system would then become less resilient to failed modifications.
                  That's not how combinatorial complexity works. (Source: I'm a programmer and I have a comp. sci. degree, so I have a pretty good understanding of what goes into making things bug-prone.)

                  Having more default configurations doesn't magically make developers better coders... they may test a few more configurations, but it won't change whether they're writing their code in a robust way.

                  On the other hand, having fewer default configurations lets them release something that's more thoroughly tested, as well as giving people who offer alternatives a more solid grasp of what they need to ensure compatibility with.

                  At the theoretical level, you're essentially arguing against the fundamental design precepts involved in writing code that lends itself well to CI testing.
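
                  As a toy illustration of that combinatorial point (the axis counts below are invented, not from the original post): every independent axis of configuration multiplies the test matrix, so the work grows as a product rather than a sum.

                  Code:
                  #include <stdio.h>

                  int main(void)
                  {
                      /* Hypothetical axes of "choice"; the counts are made up
                         purely to illustrate the multiplication. */
                      const int distros = 5, desktops = 4, gpu_drivers = 3, init_systems = 2;

                      /* 5 * 4 * 3 * 2 = 120 configurations to test, versus
                         5 + 4 + 3 + 2 = 14 if each axis could be tested in isolation. */
                      printf("configurations: %d\n",
                             distros * desktops * gpu_drivers * init_systems);
                      return 0;
                  }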

                  Originally posted by makam View Post
                  As for GPU drivers here is my opinion/experience on what goes from best to worst:
                  1) AMD APU
                  2) AMD CPU + AMD discrete GPU
                  3) AMD APU + AMD discrete GPU
                  4) Any CPU without an integrated GPU + an NVIDIA GPU
                  5) Any Intel/AMD, Intel/NVIDIA or AMD/NVIDIA hybrid GPU setup.

                  Other than a full-AMD hybrid setup, avoid hybrid setups. If you are buying a laptop, my advice is to buy one with a strong APU.
                  Where would a desktop with an Intel CPU (for me_cleaner) and a discrete AMD GPU (for the open-source drivers) fit into that list? (What I plan to buy when my pre-UEFI, pre-PSP 65W AMD CPU dies.)



                  • #99
                    Maybe some day we'll also get 1080p Netflix with Chrome on Linux instead of just on Chrome OS.



                    • #100
                      Originally posted by ssokolow View Post

                      Where would a desktop with an Intel CPU (for me_cleaner) and a discrete AMD GPU (for the open-source drivers) fit into that list? (What I plan to buy when my pre-UEFI, pre-PSP 65W AMD CPU dies.)
                      Wait, I just noticed Intel CPUs without integrated GPUs still exist. I would say 2.5 or 3.

