Ryan Gordon Criticizes Open-Source Drivers Again


  • #61
    Originally posted by blackiwid View Post
    Hi, I don't understand this guy. In my opinion, most of what he says is wrong.

    1. Drivers: OK, they are slow for 3D and so on, but why is the openness a problem? The Intel drivers are open-only; granted, hardly any game beyond Tetris runs on them, but that's mostly a hardware limitation. On the AMD side, fglrx is far more unstable than the open radeon drivers, which are pretty much rock solid; if you don't chase the newest features, they are 100% rock solid, with no crashes at all.
    It's not that the openness is a problem - but as the open drivers are becoming good enough for every-day use, their shortcomings become more relevant. It used to be adequate to tell people to run the nvidia binaries if they wanted good 3D, because proprietary or not, they were the only practical option. But Nouveau has now matured enough to be a genuine option on the desktop, meaning people are less willing to use the binary drivers, even though the open ones aren't good enough for games.



    • #62
      Originally posted by AdamW View Post
      Ignore the issue and install anyhow is almost exactly never what you actually want, and will cause all sorts of breakage later. The intricacies of interdependent apps and libraries aren't terribly apparent at first, but distributors usually do actually know what the hell they're doing, and if something conflicts or doesn't satisfy a dependency, that's usually _correct_, it's not the distributor being obnoxious just for the hell of it.

      It's pretty hard to provide a condensed explanation of why. It's something you either take on trust or learn through bitter experience. Feel free to choose either path. =)
      I am perfectly aware.
      But you know all those packages on your system? They depend on the exact current version, or at most on a version 0.1.0 higher.
      Everybody who packages software tends to do the same. Sometimes they really stretch it and allow 0.1.2+ instead of a mere 0.1.0 :P
      This is not the distributors' fault, but the fault of the people who make the packages. They specify that their package NEEDS something that will have a completely different version number in six months, even if only minor bugfixes get patched on.
      Don't tell me you have never fetched a .deb package and tried to install it, only to find that the dependency is pinned to one specific version of a certain library? We both know that installing it would work just fine, but the package manager won't.

      What you are talking about is a package actually breaking due to minor or major library changes, which is a different issue.


      Originally posted by JanC View Post
      This "button" (well, switch) exists in the command-line versions of the package manager, if you really want it, but as explained before, making it the default/easy-to-use option results in masking packaging bugs and other bad things.
      If I can't find it in the manpage, and nobody on several IRC channels has any idea, it most likely does not exist.
      Then again, it might be a .deb issue.
      Last edited by del_diablo; 08 August 2011, 07:08 PM.



      • #63
        Originally posted by deanjo View Post
        You HOPE it is being checked by other QUALIFIED people.
        I'm not talking about average phoronix posters, but guys working at IBM, Google, Amazon and the major distros, who I do trust with checking something at least casually before installing it on 100,000 servers.

        It is also common for people employed by major distros, or people with long involvement in those distros, to be major or minor contributors to many FLOSS projects, and they very often patch those projects before shipping. They read the source code and are certainly qualified.

        So yeah, I do feel safe that Dave Airlie, Alex Deucher and Marek Olsak (along with many many others) are all fiddling in there in kernel DRM and Mesa and know the source code well. They are qualified, and work for different companies (or are volunteers). I am convinced that the danger of a Bonzi Buddy or rootkit getting past ALL of them is rather small.

        There is always danger, but openness is a strength.



        • #64
          Originally posted by ean5533 View Post
          There's no standard set of base libraries that I'm guaranteed to have on all Linux distros, so unless I'm missing something then the conclusion is that I'd have to statically link every dependency that chains up from libcairo, which is insane.
           Well yes, actually there is such a base set. Maybe "guaranteed" is too strong a word, but you can certainly assume you've got glibc and all the usual X libraries that any modern desktop requires. That's what Firefox does, after all: its binary packages do provide copies of some system libraries (usually statically linked), but they assume that if the user is running a Gtk+/X build of Firefox on an X desktop, they've probably got the usual X libraries and a recent-enough copy of Gtk+ and its associated libraries...



          • #65
            Originally posted by del_diablo View Post
            I am perfectly aware.
            But you know all those packages on your system? They depend on the exact current version, or at most on a version 0.1.0 higher.
            Everybody who packages software tends to do the same. Sometimes they really stretch it and allow 0.1.2+ instead of a mere 0.1.0 :P
            This is not the distributors' fault, but the fault of the people who make the packages. They specify that their package NEEDS something that will have a completely different version number in six months, even if only minor bugfixes get patched on.
            But is there anything wrong with this?
            There is such a thing as ABI dependencies which (when used properly) say, for example, "0.1.0 is compatible with 0.1.2+". That means all versions of libfoo (0.1.0, 0.1.1, 0.1.2) are completely interchangeable if they have the same ABI (for example, they all provide libfoo.so.1).
            And in order to run dynamically linked binaries, it's usually sufficient to just meet their ABI dependencies. True, it is then irrelevant whether libfoo-0.1.0 or libfoo-0.1.2 is used; there may be grave bugs in 0.1.0 that prevent an application from running, but distribution developers are as aware of those as software developers are, and both usually know better than users.
            I mean, what is your point here?

            What AdamW said on package format vs dependencies is spot on.

            Speaking as a distro developer myself (Gentoo, for that matter), I'll go as far as to say I have never trusted software packages (.deb or .rpm) created by the respective proprietary software vendors.
            They usually just created a mess on package updates and left orphaned files behind on removal (those files were usually artifacts of messy pre/post-install scripts).

            It also makes me laugh to hear the typical argument "we cannot provide application X for Linux because there's no standard Linux distribution". My answer to this is always: "So what? You do your job - write the application and package it as a plain tarball - and we (the distributions) will do ours. Or just pick your favourite distribution and create a package for it; other distributions will wrap it in whatever format they use."

            I really wish proprietary software vendors stopped caring about "supporting packages for distro X" and focused on providing merely a tarball, or a simple command-line installer (if pre/post-install/uninstall actions are required), with dynamically linked binaries (sic!), plus a list of those ABI dependencies in some README, so that distro packagers can do their job - and they will do that job better. (Those ABI deps can be figured out from DT_NEEDED, so such a README isn't strictly needed, but it is nice to have.)
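            For illustration, here is a small sketch that pulls those DT_NEEDED entries out of a binary's ELF dynamic section. It assumes the binutils `readelf` tool is installed; this is illustration, not how any distro's packaging tooling actually works:

```python
# Sketch: listing a binary's DT_NEEDED entries (its ABI-level dependencies),
# assuming the binutils `readelf` tool is available on PATH.
import re
import subprocess

def needed_libs(path):
    out = subprocess.run(["readelf", "-d", path],
                         capture_output=True, text=True, check=True).stdout
    # relevant lines look like: 0x... (NEEDED)  Shared library: [libc.so.6]
    return re.findall(r"\(NEEDED\).*\[([^\]]+)\]", out)

print(needed_libs("/bin/ls"))  # typically includes libc.so.6
```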

            However, I understand the desire to provide an out-of-the-box experience for end users without relying on distribution packagers. Frankly, though, it's not going to work unless:
            a) either all binaries are linked statically and the proprietary software vendor provides a (de)installation script,
            b) or the proprietary software vendor picks one distribution (say RHEL6 or the latest Ubuntu LTS) and devotes its time to supporting it (ignoring the rest of the world).

            There is no ideal solution given this diversity.

            Sometimes they cannot simply say to their client "hey, grab the pieces and put them together yourself" -
            Last edited by reavertm; 08 August 2011, 09:00 PM.



            • #66
              Originally posted by ean5533 View Post
              I don't deal with closed source software on Linux very often, so I've never had to think about it. However, I can immediately say that bundling libraries would get very ugly very quickly, because you don't know when to stop. That is, you don't know when you can assume a certain library will definitely be installed.

              Let's say you write an app that uses cairo. If you bundle libcairo2, you also have to make a decision about whether to bundle each of its dependencies or to assume that they exist on the system. Here are all of libcairo2's dependencies. And each of them has its own dependencies. How do you decide which ones you need to bundle and which ones you can assume will already exist?
              The answer is: all of them, the entire graph of dependencies.
              Why? Because an ugly, broken-by-design, artificial problem can only be solved by an ugly, screwy solution.
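              As a toy illustration of "bundle the entire graph": given a made-up dependency map (the names and edges below are invented for the example, not real package metadata), you collect every transitive dependency of libcairo2.

```python
# Invented dependency map for illustration only.
deps = {
    "libcairo2": ["libpixman", "libfreetype", "libpng"],
    "libfreetype": ["zlib", "libpng"],
    "libpng": ["zlib"],
    "libpixman": [],
    "zlib": [],
}

def transitive(pkg, seen=None):
    # Depth-first walk collecting every library reachable from pkg.
    seen = set() if seen is None else seen
    for dep in deps.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive(dep, seen)
    return seen

print(sorted(transitive("libcairo2")))
# -> ['libfreetype', 'libpixman', 'libpng', 'zlib']
# every library in the chain, each of which would have to be bundled
```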

              The problem is: this is not how you package software for an open system.
              Programs and binaries on open systems are... open. Only the data flow for those programs is completely unbound. This is just how open software systems work.
              If you want to distribute your stuff gracefully on an open system, you remove as many hacks and workarounds as you can near the end of development, make your engine easily compilable and compatible with the way open programs treat data (for example, avoid hardcoded paths), open it up and let maintainers package it as they see fit, and sell your game data.


              From the user's point of view, he just needs to install the engine from the repo and put the data in the correct place (like /usr/games/$CREATOR/$GAME, /opt/$CREATOR-$GAME, or ~/Games/$CREATOR-$GAME for non-system-wide installs). Those are easy steps, and easily automated even further.

              From the developer's point of view, you have to employ proper development practices and get your act together. We all know that after a few months - at best, years - the game will become abandonware, but distributors will still want to make some money off it and to shrug all problems off down the chain, onto distributions and users. So it's your problem, developer, to make foresighted, proactive decisions that guarantee your game will still work after 1-4 years, when the whole underlying software stack has changed.
              Stop whining. Do your job, or at least have the decency not to act like a shitty corporate distributor, as if someone else were liable for interacting with your business peers.

              PS: And don't say that "other systems" don't have this problem. Not only do they have it, it's worse. On Windows you have: 1) deep system deps that change with each major version (even though the API can stay the same, your app can still choke on it); 2) deps from a bunch of redistributables you keep on a CD or in the game folder, for manual installation or a half-assedly scripted one via an "installer"; 3) deps in the game's folder that override system ones, because the system ones can be harder to provide/install (full redistribution forbidden by license) or update properly (everyone hardcodes their stuff against different versions); 4) still a bunch of stuff statically linked. And of course you will blame distributions - on Windows there is no one to blame besides users and yourself.



              • #67
                If the major browsers (Firefox, Chrome) would just get on the bandwagon and let you run executable binaries from the browser the way you run an .exe on Windows, the packaging war would be over.

                A browser just has to do this:

                1. Download file completely
                2. chmod +x
                3. fork() and exec(), or system()

                These things can be done using either native code, wrapper libraries like Qt or GLib, or running shell commands. There are a gazillion ways to do these things.
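                As a rough sketch of those three steps (assuming POSIX; Python's subprocess wraps the fork()+exec() pair, and the "downloaded installer" here is a stand-in shell script created just for the demo):

```python
import os
import stat
import subprocess
import tempfile

def run_downloaded(path):
    # 2. chmod +x: add the execute bit for the owner
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    # 3. fork() and exec(), via subprocess
    return subprocess.run([path]).returncode

# Stand-in for step 1 (the completed download)
fd, path = tempfile.mkstemp(suffix=".run")
with os.fdopen(fd, "w") as f:
    f.write("#!/bin/sh\necho installer ran\n")

print("exit status:", run_downloaded(path))
os.unlink(path)
```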

                I can understand the desire to prevent users from inadvertently running malicious code on their machine, but being unable to run a MojoSetup .run or .bin file from Chrome or Firefox is just ridiculous. Does Windows make you go into CMD.EXE and run two or three commands to run an executable, just because Microsoft thinks that executables are a security risk? No.

                The browser (rightly) warns the user that the file may be malicious. And if the user has half a brain, they will go into the terminal and run the required commands to run it anyway, regardless of whether it is malicious. Those who don't have even half a brain aren't able to use Linux, even for non-malicious programs, because they can't figure out "how to run a .bin file" (they're expecting point-and-click like an .exe).

                Seriously, it's 2011. Where are my point-and-click runnable Linux scripts and binaries? The closest I've ever seen is Click-n-Run, and that died quickly. Why do we need all that infrastructure built around it? What's so hard about letting the user run a raw .bin installer?

                Yeah, I know, the installer might require terminal I/O. But maybe the launcher can be coded to automatically pop up a gnome-terminal and run the downloaded binary (inside a screen session -- we'll get to that later), then monitor the running process for any calls to X11. If it sees an X11 call it can automatically detach the screen session, close the terminal and let the binary "go graphical". But if it doesn't succeed in going graphical then the terminal's still there.

                Something like that; I'm just brainstorming. But we really need a point-and-click installer launcher that works across all distros, not just .deb or .rpm file associations.

                And in other news, .rpm is vastly superior to .deb for obvious reasons. Bring on the flamebait! But before you flame, just tell me where Debian's LZMA2 compression and binary diff generation ("delta rpms" equivalent for dpkg) are. I haven't seen heads or tails of them. Maybe Debian thinks dpkg is better because it's less featureful. It'd make perfect sense in Debuntu World, where everything is backwards: slower is faster, less usable is more usable, and Australia is the capital of the world.



                • #68
                  Originally posted by allquixotic View Post
                  A browser just has to do this:

                  1. Download file completely
                  2. chmod +x
                  3. fork() and exec(), or system()
                  4. alert("error while loading shared libraries: cannot open shared object file: No such file or directory")



                  • #69
                    Originally posted by del_diablo View Post
                    I am perfectly aware.
                    But you know all those packages on your system? They depend on the exact current version, or at most on a version 0.1.0 higher.
                    Everybody who packages software tends to do the same. Sometimes they really stretch it and allow 0.1.2+ instead of a mere 0.1.0 :P
                    This is not the distributors' fault, but the fault of the people who make the packages. They specify that their package NEEDS something that will have a completely different version number in six months, even if only minor bugfixes get patched on.
                    Don't tell me you have never fetched a .deb package and tried to install it, only to find that the dependency is pinned to one specific version of a certain library? We both know that installing it would work just fine, but the package manager won't.
                    I will tell you that, because I don't run any .deb distros, and only ever have very briefly, for testing purposes. Check the line under my name. =)

                    But in general, I don't think you're right anyway. This can be the case with some poorly-done third party packages, sure. But for all the distributions I've looked at - and I do look at packages from more distributions than I actually *run* - what you describe is not the case at all for the official packages. Most distributions learned long ago that being overly specific with versioned dependencies is silly. Most dependencies in distros these days, at least mainstream ones, are unversioned or versioned very broadly, and specific versioned dependencies are the exception, used when there's a really genuine need for them.

                    The big obvious exception is dependencies on shared libraries, which are usually automated in modern distributions. There's a widely-accepted convention for versioning libraries, where bumping the major version is usually used to denote an ABI change. So whizzy_application will be written to link against libwhizzy.so.1 . libwhizzy.so.1 will be a symlink to libwhizzy.so.1.0.0 . If a minor change is made to libwhizzy, which doesn't affect ABI compatibility, libwhizzy will bump to libwhizzy.so.1.0.1 , but the libwhizzy.so.1 symlink will still be there, the app will be happy. If a major change is made which does break ABI compatibility libwhizzy will bump to libwhizzy.so.2.0.0 and there will now be a libwhizzy.so.2 symlink. This is how the library advertises that its ABI has changed: because whizzy_application is linked against libwhizzy.so.1 it will now stop working until it's recompiled (with any necessary code changes) against the new libwhizzy. Package dependencies will encode this same relationship. This is all exactly how things are supposed to work.
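                    A toy model of that soname convention (libwhizzy is the hypothetical library from the post above; the helper here is illustrative, not how ldconfig or package tools actually compute anything):

```python
def soname(real_name):
    # "libwhizzy.so.1.0.1" -> "libwhizzy.so.1" (the ABI-level name / symlink)
    base, _, version = real_name.partition(".so.")
    return f"{base}.so.{version.split('.')[0]}"

def satisfies(app_linked_against, installed_lib):
    # The app is happy as long as the installed library still
    # provides the soname it was linked against.
    return app_linked_against == soname(installed_lib)

assert satisfies("libwhizzy.so.1", "libwhizzy.so.1.0.1")      # minor bump: fine
assert not satisfies("libwhizzy.so.1", "libwhizzy.so.2.0.0")  # ABI break
```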



                    • #70
                      Originally posted by ean5533 View Post

                      *Note to Opera users: It's a hypothetical situation. Quell your hipster rage.
                      As an Opera user, this made me lol
