Ryan Gordon Is Fed Up, FatELF Is Likely Dead


  • #91
    Originally posted by Svartalf View Post
    The main reason you don't have 64-bit binaries is not a packaging reason (though that doesn't help...)- it's that you have to build the binaries for the differing architectures, and FatELF doesn't fix that problem.

    It doesn't resolve issues within your code for endianness. It doesn't resolve issues within your code for byte alignment. It doesn't resolve the issues from poorly written code that presumes a void pointer is equivalent to int- and you have issues with that going to a 64-bit world.

    All FatELF did was allow you to make universal binaries...after you resolve all the other problems. The ones that actually stymie most commercial vendors from doing anything in something other than X86-32.

    And, knowing what I know about the kernel crowd and of Ulrich...heh...I saw this little turn of events coming from a mile away. Sorry to see him disillusioned, but it happens...Lord only knows, I've been there a time or two for similar reasons myself.

    This is not to say that it's not a nice idea, mind...it's just that the resistance is going to be high on it and that it doesn't resolve a few crucial issues that need to be sorted out "better" before solving the particular problem he tried to solve.
    Agreed. I'm sorry as well that Mr Gordon got so disappointed, but to me FatELF was doomed to fail.
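
    A quick illustration of the endianness point Svartalf raises (Python is used purely for illustration; the value and format codes are arbitrary): the same integer serialized on a little-endian and a big-endian machine yields different bytes, so code that writes raw structs to disk or the wire breaks when moved between architectures, and no fat-binary format fixes that.

    # Pack the same 32-bit value in both byte orders and show the difference.
    import struct

    value = 0x12345678
    print(struct.pack('<I', value).hex())   # little-endian layout: 78563412
    print(struct.pack('>I', value).hex())   # big-endian layout:    12345678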

    Originally posted by deanjo View Post
    Not dodging the questions at all. Had old games such as the Loki games used such an approach, I would still be able to use those games in a modern distro, for example. Right now you try to run some of those old games and they *cough* *puke* and fart trying to find matching libs. In a "universal binary" approach this wouldn't be the case.

    [...]

    Ryan is just trying to make a solution that would allow commercial developers to develop for Linux without having to worry about each distro's "nuances" in order to get their product to run on each person's flavor of Linux.
    ... without playing by the GNU/Linux rules, which are: there is a place for everything and everything has its place. I see FatELF as a big waste of machine power (and storage space), as it would at best mean targeting only the lowest common denominator found on every machine, without making the most of the CPU. There are too many differences between arches to care only about what they have in common -- it's like... running 8088 code on a Core i7 (or renting a Boeing 747 to ship a single box of pills)... and I'm not even talking about non-Intel arches!

    Universal binaries themselves deny the very reason why applications *must* be compiled against the target CPU. That won't be solved this way.


    Originally posted by deanjo View Post
    This is a sore point that does hold Linux back from being mainstream (as does, like it or not, the lack of commercial apps).
    No. Either you haven't understood what GNU/Linux is or you're trolling.
    Last edited by VinzC; 05 November 2009, 07:45 AM.



    • #92
      Originally posted by Milt View Post
      Somewhat agree, somewhat disagree personally...

      Yes clearly the best situation would be a nice clean package manager that deals with the problem.
      And yes I am aware that fatelf does not stand a chance of getting anywhere.

      I still disagree that it is necessarily a bad solution.
      What exactly do you think is the problem to be dealt with? As exemplified in my previous post (WorldOfGoo deployment), commercial apps CAN be easily and portably deployed across distributions already. They achieve that by using exactly the same trick FatElf uses: bundling multiple binaries in one 'package' or 'bundle'. It is easy, reliable and it works now, with no intrusive kernel/libc modifications needed.
      Observe that the only improvement FatElf brings over 'bundles' is that it removes the need for the (very simple) start-up script. But at what a cost. It really solves a problem that does not exist.
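
      To make the 'bundle' trick concrete, here is a minimal sketch of the kind of start-up launcher such bundles ship. In practice it is usually a few lines of shell; the bin/x86 and bin/x86_64 layout and the 'game' binary name are assumptions for illustration, not World of Goo's actual script.

      #!/usr/bin/env python3
      # Pick the binary that matches the host architecture and hand control to it.
      import os
      import platform
      import sys

      here = os.path.dirname(os.path.abspath(__file__))
      arch = platform.machine()                          # e.g. 'x86_64', 'i686'
      subdir = {'x86_64': 'x86_64', 'i686': 'x86', 'i586': 'x86'}.get(arch)
      if subdir is None:
          sys.exit("Unsupported architecture: %s" % arch)

      binary = os.path.join(here, 'bin', subdir, 'game')
      os.execv(binary, [binary] + sys.argv[1:])          # replace the launcher with the native binary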


      Unifying package management across distributions solves a different problem, which is to make it possible to manage updates of the used libraries for all applications (including the commercial ones). For that, the applications don't even have to be installed "the linux (unix) way" (/usr/bin, /usr/lib etc...) as long as the package manager knows what is where. Although conforming to this standard would be nice and save space.
      FatElf doesn't address this problem at all and rather makes it more difficult.



      • #93
        Originally posted by VinzC View Post

        No. Either you haven't understood what GNU/Linux is or you're trolling.
        Then count Linus in that crowd too, as he also admits that Linux's past, present and future success has always been in "shades of gray". If you don't think that the lack of commercial apps on Linux is a MAJOR handicap that prevents widespread adoption, then you're simply in denial or wearing horse-blinders for a narrow tunnel-vision view. Why do you think Linus promptly rejected other kernel devs' attempts to force blob solutions to be locked out?

        On Wed, 13 Dec 2006, Greg KH wrote:
        >
        > Numerous kernel developers feel that loading non-GPL drivers into the
        > kernel violates the license of the kernel and their copyright. Because
        > of this, a one year notice for everyone to address any non-GPL
        > compatible modules has been set.

        Btw, I really think this is shortsighted.

        It will only result in _exactly_ the crap we were just trying to avoid,
        namely stupid "shell game" drivers that don't actually help anything at
        all, and move code into user space instead.

        What was the point again?

        Was the point to alienate people by showing how we're less about the
        technology than about licenses?

        Was the point to show that we think we can extend our reach past derived
        work boundaries by just saying so?

        The silly thing is, the people who tend to push most for this are the
        exact SAME people who say that the RIAA etc should not be able to tell
        people what to do with the music copyrights that they own, and that the
        DMCA is bad because it puts technical limits over the rights expressly
        granted by copyright law.

        Doesn't anybody else see that as being hypocritical?

        So it's ok when we do it, but bad when other people do it? Somehow I'm not
        surprised, but I still think it's sad how you guys are showing a marked
        two-facedness about this.

        The fact is, the reason I don't think we should force the issue is very
        simple: copyright law is simply _better_off_ when you honor the admittedly
        gray issue of "derived work". It's gray. It's not black-and-white. But
        being gray is _good_. Putting artificial black-and-white technical
        counter-measures is actually bad. It's bad when the RIAA does it, it's bad
        when anybody else does it.

        If a module arguably isn't a derived work, we simply shouldn't try to say
        that its authors have to conform to our worldview.

        We should make decisions on TECHNICAL MERIT. And this one is clearly being
        pushed on anything but.

        I happen to believe that there shouldn't be technical measures that keep
        me from watching my DVD or listening to my music on whatever device I damn
        well please. Fair use, man. But it should go the other way too: we should
        not try to assert _our_ copyright rules on other peoples code that wasn't
        derived from ours, or assert _our_ technical measures that keep people
        from combining things their way.

        If people take our code, they'd better behave according to our rules. But
        we shouldn't have to behave according to the RIAA rules just because we
        _listen_ to their music. Similarly, nobody should be forced to behave
        according to our rules just because they _use_ our system.

        There's a big difference between "copy" and "use". It's exactly the same
        issue whether it's music or code. You can't re-distribute other peoples
        music (because it's _their_ copyright), but they shouldn't put limits on
        how you personally _use_ it (because it's _your_ life).

        Same goes for code. Copyright is about _distribution_, not about use. We
        shouldn't limit how people use the code.

        Oh, well. I realize nobody is likely going to listen to me, and everybody
        has their opinion set in stone.

        That said, I'm going to suggest that you people talk to your COMPANY
        LAWYERS on this, and I'm personally not going to merge that particular
        code unless you can convince the people you work for to merge it first.

        In other words, you guys know my stance. I'll not fight the combined
        opinion of other kernel developers, but I sure as hell won't be the first
        to merge this, and I sure as hell won't have _my_ tree be the one that
        causes this to happen.

        So go get it merged in the Ubuntu, (Open)SuSE and RHEL and Fedora trees
        first. This is not something where we use my tree as a way to get it to
        other trees. This is something where the push had better come from the
        other direction.

        Because I think it's stupid. So use somebody else than me to push your
        political agendas, please.

        Linus



        • #94
          and how would FATELF help with commercial apps?

          HINT: it wouldn't.

          but you are obviously ignoring what everybody else explained 100 times already. So.. do you think your trolling is funny? Because if the answer is yes, then you are the only one who thinks so.

          And nice bringing up Ulrich in a completely unrelated topic, so you are not only not understanding the problem, you are also trying to evade.

          EPIC FAIL.



          • #95
            Originally posted by VinzC View Post
            ... without playing by the GNU/Linux rules, which are: there is a place for everything and everything has its place. I see FatELF as a big waste of machine power (and storage space), as it would at best mean targeting only the lowest common denominator found on every machine, without making the most of the CPU. There are too many differences between arches to care only about what they have in common -- it's like... running 8088 code on a Core i7 (or renting a Boeing 747 to ship a single box of pills)... and I'm not even talking about non-Intel arches!

            Universal binaries themselves deny the very reason why applications *must* be compiled against the target CPU. That won't be solved this way.

            What? You are kidding, right? Do you think every commercial app out there (or even open-source one) is compiled specifically for one processor? Hell no. You have completely lost grasp of the situation. Optimization for a processor, I am sorry, goes a lot further than a simple recompile of the code. Hell, if you wanted a compiler to take full advantage and produce the tightest code for your i7, you wouldn't be using GCC, period, but Intel's own compiler suite.

            Distros compile apps to support a lowest common denominator for an arch; some have runtime detection of CPU capabilities upon launch and will take advantage of extra instruction sets, and improvements might be seen depending on whether the code can use those instruction sets. For the vast majority of executables no performance gain is seen.

            Here is a newsflash for you: most Linux users don't compile their OS from scratch for their system, and nowhere were we talking about compiling from scratch; we are, however, talking about pre-compiled solutions. Your argument on this is grasping at straws at best, and I can only assume you have no or very limited coding, packaging and compiling experience to be making such a misinformed argument.
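
            For what it is worth, a minimal sketch of what 'runtime detection of CPU capabilities upon launch' can look like (illustrative only: reading /proc/cpuinfo is one Linux-specific way to do it, and the optimized/generic build names are made up):

            # Check the CPU feature flags at launch and pick a code path accordingly.
            def cpu_flags():
                with open('/proc/cpuinfo') as f:
                    for line in f:
                        if line.startswith('flags'):
                            return set(line.split(':', 1)[1].split())
                return set()

            if 'sse2' in cpu_flags():
                print("SSE2 available - would load the SSE2-optimized build")
            else:
                print("no SSE2 - falling back to the generic build")
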
            Last edited by deanjo; 05 November 2009, 11:13 AM.



            • #96
              misiu_mp you do have a point, I completely agree with you that a perfect package manager would be a better solution.

              Wrapper scripts are easy to make and probably do solve all occurring scenarios. If we, for instance, want to have a USB stick with some stuff on it and support multiple archs, then it can easily be done, but nothing is without a price. Be it wrappers everywhere, symlink games, or whatever, the price has to be paid.

              I would not mind having fatelf instead.

              Simply being able to work with a system without having to check whether something is 32- or 64-bit would be nice.
              For instance, being able to LD_PRELOAD some fatelf trace library without having to check whether the program being started is 32-bit or 64-bit would be nice.

              (and yes I understand that I can easily make a wrapper for that instead, I still would prefer not to)
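
              For reference, a minimal sketch of the check such a wrapper has to do today before deciding what to LD_PRELOAD -- exactly the step FatELF would make unnecessary. It reads the EI_CLASS byte of the target's ELF header; the libtrace32.so/libtrace64.so names are made-up placeholders.

              # Usage: wrap.py /path/to/program [args...]
              import os
              import sys

              def elf_class(path):
                  with open(path, 'rb') as f:
                      ident = f.read(5)                 # ELF magic (4 bytes) + EI_CLASS
                  if ident[:4] != b'\x7fELF':
                      raise ValueError("not an ELF file: %s" % path)
                  return 32 if ident[4] == 1 else 64    # ELFCLASS32 = 1, ELFCLASS64 = 2

              prog = sys.argv[1]
              os.environ['LD_PRELOAD'] = 'libtrace32.so' if elf_class(prog) == 32 else 'libtrace64.so'
              os.execvp(prog, sys.argv[1:])             # run the program with the matching preload
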
              Last edited by Milt; 05 November 2009, 02:55 PM. Reason: last line added



              • #97
                It's a shame really....

                I think the biggest flaw is that cross-compiling support on Linux right now is not exactly an automated process. The fact of the matter is that right now gcc will not produce exactly the same binary when one is cross-compiled and the other is natively compiled.

                Additionally, make simply wasn't intended for cross-compiling at all. It would have to be completely rewritten, or better yet replaced with something easier and more intuitive.
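
                As a rough illustration of the automation that is missing today, a minimal sketch that drives several cross toolchains over one source file in a loop; the '<triplet>-gcc' names, the target list and 'hello.c' are assumptions, and the toolchains have to be installed separately:

                # Build the same source once per target architecture.
                import subprocess

                TARGETS = ['x86_64-linux-gnu', 'i686-linux-gnu', 'arm-linux-gnueabihf']
                for triplet in TARGETS:
                    out = 'hello.%s' % triplet
                    subprocess.check_call(['%s-gcc' % triplet, '-O2', '-o', out, 'hello.c'])
                    print('built', out)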

                I personally could definitely benefit from fatelf, but I think it would be just as difficult or more so to make a fatelf LiveUSB than it would to make an individual LiveUSB for each architecture target.



                • #98
                  Originally posted by deanjo View Post
                  Then of course there is also the fact that Steam is not available for Linux or OS X, so are we now using Windows as an example of proper package delivery and installation?
                  I was referring to the native Linux versions of the game servers. I don't think that Valve even offers 64-bit versions of those for Windows.


                  You may not care about patents but I assure you the GNU / FSF crowd does.
                  They do. It's just that I personally don't care.



                  • #99
                    Originally posted by deanjo View Post
                    You don't seem to be getting the whole picture of the goals of FatELF.

                    http://icculus.org/fatelf/
                    ... so the goal is to make everybody download 5 or 10 DVDs to install from, instead of just the one that matches their architecture? That sure sounds simple to me....



                    • Originally posted by deanjo View Post
                      Wrong, I've seen way too many compat 32-bit libs slapped up with the likes of cups-32 compat libs. Uhhuh ya, that is really needed now isn't it.
                      That would depend on how retarded your distro is behaving (i.e. how many unneeded 32-bit libs it installs just for the fun of it).

                      In other words, you can have an *entirely* 64-bit system and it works just fine. If you need to run some specific 32-bit program, you install ONLY the required dependencies instead of *ALL* the 32-bit packages provided by the platform, which is what fatelf would do (in addition to installing ALL of the packages for ALL the platforms that *aren't even compatible* -- like ARM, SPARC, PPC, PPC64, etc.).
