Valve going to officially support Linux?


  • #41
    Originally posted by anshuman View Post
    I think this step from Valve should be credited to Ubuntu users. They have been great fanatics and don't shy away from asking for Linux support. Any group with money that isn't afraid to shout for support cannot be ignored.

    BTW, I am a Red Hat user because that was what was around in my day (10 years ago) and I have to work with it. Still, Ubuntu isn't bad for stuff like this. Go Ubuntu fanatics!!!
    I use Fedora, and was also a Red Hat user 10 years ago.

    Originally posted by Svartalf
    Add LGP to the static-link crowd. They offer a statically linked binary, which is the one that's been verified. They also offer a dynamically linked version with the install, just in case you run afoul of problems, but it's not officially supported except when they've verified a specific configuration (and told you to use it in a FAQ or elsewhere...). The problem with the dynamically linked version is that it doesn't QUITE work the way you expect: Linux does NOT search the PATH for .so files. It searches along the path specified by ld.so.cache, which can only be dynamically overridden, as you mention, via LD_PRELOAD, which has its own set of issues.
    I've seen many launcher scripts make sure a particular library path is searched first. Take Neverwinter Nights' script, for example: it overrides the system-wide SDL library (in case it is not installed) with its own copy via LD_LIBRARY_PATH, giving its own ./lib and ./miles directories higher priority, which effectively takes precedence over /etc/ld.so.cache. Others use LD_PRELOAD, though I'm still not sure what the benefits of one method over the other are. Are they simply different "styles" of doing the same thing, or does one load the symbols of a particular .so so that, even if another copy is found on the regular library path, those symbols are already resolved and available to the application (maybe increasing the chances of segmentation faults or similar memory errors)? I've seen LD_PRELOAD used to override application behavior as well (again with NWN and some Linux add-ons, like the nwuser library, which sets up the basic infrastructure for the game to save its data into the user's /home directory rather than the game directory).
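
    For illustration, here is a minimal launcher sketch of that LD_LIBRARY_PATH approach. The binary name and directory layout are placeholders, not NWN's actual script:

      #!/bin/sh
      # Hypothetical launcher: prefer the libraries shipped with the game
      # over the system copies (game.bin, ./lib and ./miles are placeholders).
      cd "$(dirname "$0")" || exit 1

      # Prepend the bundled library directories so the dynamic linker looks
      # there before consulting /etc/ld.so.cache and the default paths.
      LD_LIBRARY_PATH="$PWD/lib:$PWD/miles${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
      export LD_LIBRARY_PATH

      exec ./game.bin "$@"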



    • #42
      Originally posted by Thetargos View Post
      I've seen many launcher scripts make sure a particular library path is searched first. Take Neverwinter Nights' script, for example: it overrides the system-wide SDL library (in case it is not installed) with its own copy via LD_LIBRARY_PATH, giving its own ./lib and ./miles directories higher priority, which effectively takes precedence over /etc/ld.so.cache. Others use LD_PRELOAD, though I'm still not sure what the benefits of one method over the other are. Are they simply different "styles" of doing the same thing, or does one load the symbols of a particular .so so that, even if another copy is found on the regular library path, those symbols are already resolved and available to the application (maybe increasing the chances of segmentation faults or similar memory errors)? I've seen LD_PRELOAD used to override application behavior as well (again with NWN and some Linux add-ons, like the nwuser library, which sets up the basic infrastructure for the game to save its data into the user's /home directory rather than the game directory).
      Each has its own issues. LD_LIBRARY_PATH forces the dynamic linker to look for libraries in the specified paths first, and only then to fall back to the ld.so.cache lookups if it can't find what it's looking for in the LD_LIBRARY_PATH directories. LD_PRELOAD explicitly preloads the specified .so files before any other linkage or execution is attempted on a binary. Neither is really good from the standpoint of a regular binary; you typically use them to debug, or to sidestep a problem between a binary and the runtimes you currently have.
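
      A quick sketch of the difference, using a hypothetical ./app binary and libexample.so (neither comes from this thread):

        # LD_LIBRARY_PATH: puts ./lib at the front of the search path; the
        # dynamic linker still resolves each needed library by name, falling
        # back to /etc/ld.so.cache and the default paths if ./lib has no match.
        LD_LIBRARY_PATH=./lib ./app

        # LD_PRELOAD: maps the named object before everything else, so any
        # symbols it defines shadow the same symbols in libraries loaded
        # later, no matter where those libraries come from.
        LD_PRELOAD=./lib/libexample.so ./app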

      For some background:




      http://www.kernelthread.com/publications/umischiefs/ (For the last one search for LD_PRELOAD on the page...)

      LGP provides LD_PRELOAD/LD_LIBRARY_PATH dynamically linkable binaries to comply with the LGPL licensing terms and to give people a way to sidestep very specific issues that might arise on a given system.
      Last edited by Svartalf; 18 September 2007, 08:46 PM.



      • #43
        The main problem with commercial applications (such as games) on Linux is the heterogeneity of the system from distribution to distribution, particularly in those with long release cycles, where older libraries tend to linger much longer, plus the many versions of the GCC libraries in circulation at any one time. So it would be best for a commercial application to package the libraries it was linked against and force their use instead of the system-wide ones, or to link against them statically (with the side effect of fattening the binary). From what I've been able to observe, the main issues are:
        1. glibc version.
        2. GCC version, and the GCC libs/version the system libraries were built with.
        3. C++ runtime library version.


        That's when LD_LIBRARY_PATH and LD_PRELOAD are most useful.

        Judging from some of the sites you cited, it would seem more beneficial if the application programmers used specific link paths at build time, so that the binary would "look" for those libs at run time regardless of system configuration, and could thus rely on its own "local" ./lib path [1]. So if I understand this correctly, they could simply build with their own application's directory structure in mind and ship the libraries they need, without relying on LD_LIBRARY_PATH or LD_PRELOAD at all (if I read correctly, this should be done with the ld -R flag at link time). However, most cases I've seen of LD_LIBRARY_PATH being set are in wrapper mode (the lesser evil).
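
        As a rough sketch of that idea, using GNU ld's -rpath spelling of the option (Solaris ld spells it -R) and placeholder file names; whether $ORIGIN behaves as expected on a given Linux toolchain is exactly the open question here:

          # Hypothetical build: embed a run-time search path relative to the
          # binary itself, so a ./lib directory can ship next to the executable.
          gcc -o game main.o -L./lib -lSDL \
              -Wl,-rpath,'$ORIGIN/lib'

          # Layout shipped to users:
          #   game                   <- finds its libraries via the embedded path
          #   lib/libSDL-1.2.so.0
          readelf -d game | grep -i rpath    # prints the embedded RPATH entry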

        Most of the references you provided, however, are for Solaris, so I have to wonder whether GNU ld on Linux supports the -R flag [2].

        The LD_PRELOAD reference is quite a nice read! Simply due to its function, LD_PRELOAD could be very dangerous, depending on its use.

        Thanks for the great references!



        • #44
          Originally posted by Thetargos View Post
          The main problem with commercial applications (such as games) on Linux is the heterogeneity of the system from distribution to distribution, particularly in those with long release cycles, where older libraries tend to linger much longer, plus the many versions of the GCC libraries in circulation at any one time. So it would be best for a commercial application to package the libraries it was linked against and force their use instead of the system-wide ones, or to link against them statically (with the side effect of fattening the binary). From what I've been able to observe, the main issues are:
          1. glibc version.
          2. GCC version, and the GCC libs/version the system libraries were built with.
          3. C++ runtime library version.
          In the case of the first two, Autobuild from the Autopackage project happens to resolve this, so it will gracefully handle a LARGE range of runtimes because of the way glibc accomplishes ABI versioning support (incl. an odd way of handling backwards compatibility...). The C++ runtime libs issue is a mess. Autobuild will resolve a goodly portion of it, but I've not used it as much as simply statically linking that one myself (it will still dlopen the glibc you specify with the ABI, etc., through Autobuild, and as long as you're not flipping stuff around via C++, you end up being fine).
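
          For what it's worth, a rough sketch of the "statically link just the C++ runtime" idea mentioned here (file names are placeholders, and this is one common recipe, not Autobuild's own mechanism):

            # Compile with g++ as usual, but drive the final link with gcc so the
            # shared libstdc++ is not pulled in implicitly; add the static archive
            # that matches the installed compiler instead.
            g++ -c main.cpp engine.cpp
            gcc -o game main.o engine.o \
                "$(g++ -print-file-name=libstdc++.a)" \
                -static-libgcc -lm

            # Sanity check: libstdc++ should no longer be a dynamic dependency.
            ldd ./game | grep stdc++ || echo "libstdc++ is linked statically"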

          That's when LD_LIBRARY_PATH and LD_PRELOAD are most useful.
          Indeed. It's just that they have severe security and stability implications and shouldn't be used except when your app just isn't working right otherwise. When vendors reach for these out of the gate, it shows they don't understand how things work as well as they ought to. That's not to say we're not happy they're making the stuff, but...

          Judging from some of the sites you cited, it would seem more beneficial if the application programmers used specific link paths at build time, so that the binary would "look" for those libs at run time regardless of system configuration, and could thus rely on its own "local" ./lib path [1]. So if I understand this correctly, they could simply build with their own application's directory structure in mind and ship the libraries they need, without relying on LD_LIBRARY_PATH or LD_PRELOAD at all (if I read correctly, this should be done with the ld -R flag at link time). However, most cases I've seen of LD_LIBRARY_PATH being set are in wrapper mode (the lesser evil).
          Yep. Unfortunately we don't HAVE the -R flag available to us, to the best of my knowledge, and it's still an issue.

          The LD_PRELOAD reference is quite a nice read! Simply due to its function, LD_PRELOAD could be very dangerous, depending on its use.

          Thanks for the great references!
          You're very much welcome. It always helps to know and understand why something's much less than desirable if it can be avoided.



          • #45
            Not having -R in ld or the $ORIGIN variable sure can be a problem on Linux. I wonder if there is work being done to implement those?



            • #46
              Heh... Back to the "rumor" about Id not supporting Linux anymore...

              I'd LOVE to soften up the people that started this rumor with a warhammer or a morningstar; it's making the rounds everywhere today.

              Sigh....



              • #47
                Talk about "jumping to conclusions"



                • #48
                  Originally posted by Thetargos View Post
                  Talk about "jumping to conclusions"
                  Silliest thing I've heard of in a while, especially with Intel, AMD, and the OEMs going this way (noting that one of the OEMs made it a requirement to have open source drivers for Linux, etc., or before long they don't get to play...)

                  The only reason you might really want to circulate that rumor or buy into it is if you're cheerleading for Windows or the Xbox... >:-)



                  • #49
                    Yeah... At any rate, that OEM part of your post caught me off guard, do you mean Dell, by any chance?

                    Edit
                    Just read the latest news post and article on the front page, so we don't know yet which OEM this mystery one requiring open source drivers within the next twelve months is, though one thing is certain: it must have a lot of muscle to be able to require that the IHVs supplying it with hardware open up their drivers (hence it could be either Dell or HP, and given recent history, I'd lean towards Dell).
                    Last edited by Thetargos; 20 September 2007, 12:22 AM.



                    • #50
                      Originally posted by Thetargos View Post
                      Yeah... At any rate, that OEM part of your post caught me off guard, do you mean Dell, by any chance?

                      Edit
                      Just read the latest news post and article on the front page, so we don't know yet which OEM this mystery one requiring open source drivers within the next twelve months is, though one thing is certain: it must have a lot of muscle to be able to require that the IHVs supplying it with hardware open up their drivers (hence it could be either Dell or HP, and given recent history, I'd lean towards Dell).
                      Heh... I have my guesses, some based off of info that's public, some based off of stuff that's still covered by NDA (and not the LGP ones still in place...). I wouldn't put it past HP, Lenovo, or Dell to lean HEAVILY on the chip vendors to get their damn acts together on this. Vista's burned pretty much everyone in the industry, Microsoft included, and they don't want to rely solely on MS any longer. As it stands, I know of at least one situation where HP AND Dell told a vendor to get at least proprietary Linux support solid, preferably as open source drivers, or they're a no-sale. Windows support wasn't enough.

