linux, the very weak system for gaming


  • #46
    Originally posted by gens View Post
    atof() is killing me. I'm making a small .obj previewer for fun in 64-bit asm, and since coordinates in .obj files are ASCII (human readable) I have to convert them.
    I wanted to do it without stdlibs, and was doing fine until it hit me how complicated it really is (there are a couple of tricks for converting chars to ints, but then you still have to handle the decimal point).

    So atof() was a quick solution. The only problem is that the documentation is WRONG (or I'm reading it wrong):
    atof() takes the string (in memory or directly in registers, I still don't know) from (or pointed to by) the 2nd operand, which is confusing since the documentation says there's only one operand,
    and it returns the result somewhere I haven't figured out yet.

    Maybe it's some dumb mistake on my part,
    maybe it's really bad documentation.
    I'll need gdb to see what is happening, or get someone to write me a simple C program to decompile.

    But these are low-level (and newbie) problems that don't affect portability.
    Middleware, DirectX, and compiler-specific programming are what limit portability.
    This is what I meant by "lower level": code that regular devs usually aren't used to, since it's a lot easier to use these functions through toolkits or other common libraries than to use the POSIX API directly, unless you want to work on core GNU projects or the kernel.

    Following the same logic, it's very unusual for a game to tie itself to this kind of super OS-specific low-level API, so as you rightly say it doesn't affect portability. In fact this is true for Windows too, since game studios don't use the low-level Windows API either: they just buy a license for an engine [Unreal, Crytek, etc.] that gives them a set of very high-level tools/scripting to build the game.

    So the engines are the ones that have to decide whether to use these APIs or to rely on higher-level libraries like Boost or GLib or Qt/GTK and others, if they want to migrate to Linux.

    Now, my answer to the OP still stands: POSIX is widely used, and in the end every program you compile on Linux uses POSIX heavily behind the scenes. But apart from pthreads and a few others you usually don't use it directly [OpenMP is getting more common every day, though], since most toolkits provide much easier ways to handle this.



    • #47
      Originally posted by Tweenk View Post
      This meme is getting boring.
      Linux userspace API - the thing you use from programs - is stable, and has always been.
      The only thing not stable is the kernel API, which you use from kernel code. This is only of any relevance to people who want to maintain closed-source kernel drivers, such as Nvidia. Nobody else is affected by this. For example, printer and scanner drivers - which typically run in userspace - are not affected. Neither are games.
      Wrong. If I'm making some program that uses the GPU, guess what? I need to interact with the GPU drivers. Now, someone goes and changes the Kernel API, the drivers need a new update, and boom, my program stops working, or I have a massive performance regression, because some feature I was using got broken somewhere between the Kernel and driver.

      When you change the Kernel API, you necessitate a driver redesign. When you necessitate a driver redesign, you really tick off the people who interact with said driver. Additions, fine. But you should almost NEVER remove functionality.

      Since POSIX defines everything concerning processes and a lot of stuff that doesn't concern them (like the df command syntax, etc.), including libc, I'd say everything you've got on your Linux system complies with POSIX; even Windows XP (I don't know about 7) is POSIX compliant (people tell me fully).
      Windows is partly POSIX compliant.

      POSIX is fundamentally broken for one simple reason: the pthread_create() method is fundamentally flawed because it allows no way to create a thread in a suspended state. (And no, manually suspending a pthread after creation is an example of a horrid, wasteful, programming practice that belongs in the dark ages).

      I had to support POSIX once, and my hatred of it exceeds even my hatred for the Ada language (which I can assure you, I hate with a passion).



      • #48
        Originally posted by Tweenk View Post
        100% false. You seem to have little idea what you are talking about.
        Link your application statically or bundle it with the libraries it uses, which is a common practice on Windows. Done.
        Oh really? I've seen it happen plenty of times. Some library is changed and then your third-party software breaks because it depended on the earlier version of the library. Sure, you can link your application statically or bundle libraries, but due to the lack of standardization there would be a lot of stuff to bundle just to be sure it'll work on different distributions with different versions of libraries etc. Hell, even new versions of GCC sometimes break compatibility with older versions, so you might have problems if your software is compiled with a different version than the system one.

        Originally posted by Tweenk View Post
        This is true, but providing .deb and .rpm covers 99% of Linux users and doesn't take that much time. One guy can figure it out in 2 workdays. Not convenient but not a showstopper either.
        Except that it's not that easy. There are many RPM- and deb-based distributions and many releases of each with a varying degree of compatibility with each other. So you can't just create an RPM and then expect it to magically work across all RPM-distributions.

        Originally posted by Tweenk View Post
        For 90% of games distributed as .tar.gz it's as difficult as:
        1. Left click .tar.gz, select 'Extract Here'
        2. Double-click the executable or launcher script
        3. Play game
        So the standardized way of installing software is to just unpack it in your home dir? Right. But let's say you want to install the game so that all users can access it. How would you do that? Let's say you're an average user with average computer skills. First you would have to be somewhat familiar with the FHS to know where to unpack it so that it's accessible to everyone. This isn't easy to figure out just by looking at the names of the folders (/dev, /mnt, /var etc.), so you'll probably have to google it. The next problem is to actually move it into place, since you don't have write access to most folders. So you'll have to resort to google again to learn how to use sudo to gain temporary root access to the file system. After that the game might be accessible to everyone, but you have to open it from the file browser; it won't show up in the Unity dash or whatever you are using (even if you only unpacked it in your home dir). So what do you do? You google it again to find out that you have to create a .desktop file and put it in /usr/share/applications/. Great! Now the game is installed!

        Compare this to what you would do in Windows (run an installer and click Next a few times) or OS X (just drag the app bundle to the Applications dir). App installations on Linux aren't an issue, you say?
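        For reference, the .desktop file mentioned above is just a short INI-style text file per the freedesktop.org Desktop Entry spec; dropped into /usr/share/applications/ (system-wide) or ~/.local/share/applications/ (per user), it makes the menu/launcher entry appear. A minimal sketch, with a made-up game name and install path:

```ini
[Desktop Entry]
Type=Application
Name=Some Game
Exec=/opt/somegame/somegame
Icon=/opt/somegame/icon.png
Categories=Game;
```

        That is the whole file; the friction the post describes is that nothing generates it for you when you unpack a tarball.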

        Originally posted by Tweenk View Post
        The situation will stay the same because API stability was never a problem in the first place, unless you did something really stupid.
        Then why don't you go tell that to the Gnome and Ubuntu developers? I'm sure they would love to hear about it so they can stop wasting their time fixing this apparent non-issue.



        • #49
          The best solution is to target one, maybe two distros and if a distro upgrade breaks something, you tell the users to write a bug report against the distro vendor and GTFO.

          Every other Linux user can grab a .tar.gz and a list of dependencies and do their own work.

          The idea that devs should target 4 million permutations of operating environments is BS.

          Of course devs aren't going to target anything Linux-related until there are actual users to target (e.g., Android).
          Last edited by johnc; 08-23-2012, 05:35 PM.



          • #50
            Originally posted by Tweenk View Post
            This meme is getting boring.
            ... This is only of any relevance to people who want to maintain closed-source kernel drivers, such as Nvidia. Nobody else is affected by this. For example, printer and scanner drivers - which typically run in userspace - are not affected. Neither are games.
            Wrong. You are also affected if you want up to date drivers, even if they're open source. Let's say you have some new wifi adapter that's supported by linux 3.5 or later, but your distribution only has 3.0. If you had a stable driver interface you could just get the driver binary and install it. But instead you'll either have to figure out how to compile a new kernel, or you could pull the driver sources from a git tree and pray that they'll compile against your current kernel, or you could wait a few months for your distribution to have another release which will hopefully include the new kernel. None of these solutions is any good compared to just running an installer and having the hardware working right away as you would do on Windows or OSX. Especially for the average user, who probably knows nothing about APIs and kernels and just wants their newly bought hardware to work.



            • #51
              Originally posted by gamerk2 View Post
              Wrong. If I'm making some program that uses the GPU, guess what? I need to interact with the GPU drivers. Now, someone goes and changes the Kernel API, the drivers need a new update, and boom, my program stops working, or I have a massive performance regression, because some feature I was using got broken somewhere between the Kernel and driver.

              When you change the Kernel API, you necessitate a driver redesign. When you necessitate a driver redesign, you really tick off the people who interact with said driver. Additions, fine. But you should almost NEVER remove functionality.



              Windows is partly POSIX compliant.

              POSIX is fundamentally broken for one simple reason: the pthread_create() method is fundamentally flawed because it allows no way to create a thread in a suspended state. (And no, manually suspending a pthread after creation is an example of a horrid, wasteful, programming practice that belongs in the dark ages).

              I had to support POSIX once, and my hatred of it exceeds even my hatred for the Ada language (which I can assure you, I hate with a passion).
              1.) If you are making a program that needs to interact with a GPU driver directly, you are doing it wrong [I think you mean GPU blobs??? because this is insane]

              2.) Linux rarely if ever breaks ABI within the same major release and its revisions [3.2, 3.2.1, 3.2.3], and for some time now the LTS releases have been ABI stable for a very long time.

              3.) If you mean you have a 3rd-party blob driver like FGLRX or Nvidia's and you want to run it outside the kernel versions supported by that driver, that is your fault, not Linux's. If you use a Linux-supported [in-tree] driver this rarely ever happens, and it happens on almost any OS including Windows [a Vista driver on Windows 7 == missing features, crashes, pain]. You just think the Windows kernel is ABI stable because it has longer release cycles, and since Windows Vista and 7 are different OSes you find it 100% rational to download new drivers [but they're both NT kernel 6.0/6.1]. In summary: the Linux major release cycle is 6-10 weeks and the Windows one is 2-5 years, that is all. So a 3rd-party driver needs to stick with an LTS kernel release or put resources into supporting every major version each release cycle [if Linux released a major version every 2 years you wouldn't even notice this <-- this is what LTS is for].

              3.a) We can discuss whether the Linux release cycle should be extended, or whether to provide an isolation layer between drivers and the kernel API, or some other solution, but a 3rd-party driver is not Linux's responsibility [just as the Vista Nvidia driver mess was not Microsoft's], so its risk of breakage is always bigger than that of Linux-native in-tree drivers.

              3.b) Internal kernel API changes breaking userspace is a rare scenario too, and when it happens the changelog normally carries a big fat warning that some userspace software requires an upgrade to version X.x.x [kernel people are not crazy...].

              4.) About POSIX: please post a code example and the logic behind it, because I really can't see its usefulness, and looking at MSDN that feature seems deprecated for Windows too [at least the suspend part].

              5.) About Ada I can't comment, since it's outside my area of expertise.



              • #52
                Originally posted by nej_simon View Post
                Wrong. You are also affected if you want up to date drivers, even if they're open source. Let's say you have some new wifi adapter that's supported by linux 3.5 or later, but your distribution only has 3.0. If you had a stable driver interface you could just get the driver binary and install it. But instead you'll either have to figure out how to compile a new kernel, or you could pull the driver sources from a git tree and pray that they'll compile against your current kernel, or you could wait a few months for your distribution to have another release which will hopefully include the new kernel. None of these solutions is any good compared to just running an installer and having the hardware working right away as you would do on Windows or OSX. Especially for the average user, who probably knows nothing about APIs and kernels and just wants their newly bought hardware to work.
                I agree here, but this doesn't mean ABI breakage or messy developers; as I explained in my previous post, the Linux release cycle is just faster. I think this problem arises because the drivers and the internal kernel API are released as a single piece of software, so it could be interesting to strip the drivers from the kernel tree [needs deeper discussion, though] so they can be separate entities, hence solving this problem.

                Another interesting idea could be to expose subsystem-layer [Bluetooth, wifi, USB, SATA, etc.] APIs isolated from the internal API [so the inner layer can be adapted to a new internal kernel API while the outer layer stays unchanged for the drivers].



                • #53
                  Originally posted by nej_simon View Post
                  So the standardized way of installing software is to just unpack it in your home dir? Right. But let's say you want to install the game so that all users can access it. How would you do that? Let's say you're an average user with average computer skills. First you would have to be somewhat familiar with the FHS to know where to unpack it so that it's accessible to everyone. This isn't easy to figure out just by looking at the names of the folders (/dev, /mnt, /var etc.), so you'll probably have to google it. The next problem is to actually move it into place, since you don't have write access to most folders. So you'll have to resort to google again to learn how to use sudo to gain temporary root access to the file system. After that the game might be accessible to everyone, but you have to open it from the file browser; it won't show up in the Unity dash or whatever you are using (even if you only unpacked it in your home dir). So what do you do? You google it again to find out that you have to create a .desktop file and put it in /usr/share/applications/. Great! Now the game is installed!
                  The standard way of installing software is the package manager; even when installing via an installer, the installer has to handle the operating system's directories too, on every operating system.
                  For example:
                  Windows                      | OS X                                   | Linux
                  idk                          | /dev                                   | /dev
                  not used                     | /Volumes                               | /mnt (manually mounted), /media (automatic)
                  many, idk                    | /etc, /Library, /Users/$USER/Library   | /etc, /home/$USER/.config
                  C:/Users/Public/Start Menu   | not used (the .app itself is the link) | /usr/share/applications
                  You see that many things are in common, so what you said is not fully true; these are only a few examples, and if you google more you'll find more.
                  Compare this to what you would do in Windows (run an installer and click Next a few times) or OS X (just drag the app bundle to the Applications dir). App installations on Linux aren't an issue, you say?
                  The Windows way with installers is bad (see all the issues with them), and the bundle system is a different concept; the Linux concept is the package manager.



                  • #54
                    Originally posted by nej_simon View Post
                    Wrong. You are also affected if you want up to date drivers, even if they're open source. Let's say you have some new wifi adapter that's supported by linux 3.5 or later, but your distribution only has 3.0.
                    Are there any big distributions that won't have a repository/PPA/??? with bleeding edge software?

                    Originally posted by nej_simon View Post
                    If you had a stable driver interface you could just get the driver binary and install it.
                    Yes. Do you really want to split that little developer time there is so they keep legacy abstraction layers up to date instead of actually progressing the ecosystem?

                    Originally posted by nej_simon View Post
                    But instead you'll either have to figure out how to compile a new kernel, or you could pull the driver sources from a git tree and pray that they'll compile against your current kernel, or you could wait a few months for your distribution to have another release which will hopefully include the new kernel. Neither of these solutions are any good compared to
                    Why would you not just install a precompiled kernel? Again, are there big distributions where you can't get an updated kernel easily?

                    On archlinux you put an unofficial repository in /etc/pacman.conf
                    Code:
                    [miffe]
                    Server = http://arch.miffe.org/$arch/
                    pacman -Syu && pacman -S linux-mainline
                    Put it in your bootloader. Congratulations, you now run 3.6-rc3.

                    Originally posted by nej_simon View Post
                    just running an installer and having the hardware working right away as you would do on Windows or OSX. Especially for the average user that probably knows nothing about APIs and kernels and just want their newly bought hardware to work.
                    That's a nice idea and all but just today I have seen an up to date windows 7 prof where somebody tried to install some dongle and while it installed the driver directly from windows update it got a bluescreen and never booted again without bluescreening or freezing at the login screen. And what about all that hardware that doesn't work with windows vista/7 anymore and everybody is just okay with that? If you use windows 7 you can throw away your HP Scanjet 3300C even though it works perfectly fine (today it still works with sane). Even Microsoft, the world market leader of legacy, just abandons their old compatibility layers for hardware.

                    I don't think you have a very convincing argument there.



                    • #55
                      Originally posted by Thaodan View Post
                      The standard way of installing software is the package manager; even when installing via an installer, the installer has to handle the operating system's directories too, on every operating system.
                      For example:
                      Windows                      | OS X                                   | Linux
                      idk                          | /dev                                   | /dev
                      not used                     | /Volumes                               | /mnt (manually mounted), /media (automatic)
                      many, idk                    | /etc, /Library, /Users/$USER/Library   | /etc, /home/$USER/.config
                      C:/Users/Public/Start Menu   | not used (the .app itself is the link) | /usr/share/applications
                      You see that many things are in common, so what you said is not fully true; these are only a few examples, and if you google more you'll find more.

                      The Windows way with installers is bad (see all the issues with them), and the bundle system is a different concept; the Linux concept is the package manager.
                      My point is that it's easy to create an installable package for OS X or Windows, while it's more difficult to do the same on Linux due to lack of standardization. If you download Firefox on Windows you get an installer, and if you download it on OS X you get an app bundle, but on Linux you get a tarball that you have to "install" yourself. The same is true for a lot of other projects; most will just provide a source tarball. My guess is that they simply don't think it's worth the effort to maintain a range of packages for Linux distros. I'm not saying that Linux distros should standardize installers or app bundles, but this is a problem that needs to be solved in some way.



                      • #56
                        I don't see the point of this thread; Windows and OS X are not even in competition with GNU and Linux.

                        "Games don't work because of X/Y/Z": funny that other programs, both free and non-free, seem to keep on working...



                        • #57
                          The issue that is being described is accurate.

                          It takes too long for fixes and features to make their way into distributions in a consumer friendly manner.

                          That said, the root cause is not an unstable API/ABI. Worse, when people say this, I always begin to doubt that they know what an API/ABI is. It's really not even a problem. People see a fix/feature hit the mesa-dev mailing list, and for some reason decide that the fix/feature should be available in the current 12.04 release of Ubuntu immediately. This type of thinking is a carryover from the old MS ecosystem, where all releases were fairly secret until the release announcement.

                          The other thing about this is that the people complaining 'are not wrong'. A legitimate fix in Mesa should be available in Ubuntu-current "tomorrow"; it is just that the Linux/GNU ecosystem (or any other ecosystem, for that matter) has not yet figured out how to accomplish this gracefully. Even security fixes take 3-14 days and require near super-human efforts on the part of the distribution maintainers.

                          If one of you readers is a process guy or PM, can you take a look at the path a patch takes to go from mesa-dev to Ubuntu/Fedora-current? While I do not think that a 24-48h release cycle is feasible, I'm curious how we could improve the current 3-6 month cycle without interfering with the progressive development methods employed today.

                          F
                          Last edited by russofris; 08-23-2012, 07:20 PM.



                          • #58
                            Originally posted by nej_simon View Post
                            My point is that it's easy to create an installable package for OS X or Windows, while it's more difficult to do the same on Linux due to lack of standardization. If you download Firefox on Windows you get an installer, and if you download it on OS X you get an app bundle, but on Linux you get a tarball that you have to "install" yourself. The same is true for a lot of other projects; most will just provide a source tarball. My guess is that they simply don't think it's worth the effort to maintain a range of packages for Linux distros. I'm not saying that Linux distros should standardize installers or app bundles, but this is a problem that needs to be solved in some way.
                            They could just use a MojoSetup installer, or one from Bitrock, or Nixinstaller, or a multitude of other installers available for Linux.

                            Personally, I would rather not have proprietary applications handled through my package manager (and do not ATM). I do not even really want my games managed through my package manager either, which is why I have OpenArena and other free titles handled by Desura instead. I prefer to keep my package manager strictly handling free system applications, drivers, and codecs. But maybe that is just me?

                            Regardless, such solutions are available. If you have actually gamed on Linux (which I thought this thread was supposed to be about?) you would have already known this.



                            • #59
                              Originally posted by ChrisXY View Post
                              Are there any big distributions that won't have a repository/PPA/??? with bleeding edge software?
                              Perhaps in some cases, but should a user really have to resort to a repo with bleeding edge software just to get a working driver? Wouldn't it be better to support third-party drivers, so that the hardware manufacturer can bundle a driver with the hardware?

                              Originally posted by ChrisXY View Post
                              Yes. Do you really want to split that little developer time there is so they keep legacy abstraction layers up to date instead of actually progressing the ecosystem?
                              It would significantly enhance the usability of Linux, so it would be worth it; at least if Linux is ever going to be a viable alternative to Windows and OSX for consumer PCs.

                              Originally posted by ChrisXY View Post
                              Why would you not just install a precompiled kernel? Again, are there big distributions where you can't get an updated kernel easily?
                              Well, you can if there are any available for your distribution. For Ubuntu 12.04, for example, there are backported unsupported kernels from 12.10, but they are a lot less tested than the default kernel and might have regressions.

                              Originally posted by ChrisXY View Post
                              On archlinux you put an unofficial repository in /etc/pacman.conf
                              Code:
                              [miffe]
                              Server = http://arch.miffe.org/$arch/
                              pacman -Syu && pacman -S linux-mainline
                              Put it in your bootloader. Congratulations, you now run 3.6-rc3.
                              That's great if you run Arch, but Arch is an advanced distribution and probably not something a novice user would run, nor something that would come preinstalled on consumer hardware. Solutions like this might work for hackers and enthusiasts, but again, if Linux is ever going to be a viable alternative to Windows and OSX for consumer PCs you can't expect users to play around with different kernels and repos to find a working driver.

                              Originally posted by ChrisXY View Post
                              That's a nice idea and all but just today I have seen an up to date windows 7 prof where somebody tried to install some dongle and while it installed the driver directly from windows update it got a bluescreen and never booted again without bluescreening or freezing at the login screen. And what about all that hardware that doesn't work with windows vista/7 anymore and everybody is just okay with that? If you use windows 7 you can throw away your HP Scanjet 3300C even though it works perfectly fine (today it still works with sane). Even Microsoft, the world market leader of legacy, just abandons their old compatibility layers for hardware.
                              That's beside the point. Just because Windows 7 has a stable driver API doesn't mean that every driver is perfectly stable, or that drivers will always be available for older hardware. However, when you buy new hardware and install the bundled driver on Windows, it just works in most cases. You don't have to resort to backported kernels from pre-release distributions, or repos with bleeding edge software, etc.

                              EDIT:
                              This is getting really off topic. Perhaps it's time to just end this discussion about stable API now.
                              Last edited by nej_simon; 08-23-2012, 07:51 PM.



                              • #60
                                Really, am I missing something? Most GNU/Linux distributions are perfectly viable as gaming platforms, and what does this thread have to do with fglrx/Catalyst?

                                Does every game work out of the box on Windows XP/Vista/7/8, or is there a certain amount of fiddling to play the latest AAA+++ title?

                                And why are people comparing Windows with GNU/Linux anyway? The point of a GNU/Linux system is the freedom to do whatever YOU want with it; the point of a Windows or OSX system is to have a company wipe your anus. They are not the same thing and not in competition, which is why a GPU binary blob is such an affront to those of us who understand this.

                                A fresh install of Ubuntu will run ut2k4 if set up correctly. True, it has to be set up correctly... but what do you want? You have to set up any OS correctly, or have somebody wipe your anus.

                                now stfu or i'll kick you up and down the server of your choice on an EEEPC

                                HOLY SHIT!!!
                                WICKED SICK!!!

