linux, the very weak system for gaming


  • Originally posted by gamerk2 View Post
    But it exists. Hardware makers do NOT like to constantly have to update their drivers, especially since a lot of HW is sold for years at a time.



    My question is why the API is in need of constant changes [specifically, removals].



    So you are basically saying: "Non-geeks need not apply". General users are going to download/install the latest version of SW, period. If they then find their HW doesn't work, guess whose fault that is?



    "alien nature"? Drivers are the correct way for devices to talk to the OS; thats how it SHOULD be done. The API is the interface from which they do this, so all devices can code to one standard.

    If drivers can't be packaged easily with the OS, then either the packaging system or the OS [or both] is faulty. Stop complaining then and fix it.
    But don't those difficulties perhaps point to deeper underlying problems with the overall system architecture?
    1.) This is exactly why most of them send/release/contribute code to the kernel, or help kernel developers integrate their drivers into the main tree <-- [Microsoft, Red Hat, Adaptec, LSI, IBM, HP, Atheros, etc.]. The problem is mostly GPU blobs and BIOS manufacturers [they don't respect the ACPI standard in any meaningful way, and the ongoing litigation is slow].

    2.) Linux carries a lot of Unix inheritance that is useless, unstable, or terribly outdated, left over from that distant past, and after many years of research many new, improved technologies are entering the tree to fix or remove the dinosaurs [removal of the Big Kernel Lock, VSE, tickless, RT, ASLR, B.A.T.M.A.N., Open vSwitch, virtio, KVM, NUMA, etc.].

    3.) Sure, but if you upgrade to the Windows 8 beta and your GPU freezes the PC, what answer do you get when you call Microsoft or NVIDIA? The same applies here; people just get confused because the vanilla kernel release cycle is only 6-10 weeks, compared to 2-5 years for Microsoft or 1-2 years for OS X. So the distro in this case is the one that chooses the kernel [distro == Windows, i.e. a product based on kernel X, while the kernel is just a kernel, not an OS].

    4.) Sadly, neither NVIDIA nor AMD is package friendly, and both need to replace a big chunk of your distro [kernel module management, libGL, libGLX, the DDX, etc.]. It can get pretty messy, and they both seem to love their install scripts and refuse any other option [AMD is a bit better here, since they open documentation and r600g is a native driver].

    5.) NVIDIA and fglrx driver code is not Linux-native code beyond the most basic layer [0.5%, maybe], and the same is true for Windows: they use 99% of the code for both Windows and Linux [that is why the nvidia/fglrx drivers have more LOC than the NT and Linux kernels added together]. That is what "alien" means, and that 99% is a precompiled .o black box whose internals only NVIDIA knows <-- this is not the proper way to write a driver on any OS, it is just the cheaper way to do it.

    Comment


    • Does anyone have an example of a game or program that has stopped working due to API changes?

      Comment


      • Originally posted by D0pamine View Post
        Does anyone have an example of a game or program that has stopped working due to API changes?
        Only momentarily. They started working again shortly after, when the developers updated the release. All of the programs were network utilities, though (Ethereal may have been one of them, and it was many years ago). I also had some GPIB cards that had problems after a kernel subsystem overhaul, and they were dropped (never corrected). I'm not pissed off at the kernel devs, though. The blame lies with the GPIB manufacturer, who had to choose between updating their kernel drivers or trying to charge me for new hardware. They chose the latter. I resolved the situation by purchasing a $100 surplus Optiplex and installing a distro old enough to support the hardware (Red Hat 7.2). I hear that the box is still running, and that the GPIB cards are still working a decade later.

        I guess my point is... yes, stuff breaks. It's not a complete myth, but it's not as bad as the pooh-poohers make it out to be. It sucks when you're the odd man out and an application's developers or a manufacturer choose not to stay with the times. Applications and chips that you spent real money on end up on a legacy box with an older kernel; you can often extend their lifetime by a few years this way. I have trouble imagining scenarios where someone would blame Linux for the situation, though, and in a number of scenarios, mitigation is trivial enough to forgive the failings of any responsible party.


        Comment


        • I do understand, especially when it comes to various expansion cards. I once tried to get a SCSI card working on WinXP SP3 for a scanner, which had worked with no effort from me in Debian. I've never used Ethereal, although I know of it; a guy at work was talking about it once, but he was using Wireshark, which he was trying to get me to use instead of tcpdump. However, I was thinking more of non-free games (it's understandable that a program that uses libpcap breaks if libpcap is updated, or any other library for that matter). I have few games for GNU/Linux, but they all still work, from Unreal Tournament (99) to Postal 2 to Quake Wars. They all run without issue on ~amd64 Gentoo, and Postal 2 works on Debian Squeeze for sure.

          Really, I can't find a game that doesn't work...

          Why has Wine been built for Windows, I wonder? Is it because this is more of a Windows problem than a GNU/Linux problem, or does someone out there love MinGW enough to compile Wine for Windows for no good reason?

          I'd love to see some evidence of GNU/Linux being an inferior gaming platform to any OS, not just Windows.

          Comment


          • Originally posted by jrch2k8 View Post
            1.) This is exactly why most of them send/release/contribute code to the kernel, or help kernel developers integrate their drivers into the main tree <-- [Microsoft, Red Hat, Adaptec, LSI, IBM, HP, Atheros, etc.]. The problem is mostly GPU blobs and BIOS manufacturers [they don't respect the ACPI standard in any meaningful way, and the ongoing litigation is slow].
            And yet, not a problem on other platforms. And GPU manufacturers are NOT going to contribute much to open source, if for no other reason than that company coding standards are typically confidential. Never mind that you don't want the competitors to see what you did and decide to "borrow" a few ideas.

            2.) Linux carries a lot of Unix inheritance that is useless, unstable, or terribly outdated, left over from that distant past, and after many years of research many new, improved technologies are entering the tree to fix or remove the dinosaurs [removal of the Big Kernel Lock, VSE, tickless, RT, ASLR, B.A.T.M.A.N., Open vSwitch, virtio, KVM, NUMA, etc.].
            ...And? Feel free to change the backend. Do NOT change the underlying API calls. If I have an API call that my program uses, you are free to do whatever you want to the IMPLEMENTATION of said API call, but if you remove the call entirely and break my program, you are going to have one really ticked-off developer.

            My point being: if you have an API call to create a thread [we'll just call it CreateThread for simplicity's sake] that takes a number of parameters, that API call had better remain supported, forever, and as long as you provide the proper output, I'm happy. The rest is all implementation, which should be done independently of the underlying API. So if you end up with a new model for creating threads, feel free to implement it. Just make sure the call that creates said thread remains CreateThread.

            So yes, removals of API calls should be exceedingly rare.
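            To make it concrete (a minimal sketch, not any real OS's API: CreateThread here is my made-up name from above, and the pthread backend is just one possible implementation of the moment):

            Code:
            /* public call: this signature stays frozen forever */
            #include <pthread.h>

            typedef void *(*thread_fn)(void *);

            int CreateThread(thread_fn fn, void *arg)
            {
                pthread_t tid;
                /* implementation detail: today pthreads, tomorrow any
                   new threading model; callers never have to change */
                return pthread_create(&tid, NULL, fn, arg);
            }

            Rip out and replace the body as often as you like; as long as the signature and the observable behavior stay put, no caller ever breaks.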

            4.) Sadly, neither NVIDIA nor AMD is package friendly, and both need to replace a big chunk of your distro [kernel module management, libGL, libGLX, the DDX, etc.]. It can get pretty messy, and they both seem to love their install scripts and refuse any other option [AMD is a bit better here, since they open documentation and r600g is a native driver].
            Probably because AMD and NVIDIA have a unified driver architecture that is mostly OS independent. Point being, they aren't going to change. Although I'd "love" to hear your definition of "package unfriendly".

            5.) NVIDIA and fglrx driver code is not Linux-native code beyond the most basic layer [0.5%, maybe], and the same is true for Windows: they use 99% of the code for both Windows and Linux [that is why the nvidia/fglrx drivers have more LOC than the NT and Linux kernels added together]. That is what "alien" means, and that 99% is a precompiled .o black box whose internals only NVIDIA knows <-- this is not the proper way to write a driver on any OS, it is just the cheaper way to do it.
            Unified code is cheaper to maintain, and should be mostly OS independent. With the exception of any OS API calls, there really shouldn't be any reason not to use a unified code architecture.
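            Something like this, roughly (a sketch of the split, every name invented for illustration; real drivers are vastly bigger):

            Code:
            /* the OS-independent core: written once, shared everywhere.
               It never calls an OS directly, only this table of services
               that each platform's thin glue layer fills in. */
            struct os_services {
                void *(*alloc)(unsigned long size);   /* per-OS allocator */
                void  (*free)(void *p);               /* per-OS release   */
                void  (*log)(const char *msg);        /* per-OS logging   */
            };

            static const struct os_services *os;

            void core_init(const struct os_services *services)
            {
                os = services;                        /* glue plugs in here */
                os->log("driver core initialised");
            }

            Only the little os_services table has to be rewritten per OS; the other 99% never touches an OS API directly.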

            Comment


            • Originally posted by gamerk2 View Post
              And yet, not a problem on other platforms. And GPU manufacturers are NOT going to contribute much to open source [...] So yes, removals of API calls should be exceedingly rare. [...] Although I'd "love" to hear your definition of "package unfriendly". [...] Unified code is cheaper to maintain, and should be mostly OS independent.
              1.) Mmm, both NVIDIA and AMD have a zillion issues on Windows too [mostly with games, though]. For example, my Windows 7 partition without SP1 and with the 303.xx driver was fine, but I decided to install SP1 and the system stopped booting, so I had to go in from Linux and remove the NVIDIA driver file by file until it booted again in failsafe mode. In the AMD case, Battlefield 3 blue-screened on me until a hotfix was released <-- so yes, on Windows failures are less common, but it's not like you put it, "in Windows nothing fails TM" [I won't even get into RAID card or Fibre Channel drivers, because I'd be here all week].

              2.) Mmm, the issue here is not common APIs vanishing into thin air, but more like an entire subsystem being gone or replaced, and unlike what you think, those were marked as deprecated in most cases, or the change was widely publicized, and every maintainer of in-tree drivers made the respective adjustments. But it seems the blob makers' policy is to fix their code only once the kernel is released, so you get a tangible lag even for a small fix [stop using spinlocks, for example], while in-tree developers work with the fixes from rc1.
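              For in-tree code the handover usually looks something like this (a sketch with invented names; GCC's deprecated attribute is real and gives every remaining caller a compile-time warning):

              Code:
              struct device;   /* opaque; stands in for the real thing */

              int new_subsys_init(struct device *dev, unsigned int flags);

              /* old entry point flagged so in-tree users get warned */
              int old_subsys_init(struct device *dev)
                  __attribute__((deprecated));

              /* kept as a thin shim over the new interface during the
                 transition, then deleted once no caller is left */
              int old_subsys_init(struct device *dev)
              {
                  return new_subsys_init(dev, 0);
              }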

              In kernels you generally don't have a backend [I'd like to know where you got that idea] or a frontend, since a kernel is not a library. A kernel API is much more bare-metal than you think: things like InitUSBport(my_type usbid, bool isReady) don't exist at this level. What you get is DMA access, registers, bus addressing, memory operations, cache operations, bit manipulation, ASM support, etc., i.e. the smallest possible blocks needed to interact with a piece of hardware. See Wikipedia or kernelnewbies.org for a more in-depth idea.
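              At that level, "talking to the hardware" looks more like this (a sketch against an invented device: the base address and register layout are made up, and in a real kernel you would go through its MMIO helpers):

              Code:
              #include <stdint.h>

              #define DEV_BASE   0xfe000000u        /* made-up MMIO base */
              #define REG_CTRL   (DEV_BASE + 0x0)   /* control register  */
              #define REG_STATUS (DEV_BASE + 0x4)   /* status register   */
              #define CTRL_EN    (1u << 0)          /* enable bit        */
              #define STS_READY  (1u << 1)          /* ready bit         */

              static inline uint32_t read32(uintptr_t a)
              {
                  return *(volatile uint32_t *)a;   /* raw register read  */
              }

              static inline void write32(uintptr_t a, uint32_t v)
              {
                  *(volatile uint32_t *)a = v;      /* raw register write */
              }

              static void dev_enable(void)
              {
                  write32(REG_CTRL, read32(REG_CTRL) | CTRL_EN);
                  while (!(read32(REG_STATUS) & STS_READY))
                      ;   /* spin until the hardware reports ready */
              }

              No InitUSBport() in sight: just addresses, bits, and waiting.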

              3.) I think you need to see for yourself what package unfriendly means. I recommend you download the NVIDIA driver from Ubuntu, unpack the deb packages, and check out the scripting in there <-- it will be crystal clear to you then [or an RPM from Fedora, whatever you like].

              4.) It is the cheap way, not the more efficient way, and to achieve it you need to use every dirty hack in the book to keep the fps good enough, hence why these blobs have such a ridiculous amount of LOC.

              5.) You are wrong: even if they released the full source of the fglrx or nvidia blobs, it wouldn't be used at all to improve the Linux drivers, because it would take years of work to understand and extract anything useful or usable from that mess. What the OSS drivers need is documentation [down to the ASM level] and more developers [the hard part].

              Comment


              • Originally posted by jrch2k8 View Post
                1.) Mmm, both NVIDIA and AMD have a zillion issues on Windows too [mostly with games, though]. For example, my Windows 7 partition without SP1 and with the 303.xx driver was fine, but I decided to install SP1 and the system stopped booting, so I had to go in from Linux and remove the NVIDIA driver file by file until it booted again in failsafe mode. In the AMD case, Battlefield 3 blue-screened on me until a hotfix was released <-- so yes, on Windows failures are less common, but it's not like you put it, "in Windows nothing fails TM" [I won't even get into RAID card or Fibre Channel drivers, because I'd be here all week].
                For the most part, WHQL-certified drivers don't have major bugs anymore. Yes, there was that BF3 incident, but those have become very few and far between these days. I'm actually not upgrading my GPU driver that often anymore, due to a lack of reasons to.

                2.) Mmm, the issue here is not common APIs vanishing into thin air, but more like an entire subsystem being gone or replaced, and unlike what you think, those were marked as deprecated in most cases, or the change was widely publicized, and every maintainer of in-tree drivers made the respective adjustments. But it seems the blob makers' policy is to fix their code only once the kernel is released, so you get a tangible lag even for a small fix [stop using spinlocks, for example], while in-tree developers work with the fixes from rc1.
                You make the very silly and dangerous assumption that developers are going to re-code their software because you decided to junk an API they were using. They aren't going to invest either the time or effort to do so.

                In kernels you generally don't have a backend [I'd like to know where you got that idea] or a frontend, since a kernel is not a library. A kernel API is much more bare-metal than you think: things like InitUSBport(my_type usbid, bool isReady) don't exist at this level. What you get is DMA access, registers, bus addressing, memory operations, cache operations, bit manipulation, ASM support, etc., i.e. the smallest possible blocks needed to interact with a piece of hardware. See Wikipedia or kernelnewbies.org for a more in-depth idea.
                I call some API with some inputs, and get some outputs back. It's a message to the kernel to do something and return some result. Nothing more and nothing less.

                I don't care HOW those outputs are achieved.

                So feel free to change as much of the low-level DMA code within the kernel as you want. I DON'T CARE. All I want is for the OS API call to continue to exist. You can do whatever the hell you want with the implementation of said API.

                Comment


                • I see there is some confusion here about how drivers work, how the kernel works, and how programs work
                  (at least those are the parts I can clarify a little).

                  So I'll start with the simplest, programs:
                  a program is, at its binary level (as in the actual binary), almost OS independent; it is platform dependent (x86_64, ARM, RISC, a calculator).
                  What ties it to an OS are the calls it makes to other binaries (libs, for example) and the syscalls it makes to the kernel it's running on.

                  Now let's say you write a C program that opens a file, adds all the numbers written in that file together, and writes the result to stdout (the console).
                  The first thing you'll have to do is declare the libs you're using (however that's done; I don't write C).
                  Then you'll have to allocate some memory to store the numbers, so you call libc again to ask the kernel (you can also add them as they are read, but adding them all at once is probably faster).

                  C uses standard libs to communicate with the kernel ((g)libc is one) to open a file,
                  so libc gets loaded into memory (it's probably there already, since just about every other program you have uses it),
                  the program does the adding (for more complicated math there's libm and such),
                  and calls libc again with the instruction to write that data to stdout.
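                  Something like this, I think (a sketch only, since I don't write C; fopen/fscanf/printf are the libc calls, and under the hood they become kernel syscalls like open/read/write):

                  Code:
                  #include <stdio.h>

                  int main(void)
                  {
                      FILE *f = fopen("numbers.txt", "r"); /* libc -> open() */
                      long n, sum = 0;

                      if (!f)
                          return 1;
                      while (fscanf(f, "%ld", &n) == 1)    /* libc -> read() */
                          sum += n;                        /* the adding     */
                      fclose(f);
                      printf("%ld\n", sum);                /* libc -> write() */
                      return 0;
                  }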

                  Now, most of the time in that really simple program is spent in the program itself
                  (well, more time is probably spent reading the file, as CPUs are really fast at adding numbers together; but for more complicated programs it holds).

                  Why use libs to do things at all, then, when it could all be done in the program itself, cutting out the calls (even the syscalls, easily) that take time?
                  Well, firstly because it's a lot easier than writing lots of lines of code yourself (someone probably coded it better in that lib than you can), and it makes your code easier to read too.

                  The other thing is that libs can be dynamically loaded, effectively reducing the memory footprint of your program, fitting it in the cache and thus reducing cache misses (probably gaining more speed than the calls cost), I think.


                  A kernel is mostly there so you don't have to write device-dependent code (bits upon bits of human-unreadable things setting up the device and making it do something).
                  The kernel also schedules programs (multitasking), gives them blocks of memory, and who knows what else.
                  It's a huge program.

                  The Linux kernel is built as a monolithic, hybrid-like kernel.
                  If it were split into separate pieces (a microkernel design), it would have to call itself for everything it needs to do, like making 3, 4, or more calls for every piece of a file a program wants, slowing it down.
                  But a kernel doesn't take much CPU time itself (mostly the scheduler does, I think), beyond doing the things your program would have to do anyway.

                  The mentioned things like B.A.T.M.A.N., KVM, and so on don't affect the kernel's speed or compatibility (usually, anyway), just its size.
                  Things like the tickless option, on the other hand, do: tickless saves energy by letting the CPU skip periodic timer ticks when it's doing nothing. (Timer ticks shouldn't be confused with clock cycles, the smallest step a CPU can make; adding two loaded numbers together takes a cycle, dividing them takes more, 13 they say on my CPU.)
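                  (For reference, tickless idle is a kernel build option; a .config fragment looks roughly like this, though the exact option names vary between kernel versions:)

                  Code:
                  # tickless idle: skip periodic timer ticks when the CPU is idle
                  CONFIG_NO_HZ=y
                  # base timer frequency when busy (an illustrative choice)
                  CONFIG_HZ_1000=y
                  CONFIG_HZ=1000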

                  And all those kernel options, drivers, and such can be disabled by recompiling manually (the defaults you get from your distro are good; some things can be changed at run time too),
                  but they probably don't affect performance much.

                  "unix inheritance" you could call the way programs interact, the many standards of computing and programing of same
                  (actually as far as i know its a wide term; vague to me but i know POSIX is a part of it and i know UNIX compatibility is important to many people)
                  but making a whole system that works great form the start is... daunting

                  About drivers I know little.
                  I know the CPU has some registers that redirect bits to ports (PCI, for example) and other parts.
                  I know almost all hardware (the CPU too) has registers that put it into a mode of operation.
                  I know GPU drivers are bloody complicated, as they have to receive data in a certain order while taking care of the registers, memory (onboard and offboard), and timing (registers and units usually need to wait to "settle" to avoid data corruption).
                  So I don't blame out-of-tree drivers for replacing lots of libs to get coherency in their own back yard (that's what usually causes the breakage, I think).

                  So... that's about all I know about it, and note that some small parts could be wrong.

                  On the other hand, nobody is making anybody use the latest kernel/xorg/drivers (except on the newest hardware, where there are no drivers in the older kernel),
                  and you probably won't get much more fps using them; in fact, you might lose some, as Michael has shown over the years.
                  As far as I know, the newest Wine and probably all open-source games can be compiled on really old kernels with a really old GCC.

                  Edit: oh, and I almost forgot;
                  what if the kernel (or whatever else) devs stopped changing things around?
                  What if they had settled things in the 32-bit era and said "now it's stable, now it's fast, now we stop thinking about how we can make it better, faster, more flexible"?

                  Breakage is a part of that progress.
                  And if computer science showed you could run a program twice as fast by rewriting the kernel (you probably can't), wouldn't you like all your programs to run twice as fast, even at the expense of waiting some 6 months until enough bugs are worked out?
                  (600 fps L4D2, woohoo)
                  Last edited by gens; 08-28-2012, 11:24 AM.

                  Comment


                  • The tools on this forum are actually ***** who probably work for M$ and/or A**le, trolling pro-"Linux as a gaming platform" websites. In reality, Linux is a better gaming platform than Windows, and Valve proved it when they ported Left 4 Dead 2 (as of this writing it is still in closed beta testing) and got better performance than they did on Windows 7. http://www.vg247.com/2012/08/03/left...than-windows-7 (hey, look, a link to actual data). And that, my friends, has your bosses running so scared that they've got their trolling henchmen running around looking really stupid. So go back to your masters and tell them that the real Linux community (who have been playing AAA games on Linux for years, from companies like id, 3D Realms, Raven Software, and Splash Damage, to name a few [some ported by icculus]) was not fooled by your spin tactics and the all-around B$ that you spew.

                    Comment


                    • Originally posted by gens View Post
                      I see there is some confusion here about how drivers work, how the kernel works, and how programs work [...] "Unix inheritance" you could call the way programs interact, the many standards of computing and of programming them (actually, as far as I know it's a wide term; vague to me, but I know POSIX is part of it, and I know UNIX compatibility is important to many people). [...] Breakage is a part of that progress.
                      Mmm, about Unix inheritance: I don't mean POSIX or any higher form of API, but rather the '90s "state of the art" techs, like locks, non-scalable or non-NUMA-aware SMP code, uber-cool APM code [<-- pre-ACPI], unix98 terminals, old schedulers, no in-kernel memory manager for graphics, lack of memory hotplug, lack of a hypervisor [now we have Xen and KVM in-tree], QEMU's golden age of slowness, and on and on and on [kernelnewbies has very nice changelogs and articles about all this new cool stuff].

                      Now, BSD is the other side of the coin: any sort of upgrade or new tech can take decades to be included. So if your thing is a stable ABI for all eternity, FreeBSD is a better choice for you [<-- for gamerk2] than Linux, and NVIDIA supports it with da blob too, and as far as I remember most Linux games should run on BSD as well [give PC-BSD a try; it's not Ubuntu, but it's the easiest BSD around as a desktop].

                      Comment


                      • Originally posted by oldskool69 View Post
                        hi all,

                        Why is it that I get like 50% better average gaming performance in Windows 7 compared to Lubuntu, for example, no matter what I do? And why is it that you always get errors, warnings, and crashes every time you install or run something on Linux? I believe it is about time that developers make Linux a gaming platform that is better than Windows. Every year I try Linux again, and it makes me sick that it's still trash for gaming.

                        This is one video that shows what I tried to "explain" here: http://www.youtube.com/watch?v=Sh-cnaJoGCw

                        On gaming you're right at this point in time, BUT (and it's a BIG but) if you play it right and develop an engine for Linux (just for Linux) and DROP Windows support,
                        the performance on Windows will be a joke compared to Linux (on most distros).

                        And you mentioned a few reasons that aren't directly connected to Linux gaming performance, so please be accurate next time.
                        Last edited by nir2142; 08-28-2012, 11:54 AM.

                        Comment


                        • Originally posted by jrch2k8 View Post
                          Mmm, about Unix inheritance: I don't mean POSIX or any higher form of API, but rather the '90s "state of the art" techs, like locks, non-scalable or non-NUMA-aware SMP code, uber-cool APM code [<-- pre-ACPI], unix98 terminals, old schedulers, no in-kernel memory manager for graphics
                          Wasn't all that changed?
                          We got NUMA, SMP, GEM (or whatever the memory manager is now), and even a couple of schedulers to choose from (I know of 2, both being updated all the time).

                          Idk about APM,
                          but terminals usually don't get in the way of performance or anything, as far as I know.

                          Comment


                          • Linux has many problems that will need to be addressed before it can be used as a first-rate gaming platform. The biggest ones, in my opinion, are:

                            • Old OpenGL 3.0 - we are stuck on an OpenGL that is 5.5 years behind the current version and equivalent to DirectX 6.
                            • Kernel upgrades not ABI/API compatible - every kernel upgrade requires new drivers, which is especially troubling for non-OSS drivers (i.e. graphics cards).
                            • No unified installation package like Windows - too many different packaging systems (RPM, deb, ebuilds, etc.) force developers to build a package for every distro!

                            Hopefully, now that Valve is porting Steam to Linux, some of these issues might get addressed, and with Microsoft focusing more on console gaming vs. Windows gaming, now is the perfect opportunity for Linux to finally gain ground in the gaming space.
                            Last edited by gururise; 08-28-2012, 02:11 PM.

                            Comment


                            • Originally posted by gururise View Post
                              Linux has many problems that will need to be addressed before it can be used as a first-rate gaming platform. The biggest ones, in my opinion, are:

                              • Old OpenGL 3.0 - we are stuck on an OpenGL that is 5.5 years behind the current version and equivalent to DirectX 6.
                              • Kernel upgrades not ABI/API compatible - every kernel upgrade requires new drivers, which is especially troubling for non-OSS drivers (i.e. graphics cards).
                              • No unified installation package like Windows - too many different packaging systems (RPM, deb, ebuilds, etc.) force developers to build a package for every distro!

                              Hopefully, now that Valve is porting Steam to Linux, some of these issues might get addressed.
                              OpenGL 3.0 isn't new, but it's not nearly as old as DirectX 6.
                              Kernel updates - speak to NVIDIA about this.
                              No unified install package - Steam will have one, or will install to ~/.local, for example.

                              Unlike Windows? Use Windows then?

                              Comment


                              • Originally posted by Tweenk View Post
                                The problem here is the closed-source driver, not the change in the kernel API. Do not upgrade your kernel if your closed-source driver supplier does not support it. If you don't want the driver supplier to hold back your kernel upgrades, convince them to release an open-source driver.



                                What if some API function turns out later to be poorly designed / unsafe / not general enough? By never removing any functions you prevent the removal of cruft as well as a good deal of improvement.
                                This is the problem with Linux/OSS in general... How do you not upgrade your kernel when the next distro release comes with a new kernel? In my case, I bought a netbook with Poulsbo graphics and installed Ubuntu 11.04 on it, as it had a PowerVR graphics driver for the GPU which worked well.

                                Since the graphics driver is a binary blob, and for reasons outside of my control PowerVR never updated it, am I now stuck forever on Ubuntu 11.04? If that's the case, I'll never be able to upgrade to newer versions of popular software (e.g. Firefox, OpenOffice, etc.).

                                Comment
