Are AMD HD video card users all using older versions of GNU/Linux?


  • #21
    Originally posted by energyman View Post
    I am not using GNU/Linux.
    I am using gentoo. And my card (HD3870) works fine.
    No, you're using the Linux kernel and the GNU programs which interoperate with the kernel.

    Originally posted by Ant P. View Post
    (re: big post about stable kernel APIs somewhere up the thread)
    That's a neat idea. You should post that on the LKML, I'd love to see the reaction.
    You're right, APIs are a stupid idea, who needs 'em, who uses 'em. Standards and actual interoperability are for n00bs, what was I thinking. Everyone should learn to compile and spend their lives doing it, instead of actually, you know, using the program.

    The kernel devs will never agree, because they are funded by companies that want to control the code instead of letting anyone write a driver for it and, consequently, letting users easily install drivers from any source.
    Last edited by Yfrwlf; 20 May 2009, 05:36 PM.

    • #22
      Originally posted by Yfrwlf View Post
      That's the great lie they want to push on you, yes. Linux programs are Linux programs. The main difference between "distros" is they choose to use different package managers that aren't compatible with a universal Linux packaging standard. But no, it's not a "different OS". Binaries can be run on any Linux system because they're all compatible. The main thing that isn't compatible between Debian and Red Hat is their stupid package format difference just because they want to be unique and the companies supporting them want to dig their own proprietary trench in order to get converts to come over to their side (the point of being "proprietary" meaning lock-in). All that is needed is for the community to recognise this deeply impacting Linux issue as a major threat, and push for standards. The companies won't do it for Linux users and developers, because they don't want it.
      It's not nearly as simple as you would like it to be. Firstly, binary compatibility doesn't mean jack. FreeBSD has a Linux binary compatibility layer, while Linux has binary compatibility for a number of different Unixes. ELF loading doesn't imply anything. Hey, even Vista can run Win2000 binaries. Are they the same operating system?

      There are a number of significant differences between Fedora and Debian stable, among them a much older kernel, a much older X version, older libraries, and some libraries built with different configurations and options. They even use different boot/init procedures. We're getting into fuzzy areas here, but I take a holistic view of an operating system as a kernel, its associated userspace tools, and a base set of provided libraries and userspace applications. In this case Debian stable and Fedora are rather disparate, Fedora and RHEL5 even more so. There is no guarantee that everything that compiles on Fedora will do so on Debian stable without some intervention, and good luck getting all your bells and whistles from Fedora working on RHEL5.

      Even distributions with the same packaging system aren't necessarily compatible. Mandriva rpms don't necessarily work on Fedora boxes, and Ubuntu debs won't necessarily install on Debian boxes. Packaging standards, while useful, aren't the panacea you believe them to be. Even as it is, you can convert rpms to debs on a Debian platform.
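
      (If memory serves, the alien tool handles the rpm-to-deb conversion; a rough sketch only, with somepackage standing in as a made-up name for whatever rpm you actually have:)

      Code:
      sudo alien --to-deb somepackage-1.0.rpm   # converts the (hypothetical) rpm into a .deb in the current directory
      sudo dpkg -i somepackage_*.deb            # install it like any other deb

      Whether the result actually runs is another matter, for the library reasons above.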

      In this sense, it makes sense to think of distributions as operating systems, because each ships with a different kernel (version, patches, etc.), different userspace tools, different libraries, and different userspace applications. Sure, some may be compatible, some of the time. But that is hugely dependent on which ones you are comparing.

      Originally posted by Yfrwlf View Post
      They're called standards. A web browser uses HTML standards. An FTP client uses FTP standards. FreeDesktop.org, the Linux Foundation, and other groups help create standards so that when you're in KDE and you want to run a Gnome app, for example, you can still do so. It empowers Linux users to communicate on the same page at certain points. It does NOT hamper progress, because if someone feels that the standard needs to be broadened, it can be, so that new features can be added. What version of HTML are we up to now, five? OpenGL 3? Those are standards which are quite solid, and they help everyone.

      Just imagine a world without any standards, where everyone did just go and do their own thing. .....
      OK, firstly, let me say that I personally don't think of standards and stable/consistent APIs as necessarily one and the same thing. To me a standard is a collection of interfaces and/or best practices maintained in the interests of interoperability.

      HTML is a standard to allow interoperability between web browsers.
      OpenGL was presumably created to provide a way for programmers to speak to a variety of different hardware devices in a uniform way.

      In that sense, yes, I agree that things like package management and desktop integration should be standardised. These are domains where interoperability between packaging formats or desktop implementations is important.

      I fail to see how standardisation would help the kernel though. A controlling group already sets its direction. And what exactly does it need to interoperate with? Are you proposing some UNIX Kernel Standard that requires all Unix kernels to be compatible with each other somehow? Or is this for device drivers only? Are you proposing a Unix Device Driver Standard? How realistic is this, and how necessary? POSIX, for example, already provides a cross-platform way to access various devices, so do we really need to further generalise the implementation?

      Standards can add an unnecessary layer of fat to the development process. OpenGL is a pretty good example. It's a stagnant standard, and part of the trouble is its inertia. One of the problems facing OpenGL 2.0 was maintaining backwards compatibility. This came at the cost of implementing newer features, and was a pretty big disappointment to the OGL community. OpenGL 3.0 was promised to deliver some heavy improvements, but it was more of the same really. Maintaining a single framework to support legacy compatibility came at the cost of added features, performance, and usability, and OpenGL is all but relegated to being the "cross-platform" option. D3D, on the other hand, while not an official "standard", is pretty much a de facto standard and has essentially won the war, due to its manoeuvrability and flexibility. People would get pissed off if OpenGL 4, 5, 6, 7, 8, and 9 changed the API every release, but they are currently pissed off that OpenGL 2 and 3 didn't change it enough.

      Now, if you are commenting more on the stability between kernel versions, I feel my point still stands. Yes, ideally, updated kernels shouldn't be continually changing interfaces, but sometimes this is necessary. Maintaining legacy compatibility at the cost of significant architectural or performance improvements is pretty much the OpenGL trap. To be fair, I don't follow the driver side too much, but yes, I do remember when they updated the wireless stack, leaving me using flaky experimental drivers. That being said, I don't think the kernel guys are anti-standard or against stable interface layers at all. AFAIK they are pretty POSIX-friendly (look at the ext4 discussions) and layers like the VFS are pretty stable (cue Reiser4 flames). And from what I understand, the stuff happening now with in-kernel graphics drivers will also provide a more standard way for other display servers to access graphics hardware.

      Originally posted by Yfrwlf View Post
      It doesn't even work on the major distros, that's one of the points of this thread. I had to downgrade to kernel 2.6.27 and xorg 1.5 (using the Ubuntu Intrepid software bunch), I assume, since fglrx won't work in the newer versions yet with my 4850x2.
      Well, expect compatibility between X/kernel versions and drivers not to be great until the new graphics stack settles down. It's not like they can work on it for 4 years and then release Linux 7. They release stuff incrementally, and when structural changes are made for the benefit of the system, things will break. This is unfortunate, but it's reality. It IS a bit of a mess, but it has less to do with a lack of standards and more to do with the previously stagnant state of XFree86. It's only been pretty recently (2005) that the graphics stack has started to undergo improvement, so most of us expect a little pain.
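
      (Not definitive, but when fglrx refuses to play it's usually quickest to check exactly which versions you're on; the exact commands and log locations can differ per distro:)

      Code:
      uname -r          # running kernel version
      Xorg -version     # X server version (or check /var/log/Xorg.0.log)
      fglrxinfo         # if fglrx loaded correctly, this should report an ATI OpenGL vendor/renderer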

      • #23
        Originally posted by Yfrwlf View Post
        Originally posted by energyman
        I am not using GNU/Linux.
        I am using gentoo. And my card (HD3870) works fine.
        No, you're using the Linux kernel and the GNU programs which interoperate with the kernel.
        Which Linux kernel?
        What version of X?
        What system libraries (GNU and non-GNU)?

        It makes much more sense to call it Gentoo.

        A kernel does not an operating system make.

        • #24
          Originally posted by yesterday View Post
          Which Linux kernel?
          What version of X?
          What system libraries (GNU and non-GNU)?

          It makes much more sense to call it Gentoo.

          A kernel does not an operating system make.
          true - but in a discussion of drivers the main point of interest would be the kernel, though this is also where the greatest strength and weakness of OSS comes back to bite us - just because ubuntu and gentoo both use the linux kernel doesn't mean they are the same kernel. even if you were using 2.6.29 in both, that still doesn't mean they are exactly the same. it could very well be a problem with a specific change/patch/whatever ubuntu has applied, or maybe it's solved by a specific change/patch gentoo has applied.
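
          (a rough way to compare what two boxes are actually running - the last line assumes the kernel was built with CONFIG_IKCONFIG_PROC, which isn't guaranteed:)

          Code:
          uname -r                      # e.g. the suffix after 2.6.29 usually hints at distro patching
          cat /proc/version             # compiler, build host and build date
          zcat /proc/config.gz | less   # exact kernel config, only present if CONFIG_IKCONFIG_PROC was enabled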

          although, saying you aren't using gnu/linux when you are using gentoo is inaccurate, as by definition when using gentoo you are using the linux kernel together with base system packages from GNU. not necessarily disagreeing that calling it gentoo may make more sense, but you are still using GNU/Linux.

          as for the original question, no problems here with arch/2.6.29/fglrx with either a hd2600 or a hd3200.

          • #25
            Originally posted by Yfrwlf View Post

            That's the great lie they want to push on you, yes. Linux programs are Linux programs. The main difference between "distros" is they choose to use different package managers that aren't compatible with a universal Linux packaging standard. But no, it's not a "different OS". Binaries can be run on any Linux system because they're all compatible. The main thing that isn't compatible between Debian and Red Hat is their stupid package format difference just because they want to be unique and the companies supporting them want to dig their own proprietary trench in order to get converts to come over to their side (the point of being "proprietary" meaning lock-in). All that is needed is for the community to recognise this deeply impacting Linux issue as a major threat, and push for standards. The companies won't do it for Linux users and developers, because they don't want it.



            They're called standards. A web browser uses HTML standards. An FTP client uses FTP standards. FreeDesktop.org, the Linux Foundation, and other groups help create standards so that when you're in KDE and you want to run a Gnome app, for example, you can still do so. It empowers Linux users to communicate on the same page at certain points. It does NOT hamper progress, because if someone feels that the standard needs to be broadened, it can be, so that new features can be added. What version of HTML are we up to now, five? OpenGL 3? Those are standards which are quite solid, and they help everyone.

            Just imagine a world without any standards, where everyone did just go and do their own thing. Let's say there were 500 different website protocols, for instance, instead of having everything built on HTML. Imagine trying to function in such a world; it would be horrible, bloated, confusing, and nothing would get done. Every web-based program would have to try to be compatible with all the craziness.

            I'm in no way saying that there shouldn't be different approaches, and competition, for those things are good. However, with most things, there are definite areas which are similar, and those can be shared. Those are points at which standards should be adopted.

            For example, let's say you have two different file manager programs. Do they use standards? Already, yes, they use a ton of standards, because they interface with files on a hard drive managed by file systems etc. etc., and there are standards every step of the way; that's how computers are built, otherwise you'd be screwed with pineapples. As you move up the stack from hardware to software, you encounter several different standards along the way. So, for the file manager program itself, if you recognise that there are several common needs there, and can find a way to utilise a common standard or even a common library in order to work together with other similar projects, of course it's in your interest to do so.

            If everyone had to re-invent the wheel for everything, NOTHING would EVER get done.



            Maybe that's why D3D is becoming more popular recently, so I sure hope that OGL can compete properly and catch back up if it really is lacking like you accuse it of being.

            But guess what? Let's say that OGL, a standard, changed its points of communication, its API, the actual part that is a standard, and implemented OGL 3, 4, 5, 6, 7, 8, 9, 10, etc., and "moved" like you say, and none of those versions happened to be at all compatible with any other version.

            a) That would be retarded, because there's no reason for it to be completely incompatible like that when they all would use nearly the exact same things, so the stable parts should have a clear API, while the new OGL calls should be added on, and what do you know, that's the way it happens to work.

            b) No one would use the "standard".

            The reason why: because that's not a standard. You clearly need to understand what a standard is, and how to implement one so that it doesn't impede progress and development. That's what standards are for: not to impede progress, but to allow just the communication part to occur.

            Let me give you a quick example of that. HTML again. How many programs use it? Hmm, let's see: Google Maps, Google Docs, Yahoo Mail, Slashdot, all sorts of different programs and content, all using the same standard! So wow, you can have diversity and competition while using standards that are quite static. Maybe OGL needs to get its tail into gear, maybe that was a poor example, but hopefully by now you are getting some idea of the point of standards, because without them, none of the above mentioned web apps and sites would exist!

            And the rest of your comment was more of the same, so I'll leave it there. Ultimately this is the on-topic point: there is nothing stopping the creation and adoption of a universal, intelligent Linux binary packaging system; the only reason it doesn't exist is the disinterest of distro companies in playing nicely with competing companies, because they use their repos as a lure to get users to come to them. It's completely pointless and fragments Linux where there is no reason to do so, in an area which is critical for simplifying binary deployment by closed and open source software projects alike.

            For now we're stuck with crappy archive files for the most part, which aren't very user-friendly to install at all.



            It doesn't even work on the major distros, that's one of the points of this thread. I had to downgrade to kernel 2.6.27 and xorg 1.5 (using the Ubuntu Intrepid software bunch), I assume, since fglrx won't work in the newer versions yet with my 4850x2.

            Yes, Nvidia has always had quick support of the latest stuff. AMD is smaller, so they're behind in the closed source driver, but ahead in the open one still AFAIK. However, the open one doesn't have 3D, so it's a deal breaker after you plop down $1400 for a new system expecting to play some high-end Linux or Wine games.

            wow, that whole posting is so wrong and so full of really, really bad ideas - you should stop talking about this. You're only ridiculing yourself.

            I'll give you an example of why distributions have more differences than just package managers:

            expat. Expat had a change in naming (and a minor ABI change). Distribution A uses the old expat, Distribution B uses the new expat. Even if A and B are identical in every other aspect, the expat difference makes apps compiled for A against expat (and that's a lot of them) fail on B, and vice versa.
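
            (you can see this kind of mismatch for yourself with ldd/readelf; someapp below is just a placeholder for any binary linked against expat, and the library path may vary:)

            Code:
            ldd /usr/bin/someapp | grep expat               # which libexpat soname the binary expects at runtime
            readelf -d /usr/lib/libexpat.so.* | grep SONAME # the soname(s) the installed library actually provides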

            Your HTML example is rotten to its core. There are how many HTML standards? We are at what, 4.1 or something? And each HTML renderer shows the same page a bit differently. HTML is a good example of why standards don't work the way YOU think they do.

            OpenGL - another thing you don't understand. OpenGL was not invented by SGI to make programming different hardware easy. It was invented to make programming for THEIR hardware easy. That OpenGL had so much success afterwards is a whole other can of worms.

            c) gnome apps & kde apps. Yes, you can use a gnome app in kde. That 'standard' has nothing to do with freedesktop. Or 'standards'. The gnome app starts everything it needs to run in the background, so you suddenly have 'both' desktops running - only an incomplete gnome. Interaction between apps from different desktops is... not very satisfying - and the standard every app has to follow is ICCCM (however many Cs that is). Which is a bitch and very, very broken.

            d) the linux kernel. People smarter than you have explained many times why stable 'APIs' for external modules are a very bad idea. Just go to an LKML archive of your choice and look for yourself. Some hints:
            bug fixing
            cruft
            speed
            are keywords to look for. I am pretty sure you never really read any of those threads, based on how you confuse even basic stuff.

            e) 'crappy archive files' are easy enough to install. The best thing - they prevent idiots from installing infested crap from dubious sources. Again, a problem you seem to have missed.

            f) I am using gentoo. Not gnu/linux. I am using X - not gnu. I am using KDE - not gnu. Most of my system tools are not gnu. python? not gnu. Perl? not gnu. java? not gnu. star? not gnu. bzip2? not gnu. qt? not gnu. Do I need to continue? Nobody uses 'gnu/linux' - that would be a system with GNUstep as the GUI.

            g) stable 'APIs' are only a problem for ATI and NVIDIA. And both are on very thin ice already. Why cater to someone who might have to stop what they are doing if any of the core contributors decides to sue them?

            h) zeroinstall is not a solution - it will just make it easier for idiots to destroy their system.

            in conclusion: you don't know what you are talking about. You're parroting some crap you read on Slashdot. You never really thought the mess through.

            • #26
              "e) 'crappy archive files' are easy enough to install. The best thing - they prevent idiots from installing infested crap from dubious sources. Again, a problem you seem to have missed."

              Sorry, but that is not at all the case. That statement would only hold water if every file were scrutinized by an end user who actually knew what they were looking at.

              • #27
                an idiot that cannot do ./configure && make && make install is nicely prevented from installing crap - until they see an oh-so-easy-and-friendly zeroinstall installer.

                • #28
                  Originally posted by energyman View Post
                  an idiot that cannot do ./configure && make && make install is nicely prevented from installing crap - until they see an oh-so-easy-and-friendly zeroinstall installer.
                  Uh-huh, and all makefiles contain a nice uninstall target, don't they? Of course that doesn't prevent "dubious" code from being injected into your system via a modified tarball taken from who knows where. At least packages are usually signed and easily uninstalled.
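
                  (If you do build from source, something like checkinstall can at least give you a removable package instead of a bare make install - a rough sketch, assuming checkinstall is packaged for your distro:)

                  Code:
                  ./configure && make
                  sudo checkinstall   # wraps "make install" into a .deb/.rpm/.tgz and installs it via the package manager, so it can be cleanly removed later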

                  • #29
                    Which do you think is more likely: tampered source or a tampered binary? A binary, of course, because with source it's far more likely that someone will actually catch it.

                    • #30
                      Originally posted by curaga View Post
                      Which do you think is more likely: tampered source or a tampered binary? A binary, of course, because with source it's far more likely that someone will actually catch it.
                      Like the unofficial Catalyst drivers some people download from unofficial websites.
