Radeon OpenGL 2.0 support


  • #31
    Originally posted by agd5f View Post
    fglrx will still work with his hardware if he doesn't mind using a supported kernel and xserver.
    Which is very crippling in the Linux world. Most distros are release-locked. I could fine-tune something like Arch, but that takes a lot of work, since you need to pin a compatible set of packages and hope for the best.

    Comment


    • #32
      It's probably easier to stay with the distro that worked for you before, at least until the open source drivers are at a point where they do what you need. For everything except gaming the open source drivers are probably at that point already.

      Comment


      • #33
        Seriously - use an older distro which still has support (you can even dual boot if you need newer software - I doubt you need GLSL 100% of the time), or do what I did and buy a new card that is fully supported by fglrx while waiting for the open source drivers to improve. Chances are a new card will be < $100, and it shows ATI/AMD that supporting Linux users is a good business decision.

        Comment


        • #34
          Originally posted by Tillin9 View Post
          Seriously - use an older distro which still has support (you can even dual boot if you need newer software - I doubt you need GLSL 100% of the time), or do what I did and buy a new card that is fully supported by fglrx while waiting for the open source drivers to improve. Chances are a new card will be < $100, and it shows ATI/AMD that supporting Linux users is a good business decision.
          I can't because it's a laptop. And my main focus is development, not gaming (although there's some gaming development going on).

          I'm not whining, but IMHO AMD/ATI could put more effort into this. If it was windblows, the drivers would have been in long ago. Still, I applaud all the effort and see great things coming once these abstractions stabilize.

          Comment


          • #35
            Originally posted by Almindor View Post
            If it was windblows, the drivers would have been in long ago.
            Actually we dropped support for Linux and Windows at the same time. The issue, I think, is that new versions of Windows normally work with older drivers (eg Vista could use XP drivers and Win7 can use Vista drivers).

            Linux changes more frequently, with API changes happening every few months rather than every few years, and requires constant driver updates in order to keep working with changed X and kernel versions. There is no attempt to make use of drivers which supported the previous API; drivers have to constantly change in order to keep working (unlike Windows where a driver from two years ago is often still useful).

            The kernel changes are particularly expensive when functions the driver relies on are marked GPL-only, ie saying "binary drivers can't use this functionality any more, you have to redesign the bottom end of your driver and find another way to do the same thing".
            Last edited by bridgman; 10-13-2009, 11:31 AM.

            Comment


            • #36
              Originally posted by bridgman View Post
              The kernel changes are particularly expensive when functions the driver relies on are marked GPL-only, ie saying "binary drivers can't use this functionality any more, you have to redesign the bottom end of your driver and find another way to do the same thing".
              I'm very curious about this; does it happen often? Is there a _technical_ reason behind it?

              Comment


              • #37
                Originally posted by yotambien View Post
                I'm very curious about this; does it happen often? Is there a _technical_ reason behind it?
                Yes.

                Comment


                • #38
                  Originally posted by yotambien View Post
                  I'm very curious about this; does it happen often? Is there a _technical_ reason behind it?
                  Yes - creating a stable interface and sticking to it is a lot of work.

                  Microsoft only releases a new kernel every couple of years. They usually provide compatibility wrappers for old APIs.

                  Linux is different. Frequent releases mean that all API changes become public at some point, and the number of compatibility wrappers to write would be much higher. But the only drivers suffering from these problems are binary drivers - open source drivers that were merged into the kernel tree are updated by the kernel maintainers on every API change and will always keep working.

                  So maybe it's not strictly a technical reason, but do you expect the kernel developers to do a huge amount of extra work to cater for closed source drivers that nobody really likes?

                  For further reading, try this and google for the resulting discussion.

                  Comment


                  • #39
                    Originally posted by bridgman View Post
                    Linux changes more frequently, with API changes happening every few months rather than every few years, and requires constant driver updates in order to keep working with changed X and kernel versions. There is no attempt to make use of drivers which supported the previous API; drivers have to constantly change in order to keep working (unlike Windows where a driver from two years ago is often still useful).

                    The kernel changes are particularly expensive when functions the driver relies on are marked GPL-only, ie saying "binary drivers can't use this functionality any more, you have to redesign the bottom end of your driver and find another way to do the same thing".
                    This is not quite fair to say, bridgman. The _external_ kernel API doesn't change frequently at all.

                    The _internal_ API of the kernel however does change in the way you describe. If your driver uses the external API of the kernel only (when it is in userspace) then your driver would only very infrequently need updates for changed APIs.

                    If your driver uses the internal API however, then yes, you'll have to deal with the internal API changes. That's the tradeoff. And yes I know, in order to get any performance at all you'll at least need part of your driver in kernel space.

                    This is a classical dilemma for proprietary drivers.
                    Which is also why open sourcing your driver/specs is such a good idea :-)

                    Also, comparing the Windows lifecycle with the Linux lifecycle is quite unfair as well - they are completely different. Linux thrives on the ability to change internal APIs, as you well know. For example, how else could G3D ever have come to life?

                    Comment


                    • #40
                      I asked a very specific and simple question:

                      Originally posted by yotambien
                      Originally posted by bridgman
                      The kernel changes are particularly expensive when functions the driver relies on are marked GPL-only, ie saying "binary drivers can't use this functionality any more, you have to redesign the bottom end of your driver and find another way to do the same thing".
                      I'm very curious about this; does it happen often? Is there a _technical_ reason behind it?
                      You decided to ignore it and provided, yet again, an unrelated link to the same old propaganda. Next time don't forget to include Arjan's doomsday scenario for good measure.

                      In the meantime, I found a more appropriate answer in the linux-kernel mailing list FAQ:

                      # What is this about GPLONLY symbols?

                      * (REG) By default, symbols are exported using EXPORT_SYMBOL, so they can be used by loadable modules. During the 2.4 series, a new export directive EXPORT_SYMBOL_GPL was added. This is almost the same thing, except that the symbol can only be accessed by modules which have a GPL compatible licence (note that this includes dual-licenced BSD/GPL code). This new directive was added for these reasons:
                      o To clarify the ambiguous legal ground on which non-GPL (particularly proprietary) modules lie. A strict reading of the GPL prohibits loading proprietary modules into the kernel. While Linus has consistently stated that proprietary modules are allowed (i.e. he has granted an explicit exemption), it is not clear that he is able to speak for all developers who have contributed to the Linux kernel. While many think Linus' edict means that all contributed code falls under this exemption granted by Linus, not everyone agrees that this is a legally sound argument. The new EXPORT_SYMBOL_GPL directive makes the licence conditions explicit, and thus removes the legal ambiguity.
                      o To allow choice for developers who wish, for their own reasons, to contribute code which cannot be used by proprietary modules. Just as a developer has the right to distribute code under a proprietary licence, so too may a developer distribute code under an anti-proprietary licence (i.e. strict GPL).
                      Note that Linus has stated that existing symbols will not be switched to GPL-only. Developers of proprietary modules for Linux need not fear. Furthermore, it is quite unlikely that Linus will look favourably upon the introduction of new core driver APIs which are restricted to GPL-only modules. This would not be in the best interests of Linux. Linus has forwarded me a message he sent to someone else to clarify his views. Note that since that time, several developers have eroded the number of non-GPL only symbols by writing new (usually better) infrastructure and interfaces and deprecating the older interfaces. The newer interfaces are often tagged as GPL-only. In addition, there are some "kernel janitors" who aggressively submit patches to remove all symbols (whether GPL-only or not) which are not used by code shipped with the kernel source tree.
                      So no, no technical reason whatsoever.

                      Comment


                      • #41
                        Originally posted by fhuberts View Post
                        This is not quite fair to say, bridgman. The _external_ kernel API doesn't change frequently at all.

                        The _internal_ API of the kernel however does change in the way you describe. If your driver uses the external API of the kernel only (when it is in userspace) then your driver would only very infrequently need updates for changed APIs.

                        If your driver uses the internal API however, then yes, you'll have to deal with the internal API changes. That's the tradeoff. And yes I know, in order to get any performance at all you'll at least need part of your driver in kernel space.

                        This is a classical dilemma for proprietary drivers.
                        Which is also why open sourcing your driver/specs is such a good idea :-)
                        A number of OS functions (memory management is a good example) need to be implemented differently for graphics in order to provide optimal performance. As a result the graphics driver stack ends up having to re-implement some of the upper level OS functions and hook into the OS at a lower level than most other drivers.

                        Some OSes recognize this and offer a separate set of stable entry points for graphics drivers, while other OSes require that graphics drivers make use of "internal" functions in order to deliver the same level of performance and functionality. Linux follows the second approach - I'm not saying this is *wrong*, just that we need to recognize that there are some costs as well as benefits.

                        Originally posted by fhuberts View Post
                        Also, comparing the Windows lifecycle with the Linux lifecycle is quite unfair as well, they are completely different. Linux thrives on the ability to change internal APIs as you well know. For example, how else could G3D ever have come to life?
                        Actually Gallium3D is a userspace change, similar to the architectural changes we made in our OpenGL stack a few years ago. No impact on kernel code or APIs, other than an implementation decision to only run over KMS/GEM/TTM/DRI2.

                        You can make a compelling argument that the Windows development cycle and the enterprise/LTS Linux development cycle are very similar, with a new release coming out every couple of years followed by updates which maintain all of the original APIs and core code. The Linux world adds a series of interim end-user releases between major enterprise/LTS releases, resulting in end-user-visible changes (ie the need for new drivers) every few months rather than every couple of years.
                        Last edited by bridgman; 10-13-2009, 01:12 PM.

                        Comment


                        • #42
                          Originally posted by fhuberts View Post
                          This is not quite fair to say, bridgman. The _external_ kernel API doesn't change frequently at all.
                          And this is not quite fair to say to bridgman - after all, he is quite aware of the different API types. He was merely stating the situation from a proprietary driver writer point of view.

                          (BTW, I tend to agree with the Linux kernel decision not to provide a stable (internal) API, because it makes sense from a technical POV. However, it does make life more difficult for people who develop exokernel-style drivers like the graphics drivers, so even though I agree that it's the best choice for Linux as a whole, from a purely egoistic POV I'm not entirely happy about it.)

                          Comment


                          • #43
                            Originally posted by nhaehnle View Post
                            And this is not quite fair to say to bridgman - after all, he is quite aware of the different API types. He was merely stating the situation from a proprietary driver writer point of view.

                            (BTW, I tend to agree with the Linux kernel decision not to provide a stable (internal) API, because it makes sense from a technical POV. However, it does make life more difficult for people who develop exokernel-style drivers like the graphics drivers, so even though I agree that it's the best choice for Linux as a whole, from a purely egoistic POV I'm not entirely happy about it.)
                            Yeah, I know you guys understand it. I actually met bridgman at FOSDEM 2008. You guys are doing a great job.

                            My remark was merely meant to point out that there are more nuances to the story he is telling. It seemed a bit too much like laying the blame on the kernel.

                            Comment


                            • #44
                              Originally posted by bridgman View Post
                              Actually we dropped support for Linux and Windows at the same time. The issue, I think, is that new versions of Windows normally work with older drivers (eg Vista could use XP drivers and Win7 can use Vista drivers).

                              Linux changes more frequently, with API changes happening every few months rather than every few years, and requires constant driver updates in order to keep working with changed X and kernel versions. There is no attempt to make use of drivers which supported the previous API; drivers have to constantly change in order to keep working (unlike Windows where a driver from two years ago is often still useful).

                              The kernel changes are particularly expensive when functions the driver relies on are marked GPL-only, ie saying "binary drivers can't use this functionality any more, you have to redesign the bottom end of your driver and find another way to do the same thing".
                              I really don't want to start a needless argument or alienate you guys, since I really like the work you're doing and, even with the current state of things, feel very thankful. But you didn't quite get my point here.

                              I know you dropped both Windows and Linux support at 8.3. But if a Windows service pack/Vista/whatever update caused the driver to stop working, you'd make a new one for the older cards (same features, just sort of recompiled). That didn't happen with the newer X (AFAIK it was only Xorg that caused problems?). That was my argument.

                              Anyway, once more, I don't want this to turn into some sort of blamefest; I'm not blaming you guys (at least not the devs) for anything.

                              Comment


                              • #45
                                Originally posted by fhuberts View Post
                                Yeah, I know you guys understand it. I actually met bridgman at FOSDEM 2008. You guys are doing a great job.

                                My remark was merely meant to point out that there are more nuances to the story he is telling. It seemed a bit too much like laying the blame on the kernel.
                                I certainly wasn't trying to "blame" the kernel, but I was trying to explain why we didn't feel that the approach we're using for Windows (quarterly updates, no support for new OS versions) would work as well for Linux.

                                The decision to constantly evolve APIs has significant benefits but it does have some costs as well, and the costs are probably felt the most in high end graphics, where the cost and complexity of an optimized driver stack makes it harder for open source drivers to replace proprietary ones in all scenarios.
                                Last edited by bridgman; 10-13-2009, 03:00 PM.

                                Comment
