
R600 Gallium3D Patch Boosts Unigine By ~30%


  • #16
    Originally posted by blacknova View Post
    And the main reason for that mess is that the Linux kernel doesn't provide a stable kernel ABI and API. If the kernel interfaces were stable, you could develop drivers with just the kernel headers available.

    In the past this was not a problem, and I don't see the reason why this is now a problem.

    Comment


    • #17
      Originally posted by Nille View Post
      In the past this was not a problem, and I don't see the reason why this is now a problem.
      Actually it was and still is, in the sense that the kernel devs who change the interfaces and cause "breakage" are also supposed to fix the "broken" drivers. Of course it wouldn't matter as much if the kernel had a different model with fewer, but bigger, releases.

      ***************************************************

      I have to agree with the posts above, and I find that being able to update drivers separately from the kernel (not having to update the whole of it for one driver) would be cool. Besides, being out of tree and being closed-source/proprietary have nothing to do with each other.
      (AFAIK there are out-of-tree open/free drivers as well as in-tree closed drivers anyway...)

      Comment


      • #18
        Originally posted by Rigaldo View Post
        I have to agree with the posts above, and I find that being able to update drivers separately from the kernel (not having to update the whole of it for one driver) would be cool. Besides, being out of tree and being closed-source/proprietary have nothing to do with each other.
        (AFAIK there are out-of-tree open/free drivers as well as in-tree closed drivers anyway...)
        It would definitely be nice, yes. But you are just shifting more of the burden onto the developers to get that to work and maintain it all, which seems like a poor decision to make when the existing developers are already so shorthanded and still trying to catch up with current hardware features.

        Comment


        • #19
          Originally posted by smitty3268 View Post
          It would definitely be nice, yes. But you are just shifting more of the burden onto the developers to get that to work and maintain it all, which seems like a poor decision to make when the existing developers are already so shorthanded and still trying to catch up with current hardware features.
          It shouldn't be much of an issue if the in-kernel interfaces were backwards compatible. But this won't happen soon, I guess... As needs arise (and hopefully more resources too), things should improve in these areas too. But sorry, I'm getting a bit off topic now.

          Comment


          • #20
            Originally posted by Rigaldo View Post
            It shouldn't be much of an issue if the in-kernel interfaces were backwards compatible. But this won't happen soon, I guess... As needs arise (and hopefully more resources too), things should improve in these areas too. But sorry, I'm getting a bit off topic now.
            Things are allowed to break internally because the people who are affected also have the skills to fix anything that breaks. Things are also cut from the kernel if they aren't maintained. It's not like that externally (userspace-facing), because not every userspace dev is a kernel dev and projects come and go. They want to make sure that the kernel isn't the reason why a userspace application stops working some time down the line.

            As far as "fewer, bigger releases"... No. Just no. The kernel moves quickly; deal with it. Development happens very fast, and I'd rather not have to constantly recompile from git master to get a specific feature when git isn't guaranteed to be stable. Much better to wait the roughly two months for a new release (personally I'd prefer monthly releases, but that's just me) and have relatively assured stability thanks to the RCs.

            Comment


            • #21
              Originally posted by Ericg View Post
              Things are allowed to break internally because the people who are affected also have the skills to fix anything that breaks.
              If, instead of fixing kernel breakage, people with those skills spent the same time working on the actual drivers, the drivers in question would be much better.

              Comment


              • #22
                Originally posted by blacknova View Post
                If, instead of fixing kernel breakage, people with those skills spent the same time working on the actual drivers, the drivers in question would be much better.
                You don't get the driver right the first time. Look at the major nouveau rewrite that happened two releases ago. To write good-quality drivers, sometimes you have to stop and go "...we fucked this up," scrap some work, and start over. Which breaks things in the process.

                Comment


                • #23
                  Originally posted by Ericg View Post
                  You don't get the driver right the first time. Look at the major nouveau rewrite that happened two releases ago. To write good-quality drivers, sometimes you have to stop and go "...we fucked this up," scrap some work, and start over. Which breaks things in the process.
                  And how does a complete driver rewrite correlate with stable kernel interfaces? A driver really is an independent piece of software that relies heavily on some services provided by the kernel, e.g. PCI bus support, memory management, etc.
                  Are you saying that a driver rewrite would become any simpler just because the kernel interfaces have changed and developers now need to figure out a new way to do something that was already working? Yeah, right.

                  Comment


                  • #24
                    Originally posted by blacknova View Post
                    And how does a complete driver rewrite correlate with stable kernel interfaces? A driver really is an independent piece of software that relies heavily on some services provided by the kernel, e.g. PCI bus support, memory management, etc.
                    Are you saying that a driver rewrite would become any simpler just because the kernel interfaces have changed and developers now need to figure out a new way to do something that was already working? Yeah, right.
                    When a driver changes, sometimes its interfaces change. It happens. And I'm not talking about the simplicity of anything. You said they should stop changing the interfaces and just worry about writing a better driver. I just pointed out that sometimes, to write a better driver, you have to change some interfaces.

                    Comment


                    • #25
                      Let's have a five-year-old argument, the result of which was the inclusion of GPU drivers into the kernel.

                      We've been here before.

                      F

                      Comment


                      • #26
                        Originally posted by blacknova View Post
                        And how does a complete driver rewrite correlate with stable kernel interfaces? A driver really is an independent piece of software that relies heavily on some services provided by the kernel, e.g. PCI bus support, memory management, etc.
                        Are you saying that a driver rewrite would become any simpler just because the kernel interfaces have changed and developers now need to figure out a new way to do something that was already working? Yeah, right.
                        One thing you "stable kernel API" people don't get is this: the API does not just change arbitrarily for the hell of it. If/when it changes, it changes because the change makes things work better or gives a better design. It's not like they're going "Mwahahahaha, what can we do to break the drivers this week?", and the only way a stable kernel driver API is even semi-sane is if you're operating on a microkernel, because then there shouldn't be enough in-kernel change for it to matter once it reaches its "fully developed" state, since everything is out in userspace. Even Microsoft only stays somewhat stable: trying to run an XP driver (i.e. a pre-Vista driver) on Vista/7 is hit or miss due to kernel changes, and WDDM was a driver API break and redesign, and was part of what screwed up the Vista launch.

                        Furthermore, just look at X.org for why stable APIs don't really belong in monolithic designs: part of the entire reason it's so crappy is that it has a stable API they can't touch, and thus they've been forced to write extensions to get around the problem (i.e. they have to work around the issues of the X.org protocol). Does that really sound like something that should be happening to the kernel as well? Even Qt, which ensures ABI and API compatibility within a major version and is modularly designed, makes breaks that cause the creation of a new major version. Which means even something like Qt doesn't really have a stable API but a versioned API, which is the only thing you might have an argument for, and that falls through due to the monolithic design of the kernel.
                        Last edited by Luke_Wolf; 02-18-2013, 06:42 PM.

                        Comment


                        • #27
                          Actually X's way of deprecating interfaces is even nicer... accidentally, quietly break an interface/API. Someone comes along years later and notices that it's been broken for years and no one noticed. The assumption is "Well, no one's complained, so I guess no one uses that anymore." Then they vocally and deliberately remove that interface/API and that chunk of code is gone, lol.

                          Comment


                          • #28
                            Originally posted by Rigaldo View Post
                            It shouldn't be much of an issue if the in-kernel interfaces were backwards compatible. But this won't happen soon, I guess... As needs arise (and hopefully more resources too), things should improve in these areas too. But sorry, I'm getting a bit off topic now.
                            Making the in-kernel interfaces backwards compatible would itself add a lot of extra work for the developers, because they'd have to keep building compatibility interfaces on top of every change they made. Check out Win32 if you think that wouldn't add up.
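                            Just to make that concrete, here is a toy sketch (in Python rather than kernel C, with made-up names) of what "building compatibility interfaces on top of every change" means: every old entry point has to survive as a wrapper around the current one, and every one of those wrappers is extra code somebody has to keep testing and maintaining.

# Toy illustration only (Python instead of kernel C, and all names are invented):
# a "stable" in-kernel interface means every old entry point lives on as a
# wrapper around the current one, piling up over time.

def alloc_buffer_v3(size, flags, domain):
    """Current 'real' implementation of the interface."""
    print("allocating %d bytes, flags=%#x, domain=%s" % (size, flags, domain))
    return object()  # stand-in for a buffer handle

def alloc_buffer_v2(size, flags):
    # v2 callers predate the 'domain' argument, so this shim must stay forever.
    return alloc_buffer_v3(size, flags, domain="vram")

def alloc_buffer(size):
    # v1 callers predate 'flags' as well -- yet another permanent shim.
    return alloc_buffer_v2(size, flags=0)

if __name__ == "__main__":
    # Three generations of callers, all of which have to keep working,
    # and all of which someone has to keep testing.
    alloc_buffer(4096)
    alloc_buffer_v2(4096, flags=0x1)
    alloc_buffer_v3(4096, flags=0x1, domain="gtt")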

                            But you keep on #TiltingAtWindmills

                            Comment


                            • #29
                              Originally posted by agd5f View Post
                              What automated tools would you suggest? There weren't any piglit tests which triggered the issue. Unfortunately, automated testing is often a challenge for GPU development. For best results you generally need physical access to the hardware.

                              1) GitHub or another git repo for code upload, with hooks for compilation with various configurations. (Did you know that Mesa 9.0 won't compile with the commands from radeonBuildHowTo? I know. I was told that on IRC after 4 hours of trying.)

                              2) PTS + some defined collection of apps (open source, so you can be sure the "testing" code is in fact bug-free) for performance testing.

                              3) Piglit for any regressions in OpenGL that can be detected that way.

                              4) PTS + games + automated screenshots + an image diff, with warnings when the difference between the reference image (taken on the Catalyst drivers...) and the currently obtained one is too large (rough sketch below).

                              5) Whatever you can get for testing delays between consecutive frames.

                              6) Restoration of a "clean" configuration after multiple failed boots.


                              Everything above compared to a reference (either the last stable Mesa + kernel + server, or Catalyst, whichever works better for you).

                              Oh, and a server that would manage automatic restarts of the hardware, queue test requests from various developers, replicate test runs on different hardware configurations, and handle reporting.
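                              For point 4, even something as simple as the following sketch would already catch the worst rendering regressions. It is only an illustration: the command-line arguments, the threshold and the use of Pillow are my own assumptions, and PTS (or a wrapper script around it) would supply the real inputs.

# Rough sketch of point 4: compare a screenshot rendered by the driver under
# test against a reference screenshot and warn when they differ too much.
import sys
from PIL import Image, ImageChops, ImageStat

THRESHOLD = 10.0  # arbitrary mean per-channel difference on a 0-255 scale

def compare(reference_path, current_path):
    ref = Image.open(reference_path).convert("RGB")
    cur = Image.open(current_path).convert("RGB")
    if ref.size != cur.size:
        return float("inf")                  # resolution mismatch fails outright
    diff = ImageChops.difference(ref, cur)   # per-pixel absolute difference
    means = ImageStat.Stat(diff).mean        # average difference per channel
    return sum(means) / len(means)

if __name__ == "__main__":
    score = compare(sys.argv[1], sys.argv[2])
    print("mean difference: %.2f" % score)
    if score > THRESHOLD:
        print("WARNING: rendering differs too much from the reference image")
        sys.exit(1)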


                              Better?
                              Last edited by przemoli; 02-19-2013, 05:51 AM.

                              Comment


                              • #30
                                Originally posted by Ericg View Post
                                Actually X's way of deprecating interfaces is even nicer... accidentally, quietly break an interface/API. Someone comes along years later and notices that it's been broken for years and no one noticed. The assumption is "Well, no one's complained, so I guess no one uses that anymore." Then they vocally and deliberately remove that interface/API and that chunk of code is gone, lol.


                                Unfortunately that does not work for the core X protocol.

                                So we still have the painting of cool squares...

                                Comment
