Intel Just Released A Crazy Fast Acceleration Architecture


  • #31
    Originally posted by bridgman View Post
    - implement support for all the performance-related features (tiling, HyperZ, page flipping, etc.) and bug-fix to the point where they can be enabled by default, so the hardware will be running at full speed at least at a micro-level.
    Do you have an estimated time for these features to be enabled by default?



    • #32
      Bridgman: This is off topic, and it's certainly a bit naive of me to ask you this.
      I'll give it a try nevertheless.

      First off, I'm really happy about the decision to release documentation for major parts of ATI/AMD GPU chipsets, and I really liked how things evolved. Then again, while one might not care much about the motivation behind all this now that "it" has happened, I've asked myself several times whether this "going open source" might be solely related to the new lines of CPUs with integrated GPU parts (the out-of-the-box experience) and would not have happened otherwise.



      • #33
        Originally posted by bridgman View Post
        Stupid 30 minute edit limit
        Hahahahahahahahahaha!!!

        Sorry for the spam, I just had to!



        • #34
          Originally posted by tball View Post
          Do you have an estimated time for these features to be enabled by default?
          These features are all already enabled, if I remember my Phoronix stories correctly.



          • #35
            They are not, at least not generally. On my RV710, tiling is NOT enabled by default, and HyperZ is not even implemented. http://www.x.org/wiki/RadeonFeature



            • #36
              Originally posted by bbordwell View Post
              These features are all already enabled, if I remember my Phoronix stories correctly.
              AFAIK they are enabled for r3xx-5xx but not for r6xx-NI, although the initial code is written and has gone through some testing, particularly on 6xx/7xx.
              Last edited by bridgman; 06-05-2011, 07:26 PM.



              • #37
                Originally posted by entropy View Post
                Bridgman: This is off topic, and it's certainly a bit naive of me to ask you this.
                I'll give it a try nevertheless.

                First off, I'm really happy about the decision to release documentation for major parts of ATI/AMD GPU chipsets, and I really liked how things evolved. Then again, while one might not care much about the motivation behind all this now that "it" has happened, I've asked myself several times whether this "going open source" might be solely related to the new lines of CPUs with integrated GPU parts (the out-of-the-box experience) and would not have happened otherwise.
                It's off topic even for an AMD thread, but what the heck...

                Definitely not "solely related" but support for Fusion parts was a consideration even back in 2007. Maybe 30-40% of the motivation, something like that...



                • #38
                  Does anyone know if any of these enhancements can be ported to the open source AMD drivers or Gallium3D in general if Intel ever decides to join the rest of us?



                  • #39
                    Originally posted by Prescience500 View Post
                    Does anyone know if any of these enhancements can be ported to the open source AMD drivers or Gallium3D in general if Intel ever decides to join the rest of us?
                    As was previously noted, the main change in this patch set is using the 3D engine for everything "2D"-related (let's call it X rendering related, since the X RENDER stuff never really worked on traditional 2D engines) rather than using a mix of the 2D and 3D engines and dealing with the latency involved in synchronizing between them. Since R6xx, the open source drivers for AMD hardware already do this, as we have no 2D engine any more.

                    This particular patch isn't directly related to the 3D drivers, but the same synchronization issues are still relevant there. None of the radeon 3D drivers (neither Gallium nor classic) for hardware that has a 2D engine (r1xx-r5xx) use the 2D engine, so there are no synchronization issues in that respect.

                    This brings up a good point in general. Hardware often has features that it doesn't always make sense to use, as we see in the case of this patch. On the surface, having multiple asynchronous hardware engines may seem like a useful feature, but the overhead of synchronization that comes with sharing buffers between the engines is often not worth the extra functionality. That's not to say multiple engines don't have their uses, but just because you can use them doesn't always mean you should.
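
                    To make the synchronization cost concrete, here is a minimal, purely illustrative C sketch. All the helpers (queue_2d_fill, flush_and_wait, and so on) are hypothetical stand-ins for a driver's command-submission and fencing code, not real radeon or Intel driver functions; the point is only the ordering: every hand-off of a shared buffer between engines costs a flush and a stall, while a single engine just queues the work back to back.

                    ```c
                    /* Illustrative only: hypothetical engine/fence helpers standing in
                     * for a driver's real command-submission code. */
                    #include <stdio.h>

                    struct buffer { const char *name; };

                    /* Pretend submission/fencing primitives; a real driver would build
                     * command packets and wait on hardware fences here. */
                    static void queue_2d_fill(struct buffer *b)      { printf("2D engine: fill %s\n", b->name); }
                    static void queue_3d_fill(struct buffer *b)      { printf("3D engine: fill %s (drawn as a quad)\n", b->name); }
                    static void queue_3d_composite(struct buffer *b) { printf("3D engine: composite into %s\n", b->name); }
                    static void flush_and_wait(const char *engine)   { printf("  flush %s + wait for fence (stall)\n", engine); }

                    /* Mixed-engine path: the shared buffer is handed back and forth, and
                     * each hand-off costs a flush plus a stall until the other engine is idle. */
                    static void render_mixed(struct buffer *dst)
                    {
                        queue_2d_fill(dst);
                        flush_and_wait("2D");     /* 3D engine may not touch dst until the 2D fill lands */
                        queue_3d_composite(dst);
                        flush_and_wait("3D");     /* and the next 2D op has to wait for the 3D engine    */
                    }

                    /* Single-engine path (what this patch set and the r6xx+ radeon code do):
                     * "2D" operations are expressed on the 3D engine, so work queues up back
                     * to back in one command stream with no inter-engine fences. */
                    static void render_single(struct buffer *dst)
                    {
                        queue_3d_fill(dst);
                        queue_3d_composite(dst);  /* ordering is guaranteed by the single ring */
                    }

                    int main(void)
                    {
                        struct buffer dst = { "front buffer" };
                        printf("-- mixed 2D/3D engines --\n");
                        render_mixed(&dst);
                        printf("-- 3D engine only --\n");
                        render_single(&dst);
                        return 0;
                    }
                    ```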



                    • #40
                      On a slightly unrelated topic... when is it planned to just retire the hardware-specific DDXs and use a generic interface exposed via Mesa, e.g. Gallium?

                      DDXs are the one part of the Linux graphics stack I've never even slightly looked into. I'm assuming they're doing some direct hardware programming through the DRI2 interfaces rather than passing through the Mesa/Gallium code; that is, the DDX is issuing commands for "draw opaque rectangle here" to the DRI2 interfaces rather than going through some Gallium interface to draw primitives (wrapping OpenGL is probably too much overhead to justify, but surely a 2D acceleration state tracker in Gallium would not be). I'm aware that this is basically what a nested X.org server running over Wayland or the like would do, but why isn't X.org doing it that way internally already? Legacy support or something? (A rough sketch of what such a DDX fill hook looks like follows after this post.)

                      (This is a more AMD-specific topic, as I know Intel doesn't use Gallium yet.)
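
                      For reference, this is roughly where a hardware-specific DDX sits today: X's EXA layer hands the driver "prepare / draw rectangle / done" callbacks, and the driver turns them into hardware packets. The sketch below is simplified and self-contained; the hook names follow EXA's PrepareSolid/Solid/DoneSolid, but the types and the packet words are made up for illustration and are not any real driver's command format.

                      ```c
                      /* Simplified stand-ins for the X server types; in a real DDX these
                       * come from xorg-server's exa.h (PixmapPtr, Pixel, the ExaDriverRec
                       * hooks, ...). The packet encoding below is entirely invented. */
                      #include <stdint.h>
                      #include <stdio.h>
                      #include <stdbool.h>

                      typedef struct { uint32_t gpu_offset; int pitch; } Pixmap;
                      typedef uint32_t Pixel;

                      /* Hypothetical command buffer for this sketch. */
                      static uint32_t cs[64];
                      static int cs_len;
                      static void emit(uint32_t dw) { cs[cs_len++] = dw; }

                      static Pixel fill_color;

                      /* EXA-style hooks: PrepareSolid validates the request and sets up
                       * state, Solid is called once per rectangle, DoneSolid submits. */
                      static bool PrepareSolid(Pixmap *dst, int alu, Pixel planemask, Pixel fg)
                      {
                          if (alu != 0x3 /* GXcopy */ || planemask != ~0u)
                              return false;          /* fall back to software for odd cases */
                          fill_color = fg;
                          emit(0xC0DE0001);          /* hypothetical "set fill state" packet */
                          emit(dst->gpu_offset);
                          emit((uint32_t)dst->pitch);
                          emit(fill_color);
                          return true;
                      }

                      static void Solid(Pixmap *dst, int x1, int y1, int x2, int y2)
                      {
                          (void)dst;
                          emit(0xC0DE0002);          /* hypothetical "fill rectangle" packet */
                          emit((uint32_t)(x1 | (y1 << 16)));
                          emit((uint32_t)(x2 | (y2 << 16)));
                      }

                      static void DoneSolid(Pixmap *dst)
                      {
                          (void)dst;
                          printf("submitting %d command dwords to the kernel\n", cs_len);
                          cs_len = 0;                /* a real driver would ioctl the CS here */
                      }

                      int main(void)
                      {
                          Pixmap front = { 0x100000, 1024 * 4 };
                          if (PrepareSolid(&front, 0x3, ~0u, 0x00ff00ff)) {
                              Solid(&front, 10, 10, 200, 120);  /* "draw opaque rectangle here" */
                              DoneSolid(&front);
                          }
                          return 0;
                      }
                      ```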



                      • #41
                        Originally posted by elanthis View Post
                        On a slightly unrelated topic... when is it planned to just retire the hardware-specific DDXs and use a generic interface exposed via Mesa, e.g. Gallium?
                        My impression is that there are two simple issues in the way:

                        1. Memory management APIs differ from one hardware vendor to another (since the underlying hardware is quite different in that area), so at least part of the "common DDX code" is not likely to be common any time soon.

                        2. The DDX code doesn't seem to represent a big maintenance overhead today, so it would be a lot more work to replace it than to keep the current DDX code going.

                        If we reach a point where the DDX code needs to be substantially re-written anyway (new acceleration APIs inside X, or radically different hardware), then it might make sense to rewrite part of the code using Gallium3D operations, but for now it seems to make more sense to leave the DDX alone and work on other parts of the stack instead.

                        It's a bit like the transition to Gallium3D - if you were writing a driver from scratch, then writing it against the Gallium3D interfaces is less work than writing a "classic" driver... but if you already *have* a classic driver that works, then re-writing it to Gallium3D is *more* work rather than less.

                        One minor point -- the Gallium3D interface is not actually exposed by Mesa; the code just happens to reside in the Mesa tree because Mesa is "the first and biggest customer". AFAIK the Gallium3D pipe and winsys drivers are built into whatever driver uses them, so there would be a copy of the drivers in the DDX, a copy in Mesa, a copy in the video decode driver, etc.
                        Last edited by bridgman; 06-06-2011, 07:51 AM.



                        • #42
                          Hang on, it just occurred to me... if this driver improvement has been made in the X.Org X server, would it still be relevant when the switch to Wayland is made?



                          • #43
                            The speed improvement is nice, but I have no problem with 2D speeds on my Core i3 550, or even on my GMA950 netbook. Is there any chance this could lead to noticeably faster 3D rendering?



                            • #44
                              Originally posted by elanthis View Post
                              On a slightly unrelated topic... when is it planned to just retire the hardware-specific DDXs and use a generic interface exposed via Mesa, e.g. Gallium?
                              There's already an Xorg state tracker in Gallium that in theory should work over any Gallium driver. However, it hasn't been tested on hardware drivers to any large extent. There are a few problems with switching to a generic DDX (a rough sketch of the generic idea follows after this list):

                              - supporting accel on asics with no gallium driver
                              - supporting non-KMS (users using nomodeset on Linux and non-Linux OSes that don't support kms)
                              - dealing with hardware specific quirks (e.g., evergreen+ doesn't support interleaved depth/stencil buffers)

                              None of them are insurmountable, but it's still more work than just adding support to the existing DDX.
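
                              To illustrate what that generic path looks like in spirit: the state tracker speaks to one driver-independent context, and each Gallium driver fills in the callbacks once, instead of every ASIC family carrying its own hand-written DDX. The interface below (struct pipe_like, clear_rect, the "noop" backend) is a hypothetical simplification for illustration, not the actual st/xorg or pipe_context API.

                              ```c
                              /* Hypothetical, simplified "pipe" interface standing in for
                               * Gallium; the real interfaces are richer and vary by version. */
                              #include <stdio.h>
                              #include <stdint.h>

                              struct pipe_like {
                                  const char *driver_name;
                                  /* one driver callback per operation the tracker needs */
                                  void (*clear_rect)(struct pipe_like *p, uint32_t argb,
                                                     int x, int y, int w, int h);
                              };

                              /* Generic "state tracker" code: written once, works on any
                               * driver that implements the callbacks. */
                              static void xorg_st_solid_fill(struct pipe_like *p, uint32_t argb,
                                                             int x, int y, int w, int h)
                              {
                                  p->clear_rect(p, argb, x, y, w, h);
                              }

                              /* Toy backend; a real driver would emit hardware packets. */
                              static void noop_clear_rect(struct pipe_like *p, uint32_t argb,
                                                          int x, int y, int w, int h)
                              {
                                  printf("[%s] fill %dx%d at (%d,%d) with 0x%08x\n",
                                         p->driver_name, w, h, x, y, argb);
                              }

                              int main(void)
                              {
                                  struct pipe_like drv = { "noop", noop_clear_rect };
                                  xorg_st_solid_fill(&drv, 0xff00ff00, 10, 10, 200, 120);
                                  return 0;
                              }
                              ```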



                              • #45
                                I'd just like to point out, since everybody seems to have decided that AMD is the hot target for the night, that during r300g's initial bring-up this series of optimizations was done *in advance*, before I had even finished the driver. The pros (Dave and Alex) informed me that 2D/3D switches are too slow and that I should not bother turning on the 2D hardware, so I didn't. We made a similar decision about hardware fog units.

                                The thing is that it's not always obvious whether or not using every last feature of the hardware is going to produce good results. Sometimes there are architectural problems which get in the way, sometimes there are library warts, and sometimes the hardware's just not very fast at a certain task.

                                And to the people complaining about r600g: it might be good to keep in mind that the primary goal of the driver authors right now is making sure that the average user's experience is solid. The desktop needs to render correctly and speedily, browsers need to work, video needs to work, and games really are at the back of the list. If a game gets 60 fps on a mid-range box with a mid-range card, then that's more than enough for "now", and it can always be optimized "later". (The quotes are there to emphasize relative timeframes.)

