Intel Just Released A Crazy Fast Acceleration Architecture


  • #41
    Originally posted by elanthis View Post
    On a slightly unrelated topic... when is it planned to just retire the hardware-specific DDX's and use a generic interface exposed via Mesa, e.g. Gallium?
    My impression is that there are two simple issues in the way:

    1. Memory management APIs differ from one hardware vendor to another (since the underlying hardware is quite different in that area), so at least part of the "common DDX code" is not likely to be common any time soon.

    2. The DDX code doesn't seem to represent a big maintenance overhead today and so it would be a lot more work to replace it than to keep the current DDX code going.

    If we reach a point where the DDX code needs to be substantially re-written anyway (new acceleration APIs inside X or radically different hardware) then it might make sense to rewrite part of the code using Gallium3D operations, but for now it seems to make more sense to leave the DDX alone and work on other parts of the stack instead.

    It's a bit like the transition to Gallium3D - if you were writing a driver from scratch then writing it against the Gallium3D interfaces is less work than writing a "classic" driver... but if you already *have* a classic driver that works then re-writing to Gallium3D is *more* work rather than less.

    One minor point -- the Gallium3D interface is not actually exposed by mesa - the code just happens to reside in the mesa tree because mesa is "the first and biggest customer". AFAIK the Gallium3D pipe and winsys drivers are built into whatever driver uses them, so there would be a copy of the drivers in the DDX, a copy in mesa, a copy in the video decode driver etc...
    Last edited by bridgman; 06 June 2011, 07:51 AM.

    Comment


    • #42
      Hang on, it just occurred to me... if this driver improvement has been made in X.Org / the X server, then would it still be relevant when the switch to Wayland is made?

      Comment


      • #43
        The speed improvement is nice, but I have no problem with 2D speeds on my Core i3 550, or even on my GMA950 Netbook. Is there any chance this could lead to noticeably faster 3D rendering?

        Comment


        • #44
          Originally posted by elanthis View Post
          On a slightly unrelated topic... when is it planned to just retire the hardware-specific DDX's and use a generic interface exposed via Mesa, e.g. Gallium?
          There's already an Xorg state tracker in gallium that in theory should work over any gallium driver. However, it hasn't been tested on hw drivers to any large extent. There are a few problems with switching to a generic ddx:

          - supporting accel on asics with no gallium driver
          - supporting non-KMS setups (users running nomodeset on Linux, and non-Linux OSes that don't support KMS)
          - dealing with hardware specific quirks (e.g., evergreen+ doesn't support interleaved depth/stencil buffers)

          None of them are insurmountable, but it's still more work than just adding support to the existing ddx.

          Comment


          • #45
            I'd just like to point out, since everybody seems to have decided that AMD is the hot target for the night, that during r300g's initial bringup, this series of optimizations was done *in advance*, before I had even finished the driver. The pros (Dave and Alex) informed me that 2D/3D switches are too slow, and that I should not bother turning on the 2D hardware, so I didn't. We made a similar decision about hardware fog units.

            The thing is that it's not always obvious whether or not using every last feature of the hardware is going to produce good results. Sometimes there are architectural problems which get in the way, sometimes there are library warts, and sometimes the hardware's just not very fast at a certain task.

            And to the people complaining about r600g, it might be good to keep in mind that the primary goal of driver authors right now is making sure that the average user's experience is solid. The desktop needs to render correctly and speedily, browsers need to work, video needs to work, and games really are at the back of the list. If a game gets 60fps on a mid-end box with a mid-end card, then that's more than enough for "now", and it can always be optimized "later." (Quotes are to emphasize relative timeframes.)

            Comment


            • #46
              Originally posted by MostAwesomeDude View Post
              And to the people complaining about r600g, it might be good to keep in mind that the primary goal of driver authors right now is making sure that the average user's experience is solid. The desktop needs to render correctly and speedily, browsers need to work, video needs to work, and games really are at the back of the list. If a game gets 60fps on a mid-end box with a mid-end card, then that's more than enough for "now", and it can always be optimized "later." (Quotes are to emphasize relative timeframes.)
              I would love to get 60 fps. Even 30 fps would be a dream come true. For me, I can't seem to get more than 12-15 fps in any 3D application at all. Even glxgears gives me 30 fps, and I can't think of anything simpler except rendering a single triangle. I have swap buffers wait set to off. And I don't exactly have a mid-end box; until the HD 6990 came around, the HD 5970 was the fastest single-card solution on the planet. But right now it's performing about as fast as an R200 card (maybe slower).

              On both Fedora 15 and Ubuntu 11.04, I see visible stutter when asking Gnome3 shell to "present" about 4 windows (Google Chrome, Quassel, Nautilus and Pidgin). On a piddly i965 chip, I get absolutely fluid FPS throughout the entire gnome-shell experience, even under heavy system load (compiling the kernel). There's a very noticeable difference in performance between the once-fastest GPU on the planet and a mediocre IGP, even with this "common desktop task". It can't be the hardware; on my HD5970, I can whizz around Crysis 2 rendering at close to 60 FPS windowed on Windows 7's Aero compositing.

              Is this the norm for r600g, or is there something abnormal about my configuration that's keeping me from topping 30 fps for even the most basic OpenGL applications? I mean, if it's a new regression or weird bug or motherboard mojo or something else, I'm all ears for trying to track it down. But my understanding from the r600g benchmarks on Phoronix is that I'm pretty much getting all that r600g is willing to put out right now -- between 8 and 20 fps for most 3d apps.
              Last edited by allquixotic; 06 June 2011, 10:08 PM.

              Comment


              • #47
                I'm a 4850X2 user and I can even play Lineage 2 through Wine without lag (i.e. at least 40-ish fps) using r600g, so yes, it's you.

                Btw, never use the vanilla stack from Ubuntu; it's bad. Build your own kernel, preferably with a 1000 Hz timer and preemption enabled, then do a git pull of mesa, drm, and the ddx and compile them with -O3 (it helps a bit).

                After that, check man radeon and activate color tiling and SwapbuffersWait=off; then install driconf and disable vblank syncing (google it), along with some other params that may help.

                Btw, always try the next kernel, in this case 3.0-rc2; the stable ones are sometimes problematic.
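
                For reference, the tuning advice above boils down to a couple of config snippets. A sketch of what they look like (option names as listed in man radeon, and driconf's ~/.drirc format from that era; exact availability depends on your driver version):

```
Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    Option     "ColorTiling"     "on"   # tiled color buffers, helps rendering throughput
    Option     "SwapbuffersWait" "off"  # don't stall buffer swaps on vblank
EndSection
```

                and in ~/.drirc (roughly what driconf writes when you disable vblank syncing):

```xml
<driconf>
    <device screen="0" driver="dri2">
        <application name="Default">
            <!-- 0 = never synchronize buffer swaps to vblank -->
            <option name="vblank_mode" value="0" />
        </application>
    </device>
</driconf>
```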

                Comment


                • #48
                  Originally posted by bridgman View Post
                  One minor point -- the Gallium3D interface is not actually exposed by mesa - the code just happens to reside in the mesa tree because mesa is "the first and biggest customer".
                  The state trackers would be public APIs though, for which you'd have some kind of "generic basic 2D rendering, buffer management, and mode setting" state tracker(s) that X could use; possibly just EGL+WFC+VG or something like that.

                  Makes sense that the DDXes are just around because they're already there, though. Certainly bigger things to worry about if DDX maintenance isn't a burden.

                  Comment


                  • #49
                    p.s. bridgman, your private inbox is full, and I had some things I had wanted to talk to you about regarding any future plans on hiring interns, as I know a number of exceptionally talented soon-to-be graduates and some of them might be interested in doing graphics driver development work during their senior years, if you guys will have any such positions available. Maybe some future career folks too -- while we're billed as a game programming school, a notable number of our grads go on to work at places like Boeing, Intel, Microsoft R&D, etc.

                    Comment


                    • #50
                      Fast doesn't even begin to describe this.

                      I went from 2 fps to 40 fps on the fishtank test.



                      I'm using a 5 year-old i945gm (GMA950).

                      Comment
