
AMD's R300 Gallium3D Driver Is Looking Good For 2011


  • #71
    It's no more odd than any of the other tasks that will get done eventually as time permits. The developers are working on things based on (a) what delivers the most benefit to users and (b) the inherent dependencies that go along with any rearchitecture work (first you pillage, *then* you burn).

    Popper, are you suggesting that multithreading should have been implemented before other tasks? If so, which tasks do you feel should have been delayed in order to free up time for multithreading?



    • #72
      Originally posted by bridgman View Post
      It's no more odd than any of the other tasks that will get done eventually as time permits. The developers are working on things based on (a) what delivers the most benefit to users and (b) the inherent dependencies that go along with any rearchitecture work (first you pillage, *then* you burn).

      Popper, are you suggesting that multithreading should have been implemented before other tasks? If so, which tasks do you feel should have been delayed in order to free up time for multithreading?
      As it happens, I agree with what you say: basically, make a development plan and follow it; it's the only way to progress in a timely manner.

      But, and there's always a but: if your plan actually includes threading at some point, then allowing for that at the core of your plan is probably a good thing to consider, so you don't have to rip out lots of new code later that just doesn't work well with threading.

      If there is a suggestion here, that is it, nothing more, nothing less.

      Of course, given that we are talking open source here and not closed, what harm is there in picking an existing, well-optimized external threading library, using its supplied API, and trying to build your plan with it alongside your other tests as you progress? Slightly more work, sure, but mattst88 and the others are coding for fun and to learn/try new things, not for pay, I assume!



      • #73
        Originally posted by bridgman View Post
        Sure, but that's not what you said. First you were attacking the 300G project and developers, then you were saying that:
        - we were holding back "secret sauce" that would presumably let 3D run faster
        In your words there is some spec info left unreleased, and you said something about 5%, which means the open-source r600 driver could reach up to 95% of the speed of the closed driver.


        "OK, now I'm confused too. You're talking about 300g, which supports only the older GPUs that don't have OpenCL-capable hardware or video decode hardware (other than a legacy MPEG-2 IDCT block). "


        For the other people (not bridgman): the problem with OpenCL is that OpenCL is a 1:1 copy of CUDA, and the reference card for CUDA is the GeForce 8800. That card has two NVIDIA-specific caches; AMD added one of them in the HD 4000 cards and the other in the HD 5000 cards...

        @bridgman: but on the video decode side you imply there is only MPEG-2, yet the X1950 has shader-based H.264 decode...



        • #74
          Originally posted by popper View Post
          As it happens, I agree with what you say: basically, make a development plan and follow it; it's the only way to progress in a timely manner.
          Yep. I don't *think* there is anything being implemented today that will have problems with simple multithreading in the future, but it's also a pretty safe bet that there will be a pile of "oh crap" issues anyways.

          I imagine that multithreading would be implemented with whatever the standard OS threading mechanism is at the time, and AFAIK that is NPTL today.

          Modern games are starting to make more use of multithreading so there's not a big win from using a whole lot of driver threads -- the biggest advantage seems to come from using a single worker thread and letting some of the driver work run in parallel with a single-threaded game.

          Anyways, multithreading does get discussed from time to time (hopefully enough to avoid anything that would preclude multithreading in the future) but right now the devs are focusing on things that can give bigger and more immediate gains.



          • #75
            Originally posted by Qaridarium View Post
            In your words there is some spec info left unreleased, and you said something about 5%, which means the open-source r600 driver could reach up to 95% of the speed of the closed driver.
            Sure, but I also said that the last few bits of hardware info were not things where hardware support could easily be added to the open driver... if we ever get to the point where the open driver has a large team of developers doing application-specific optimizations *then* the additional HW info could make a difference. Not 100% sure, but I believe the 5% difference quote was from the 5xx days, and we have actually released some of that info since then (for both 5xx and 6xx+). I believe I was thinking of CMASK info and a few other bits when I mentioned the 5% number.

            I'm not sure if the CMASK info turned out to be useful... there was some hope of using it for faster clears but don't remember if that actually worked out.

            Originally posted by Qaridarium View Post
            @bridgman: but on the video decode side you imply there is only MPEG-2, yet the X1950 has shader-based H.264 decode...
            I was talking about video decoding *hardware*, i.e. not counting general-purpose shaders. On 5xx and earlier that is MPEG-2 only (other than the rv550).



            • #76
              Originally posted by popper View Post
              "The open source drivers are not multi-threaded AFAIK" -- which is very odd today if that's really the case, as NPTL (Native POSIX Thread Library) has been in the kernel since 2.6 started: http://en.wikipedia.org/wiki/Native_...Thread_Library
              "NPTL has been part of Red Hat Enterprise Linux since version 3, and in the Linux kernel since version 2.6. It is now a fully integrated part of the GNU C Library.
              There exists a tracing tool for NPTL, called POSIX Thread Trace Tool (PTT). And an Open POSIX Test Suite (OPTS) was written for testing the NPTL library against the POSIX standard." Not to mention there are several other optimized third-party threading libraries around too, suitable for inclusion in any such driver.
              Using more than one core in the current graphics stack is a non-trivial task, and using more than two cores efficiently is nearly impossible. There are a lot of tasks which must be performed in order, e.g.:

              task A -> task B -> task C -> task D -> task E -> task F ...

              You may split this into two threads, like this:
              1st thread: task A -> task B -> task C -> write the output of C to a work queue.
              2nd thread: read the work queue -> task D -> task E -> task F

              Now where would you do the split in the current stack? Before st/mesa? Or after st/mesa and before the driver? Or between the driver and libdrm? Somewhere else? Note the split itself costs a lot of CPU cycles.
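
              As a rough illustration only (plain pthreads/NPTL; task_a() through task_f() and the fixed-size queue are made-up stand-ins, not anything from Mesa), this is what that two-thread split looks like. Note that every hand-off between the threads goes through a mutex and a condition variable, which is where the cost of the split comes from:

              #include <pthread.h>
              #include <stdio.h>

              #define QUEUE_SIZE 64
              #define N_ITEMS    256

              static int queue_buf[QUEUE_SIZE];
              static int q_head, q_tail, q_count;
              static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
              static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

              /* Made-up stand-ins for the in-order pipeline stages. */
              static int task_a(int x) { return x + 1; }
              static int task_b(int x) { return x * 2; }
              static int task_c(int x) { return x - 3; }
              static int task_d(int x) { return x ^ 5; }
              static int task_e(int x) { return x + 7; }
              static int task_f(int x) { return x & 0xff; }

              /* 1st thread: task A -> task B -> task C -> write the output of C to the queue. */
              static void *first_half(void *arg)
              {
                  (void)arg;
                  for (int i = 0; i < N_ITEMS; i++) {
                      int v = task_c(task_b(task_a(i)));

                      pthread_mutex_lock(&q_lock);
                      while (q_count == QUEUE_SIZE)   /* queue full: this thread stalls */
                          pthread_cond_wait(&q_cond, &q_lock);
                      queue_buf[q_tail] = v;
                      q_tail = (q_tail + 1) % QUEUE_SIZE;
                      q_count++;
                      pthread_cond_signal(&q_cond);
                      pthread_mutex_unlock(&q_lock);
                  }
                  return NULL;
              }

              /* 2nd thread: read the queue -> task D -> task E -> task F. */
              static void *second_half(void *arg)
              {
                  (void)arg;
                  for (int i = 0; i < N_ITEMS; i++) {
                      pthread_mutex_lock(&q_lock);
                      while (q_count == 0)            /* queue empty: this thread stalls */
                          pthread_cond_wait(&q_cond, &q_lock);
                      int v = queue_buf[q_head];
                      q_head = (q_head + 1) % QUEUE_SIZE;
                      q_count--;
                      pthread_cond_signal(&q_cond);
                      pthread_mutex_unlock(&q_lock);

                      printf("%d\n", task_f(task_e(task_d(v))));
                  }
                  return NULL;
              }

              int main(void)
              {
                  pthread_t t1, t2;

                  pthread_create(&t1, NULL, first_half, NULL);
                  pthread_create(&t2, NULL, second_half, NULL);
                  pthread_join(t1, NULL);
                  pthread_join(t2, NULL);
                  return 0;
              }

              Even in this toy form the problem is visible: unless tasks A-C and D-F each do a comparable amount of work, one of the two threads just sits in pthread_cond_wait() and you never get two fully loaded cores.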

              Of course we could use OpenMP or Threading Building Blocks etc. for some algorithms but that would give us very little speedup, not enough to get 2 fully-loaded cores.

              I'd really like to see an actual plan instead of arguing that we should use some threading library. No library will magically use all your cores. And BTW, Mesa does use NPTL; it didn't help much, did it?



              • #77
                How to build only Gallium (r300)?

                Hi folks,
                how do I build only the Gallium driver, without the classic Mesa (r300) build?

                Because right now I've got:
                ../mesa/lib/r300_dri.so <<- mesa (R300 classic)
                ../mesa/lib/gallium/r300_dri.so <<- gallium (R300 gallium)

                Please correct me if I'm wrong!

                P.S. My configure line looks like this:
                ./configure --disable-gallium-nouveau --disable-gallium-svga --disable-gallium-i915 --disable-gallium-i965 --disable-gallium-swrast --enable-gallium-radeon --disable-64-bit --enable-32-bit --enable-asm --disable-debug --disable-glut --disable-glu --disable-glw --disable-egl --disable-openvg --with-state-trackers=dri,glx --with-dri-drivers=r300

                P.P.S. Unreal Tournament runs about 10 fps better with classic Mesa (own demo recording). Unreal Gold crashes the whole system with Gallium (Ctrl+Alt+Backspace does not help).



                • #78
                  Originally posted by popper View Post
                  As it happens, I agree with what you say: basically, make a development plan and follow it; it's the only way to progress in a timely manner.

                  But, and there's always a but: if your plan actually includes threading at some point, then allowing for that at the core of your plan is probably a good thing to consider, so you don't have to rip out lots of new code later that just doesn't work well with threading.

                  If there is a suggestion here, that is it, nothing more, nothing less.

                  Of course, given that we are talking open source here and not closed, what harm is there in picking an existing, well-optimized external threading library, using its supplied API, and trying to build your plan with it alongside your other tests as you progress? Slightly more work, sure, but mattst88 and the others are coding for fun and to learn/try new things, not for pay, I assume!
                  You seem to be under the impression that multi-threading is a feature that gets added by linking to a library and calling some functions. I can assure you, it is nothing like that; the linking-to-a-library and calling-functions part is much less than 1% of the effort that would be required.



                  • #79
                    Originally posted by 69acid69 View Post
                    Hi folks,
                    how do I build only the Gallium driver, without the classic Mesa (r300) build?

                    Because right now I've got:
                    ../mesa/lib/r300_dri.so <<- mesa (R300 classic)
                    ../mesa/lib/gallium/r300_dri.so <<- gallium (R300 gallium)

                    Please correct me if I'm wrong!

                    P.S. My configure line looks like this:
                    ./configure --disable-gallium-nouveau --disable-gallium-svga --disable-gallium-i915 --disable-gallium-i965 --disable-gallium-swrast --enable-gallium-radeon --disable-64-bit --enable-32-bit --enable-asm --disable-debug --disable-glut --disable-glu --disable-glw --disable-egl --disable-openvg --with-state-trackers=dri,glx --with-dri-drivers=r300

                    P.P.S. Unreal Tournament runs about 10 fps better with classic Mesa (own demo recording). Unreal Gold crashes the whole system with Gallium (Ctrl+Alt+Backspace does not help).
                    Use: --with-dri-drivers=
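
                    That is, leave the classic DRI driver list empty so that no classic driver gets built and only the Gallium r300 driver remains. Untested, but with the options from the original line above, the whole invocation would look something like this:

                    ./configure --disable-gallium-nouveau --disable-gallium-svga --disable-gallium-i915 --disable-gallium-i965 --disable-gallium-swrast --enable-gallium-radeon --disable-64-bit --enable-32-bit --enable-asm --disable-debug --disable-glut --disable-glu --disable-glw --disable-egl --disable-openvg --with-state-trackers=dri,glx --with-dri-drivers=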



                    • #80
                      What about the low-hanging fruit, such as compiling mutually independent shaders in parallel?
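
                      A toy sketch of that idea with plain pthreads (compile_shader() here is just a made-up stand-in, not a real Mesa entry point, and the hard part in the real stack would be making the compiler paths thread-safe at all): one compile job per independent shader, all joined before the results get linked or used.

                      #include <pthread.h>
                      #include <stdio.h>

                      #define N_SHADERS 4

                      /* Stand-in for real compilation work on one shader. */
                      static void compile_shader(long id)
                      {
                          printf("compiling shader %ld\n", id);
                      }

                      static void *compile_job(void *arg)
                      {
                          compile_shader((long)arg);      /* each thread compiles one shader */
                          return NULL;
                      }

                      int main(void)
                      {
                          pthread_t threads[N_SHADERS];

                          /* The shaders are independent of each other, so fire off one thread per shader. */
                          for (long i = 0; i < N_SHADERS; i++)
                              pthread_create(&threads[i], NULL, compile_job, (void *)i);

                          /* Wait for every compile to finish before linking/using the shaders. */
                          for (int i = 0; i < N_SHADERS; i++)
                              pthread_join(threads[i], NULL);

                          return 0;
                      }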

