Benchmarks Of AMD's Newest Gallium3D Driver


  • #71
    Originally posted by Qaridarium
In my point of view, all fixed-function pipelines are just bad in visual quality.

These fake shader lights and shader effects are just bad in quality if you compare them to real raytraced lighting.

Show me your D3D code, man.

They did raytracing in software on the CPU in the past.
First result on Google: http://graphics.stanford.edu/papers/i3dkdtree/

    Our system also takes advantage of GPUs' strengths at rasterization and shading to offer a mode where rasterization replaces eye ray scene intersection, and primary hits and local shading are produced with standard Direct3D code. For 1024x1024 renderings of our scenes with shadows and Phong shading, we achieve 12-18 frames per second.

    Comment


    • #72
      Originally posted by Qaridarium
Are you sure they use any fixed functions of DX?
      Fixed-function DX died with DX7. This solution uses DX9, which means HLSL.

      There are hundreds of HLSL-/GLSL-based raytracing implementations. You don't need OpenCL to make this happen.
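For anyone wondering what a "shader-based raytracer" actually computes, here is a minimal CPU-side sketch in Python of the per-pixel work such a GLSL/HLSL raytracer performs: intersect a camera ray with a sphere and shade the hit with a simple Lambert term. The scene, the function names, and the single hard-coded "pixel" are made up purely for illustration.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # direction is assumed normalized (a == 1)
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def shade(hit_point, center, light_dir):
    """Lambert term: dot(normal, light direction), clamped to [0, 1]."""
    n = [hit_point[i] - center[i] for i in range(3)]
    length = math.sqrt(sum(v * v for v in n))
    n = [v / length for v in n]
    return max(0.0, sum(n[i] * light_dir[i] for i in range(3)))

# One "pixel": a ray pointing down -z that hits a unit sphere at the origin.
t = ray_sphere((0, 0, 5), (0, 0, -1), (0, 0, 0), 1.0)
if t is not None:
    hit = (0, 0, 5 - t)
    print("hit at t=%.2f, brightness=%.2f" % (t, shade(hit, (0, 0, 0), (0, 0, 1))))
```

A fragment shader does exactly this per pixel, just in HLSL or GLSL instead of Python.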

      Comment


      • #73
        Originally posted by Qaridarium
And your talk of FPS with raytracing is just complete nonsense!
        Think about this for a minute.

If you don't render whole frames, you end up with these "ant lines." So say you only render a third of a frame: instead of tearing or incorrect pixels, we end up with a third of a valid scene evenly distributed across the screen, with the remaining pixels being old scene data. Now use this technology outside of the proof-of-concept demos and in real games like, say, Left 4 Dead.

(Q) What happens when you move around at high speed, looking left and right and jittering around firing guns, and almost every single pixel changes every single game update at around 60Hz?
        (A) You end up with a completely unrecognizable mess of smeared color across your screen that results in a completely and utterly unplayable game.

At some point in the future, when ray tracing is more than just the toy demos you've found on YouTube, scenes will be rendered as entire frames and displayed at once. Because they have to be. Because the alternative is not usable or playable technology, not remotely.

        Also, try googling for "ray tracing fps." The first 5 hits for me were papers written by the actual graphics hardware vendors about GPGPU ray tracers... and they most absolutely certainly beyond any doubt measure things in FPS. Because real, non-toy raytracers do not accept "ant lines" as an acceptable outcome of a render, period.

        Comment


        • #74
          A thought has occurred to me a couple of times in the past weeks:

After seeing what is possible wrt. automatic benchmarking - like this graph from Phoromatic - I've been wondering whether this is possible with graphics drivers too.
Something completely in line with the charts from the above link, but with a machine constantly pulling the newest git versions of r600c and r600g, compiling them, and running benchmarks.

So on the X-axis we would have the date, exactly as on the Phoromatic page, and the Y-axis would show the FPS count for a specific game, like Nexuiz, for r600c, r600g, and fglrx.
We could then see, very precisely, the performance gains that these two open drivers make - day by day.

          Is it just me or would that be extremely cool?

          To take it even further, each git commit in the driver code could be tied together with a benchmark, to allow the developers to see any performance gains or hits that a patch introduces (a la this), and perhaps help to hint at where the driver needs work in order to get more performance.

Is there any reason why this isn't possible, and a custom, "hand-made" benchmark, like the one that is the subject of this thread, has to be performed instead?
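To make the idea concrete, here is a rough sketch, in Python, of what the daily loop could look like. Everything in it is an assumption for illustration: the repository location, the build step, the Phoronix Test Suite invocation, and the log path would all need to be adapted to a real setup.

```python
import datetime
import subprocess

MESA_REPO_DIR = "/srv/bench/mesa"        # assumed local clone holding r600c/r600g
RESULTS_LOG = "/srv/bench/results.log"   # assumed place to collect results

def run(cmd):
    """Run a command, raising if it fails."""
    subprocess.check_call(cmd, shell=isinstance(cmd, str))

def build_and_benchmark():
    # 1. Pull the newest driver sources.
    run(["git", "-C", MESA_REPO_DIR, "pull", "--ff-only"])
    head = subprocess.check_output(
        ["git", "-C", MESA_REPO_DIR, "rev-parse", "--short", "HEAD"]).decode().strip()

    # 2. Rebuild the drivers (build system and flags are placeholders).
    run(["make", "-C", MESA_REPO_DIR, "-j4"])

    # 3. Run the benchmark; a Phoronix Test Suite call is one plausible choice,
    #    but any FPS-reporting game run would do.
    run("phoronix-test-suite batch-benchmark pts/nexuiz")

    # 4. Record date + commit, so the X axis can be the date, as on Phoromatic.
    with open(RESULTS_LOG, "a") as log:
        log.write("%s %s\n" % (datetime.date.today().isoformat(), head))

if __name__ == "__main__":
    build_and_benchmark()
```

Run from cron once a day, this would produce exactly the kind of date-vs-FPS series described above.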


          Originally posted by Qaridarium
A very good example: OpenCL + Bullet physics does raytracing:

          http://www.youtube.com/watch?v=33rU1axSKhQ
Cool video! Looks so real, despite the simple textures etc.

          Comment


          • #75
            Originally posted by runeks View Post
Something completely in line with the charts from the above link, but with a machine constantly pulling the newest git versions of r600c and r600g, compiling them, and running benchmarks.
            Why go for the kill when you can go for overkill: we could have a commit-by-commit benchmarking of r600c and r600g for commits that actually touch those drivers. This would also give away speed-related regressions pretty much immediately after they end up in the tree.
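A small sketch of how that commit-by-commit selection could work, again with assumed paths and with the actual build-and-benchmark step left as a placeholder:

```python
import subprocess

MESA_REPO_DIR = "/srv/bench/mesa"
DRIVER_PATHS = ["src/mesa/drivers/dri/r600",      # assumed r600c location
                "src/gallium/drivers/r600"]       # assumed r600g location

def commits_touching_driver(since="HEAD~50"):
    """Commits (oldest first) in the given range that touch the r600 drivers."""
    out = subprocess.check_output(
        ["git", "-C", MESA_REPO_DIR, "rev-list", "--reverse",
         since + "..HEAD", "--"] + DRIVER_PATHS)
    return out.decode().split()

for commit in commits_touching_driver():
    subprocess.check_call(["git", "-C", MESA_REPO_DIR, "checkout", commit])
    # The build-and-benchmark step from the earlier sketch would run here,
    # tagging the result with this commit hash instead of the date.
    print("would benchmark", commit)
```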

            Comment


            • #76
              Originally posted by Qaridarium
OpenCL only needs to be better than HLSL/GLSL --
Eh, no. OpenCL has a different target audience than HLSL/GLSL. It is not a feasible replacement, and it is not meant as one either.

              Comment


              • #77
                Originally posted by Qaridarium
Tearing is not the same as ant noise.
                Of course not, but it's a similar issue: broken images.

I can write an immediate-mode triangle rasterizer without any double buffering. It has no frames per second, because there is no point at which a whole frame is displayed to the user. You will see incomplete images as it runs. If it runs at a very high speed, you may not notice those incomplete images. What you need for that is an incredibly high triangles-per-second, shader-ops-per-second, and fill rate. That's the same general kind of limiting factor as "RPS" is in a ray tracer.

The FPS is not a native part of either rendering approach; it's something we intentionally slap on, because it's the difference between seeing broken, crappy images and seeing clean, complete images.
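A toy sketch in Python of that distinction, with made-up names and no real display, just to show where "a frame" enters the picture:

```python
WIDTH, HEIGHT = 4, 3

def render_pixel(x, y, t):
    """Stand-in for per-pixel work (rasterizer or ray tracer alike)."""
    return (x + y + t) % 2

def immediate_mode(display, t):
    # Pixels hit the visible surface as soon as they're computed: the viewer
    # can see a half-finished image at any moment. No frame boundary, no FPS.
    for y in range(HEIGHT):
        for x in range(WIDTH):
            display[y][x] = render_pixel(x, y, t)

def double_buffered(display, t):
    # Render everything into a back buffer, then swap once: the viewer only
    # ever sees complete frames, and "frames per second" becomes meaningful.
    back = [[render_pixel(x, y, t) for x in range(WIDTH)] for y in range(HEIGHT)]
    display[:] = back  # the "swap"

front = [[0] * WIDTH for _ in range(HEIGHT)]
immediate_mode(front, t=0)   # viewer could observe this buffer half-written
double_buffered(front, t=1)  # viewer only ever sees the completed frame
print(front)
```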

Also, keep in mind post-processing. Yes, you can do a lot of post-processing as part of the render for a pixel in a ray tracer, but not all of it; not without defining an incredibly complex filter up front, at least. Take a simple Gaussian blur, for instance. Doing it as part of a pure ray-tracer approach is not fun and absolutely not efficient, GPGPU or not. Doing it on a final image is actually pretty quick, though. If you want to have a scene behind some menus or something and want that scene blurred, you're damn well going to want to render a complete frame, post-process it, and then render over it. That's universal, no matter how that original scene was actually rendered in the first place.

                Sure, you could go ahead and accept artifacts in that scene like ant lines, except those artifacts can multiply badly with various post-filter effects. If each pixel influences multiple pixels in the output, then every single incomplete/incorrect pixel in the source buffer results in numerous incorrect pixels in the output buffer. You absolutely want completed frames before doing post-processing, period.
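As a small illustration of why, here is a sketch of a separable Gaussian blur applied to a finished frame; the 3-tap kernel and the 5x5 test frame are invented for the example. Note how a single wrong source pixel contaminates a whole neighborhood of output pixels.

```python
KERNEL = [0.25, 0.5, 0.25]   # tiny 1D Gaussian approximation

def blur_1d(line):
    """Blur one row or column with the kernel, clamping at the edges."""
    n = len(line)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(KERNEL, start=-1):
            j = min(max(i + k, 0), n - 1)
            acc += w * line[j]
        out.append(acc)
    return out

def gaussian_blur(frame):
    """Separable blur: horizontal pass over rows, then vertical pass over columns."""
    horiz = [blur_1d(row) for row in frame]
    cols = [blur_1d([horiz[y][x] for y in range(len(horiz))])
            for x in range(len(horiz[0]))]
    return [[cols[x][y] for x in range(len(cols))] for y in range(len(horiz))]

# One bad pixel in an otherwise black frame bleeds into a 3x3 neighborhood.
frame = [[0.0] * 5 for _ in range(5)]
frame[2][2] = 1.0
for row in gaussian_blur(frame):
    print(["%.3f" % v for v in row])
```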

There is no modern, highly skilled realtime raytracing engine without ant noise.
                Because there is no realtime raytracing engine that's actually usable for anything other than silly little toy demos. Which is the core of what I was getting at.

But ant noise does not mean visible noise for humans.
It absolutely does. A single dead pixel on a high-resolution monitor is visible "noise" to humans. A single off pixel in a triangle rasterizer -- like seams or aliasing -- is visual noise.

                A single pixel that's not right in a ray traced render is also noise.

                If we had displays at 2000 DPI and the ray tracers were able to fill at least 99% of that space (with that remaining 1% divided evenly across the space) then maybe the noise would be imperceptible.

                We're decades away from that being possible with our monitors, much less our GPUs.

You do not need to render 100% of a frame, because a human cannot see the difference between 90% and 100%, or 80% and 100%.

In most apps 50% is fine, because on the second frame it's 75%.
Not true at all. The demos you're looking at don't show it very well, because they are very simple scenes where the camera moves around at low speed.

Try using that technique in Call of Duty while you're spinning left and right to fire at enemies, and you'll very, very, very easily notice the discrepancies.

At 60fps: if a human sees 30fps as fine for a movie, the human cannot tell the difference between 30 and 60fps in raytracing, because the screen changes per pixel and does not have a delivery time per frame.
Again, wrong. 30fps "works" in movies because the cameras move slowly and there's blurring. A lot of people hate the 30fps of movies because of the limitations it puts on the camera. Go watch a movie where the camera pans across a street. Even at 1080p, if that camera is moving at any even moderately fast velocity (say, 1/8th the speed you might turn your head while looking across a street), the whole scene is heavily blurred. You won't be able to make out faces or read anything while the camera is panning.

                30fps is totally unacceptable for video games or even any kind of decent animation. Movies make huge sacrifices to fit in the 30fps bracket. (On a side note, we have had the technology to move to higher frame rates in movies for years, but a lot of consumers complain about those because they "look wrong" -- which isn't because they actually _are_ wrong but simply because it's very different-looking if you're used to watching action flicks at 24fps, and people get stupid when things change or are different.)

That's so wrong. Any realtime raytracing engine works in a relative way.
                "Relative way" is more or less the same as saying "toy crappy demos" which is what I said.

                What you linked are a few shiny spheres floating over a textured plane. That's the very freaking definition of toy demos. Maybe you haven't noticed, but even old crappy games were made up of FAR more interesting shapes, and a shitload more of them too.

                What you're looking at are toy proofs of concepts showing off that somebody managed to write a highly simplified and extremely incomplete ray tracer that can barely handle a couple of the most simple to model shapes there are while chugging along at not-actually-realtime speeds but just partially-realtime speeds. Outside of the "look at what I can do" scene, what you are looking at is a broken ray tracer, not a working sample of a real engine.

                Go look up some actual ray tracing software (not realtime demos, but the actual software used today). They are already using OpenCL. Remember again that they are NOT realtime. And even with that limitation, the OpenCL versions lack a ton of the features of the CPU versions, because GPGPUs lack a ton of features of a CPU. And aren't realtime.

Nobody here is saying that ray tracing on OpenCL isn't possible, or isn't the future. It's just a very distant future, and what you're looking at is just a toy demo idea of what the future might possibly kinda sorta look like... maybe.

                And, more importantly, when that future comes, there will be NO ant lines because nobody is going to use this technology until it can render WHOLE FRAMES in realtime, because that's what people want. Doing it any other way is broken and will just look way worse.

                Just to finish it up, none of the papers on GPGPU ray tracing are advocating what you're saying, either. The actual people writing these are measuring things in FPS and talking about the day when they can render whole frames of complex scenes in realtime. You seem to be misinterpreting their work and claiming nonsense that even the people doing the work know is nonsense. Knock it off.

                Comment


                • #78
                  Originally posted by Qaridarium
Yes, it's not perfect right now, but realtime raytracing engines in the future will not render a full screen and every pixel!
Short response: read the last thing I wrote. The people at NVIDIA and ATI who are actually working on the technology are not saying the same thing you are. Every single paper on the topic I've found quite explicitly says what I'm saying, and the opposite of what you're saying. You're making stuff up.

                  Comment


                  • #79
                    Originally posted by nanonyme View Post
                    Why go for the kill when you can go for overkill: we could have a commit-by-commit benchmarking of r600c and r600g for commits that actually touch those drivers. This would also give away speed-related regressions pretty much immediately after they end up in the tree.
                    Exactly! That's actually what I meant. I didn't mean to imply that we would constantly be benchmarking the same drivers just for the sake of benchmarking, but I see how what I wrote could be taken to mean that.

                    But this should only be done if it would actually help the developers. If these commit-by-commit benchmarks would be of no value to the devs, I think daily benchmarks would be sufficient.

I'd just love to see that graph of r600g's performance slowly but steadily approaching fglrx.

                    Comment


                    • #80
                      Originally posted by runeks View Post
                      Exactly! That's actually what I meant. I didn't mean to imply that we would constantly be benchmarking the same drivers just for the sake of benchmarking, but I see how what I wrote could be taken to mean that.

                      But this should only be done if it would actually help the developers. If these commit-by-commit benchmarks would be of no value to the devs, I think daily benchmarks would be sufficient.

I'd just love to see that graph of r600g's performance slowly but steadily approaching fglrx.
I don't think there will be any significant performance improvement with the current design (though I hope, and would like, to be wrong on this). GPU drivers are not like any other beast; small improvements just don't scale up.

                      Comment
