Benchmarks Of AMD's Newest Gallium3D Driver


  • #76
    Originally posted by Qaridarium View Post
    You don't understand what ray tracing is. That is because there is no FPS anymore; you get unlimited FPS on any kind of hardware with ray tracing.

    With ray tracing you have rays per second (RPS).

    Fewer RPS just means more black or white pixels left per frame.
    The notion of "frame" is independent of the rendering method (rasterization or ray-tracing). Even ray-traced games use frames, as you yourself mention, which means "frames per second" is well defined.

    There are very good reasons for using frames instead of displaying each drawing operation on the fly. Seeing each pixel update as it is traced (or rasterized) would be extremely annoying to the user - try it! Modify glxgears to turn off double-buffering (it's a 2 line change). You won't have "frames" anymore but the result won't be pretty.
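
    For the curious, here is a minimal sketch of that change, assuming the classic GLX-based glxgears (the exact lines vary between versions): drop GLX_DOUBLEBUFFER from the visual attributes and replace the buffer swap with a plain flush, so pixels become visible as they are drawn.

    Code:
    /* Sketch only: the two relevant edits in a GLX app such as glxgears
     * to disable double buffering. */
    static int attribs[] = {
        GLX_RGBA,
        /* GLX_DOUBLEBUFFER,  <- removed: request a single-buffered visual */
        GLX_RED_SIZE, 1,
        GLX_GREEN_SIZE, 1,
        GLX_BLUE_SIZE, 1,
        GLX_DEPTH_SIZE, 1,
        None
    };

    /* ... in the draw loop ... */
    draw();                          /* render the gears */
    /* glXSwapBuffers(dpy, win); */  /* removed: no back buffer to swap */
    glFlush();                       /* each primitive appears as soon as it lands */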



    • #77
      Originally posted by Qaridarium View Post
      In my point of view, all fixed-function pipelines are just bad in visual quality.

      These fake shader lights and shader effects are just bad in quality if you compare them to real ray-traced lighting.

      Show me your D3D code, man.

      In the past they did ray tracing in software on the CPU.
      First result on Google: http://graphics.stanford.edu/papers/i3dkdtree/

      Our system also takes advantage of GPUs' strengths at rasterization and shading to offer a mode where rasterization replaces eye ray scene intersection, and primary hits and local shading are produced with standard Direct3D code. For 1024x1024 renderings of our scenes with shadows and Phong shading, we achieve 12-18 frames per second.



      • #78
        Originally posted by elanthis View Post
        I think you have no idea what you're talking about. Actually, I KNOW you have no idea what you're talking about. Par for the course.

        The scenes in a ray tracer are still rendered into a single frame which is displayed once fully assembled. There is a very definite FPS involved in the process.

        The FPS will depend on a similar combination of factors to those that affect triangle-mesh rendering, including the destination framebuffer size, the number of objects, and the complexity of the lighting equations used.

        The "RPS" you mention is more or less the same idea as the "triangles per second" or "fragment fill rate" that you have on contemporary 3D rasterization hardware. All it indicates is how complex of a scene the hardware can manage while maintaining a usable FPS.
        You are just wrong; ray tracing does not care about your frame to display.

        On any kind of ray-tracing hardware you can always have the maximum FPS the monitor can handle.

        The only difference between low-RPS hardware and high-RPS hardware is the black or white ant noise over the screen.

        More RPS means less ant noise.


        And your talk about FPS with ray tracing is just complete nonsense!

        Watch some example videos on YouTube: on slow hardware and on fast hardware, the only difference is the ant noise.


        Originally posted by elanthis View Post
        Carmack mentioned that he hoped we'd have MIXED MODE renderers within 3-5 years. These are not actual ray tracing engines, but rather traditional triangle rasterizers that use some extremely simplified and inaccurate ray tracing techniques to compute shadows and lighting on the GPU during rendering rather than on the CPU before rendering. That's it, nothing more.

        Also, just to be clear, even if you're right about ray-tracing magically becoming feasible, DirectX is in no way being threatened by OpenCL, because DirectX has DirectCompute -- same damn thing, just a different API and syntax. More games make use of DirectCompute than OpenCL by a huge margin right now, today. (Not for rendering; for physics and such.)
        Yes, DirectCompute, but I compare it to DirectX. Yes, DirectCompute can beat OpenCL.



        • #79
          Originally posted by BlackStar View Post
          Are you sure they use any fixed functions of DX?

          I think they only use HLSL, and your source does not do the ray tracing fully on the GPU; they do it on the CPU and only a tiny part goes to the shader cores on the GPU.



          • #80
            Originally posted by Qaridarium View Post
            Are you sure they use any fixed functions of DX?
            Fixed-function DX died with DX7. This solution uses DX9, which means HLSL.

            There are hundreds of HLSL-/GLSL-based raytracing implementations. You don't need OpenCL to make this happen.
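
            For a taste of what such a shader-based tracer actually does, here is a minimal C sketch of the per-pixel ray/sphere intersection at the heart of it - the same arithmetic an HLSL/GLSL fragment shader runs once per fragment. The names and structure are illustrative only.

            Code:
            #include <math.h>

            typedef struct { float x, y, z; } vec3;

            static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

            /* Returns 1 and writes the hit distance to *t if the ray (origin, dir)
             * hits the sphere (center, radius); dir is assumed to be normalised. */
            int ray_sphere(vec3 origin, vec3 dir, vec3 center, float radius, float *t)
            {
                vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
                float b = dot3(oc, dir);
                float c = dot3(oc, oc) - radius * radius;
                float disc = b * b - c;       /* discriminant of the quadratic */
                if (disc < 0.0f)
                    return 0;                 /* the ray misses the sphere */
                *t = -b - sqrtf(disc);        /* nearest intersection distance */
                return *t > 0.0f;
            }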



            • #81
              Originally posted by BlackStar View Post
              Fixed-function DX died with DX7. This solution uses DX9, which means HLSL.

              There are hundreds of HLSL-/GLSL-based raytracing implementations. You don't need OpenCL to make this happen.
              OpenCL only needs to be better than HLSL/GLSL.



              • #82
                Originally posted by Qaridarium View Post
                And your talk about FPS with ray tracing is just complete nonsense!
                Think about this for a minute.

                If you do not render whole frames, you end up with these "ant lines." So say you only render 1/3rd of a frame. Let's say that instead of tearing or incorrect pixels, we just end up with 1/3rd of a valid scene evenly distributed across the screen with the remaining pixels being old scene data. Now use this technology outside of the proof-of-concept demos and in real games like, say, Left 4 Dead.

                (Q) What happens when you move around at high speeds, looking left and right and jittering around firing guns, and almost every single pixel changes every single game update at around 60 Hz?
                (A) You end up with a completely unrecognizable mess of smeared color across your screen that results in a completely and utterly unplayable game.

                At some point in the future, when ray tracing is more than just the toy demos you've found on Youtube, the scenes will be rendered to an entire frame and displayed at once. Because they have to be. Because the alternative is not usable or playable technology, not remotely.

                Also, try googling for "ray tracing fps." The first 5 hits for me were papers written by the actual graphics hardware vendors about GPGPU ray tracers... and they most absolutely certainly beyond any doubt measure things in FPS. Because real, non-toy raytracers do not accept "ant lines" as an acceptable outcome of a render, period.



                • #83
                  Originally posted by elanthis View Post
                  Think about this for a minute.

                  If you do not render whole frames, you end up with these "ant lines." So say you only render 1/3rd of a frame. Let's say that instead of tearing or incorrect pixels, we just end up with 1/3rd of a valid scene evenly distributed across the screen with the remaining pixels being old scene data. Now use this technology outside of the proof-of-concept demos and in real games like, say, Left 4 Dead.

                  (Q) What happens when you move around at high speeds, looking left and right and jittering around firing guns, and almost every single pixel changes every single game update at around 60 Hz?
                  (A) You end up with a completely unrecognizable mess of smeared color across your screen that results in a completely and utterly unplayable game.

                  At some point in the future, when ray tracing is more than just the toy demos you've found on Youtube, the scenes will be rendered to an entire frame and displayed at once. Because they have to be. Because the alternative is not usable or playable technology, not remotely.

                  Also, try googling for "ray tracing fps." The first 5 hits for me were papers written by the actual graphics hardware vendors about GPGPU ray tracers... and they most absolutely certainly beyond any doubt measure things in FPS. Because real, non-toy raytracers do not accept "ant lines" as an acceptable outcome of a render, period.
                  Tearing is not the same as ant noise.

                  There is no modern, sophisticated realtime ray-tracing engine without ant noise.

                  And no realtime ray-tracing engine renders full frames; they only deliver RPS at the monitor's native Hz refresh rate.

                  But ant noise does not mean visible noise for humans.

                  You do not need to render 100% of a frame, because a human cannot see the difference between 90% and 100%, or between 80% and 100%.

                  In most apps 50% is fine, because by the second frame it is at 75%.

                  At 60 FPS this means: if a human accepts 30 FPS as a movie, the human cannot tell the difference between 30 and 60 FPS in ray tracing, because the screen changes per pixel and there is no per-frame delivery deadline.

                  You get impressive graphic effects just because the ant noise imitates a natural, uncertain movement effect.

                  "ant lines"

                  There are no ant lines in ray tracing; it is per pixel, which means you really cannot see the ants at a higher RPS rate.

                  "At some point in the future, when ray tracing is more than just the toy demos you've found on Youtube, the scenes will be rendered to an entire frame and displayed at once. Because they have to be. Because the alternative is not usable or playable technology, not remotely."

                  That is so wrong; every realtime ray-tracing engine works in a relative way.

                  Realtime ray tracing with OpenCL:

                  http://www.youtube.com/watch?v=v1JS4wyGGy0

                  http://www.youtube.com/watch?v=zxEsyukiRw4



                  • #84
                    http://www.youtube.com/watch?v=JT6Iyl35Wnc

                    This video shows the noise ants very well.



                    • #85
                      A very good example: OpenCL + Bullet physics doing ray tracing:

                      http://www.youtube.com/watch?v=33rU1axSKhQ



                      • #86
                        A thought has occurred to me a couple of times in the past weeks:

                        After seeing what is possible with automatic benchmarking - like this graph from Phoromatic - I've been wondering whether the same is possible with graphics drivers.
                        Something completely in line with the charts from the above link, only with a machine constantly pulling the newest git versions of r600c and r600g, compiling them and running benchmarks.

                        So on the X-axis we would have the date, exactly as on the Phoromatic page, and on the Y-axis the FPS count for a specific game, like Nexuiz, for r600c, r600g and fglrx.
                        We could then see, very precisely, the performance gains that these two open drivers have - day by day.

                        Is it just me or would that be extremely cool?

                        To take it even further, each git commit in the driver code could be tied together with a benchmark, to allow the developers to see any performance gains or hits that a patch introduces (a la this), and perhaps help to hint at where the driver needs work in order to get more performance.

                        Is there any reason why this isn't possible, and a custom, "hand-made" benchmark, like the one that is the subject of this thread, has to be performed?
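
                        Purely as a sketch of what such a harness could look like (the benchmark script below is a hypothetical placeholder, not an existing tool), the daily loop would be: pull, rebuild, run the game once per driver, and append the FPS numbers to a log for plotting against the date.

                        Code:
                        #include <stdio.h>
                        #include <stdlib.h>

                        /* Hypothetical harness: pull the latest driver code, rebuild, and append
                         * one FPS sample per driver to results.csv for later plotting over time.
                         * "run_benchmark.sh <driver>" stands in for whatever actually launches
                         * the game (e.g. Nexuiz) and prints its average FPS. */
                        static int run(const char *cmd)
                        {
                            int rc = system(cmd);
                            if (rc != 0)
                                fprintf(stderr, "command failed (%d): %s\n", rc, cmd);
                            return rc;
                        }

                        int main(void)
                        {
                            if (run("cd mesa && git pull && make -j4") != 0)
                                return 1;

                            run("./run_benchmark.sh r600c >> results.csv");
                            run("./run_benchmark.sh r600g >> results.csv");
                            run("./run_benchmark.sh fglrx >> results.csv");
                            return 0;
                        }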


                        Originally posted by Qaridarium View Post
                        A very good example: OpenCL + Bullet physics doing ray tracing:

                        http://www.youtube.com/watch?v=33rU1axSKhQ
                        Cool video! Looks so real, despite the simple textures etc.



                        • #87
                          Originally posted by runeks View Post
                          Something completely on line with the charts from the above link, but only a machine constantly pulling the newest git versions of r600c and r600g, compiling them and running benchmarks.
                          Why go for the kill when you can go for overkill: we could have commit-by-commit benchmarking of r600c and r600g, for commits that actually touch those drivers. This would also reveal speed-related regressions pretty much immediately after they end up in the tree.



                          • #88
                            Originally posted by Qaridarium View Post
                            OpenCL only needs to be better than HLSL/GLSL.
                            Eh, no. OpenCL has a different target audience than HLSL/GLSL. It is not a feasible replacement and it is not meant as one either.



                            • #89
                              Originally posted by BlackStar View Post
                              Eh, no. OpenCL has a different target audience than HLSL/GLSL. It is not a feasible replacement and it is not meant as one either.
                              I think ray tracing was not the target of HLSL/GLSL.

                              OpenCL is much better suited for that.



                              • #90
                                Originally posted by Qaridarium View Post
                                Tearing is not the same as ant noise.
                                Of course not, but it's a similar issue: broken images.

                                I can write an immediate-mode triangle rasterizer without any double buffering. This also has no frames per second, because there is no point where a whole frame is displayed to the user. You will see incomplete images as it runs. If it runs at a very high speed, you may not notice those incomplete images. What you need for this then is an incredibly high triangles/second, shader-ops/second, and fill rate. This is the same general idea of a limiting factor as "RPS" is in a ray tracer.

                                The FPS is not a native part of either rendering approach; it's something we intentionally slap on because it's the difference between seeing broken crappy images or seeing clean and complete images.

                                Also, keep post-processing in mind. Yes, you can do a lot of post-processing as part of the render for a pixel in a ray tracer, but not all of it; not without defining an incredibly complex filter up front, at least. Take a simple Gaussian blur, for instance. Doing it as a pure ray-tracer approach is not fun and absolutely not efficient, GPGPU or not. Doing it on a final image is actually pretty quick, though. If you want to have a scene behind some menus or something and want that scene blurred, you're damn well going to want to render a complete frame, post-process it, and then render over it. That's universal no matter how that original scene was actually rendered in the first place.

                                Sure, you could go ahead and accept artifacts in that scene like ant lines, except those artifacts can multiply badly with various post-filter effects. If each pixel influences multiple pixels in the output, then every single incomplete/incorrect pixel in the source buffer results in numerous incorrect pixels in the output buffer. You absolutely want completed frames before doing post-processing, period.
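
                                To make that concrete, here is a minimal sketch (plain C, illustrative only) of the horizontal pass of a separable Gaussian blur over a finished RGBA8 frame. Because every output pixel mixes several input pixels, any stale "ant" pixel in the source smears into its neighbours; the vertical pass is symmetric.

                                Code:
                                #include <stdint.h>

                                /* Horizontal pass of a separable Gaussian blur over an RGBA8 frame.
                                 * 5-tap kernel with weights 1 4 6 4 1 (sum = 16). */
                                void blur_h(const uint8_t *src, uint8_t *dst, int width, int height)
                                {
                                    static const int k[5] = { 1, 4, 6, 4, 1 };

                                    for (int y = 0; y < height; y++) {
                                        for (int x = 0; x < width; x++) {
                                            int sum[4] = { 0, 0, 0, 0 };
                                            for (int i = -2; i <= 2; i++) {
                                                int sx = x + i;
                                                if (sx < 0) sx = 0;               /* clamp at the edges */
                                                if (sx >= width) sx = width - 1;
                                                const uint8_t *p = src + 4 * (y * width + sx);
                                                for (int c = 0; c < 4; c++)
                                                    sum[c] += k[i + 2] * p[c];
                                            }
                                            for (int c = 0; c < 4; c++)
                                                dst[4 * (y * width + x) + c] = (uint8_t)(sum[c] / 16);
                                        }
                                    }
                                }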

                                There is no modern, sophisticated realtime ray-tracing engine without ant noise.
                                Because there is no realtime raytracing engine that's actually usable for anything other than silly little toy demos. Which is the core of what I was getting at.

                                But ant noise does not mean visible noise for humans.
                                It absolutely does. A single dead pixel on a high resolution monitor is visible "noise" for humans. A single off pixel in a triangle rasterizer -- like seams or aliasing -- is visual noise.

                                A single pixel that's not right in a ray traced render is also noise.

                                If we had displays at 2000 DPI and the ray tracers were able to fill at least 99% of that space (with that remaining 1% divided evenly across the space) then maybe the noise would be imperceptible.

                                We're decades away from that being possible with our monitors, much less our GPUs.

                                You do not need to render 100% of a frame, because a human cannot see the difference between 90% and 100%, or between 80% and 100%.

                                In most apps 50% is fine, because by the second frame it is at 75%.
                                Not true at all. The demos you're looking at don't show it very well, because they are very simple scenes where the camera moves around at low speed.

                                Try using that technique in Call of Duty while you're spinning left and right to fire at enemies, and you'll very, very, very easily notice the discrepancies.

                                At 60 FPS this means: if a human accepts 30 FPS as a movie, the human cannot tell the difference between 30 and 60 FPS in ray tracing, because the screen changes per pixel and there is no per-frame delivery deadline.
                                Again, wrong. 30fps "works" in movies because the cameras move slowly and there's blurring. A lot of people hate the 30fps of movies because of the limitations it puts on the camera. Go watch a movie where the camera pans across a street. Even at 1080p, if that camera is moving at any even moderately fast velocity (say, 1/8th the speed you might turn your head while looking over a street), the whole scene is highly blurred. You won't be able to make out faces or read scenes while the camera is panning.

                                30fps is totally unacceptable for video games or even any kind of decent animation. Movies make huge sacrifices to fit in the 30fps bracket. (On a side note, we have had the technology to move to higher frame rates in movies for years, but a lot of consumers complain about those because they "look wrong" -- which isn't because they actually _are_ wrong but simply because it's very different-looking if you're used to watching action flicks at 24fps, and people get stupid when things change or are different.)

                                thats so wrong any realtime raytracing engine works in an relativ way.
                                "Relative way" is more or less the same as saying "toy crappy demos" which is what I said.

                                What you linked are a few shiny spheres floating over a textured plane. That's the very freaking definition of toy demos. Maybe you haven't noticed, but even old crappy games were made up of FAR more interesting shapes, and a shitload more of them too.

                                What you're looking at are toy proofs of concepts showing off that somebody managed to write a highly simplified and extremely incomplete ray tracer that can barely handle a couple of the most simple to model shapes there are while chugging along at not-actually-realtime speeds but just partially-realtime speeds. Outside of the "look at what I can do" scene, what you are looking at is a broken ray tracer, not a working sample of a real engine.

                                Go look up some actual ray tracing software (not realtime demos, but the actual software used today). They are already using OpenCL. Remember again that they are NOT realtime. And even with that limitation, the OpenCL versions lack a ton of the features of the CPU versions, because GPGPUs lack a ton of features of a CPU. And aren't realtime.

                                Nobody here is saying that ray tracing on OpenCL isn't possible, or isn't the future. It's just a very distant future, and what you're looking at is just a toy demo idea of what the future might possibly kinda sorta could look like... maybe.

                                And, more importantly, when that future comes, there will be NO ant lines because nobody is going to use this technology until it can render WHOLE FRAMES in realtime, because that's what people want. Doing it any other way is broken and will just look way worse.

                                Just to finish it up, none of the papers on GPGPU ray tracing are advocating what you're saying, either. The actual people writing these are measuring things in FPS and talking about the day when they can render whole frames of complex scenes in realtime. You seem to be misinterpreting their work and claiming nonsense that even the people doing the work know is nonsense. Knock it off.

