Benchmarks Of AMD's Newest Gallium3D Driver


  • #91
    Originally posted by elanthis View Post
    Of course not, but it's a similar issue: broken images.

    I can write an immediate-mode triangle rasterizer without any double buffering. This also has no frames per second, because there is no point where a whole frame is displayed to the user. You will see incomplete images as it runs. If it runs at a very high speed, you may not notice those incomplete images. What you need for this then is an incredibly high triangles/second, shader-ops/second, and fill rate. This is the same general idea of a limiting factor as "RPS" is in a ray tracer.

    The FPS is not a native part of either rendering approach; it's something we intentionally slap on because it's the difference between seeing broken crappy images or seeing clean and complete images.
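
    A minimal sketch of that idea (plain Python, no real graphics API; the buffer size and frame count are arbitrary): the viewer only ever sees the front buffer, so "frames per second" is a property of presenting whole frames, not of the rasterizer itself.

    import time

    WIDTH, HEIGHT = 64, 64

    def render_scene(back):
        # Draw into the off-screen back buffer; partially drawn
        # states never reach the viewer.
        for y in range(HEIGHT):
            for x in range(WIDTH):
                back[y][x] = (x ^ y) & 0xFF

    def present(back, front):
        # "Swap": only a finished image ever becomes the visible front buffer.
        for y in range(HEIGHT):
            front[y][:] = back[y]

    front = [[0] * WIDTH for _ in range(HEIGHT)]
    back = [[0] * WIDTH for _ in range(HEIGHT)]

    N = 50
    start = time.perf_counter()
    for _ in range(N):
        render_scene(back)    # may take arbitrarily long per frame
        present(back, front)  # one complete image per iteration
    elapsed = time.perf_counter() - start
    print(f"frames per second: {N / elapsed:.1f}")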

    Also, keep in mind the fact of post-processing. Yes, you can do a lot of post-processing as part of the render for a pixel in a ray tracer, but not all of it; not without defining an incredibly complex filter up front, at least. Take a simple Gaussian blur, for instance. Doing it as a pure ray tracer approach is not fun and absolutely not efficient, GPGPU or not. Doing it on a final image is actually pretty quick, though. If you want to have a scene behind some menus or something and want that scene blurred, you're damn well going to want to render a complete frame, post-process it, and then render over it. That's universal no matter how that original scene was actually rendered in the first place.

    Sure, you could go ahead and accept artifacts in that scene like ant lines, except those artifacts can multiply badly with various post-filter effects. If each pixel influences multiple pixels in the output, then every single incomplete/incorrect pixel in the source buffer results in numerous incorrect pixels in the output buffer. You absolutely want completed frames before doing post-processing, period.
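
    To put a number on that, a minimal sketch (plain NumPy, not tied to any real renderer; the image size and kernel radius are arbitrary): a single wrong pixel in an otherwise finished frame dirties an entire neighbourhood of the blurred output.

    import numpy as np

    def gaussian_kernel(radius=2, sigma=1.0):
        # 1D Gaussian kernel, normalized to sum to 1.
        x = np.arange(-radius, radius + 1, dtype=float)
        k = np.exp(-(x * x) / (2.0 * sigma * sigma))
        return k / k.sum()

    def blur(image, radius=2, sigma=1.0):
        # Separable Gaussian blur: convolve rows, then columns.
        k = gaussian_kernel(radius, sigma)
        rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

    clean = np.full((64, 64), 0.5)   # a "finished" frame, all mid-grey
    noisy = clean.copy()
    noisy[32, 32] = 1.0              # one incomplete/incorrect pixel

    changed = np.abs(blur(clean) - blur(noisy)) > 1e-9
    print("output pixels affected by one bad input pixel:", int(changed.sum()))  # 25 for radius=2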



    Because there is no realtime raytracing engine that's actually usable for anything other than silly little toy demos. Which is the core of what I was getting at.



    It absolutely does. A single dead pixel on a high resolution monitor is visible "noise" for humans. A single off pixel in a triangle rasterizer -- like seams or aliasing -- is visual noise.

    A single pixel that's not right in a ray traced render is also noise.

    If we had displays at 2000 DPI and the ray tracers were able to fill at least 99% of that space (with that remaining 1% divided evenly across the space) then maybe the noise would be imperceptible.

    We're decades away from that being possible with our monitors, much less our GPUs.



    Not true at all. The demos you're looking at don't show it very well, because they are a very simple scene where the camera is moving around at low speeds.

    Try using that technique in Call of Duty while you're spinning left and right to fire at enemies, and you'll very, very, very easily notice the discrepancies.



    Again, wrong. 30fps "works" in movies because the cameras move slowly and there's blurring. A lot of people hate the 30fps of movies because of the limitations it puts on the camera. Go watch a movie where the camera pans across a street. Even at 1080p, if that camera is moving at any even moderately fast velocity (say, 1/8th the speed you might turn your head while looking over a street), the whole scene is highly blurred. You won't be able to make out faces or read scenes while the camera is panning.

    30fps is totally unacceptable for video games or even any kind of decent animation. Movies make huge sacrifices to fit in the 30fps bracket. (On a side note, we have had the technology to move to higher frame rates in movies for years, but a lot of consumers complain about those because they "look wrong" -- which isn't because they actually _are_ wrong but simply because it's very different-looking if you're used to watching action flicks at 24fps, and people get stupid when things change or are different.)



    "Relative way" is more or less the same as saying "toy crappy demos" which is what I said.

    What you linked are a few shiny spheres floating over a textured plane. That's the very freaking definition of toy demos. Maybe you haven't noticed, but even old crappy games were made up of FAR more interesting shapes, and a shitload more of them too.

    What you're looking at are toy proofs of concept showing off that somebody managed to write a highly simplified and extremely incomplete ray tracer that can barely handle a couple of the simplest shapes there are to model, while chugging along at partially-realtime rather than actually-realtime speeds. Outside of the "look at what I can do" scene, what you are looking at is a broken ray tracer, not a working sample of a real engine.

    Go look up some actual ray tracing software (not realtime demos, but the actual software used today). They are already using OpenCL. Remember again that they are NOT realtime. And even with that limitation, the OpenCL versions lack a ton of the features of the CPU versions, because GPGPUs lack a ton of features of a CPU. And aren't realtime.

    Nobody here is saying that ray tracing on OpenCL isn't possible, or isn't the future. It's just a very distant future, and what you're looking at is just a toy demo idea of what the future might possibly kinda sorta could look like... maybe.

    And, more importantly, when that future comes, there will be NO ant lines because nobody is going to use this technology until it can render WHOLE FRAMES in realtime, because that's what people want. Doing it any other way is broken and will just look way worse.

    Just to finish it up, none of the papers on GPGPU ray tracing are advocating what you're saying, either. The actual people writing these are measuring things in FPS and talking about the day when they can render whole frames of complex scenes in realtime. You seem to be misinterpreting their work and claiming nonsense that even the people doing the work know is nonsense. Knock it off.
    The short version of an answer, because you write a lot:

    Yes, it's not perfect right now, but realtime raytracing engines of the future will not render the full screen and every pixel!

    All realtime engines will always give you RPS at the native Hz of the monitor.

    The difference is just that the noise level will go down because of the stronger hardware.

    Right now you can buy hardware for that: a 48-core Opteron system with 512 GB of RAM, or a CrossFireX setup with four 6870 GPUs.

    Right now it's expensive, right.
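
    The noise-versus-samples claim can at least be sketched with a toy Monte Carlo estimator (plain Python/NumPy, purely illustrative; the "true" pixel value and sample counts are made up). The error shrinks roughly as 1/sqrt(samples), which is the sense in which more hardware throughput per pixel means less visible noise.

    import numpy as np

    rng = np.random.default_rng(0)
    TRUE_VALUE = 0.5                  # the "correct" pixel value (illustrative)

    def noisy_estimate(samples):
        # Average `samples` random evaluations; more samples -> less noise.
        return rng.uniform(0.0, 1.0, samples).mean()

    for samples in (1, 4, 16, 64, 256, 1024):
        errors = [abs(noisy_estimate(samples) - TRUE_VALUE) for _ in range(200)]
        print(f"{samples:5d} samples/pixel, mean error {np.mean(errors):.4f}")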

    Comment


    • #92
      Originally posted by Qaridarium View Post
      Yes, it's not perfect right now, but realtime raytracing engines of the future will not render the full screen and every pixel!
      Short response: read the last thing I wrote. The people at NVIDIA and ATI who are actually working on the technology are not saying the same thing you are. Every single paper on the topic I've found quite explicitly says what I'm saying and the opposite of what you're saying. You're making stuff up.

      Comment


      • #93
        Originally posted by elanthis View Post
        Short response: read the last thing I wrote. The people at NVIDIA and ATI who are actually working on the technology are not saying the same thing you are. Every single paper on the topic I've found quite explicitly says what I'm saying and the opposite of what you're saying. You're making stuff up.
        Show me every single paper you read about that topic; it can't be that many papers.

        Comment


        • #94
          Originally posted by nanonyme View Post
          Why go for the kill when you can go for overkill: we could have a commit-by-commit benchmarking of r600c and r600g for commits that actually touch those drivers. This would also give away speed-related regressions pretty much immediately after they end up in the tree.
          Exactly! That's actually what I meant. I didn't mean to imply that we would constantly be benchmarking the same drivers just for the sake of benchmarking, but I see how what I wrote could be taken to mean that.

          But this should only be done if it would actually help the developers. If these commit-by-commit benchmarks would be of no value to the devs, I think daily benchmarks would be sufficient.

          I'd just love to see that graph of r600g's performance slowly but steadily approaching fglrx .
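
          Roughly what that could look like, as a sketch (Python; the Mesa path, build command, and test profile below are assumptions, not a working harness): walk every commit that touches the r600g directory and run a Phoronix Test Suite benchmark at each one.

          import subprocess

          DRIVER_PATH = "src/gallium/drivers/r600"   # assumed location of r600g in the Mesa tree
          BENCH_CMD = ["phoronix-test-suite", "batch-benchmark", "pts/nexuiz"]  # example test profile

          def run(cmd):
              return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

          # Commits that actually touch the driver, oldest first.
          commits = run(["git", "rev-list", "--reverse", "HEAD", "--", DRIVER_PATH]).split()

          for sha in commits:
              run(["git", "checkout", sha])
              run(["make", "-j4"])        # placeholder: substitute the tree's real build steps
              print("benchmarking", sha[:12], flush=True)
              subprocess.run(BENCH_CMD)   # results are collected per commit by PTS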

          Comment


          • #95
            Originally posted by runeks View Post
            Exactly! That's actually what I meant. I didn't mean to imply that we would constantly be benchmarking the same drivers just for the sake of benchmarking, but I see how what I wrote could be taken to mean that.

            But this should only be done if it would actually help the developers. If these commit-by-commit benchmarks would be of no value to the devs, I think daily benchmarks would be sufficient.

            I'd just love to see that graph of r600g's performance slowly but steadily approaching fglrx .
            I don't think there will be any significant performance improvement with the current design (though I hope, and would like, to be wrong on this). GPU drivers are not like any other beast; small improvements just don't scale up.

            Comment


            • #96
              Now this got me interested.

              You really don't expect a significant performance increase for the Gallium-based Radeon drivers?

              Or do you think that this kind of benchmarking is not good for measuring the increase?

              Comment


              • #97
                Originally posted by Qaridarium View Post
                Show me every single paper you read about that topic; it can't be that many papers.
                That's an easy game to play. I doubt you can actually even comprehend any papers given the general gibberish you write. Do you have even a single reference that shows that anyone is seriously implementing or proposing a broken ray tracer that generates noise and erroneous images as the future of graphics? I don't need to see "every single paper," you just need to show a single solitary one from a reputable source. I'd really like to see it, and I'd really like to show it to the few dozen top-tier graphics experts I work with. I bet they'd like to know that their jobs just got a lot easier because the future means they can write broken lazy graphics engines that generate incomplete and noisy images. Save them a lot of time if they can just half-ass everything from now on.

                Just to humor you though, here are just the first three meaningful articles/papers that come up in a Google search, all of which quite explicitly mention complete frames, frame rates, and the desire to match the visual quality of contemporary ray tracers (that is, no noise or garbage):

                http://gpurt.sourceforge.net/DA07_04..._GPU-1.0.5.pdf
                http://www.keldysh.ru/pages/cgraph/a...RayTracing.pdf
                http://blogs.intel.com/research/2007..._the_end_o.php

                Funnily enough, searching for realtime ray tracing without frames turns up this thread on Phoronix before it turns up a single paper. And still doesn't show any papers three pages into the results. Conspiracy? Government suppression of information? Alien abduction of ray tracing engineers? Stupid forum posters making shit up? You decide.

                If you don't have a single link or reference, I'm done -- beaten my head into a wall enough with this "conversation." I think I've at least managed to make sure nobody else chancing into this thread will inadvertently believe anything coming out of you and start parroting it, so at least your nonsense won't spread.

                Comment


                • #98
                  elanthis,

                  seriously you should know better for someone who joined Phoronix in 2007. Never. Discuss. Anything. With Qaridarium, unless you are ill and feel bored.

                  Comment


                  • #99
                    Originally posted by d2kx View Post
                    elanthis,

                    seriously you should know better for someone who joined Phoronix in 2007. Never. Discuss. Anything. With Qaridarium, unless you are ill and feel bored.
                    I know. Trust me, I know. I think I might actually be mentally ill, because I logically know better than to argue _anything_ on the Internet for any reason but yet I keep doing it. I should maybe find a 12-step program or something.

                    Comment


                    • Originally posted by pingufunkybeat View Post
                      Now this got me interested.

                      You really don't expect a significant performance increase for the Gallium-based Radeon drivers?

                      Or do you think that this kind of benchmarking is not good for measuring the increase?
                      Just that with the current design for r600g I don't think we can match 50% of fglrx speed on things like Nexuiz or newer games/engines. So you won't see any major boost until a complete rewrite (shader compiler excluded). That's my current feeling; I could be wrong.

                      Comment


                      • Is the r600g design considerably different from r300g (of course, the hardware architectures are very different)?

                        I believe that r300g passed the 50% mark.

                        Also, do you think that a complete rewrite is feasible/planned? Being stuck at <50% of the maximum forever would be a bit disappointing.

                        Comment


                        • Originally posted by glisse View Post
                          Just that with the current design for r600g I don't think we can match 50% of fglrx speed on things like Nexuiz or newer games/engines. So you won't see any major boost until a complete rewrite (shader compiler excluded). That's my current feeling; I could be wrong.
                          It's not even finished yet and it requires a rewrite already?
                          That is more than disappointing...

                          Comment


                          • Originally posted by HokTar View Post
                            It's not even finished yet and it requires a rewrite already?
                            That is more than disappointing...
                            That's not disappointing, that's expected and encouraging. There's only one kind of software that doesn't get constantly rewritten: dead software.

                            As the drivers gain more features (and hence more real-world testing), bottlenecks will appear and/or shift around. And as the developers gain more experience, existing architectural deficiencies will be discovered and dealt with.

                            (Note that a rewrite doesn't mean "throw away everything and start from scratch". It means things like, "hey, if we move state validation from part X to part Y, we can avoid re-validations under circumstances Z and W and increase batch submission performance by up to 15%.")
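
                            A toy sketch of that kind of change (plain Python, not r600g code; all names here are made up): track which pieces of state are dirty and validate only those at draw time, so unchanged state is never re-validated.

                            class StateTracker:
                                def __init__(self):
                                    self._state = {}
                                    self._dirty = set()

                                def set(self, name, value):
                                    # Only record a change if the value actually differs.
                                    if self._state.get(name) != value:
                                        self._state[name] = value
                                        self._dirty.add(name)   # defer validation until draw time

                                def draw(self):
                                    # Validate only what changed since the last draw.
                                    for name in self._dirty:
                                        self._validate(name)
                                    self._dirty.clear()
                                    # ... submit the draw call to the hardware queue here ...

                                def _validate(self, name):
                                    print("validating", name, "=", self._state[name])

                            st = StateTracker()
                            st.set("blend", "add")
                            st.set("depth_test", True)
                            st.draw()   # validates both pieces of state
                            st.draw()   # validates nothing: no redundant re-validation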

                            Comment


                            • Originally posted by BlackStar View Post
                              That's not disappointing, that's expected and encouraging. There's only one kind of software that doesn't get constantly rewritten: dead software.

                              As the drivers gain more features (and hence more real-world testing), bottlenecks will appear and/or shift around. And as the developers gain more experience, existing architectural deficiencies will be discovered and dealt with.

                              (Note that a rewrite doesn't mean "throw away everything and start from scratch". It means things like, "hey, if we move state validation from part X to part Y, we can avoid re-validations under circumstances Z and W and increase batch submission performance by up to 15%.")
                              Well, glisse wrote: "until a complete rewrite (shader compiler excluded)".
                              That pretty much sounds like we have to restart from (almost) scratch. Obviously, I hope what you said is correct but it does not seem so.

                              Comment


                              • I suspect glisse is talking about more than just the actual r600g driver... usually you find bottlenecks scattered all through the stack.

                                I don't think he is saying "just rewrite the r600g part and everything will be fine"... remember that the r600g code is only 1-2% of the 3D driver stack, and AFAIK this is the first time the open source stack has really been put to work on high performance graphics hardware.

                                Things that were "nicely tuned" in the r200 days can easily become major bottlenecks on newer graphics hardware simply because the newer GPUs are so much faster.

                                Comment
