
Benchmarking The Intel Ivy Bridge Gallium3D Driver


  • Benchmarking The Intel Ivy Bridge Gallium3D Driver

    Phoronix: Benchmarking The Intel Ivy Bridge Gallium3D Driver

    While Intel only supports their classic Mesa DRI driver as their open-source 3D driver on Linux, there is also an independently developed Gallium3D driver for the Sandy Bridge and Ivy Bridge generations of Intel graphics processors. In this article are benchmarks of the new Intel (i965) Gallium3D driver on Ivy Bridge HD 4000 hardware.

    http://www.phoronix.com/vr.php?view=18652

  • #2
    These results are impressive. They managed to reach 30 to 70% of the classic driver's performance with a tenth or less of the manpower and only a few months of development... I wonder what this could become with 20+ full-time developers working on it.

    • #3
      The question is whether this perf difference is really due to gallium or the driver backend pieces? And if it is gallium, this project can be a great opportunity to identify and fix those slow paths. I know, not so exciting for Intel, as they've got nothing to gain here.

      • #4
        Originally posted by log0 View Post
        The question is whether this perf difference is really due to gallium or the driver backend pieces? And if it is gallium, this project can be a great opportunity to identify and fix those slow paths. I know, not so exciting for Intel, as they've got nothing to gain here.
        This.
        The performance difference looks nearly the same as radeon vs. Catalyst, or nouveau vs. NVIDIA: the same ~50%.
        I really wonder if radeon and nouveau are actually slower due to Gallium?!

        • #5
          Originally posted by brosis View Post
          This.
          The performance difference looks nearly the same as radeon vs. Catalyst, or nouveau vs. NVIDIA: the same ~50%.
          I really wonder if radeon and nouveau are actually slower due to Gallium?!
          Doubtful. I believe Nouveau is usually slower mostly due to re-clocking issues, and when it comes to R300-R500, the Gallium driver can beat the final catalyst driver that was released for those cards.

          There may be a CPU-usage issue for faster cards, but there is ongoing optimization work on the Gallium layer to reduce unnecessary/redundant work.
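          The kind of redundant work being trimmed is, for example, re-emitting state that has not actually changed between draws. A toy sketch of that idea (hypothetical names, not actual Mesa/Gallium code):

```python
# Toy sketch of redundant-state elimination, the kind of unnecessary
# work a state tracker can filter out. Hypothetical, not Mesa code.

class ToyStateTracker:
    def __init__(self):
        self.bound = {}     # currently bound state per slot
        self.hw_calls = 0   # pretend "driver calls" actually issued

    def bind(self, slot, state):
        # Skip the driver call entirely if this state is already bound.
        if self.bound.get(slot) == state:
            return
        self.bound[slot] = state
        self.hw_calls += 1  # only real changes reach the driver

tracker = ToyStateTracker()
for _ in range(100):        # an app redundantly re-binds every frame
    tracker.bind("blend", "additive")
    tracker.bind("shader", "frag_0")
print(tracker.hw_calls)     # -> 2
```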

          • #6
            Originally posted by log0 View Post
            The question is whether this perf difference is really due to gallium or the driver backend pieces? And if it is gallium, this project can be a great opportunity to identify and fix those slow paths. I know, not so exciting for Intel, as they've got nothing to gain here.
            If I had to hazard a guess, I'd say it's because of the shader compiler backend. Note that it's TGSI-based and named "toy compiler":
            https://github.com/olvaffe/mesa/tree...rs/i965/shader

            Now that Gallium supports both TGSI and LLVM IR (i.e. more than one kind), it probably wouldn't be too hard to add support for GLSL IR and drop in the classic driver's compiler backend. Less than a week, I'd imagine.
            Free Software Developer .:. Mesa and Xorg
            Opinions expressed in these forum posts are my own.

            • #7
              OA 0.8.5 uses fixed function rendering. Surely the shader compiler has little effect there?

              • #8
                Originally posted by curaga View Post
                OA 0.8.5 uses fixed function rendering. Surely the shader compiler has little effect there?
                On that hardware the fixed-function path will be implemented using shaders. The question is whether these shaders are compiled at runtime or are available in some intermediate/assembly form, bypassing the compiler.

                • #9
                  Originally posted by Kayden View Post
                  If I had to hazard a guess, I'd say it's because of the shader compiler backend. Note that it's TGSI-based and named "toy compiler":
                  https://github.com/olvaffe/mesa/tree...rs/i965/shader

                  Now that Gallium supports both TGSI and LLVM IR (i.e. more than one kind), it probably wouldn't be too hard to add support for GLSL IR and drop in the classic driver's compiler backend. Less than a week, I'd imagine.
                  Yep, I'm pretty sure that's it, and plumbing GLSL IR through Gallium would allow it to reuse the existing shader compiler backend that Intel has spent so much time optimizing.

                  However, I don't think Gallium support would be that easy. Right now it doesn't support LLVM IR directly; it just allows translating TGSI into LLVM IR before it goes to the driver. You wouldn't want to go from GLSL IR -> TGSI -> GLSL IR inside Gallium, which means you'd have to add the ability to remove TGSI from the pipeline altogether, and I think that's still quite a bit of work. Maybe not, though.
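                  To make the IR-translation concern concrete, here's a toy sketch (with invented IRs, nothing to do with Mesa's real ones) of how an extra lowering step can throw away structure that a backend would rather keep:

```python
# Toy illustration of the GLSL IR -> TGSI -> backend concern:
# lowering structured control flow into a flat, linear IR discards
# the loop node, which a backend then has to rediscover.
# These "IRs" are invented for illustration only.

def lower_structured_to_flat(ir):
    """Lower structured 'loop' nodes into flat label/branch ops."""
    flat = []
    for node in ir:
        if node[0] == "loop":
            body = node[1]
            flat.append(("label", "L0"))
            flat.extend(body)
            flat.append(("branch_if", "cond", "L0"))
        else:
            flat.append(node)
    return flat

structured = [("loop", [("mul", "r0", "r0", "r1")])]
flat = lower_structured_to_flat(structured)
print(flat)
# The explicit 'loop' node is gone; only labels and branches remain,
# so a second translation back to a structured IR has work to do.
```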

                  Originally posted by curaga
                  OA 0.8.5 uses fixed function rendering. Surely the shader compiler has little effect there?
                  Gallium tries to translate everything into shaders. The fixed-function calls get translated into TGSI, and the driver is then responsible for programming the hardware correctly, which generally means generating shaders.
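                  As a rough illustration of what such a generated shader computes for the fixed-function vertex path: each vertex gets multiplied by the modelview-projection matrix, i.e. gl_Position = MVP * vertex. A plain-Python stand-in for that calculation (illustrative only, not driver code):

```python
# What a driver-generated "fixed function" vertex shader effectively
# computes: gl_Position = MVP * vertex. Plain Python, illustrative only.

def transform_vertex(mvp, v):
    """Multiply a 4x4 row-major matrix by a 4-component vertex."""
    return [sum(mvp[i][j] * v[j] for j in range(4)) for i in range(4)]

# A translation by (1, 2, 3): identity with offsets in the last column.
mvp = [[1, 0, 0, 1],
       [0, 1, 0, 2],
       [0, 0, 1, 3],
       [0, 0, 0, 1]]
print(transform_vertex(mvp, [0, 0, 0, 1]))  # -> [1, 2, 3, 1]
```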

                  Originally posted by kbios
                  These results are impressive. They managed to achieve 30 to 70% speed with 1/10th or less of the manpower and only a few months of development... I wonder what this could become with 20+ full time developers working on it.
                  This does build on top of quite a bit of work already done by the Intel devs. For example, it shares all the same kernel driver code, and I imagine quite a bit of the hardware driver code was simply copied out of the i965 driver.
                  Last edited by smitty3268; 04-16-2013, 02:39 PM.

                  • #10
                    I told you so!

                    Originally posted by log0 View Post
                    The question is whether this perf difference is really due to gallium or the driver backend pieces? And if it is gallium, this project can be a great opportunity to identify and fix those slow paths. I know, not so exciting for Intel, as they've got nothing to gain here.
                    Intel has some "egg on their face" and "foot in their mouth" to gain here:
                    1. Someone may finally prove that the Gallium architecture is performant.
                    2. Leveraging Gallium allowed one or two guys to do what took 20 Intel engineers.

                    And users have much less chance of being denied features simply because Intel chooses not to implement them.

                    Ah, the sweet satisfaction that comes from those simple words: "I told you so!".
                    It almost makes it worth the wait!

                    • #11
                      Originally posted by project_phelius View Post
                      Intel has some "egg on their face" and "foot in their mouth" to gain here:
                      1. Someone may finally prove that the Gallium architecture is performant.
                      2. Leveraging Gallium allowed one or two guys to do what took 20 Intel engineers.
                      Well, it's still much slower, so they didn't really do what 20 Intel engineers did.

                      As far as I can tell, Intel simply said "Gallium is cool, but we have been doing classic Mesa and don't see much benefit in porting the code and learning that stuff, so we'll just keep doing what we were doing before."

                      If i965g gets mainlined again and comes close in features (it only supports OpenGL 2.1 right now) and performance, maybe the Intel developers will switch over.

                      There has been much activity lately, e.g. a very big merge from "origin/master", so I might even be outdated right now.

                      • #12
                        Intel versus NVIDIA / Nouveau performance

                        While it is nice to see Intel's commitment to Linux graphics drivers, I am still not impressed by the performance they deliver.
                        I understand that integrated GPUs always have some limitations, but it is hard for me to justify the $$$ price tag of any higher-end Intel CPU.

                        Here in Germany a typical Intel Core i7 Ivy Bridge with HD 4000 graphics costs about €250 to €300 alone.

                        Keep in mind that we see around 77 FPS with Xonotic at 1024x768 with effects set to High.

                        Not to sound too disrespectful, but my old system (an AMD dual-core at 2x2.7 GHz and a used GeForce 9800 GT from eBay) was still very competitive back when I did some benchmarks with it under Trisquel:
                        http://openbenchmarking.org/result/1...BY-TRISQUEL546

                        I tested 1280x1024 as the maximum resolution since I had no other monitor available that supported anything higher. I consider 65 FPS with "High" effects and 44 FPS with "Ultra" effects very competitive, especially since an AMD dual-core with an identical rating now costs €40-50 plus a cheap board, and even faster NVIDIA cards are available at a discount below €50.

                        I think Intel should either lower CPU prices in general or at least provide integrated GPUs at the performance level of a mid-range NVIDIA card (e.g. 450 GTS or 650GTS).

                        It would be interesting to see what performance I would get now out of my system with a Phenom quad-core (4x3.2 GHz) plus a GeForce 450 GTS. I guess the numbers would be a lot higher than what I benchmarked back then.
