Mesa 7.5 RC3 Brings Build, Bug Fixes


  • #16
    Originally posted by Yfrwlf View Post
    I thought Gallium3D was supposed to greatly simplify the writing of drivers. Or is it simpler, but still not incredibly easy?
    It's not necessarily simpler or easier, but the code will be a lot cleaner (easier to debug and such), and once you've got it working, Gallium3D provides access to the whole range of state trackers (e.g. OpenGL 1-3, video decoding acceleration, OpenCL, network debugging, etc.).
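    Roughly, the point is that a hardware driver implements the Gallium pipe interface once and every state tracker drives that same interface. A minimal sketch of the idea in C -- the struct and function names here are invented stand-ins, not the real Mesa/Gallium headers:

    /* Toy version of the idea behind Gallium's screen/context split;
     * the real names and signatures in Mesa differ. */
    struct toy_context {
        /* state-setting and drawing hooks the hardware driver fills in */
        void (*set_depth_state)(struct toy_context *ctx, int depth_test);
        void (*draw_arrays)(struct toy_context *ctx, unsigned start, unsigned count);
    };

    struct toy_screen {
        struct toy_context *(*context_create)(struct toy_screen *screen);
    };

    /* A driver provides one toy_screen implementation; the OpenGL, video
     * and OpenCL state trackers all sit on top of these same hooks, so the
     * hardware-specific work only has to be written once. */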

    Comment


    • #17
      Originally posted by NeoBrain View Post
      It's not necessarily simpler or easier, but the code will be a lot cleaner (easier to debug and such), and once you've got it working, Gallium3D provides access to the whole range of state trackers (e.g. OpenGL 1-3, video decoding acceleration, OpenCL, network debugging, etc.).
      So then it *is* simpler if you want things like OGL and whatnot...if it didn't make it simpler to write and manage "fully-featured" drivers, then there'd be utterly no point in Gallium3D.

      I think I could condense the entire point of software down to that, actually. Better software has more features, makes it easier to create those and future features, and makes it easier to create content in general. (OK, and of course tack on the usual less buggy, performs better, etc., but I lumped those into "features".)

      Look forward to the day when you can easily snap in a few things to create the program you want, or better yet just think about it. It's coming, but needs to hurry the $!@# up.

      Comment


      • #18
        Originally posted by Yfrwlf View Post
        I thought Gallium3D was supposed to greatly simplify the writing of drivers. Or is it simpler, but still not incredibly easy?
        Yeah, I think that sums it up pretty well. Using Gallium rather than the classic Mesa hardware driver model seems to take the task from "mind-numbingly difficult" to "a lot of hard work", which of course is a significant improvement.

        Comment


        • #19
          Originally posted by bridgman View Post
          Yeah, I think that sums it up pretty well. Using Gallium rather than the classic Mesa hardware driver model seems to take the task from "mind-numbingly difficult" to "a lot of hard work", which of course is a significant improvement.
          OK, whew, sanity check passed. ^^

          Too bad there's not some way of making it excruciatingly easy!

          Comment


          • #20
            There is... we could go back to making GPUs the way we did 10 years ago, where the GPU registers more or less matched the OpenGL state variables
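            In that world the driver is nearly a pass-through: a GL state change becomes a register write. A made-up sketch in C, with the register name and MMIO offset invented purely for illustration:

            /* Hypothetical fixed-function GPU: one register bit per GL state flag. */
            #define REG_DEPTH_TEST_ENABLE 0x0104u    /* invented MMIO offset */

            static volatile unsigned *mmio;          /* mapped GPU register window */

            static void hw_set_depth_test(int enable)
            {
                /* glEnable(GL_DEPTH_TEST) maps straight onto one register. */
                mmio[REG_DEPTH_TEST_ENABLE / 4] = enable ? 1u : 0u;
            }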

            Comment


            • #21
              Originally posted by bridgman View Post
              There is... we could go back to making GPUs the way we did 10 years ago, where the GPU registers more or less matched the OpenGL state variables
              LOL, if only!

              Do you think getting OpenGL 3.x working well on Intel's "Larrabee" hardware (within the Gallium framework) will be harder or easier than on Ati/Nvidia GPUs?

              Larrabee's just (supposedly) a bunch of Pentium cores with a 16-wide SIMD extension per core.

              Comment


              • #22
                Just guessing here, but probably a bit harder the first time since it will need to include and upload software to perform vertex grouping and rasterizing / scan converting -- those tasks are done in hardware on other GPUs. I imagine the Intel open source devs will be able to pick up most of the code from other drivers so it shouldn't be too bad.
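                For a feel for what the "rasterizing / scan converting" part involves, here's a deliberately naive edge-function rasterizer in C -- an illustration of the task, nothing like what a real driver would ship:

                /* Naive half-space rasterizer: walk the triangle's bounding box and
                 * test each point against the three edge functions.
                 * Assumes one consistent (counter-clockwise) winding. */
                static int imin(int a, int b) { return a < b ? a : b; }
                static int imax(int a, int b) { return a > b ? a : b; }

                static int edge(int ax, int ay, int bx, int by, int px, int py)
                {
                    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
                }

                static void rasterize(int x0, int y0, int x1, int y1, int x2, int y2,
                                      void (*shade)(int x, int y))
                {
                    int minx = imin(imin(x0, x1), x2), maxx = imax(imax(x0, x1), x2);
                    int miny = imin(imin(y0, y1), y2), maxy = imax(imax(y0, y1), y2);

                    for (int y = miny; y <= maxy; y++)
                        for (int x = minx; x <= maxx; x++)
                            if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                                edge(x1, y1, x2, y2, x, y) >= 0 &&
                                edge(x2, y2, x0, y0, x, y) >= 0)
                                shade(x, y);   /* point is inside the triangle */
                }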

                The x86-ness probably won't make much difference, since the driver will just be compiling TGSI opcodes down to SIMD instructions anyways. It's "just another instruction set" for the developers. The actual x86 opcodes presumably would not be used in shader programs (other than for flow control) because the performance would drop to 1/16 or less.
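                To illustrate what "compiling TGSI opcodes down to SIMD instructions" looks like in the simplest case, here's a toy translation loop -- the opcode enum and the emit_simd_* backend hooks are invented stand-ins, not the real Gallium/TGSI definitions:

                /* Toy stand-ins for TGSI instructions handed down by a state tracker. */
                enum toy_opcode { OP_MOV, OP_ADD, OP_MUL, OP_MAD };

                struct toy_inst {
                    enum toy_opcode op;
                    int dst, src0, src1, src2;   /* vector register indices */
                };

                /* Invented backend hooks: each appends one native 16-wide SIMD
                 * instruction to the shader binary being built. */
                void emit_simd_mov(int dst, int src0);
                void emit_simd_add(int dst, int src0, int src1);
                void emit_simd_mul(int dst, int src0, int src1);
                void emit_simd_mad(int dst, int src0, int src1, int src2);

                static void translate(const struct toy_inst *prog, int count)
                {
                    for (int i = 0; i < count; i++) {
                        const struct toy_inst *in = &prog[i];
                        switch (in->op) {
                        case OP_MOV: emit_simd_mov(in->dst, in->src0);                     break;
                        case OP_ADD: emit_simd_add(in->dst, in->src0, in->src1);           break;
                        case OP_MUL: emit_simd_mul(in->dst, in->src0, in->src1);           break;
                        case OP_MAD: emit_simd_mad(in->dst, in->src0, in->src1, in->src2); break;
                        }
                    }
                }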

                I imagine the shader translation / compilation code would be fairly similar to the existing Nouveau code, since Nouveau only has to deal with SIMD hardware while the r6xx/7xx code needs to deal with SIMD + superscalar hardware. Perversely enough, what you call a vector engine on a normal CPU counts as a scalar engine on a GPU, since everyone takes SIMD for granted in GPUs.

                I think LRB has dedicated texture blocks so that part shouldn't be much different from other GPUs. I haven't really looked much at LRB though -- and probably won't until I have time to catch up on all the things I want to look at with *our* products.
                Last edited by bridgman; 06-07-2009, 04:43 PM.

                Comment


                • #23
                  Originally posted by MostAwesomeDude View Post
                  r300 is currently OVER 9000 lines of code
                  But what is its power level?

                  Comment


                  • #24
                    Originally posted by MamiyaOtaru View Post
                    But what is its power level?
                    Dunno, lemme go find my scouter.

                    Comment


                    • #25
                      it's over NINE THOUSAND!

                      Originally posted by MostAwesomeDude View Post
                      r300 is currently OVER 9000 lines of code!
                      Originally posted by MamiyaOtaru View Post
                      But what is its power level?!
                      Originally posted by MostAwesomeDude View Post
                      Dunno, lemme go find my scouter!
                      WHAT! NINE THOUSAND!? There's no way that can be right!

                      **shot**

                      Oh gag-muffins, the dreaded DBZ meme strikes again!

                      Comment
