The R300 GLSL Compiler Improvements Are Coming


  • #11
    I don't know how often 'normal' people buy hardware, but I only do so about every 5-6 years.
    And the computers I see around... from my parents... or even the stone-age machines of my girlfriend's family... does anybody care to better support the R100 family?! (And that's the newer machine.)
    (And I have issues with Compiz on it; it works, but sometimes the background doesn't get refreshed. Just imagine Windows trying compositing on such a machine.)

    One of the reasons that make Linux so cool is that you can throw any antique hardware at it and it will (somehow) work.

    So way to go.

    • #12
      The R300 driver is more complete in many ways. Adding new features there first makes some sense. They can be ported to the R600/R800 drivers later.

      A few Mesa/X developers have an entirely new GLSL stack up on git.freedesktop.org. I don't know where it's going, but it's a particularly high-quality compiler using a lot of modern compiler design elements that the old Mesa compiler lacks. I'm expecting it'll be the basis for a new GLSL compiler in the coming months. The design is really great, lacking perhaps only in that it uses standard memory allocator calls (new/delete, as it's written in C++), which is really not a good thing for a core driver component that may be called many times over the lifetime of an application (possibly one or more times per frame for some apps); the memory fragmentation and allocator churn kills performance. Sadly, few people outside of game developers and some embedded developers seem to have much head for that sort of thing these days. The last big popular C/C++ project I saw make a serious effort towards reducing fragmentation from frequent small-object allocation was Samba. :/
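
      (For illustration only, not anything from the Mesa tree: a minimal sketch of the kind of arena allocator compilers typically use to sidestep new/delete churn, with one malloc per large block instead of one per IR node and a single bulk release. Arena and IRNode are hypothetical names.)

      ```cpp
      // Sketch of an arena (bump) allocator for small, short-lived
      // compiler objects. Assumes objects are trivially destructible
      // and smaller than the block size.
      #include <cstddef>
      #include <cstdlib>
      #include <new>
      #include <vector>

      class Arena {
      public:
          explicit Arena(std::size_t block_size = 64 * 1024)
              : block_size_(block_size), offset_(0) {}

          ~Arena() {
              for (char* b : blocks_) std::free(b);  // one bulk release
          }

          // Hand out aligned memory from the current block; grab a fresh
          // block when the current one runs out. Individual objects are
          // never freed, so there is no per-object free-list churn.
          void* allocate(std::size_t size,
                         std::size_t align = alignof(std::max_align_t)) {
              std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
              if (blocks_.empty() || aligned + size > block_size_) {
                  blocks_.push_back(static_cast<char*>(std::malloc(block_size_)));
                  aligned = 0;
              }
              offset_ = aligned + size;
              return blocks_.back() + aligned;
          }

      private:
          std::size_t block_size_;
          std::size_t offset_;
          std::vector<char*> blocks_;
      };

      // Hypothetical IR node drawing its storage from the arena.
      struct IRNode {
          int opcode;
          IRNode* operands[4];

          static void* operator new(std::size_t size, Arena& arena) {
              return arena.allocate(size);
          }
          static void operator delete(void*, Arena&) {}  // arena owns the memory
      };

      int main() {
          Arena arena;
          IRNode* n = new (arena) IRNode{};  // no heap call per node after warm-up
          (void)n;
      }  // ~Arena frees every node at once
      ```

      The win is twofold: allocation becomes a pointer bump instead of a heap search, and teardown is a handful of free() calls instead of one per node.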

      • #13
        I would still be using my R500 had it not broken down :P

        • #14
          For example, I have an X2300 (= RV530) in my laptop.

          • #15
            Oops, X2300 = RV515.

            • #16
              Originally posted by elanthis View Post
              [...] The design is really great, lacking perhaps only in that it uses standard memory allocator calls (new/delete, as it's written in C++), which is really not a good thing for a core driver component that may be called many times over the lifetime of an application (possibly one or more times per frame for some apps); the memory fragmentation and allocator churn kills performance. [...]
              I know C++ supports custom allocators, so it could be that the best course of action is writing correct C++ code first and then experimenting with different allocators down the line, if that's found to really be needed. As I understand it, shaders only need to be compiled once during the lifetime of a program, so I'm not sure this kind of optimization is important anyway.
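
              (A minimal sketch of that approach, with no claim about the actual Mesa code: a C++11-minimal standard-conforming allocator whose two method bodies are the single place to swap strategies later, while the container code never changes. CountingAllocator is a hypothetical name, and the call counter is just a crude churn metric.)

              ```cpp
              // Sketch: a standard-conforming allocator as the single swap
              // point. Today it forwards to malloc and counts calls; later
              // it could hand out pooled or arena memory with no change to
              // the code that uses the container.
              #include <cstddef>
              #include <cstdio>
              #include <cstdlib>
              #include <new>
              #include <vector>

              template <typename T>
              struct CountingAllocator {
                  typedef T value_type;

                  CountingAllocator() {}
                  template <typename U>
                  CountingAllocator(const CountingAllocator<U>&) {}

                  T* allocate(std::size_t n) {
                      ++calls;  // crude churn metric
                      void* p = std::malloc(n * sizeof(T));
                      if (!p) throw std::bad_alloc();
                      return static_cast<T*>(p);
                  }
                  void deallocate(T* p, std::size_t) { std::free(p); }

                  static std::size_t calls;
              };

              template <typename T>
              std::size_t CountingAllocator<T>::calls = 0;

              // Stateless, so all instances are interchangeable.
              template <typename T, typename U>
              bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) { return true; }
              template <typename T, typename U>
              bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) { return false; }

              int main() {
                  // Same container code regardless of allocation strategy.
                  std::vector<int, CountingAllocator<int>> v;
                  for (int i = 0; i < 1000; ++i) v.push_back(i);
                  std::printf("allocator calls: %zu\n", CountingAllocator<int>::calls);
              }
              ```

              If the counter shows real churn in hot paths, that's the signal to swap in something smarter; if not, the plain version stays.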

              • #17
                Originally posted by Ian_M View Post
                I know C++ supports custom allocators, so it could be that the best course of action is writing correct C++ code first and then experimenting with different allocators down the line, if that's found to really be needed.
                It can be a tremendous pain in the butt to retrofit allocators into an app, but that is certainly a possibility.

                As I understand it, shaders only need to be compiled once during the lifetime of a program, so I'm not sure this kind of optimization is important anyway.
                Simple apps may compile a handful of shaders once, but more complex apps (games) will be loading and unloading many, many shaders on a surprisingly frequent basis. I have in fact seen apps (games and not) that rebuild a number of shaders nearly every frame, based on the needs of complicated effects engines that go beyond what you can code into a single shader.
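
                (For what it's worth, the usual mitigation on the app side is a shader cache, sketched below under assumptions: the cache class is hypothetical, the GL calls are the standard GL 2.0 entry points, and a real program would also need a context and an extension loader, which are out of scope here.)

                ```cpp
                // Sketch: cache compiled shader objects keyed by their
                // generated source, so an effects engine that regenerates
                // shaders every frame only pays the compile cost the first
                // time each variant appears.
                #include <map>
                #include <string>
                #include <GL/gl.h>  // in practice plus <GL/glext.h> or a GL 2.0 loader

                class ShaderCache {
                public:
                    // Returns a compiled fragment shader for `source`, or 0 on
                    // a compile error; invokes the compiler only on a cache miss.
                    GLuint fragment_shader(const std::string& source) {
                        std::map<std::string, GLuint>::iterator it = cache_.find(source);
                        if (it != cache_.end())
                            return it->second;  // hit: no compile this frame

                        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
                        const char* src = source.c_str();
                        glShaderSource(shader, 1, &src, 0);
                        glCompileShader(shader);

                        GLint ok = GL_FALSE;
                        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
                        if (ok != GL_TRUE) {
                            glDeleteShader(shader);
                            return 0;  // caller logs / falls back; kept minimal here
                        }
                        cache_.insert(std::make_pair(source, shader));
                        return shader;
                    }

                private:
                    std::map<std::string, GLuint> cache_;
                };
                ```

                The same idea could extend one level up, caching linked program objects keyed by (vertex shader, fragment shader) pairs.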

                • #18
                  Originally posted by droidhacker View Post
                  I think it would be better to focus on the newer parts... how many people still actually have those parts and are worried about getting peak performance out of them?
                  All of the notebook users who have those parts and aren't able to replace them...

                  In the first place, on IGPs every bit of peak performance matters, as those don't have much performance capability to begin with...

                  • #19
                    Originally posted by droidhacker View Post
                    I think it would be better to focus on the newer parts... how many people still actually have those parts and are worried about getting peak performance out of them?
                    How many people still have those parts? A TON of people with laptops that use discrete ATI graphics.

                    You seem to forget that a huge number of desktop Linux users are on laptops, not on desktop PCs with easily upgradeable cards. It may be trivial to upgrade the video on a desktop, but replacing a $1200 laptop isn't something most users do every year, or even every other year.

                    My 4-year-old ThinkPad is a Core 2 Duo with an integrated X1400, and there is absolutely nothing about it other than the GPU itself that would be unusual on a brand-new laptop sold today. Most of the "value" series laptops from the major manufacturers still use Core or Core 2 series CPUs, even now, so mine could hardly be considered outdated.

                    • #20
                      My RV100, RV280 and RV350 all agree with you :-)!

                      Originally posted by Melcar View Post
                      I am. I can use all the performance I can get.
                      However, the biggest problem is that the RV350 card is AGP, and Gallium is inexplicably slow over AGP. Has the cause of this slowness been tracked down yet? Would (say) a 2.6.35.x kernel resolve it?
