R600 Open-Source Driver With GLSL, OpenGL 2.0

  • #31
    With OSS drivers it won't make them hot right now (with my 3870 I get ~6x fewer frames). When that changes it may cause some problems, but proper GPU cooling should suffice.
    AFAIK this problem was caused by third-party designs whose cooling was very modest, but I'm not sure. Better not to try this with RV770 GPUs.



    • #32
      Originally posted by Wielkie G View Post
      With OSS drivers it won't make them hot right now (with my 3870 I get ~6x fewer frames). When that changes it may cause some problems, but proper GPU cooling should suffice.
      AFAIK this problem was caused by third-party designs whose cooling was very modest, but I'm not sure. Better not to try this with RV770 GPUs.
      It's not limited to the 3rd-party HSFs, but the death toll does rise when a 3rd-party HSF offers an even worse design on the VRMs.



      • #33
        Originally posted by Louise View Post
        So VMware is taking a gamble that it will get done when they are ready
        All VMWare *needs* from a business perspective is the "svga" driver which supports the virtual SVGA adapter that VMWare exposes to each guest OS. They are using the Gallium3D state trackers plus the svga driver to provide a complete set of drivers on guest OSes, running on that virtual SVGA adapter.

        On the host OS, commands passed down from the virtual SVGA hardware are translated into OpenGL and executed using whatever OpenGL driver is present on the host. They don't actually need a Gallium3D driver for the host GPU - even a proprietary binary driver will work.

        It seems like a good plan - gives VMWare a more maintainable driver suite for the SVGA adapter than they could get through other means, provides lots of goodness to the open source community, and still lets VMWare get "first dibs" on the new code by having the actual hardware driver tied to their emulated SVGA chip.

        This seems like some of the best roadmap planning I have seen in a long time -- I was both surprised and impressed by the end of the VMWare session.

        Originally posted by Wielkie G View Post
        I hope that the work done on r600 GLSL will be used in the future r600g Gallium driver. As far as I know, current Gallium drivers use a lot of features (files) from the classic mesa drivers. Am I right? If so, does it mean that with correct r300g and r600 drivers it will be far simpler to create the r600g driver?
        Most of the *work* done for r600 is likely to get used in r600g, although outside of the shader compiler code generator it's more likely that "chunks of code" will be re-used rather than entire files. Since the GLSL work *is* mostly in the shader compiler, however, the re-use should be really high for the GLSL work itself.

        It may turn out to be easier to just look at how r600 does something and write "similar but all new" code for r600g - it depends a bit on how each developer likes to work. Either way, having a working r600 driver and a working r300g driver should make work on r600g a lot more satisfying, in the sense that the "visible results per unit of work" should be a lot higher than normal.

        One important thing to remember is that maybe 90% of the 3D code (nearly all of the mesa code *above* the HW driver layer) will be the same in both cases. Rather than calling a "classic" HW driver, the upper level code calls a set of routines which translate the "classic mesa" calls into Gallium3D calls then act like a state tracker to the Gallium3D driver - including translating "classic mesa" IL into TGSI. It's really "mesa using the old HW drivers" vs "mesa using the new HW drivers".
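        The layering bridgman describes can be sketched roughly as follows. This is an illustrative toy, not real Mesa code: all class and method names here are made up, and the real interfaces are in C and far richer. The point is just the shape of the design: upper-level mesa code keeps making "classic" driver calls, and a shim translates them (including IL -> TGSI) into Gallium3D calls.

```python
# Illustrative sketch of the mesa-on-Gallium layering described above.
# All names are hypothetical; real Mesa/Gallium interfaces differ.

class GalliumDriver:
    """Stands in for a Gallium3D hardware driver (e.g. r600g)."""
    def __init__(self):
        self.log = []

    def bind_shader(self, tgsi):
        self.log.append(("bind_shader", tgsi))

    def draw(self, prim, count):
        self.log.append(("draw", prim, count))


class ClassicToGalliumShim:
    """Plays the state-tracker role: accepts 'classic mesa' driver
    calls and translates them, including IL -> TGSI, into calls on
    a Gallium3D driver."""
    def __init__(self, gallium):
        self.gallium = gallium

    def translate_il_to_tgsi(self, classic_il):
        # Trivial stand-in for the real IL -> TGSI translation.
        return "TGSI:" + classic_il

    def use_program(self, classic_il):
        self.gallium.bind_shader(self.translate_il_to_tgsi(classic_il))

    def draw_arrays(self, prim, count):
        self.gallium.draw(prim, count)


# Upper-level mesa code is identical either way; only the driver
# interface it talks to changes underneath it.
drv = GalliumDriver()
shim = ClassicToGalliumShim(drv)
shim.use_program("MOV o0, i0")
shim.draw_arrays("TRIANGLES", 3)
print(drv.log)
```

        Same "classic" calls on top, Gallium calls underneath: that is the "mesa using the old HW drivers" vs "mesa using the new HW drivers" distinction in miniature.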
        Last edited by bridgman; 12-20-2009, 10:54 AM.



        • #34
          Originally posted by bridgman View Post
          All VMWare *needs* from a business perspective is the "svga" driver which supports the virtual SVGA adapter that VMWare exposes to each guest OS. They are using the Gallium3D state trackers plus the svga driver to provide a complete set of drivers on guest OSes, running on that virtual SVGA adapter.

          On the host OS, commands passed down from the virtual SVGA hardware are translated into OpenGL and executed using whatever OpenGL driver is present on the host. They don't actually need a Gallium3D driver for the host GPU - even a proprietary binary driver will work.

          It seems like a good plan - gives VMWare a more maintainable driver suite for the SVGA adapter than they could get through other means, provides lots of goodness to the open source community, and still lets VMWare get "first dibs" on the new code by having the actual hardware driver tied to their emulated SVGA chip.

          This seems like some of the best roadmap planning I have seen in a long time -- I was both surprised and impressed by the end of the VMWare session.
          Okay, that is impressive!

          So according to the Gallium status matrix
          http://www.x.org/wiki/GalliumStatus

          VMware is very close to reaching their goal, assuming they use the closed-source driver on the host.

          It doesn't show OpenCL or D3D, but in the workshop videos they said that OpenCL was almost good to go.

          It is almost too exciting that they have a working OpenCL state tracker just sitting there.



          • #35
            Originally posted by Louise View Post
            So according to the Gallium status matrix
            http://www.x.org/wiki/GalliumStatus

            VMware is very close to reaching their goal, assuming they use the closed-source driver on the host.
            Yep. Everything from that point on is optimization and performance tuning.



            • #36
              Originally posted by bridgman View Post
              Yep. Everything from that point on is optimization and performance tuning.
              John Carmack has said that the solution to the parallelization problem might be writing your entire game in a scripting language and then just letting the system handle all the multi-core stuff.

              Of course no AAA title is going to ship as a bunch of Python scripts, but let's say that a game studio developed their own scripting language for this purpose.

              Could OpenCL be used in this respect?

              And how much performance do you think you would burn off going the scripting route to utilize the sea of processors we will soon get with 12-core and 16-core chips?
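              One way to picture the idea in the post above: keep the game logic in the scripting layer and hand the purely data-parallel work to all the cores at once. A toy sketch, with Python's `multiprocessing` standing in for what an OpenCL dispatch would do on a GPU (all names here are made up for illustration):

```python
# Toy sketch: game logic stays in the scripting layer, while a
# data-parallel per-entity update is farmed out to every core.
# multiprocessing stands in for an OpenCL kernel dispatch.
from multiprocessing import Pool

def update_entity(state):
    # Per-entity update with no shared state -- exactly the kind
    # of "kernel" that maps cleanly onto OpenCL work-items.
    pos, vel = state
    return (pos + vel, vel)

def tick(entities, pool):
    # One frame of the simulation: dispatch the whole entity list
    # to the worker pool, much like enqueueing a kernel over a
    # buffer of entities.
    return pool.map(update_entity, entities)

if __name__ == "__main__":
    entities = [(float(i), 0.5) for i in range(10000)]
    with Pool() as pool:
        entities = tick(entities, pool)
    print(entities[0])  # (0.5, 0.5)
```

              The performance cost of the scripting approach is exactly the open question in the post: interpreter overhead per entity versus near-free scaling across however many cores (or GPU lanes) you have.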



              • #37
                Let me remind you that r600-r700 OpenGL 2.0 and GLSL support in the radeon driver was pushed by an AMD employee, Richard Li. With the release of Ubuntu 10.04 or F13 we will all see r600-r700 users playing ET: Quake Wars on their computers with good performance while r500 performance remains GARBAGE.
                This makes it clear that AMD is urging us users to upgrade to newer GPUs by supporting whatever features they see fit. Well, I will definitely upgrade to a new GPU as you guys suggest, but an NVIDIA ONE.



                • #38
                  Originally posted by barbarbaron View Post
                  Let me remind you that r600-r700 OpenGL 2.0 and GLSL support in the radeon driver was pushed by an AMD employee, Richard Li. With the release of Ubuntu 10.04 or F13 we will all see r600-r700 users playing ET: Quake Wars on their computers with good performance while r500 performance remains GARBAGE.
                  This makes it clear that AMD is urging us users to upgrade to newer GPUs by supporting whatever features they see fit. Well, I will definitely upgrade to a new GPU as you guys suggest, but an NVIDIA ONE.
                  I just noticed your post count is equal to your I.Q.



                  • #39
                    I just noticed your post count is equal to your I.Q.
                    Keep your post-count fascism to yourself, forum thug.



                    • #40
                      Originally posted by barbarbaron View Post
                      Keep your post-count fascism to yourself, forum thug.
                      Let's make a deal.

                      You do just a minimum of research before posting flame messages, to see if your postulates are true, and I won't accuse you of being a minor.

                      Btw, didn't I see you on the nVidia forum complaining that nVidia doesn't give a damn about open source 3D drivers?
