R600 Open-Source Driver With GLSL, OpenGL 2.0


  • #31
    With the OSS drivers it won't make them hot right now (with my 3870 I get roughly 6x fewer frames). When that changes it may cause some problems, but the right GPU cooling should suffice.
    AFAIK this problem was caused by third-party designs whose cooling was very modest, but I'm not sure. Better not to try this with RV770 GPUs.

    Comment


    • #32
      Originally posted by Wielkie G View Post
      With the OSS drivers it won't make them hot right now (with my 3870 I get roughly 6x fewer frames). When that changes it may cause some problems, but the right GPU cooling should suffice.
      AFAIK this problem was caused by third-party designs whose cooling was very modest, but I'm not sure. Better not to try this with RV770 GPUs.
      It's not limited to the 3rd-party HSFs, but the death toll does rise when a 3rd-party HSF offers an even worse design on the VRMs.

      Comment


      • #33
        Originally posted by Louise View Post
        So VMware is taking a gamble that it will get done when they are ready
        All VMWare *needs* from a business perspective is the "svga" driver which supports the virtual SVGA adapter that VMWare exposes to each guest OS. They are using the Gallium3D state trackers plus the svga driver to provide a complete set of drivers on guest OSes, running on that virtual SVGA adapter.

        On the host OS, commands passed down from the virtual SVGA hardware are translated into OpenGL and executed using whatever OpenGL driver is present on the host. They don't actually need a Gallium3D driver for the host GPU - even a proprietary binary driver will work.
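        A rough C sketch of that host-side idea, purely as an illustration -- the real SVGA command stream is far more involved, and every name below (svga_cmd, replay_guest_commands) is invented for this example:

        /* Illustrative sketch only: a host-side loop that replays simplified
         * "guest SVGA" commands through whatever OpenGL driver the host has
         * (open-source or proprietary). Not VMware's actual protocol. */
        #include <GL/gl.h>

        typedef enum { SVGA_CMD_CLEAR, SVGA_CMD_DRAW } svga_cmd_type;

        typedef struct {
            svga_cmd_type type;
            GLenum        prim;   /* e.g. GL_TRIANGLES           */
            GLint         first;  /* first vertex in a bound VBO */
            GLsizei       count;  /* number of vertices          */
        } svga_cmd;

        static void replay_guest_commands(const svga_cmd *cmds, int n)
        {
            for (int i = 0; i < n; i++) {
                switch (cmds[i].type) {
                case SVGA_CMD_CLEAR:
                    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                    break;
                case SVGA_CMD_DRAW:
                    glDrawArrays(cmds[i].prim, cmds[i].first, cmds[i].count);
                    break;
                }
            }
        }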

        It seems like a good plan - gives VMWare a more maintainable driver suite for the SVGA adapter than they could get through other means, provides lots of goodness to the open source community, and still lets VMWare get "first dibs" on the new code by having the actual hardware driver tied to their emulated SVGA chip.

        This seems like some of the best roadmap planning I have seen in a long time -- I was both surprised and impressed by the end of the VMWare session.

        Originally posted by Wielkie G View Post
        I hope that the work done with r600 GLSL will be used in the future r600g Gallium driver. As far as I know, current Gallium drivers use a lot of features (files) from classic mesa drivers. Am I right? If so, does it mean that with correct r300g and r600 drivers it will be far simpler to create the r600g driver?
        Most of the *work* done for r600 is likely to get used in r600g, although outside of the shader compiler code generator it's more likely that "chunks of code" will be re-used rather than entire files. Since the GLSL work *is* mostly in the shader compiler, however, the re-use should be really high for the GLSL work itself.

        It may turn out to be easier to just look at how r600 does something and write "similar but all new" code for r600g - it depends a bit on how each developer likes to work. Either way, having a working r600 driver and a working r300g driver should make work on r600g a lot more satisfying, in the sense that the "visible results per unit of work" should be a lot higher than normal.

        One important thing to remember is that maybe 90% of the 3D code (nearly all of the mesa code *above* the HW driver layer) will be the same in both cases. Rather than calling a "classic" HW driver, the upper level code calls a set of routines which translate the "classic mesa" calls into Gallium3D calls then act like a state tracker to the Gallium3D driver - including translating "classic mesa" IL into TGSI. It's really "mesa using the old HW drivers" vs "mesa using the new HW drivers".
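        As a purely hypothetical sketch of that shim (not actual Mesa code; the struct and function names are invented just to show the shape of it):

        /* Hypothetical shim, not real Mesa internals: the upper layers call one
         * entry point, and the shim either hands the "classic mesa" IR straight
         * to a classic HW driver or lowers it to TGSI for a Gallium3D driver. */
        struct shader_ir;    /* placeholder for the classic mesa IL */
        struct tgsi_tokens;  /* placeholder for the TGSI form of it */

        struct hw_backend {
            /* classic path: the HW driver consumes the mesa IR directly */
            void (*compile_classic)(const struct shader_ir *ir);
            /* gallium path: lower to TGSI first, then hand it to the driver */
            struct tgsi_tokens *(*to_tgsi)(const struct shader_ir *ir);
            void (*compile_gallium)(const struct tgsi_tokens *tokens);
        };

        /* Upper-level code only sees this one call; whether it ends up in a
         * classic driver (r600) or a Gallium driver (r600g) is hidden here. */
        static void compile_shader(const struct hw_backend *be,
                                   const struct shader_ir *ir)
        {
            if (be->compile_gallium) {
                struct tgsi_tokens *tokens = be->to_tgsi(ir); /* IL -> TGSI */
                be->compile_gallium(tokens);
            } else {
                be->compile_classic(ir);
            }
        }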
        Last edited by bridgman; 12-20-2009, 10:54 AM.

        Comment


        • #34
          Originally posted by bridgman View Post
          All VMWare *needs* from a business perspective is the "svga" driver which supports the virtual SVGA adapter that VMWare exposes to each guest OS. They are using the Gallium3D state trackers plus the svga driver to provide a complete set of drivers on guest OSes, running on that virtual SVGA adapter.

          On the host OS, commands passed down from the virtual SVGA hardware are translated into OpenGL and executed using whatever OpenGL driver is present on the host. They don't actually need a Gallium3D driver for the host GPU - even a proprietary binary driver will work.

          It seems like a good plan - gives VMWare a more maintainable driver suite for the SVGA adapter than they could get through other means, provides lots of goodness to the open source community, and still lets VMWare get "first dibs" on the new code by having the actual hardware driver tied to their emulated SVGA chip.

          This seems like some of the best roadmap planning I have seen in a long time -- I was both surprised and impressed by the end of the VMWare session.
          Okay, that is impressive!

          So according to the Gallium status matrix
          http://www.x.org/wiki/GalliumStatus

          VMware is very close to having reached their goal, assuming they use the closed-source driver on the host.

          The matrix doesn't show OpenCL and D3D, but in the workshop videos they said that OpenCL was almost good to go.

          It is almost too exciting that they have a working OpenCL state tracker just sitting there.

          Comment


          • #35
            Originally posted by Louise View Post
            So according to the Gallium status matrix
            http://www.x.org/wiki/GalliumStatus

            VMware is very close to having reached their goal, assuming they use the closed-source driver on the host.
            Yep. Everything from that point on is optimization and performance tuning.

            Comment


            • #36
              Originally posted by bridgman View Post
              Yep. Everything from that point on is optimization and performance tuning.
              John Carmack has said that the solution to the parallelization problem might be writing your entire game in a scripting language and then just letting the system handle all the multi-core stuff.

              Of course no AAA title is going out as a bunch of Python scripts, but let's say that a game studio developed their own scripting language for this purpose.

              Could OpenCL be used in this respect?

              And how much performance do you think you would burn by going with the scripting approach to utilize the sea of processors we will soon get, with 12 and 16 cores?

              Comment


              • #37
                Let me remind you that r600-r700 OpenGL 2.0 and GLSL support in the radeon driver has been pushed by an AMD employee, Richard Li. With the release of Ubuntu 10.04 or F13 we will all see r600-r700 users playing ET: Quake Wars on their computers with good performance, while r500 performance will remain GARBAGE.
                This makes it clear that AMD urges us users to upgrade to newer GPUs by supporting whatever features they see fit. Well, I will definitely upgrade to a new GPU as you guys suggest, but an NVIDIA ONE.

                Comment


                • #38
                  Originally posted by barbarbaron View Post
                  Let me remind you that r600-r700 OpenGL 2.0 and GLSL support in the radeon driver has been pushed by an AMD employee, Richard Li. With the release of Ubuntu 10.04 or F13 we will all see r600-r700 users playing ET: Quake Wars on their computers with good performance, while r500 performance will remain GARBAGE.
                  This makes it clear that AMD urges us users to upgrade to newer GPUs by supporting whatever features they see fit. Well, I will definitely upgrade to a new GPU as you guys suggest, but an NVIDIA ONE.
                  I just noticed your number of posts is equal to your I.Q.

                  Comment


                  • #39
                    I just noticed your number of posts is equal to your I.Q.
                    Keep your post-count fascism to yourself, forum thug.

                    Comment


                    • #40
                      Originally posted by barbarbaron View Post
                      Keep your post-count fascism to yourself, forum thug.
                      Let's make a deal.

                      You do just a minimum of research before posting flame messages, to see if your postulates are true, and I won't accuse you of being a minor.

                      Btw, didn't I see you on the nVidia forum bitching that nVidia doesn't give a damn about open-source 3D drivers?

                      Comment


                      • #41
                        Originally posted by Louise View Post
                        John Carmack has said that the solution to the parallelization problem might be writing your entire game in a scripting language and then just letting the system handle all the multi-core stuff.
                        How would that work? AFAIK, the big problem with multi-core is data dependencies among various threads killing performance (i.e. Thread A is waiting on the result of Thread B which is waiting on the result of Thread C...), so unless "the system" is going to redesign your algorithms for you I'm not sure how it's supposed to be a solution.
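                        A toy C sketch of that dependency chain, just to make the point concrete (nothing to do with any real engine): three stages each get their own thread, but because each stage needs the previous stage's result, they still run strictly one after another.

                        #include <pthread.h>
                        #include <stdio.h>

                        /* Each stage consumes the previous stage's result. */
                        static void *stage_c(void *in) { return (void *)((long)in + 1); }
                        static void *stage_b(void *in) { return (void *)((long)in * 2); }
                        static void *stage_a(void *in) { return (void *)((long)in - 3); }

                        int main(void)
                        {
                            pthread_t t;
                            void *result = (void *)10L;  /* initial input */

                            /* "Parallel" in name only: each create is followed
                             * immediately by a join, because the next stage
                             * cannot start without this stage's result. */
                            pthread_create(&t, NULL, stage_c, result);
                            pthread_join(t, &result);
                            pthread_create(&t, NULL, stage_b, result);
                            pthread_join(t, &result);
                            pthread_create(&t, NULL, stage_a, result);
                            pthread_join(t, &result);

                            printf("final result: %ld\n", (long)result);
                            return 0;
                        }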

                        Originally posted by barbarbaron
                        Let me remind you that r600-r700 OpenGL 2.0 and GLSL support in the radeon driver has been pushed by an AMD employee, Richard Li. With the release of Ubuntu 10.04 or F13 we will all see r600-r700 users playing ET: Quake Wars on their computers with good performance, while r500 performance will remain GARBAGE.
                        This makes it clear that AMD urges us users to upgrade to newer GPUs by supporting whatever features they see fit. Well, I will definitely upgrade to a new GPU as you guys suggest, but an NVIDIA ONE.
                        This was already addressed (more than once) in the first page of the thread. The reason you don't see a headline like this for R300-R500 is that the corresponding change on those chipsets already happened, but in the Gallium3D driver. In other words, the actual situation is almost the opposite of what you think it is: with the transition to Gallium3D, the older chipsets will be ahead of the newer ones in the open-source stack (even as an R600 user who prefers open-source drivers, I have to admit that makes sense for several reasons).
                        Last edited by Ex-Cyber; 12-22-2009, 07:07 AM.

                        Comment


                        • #42
                          Originally posted by Ex-Cyber View Post
                          How would that work? AFAIK, the big problem with multi-core is data dependencies among various threads killing performance (i.e. Thread A is waiting on the result of Thread B which is waiting on the result of Thread C...), so unless "the system" is going to redesign your algorithms for you I'm not sure how it's supposed to be a solution.
                          In id Software's upcoming game, Rage, they have hand-scheduled the expensive tasks off to their own threads and let the game code stay in a single-threaded form.

                          They did the same with Enemy Territory.

                          In both cases, the game code is the bottleneck.

                          But as Carmack said, it was a conscious decision: they didn't want the game programmers to worry about parallelization. The game code programmers should only care about writing the code that makes the game fun.

                          The system programmers who do the hardcore stuff like rendering, obstacle avoidance, collision detection and so on should handle the parallelization problems.
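                          A very rough sketch of that split, with every function name invented for illustration: the engine ("system") side spawns worker threads for the heavy subsystems each frame, while the game logic stays plain single-threaded code.

                          #include <pthread.h>
                          #include <stdbool.h>

                          static void render_frame(void)      { /* expensive: rendering        */ }
                          static void run_collision(void)     { /* expensive: collision checks */ }
                          static void update_game_logic(void) { /* simple, single-threaded     */ }

                          static void *render_worker(void *arg)    { (void)arg; render_frame();  return NULL; }
                          static void *collision_worker(void *arg) { (void)arg; run_collision(); return NULL; }

                          int main(void)
                          {
                              bool running = true;
                              while (running) {
                                  pthread_t render, collision;

                                  /* engine ("system") code: parallelized by the engine programmers */
                                  pthread_create(&render, NULL, render_worker, NULL);
                                  pthread_create(&collision, NULL, collision_worker, NULL);

                                  /* game code: plain sequential logic, no threading to worry about */
                                  update_game_logic();

                                  pthread_join(render, NULL);
                                  pthread_join(collision, NULL);

                                  running = false;  /* one "frame" only, so this sketch terminates */
                              }
                              return 0;
                          }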

                          So I think Carmack was talking about the game code when he talked about using a scripting language.

                          Doom 4 will be built with the same model, so it is something he stands by: the game code should be easy to write.

                          Comment


                          • #43
                            @Ex-Cyber:
                            with the transition to Gallium3D, the older chipsets will be ahead of the newer ones in the open-source stack
                            @Dragonx:
                            Sooo.. just to sum it up: R600+ users can choose between radeon and fglrx if they want (usable) 3D accel (--> GLSL), and <=R500 users have to use an old fglrx? Is this correct? *THUMBS UP* to ATi.. definitely my last ati/amd board..
                            +1

                            Who will be ahead of whom? Aaah yes, in the open-source stack... We r100-r500 users are VERY familiar with it...
                            Last edited by barbarbaron; 12-22-2009, 07:48 AM.

                            Comment


                            • #44
                              Originally posted by barbarbaron View Post
                              Aaah yes in the open source stack
                              Yes, the open-source stack. You know, the one that's the subject of the article and the one in which you were complaining about R600+ being favored over R300-R500. You should take a breather; those goalposts look heavy.

                              Originally posted by Louise
                              So I think Carmack was talking about the game code when he talked about using a scripting language.
                              Ah, that makes more sense. I suppose in some sense the engine is "the system" from a gameplay development standpoint.

                              Comment


                              • #45
                                I used to have a lot of respect for this site, but now it's nothing more than a spin room for those lackeys from ATI.
                                Last edited by rob2687; 12-22-2009, 11:20 AM.

                                Comment
