AMD Radeon HD 7000 Series Gallium3D Merged


  • #1

    Phoronix: AMD Radeon HD 7000 Series Gallium3D Merged

    Even if you're not an NVIDIA graphics customer and not interested in the state of the Nouveau driver and its big advancements today, there is still some Mesa Gallium3D news of importance to share. AMD has merged their Radeon HD 7000 "Southern Islands" Gallium3D driver to mainline...

    http://www.phoronix.com/vr.php?view=MTA4NzM

  • #2
    Good to read. The first progress has been made, and yes, with brand-new hardware it won't just pop up all shiny and ready for prime time. I'd expect maybe a third of the time the r600 stack needed before it really starts to shine. Until then, fglrx will hopefully do.

    • #3
      Amazing that an officially sponsored driver lags behind a reverse engineering project.

      • #4
        Originally posted by scottishduck View Post
        Amazing that an officially sponsored driver lags behind a reverse engineering project.
        If you look at the latest benchmarks Michael did for NVIDIA chips, you will see it doesn't lag that much. On the other hand, GeForces are easier to program.

        • #5
          Originally posted by scottishduck View Post
          Amazing that an officially sponsored driver lags behind a reverse engineering project.
          Reverse engineering is faster in some fields because it doesn't need lawyers in the process, but for AMD's HD 7000 vs. NVIDIA's 600 there is a simpler explanation: the first is a completely new architecture, while the second is very similar to the NVIDIA 500 as far as I know. So the AMD developers had to rewrite the driver from scratch, whereas the nouveau developers only had to add a few lines to the existing driver. You are comparing a pyramid with a small house, IMHO (with all respect to the nouveau developers; their work isn't simple at all).

          If you want to make a fair comparison, compare the NVIDIA 600 with AMD's Aruba, which is an evolution of Cayman. That code is already public, while no AMD Trinity APU is being sold at the moment.

          • #6
            GeForces just got harder to program with Kepler, and Radeons got easier with GCN.

            • #7
              I don't think it got harder; nouveau still has the same problem as years ago: reclocking is not as stable as it should be. I guess this is very hard to get right; the NVIDIA guys should maybe give them a hint. OpenGL features seem to be easier to reverse engineer than to read docs for, and I really like what the nouveau devs have already managed to do. I don't use it all day, but as soon as VDPAU is working it could be considered a replacement for low-end cards. I doubt that somebody who plays games all day will use nouveau as long as the NVIDIA binary is much faster. But for office systems it shouldn't matter at all; a few desktop effects, maybe, and the user is happy. Stability is the key.

              • #8
                Originally posted by Kano View Post
                stability is the key.
                That's one of the biggest issues with all the open drivers except radeon. Radeon git is somehow more stable than Intel's stable releases, not to speak of nouveau...
                ## VGA ##
                AMD: X1950XTX, HD3870, HD5870
                Intel: GMA45, HD3000 (Core i5 2500K)

                • #9
                  From my experience, GNOME Shell works better on Intel than on Radeon. On my main rig the desktop effects are downright painful.

                  • #10
                    At the moment I am quite happy with KDE 4.8.2 and its desktop effects. The kernel is 3.3.1, X is whatever is latest in Gentoo, and Mesa is Gentoo ~amd64. Works fine. The hardware is an HD 5670. I haven't checked GNOME in a long time, aside from GNOME 2 on a different computer (an ECS G320 slowly falling apart after all these years), but that has a VIA CLE266, so there is nothing like 3D at all. :/ But then I use that one for typing and such only.

                    • #11
                      Originally posted by curaga View Post
                      GeForces just got harder to program with Kepler, and Radeons got easier with GCN.
                      Writing an efficient compute application definitely got easier with GCN, and we believe that implementing an optimized shader compiler got easier as well, but writing a driver was probably 10x as much work as it would have been if SI were just another evolution of the R600=>NI architecture. Maybe closer to 20x.
                      Last edited by bridgman; 04-14-2012, 02:13 PM.

                      • #12
                        Originally posted by bridgman View Post
                        Writing an efficient compute application definitely got easier with GCN, and we believe that implementing an optimized shader compiler got easier as well, but writing a driver was probably 10x as much work as it would have been if SI were just another evolution of the R600=>NI architecture. Maybe closer to 20x.
                        More work is not the same as harder work. It's easier, but because of the restart it's 20x as much work.

                        Whatever. RIP, VLIW.

                        • #13
                          I haven't seen any indication that writing a driver for GCN is any "easier" than for VLIW, except when it comes to seriously optimizing the shader compiler.

                          The VLIW hardware actually worked out really well from a shader compiler POV, since most of the IR operations were on 3- or 4-component vertices and fragments, so each IR operation could be translated directly into a single 3- or 4-slot VLIW instruction.
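                          To make the slot picture concrete, here is a toy sketch (hypothetical code, not actual Mesa/r600g compiler internals): a 4-component vector IR operation fills the four slots of a single VLIW bundle, while a scalar ISA in the style of GCN needs one instruction per component.

```python
# Toy illustration only (NOT real Mesa/r600g code): contrast packing a
# 4-component IR operation into one VLIW bundle vs. emitting four
# scalar instructions, one per component.

def to_vliw(ir_op, components=("x", "y", "z", "w")):
    """Emit a single VLIW bundle: each vector component fills one slot."""
    return [[f"{ir_op}.{c}" for c in components]]  # 1 instruction, 4 slots

def to_scalar(ir_op, components=("x", "y", "z", "w")):
    """Emit one scalar instruction per component (GCN-style)."""
    return [f"{ir_op}.{c}" for c in components]    # 4 separate instructions

vliw = to_vliw("ADD")
scalar = to_scalar("ADD")
print(len(vliw), "VLIW bundle:", vliw[0])
print(len(scalar), "scalar ops:", scalar)
```

                          The point of the sketch is that the vector-shaped IR maps one-to-one onto VLIW bundles with trivial scheduling, whereas a scalar ISA moves the burden of extracting instruction-level parallelism onto the compiler.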

                          • #14
                            Well, somehow no fast OpenGL drivers exist for ATI cards at all. Even on Windows, where you can compare OpenGL against DX11, you don't see good drivers. On Linux you see lots of artifacts running Rage under Wine (on an HD 5670, in case you forgot). It would be just too funny if the OSS drivers beat fglrx in a few years...

                            • #15
                              Originally posted by bridgman View Post
                              I haven't seen any indication that writing a driver for GCN is any "easier" than for VLIW, except when it comes to seriously optimizing the shader compiler.

                              The VLIW hardware actually worked out really well from a shader compiler POV, since most of the IR operations were on 3- or 4-component vertices and fragments, so each IR operation could be translated directly into a single 3- or 4-slot VLIW instruction.
                              I already pointed out that most Linux gurus are focused on x86_64 + SIMD (SSE/AVX).
                              This means it's easier for the "Linux nerds".

                              A general truth can be untrue in a special case, and Linux nerds are a special case.
