
AMD Radeon Software Crimson Edition Is A Letdown On Linux


  • #71
    Originally posted by gamerk2 View Post
    Unlikely. We have a fairly good idea where Zen performance is going to be. AMD claims a 40% IPC improvement, which sounds nice until you remember clocks are going to be lower due to the process node and shorter pipeline. So per-core performance is going to be about 20-30% faster, which puts it around Ivy Bridge/Haswell level performance. AMD is still going to be behind Intel in performance, even after Zen.
    Thing is AMD does not need to outright beat Intel in CPU performance to be a profitable company. The problem in the last few years is that the gap became too big. They need to be fast enough for AMD to sell them with decent margins in decent quantities. What Zen needs to achieve is not to take the performance crown from Intel, but rather to let AMD start making money again. Right now their architecture is so far behind that they are having to sell big 8-core CPUs at budget prices with no margins in small quantities.
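    As a rough sanity check of the quoted arithmetic: the +40% IPC figure is AMD's claim, while the 10-15% clock deficit below is only an assumed illustration, not a known Zen spec.

        /* Back-of-the-envelope sketch only: per-core throughput scales
         * roughly with IPC * clock. The clock numbers are assumptions. */
        #include <stdio.h>

        int main(void)
        {
            const double ipc_gain = 1.40;  /* claimed +40% IPC uplift    */
            const double clk_low  = 0.85;  /* assumed: clocks ~15% lower */
            const double clk_high = 0.90;  /* assumed: clocks ~10% lower */

            printf("per-core gain: %.0f%% to %.0f%%\n",
                   (ipc_gain * clk_low  - 1.0) * 100.0,
                   (ipc_gain * clk_high - 1.0) * 100.0);
            return 0;
        }

    With those assumptions it prints a gain of roughly 19% to 26%, which is where the 20-30% estimate comes from.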

    Comment


    • #72
      Originally posted by Linuxxx View Post
      ZEN:
      AMD becomes competitive against Intel again, especially in the high-margin server field!
      Is desktop Zen expected to have a quad-channel DDR4 memory controller? Just curious.

      Comment


      • #73
        Originally posted by arunbupathy View Post
        Correct me if I am wrong, but the way I see it, IPC gains are all about parallelism in many of the common algorithms that we use, and the proper way to exploit that would be to use massively parallel cores or GPUs.
        IPC (Instructions Per Cycle) is about how fast a single thread runs on the CPU; roughly, single-thread performance ≈ IPC × clock frequency, so it has nothing to do with spreading work across many cores or a GPU.

        Comment


        • #74
          Originally posted by clockley1 View Post
          I need OpenGL 4 for Steam, and the open source driver will never support GL4 on 6670... AMD has made a new Nvidia customer.
          Never say never: according to http://mesamatrix.net/ , tessellation is the only thing still holding back GL4, and there are already two people working on it: Dave Airlie and Glenn Kennard.

          You can find more info here: http://cgit.freedesktop.org/~airlied.../?h=r600g-tess

          I'm hoping that tessellation patches will be posted for review on the mesa-dev mailing list by the end of the year.

          After that, GL4.1 will be reached.

          Comment


          • #75
            Originally posted by oleid View Post
            Um, so why can I play OpenGL 4 games like Civ: Beyond Earth on my Radeon 5570 using Mesa 11 and a recent kernel? Right! Because all the extensions needed are supported. Just give it a try, it might already work today.
            You can't... try something that actually requires GL4... or rather, set up Mesa's debug output and see which context version the game actually gets; a minimal way to check is sketched below...
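            For anyone who wants to try this, here is a minimal sketch (assuming X11, Mesa and the GLX development headers; build with something like cc glcheck.c -lX11 -lGL). It creates a plain legacy context and prints the version and renderer strings the driver actually hands out; the file name and the approach are just an illustration, not an official Mesa tool.

                /* Create a default (legacy) GLX context and print what the
                 * driver reports for it. What a game gets depends on the
                 * profile and flags it requests; this shows the baseline. */
                #include <stdio.h>
                #include <X11/Xlib.h>
                #include <GL/gl.h>
                #include <GL/glx.h>

                int main(void)
                {
                    Display *dpy = XOpenDisplay(NULL);
                    if (!dpy) { fprintf(stderr, "cannot open X display\n"); return 1; }

                    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
                    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
                    if (!vi) { fprintf(stderr, "no suitable GLX visual\n"); return 1; }

                    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, vi->screen),
                                                     0, 0, 64, 64, 0, 0, 0);
                    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
                    glXMakeCurrent(dpy, win, ctx);

                    printf("GL_VERSION : %s\n", (const char *)glGetString(GL_VERSION));
                    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

                    glXMakeCurrent(dpy, None, NULL);
                    glXDestroyContext(dpy, ctx);
                    XDestroyWindow(dpy, win);
                    XCloseDisplay(dpy);
                    return 0;
                }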

            Comment


            • #76
              Originally posted by xxmitsu View Post

              Never say never: according to http://mesamatrix.net/ , tessellation is the only thing still holding back GL4, and there are already two people working on it: Dave Airlie and Glenn Kennard.

              You can find more info here: http://cgit.freedesktop.org/~airlied.../?h=r600g-tess

              I'm hoping that tessellation patches will be posted for review on the mesa-dev mailing list by the end of the year.

              After that, GL4.1 will be reached.
              Unless the hardware can support GL_ARB_gpu_shader_fp64 you will not get GL4 from Mesa... so for r600g that means the 5830, 5850, 5870, 6950 and 6970; all the other cards need FP64 emulation...
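              As an aside, with a compatibility context current (for instance the one created in the sketch a couple of replies up), a quick way to see whether the driver exposes that extension is a fragment along these lines; it is only a sketch and relies on the classic extension-string query, which core profiles no longer provide.

                  /* Test the classic extension string for fp64 support.
                   * Assumes a compatibility GL context is already current;
                   * a core profile would need glGetStringi() instead. */
                  #include <string.h>
                  #include <GL/gl.h>

                  static int has_fp64(void)
                  {
                      const char *ext = (const char *)glGetString(GL_EXTENSIONS);
                      return ext && strstr(ext, "GL_ARB_gpu_shader_fp64") != NULL;
                  }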

              Comment


              • #77
                Another fiasco, guys... Just 4.0, 4.1 and 4.2 kernel support, NOT 4.3, and NOTHING more than that... I wonder if the AMD employees who make those slide presentations are actually NVIDIA employees; I cannot explain it any other way! Unbelievable... VA-API still sucks just the same. I am going back to AMDGPU with VDPAU and no reclocking for the moment; it is more stable, I can use EGL with KWin, X Server 1.18 and DRI3, and soon enough I will be playing games fine with my Tonga!

                Comment


                • #78
                  Originally posted by bridgman View Post
                  Fixed that for you.
                  Nice fix, and hopefully it happens the way you predicted (though I'm not a big fan of C++; is the ability to use "just C" to program the GPU also part of the plan?). But the whole existence of translators implies you expect the translator input won't exactly be C++17... :\ And then I don't see what would prevent devs from sticking to CUDA, mumbling something like "okay, AMD can convert it to whatever crap they want". Needless to say, in that case it would somehow work, sure, but I doubt the speed would be good, because CUDA devs have never bothered to optimize for AMD. They just don't give a fuck about AMD's existence. If that weren't the case, they would be using something other than CUDA, right? The catch-22 of the GPU world...

                  Comment


                  • #79
                    Working in the Linux Catalyst team must be a very, very sad experience.
                    I can just picture all the depressed people hanging around and hating life.

                    Comment


                    • #80
                      Originally posted by gamerk2 View Post
                      Except GCN has its own performance issues, and frankly, the arch is getting a bit long in the tooth at this point.
                      I wonder which particular complaints you have? GCN seems to be a more or less logical design, and it has no trouble beating NVIDIA into the dust in at least some scenarios. E.g. in Bitcoin mining (basically massively parallel SHA-256) NVIDIA was nowhere close to either the VLIW parts or GCN, so on at least some tasks it can outrun NVIDIA by quite a lot.

                      If AMD were a bit better at being sneaky, they could get the idea: aha, we are good at massively parallel crypto, and now everyone wants SSL? Very handy! Sounds like a new emerging market. Hey everyone, we have epic crypto accelerators, do 50x more TLS sessions on the same server! But okay, that would be way too sneaky and smartass for AMD management.

                      NVIDIA is moving on from Maxwell, and guess what? Its async compute support is getting fixed.
                      Sounds pretty much like marketing bullshit to me. I guess what NVIDIA considers an achievement is already here at AMD. They are good at engineering, any day... well, hardware engineering. I wouldn't make the same claim about Catalyst.

                      AMD is still going to be behind Intel in performance, even after Zen.
                      Price also matters, along with many other things. E.g. ARM cores were never known for superb performance, but they were small, which made them cheap, and as the technology advanced they reached performance that is "good enough" for more and more use cases. ARM also improved its cores quite a lot and designed IP blocks around them, making a rather appealing portfolio that lets a crapload of companies produce custom CPUs. Somehow, Intel got totally pwned in some fast-growing markets; they have to PAY manufacturers to get their Atom chips used at all. LOL.

                      simply because on-die HBM will make APUs too expensive for their price point,
                      Iris isn't exactly cheap, and an HBM-based APU could act as a "real" GPU, actually beating most inexpensive discrete cards into the dust. So it could probably take Iris's price point, and the weaker CPU cores could be outweighed by the stronger GPU. You see, Iris can barely compete with an HD 6770, which isn't exactly new and only has a modest 128-bit GDDR5 bus.

                      APUs with HBM is simply a pipe-dream.
                      Somehow Intel sells a similar "dream" for an awful price, with performance barely able to compete with an HD 6770. So it seems to work even in a far crappier implementation, and Iris isn't what I would call cheap. Once HBM reaches more or less mass production, I guess it hardly adds more than a few bucks to the price on its own. Looking at Iris pricing, I guess there is quite a large margin. Sure, competition can bring prices down, but at the end of the day AMD is so far the ONLY company on the planet doing HBM designs. Others will eventually catch up, sure, but AMD will have far more expertise by then anyway.

                      Comment
