Red Hat & NVIDIA To Collaborate On Some Open-Source Efforts

  • #31
    Originally posted by msotirov View Post
    You didn't address my argument at all. I'm talking about the state of the underlying architectures.
    I addressed it; see above. AMD is working on a new architecture.



    • #32
      Originally posted by shmerl View Post
      As for competing with Nvidia in discrete cards, AMD are working on Arcturus architecture (something they labeled "super-SIMD") which should be their major step up in gaming performance per power specifically comparing to GCN generations.
      Please go back and read my original post. Arcturus is a CHIP not an ARCHITECTURE... just the first chip where the timing worked out for us to give it a name in the driver code that we knew would not get confused with the eventual marketing name.

      Someone said they wished we would go back to using code names for chips to avoid the current confusion between internal and marketing names, and I said that we were doing that. Nothing more.
      Last edited by bridgman; 24 October 2018, 02:34 PM.



      • #33
        Originally posted by bridgman View Post
        Please go back and read my original post. Arcturus is a CHIP not an ARCHITECTURE.
        OK, so it's the first chip to use some architecture "Z" that has no (public) name? I suppose Polaris, Vega and Navi aren't architectures either; they're really GCN 4, GCN 5 and GCN 6, but the code names map to them. We'll see whether this star theme continues or internal names diverge from marketing ones. For now we don't have the latter at all.
        Last edited by shmerl; 24 October 2018, 02:55 PM.



        • #34
          Originally posted by shmerl View Post
          OK, so it's the first chip to use some architecture "Z" that has no (public) name? I suppose Polaris, Vega and Navi aren't architectures either; they're really GCN 4, GCN 5 and GCN 6, but the code names map to them. We'll see whether this star theme continues or internal names diverge from marketing ones. For now we don't have the latter at all.
          Not even the first chip to use the architecture. It's just a chip. We did pick from star names, but you have to pick from somewhere.

          We do consider Vega and Navi to be architectures (GFX9 and 10 respectively) while Polaris is an improved GFX8 implementation with only tiny SW-visible changes (1 bit enabled by driver IIRC).



          • #35
            Originally posted by bridgman View Post
            Not even the first chip to use the architecture.
            So the Arcturus chip is not going to use a new architecture (after Navi)? In AMD's roadmap charts, the slot after Navi was marked as next-generation, so I assumed that meant a new architecture.



            • #36
              Arcturus is not a post-Navi architecture. If I get more specific than that, it kind of defeats the whole purpose of having code names.

              The only thing we have ever said is that it was the first chip where we were using a code name in the driver source that was unrelated to eventual marketing names... but then the internet ran wild with the usual "ooh ooh new word must be a new architecture" noise.
              Last edited by bridgman; 24 October 2018, 04:58 PM.



              • #37
                Originally posted by bridgman View Post
                Arcturus is not a post-Navi architecture.
                Good to know, thanks! That helps avoid the confusion.



                • #38
                  Originally posted by msotirov View Post
                  You didn't address my argument at all. I'm talking about the state of the underlying architectures. Being power-inefficient is a disadvantage not just in notebooks but also in desktop GPUs. Currently AMD can compete in neither the notebook GPU market nor the desktop GPU market. The only segment they currently dominate GPU-wise is consoles. Some other folks on this forum and I buy AMD GPUs purely out of principle, not because they are better.

                  In your original comment you were talking about "growing pressure from AMD". My point was that there is no such pressure in the GPU market. If anything, they lost some marketshare in the most recent quarter.

                  That data comes from Jon Peddie Research's Add-in Board report, a quarterly report on the PC graphics add-in board market covering unit shipments and segment market share.




                  • #39
                    If Red Hat is working with Nvidia, it is surely for GPU computing in the datacentre. IBM works with Nvidia on POWER for the same reason. I can even imagine a three-way collaboration.

                    As far as I know, Nvidia is dominant in this market, and it isn't that price-sensitive. I think buyers go for top-end cards, often ones designed for this purpose, with little focus on actual video rendering or display.

                    Power consumption matters a lot in this market. Nvidia seems to win here.

                    Compute power matters a lot. For graphics, Nvidia wins at the high end. For mining, AMD seemed to have the edge. Not sure where GPU computing comes in.

                    Double precision may matter a lot. This used to be a differentiator between compute and graphics workloads. But it seems like AI workloads don't need this.

                    Support for safely partitioning the GPU between processes would be nice for datacentres. My impression is that AMD is ahead here.

                    Making off-card memory access seamless ought to be an advantage for big workloads. I think AMD might have an edge here.

                    There has been a large investment in CUDA applications, which clearly gives Nvidia an edge. AMD is trying to provide a transition path, but I have no idea whether it would work for most customers.

                    Sadly, none of this has anything to do with our desktops.



                    • #40
                      Originally posted by msotirov View Post
                      Did you see the RTX series reviews?
                      You mean the one with their most underwhelming generational performance increase in a while? With performance gains almost surpassed by the corresponding price increases?

                      I think it opens a window where AMD can launch something aggressive that brings the heat into Nvidia's upper range. Maybe not the RTX 2080 Ti, but their 2080 is definitely vulnerable.

                      Sure, RTX has fancy new tech, but that's largely irrelevant for the current generation of games. In another year, the story could be different. However, if ray tracing is too slow to use above 1080p, then all they've really got is their blurry DLSS feature, which, it has come to light, really renders at a lower resolution and upscales.

                      https://www.tomshardware.com/reviews...-rtx,5870.html
                      Last edited by coder; 28 October 2018, 03:09 AM.

