NVIDIA Rolls Out The Titan Xp Graphics Card For $1200 USD


  • #21
    Originally posted by creative View Post
    I am of the opinion GPU technology needs to change with respect to footprint size. I'm a bit neurotic when it comes to TDP. OK, I'm an idealist, and if I had my say, money, and resources, I would push all to manufacturing limits. Faster, lower TDP, and smaller. In my opinion the GTX minis are the biggest leap the GPU industry has made. Even if I had the money I would still not buy any of the large high-performance cards, because I consider them stone age.
    Okay, woah. I think you're missing a couple key points.

    GPUs are fundamentally about maximizing perf/W. The amount of concurrency in graphics tasks allows them to take extremely efficient compute elements and achieve lofty performance numbers by stamping out thousands of them. In fact, the dies would probably be even bigger & the clocks even lower if manufacturing were cheaper. Alas, it's not. So, that forces them to use higher clock speeds than those which yield optimal perf/W.
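
    To make that concrete, here's a toy back-of-the-envelope model (purely illustrative, with made-up constants; it just assumes dynamic power scales with V^2 * f, that voltage has to rise with clock, and that there's a fixed leakage term):

    # Illustrative only: toy power model with made-up constants.
    # "Performance" is just the clock; power = leakage + C * V(f)^2 * f.
    def perf_per_watt(f_ghz, leakage_w=20.0, c=30.0, v0=0.6, k=0.25):
        v = v0 + k * f_ghz                    # assumed supply voltage needed to hit this clock
        power_w = leakage_w + c * v * v * f_ghz
        return f_ghz / power_w

    for f in (0.5, 1.0, 1.5, 2.0):
        print(f"{f:.1f} GHz -> {perf_per_watt(f):.4f} perf/W (arbitrary units)")

    With these made-up numbers, perf/W peaks somewhere around 1.0-1.5 GHz and falls off above that, which is the trade-off being described: chasing higher clocks buys performance at worsening perf/W.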

    Now, as for manufacturing tech, it's no accident that GPUs tend to lag behind most other types of chips. Something like a cellphone SoC is quite small (when compared with desktop CPUs, at least). Since the high-end models are also pretty high-margin parts, they can afford to be trail blazers on new process nodes (which have tighter supply, lower yields, and thus much higher production costs). Next, you usually see the desktop CPUs. And finally come the big server CPUs and GPUs. At least, that's how it's been for the past couple node shrinks.

    If you were to look at the cost of fabricating current GPUs on say 10 nm, it wouldn't be economically viable for quite a while, yet. In fact, I think it's no accident that Nvidia chose to sell the largest of the Pascal family (i.e. the P100 and GP102) exclusively into the most price-insensitive markets, at first.

    In summary: if you want GPUs to be more power-efficient, you'd make them even bigger and dial back the clocks a bit. But this would make them even more expensive, which was your original complaint. Finally, the realities of market competition and the demand for the power-efficient computation GPUs can provide mean that high-end graphics cards will stay pinned near the ~250 W mark (the PCIe add-in-card spec allows up to about 300 W, and that's pretty close to what's pragmatic from a cooling perspective anyway). Sure, some cards push to ~300 W, but there's not much headroom beyond that.
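
    For reference, the power-delivery arithmetic behind that ceiling comes from the PCIe CEM per-connector limits (the exact connector mix varies by card); a quick sketch:

    # PCIe CEM power-delivery limits (per connector), in watts.
    SLOT = 75        # x16 slot itself
    SIX_PIN = 75     # per 6-pin auxiliary connector
    EIGHT_PIN = 150  # per 8-pin auxiliary connector

    print(SLOT + EIGHT_PIN)            # 225 W: a common single 8-pin design
    print(SLOT + SIX_PIN + EIGHT_PIN)  # 300 W: roughly the practical ceiling for one card

    So a 250 W card with a 6-pin + 8-pin combo still has a little headroom before it hits that limit.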

    You'll find deeper analysis (and some recently updated charts!) in this classic treatment of the subject:

    In particular, I appreciate his analysis of memory scaling.



    • #22
      Originally posted by coder View Post
      Okay, woah. I think you're missing a couple key points. [...]
      Going to have to read your links. You seem to be getting into the nitty gritty of it.



      • #23
        Originally posted by coder View Post
        Okay, woah. I think you're missing a couple key points. [...]
        I stand by my opinion, in point of fact the statement that the minis have been the biggest leap the GPU industry has made. I am not apt to propel an Intel vs AMD debate; both have their merits. It's about usefulness to consumers within their respective budgets/needs. I did not miss any key points, in point of fact I have an opinion that is backed by current offerings in Pascal form, the GTX minis. Accessibility is the key point, and the GTX minis are easily accessible. I see what you are getting at, it's just not relevant to application in my life.

        I would rather use and correspond in as plain a syntax and language as possible, albeit within a certain mental space and boundary. I say this, hence my forum handle 'creative': part of what I do is art, secondly Linux/gaming, and fourthly philosophy. I'm not quite the one to get too much into the nitty gritty of arch design. Everybody has their point of reference, from which their skills/understanding have developed to a greater or lesser degree. I very much respect and have a high appreciation for our different areas of skill and understanding. ;-)
        Last edited by creative; 09 April 2017, 02:54 AM. Reason: Better explanation from my point of application.



        • #24
          Originally posted by creative View Post
          I stand by my opinion, in point of fact the statement that the minis have been the biggest leap the GPU industry has made.
          Can you please explain what you mean by GTX minis?

          Originally posted by creative View Post
          I am not apt to propel an Intel vs AMD debate; both have their merits.
          Wow, neither am I. I don't think either of us said anything about AMD. The Intel vs. Nvidia comparison was because you said they should cost about the same. Then, I simply pointed out the difference in production economies.

          Originally posted by creative View Post
          I did not miss any key points,
          Well, let's see. You wanted GPUs to be smaller, cheaper, and lower-power. So, let's take another pass at this, in a slightly different order.

          Lower power: people want performance. Gamers, machine-learning users, VR users, and various other GPU-compute users want the most performance they can get. People employ GPUs for non-graphics tasks because they're the fastest and most power-efficient of the generally programmable options (true, FPGAs and ASICs can do better, but they're also more expensive, more cumbersome, and less flexible). The demand for performance pretty much means that the very fastest GPUs will push right up to (and slightly past) the ~250 W that high-end cards typically budget.

          Now, the way to maximize performance, when you have a hard power ceiling and a highly concurrent workload, is parallelism. This lets you design each compute element to be maximally power-efficient, and then instantiate lots of them. One reason for this is that a compute element's power dissipation tends to grow roughly as the square of its clock speed (dynamic power scales with voltage squared times frequency, and the voltage has to rise along with the clock). So, it's generally more efficient to have twice as many compute elements running at half the clock speed.
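
          As a quick sanity check on that claim (purely illustrative numbers, using the simplified per-unit power ~ f^2 model above), compare one design against another with twice the units at half the clock:

          # Toy comparison under the simplified "per-unit power ~ f^2" model above.
          def throughput(units, f_ghz):
              return units * f_ghz          # total work scales with units * clock

          def power(units, f_ghz):
              return units * f_ghz ** 2     # per-unit power ~ f^2 (simplification)

          a = (2000, 1.0)                   # hypothetical: 2000 units at 1.0 GHz
          b = (4000, 0.5)                   # hypothetical: twice the units at half the clock

          print(throughput(*a), power(*a))  # 2000.0 work units, 2000.0 power units
          print(throughput(*b), power(*b))  # 2000.0 work units, 1000.0 -> same work, half the power

          Same made-up throughput, half the power, which is why wide-and-slow wins whenever the workload has enough parallelism to keep all those units busy.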

          Cost: given the above points, GPUs must be as large as it's economically viable to make them. As I said above, their speed comes from their size. Making them smaller (in terms of transistor count) will make them slower, more power-hungry, or both.

          Manufacturing tech isn't a solution, unless you're willing to pay dramatically more. I wish I knew the cost per transistor of 10 nm vs. 16 nm, but I assure you it's much higher. As soon as they approach parity, you'll know, because that's when AMD and Nvidia will start releasing new GPUs on the new node.

          All of this is to say: you can't have fast and cheap and low-power. High-end GPUs will always be expensive power monsters, at least as long as the market demands performance.

          Originally posted by creative View Post
          I would rather use and correspond in as plain a syntax and language as possible, albeit within a certain mental space and boundary. I say this, hence my forum handle 'creative': part of what I do is art, secondly Linux/gaming, and fourthly philosophy.
          Okay, you completely lost me.

          Anyway, I was just trying to explain why your goals are contradictory or otherwise impractical:
          I would push all to manufacturing limits. Faster, lower TDP, and smaller.
          That said, if your needs are satisfied by smaller, less power-hungry GPUs, that's great. I never said you should buy these high-end models.



          • #25
            Originally posted by coder View Post
            Can you please explain what you mean by GTX minis? [...]
            GTX minis. I like them cause they don't restrict airflow in smaller enclosures like mid towers and under. The GTX 1070 mini is actually what I would consider pretty high performance, and it's also quiet. Oh yeah, and they run cool, also they don't sag off the motherboard.

            https://www.amazon.com/ZOTAC-GeForce...words=gtx+mini

            https://www.amazon.com/Gigabyte-GeFo...=gtx+1060+mini

            https://www.amazon.com/ZOTAC-GeForce...words=gtx+mini



            Done here.
            Last edited by creative; 09 April 2017, 08:32 PM.



            • #26
              Originally posted by creative View Post
              GTX minis. I like them cause they don't restrict airflow in smaller enclosures like mid towers and under. The GTX 1070 mini is actually what I would consider pretty high performance. Oh yeah, and they run cool,
              Of course, the actual GPU is the same size, so you get smaller cards either by cutting clock speeds or dealing with higher noise.

              Those GPUs can still perform admirably, at lower clocks, as shown by the fact that they use the same chips in laptops.

              Originally posted by creative View Post
              also they don't sag off the motherboard.
              My 980 Ti has a metal backplate. I haven't noticed a hint of sag. I do realize this adds to cost and weight, but I'm just saying that sag, in larger cards, isn't unavoidable.



              • #27
                Originally posted by Geopirate View Post
                It seems like AMD is getting their driver situation together though. By the time the 1070 drops down into a reasonable price point, there may be a competing AMD card. I've noticed a smoother experience on a lower specced AMD card lately and it's made me start to question the quality of the Nvidia driver due to compatibility issues. If they can get performance on par, AMD may have an all around better solution by the end of the year....
                x2. IIRC, Nvidia does some funny stuff with their GeForce driver, so that it's not quite entirely congruent with the OpenGL spec, whereas AMD follows the spec to the letter. I've also noticed what you're describing: the output of AMD cards just "feels" smoother than what Nvidia is putting on the screen. Less tearing, less stuttering, etc. I'm running an original GTX Titan right now, but I'll be giving a hard look at the new AMD Vega cards.

