NVIDIA GeForce GTX 1060 Offers Great Performance On Linux


  • #91
    Originally posted by efikkan View Post
    The latest big hype is that AMD's architectures are superior in Direct3D 12 and Vulkan, which of course is based on a handful of games, including AofS and Doom, which were written for AMD hardware exclusively...
    It was NVIDIA that first showed off Doom running on Vulkan back in May.
    Today NVIDIA showcased the new GeForce GTX 1080 running DOOM with Vulkan API for the first time - running at up to 200 fps with Ultra settings. PC gamers, ge...

    You can't seriously claim that Doom's developers and NVIDIA would be showing off Vulkan support for the first time on a title and API written exclusively for AMD hardware.
    You also state that AMD fans hate and discredit tests that don't show the "expected" AMD advantage; well, I can't see that you're any better, you just favor NVIDIA instead.
    Try to be a bit more neutral if you want anyone to take you seriously; otherwise I'll assume you're too biased, or trolling.
    The NVIDIA Pascal lineup looks really good and seems to be wonderful hardware, but it's not what I'm looking for, so for me the AMD RX 480 is more interesting. I'm still waiting for the RX 470, the RX 460, AIB cards, and laptop solutions; I wish the full AMD lineup were out, but I guess I need to wait a bit longer.

    Comment


    • #92
      Originally posted by smitty3268 View Post
      There's no possible way you can know that, though, unless you are actually an engineer who works for AMD or NVidia (or TSMC/GloFo).
      We know for a fact that NVIDIA gets ~40-50% more performance per GFLOP; the production node has nothing to do with that.

      Originally posted by GruenSein View Post
      You might not like the points I raised, but the basics can't be disputed. The raw performance is there in AMD hardware, and there are applications that manage to extract it. The question is why. Since the software that is able to do it uses low-level APIs, it is only reasonable to assume that the OpenGL drivers, which did a lot of this low-level work internally, aren't as well optimised as code written by a developer for one specific workload.
      The "raw" performance, in terms of theoretical throughput, is there, but AMD has been unable to harvest it for years. Some of that is due to drivers, but mainly it's due to architectural inefficiencies. The only thing that matters, though, is real-world performance.

      The argument that "software may be able to utilize it" is a false narrative. If the use cases don't correspond to real-world applications, then it's a useless piece of technology. The argument you are using is exactly the same one AMD fans have been raising for years over the failed Bulldozer architecture, claiming it was just a matter of time before it showed its glory, which of course never happened. AMD has even admitted the bad design choice, and that's why Zen will be more of a "copy" of Intel's approach.

      Comment


      • #93
        Originally posted by efikkan View Post
        We know for a fact that NVIDIA gets ~40-50% more performance per GFLOP; the production node has nothing to do with that.


        The "raw" performance in terms of theoretical performance is there, but AMD has been unable to harvest it for years. Some due to drivers, but mainly due to architectural inefficiencies. The only thing that matters though is real world performance.

        The argument that "software may be able to utilize it" is a false narrative. If the use cases don't correspond to real-world applications, then it's a useless piece of technology. The argument you are using is exactly the same one AMD fans have been raising for years over the failed Bulldozer architecture, claiming it was just a matter of time before it showed its glory, which of course never happened. AMD has even admitted the bad design choice, and that's why Zen will be more of a "copy" of Intel's approach.
        Considering the ~6.5 TFLOP GTX 1070 has ~18% more theoretical TFLOPs than the ~5.5 TFLOP RX 480, yet is only ~10% faster in Vulkan titles, how do you justify that level of blind fanboyism?
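
        To put numbers on that (the TFLOP ratings and the ~10% Vulkan gap are the figures from this thread; a quick sketch, not a benchmark):

        Code:
        # Perf-per-TFLOP sanity check using the figures quoted in this thread.
        tflops_1070 = 6.5       # GTX 1070 theoretical FP32 TFLOPs (as quoted)
        tflops_480 = 5.5        # RX 480 theoretical FP32 TFLOPs (as quoted)
        vulkan_perf_gap = 0.10  # 1070 ~10% faster in Vulkan (as quoted)

        tflop_gap = tflops_1070 / tflops_480 - 1  # ~0.18 -> 18% more TFLOPs
        perf_per_tflop = (1 + vulkan_perf_gap) / (1 + tflop_gap)
        print(f"1070 perf/TFLOP relative to the 480: {perf_per_tflop:.2f}")
        # ~0.93 -> in this Vulkan case the 480 actually extracts slightly
        # MORE per TFLOP, nowhere near a 40-50% NVIDIA advantage.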

        Comment


        • #94
          Originally posted by efikkan View Post
          We know for a fact that NVIDIA gets ~40-50% more performance per GFLOP; the production node has nothing to do with that.
          Raw FLOP performance is only one aspect of a GPU; it was overall performance we were talking about. And I repeat, you have no way of knowing whether switching the process from GloFo to TSMC would let them catch up or not. None. All you can do is guess.

          Comment


          • #95
            Originally posted by smitty3268 View Post
            Raw FLOP performance is only one aspect of a GPU; it was overall performance we were talking about. And I repeat, you have no way of knowing whether switching the process from GloFo to TSMC would let them catch up or not. None. All you can do is guess.
            Even with Maxwell, NVIDIA delivered 40-50% more performance per GFLOP than AMD, so it's not as if the switch to GloFo/Samsung tilted this greatly. Of course there can be a few percent of difference, and I do believe TSMC is slightly better, but that is not why AMD is struggling. If AMD and NVIDIA each had similar chips, with similar specs, similar architecture, and similar performance, but one consumed 10% more energy, then you could perhaps blame some of it on the production node.
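
            As a rough back-of-the-envelope illustration of that 28 nm claim (the clocks are approximate reference values, and treating the Fury X and 980 Ti as roughly equal in games is an assumption made for the sake of the sketch):

            Code:
            # Theoretical FP32 GFLOP/s = shader cores * clock (GHz) * 2 (FMA).
            # Clocks below are approximate reference values; ballpark only.
            def fp32_gflops(cores, clock_ghz):
                return cores * clock_ghz * 2

            fury_x = fp32_gflops(4096, 1.05)    # R9 Fury X  -> ~8600 GFLOP/s
            gtx_980ti = fp32_gflops(2816, 1.0)  # GTX 980 Ti -> ~5630 GFLOP/s

            # The two cards traded blows in games; assuming rough parity:
            print(f"Fury X / 980 Ti theoretical FLOPs: {fury_x / gtx_980ti:.2f}")
            # ~1.53x the raw FLOPs for similar real-world performance, i.e.
            # Maxwell delivered roughly 50% more per GFLOP.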

            Also, keep in mind that Polaris was taped out on both TSMC "16 nm" and Samsung "14 nm", and they went with Samsung for some unknown reason.
            Last edited by efikkan; 23 July 2016, 03:53 AM.

            Comment


            • #96
              Originally posted by efikkan View Post
              Even with Maxwell, NVIDIA delivered 40-50% more performance per GFLOP than AMD
              Since you don't seem to get it, let me try again. Having 50% more performance per GFLOP doesn't matter at all if your competition has 50% more GFLOPs on their cards. In fact, it's entirely possible that one architecture is designed for a higher theoretical peak at the expense of not being able to reach it as often, and that the tradeoff was made on purpose.
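
              To illustrate that tradeoff with made-up numbers (purely hypothetical designs, not real cards):

              Code:
              # Hypothetical sketch: a higher peak can lose to a lower peak if
              # the wide design is harder to keep saturated. Numbers invented.
              designs = {
                  "wide, hard to fill": {"peak_tflops": 6.0, "avg_utilization": 0.60},
                  "narrow, well fed": {"peak_tflops": 4.5, "avg_utilization": 0.85},
              }

              for name, d in designs.items():
                  achieved = d["peak_tflops"] * d["avg_utilization"]
                  print(f"{name}: {achieved:.2f} effective TFLOP/s")
              # wide, hard to fill: 3.60 effective TFLOP/s
              # narrow, well fed:   3.83 effective TFLOP/s
              # The spec-sheet winner loses once utilization is accounted for.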

              What actually matters is the end result: which cards play games faster while using less power. The answer is clearly NVIDIA's right now, but we don't know what kind of effect (if any) the process differences are having.
              Last edited by smitty3268; 23 July 2016, 05:50 PM.

              Comment


              • #97
                Originally posted by smitty3268 View Post
                Since you don't seem to get it, let me try again. Having 50% more performance per GFLOP doesn't matter at all if your competition has 50% more GFLOPs on their cards. In fact, it's entirely possible that one architecture is designed for a higher theoretical peak at the expense of not being able to reach it as often, and that the tradeoff was made on purpose.
                You clearly don't know how microprocessors work. GFLOP/s is a measure of the theoretical throughput of the FPUs if they are fully saturated. It's a product of core count × IPC × clock, which means that if your architecture is less efficient and you need more cores to reach your desired level of performance, your energy efficiency may suffer compared to the competition.
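
                As a concrete sketch of that product (for FP32 shader cores the per-core factor is 2 FLOPs per clock, one fused multiply-add; the clocks are approximate reference boost values):

                Code:
                # Theoretical GFLOP/s = cores * FLOPs-per-clock * clock.
                # FP32 shader cores do 2 FLOPs per clock (one fused multiply-add).
                def peak_gflops(cores, flops_per_clock, clock_mhz):
                    return cores * flops_per_clock * clock_mhz / 1000.0

                print(peak_gflops(2304, 2, 1266))  # RX 480:   ~5834 GFLOP/s
                print(peak_gflops(1920, 2, 1683))  # GTX 1070: ~6463 GFLOP/s
                # Similar peaks via different routes: AMD uses more cores at a
                # lower clock, NVIDIA fewer cores at a higher clock.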

                Originally posted by smitty3268 View Post
                What actually matters is the end result: which cards play games faster while using less power. The answer is clearly NVIDIA's right now, but we don't know what kind of effect (if any) the process differences are having.
                Of course the end result matters. But efficiency is part of that, in terms of heat, noise, overclocking headroom, etc.

                And to reiterate, moving from 28 nm TSMC to 16 nm TSMC / 14 nm Samsung didn't greatly change the efficiency relationship between AMD and NVIDIA.
                Last edited by efikkan; 24 July 2016, 04:43 AM.

                Comment


                • #98
                  Originally posted by efikkan View Post
                  You clearly don't know how microprocessors work.
                  I know exactly how they work, thanks.

                  Originally posted by efikkan View Post
                  GFLOP/s is a measure of the theoretical throughput of the FPUs if they are fully saturated.
                  I understand that completely. I also know that you can sometimes create architectures that are difficult to fully saturate, and the FLOPS numbers are then somewhat misleading.

                  Originally posted by efikkan View Post
                  It's a product of core count × IPC × clock, which means that if your architecture is less efficient and you need more cores to reach your desired level of performance, your energy efficiency may suffer compared to the competition.
                  It's not just adding more cores that can boost it. As you mention here, increasing the clock can make up for low IPC as well. In fact, that's exactly the route NVIDIA took with this generation, greatly boosting their clock speeds.
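
                  For scale (approximate reference boost clocks; a rough sketch of where Pascal's FLOPs gain over Maxwell came from):

                  Code:
                  # Maxwell -> Pascal: how much of the FLOPs gain came from
                  # clocks versus cores. Approximate boost clocks; illustrative.
                  gtx_980 = {"cores": 2048, "clock_ghz": 1.216}   # Maxwell
                  gtx_1080 = {"cores": 2560, "clock_ghz": 1.733}  # Pascal

                  core_gain = gtx_1080["cores"] / gtx_980["cores"]
                  clock_gain = gtx_1080["clock_ghz"] / gtx_980["clock_ghz"]
                  print(f"cores: {core_gain:.2f}x, clock: {clock_gain:.2f}x, "
                        f"FLOPs: {core_gain * clock_gain:.2f}x")
                  # cores: 1.25x, clock: 1.43x, FLOPs: 1.78x -> the bigger
                  # multiplier this generation was the clock, not core count.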

                  Originally posted by efikkan View Post
                  Of course the end result matters. But efficiency is part of that, in terms of heat, noise, overclocking headroom, etc.
                  The end result is all that matters. Heat and overclocking headroom are precisely the types of things that are affected by different manufacturing processes, along with how many transistors you can pack tightly together (which affects the number of cores, clock speed, etc.).

                  Originally posted by efikkan View Post
                  And to reiterate, moving from 28 nm TSMC to 16 nm TSMC / 14 nm Samsung didn't greatly change the efficiency relationship between AMD and NVIDIA.
                  And to reiterate, correlation != causation. You can guess that there is no appreciable difference between the processes, but you have no way of knowing it. Too many things changed between the cards to know for certain that one specific change made no difference.

                  Comment
