NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening


  • #11
    It seems the GTX 1060 is more power/price efficient than the RX 480 due to the memory cut (1-2GB less) and the lack of SLI support.

    What is the price of the 3GB model? Hopefully that isn't 2.5+0.5 GB.



    • #12
      I noticed from the pictures that the PCB is shorter on the 1060 than on the 1070 and 1080, just like the RX 480. I would expect to see some coolers with ITX in mind... which is exactly what I'm thinking about for my next system. Although I'm more interested in a small-form-factor ITX card along the lines of what Gigabyte is putting out.

      I'm wondering if AMD has held back the RX 490 to destroy the 1060 at the ~$300 range, so as not to be upstaged by nVidia. Officially there is no RX 490... but unofficially it keeps popping up on various manufacturers' sites and then disappearing when people spot the name.



      • #13
        Originally posted by dungeon:
        It seems the GTX 1060 is more power/price efficient than the RX 480 due to the memory cut (1-2GB less) and the lack of SLI support.

        What is the price of the 3GB model? Hopefully that isn't 2.5+0.5 GB.
        Well, I play at 1080p so 6GB is overkill anyway. And SLI missing... no comment, it's useless.

        So performance #1 and power consumption #2 will decide the winner for my desktop.



        • #14
          The PCB is half as long as the card itself. If it had to be built this big and requires such a fan, it may not be as "power efficient" as the marketing bullshit says. It could have been a small, single-slot card. Why isn't it?



          • #15
            Originally posted by eydee:
            The PCB is half as long as the card itself. If it had to be built this big and requires such a fan, it may not be as "power efficient" as the marketing bullshit says. It could have been a small, single-slot card. Why isn't it?
            Because Nvidia is clearly re-using the same basic cooler design on the GTX 1060, and even then the card is still somewhat shorter than the other reference cards.

            You might want to watch your tone when insulting the power efficiency, considering the supposedly "power efficient" RX 480 often draws about the same amount of power as the GTX 1070 (not the 1060) while delivering substantially lower performance. I have also seen zero evidence that anybody is building a single-slot version of the RX 480, for that matter.



            • #16
              Originally posted by Passso:
              Well, I play at 1080p so 6GB is overkill anyway. And SLI missing... no comment, it's useless.
              Well, it's fine to mention that in case someone cares, regardless of OS.

              So performance #1 and power consumption #2 will decide the winner for my desktop.
              You can't have the winner on both points, so it seems nothing will win on your desktop. For Linux, due to drivers and bad game ports, you can guess that already: the GTX 1060 will perform better, but it will use more power too.
              Last edited by dungeon; 07 July 2016, 11:21 AM.



              • #17
                Originally posted by dungeon:
                You can't have the winner on both points, so it seems nothing will win on your desktop. For Linux, due to drivers and bad game ports, you can guess that already: the GTX 1060 will perform better, but it will use more power too.
                It's physically impossible for the 1060 to use more power than the 480, since the 480 is already at the limit of what PCI-SIG allows.
                Theoretically custom designs could come with an 8-pin connector and use over 150W, but since the 1060 is only rated at 120W, I think there's plenty of headroom to make an 8-pin unnecessary. At the same time, I'm pretty sure a few manufacturers will slap an 8-pin connector on there, just because they can.
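
The power-budget arithmetic behind this post can be sketched in a few lines. The connector limits below are the PCI-SIG ones (75W from the x16 slot, 75W from a 6-pin, 150W from an 8-pin); the TDP figure is the one cited in the thread:

```python
# Power-budget sketch based on PCI-SIG connector limits and the
# board-power figures cited in this thread.
PCIE_SLOT_W = 75    # max draw through the x16 slot
SIX_PIN_W = 75      # max draw through one 6-pin connector
EIGHT_PIN_W = 150   # max draw through one 8-pin connector

board_limit = PCIE_SLOT_W + SIX_PIN_W  # 150 W: slot + 6-pin (RX 480 reference config)
gtx1060_tdp = 120                      # NVIDIA's rated board power for the GTX 1060

headroom = board_limit - gtx1060_tdp   # margin left before an 8-pin would be needed
print(headroom)  # 30
```

Which is why a 6-pin-only reference design is plausible at 120W, while an 8-pin on custom boards is purely optional headroom.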



                • #18
                  The RX 480 GPU is rated at 110W, plus 40W for the rest of the board including VRAM, which is 150W in total. RX 480 cards equipped with 8GB of VRAM are clocked higher (8GHz) than the 4GB models (7GHz)... only the 8GB models exceed that 150W in certain scenarios, not the 4GB models, which are actually not affected. You see, it is not the RX 480 GPU that is the problem, but the extra and higher-clocked memory on the board.

                  But what I meant there is real-world system power usage on Linux... due to the better-performing NVIDIA driver and the games used, I expect a system equipped with a GTX 1060 to use the same or more watts than one with an RX 480.

                  Here we have a picture of the marketing optical illusion, where the bars are bigger than the numbers. So they claim 15% more performance over the RX 480 (the "MUCH FASTER THAN AN RX 480" actually means a 15%-faster claim), 24% more VR performance, and 43% better power efficiency. Based on that, I claim that on Linux it will be a 30% performance difference with nearly the same power efficiency.
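
As a rough sanity check on those marketing numbers: taking the claimed +15% performance at the two cards' rated board powers (claims and rated limits, not measurements) does land close to the quoted efficiency figure:

```python
# Perf-per-watt sketch from the numbers cited above (claims, not measurements).
rx480_perf = 1.00     # RX 480 performance normalized to 1.0
rx480_power = 150     # W: slot + 6-pin board limit

gtx1060_perf = 1.15   # NVIDIA's claimed +15% performance
gtx1060_power = 120   # W: rated TDP

# Ratio of performance-per-watt figures, expressed as a gain.
gain = (gtx1060_perf / gtx1060_power) / (rx480_perf / rx480_power) - 1
print(f"{gain:.0%}")  # 44%
```

So the "43% better efficiency" claim is essentially just the +15% performance claim divided by the power ratings; change the performance delta and the efficiency delta moves with it.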

                  Last edited by dungeon; 07 July 2016, 12:35 PM.



                  • #19
                    Originally posted by duby229:

                    Actually the difference is mostly in naming conventions for what a core is. Nvidia's convention is correct and AMD's isn't. An nvidia "core" is approximately equal to an AMD "compute unit".

                    AMD's definition for what a core is, is most definitely wrong.

                    EDIT: An nvidia core is an in-order scalar pipeline architecture; an AMD compute unit is an out-of-order scalar pipeline architecture. AMD's architecture certainly has greater potential to scale, but nvidia's is simpler and easier to program and optimize, and probably has much less latency.
                    The latency bit has certainly been proven false. And how would an in-order design have less latency than an out-of-order one anyway?



                    • #20
                      Originally posted by SaucyJack:

                      The latency bit has certainly been proven false. And how would an in-order design have less latency than an out-of-order one anyway?
                      I don't actually know how many stages either of these pipelines has, but theoretically an in-order pipeline can be as short as 6 stages. AMD's pipeline is certainly much longer than that. It's possible to hide latency behind caches and prefetching logic, but it is still there. Out-of-order pipelines have the potential for higher bandwidth, and hence better scalability, but they are much longer and more complicated.
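
The trade-off being described here can be put in toy numbers. The stage counts and issue widths below are illustrative assumptions (neither poster knows the real figures), not real hardware values:

```python
# Toy model of the trade-off: pipeline depth sets the latency to the first
# result, issue width sets steady-state throughput. All figures are
# illustrative assumptions, not real hardware numbers.
def cycles_to_retire(n_instrs: int, stages: int, issue_width: int) -> int:
    """Fill latency plus steady-state drain for independent instructions."""
    return stages + (n_instrs - 1) // issue_width

# A single instruction favors the shallow in-order pipe...
print(cycles_to_retire(1, stages=6, issue_width=1))      # 6
print(cycles_to_retire(1, stages=14, issue_width=4))     # 14

# ...but a long run of independent work favors the deep, wide one.
print(cycles_to_retire(1000, stages=6, issue_width=1))   # 1005
print(cycles_to_retire(1000, stages=14, issue_width=4))  # 263
```

That is the sense in which both posters can be partly right: the deeper pipeline really does have higher per-instruction latency, yet still wins on throughput once it is kept full.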
                      Last edited by duby229; 07 July 2016, 12:23 PM.

