NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening


  • #1

    Phoronix: NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening

    One week after the launch of the Radeon RX 480, NVIDIA is lifting the lid this morning on the GeForce GTX 1060 Pascal graphics card, priced from $249 USD while delivering GeForce GTX 980 class performance. I have already been testing the GeForce GTX 1060 under Ubuntu Linux, but unfortunately that embargo doesn't expire today... In the meantime, here's the run-down on all of the technical details of the GTX 1060.

    http://www.phoronix.com/vr.php?view=23341

  • #2
    Awesome. Glad Nvidia got you a review sample before the review embargo lifts.

    I'm extremely happy with my custom GTX 1080 under Arch Linux, and I look forward to your review of the 1060. I may grab a custom version for a secondary system.

    • #3
      I am wondering why Nvidia can generally achieve equivalent performance with fewer ALUs than AMD.

      Is it because Nvidia has a better hardware architecture, or because Nvidia has a better DirectX/OpenGL compiler?

      • #4
        Wouldn't be surprised if this thing can beat a 980 Ti once overclocked, given that low TDP. It's also good that they went with 6GB of memory and not 4.

        • #5
          Originally posted by atomsymbol View Post
          I am wondering why Nvidia can generally achieve equivalent performance with fewer ALUs than AMD.

          Is it because Nvidia has a better hardware architecture, or because Nvidia has a better DirectX/OpenGL compiler?
          Usually the performance difference is related to the drivers.

          Nvidia's drivers are not as spec-compliant as AMD's, but a big share of developers use Nvidia, so it's a "works on my machine" situation, similar to WebKit-only websites that rely on its quirks and render incorrectly on Firefox...

          IIRC, I've read somewhere that Nvidia puts a lot of effort into the driver to optimize performance, replacing individual instructions in a game's shaders or even swapping entire shaders for customized versions, like when a new AAA game is released alongside a driver version "optimized" for that game.
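
          To make that concrete, here is a minimal sketch of the idea: a driver-side lookup table that swaps a recognized game shader for a hand-tuned replacement at compile time. The table, names, and hashing scheme here are assumptions for illustration, not Nvidia's actual driver internals.

              #include <cstdint>
              #include <iostream>
              #include <string>
              #include <unordered_map>

              // Hypothetical per-game profile table: shader fingerprint -> tuned version.
              static const std::unordered_map<uint64_t, std::string> kShaderOverrides = {
                  {0x1234abcdull, "/* hand-tuned replacement shader */"},
              };

              // FNV-1a as a stand-in fingerprint; a real driver would hash the bytecode.
              uint64_t HashShader(const std::string& src) {
                  uint64_t h = 1469598103934665603ull;
                  for (unsigned char c : src) { h ^= c; h *= 1099511628211ull; }
                  return h;
              }

              // At shader-compile time, prefer the profile's replacement if one exists.
              std::string SelectShader(const std::string& appSource) {
                  auto it = kShaderOverrides.find(HashShader(appSource));
                  return it != kShaderOverrides.end() ? it->second : appSource;
              }

              int main() {
                  // No override matches, so the game's own shader falls through unchanged.
                  std::cout << SelectShader("/* game's original shader */") << "\n";
              }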

          • #6
            Originally posted by atomsymbol View Post
            I am wondering why Nvidia can generally achieve equivalent performance with fewer ALUs than AMD.

            Is it because Nvidia has a better hardware architecture, or because Nvidia has a better DirectX/OpenGL compiler?
            Actually the difference is mostly in naming conventions for what a core is. Nvidia's convention is correct and AMD's isn't. An Nvidia "core" is approximately equal to an AMD "compute unit".

            AMD's definition of a core is most definitely wrong.

            EDIT: An Nvidia core is an in-order scalar pipeline architecture; an AMD compute unit is an out-of-order scalar pipeline architecture. AMD's architecture certainly has greater potential to scale, but Nvidia's is simpler and easier to program and optimize, and probably has much less latency.

            Last edited by duby229; 07-07-2016, 09:51 AM.
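
            Whichever naming convention one prefers, the headline numbers decompose the same way. A quick sketch using the public specs (GP106 in the GTX 1060: 10 SMs of 128 FP32 lanes; Polaris 10 in the RX 480: 36 CUs of 64 lanes):

                #include <iostream>

                int main() {
                    // GTX 1060: 10 SMs, each with 128 FP32 lanes ("CUDA cores").
                    const int nvSMs = 10, nvLanesPerSM = 128;
                    // RX 480: 36 CUs, each with 64 FP32 lanes ("stream processors").
                    const int amdCUs = 36, amdLanesPerCU = 64;

                    std::cout << "GTX 1060 marketed cores: " << nvSMs * nvLanesPerSM << "\n";   // 1280
                    std::cout << "RX 480 marketed cores:   " << amdCUs * amdLanesPerCU << "\n"; // 2304
                    // Counting only the units that fetch, decode, and schedule:
                    std::cout << "GTX 1060 SMs: " << nvSMs << ", RX 480 CUs: " << amdCUs << "\n";
                }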

            • #7
              Well, I put my money where my mouth is: as I said before, I went and bought an RX 480. AMD is much more open-source friendly and has been making very good progress lately. I have a GTX 780 for work-related reasons, and while I don't even need a new graphics card, I still bought the RX 480 because I want to vote with my wallet. That is what I value, and that's why I buy their products.

              Of course, if your only concern is gaming, then pick whatever card falls within your budget and performs best in the games you play.

              Still excited for the results, as I like comparing architectures and their strengths and weaknesses!

              Originally posted by duby229 View Post

              Actually the difference is mostly in naming conventions for what a core is. Nvidia's convention is correct and AMD's isn't. An Nvidia "core" is approximately equal to an AMD "compute unit".

              AMD's definition of a core is most definitely wrong.

              EDIT: An Nvidia core is an in-order scalar pipeline architecture; an AMD compute unit is an out-of-order scalar pipeline architecture. AMD's architecture certainly has greater potential to scale, but Nvidia's is simpler and easier to program and optimize, and probably has much less latency.
              I would argue that both definitions are "wrong", because by their logic I have a 64-core CPU (4 cores, each with 512-bit AVX instructions; 512/32 = 16 "cores" per core).
              Last edited by Oguz286; 07-07-2016, 09:59 AM. Reason: Explanation of the "16" cores
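
              Spelling out the lane arithmetic behind that analogy (a sketch; the 512-bit register width is the hypothetical from the comparison above):

                  #include <iostream>

                  int main() {
                      const int cores = 4;                    // physical CPU cores
                      const int regBits = 512;                // hypothetical 512-bit SIMD registers
                      const int laneBits = 32;                // one 32-bit value per lane
                      const int lanes = regBits / laneBits;   // 512 / 32 = 16 lanes per core
                      std::cout << cores * lanes << " \"cores\"\n";  // 4 * 16 = 64
                  }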

              • #8
                Originally posted by Oguz286 View Post
                Well, I put my money where my mouth is: as I said before, I went and bought an RX 480. AMD is much more open-source friendly and has been making very good progress lately. I have a GTX 780 for work-related reasons, and while I don't even need a new graphics card, I still bought the RX 480 because I want to vote with my wallet. That is what I value, and that's why I buy their products.

                Of course, if your only concern is gaming, then pick whatever card falls within your budget and performs best in the games you play.

                Still excited for the results, as I like comparing architectures and their strengths and weaknesses!

                I would argue that both definitions are "wrong", because by their logic I have a 64-core CPU (4 cores, each with 512-bit AVX instructions; 512/32 = 16 "cores" per core).
                I'm not sure what you mean. The RX 480, for example, has 36 compute units, each with 64 stream processors. Here's a diagram that shows the gist:
                http://images.anandtech.com/doci/4455/GCN-CU.png

                AMD calls the stream processors cores, but that's not accurate. From the diagram it's plainly obvious that it is the compute unit as a whole that implements the front end. Individually, the stream processors have no way to fetch, decode, or schedule loads. It takes a compute unit to be a core.
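
                As a toy model of that hierarchy (simplified from the GCN diagram, not a full hardware description): each CU shares one front end across four 16-lane SIMD units, and each 64-thread wavefront issues across a single SIMD over four cycles.

                    #include <iostream>

                    int main() {
                        // One GCN compute unit: a shared front end feeding four SIMD-16 units.
                        const int simdsPerCU = 4;
                        const int lanesPerSimd = 16;
                        const int lanesPerCU = simdsPerCU * lanesPerSimd;    // 64 "stream processors"
                        const int wavefront = 64;                            // threads per wavefront
                        const int cyclesPerWave = wavefront / lanesPerSimd;  // issued over 4 cycles

                        const int cus = 36;                                  // RX 480
                        std::cout << "lanes per CU: " << lanesPerCU << "\n";        // 64
                        std::cout << "total lanes:  " << cus * lanesPerCU << "\n";  // 2304
                        std::cout << "cycles/wave:  " << cyclesPerWave << "\n";     // 4
                        std::cout << "front ends:   " << cus << "\n";               // one per CU
                    }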

                • #9
                  Originally posted by Oguz286 View Post
                  Well, I put my money where my mouth is: as I said before, I went and bought an RX 480. AMD is much more open-source friendly and has been making very good progress lately. I have a GTX 780 for work-related reasons, and while I don't even need a new graphics card, I still bought the RX 480 because I want to vote with my wallet. That is what I value, and that's why I buy their products.

                  Of course, if your only concern is gaming, then pick whatever card falls within your budget and performs best in the games you play.

                  Still excited for the results, as I like comparing architectures and their strengths and weaknesses!

                  I would argue that both definitions are "wrong", because by their logic I have a 64-core CPU (4 cores, each with 512-bit AVX instructions; 512/32 = 16 "cores" per core).
                  I wrote a reply, but it's in the mod queue.

                  EDIT: Basically, the gist was that for AMD architectures it takes a compute unit to be a core.
                  http://images.anandtech.com/doci/4455/GCN-CUTh.png

                  Take a good look at this diagram: you can clearly see that the front end, fetch, and decode sit at the compute-unit level. The stream processors by themselves aren't capable of doing anything; the logic needed to function exists at the compute-unit level, which means the compute unit is the core.
                  Last edited by duby229; 07-07-2016, 10:29 AM.

                  • #10
                    So this will be the RX 480 vs. the GTX 1060. At last a real battle; the winner will get my money.

                    Fight!
