
NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening


  • #21
    Originally posted by duby229 View Post
Actually the difference is mostly in naming conventions for what a core is. NVIDIA's convention is correct and AMD's isn't. An NVIDIA "core" is approximately equal to an AMD "compute unit". AMD's definition of what a core is, is most definitely wrong.
    Just curious, what do you think is wrong ? We normally use "compute unit" and "core" interchangeably, isn't that what you want ?

    Originally posted by duby229 View Post
EDIT: An NVIDIA core is an in-order scalar pipeline architecture; an AMD compute unit is an out-of-order scalar pipeline architecture. AMD's architecture certainly has greater potential to scale, but NVIDIA's is simpler and easier to program and optimize, and probably has much less latency.
    I'm pretty sure GCN cores are short, in-order pipelines. Curious what you base the latency statement on ?

    Comment


    • #22
      Originally posted by dungeon View Post
The RX 480 GPU is rated at 110W, plus 40W for the rest of the board including VRAM, which is 150W in total. RX 480 cards with 8GB of VRAM have memory clocked higher, at 8GHz, than the 4GB models, which are clocked at 7GHz... only the 8GB models exceed that 150W in certain scenarios, not the 4GB models, which are not affected. You see, it is not the RX 480 GPU that is the problem, but the larger, higher-clocked memory on the board.

But what I meant there is real-world system power usage on Linux... due to the better-performing NVIDIA driver and the games used, I expect a system equipped with a GTX 1060 to use the same or more watts than one with an RX 480.

Here we have a picture of a marketing optical illusion, where the bars look bigger than the numbers say. They claim 15% more performance over the RX 480, 24% more VR performance and 43% better power efficiency. Based on that, I claim that on Linux it will be a 30% performance difference with nearly the same power efficiency.

How biased is this graph, starting at 0.8 so that average people think it will be 2x faster, because visually the red bar is half the green one... What a shame, all this marketing $hit.
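A quick sanity check on both complaints, using the figures quoted above as hypothetical inputs (RX 480 normalized to 1.0, GTX 1060 claimed ~15% faster, axis starting at 0.8):

```python
# How a bar chart whose axis starts at 0.8 exaggerates a 15% gap.
baseline_axis = 0.8
rx480, gtx1060 = 1.0, 1.15        # relative-performance figures from the slide

actual_ratio = gtx1060 / rx480
visual_ratio = (gtx1060 - baseline_axis) / (rx480 - baseline_axis)

print(f"actual speedup: {actual_ratio:.2f}x")              # 1.15x
print(f"apparent bar-height ratio: {visual_ratio:.2f}x")   # 1.75x

# The claimed 43% efficiency gain (efficiency = perf / power) would also
# imply the GTX 1060 draws only ~80% of the RX 480's power at 15% more perf.
power_ratio = actual_ratio / 1.43
print(f"implied power draw vs RX 480: {power_ratio:.0%}")
```

So the bar the eye sees is nearly twice as tall, exactly the "almost 2x" impression described above, even though the number printed on it is 1.15.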

      Comment


      • #23
        Originally posted by duby229 View Post

I don't actually know how many stages either of these pipelines have, but theoretically an in-order pipeline can be as short as 6 stages. AMD's pipeline is certainly much longer than that. It's possible to hide latency behind caches and prefetching logic, but it is still there. Out-of-order pipelines have the potential for higher bandwidth, hence better scalability, but are much longer and more complicated.
        That doesn't make any sense at all. And Nvidia GPUs have a delay of about 15-20ms (yes, that's milliseconds) over Radeon hardware. So the 'stages' don't seem to matter if AMD really has more of them. An out of order pipeline also doesn't stall all currently in flight instructions when one has an issue whereas an in order pipeline will have a seizure until it's all sorted out.
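The stall behaviour described here can be sketched with a toy issue model (my own simplification, not either vendor's actual pipeline): in-order issue cannot let younger instructions pass a stalled one, while an out-of-order window executes independent work during a cache miss.

```python
# Toy model: cycles to finish a short program containing one cache miss.
# Each entry is (name, dependency, latency_in_cycles).
MISS_LATENCY = 10  # hypothetical cache-miss latency

program = [("load", None,   MISS_LATENCY),
           ("use",  "load", 1),   # stalls waiting for the miss
           ("add",  None,   1),   # independent work
           ("mul",  None,   1)]   # independent work

def finish_time(out_of_order):
    done, prev_start = {}, -1
    for pos, (name, dep, lat) in enumerate(program):
        ready = done.get(dep, 0)
        if out_of_order:
            start = max(pos, ready)             # run as soon as operands arrive
        else:
            start = max(prev_start + 1, ready)  # cannot pass the older, stalled instr
        done[name] = start + lat
        prev_start = start
    return max(done.values())

print(finish_time(out_of_order=False))  # 13 cycles: add/mul wait behind the stalled use
print(finish_time(out_of_order=True))   # 11 cycles: add/mul execute during the miss
```

In-order hardware loses the cycles that the independent `add` and `mul` spend queued behind the stalled `use`; out-of-order hardware overlaps them with the miss, which is the "doesn't stall all in-flight instructions" point above.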

        Comment


        • #24
          Originally posted by bridgman View Post

          Just curious, what do you think is wrong ? We normally use "compute unit" and "core" interchangeably, isn't that what you want ?



          I'm pretty sure GCN cores are short, in-order pipelines. Curious what you base the latency statement on ?
I've seen tons of documentation describing stream processors as cores, but they aren't; the front end is at the compute unit.
          http://assets.hardwarezone.com/img/2013/05/gcn.png

Take a close look at this diagram: if it were an in-order pipeline, the SIMD units would be highly underutilized.

          Comment


          • #25
            Originally posted by Passso View Post

How biased is this graph, starting at 0.8 so that average people think it will be 2x faster, because visually the red bar is half the green one... What a shame, all this marketing $hit.
I said it is an optical illusion: if you look at the bars you might think it is really that much faster, until you look at the numbers and figure out it is a 15% difference.

            Comment


            • #26
              Originally posted by SaucyJack View Post

              That doesn't make any sense at all. And Nvidia GPUs have a delay of about 15-20ms (yes, that's milliseconds) over Radeon hardware. So the 'stages' don't seem to matter if AMD really has more of them. An out of order pipeline also doesn't stall all currently in flight instructions when one has an issue whereas an in order pipeline will have a seizure until it's all sorted out.
              http://assets.hardwarezone.com/img/2013/05/gcn.png

              Of course there are advantages and disadvantages. I'm sure they've been thought out very carefully by both companies. I'm checking into the delay you mention, but my first guess is it's probably display logic related and not at all compute logic related.

              Comment


              • #27
                Originally posted by bridgman View Post

                Just curious, what do you think is wrong ? We normally use "compute unit" and "core" interchangeably, isn't that what you want ?



                I'm pretty sure GCN cores are short, in-order pipelines. Curious what you base the latency statement on ?
Are you sure they are in order? The diagram for a compute unit doesn't look at all like an in-order architecture. Look at the diagram in the post above this one and then compare it to an out-of-order architecture. They look very similar.

                Comment


                • #28
                  Today's just the soft announcement for the GeForce GTX 1060 while it will begin shipping worldwide on 19 July. Not until that hard launch date does NVIDIA's embargo expire for being able to provide GTX 1080 benchmarks, but at least all of the technical details are fair game today as well as pictures/videos.
                  Michael you have a typo in the article shown above.

                  Comment


                  • #29
                    Originally posted by bridgman View Post
                    I'm pretty sure GCN cores are short, in-order pipelines.
                    I don't understand the meaning of in-order and out-of-order in the context of a SIMD processor.

Radeon GPUs cannot execute other instructions while waiting for data to arrive from memory, for example?
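For what it's worth, GPUs typically hide memory latency differently from out-of-order CPUs: the compute unit keeps many wavefronts in flight and simply issues from whichever one has its operands ready. A toy round-robin sketch (my own simplification, not GCN's actual scheduler; latency and instruction counts are made-up numbers):

```python
# Toy model: one SIMD unit shared by several wavefronts. When a wavefront
# stalls on a memory load, the scheduler issues from another ready
# wavefront, keeping the unit busy without any out-of-order execution.
MISS_LATENCY = 8  # hypothetical memory latency in cycles

def busy_cycles(num_wavefronts, instructions_each):
    ready_at = [0] * num_wavefronts                 # cycle each wavefront can next issue
    remaining = [instructions_each] * num_wavefronts
    cycle = busy = 0
    while any(remaining):
        runnable = [w for w in range(num_wavefronts)
                    if remaining[w] and ready_at[w] <= cycle]
        if runnable:
            w = runnable[0]                         # issue one instruction
            remaining[w] -= 1
            ready_at[w] = cycle + MISS_LATENCY      # pretend every instr is a load
            busy += 1
        cycle += 1
    return busy, cycle

for n in (1, 4, 8):
    busy, total = busy_cycles(n, instructions_each=4)
    print(f"{n} wavefronts: SIMD busy {busy}/{total} cycles")
```

With one wavefront the unit idles through every miss; with enough wavefronts it issues every cycle. Each individual wavefront still executes strictly in order, which is how a short in-order pipeline can keep the SIMD units fed.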

                    Comment


                    • #30
Unless it performs at least 2x as fast as the 480, there is no way I would ever buy it. OSS drivers, FTW.

                      Comment
