NVIDIA Announces The GeForce GTX 1060, Linux Tests Happening


  • #21
    Originally posted by dungeon View Post
    The RX 480 GPU is rated at 110W, plus 40W for the rest of the board including VRAM, which is 150W in total. RX 480 models equipped with 8GB of VRAM are clocked higher, at 8GHz, than the 4GB models, which are clocked at 7GHz... only the 8GB models exceed that 150W in certain scenarios, not the 4GB models... the 4GB models are actually not affected. You see, it is not the RX 480 GPU that is the problem, but more memory plus higher-clocked memory on the board.

    But what I meant there is real-world system power usage on Linux... due to the better-performing NVIDIA driver and the games used, I expect a system equipped with a GTX 1060 will use the same or more watts than one with an RX 480.

    Here we have a picture of a marketing optical illusion, where the bars look bigger than the numbers say. So they claim 15% more performance over the RX 480, 24% more VR performance, and 43% better power efficiency. Based on that, I claim that on Linux it will be a 30% performance difference with nearly the same power efficiency.

    How biased is this graph, starting at 0.8 so that average people think it will be 2x faster, because visually the red bar is half the green one... What a shame, all this marketing $hit.
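The distortion being described can be put into numbers: truncating the y-axis at 0.8 makes a 15% gap look like nearly 2x. A quick sketch — the 1.15x figure is the claim from the slide as quoted above; the baseline and normalization are assumptions for illustration:

```python
# Hypothetical numbers from the slide: RX 480 normalized to 1.0,
# GTX 1060 claimed at ~1.15x. The chart's y-axis starts at 0.8.
baseline = 0.8    # where the truncated axis begins
rx480 = 1.00      # normalized RX 480 performance
gtx1060 = 1.15    # claimed relative GTX 1060 performance

actual_ratio = gtx1060 / rx480                            # what the numbers say
visual_ratio = (gtx1060 - baseline) / (rx480 - baseline)  # what the bar heights show

print(f"actual speedup: {actual_ratio:.2f}x")  # 1.15x
print(f"visual speedup: {visual_ratio:.2f}x")  # 1.75x -- looks almost double
```

With the axis cut at 0.8, the red bar's visible height is 0.2 and the green bar's is 0.35, so the bars show a 1.75x ratio for a 1.15x claim.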

    Comment


    • #22
      Originally posted by duby229 View Post

      I don't actually know how many stages either of these pipelines has, but theoretically an in-order pipeline can be as short as 6 stages. AMD's pipeline is certainly much longer than that. It's possible to hide latency behind caches and prefetching logic, but it is still there. Out-of-order pipelines have the potential for higher bandwidth, hence better scalability, but are much longer and more complicated.
      That doesn't make any sense at all. And NVIDIA GPUs have a delay of about 15-20ms (yes, that's milliseconds) over Radeon hardware. So the 'stages' don't seem to matter if AMD really has more of them. An out-of-order pipeline also doesn't stall all currently in-flight instructions when one has an issue, whereas an in-order pipeline will have a seizure until it's all sorted out.
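      The stall behavior being debated here can be illustrated with a toy scheduling model — this is a thought experiment, not a model of any real GPU, and the instruction stream and dependencies are invented:

```python
# Toy model of the stall argument: when one instruction stalls (e.g. a
# cache miss), an in-order pipeline blocks everything behind it, while
# an out-of-order core keeps executing independent instructions.
# Instructions are (name, depends_on) pairs; 'a' is stalled on memory.

instrs = [("a", None), ("b", "a"), ("c", None), ("d", None)]
stalled = {"a"}

def progress_in_order(instrs, stalled):
    """Everything behind the first stalled instruction waits."""
    done = []
    for name, dep in instrs:
        if name in stalled:
            break
        done.append(name)
    return done

def progress_out_of_order(instrs, stalled):
    """Only the stalled instruction and its dependents wait."""
    return [name for name, dep in instrs
            if name not in stalled and dep not in stalled]

print(progress_in_order(instrs, stalled))      # [] -- nothing gets past 'a'
print(progress_out_of_order(instrs, stalled))  # ['c', 'd'] -- independents proceed
```

      In the in-order case nothing behind the stalled instruction makes progress; out of order, only the dependent instruction ('b') is held up.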

      Comment


      • #23
        Originally posted by bridgman View Post

        Just curious, what do you think is wrong? We normally use "compute unit" and "core" interchangeably; isn't that what you want?



        I'm pretty sure GCN cores are short, in-order pipelines. Curious what you base the latency statement on?
        I've seen tons of documentation that describes stream processors as cores, but they aren't; the front end is at the compute unit level.


        Take a close look at this diagram: if it were an in-order pipeline, the SIMD units would be highly underutilized.

        Comment


        • #24
          Originally posted by Passso View Post

          How biased is this graph, starting at 0.8 so that average people think it will be 2x faster, because visually the red bar is half the green one... What a shame, all this marketing $hit.
          I said it is an optical illusion; if you look at the bars, you might think it really is that much faster, until you look at the numbers and figure out it's a 15% difference.

          Comment


          • #25
            Originally posted by SaucyJack View Post

            That doesn't make any sense at all. And NVIDIA GPUs have a delay of about 15-20ms (yes, that's milliseconds) over Radeon hardware. So the 'stages' don't seem to matter if AMD really has more of them. An out-of-order pipeline also doesn't stall all currently in-flight instructions when one has an issue, whereas an in-order pipeline will have a seizure until it's all sorted out.


            Of course there are advantages and disadvantages. I'm sure they've been thought out very carefully by both companies. I'm checking into the delay you mention, but my first guess is that it's probably display-logic related and not compute-logic related at all.

            Comment


            • #26
              Originally posted by bridgman View Post

              Just curious, what do you think is wrong? We normally use "compute unit" and "core" interchangeably; isn't that what you want?



              I'm pretty sure GCN cores are short, in-order pipelines. Curious what you base the latency statement on?
              Are you sure they are in-order? The diagram for a compute unit doesn't look at all like an in-order architecture. Look at the diagram in the post above this one and then compare it to an out-of-order architecture. They look very similar.

              Comment


              • #27
                Today's just the soft announcement for the GeForce GTX 1060 while it will begin shipping worldwide on 19 July. Not until that hard launch date does NVIDIA's embargo expire for being able to provide GTX 1080 benchmarks, but at least all of the technical details are fair game today as well as pictures/videos.
                Michael, you have a typo in the article text quoted above.

                Comment


                • #28
                  Unless it performs at least 2x as fast as the 480, there is no way I would ever buy it. OSS drivers, FTW.

                  Comment


                  • #29
                    Originally posted by siavashserver
                    I heard you like optical illusions, so here you go :)
                    Ha ha, marketing always does that; I don't claim AMD doesn't do it... I just point out what people can realistically expect. I hope the claimed 15% advantage is not measured against the slowest 4GB RX 480 tested, and not in some NVIDIA-optimized titles with overly tweaked release drivers that will break soon after.

                    Comment


                    • #30
                      Originally posted by Passso View Post
                      So this will be 480 VS 1060. At last a real battle, the winner will get my money.

                      Fight!

                      I don't think there should be any doubt that NVIDIA will win hands down with the closed drivers.
                      More than performance, it is a matter of driver choice.

                      Personally I'd go with the RX 480, since I'm thoroughly enjoying the Mesa drivers.
                      While they don't offer the best performance, they're still good enough, and I never have stability issues with them.
                      That is something I couldn't say about either fglrx or the NVIDIA closed drivers.

                      On Windows they should perform about equally, despite the NVIDIA marketing slides (yes, marketing lies...).
                      Last edited by sonnet; 07 July 2016, 01:10 PM.

                      Comment
