NVIDIA GeForce GTX 1070 On Linux: Testing With OpenGL, OpenCL, CUDA & Vulkan


  • #31
    Originally posted by bug77 View Post
    Misc fact of the day: AMD did not invent HBM. But don't let that faze you.
    Most of the internet seems to think it was initiated by AMD, then co-developed by AMD & Hynix (with input from UMC and others).

    Who do you think developed it?

    Comment


    • #32
      @Q

      You know that Nvidia already has an HBM2 card based on GP100? But with 16 GB VRAM and aimed at HPC it is very expensive: the Tesla P100. If Nvidia wants, there will be a new Titan/1080 Ti to beat a possible AMD Vega. The Fury (X) has HBM, but the difference to the GDDR5X used in the GTX 1080 is very small. And the VRAM is doubled: 8 GB compared to 4 GB (Fury). AMD wants to beat Nvidia on price now; no idea how successful that will be - the performance crown is a bit too high for AMD - most likely for Zen too. But I hope AMD releases even cheaper cards soon with HDMI 2.0a/HEVC Main 10 support - this could be a good selling point for HTPC systems connected to 4K TVs. The GTX 950 is a bit expensive if you mainly need multimedia features. But as Kaby Lake is delayed until next January, there might be demand.

      Comment


      • #33
        Originally posted by bridgman View Post

        Most of the internet seems to think it was initiated by AMD, then co-developed by AMD & Hynix (with input from UMC and others).

        Who do you think developed it?
        AFAIK it's the result of Hynix competing with Micron, with AMD as Hynix's "guinea pig". And I mean guinea pig in the sense of being the first adopter.
        Has AMD designed anything related to HBM?

        Comment


        • #34
          Originally posted by bug77 View Post
          Has AMD designed anything related to HBM?
          I'm pretty sure the answer is "yes". Remember that we have been driving memory and interface standards since the early ATI days.

          Comment


          • #35
            Originally posted by bridgman View Post

            I'm pretty sure the answer is "yes". Remember that we have been driving memory and interface standards since the early ATI days.
            Ah, interfaces. Somehow I was only thinking about the physical die. My bad.

            Comment


            • #36
              Originally posted by Passso View Post
              The 1080 is aimed at enthusiasts, like top-end i7s are. They just want the best available; 10% more perf for 50% more $ is not a problem for them.
              That's probably true. But if you have the money to blow on such a relatively minimal performance increase, you should have the money to get 2x 1070s instead (and get more performance in the end).

              The Titan series is NOT aimed at gaming or benchmarking; they are mainly used by professionals, and even a 1080 cannot beat them in complex scene calculations.
              A classic benchmark running at 60 fps may be misleading; when you render a really complex scene with a mass of effects at 4K and get 1 frame every 5 minutes, Titans work harder.
              Titans aren't meant for professionals either. That's what the Quadro series is for. Titans are supposed to be a mix between gaming and workstation graphics (and are priced accordingly), but they're not the best at either.

              Comment


              • #37
                Originally posted by schmidtbag View Post
                That's probably true. But if you have the money to blow on such a relatively minimal performance increase, you should have the money to get 2x 1070s instead (and get more performance in the end).
                Well, considering the inconsistent performance boost from SLI and the price of motherboards/adapters etc., it is better to get a single 1080 for $600 IMHO.
                But hey, real enthusiasts already have 2x 1080s.

                Titans aren't meant for professionals either. That's what the Quadro series is for. Titans are supposed to be a mix between gaming and workstation graphics (and are priced accordingly), but they're not the best at either.
                Most of my friends working in 3D for marketing and ads use Titans for live previews, but yes, the Quadro is considered "the real one".

                Comment


                • #38
                  Originally posted by Kano View Post
                  If Nvidia wants there will be a new Titan/1080 Ti to beat a possible AMD Vega.
                  Yeah. And if AMD wanted, they could release a GPU that beats Vega as well. But the point is that the 1080 already exists, so the somewhat higher-end market is occupied. And on the other hand they have a nice card that most people can afford and that meets most people's requirements: dual-slot (could even be single-slot), silent, decent gaming performance, the newest interfaces and features. And it's future-proof, since you can simply add more of them to work flawlessly under Vulkan or DX12 as one virtual device. The other problem is that if they released a 600 mm² chip now, there wouldn't be significantly faster GPUs for about 3 years. So we'll see what happens.

                  Originally posted by Kano View Post
                  the performance crown is a bit too high for AMD - most likely for Zen too.
                  When you look at the ~35% bigger die of GP104 and compare it with Polaris Ellesmere, you might notice that the estimated performance is pretty competitive - especially considering that the VRAM interface occupies a relatively big part of the smaller die, so bigger dice would scale better. I can't speak for Zen, but I'm just asking myself how you can write plainly wrong things so confidently - the 390X is only slightly behind the 980 Ti in the newest Windows games (438 mm² (AMD) vs 601 mm² (Nvidia)).
                  Last edited by oooverclocker; 15 June 2016, 12:28 PM.

                  Comment


                  • #39
                    Ok, a dual Polaris GPU might work fine with AotS, but not everybody plays it at 1080p. There are already unofficial benchmarks showing that the scaling is pretty bad at higher resolutions, where you would really need the extra fps. CF is certainly not future-proof; you always need profiles for DX11/OpenGL games. The 390X is more expensive than the RX 480, but why do you compare it against the 980 Ti when there is a much cheaper GTX 1070?

                    Comment


                    • #40
                      Originally posted by Kano View Post
                      Ok, a dual Polaris GPU might work fine with AotS, but not everybody plays it at 1080p.
                      I'm not referring to the official benchmark, which was an epic marketing fail in my opinion. What I'm talking about is pure Vulkan performance, which will hopefully soon become the only number that matters for Linux gaming - no DX11 or OpenGL. And theoretically Vulkan makes it possible to use arbitrary GPUs together without any CrossFireX or SLI - in theory, not even the brand matters.
                      Originally posted by Kano View Post
                      The 390X is more expensive than the RX 480, but why do you compare it against the 980 Ti when there is a much cheaper GTX 1070?
                      Both the 390X and 980 Ti are cards from the old 28 nm generation. It makes no sense to compare them to 14 nm FinFET or 16 nm cards in terms of price, because 28 nm is always more expensive for the same performance. But since the GTX 1080 is essentially a shrunk 980 Ti with higher clocks and hardly any further improvements, and the RX 480 is a bit smaller than a shrunk 390X with somewhat higher clocks, you can roughly estimate that its raw performance should land somewhere near the 1070, with its DX11 performance a little lower. So I'm quite sure a bit of overclocking may get it above the 1070 in well-optimized applications, and to about the same level in DX11.
                      But remember, I only look at Vulkan performance on Linux, so I don't care how it performs with DirectX or whatever, because I don't want to play on Windows anymore.
                      Last edited by oooverclocker; 15 June 2016, 09:14 PM.

                      Comment
