
NVIDIA GeForce GTX 680 To RTX 2080 Ti Graphics/Compute Performance


  • #21
    Originally posted by tuxd3v View Post

    Indeed, AMD is putting a lot of work into open-sourcing tools and giving support to projects like Torch, which I wanted (though it also works with CUDA).
    I bought two cards to check, and the AMD cards turned out to be very restrictive in their GPU compute requirements.

    For example,
    take the Polaris 10 RX 580: you put it in a PCIe 3.0 slot, with a PCIe 3.0 CPU, and it still DOESN'T work.
    It needs PCIe atomics on the processor and the chipset,
    so it's a gamble: when you go to buy a new motherboard, nobody tells you whether it has PCIe atomics or not... it's a lottery.

    Here at the company we now have two new cards (an RX 580 4GB and an RX 580 8GB), a very good design by Sapphire, the best ones...
    BUT they don't work!
    Now we need to resell them, because for GPU compute they need a PCIe 3.0 CPU plus a PCIe 3.0 motherboard with PCIe atomics... miserable.
    If anyone is interested, PM me; they are new cards.

    By contrast,
    you can put our GTX 1070 or 1080 even in a PCIe 1.1 slot, with a Core 2 Duo processor (machines from last decade), and they shine, with all functionality available and only a minor slowdown (it depends on what kind of compute workloads you run). On bandwidth they are limited by PCIe rev 1.1, but a lot of our work doesn't need much bandwidth, so PCIe 1.1 works fine; 2.0 or 3.0 is better, giving around ~10% more performance and slightly lower power consumption, but we are talking about a 3-4 watt difference...

    They are a secure investment.
    Moreover, an 8400 GS from last decade is still supported on Debian Stretch today, with CUDA Toolkit 6.5, and it works nicely with CUDA.

    What can I say?
    If you have the latest hardware, want to gamble (like we gambled with our two RX 580 cards, and lost), and want to support open source, then buy AMD.
    If you just want a secure investment, buy NVidia, because those cards will keep working for at least the next 10 years.
    Ok, we get it, you are just an Nvidia shill... If you don't have PCIe atomics, you can wait for ROCm to support Polaris without them, which I assume will happen eventually, or just use the binary OpenCL driver like I do on my Tonga, and you can use OpenCL just fine. Remind me when Nvidia has something like ROCm in its binary driver. You don't seem to understand what ROCm is.
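
    For what it's worth, newer pciutils versions expose the PCIe AtomicOps capability bits in lspci -vvv output, so you can at least check what a given CPU/chipset/GPU combination advertises before committing to a board. A rough Python sketch of such a check (run as root so DevCap2/DevCtl2 are readable; this is only an illustration, not an official ROCm tool, and the field names assume a recent pciutils):

    Code:
        # Rough sketch: scan `lspci -vvv` output for the PCIe AtomicOps
        # capability/control bits that ROCm's dGPU support depends on.
        import subprocess

        def atomics_report():
            out = subprocess.run(["lspci", "-vvv"],
                                 capture_output=True, text=True).stdout
            device = None
            for line in out.splitlines():
                if line and not line[0].isspace():
                    device = line.strip()   # e.g. "03:00.0 VGA compatible controller: ..."
                elif "AtomicOpsCap" in line or "AtomicOpsCtl" in line:
                    print(device)
                    print("   ", line.strip())

        if __name__ == "__main__":
            atomics_report()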

    Comment


    • #22
      Originally posted by dungeon View Post

      He is talking about ROCm platform requirements:

      https://rocm.github.io/ROCmInstall.h...rdware-support

      Basically you can't use it without a specific CPU/GPU/mobo combo. Not even everything you can buy brand new on the market today is supported, so *read carefully* right there.

      When tuxd3v said it DOESN'T work, he means specifically that ROCm doesn't work, because he didn't meet the particular hardware combo requirement; only specific, selected combos are supported.

      When tuxd3v said "its lotary" that means exactly that - as maybe you will be able to get to use it or maybe not, same like a linux or whatever else in its early days as ROCM is also relatively new project

      At the top of that page, don't miss reading "ROCm, a New Era in Open GPU Computing", as these New Eras usually don't support the past era much, or in some cases even the current one.

      Unlike CUDA, from a past era, which supports hardware from both past and current eras.
      You ignorant amateurs should understand what ROCm is supposed to be before you spew your Nvidia propaganda. ROCm != mere OpenCL. ROCm provides deeper integration with the CPU and higher capabilities than simply using OpenCL on the dGPU; that is why it needs atomics. If you just want mere OpenCL, you can use the binary OpenCL library from the AMD driver; it works side by side with the rest of the free drivers just fine. And it provides more or less the same thing as CUDA.
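
      For readers who just want plain OpenCL on such cards: the point is easy to verify by enumerating whatever the installed ICDs expose. A minimal sketch using the pyopencl package (assuming pyopencl and an OpenCL ICD loader are installed; the AMD binary OpenCL library shows up here even on systems ROCm rejects):

      Code:
          # Minimal sketch: list every OpenCL platform and device the installed
          # ICDs expose, with a couple of basic properties per device.
          import pyopencl as cl

          for platform in cl.get_platforms():
              print(platform.name, platform.version)
              for device in platform.get_devices():
                  print("   ", device.name,
                        "| compute units:", device.max_compute_units,
                        "| global mem:", device.global_mem_size // (1024 ** 2), "MiB")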

      Comment


      • #23
        Originally posted by TemplarGR View Post

        You ignorant amateurs should understand what ROCm is supposed to be before you spew your Nvidia propaganda. ROCm != mere OpenCL. ROCm provides deeper integration with the CPU and higher capabilities than simply using OpenCL on the dGPU; that is why it needs atomics. If you just want mere OpenCL, you can use the binary OpenCL library from the AMD driver; it works side by side with the rest of the free drivers just fine. And it provides more or less the same thing as CUDA.
        Yes, ROCm is not mere OpenCL and provides more or less the same thing as CUDA... but saying it provides the same thing as CUDA is also part of the problem, because people then think it will work on the same hardware.

        People being people, they think "OK, so I remove this green card I have, put in a red card, and now things will work the same way", but it is not like that. Nobody would complain if it worked that way, just by swapping cards, but it doesn't.
        Last edited by dungeon; 21 September 2018, 04:46 AM.

        Comment


        • #24
          Originally posted by TemplarGR View Post
          You ignorant amateurs
          These words are usually enough to make me ignore the rest of a post.

          Originally posted by TemplarGR View Post
          Ok we get it, you are just an Nvidia shill...
          And this is also enough.

          Are you a troll or just a hater? Can't you just explain politely why people are wrong?

          Comment


          • #25
            Originally posted by tildearrow View Post

            Phoronix has unveiled a new Turing card without the ray-tracing cores!
            It's the low-cost version, for only $999.99...

            Comment


            • #26
              Originally posted by wizard69 View Post
              In some ways this new card is impressive, but the power usage is just not acceptable. I know many don't care, but I'd buy a lower-power card (as in watts) simply to avoid the power bill.
              How is it unacceptable? It hasn't become any less efficient; the performance has gone up along with the increased power draw.

              You will never generate a big power bill from gaming, because we game for only a small fraction of the card's lifetime; nobody has time to play games all day. When you are watching movies, browsing the web, etc., the GPU sits in lower power states.

              The power bill becomes a concern when you do things like mining, which loads the GPU 24/7.

              Comment


              • #27
                Originally posted by humbug View Post
                How is it unacceptable? It hasn't become any less efficient; the performance has gone up along with the increased power draw.

                You will never generate a big power bill from gaming, because we game for only a small fraction of the card's lifetime; nobody has time to play games all day. When you are watching movies, browsing the web, etc., the GPU sits in lower power states.

                The power bill becomes a concern when you do things like mining, which loads the GPU 24/7.
                True. Not a big concern for gaming. Not sure why anyone would buy such an expensive card for gaming, though.

                My R9 Fury has the same TDP as the RTX 2080 Ti, at 260 W. When I was mining Ethereum it would draw 190 W per card if I let it run at full speed. That's 0.190 kW × 24 h × 30 days × 0.11 USD/kWh ≈ 15.05 USD per month on the power bill. A gamer would only see a fraction of that, because gaming PCs don't game 24/7.

                Let's assume you run a demanding game and average 220 W of power draw for the card alone (a 260 W TDP card doesn't average 260 W because of various bottlenecks, V-sync and such things). You play 2 hours per day, every day for a month, and electricity costs 0.11 USD per kWh: 0.220 kW × 2 h × 30 days × 0.11 USD/kWh ≈ 1.45 USD per month. No big deal, and that is quite a lot of hours of gaming.
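
                If you want to rerun that arithmetic with your own draw, hours and tariff, here is a quick Python sketch of the same calculation (the example figures are simply the ones quoted above):

                Code:
                    # Sketch of the monthly electricity-cost arithmetic used above.
                    def monthly_cost_usd(draw_watts, hours_per_day, usd_per_kwh, days=30):
                        kwh = draw_watts / 1000 * hours_per_day * days   # energy used per month
                        return kwh * usd_per_kwh

                    # Figures from this post: 24/7 mining at 190 W vs. 2 h/day of gaming at 220 W,
                    # both at 0.11 USD per kWh.
                    print(round(monthly_cost_usd(190, 24, 0.11), 2))  # ~15.05 USD
                    print(round(monthly_cost_usd(220, 2, 0.11), 2))   # ~1.45 USD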
                Last edited by Brisse; 21 September 2018, 06:53 AM.

                Comment


                • #28
                  The 2080 Ti power consumption at load is lower here than what Windows benchmarking people are seeing; there it's coming in at 405+ watts. It might be due to their open-bench setups or drivers, but this is the lowest power figure I've seen yet. It sucks a good bit more juice than the Vega 64 in Windows benchmarks.

                  Comment


                  • #29
                    Originally posted by humbug View Post
                    nobody has time to play games all day.
                    Clearly we live in different worlds, since that is pretty common in mine among game addicts.

                    Comment


                    • #30
                      The ignorant should take note of some facts: 6 years and only a 4x increase, while also increasing the power draw (so things can't go on indefinitely like this). Let that sink in. Moore's Law would say that every 18 months you get double the performance with double the transistors. At least it used to.

                      If Moore's Law were still being followed, we'd see a 16x increase in performance over those 6 years (four 18-month doublings), not 4x, and at the same power draw, not one that is way higher (even if it is more efficient per unit of performance).

                      Yeah, definitely the evil crypto mining is the reason for this, and not that physical limits are being approached (and fast). /s
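
                      The 16x figure is just that doubling arithmetic spelled out; a tiny Python check, taking the GTX 680 to RTX 2080 Ti span as roughly 6 years, as above:

                      Code:
                          # Sketch of the Moore's-Law-style expectation: one doubling every 18 months.
                          years = 6
                          doublings = years * 12 / 18        # 72 / 18 = 4 doublings
                          expected = 2 ** doublings          # 2^4 = 16x expected
                          print(f"{doublings:.0f} doublings -> {expected:.0f}x expected, vs ~4x measured")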

                      Comment
