
NVIDIA GeForce GTX 680 To RTX 2080 Ti Graphics/Compute Performance


  • #31
    Originally posted by Weasel View Post
    Ignorant people should take note of some facts. Six years and only a 4x increase, while increasing power draw (so this can't go on indefinitely), let that sink in. Moore's Law would say that every 18 months you get double the performance with double the transistors. At least it used to.

    If Moore's Law were followed, we'd see a 16x increase in performance over 6 years, not 4x, and at the same power draw, not way higher (even if it is more efficient per unit of performance).

    Yeah, definitely the evil crypto mining is the reason for this, and not that physical limits are being approached (and fast). /s
    It isn't even 4x, but more like 2.33x at its base. Let me explain...

    The GTX 680 had a $500 launch price and this RTX 2080 Ti FE is $1200.

    For your six-year then-and-now comparison, you should pick something that costs the same as the GTX 680 did and has similar power draw... so that would be the RTX 2070. Just ignore the naming; the cards have similar power draw for the same money, then and now, as nothing has changed there. Now let's see how just the base performance changed. Without benchmarks? Impossible! Nope, very possible.

    More than that, cards nowadays really go into a kind of exclusive tier of higher power draw, and they have also become pricier for those who want that.

    Without increasing die size and power draw, you will see the same difference in base performance as the difference between the processes these chips are made on. The GTX 680 is 28 nm and the RTX 2070 is 12 nm, so how big is the difference there? Just 2.33 times (28/12).

    So that is what it is. If Moore's Law and the 18-month story were correct, you would see these GPUs made on 3.5 nm nowadays, but that isn't the case.
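
    To make the arithmetic concrete, here is a minimal sketch in C (the numbers are only the ones quoted in this thread; nothing else is assumed) comparing the Moore's Law expectation with the 28 nm to 12 nm process ratio:

    ```c
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Moore's Law as quoted in the thread: doubling every 18 months. */
        double months = 6.0 * 12.0;                    /* six years  */
        double moore_factor = pow(2.0, months / 18.0); /* 2^4 = 16x  */

        /* Process-node ratio the post uses as a proxy for base scaling:
         * GTX 680 on 28 nm vs RTX 2070 on 12 nm. */
        double node_ratio = 28.0 / 12.0;               /* ~2.33x     */

        printf("Moore's Law expectation over 6 years: %.0fx\n", moore_factor);
        printf("28 nm / 12 nm process ratio: %.2fx\n", node_ratio);
        return 0;
    }
    ```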
    Last edited by dungeon; 09-21-2018, 10:51 AM.



    • #32
      Originally posted by TemplarGR View Post
      (...)
      ROCm provides a deeper integration with the CPU and higher capabilities than simply using OpenCL on the dGPU; that is why it needs atomics. If you just want mere OpenCL, you can use the binary OpenCL library from the AMD driver; it works side by side with the rest of the free drivers just fine. And it provides more or less the same thing as CUDA.
      ldesnogu,
      I already gave you some thoughts above about the first part of your comments.

      In relation to OpenCL, you are completely wrong about it!
      I ran a batch of tests with AMDGPU-PRO on the same hardware we have, and combining Kaveri APUs with the RX 500 series DOES NOT WORK!!
      On Ubuntu 18.04 it doesn't work at all on the dGPU (maybe a problem related to recent amdgpu changes that isn't yet reflected in the upstream drivers... it could be, I don't know)!
      Strangely enough, the iGPU sometimes works, sometimes doesn't, and crashes!
      I am talking about a CPU with PCIe 3.0 and a mobo with PCIe 3.0, with the AMD A88X (Bolton-D4) chipset.
      NOT even a simple Hello World works (see the sketch below).

      So you are completely wrong; AMDGPU-PRO also imposes strict requirements now.

      As for NVIDIA,
      I don't subscribe to the idea of being, let's say, a fan. If you had the capability to read, you would see above that I bought two AMD cards to test in our hardware setups, precisely because I wanted to use hardware supported by open source, or in this case AMD, but it doesn't work on our setups.
      It's not a fan issue, it's reality.
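
      For reference, a minimal OpenCL smoke test of the kind referred to above might look like the sketch below. It only asks the runtime to enumerate platforms and devices; on the setups described, even this level of probing failed on the dGPU:

      ```c
      /* Minimal OpenCL smoke test: enumerate platforms and devices.
       * Build (assuming an OpenCL SDK is installed):
       *   gcc hello_cl.c -lOpenCL -o hello_cl
       */
      #include <stdio.h>
      #include <CL/cl.h>

      int main(void) {
          cl_platform_id platforms[8];
          cl_uint num_platforms = 0;

          if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS) {
              fprintf(stderr, "clGetPlatformIDs failed\n");
              return 1;
          }

          for (cl_uint i = 0; i < num_platforms; ++i) {
              char name[256];
              clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                                sizeof(name), name, NULL);
              printf("Platform %u: %s\n", i, name);

              cl_device_id devices[8];
              cl_uint num_devices = 0;
              if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL,
                                 8, devices, &num_devices) != CL_SUCCESS)
                  continue;

              for (cl_uint j = 0; j < num_devices; ++j) {
                  char dev_name[256];
                  clGetDeviceInfo(devices[j], CL_DEVICE_NAME,
                                  sizeof(dev_name), dev_name, NULL);
                  printf("  Device %u: %s\n", j, dev_name);
              }
          }
          return 0;
      }
      ```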



      • #33
        Originally posted by Weasel View Post
        Ignorant people should take note of some facts. Six years and only a 4x increase, while increasing power draw (so this can't go on indefinitely), let that sink in. Moore's Law would say that every 18 months you get double the performance with double the transistors. At least it used to.

        If Moore's Law were followed, we'd see a 16x increase in performance over 6 years, not 4x, and at the same power draw, not way higher (even if it is more efficient per unit of performance).
        Yeah, Intel was quite good at brainwashing people by calling something a law that obviously wasn't one. The laws of physics (the real ones) have just struck back: nothing increases indefinitely, not even power efficiency.

        OTOH, it can be argued that NVIDIA could have had a faster card in current games by not adding fancy new features that are currently unused, and using that die space for traditional stuff. But they are innovating, and IMHO that is more important than getting more FPS in existing games.



        • #34
          Originally posted by tuxd3v View Post

          That's because you own an NVIDIA card.
          NVIDIA is like: insert the card, install drivers, install the CUDA toolkit, and you are done...
          You don't even need to worry about PCIe atomics, PCIe revision, CPU, anything; they work anyway.

          If you own an AMD card like the RX 400/500 series and you want GPU computing... this will be your worst nightmare (on machines that don't support PCIe atomics).

          My comment was related to GPU computing only; we don't play games with our working cards.
          But at that level I think it would be OK, and in fact the AMD hardware, like the Sapphire cards I bought for testing, is very good quality, better than the NVIDIA cards we own, or at least better than a lot of them.
          One of the best designs, if not the best, I have seen so far, but with very strict requirements for GPU computing.
          Are you guys using Intel Z-series motherboards that cause that with AMD GPUs? Just asking in case I come across a Polaris card or something to tinker with in the future. I would check out an RX card, but I have limited to no funds at the moment.
          Last edited by creative; 09-21-2018, 02:50 PM.



          • #35
            Originally posted by ldesnogu View Post
            These words usually are enough to ignore the rest of a post.


            And this also is enough.

            Are you a troll or just a hater? Can't you just explain politely why people are wrong?
            I did. The only thing these blatant TROLLS, who are known to be trolls on this forum btw, have to do to understand that they are wrong is actually visit the ROCm webpage and read about it... ROCm is not just an OpenCL implementation; OpenCL is just a subset of what it does. I am not going to be polite to people who are here just to shill for a certain company, sorry.



            • #36
              Originally posted by creative View Post

              Are you guys using Intel Z-series motherboards that cause that with AMD GPUs? Just asking in case I come across a Polaris card or something to tinker with in the future. I would check out an RX card, but I have limited to no funds at the moment.
              No.
              On Linux, to use the ROCm project for GPU computing you need to meet very strict hardware requirements.

              The RX 400/500 series is one of those cards that need PCIe atomics, so processors and mobos that support "only" PCIe 3.0 aren't enough.
              These cards require PCIe 3.0, but with PCIe atomic operations support (on the CPU and the motherboard).

              If you own a mobo or a CPU that doesn't support PCIe atomics, you are out of luck.
              But I was talking ONLY about GPU computing with ROCm; for graphics it's another story (I don't game on these cards).
              They are very well positioned in the market; AMD could be making a lot of money with them for QA GPU computing environments, but with these limitations, I don't know.
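
              If you want to check whether a given PCIe function advertises AtomicOp support, the relevant bits live in the Device Capabilities 2 register of the PCI Express capability. The rough sketch below (not a vetted tool; offsets follow the PCIe spec, and it needs root to read the config space from sysfs) prints them; reasonably recent `lspci -vv` shows the same bits as AtomicOpsCap:

              ```c
              /* Rough sketch: print a PCIe function's AtomicOp capability bits
               * from the Device Capabilities 2 register.
               * Needs root, since sysfs hides config space past 64 bytes otherwise.
               * Usage: ./atomics_check /sys/bus/pci/devices/0000:01:00.0/config
               */
              #include <stdio.h>
              #include <stdint.h>

              int main(int argc, char **argv) {
                  if (argc != 2) {
                      fprintf(stderr, "usage: %s <pci config space file>\n", argv[0]);
                      return 1;
                  }
                  FILE *f = fopen(argv[1], "rb");
                  if (!f) { perror("fopen"); return 1; }

                  uint8_t cfg[256] = {0};
                  size_t n = fread(cfg, 1, sizeof(cfg), f);
                  fclose(f);
                  if (n < sizeof(cfg)) {
                      fprintf(stderr, "short read (%zu bytes); run as root?\n", n);
                      return 1;
                  }

                  /* Walk the capability list starting at offset 0x34; the
                   * PCI Express capability has ID 0x10. Stay within the
                   * 256-byte standard config space. */
                  uint8_t pos = cfg[0x34];
                  while (pos && pos < 0xD8) {
                      if (cfg[pos] == 0x10) {
                          /* Device Capabilities 2 sits 0x24 bytes in. */
                          uint32_t devcap2 = (uint32_t)cfg[pos + 0x24]
                                           | (uint32_t)cfg[pos + 0x25] << 8
                                           | (uint32_t)cfg[pos + 0x26] << 16
                                           | (uint32_t)cfg[pos + 0x27] << 24;
                          printf("AtomicOp routing:          %s\n",
                                 devcap2 & (1u << 6) ? "yes" : "no");
                          printf("32-bit AtomicOp completer: %s\n",
                                 devcap2 & (1u << 7) ? "yes" : "no");
                          printf("64-bit AtomicOp completer: %s\n",
                                 devcap2 & (1u << 8) ? "yes" : "no");
                          printf("128-bit CAS completer:     %s\n",
                                 devcap2 & (1u << 9) ? "yes" : "no");
                          return 0;
                      }
                      pos = cfg[pos + 1]; /* next capability pointer */
                  }
                  fprintf(stderr, "PCI Express capability not found\n");
                  return 1;
              }
              ```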



              • #37
                Originally posted by TemplarGR View Post

                I did. The only thing these blatant TROLLS, who are known to be trolls on this forum btw, have to do to understand that they are wrong is actually visit the ROCm webpage and read about it... ROCm is not just an OpenCL implementation; OpenCL is just a subset of what it does. I am not going to be polite to people who are here just to shill for a certain company, sorry.
                The question is not whether ROCm is more than OpenCL; the thing is that people buy cards and want support!
                Is this too much for you to understand?

                I don't care if ROCm does ten thousand things.
                When I buy a card, I want support.
                It could be in the form of amdgpu-pro or ROCm; you have those options.

                Neither works with hardware bought at the beginning of 2016 and graphics cards bought in 2018... that's the problem!

                I hope you are able to understand the problem now...
                If not, then instead of calling people names, just read it again, and again, until you realise what people are talking about.



                • #38
                  Originally posted by Weasel View Post
                  Moore's Law would say that every 18 months you get double the performance with double the transistors.
                  Moore's Law only talks about the increasing number of transistors; it says nothing about performance.



                  • #39
                    Originally posted by LinAGKar View Post
                    Moore's Law only talks about the increasing number of transistors; it says nothing about performance.
                    For a GPU, doubling the transistors gives roughly double the performance, since the workloads are embarrassingly parallel. That's why GPUs have also scaled so much better than CPUs over the past decade or so, except for the last few years.
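
                    As a back-of-the-envelope illustration of how shader count tracks throughput (spec-sheet figures, not benchmarks; peak FP32 rate modeled as 2 FLOPs per shader per clock via FMA), this sketch lands close to the 4x figure quoted earlier in the thread:

                    ```c
                    #include <stdio.h>

                    /* Peak FP32 throughput model: 2 FLOPs (one FMA) per shader per cycle. */
                    static double tflops(double shaders, double clock_ghz) {
                        return 2.0 * shaders * clock_ghz / 1000.0;
                    }

                    int main(void) {
                        /* Shader counts and boost clocks are public spec-sheet figures. */
                        double gtx680    = tflops(1536, 1.058); /* GTX 680     */
                        double rtx2080ti = tflops(4352, 1.545); /* RTX 2080 Ti */

                        printf("GTX 680:     ~%.1f TFLOPS\n", gtx680);
                        printf("RTX 2080 Ti: ~%.1f TFLOPS\n", rtx2080ti);
                        printf("Ratio:       ~%.1fx\n", rtx2080ti / gtx680);
                        return 0;
                    }
                    ```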

