
RADV vs. NVIDIA Vulkan/OpenGL Performance For Serious Sam 2017


  • #31
    Originally posted by babai View Post
    At least at higher resolutions, both the 480 and the Fury run as fast as they should compared to their Nvidia counterparts.
    I expect them to run better in the future.

    Comment


    • #32
      Originally posted by oooverclocker View Post
      Because its computing power is about the same, 6 TFLOPS.
      Who cares? The 480 competes with the 1060 based on price.

      Comment


      • #33
        Originally posted by oooverclocker View Post
        A GPU should perform at its maximum theoretical performance. Otherwise many shaders don't get stressed, which means that the internal workload distribution is not optimal, or that the drivers or the applications are not optimal. An RX 480 has >25% more computing power than a GTX 1060. This theoretical calculation is proven practically by a few highly optimized applications and games.
        Are you serious?
        You are referring to theoretical computational performance, which any remotely competent person knows is not the same as rendering performance.

        Originally posted by oooverclocker View Post
        We have also seen Futuremark tables that show more overhead on AMD cards and so on.
        What kind of overhead?

        Originally posted by oooverclocker View Post
        There is no doubt that an RX 480 is the better hardware, but with worse drivers and games that aren't optimized to stress the high shader count and use all CPU cores to fire draw calls, it just performs on the same level on average.
        How is the RX 480 better hardware than the GTX 1060? Pascal is a much more advanced and mature architecture; no sane person would claim otherwise.

        And how would more CPU cores help feed the higher shader count? You obviously don't know a single thing about how GPUs or rendering work. Scheduling of work per cluster on the GPU is never handled by the game; this scheduling is handled by the GPU itself. Draw calls have nothing to do with which cluster does the computation on the GPU.

        Originally posted by oooverclocker View Post
        Which means sometimes it performs extremely bad in comparison to the GTX 1060, when the game engine is crap, and it performs much better than the GTX 1060 when the game engine is a masterpiece.
        Typical fanboys; only accepting the data that supports their favorite.

        Comment


        • #34
          Originally posted by bridgman View Post
          I don't think so. If you look at typical boost clocks for the two chips the difference is less than that.
          I hope that my calculation is right:
          RX 480: 1330 MHz x 2 x 2304 shaders = 6.13 TFLOPS FP32
          GTX 1060: 1900 MHz (quite flattering) x 2 x 1280 shaders = 4.86 TFLOPS FP32
          -> 6.13 / 4.86 = 126%
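          For anyone who wants to sanity-check this back-of-the-envelope math, here is a small sketch (clock speeds and shader counts taken straight from the post above; the factor of 2 assumes one FMA, i.e. two FP32 operations, per shader per cycle):

          ```python
          # Theoretical FP32 throughput: clock (Hz) x 2 FLOPs/cycle (FMA) x shader count.
          def fp32_tflops(clock_mhz: float, shaders: int) -> float:
              return clock_mhz * 1e6 * 2 * shaders / 1e12

          rx_480 = fp32_tflops(1330, 2304)    # boost clock, stream processors
          gtx_1060 = fp32_tflops(1900, 1280)  # flattering boost clock, CUDA cores

          print(f"RX 480:   {rx_480:.2f} TFLOPS")    # RX 480:   6.13 TFLOPS
          print(f"GTX 1060: {gtx_1060:.2f} TFLOPS")  # GTX 1060: 4.86 TFLOPS
          print(f"Ratio:    {rx_480 / gtx_1060:.0%}")  # Ratio:    126%
          ```

          Note this only measures peak ALU throughput; it says nothing about memory bandwidth, ROPs, or geometry throughput.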

          Originally posted by efikkan View Post
          How is RX 480 better hardware than GTX 1060? Pascal is a much more advanced and mature architecture, no sane person would claim otherwise.
          Originally posted by efikkan View Post
          Typical fanboys; only accepts the data that supports their favorite.

          Comment


          • #35
            Theoretical computational performance is just a number measuring the calculation throughput of a GPU when it is fully saturated with floating-point calculations and has no data dependencies. Rendering involves much more than just floating-point math. Any competent person knows this.

            Comment


            • #36
              Originally posted by oooverclocker View Post
              I hope that my calculation is right:
              Calculations look right, but the clock values I was thinking of were a bit further apart.

              Will check and make sure I'm not comparing OC'ed to non-OC'ed or something like that.

              Comment


              • #37
                Originally posted by oooverclocker View Post
                A GPU should perform at its maximum theoretical performance. Otherwise many shaders don't get stressed, which means that the internal workload distribution is not optimal, or that the drivers or the applications are not optimal. An RX 480 has >25% more computing power than a GTX 1060. This theoretical calculation is proven practically by a few highly optimized applications and games.
                The RX 480 has lower ROP throughput. That's why it shows lower FPS than the GTX 1060 in many games, IMHO.

                I've heard that Vega will get a revamped ROP design.

                Comment


                • #38
                  $750 for an R9 Fury X or $699 for a GTX 1080Ti which has 2-3x the performance on Linux. Tough call folks.

                  Yeah, I know, wait for Vega in June. By that time I will have had my Titan XP for nearly a year. Ryzen's looking good, but they need to bring the engineering team to the US and greatly improve their cycle times. The Chinese developers are penny wise and pound foolish. By the same logic they should not have hired Jim Keller but off-shored the entire Zen team. That way they'd have a completely non-competitive CPU, but at least it would have been cheap on a per-human-hour basis. And as stupid-account-tricks prove every time, that's what really matters, right?

                  Comment


                  • #39
                    Originally posted by deppman View Post
                    $750 for an R9 Fury X or $699 for a GTX 1080Ti which has 2-3x the performance on Linux. Tough call folks.
                    You should wait for this to be compared to amdgpu-pro and also nouveau... that should, I think, draw the whole partial picture.
                    Last edited by dungeon; 24 March 2017, 07:09 PM.

                    Comment


                    • #40
                      Also remember that Fury (as opposed to Fury X) is <$300 using those prices...

                      Going back to the earlier topic, it's never been clear to me why anyone would expect graphics performance to be determined by shader core throughput but not by any of the other subsystems that contribute to graphics rendering.

                      Comment
