NVIDIA Talks Up GeForce RTX 2080 Series Performance, But No Linux Mentions


  • #31
    Originally posted by [email protected] View Post
    Some YT channels are suspicious of this new card series. Nvidia is not making FPS comparisons against the older series; it is all about AI and ray tracing. This could mean a very thin performance advantage over the old series, worse power consumption, and new technologies that will mean squat for the majority of games on the market and most future releases.

    If they don't deliver a performance increase big enough to justify the high prices, these cards will not fly off the shelves... especially since they will not stay on top for two years like the 10XX series did, with NVIDIA and AMD 7nm cards coming next year anyway.
    It seems like NVIDIA is finally doing a form of async compute, which could make a significant difference. They can't just call it async compute though, as that would be admitting they didn't have it before.

    Beyond that, NVIDIA said in their presentation that they want to change what gets benchmarked... in their favor, of course!



    • #32
      Originally posted by cj.wijtmans View Post

      But I remember that's literally how Doom does it: it reuses frames and renders lower than the actual resolution. I could be remembering it wrong, though. There's an entire article explaining Doom's render process.
      Doom (like many other games) has temporal anti-aliasing, which is what I think you called "reusing frames". The multiple samples actually come from accumulating samples across frames over time, instead of computing multiple samples per pixel per frame. It shouldn't render lower than the actual resolution, but it might offer an option for that. Upscaling before the temporal AA step helps maintain better quality than plain upscaling, because there are still more samples than the actual rendering resolution has pixels. First and foremost, though, it is a method to improve quality, not to save render time; obviously it can work both ways.
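      The accumulation idea can be sketched in a few lines of Python (a rough illustration with made-up pixel values and noise levels, not Doom's actual pipeline): blending each new noisy frame into a history buffer with a small weight converges toward the true image without ever taking more than one sample per pixel per frame.

```python
import numpy as np

# Minimal sketch of temporal sample accumulation (hypothetical values):
# each "frame" is the true image plus per-frame noise (standing in for
# jittered samples), and the history buffer blends new frames in with an
# exponential weight alpha.
rng = np.random.default_rng(0)
true_image = np.full((4, 4), 0.5)   # what a perfect render would show
alpha = 0.1                         # blend weight for the newest frame

history = true_image + rng.normal(0, 0.1, true_image.shape)  # first frame
for _ in range(200):
    frame = true_image + rng.normal(0, 0.1, true_image.shape)  # new noisy frame
    history = (1 - alpha) * history + alpha * frame            # accumulate

# After many frames the accumulated buffer sits much closer to the true
# image than any single frame does, even though each frame contributed
# only one sample per pixel.
error = np.abs(history - true_image).mean()
```

      A single frame here has a mean absolute error around 0.08, while the accumulated buffer ends up several times lower; that difference is the "free" quality temporal accumulation buys.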



      • #33
        Skip this one; go for the next iteration. You know the ray-tracing stuff won't be good enough this time. They'll probably change a couple of instructions once they gain experience with this architecture and drop backwards support for this version. Kinda like how AMD no longer supports GCN 1.0.

        I'm not a fan of nVidia, at all. But here, they really are innovating. I doubt the performance will be incredible, but they are pushing forward.

        CUDA cores, ray-tracing cores, and Tensor cores, all on one chip. It's pretty cool. I doubt they will all work together, and async compute doesn't really work IRL (see AMD), but it's a whole lot of sandbox to play around with.

        Also, that chip has 18 billion transistors. 18 billion! On 14/12 nm. Pretty amazing.



        • #34
          Originally posted by fuzz View Post

          It seems like NVIDIA is finally doing a form of async compute, which could make a significant difference.
          Significant how? With AMD, async compute needs to be developer-enabled (which they never do), and gives, in best-case scenarios, maybe 3.5% FPS increases overall, but with an increase in latency volatility (i.e., frame times tend to vary more, even if on average more frames are delivered).

          We've all heard how much Vega was supposed to eventually take advantage of async compute scenarios, but it turns out those don't really exist in the wild.

          AdoredTV has a pretty good analysis of this TU chip. Interesting stuff.



          • #35
            Originally posted by AndyChow View Post
            Skip this one; go for the next iteration. You know the ray-tracing stuff won't be good enough this time. They'll probably change a couple of instructions once they gain experience with this architecture and drop backwards support for this version. Kinda like how AMD no longer supports GCN 1.0.

            I'm not a fan of nVidia, at all. But here, they really are innovating. I doubt the performance will be incredible, but they are pushing forward.

            CUDA cores, ray-tracing cores, and Tensor cores, all on one chip. It's pretty cool. I doubt they will all work together, and async compute doesn't really work IRL (see AMD), but it's a whole lot of sandbox to play around with.

            Also, that chip has 18 billion transistors. 18 billion! On 14/12 nm. Pretty amazing.
            Don't fall for the marketing speak. There are no ray-tracing cores on that GPU. They are just CUDA cores with FP16 capability, something Vega can already do. It makes no sense on the same process to add fixed-function/dedicated hardware, because that would limit the older architecture: you would lose performance in existing games even when not using the new features...

            What Nvidia did was improve their CUDA cores to perform FP16 and create a software framework for ray-tracing effects that runs on CUDA, for lazy developers to use. And they are marketing it as special hardware... Yeah, right...



            • #36
              Originally posted by TemplarGR View Post

              Don't fall for the marketing speak. There are no ray-tracing cores on that GPU. They are just CUDA cores with FP16 capability, something Vega can already do. It makes no sense on the same process to add fixed-function/dedicated hardware, because that would limit the older architecture: you would lose performance in existing games even when not using the new features...

              What Nvidia did was improve their CUDA cores to perform FP16 and create a software framework for ray-tracing effects that runs on CUDA, for lazy developers to use. And they are marketing it as special hardware... Yeah, right...
              Not just FP16, but also INT8 and INT4, and some sort of concurrent FP+INT pipeline. That was the "1.5" comment by Huang. I agree with you: none of this will (in my estimation) help game performance. But it does allow us to play around a bit and run some fun software. I think it's cool to have those features.
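              To make the INT8 point concrete, here is a rough Python/NumPy sketch (made-up weights and scale factor, nothing NVIDIA-specific) of the kind of quantized dot product those integer paths are built for: quantize FP32 values to int8 with a scale, do the multiply-accumulate in integers, then rescale once at the end.

```python
import numpy as np

# Illustration of why low-precision integer math (INT8) can stand in for
# FP32 in neural-network-style dot products. All values are hypothetical.
rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, 64).astype(np.float32)   # "weights"
x = rng.uniform(-1, 1, 64).astype(np.float32)   # "activations"

scale = 127.0                                   # map [-1, 1] onto int8 range
w_q = np.round(w * scale).astype(np.int8)
x_q = np.round(x * scale).astype(np.int8)

# Accumulate the products in int32 (wide accumulators, as dedicated
# integer units typically do), then undo the scaling once.
approx = w_q.astype(np.int32) @ x_q.astype(np.int32) / (scale * scale)
exact = float(w @ x)
```

              The quantized result lands within a small rounding error of the FP32 answer while every multiply-add was pure integer work, which is exactly the trade these INT8/INT4 paths are selling.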

              This is a new architecture. I figure with AMD not releasing anything this year, and probably nothing great next year, nVidia doesn't really have to care about performance. They can push their SDKs and start building some perceived exclusivity, meanwhile gaining tons of experience toward their future goals of self-driving cars and AI-assisted image processing.

              So yes, I believe they are losing performance in existing games compared to the 1080 Ti. But they gain it back with more transistors and more CUDA cores; basically, they brute-force over the loss. I've seen estimates that the generation-to-generation improvement is 13.5% between the TU104 and the GP104, the lowest in nVidia's history.



              • #37
                Originally posted by AndyChow View Post

                Significant how? With AMD, async compute needs to be developer-enabled (which they never do), and gives, in best-case scenarios, maybe 3.5% FPS increases overall, but with an increase in latency volatility (i.e., frame times tend to vary more, even if on average more frames are delivered).

                We've all heard how much Vega was supposed to eventually take advantage of async compute scenarios, but it turns out those don't really exist in the wild.

                AdoredTV has a pretty good analysis of this TU chip. Interesting stuff.
                I watched the same AdoredTV video you did... The point is that async may be possible, even if NVIDIA is lying to its users about how useful it will be out of the box. We just have no information on what's there. Of course, Adored's newer analysis was even more interesting (comparing the 2080 to the 1080, with lower memory bandwidth and HDR performance).



                • #38
                  I have a hunch they may have dropped the ball on the Linux support this time around....
                  Michael Larabel
                  http://www.michaellarabel.com/



                  • #39
                    Originally posted by Michael View Post
                    I have a hunch they may have dropped the ball on the Linux support this time around....
                    Driver-wise, performance-wise, or both?

