NVIDIA Announces The GeForce RTX 40 Series With Much Better Ray-Tracing Performance


  • #11
    Originally posted by Anux View Post
    It sounds amazing, but I found one thing a little suspicious: it completely changed textures and objects. I don't think that was automatic; it looks more like someone put a lot of work into it.
    I think they said it will be a mod pack that works together with some Ada RTX features... So yes, modders need to come in and do some work to create the mod pack. Which is fine, IMO.

    I also liked the massive improvements to Flight Simulator. That was impressive, because originally that game ran pretty badly.



    • #12
      If I understood that correctly, the improvement comes from DLSS 3, which introduces motion interpolation, so the CPU has fewer frames to render.

      Let's see how this looks in real games; I don't like the effect on modern TVs. Maybe it's done better here: if they take some data from the game engine, it could be pretty accurate.
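      Speculating a bit on the "data from the game engine" point, here is a minimal sketch of what motion-vector-based interpolation could look like, assuming the engine supplies per-pixel motion vectors. This is my own illustration, not NVIDIA's actual DLSS 3 pipeline:

      import numpy as np

      # Hypothetical sketch: warp the previous frame a fraction t along
      # engine-supplied per-pixel motion vectors to guess an in-between frame.
      def interpolate_frame(prev_frame, motion_vectors, t=0.5):
          """prev_frame: (H, W, 3) image; motion_vectors: (H, W, 2) per-pixel
          (dx, dy) offsets toward the next frame; t: blend point in [0, 1]."""
          h, w, _ = prev_frame.shape
          ys, xs = np.mgrid[0:h, 0:w]
          dst_x = np.clip(np.rint(xs + t * motion_vectors[..., 0]).astype(int), 0, w - 1)
          dst_y = np.clip(np.rint(ys + t * motion_vectors[..., 1]).astype(int), 0, h - 1)
          out = np.zeros_like(prev_frame)
          out[dst_y, dst_x] = prev_frame[ys, xs]  # forward splat; uncovered pixels stay empty
          return out

      The pixels the splat leaves empty are exactly the newly revealed content that no interpolation can guess, which is where real rendering would still be needed.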



      • #13
        Key points:
        • DLSS 3.0 is fantastic, though proprietary.
        • Pricing is just bad.
        • Two 4080 SKUs with different shader counts? Looks like NVIDIA decided to charge top dollar for what should have been the RTX 4070 Ti. Let's see what RDNA 3.0 brings, because this is just ugly.
        • I expect RDNA 3.0 to reach the RTRT performance of the RTX 30 series, which again means NVIDIA will hold the performance crown for RT-heavy games for the next two years.
        • Looks like we've reached the point where the laws of physics no longer allow more performance at the same power envelope, which is really, really sad.
        Last edited by birdie; 20 September 2022, 12:41 PM.



        • #14
          Originally posted by Anux View Post
          If I understood that correctly, the improvement comes from DLSS 3, which introduces motion interpolation, so the CPU has fewer frames to render.

          Let's see how this looks in real games; I don't like the effect on modern TVs. Maybe it's done better here: if they take some data from the game engine, it could be pretty accurate.
          It's not just motion interpolation: DLSS 3.0 renders almost-full frames without using the classic GPU/CPU rendering path, which results in an enormous performance increase and power savings. I'm not sure the tech will work for all games.

          Why "almost"? Because in motion the image changes and there's new content (not previously seen on screen) that has to be rendered with the GPU/CPU anyway. But depending on how good DLSS 3.0 is, you only have to properly render the fraction of the frame that has changed between DLSS interpolations. In games with little motion, that means you can rely almost entirely on DLSS 3.0 instead of computing frames with the GPU/CPU.
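          To make that concrete, a toy sketch of the "render only what changed" idea (my own construction, not DLSS internals): keep the cheap interpolated guess everywhere and spend real rendering only on the disoccluded pixels, so the cost scales with how much of the frame actually changed:

          import numpy as np

          # Hypothetical sketch: compose an interpolated guess with properly
          # rendered pixels only where the guess cannot be trusted.
          def compose_frame(interpolated, disocclusion_mask, render_masked):
              """interpolated: (H, W, 3) guessed frame; disocclusion_mask: (H, W)
              bool, True where content wasn't visible in earlier frames;
              render_masked: callable returning true colors for the masked pixels."""
              out = interpolated.copy()
              out[disocclusion_mask] = render_masked(disocclusion_mask)  # expensive path
              rendered_fraction = disocclusion_mask.mean()  # share of frame truly rendered
              return out, rendered_fraction

          With little motion, rendered_fraction approaches zero and nearly the whole frame comes from interpolation, which is where the performance and power savings would come from.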

          DLSS 3.0 looks like alien tech: I'd heard about it before, but no one imagined anyone could make it work.

          As much as people here hate NVIDIA (for reasons I personally don't understand at all, because they are absolutely illogical; let me repeat once again: NVIDIA does not owe anyone Linux support or open-source Linux drivers), the company basically invented the modern game graphics pipeline, from programmable shaders to RTX/RTRT.
          Last edited by birdie; 20 September 2022, 12:54 PM.



          • #15
            Originally posted by birdie View Post

            As much as people here hate NVIDIA (for reasons I personally don't understand at all, because they are absolutely illogical; let me repeat once again: NVIDIA does not owe anyone Linux support or open-source Linux drivers), the company basically invented the modern game graphics pipeline, from programmable shaders to RTX/RTRT.
            They don't, but people also don't have to like their stance towards the OSS community, especially when they stand to gain so much from allowing contributions from external sources (see, for instance, the recent massive Vulkan draw-call throughput improvements for the AMD and Intel Mesa Vulkan drivers by the same contributor).



            • #16
              Wow, 1400 TOPS of tensor performance, compared to 320 from the RTX 3090 Ti. No wonder DLSS 3 only supports the RTX 40 series. I remember just a few years ago NVIDIA was demoing an AI rendering an urban cityscape; now they have an AI rendering frames in games.
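              For the record, that's roughly a 4.4x jump. A quick check of the arithmetic, using the numbers quoted above:

              # Tensor throughput quoted above: RTX 4090 vs RTX 3090 Ti.
              ada_tops, ampere_tops = 1400, 320
              print(f"{ada_tops / ampere_tops:.2f}x")  # ~4.38x in one generation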



              I wouldn’t be shocked if games in 2030 are rendered entirely by Tensor cores.



              • #17
                Originally posted by birdie View Post
                As much as people here hate NVIDIA (for reasons I personally don't understand at all, because they are absolutely illogical; let me repeat once again: NVIDIA does not owe anyone Linux support or open-source Linux drivers), the company basically invented the modern game graphics pipeline, from programmable shaders to RTX/RTRT.
                Holy wars begin in 3... 2... 1...





                • #18
                  Originally posted by fong38 View Post

                  They don't, but people also don't have to like their stance towards the OSS community, especially when they stand to gain so much from allowing contributions from external sources (see, for instance, the recent massive Vulkan draw-call throughput improvements for the AMD and Intel Mesa Vulkan drivers by the same contributor).
                  I'm fine with "not liking" NVIDIA. On these forums, however, you'll read "F you, NVIDIA" under every news piece related to the company.

                  I don't like a single company in the world, BTW, and I certainly like neither AMD nor Intel. They have enjoyed the x86 ISA duopoly for over two decades now; how that is totally fine and legal in this world, I've no idea. Likewise, NVIDIA and AMD have enjoyed a GPU-market duopoly for two decades as well; only in the early 00s did they fiercely compete on pricing and performance, and new graphics architectures came out a lot more frequently than they do nowadays. And around four years ago (Radeon VII), both companies coincidentally started copying each other's pricing, which, according to AMD fans, is totally fine as well. Everything AMD does has a justification; it's good, it's great. Everything Intel and NVIDIA do is evil/anti-competitive/anti-consumer.

                  Speaking of "the recent massive Vulkan draw-call throughput improvements": I've no idea what you're talking about. Yes, AMD invented the proprietary, Windows-only Mantle graphics API, which was used as a basis for Vulkan/D3D12, but NVIDIA swiftly adopted Vulkan for Pascal GPUs, which had been designed long before Vulkan 1.0 was released, and it worked beautifully on NVIDIA GPUs.

                  I don't know of a single OSS development over the past two decades that NVIDIA has benefited from, aside from their supercomputer contracts, but that's outside OSS per se. Sorry. I do know a ton of features NVIDIA has added to Linux, however.



                  • #19
                    Here's another point of concern (shamelessly stolen from the TPU forums):

                    Originally posted by AnotherReader
                    This will be the biggest difference between the flagship and the next fastest GPU in terms of SMX count that I can remember. The previous generations were like this:
                    Generation   Flagship SMX count   2nd-tier GPU SMX count   Ratio   Comments
                    Kepler       15                   12                       1.25    GTX 780 Ti vs GTX 780
                    Maxwell      24                   22                       1.09    Titan X vs GTX 980 Ti
                    Pascal       30                   28                       1.07    Titan Xp vs GTX 1080 Ti
                    Turing       72                   68                       1.06    RTX Titan vs RTX 2080 Ti
                    Ampere       84                   68                       1.24    RTX 3090 Ti vs RTX 3080 10 GB
                    Ada          128                  76                       1.68    RTX 4090 vs RTX 4080 16 GB
                    One can easily see how the 4080 16 GB stands out as the runt and a poor value.

                    Even if we go by die sizes for the actual 2nd tier full die, this generation is an outlier.
                    Generation   Flagship SMX   Flagship price (USD)   2nd-tier SMX   2nd-tier price (USD)   SMX ratio   Price ratio   Comments
                    Kepler       15             699                    8              330                    1.88        2.12          GTX 780 Ti vs GTX 680
                    Maxwell      24             649                    16             499                    1.50        1.30          980 Ti vs GTX 980
                    Pascal       30             699                    20             499                    1.50        1.75          1080 Ti vs GTX 1080
                    Turing       72             1200                   48             699                    1.50        2.08          2080 Ti vs RTX 2080 Super
                    Ampere       84             1999                   48             599                    1.75        3.34          RTX 3090 Ti vs RTX 3070 Ti
                    Ada          128            1599                   76             1199                   1.68        1.33          RTX 4090 vs RTX 4080 16 GB
                    Now it looks better for the 4080 16 GB, until you consider the price, which is outside the historical norm for the lower-tier GPUs. Only the GTX 980 was priced this close to the flagship, and that was an atypical generation in many ways.
                    I don't care too much because I've never bought cards in these tiers, but it's still not looking good at all.
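                    If you want to replay the numbers, here's a tiny script recomputing both ratios from the second table (values copied straight from it):

                    # Recompute SMX and price ratios from the table above.
                    gens = {
                        # generation: (flagship_smx, flagship_usd, tier2_smx, tier2_usd)
                        "Kepler":  (15,   699,  8,  330),
                        "Maxwell": (24,   649, 16,  499),
                        "Pascal":  (30,   699, 20,  499),
                        "Turing":  (72,  1200, 48,  699),
                        "Ampere":  (84,  1999, 48,  599),
                        "Ada":     (128, 1599, 76, 1199),
                    }
                    for gen, (fs, fp, ts, tp) in gens.items():
                        print(f"{gen:8} SMX ratio {fs / ts:.2f}   price ratio {fp / tp:.2f}")
                    # Ada is the outlier: its price ratio (1.33) sits far below its
                    # SMX ratio (1.68), i.e. the 2nd-tier card is priced near the flagship.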



                    • #20
                      Originally posted by M@GOid View Post
                      Boy, did Jensen have a narcissistic photo for the video... "Look at me, I'm the best Taiwanese CEO, not that witch at the other company"...
                      Glad I am not the only one who looked at it and thought the exact same thing. At least Lisa smiles.

                      Do they teach this shitty "pretension face" in all those limp-dick "business" schools? "Look into my nostrils as I *appear* to be way up here, and I imagine you to be somewhere way down there." SMH.

                      Amending: A smile goes a long way.
                      Last edited by kozman; 20 September 2022, 02:59 PM.

