
NVIDIA Announces The GeForce RTX 40 Series With Much Better Ray-Tracing Performance


  • #51
    Originally posted by birdie View Post
    Key points:
    • DLSS 3.0 is fantastic though proprietary.
    • Pricing is just bad
    • Two 4080 SKUs with different numbers of shaders? Looks like NVIDIA decided to charge top dollar for what should have been the RTX 4070 Ti. Let's see what RDNA 3.0 brings, because this is just ugly.
    • I expect RDNA 3.0 to reach the RTRT performance of the RTX 30 series, which again means NVIDIA will hold the performance crown in heavy RT games for the next two years.
    • Looks like we've reached the point where the laws of physics no longer allow more performance within the same power envelope, which is really sad.
    Apparently the memory bus width is different between the two 4080 versions (192 vs 256 bit).
    I can honestly live with anything else, but I'm still not paying that kind of money for a video card.

    Comment


    • #52
      Originally posted by WannaBeOCer View Post

      Of course a troll, I can play too. This is the first generation that surpasses 40 TFlops.
      You put zero effort into reading my post. If you put more effort into reading my comment, I'll do the same.
      Last edited by Jabberwocky; 21 September 2022, 07:39 AM. Reason: typo the/to

      Comment


      • #53
        Originally posted by bug77 View Post

        Apparently the memory bus width is different between the two 4080 versions (192 vs 256 bit).
        I can honestly live with anything else, but I'm still not paying that kind of money for a video card.
        The most expensive part I've ever bought was a Ryzen 7 5800X CPU, which cost me $450. Never again.

        My GPUs' upper limit has always been $400. Looks like I'm either staying with the GTX 1660 Ti or moving to redder pastures.

        Comment


        • #54
          Originally posted by bug77 View Post

          Apparently the memory bus width is different between the two 4080 versions (192 vs 256 bit).
          I can honestly live with anything else, but I'm still not paying that kind of money for a video card.

          Memory bandwidth and the number of usable cores. In the past Nvidia's official model numbers would have reflected that: the 16 GB card would have been the 4080 Ti and the 12 GB card the plain 4080, at the very least.
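          To put rough numbers on the bandwidth gap (a quick sketch; the 21 and 22.4 Gbps GDDR6X data rates are assumed from the announced specs, they are not stated anywhere in this thread):

          def mem_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
              # Peak memory bandwidth: bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte
              return bus_width_bits * data_rate_gbps / 8

          print(mem_bandwidth_gb_s(192, 21.0))   # ~504 GB/s  (assumed "4080 12GB" config)
          print(mem_bandwidth_gb_s(256, 22.4))   # ~717 GB/s  (assumed "4080 16GB" config)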

          Comment


          • #55
            Originally posted by birdie View Post
            The most expensive part I've ever bought was a Ryzen 7 5800X CPU, which cost me $450. Never again.

            My GPUs' upper limit has always been $400. Looks like I'm either staying with the GTX 1660 Ti or moving to redder pastures.
            That's the other thing to consider. All of AMD's AM5-socket CPUs will include an APU, i.e. AMD integrated graphics, and all of Intel's upcoming desktop processors have an iGPU as well. We're not talking about an ultra-capable GPU here, but the AM5 one should be equal to or better than the Steam Deck's APU.

            Server- and workstation-targeted chips are the ones missing integrated graphics now: Threadripper/Epyc on the AMD side and Xeons on the Intel side.

            This is another thing that's going to affect Nvidia's AIB partners, as new systems will in most cases no longer need very low-end cards.

            You might not be moving to redder pastures at all. Depending on the games you play, you might move from being a dedicated-GPU user to an integrated-GPU user and save power and coin. It will depend on where the Intel and AMD integrated GPUs land in the upcoming cycles. The Steam Deck's limited GPU power has pushed some game developers to focus on efficiency again; if those are the games you like, why would you need more than what the integrated GPU offers?

            Comment


            • #56
              Originally posted by Teggs View Post
              In the fields of mathematics and science in general, professional respect is shown by referring to a person by their last name: Kepler, Volta, Maxwell, Pascal, Turing... Nvidia would have people believe that they chose Ada Lovelace to name an architecture after out of respect for her accomplishments, but by drawing attention to her gender through the use of her first name they show disrespect for her work as a scientist. I suppose they don't care about any damage they do to the effort to value scientific work apart from the gender of the scientist doing said work.

              I believe it is correct to refer to this architecture as 'Lovelace' no matter what Nvidia says.
              On paper you're correct and the GPU should be labeled "AL102" or whatever. However, Ada Lovelace isn't the only "Lovelace" out there, nor the most recent...

              Comment


              • #57
                The only good thing here is the motion interpolation part of DLSS 3. They use the standard approach of generating a new frame between two existing ones. That adds at least one frame of latency, because you cannot present the current frame in zero time: you must first compute the intermediate frame from the two latest original frames and present it, and only then can you present the current frame. It's only usable in slow-paced games, to go from 50 fps to 100 fps and gain the motion clarity that LCDs don't offer at low framerates.
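                A back-of-the-envelope sketch of that latency argument (generic two-frame interpolation as described above, with assumed numbers, not NVIDIA's actual pipeline):

                def added_latency_ms(native_fps, interp_cost_ms=2.0):
                    # With 2x interpolation, the newest real frame is held back until the
                    # synthesized in-between frame has been computed and shown, i.e. at
                    # least one output-rate frame interval (half a native frame time)
                    # plus the interpolation cost itself.
                    native_frame_ms = 1000.0 / native_fps
                    return native_frame_ms / 2 + interp_cost_ms

                print(added_latency_ms(50))   # ~12 ms extra going from 50 fps native to ~100 fps shown
                print(added_latency_ms(30))   # ~18.7 ms extra at 30 fps native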

                Comment


                • #58
                  Originally posted by birdie View Post

                  The new pricing and SKUs are just ... horrible. The RTX 4080 12GB is a 192-bit bus card - it could actually mean that the RTX 4060 will feature a 128-bit wide bus, WTF?!

                  I sure hope RDNA 3.0 will kick some ass because NVIDIA has seemingly stopped caring about end users and only cares about enterprise.
                  Nvidia no longer makes GPUs just for gamers; they target multiple different markets with their consumer GPUs. R&D costs have been going up about 35% every year as well. RDNA 3 will kick ass in gaming, but aside from that it will be a disappointment, just like RDNA 1/2. RDNA 2 was the worst at launch, since they priced their pure gaming GPUs at about the same price as Nvidia's Ampere.

                  Comment


                  • #59
                    Originally posted by WannaBeOCer View Post

                    Nvidia no longer makes GPUs just for gamers; they target multiple different markets with their consumer GPUs. R&D costs have been going up about 35% every year as well. RDNA 3 will kick ass in gaming, but aside from that it will be a disappointment, just like RDNA 1/2. RDNA 2 was the worst at launch, since they priced their pure gaming GPUs at about the same price as Nvidia's Ampere.
                    To take this point a little further: I don't think NVIDIA actually makes consumer GPUs. They make variants of one or two GPUs (e.g. Lovelace & Hopper) and wrap products around those, some of which are targeted towards the consumer. As an example, the H100 is an NN/ML accelerator targeted towards huge AI workloads, whereas AD10x seems to be for every other market (GeForce RTX for games, RTX <whatever> for the professional workstation market and the A40 & L40 for data center workloads such as VDI/virtual workstation, video encoding and "small" AI/ML workloads).

                    Having thought about it, this might explain why NVIDIA's gaming GPUs are getting so big & power-hungry - they are actually professional/data center GPUs repurposed for PCs and given technologies to make use of all that silicon (raytracing, DLSS) - notice that with Ampere, the higher-end SKUs didn't really shine until 4K. Compare this to AMD's RDNA and CDNA architectures, each of which is aimed specifically at a given market.

                    Comment


                    • #60
                      Originally posted by parityboy View Post

                      To take this point a little further: I don't think NVIDIA actually makes consumer GPUs. They make variants of one or two GPUs (e.g. Lovelace & Hopper) and wrap products around those, some of which are targeted towards the consumer. As an example, the H100 is an NN/ML accelerator targeted towards huge AI workloads, whereas AD10x seems to be for every other market (GeForce RTX for games, RTX <whatever> for the professional workstation market and the A40 & L40 for data center workloads such as VDI/virtual workstation, video encoding and "small" AI/ML workloads).

                      Having thought about it, this might explain why NVIDIA's gaming GPUs are getting so big & power-hungry - they are actually professional/data center GPUs repurposed for PCs and given technologies to make use of all that silicon (raytracing, DLSS) - notice that with Ampere, the higher-end SKUs didn't really shine until 4K. Compare this to AMD's RDNA and CDNA architectures, each of which is aimed specifically at a given market.
                      I disagree: Turing, GA102 and Lovelace are general-purpose GPUs aimed at consumers. Nvidia split their architectures a while back with the release of GP100 vs GP102, then introduced their tensor accelerator cores (tensor cores) in Turing and continued the split with TU102 vs V100, GA100 vs GA102 and now H100 vs AD102. Gamers need to stop thinking they're the only individuals who utilize GPUs. Just because an individual utilizes them for parallel computing or content creation doesn't mean they should have to shell out $3-5k for a GPU.

                      AMD has been two generations behind since 2010: they eventually followed Nvidia's revolutionary compute architecture, Fermi, with the release of Tahiti, and then finally released a compute platform, ROCm, with Vega. From leaks it appears they will add tensor accelerators, similar to CDNA's matrix cores, in RDNA 3, which is great news. Nvidia uses them specifically for ray-tracing denoising and upscaling, but tensor accelerators can also be used for physics simulation, character locomotion and audio-to-facial animation in games.
                      With adversarial reinforcement learning, physically simulated characters can be developed that automatically synthesize lifelike and responsive behaviors.


                      Intel also showed off an AI that uses Nvidia's tensor cores in an RTX 3090 to make GTA V photorealistic.
                      https://www.youtube.com/watch?v=P1IcaBn3ej0
                      Last edited by WannaBeOCer; 21 September 2022, 11:20 PM.
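                      For context on what those tensor cores actually accelerate, here is a minimal PyTorch sketch (assuming a CUDA build of PyTorch on an RTX-class card; the matrix sizes are arbitrary). A half-precision matrix multiply like this is the primitive behind the denoising, upscaling and other ML workloads mentioned above:

                      import torch

                      if torch.cuda.is_available():
                          # FP16 matrix multiplies of this shape are typically dispatched to
                          # the tensor cores by cuBLAS on Volta/Turing and newer GPUs.
                          a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
                          b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
                          c = a @ b                    # half-precision GEMM, the core op behind DLSS-style networks
                          torch.cuda.synchronize()     # wait for the GPU to finish before inspecting the result
                          print(c.shape, c.dtype)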

                      Comment
