NVIDIA Announces The GeForce RTX 40 Series With Much Better Ray-Tracing Performance


  • #71
    Originally posted by oiaohm View Post
    So DLSS going from 1080p to 4K will mix rendering at 1080p and 4K, with far fewer 4K frames. Of course this brings an interesting problem: GPU memory usage comes back into play. Rendering at 4K means you need 4K textures loaded, and rendering at 1080p means you need 1080p textures loaded. I feel sorry for the NVMe drives.
    No, you can render at 4K with small textures and the other way around. The textures might just look blurry in one case.
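    For a rough sense of the numbers, here is a back-of-the-envelope sketch (the uncompressed RGBA8 format and the 4/3 mip-chain factor are assumptions, not tied to any particular engine) showing that a texture's VRAM cost depends on the texture resolution, not on the render resolution:

    Code:
    # Rough VRAM footprint of an uncompressed RGBA8 texture with a full mip chain.
    # The 4/3 factor approximates the extra memory used by the smaller mip levels.
    def texture_bytes(size_px: int, bytes_per_texel: int = 4) -> float:
        return size_px * size_px * bytes_per_texel * 4 / 3

    for size in (1024, 2048, 4096):
        print(f"{size}x{size}: {texture_bytes(size) / 2**20:.1f} MiB")

    # 1024x1024:  ~5.3 MiB
    # 2048x2048: ~21.3 MiB
    # 4096x4096: ~85.3 MiB
    # The render-target resolution (1080p vs 4K) never appears above; lower-resolution
    # textures simply look blurrier when sampled at 4K.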



    • #72
      Originally posted by WannaBeOCer View Post

      I'm aware of the passively cooled server variants; AMD also has the Radeon Pro V620 based on RDNA2, along with the Radeon Pro W6800 for workstations. GA100 and GA102 SMs look completely different: GA102 has 2x FP32 processing and only 6 MB of L2 cache compared to GA100's 48 MB, and GA100 also has FP64 units. As mentioned, AMD is just two generations behind, and we'll see them slowly catch up on ML with their consumer GPUs.
      They are both the Ampere architecture, though, so I assume that is more to do with how the logic is laid out than with the size of specific resources such as cache memory, in the same way that GP100 and GP102 are both Pascal. Interestingly, there was no TU100 variant for Turing, e.g. for the Tesla server cards.


      Originally posted by WannaBeOCer View Post
      My argument is that GeForce cards are still consumer cards, just aimed at additional markets: content creators, via the Studio drivers and the continued support of CUDA. They added better encoders and tensor cores for these consumers. A student or hobbyist who's into content creation/ML is still a consumer, not a professional, and doesn't require ECC, drivers tested for professional workloads, SR-IOV, etc.
      This I certainly agree with.



      • #73
        Originally posted by oiaohm View Post

        I am sorry to say birdie is right: you do need to read how DLSS 3.0 works. DLSS 3.0 added keyframe rendering.

        So yes, you render at 1080p for the in-between frames, but you also render a 4K keyframe every so often and compare it to what the AI upscaler generated from the 1080p input.

        I can see the possibility of some really wacky artifacts with DLSS 3.0. I am not sure whether the 4K keyframes will ever be shown directly on screen or will just be used for self-tuning of the AI upscaler. When I say wacky, I mean you could feed the same data from a playback program into the GPU and get two very different outputs.

        There is one thing that is absolutely clear: DLSS 3.0 is not designed to upscale existing, unmodified games.

        So DLSS going from 1080p to 4K will mix rendering at 1080p and 4K, with far fewer 4K frames. Of course this brings an interesting problem: GPU memory usage comes back into play. Rendering at 4K means you need 4K textures loaded, and rendering at 1080p means you need 1080p textures loaded. I feel sorry for the NVMe drives.
        Apparently 2x of the performance gain is going to come from frame interpolation. That annoying motion smoothing you see on many modern TVs: Nvidia is adding that and counting it as a 2x performance gain because they can output twice as many frames per second.
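        As a toy illustration of why interpolation doubles the reported frame rate (a naive 50/50 blend stands in for Nvidia's actual optical-flow frame generation, so this is just a sketch of the counting, not of the technique):

        Code:
        # Toy sketch: inserting one generated frame between each pair of rendered
        # frames roughly doubles the number of frames sent to the display.
        def blend(a, b):
            # Stand-in for real interpolation: average the two rendered frames.
            return [(x + y) / 2 for x, y in zip(a, b)]

        def interpolate_stream(rendered):
            shown = []
            for prev, nxt in zip(rendered, rendered[1:]):
                shown.append(prev)
                shown.append(blend(prev, nxt))  # generated, never rendered by the game
            shown.append(rendered[-1])
            return shown

        rendered = [[float(i)] * 3 for i in range(60)]  # pretend: 60 rendered "frames"
        shown = interpolate_stream(rendered)
        print(len(rendered), "rendered ->", len(shown), "displayed")  # 60 -> 119

        The game still simulates and samples input only for the rendered frames, which is why the generated frames don't reduce input latency.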



        • #74
          Originally posted by shmerl View Post


          Why would you need such a refresh rate at the cost of reduced quality, no matter the technology? I don't get the appeal. I'd take whatever refresh rate the GPU can handle natively without any upscaling, as long as it gives better image quality.
          To fix motion blur. See Blur Busters on the subject.



          • #75
            Originally posted by ryao View Post

            To fix motion blur. See Blur Busters on the subject.
            I mean, the refresh rate is already pretty high as it is. Less motion blur at the cost of worse image quality sounds like a bad trade-off.



            • #76
              Originally posted by shmerl View Post

              I mean, the refresh rate is already pretty high as it is. Less motion blur at the cost of worse image quality sounds like a bad trade-off.
              See the Blur Busters research on the subject. It explains the need for 1 kHz refresh rates.
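              Rough numbers behind that, using the Blur Busters rule of thumb that perceived blur on a sample-and-hold display is roughly the distance your eyes track during one frame (the 1000 px/s tracking speed below is just an illustrative assumption):

              Code:
              # Sample-and-hold motion blur: while your eyes track a moving object, each
              # static frame smears across the retina for the whole frame time, so
              # blur width (px) ~= tracking speed (px/s) * persistence (s).
              eye_tracking_speed = 1000  # pixels per second, a fast but realistic pan

              for hz in (60, 120, 240, 1000):
                  persistence = 1 / hz  # full persistence, no strobing or backlight blanking
                  blur_px = eye_tracking_speed * persistence
                  print(f"{hz:5d} Hz -> ~{blur_px:.1f} px of motion blur")

              # 60 Hz -> ~16.7 px, 120 Hz -> ~8.3 px, 240 Hz -> ~4.2 px, 1000 Hz -> ~1.0 px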



              • #77
                Originally posted by ryao View Post

                See the Blur Busters research on the subject. It explains the need for 1 kHz refresh rates.
                Good, GPUs will get there as they progress. But as above, not at the cost of upscaling and degraded image quality. I don't see any pressing "need" that justifies such a reduction just to rush ahead of what GPUs are capable of.



                • #78
                  Originally posted by Mahboi View Post
                  This pricing targets enterprise. I can't imagine the public is even that much of a target any longer.
                  It isn't. (If you're curious about this stuff "for real", read the transcripts of investor calls on trade sites: those will tell you where a company is heading for the next 6-12 months.)

                  Enterprise is where the money is, especially now that the crypto scams are dead. There's nothing wrong with Nvidia slowly opting out of the consumer space by raising prices and margins like this: there's going to be a third competitor in that space in a few years, for the first time since Nvidia was a tiny baby of a company in the 90s, and it's a bigger fish than they are. Leaving ATI and Intel to fight it out over a shrinking market seems pretty sensible to me.

