NVIDIA Announces The GeForce RTX 40 Series With Much Better Ray-Tracing Performance


  • #61
    If DLSS is boosting performance up to 4x, that means they are rendering at 1080p and upscaling it to 4K. Not sure I want that..



    • #62
      Originally posted by WannaBeOCer View Post
      I disagree; Turing, GA102 and Lovelace are general-purpose GPUs aimed at consumers. Nvidia split their architectures a while back with the release of GP100 vs GP102, then introduced their tensor accelerator cores (tensor cores) in Turing and continued the split with TU102 vs V100, GA100 vs GA102, and now H100 vs AD102.
      Ampere & Lovelace GPUs are used in the A40 and L40 accelerators respectively, which are clearly not consumer-level products - they are targeted at the datacenter. I believe the GA100 and GA102 are two variants of the same basic architecture, Ampere. In the case of the GA100, I believe the parts of the micro-architecture specifically built for visualization were removed (or never added in the first place) and more matrix accelerators/tensor cores were added instead (I'm simplifying here). On the other hand, Hopper was built specifically for AI in the datacenter and therefore never had any visual elements in its micro-architecture.

      Originally posted by WannaBeOCer View Post
      Gamers need to stop thinking they're the only individuals who utilize GPUs. Just because an individual utilizes them for parallel computing or content creation doesn't mean they should have to shell out $3-5k for a GPU.
      Can you clarify what you are saying here? I ask because the two statements appear to be contradictory.



      • #63
        What about supported codecs?



        • #64
          Originally posted by MorrisS. View Post
          What about supported codecs?
          AV1 decoding was already supported by the RTX 30 series.

          The RTX 40 series supports 8K 60fps dual stream AV1 hardware encoding.
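
          For anyone curious, a quick way to check whether a given ffmpeg build can see the NVENC AV1 encoder (a minimal sketch; it assumes ffmpeg is on PATH, and "av1_nvenc" is the name recent ffmpeg builds use for NVENC AV1):

          ```python
          import subprocess

          # Ask ffmpeg which encoders it was built with and look for NVENC AV1,
          # which recent ffmpeg builds expose as "av1_nvenc".
          result = subprocess.run(
              ["ffmpeg", "-hide_banner", "-encoders"],
              capture_output=True, text=True, check=True,
          )

          if "av1_nvenc" in result.stdout:
              print("This ffmpeg build can use NVENC AV1 encoding")
          else:
              print("No NVENC AV1 encoder in this ffmpeg build")
          ```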



          • #65
            Originally posted by carewolf View Post
            If DLSS is boosting performance up to 4x, that means they are rendering at 1080p and upscaling it to 4K. Not sure I want that..
            Maybe read about how DLSS 3.0 works first?



            • #66
              Originally posted by birdie View Post

              Maybe read about how DLSS 3.0 works first?
              How do you think an upscaling algorithm achieves speedups? Do you think doing extra work somehow makes the base rendering faster? The total frame time has to be the base work plus the time the DLSS algorithm takes, so with a 4x speedup it HAS to do at most a quarter of the work before DLSS is applied; there is no other way.
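
              To put numbers on that argument (the millisecond figures below are made-up assumptions, purely to illustrate the accounting):

              ```python
              # Frame-time accounting for an upscaler: total = base render + upscale cost.
              # All timings are illustrative assumptions, not measurements.
              native_4k_ms = 40.0   # hypothetical native 4K render time
              base_1080p_ms = 10.0  # hypothetical 1080p render time (~1/4 the pixels)
              dlss_ms = 2.0         # hypothetical fixed cost of the upscaling pass

              upscaled_ms = base_1080p_ms + dlss_ms
              print(f"speedup: {native_4k_ms / upscaled_ms:.2f}x")  # ~3.33x
              # The upscaling pass itself costs time, so to actually hit 4x the
              # base render has to be even cheaper than a quarter of the native work.
              ```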



              • #67
                Originally posted by carewolf View Post
                How do you think an upscaling algorithm achieves speedups? Do you think doing extra work somehow makes the base rendering faster? The total frame time has to be the base work plus the time the DLSS algorithm takes, so with a 4x speedup it HAS to do at most a quarter of the work before DLSS is applied; there is no other way.
                I am sorry to say birdie is right: you do need to read how DLSS 3.0 works. DLSS 3.0 added keyframe rendering.

                So yes, you are rendering at 1080p for the intermediate frames, but then you render a 4K keyframe every so often and compare it to what the AI upscaler generated from the 1080p frames.

                I can see the possibility of some really wacky artifacts with DLSS 3.0. I am not sure if the 4K keyframes will ever be shown directly on screen or will just be used for self-tuning of the AI upscaler. When I say wacky, this could mean feeding the same data from a playback program into the GPU and getting two very different outputs..

                One thing here is absolute: DLSS 3.0 is not designed to upscale existing, unmodified games.

                So DLSS going from 1080p to 4K will be a mix of rendering at 1080p and at 4K, with far fewer 4K frames. Of course, this brings an interesting problem back into play: GPU memory usage. Rendering at 4K means you need 4K textures loaded, and rendering at 1080p means you need 1080p textures loaded. I feel sorry for the NVMe drives.
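
                Taking that keyframe model at face value (it is this post's reading of DLSS 3.0, not something confirmed here), the effective frame time becomes a weighted mix of the two render paths; a sketch with made-up timings:

                ```python
                # Mixed-resolution frame-time model: one native 4K "keyframe" every N
                # frames, the rest rendered at 1080p and upscaled. All numbers are
                # illustrative assumptions, and the keyframe scheme itself is this
                # post's hypothesis about DLSS 3.0.
                native_4k_ms = 40.0
                upscaled_ms = 12.0    # 1080p render + upscale cost, as sketched above
                keyframe_every = 8    # hypothetical keyframe interval

                avg_ms = (native_4k_ms + (keyframe_every - 1) * upscaled_ms) / keyframe_every
                print(f"average frame time: {avg_ms:.1f} ms")  # 15.5 ms vs 40 ms native
                ```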



                • #68
                  Originally posted by parityboy View Post

                  Ampere & Lovelace GPUs are used in the A40 and L40 accelerators respectively, which are clearly not consumer-level products - they are targeted at the datacenter. I believe the GA100 and GA102 are two variants of the same basic architecture, Ampere. In the case of the GA100, I believe the parts of the micro-architecture specifically built for visualization were removed (or never added in the first place) and more matrix accelerators/tensor cores were added instead (I'm simplifying here). On the other hand, Hopper was built specifically for AI in the datacenter and therefore never had any visual elements in its micro-architecture.



                  Can you clarify what you are saying here? I ask because the two statements appear to be contradictory.
                  I'm aware of the passively cooled server variants; AMD also has the Radeon Pro V620 based on RDNA2, along with the Radeon Pro W6800 for workstations. The GA100 and GA102 SMs look completely different: GA102 has 2x FP32 processing and 6 MB of L2 cache compared to GA100's 48 MB, while GA100 also has FP64 units. As mentioned, AMD is just two generations behind, and we'll see them slowly catch up on ML with their consumer GPUs.

                  My argument is that GeForce cards are still consumer cards, just aimed at additional markets: content creators, via the Studio drivers and the continued support of CUDA. They added better encoders and tensor cores for these consumers. A student/hobbyist who's into content creation/ML is still a consumer, not a professional, and doesn't require ECC, drivers tested for professional workloads, SR-IOV, etc.
                  Last edited by WannaBeOCer; 22 September 2022, 12:53 PM.



                  • #69
                    Originally posted by oiaohm View Post
                    So DLSS doing 1080p to 4K will be mix rendering in 1080p and 4K. Way less 4K frames. Of course this brings interesting problem. GPU memory usage back into play. Rendering 4K means you need 4K textures loaded and rendering 1080p means you need 1080p textures loaded. I feel sorry for the Nvme drives.
                    No, you can render 4K with small textures, and the other way around; the textures might just look blurry in one instance.



                    • #70
                      Originally posted by WannaBeOCer View Post

                      I'm aware of the passively cooled server variants; AMD also has the Radeon Pro V620 based on RDNA2, along with the Radeon Pro W6800 for workstations. The GA100 and GA102 SMs look completely different: GA102 has 2x FP32 processing and 6 MB of L2 cache compared to GA100's 48 MB, while GA100 also has FP64 units. As mentioned, AMD is just two generations behind, and we'll see them slowly catch up on ML with their consumer GPUs.
                      They are both the Ampere architecture though, so I assume that has more to do with how the logic is laid out, as opposed to the size of specific resources such as cache memory, in the same way that GP100 and GP102 are both Pascal. Interestingly, there was no TU100 variant for Turing, e.g. for the Tesla server cards.


                      Originally posted by WannaBeOCer View Post
                      My argument is that GeForce cards are still consumer cards, just aimed at additional markets: content creators, via the Studio drivers and the continued support of CUDA. They added better encoders and tensor cores for these consumers. A student/hobbyist who's into content creation/ML is still a consumer, not a professional, and doesn't require ECC, drivers tested for professional workloads, SR-IOV, etc.
                      This I certainly agree with.

