NVIDIA Announces The GeForce RTX 40 Series With Much Better Ray-Tracing Performance


  • carewolf
    replied
    Originally posted by birdie View Post

    Maybe read about how DLSS 3.0 works first?
    How do you think an upscaling algorithm achieves speedups? Do you think doing extra work somehow makes the base rendering faster? The total has to be the base work plus the time the DLSS pass takes, so for a 4x speedup it HAS to do at most a quarter of the base work before DLSS is applied; there is no other way.
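
    To put rough numbers on that (purely illustrative; the 40 ms native frame time and 3 ms upscaling cost below are assumptions, not measured DLSS figures):

    ```python
    # Back-of-the-envelope frame-time math for upscaling. All numbers are made
    # up for illustration; they are not measured DLSS figures.

    def effective_fps(native_frame_ms, render_fraction, upscale_cost_ms):
        """FPS when only a fraction of the native pixels are shaded, plus a fixed upscaling cost."""
        frame_ms = native_frame_ms * render_fraction + upscale_cost_ms
        return 1000.0 / frame_ms

    native_ms = 40.0    # assume 25 fps at native 4K
    quarter   = 0.25    # 1080p has a quarter of the pixels of 4K
    dlss_ms   = 3.0     # assumed fixed cost of the upscaling pass

    print(f"native 4K:       {1000.0 / native_ms:.1f} fps")
    print(f"1080p + upscale: {effective_fps(native_ms, quarter, dlss_ms):.1f} fps")
    # Output: 25.0 fps vs ~76.9 fps. Even shading only a quarter of the pixels,
    # the fixed upscaling cost keeps the speedup under 4x, so hitting a full 4x
    # means doing even less than a quarter of the base work before upscaling.
    ```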



  • birdie
    replied
    Originally posted by carewolf View Post
    If DLSS is boosting performance up to 4x, that means they are rendering at 1080p and upscaling to 4K; not sure I want that...
    Maybe read about how DLSS 3.0 works first?



  • birdie
    replied
    Originally posted by MorrisS. View Post
    What about supported codecs?
    AV1 decoding was already supported by the RTX 30 series.

    The RTX 40 series supports 8K 60fps dual stream AV1 hardware encoding.
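
    For anyone who wants to script that, here is a minimal sketch using FFmpeg's NVDEC/NVENC wrappers; av1_nvenc and the CUDA hwaccel are standard FFmpeg names, but whether they are available depends on the FFmpeg build, driver and GPU, and the file names are placeholders:

    ```python
    # Minimal sketch: GPU AV1 decode (NVDEC, RTX 30 and newer) and AV1 encode
    # (NVENC, RTX 40) through FFmpeg. Assumes an FFmpeg build with NVENC/NVDEC
    # support; input/output names are placeholders.
    import subprocess

    def transcode_to_av1(src: str, dst: str) -> None:
        """Hardware-decode the input and re-encode it with the AV1 NVENC encoder."""
        subprocess.run(
            [
                "ffmpeg",
                "-hwaccel", "cuda",     # NVDEC hardware decode
                "-i", src,
                "-c:v", "av1_nvenc",    # AV1 hardware encode (Ada and newer)
                "-c:a", "copy",         # pass audio through untouched
                dst,
            ],
            check=True,
        )

    if __name__ == "__main__":
        transcode_to_av1("input.mkv", "output_av1.mkv")
    ```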



  • MorrisS.
    replied
    What about supported codecs?



  • parityboy
    replied
    Originally posted by WannaBeOCer View Post
    I disagree; Turing, GA102 and Lovelace are general-purpose GPUs aimed at consumers. Nvidia split their architectures a while back with the release of GP100 vs GP102, then introduced their tensor accelerator cores (tensor cores) in Turing and continued the split with TU102 vs V100, GA100 vs GA102, and now H100 vs AD102.
    Ampere & Lovelace GPUs are used in the A40 and L40 accelerators respectively, which are clearly not consumer-level products - they are targeted at the datacenter. I believe the GA100 and GA102 GPUs are two variants of the same basic architecture, Ampere - in the case of the GA100, I believe the parts of the micro-architecture specifically built for visualization were removed (or never added in the first place) and more matrix accelerators/tensor cores were added instead (I'm simplifying here). On the other hand, Hopper was built specifically for AI in the datacenter and therefore never had any visual elements in its micro-architecture.

    Originally posted by WannaBeOCer View Post
    Gamers need to stop thinking they're the only individuals that utilize GPUs. Just because an individual utilizes them for parallel computing or content creation doesn't mean they should have to shell out $3-5k for a GPU.
    Can you clarify what you are saying here? I ask because the two statements appear to be contradictory.



  • carewolf
    replied
    If DLSS is boosting performance up to 4x, that means they are rendering at 1080p and upscaling to 4K; not sure I want that...



  • WannaBeOCer
    replied
    Originally posted by parityboy View Post

    To take this point a little further: I don't think NVIDIA actually makes consumer GPUs. They make variants of one or two GPUs (e.g. Lovelace & Hopper) and wrap products around those, some of which are targeted towards the consumer. As an example, the H100 is an NN/ML accelerator targeted towards huge AI workloads, whereas AD10x seems to be for every other market (GeForce RTX for games, RTX <whatever> for the professional workstation market, and the A40 & L40 for data center workloads such as VDI/virtual workstation, video encoding, "small" AI/ML workloads).

    Having thought about it, this might explain why NVIDIA's gaming GPUs are getting so big & power-hungry - they are actually professional/data center GPUs repurposed for PCs and given technologies to make use of all that silicon (raytracing, DLSS) - notice that with Ampere, the higher-end SKUs didn't really shine until 4K. Compare this to AMD's RDNA and CDNA architectures, each of which is aimed specifically at a given market.
    I disagree; Turing, GA102 and Lovelace are general-purpose GPUs aimed at consumers. Nvidia split their architectures a while back with the release of GP100 vs GP102, then introduced their tensor accelerator cores (tensor cores) in Turing and continued the split with TU102 vs V100, GA100 vs GA102, and now H100 vs AD102. Gamers need to stop thinking they're the only individuals that utilize GPUs. Just because an individual utilizes them for parallel computing or content creation doesn't mean they should have to shell out $3-5k for a GPU.

    AMD has been two generations behind since 2010; they eventually followed Nvidia's revolutionary compute architecture, Fermi, with the release of Tahiti, and finally shipped a computing platform, ROCm, with Vega. From leaks it appears they will add tensor accelerators, similar to CDNA's matrix cores, to RDNA3, which is great news. Nvidia uses them specifically for ray-tracing denoising and upscaling, but tensor accelerators can also be used for physics simulation, character locomotion and audio-to-facial animation in games (see the sketch at the end of this comment).
    With adversarial reinforcement learning, physically simulated characters can be developed that automatically synthesize lifelike and responsive behaviors. A ...


    Intel also showed off an AI utilizing Nvidia's Tensor cores in an RTX 3090 to make GTA V photorealistic.
    https://www.youtube.com/watch?v=P1IcaBn3ej0
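
    To make the point concrete, here is a minimal sketch of the kind of mixed-precision matrix multiply tensor cores accelerate; PyTorch is just one convenient way to reach them (my choice of framework, not something mentioned above):

    ```python
    # Minimal sketch: a mixed-precision matrix multiply, the core operation that
    # tensor cores accelerate (the same math underlies denoising, upscaling and
    # the animation networks mentioned above). Requires a CUDA GPU and PyTorch.
    import torch

    def mixed_precision_matmul(n: int = 4096) -> torch.Tensor:
        a = torch.randn(n, n, device="cuda")
        b = torch.randn(n, n, device="cuda")
        # autocast runs the matmul in FP16, which is what maps onto tensor cores
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            return a @ b

    if __name__ == "__main__":
        out = mixed_precision_matmul()
        print(out.shape, out.dtype)  # torch.Size([4096, 4096]) torch.float16
    ```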
    Last edited by WannaBeOCer; 21 September 2022, 11:20 PM.



  • parityboy
    replied
    Originally posted by WannaBeOCer View Post

    Nvidia no longer makes GPUs just for gamers; they target multiple different markets with their consumer GPUs. R&D costs have been going up about 35% every year as well. RDNA3 will kick ass in gaming; aside from that, it will be a disappointment just like RDNA1/2. RDNA2 was the worst at launch since they priced their pure gaming GPUs at about the same price as Nvidia's Ampere.
    To take this point a little further: I don't think NVIDIA actually makes consumer GPUs. They make variants of one or two GPUs (e.g. Lovelace & Hopper) and wrap products around those, some of which are targeted towards the consumer. As an example, the H100 is an NN/ML accelerator targeted towards huge AI workloads, whereas AD10x seems to be for every other market (GeForce RTX for games, RTX <whatever> for the professional workstation market, and the A40 & L40 for data center workloads such as VDI/virtual workstation, video encoding, "small" AI/ML workloads).

    Having thought about it, this might explain why NVIDIA's gaming GPUs are getting so big & power-hungry - they are actually professional/data center GPUs repurposed for PCs and given technologies to make use of all that silicon (raytracing, DLSS) - notice that with Ampere, the higher-end SKUs didn't really shine until 4K. Compare this to AMD's RDNA and CDNA architectures, each of which is aimed specifically at a given market.



  • WannaBeOCer
    replied
    Originally posted by birdie View Post

    The new pricing and SKUs are just ... horrible. The RTX 4080 12GB is a 192-bit bus card - it could actually mean that the RTX 4060 will feature a 128-bit wide bus, WTF?!

    I sure hope RDNA 3.0 will kick some ass because NVIDIA has seemingly stopped caring about end users and only cares about enterprise.
    Nvidia no longer makes GPUs just for gamers; they target multiple different markets with their consumer GPUs. R&D costs have been going up about 35% every year as well. RDNA3 will kick ass in gaming; aside from that, it will be a disappointment just like RDNA1/2. RDNA2 was the worst at launch since they priced their pure gaming GPUs at about the same price as Nvidia's Ampere.



  • artivision
    replied
    The only good thing here is the motion-interpolation filter part of DLSS 3. They use the standard approach of generating a new frame between two existing ones. That adds at least one frame of latency, because you cannot present the current frame on screen instantly; instead you must compute the intermediate frame from the two latest originals and present it, and only then can you present the current frame. It's only usable for slow-paced games, to go from 50fps to 100fps and gain the motion clarity that LCDs don't offer at low framerates.
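
    To put rough numbers on that latency argument (assumed figures, not DLSS 3 measurements):

    ```python
    # Illustrative latency math for frame interpolation: to show the generated
    # frame N+0.5 you must already have real frame N+1, so presentation lags
    # the renderer by at least one source frame interval. Assumed numbers only.

    def added_latency_ms(source_fps: float) -> float:
        """Minimum extra latency from waiting for the next real frame."""
        return 1000.0 / source_fps

    for fps in (50, 100):
        print(f"{fps} fps source -> at least {added_latency_ms(fps):.0f} ms of added latency")
    # 50 fps source -> at least 20 ms of added latency
    # 100 fps source -> at least 10 ms of added latency
    # The display shows twice as many frames, but input-to-photon latency gets
    # worse, which is why this suits slower-paced games.
    ```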

