NVIDIA Announces The GeForce RTX 50 "Blackwell" Series

  • qarium
    Senior Member
    • Nov 2008
    • 3438

    #51
    Originally posted by avis View Post
    NVIDIA's Blackwell is mental:
    [embedded YouTube video]
    people should take care of the Virtue signalling of that first male person in the video with his
    black painted fingernails show his affiliation to the L G B T Q P-M a f i a with its anti-male and anti-white standpoints and its child-free propaganda​.

    Phantom circuit Sequence Reducer Dyslexia

    Comment

    • qarium
      Senior Member
      • Nov 2008
      • 3438

      #52
      Originally posted by Volta View Post
      Trash from nvidia ruined gaming industry with crap like dlss and rtx. Oh, and there's path tracing to make them look even more stupid. Thankfully there's AMD and Intel.
      First of all, I agree with you that DLSS and RTX are overrated if, in the end, you play without them to avoid their downsides.

      But I have a question: what do you think about the ML-based FSR4? Upscaling, temporal anti-aliasing and even frame generation are no longer filter algorithms running on shaders; in FSR4 they are machine learning running on AI units. Isn't that just a copy of Nvidia's DLSS 3/4?

      Isn't AMD Radeon Anti-Lag 2 a copy of Nvidia's anti-lag feature?

      And with RDNA4 it looks like AMD did not improve rasterization much; instead, AMD spent the transistors on improving ray tracing performance...

      Isn't it foolish to praise AMD and Intel when AMD and Intel copy Nvidia in nearly every aspect?

      Yes, the price is lower, meaning better performance per dollar, but they ride the DLSS and ray tracing hype just like Nvidia.

      And keep in mind that I own an AMD Radeon PRO W7900; I paid double the price rather than go out and buy an Nvidia RTX 4090.

      And I will buy the AMD Radeon RX 9070 XT as well, for the neighbor's children; I manage the IT hardware for them.

      So I will not buy Nvidia, but it really sounds foolish to blame DLSS and RTX at a time when AMD and Intel are copying DLSS and RTX...
      Phantom circuit Sequence Reducer Dyslexia

      Comment

      • qarium
        Senior Member
        • Nov 2008
        • 3438

        #53
        Originally posted by ElderSnake View Post
        How do they generate so many extra frames without hugely affecting input latency?
        Theoretically it is because an AI, meaning machine learning, can take fresh mouse/keyboard/controller input and generate a result that is similar to what a really rendered frame would look like.

        Keep in mind that machine learning can do much more: for example, it can replace ray tracing and any other lighting or decoration effect. In theory you can use a low-resolution raster rendering as input and let machine learning add all the lighting, all the texture detail and all the ray tracing effects. This is known technology, but it is very expensive to compute, meaning it needs a lot of processing power, which is why we do not see it yet.

        With that in mind, all of these technologies like ray tracing, DLSS and so on could be obsolete in 2-3 generations of graphics cards, because then any low-quality rasterizer output could be fed into one big machine learning engine that does the high-quality texturing, the ray-traced lighting and everything DLSS does today (a rough sketch of the idea follows below).

        If you do not believe this, go on YouTube and search for AI-generated videos. It is exactly this technology: the machine learning engines that generate those AI videos and their sound can be used to take any low-quality raster game output and produce a high-quality result.

        Imagine taking an old game like the first Tomb Raider or Deus Ex, putting it through such a machine learning engine, and getting a high-quality remaster without any extra work...

        You can also search YouTube for AI-based rendering engines; there are already examples.
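        Purely as an illustration of that idea (this is not DLSS, FSR or any real engine's code; it is a hypothetical Python/PyTorch sketch with a made-up, untrained toy network), here is what "low-resolution raster in, higher-resolution frame out" looks like in its simplest form:

Code:
# Minimal, hypothetical sketch of "low-res raster in, high-res frame out".
# This is NOT DLSS/FSR code; the network is an untrained toy stand-in.
import torch
import torch.nn as nn

class ToyNeuralUpscaler(nn.Module):
    """Upscales a low-res rendered frame; a real model would be trained on game footage."""
    def __init__(self, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into an image 'scale' times larger per axis
        )

    def forward(self, low_res_frame):
        return self.body(low_res_frame)

low_res = torch.rand(1, 3, 480, 640)   # one 640x480 RGB frame from a cheap raster pass (random here)
model = ToyNeuralUpscaler(scale=4)
with torch.no_grad():
    high_res = model(low_res)          # -> shape (1, 3, 1920, 2560)
print(high_res.shape)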
        Phantom circuit Sequence Reducer Dyslexia

        Comment

        • qarium
          Senior Member
          • Nov 2008
          • 3438

          #54
          Originally posted by Quaternions View Post
          That's the secret, they don't. It hugely affects input latency.
           Technically that is not quite true: an AI/machine learning rendering engine can generate new frames from fresh mouse/keyboard/controller input, i.e. extrapolate instead of interpolate.

           But keep in mind that such machine learning engines "dream": they are tuned to look good, and looking good has nothing to do with matching the original. You can get a good-looking result even without added input latency, but that result may have little to do with what the real renderer would have produced.

          Phantom circuit Sequence Reducer Dyslexia

          Comment

          • qarium
            Senior Member
            • Nov 2008
            • 3438

            #55
            Originally posted by ElderSnake View Post
             Right, which sounds awful. So I guess you still need to be producing a bazillion base FPS for input latency to be viable.
             Wrong; the long-term goal is to replace every rasterizer engine and every ray tracing engine with a machine learning graphics engine.

             With that, you can take any low-quality input and generate a high-quality result from it.

             You could render the game at 640x480 with a raster engine and generate a 4K, high-quality result that looks like a fully path-traced render.
            Phantom circuit Sequence Reducer Dyslexia

            Comment

            • qarium
              Senior Member
              • Nov 2008
              • 3438

              #56
              Originally posted by sobrus View Post
              But you forgot how much TSMC charges for a silicon wafer today. Back then we had multiple foundries; now it's only TSMC.
              A 3nm wafer costs $18,000. In 2013 it was $5,000 for 28nm. And it was probably around $1-2k in GF2/GF3 days (guessing, I couldn't find any actual figures).
              If you want a cheap GPU, it needs to be a small chip, overclocked far beyond where it should run, with a piss-poor energy/performance ratio.
              That's why the Intel B580, with its 190W TDP (!), isn't any more energy efficient than 4-year-old RDNA2.
              That's why today's desktop CPUs have 170W TDPs, not 55W. Only Apple can afford large, non-overclocked chips. The rest try to squeeze the last drop of performance out of tiny silicon chiplets.
              What we need is more cutting edge Fabs, preferably far away from China.
               Everything you wrote is correct, but your last sentence is wrong... China always tries to compete by offering cheap products and cheap manufacturing. The problem is that China manages 5nm at best, using DUV-based quadruple patterning at SMIC; they have no EUV, no High-NA EUV, no megastructure technology, no advanced 3D packaging and no chiplet technology.

               "(DUV-based quadruple patterning) The very same method was a major reason for the failure of Intel's 1st Generation 10nm-class process technology."

               Because of that, China can also only produce small chips, and with a low yield rate, meaning many defective chips per wafer (a rough cost calculation follows below).

               So your claim "preferably far away from China" is nonsense: we cannot get cheap 2nm, 3nm, 4nm or 5nm chips from China because they plainly and simply do not have the technology.

               If you look at the market for legacy chips, meaning old chips on old fabrication nodes of 28nm and up, they all come from China and they are dirt cheap...

               My opinion is that competition is competition and keeps prices low, but TSMC has no competition.

               If you look at the yield rate and quality of Samsung's 3nm node, it is a pain in the ass...

               And Intel is a complete loser; they even switched to TSMC for their halo products because their own process is garbage.

               Yes, they have 4nm, 3nm, 2nm and 1.4nm on paper, and they have bet big on them with billions in investment, but we have yet to see how that turns out.
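               To make the "small chips, low yield" point concrete, here is a back-of-the-envelope calculation in Python. The wafer prices come from the post quoted above; the 300mm wafer, the die sizes, the simple Poisson yield model and the 0.1 defects/cm^2 defect density are assumptions chosen only for illustration.

Code:
# Back-of-the-envelope: why expensive wafers plus defects push vendors toward small dies.
# Wafer prices from the quoted post ($18k for 3nm-class, $5k for 28nm in 2013);
# everything else (die sizes, defect density, yield model) is an illustrative assumption.
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common approximation: gross dies on a round wafer minus edge loss."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2, defects_per_cm2=0.1):
    """Simple Poisson yield model: larger dies are exponentially more likely to catch a defect."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

for die_area_mm2, wafer_cost in [(100, 18_000), (600, 18_000), (600, 5_000)]:
    good = int(dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2))
    print(f"{die_area_mm2} mm^2 die on a ${wafer_cost} wafer: "
          f"{good} good dies -> ${wafer_cost / good:.0f} per good die")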

              Phantom circuit Sequence Reducer Dyslexia

              Comment

              • qarium
                Senior Member
                • Nov 2008
                • 3438

                #57
                Originally posted by kurkosdr View Post
                Can the GTX 880M GPU in my old laptop also become an RTX 4090 with enough DLSS? Happy to render at 640x360 resolution and 10 frames per second, since DLSS will fix it all and make it look like 4K 120fps.
                DLSS is like "dynamic contrast" for LCD TVs: with enough software processing, any value is achievable. It's how edge-lit LED TVs can report "dynamic contrast" values in the millions in their marketing materials (actual contrast for edge-lit LEDs is somewhere in the low thousands).
                How long until Nvidia stops mentioning DLSS-free values (we are already almost there)? And how long until you can't turn off DLSS at all, so that independent reviewers cannot measure DLSS-free values?
                "render at 640x360"

                the future is even worst they will use machina learning 3d graphic engines with low rasterrendering input as you say at 640x360 pixel and then the AI will scale it to 4-5K resolution and will even perform the raytracing decoration on top of it and even high-quality textures on the walls will be generated by the AI instead of provided by the game engine.
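                 Just to put numbers on that claim (simple arithmetic; the 640x360 input comes from the quoted post, while the 3840x2160 "4K" output is my assumption):

Code:
# Quick pixel math for the "render tiny, let the AI fill in the rest" idea (illustrative only).
low_w, low_h = 640, 360
out_w, out_h = 3840, 2160            # "4K" UHD output, assumed for this example
scale = out_w // low_w               # 6x per axis
rendered = low_w * low_h             # 230,400 pixels actually rasterized
displayed = out_w * out_h            # 8,294,400 pixels on screen
print(f"{scale}x per axis, {displayed // rendered}x the pixels; "
      f"{100 * (1 - rendered / displayed):.0f}% of what you see would be generated, not rendered")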
                Phantom circuit Sequence Reducer Dyslexia

                Comment

                • qarium
                  Senior Member
                  • Nov 2008
                  • 3438

                  #58
                  Originally posted by mdedetrich View Post
                  At least if you want to put reasonable limits on price and power usage, we are starting to hit diminishing returns for raw raster performance. This is due to a combination of hitting physical limits on chip density (feature sizes on newer nodes are approaching the scale of atoms) and the price of these wafers, which as pointed out earlier are magnitudes more expensive even when inflation adjusted.
                  Jensen is not wrong here; we have to accept that it's not like the early 2000s, where we were getting massive jumps in raster performance generation to generation at the same cost/power draw. Those days are over.
                   Do we even need that raster performance in the era of machine learning based 3D graphics engines?

                   You can use a 640x480 raster input for the AI graphics engine and the result will be 4K, with full ray-traced lighting and high-quality textures generated by the AI...
                  Phantom circuit Sequence Reducer Dyslexia

                  Comment

                  • ssokolow
                    Senior Member
                    • Nov 2013
                    • 5096

                    #59
                    Originally posted by qarium View Post

                     Everything you wrote is correct, but your last sentence is wrong... China always tries to compete by offering cheap products and cheap manufacturing. The problem is that China manages 5nm at best, using DUV-based quadruple patterning at SMIC; they have no EUV, no High-NA EUV, no megastructure technology, no advanced 3D packaging and no chiplet technology.

                     "(DUV-based quadruple patterning) The very same method was a major reason for the failure of Intel's 1st Generation 10nm-class process technology."

                     Because of that, China can also only produce small chips, and with a low yield rate, meaning many defective chips per wafer.

                     So your claim "preferably far away from China" is nonsense: we cannot get cheap 2nm, 3nm, 4nm or 5nm chips from China because they plainly and simply do not have the technology.

                     If you look at the market for legacy chips, meaning old chips on old fabrication nodes of 28nm and up, they all come from China and they are dirt cheap...

                     My opinion is that competition is competition and keeps prices low, but TSMC has no competition.

                     If you look at the yield rate and quality of Samsung's 3nm node, it is a pain in the ass...

                     And Intel is a complete loser; they even switched to TSMC for their halo products because their own process is garbage.

                     Yes, they have 4nm, 3nm, 2nm and 1.4nm on paper, and they have bet big on them with billions in investment, but we have yet to see how that turns out.
                    What you wrote, as phrased, only makes sense if you missed the "far away" in "What we need is more cutting edge Fabs, preferably far away from China." and interpreted it as "preferably sourced from China" instead of the intended "preferably NOT sourced from China".

                    ...well, unless you felt such a strong need to rant about China that you spat out an irrelevant ramble when all that was needed was "Don't worry. China is behind and shows no signs of catching up for a long time."

                    Comment

                    • theriddick
                      Senior Member
                      • Oct 2015
                      • 1744

                      #60
                       Many don't realize that the performance matching is done with 4x frame generation and the like (rough math below)...

                       Imagine having to be connected to a massive data farm in order to play a video game in the future, because that's how your frames get processed...
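                       A rough sketch of that trade-off, with purely hypothetical numbers: 4x frame generation multiplies the displayed frame rate, but interpolation-style frame generation keeps input latency tied to the base frame rate (plus roughly one extra base frame of buffering), not to the displayed rate.

Code:
# Rough math on what "4x frame generation" buys you (all numbers are hypothetical).
# Only the base (really rendered) frames sample your input; generated frames do not.
base_fps = 30                        # frames the GPU actually renders per second
gen_factor = 4                       # 1 rendered frame shown plus 3 generated in between
displayed_fps = base_fps * gen_factor
base_frame_time_ms = 1000 / base_fps

# Interpolation must wait for the *next* real frame before it can fill in the ones between,
# so a crude estimate of input-to-photon latency is about two base frame times.
latency_framegen_ms = base_frame_time_ms * 2
latency_if_real_ms = 1000 / displayed_fps    # what truly rendering that many fps would feel like

print(f"displayed: {displayed_fps} fps, input latency ~{latency_framegen_ms:.0f} ms "
      f"(vs ~{latency_if_real_ms:.0f} ms per frame if all {displayed_fps} fps were really rendered)")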

                      Comment
