NVIDIA Announces The GeForce RTX 50 "Blackwell" Series

  • bofkentucky
    Junior Member
    • Aug 2018
    • 18

    #41
    Originally posted by sobrus View Post

    But you forgot how much TSMC charges for a silicon wafer today. Back then we had multiple foundries; now it's only TSMC.
    A 3nm wafer costs $18,000. In 2013 it was $5,000 for 28nm. And probably around $1-$2k in the GF2/GF3 days (guessing, I couldn't find any actual information).

    If you want a cheap GPU, it needs to be a small chip, overclocked far beyond what it should be, with a piss-poor energy/performance ratio.
    That's why the Intel B580 with its 190W TDP (!) isn't any more energy efficient than 4-year-old RDNA2.

    That's why today's desktop CPUs have 170W TDPs, not 55W. Only Apple can afford large, non-overclocked chips. The rest try to squeeze the last drop of performance from tiny silicon chiplets.

    What we need is more cutting edge Fabs, preferably far away from China.
    Also, with that fab pricing, the list of people buying an equivalent pro/workstation card for something other than gaming used to be limited to Hollywood (Pixar, DreamWorks, ILM, etc.), a couple of guys in the oil and gas industry, and other niche markets. Now the hyperscalers, and everyone else who thinks they have the next big AI thing, can use those same chips for non-gaming purposes.
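
    A rough back-of-the-envelope on what those wafer prices mean per chip (the wafer prices are the ones quoted above; the 300mm wafer, die size and yield are invented for illustration, not real TSMC or NVIDIA figures):

    Code:
    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
        # Common approximation: gross dies on a round wafer minus edge loss.
        r = wafer_diameter_mm / 2.0
        return (math.pi * r ** 2) / die_area_mm2 \
            - (math.pi * wafer_diameter_mm) / math.sqrt(2.0 * die_area_mm2)

    def cost_per_good_die(wafer_cost, die_area_mm2, yield_rate):
        return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_rate)

    # Hypothetical ~380 mm^2 GPU die at 80% yield:
    print(f"28nm wafer at $5,000:  ~${cost_per_good_die(5000, 380, 0.8):.0f} per good die")
    print(f"3nm wafer at $18,000:  ~${cost_per_good_die(18000, 380, 0.8):.0f} per good die")

    Same die size and yield, and the silicon alone goes from roughly $40 to roughly $150 per chip, which is exactly why the cheap option is a small die pushed hard on clocks.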


    • creative
      Senior Member
      • Mar 2017
      • 870

      #42
      While we haven't seen benchmarks for a large sample of games with Blackwell, so far I'm feeling it's kind of meh. Don't know if NVIDIA is going to neuter their architecture in the future where raw rasterized performance is concerned. While RT and RTGI can be appreciated in certain respects, the whole frame-gen thing, as far as my limited experience goes, doesn't seem all that great. I think special texture compression techniques are a good idea for GPUs with limited VRAM, but they're not going to solve the issue unless they're automatic within the architecture, meaning they also work with legacy games that use massive texture packs on limited-VRAM GPUs.

      New GPU technologies aren't going to correct bad game design and development, that's for sure. Stuff like UE5 isn't helping the matter with its widespread adoption; even consoles suffer traversal stutter and horrible TAA implementations. Some DLSS implementations look better than native in a number of games with bad TAA. I've seen bad implementations of DLSS as well, the Dead Space remake being a good example.

      When I revisit an old console game ported to PC, made in 2009, F.E.A.R. 2: Project Origin, with its highly stylized graphics that look very smooth, well balanced and detailed, where the weapon feedback and overall game design just feel refreshing compared to modern titles, it shows how games have regressed. F.E.A.R. 2: Project Origin gets harped on within the series, yet playing it just feels excellent: a very cohesive sneak shooter that is both very responsive and feels well designed compared to a lot of the new stuff that's out there. Very impressed by how well the game has aged while still looking modern.
      Last edited by creative; 07 January 2025, 04:46 PM.


      • Shadywack
        Junior Member
        • Apr 2024
        • 35

        #43
        All these comments and nobody bothered to mention that DLSS 3.5 only worked on Linux "officially" as of a month ago. How long will it take for DLSSv4's new multi-frame generation to work?


        • ssokolow
          Senior Member
          • Nov 2013
          • 5096

          #44
          Originally posted by bnolsen View Post
          Compared to the RX 5700 XT cards I have, the 12GB 3060 runs fairly cold. I really dislike NVIDIA, but their lower-end cards seem to have much better perf/watt compared to what AMD is offering.
          *nod* Perf per watt was explicitly one of the considerations I made when I bought the GTX 750 back in 2014 and it was also a consideration when buying the RTX 3060.


          • Monsterovich
            Senior Member
            • Dec 2020
            • 298

            #45
            LOL, they still haven't removed the 12VHPWR connectors, and the GPU can draw up to 600W. What clowns.

            They've already released two cable standards just to avoid using the regular 8-pin.


            • Paradigm Shifter
              Senior Member
              • May 2019
              • 893

              #46
              Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

              The 5090 Founders Edition goes back to a dual-slot design, which is pretty impressive given the performance.
              You're right; I've not been paying attention as other things have taken my focus recently. A dual-slot reference design could be rather nice.


              • Paradigm Shifter
                Senior Member
                • May 2019
                • 893

                #47
                Originally posted by sophisticles View Post

                Same in the U.S.

                Back in the 90's, before the tech bubble burst, Java programmers were commanding 90 grand a year, a Porsche 944 Turbo cost 45 grand, a Porsche 911 Turbo cost 60 grand and a Ferrari Mondial cost 70 grand.

                The days when a computer programmer could buy two brand new Porsches a year in cash are long gone.
                Thanks for confirming. I'd just Like the post, but the Like button is currently AWOL.


                • reavertm
                  Senior Member
                  • Jul 2008
                  • 410

                  #48
                  As someone here said, don't fall for NVIDIA marketing. The 5070 will have about half the raw performance of the 4090, and is only "on par" with it thanks to inserting 3 fake DLSSv4 frames instead of the 4090's one, at the cost of increased input latency.
                  Also, some comments on the latest Digital Foundry NVIDIA video piece are hilarious:

                  "It's amazing how all it takes for DF to finally recognize the cons of DLSS is for another new shiny DLSS tech to release. Yesterday DLSS and TAA was perfect, ghosting was created today, obviously"


                  • creative
                    Senior Member
                    • Mar 2017
                    • 870

                    #49
                    Originally posted by reavertm View Post
                    As someone here said, don't fall for NVIDIA marketing. The 5070 will have about half the raw performance of the 4090, and is only "on par" with it thanks to inserting 3 fake DLSSv4 frames instead of the 4090's one, at the cost of increased input latency.
                    Also, some comments on the latest Digital Foundry NVIDIA video piece are hilarious:

                    "It's amazing how all it takes for DF to finally recognize the cons of DLSS is for another new shiny DLSS tech to release. Yesterday DLSS and TAA was perfect, ghosting was created today, obviously"
                    I know this is a TLDR.

                    Going to be honest, as an RTX 4070 Ti Super owner I've actually grown more impressed with what AMD is doing; the 7900 XT is impressive with what it has done in raw raster and its improvements at 4K. The card I have is still a good GPU, but if AMD keeps it up, NVIDIA is going to have a lot to contend with. I personally feel NVIDIA is starting to neuter their GPUs' raw performance and focus on AI too much. AI has merit for gaming GPUs, but I feel NVIDIA is leveraging it as a catch-all for bad game design and lazy development. I'd rather see modern games criticized more for poor art direction and mismanagement of overall game design, including politics that should be kept out of games.

                    What's kept me from AMD GPUs is a personal bad experience with one, luck of the draw. For some people there are still driver issues, enough to be cautious about buying from AMD. That being said, even with NVIDIA I found myself returning and swapping a Gigabyte RTX 4070 Ti Super over very strange issues interacting with not one but two high-end PSUs, first a Corsair RM850x, then a Corsair RM1000x. That GPU also developed screeching coil whine, the type that had me thinking it was actually coming through my speakers. Needless to say, that GPU got exchanged for an Asus one when all was said and done.

                    At this rate it's eventually going to be very difficult to pass up AMD, because they are offering more for less, and they have improved generation on generation as well. If they can correct the driver timeouts, and an nvidia-settings-like application gets developed for AMD GPUs on GNU/Linux, AMD will be a home run, with its already better support across multiple desktop environments and multi-head display setups. NVIDIA seems more limited there, mostly to GNOME from what I've gathered, though I may be wrong. I've only used one display at a time, but I've heard plenty of people complain about NVIDIA's bad support for multi-monitor setups on GNU/Linux; it could be overblown and not really the case for a good number of users, though.

                    AMD seems to have idle power consumption issues that need end-user intervention; this should have been resolved by now. I've seen at least two generations of complaints about that particular issue, but if it's an easy end-user fix, I guess it's not too terrible.

                    Both AMD and NVIDIA GPUs have their issues, but CES was some outright deceptive marketing, despite the "due to DLSS4" caveat. Why? There is no way in hell a 5070 could even remotely touch a 4090 in raw raster performance. At best it's closing in on a mere RTX 4070 Ti Super, and even then the 5070 has a shit 12GB of VRAM. My ask of modern GPUs: if the card is $400 or more, give that damn thing at least 16GB of VRAM. I don't care how fast GDDR7 is, it's not going to make up for only having 12GB. Remember when the GTX 1070 matched or beat the raw performance of the previous generation's Titan X? That's no longer the case for 70-class cards.

                    Had my RTX 3070 had more VRAM I would have kept it longer; it was a very decent card in terms of performance that got hamstrung by 8GB.
                    Last edited by creative; 09 January 2025, 09:02 AM.


                    • kurkosdr
                      Senior Member
                      • Jul 2013
                      • 163

                      #50
                      Originally posted by mdedetrich View Post

                      At least if you want to put reasonable limits on price and power usage, we are starting to hit diminishing returns for raw raster performance. This is due to a combination of hitting physical limits with chip density (newer nodes are approaching the size of an atom) and the price of these wafers, which as pointed out earlier are orders of magnitude more expensive even when inflation-adjusted.

                      Jensen is not wrong here; we have to accept that it's not like the early 2000s, where we were getting massive jumps in raster performance generation to generation at the same cost/power draw. Those days are over.
                      We had reached good-enough performance and visual fidelity with the last of the unified-shader GPUs. But then of course the industry wanted something new, which led to ray tracing, which led to DLSS because ray tracing makes even high-end GPUs chug. You'll get your crappy upscaled and frame-interpolated frames and you'll be happy.

