AMD Radeon RX 6600 XT Launching For 1080p RDNA2 Gaming At ~$379 USD


  • #11
    I'm not in the market for these, having (fortunately, as it turns out :P) opted for a 5700 ~18 months ago, but I'll say this to anyone who is: based on recent AIB performance in terms of quality and price (in that order), unless something dramatic has changed in the interim I wouldn't even consider anyone but Sapphire, shortage-induced "beggars can't be choosers" notwithstanding. The build quality, noise, and cooling performance of mine are just ridiculously good.

    (Yes, I knew RDNA2 would have RTRT and was coming "soon", but I decided I'd rather wait 3 years for that to (a) not suck, and (b) appear in games I care about, and buy a second- or third-generation card then, rather than grab a first pass that was underpowered on that front and would need replacing by then anyway.)



    • #12
      Originally posted by bridgman View Post

      In fairness, if you compare "clearing out" price for the older model against "launched in the middle of a global semiconductor shortage" price for the newer model the upgrade path is always going to be a tough sell.

      The other challenge is that price-per-transistor from the fabs used to go down significantly with each new process generation, so GPU vendors could deliver better price performance by moving to newer fab processes. That pretty much stopped happening some time between 14nm and 7nm, so end product price is back to tracking transistor count much more than in the past. Your 580 has ~5.7B transistors IIRC while the 6600XT has 11.1B.

      I haven't looked closely at benchmark performance but I suspect the price/performance of the 6600XT at launch is at least as good as the RX580 was at launch ($229 US for 8GB). What is missing is the price/performance improvements that came along for the ride with new fab processes and which became a key part of new product expectations.

      The per-transistor pricing trend was already interrupted before the chip shortage hit - there may be some improvements if/when capacity is able to catch up with demand, but the cost of fabs for newer and finer processes is going up so fast that I don't see the old pricing trend returning any time soon.

      We may get lucky and find another sweet spot like 14nm where the increase in density (from finer process pitch) outstrips the increase in cost (from more complex fab process and cost of building yet another fab) and allows per-transistor pricing to head down some more, but I don't think we know yet.

      For your amusement:

      https://www.amazon.ca/Radeon-RX-580-...eon+RX+580+8GB

      I do sometimes wonder if we should port the RX580/90 to GF 12LP+ and use that to try to keep the miners fed. The problem is that mining booms never seem to last long enough to have room for a product development cycle, and spending a big pile of money during a mining bust is a tough sell even if that is exactly the strategy recommended for stock market investing.
      All true and valid points. Do you think that for the next gen it will be worthwhile to keep the gaming GPUs a bit longer on 7nm rather than on the cutting-edge node, where they are competing for wafers with all the other products?

      There is a big fear right now amongst PC gamers (Windows and Linux alike) that their hobby is becoming more expensive and niche, something for the rich, and that the USD 200 mainstream GPU will never again be a thing.

      It also seems like GPUs, by their very nature, take a lot of die area and transistors, so obviously when there is a chip shortage it makes more sense for companies like AMD to prioritize CPUs, which are more profitable.
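
      To put the per-transistor figures quoted above in concrete numbers, here is a minimal back-of-the-envelope sketch in Python. It uses only the launch MSRPs and transistor counts mentioned in the thread and says nothing about performance - just list price divided by transistor count:

      ```python
      # Launch price per billion transistors, using the figures quoted above
      # (RX 580 8GB: $229, ~5.7B transistors; RX 6600 XT: $379, 11.1B).
      # Illustrative only - it ignores everything except list price and count.

      cards = [
          ("RX 580 8GB", 229, 5.7),
          ("RX 6600 XT", 379, 11.1),
      ]

      for name, launch_usd, transistors_b in cards:
          print(f"{name}: ~${launch_usd / transistors_b:.0f} per billion transistors")

      # RX 580 8GB: ~$40 per billion transistors
      # RX 6600 XT: ~$34 per billion transistors
      ```

      By this crude measure the 6600 XT is actually a bit cheaper per transistor at launch than the RX 580 was, which is consistent with the point that what disappeared is the extra price/performance that used to come along "for free" with each new fab process.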



      • #13
        I don't even want to waste time figuring out how AMD thinks a 128-bit GPU is worth $379 when NVIDIA's 256-bit 3060 Ti is $399. I mean, the 192-bit 6700 XT at $479 is already outrageous. Dollars per bit of bus width, or per GB/s of bandwidth, are higher at the lower-end tier even if you only compare within AMD's own RDNA2 series. I guess that's the reason ROCm still can't be enabled for RDNA cards, if the people at AMD can't even do elementary-level math.

        Here is a bad joke for AMD GPU fans: considering that the 6700 XT had to bust the GeForce 3070's 8GB of VRAM at 1440p gaming to convince people that an inferior 192-bit GPU should be $479, how about we compare the 3060 and the 6600 XT at 1440p to "intentionally" bust the 6600 XT's 8GB of 128-bit VRAM? Oh wait, this time the 6600 XT is "designed for 1080p gaming".
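
        For what it's worth, here is the arithmetic behind the "dollar per bitness" complaint, as a small sketch. The MSRPs and bus widths are the ones stated in this post; whether the metric means anything is a separate question (see the replies below):

        ```python
        # "Dollars per bit of memory bus width", computed from the MSRPs and
        # bus widths cited in the post above. This is the poster's yardstick,
        # not a claim about real-world value or performance.

        cards = [
            ("RX 6600 XT", 379, 128),
            ("RX 6700 XT", 479, 192),
            ("RTX 3060 Ti", 399, 256),
        ]

        for name, usd, bus_bits in cards:
            print(f"{name}: ${usd / bus_bits:.2f} per bus bit ({usd} USD / {bus_bits}-bit)")

        # RX 6600 XT: $2.96 per bus bit
        # RX 6700 XT: $2.49 per bus bit
        # RTX 3060 Ti: $1.56 per bus bit
        ```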



        • #14
          $380 and they call it an 'entry-level gaming GPU'. No, thanks. I'm fine with my RX 550; it's just enough.



          • #15
            Originally posted by phoronix_is_awesome View Post
            I don't even want to waste time figuring out how AMD thinks a 128-bit GPU is worth $379 when NVIDIA's 256-bit 3060 Ti is $399. I mean, the 192-bit 6700 XT at $479 is already outrageous. Dollars per bit of bus width, or per GB/s of bandwidth, are higher at the lower-end tier even if you only compare within AMD's own RDNA2 series. I guess that's the reason ROCm still can't be enabled for RDNA cards, if the people at AMD can't even do elementary-level math.

            Here is a bad joke for AMD GPU fans: considering that the 6700 XT had to bust the GeForce 3070's 8GB of VRAM at 1440p gaming to convince people that an inferior 192-bit GPU should be $479, how about we compare the 3060 and the 6600 XT at 1440p to "intentionally" bust the 6600 XT's 8GB of 128-bit VRAM? Oh wait, this time the 6600 XT is "designed for 1080p gaming".
            Bitness doesn't matter, only performance does. It remains to be seen what that will be - you certainly can't trust vendor-provided slides.



            • #16
              Originally posted by bridgman View Post
              I do sometimes wonder if we should port the RX580/90 to GF 12LP+ and use that to try to keep the miners fed.
              I'm with skeevy: I think that is absolutely still a viable *graphics* card once you give it that sort of node shrink. Hell, even a GCN1-era 7970 would still comfortably hit 60 FPS in a LOT of games at High+ with the clocks available there; it just doesn't have the feature-set capabilities expected today, but IIRC that's not a problem for Polaris.

              Given that ALL the hugely popular games of the last several years - Fortnite, Overwatch, CS:GO, DOTA, TF2, GTA V, PUBG, you name it - are either literally running on engines over a decade old or on new ones that are effectively equivalent, GPU "power" is practically irrelevant from a sales standpoint, other than on the marketing front.



              • #17
                Originally posted by bridgman View Post
                We may get lucky and find another sweet spot like 14nm where the increase in density (from finer process pitch) outstrips the increase in cost (from more complex fab process and cost of building yet another fab) and allows per-transistor pricing to head down some more, but I don't think we know yet.
                I really miss the days when "1080p gaming"-class cards were $150 and had significantly lower TDPs too.

                If you're fine with 1080p 30 FPS (which I actually prefer), the GTX 750 Ti is still holding up very well today, and I guess the same applies to the R7 260X (but they're not gonna hold up much longer, so don't take that as a recommendation to buy a used one).
                Those were $150 cards from 2014 that would serve gamers - at least modest gamers - for at least 6 years, which is incredible value compared to how things are today.

                Ignoring the current ripoff market prices, nowadays the cheapest offering from the latest generation from each vendor is $350 MSRP? What the hell?

                Assuming that you work for AMD, sir, is there any chance of getting a low-cost (at least in terms of MSRP) RDNA2 discrete card targeted at 1080p 30 FPS gaming? Or at the very least an RDNA1 card? I mean the RX 550 from just a few years ago is a perfect example.
                Or is targeting the modest 1080p 30 FPS gamers not feasible anymore?



                • #18
                  Originally posted by phoronix_is_awesome View Post
                  I don't even want to waste time figuring out how AMD thinks a 128-bit GPU is worth $379 when NVIDIA's 256-bit 3060 Ti is $399. I mean, the 192-bit 6700 XT at $479 is already outrageous. Dollars per bit of bus width, or per GB/s of bandwidth, are higher at the lower-end tier even if you only compare within AMD's own RDNA2 series.
                  We do badly at dollars per number of letters in the product name as well, at least against the 3060Ti, although we do very well against the 3060.

                  One could ask how NVidia thinks a GPU with <9MB of cache is worth $399 when our 6600XT with 32MB of cache is only $379.

                  Too many possibilities. Perhaps we could find something more generic to compare, like performance?
                  Last edited by bridgman; 30 July 2021, 01:07 AM.
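
                  As a small illustration of the point, dividing the same MSRPs by a different spec - the on-die cache sizes mentioned above, treating the 3060 Ti's "<9MB" as 9MB - flips the ordering relative to the bus-width comparison earlier in the thread:

                  ```python
                  # Dollars per MB of on-die cache, using the figures from this post.
                  # Deliberate spec cherry-picking to show the "winner" depends on
                  # which number you divide by - not a performance comparison.

                  cards = [
                      ("RX 6600 XT", 379, 32),   # (name, MSRP in USD, on-die cache in MB)
                      ("RTX 3060 Ti", 399, 9),
                  ]

                  for name, usd, cache_mb in cards:
                      print(f"{name}: ${usd / cache_mb:.1f} per MB of on-die cache")

                  # RX 6600 XT: $11.8 per MB of on-die cache
                  # RTX 3060 Ti: $44.3 per MB of on-die cache
                  ```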



                  • #19
                    Originally posted by bridgman View Post

                    We do badly at dollars per number of letters in the product name as well, at least against the 3060Ti, although we do very well against the 3060. One could ask how NVidia thinks a GPU with <9MB of cache is worth $399 when our 6600XT with 32GB of cache is only $379.

                    Too many possibilities. Perhaps we could find something more generic to compare, like performance?
                    Wait what, 32GB of cache? H- Oh, it was 32MB... I almost thought that AMD managed to wire a massive SRAM chip to the processing core...



                    • #20
                      Originally posted by idash View Post
                      Ignoring the current ripoff market prices, nowadays the cheapest offering from the latest generation from each vendor is $350 MSRP? What the hell?

                      Assuming that you work for AMD, sir, is there any chance of getting a low-cost (at least in terms of MSRP) RDNA2 discrete card targeted at 1080p 30 FPS gaming? Or at the very least an RDNA1 card? I mean the RX 550 from just a few years ago is a perfect example.
                      Or is targeting the modest 1080p 30 FPS gamers not feasible anymore?
                      The tricky part is that we are now putting that kind of performance into our integrated GPUs - a 5700G with decent memory arguably makes a pretty good 1080p 30fps gaming system all in one chip, for only a bit more than the cost of an equivalent CPU.

                      https://www.tomshardware.com/news/am...7-5700g-review

                      There used to be a market for slightly-higher-than-APU-class dGPUs back when OEMs would pair an APU with an inexpensive dGPU in a Crossfire/SLI pair, but that has largely gone away (to the relief of engineers at both companies). Go back a few more years, and the GPU market was primarily driven by demand for entry-level GPUs going into entry-level and mid-range PCs.

                      System builders all use APUs these days rather than discrete CPU/GPU, so the only remaining market for entry-level dGPUs seems to be the upgrade market for customers with older GPUs in the same range. That used to be a somewhat viable market when each new fab process brought improvements in price/performance through lower per-transistor costs, but right now it's tough to improve much over something like the RX550 at the same price.

                      NVidia seems to be in much the same boat - I believe their latest card in that range is the GDDR5 1030 from the same era. There is still a gap between the top-end APU and the bottom-end dGPU, but it gets smaller every year.

                      We did introduce an RX580 replacement in the form of the 5500XT, but that got a bad rap because it was how everyone discovered that 7nm parts could not give the same kind of price-performance increase that had been seen with previous fab process jumps. It was a bit faster and a bit cheaper than the RX580 at launch but not enough so to get a lot of attention.

                      Sorry I don't have a better answer.
                      Last edited by bridgman; 30 July 2021, 02:20 AM.

