Hands On With The AMD Radeon RX 6600 XT


  • torsionbar28
    replied
    Originally posted by CaptainLugnuts View Post
    AMD is never going to make a card like that ever again because they're pointless.
    If all you need is a low-end GPU, their integrated GPUs have that covered.
    Their APUs are very capable, no doubt. The challenge is that they share the TDP envelope with the CPU. A dedicated GPU, even a low-power one, frees the CPU from a power-budget perspective, and can make for a potent combination. There is also the fact that the majority of games do not require much CPU power; you can game all but the latest AAA titles on a 5+ year old CPU just fine, provided you have a modern dGPU. This lets you decouple the GPU from the CPU/mobo from a financial perspective, which is a compelling option for those on a budget. Of course, with today's silicon shortage, manufacturers and vendors aren't going to bother with low-margin SKUs, so you won't be seeing anything like an RX 460 or a GTX 550 Ti any time soon.
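    The shared-TDP point above can be sketched with a toy calculation. The wattages are made-up round numbers for illustration, not measured figures for any real part:

```python
# Toy sketch: how a shared TDP envelope constrains an APU versus a
# CPU + low-power dGPU pairing. All wattages are hypothetical round
# numbers chosen for illustration only.

def cpu_power_available(package_tdp_w, gpu_draw_w):
    """Power left for the CPU cores when the GPU shares the package TDP."""
    return max(package_tdp_w - gpu_draw_w, 0)

# APU: a 65 W package where the integrated GPU pulls 30 W under gaming
# load, leaving the CPU cores only 35 W to work with.
apu_cpu_budget = cpu_power_available(65, 30)

# CPU + dGPU: the CPU keeps its full 65 W because the dGPU has its own
# separate power budget.
dgpu_cpu_budget = cpu_power_available(65, 0)

print(apu_cpu_budget, dgpu_cpu_budget)  # 35 65
```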



  • Tomin
    replied
    I wonder if it would be feasible to integrate a tiny but still 3D- and video-decode-capable GPU into the IO die of Ryzen CPUs once that die gets a node shrink to (for example) 7 nm, so that there would be more transistors to spare. No idea when that might happen, or whether it's actually possible, but if it is, it could mean that every Ryzen CPU could be an APU at (hopefully) little extra manufacturing cost.



  • foobaz
    replied
    Originally posted by birdie View Post

    Vega graphics in its Ryzen APUs is good enough, and that's my opinion as well. Now, tons of people don't have a built-in GPU, and that's a valid concern, but such people normally know what they are doing and buy beefy GPUs instead.
    Nearly everyone in the developed world can afford a low-end dGPU, but in the developing world, Ryzen APUs are very popular with gamers. They offer acceptable performance in most titles if you turn down the graphics settings. The next step up in gaming performance is a low-end dGPU but this can double the cost of a system.



  • anth
    replied
    Originally posted by MadeUpName View Post
    As a side note, I saw a YT clip where someone was showing that, at least in the Steam survey, Nvidia has a higher share of 3090s than all of AMD's 6000 series combined. I'm not entirely sure what to make of that. Could it be true? If so, why is it true?
    It could be that the survey isn't counting them properly. There have been reports of people on Windows getting the Steam survey and seeing RDNA2 cards identified as "AMD Radeon Graphics" rather than the actual GPU model, and the "AMD Radeon HD 8800 Series" entry has been growing for a few months.

    It could also be that AMD just aren't making many of them. Their priorities for the limited fab capacity are likely:
    1. Consoles, as the deals made with Sony and Microsoft would require a certain volume of product. The profit per unit is probably quite low; on the other hand, without having these sales guaranteed years ago, AMD wouldn't have been able to do the R&D needed to be where they are now.
    2. CPU chiplets. The profit per wafer for a few chiplets going into Ryzens will be much higher than using that same space to make one GPU, and Epyc margins are higher still.
    3. GPUs, and even here there is more money in workstation and AI cards than in gaming cards.
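    The chiplet-versus-GPU point in item 2 can be illustrated with some rough dies-per-wafer arithmetic. The die areas below are approximate ballpark figures, and the yield model is deliberately crude:

```python
import math

# Rough sketch of the "profit per wafer" argument: small CPU chiplets
# pack far more sellable dies onto a 300 mm wafer than one large GPU
# die does. Die areas are approximate ballpark figures, not exact specs.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude upper bound: usable wafer area / die area. Ignores edge
    loss and defects, both of which hurt large dies even more."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

chiplets = dies_per_wafer(80)   # ~80 mm^2, Zen 3 chiplet class (approx.)
gpus = dies_per_wafer(520)      # ~520 mm^2, large RDNA2 die class (approx.)

print(chiplets, gpus)  # 883 135
```

Even before accounting for yield (where small dies win again), the same wafer holds several hundred more chiplets than GPU dies.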



  • bridgman
    replied
    Originally posted by DanL View Post
    If the silicon is the problem, maybe AMD could at least update the video encode/decode block on the Polaris cards, while keeping the 3D part more or less the same. That way, they can still use GloFo for those cards.
    Yep, that seems like the best way to get another fab process into use... or save 6-9 months and fab them as they are to distract the mining market and make more gaming cards available to gamers. Mining is the real problem here AFAICS; take that away and I think industry capacity comes fairly close to being able to supply current requirements.

    That said, my impression is that GloFo is fully booked for the foreseeable future as well.
    Last edited by bridgman; 07 August 2021, 10:32 PM.



  • DanL
    replied
    If the silicon is the problem, maybe AMD could at least update the video encode/decode block on the Polaris cards, while keeping the 3D part more or less the same. That way, they can still use GloFo for those cards.



  • leipero
    replied
    Originally posted by bridgman View Post

    Yep - our CPUs for those markets also include GPUs, although our GPUs tend to be a bit more powerful than the corresponding Intel ones.

    This isn't always obvious because we did not sell Renoir into the DIY desktop market - qualification takes a long time and Cezanne was running right behind Renoir - but we did sell a lot of 4xxxG parts to OEMs which went into the same office systems you describe.

    We are shipping 5xxxG parts into the DIY market now, although the "most office-y version" (5300G) is currently OEM only.

    What we don't have today is "tiny little GPUs for troubleshooting a gaming/workstation system", whether they be integrated or discrete, and opinions vary re: how important those are. Most DIY'ers tend to have an old dGPU sitting around but not all of them do. The sad thing is that there are probably tens of thousands of those cards scrapped every year but still functional.

    On the other hand, my old VESA Local Bus dGPUs are probably past the end of their usefulness.
    I see your point; still, the GPUs in those APUs are way overkill for office work (for now, at least). Depending on the owner (and place), sometimes it's actually cheaper to purpose-build PCs than to use OEM ones, so I was referring more to that. Also, what such business owners care about is how easy the system is to set up and maintain, so having an "easy all-in-one interface" driver for such systems would be beneficial IMO. Radeon Software is unnecessarily heavy for such use in what is usually a Windows environment; I don't know whether it's possible to install only the driver, without the GUI, though I assume it is. I don't know if this is possible from a cost/benefit perspective, but if it is, I see no reason not to do it.

    That is a sad thing, but it's how people work in general; for those office PCs, those cards could indeed still be useful today.



  • bridgman
    replied
    Originally posted by puleglot View Post
    1080p GPU with 3 fans... When will I have a chance to replace my R9 380 ITX Compact?
    https://www.asrock.com/Graphics-Card...r%20ITX%208GB/

    I believe this is the only single-fan ITX 6600 XT at the moment; most cards seem to have 2 or 3 fans.



  • bridgman
    replied
    Originally posted by user1 View Post
    Well, to me, a worthy successor means that the card is much faster than the one it replaces and costs at least about the same. I would say the RX 5600 XT was pretty close to being a 580 / 1060 replacement (AFAIR, its MSRP was below $300 and it was as fast as a GTX 1080), but yeah, with today's prices, for obvious reasons, you just can't expect that.
    I hate to say it, but I think those days are mostly gone. Those improvements in price/performance were largely made possible by ongoing reductions in cost per transistor from newer fab processes, but between 14/12 nm and 8/7 nm those reductions largely went away (not completely, but mostly).

    The rule of thumb used to be that GPU cost was roughly proportional to die size and that moving to new fab processes gave the GPU vendors more transistors to work with (and hence more performance) at the same price point. These days the increasing complexity and cost of new fab processes largely matches the increased transistor density, and as a consequence GPU cost tracks number of transistors more closely than it tracks die size.
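    The cost-per-transistor argument above can be sketched numerically. The figures are made-up illustrative ratios, not real foundry pricing; the point is the shape of the relationship, not the magnitudes:

```python
# Toy sketch of the cost-per-transistor argument. All numbers are
# hypothetical normalized ratios, not actual foundry data.

def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

# Older node shrink: density doubles while wafer cost rises only
# modestly, so cost per transistor falls -> more performance at the
# same GPU price point.
old = cost_per_transistor(wafer_cost=1.0, transistors_per_wafer=1.0)
shrunk = cost_per_transistor(wafer_cost=1.2, transistors_per_wafer=2.0)

# Recent nodes: wafer cost roughly doubles along with density, so cost
# per transistor stays nearly flat -> GPU price tracks transistor count.
recent = cost_per_transistor(wafer_cost=2.0, transistors_per_wafer=2.0)

print(old, shrunk, recent)  # 1.0 0.6 1.0
```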

    Remember all those years when people were saying that the semiconductor industry would hit a wall and be unable to shrink fab processes past a certain point? It's probably fair to say that rather than turning out to be impossible, it turned out to be extremely complicated and expensive. My low-quality understanding is that the move from planar FETs to FinFETs was a big contributor to the complexity.

    TL;DR: unless/until something changes in the fab world, the price/performance improvement from each new fab process has dropped substantially over the last 5 years (even before shortages and COVID-19) and shows no sign of coming back.

    Improvements in performance and power/performance are still happening, but price/performance not so much.
    Last edited by bridgman; 07 August 2021, 05:20 PM.



  • user1
    replied
    Originally posted by bridgman View Post
    Have to disagree there - the RX 5500XT was a bit faster, a bit cheaper and used less power than either of those. Unfortunately even the 5500XT is selling for 4x MSRP or higher right now.
    Well, to me, a worthy successor means that the card is much faster than the one it replaces and costs at least about the same. I would say the RX 5600 XT was pretty close to being a 580 / 1060 replacement (AFAIR, its MSRP was below $300 and it was as fast as a GTX 1080), but yeah, with today's prices, for obvious reasons, you just can't expect that.
    Last edited by user1; 07 August 2021, 04:57 PM.

