AMD Announces The Radeon RX 6700 XT For $479


  • Originally posted by bridgman View Post

    So you're saying that GPUs should sell well below cost once you factor in R&D.

    Are you thinking about a system of subsidies from other markets, or from governments, or is the idea that manufacturers would just take turns losing money until they closed their doors ?

    Governments have subsidized things like rice, alcohol and fuel at various times but subsidizing gaming GPUs seems like a stretch.
    I am sorry, but I am totally unwilling to believe this. What "R&D" are you talking about, exactly? CPU and GPU hardware has remained relatively stagnant for almost a decade now. Intel has kept releasing the same CPU architecture with tiny improvements for more than half a decade, and Nvidia and AMD keep doing the same for GPU hardware. AMD, for example, has simply been refining the same GCN architecture since 2012, and only introduced a modification of it with RDNA a year ago. The only real new thing we have had since 2013, with Mantle/Vulkan/D3D12, is raytracing. What "innovation" are you talking about?

    AMD has kept selling the same GPU in the mainstream/budget segment for many, many years now. If I want an entry-level, budget, mainstream or even lower-high-end GPU, I still get to buy Polaris cards to this day; there is no alternative. Polaris was introduced almost 5 years ago. What innovation?

    There is no innovation. Just the same old architectures, rearranged a little differently to exploit smaller process nodes, with slightly better drivers, sold at ever-increasing price points. A $479 MSRP (and we know the price will be significantly higher at retail) is no joke. Even for wealthy countries like the USA, let alone for my country, where this amount is close to what most people make in a month. It used to be that I could go to the store, get a 100 euro GPU, and still game decently at lower resolutions in all modern games. Nothing fancy, but it could do the trick. In the last half of the decade this ceased to be the case. Today I need to pay 300 euros retail in order to be able to game at the bare minimum for new AAA games at 1080p. The bare minimum. If I pay less than that, I have to not only forget about raytracing (obviously), but even medium settings at 1080p are out of the question, as is a steady 60 fps. It is a lot of money for such a poor experience, when you can add 100 euros and just buy a whole console that will perform at 4K and with raytracing...

    The GPU duopoly's greed has been destroying the whole market, but I suppose they don't care since they have miners to sell to. Only, when the bitcoin ponzi scheme collapses at some point (and it WILL collapse, make no mistake about it), I am not sure there will be any healthy gaming market left to sell GPUs to...



    • Originally posted by leipero View Post
      I don't think he's suggesting that, but his point stands - GPU (and CPU) prices have been overinflated for quite a while now. There are valid reasons for having an overinflated price (the things you've mentioned), and it is possible that 100% of those reasons are valid, but that doesn't change the fact that the price is way higher than it "should be". It is similar to top soccer players, or any top sportsperson - not a single person could convince me that someone whose sole job is to kick a ball is worth more than a person who does brain surgery. It's insane (because it suggests that entertaining X number of people is worth more than the lives of X number of people), but it is a very similar issue.
      But this is more of a philosophical approach to the topic, not the "real world" one.
      TL;DR version: it's a system fault, and those anomalies are just the product of it.
      Right now you can't use raytracing with MESA/ACO... which means the 6800/6900 XT hardware is not very useful for Linux people.
      So who cares if the price is high?
      So why not just buy a Vega 64 for 220-250€ on eBay? (I know the price because I sold 3 Vega 64s.)

      You do not have to get "philosophical" to realise that it is better to buy a 220€ Vega 64 than a 1600€ 6900 XT.

      "GPU prices have been overinflated for quite a while now."

      That's only true for hyped products like the RTX 2000, RTX 3000 and Radeon 6000 series GPUs...

      If you buy a Vega 64 the price is low: 220€.

      So sometimes people just need to cool down, use good old tech, and buy a 6900 XT for 1000€ instead of 1600€ in, say, six months.


      • This is why it’s important to have local fabrication of supplies. The only reason it has become so global is that low wages + cheap shipping costs = cheaper to produce.



        • Originally posted by lyamc View Post
          This is why it’s important to have local fabrication of supplies. The only reason it has become so global is that low wages + cheap shipping costs = cheaper to produce.
          Right, nationalism was always right and globalism/internationalism/communism was always wrong...


          • Originally posted by bridgman View Post
            Are you thinking about a system of subsidies from other markets, or from governments, or is the idea that manufacturers would just take turns losing money until they closed their doors ?
            Maybe the industry should focus on cutting production costs with better engineering and spread its fabs around the globe to spread the risk of these disasters hitting the whole industry (the initiatives in the US and Europe hopefully will lead to some new leading-edge manufacturing facilities from both TSMC and Samsung). The wafer cost increases are absurd and only trend in one direction, up and to the right, on an exponential curve. There must be ways to produce advanced chips more cheaply, but I guess it was easier to get customers to pay more for the end products, though that will hit a wall if that trend continues.

            And people are rightfully upset about the current price increases. One other example I can think of is the added cost of PCIe 4.0 plus retimers on motherboards; the Gen-Z consortium came up with a more cost-effective solution which I hope we will get to see sometime in the future. Of course that would mean cutting backwards compatibility with older components, but that is already done on newer platforms with new CPU sockets, and motherboards could still come with additional PCIe 3.0 slots.



            • Urk... I don't know where to start, so I guess I'll just go through your post in sequence...

              Originally posted by TemplarGR View Post
              I am sorry, but I am totally unwilling to believe this. What "R&D" are you talking about, exactly? CPU and GPU hardware has remained relatively stagnant for almost a decade now. Intel has kept releasing the same CPU architecture with tiny improvements for more than half a decade, and Nvidia and AMD keep doing the same for GPU hardware. AMD, for example, has simply been refining the same GCN architecture since 2012, and only introduced a modification of it with RDNA a year ago. The only real new thing we have had since 2013, with Mantle/Vulkan/D3D12, is raytracing. What "innovation" are you talking about?

              AMD has kept selling the same GPU in the mainstream/budget segment for many, many years now. If I want an entry-level, budget, mainstream or even lower-high-end GPU, I still get to buy Polaris cards to this day; there is no alternative. Polaris was introduced almost 5 years ago. What innovation?
              Polaris was replaced with the RX5500 - similar performance at maybe 2/3 the power. Remember that between NVidia and the press everyone was convinced that power consumption was The Most Important Thing In The World at the time, although now that we are caught up or ahead it no longer seems to be considered important.

              The 5500 didn't give much in the way of price/performance improvement, unfortunately, because both of the main technologies (7nm, GDDR6) were still quite a bit more expensive than the 12nm & GDDR5 they replaced. It would have been better if we could have made Navi14 a bit larger (maybe 28 CUs rather than 24) so that there could have been a visible performance bump, but AFAIK we had limited budget and had to make do with one die for both desktop and laptop.

              Originally posted by TemplarGR View Post
              There is no innovation. Just the same old architectures, rearranged a little differently to exploit smaller process nodes, with slightly better drivers, sold at ever-increasing price points.
              This is the one I have trouble with - RDNA was a significant redesign, nothing to do with exploiting smaller process nodes. It was built on the same process node as Vega20 but delivered much better perf/power and perf/area on the same process. Navi10 provided Vega20 performance at a significantly lower price.

              In addition to adding things like ray tracing and architectural changes to scale efficiently to 80 CUs, RDNA2 was a significant redesign at the logical & physical level in order to achieve much higher clocks on the same fab process and power level. Again, those higher clocks translate directly into performance.

              Originally posted by TemplarGR View Post
              A $479 MSRP (and we know the price will be significantly higher at retail) is no joke. Even for wealthy countries like the USA, let alone for my country, where this amount is close to what most people make in a month. It used to be that I could go to the store, get a 100 euro GPU, and still game decently at lower resolutions in all modern games. Nothing fancy, but it could do the trick. In the last half of the decade this ceased to be the case. Today I need to pay 300 euros retail in order to be able to game at the bare minimum for new AAA games at 1080p. The bare minimum. If I pay less than that, I have to not only forget about raytracing (obviously), but even medium settings at 1080p are out of the question, as is a steady 60 fps. It is a lot of money for such a poor experience, when you can add 100 euros and just buy a whole console that will perform at 4K and with raytracing...
              I'm not sure how to interpret what you are saying here but it reads like "games have become more complex and so I need a faster GPU to run them and it's all the fault of the GPU vendors". Or you could be talking about currently inflated retail prices as a consequence of chip shortages, not sure.

              Are you saying that a 300 euro card today has the same raw performance as a 100 euro GPU from "back then", or just that you need a much more powerful GPU to play "modern games" today than in the past ?

              Consoles are heavily subsidized by the manufacturers, so comparing HW cost/perf between GPUs and consoles is meaningless except for deciding which one you should buy. If you are saying that "someone" should subsidize GPUs and make their money back on games the same way that console vendors do that's a reasonable idea, but then the question is who that "someone" would be.

              Originally posted by TemplarGR View Post
              The GPU duopoly's greed has been destroying the whole market, but I suppose they don't care since they have miners to sell to. Only, when the bitcoin ponzi scheme collapses at some point (and it WILL collapse, make no mistake about it), I am not sure there will be any healthy gaming market left to sell GPUs to...
              Sorry, I'm trying to understand this "greed" you are talking about. If we were being greedy then our GPU margins would have gone up significantly, but that has not happened. Can you be a bit more specific about this "greed" you see ?
              Last edited by bridgman; 04 March 2021, 05:32 PM.


              • Originally posted by ms178 View Post
                Maybe the industry should focus on cutting production costs with better engineering and spread its fabs around the globe to spread the risk of these disasters hitting the whole industry (the initiatives in the US and Europe hopefully will lead to some new leading-edge manufacturing facilities from both TSMC and Samsung). The wafer cost increases are absurd and only trend in one direction, up and to the right, on an exponential curve. There must be ways to produce advanced chips more cheaply, but I guess it was easier to get customers to pay more for the end products, though that will hit a wall if that trend continues.
                At the risk of stating the obvious, neither NVidia nor AMD have their own fabs. NVidia was always fabless, while AMD had to get out of the fab business because the costs were growing exponentially and we could no longer afford to fund ongoing fab & process development.

                We rely on third party fabs who make their own decisions about capacity planning and locations, although we can influence that planning a bit by placing long term orders (as we have done). The cost of a single fab is almost 20x the net profit we made during our best year in two decades, so it's not clear where you think we would get the money.

                If you are saying that TSMC and Samsung are making too much money that is a possibility - TSMC's net profit is ~30% of revenues - but I haven't gone through their financials enough to have a good handle on how much of that money is being plowed back as capital expenditures (which would not show up on an income statement other than as future depreciation) into new fabs. I suspect that a fairly high percentage of their profits are being rolled over into new processes and new fab capacity as well.

                Originally posted by ms178 View Post
                And people are rightfully upset about the current price increases. One other example I can think of is the added cost of PCIe 4.0 plus retimers on motherboards; the Gen-Z consortium came up with a more cost-effective solution which I hope we will get to see sometime in the future. Of course that would mean cutting backwards compatibility with older components, but that is already done on newer platforms with new CPU sockets, and motherboards could still come with additional PCIe 3.0 slots.
                Everyone is constantly looking for ways to reduce manufacturing costs - the problem is that those initiatives generally need to happen industry-wide and they don't always pan out with the expected cost reductions. Moving to 7nm was supposed to be expensive at first but much less so over time - didn't really happen. GDDR6 was expensive at first but was supposed to come down significantly over time - didn't really happen. Both of those are probably caused in part by industry-wide shortages over the last year but please do not confuse "we don't always see the expected savings" with "we aren't doing anything".
                Last edited by bridgman; 04 March 2021, 04:29 PM.


                • Originally posted by bridgman View Post

                  At the risk of stating the obvious, neither NVidia nor AMD have their own fabs. We rely on third party fabs who make their own decisions about capacity planning, although we can influence that planning a bit by placing long term orders (as we have done). The cost of a single fab is almost 20x the net profit we made during our best year in two decades, so it's not clear where you think we would get the money.
                  Sorry if my thoughts were not stated clearly enough, but that part was directed at ASML and the foundry ecosystem. I'd like to see some radical innovation which brings price levels down to earth. It seems to me that the methods and technology in use have hit a wall where a radical rethinking is needed to cut costs. Much like the molten salt reactor in nuclear engineering, which can also be built in a modular way, cutting construction costs drastically. Its concept was too different from what the nuclear industry was used to (variations of the light water reactor), so unfortunately it was not the design that governments spent most of their R&D budgets on for decades. But that radical new design provides benefits that you simply cannot get with the other technology path, and which are desirable today.

                  Originally posted by bridgman View Post
                  Everyone is constantly looking for ways to reduce manufacturing costs - the problem is that those initiatives generally need to happen industry-wide and they don't always pan out with the expected cost reductions. Moving to 7nm was supposed to be expensive at first but much less so over time - didn't really happen. GDDR6 was expensive at first but was supposed to come down significantly over time - didn't really happen. Both of those are probably caused in part by industry-wide shortages over the last year but please do not confuse "we don't always see the expected savings" with "we aren't doing anything".
                  Is there such a thing as a shortage, or just not enough capacity? That is a philosophical question. All relevant memory makers have been producing GDDR6 for quite some time now, and it is hard for me to believe that none of them has the means to ramp up production with some lead time. By the way, I also thought we would have low-cost HBM by now, another promising memory technology which hasn't made it into the mainstream yet, just like many other memory technologies that have been talked about for years: MRAM, PRAM, NRAM. Instead we get small refinements of old technology such as DDR5.



                  • Originally posted by bridgman View Post
                    Polaris was replaced with the RX5500 - similar performance at maybe 2/3 the power. Remember that between NVidia and the press everyone was convinced that power consumption was The Most Important Thing In The World at the time, although now that we are caught up or ahead it no longer seems to be considered important.
                    The 5500 didn't give much in the way of price/performance improvement, unfortunately, because both of the main technologies (7nm, GDDR6) were still quite a bit more expensive than the 12nm & GDDR5 they replaced.
                    This is the one I have trouble with - RDNA was a significant redesign, nothing to do with exploiting smaller process nodes. It was built on the same process node as Vega20 but delivered much better perf/power and perf/area on the same process. Navi10 provided Vega20 performance at a significantly lower price.
                    In addition to adding things like ray tracing and architectural changes to scale efficiently to 80 CUs, RDNA2 was a significant redesign at the logical & physical level in order to achieve much higher clocks on the same fab process. Again, those higher clocks translate directly into performance.
                    I'm not sure how to interpret what you are saying here but it reads like "games have become more complex and so I need a faster GPU to run them and it's all the fault of the GPU vendors". Or you could be talking about currently inflated retail prices as a consequence of chip shortages, not sure.
                    Are you saying that a 300 euro card today has the same raw performance as a 100 euro GPU from "back then", or just that you need a much more powerful GPU to play "modern games" today than in the past ?
                    Consoles are heavily subsidized by the manufacturers, so comparing HW cost/perf between GPUs and consoles is meaningless except for deciding which one you should buy. If you are saying that "someone" should subsidize GPUs and make their money back on games the same way that console vendors do that's a reasonable idea, but then the question is who that "someone" would be.
                    Sorry, I'm trying to understand this "greed" you are talking about. If we were being greedy then our GPU margins would have gone up significantly, but that has not happened. Can you be a bit more specific about this "greed" you see ?
                    I really don't get it: all these people blame AMD, and no one comes up with any practical idea to make the situation any better.
                    And it is even worse: all the criticism is illogical and scientifically wrong, has no basis in any economic theory, and shows no knowledge of chip-industry business practice; in short, it is complete bullshit.

                    For me the case is clear: 5/7nm is expensive and GDDR6/HBM2 is also expensive;
                    12nm, GDDR5 and DDR4 are low cost.

                    My Vega 64 is 14nm and has ~13 TFLOPS... and costs only 220-250€ on eBay.

                    The logical conclusion is not to blame AMD but instead to demand a 12nm GPU with a backport of the RDNA2 architecture, using a 512-bit GDDR5 or GDDR5X interface, with ~16 TFLOPS of FP32.

                    A 512-bit GDDR5X interface would have 640 GB/s;
                    a 512-bit GDDR5 interface would have 512 GB/s.

                    And with normal GDDR5 chips on a 512-bit bus you get 16 GB of VRAM.
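                    Just as a sanity check on those bandwidth and VRAM numbers, here is a rough sketch of the arithmetic in Python. The 8 Gbps (GDDR5) and 10 Gbps (GDDR5X) per-pin data rates are my assumptions; they are not stated above.

                    Code:
                    # peak bandwidth = (bus width in bytes) * (per-pin data rate in Gbps)
                    def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
                        return bus_width_bits / 8 * data_rate_gbps

                    print(bandwidth_gb_s(512, 10))  # 512-bit GDDR5X @ 10 Gbps -> 640.0 GB/s
                    print(bandwidth_gb_s(512, 8))   # 512-bit GDDR5  @  8 Gbps -> 512.0 GB/s

                    # capacity: a 512-bit bus takes sixteen 32-bit-wide chips;
                    # with common 8 Gbit (1 GB) GDDR5 parts that is 16 * 1 GB = 16 GB of VRAM
                    print((512 // 32) * 1)          # -> 16 (GB)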

                    AMD Polaris 30, 1545 MHz, 2304 Cores, 144 TMUs, 32 ROPs, 8192 MB GDDR5, 2000 MHz, 256 bit


                    It would basically be a doubled RX 590. The RX 590 is 12nm, with a 175 W TDP, 7.119 TFLOPS, 2304 shaders, and a 232 mm² die size.

                    This 512-bit memory interface card would have a ~350 W TDP, ~14 TFLOPS, 4608 shaders, and a ~500 mm² die size in 12nm,
                    with 16 GB of GDDR5 VRAM.
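                    The TFLOPS figures are just the usual shaders * 2 FLOPs per clock * clock rate estimate. A quick sketch, assuming the doubled part could hold roughly the RX 590's ~1.55 GHz clock (my assumption):

                    Code:
                    # FP32 throughput estimate: shaders * 2 FLOPs/clock (FMA) * clock in GHz = GFLOPS
                    def fp32_tflops(shaders, clock_ghz):
                        return shaders * 2 * clock_ghz / 1000.0

                    print(fp32_tflops(2304, 1.545))  # RX 590:          ~7.12 TFLOPS (the 7.119 above)
                    print(fp32_tflops(4608, 1.545))  # doubled Polaris: ~14.2 TFLOPS at the same clock
                    print(fp32_tflops(4096, 1.546))  # Vega 64 boost:   ~12.7 TFLOPS, i.e. the "13 TFLOPS" above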

                    The price would be 500-600€, but compared to the 1000€ for a 6800 or 1600€ for a 6900 XT, this is cheap.

                    I think that is not bad for 12nm + GDDR5, and I really would buy something like this for 500€ instead of a 1000€ 6800...





                    • Originally posted by ms178 View Post
                      Is there such a thing as a shortage, or just not enough capacity? That is a philosophical question. All relevant memory makers have been producing GDDR6 for quite some time now, and it is hard for me to believe that none of them has the means to ramp up production with some lead time. By the way, I also thought we would have low-cost HBM by now, another promising memory technology which hasn't made it into the mainstream yet, just like many other memory technologies that have been talked about for years: MRAM, PRAM, NRAM. Instead we get small refinements of old technology such as DDR5.
                      I'm not sure although my suspicion is that memory vendors have been seeing the same recent spikes in demand from mining as GPU vendors, which go far outside anything that could be reasonably forecast.

                      Even worse, everyone in the industry still has fresh memories of the last mining crash, where demand fell through the floor and everyone had piles of unsold inventory sitting around. Those piles of inventory in turn led to temporary price reductions (eg Polaris being inexpensive) which were nice for buyers but not in any way sustainable.

                      Someone on reddit commented that "mining demand is essentially infinite" which is both disturbing and not far from the truth, at least until the next crash. As far as I can see that is the real problem - the demand peaks from mining are not particularly forecast-able, and they are larger than anything that a reasonably sustainable production capacity can satisfy.
                      Last edited by bridgman; 04 March 2021, 05:27 PM.
