AMD Announces The Radeon RX 6700 XT For $479


  • #11
    Originally posted by phoronix_is_awesome View Post

    Don't wanna be argumentative, but benching a 192-bit 12GB VRAM card against the 3070's 8GB at 1440p Ultra, to intentionally bust through the 3070's 8GB VRAM, is kinda bad benchmarking in my mind. Plus running a 192-bit card at 230W vs the 3060's 170W is another no-no. No ROCm support, no tensor cores. The only upside is the non-castrated hash rate. This is not a gamer's card. It is squarely targeted at full-hashrate miners.
    I applaud not wanting to be argumentative, and you bring up good points. But given that the 3070 and 6700 XT both target 1440p, it seems pretty fair to benchmark them against games at Ultra settings. I'm not sure whether that actually overruns the 3070's 8GB or not, but it seems like exactly the kind of thing someone looking for the best 1440p GPU would be interested in. If the 6700 XT pays less of a penalty at Ultra because of the extra 4GB of RAM, that's something it should get credit for. AMD has claimed that their new architecture allows higher clock speeds, and in this new second implementation (after the 6800/6800 XT/6900 XT, which all use the same silicon) they are showing its potential. Yes, higher clocks mean more power, but I suspect for most of their market anything in the ballpark of 230 watts is just fine. After all, that's just 5 watts more than the 5700 XT.

    Seems like a smart move for AMD, even just for gaming. It looks like (benchmarks will tell, of course) a substantial performance advantage over the 12GB 3060, likely faster on average than the 3060 Ti 8GB, and competitive with, if not quite matching, the 3070. I was previously in the market for the unobtainable 3060 Ti and this looks pretty good. All the more if they deliver on the "substantially higher quantities" that they mentioned. Personally I like the idea of more VRAM; I don't upgrade often (my current GPU is a GTX 1070) and who knows what the next 5 years will bring.

    I thought miners didn't really need the RAM. If they were targeting miners, wouldn't they ship less RAM? That way they could make 50% more GPUs with the same amount of memory.

    Comment


    • #12
      Originally posted by nadro View Post

      I think AMD realised that MSRP is a worthless word for customers in the current GPU market, and they just want to sell chips to partners at higher prices to get some extra cash, because right now it is mostly distributors and shops that profit from overpriced GPUs. When the situation gets back to normal (I hope), AMD will just drop the MSRP of this GPU.
      This is a risky strategy because it assumes the crypto boom and the GPU shortage will remain a problem for quite a long time. If they don't, AMD will have to adjust the MSRP down to lower price points.

      In my opinion, at least on paper, this card looks quite bad from a performance-per-watt point of view. The 6800 has a 256-bit bus (so two more memory chips), 60 CUs, and a 250W TDP; the 6700 XT has only 40 CUs and two fewer memory chips on a 192-bit bus, yet a 220W power draw, because AMD decided to clock it at 2.4GHz. Since power draw grows much faster than linearly with frequency (dynamic power scales with voltage squared times frequency, and higher clocks need higher voltage), the 6800/6800 XT looked very good from a power-efficiency and overclocking perspective, but that probably won't hold for the 6700 XT.
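
      A quick back-of-the-envelope sketch of that scaling argument, assuming the classic dynamic-power model P ≈ C·V²·f and that voltage has to rise roughly in proportion to frequency (both are simplifications, and the clock figures below are illustrative guesses, not AMD's specs):

      ```python
      # Rough estimate of relative per-CU dynamic power when pushing clocks.
      # Model: P ~ C * V^2 * f, with V assumed to scale linearly with f,
      # so relative power goes as (f_target / f_base)^3.
      # Clock values are illustrative, not official AMD numbers.

      def relative_power(f_base_ghz, f_target_ghz):
          """Relative dynamic power if voltage scales linearly with clock."""
          return (f_target_ghz / f_base_ghz) ** 3

      base_clock = 2.0     # GHz, assumed "efficient" operating point
      target_clock = 2.4   # GHz, roughly the 6700 XT's advertised game clock

      ratio = relative_power(base_clock, target_clock)
      print(f"~{ratio:.2f}x power per CU for a "
            f"{target_clock / base_clock:.2f}x clock increase")
      # -> ~1.73x the power for a 1.20x clock bump, which is why chasing
      #    clocks instead of adding CUs tends to cost perf-per-watt.
      ```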

      Comment


      • #13
        The raster price (MSRP) / performance ratio is going to be a bit better than NVIDIA's. In other words, get ready for a series of bullshit meltdown posts from our Intel/NVIDIA fanboy birdie.

        Comment


        • #14
          And the price hike continues. AMD told us that they are on a "journey", but as I see it, they are following the same path Intel took in losing the goodwill of the community, milking their customers where they can. At least the product itself seems to be pretty decent, but the price tag certainly is not. From a business point of view, I can understand why they do it; it is in the best interest of their shareholders to get as much as possible out of the market. But I would have thought that they would bring more volume to the market and try to capitalize on the fact that Nvidia doesn't care about gamers and sells a sizeable portion of Ampere GPUs to miners instead.

          By the way, I still remember arguing in this forum about the terrible value of the 5600X compared with the 3600. Some assumed there would be a 5600 for around 200 EUR/USD by this date; I still can't see that happening for the rest of the year, which means you have to pay more than 50% extra for a six-core of the newest generation. And with GPUs, price to performance is nonexistent anymore. The sad reality is: if you want the great new stuff, you have to pay a hefty premium. And not just for new parts, thanks to the shortages.

          People all over the world should be protesting in front of these companies' headquarters with pitchforks in their hands. It is not as if the pandemic and a surge in demand were totally unforeseeable at this point; it has already lasted a year. They really should have expanded their capacity sooner than they did.
          Last edited by ms178; 03 March 2021, 02:20 PM.

          Comment


          • #15
            This was my only worry with AMD catching up on performance: that they would continue to hike prices to silly levels, copying Intel and Nvidia. What with the silicon shortage, this is the worst time to upgrade. Good job I'm not in need of a new PC right now.

            Comment


            • #16
              Originally posted by ms178 View Post
              But I would have thought that they would bring more volume to the market
              At the risk of asking a dumb question, how would we bring more volume to the market?

              We are running totally fab-limited in the middle of a global chip and wafer shortage and a twice-a-decade (if that) major console refresh.
              Last edited by bridgman; 03 March 2021, 02:13 PM.
              Test signature

              Comment


              • #17
                Originally posted by phoronix_is_awesome View Post
                Don't wanna be argumentative, but benching a 192-bit 12GB VRAM card against the 3070's 8GB at 1440p Ultra, to intentionally bust through the 3070's 8GB VRAM, is kinda bad benchmarking in my mind. Plus running a 192-bit card at 230W vs the 3060's 170W is another no-no. No ROCm support, no tensor cores. The only upside is the non-castrated hash rate. This is not a gamer's card. It is squarely targeted at full-hashrate miners.
                You're doing armchair benchmarking. Complaining about the paper specifications without any real-world numbers in hand is pretty meaningless. Not to mention that ROCm and tensor cores are irrelevant to consumer workloads; this SKU is clearly not targeting the AI/HPC/machine-learning community. Also, "tensor cores" is an Nvidia marketing buzzword, not a technology. "The RTX 3070 sucks because it lacks the RDNA2 architecture." See what I did there?

                The fact is, if the 6700 XT delivers noticeably better performance than the 3070 in popular games, the price delta will be justified and no one will care about the difference in bits or watts. I'm confident AMD will sell every 6700 XT card at MSRP, as fast as they can produce them. As a shareholder, I'm literally banking on it!
                Last edited by torsionbar28; 03 March 2021, 02:39 PM.

                Comment


                • #18
                  Originally posted by bridgman View Post

                  At the risk of asking a dumb question, how would we bring more volume to the market?

                  We are running totally fab-limited in the middle of a global chip and wafer shortage and a twice-a-decade (if that) major console refresh.
                  You had an entire year to solve that problem. With the transcoder engines from MS and Apple reaching 75% efficiency, once an ARM company puts out a 5-teraflop GPU you can say bye bye. Dual installation and it's over for you forever. We had a simple request: don't ask for a kidney for a new PC, and you all failed at that for years.

                  Comment


                  • #19
                    Originally posted by bridgman View Post

                    At the risk of asking a dumb question, how would we bring more volume to the market?

                    We are running totally fab-limited in the middle of a global chip and wafer shortage and a twice-a-decade (if that) major console refresh.
                    Second-sourcing would still provide you with more capacity?! Sure, it is not only AMD's planning that is at fault here; ABF substrate suppliers and other critical players were not quick to ramp up production either (due to underinvestment two to three years ago, I suppose). It is us consumers and other industry players who now have to pay the price for that.

                    Concerning AMD, the strategy of putting all your chips on TSMC 7nm is backfiring in the current climate. You are already partnering with Samsung on their mobile SoC, and I guess that also includes porting RDNA to Samsung's process. Of course these projects take years, but since AMD originally had a second-sourcing strategy in mind with GloFo's 7nm, they should have gone with Samsung for at least some products right after that was cancelled (Samsung's process should be good enough for smaller dies, as Nvidia shows with Ampere, and I have also read that it is considerably cheaper). That comes with additional expenses, sure. But it would have provided you with more capacity today.

                    Comment


                    • #20
                      Originally posted by bridgman View Post
                      At the risk of asking a dumb question, how would we bring more volume to the market?
                      We are running totally fab-limited in the middle of a global chip and wafer shortage and a twice-a-decade (if that) major console refresh.
                      Absolutely right... so the solution is to build smaller chips in a chiplet design; this increases the chip output from each wafer (see the yield sketch at the end of this post).

                      Do a chiplet design like this:

                      RDNA/CDNA hybrid chiplet design:

                      an IO chip with the VRAM interface, e.g. HBM3
                      one RDNA chip and one CDNA chip
                      two Infinity Caches, one for the RDNA chip and one for the CDNA chip

                      This way you should get an insane level of performance for graphics and compute, and the price compared to a monolithic design should be low.
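
                      A rough sketch of the "smaller chips mean more output per wafer" argument, using a simple Poisson defect-yield model; the wafer size, defect density, and die areas are assumed for illustration, not actual TSMC or Navi figures:

                      ```python
                      import math

                      # Crude model of good dies per wafer for a given die size.
                      # yield = exp(-defect_density * die_area)  (simple Poisson model)
                      # All numbers below are illustrative assumptions, not real foundry data.
                      WAFER_DIAMETER_MM = 300
                      DEFECTS_PER_MM2 = 0.001

                      def gross_dies(die_area_mm2):
                          """Gross dies per wafer, ignoring edge losses and scribe lines."""
                          wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
                          return int(wafer_area / die_area_mm2)

                      def good_dies(die_area_mm2):
                          yield_fraction = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)
                          return int(gross_dies(die_area_mm2) * yield_fraction)

                      # One 500 mm^2 monolithic GPU vs. two 250 mm^2 chiplets per GPU (assumed sizes).
                      monolithic_gpus = good_dies(500)
                      chiplet_gpus = good_dies(250) // 2

                      print(f"monolithic GPUs per wafer: {monolithic_gpus}")   # ~85
                      print(f"chiplet GPUs per wafer:    {chiplet_gpus}")      # ~109
                      ```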
                      Phantom circuit Sequence Reducer Dyslexia

                      Comment
