
Intel Arc Graphics A380: Compelling For Open-Source Enthusiasts & Developers At ~$139


  • Originally posted by tildearrow View Post
    What happened? The article moved...
    Typo update, and a bump for those that missed the article, since I don't normally publish big articles on Sundays...
    Michael Larabel
    https://www.michaellarabel.com/

    Comment


    • Originally posted by L_A_G View Post
      There's nothing about being a "75W card" that says you have to be completely powered by the PCIe bus. When you're that close to the maximum offered by the spec, not having an 8-pin connector is the irresponsible thing to do. Even if you're just spiking above it, non-spec-compliant motherboards do still exist. They may not blow out the first time the power draw goes above spec, but over time, non-compliant boards and over-drawing cards will start breaking motherboards. I've seen that happen with both overclocked cards that have drawn more than 75W from the PCIe slot and non-compliant motherboards.

      Simply put: having a card draw the maximum power offered by the PCIe spec, or close to it, is like making a car that only barely meets legal crash safety regulations. It's just asking for a disaster to happen.
      There are other "75W cards" that are completely bus-powered, like the GTX 1630 or GTX 1650. I don't know why you are arguing for this A380 to be called a "75W card" when it obviously isn't. It's a "100W card" or a "125W-class card" in my opinion. It certainly doesn't need an 8-pin either, since a 6-pin would've been enough.
      Non-compliant motherboards aren't enough of a motivation because those are simply defective products.
      The car analogy doesn't work for me either - if merely meeting the limit is "asking for disaster", why have set limits in the first place? Both the GTX 1630/1650 and RX 6400 do not "barely meet" or "barely exceed" the PCI-Express spec - they comply with it.
      Last edited by numacross; 29 August 2022, 03:54 PM.

      Comment


      • Hey Michael, I'm seeing a lot of confusion here about the Portal 2 and Left 4 Dead 2 OpenGL vs. Vulkan results.

        One thing I noticed when I was looking at Portal 2 is that it has wildly different default graphical settings for each API.
        • Portal 2 OpenGL: 4x MSAA, 16x Anisotropic Filtering, Very High Shaders
        • Portal 2 Vulkan: Single sampled, Trilinear Filtering, Low Shaders
        Also, every time I switched between running portal2.sh -gl and portal2.sh -vulkan, the game would revert all graphical settings back to default. (I haven't checked Left 4 Dead 2, but it may have a similar issue.)

        Your article shows that the Vulkan version of the game is nearly 2x faster than the OpenGL version. I was able to see that too, on my local machines, when I let the game run at default settings. But when I ran the game on each API twice, manually adjusting the settings on the first launch so they matched, I saw only about an 8% performance difference between the two APIs.

        You might want to double check that the openbenchmarking scripts are setting quality settings for those games in the way that you expect. I suspect you may be running into the same issue I saw. If so, then this is really more of an "OpenGL at Max Quality vs. Vulkan at Low Quality" comparison, rather than the "OpenGL vs. Vulkan" comparison people seem to think it is. Which is a bit different!
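
        If it helps anyone reproduce this, below is a rough sketch of how I'd verify that two runs really used the same settings. The cfg path and the video.txt key/value format are assumptions based on how Source engine games typically persist video settings, so adjust them for your own install.

```python
#!/usr/bin/env python3
# Rough sketch: diff the video settings Portal 2 saved after an OpenGL run vs. a
# Vulkan run, so you know both APIs really got benchmarked at the same quality.
# Assumptions: Source games persist settings as quoted key/value pairs in
# <game>/cfg/video.txt; the default path below is a typical Steam location.
import re
import sys
from pathlib import Path

DEFAULT_VIDEO_TXT = (Path.home() /
                     ".steam/steam/steamapps/common/Portal 2/portal2/cfg/video.txt")

def load_settings(path: Path) -> dict:
    # Lines look roughly like:  "setting.mat_antialias"  "8"
    return dict(re.findall(r'"([^"]+)"\s+"([^"]+)"', path.read_text()))

def diff_settings(a: dict, b: dict) -> None:
    for key in sorted(set(a) | set(b)):
        if a.get(key) != b.get(key):
            print(f"{key}: {a.get(key)!r} -> {b.get(key)!r}")

if __name__ == "__main__":
    # Usage: copy video.txt after each run, then compare the two snapshots:
    #   cp .../cfg/video.txt after_gl.txt       (after the portal2.sh -gl run)
    #   cp .../cfg/video.txt after_vulkan.txt   (after the portal2.sh -vulkan run)
    #   python3 diff_video_settings.py after_gl.txt after_vulkan.txt
    paths = [Path(p) for p in sys.argv[1:3]] or [DEFAULT_VIDEO_TXT]
    if len(paths) == 2:
        diff_settings(load_settings(paths[0]), load_settings(paths[1]))
    else:
        # With no arguments, just dump the current settings for inspection.
        for key, value in sorted(load_settings(paths[0]).items()):
            print(f"{key} = {value}")
```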

        FWIW, there is also a known issue with the current Linux drivers (both iris and anvil) where an important MSAA performance feature is temporarily disabled. Once we fix that, I anticipate that we'll see around an 18% uptick on Portal 2 with 4x MSAA.
        Free Software Developer .:. Mesa and Xorg
        Opinions expressed in these forum posts are my own.

        Comment


        • Originally posted by Solid State Brain View Post
          I find the ~17W idle power consumption to be rather disappointing, to be honest. A discrete GPU of this level shouldn't require more than a few watts in idle conditions.
          Wow, the RX 6500 XT and RX 6400 idling at just 2-3 W is indeed nice! Low enough to throw in a small server and not worry about, which is what I do with my old HD 5450 -- and that idles at 7 W! If I ever see an RX 6400 for < $100 (especially fanless) I might grab one to replace it.
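
          If anyone wants to check their own card's idle draw without a wall meter, a minimal sketch along these lines works on Linux for drivers that expose board power through hwmon (amdgpu does, as power1_average in microwatts). Note this is total board power as reported by the driver, not slot power, and other drivers may not expose it at all.

```python
#!/usr/bin/env python3
# Minimal sketch: report average GPU board power over a short idle window using the
# standard Linux hwmon interface (amdgpu exposes "power1_average" in microwatts).
# Caveats: not every GPU driver provides this attribute, and it is total board
# power as reported by the driver, not a wall-meter measurement.
import time
from pathlib import Path

def power_sensors():
    """Yield (device name, path) for every hwmon device exposing power1_average."""
    for attr in sorted(Path("/sys/class/hwmon").glob("hwmon*/power1_average")):
        name = (attr.parent / "name").read_text().strip()
        yield name, attr

def average_watts(path: Path, samples: int = 10, interval: float = 1.0) -> float:
    readings = []
    for _ in range(samples):
        readings.append(int(path.read_text()) / 1_000_000)  # microwatts -> watts
        time.sleep(interval)
    return sum(readings) / len(readings)

if __name__ == "__main__":
    for name, path in power_sensors():
        print(f"{name}: ~{average_watts(path):.1f} W averaged over 10 samples")
```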

          Comment


          • Originally posted by ms178 View Post
            You haven't paid any attention to what I've written. Simply put, RDNA1 was cheap to produce and was ready before the pandemic. Given the small die size, AMD could have been more aggressive on pricing with RDNA1, but they were not. There were no excuses for the elevated prices there, and that pricing was a sign to Nvidia that AMD was not going for market share but for higher margins instead. That's what happens when you have a duopoly: they do not want to compete on price with each other if there is not enough market pressure. AMD was prioritizing their 7nm wafers for their CPU products instead, as they make more money there. I have the suspicion that this will continue to be the case for the next couple of generations, as AMD still has more interest in selling higher-margin Epycs than their GPUs, which compete for the same allocated wafers.
            AMD was not able to get a relevant amount of money from a GPU that does not have a hardware accelerator for raytracing.
            RDNA1/the 5700XT at 7nm was not a commercially successful product; it was only 10% faster than a Vega 64, which was on 14nm,
            and AMD was forced to sell it at a lower price than even the Vega 64...

            you claim this: "AMD was prioritizing their 7nm wafers for their CPU products instead"

            but I think this is wrong. AMD plain and simple was not competitive at that time, and I bought over 6 Vega 64s, which means their loyal customers did buy it anyway. At the time I bought the Vega 64 it was also not competitive against Nvidia.
            Today, with FSR 1.0 and temporal FSR 2.0, the Vega 64 is much more competitive with Nvidia cards, but FSR 1.0 came 4 years after the release of the Vega 64 and temporal FSR 2.0 came 5 years after it, so it was not clear whether the Vega 64 would ever get this technology.
            The same goes for RDNA1/the 5700XT: without FSR 1.0/2.0 this card was not competitive with Nvidia cards, and that's not even talking about raytracing - even without raytracing, because of DLSS and DLSS 2.0 and other factors.

            Also, TSMC 7nm and GDDR6 are expensive technologies; the Vega 64's 14nm was cheap in comparison (but of course HBM2 memory is expensive).

            What do you expect, that AMD pays you money to make you buy their product? No, in this situation all they could do is limit the losses.

            What people don't get is that patent fees make products expensive, and GDDR6 and GDDR6X carry a big patent fee penalty...

            SODDR5 and DDR5 are patent-free; combined with Infinity Cache this could lead to cheaper cards...

            The joke is that Apple's M1/M2 bets on patent-free memory like SODDR5, but all the PCIe GPU makers (Intel, AMD, Nvidia) bet on GDDR6 and GDDR6X... On the APU level AMD goes with DDR5, so if you want a budget GPU the newest APU is a good bet.

            The RDNA2 cards turned this around: with raytracing support, FSR 1.0/2.0, and the new HIP (source-code-level CUDA emulation) support in Blender 3.2, AMD can sell them at a competitive price... If you compare the 1150€ of the 6950XT to Nvidia cards you get a good deal.


            Originally posted by ms178 View Post
            While Intel was not a top customer before, they still have a long history of collaboration. The term "frenemy" might best describe their complex relationship. And more importantly, you might have missed the reports that Intel is going to become one of TSMC's top 3 customers next year. Pat Gelsinger visited TSMC more than once and wanted preferential treatment in return for a large paycheck. And apparently he got just that, as Intel relies on TSMC for the foreseeable future, not only for their GPUs but also for large parts of their future CPUs.
            Again, look at the Hot Chips presentation of Meteor Lake and you will get my point. It's all about access to technology for money in the end. These tiles are also small and easy to fab on a new process. Intel is rumored to have gotten preferential access to 3 nm production and certainly paid large sums for it. While this deep cooperation certainly has a fixed end date, as long as Intel doesn't get its act together with their fabs, they are going to use TSMC instead of risking falling technologically behind in several of their core markets. While that might hurt their margin, using outdated process tech might have hurt them even more; we have seen this with 14nm+++++ Cooper Lake versus Epyc.
            A better production node is not how Intel maintains their de facto monopoly...

            They do it through the ISA war...

            SSE vs. 3DNow!

            SSE4.0 (AMD) vs. SSE4.1 (Intel)

            FMA4 (AMD) vs. FMA3 (Intel) (even today, Intel's inferior version is the de facto standard)

            Multiple incompatible versions of AVX-512... while AMD did only 256-bit AVX2...

            And so on and so on. Intel used their compiler to cheat and hurt AMD...

            It would really be better for humanity if TSMC rejected Intel as a customer.

            I am sick of this Intel fraud... Even if you buy AMD you support this Intel ISA war...

            I am pretty sure my next system will be IBM POWER or ARM, like Apple's M1/M2...

            Originally posted by ms178 View Post

            I dispute that, as that's not true at all: RDNA2 was on TSMC's 7nm, RDNA3 is on 5nm, Nvidia's Ada Lovelace is on 4nm while Ampere was fabbed at Samsung, and Intel's first-generation Arc is on TSMC's 6nm. You missed the point that Intel is happy to give these cards away for a far worse margin to get any market share, whereas AMD and Nvidia are playing the high-margin game. As there is no unlimited demand for GPUs, that would have meant better pricing in the end, even with a fixed wafer capacity at an equal cost for all, as it would have driven demand away from AMD and Nvidia, lowering their prices in every segment that ships any volume.
            People have to understand that the last and best 2D planar process is 12nm.
            If the Raspberry Pi moves from 16nm to 12nm, this will be the last "cheap" node shrink...
            Going from 7nm to 6nm to 5nm to 4nm results in a cost explosion, because designing these 3D structures is very expensive due to the dark silicon problem.
            And Intel proves that these node names do not mean what most people think they do...

            For example, AMD's 7nm GPUs beat the Intel Arc 6nm GPUs at every level... Even my 14nm Vega 64 beats the Intel Arc GPUs hard.

            And Nvidia, on Samsung's 8nm node, beats the AMD 7nm GPUs...

            Soon AMD will produce on TSMC 5nm, and then all the people who claimed that the Apple M1/M2's advantage is not the ISA but the node will discover that the node is less important than the chip design...

            And any ISA war does in fact sabotage good chip design... That's why ARM/POWER will perform better than x86 in the long run.

            Even Intel should understand that the ISA war is poison to themselves.

            Originally posted by ms178 View Post

            There is nothing emotional such as a "true partnership" in these business relationships. Intel needs TSMC in these times just as badly as everyone else to make a good product, as they simply do not have the processes needed yet. They are big enough financially not to get ignored by TSMC; after all, business is about making money. Also, each of TSMC's customers might jump ship if presented with a better total package by a different manufacturer. AMD and TSMC's relationship wasn't all too rosy either; AMD was complaining about not getting enough 7nm wafers as soon as they wanted them, as they could not meet the demand for each of their 7nm products. Intel, on the other hand, still has more financial leverage than AMD and, as a secondary effect, competes with AMD on wafer allocation, which could hurt AMD further. AMD can now either pay more than Intel and others for the same wafer capacity on new processes, or diversify their fab partners more, with limited options at the current state of the tech industry. However, it is a fact that Intel got TSMC to come to terms, as they simply are all in for the money and apparently paid what TSMC wanted.
            Because of the ISA war you should avoid Intel even if they sell you a 3nm CPU...
            Also, with their GPUs you see AMD at 7nm beats Intel at 6nm... and Nvidia at 8nm beats Intel at 6nm...
            This means lower node numbers do not automatically make you win the battle.

            Intel's 6nm 150mm² GPU chip loses against a 100mm² 6nm GPU chip... This is the best example of chip design mattering...


            Phantom circuit Sequence Reducer Dyslexia

            Comment


            • Originally posted by numacross View Post
              in my opinion...
              That's the crux of the matter, isn't it? That you're pulling extra conditions out of your backside.

              Non-compliant motherboards aren't enough of a motivation because those are simply defective products.
              Defective products maybe, but even the most advanced buyers aren't going to be testing for it. The maximum power delivery of PCIe slots isn't something reviewers test for. All they're going to see is the motherboard, and potentially the GPU too, failing down the line, and once it has failed they can't tell which one was at fault.

              The car analogy doesn't work for me either - if merely meeting the limit is "asking for disaster", why have set limits in the first place? Both the GTX 1630/1650 and RX 6400 do not "barely meet" or "barely exceed" the PCI-Express spec - they comply with it.
              Yet you posted an example of a card peaking at 76W, i.e. beyond the spec. If a motherboard can't deliver that 75W, which is quite common in lower-end boards, and we are talking about the kinds of cards that are usually paired with exactly that type of board, then every time your example card peaks it's going to damage that board. Little by little it'll eventually cause the board to fail, and when it does, it's not uncommon for it to take the graphics card with it.
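
              For what it's worth, here's a rough sketch of how you could at least watch for sustained overdraw on Linux. Big caveat: hwmon's power1_average (exposed by amdgpu, for example) is driver-averaged total board power, so millisecond-scale slot-power spikes won't show up; catching those properly needs an instrumented riser or scope.

```python
#!/usr/bin/env python3
# Rough sketch: sample reported GPU board power frequently and print the peak seen
# during a run. Caveat: hwmon's "power1_average" (amdgpu) is driver-averaged total
# board power, so brief PCIe-slot spikes above 75 W will not show up here; isolating
# slot-only draw properly needs an instrumented riser.
import time
from pathlib import Path

SENSOR = next(iter(Path("/sys/class/hwmon").glob("hwmon*/power1_average")), None)

def watch_peak(duration_s: float = 60.0, interval_s: float = 0.05) -> float:
    peak = 0.0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        watts = int(SENSOR.read_text()) / 1_000_000  # microwatts -> watts
        peak = max(peak, watts)
        time.sleep(interval_s)
    return peak

if __name__ == "__main__":
    if SENSOR is None:
        raise SystemExit("No hwmon device exposes power1_average on this system.")
    print(f"Peak reported board power over 60 s: {watch_peak():.1f} W")
```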

              Comment


              • Originally posted by qarium View Post

                AMD was not able to get a relevant amount of money from a GPU that does not have a hardware accelerator for raytracing.
                RDNA1/the 5700XT at 7nm was not a commercially successful product; it was only 10% faster than a Vega 64, which was on 14nm,
                and AMD was forced to sell it at a lower price than even the Vega 64... You claim this: "AMD was prioritizing their 7nm wafers for their CPU products instead", but I think this is wrong. AMD plain and simple was not competitive at that time, and I bought over 6 Vega 64s, which means their loyal customers did buy it anyway. At the time I bought the Vega 64 it was also not competitive against Nvidia.
                Wait a second, don't mix "commercially successful" up with performance numbers and the competitive landscape across various points in time and from several different angles, all into one giant soup. Vega was huge and expensive to produce during its whole life cycle; think of HBM2 and the large die size (486 mm² vs. 251 mm² for the 5700XT). There was also the added complexity of the interposer and packaging, and cooling was a headache. I guess in the end they sold Vega at a loss in the consumer market, as it wasn't doing that great against Nvidia's offerings at the time. I doubt that the 5700XT was an economic disaster of the same magnitude.

                By the way, I also own a Vega 56 to this date and saw it as highly undervalued back then, and HUB just released a video some days ago where it was found to have aged very well performance-wise. We both agree on that part. But the 5700XT was basically an overclocked lower-mid-range card with a couple of annoying hardware bugs and driver issues during its first year. It was way cheaper to produce than Vega; AMD simply did not want to go all-in on volume because they were constrained on wafer capacity and could use the same 7nm wafers to make way more money per square millimeter by selling them as Zen 2 chiplets instead. And yes, from AMD's point of view that makes perfect sense. It did come at the cost of mindshare in the GPU market, though; they are still recovering from these dark times and have to fight back hard for lost market share against Nvidia. And if Intel gets their act together, they could be a formidable competitor, too.

                My point is: AMD could have gotten way more market share if they had been willing to be more price-aggressive with RDNA1 (which would have been economically sound, whereas being price-aggressive with Vega was suicidal). We consumers suffered from this period of stagnation, as it made Nvidia the only other option in certain segments, and in terms of availability, until RDNA2 arrived (which was a pain to get for other reasons, due to the pandemic and shortages), and you can see what Nvidia made of its market position in their earnings releases.

                Originally posted by qarium View Post
                I am pretty sure my next system will be IBM POWER or ARM, like Apple's M1/M2...
                Good luck with getting your favorite x86-only software to run with decent performance on such a system. I am all for more competition from other ISAs in the desktop space, but there are many reasons why Qualcomm hasn't been successful so far with their Windows-on-ARM offerings, and no one else has tried it yet. I buy a CPU not because of its ISA but for what it offers me in terms of price/performance with the software I want to run on it. And the sad part is that I haven't seen any offering from IBM or ARM that would even make them part of my decision-making process yet.

                Originally posted by qarium View Post
                For example, AMD's 7nm GPUs beat the Intel Arc 6nm GPUs at every level... Even my 14nm Vega 64 beats the Intel Arc GPUs hard.

                And Nvidia, on Samsung's 8nm node, beats the AMD 7nm GPUs...

                Soon AMD will produce on TSMC 5nm, and then all the people who claimed that the Apple M1/M2's advantage is not the ISA but the node will discover that the node is less important than the chip design...

                And any ISA war does in fact sabotage good chip design... That's why ARM/POWER will perform better than x86 in the long run.

                Even Intel should understand that the ISA war is poison to themselves.
                Both chip design and process are equally important, but I don't get why this is relevant to my rant about AMD's complacency in the GPU space for so long. Intel still has some catching up to do in designing a competitive GPU product (and everything that needs to come with it to be successful); that's hardly insightful or news to anyone following the Arc launch.
                Last edited by ms178; 29 August 2022, 08:52 PM.

                Comment


                • Originally posted by ms178 View Post
                  RDNA1 was cheap to produce and was ready before the pandemic. Given the small die size, AMD could have been more aggressive on pricing with RDNA1, but they were not.
                  They were first on 7 nm. You can't take two different products made on the same node at two different times and do such a direct cost comparison. The first ones are traditionally smaller and more expensive.

                  Nvidia even skipped TSMC 7 nm altogether. RTX 2000-series was 12 nm and RTX 3000-series went to Samsung 8 nm.


                  Originally posted by ms178 View Post
                  Intel is going to become one of TSMC's top 3 customers next year.
                  That's only while Intel catches up with its own fabs. It's not their long-term strategy to use TSMC, and everyone knows it. Intel is investing like $100 B in fabs around the world, and its IFS business is a direct competitor of TSMC.

                  Originally posted by ms178 View Post
                  as long as Intel doesn't get its act together with their fabs, they are going to use TSMC instead of risking falling technologically behind in several of their core markets.
                  TSMC doesn't remotely have the capacity to replace Intel's own manufacturing. So, the examples you're highlighting represent the exception rather than the rule.

                  Originally posted by ms178 View Post
                  While that might hurt their margin, using outdated process tech might have hurt them even more; we have seen this with 14nm+++++ Cooper Lake versus Epyc.
                  Intel's manufacturing problems really didn't hurt them on the financial end, which is where it matters. The global shortage of fab capacity was so bad that big customers had to keep buying Intel CPUs, even when they weren't the fastest or most efficient.

                  BTW, Cooper Lake was a niche part they only sold to select customers. It doesn't really make a good example. Stick to Cascade Lake vs. Rome.

                  Originally posted by ms178 View Post
                  I dispute that, as that's not true at all: RDNA2 was on TSMC's 7nm, RDNA3 is on 5nm, Nvidia's Ada Lovelace is on 4nm while Ampere was fabbed at Samsung, and Intel's first-generation Arc is on TSMC's 6nm.
                  Now you're muddling things all together. Your original contention was that Intel being in the GPU game could've saved us from the unprecedented price & availability problems of the past 2 years. In that time, the only viable process nodes for GPUs were TSMC 12 nm, TSMC N7, and Samsung 8 nm. It's only earlier this year that N6 started to enter the picture -- Intel couldn't have used TSMC N6 in the timeframe where they would have made a difference.

                  Furthermore, you don't seem to understand that being on Samsung 8 nm (or TSMC 12 nm) isn't something Nvidia really wanted to do. It seems to be a worse node in almost every way. That's what happens when there's a capacity shortage -- products get moved to different nodes that are either inferior or a lot more expensive.

                  Originally posted by ms178 View Post
                  You missed the point that Intel is happy to give these cards away for a far worse margin to get any market share,
                  I'm sure they were prepared for lower profit margins than their competitors, but it's not a situation like games consoles, where the manufacturer can effectively subsidize the hardware using revenue made back on software license fees. Nobody at Intel, looking at the historically profitable GPU business over pretty much the entire run-up to Arc's development, expected to be selling them below cost. You can't look at their current pricing and believe these cards would've sold for anything like that 1-2 years ago.

                  Originally posted by ms178 View Post
                  whereas AMD and Nvidia are playing the high-margin game. As there is no unlimited demand for GPUs, that would have meant better pricing in the end, even with a fixed wafer capacity at an equal cost for all, as it would have driven demand away from AMD and Nvidia, lowering their prices in every segment that ships any volume.
                  Again, you're conjuring Intel GPUs out of the ether. You neglect to account for the fact that Intel bidding for wafer capacity would've increased prices and decreased volumes for everyone, including Intel.

                  The main reason the internet was excited about Intel getting into the GPU game was based on the assumption they'd use their own fabs. That would represent a meaningful increase in supply. Now that they're basically fighting everyone else for TSMC's capacity, I don't expect them to have a major impact on GPU pricing, at least until they can offer something competitive. And even that's mostly predicated on the kinds of supply-side bottlenecks we've been seeing continuing to wind down.

                  Originally posted by ms178 View Post
                  You certainly missed some econimics lessons and missed some important facts in our discussion about the wafer technology used by each vendor in this GPU generation.
                  About the last person here I'm going to take "econimics" or semiconductor fabrication lessons from is you. It's obvious that your grasp on each is tenuous, at best.

                  Comment


                  • Originally posted by ms178 View Post
                    Wait a second, don't mix "commercially successful" up with performance numbers and the competitive landscape across various points in time and from several different angles, all into one giant soup. Vega was huge and expensive to produce during its whole life cycle; think of HBM2 and the large die size (486 mm² vs. 251 mm² for the 5700XT),

                    Well, 486 mm² vs. 251 mm² sounds good at first, but the 14nm node was dirt cheap compared to the 7nm node.
                    Who cares how big the chip is if it costs less than the small chip, because the small chip is on an expensive node?
                    HBM2 was expensive but came without patent fees; GDDR6 and GDDR6X have patent fees...

                    People believe things from the old days; back then, new nodes were always cheaper, but if you look at the cost explosion of 5nm and 4nm and 3nm and 2nm... this is no longer the case. If costs keep going up like this, you can make a 1000mm² 12nm 2D-design chip for the same cost as a 200mm² 3nm chip... For mobile devices you need battery life, so this is not an option, but as soon as you talk about desktop or server, and in case you have cheap electricity (some do), then a 1000mm² 12nm chip is an option.


                    Originally posted by ms178 View Post
                    there was also the added complexity of the interposer and packaging, and cooling was a headache. I guess in the end they sold Vega at a loss in the consumer market, as it wasn't doing that great against Nvidia's offerings at the time.
                    AMD did make a profit on it, but only because of the Ethereum mining boom, not because of games...
                    But with FSR 1.0/2.0 the Vega 64 is still a very good card even at 4K... and the 5700XT is only a very small upgrade;
                    for me, going from the Vega 64 to the 5700XT was never an option for only 10% higher performance.

                    Originally posted by ms178 View Post
                    I doubt that the 5700XT was an economic disaster of the same magnitude. By the way, I also own a Vega 56 to this date and saw it as highly undervalued back then and HUB just released a video some days ago where it was found to have aged very well performance-wise. We both agree on that part.
                    The 5700XT was, to my knowledge, an economic disaster for AMD, but AMD did it anyway to get experience for the RDNA2 cards...

                    For AMD it was impossible to go directly from Vega/GCN to RDNA2... They used the 5700XT as preparation for the RDNA2 cards.

                    Yes, the Vega 56 and Vega 64 with FSR 1.0/2.0 and ROCm HIP for Blender 3.2 aged very well, even for 4K gaming...

                    But for consumers it is no fun to wait 4-5 years to get FSR 1.0/2.0 for their Vega cards.
                    For example, Cyberpunk 2077 at 4K resolution at 60Hz with FSR 1.0 works very well.

                    Originally posted by ms178 View Post

                    But the 5700XT was basically an overclocked lower-mid-range card with a couple of annoying hardware bugs and driver issues during its first year. But it was way cheaper to produce than Vega; AMD simply did not want to go all-in on volume because they were constrained on wafer capacity and could use the same 7nm wafers to make way more money per square millimeter by selling them as Zen 2 chiplets instead.
                    You can claim this, alright, but the 5700 was never a competitive product against the 1080 Ti and 2080 Ti...
                    Unlike in the Vega 64 era, when AMD and Nvidia had the same level of graphics features and Vega was simply slower,
                    this is not the case for the 5700... Many people bought an RTX 2060 or something like that because of raytracing as a feature.
                    And even if people say "fuck raytracing, it is too slow for gaming anyway", they bought the 1080 Ti, which was faster than the 5700...

                    Even if, as you want it, AMD could have produced double the number of 5700s and sold them dirt cheap, it is not clear whether AMD could have made those sales rather than just crashing the market for their own products.

                    Did you buy a 5700XT?...

                    Originally posted by ms178 View Post
                    ​​
                    And yes, from AMD's point of view that makes perfect sense. It did come at the cost of mindshare in the GPU market though, they are still recovering from these dark times and have to fight back hard for lost market share against Nvidia. And if Intel gets their act together, they could be a formidable competitor, too.
                    My point is: AMD could have gotten way more market share if they had been willing to be more price-aggressive with RDNA1 (which would have been economically sound, whereas being price-aggressive with Vega was suicidal). We consumers suffered from this period of stagnation, as it made Nvidia the only other option in certain segments, and in terms of availability, until RDNA2 arrived (which was a pain to get for other reasons, due to the pandemic and shortages), and you can see what Nvidia made of its market position in their earnings releases.
                    Just look at some UserBenchmark comparisons: Vega 64 vs. 1080 Ti, Vega 64 vs. 2080... and 5700 vs. 1080 Ti and 5700 vs. 2080...
                    You will see that AMD could not lower the price enough to make the product desirable...

                    Based on 931,944 user benchmarks for the AMD RX Vega-64 and the Nvidia GTX 1080-Ti, we rank them both on effective speed and value for money against the best 714 GPUs.


                    The 1080 Ti is +44% faster than the Vega 64.

                    Based on 1,095,176 user benchmarks for the AMD RX 5700-XT and the Nvidia GTX 1080-Ti, we rank them both on effective speed and value for money against the best 714 GPUs.


                    The 1080 Ti is +42% faster than the 5700...

                    This alone shows you that the 5700 only got about 2% more performance in this comparison (see the arithmetic sketch below):

                    Based on 612,854 user benchmarks for the AMD RX 5700-XT and the Nvidia RTX 2080-Ti, we rank them both on effective speed and value for money against the best 714 GPUs.


                    The 2080 Ti is 35% faster than the 5700... (the 1080 Ti was the better deal, but because of raytracing people bought the 2080 Ti anyway)

                    Based on 449,622 user benchmarks for the AMD RX Vega-64 and the Nvidia RTX 2080-Ti, we rank them both on effective speed and value for money against the best 714 GPUs.


                    The 2080 Ti is 82% faster than a Vega 64 according to UserBenchmark (it looks like they use different benchmarks as soon as the card has raytracing).

                    However, according to these numbers AMD would have needed to sell their 5700 at a very big loss...

                    I think it was not possible for AMD to sell a Vega 64 or 5700 at an 82% lower price than Nvidia sells their 2000-series cards...
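
                    To make the arithmetic behind that "2%" explicit, here is a tiny sketch chaining the two ratios (taking the UserBenchmark percentages above at face value):

```python
# Tiny sanity check of the relative-speed chain implied by those numbers.
gtx1080ti_vs_vega64 = 1.44   # "the 1080 Ti is +44% faster than the Vega 64"
gtx1080ti_vs_5700xt = 1.42   # "the 1080 Ti is +42% faster than the 5700 XT"

# Both ratios share the 1080 Ti as the baseline, so dividing them gives the
# implied 5700 XT vs. Vega 64 ratio:
r5700xt_vs_vega64 = gtx1080ti_vs_vega64 / gtx1080ti_vs_5700xt
print(f"implied 5700 XT advantage over Vega 64: +{(r5700xt_vs_vega64 - 1) * 100:.1f}%")
# -> roughly +1.4%, i.e. only a percent or two in this particular comparison
```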

                    Originally posted by ms178 View Post
                    ​​
                    Good luck with getting your favorite x86-only software to run with decent performance on such a system. I am all for more competition from other ISAs in the desktop space, but there are many reasons why Qualcomm hasn't been successful so far with their Windows-on-ARM offerings, and no one else has tried it yet. I buy a CPU not because of its ISA but for what it offers me in terms of price/performance with the software I want to run on it. And the sad part is that I haven't seen any offering from IBM or ARM that would even make them part of my decision-making process yet.
                    Emulation of x86 is technically no problem...

                    Apple products are not ready on the driver side...

                    And an IBM system is so expensive that you go broke instantly...

                    So right now it does not look like an option... but who knows, maybe Apple will start to invest in Linux driver development.

                    Originally posted by ms178 View Post
                    ​​
                    Both chip design and process are equally important, but I don't get why this is relevant to my rant about AMD's complacency in the GPU space for so long. Intel still has some catching up to do in designing a competitive GPU product (and everything that needs to come with it to be successful); that's hardly insightful or news to anyone following the Arc launch.
                    Yes, you can say AMD did something wrong in the GPU space, but Intel delivering a much worse result should tell you that maybe AMD did not do so badly at all.
                    Phantom circuit Sequence Reducer Dyslexia

                    Comment


                    • Originally posted by qarium View Post
                      Just look at some UserBenchmark comparisons: Vega 64 vs. 1080 Ti, Vega 64 vs. 2080... and 5700 vs. 1080 Ti and 5700 vs. 2080...
                      You will see that AMD could not lower the price enough to make the product desirable...
                      UserBenchmark is a joke; it's not a credible site, it's not transparent in how it reaches its conclusions, and it's openly biased against AMD.
                      Vega64 was priced competitively against the GTX1080, not the GTX1080Ti, and it appears to have aged quite well.
                      https://odysee.com/@HardwareUnboxed:2/was-vega-really-more-'future-proof'-vega:5

                      It's true that it's a shame Radeon cards weren't as polished at release as GeForce ones, but even GCN1 cards from 2012 are still useful today on Linux.

                      Comment
