Intel Arc Graphics A380: Compelling For Open-Source Enthusiasts & Developers At ~$139


  • Originally posted by L_A_G View Post
    That's the crux of the matter, isn't it? That you're pulling extra conditions out of your backside.

    I could very well say the same about your opinion.
    Originally posted by L_A_G View Post
    Defective products maybe, but even the most advanced buyers aren't going to be testing for it. The maximum power delivery for PCIe slots isn't something reviewers test for. All they're going to see is the motherboard, and potentially the GPU too, failing down the line, and because it has failed they can't test which one was at fault.
    I am sorry, but there are plenty of cards that "approach the limit", at the very least the three models I linked previously, but there are countless more. If this were a widespread problem it would be widely known, or worked around as you suggest by adding a 6-pin to 75W cards, but it's not.
    This very issue is also being tested: in the 1650 article I linked, for example, they overclocked the card and it held a sustained 74W during FurMark (which is an unrealistic load).
    Originally posted by L_A_G View Post
    Yet you posted an example of a card peaking at 76W, which is beyond the spec. If a motherboard can't deliver that 75W, which is quite common in lower-end boards, and we are talking about the kinds of cards that are usually paired with exactly that type of board, then every time your example card peaks it's going to damage that board. Little by little, it'll eventually cause the board to fail, and when it does, it's not uncommon for it to take the graphics card with it.
    Are you seriously arguing that a peak 1W beyond the spec is an issue? Peaks happen infrequently and are accounted for in every design (this is about inrush current, but the idea is similar). This might very well be within the measurement error margin as well.
    As I said before: if it damages the board then the board is defective from the start.
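
    To put rough numbers on the budgets being argued about here, below is a quick back-of-the-envelope sketch in Python. It assumes the commonly cited PCIe CEM figures (about 66W on the 12V rail plus roughly 10W on 3.3V for an x16 graphics slot, usually rounded to 75W, and 75W / 150W for 6-pin / 8-pin auxiliary connectors); the example draws plugged in are the ones quoted in this thread, not measurements of mine:

    # Rough power-budget arithmetic for the figures quoted in this thread.
    # The limits are the commonly cited PCIe CEM numbers, not measurements.

    SLOT_LIMIT_W = 75       # x16 graphics slot: ~66 W @ 12 V + ~10 W @ 3.3 V, rounded
    AUX_6PIN_W = 75         # 6-pin auxiliary connector
    AUX_8PIN_W = 150        # 8-pin auxiliary connector

    def headroom(draw_w: float, aux_w: float = 0.0) -> float:
        """Remaining margin in watts; negative means the budget is exceeded."""
        return SLOT_LIMIT_W + aux_w - draw_w

    print(headroom(76))               # -1.0 : the 76W peak being argued about, bus-powered
    print(headroom(74))               #  1.0 : the sustained 74W FurMark figure
    print(headroom(76, AUX_6PIN_W))   # 74.0 : the same peak if a 6-pin connector were present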

    Comment


    • Originally posted by numacross View Post
      I could very well say the same about your opinion.
      What you're setting up are conditions you've just made up; what I'm pointing out are considerations based on real-world experience in an imperfect world.

      If you want an example, I'll give you the factory-overclocked models of the GTX 750 Ti, which was supposed to be a 60W card, and reviewers at the time supposedly confirmed that it drew about 65W under load. However, after reports began to surface of motherboards going poof, independent third parties ran their own tests and eventually found that the reviewer figures were wrong and the card would actually draw up to 82W under load, which broke quite a few lower-end motherboards.

      I am sorry, but there are plenty of cards that "approach the limit", at the very least the three models I linked previously, but there are countless more. If this were a widespread problem it would be widely known, or worked around as you suggest by adding a 6-pin to 75W cards, but it's not. This very issue is also being tested: in the 1650 article I linked, for example, they overclocked the card and it held a sustained 74W during FurMark (which is an unrealistic load).
      Did you even read what I wrote? I was explicitly talking about MOTHERBOARD reviewers not testing whether PCIe power delivery is up to spec, and that even if they did, it's something that wouldn't even show up in the short time span reviews run their tests in.

      Are you seriously arguing that a peak 1W beyond the spec is an issue? Peaks happen infrequently and are accounted for in every design (this is about inrush current, but the idea is similar). This might very well be within the measurement error margin as well.
      I pointed out the 76W peak card as an example of this issue not being limited to Intel's new GPU. The factory-overclocked versions of the 750 Ti didn't go much further above the spec, and they still broke plenty of motherboards back in the day.

      As I said before: if it damages the board then the board is defective from the start.
      Again: you're not going to find this out until your motherboard fails, and by that point the warranty will most probably have expired. That's also assuming that you can correctly deduce the culprit.

      Seriously though, you're trying to make a huge deal out of a really small thing, a thing that's just a safety precaution for the fact that we live in an imperfect world where the things we buy aren't always up to specification and corners are sometimes cut. We don't live in a perfect world and it's idiotic to act like we do.

      Comment


      • Originally posted by L_A_G View Post
        What you're setting up are conditions you've just made up; what I'm pointing out are considerations based on real-world experience in an imperfect world.

        I am quoting direct measurements of multiple current products and industry specifications. You are presenting anecdotal evidence with no sources.

        Originally posted by L_A_G View Post
        If you want an example, I'll give you the factory-overclocked models of the GTX 750 Ti, which was supposed to be a 60W card, and reviewers at the time supposedly confirmed that it drew about 65W under load. However, after reports began to surface of motherboards going poof, independent third parties ran their own tests and eventually found that the reviewer figures were wrong and the card would actually draw up to 82W under load, which broke quite a few lower-end motherboards.
        Great, so a product that violated the spec violated the spec, and motherboards that were designed incorrectly broke. That is an 8-year-old card as well. I'm pretty sure the industry learned from this and the RX 480 debacle... On-board power regulation has also advanced in that time.
        I gave you examples of many modern products that don't violate the spec.

        Originally posted by L_A_G View Post
        I pointed out the 76W peak card as an example of this issue not being limited to Intel's new GPU. The factory-overclocked versions of the 750 Ti didn't go much further above the spec, and they still broke plenty of motherboards back in the day.
        A constant 82W under load from your example is a far cry from 76W spikes. Again, you are describing a product that breaks the spec and goes over the recommended NVIDIA BIOS settings.
        Also, this Intel card peaks at 102W, which is even further from 76W.

        Anyway, this entire discussion has gone off in an off-topic direction. My original point was that this A380 shouldn't be called a "75W card" because it doesn't meet the "industry standard" for one - it takes way too much power and requires an aux power connector, an oversized one at that.

        Originally posted by L_A_G View Post
        Again: you're not going to find this out until your motherboard fails, and by that point the warranty will most probably have expired. That's also assuming that you can correctly deduce the culprit.

        Seriously though, you're trying to make a huge deal out of a really small thing, a thing that's just a safety precaution for the fact that we live in an imperfect world where the things we buy aren't always up to specification and corners are sometimes cut. We don't live in a perfect world and it's idiotic to act like we do.
        I am not making any more of "a huge deal" out of this than you are by demanding that <=75W cards have a 6-pin aux connector simply because there are a few bad apples among motherboards.
        Do you not realize that demanding that connector limits the pool of computers able to receive a card like this? There are many business PCs whose life could be extended by adding a 75W GPU, but they lack any sort of PCIe power connectors. Dell, for example, is very fond of its proprietary PSUs that do exactly this.
        This lower-end Intel GPU would be a good fit for them, provided you're willing to accept the loss of performance from the lack of ReBAR. Hopefully there will be versions of this card that are bus-powered.
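
        As a side note, if you're wondering whether a given office box would actually run the card without the ReBAR penalty: on Linux the simplest hint is whether the GPU exposes a VRAM-sized PCI BAR, since that is what Resizable BAR provides. Below is a rough sketch; the sysfs resource layout is standard, but the PCI address is a placeholder you would have to adjust for your own system:

        # Rough Linux sketch: list the PCI resource (BAR) sizes of a GPU to see
        # whether a VRAM-sized, multi-gigabyte aperture is exposed, which is what
        # Resizable BAR provides. The device address is a placeholder.

        from pathlib import Path

        DEVICE = "0000:03:00.0"  # hypothetical PCI address of the GPU; adjust as needed

        def bar_sizes(device: str):
            """Yield (index, size_in_bytes) for each populated resource of a PCI device."""
            resource = Path(f"/sys/bus/pci/devices/{device}/resource")
            for index, line in enumerate(resource.read_text().splitlines()):
                start, end, _flags = (int(field, 16) for field in line.split())
                if end > start:
                    yield index, end - start + 1

        for index, size in bar_sizes(DEVICE):
            print(f"resource {index}: {size / 2**20:.0f} MiB")
        # A multi-gigabyte aperture suggests Resizable BAR is in effect; a 256 MiB
        # aperture suggests the platform or firmware does not support it.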

        Comment


        • Originally posted by numacross View Post
          I am quoting direct measurements of multiple current products and industry specifications. You are presenting anecdotal evidence with no sources.
          You're going by figures that are sometimes plain wrong and a hopelessly naive assumption that specs are always followed and corners are never cut. I gave you exactly what you need to find the same sources I used and re-checked when I wrote my posts.

          Great, so a product that violated the spec violated the spec, and motherboards that were designed incorrectly broke. That is an 8-year-old card as well. I'm pretty sure the industry learned from this and the RX 480 debacle... On-board power regulation has also advanced in that time. I gave you examples of many modern products that don't violate the spec.
          Considering that the 750 Ti came out 8 years ago and the RX 480 then repeated the exact same mistake a few years later, what does that suggest about the industry and its ability to learn from mistakes? That it's not perfect. Companies do repeat mistakes. Pointing to a few cards that follow the spec is no proof that cards never go beyond it when there's proof of exactly that.

          A constant 82W under load from your example is a far cry from 76W spikes. Again, you are describing a product that breaks the spec and goes over the recommended NVIDIA BIOS settings. Also, this Intel card peaks at 102W, which is even further from 76W.
          That 82W was under a load similar to the A380's non-v-sync load (the v-sync load being exactly 75W) and had spikes up to 92W. However, unlike the 750 Ti, the A380 has that 8-pin connector, so it never needs to pull more than the spec allows from the PCIe slot. Also, the overclocked 750 Tis were factory overclocked and came like that out of the box.

          Anyway, this entire discussion has gone off in an off-topic direction. My original point was that this A380 shouldn't be called a "75W card" because it doesn't meet the "industry standard" for one - it takes way too much power and requires an aux power connector, an oversized one at that.
          Again, "industry standard" is not what some random guy off the internet pulls out of his backside. If its rated as a 75W card and draws roughly that, then its that figure is roughly accurate. It doesn't matter that for added protection it also has an 8pin power connector because its draw is right up near the maximum of what the spec allows.

          Honestly, the thinking behind it is not all that different from why sports car makers put far more powerful brakes in their cars than they legally need to. When you have an increased risk of things going wrong and have any kind of pride in your work, you build in an extra margin of safety.

          I am not making any more of "a huge deal" out of this than you are by demanding that <=75W cards have a 6-pin aux connector simply because there are a few bad apples among motherboards.
          Unlike you, I haven't demanded anything. Saying that having one doesn't disqualify it as a 75W card, as you insist, and that having one is a good thing in an imperfect world where specs aren't always met and corners are cut, is not a demand that every card with that level of power draw should have one.

          Do you not realize that demanding that connector limits the pool of computers able to receive a card like this? There are many business PCs whose life could be extended by adding a 75W GPU, but they lack any sort of PCIe power connectors. Dell, for example, is very fond of its proprietary PSUs that do exactly this. This lower-end Intel GPU would be a good fit for them, provided you're willing to accept the loss of performance from the lack of ReBAR. Hopefully there will be versions of this card that are bus-powered.
          When the A380 is already pushing against the maximum bus power delivery of the PCIe spec, and going over it without v-sync, insisting on cards that don't need an external power connector is just asking for a repeat of the 750 Ti and RX 480 debacles. It's especially not worth it just for pre-built machines from objectively crap companies like Dell.

          Comment


          • Originally posted by L_A_G View Post
            You're going by figures that are sometimes plain wrong and a hopelessly naive assumption that specs are always followed and corners are never cut. I gave you exactly what you need to find the same sources I used and re-checked when I wrote my posts.
            You gave me nothing; I gave you links to my sources.
            Originally posted by L_A_G View Post
            Considering that the 750 Ti came out 8 years ago and the RX 480 then repeated the exact same mistake a few years later, what does that suggest about the industry and its ability to learn from mistakes? That it's not perfect. Companies do repeat mistakes. Pointing to a few cards that follow the spec is no proof that cards never go beyond it when there's proof of exactly that.

            That 82W was under a load similar to the A380's non-v-sync load (the v-sync load being exactly 75W) and had spikes up to 92W. However, unlike the 750 Ti, the A380 has that 8-pin connector, so it never needs to pull more than the spec allows from the PCIe slot.
            Since the RX 480 was caught pulling more from the PCIe slot than the spec allows despite having aux power available, what makes you think that this A380, or any other "75W card" with aux power, won't make the same mistake and destroy motherboards? Obviously companies do repeat mistakes.
            Do you see how this line of thinking is flawed?
            Originally posted by L_A_G View Post
            Also, the overclocked 750 Tis were factory overclocked and came like that out of the box.
            Yes, they were non-compliant products from the factory. The manufacturer ignored NVIDIA BIOS guidelines and broke the PCIe spec on purpose.
            Originally posted by L_A_G View Post
            Again, "industry standard" is not what some random guy off the internet pulls out of his backside. If its rated as a 75W card and draws roughly that, then its that figure is roughly accurate. It doesn't matter that for added protection it also has an 8pin power connector because its draw is right up near the maximum of what the spec allows.
            This is not the case for this A380 because it doesn't draw 75W, but far more than that. Hence I don't want to call it a "75W card", which was my original argument, for the third time.
            There are many other 75W cards that have approached the limit without any problems for years now, but you keep ignoring that and instead focus on a few bad apples.
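
            For anyone who wants to see how close their own card actually gets to the slot budget, here is a rough sketch that polls the GPU's hwmon power sensor on Linux and reports the peak. Whether such a sensor is exposed at all, and whether it reports total board power or only slot power, depends on the driver, so treat the sensor paths below as assumptions:

            # Rough sketch: poll a GPU's hwmon power sensor and report the observed
            # peak, to see how close a "75W card" gets to the slot budget under load.
            # Whether a sensor exists, and what it actually measures (total board power
            # vs. slot power only), depends on the driver; the globs are assumptions.

            import glob
            import time

            SLOT_LIMIT_W = 75

            def find_power_sensor():
                """Return the first GPU hwmon power file found, or None."""
                patterns = [
                    "/sys/class/drm/card*/device/hwmon/hwmon*/power1_input",
                    "/sys/class/drm/card*/device/hwmon/hwmon*/power1_average",
                ]
                for pattern in patterns:
                    matches = sorted(glob.glob(pattern))
                    if matches:
                        return matches[0]
                return None

            def monitor(seconds: float = 30.0, interval: float = 0.5) -> None:
                sensor = find_power_sensor()
                if sensor is None:
                    print("No hwmon power sensor found for any GPU.")
                    return
                peak = 0.0
                deadline = time.time() + seconds
                while time.time() < deadline:
                    with open(sensor) as f:
                        watts = int(f.read()) / 1_000_000  # hwmon reports microwatts
                    peak = max(peak, watts)
                    time.sleep(interval)
                print(f"Peak observed: {peak:.1f} W (PCIe slot budget: {SLOT_LIMIT_W} W)")

            if __name__ == "__main__":
                monitor()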

            I am not going to argue this further since neither side seems likely to convince the other, so let's agree to disagree.

            Comment


            • Originally posted by coder View Post
              They were first on 7 nm. You can't take two different products made on the same node at two different times and do such a direct cost comparison. The first ones are traditionally smaller and more expensive.
              The 7nm process was already mature and yielding well by July 2019. Yes, consumers got the crappy Zen 2 bins for the first few quarters, but that was not true for EPYC, and the situation improved quickly in the consumer space as well. AMD surely could have ramped up GPU production by Q2/2020 if they had wanted to, but they did not because of all the other products they could sell instead on the same 7nm process; then the shortages and rising costs hit them hard.

              Originally posted by coder View Post
              TSMC doesn't remotely have the capacity to replace Intel's own manufacturing. So, the examples you're highlighting represent the exception rather than the rule.
              Don't look at the past, look at the present and the near future. Admit it, Intel is dependent on TSMC for the foreseeable future for their core products - at least the next three years. The examples I gave you are the new normal for the next three years and therefore highly relevant.

              Originally posted by coder View Post
              Intel's manufacturing problems really didn't hurt them on the financial end, which is where it matters. The global shortage of fab capacity was so bad that big customers had to keep buying Intel CPUs, even when they weren't the fastest or most efficient.
              You sound just like the careless former Intel executives who took their data center business for granted, didn't invest in their core products and simply kept milking their clients (Pat Gelsinger had no nice things to say about that "decade of underinvestment"). That attitude got Intel into big trouble; the financial side only lagged behind the technical side for a while because the market is slow to change, but you can see the rising financial storm in Intel's recent numbers: their Data Center and AI Group took a huge hit and will suffer further for at least the next two years. I always thought that Intel could counter AMD technologically within a few years, but their flawed execution makes me think twice about whether that is still true. The heated competition with AMD and the ARM collective eats away at Intel's big margins, and that is partly attributable to Intel's manufacturing woes, absolutely.

              Originally posted by coder View Post
              BTW, Cooper Lake was a niche part they only sold to select customers. It doesn't really make a good example. Stick to Cascade Lake vs. Rome.
              No, Cooper Lake is the perfect example, as it had to be fabbed on a technically obsolete process and became uncompetitive with AMD due to its limitations in power draw and core count. That meant it had to stay a niche product, as Intel was limited technologically there, with only some wins in HPC thanks to their lead in AVX-512 workloads. I am sure Intel didn't initially plan for Cooper Lake to stay a niche product, and it underlines my point that their process woes hurt them a lot right there.

              Originally posted by coder View Post
              Now you're muddling things all together. Your original contention was that Intel being in the GPU game could've saved us from the unprecedented price & availability problems of the past 2 years. In that time, the only viable process nodes for GPUs were TSMC 12 nm, TSMC N7, and Samsung 8 nm. It's only earlier this year that N6 started to enter the picture -- Intel couldn't have used TSMC N6 in the timeframe where they would have made a difference.
              Dude, you really seem to like fighting strawman arguments. I never made such a claim; I said that Intel could have taken double-digit market share by now by getting Alchemist out the door with a Q4/21 or Q1/22 launch. That was a realistic target that they eyed themselves and simply missed. TSMC's N6 was perfectly viable for a product launch at that point in time; heck, TSMC had even been mass-producing N5 since April 2020 (for Apple, at smaller die sizes, but you'd still think they could have made a mid-sized GPU on 5nm a year and a half later)!

              Originally posted by coder View Post
              Furthermore, you don't seem to understand that being on Samsung 8 nm (or TSMC 12 nm) isn't something Nvidia really wanted to do. It seems to be a worse node in almost every way. That's what happens when there's a capacity shortage -- products get moved to different nodes that are either inferior or a lot more expensive.
              What Nvidia wanted or didn't want is irrelevant to my point that they did not fight AMD for the same wafer capacity at TSMC. I falsified your "fixed pie" argument with that indisputable fact. And you ignored that fact altogether in your previous comment.

              I also dispute the notion that Nvidia didn't want to fab Ampere at Samsung. It was a deliberate gamble on their part: they were not happy with TSMC's pricing and may have hoped that the Samsung process would turn out better than it did (they initially targeted Samsung's 5nm, if I remember correctly, but Samsung missed its release targets). That was a miscalculation on their end which backfired later on, as they had to pay their way back in when returning to TSMC. And they now have a ton of booked wafer capacity with TSMC that they don't need, as their Ampere inventory levels are still too high, but they can only postpone next-gen shipments until Q1/23.

              Originally posted by coder View Post
              I'm sure they were prepared for lower profit margins than their competitors [...]. Nobody at Intel, looking at the historically profitable GPU business over pretty much the entire run up to ARC's development expected to be selling them below cost. You can't look at their current pricing and believe it would've sold for anything like that, 1-2 years ago.
              I never said anything about absolute pricing; I was talking about abstract pricing levels. And a new market entrant that is happy with lower margins and does not compete for wafers on the same process node surely would have had an impact on overall pricing levels, as it would drive demand away from its competitors. Deny it if you want; I think your argument ignores the relevant details and a basic understanding of economics, and is therefore of no substance.

              Originally posted by coder View Post
              Again, you're conjuring Intel GPUs out of the ether. You neglect to account for the fact that Intel bidding for wafer capacity would've increased prices and decreased volumes for everyone, including Intel.
              Read the last paragraph of this post again: they are paying for different nodes. Your fixation on your flawed "fixed wafer volume for all" thinking is really making you look stupid.

              Originally posted by coder View Post
              The main reason the internet was excited about Intel getting into the GPU game was based on the assumption they'd use their own fabs. That would represent a meaningful increase in supply. Now that they're basically fighting everyone else for TSMC's capacity, I don't expect them to have a major impact on GPU pricing, at least until they can offer something competitive. And even that's mostly predicated on the kinds of supply-side bottlenecks we've been seeing continuing to wind down.
              While it would surely help overall supply if Intel brought their own fabs into the GPU market, accepting lower margins also has positive effects for consumers, even if they have to use the same node as AMD. Whether they can add the same value to a wafer as AMD can comes down to their GPU design. After all, we consumers choose with our wallets which product suits our needs better. But from a consumer perspective the old saying still holds true: "There is no such thing as a bad product, only a bad price." Intel cannot sell Arc below cost forever, but they need to start somewhere to gain traction in the market; they can improve their comparative position in the following generations and raise margins if they succeed. Someone will end up second or third in that fight, and those companies have to offer compelling price/performance to sell their stuff. That's what I meant by "the laws of the market" - deal with it, even if you don't seem to know much about it.

              Originally posted by coder View Post
              About the last person here I'm going to take "econimics" or semiconductor fabrication lessons from is you. It's obvious that your grasp on each is tenuous, at best.
              That's a pity, as I did get a great education in law and economics. It's also nice to see you backpedaling on some of your core arguments, even without admitting anything was wrong in your analysis. That part reflects badly on your character, however. A good tip for any future argument: try to comprehend what others are saying first, instead of reading into their words what you want and attacking that imagined position later on; no one likes fighting a strawman argument. Furthermore, get your basic facts right next time before starting to lecture people.

              Comment


              • Michael, could you include a Polaris card in this kind of benchmark? And one or two Intel iGPUs? It would be nice to see how those perform compared to Intel Arc.

                Comment


                • Originally posted by numacross View Post
                  You gave me nothing; I gave you links to my sources.
                  We're not having this conversation in person. You could find exactly what I'm talking about in about 30 seconds if you wanted to, and you're already aware of the RX 480 issue. But if that's too much to ask, here's a link (check the section "Measured power consumption" for the 750 Ti). For the RX 480 debacle (which I had actually forgotten about) I hardly need a source when you're already aware of it.

                  Since the RX 480 was caught pulling more from the PCIe slot than the spec allows despite having aux power available, what makes you think that this A380, or any other "75W card" with aux power, won't make the same mistake and destroy motherboards? Obviously companies do repeat mistakes.
                  Considering the A380 is already drawing more than 75W under load in certain scenarios, where else is it going to draw those extra watts from when it's got no other place to draw them from? We know there have been factory-overclocked cards that also go beyond the spec, and where are they supposed to draw the excess power from if they don't have an external power connector?

                  Do you see how this line of thinking is flawed?
                  That's kind of my line here. If it's already at the edge and in some cases pulling more than the spec, how else is it going to avoid that? Even if Intel can get it down to 75W in all circumstances, there's something called having a "safety margin" that you're naively trying to argue against. Without any external power input at that level of draw, there's no way to have any safety margin whatsoever.

                  Yes, they were non-compliant products from the factory. The manufacturer ignored NVIDIA BIOS guidelines and broke the PCIe spec on purpose.
                  In other words: you can't trust OEMs to stay within spec, and those safety margins you're arguing against are very much necessary.

                  This is not the case for this A380 because it doesn't draw 75W, but far more than that. Hence I don't want to call it a "75W card", which was my original argument, for the third time.
                  You argued the A380 is not a 75W card because it's got an external power connector and that 75W cards shouldn't have them. My argument was that when you get close to the PCIe spec maximum, it is a good idea to have an external power connector and the safety margin it provides. Especially on the A380, which is already over spec under certain conditions. I also provided the example of another card that, while compliant on paper, wasn't so in the real world.

                  There are many other 75W cards that have approached the limit without any problems for years now, but you keep ignoring that and instead focus on a few bad apples.
                  Again with the naivete... The fact that those bad apples undeniably exist is clear proof that you shouldn't go around naively assuming there are no bad apples you might run into. We agree that bad apples exist, but this fact for some reason doesn't seem to register with you. Instead you keep insisting on carrying on as if bad apples don't exist.

                  I am not going to argue this further since neither side seems likely to convince the other, so let's agree to disagree.
                  We certainly are at an impasse here. I've pointed out that bad apples undeniably exist and that, because of that, safety margins are a good idea. You, on the other hand, acknowledge that bad apples exist but still naively insist these safety margins are totally unnecessary.
                  Last edited by L_A_G; 30 August 2022, 12:02 PM.

                  Comment


                  • Originally posted by L_A_G View Post
                    You argued the A380 is not a 75W card because it's got an external power connector and that 75W cards shouldn't have them. My argument was that when you get close to the PCIe spec maximum, it is a good idea to have an external power connector and the safety margin it provides.
                    Agree from an engineering POV, but from a customer POV there's a complication - for the last several years "75W card" has come to mean "card which does not require a separate power cable" for most people. I'm not saying that's strictly correct, but it is how everyone thinks about things - at minimum we'll need an equally short name for cards which get all their power from the PCIe slot.

                    Comment


                    • Originally posted by bridgman View Post
                      Agree from an engineering POV, but from a customer POV there's a complication - for the last several years "75W card" has come to mean "card which does not require a separate power cable" for most people. I'm not saying that's strictly correct, but it is how everyone thinks about things - at minimum we'll need an equally short name for cards which get all their power from the PCIe slot.
                      I've never heard of anyone equating a specific wattage with not requiring external power. From what I've seen, people simply talk about cards that don't require external power and run entirely off the bus.

                      Comment
