Intel Arc Graphics A750 + A770 Linux Gaming Performance

  • If I were given a dime every time somebody stated "AMD" or an AMD-related term in this thread, I certainly would not be broke!



    • Originally posted by Anux View Post
      True, but still AMD never had a problem staying behind Intel with DDR or PCIe.
      They actually leapfrogged Intel with PCIe 4.0. Intel was caught so off-guard that it was stuck on PCIe 3.0 for 20 months after Zen 2 launched (July 2019)! It was supposed to be much less, but Intel had last-minute problems with PCIe 4.0 on Comet Lake boards and had to drop back to 3.0 before launch (August 2019). It wasn't until the launch of Rocket Lake (March 2021) that Intel finally got on PCIe 4.0!

      I can understand AMD not wanting to get left behind by staying on PCIe 4.0, even though it's more than enough for desktop users.

      Originally posted by Anux View Post
      They probably saw benefits of having PCIe 5 with SSDs and GPU direct access
      Being able to run dual-x8 GPUs @ PCIe 5.0 is the singular argument for having it on the desktop, and only if you're running high-end GPUs of the sort a desktop PSU could barely power 2 of.

      In terms of SSDs, not only do desktop users not need PCIe 5.0 speeds, but SSDs are just working their way to market that can finally come close to maxing PCIe 4.0 on reads.

      It took a year after AMD's PCIe 4.0 support landed for SSDs to hit the market that could exceed PCIe 3.0 speeds. We're now seeing the same story play out with PCIe 5.0 vs 4.0. And when drives do hit those speeds, they burn lots of power, which means they throttle, which means they can't sustain it.

      AMD was early on PCIe 4.0 support, which is why they got the jump on Intel. Intel responded by being even earlier on PCIe 5.0. And it's not as if this stuff is "free". Everybody is paying for it in higher motherboard costs and power consumption. Just for Intel's bragging rights.
      Last edited by coder; 09 October 2022, 06:55 PM.
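For anyone who wants to sanity-check the numbers behind that generational progression, here is a minimal Python sketch of per-lane PCIe throughput. The figures assume 128b/130b line coding and ignore protocol overhead, so they are approximate ceilings, not measured speeds:

```python
# Per-lane throughput for PCIe 3.0-5.0, illustrating that each
# generation is a straight doubling of the transfer rate.
# Assumptions: 128b/130b line coding, protocol overhead ignored.

GT_PER_S = {"3.0": 8, "4.0": 16, "5.0": 32}  # giga-transfers per second
ENCODING = 128 / 130                          # 128b/130b line coding

def lane_gbps(gen: str) -> float:
    """Approximate per-lane throughput in GB/s (1 transfer = 1 bit)."""
    return GT_PER_S[gen] * ENCODING / 8       # bits -> bytes

for gen in GT_PER_S:
    print(f"PCIe {gen}: {lane_gbps(gen):.2f} GB/s per lane, "
          f"{16 * lane_gbps(gen):.1f} GB/s at x16")
```

At x16 that works out to roughly 16, 32, and 63 GB/s for 3.0, 4.0, and 5.0 respectively, which is also why dual-x8 PCIe 5.0 slots can match a single x16 PCIe 4.0 slot.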



      • Originally posted by Dukenukemx View Post
        They have direct control. Either that or sell their own motherboards because clearly their board partners have lost their minds.
        They don't have control over partner board pricing, but they surely know the input costs and how they compare with the previous generation's. They had a lot of influence over that back when they decided on the specs for this generation.

        Also, I read some mention of the new Zen 4 boards having 2 "chipset" chips? That indeed sounds more expensive than 1, but it depends on the specifics.

        I think the main issue with motherboard costs for Alder Lake and Zen 4 is PCIe 5.0. That requires boards with more layers, more expensive materials, and retimers. DDR5 also isn't helping.

        The other thing that's happened is inflation. Board makers' input prices have gone up, sometimes considerably. Some of the more extreme examples (like Ethernet PHYs costing hundreds of dollars) will get sorted out as silicon manufacturing backlogs are worked through, but we obviously can't expect current-gen board prices to bottom out at the same levels as earlier gens.

        Originally posted by Dukenukemx View Post
        You can't raise prices and expect stuff to sell. In fact, it isn't selling.
        Some people will pay more for more performance, but I agree that the mid-market has to stay at a roughly similar point.



        • Originally posted by Dukenukemx View Post
          You have a source or not? You can't just say things without proving it. Put a link.
          I suspect he's getting it directly from this article:
          Originally posted by phoronix, in the article for this comment section
          But even still some games out there -- some Steam Play titles -- simply won't run yet with the Intel "ANV" Vulkan driver. Among the few unimplemented features is the Vulkan sparse support needed for some VKD3D-Proton games like DIRT 5, Deathloop, Assassin's Creed: Valhalla, Forza Horizon 4/5, and other modern titles.
          Sounds like DX12 games are broken but DX11 and prior probably work?

          The linked bug report is a year old and says that
          d3d12 feature level 12_0 and beyond are currently unavailable on Anv
          the number of games requiring feature level 12_0 to even launch has only increased
          Last edited by smitty3268; 09 October 2022, 05:52 PM.



          • Originally posted by qarium View Post
            well, you do not have an Apple M1/M2, which means all you talk is pure speculation.
            Do you own one?
            and people who do have an M1/M2 told you LLVM-Pipe is already fast enough.
            No need to work on 3D acceleration then. You hear that guys? LLVM-Pipe is good enough for anyone.



            • Originally posted by Dukenukemx View Post
              Do you own one?
              No need to work on 3D acceleration then. You hear that guys? LLVM-Pipe is good enough for anyone.
              No, I do not have one. But you don't believe the people who tell you this, and I have no reason not to believe what people tell me about the Apple M1/M2.

              You really don't get it. Just because you can watch YouTube on LLVM-Pipe and use simple UI apps does not mean people do not want the full GPU performance.

              As a gamer, you have a specific view of the Apple M1/M2, which this hardware does not serve anyway. Even if the driver were perfect, you still would not want it and still would not buy it.

              That's the joke about you: you demand stuff, and even if Apple did everything you want, in the end you would not buy it.

              To be honest, you are a customer for hardware like this: a laptop with something like an Intel 13900K plus an Nvidia RTX 4000-series on 4 nm with DLSS 3.0 fake frames (this alone saves 25% energy).

              And whatever Apple does, they can never compete with that, because all your legacy games are closed-source x86 games.

              The best-case scenario for an Apple M3 would be Apple licensing and putting in an RDNA3 GPU, which has hardware for FSR 3.0 (similar to the DLSS 2.x hardware on RTX cards). But even then it could not compete with an "Intel 13900K + Nvidia RTX 4000" in your case, since all your games are x86 and Vulkan or DX12.

              So it's completely pointless to talk with you about the Apple M1/M2/M3.

              And it's also pointless to talk with you about Intel Arc, because in the end you will not buy it.

              The problems right now are so big that you will not buy it: some games in Proton do not run, games that do run have hangs and stuttering, and so on.

              People say all the bad reviews are Windows-only and/or gamer-only, but on Linux gamers need Proton, and many titles do not run.

              So, to be honest, it looks like Intel Arc is not for you YET. I am 100% sure you will not buy it.

              Wait a year and maybe Intel will have fixed all the problems. It was the same for AMD with the RDNA1 release; they needed nearly a year to make everything run nicely on Linux.





              • Originally posted by coder View Post
                Being able to run dual-x8 GPUs @ PCIe 5.0 is the singular argument for having it on the desktop, and only if you're running high-end GPUs of the sort a desktop PSU could barely power 2 of.
                If I'm seeing the trend correctly, the cheaper entry-level GPUs in particular come with cut-down PCIe interfaces, like the 6500 XT with PCIe 4 x4. I don't know if RDNA 3 will already have PCIe 5, but SSDs with PCIe 5 are in the works.

                they're burning lots of power, which means they throttle, which means they can't sustain it.
                There is already a "solution" to that: https://www.teamgroupinc.com/en/uplo...969f14c489.jpg We might see actively cooled ones with PCIe 5. Increasing the bandwidth of SSDs is easy: just build an internal "RAID 0".



                • Originally posted by Anux View Post
                  If I'm seeing the trend correctly, especially the cheaper entry level GPUs come with cut down PCIe interfaces like the 6500 XT with PCIe 4 x4.
                  Well, it was originally designed as a laptop GPU. So, I wouldn't read too much into the x4 thing. We've had lots of entry-level GPUs with x8 interfaces, but x4 isn't yet a trend.

                  Nor is it clear exactly how much it saves. The PCIe protocol didn't get more complex until PCIe 6.0 (using PCIe 3.0 as the baseline), so the amount of silicon a PCIe controller takes per lane shouldn't increase until then. I'd expect the difference between x4 and x8 to be negligible, in absolute terms. Just guessing, since PCIe 3.0 x8 interfaces didn't seem too onerous even at 40 nm.

                  Originally posted by Anux View Post
                  SSDs with PCIe 5 are in the works.
                  Not because people need it, but because it's a feature they think they want. Also, the interface speed != sustained performance. You could get a higher-speed burst in/out of onboard DRAM buffers, but I think sustained, sequential reads are unlikely to go much above PCIe 4.0 speeds. No consumer SSDs can sustain writes much above PCIe 2.0 speeds. So, it's mostly just specsmanship.
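To make that burst-vs-sustained point concrete, here's a toy model of a sequential write that lands in an SLC cache at near-interface speed and then falls to the native flash write rate once the cache fills. All figures (cache size, burst and sustained rates) are illustrative assumptions, not measurements of any real drive:

```python
# Toy model of why interface speed != sustained performance for
# consumer SSDs: the average write speed converges on the (much lower)
# native flash rate as the write grows past the SLC cache.

def avg_write_speed(total_gb: float, cache_gb: float = 20.0,
                    burst_gbps: float = 10.0,
                    sustained_gbps: float = 1.5) -> float:
    """Average GB/s for a sequential write of total_gb."""
    if total_gb <= cache_gb:
        return burst_gbps  # the whole write fits in the SLC cache
    # time in cache at burst speed, plus the remainder at sustained speed
    t = cache_gb / burst_gbps + (total_gb - cache_gb) / sustained_gbps
    return total_gb / t

for size in (10, 50, 500):
    print(f"{size:4d} GB write: {avg_write_speed(size):.2f} GB/s average")
```

With these example numbers, a 10 GB write averages the full 10 GB/s burst rate, while a 500 GB write averages barely above the 1.5 GB/s sustained rate, regardless of how fast the host interface is.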

                  Originally posted by Anux View Post
                  There is already a "solution" to that: https://www.teamgroupinc.com/en/uplo...969f14c489.jpg We might see actively cooled ones with PCIe 5.
                  That's a good thing? I liked when SSDs actually used less power than spinning rust-based drives.


                  Originally posted by Anux View Post
                  Increasing the bandwidth of SSDs is easy: just build an internal "RAID 0".
                  The M.2 form factor is not good for that. Controllers eat up precious board space, not to mention the additional heat.



                  • Originally posted by coder View Post
                    Well, it was originally designed as a laptop GPU. So, I wouldn't read too much into the x4 thing. We've had lots of entry-level GPUs with x8 interfaces, but x4 isn't yet a trend.
                    It will become a trend with PCIe 5. Intel also used x8, and whether it was designed for laptops or not doesn't help if that's the only entry-level option in the future. They will cut down as much as possible to save a few bucks, which is really bad because it makes the cards unusable in cheaper/older systems.

                    The PCIe protocol didn't get more complex until PCIe 6.0 (using PCIe 3.0 as baseline), so the amount of silicon a PCIe controller takes per-lane shouldn't increase until then.
                    The coding is more complex, so it would need more silicon. How much, I don't know.

                    No consumer SSDs can sustain writes much above PCIe 2.0 speeds. So, it's mostly just specsmanship.
                    It's like boost clocks on CPUs: in normal usage patterns it's effectively super fast. Sustained writes of course suffer, but those only matter in specific workloads.

                    That's a good thing?
                    Depends on your preferences; I'm fine with my old cooler-less 2500 MB/s SSD.

                    The M.2 form factor is not good for that. Controllers eat up precious board space, not to mention the additional heat.
                    What's so special about RAID 0? It barely needs any logic: one controller and four memory chips and you're good to go.
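For what it's worth, the striping itself really is simple. Here's a minimal sketch of round-robin RAID 0 striping across flash channels; the chunk size and channel count are arbitrary illustrative choices, and real SSD controllers do this in hardware, not Python:

```python
# Minimal RAID 0 sketch: split a write across N channels so each
# channel handles 1/N of the data, multiplying aggregate bandwidth.

def stripe(data: bytes, channels: int, chunk: int = 4) -> list[list[bytes]]:
    """Round-robin the data across channels in fixed-size chunks."""
    lanes = [[] for _ in range(channels)]
    for i in range(0, len(data), chunk):
        lanes[(i // chunk) % channels].append(data[i:i + chunk])
    return lanes

def unstripe(lanes: list[list[bytes]]) -> bytes:
    """Re-interleave the chunks to recover the original data."""
    out = []
    for i in range(max(len(lane) for lane in lanes)):
        for lane in lanes:
            if i < len(lane):
                out.append(lane[i])
    return b"".join(out)

payload = bytes(range(16))
lanes = stripe(payload, channels=4)
assert unstripe(lanes) == payload
print([len(b"".join(lane)) for lane in lanes])  # each channel got 4 of 16 bytes
```

The catch, as noted in the reply below this post, is that beating a single controller's limits requires multiple controllers plus something to present them as one device to the host.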



                    • Originally posted by Anux View Post
                      It will become a trend with PCIe 5. Intel also used x8
                      x8 cards are nothing new. I have an AMD RX 550 (Polaris) that's PCIe 3.0 x8. And I'm sure even that wasn't the first.

                      x4 would be a new development, if this is anything more than a one-off. But it's also kind of pointless from a system perspective, because the card is mechanically x16 and the motherboards have to support x16. So, it's only a (tiny) cost savings at the silicon level, plus shaving a couple of cents off the graphics card's board cost.

                      Originally posted by Anux View Post
                      The coding is more complex, so it would need more silicon. How much, I don't know.
                      You're thinking of 6.0, which introduces PAM4 and flits. PCIe 5.0 has none of that. It's just a straight clock-doubling exercise, from 3.0 to 5.0.

                      Originally posted by Anux View Post
                      What so special about raid0? It barely needs any logic. 1 controller and 4 mem chips and you're good to go.
                      The only reason even to do it is to surpass the limits of what a single controller can handle. By definition, you need multiple controllers. Otherwise, it's pointless. And if you want to make it transparent to the host, then you need a 3rd chip. I actually have a Western Digital card which works like that, but it's an x8 PCIe add-in card.

