Intel Announces Arc B-Series "Battlemage" Discrete Graphics With Linux Support

  • bug77
    Senior Member
    • Dec 2009
    • 6526

    #51
    Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

    Things are pretty different > 20 years later. These cards will still render your desktop fine even if Intel gives up on dGPUs. But they will effectively be paperweights for gaming without heavy continued investment in the drivers. Our unfortunate model of game specific driver changes / optimizations necessitates that. If people could trust that Intel was in the dGPU game for the long haul these could sell quite well. The market has been starved of decent (old school) mid-range cards for a long time. I'll probably buy one to mess around with knowing that I might be pissing my money away. I just don't see waves of others doing so without some kind of guarantee that Intel is still going to be making dGPUs for many years to come.
    Gaming could be an issue, yes. But between Proton and open source drivers, even that could still be covered pretty well.

    Comment

    • Quackdoc
      Senior Member
      • Oct 2020
      • 5112

      #52
      Originally posted by fintux View Post

      What's the point of sticking PCIe 5.0 on the GPU when it's not capable of using the bandwidth? PCIe 5.0 doesn't automatically increase performance, it just adds bandwidth, and if that wasn't the bottleneck, it's only going to add to the cost of the GPU.
      compute says hello

      Comment

      • PapagaioPB
        Junior Member
        • Jul 2024
        • 19

        #53
        Hello, does anyone have experience with how Intel's open-source driver performs for gaming with Valve's Proton? It's very hard to find benchmarks online, and the ones I saw showed a significant difference between gaming on Windows and Linux. I'm about to buy a new GPU and am waiting for RDNA4 to make my decision, but I might give Intel a chance given their pricing.

        Comment

        • Guilleacoustic
          Junior Member
          • Dec 2024
          • 3

          #54
          Probably a silly question, but what's the current status of Xe/Xe2 with Wayland?

          Comment

          • Quackdoc
            Senior Member
            • Oct 2020
            • 5112

            #55
            Originally posted by PapagaioPB View Post
            Hello, does anyone have experience with how Intel's open-source driver performs for gaming with Valve's Proton? It's very hard to find benchmarks online, and the ones I saw showed a significant difference between gaming on Windows and Linux. I'm about to buy a new GPU and am waiting for RDNA4 to make my decision, but I might give Intel a chance given their pricing.
            If you know of any free games, I could bench the A380 for you. I don't play a lot of paid games on Steam, and piracy is too much of a hassle for me now, but generally it's been fine.

            Comment

            • MillionToOne
              Phoronix Member
              • Aug 2024
              • 108

              #56
              Originally posted by PapagaioPB View Post
              Hello, does anyone have experience with how Intel's open-source driver performs for gaming with Valve's Proton? It's very hard to find benchmarks online, and the ones I saw showed a significant difference between gaming on Windows and Linux. I'm about to buy a new GPU and am waiting for RDNA4 to make my decision, but I might give Intel a chance given their pricing.
               Depends on how well their Xe driver performs on Battlemage. On Alchemist the performance is mediocre: i915 works well enough, though ANV is slow in many games.

              Comment

              • PapagaioPB
                Junior Member
                • Jul 2024
                • 19

                #57
                Originally posted by Quackdoc View Post

                If you know of any free games, I could bench the A380 for you. I don't play a lot of paid games on Steam, and piracy is too much of a hassle for me now, but generally it's been fine.
                Hello, thank you for being willing to run this test. If it's not too much trouble, could you test demos of a few AAA games?

                Play the opening section of FINAL FANTASY XVI. Save data can be carried over to the full version of the game.

                300 years of tyranny. A mysterious mask. Lost pain and memories. Wield the Blazing Sword and join a mysterious, untouchable girl to fight your oppressors. Experience a tale of liberation, featuring characters with next-gen graphical expressiveness!

                The Definitive Edition includes the critically acclaimed DRAGON QUEST XI, plus additional scenarios, orchestral soundtrack, 2D mode and more! Whether you are a longtime fan or a new adventurer, this is the ultimate DQXI experience.


                Originally posted by MillionToOne View Post

                Depends on how well their Xe driver performs on Battlemage. On Alchemist the performance is mediocre: i915 works well enough, though ANV is slow in many games.
                Hi, the issue isn't so much that it's slow right now, but Intel's commitment to developing the open-source driver. AMD gets a lot of support from companies like Valve. If companies started hiring developers to help with Intel's driver, things would improve significantly.

                Comment

                • gukin
                  Senior Member
                  • Apr 2009
                  • 224

                  #58
                  Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

                  Performance with my short-lived A770 was a total shit show on Linux on old systems without ReBAR / SAM (even with Above 4G Decoding enabled). Windows performance was actually much better at the time. My impression was basically...
                  • Windows: you really should have ReBAR enabled
                  • Linux: you absolutely need ReBAR enabled

                  I was hoping there would be memory controller upgrades to make these usable for gaming in old systems.
                  You're absolutely right there. I put my A380 in a B350 motherboard with a 2400G; the motherboard could do ReBAR but the 2400G couldn't, and in that configuration the integrated Vega 11 walked all over the A380. I moved it to a B450 with a 5700G and things radically improved, although the 5700G could only run the slot at PCIe 3.0. I've got a couple of mini PCs, one with an 8845HS (Hawk Point) and one with a Strix Point (AI Ryzen AI 370 HX AI AI AI). Using GravityMark, the A380 matches the 780M in the Hawk Point, but the 890M in the Strix Point (AI AI AI AI AI) beats it by about 10%.

                  Comment

                  • pong
                    Senior Member
                    • Oct 2022
                    • 316

                    #59
                    Originally posted by fintux View Post

                    What's the point of sticking PCIe 5.0 on the GPU when it's not capable of using the bandwidth? PCIe 5.0 doesn't automatically increase performance, it just adds bandwidth, and if that wasn't the bottleneck, it's only going to add to the cost of the GPU.
                    Also, lots of motherboards are hobbled with too few, too small PCIe slots.

                    So IIRC PCIe 5.0 x16 is 64 GB/s; x8 = 32 GB/s; x4 = 16 GB/s.
                    All of those are way below what one would hope one's RAM bandwidth would be on a modern DDR5 PC, and far below any GPU's VRAM bandwidth. So right from the start, even PCIe x16 is a bottleneck for transfers between system RAM and VRAM, or CPU cache and VRAM, relative to what the CPU, RAM, and GPU should be capable of.

                    Then consider that many PCs will have one or more GPUs in slots that only run at x8 or x4 width, in which case the 2x per-lane speedup of PCIe 5.0 over PCIe 4.0 makes the difference between "horrible" and "bad but could be worse" GPU-to-system bandwidth. So it can be a win there, compensating for a motherboard / CPU with fewer usable slot lanes than ideal.

                    And as others said, the faster rate can improve transaction latency, so it helps responsiveness and transactional throughput, as well as the "hurry up and get idle" ability to power down the PCIe link sooner once a transfer is done.

                    Really, the PC design is backwards: by any reasonable metric of FLOPS / MIPS / VRAM bandwidth, the GPU is effectively more powerful than the PC it is inserted into, yet it can only talk to the mainboard (and to system RAM or other GPUs) at PCIe x16 speeds well below what the system RAM, CPU, or multiple GPUs operating in parallel could otherwise handle. It is a poorly scaled "primary bus" on which to hang one's highest-throughput "peripheral" (ha! more like the main processor, FLOPS-wise).
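                    The per-width figures above follow from simple arithmetic. A minimal sketch (assuming 128b/130b line encoding for PCIe 3.0+ and ignoring packet/protocol overhead, so real-world throughput is a bit lower):

```python
# Rough PCIe link bandwidth calculator. Assumes 128b/130b encoding
# (PCIe 3.0 and later) and ignores packet/protocol overhead, so the
# numbers are slight overestimates of achievable throughput.

GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}  # transfer rate, GT/s per lane
ENCODING = 128 / 130                       # 128b/130b line-code efficiency

def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Approximate one-direction link bandwidth in GB/s (decimal)."""
    return GT_PER_LANE[gen] * ENCODING * lanes / 8  # 8 bits per byte

if __name__ == "__main__":
    for lanes in (16, 8, 4):
        print(f"PCIe 5.0 x{lanes}: ~{pcie_bandwidth_gbs(5, lanes):.0f} GB/s")
```

                    This yields roughly 63 GB/s for a 5.0 x16 link (the commonly quoted "64 GB/s" ignores the encoding overhead), which is indeed far below typical GDDR6 VRAM bandwidth.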

                    Comment

                    • pioto
                      Junior Member
                      • Nov 2022
                      • 20

                      #60
                      Originally posted by fintux View Post

                      What's the point of sticking PCIe 5.0 on the GPU when it's not capable of using the bandwidth? PCIe 5.0 doesn't automatically increase performance, it just adds bandwidth, and if that wasn't the bottleneck, it's only going to add to the cost of the GPU.
                      I meant: why add the cost on the host side by implementing PCIe 5.0 x16 support (in Intel Gen 12), with no GPU in sight to use it?
                      The same Intel Gen 12 introduced DDR5 support (when AMD's Zen 3 did not have it). That gave Intel a competitive advantage for a period of time, until Zen 4 appeared. The point is that DDR5 memory sticks were available immediately alongside Gen 12, whereas PCIe 5.0 x16 host support is completely out of sync with consumer GPUs.

                      Comment
