Intel Arc B580 Battlemage Linux Workstation Graphics Performance

  • phoronix
    Administrator
    • Jan 2007
    • 67128

    Intel Arc B580 Battlemage Linux Workstation Graphics Performance

    Phoronix: Intel Arc B580 Battlemage Linux Workstation Graphics Performance

    Yesterday I shared the Linux gaming performance and OpenCL / GPU compute performance for the new Intel Arc B580 Battlemage graphics card. Today the focus is a look at how well the Linux workstation graphics performance for Battlemage is looking relative to the existing Alchemist hardware with the Intel Arc A-Series graphics cards.

  • markg85
    Senior Member
    • Oct 2007
    • 509

    #2
    I'm curious where the hardware-accelerated video encoding and decoding for this card stands on Linux. If I look at the specs it's a nice monster with H.265 10-bit 4:4:4 encoding, which would be super nice for game streaming (the 4:4:4 matters most, the 10-bit not so much)! But is this already supported in VA-API? Or oneAPI through FFmpeg?
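    One way to check locally (a sketch, assuming the vainfo tool from libva-utils and Intel's media driver are installed; the render node path may differ on your system):

    ```shell
    # Query the VA-API profiles/entrypoints exposed by the driver.
    # Look for VAProfileHEVCMain444_10 paired with VAEntrypointEncSlice
    # (or VAEntrypointEncSliceLP) in the output -- that would indicate
    # 10-bit 4:4:4 HEVC encode support.
    vainfo --display drm --device /dev/dri/renderD128
    ```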

    Comment

    • sophisticles
      Senior Member
      • Dec 2015
      • 2543

      #3
      Originally posted by markg85 View Post
      I'm curious where the hardware accelerated video encoding and decoding is for this card on Linux. If i look at the specs it's a nice monster with h265 10 bit 4:4:4 encoding, that would be super nice for game streaming (the 4:4:4 matters most, the 10 bit not so much)! But is this already supported in va-api? Or OneAPI through ffmpeg?
      I have been doing HEVC 10-bit 4:4:4 encoding on my Ice Lake laptop for years using Linux + QSV via FFmpeg.

      ffmpeg -h encoder=hevc_qsv

      You will see some oddly named pixel formats supported:

      Supported pixel formats: nv12 p010le p012le yuyv422 y210le qsv bgra x2rgb10le vuyx xv30le

      The x2rgb10le format will give you 10-bit 4:4:4 HEVC. You can also add pix_fmt=x2rgb10le in Shotcut if you choose the hevc_qsv encoder from the drop-down menu and it will encode to that format.
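      A full command along those lines might look like this (a sketch; input.mkv and the quality value are placeholders, and it assumes a working QSV stack on an Intel GPU):

      ```shell
      # Hypothetical example: software frames are converted to x2rgb10le
      # and handed to the QSV HEVC encoder, yielding a 10-bit 4:4:4 stream.
      ffmpeg -i input.mkv -pix_fmt x2rgb10le -c:v hevc_qsv -global_quality 25 output.mkv
      ```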

      Comment

      • markg85
        Senior Member
        • Oct 2007
        • 509

        #4
        Originally posted by sophisticles View Post

        I have been doing HEVC 10-bit 4:4:4 encoding on my Ice Lake laptop for years using Linux + QSV via FFmpeg.

        ffmpeg -h encoder=hevc_qsv

        You will see some oddly named pixel formats supported:

        Supported pixel formats: nv12 p010le p012le yuyv422 y210le qsv bgra x2rgb10le vuyx xv30le

        The x2rgb10le format will give you 10-bit 4:4:4 HEVC. You can also add pix_fmt=x2rgb10le in Shotcut if you choose the hevc_qsv encoder from the drop-down menu and it will encode to that format.
        That is their video encoding/decoding engine in their CPU.
        I'm not using an Intel CPU, nor will I in the near future.

        Comment

        • pokeballs
          Junior Member
          • Sep 2024
          • 26

          #5
          It can probably also be accelerated via FFmpeg's Vulkan video support.

          Comment

          • Zapitron
            Phoronix Member
            • Aug 2009
            • 55

            #6
            One thing which might be interesting is if the comparative benchmarks included the latest Intel IGP. But that would probably make it harder to keep the CPU the same.

            Comment

            • sophisticles
              Senior Member
              • Dec 2015
              • 2543

              #7
              I don't think Michael will see too much interest in the results of this article because there is no context for the average Phoronix reader as to what SPECviewperf is actually testing and why it is relevant.



              Applications represented by viewsets in SPECviewperf 2020 include Autodesk 3ds Max and Maya for media and entertainment; Dassault Systèmes Catia and Solidworks, PTC Creo, and Siemens NX for CAD/CAM; and two datasets representing professional energy and medical applications.

              Many of the Phoronix readers are too busy "keeping it real" and express disdain and outright hostility towards the "proprietary crap" whose workloads this benchmark is supposed to simulate.

              Think about it:

              Autodesk 3ds Max costs $1875 per year, per install and is Windows only.

              Maya costs $1875 per year, per install and officially supported OSes are Win 10 and 11, Mac OS, RH and Rocky.

              Dassault Systèmes Catia, as near as I can tell, runs only on Windows 10 and 11 on x86 and on IBM's Power-based Unix servers, and in fact I do not think it's possible to just buy the software; I think they sell you complete solutions.

              Solidworks only works on Windows and Mac OS.

              Siemens NX I couldn't find support details for, but the screenshots shown on their website are clearly Windows 11.

              Most Linux users have no idea what SPECviewperf measures, and if they find out they won't care about the results because what's being measured is "proprietary crap".

              SPECviewperf does have appeal for professionals with functional brains who are not bound to some silly ideology and are more interested in getting the most bang for the buck; for those people a much more comprehensive test, running all the benchmarks on a variety of hardware, would be interesting.

              Comment

              • sophisticles
                Senior Member
                • Dec 2015
                • 2543

                #8
                Originally posted by markg85 View Post

                That is their video encoding/decoding engine in their CPU.
                I'm not using an Intel CPU, nor will I in the near future.
                No it is not.

                Both Intel QSV and NVIDIA's NVENC are hybrid encoders; if you download the developer's guide they explicitly state that parts of the encoder run on the CPU, parts run on the GPU, and parts run on the dedicated fixed-function hardware.

                This all happens transparently to the developer: you do not have to specify anything, you just call the encoder and the driver and libraries take care of the details behind the scenes.

                If you want pure fixed-function encoding you need to use the low_power=1 option, which disables anything that does not run on the fixed-function circuits.

                For instance, entropy encoding (CAVLC/CABAC) is always performed by the CPU, B-frames are handled by the CPU, and if you encode using the RGB color space that is handled by the GPU.

                This holds true even if you are using an Intel dGPU and an AMD CPU; the encoder will still pass the software parts to the CPU for processing.

                Feel free to download the developer's guide and read it for yourself.
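                For reference, the low-power path can be selected in FFmpeg like this (a sketch; input.mkv is a placeholder and it assumes a working QSV setup):

                ```shell
                # Hypothetical example: -low_power 1 restricts the QSV HEVC
                # encoder to the fixed-function (VDENC) path.
                ffmpeg -i input.mkv -c:v hevc_qsv -low_power 1 -global_quality 25 output.mkv
                ```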

                Comment

                • sophisticles
                  Senior Member
                  • Dec 2015
                  • 2543

                  #9
                  Originally posted by pokeballs View Post
                  It can probably also be accelerated via ffmpeg vulkan.
                  HEVC encoding via Vulkan currently only supports 4:2:0 chroma subsampling.
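                  This is easy to check locally, assuming a recent FFmpeg build with Vulkan video encoding enabled:

                  ```shell
                  # List the pixel formats the Vulkan HEVC encoder accepts; if
                  # no 4:4:4 format appears in the list, that confirms the
                  # limitation for this build.
                  ffmpeg -h encoder=hevc_vulkan
                  ```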

                  Comment

                  • stormcrow
                    Senior Member
                    • Jul 2017
                    • 1511

                    #10
                    Originally posted by sophisticles View Post
                    I don't think Michael will see too much interest in the results of this article because there is no context for the average Phoronix reader as to what SPECviewperf is actually testing and why it is relevant.

                    Many of the Phoronix readers are too busy "keeping it real" and express disdain and outright hostility towards the "proprietary crap" whose workloads this benchmark is supposed to simulate.

                    Think about it:

                    Autodesk 3ds Max costs $1875 per year, per install and is Windows only.

                    Maya costs $1875 per year, per install and officially supported OSes are Win 10 and 11, Mac OS, RH and Rocky.

                    Dassault Systèmes Catia, as near as I can tell, runs only on Windows 10 and 11 on x86 and on IBM's Power-based Unix servers, and in fact I do not think it's possible to just buy the software; I think they sell you complete solutions.

                    Solidworks only works on Windows and Mac OS.

                    Siemens NX I couldn't find support details for, but the screenshots shown on their website are clearly Windows 11.

                    Most Linux users have no idea what SPECviewperf measures, and if they find out they won't care about the results because what's being measured is "proprietary crap".

                    SPECviewperf does have appeal for professionals with functional brains who are not bound to some silly ideology and are more interested in getting the most bang for the buck; for those people a much more comprehensive test, running all the benchmarks on a variety of hardware, would be interesting.
                    But largely irrelevant when the majority of the software SPECvp is measuring only runs on Windows or Mac. The Mac versions are completely irrelevant because no modern Mac can mount a dGPU, even via Thunderbolt, IIRC. So only Maya is relevant for running SPECvp on Linux; everything else is irrelevant. You're not measuring the hardware here, you're measuring how well a specific driver and library stack functions.

                    So for most people with a graphically oriented workstation it's not about ideology; it's that Arc's Linux performance is irrelevant to their work environment because the software doesn't run on it in any meaningful way (forget Wine; that's not a thing for corporate systems with certification requirements). That makes SPECvp a synthetic benchmark with no practical relevance outside of the one case of those few people using Maya on Linux, and they likely won't be buying Battlemage GPUs for it. Corporations buy workstations in atomic units; they don't build piecemeal as a rule.

                    Comment
