Intel's New Brand For High Performance Discrete Graphics: Arc


  • #31
    Originally posted by Danielsan View Post
    Are these new GPUs meant for general 3D acceleration (e.g. Blender and video games), or are they targeting ML & AI garbage?
    Gaming. But the rest of the Xe line requires Intel CPUs, so I wouldn't put it past Intel to tie these to their 11th+ gen IGP/CPU in some way, too.

    Someone mentioned benchmarks? Ignore benchmarks. Wait till someone's playing a real game and showing you the video along with reports on stability, stutter, jitter, latency, etc. Benchmarks are useless because all of the industry players know which ones they are and how to... game... them along with their audience.



    • #32
      Originally posted by Teggs View Post
      I don't want anything Arcing inside my system, thank you. As for Battlemage: Jesus H Cringe, Intel. It's not as if everyone will use these to play The Elder Scrolls VI or other fantasy RPGs. Actual professionals and a lot of people who game on computers will not appreciate having to tell people they have a Battlemage in their system. They may even avoid acquiring one for that reason alone. What a silly own goal.
      I assume "Battlemage" here is like Skylake, Knights Landing, or Merced and is a mostly internal codename that no average person will need to know. Intel has always given their chips weird names.



      • #33
        Originally posted by stormcrow View Post
        AMD can only barely keep them competitive in the CPU market.
        LOL. You been under a rock since Ryzen 2000-series?

        Originally posted by stormcrow View Post
        Makes me wonder what happens when they release and, if it's another bad (or even mediocre) product like i740 & Larrabee, will they just take their ball and go home?
        Larrabee turned into Xeon Phi. They stuck with that effort for about a decade (depending on how you count) and two shipping product generations (though at least five were designed, in total), before accepting that x86 couldn't compete with GPUs in GFLOPS/W or even absolute performance.

        And they cancelled Xeon Phi not to get out of the space, but to make room for the GPU-like compute accelerator they always *should* have made!

        Originally posted by stormcrow View Post
        They can have the best hardware GPU platform ever, but if developers can't be wooed away from Nvidia or AMD, it won't matter.
        Developers already optimize for their iGPUs. The dGPUs aren't so different.



        • #34
          Originally posted by Danielsan View Post
          Are these new GPUs meant for general 3D acceleration (e.g. Blender and video games), or are they targeting ML & AI garbage?
          Well, they have some support for AI-based upsampling, so there's definitely AI acceleration in some form.

          BTW, I think modern Nvidia GPUs don't burn too much die area on Tensor cores. And RDNA 2 relies on some packed-math instructions, somewhat akin to SSE.
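
          Roughly what "packed math" means, for the curious: two FP16 values share a single 32-bit register, and one instruction operates on both lanes at once, SSE-style. Here's a toy C emulation of a packed FP16 add, in the spirit of RDNA 2's V_PK_ADD_F16 (the union layout is just an illustration, not AMD's documented encoding, and _Float16 needs a recent GCC or Clang):

          Code:
          #include <stdint.h>
          #include <stdio.h>

          /* Toy emulation of a packed FP16 add: two half-precision lanes
             share one 32-bit register and a single "instruction" updates
             both. Illustration only, not AMD's documented encoding. */
          typedef union {
              uint32_t bits;     /* the packed 32-bit register */
              _Float16 lane[2];  /* lane 0 = low half, lane 1 = high half */
          } pk_f16;

          static pk_f16 pk_add_f16(pk_f16 a, pk_f16 b)
          {
              pk_f16 r;
              r.lane[0] = a.lane[0] + b.lane[0];  /* both lanes, one op */
              r.lane[1] = a.lane[1] + b.lane[1];
              return r;
          }

          int main(void)
          {
              pk_f16 a = { .lane = { (_Float16)1.5f, (_Float16)2.0f } };
              pk_f16 b = { .lane = { (_Float16)0.25f, (_Float16)-1.0f } };
              pk_f16 r = pk_add_f16(a, b);
              printf("%g %g\n", (double)r.lane[0], (double)r.lane[1]);
              return 0;
          }

          The real hardware handles a lane pair per ALU op, which is where the doubled FP16 rate figures come from.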



          • #35
            Originally posted by stormcrow View Post
            the rest of the Xe line requires Intel CPUs
            That consists of exactly one model, the DG1, which is only sold to OEMs.

            Originally posted by stormcrow View Post
            I wouldn't put it past Intel to tie these to their 11th+ gen IGP/CPU in some way, too.
            No way. Lots of consumers don't even know what kind of CPU they have. Intel wouldn't risk all the support calls and RMAs by restricting a retail graphics card to run only with Intel CPUs.

            Originally posted by stormcrow View Post
            Someone mentioned benchmarks? Ignore benchmarks. Wait till someone's playing a real game and showing you the video along with reports on stability, stutter, jitter, latency, etc.
            Some of the better gaming benchmarks out there actually *do* measure & report on jitter and even latency!
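
            For the curious, the math behind those reports is simple: log each frame's render time, then look at the tail rather than just the average, since a handful of slow frames reads as stutter even when the mean looks fine. A minimal C sketch, with made-up frame times standing in for a real capture log:

            Code:
            #include <stdio.h>
            #include <stdlib.h>

            /* Ascending comparison of frame times (ms) for qsort(). */
            static int cmp_double(const void *a, const void *b)
            {
                double x = *(const double *)a, y = *(const double *)b;
                return (x > y) - (x < y);
            }

            int main(void)
            {
                /* Made-up per-frame times in milliseconds; a real tool
                   would read these from a frametime capture log. */
                double ft[] = { 16.6, 16.8, 16.5, 17.1, 16.7, 33.4,
                                16.6, 16.9, 16.5, 16.7, 16.8, 45.0,
                                16.6, 16.7, 16.9, 16.6 };
                size_t n = sizeof ft / sizeof ft[0];

                double sum = 0.0;
                for (size_t i = 0; i < n; i++)
                    sum += ft[i];

                qsort(ft, n, sizeof ft[0], cmp_double);

                /* The 99th-percentile frame time captures the stutter
                   spikes that an average frame rate hides. */
                size_t p99 = (size_t)(0.99 * (double)(n - 1));
                printf("avg %.2f ms, 99th percentile %.2f ms\n",
                       sum / n, ft[p99]);
                return 0;
            }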



            • #36
              Originally posted by thunderbird32 View Post
              I assume "Battlemage" here is like Skylake, Knights Landing, or Merced and is a mostly internal codename that no average person will need to know.
              Yes, that's my understanding.



              • #37
                Nice that they went with RPG class names for the codenames. Also nice touch hearing the classic Intel jingle at the very end there. I like the name, the presentation, all looking good.

                From the development we can see in the open, it seems we're getting good support in Mesa from day one, and Vulkan too. OpenCL is a concern, and I can't remember the current state of that for Intel. Looks like we'll get whatever is already working in Gallium, right?

                At the very least, we get a more diverse market and Nvidia ends up looking even more like the pricks they are.



                • #38
                  Originally posted by cmakeshift View Post
                  OpenCL is a concern and I can't remember the current state of that for Intel.
                  They were the first with official OpenCL 3.0 support, IIRC. Before that, they led the way with 2.2 support, and I don't even know how far back you have to go to reach a point where they weren't in the lead. They wrote at least two OpenCL compilers, IIRC. And oneAPI is even built atop OpenCL!

                  I'm pretty much an Intel GPU convert, after AMD dropped the ball so hard on OpenCL and ROCm support for consumer GPUs. AMD still doesn't have official OpenCL 3.0 support. Nvidia even beat them to it!
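
                  Easy to check on your own machine, too: each OpenCL platform reports the version it supports through the standard API. A minimal C sketch (build with something like cc query.c -lOpenCL):

                  Code:
                  #define CL_TARGET_OPENCL_VERSION 300
                  #include <stdio.h>
                  #include <CL/cl.h>

                  /* Print the OpenCL version string each installed
                     platform reports, e.g. "OpenCL 3.0 ...". */
                  int main(void)
                  {
                      cl_platform_id platforms[8];
                      cl_uint count = 0;

                      if (clGetPlatformIDs(8, platforms, &count) != CL_SUCCESS
                          || count == 0) {
                          fprintf(stderr, "no OpenCL platforms found\n");
                          return 1;
                      }

                      for (cl_uint i = 0; i < count; i++) {
                          char name[256], version[256];
                          clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                                            sizeof name, name, NULL);
                          clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION,
                                            sizeof version, version, NULL);
                          printf("%s: %s\n", name, version);
                      }
                      return 0;
                  }

                  On a current Intel stack you'd expect that string to report 3.0; on AMD's consumer cards, last I checked, it still reports 2.x.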



                  • #39
                    Thanks for the update, found the relevant resources and it's looking good.

                    Originally posted by coder View Post
                    I'm pretty much an Intel GPU convert, after AMD dropped the ball so hard on OpenCL and ROCm support for consumer GPUs. AMD still doesn't have official OpenCL 3.0 support. Nvidia even beat them to it!
                    Agreeeeeed. I'm very happy with my RX580 on Mesa, for 1080p. Props all around. But being unable to do a Blender Cycles render on it to this day is where AMD loses me a bit. They could have taken OpenCL to where it needed to be to compete with Nvidia, but alas.



                    • #40
                      I'm quite excited for that. Hopefully we'll finally get decent gaming laptops with decent Linux support.

