Intel Arc A380 Desktop Graphics Launch In China


  • #21
    Originally posted by pinguinpc View Post
In my case I mainly use Linux; I don't care about newer games, don't care about ray tracing, don't care about upscalers, but I am very interested in AV1 encode hardware.
Much like me, except that instead of AV1 encoding I am looking for AV1 decoding.

I wanted to buy an RX 6500 XT, but then I saw they cut AV1 decoding. Now my plan is to buy the Arc A380 when it becomes available here in Germany.
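
If it helps anyone checking their own setup, here is a minimal sketch (Python; assumes vainfo from libva-utils is installed and a VA-API driver is configured) that looks for hardware AV1 decode among the advertised profiles:

Code:
import subprocess

# vainfo lists every VA-API profile/entrypoint pair the driver exposes.
out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout

# An AV1 profile paired with VAEntrypointVLD means hardware AV1 decode.
has_av1_decode = any(
    "VAProfileAV1" in line and "VAEntrypointVLD" in line
    for line in out.splitlines()
)
print("Hardware AV1 decode:", "yes" if has_av1_decode else "no")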

    Comment


    • #22
      Originally posted by Sonadow View Post

      Intel seriously thinks OEMs will want to put an Arc in their computers when they can get better margins and customer interest by simply continuing to bundle AMD or Nvidia graphics?
      MSI is already doing this, although the item listing on JD.com seems to have been pulled:
Intel Arc A380 with 6GB memory confirmed by MSI: MSI is now listing its upcoming desktop gaming system based on the Intel Alder Lake platform, also equipped with Arc graphics. The listing appeared on the JD platform, which is a large Chinese retailer. Intel previously confirmed that Intel Arc will see an exclusive launch in China […]

      Comment


      • #23
I am interested in the A350 (4 GB of VRAM is enough for me; I currently have 2 GB on both desktop and laptop, and I used to have 1 GB on the desktop and none on the laptop, until I replaced the GPU in the former and the motherboard in the latter), since it still uses a PCIe x8 link, in contrast with AMD's RX 6400 and RX 6500 XT, which are limited to x4.
        Last edited by moriel5; 15 June 2022, 06:31 PM.

        Comment


        • #24
          Originally posted by johanb View Post
$150 USD sounds like great value for the money; I wonder what it will cost in the EU and US.
          Originally posted by dc_coder_84 View Post

Much like me, except that instead of AV1 encoding I am looking for AV1 decoding.

I wanted to buy an RX 6500 XT, but then I saw they cut AV1 decoding. Now my plan is to buy the Arc A380 when it becomes available here in Germany.
Unfortunately, from the looks of this, that price is just the market "value", in the sense of how much of the prebuilt's final price is attributed to the GPU, with the card being prebuilt-only.

If so, it is a shame, since I would really want an Intel dGPU, if only for properly working OpenCL 2.x-3.x+ (unfortunately, I can forget about SR-IOV with these, although at least I can rely on my i5-4570's iGPU and GVT-g for that).

          Comment


          • #25
            Originally posted by pinguinpc View Post


So beautiful: around $150 US for a 6 GB card, while AMD offers 4 GB on the RX 6400 at $160, 4 GB again on the RX 6500 at around $200, and Nvidia offers 4 GB on the GTX 1050 Ti (an outdated product) at around $180 and 4 GB on the GTX 1650 (also an outdated product) at around $200.

On media capabilities, $150 gets you AV1, H.265, VP9, and H.264 decode and encode, while the RX 6400/6500 offer no encode capabilities at all and don't decode AV1, and the GTX 1050/1650 (outdated products) only offer H.265/H.264 encode and, again, don't decode AV1.

Personally I consider this a huge step toward improving the current GPU situation, and I have some bucks set aside to help Intel (I am very interested in the Arc A310) and to punish AMD (cut-down laptop products like the RX 6400/6500, little VRAM, and very expensive) and Nvidia (outdated and expensive products).

In my case I mainly use Linux; I don't care about newer games, don't care about ray tracing, don't care about upscalers, but I am very interested in AV1 encode hardware.
I agree with you on the 6400/6500 part; cutting out AV1 decode is a sin...

That makes the PowerColor Radeon RX 6600, at 309€, the cheapest card you can buy without hurting yourself.

"6 GB and AV1, H.265, VP9, and H.264 decode and encode capabilities"

For 150 dollars, that is a good start for many non-gamers (a quick way to check what your own ffmpeg build exposes is sketched below).

Do you know what the firmware situation is with Intel GPUs? I am sick of closed-source firmware.
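
On the decode/encode comparison above: a minimal sketch (Python; assumes ffmpeg is on PATH, and the name suffixes follow ffmpeg's usual VAAPI/QSV/NVENC/AMF conventions) for listing the hardware encoders a given ffmpeg build exposes:

Code:
import subprocess

# "ffmpeg -encoders" prints every encoder the build supports;
# hardware-backed ones carry a backend suffix in their name.
out = subprocess.run(
    ["ffmpeg", "-hide_banner", "-encoders"],
    capture_output=True, text=True,
).stdout

for line in out.splitlines():
    if any(s in line for s in ("_vaapi", "_qsv", "_nvenc", "_amf")):
        print(line.strip())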
            Phantom circuit Sequence Reducer Dyslexia

            Comment


            • #26
Intel has spun the marketing wheel for their not-yet-on-any-shelf "discrete GPUs" for too long for any further "coming soon" news to so much as raise an eyebrow. They had more than a fair chance to enter the market while GPUs were in short supply, and they missed it. It will be interesting to see whether this becomes Larrabee reloaded.

              Comment


              • #27
                Originally posted by oiaohm View Post

The Arc is faster than the Intel iGPU at AV1 processing and other things. Being a purpose-built GPU gives it bigger silicon for this logic and more cooling area.

There was a time frame where an iGPU, a dGPU, and a media accelerator were a common combination in desktop systems. This combination still exists in server builds.

For Linux server workloads handling video, it is not uncommon to have an iGPU/APU, a dGPU, and a media accelerator. So iGPU/APU + two dGPUs is not off the cards if one of those cards is good at compute or media acceleration but not exactly good at normal 3D graphics. Of course, each being a GPU means extra problems on the OS side, with vendors' drivers needing to play well with each other.
IDK about that; I have never heard of this "iGPU/APU, dGPU, and a media accelerator" server setup. And... why would you want 3D graphics in a server, other than for game streaming?


                One issue with such a setup (particularly in game streaming) is that, ideally, you don't want raw video frames or whatever being shuffled between devices over PCIe. You want everything working on the same data in memory as much as possible.
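
(This is why the usual ffmpeg advice is a full-hardware pipeline on a single device; a minimal sketch, with the device path and file names as placeholder assumptions:)

Code:
import subprocess

# Decode, scale, and encode on the same VAAPI device, so raw frames
# stay in GPU memory instead of bouncing over PCIe at every step.
subprocess.run([
    "ffmpeg",
    "-hwaccel", "vaapi",
    "-hwaccel_device", "/dev/dri/renderD128",
    "-hwaccel_output_format", "vaapi",  # keep decoded frames on the GPU
    "-i", "input.mkv",
    "-vf", "scale_vaapi=w=1280:h=720",  # GPU-side scaler
    "-c:v", "h264_vaapi",               # GPU-side encoder
    "output.mkv",
], check=True)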


The kind of workload I was thinking of was esoteric ffmpeg or vapoursynth chains where not every filter can necessarily work on the same device, and which default to coming back to the CPU's RAM pool anyway. In this case, it might make sense to denoise on a dGPU, run an AI filter on an Nvidia card, pre/post-process on the CPU, encode the output on an iGPU's media block, and so on... but I repeat, that use case is *extremely* niche and unoptimized. It's not a target market.
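
To make that concrete, a minimal sketch of such a chain in VapourSynth (assumes the ffms2 source plugin and the KNLMeansCL OpenCL denoiser are installed; the device index is made up for illustration):

Code:
import vapoursynth as vs

core = vs.core

# Source/decode on the CPU side (ffms2 is a common source plugin).
clip = core.ffms2.Source("input.mkv")

# Denoise on an OpenCL device, e.g. a dGPU; each hop like this implies
# frames crossing between system RAM and device memory.
clip = core.knlm.KNLMeansCL(
    clip, d=1, a=2, h=1.2,
    device_type="GPU", device_id=0,
)

# An external encoder (say, one driving an iGPU's media block)
# would consume the output from here.
clip.set_output()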
                Last edited by brucethemoose; 15 June 2022, 10:47 PM.

                Comment


                • #28
                  Originally posted by brucethemoose View Post
IDK about that; I have never heard of this "iGPU/APU, dGPU, and a media accelerator" server setup. And... why would you want 3D graphics in a server, other than for game streaming?

                  One issue with such a setup (particularly in game streaming) is that, ideally, you don't want raw video frames or whatever being shuffled between devices over PCIe. You want everything working on the same data in memory as much as possible.

The kind of workload I was thinking of was esoteric ffmpeg or vapoursynth chains where not every filter can necessarily work on the same device, and which default to coming back to the CPU's RAM pool anyway. In this case, it might make sense to denoise on a dGPU, run an AI filter on an Nvidia card, pre/post-process on the CPU, encode the output on an iGPU's media block, and so on... but I repeat, that use case is *extremely* niche and unoptimized. It's not a target market.
                  The supremely cool Nvidia AI-powered Broadcast tool has just gotten a ton of handy updates, improving noise and visual reduction massively.

                  That AI filter for commercial broadcast can be very important.

Now, the existing workflows, where we have had to come back through CPU memory, have limited multi-GPU. But we have seen the start of change here with GPUDirect RDMA, PRIME, and the like. So chained-up processing between GPUs in theory no longer has to come back into CPU memory.

With an iGPU/APU, of course, you don't have the option of direct PCIe transfers between devices without going back into CPU memory, because the iGPU/APU uses CPU memory.

Now, for system monitoring the iGPU/APU is good enough.

brucethemoose basically things have changed: PCIe buses are getting decently fast, which is why we are seeing extra RAM appear as CXL over PCIe, and RDMA is appearing on devices as well, so they can transfer between each other while bypassing the CPU. Raw video frames being shuffled purely over PCIe has not been that bad (this is how PRIME setups work), but those setups mix that with CPU memory usage, which has an overhead.

An Intel dGPU receiving a PRIME output from an Nvidia dGPU is not going to disrupt CPU memory operations or the CPU's PCIe bandwidth to other devices.
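
As a small illustration of the PRIME side of this, a sketch (Python; assumes a Mesa setup where DRI_PRIME=1 selects the secondary GPU, and glxinfo from mesa-utils is installed) comparing which renderer handles each run:

Code:
import os
import subprocess

# Run glxinfo on the default GPU, then offloaded via PRIME, and
# print which renderer handled each run.
for prime in ("0", "1"):
    env = dict(os.environ, DRI_PRIME=prime)
    out = subprocess.run(
        ["glxinfo", "-B"], env=env, capture_output=True, text=True,
    ).stdout
    renderer = next(
        (l for l in out.splitlines() if "OpenGL renderer" in l),
        "OpenGL renderer not reported",
    )
    print(f"DRI_PRIME={prime}: {renderer.strip()}")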

http://www.socionext.com/en/products...264h265/M820L/ Existing setups will have an Nvidia GPU and a media accelerator like this for encoding. The iGPU media block, if present in servers, ends up unused. The CPU is doing enough work already; the extra heat of processing media as well slows things down.

The Intel dGPU here ends up competing with a Socionext M820L and similar cards, except that it is a dGPU as well as a media encoder. Yes, these existing systems have not been using RDMA solutions for direct card-to-card transfers either.

Something like the M820L is only hooked up to ffmpeg, not to your general graphical input/output system.

brucethemoose basically, with Intel entering the dGPU market at the same time that CXL, RDMA, and other direct transfer systems between PCIe cards come into existence, we could be in for an interesting time of change.

Newly developed CXL memory packs 4x the memory capacity of the previous version, enabling a server to scale to tens of terabytes with only one-fifth of the system latency; Samsung will also introduce an upgraded version of its open-source software toolkit that facilitates CXL memory deployment into existing and emerging IT systems.


Really, who would have thought five years ago that we would have allocatable RAM in a PCIe slot? Yes, a card that is just RAM. A GPU transferring through this RAM is not going through the CPU's directly connected RAM. This alters the game: with these new designs you no longer have to use CPU memory as the middleman between cards, which makes for a very different transfer pattern from what we have been used to.

Yes, we are seeing the return of the non-battery-backed hardware RAM drive to servers, except this time with a system designed for RAM, instead of the old round-peg-in-a-square-hole approach of making RAM behave like a block device as the old hardware RAM drives did. This alters how data can move around the system in a big way. Intel entering the dGPU market lines up with this change.

iGPU and APU advantages are not the same any more. Think about it: if an application's memory is stored in CXL memory on the PCIe bus, the iGPU in the CPU has to pull it into CPU memory over the PCIe bus before it can do anything with it; with a dGPU you have the same PCIe bus traffic but no CPU memory disruption. Things really change with CXL memory and RDMA between devices, as they allow many cases of cutting CPU-connected memory out of the loop.

                  Comment


                  • #29
                    Originally posted by WannaBeOCer View Post
People are crapping on Intel, yet they have had the GPU market share lead for years thanks to their iGPUs, sitting at 62% at the end of 2021. People keep forgetting that Raja Koduri created AMD's RDNA. It won't be long till Intel is competing with Nvidia and AMD.
Lead? Intel was/is putting crappy GPUs into their machines, so nobody could use those computers for anything other than Word. It is almost like the IE situation: they could have developed better GPUs but didn't want to make the effort.

                    Comment


                    • #30
So they're launching a low-end GPU... in limited numbers... more than six months after it was initially supposed to launch. Looking at some articles from last year, people were even speculating that we might see Battlemage by the end of this year. Given Intel's recent history, I wouldn't be surprised if we don't see Battlemage until early 2024, at which point it might have a brief window of opportunity before RDNA4 and the Lovelace successor come out.

                      Comment
