Intel Arc A380 Desktop Graphics Launch In China


  • brucethemoose
    replied
    Originally posted by oiaohm View Post

    https://www.kotaku.com.au/2021/05/nv...-cats-insects/
    That AI filter for commercial broadcast can be very important.

    Now, the existing workflows, where data has had to come back through CPU memory, have limited multi-GPU setups. But we have seen the start of change here with GPUDirect RDMA, PRIME and the like, so chained processing between GPUs in theory no longer has to come back into CPU memory.

    With an iGPU/APU, of course, you don't have the option of direct transfers over the PCIe bus between devices without going back into CPU memory, because the iGPU/APU uses the CPU's memory.

    Now, for system monitoring the iGPU/APU is good enough.

    brucethemoose, basically things have changed: PCIe buses are getting decently fast, which is why we are seeing extra RAM exposed via CXL over PCIe, and RDMA appearing on devices so they can transfer between each other while bypassing the CPU. Shuffling raw video frames purely over PCIe has not been that bad; this is how PRIME setups work, but they mix that with CPU memory usage, which adds overhead.

    An Intel dGPU receiving a PRIME output from an Nvidia dGPU is not going to disrupt CPU memory operations or the CPU's PCIe bandwidth to other devices.

    http://www.socionext.com/en/products...264h265/M820L/ Existing setups will have an Nvidia GPU and a media accelerator for encoding. The iGPU media block, if it is even present in servers, ends up unused. The CPU is already doing enough work; the extra heat from processing media on it slows things down as well.

    The Intel dGPU here ends up competing with the Socionext M820L and similar cards, except that it is a dGPU as well as a media encoder. And yes, these existing systems have not been using RDMA solutions for direct card-to-card transfers either.

    Something like the M820L is only hooked into ffmpeg, not into your general graphics input/output stack.

    brucethemoose, basically with Intel entering the dGPU market at the same time that CXL, RDMA and other direct transfer systems between PCIe cards come into existence, we could be in for an interesting period of change.

    https://news.samsung.com/global/sams...-memory-module

    Really, who would have thought five years ago that we would have allocatable RAM in a PCIe slot? Yes, a card that is just RAM. A GPU transferring through this RAM is not going through the CPU's directly connected RAM. This alters the game: with these new designs you no longer have to use CPU memory as the middleman between cards, which makes for a very different transfer pattern from what we have been used to.

    Yes, we are seeing the return of the non-battery-backed hardware RAM drive to servers, except this time with a system actually designed for RAM, instead of the old round-peg-square-hole approach of making RAM behave like a block device as the old hardware RAM drives did. This alters how data can move around the system in a big way, and Intel entering the dGPU market lines up with this change.

    The iGPU and APU advantages are not the same any more. Think about it: if an application's memory is stored in CXL memory on the PCIe bus, the iGPU in the CPU has to pull it into CPU memory over the PCIe bus before it can do anything with it. With a dGPU you have the same PCIe bus traffic but no CPU memory disruption. Things really change with CXL memory and RDMA between devices, as this allows CPU-connected memory to be cut out of the loop in a lot of cases.
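    As a rough illustration of what skipping the CPU-memory round trip looks like in the simplest same-vendor case, here is a minimal sketch using CUDA's peer-to-peer API (the device indices and buffer size are placeholders, and this only covers GPUs driven by one vendor's stack, not a mixed Nvidia/Intel pair):

        #include <cuda_runtime.h>
        #include <cstdio>

        int main() {
            const size_t bytes = 64 << 20;            // placeholder: one 64 MiB "frame" buffer
            int can = 0;
            // Placeholder device indices: 0 = producing GPU, 1 = consuming GPU.
            cudaDeviceCanAccessPeer(&can, 1, 0);
            if (!can) { printf("no peer-to-peer path between these GPUs\n"); return 1; }

            void *src = nullptr, *dst = nullptr;
            cudaSetDevice(0); cudaMalloc(&src, bytes);   // buffer on GPU 0
            cudaSetDevice(1); cudaMalloc(&dst, bytes);   // buffer on GPU 1
            cudaDeviceEnablePeerAccess(0, 0);            // let GPU 1 reach GPU 0's memory

            // With peer access enabled, this copy moves the data card-to-card over
            // the PCIe (or NVLink) fabric instead of staging it in CPU-attached RAM.
            cudaMemcpyPeer(dst, 1, src, 0, bytes);
            cudaDeviceSynchronize();

            cudaFree(dst); cudaSetDevice(0); cudaFree(src);
            return 0;
        }

    GPUDirect RDMA and dma-buf/PRIME sharing are separate mechanisms with their own constraints; this is just the most accessible way to show a device-to-device copy that never touches system RAM.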
    Nvidia GPUs are currently not CXL devices, and neither is the Arc. And I run a PRIME setup... it's kinda annoying TBH. And good luck finding an app that can do direct transfers between different GPU vendors.


    I get what you're saying theoretically and agree that it's exciting, I just have no idea how that applies to available and near-future hardware. Basically the current use case is (like you said) an Nvidia AI filter, except you want to encode the output to AV1 instead of AVC/HEVC or an image sequence, which is why you need the Arc... but who wants to do that right now? Basically no one serves real-time AV1 atm, as even if you encode it, many downstream devices generally can't reliably decode it. No one uses it for storing output either. And where does a Rembrandt or even a Falcon Shores IGP come in?




    IDK much about what users are doing with those FPGAs and ASICs. I've read the claims of bitrate savings over the popular CPU encoders and GPU blocks, but TBH I think a lot of it is snake oil.
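    On the CXL side: the memory-expander cards shipping today typically surface to Linux as an extra CPU-less NUMA node once onlined as system RAM, rather than as anything GPU-specific, so the closest thing to the flow described above is roughly this sketch (the node number and sizes are placeholders, and whether a given GPU can DMA from that memory efficiently is a separate question):

        // Hypothetical build line: nvcc cxl_sketch.cu -lnuma
        #include <cuda_runtime.h>
        #include <numa.h>
        #include <cstdio>
        #include <cstring>

        int main() {
            const int cxl_node = 2;                   // placeholder: CXL expander exposed as NUMA node 2
            const size_t bytes = 256 << 20;

            if (numa_available() < 0) { printf("no NUMA support\n"); return 1; }

            // Ask the kernel for memory backed by the CXL node instead of CPU-attached DRAM.
            void *host = numa_alloc_onnode(bytes, cxl_node);
            if (!host) { printf("allocation on node %d failed\n", cxl_node); return 1; }
            memset(host, 0, bytes);                   // touch the pages so they land on that node

            // Pin the region so the GPU can DMA directly from the expander's memory.
            cudaHostRegister(host, bytes, cudaHostRegisterDefault);

            void *dev = nullptr;
            cudaMalloc(&dev, bytes);
            cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // GPU pulls from the CXL-backed pages
            cudaDeviceSynchronize();

            cudaFree(dev);
            cudaHostUnregister(host);
            numa_free(host, bytes);
            return 0;
        }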
    Last edited by brucethemoose; 17 June 2022, 12:03 PM.



  • smitty3268
    replied
    Originally posted by coder View Post
    This narrative doesn't make sense to me, at all. Intel's current GPU team grew out of their chipset graphics, going back some 15+ years ago, and expanded as those GPUs got integrated into their CPUs. That entire time, they've been working on refining their drivers up to and including the first Xe GPUs that shipped as part of Tiger Lake, in late 2020, and then as the DG1 dGPU, shortly thereafter. Sure, the DG2 GPU is a slightly different animal, but I'm sure it shares at least 90% with the Xe iGPUs and DG1.
    I think switching to a discrete graphics model might be more difficult than you're giving them credit for, but beyond that you have to remember that for all those decades of work they've never really had to care about getting good performance in games. Or even, well, working at all in games. People would always just write that off as being integrated graphics, and of course you need to get an AMD or NVidia card if you want to do anything other than simple desktop usage or video decoding.

    But remember how much effort AMD had to put into getting their OSS linux drivers working? How seemingly every single game out there had some kind of bug or weird behavior in it that happened to work on Nvidia drivers but exposed a problem in AMD's drivers because they were doing the "standard" more correctly while the game didn't expect that? Now multiply that by 2 for Intel.

    They've got to deal with games that do one thing for NVidia, and another for everything else - meaning AMD. And Intel has to pick one of those behaviors to match. And then other games will do something special for AMD and another thing for everything else - meaning NVidia. And again, they have to pick one of those behaviors to match, and it might not be the same vendor they need to match in each different title. With them finally putting out a discrete card they want to market as fully capable, people are going to be testing all these games on Intel for the first time, and they're going to be brutal in the reviews if it doesn't work right. People on Windows are used to drivers actually working; there won't be nearly as much slack as people gave AMD on Linux.

    DirectX is likely a lot easier to implement all that in versus OpenGL, but there's still going to be a lot of games out there with weird issues to resolve because they just hacked stuff up until it happened to work on the only 2 drivers they cared about.
    Last edited by smitty3268; 17 June 2022, 12:53 AM.



  • qarium
    replied
    Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
    No thanks. The RX 6400 is an insulting piece of crap laptop GPU repackaged for desktop duty because AMD could get away with it during the low inventory mining frenzy. I'm not buying a neutered X4 lane 64 bit memory interface GPU with two display outputs and gimped decode along with nonexistent encode capabilities. That half a decade old Quadro P2000 I mentioned is a much better card.
    Right, and a Radeon 6600 is at 304€ in Germany... yes, the 6500 is 100€ cheaper, but from my point of view you save nothing: you pay that 100€ back anyway over the years of watching YouTube AV1 videos decoded by the CPU...

    So any Intel card cheaper than 304€ with AV1 decode is a good deal.

    But to be honest, if Intel takes much longer, AMD will release 7400/7500/6600 cards in 5 nm with AV1 decode...

    So Intel really should hurry up and release the hardware.



  • pWe00Iri3e7Z9lHOX2Qx
    replied
    Originally posted by coder View Post
    As already mentioned, you can get an AMD RX 6400 that meets those specs. Until Intel starts selling this new card in the West, we can't really know that it'll be cheaper.

    BTW, nobody is considering the impact of China's crypto mining ban on pricing. This is a 6 GB card, which means it's a target for miners. That could push up prices for it, outside of China.
    No thanks. The RX 6400 is an insulting piece of crap laptop GPU repackaged for desktop duty because AMD could get away with it during the low inventory mining frenzy. I'm not buying a neutered X4 lane 64 bit memory interface GPU with two display outputs and gimped decode along with nonexistent encode capabilities. That half a decade old Quadro P2000 I mentioned is a much better card.



  • dc_coder_84
    replied
    Originally posted by coder View Post
    This narrative doesn't make sense to me, at all. Intel's current GPU team grew out of their chipset graphics, going back some 15+ years ago, and expanded as those GPUs got integrated into their CPUs. That entire time, they've been working on refining their drivers up to and including the first Xe GPUs that shipped as part of Tiger Lake, in late 2020, and then as the DG1 dGPU, shortly thereafter. Sure, the DG2 GPU is a slightly different animal, but I'm sure it shares at least 90% with the Xe iGPUs and DG1.
    From what I am hearing on Phoronix, adapting the driver to dedicated RAM is no simple task. But to some degree you're totally right: Intel has many years of experience with GPUs because of their integrated graphics. You also have to take into account the modularity and scalability they are planning for their new GPUs. Coreteks made a good video explaining their strategy: https://youtu.be/-CULeocEw38?t=466



  • coder
    replied
    Originally posted by dc_coder_84 View Post
    I believe the Arc Alchemist launch is delayed due to graphics driver issues and not because of missing hardware.
    It could be that production delays are just buying them time to do more work on the drivers.

    Originally posted by dc_coder_84 View Post
    Also, this is kinda the first step for Intel in the dedicated GPU market (let's forget that dedicated GPU from over 20 years ago), and this step is not the easiest one. Graphics drivers are complicated software and take huge amounts of time and engineering.
    This narrative doesn't make sense to me, at all. Intel's current GPU team grew out of their chipset graphics, going back some 15+ years ago, and expanded as those GPUs got integrated into their CPUs. That entire time, they've been working on refining their drivers up to and including the first Xe GPUs that shipped as part of Tiger Lake, in late 2020, and then as the DG1 dGPU, shortly thereafter. Sure, the DG2 GPU is a slightly different animal, but I'm sure it shares at least 90% with the Xe iGPUs and DG1.

    As for Larrabee, which people keep bringing up, that was created by an entirely separate group, within Intel. It had nothing to do with their chipset graphics architecture or group. The only sense in which this is Larrabee 2.0 is that both products are from Intel. It's kind of like calling Core 2 a descendant of Pentium 4. Yes, both came from Intel and Core 2 came later and supported a superset of the instructions, but it was really rooted in the Pentium Pro/3 architecture and not Pentium 4's.
    Last edited by coder; 16 June 2022, 10:12 AM.



  • dc_coder_84
    replied
    Originally posted by Keats View Post
    So they're launching a low-end GPU... in limited numbers... more than 6 months after it was initially supposed to launch. Looking at some articles from last year, people were even speculating that we might see Battlemage by the end of this year. Given Intel's recent history, I wouldn't be surprised if we won't see Battlemage until early 2024, at which point it might have a brief window of opportunity until RDNA4 and Lovelace successor come out.
    I believe the Arc Alchemist launch is delayed due to graphics driver issues and not because of missing hardware. Having said that, Battlemage could come quite soon because the drivers will then be in a much more mature state. Also, this is kinda the first step for Intel in the dedicated GPU market (let's forget that dedicated GPU from over 20 years ago), and this step is not the easiest one. Graphics drivers are complicated software and take huge amounts of time and engineering.



  • oiaohm
    replied
    Originally posted by grung View Post
    Lead? Intel was/is putting crappy GPUs in their machines so nobody could use these computers for anything other than Word. It is almost like the IE situation: they could have developed better GPUs but didn't want to make the effort.
    Over the years Intel has built various features into their iGPUs for business applications, so it's not just been for people running Word. This turns up in Adobe applications and a few others.



    You can see them at the bottom here. Yes, those six companies; a lot of them were working with Intel prior to 2010.

    There is a reason why Intel has held 60%+ of the total market share: for a lot of business work, an Intel GPU has enough functionality.

    AM5 is interesting, as all AM5 chips are to be APUs. So a CPU with integrated graphics is about to become the norm.

    It's easy to miss that Intel has done some specialist design work for their GPUs; that these specialist features are aimed not at the gamer market but at the business market makes them easy to miss. When 90%+ of Intel's graphics user base are business/office users, that kind of makes sense.



  • coder
    replied
    Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
    Especially if it is a single slot solely bus powered card. That niche of a small card that can work in systems with very low end power supplies has been mostly forgotten about for years. Used Nvidia Quadro P2000 cards still go for nearly $300. The slightly updated P2200 cards are closer to $400.

    I need 3x 1440p class cards, but I also need several 1080p cards. I expect these first gen Intel cards to be a bit underwhelming and the drivers to be immature. I also expect those factors to influence street prices, assuming they don't stay OEM only.
    As already mentioned, you can get an AMD RX 6400 that meets those specs. Until Intel starts selling this new card in the West, we can't really know that it'll be cheaper.

    BTW, nobody is considering the impact of China's crypto mining ban on pricing. This is a 6 GB card, which means it's a target for miners. That could push up prices for it, outside of China.



  • coder
    replied
    Originally posted by Keats View Post
    So they're launching a low-end GPU... in limited numbers... more than 6 months after it was initially supposed to launch.
    This likely has something to do with TSMC production capacity & pricing. Intel made a pretty late move to switch their GPUs over to TSMC, which surely caused some delays. Then, they might've faced steep price increases and production delays by TSMC.

    Intel's entire GPU effort seems late by a year or so, compared to what would've been ideal timing.

