Intel Arc Graphics A750 + A770 Linux Gaming Performance


  • coder
    replied
    Originally posted by WannaBeOCer View Post
    Some exciting news about Intel Arc support in PyTorch/TensorFlow: https://github.com/oneapi-src/oneDNN/issues/1465
    I've been impressed with OpenVINO on their iGPUs and CPUs. You can use it in GStreamer pipelines, using the elements provided by their DLStreamer project.

    They're somewhat analogous to Nvidia's TensorRT & DeepStream, respectively.
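
    For the curious, a rough sketch of what such a pipeline looks like from Python (my own example, assuming the DLStreamer GStreamer elements are installed; the input file and the "person-detection.xml" IR model path are placeholders):
    Code:
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # gvadetect runs an OpenVINO IR model on each decoded frame; device=GPU selects the Intel GPU plugin.
    pipeline = Gst.parse_launch(
        "filesrc location=input.mp4 ! decodebin ! videoconvert ! "
        "gvadetect model=person-detection.xml device=GPU ! "
        "gvawatermark ! videoconvert ! autovideosink"
    )
    pipeline.set_state(Gst.State.PLAYING)

    # Wait for end-of-stream or an error, then tear the pipeline down.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.ERROR | Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)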



  • coder
    replied
    Originally posted by Anux View Post
    I shouldn't have used the term RAID; I meant just using multiple memory chips and layers in parallel, which of course is done to surpass the limits of a single memory chip.
    That's what SSD controllers already do, and the high-end drives routinely max out the number of channels.



  • WannaBeOCer
    replied
    Some exciting news about Intel Arc support in PyTorch/TensorFlow: https://github.com/oneapi-src/oneDNN/issues/1465

    I appreciate your interest in Intel Arc GPUs! The recently released oneDNN v2.7 includes optimizations for Intel Arc GPUs and utilizes the XMX engines. At the moment you can run inference workloads imported from TensorFlow and PyTorch with the Intel Distribution of OpenVINO Toolkit.

    Keep an eye on future announcements around PyTorch and TensorFlow support.
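
    As a taste of what that looks like today, here's a minimal sketch using OpenVINO's Python API (my own example, not Intel's; "model.xml" is a placeholder for an IR converted from a TensorFlow or PyTorch model, and "GPU" selects Intel's GPU plugin):
    Code:
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")          # placeholder path to a converted IR model
    compiled = core.compile_model(model, "GPU")   # "GPU" = Intel GPU plugin

    output_layer = compiled.output(0)

    # Placeholder input; the shape must match whatever the converted model expects.
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
    result = compiled([dummy])[output_layer]
    print(result.shape)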



  • Anux
    replied
    Originally posted by coder View Post
    If your SSD controller can hit the desired performance target, then there's no need for a RAID. You only use RAID-0 to surpass those limits.
    I shouldn't have used the term RAID; I meant just using multiple memory chips and layers in parallel, which of course is done to surpass the limits of a single memory chip.



  • coder
    replied
    Originally posted by Anux View Post
    No, not a classical RAID controller, just a "simple" SSD controller. It's already the norm with channels, banks and multiple planes.
    If your SSD controller can hit the desired performance target, then there's no need for a RAID. You only use RAID-0 to surpass those limits.



  • Anux
    replied
    Originally posted by coder View Post
    So, it's only a (tiny) cost savings at the silicon level, and shaving a couple of cents off the graphics card board cost.
    If you only knew how greedy they are when it comes to those little pennies.

    PCIe 5.0 has none of that. It's just a straight clock-doubling exercise, from 3.0 to 5.0.
    Argh, right, I mixed that up.

    The only reason even to do it is to surpass the limits of what a single controller can handle. By definition, you need multiple controllers. Otherwise, it's pointless. And if you want to make it transparent to the host, then you need a 3rd chip. I actually have a Western Digital card which works like that, but it's an x8 PCIe add-in card.
    No, not a classical RAID controller, just a "simple" SSD controller. It's already the norm with channels, banks and multiple planes.



  • coder
    replied
    Originally posted by Anux View Post
    It will become a trend with PCIe 5. Intel also used x8
    x8 cards are nothing new. I have an AMD RX 550 (Polaris) that's PCIe 3.0 x8. And I'm sure even that wasn't the first.

    x4 would be a new development, if this is anything more than a one-off. But it's also kinda pointless, from a system perspective, because the card is mechanically x16 and the motherboards have to support x16. So, it's only a (tiny) cost savings at the silicon level, and shaving a couple of cents off the graphics card board cost.

    Originally posted by Anux View Post
    The coding is more complex, so it would need more silicon. How much, I don't know.
    You're thinking of 6.0, which introduces PAM4 and flits. PCIe 5.0 has none of that. It's just a straight clock-doubling exercise, from 3.0 to 5.0.

    Originally posted by Anux View Post
    What's so special about RAID 0? It barely needs any logic: one controller and four memory chips and you're good to go.
    The only reason even to do it is to surpass the limits of what a single controller can handle. By definition, you need multiple controllers. Otherwise, it's pointless. And if you want to make it transparent to the host, then you need a 3rd chip. I actually have a Western Digital card which works like that, but it's an x8 PCIe add-in card.
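
    For reference, the per-lane numbers behind that clock doubling (my own back-of-the-envelope arithmetic, counting only the 128b/130b encoding overhead, not packet/protocol overhead):
    Code:
    # PCIe 3.0/4.0/5.0 all use 128b/130b encoding; each generation just doubles the transfer rate.
    GENS = {"3.0": 8, "4.0": 16, "5.0": 32}   # GT/s per lane

    def lane_gbs(gt_per_s):
        """Usable GB/s per lane after 128b/130b encoding."""
        return gt_per_s * (128 / 130) / 8

    for gen, rate in GENS.items():
        print(f"PCIe {gen}: {lane_gbs(rate):.2f} GB/s per lane, "
              f"x4 = {4 * lane_gbs(rate):.1f} GB/s, x16 = {16 * lane_gbs(rate):.1f} GB/s")
    # So a 5.0 x4 link has the same raw bandwidth as a 3.0 x16 link.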



  • Anux
    replied
    Originally posted by coder View Post
    Well, it was originally designed as a laptop GPU. So, I wouldn't read too much into the x4 thing. We've had lots of entry-level GPUs with x8 interfaces, but x4 isn't yet a trend.
    It will become a trend with PCIe 5. Intel also used x8, and whether it was designed for laptops or not doesn't help if that's the only entry-level option in the future. They will cut down as much as possible to save a few bucks, which is really bad because that makes the cards unusable for cheaper/older systems.

    The PCIe protocol didn't get more complex until PCIe 6.0 (using PCIe 3.0 as baseline), so the amount of silicon a PCIe controller takes per-lane shouldn't increase until then.
    The coding is more complex, so it would need more silicon. How much, I don't know.

    No consumer SSDs can sustain writes much above PCIe 2.0 speeds. So, it's mostly just specsmanship.
    It's like boost clocks for CPUs: in normal usage patterns it feels super fast. Sustained writes of course suffer, but those are only needed in specific workloads.

    That's a good thing?
    Depends on your preferences; I'm fine with my old coolerless 2500 MB/s SSD.

    The M.2 form factor is not good for that. Controllers eat up precious board space, not to mention the additional heat.
    What's so special about RAID 0? It barely needs any logic: one controller and four memory chips and you're good to go.



  • coder
    replied
    Originally posted by Anux View Post
    If I'm seeing the trend correctly, the cheaper entry-level GPUs in particular come with cut-down PCIe interfaces, like the 6500 XT with PCIe 4.0 x4.
    Well, it was originally designed as a laptop GPU. So, I wouldn't read too much into the x4 thing. We've had lots of entry-level GPUs with x8 interfaces, but x4 isn't yet a trend.

    Nor is it clear exactly how much it saves. The PCIe protocol didn't get more complex until PCIe 6.0 (using PCIe 3.0 as baseline), so the amount of silicon a PCIe controller takes per-lane shouldn't increase until then. I'd expect the difference between x4 and x8 to be negligible, in absolute terms. Just guessing, since PCIe 3.0 x8 interfaces didn't seem too onerous even at 40 nm.

    Originally posted by Anux View Post
    SSDs with PCIe 5 are in the works.
    Not because people need it, but because it's a feature they think they want. Also, the interface speed != sustained performance. You could get a higher-speed burst in/out of onboard DRAM buffers, but I think sustained, sequential reads are unlikely to go much above PCIe 4.0 speeds. No consumer SSDs can sustain writes much above PCIe 2.0 speeds. So, it's mostly just specsmanship.

    Originally posted by Anux View Post
    There is already a "solution" to that: https://www.teamgroupinc.com/en/uplo...969f14c489.jpg We might see actively cooled ones with PCIe 5.
    That's a good thing? I liked when SSDs actually used less power than spinning rust-based drives.


    Originally posted by Anux View Post
    Increasing the bandwidth of SSDs is easy: just build an internal "RAID 0".
    The M.2 form factor is not good for that. Controllers eat up precious board space, not to mention the additional heat.



  • Anux
    replied
    Originally posted by coder View Post
    Being able to run dual-x8 GPUs @ PCIe 5.0 is the singular argument for having it on the desktop, and only if you're running high-end GPUs of the sort a desktop PSU could barely power 2 of.
    If I'm seeing the trend correctly, the cheaper entry-level GPUs in particular come with cut-down PCIe interfaces, like the 6500 XT with PCIe 4.0 x4. I don't know if RDNA 3 will have PCIe 5 already, but SSDs with PCIe 5 are in the works.

    they're burning lots of power, which means they throttle, which means they can't sustain it.
    There is already a "solution" to that: https://www.teamgroupinc.com/en/uplo...969f14c489.jpg We might see actively cooled ones with PCIe 5. Increasing the bandwidth of SSDs is easy: just build an internal "RAID 0".
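
    To illustrate what I mean, a toy sketch (not real controller firmware, just the idea): stripe a write across N flash channels round-robin, which is all an "internal RAID 0" amounts to; keep more channels busy and the aggregate bandwidth scales.
    Code:
    STRIPE = 4096   # bytes per chunk; made-up stripe size

    def stripe_write(data, channels):
        """Split data into STRIPE-sized chunks and deal them out round-robin."""
        queues = [[] for _ in range(channels)]
        for i in range(0, len(data), STRIPE):
            queues[(i // STRIPE) % channels].append(data[i:i + STRIPE])
        return queues

    queues = stripe_write(b"x" * (16 * STRIPE), channels=4)
    print([len(q) for q in queues])   # -> [4, 4, 4, 4]; each channel handles 1/4 of the work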

