
Intel Xe2 Brings Native 64-bit Integer Arithmetic


  • Intel Xe2 Brings Native 64-bit Integer Arithmetic

    Phoronix: Intel Xe2 Brings Native 64-bit Integer Arithmetic

    As some more exciting news for upcoming Xe2 graphics with Lunar Lake integrated graphics and Battlemage discrete GPUs, the latest open-source driver activity for Linux has confirmed Xe2 supporting native 64-bit integer arithmetic...


  • #2
    Next step: AVX! (;



    • #3
      What does native 64-bit int support imply about potential native fp64? Unless I've missed it, the current gen is also missing native fp64.



      • #4
        Originally posted by geerge View Post
        What does native 64 bit int support imply about potential native fp64? Unless I've missed it the current gen is also missing native fp64.
        bruh, it's literally in the line of code right above that one in the image...



        • #5
          Oh yeah, thanks. Exciting that FP64 is present; hopefully this flag guarantees that it's native. AMD kind of screwed the pooch with lacklustre FP64 improvements relative to the previous gen (a 32:1 ratio, down from 16:1, though still better than Nvidia's 64:1). If Intel can compete with AMD in the areas I care about, that would be great.



          • #6
            There's no free lunch. FP64, even if fast now, will be severely crippled as soon as Intel gains some market share.
            Every nice feature "for the price" is given only because the product lacks something else. Poor RT performance? We'll release 16GB cards for $600. No CUDA? But we'll offer better FP64 performance. Poor single-threaded performance? No problem, we'll just add more cores.

            As soon as our product becomes popular, forget about it.
            Unless our competition pushes us again, that is.
            Last edited by sobrus; 01 June 2024, 09:10 AM.



            • #7
              I mean, sure, but AMD's FP64 performance on GCN and Vega was a massive win for hobbyist compute, and since they never gained the market share, strong FP64 compute at a good price lasted a long time. It's a good bet that Intel won't gain enough market share to command high prices relative to the competition, and FP64 compute in consumer cards is barely even a metric to compete on, it's such a niche (or is it now that AI is a thing? Probably not). I don't see what crippling consumer FP64 really does for them, though I suppose there's a chance they fuse it off and keep relying on emulated FP64 if enabling it would hurt binning yields.

              Intel releasing FP64 in consumer cards cannot be a bad thing for my niche; at worst the cards are crap and it's a neutral development.



              • #8
                Originally posted by geerge View Post
                (or is it now that AI is a thing? Probably not).
                Nah, AI is usually about quantizing models as much as possible to improve throughput. fp32 is already rare, fp16 is the norm (or bf16, but it's still 16 bits), and in some places you even go fp8 or 4-bit (I don't even know how the fuck that works tbh. 4 bits with 1 sign bit? wtf)



                • #9
                  Does it include atomic operations on int64? That's something fairly important when it comes to GPGPU computing.



                  • #10
                    Originally posted by geerge View Post
                    What does native 64 bit int support imply about potential native fp64? Unless I've missed it the current gen is also missing native fp64.
                    Yeah, Intel weirdly left native fp64 out of consumer Xe. There was a middle-tier Xe aimed at datacenters, based on the same microarchitecture, which presumably had fp64 (their Xe architecture slides listed fp64 as an optional feature), but that product line got cancelled and the Flex datacenter GPUs are now just glorified consumer dGPUs (same as AMD and Nvidia do for their low-end server GPUs).

                    I appreciate that they saw the error of their ways and I guess seized on the opportunity to provide int64, as well. Nice move.

                    Originally posted by geerge View Post
                    AMD kind of screwed the pooch with lacklustre FP64 improvements relative to the previous gen (32:1 ratio down from 16:1, still better than nvidia's 64:1), if intel can compete with AMD in areas I care about that would be great.
                    Be realistic. The reason consumer GPUs only provide fp64 scalar implementations is that consumer graphics code doesn't need any more than that. Furthermore, because that's all you get in mass-market hardware, games will continue to optimize around it, virtually guaranteeing that wider implementations never happen (especially now that HPC GPUs are completely separate architectures from client GPUs).

                    P.S. the reason AMD changed from 16:1 in GCN to 32:1 in RDNA is that they had implemented 16-wide SIMD in GCN (i.e. it takes 4 cycles to execute a 64-element wavefront) but RDNA is now 32-wide (matching the wavefront size). In both cases, fp64 support is relegated to scalars. So, it seems like mainly a byproduct of that architecture change, rather than a conscious decision to further disadvantage fp64.
                    Last edited by coder; 01 June 2024, 05:05 PM.

