Intel Meteor Lake Arc Graphics: A Fantastic Upgrade, Battles AMD RDNA3 Integrated Graphics


  • #31
    Originally posted by qarium View Post
    in many cases the stuff runs faster on the CPU than via compute on the GPU...
    Not true. In my benchmarks using custom Python code, OpenCL on an Intel iGPU runs up to 20x faster than running on the CPU.
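
    For what it's worth, here is a minimal sketch of the kind of CPU-vs-iGPU comparison being described, using PyOpenCL and NumPy. The kernel, array size and device selection below are illustrative assumptions, not the original benchmark code:

    # Hypothetical benchmark sketch: NumPy on the CPU vs. an OpenCL kernel
    # on whatever GPU device PyOpenCL picks (e.g. an Intel iGPU).
    import time
    import numpy as np
    import pyopencl as cl

    n = 16_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)

    # CPU baseline with NumPy
    t0 = time.perf_counter()
    cpu_out = a * b + np.sin(a)
    cpu_time = time.perf_counter() - t0

    # OpenCL context and a simple element-wise kernel
    ctx = cl.create_some_context(interactive=False)
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, """
    __kernel void fma_sin(__global const float *a,
                          __global const float *b,
                          __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] * b[i] + sin(a[i]);
    }
    """).build()

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # Time only the kernel itself; host<->device copies are not counted here
    t0 = time.perf_counter()
    prog.fma_sin(queue, (n,), None, a_buf, b_buf, out_buf)
    queue.finish()
    gpu_time = time.perf_counter() - t0

    gpu_out = np.empty_like(a)
    cl.enqueue_copy(queue, gpu_out, out_buf)

    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  speedup: {cpu_time / gpu_time:.1f}x")

    Note that this times only the kernel; once the host-to-device copies and queue overhead are included, the measured advantage is usually smaller.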



    • #32
      Originally posted by qarium View Post
      all relevant compute projects like PyTorch support the CPU target.
      I think this is mainly done for debugging and for extremely light-weight inferencing jobs that take about as long as dispatching work to a GPU.

      Originally posted by qarium View Post
      right now I see it like this: you can choose between AMD, which is 20% faster on the CPU, and the Intel option, which is 8% faster on the GPU part.
      That's probably for very light-weight inferencing, like I mentioned above.

      Originally posted by sophisticles View Post
      Not true. In my benchmarks using custom Python code, OpenCL on an Intel iGPU runs up to 20x faster than running on the CPU.
      GPUs are good at certain things, like AI and other sorts of highly-parallel, bulk number-crunching. This is hardly news.

      Unfortunately, many common computing problems don't fit that mold, or else GPU Compute would be way more popular. I've been following it since its inception, over two decades ago.
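
      To put a rough number on the dispatch-overhead point above, here is a hedged PyTorch sketch (PyTorch being the project quoted earlier). The tiny layer and batch size are assumptions for illustration, and 'cuda' simply stands in for whatever GPU backend happens to be available; on an Intel iGPU the device string would differ.

      # Hypothetical micro-benchmark: per-call latency of a tiny model on
      # CPU vs. GPU, where kernel-launch/dispatch overhead dominates.
      import time
      import torch

      model = torch.nn.Linear(64, 64)   # deliberately tiny "inference" job
      x = torch.randn(1, 64)

      def bench(device, iters=1000):
          m = model.to(device)
          inp = x.to(device)
          for _ in range(10):           # warm-up, so first-launch cost isn't counted
              m(inp)
          if device == "cuda":
              torch.cuda.synchronize()
          t0 = time.perf_counter()
          with torch.no_grad():
              for _ in range(iters):
                  m(inp)
          if device == "cuda":
              torch.cuda.synchronize()
          return (time.perf_counter() - t0) / iters

      print(f"CPU per call: {bench('cpu') * 1e6:.1f} us")
      if torch.cuda.is_available():
          # For work this small the dispatch and synchronization overhead
          # tends to dominate, so the CPU often comes out ahead.
          print(f"GPU per call: {bench('cuda') * 1e6:.1f} us")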
      Last edited by coder; 22 December 2023, 04:21 AM.



      • #33
        Originally posted by coder View Post
        I think this is mainly done for debugging and for extremely light-weight inferencing jobs that take about as long as dispatching work to a GPU.
        That's probably for very light-weight inferencing, like I mentioned above.
        GPUs are good at certain things, like AI and other sorts of highly-parallel, bulk number-crunching. This is hardly news.
        Unfortunately, many common computing problems don't fit that mold, or else GPU Compute would be way more popular. I've been following it since its inception, over two decades ago.
        Right, the problem needs to be large enough that the time spent dispatching work to the GPU still results in a net win.

        sophisticles' claim is just naive: just because one specific piece of code runs 20 times faster on the GPU does not mean the overall software runs that much faster, because the other parts of the software cannot be accelerated on the GPU in that way.
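
        A quick back-of-the-envelope illustration of that point (Amdahl's law), with made-up example fractions:

        # If only a fraction of the runtime is accelerated, the overall
        # speedup is capped by the part that stays on the CPU.
        def overall_speedup(accelerated_fraction, kernel_speedup):
            serial = 1.0 - accelerated_fraction
            return 1.0 / (serial + accelerated_fraction / kernel_speedup)

        for frac in (0.10, 0.50, 0.90):
            print(f"{int(frac * 100)}% of runtime accelerated 20x "
                  f"-> overall {overall_speedup(frac, 20):.2f}x")
        # prints roughly 1.10x, 1.90x and 6.90x respectively

        So even a genuine 20x kernel speedup only turns into a big overall win when the GPU-friendly part dominates the runtime.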
        Phantom circuit Sequence Reducer Dyslexia



        • #34
          Originally posted by sophisticles View Post

          Not true. In my benchmarks using custom Python code, OpenCL on an Intel iGPU runs up to 20x faster than running on the CPU.
          Hello everybody,

          I would like to apologize for my posts under the nicknames "sophisticles" and "hel88".

          The thing is, I am a very sick person: schizophrenia with manic depression.
          When I'm on my medication, like now, I feel ashamed of the things that I do when not on medication.

          For example, when I'm not using my therapy properly, I get this crazy tendency to troll on Linux forums. For that devious purpose I use the nicknames "sophisticles" and "hel88". Under those nicknames I write crazy, insane things. When I am on regular therapy, like now, I cannot believe the crap that I wrote under those two nicknames.

          Overall, I would like all of you to know that I don't really mean what I write under those two nicknames, and also, I love Linux, open source and the GPL. And yes, Microsoft sucks.

