Linux kernel running on GPU without CPU

  • #1
    That's a terrible idea. A GPU is a chip made for certain tasks: it's really good at those, but really terrible at the ones it wasn't made for. Running a kernel on it would not only be very inefficient, but also slow. Besides that, I'm not even sure current GPUs have all the features needed to do everything a CPU does.



    • #2
      Originally posted by A1B2C3:
      I'm sorry but you are weak in this matter. Please look at the architecture of the CPU and GPU. [..] The CPU is primitive.
      It's exactly the opposite. A GPU's core architecture is much simpler than a CPU's. GPUs are meant to execute simple operations on huge amounts of data, so they have simple cores, but lots of them. A CPU, on the other hand, has very few cores, but they are much more complex because they must be able to execute code quickly no matter how complex it is.

      Originally posted by A1B2C3:
      They are like jobs, they just use algorithms for solving that are more suitable for execution on the CPU
      It's not about algorithms; it's much more general. Problems with simple operations but lots of data are good for GPUs. But GPUs won't like code with lots of control flow, e.g. conditional jumps: if further progress depends completely on the result of one calculation, it doesn't matter whether you have one core or tens of thousands, because they all have to wait until that calculation is done. And CPUs have much lower latencies when it comes to single instructions. Of course there are techniques to get around that, like speculative execution, but that only works to some extent and is very inefficient when your code is full of such operations.
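      To make the point concrete, here's a tiny plain-Python sketch (toy code I made up for illustration, not real GPU code) of the difference between a data-parallel workload and a serial dependency chain. Extra cores can only ever help with the first one:

```python
# Toy illustration: same number of operations, very different parallelism.

def data_parallel(xs):
    # Every element is independent of the others, so in principle each
    # one could run on its own "core" -- this is what GPUs are built for.
    return [x * 2 + 1 for x in xs]

def serial_chain(xs):
    # Each step needs the previous result, so ten thousand cores help
    # exactly as much as one core: not at all.
    acc = 0
    for x in xs:
        acc = acc * 2 + x  # depends on acc from the last iteration
    return acc

print(data_parallel([1, 2, 3]))  # [3, 5, 7]
print(serial_chain([1, 2, 3]))   # ((0*2+1)*2+2)*2+3 = 11
```

For the serial chain, low per-instruction latency (a CPU strength) is the only thing that makes it finish faster.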

      Originally posted by A1B2C3:
      It's like if you pulled a racing car in a cart with horses and said that a racing car is not faster than horses.
      I have a better analogy: the GPU is a big truck with lots of horsepower, while the CPU is a car. If you want to transport lots of goods from point A to point B, the truck will be much quicker because it only has to drive once while the car has to make many trips. But if you only want to get one passenger from one point to another, the car will be much quicker.

      I won't write more because you don't seem to understand the basics.



      • #3
        Originally posted by A1B2C3:
        [T]here are no algorithms for solving standard problems using GPU. everything [sic] is CPU-oriented.
        ...
        deep [sic] parallelization will help solve the stupid killing of processes in the Linux kernel, solve the problem with queues and other diseases that seem incurable when using the CPU.
        You talk a big game. Sometimes an annoyed developer makes an (arguably) better solution than what came before, like systemd, PipeWire, or Linux itself. If you think you can solve some of the complexities of programming through GPU programming, go ahead and show us how it's done. You'll have lots of folks eating their words and buying you beers if you can come up with a few more general-purpose GPU algorithms. I'd highly recommend looking at some CUDA or OpenGL resources first.
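        For a taste of what that looks like, the classic GPGPU "hello world" is SAXPY (out[i] = a*x[i] + y[i]). Below is a made-up plain-Python sketch of the CUDA-style model -- one kernel function executed once per thread index over a grid -- just to show the shape of it. Real CUDA kernels are written in C/C++ and run on the device; the launch machinery here is invented for illustration:

```python
# Hypothetical sketch of the SIMT programming model in plain Python.

def launch(kernel, n_threads, *args):
    # A real GPU runs these "threads" in parallel across its cores;
    # we just loop, since the point is the programming model.
    for tid in range(n_threads):
        kernel(tid, *args)

def saxpy_kernel(tid, a, x, y, out):
    # Each thread handles exactly one element: out[i] = a*x[i] + y[i].
    out[tid] = a * x[tid] + y[tid]

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

Notice there's no inter-thread communication and no control flow that depends on another thread's result; that's what makes a problem "GPU-shaped".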



        LearnOpenGL.com provides good, clear tutorials for modern (3.3+) OpenGL, with examples, aimed at beginners.



        • #4
          I think what you want is a CPU with thousands of cores, rather than a GPU. GPUs are designed entirely as co-processors: they don't have any of the general-purpose machinery needed to operate by themselves. They're basically built to process many small calculations at once, and they lack the hardware-orchestration abilities CPUs have. You could probably design a CPU with thousands of cores, but at that point it wouldn't look anything like x86/ARM/RISC-V, and the cores themselves would probably be slower than dedicated GPU cores. On the flip side, standard GPU architectures can't run by themselves. You can't simply program an OS to run without the CPU; the computer's physical architecture doesn't support it, and the machine literally will not turn on.
          Also, why would you even want this? It might be interesting from a manufacturing perspective, but there's nothing really wrong with having a CPU with a GPU as the co-processor. Even if the GPU is doing most of the work, the CPU can still handle general system management, which is pretty much how it works now anyway. In fact, modern GPU architectures already bypass the CPU and RAM for some work to be more efficient, but they're not going to replace the CPU: that would be a huge waste of time (both to invent, and of the GPU's own processing time) when it's easier to simply split serial tasks and parallel tasks across two dedicated processors.



          • #5
            Originally posted by A1B2C3:
            Guys, you are wrong. I know that it is possible. It is difficult but possible. Let's in this case each of us stick to our own opinion.
            But this isn't a matter of opinion. You're literally asking about the functional capabilities of a piece of hardware, which is by definition not opinionated. You're asking how a machine works.
            If you *know* a standard GPU can run by itself, then how? I'm genuinely asking.
            Otherwise, if you don't know, why won't you accept our answers? As far as I know, GPUs can't run by themselves because they literally weren't built to: they are designed from the ground up to be co-processors. Aside from being extremely proprietary, with essentially entirely different architectures between generations, they don't have hardware capabilities like networking or disk control, and probably don't even have an MMU; I'd assume the CPU or southbridge handles that. Again, it's totally fair to ask for a CPU with thousands of cores, but you are literally asking for an existing co-processor to perform all system work. That's like saying you can rip the 8086 out of an IBM PC and use only the 8087, which is physically impossible.

            Also, I'm still not sure why you hate CPUs so much. I just explained that they're not only perfectly fine at what they do, they actually offload work from the GPU: the GPU performs better because it doesn't have to do all of the serial, non-parallel work the CPU is doing. You could easily turn your (frankly insane) argument around and say we should all stop using GPUs because technically you can render on the CPU. Neither argument works, because the current design of a single-workload workhorse plus a highly parallel processor optimized for tiny mathematical operations fits our modern computing paradigm perfectly: computers need to do both expensive non-parallel tasks and highly parallel micro-tasks, which CPUs and GPUs are perfect for, respectively. What you propose is either to bog down the GPU with expensive non-parallel workloads it isn't optimized for, or to have a CPU with thousands of multi-purpose compute cores that are a jack of all trades but a master of none.



            • #6
              Originally posted by A1B2C3:
              that it is time to make the transition. CPU time is up.
              You still haven't given a reason - or responded to my counter-reasons - as to why this is in any way desirable.
              So far you just sound like a ~2008 AMD marketing executive demanding more cores without any regard for how well they perform.
              And no, nobody wants to kill CPUs. To be frank, I'm not convinced you're even aware of what a CPU or GPU actually is.



              • #7
                Okay, I think I at least understand you a little better. So this is political and not technical.

                Originally posted by A1B2C3:
                Adapt the Linux system for implementation in the core of neural networks, no NPU can compare with the GPU for these purposes.
                I'm not sure why you assume this; NPUs are specifically designed to be better than GPUs at exactly this purpose.
                Again, this seems to tie in with your misunderstanding of what a compute core is. CPU cores are entirely general-purpose; GPU cores are meant for simpler mathematical operations, but perform them faster than CPU cores; NPU cores are built specifically for ML-style matrix operations, and are even better at those than GPU cores. This is why we have different processor types: the more general-purpose a core is, the worse it is at specific tasks, and conversely, the better a core is at a specific task, the worse it becomes at general-purpose ones. It's exactly why we don't throw everything at the GPU: a lot of computational tasks would be horrible to perform on a GPU compared to a CPU, for a variety of reasons. The CPU is the safe default for computation because it's simply the best component for general-purpose computing; people only go to the GPU ("GPGPU") when they have a workload that can specifically exploit the capabilities of GPU cores without suffering their downsides.
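                As a toy illustration of one of those "horrible on a GPU" reasons: GPU threads run in lockstep groups (warps), and lanes that diverge on a branch get serialized, so the warp pays for both sides of the branch. This is a made-up cost model in plain Python, not how any real driver counts cycles:

```python
# Toy cost model of SIMT branch divergence (illustrative numbers only).

def warp_cost(lanes, then_cost, else_cost):
    # lanes: list of booleans, one branch outcome per lane in the warp.
    takes_then = any(l for l in lanes)       # some lane takes the 'then' side
    takes_else = any(not l for l in lanes)   # some lane takes the 'else' side
    # A diverged warp executes both paths back to back.
    return then_cost * takes_then + else_cost * takes_else

uniform = [True] * 32                         # all 32 lanes agree -> one path
diverged = [i % 2 == 0 for i in range(32)]    # lanes disagree -> both paths

print(warp_cost(uniform, 10, 7))   # 10
print(warp_cost(diverged, 10, 7))  # 17
```

Kernel code full of data-dependent branches ends up paying for every path, which is part of why a CPU with branch prediction handles that kind of code so much better.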

                Originally posted by A1B2C3:
                NVIDIA has made a move into the AI sector.. AMD and Intel will never be able to compete with them in this industry. The only way out is if People start buying GPUs from Intel and AMD instead of CPUs. Then they will be saved from collapse and people will in turn receive a base for realizing their needs and requirements. Everyone will be fine. Shitty GPUs from Intel and AMD that are not even good for games will be able to satisfy the demand instead of the CPU. Together, we will all enter the future without losing anyone. Linux systems can save AMD and Intel and become the only first system suitable for scientific work and entertainment. I think that Linux systems will become as popular as Windows. This is a chance for everyone. It is stupid to lose it.


                I fully agree with your sentiment that we absolutely need competition for nvidia, but this solution has nothing to do with that. AMD's and Intel's GPUs don't suck because they're under-utilized; they suck because those companies simply don't care. AMD neglects its GPU division in favor of its CPU division (a very stupid move, but not one that will be solved by somehow magically making GPUs work autonomously), and Intel is outright incompetent and losing market value by the day because of it (people are comparing them to Boeing). As much as I dislike nvidia, they are quite literally the only company competently making advanced GPUs.

                Originally posted by A1B2C3:
                Everything has already reached a dead end with the CPU. People are tired of lies. Nothing is being done to improve the lives of the masses.


                I find this particularly ironic, because I vividly remember a JayzTwoCents video from a few months ago where he said, "It's funny CPUs are exciting now while GPUs are boring, because in the past it was the GPUs that were exciting while the CPUs were stagnant," referring to how both Intel and nvidia don't innovate much because their competition simply isn't threatening. CPUs were boring because AMD didn't have the resources to compete with Intel; now they're exciting because AMD's CPUs are being massively innovated. Meanwhile, GPUs are boring because AMD is as slack with its GPUs as it was with its CPUs before it finally invented the Zen architecture.



                • #8
                  Originally posted by A1B2C3:
                  the most important thing is that it will become easy to develop software. you will not need to write code for the CPU. there will not be such a mess as with a huge number of aarch64 processors. This is a huge plus for the Linux kernel. The Linux kernel will be able to move from chaos to order.
                  ... Have you even read my post? I specifically pointed out, almost needlessly, that GPU architectures are entirely proprietary and tend to change radically between generations. If you think ARM architectural differences are bad, GPU architectures are a thousand times worse. GPUs don't even remotely resemble each other between generations; they have no standards to adhere to like CPU architectures do.
                  This really points out that you have no idea what a CPU or GPU even is. I don't even mean that insultingly, you're genuinely making statements about things you know nothing about.



                  • #9
                    Originally posted by A1B2C3:
                    You're the one who doesn't understand my messages. I have already talked many times about algorithms for solving problems on the GPU. Today, a programmer needs to solve a problem by thinking simultaneously on GPU algorithms and CPU algorithms. they will switch to only one algorithm, and they will improve it well. as for changes in GPU architectures, that's up to the compiler. The programming language will remain the same for all architectures and algorithms as well.
                    I can definitely tell you don't program, because nobody splits up algorithms between processors like that. When people have a highly parallel workload, they put it on the GPU, simple as that. This is like saying we should freeze all our food because you're not sure whether something needs refrigerating.
                    I really don't see any more point in this conversation, you don't even seem to understand what computers are, you sound like a politician trying to pass an IT law. I really recommend actually learning what each component in a computer is and how to perform GPU workloads in code, because I 100% know you know nothing about either.



                    • #10
                      I haven't read so much ignorant nonsense in a very long time. The facts don't change because you are of a different opinion. GPUs are just not suited for that task no matter how much you want them to be. That's why nobody is wasting time or money to make Linux run on a GPU.

