
Why More Companies Don't Contribute To X.Org


  • #51
    Originally posted by TemplarGR View Post
    There is no antagonism between the two. They will cooperate *on-die*.
    I read that as "They will cooperate-*or die*".

    : )



    • #52
      A GPU is really, really, really crappy at doing certain things. Depending on what you're actually calculating, a regular CPU will run circles around a GPU.


      When talking about a general-purpose CPU vs. a GPU, we are talking about them relative to one another. A CPU can do most things fast; a GPU can do certain things very fast and not much else.


      When talking about GPGPU on its own, it just means using a GPU-type processor to perform tasks that are not directly related to graphics.


      So the idea is that the software you run on your machine will be able to use both the GPU and the CPU at the same time, depending on whichever is faster at a particular task.

      This is one of the reasons it's important to end up with the GPU functions integrated into the processor die as just another core. You can have the most powerful GPU and CPU in existence, but it does not help your applications a whole lot, since the latency between the video card and the main memory/CPU will destroy any performance you have.

      If you want good performance on a normal PC, you have to program for the CPU _or_ the GPU.


      You can see this in benchmarks of GTK and other things on Linux. The software rendering for 2D stuff in X Windows is very fast and highly optimized. However, people are still trying to migrate as many functions as possible to run on the GPU. Then when people get 'hardware acceleration' working, the micro-benchmarks will often show _worse_ performance. But it's still a good thing when you're doing complex graphics with a mixture of CPU and GPU, because otherwise you're just going to waste hundreds of thousands of cycles shovelling information back and forth between the two.
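
      As a minimal sketch of that shovelling cost (the toy 'scale' kernel, the 64x64 buffer size, and all names here are invented for illustration, and error handling is omitted), the fragment below times a trivial per-pixel job done through OpenCL, with its two blocking bus transfers, against the same loop run on the CPU. For a buffer this small the transfers usually dominate, which is exactly why the micro-benchmarks can get worse:

      /* Hedged sketch: a tiny per-pixel job offloaded to the GPU vs. done on
         the CPU.  Build with:  gcc xfer.c -lOpenCL  (assumes an OpenCL ICD). */
      #include <CL/cl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      static const char *src =
          "__kernel void scale(__global float *p) {"
          "    p[get_global_id(0)] *= 0.5f;"        /* toy per-pixel operation */
          "}";

      int main(void)
      {
          size_t n = 64 * 64, bytes = n * sizeof(float);  /* small 64x64 surface */
          float *pix = malloc(bytes);
          for (size_t i = 0; i < n; i++) pix[i] = (float)i;

          cl_platform_id plat; cl_device_id dev; cl_int err;
          clGetPlatformIDs(1, &plat, NULL);
          clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
          cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
          cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
          cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
          clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
          cl_kernel k = clCreateKernel(prog, "scale", &err);
          cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);

          clock_t t0 = clock();
          /* The expensive part: shovel the data across the bus, twice. */
          clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, bytes, pix, 0, NULL, NULL);
          clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
          clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
          clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, pix, 0, NULL, NULL);
          clock_t t1 = clock();

          /* The same work done where the data already lives. */
          for (size_t i = 0; i < n; i++) pix[i] *= 0.5f;
          clock_t t2 = clock();

          printf("GPU round trip: %ld ticks, CPU loop: %ld ticks\n",
                 (long)(t1 - t0), (long)(t2 - t1));
          return 0;
      }

      The point is not the exact tick counts; it is that the write and read put a fixed transfer floor under the GPU path that a tiny job can never amortize, while a long chain of GPU-side operations could.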



      • #53
        Originally posted by RobbieAB View Post
        Since nVidia started CUDA, ATI started stream processing, OpenCL... These are ALL aiming to harness the considerable power of the GPU for general-purpose floating-point operations. No current GPU is specialised "graphics hardware"; they are general-purpose floating-point SIMD cores with all the magic done in the drivers.

        And you know this at least as well as I do!
        Stream processing is hardly a "general purpose" use. Being programmable also does not mean "general use". They are solutions that are aimed at a limited set of functions. See drag's post above for more.



        • #54
          Originally posted by jakubo View Post
          it kinda sounds to me like "it's not performing very well, let's put it in the kernel..." (just recalling the attempt to get dbus into the kernel. what became of it, by the way? systemd seems to use dbus as well, to some point...)
          what will this development mean for the kernel? will it become any different?
          For Linux graphics you'll have to use TTM/GEM and KMS. All this stuff will have to be managed by the kernel's DRM driver.

          For systems that are not configured to use KMS and TTM/GEM, you will need to retain the X Server, since there would be nothing to handle modesetting and memory management without X. On these systems your X Server actually bypasses the kernel completely and starts fiddling with bits directly on your PCI bus.

          Besides that, Wayland itself will need drivers that support OpenGL ES (1.2, I think).
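
          To make the contrast concrete, here is a hedged sketch (assuming libdrm is installed; error handling kept minimal) of what a KMS client looks like: it asks the kernel's DRM device for its connectors and modes through libdrm, instead of fiddling with bits on the PCI bus the way a non-KMS X Server must:

          /* Sketch of a KMS client: enumerate connectors through the kernel's
             DRM interface rather than touching hardware directly.
             Build with:  gcc kms.c $(pkg-config --cflags --libs libdrm)  */
          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>
          #include <xf86drm.h>
          #include <xf86drmMode.h>

          int main(void)
          {
              int fd = open("/dev/dri/card0", O_RDWR);   /* the DRM device node */
              if (fd < 0) { perror("open"); return 1; }

              drmModeRes *res = drmModeGetResources(fd);
              if (!res) { fprintf(stderr, "no KMS on this device\n"); return 1; }

              for (int i = 0; i < res->count_connectors; i++) {
                  drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);
                  if (!c) continue;
                  if (c->connection == DRM_MODE_CONNECTED && c->count_modes > 0)
                      printf("connector %u: %dx%d@%d\n", c->connector_id,
                             c->modes[0].hdisplay, c->modes[0].vdisplay,
                             (int)c->modes[0].vrefresh);
                  drmModeFreeConnector(c);
              }
              drmModeFreeResources(res);
              close(fd);
              return 0;
          }

          Everything here goes through /dev/dri/card0, so the kernel stays in charge of the hardware; a user-space modesetting X Server has no such arbiter.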



          • #55
            Originally posted by deanjo View Post
            Stream processing is hardly a "general purpose" use. Being programmable also does not mean "general use". They are solutions that are aimed at a limited set of functions. See drag's post above for more.
            That is why these two will merge.

            It is simple:

            The CPU part will be like an intelligent boss, coordinating the effort and assigning priorities. The GPU will be the quick unskilled worker who can't think for himself but is extremely efficient at following his boss's orders. As a matter of fact, you could imagine the GPU as a group of workers, since that is what it is: a group of small processors (shaders). These will work together and produce miracles.



            • #56
              Originally posted by TemplarGR View Post
              That is why these two will merge.

              It is simple:

              The CPU part will be like an intelligent boss, coordinating the effort and assigning priorities. The GPU will be the quick unskilled worker who can't think for himself but is extremely efficient at following his boss's orders. As a matter of fact, you could imagine the GPU as a group of workers, since that is what it is: a group of small processors (shaders). These will work together and produce miracles.
              Having one boss per employee is not an efficient way to increase productivity when one boss could easily coordinate multiple employees.



              • #57
                Originally posted by deanjo View Post
                Having one boss per employee is not an efficient way to increase productivity when one boss could easily coordinate multiple employees.
                In this example the GPU is the equivalent of multiple employees. That is what it is good at: parallel processing, a.k.a. many workers working simultaneously. And if you would like to be more literal, the GPGPU part is the supervisor. The CPU (boss) sends instructions to the supervisor (the GPGPU as a whole) and the supervisor orders his minions (shaders) to start working; occasionally he beats them too.



                • #58
                  Originally posted by TemplarGR View Post
                  In this example the GPU is the equivalent of multiple employees. That is what it is good at: parallel processing, a.k.a. many workers working simultaneously. And if you would like to be more literal, the GPGPU part is the supervisor. The CPU (boss) sends instructions to the supervisor (the GPGPU as a whole) and the supervisor orders his minions (shaders) to start working; occasionally he beats them too.
                  The other factor to remember is that as high-performance applications move to solutions like GPUs and DSPs, the reliance on the CPU becomes less and less, making it easier to move over to another CPU solution as well. It is entirely possible that a CPU such as ARM would suffice, or alternatively you could have a solution like the Transmeta chip, which had its own architecture and offered compatibility through emulation. All one would have to worry about at that point is that the emulation is sufficient to feed those dedicated solutions for backwards compatibility, or alternatively use that CPU's native non-x86 instructions.



                  • #59
                    Originally posted by deanjo View Post
                    The other factor to remember is that as high-performance applications move to solutions like GPUs and DSPs, the reliance on the CPU becomes less and less, making it easier to move over to another CPU solution as well. It is entirely possible that a CPU such as ARM would suffice, or alternatively you could have a solution like the Transmeta chip, which had its own architecture and offered compatibility through emulation. All one would have to worry about at that point is that the emulation is sufficient to feed those dedicated solutions for backwards compatibility, or alternatively use that CPU's native non-x86 instructions.
                    True. But I believe Intel/AMD will have an advantage, since they are already ahead of the competition in many ways. Plus, having both CPU and GPU on die provides far too many benefits. It will be far better than a discrete ARM or Transmeta CPU + dedicated GPGPU, since there is no need for separate RAM or moving data over PCIe.
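
                    As a rough sketch of that benefit (API shape only: CL_MEM_USE_HOST_PTR is a real OpenCL flag, but whether the copy is truly avoided depends on the driver and hardware), a shared-memory part can let the GPU work on the application's own allocation, where a discrete card behind PCIe would still have to copy:

                    /* Hedged sketch: on a chip where CPU and GPU share RAM,
                       the runtime may use the host allocation directly and
                       skip the bus copy.  Build with:  gcc zc.c -lOpenCL  */
                    #include <CL/cl.h>
                    #include <stdio.h>
                    #include <stdlib.h>

                    int main(void)
                    {
                        size_t n = 1024, bytes = n * sizeof(float);
                        float *host = malloc(bytes);
                        for (size_t i = 0; i < n; i++) host[i] = 1.0f;

                        cl_platform_id plat; cl_device_id dev; cl_int err;
                        clGetPlatformIDs(1, &plat, NULL);
                        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
                        cl_context ctx = clCreateContext(NULL, 1, &dev,
                                                         NULL, NULL, &err);
                        cl_command_queue q = clCreateCommandQueue(ctx, dev,
                                                                  0, &err);

                        /* Wrap the existing allocation instead of allocating
                           device memory and copying into it. */
                        cl_mem buf = clCreateBuffer(ctx,
                            CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                            bytes, host, &err);

                        /* On shared-memory hardware this map can hand back
                           the original pointer with no transfer at all. */
                        float *p = clEnqueueMapBuffer(q, buf, CL_TRUE,
                            CL_MAP_READ | CL_MAP_WRITE, 0, bytes,
                            0, NULL, NULL, &err);
                        printf("zero-copy: %s\n", p == host ? "yes" : "no");
                        clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);
                        return 0;
                    }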



                    • #60
                      Originally posted by TemplarGR View Post
                      True. But I believe Intel/AMD will have an advantage, since they are already ahead of the competition in many ways. Plus, having both CPU and GPU on die provides far too many benefits. It will be far better than a discrete ARM or Transmeta CPU + dedicated GPGPU, since there is no need for separate RAM or moving data over PCIe.
                      Who said the alternative CPU couldn't be placed on the GPU? The key thing here is that the CPU core doesn't even have to be on the motherboard, and allocation of resources takes very little bandwidth for what those CPU cores would actually have to do. Heck, you could even in theory utilize a master/slave/slave/slave/etc. setup that grows with the needs.

