Why More Companies Don't Contribute To X.Org


  • drag
    replied
    A GPU is really, really crappy at doing certain things. Depending on what you're actually calculating, a regular CPU will run circles around a GPU.


    When talking about a general-purpose CPU vs. a GPU, we are talking about them relative to one another. A CPU can do most things fast, and a GPU can do certain things very fast and not much else.


    GPGPU on its own just means using a GPU-type processor to perform tasks that are not directly related to graphics.


    So the idea is that the software you run on your machine will be able to use both the GPU and the CPU at the same time, depending on whichever is faster at a particular task.

    This is one of the reasons it's important to end up with the GPU functions integrated into the processor die as just another core. You can have the most powerful GPU and CPU in existence, but it does not help your applications a whole lot, since the latency between the video card and the main memory/CPU will destroy any performance you have.

    If you want to have good performance with a normal PC, you have to program for either the CPU or the GPU.


    You can see this in benchmarks of GTK and other things on Linux. The software rendering for 2D stuff in X Windows is very fast and highly optimized. However, people are still trying to migrate as many functions as possible to run on the GPU. Then, when they get 'hardware acceleration' working, the micro-benchmarks will often show _worse_ performance. But it's still a good thing when you're doing complex graphics with a mixture of CPU and GPU, because otherwise you're just going to waste hundreds of thousands of cycles shovelling information back and forth between the two.
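    To make that trade-off concrete, here is a rough back-of-the-envelope sketch in C. The bandwidth and throughput figures are illustrative assumptions, not measurements of any real hardware; the point is only that the copy over the bus can dominate small workloads:

    Code:
    /* Hypothetical cost model: when is it worth shipping work to a discrete GPU?
     * All constants below are assumed, round numbers for illustration only.
     * Build: cc offload.c -o offload
     */
    #include <stdio.h>

    #define BUS_BANDWIDTH_GB_S   8.0    /* host <-> discrete GPU transfer rate (assumed) */
    #define CPU_GFLOPS          50.0    /* sustained CPU throughput (assumed)            */
    #define GPU_GFLOPS        1000.0    /* sustained GPU throughput (assumed)            */

    /* Estimate where a task that touches `bytes` of data and performs
     * `flops` floating point operations would finish sooner. */
    static const char *pick_device(double bytes, double flops)
    {
        double cpu_time = flops / (CPU_GFLOPS * 1e9);
        double gpu_time = flops / (GPU_GFLOPS * 1e9)
                        + 2.0 * bytes / (BUS_BANDWIDTH_GB_S * 1e9); /* copy in + copy out */
        return gpu_time < cpu_time ? "GPU" : "CPU";
    }

    int main(void)
    {
        /* Small working set, little arithmetic: the copy dominates. */
        printf("1 MB, 10 MFLOP  -> %s\n", pick_device(1e6, 1e7));
        /* Large, arithmetic-heavy workload: the GPU wins despite the copy. */
        printf("100 MB, 1 TFLOP -> %s\n", pick_device(1e8, 1e12));
        return 0;
    }

    With an on-die GPU sharing memory with the CPU, the copy term shrinks toward zero and the crossover point moves much further down, which is the whole argument for integration.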



  • yotambien
    replied
    Originally posted by TemplarGR View Post
    There is no antagonism between the two. They will cooperate *on-die*.
    I read that as "They will cooperate-*or die*".

    : )



  • jakubo
    replied
    It kinda sounds to me like "it's not performing very well, let's put it in the kernel..." (just recalling the attempt to get D-Bus into the kernel; what became of that, by the way? systemd seems to use D-Bus as well, to some extent...)
    What will this development mean for the kernel? Will it become any different?



  • RobbieAB
    replied
    Originally posted by deanjo View Post
    Since when has there been such a device? If a GPU were to do general-purpose duties, it would more than likely do it through emulation, provided the GPU had enough headroom to carry out those less demanding tasks.
    Since nVidia started CUDA, ATI started stream processing, OpenCL... These are ALL aiming to harness the considerable power of the GPU for general-purpose floating-point operations. No current GPU is specialised "graphics hardware"; they are general-purpose floating-point SIMD cores with all the magic done in the drivers.

    And you know this at least as well as I do!
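    For reference, this is roughly what driving those general-purpose SIMD cores looks like through OpenCL: a minimal vector-add sketch with error handling omitted, assuming the OpenCL headers and a driver that exposes an OpenCL platform are installed.

    Code:
    /* Minimal OpenCL vector-add host program (sketch, no error checking).
     * Build: cc vadd.c -lOpenCL
     */
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void vadd(__global const float *a,\n"
        "                   __global const float *b,\n"
        "                   __global float *c) {\n"
        "    size_t i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void)
    {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        /* Copy the inputs across and allocate space for the result. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        printf("c[10] = %f\n", c[10]);  /* expect 30.0 */
        return 0;
    }

    Notice there is not a single "graphics" call in there; the kernel is plain data-parallel floating-point code, which is the point.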



  • deanjo
    replied
    Originally posted by RobbieAB View Post
    So why bother putting in a specialised GENERAL PURPOSE GPU?
    Since when has there been such a device? If a GPU were to do general-purpose duties, it would more than likely do it through emulation, provided the GPU had enough headroom to carry out those less demanding tasks.



  • TemplarGR
    replied
    Originally posted by deanjo View Post
    This is where I disagree: if the need is not there for more general-purpose power, there is absolutely no reason to include it.
    You are right. Ideally, the GPU should never become extinct.

    Where you are wrong is that not all solutions in computing are ideal in absolute terms. Sometimes we sacrifice some efficiency if it makes economic and/or programming sense.

    When the bulk of GPUs sold are on-die, and software begins to be optimized for them instead of for discrete GPUs, it will make more sense for AMD to drop discrete GPUs and concentrate on APUs. If one needs more power, one will buy more APUs. It will not be more efficient than buying pure GPUs, but AMD will not keep designing pure GPUs for little economic gain when it can simply sell more APUs instead. Intel, for obvious reasons (it has no discrete GPUs), will do the same.



  • RobbieAB
    replied
    Originally posted by deanjo View Post
    This is where I disagree: if the need is not there for more general-purpose power, there is absolutely no reason to include it.
    So why bother putting in a specialised GENERAL PURPOSE GPU?



  • deanjo
    replied
    Originally posted by TemplarGR View Post
    When we reach this point, it will make far more sense to simply put more APUs inside a PC for increased performance.
    This is where I disagree: if the need is not there for more general-purpose power, there is absolutely no reason to include it.



  • TemplarGR
    replied
    Originally posted by deanjo View Post
    I'm not forgetting anything; I've already said that specialized solutions aren't going to sit on their asses while general-purpose CPUs continue on with development.
    No, you don't get it (yet).

    Let me explain:

    1) You think specialized vs. general purpose are some kind of rivals. They are not. From now on (with the exception of the first-generation Bulldozer), Intel and AMD will release only APUs. You will not be able to buy standalone CPUs anymore; forget it. That's it. There is no antagonism between the two. They will cooperate *on-die*.

    2) Since CPUs cannot advance faster anymore, and squeezing more cores into a die reaches diminishing returns, the solution will be the combination of CPU + GPU. The CPU will act as a kind of coordinator, while the bulk of the heavy calculations will happen on the GPU side. This will happen both for graphics and for GPGPU.

    3) As lithography advances and transistors shrink, the GPU will cover more and more of the APU die. As I said, it makes no sense to put more CPU cores there. You have to make something out of those transistors, and that something is a bigger GPU... Eventually we will reach a point where the CPU could be 1/4 the size of the GPU part, or less...

    When we reach this point, it will make far more sense to simply put more APUs inside a PC for increased performance. It will make no sense to use GPUs alone. Only NVIDIA currently lacks an APU solution, and if it does not solve this soon, NVIDIA will disappear.

    Of course, for the GPGPU part we need better OpenCL support, but be patient; it will come from both Intel and AMD...
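    As a small illustration of that on-die cooperation from the programming side, OpenCL already exposes the CPU and the GPU as peer devices under one API. A minimal enumeration sketch, assuming an OpenCL runtime is installed (what it prints obviously depends on the drivers present):

    Code:
    /* List every OpenCL device on every platform. On an APU system the CPU
     * and the integrated GPU typically show up side by side.
     * Build: cc cldevices.c -lOpenCL
     */
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id plats[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, plats, &nplat);

        for (cl_uint p = 0; p < nplat; p++) {
            cl_device_id devs[16];
            cl_uint ndev = 0;
            clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);

            for (cl_uint d = 0; d < ndev; d++) {
                char name[256];
                cl_device_type type;
                clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
                printf("%s device: %s\n",
                       (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                       (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "Other",
                       name);
            }
        }
        return 0;
    }

    A scheduler that hands each kernel to whichever of those devices finishes it sooner is exactly the "CPU as coordinator" model described above.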



  • deanjo
    replied
    Originally posted by TemplarGR View Post
    You forget one thing:

    GPU power progresses far more rapidly than CPU power.
    I'm not forgetting anything; I've already said that specialized solutions aren't going to sit on their asses while general-purpose CPUs continue on with development.

    We cannot increase the MHz and/or core count much. Of course, other improvements could increase the IPC, but in general, with each process node from now on, CPU performance increases only by about 10-20% while GPU performance increases by 100%. It is far easier to increase GPU performance at this point.
    Which is why I said that a general purpose CPU is not the solution for the high performance crowd.

    Eventually we will reach a point where the ratio of CPU to GPU performance inside an APU will reach a convenient number, where it will be better to simply use more APUs than discrete GPUs.
    IMHO, with the HPC crowd especially, the CPU will more or less become merely another component on the board that handles delegation of computing to the various discrete devices, and will become nothing more than an extension of the motherboard chipset.

