Why More Companies Don't Contribute To X.Org


  • #61
    Originally posted by deanjo View Post
    Who said the alternative CPU couldn't be placed on the GPU? The key thing here is that the CPU core doesn't even have to be on the motherboard, and allocation of resources takes very little bandwidth for what those CPU cores would actually have to do. Heck, you could even in theory utilize a master/slave/slave/slave/etc. setup that grows with the needs.
    It seems you have arrived at the same conclusion without knowing it... that CPU and GPU cores will merge, and processors will become heterogeneous.

    There is no point in placing a CPU on the GPU. It is an oxymoron: if you do that, you end up with a mainboard (the GPU PCB) with a CPU, RAM (GDDR), and a GPU. You accomplish nothing different at all. OSes will still need to boot from a CPU core, not a GPU core.

    As I said, Intel/AMD have the advantage, and they will push things this way in the future. NVIDIA, the other major player, doesn't have a CPU. Unless it develops one, or cooperates with a CPU player, it will perish eventually.

    We are talking about a decade timeframe. This won't happen all at once. I believe we will need at least 5-6 years in order to begin using additional APUs instead of a dedicated GPGPU. But I believe that by the time 2020 ends, we will remember dedicated GPUs the way we remember dedicated FPUs now: like relics.
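
    For what it's worth, this "heterogeneous" direction is already visible in APIs like OpenCL, where CPU and GPU cores sit behind a single programming model. Here is a minimal sketch of that idea, assuming an OpenCL runtime and the pyopencl package are installed (nothing here is specific to any vendor's roadmap):

    ```python
    # Enumerate every OpenCL compute device: on an APU-style system the
    # same platform typically exposes both CPU and GPU device types.
    import pyopencl as cl  # assumes an OpenCL runtime is installed

    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            kind = cl.device_type.to_string(dev.type)  # "CPU", "GPU", ...
            print(f"{platform.name}: {dev.name} "
                  f"({kind}, {dev.max_compute_units} compute units)")
    ```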

    Comment


    • #62
      Originally posted by TemplarGR View Post
      There is no point in placing a CPU on the GPU. It is an oxymoron: if you do that, you end up with a mainboard (the GPU PCB) with a CPU, RAM (GDDR), and a GPU. You accomplish nothing different at all. OSes will still need to boot from a CPU core, not a GPU core.
      Yeah, I guess we (consumers) can live with replacing the graphics chip being slightly more difficult. (It's not like you need to yank the graphics card in and out all the time, unless you're a developer doing QA.) That is, it's somewhat simpler to detach a discrete card than a CPU, at least with modern approaches.

      Comment


      • #63
        Originally posted by TemplarGR View Post
        It seems you have arrived at the same conclusion without knowing it... that CPU and GPU cores will merge, and processors will become heterogeneous.

        There is no point in placing a CPU on the GPU. It is an oxymoron: if you do that, you end up with a mainboard (the GPU PCB) with a CPU, RAM (GDDR), and a GPU. You accomplish nothing different at all. OSes will still need to boot from a CPU core, not a GPU core.

        As I said, Intel/AMD have the advantage, and they will push things this way in the future. NVIDIA, the other major player, doesn't have a CPU. Unless it develops one, or cooperates with a CPU player, it will perish eventually.

        We are talking about a decade timeframe. This won't happen all at once. I believe we will need at least 5-6 years in order to begin using additional APUs instead of a dedicated GPGPU. But I believe that by the time 2020 ends, we will remember dedicated GPUs the way we remember dedicated FPUs now: like relics.
        With the unprecedented push that we are seeing in ARM this year, it is also not out of the realm of possibility that such a CPU solution takes over. In this arena NVIDIA does have a CPU. The advantage of having it on a PCB is that you get relatively easy scalability, and given the advances in items like Light Peak, I would be very surprised if we were still limited to current bus bottlenecks. Going a card-based route also allows for non-uniform configurations (much like what we are currently seeing in Lucid's attempts). To me, NVIDIA's biggest threat is coming from TI with their future HPC DSPs.

        Comment


        • #64
          As far as the timeframe goes, I can't see anything radically changing with respect to discrete solutions for at least another 10-15 years.

          Comment


          • #65
            Originally posted by MaestroMaus View Post
            Ow waa! You're taking yourself too seriously if you can't take a joke.
            I agree with ethana - it might have been intended as a joke, but it certainly wasn't a funny one. In the commercial world, that kind of misuse of admin privileges would be grounds for instant dismissal. And if anything, it's worse in the likes of FD.o, where trust is *everything*.

            Comment


            • #66
              Originally posted by deanjo View Post
              Who said the alternative CPU couldn't be placed on the GPU? The key thing here is that the CPU core doesn't even have to be on the motherboard, and allocation of resources takes very little bandwidth for what those CPU cores would actually have to do. Heck, you could even in theory utilize a master/slave/slave/slave/etc. setup that grows with the needs.

              The next step after putting the GPU on die is just to suck the rest of the motherboard on there.

              With embedded systems it's called a "System on a Chip". You have your graphics, processor, memory controller, drive controllers, wireless controllers, Ethernet, etc. All on one chip.

              Then the mainboard becomes little more than a breakout board to connect external inputs/outputs, house memory, handle voltage regulation, etc.

              That's the future of the PC also.

              With a typical embedded system, per-unit cost is critical, as are energy efficiency and performance per watt, which is why they took this approach long ago.
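
              You can actually see this layout from software: on an ARM Linux board, the kernel describes the SoC's on-chip peripherals through the device tree. A rough sketch, assuming a device-tree system (the /proc/device-tree path and the "compatible" property are standard, but which nodes exist, including the top-level "soc" node this assumes, depends entirely on the SoC):

              ```python
              # List the SoC's integrated peripherals from the device tree.
              # Only works on device-tree systems (typical ARM boards).
              from pathlib import Path

              DT = Path("/proc/device-tree")  # alias of /sys/firmware/devicetree/base

              # Most SoCs group on-chip devices under a top-level "soc" node.
              for compat in sorted(DT.glob("soc*/*/compatible")):
                  node = compat.parent.name  # e.g. "ethernet@1c30000" (illustrative)
                  drivers = compat.read_bytes().decode().split("\x00")
                  print(f"{node}: {', '.join(d for d in drivers if d)}")
              ```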

              Comment


              • #67
                Originally posted by drag View Post
                The next step after putting the GPU on die is just to suck the rest of the motherboard on there.

                With embedded systems it's called a "System on a Chip". You have your graphics, processor, memory controller, drive controllers, wireless controllers, Ethernet, etc. All on one chip.

                Then the mainboard becomes little more than a breakout board to connect external inputs/outputs, house memory, handle voltage regulation, etc.

                That's the future of the PC also.

                With a typical embedded system, per-unit cost is critical, as are energy efficiency and performance per watt, which is why they took this approach long ago.
                True, but as we all know, SoCs make some serious compromises on performance and expansion options. Heck, slapping a graphics solution on a CPU isn't even anything new; Cyrix did it about a decade ago with the MediaGX chip. Unfortunately, the same reasons why that did not take off still exist today. Great for compact "throw-away" systems, but not too practical where large scaling is concerned.

                Comment


                • #68
                  Originally posted by Delgarde View Post
                  I agree with ethana - it might have been intended as a joke, but it certainly wasn't a funny one. In the commercial world, that kind of misuse of admin privileges would be grounds for instant dismissal. And if anything, it's worse in the likes of FD.o, where trust is *everything*.
                  Trust is weakness. Aptly shown.

                  Comment


                  • #69
                    Back to the original topic:

                    Q: Why don't more companies contribute to X.Org?
                    A: Because they have better things to do with their time and money.

                    Yes, it's true. X is a huge, 23-year-old beast that falls squarely into the "good enough" and "time-proven" categories. The former means that companies are happy to use it as-is; the latter means they are unlikely to hit bugs so serious that they have to hack X directly.

                    Comment


                    • #70
                      Originally posted by drag View Post
                      The next step after putting the GPU on die is just to suck the rest of the motherboard on there.

                      With embedded systems it's called a "System on a Chip". You have your graphics, processor, memory controller, drive controllers, wireless controllers, Ethernet, etc. All on one chip.

                      That's the future of the PC also.
                      No, this won't happen for PCs. You see, the idea will be to use multiple APUs inside a PC for additional performance, the way we use CrossFire now for additional graphics performance.

                      If PC processors become systems-on-a-chip, then much functionality will be duplicated and inefficient. There is no point in having four drive controllers, for example, if you have four APUs.
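
                      To make the multi-APU idea concrete, here is a toy sketch that treats each APU as one worker and splits a data-parallel job across them, the way CrossFire splits rendering across GPUs. The four-worker count and the workload are made up for illustration; plain processes stand in for real devices:

                      ```python
                      # Split one data-parallel job across N workers, one per
                      # hypothetical APU.
                      from multiprocessing import Pool

                      def process_chunk(chunk):
                          # Stand-in for the work an APU's GPU cores would do.
                          return sum(x * x for x in chunk)

                      if __name__ == "__main__":
                          data = list(range(1_000_000))
                          n_apus = 4  # made-up APU count
                          chunks = [data[i::n_apus] for i in range(n_apus)]
                          with Pool(n_apus) as pool:
                              partials = pool.map(process_chunk, chunks)
                          print(sum(partials))  # combine partial results
                      ```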

                      System on a chip will be the future for all mobile devices, though. In the end, Intel and AMD will produce only two (general) lines of products:

                      APUs and SoCs, in various editions depending on the needs.

                      Comment
