Bye DirectX?


  • #31
    Originally posted by blacknova View Post
    Hah?
    That is just BIOS programming, it's not that much different from API calls. And besides, the DOS video interrupt is 10h.
    Close-To-Metal would be direct port programming.
    Ahh, brings me back to when I encountered linear VBE framebuffers with one byte per pixel, sooo much easier than messing with the Amiga's bitplanes. Together with the flat memory mode (segment registers BEGONE!) it sure made PC programming fun.
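
    To make that concrete, a minimal sketch (assuming a 16-bit DOS compiler such as Turbo C or Open Watcom, where int86() and MK_FP() are available; it uses the classic VGA mode 13h rather than a true VBE linear framebuffer, but the one-byte-per-pixel idea is the same):

    [CODE]
/* Set VGA mode 13h (320x200, one byte per pixel) via BIOS interrupt 10h,
   then plot pixels straight into the linear framebuffer at segment A000h.
   No bitplane masking, just offset arithmetic. */
#include <dos.h>

static unsigned char far *vram;

static void set_mode(unsigned char mode)
{
    union REGS r;
    r.h.ah = 0x00;        /* BIOS function 00h: set video mode */
    r.h.al = mode;        /* 0x13 = 320x200, 256 colors        */
    int86(0x10, &r, &r);  /* the DOS video interrupt, 10h      */
}

static void put_pixel(int x, int y, unsigned char color)
{
    vram[y * 320 + x] = color;  /* one byte per pixel */
}

int main(void)
{
    int x;
    vram = (unsigned char far *)MK_FP(0xA000, 0);
    set_mode(0x13);
    for (x = 0; x < 320; ++x)
        put_pixel(x, 100, 15);  /* a horizontal white line */
    return 0;
}
    [/CODE]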

    • #32
      Originally posted by XorEaxEax View Post
      The beauty of OpenCL would be that programs written in it run on both CPU and GPU using the same code.
      If I remember correctly, there are parts of OpenCL that are covered by software patents, and as a result support for it cannot be completely implemented in Free Software projects that are to be distributed in countries that recognize such patents, such as the US.

      • #33
        Originally posted by TwistedLincoln View Post
        If I remember correctly, there are parts of OpenCL that are covered by software patents, and as a result support for it cannot be completely implemented in Free Software projects that are to be distributed in countries that recognize such patents, such as the US.
        That sounds really bad, do you have any links? I did a quick search and only came up with Apple owning the OpenCL trademark (not surprising since they were the ones who first submitted the initial spec), but nothing on any patent problems. So far Apple (obviously), Intel, AMD/ATI and NVidia (perhaps reluctantly, they have their own CUDA) are on board with OpenCL, with Apple, Intel and AMD actively pushing the framework for their products.

        • #34
          Originally posted by XorEaxEax View Post
          NVidia (perhaps reluctantly, they have their own CUDA)
          They were the ones who worked closely with Apple during the development of OpenCL.

          • #35
            Haha

            Originally posted by deanjo View Post
            They were the ones who worked closely with Apple during the development of OpenCL.
            Pity for nVidia that the new MacBook Pros, iMacs and Mac Pros all ship with ATI cards now :P

            • #36
              To me, the likely reason AMD and Intel are pouring money into OpenCL is because it fits with their GPU+CPU Fusion/Larrabee platforms which I'm guessing is where computing is heading. Having both GPU and CPU on the same chip allows for access to the same shared memory caches and removes the need to ship data on the (comparatively very slow) PCI bus.
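
              As a concrete illustration of the no-copy angle, a minimal sketch (assuming an OpenCL 1.x runtime linked with -lOpenCL; error handling omitted, and whether the mapping is genuinely zero-copy is left to the implementation): with CL_MEM_ALLOC_HOST_PTR the runtime may place a buffer where both the CPU and an on-die GPU can reach it, so mapping it replaces an explicit transfer across the bus.

              [CODE]
/* Allocate a buffer the host can map directly instead of copying into it. */
#include <CL/cl.h>

int main(void)
{
    enum { N = 1024 };
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                N * sizeof(float), NULL, NULL);

    /* Map the buffer and write through the pointer; on shared-memory
       hardware no bus transfer is needed for this. */
    float *p = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_WRITE,
                                  0, N * sizeof(float), 0, NULL, NULL, NULL);
    for (int i = 0; i < N; ++i)
        p[i] = (float)i;
    clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);

    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
              [/CODE]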

              And then it's a matter of adding more and more CPU (and perhaps GPU?) cores onto the chip to be able to compete with discrete graphics in performance, with OpenCL being the framework that enables efficient use of these cores in parallel (there are others, but they are tied to specific platforms: CUDA to NVidia, DirectCompute to Windows).

              The loser here would be NVidia since afaik they do not have a license to implement the x86/x64 instruction set (which is just insane, talk about closing the market to competition!) and are therefore unable to compete by offering the same type of solution. If this is indeed the case then NVidia must be doing everything in their power to make sure that their dedicated GPUs stay well ahead of the CPU+GPU solutions from Intel/AMD.

              Again this is just me speculating, I'm hardly an expert in these matters.

              • #37
                Originally posted by zeealpal View Post
                Pity for nVidia that the new MacBook Pros, iMacs and Mac Pros all ship with ATI cards now :P
                It comes and goes in cycles at Apple as to whose graphics they use, ever since the x86 switchover.

                • #38
                  Originally posted by XorEaxEax View Post
                  To me, the likely reason AMD and Intel are pouring money into OpenCL is because it fits with their GPU+CPU Fusion/Larrabee platforms which I'm guessing is where computing is heading.
                  Actually Intel wasn't even an initial supporter of OpenCL. Before Larrabee got killed they had nothing to do with the OpenCL spec; they are a late adopter. As for AMD pumping money into it, that is not surprising, as Stream never really caught on and this was a chance for a clean break from it.

                  Having both GPU and CPU on the same chip allows for access to the same shared memory caches and removes the need to ship data on the (comparatively very slow) PCI bus.
                  PCI-e bus actually, but I get what you are saying. On the flip side, however, you are sharing the main system memory, which means less for other uses, and it is often slower and narrower than the memory on your GPGPU cards.

                  And then it's a matter of adding more and more cpu (and perhaps gpu?) cores onto the chip to be able to compete with discrete graphics in performance, with OpenCL being the framework to enable efficient use of these cores in parallell (although there are others, they are tied to specific platforms like CUDA (NVidia), DirectCompute (Windows) ).
                  This is already the case. The beauty of OpenCL is that it is not limited to one device at a time. Items can be tasked off to the solution "most appropriate" for the task.
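
                  A minimal sketch of that point (again assuming an OpenCL 1.x runtime, -lOpenCL, no error handling): the same kernel source builds unchanged for whichever device type you request, so swapping CPU for GPU is a one-constant change.

                  [CODE]
/* Build one kernel for a caller-chosen device type: CPU or GPU. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v, float k) {\n"
    "    size_t i = get_global_id(0);\n"
    "    v[i] = v[i] * k;\n"
    "}\n";

int main(void)
{
    /* Swap CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU: the kernel source
       and the rest of the host code stay exactly the same. */
    cl_device_type type = CL_DEVICE_TYPE_GPU;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, type, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    if (clBuildProgram(prog, 1, &dev, NULL, NULL, NULL) == CL_SUCCESS)
        printf("kernel built for the selected device type\n");

    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
                  [/CODE]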

                  The losers here would be NVidia since afaik they do not have a licence to implement the x86/x64 instruction set (which is just insane, talk about closing the market for competition!) and is therefore unable to compete by offering the same type of solution. If this is indeed the case then NVidia must doing anything in their power to make sure that their dedicated GPU's stay well ahead of the CPU+GPU solutions from Intel/AMD.
                  They do however have ARM, which could fill those needs. Also, with Fermi the range of instruction types it can process jumped dramatically, which reduces the need to rely on an external, additional type of processor/DSP.

                  • #39
                    Originally posted by XorEaxEax View Post
                    Having both GPU and CPU on the same chip allows for access to the same shared memory caches and removes the need to ship data on the (comparatively very slow) PCI bus.
                    And bus speed has little impact on graphics performance, while memory speed has traditionally been the primary limit; hence replacing a slow bus with slow memory has rarely been a good idea. Though that's less true today, when so much of the work required to render 3D graphics is shader-based rather than texture-based.
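
                    For rough scale (illustrative figures from public specs of that era, not from this thread): PCIe 2.0 x16 peaks around 8 GB/s per direction, dual-channel DDR3-1333 around 21 GB/s, while the GDDR5 on a contemporary discrete card is well past 100 GB/s; moving a GPU from dedicated GDDR5 onto shared system DDR3 really would trade a slow bus for slow memory.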

                    Having the GPU on-chip is really only a performance benefit if you can use it as a co-processor rather than for 3D rendering. A separate CPU and GPU built by competent design teams will otherwise always be capable of faster 3D rendering than a single chip because you have twice as many transistors and can consume far more power.

                    • #40
                      A really good and well-written article. Thanks for sharing it and letting us know about this.
