AMD Publishes Open-Source Linux HSA Kernel Driver


  • #11
    Very excited to hear about this release. Glad to see AMD making strides, however slow they are in coming, to push this support out to the open-source community. Hopefully we can put it to good use; I've been waiting to take advantage of this in some of my own scientific modelling software.



    • #12
      OpenCL support with the open-source drivers will make me very pleased. It'll be nice to be able to dedicate my second GPU to that until CrossFire is implemented in the open-source drivers.
      Are there any test results currently available that show the performance gains from HSA?



      • #13
        At least nobody has claimed this time that Intel has a better driver and is ahead; I just learned, while shopping for an x86 tablet, how good Intel's open-source support really is.

        Congrats AMD, go go go.

        Funnily enough, AMD has no Linux tablet support either, because there are basically no AMD tablets to support.



        • #14
          Wonder if this will work for Movit-enabled kdenlive?

          I don't have hardware capable of using this driver (Bulldozer with a Radeon HD 6750 in my video editor), but if this ever works for Kdenlive, that shared memory address space would be a game changer. Until now, GPU-accelerated effects in video editors have meant slower render times due to CPU-GPU memory transfers. In the development version of Kdenlive, they nearly double render times if Movit effects are used, and the GLSL backend that supports them increases render time by about 50% (3/2) in all cases except a straight transcode job. With this bottleneck, CPU load on an 8-core machine can be as low as 25-30%!

          If this bottleneck is totally removed, the A10-7850K, with only half as many cores, might be able to blow away Bulldozer in Kdenlive render time so long as Movit effects are being used. For a 95-watt APU to defeat a 125-watt nominal (far higher when overclocked!) CPU paired with an 80-watt GPU would be a major, major achievement. Unfortunately I am now out of funds for any further experimentation with new hardware, but if my rig ever dies I will know what to seek in a replacement board and CPU, and they won't come from Intel.



          • #15
            Originally posted by dungeon View Post
            Not sure; there are also some PRO models (those with a B) listed on AMD's site which can run at as low as a 35 W TDP. I guess those reduce the Turbo Core frequency, or maybe disable it, while being optimized not to lose too much performance... who knows.

            http://www.amd.com/en-us/products/pr...p/a-series-apu
            The "Pro" versions are probably the same as the existing parts but rated as workstation class graphics aka FireGL variants. The chip may be identical and the only difference would be what parameters the driver sets.



            • #16
              Originally posted by Kivada View Post
              The "Pro" versions are probably the same as the existing parts but rated as workstation class graphics aka FireGL variants. The chip may be identical and the only difference would be what parameters the driver sets.
              Yep, PRO is the same part aimed at a different market segment, so it is actually the business class (that is what the B stands for), and I guess it is only available to OEMs... those can be found in products like this:



              Someone will say it makes a nice HTPC, but the price of the HP Elite series is not so nice.



              • #17
                In layman's terms, aside from compute and clustering, it sounds as if it might speed up data transfers between the CPU and GPU? Is that the benefit for the typical user, or is there none?



                • #18
                  Originally posted by yoshi314 View Post
                  In layman's terms, aside from compute and clustering, it sounds as if it might speed up data transfers between the CPU and GPU? Is that the benefit for the typical user, or is there none?
                  Yeah, the nice thing is that you don't have to transfer between them at all -- the GPU (or DSP, etc.) runs in the same demand-paged virtual address space as the CPU, so they can share data structures without much in the way of special programming.
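
                  To make that concrete, here is a minimal sketch of what shared memory looks like from the host side, using OpenCL 2.0 fine-grain shared virtual memory as a stand-in for the HSA runtime (the context, queue, and "scale" kernel are assumed to exist; this is an illustration, not AMD's actual driver API):

                  /* Sketch: with fine-grain SVM, the CPU and GPU dereference the
                   * same pointer, so no explicit copies are needed. Assumes the
                   * device reports fine-grain buffer SVM support. */
                  #include <CL/cl.h>
                  #include <stddef.h>

                  void run_on_shared_memory(cl_context ctx, cl_command_queue q,
                                            cl_kernel scale_kernel, size_t n)
                  {
                      /* One allocation, visible to both CPU and GPU. */
                      float *data = (float *)clSVMAlloc(ctx,
                          CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                          n * sizeof(float), 0);
                      if (!data)
                          return;

                      /* The CPU writes the data structure directly... */
                      for (size_t i = 0; i < n; i++)
                          data[i] = (float)i;

                      /* ...and the GPU kernel consumes the very same pointer. */
                      clSetKernelArgSVMPointer(scale_kernel, 0, data);
                      clEnqueueNDRangeKernel(q, scale_kernel, 1, NULL, &n, NULL,
                                             0, NULL, NULL);
                      clFinish(q);

                      /* The CPU reads results in place -- no read-back copy. */
                      float first = data[0];
                      (void)first;

                      clSVMFree(ctx, data);
                  }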



                  • #19
                    Originally posted by yoshi314 View Post
                    In layman's terms, aside from compute and clustering, it sounds as if it might speed up data transfers between the CPU and GPU? Is that the benefit for the typical user, or is there none?
                    There would be tons; every task that currently bogs down multiple CPU cores would benefit greatly from this, and the other technologies they are working on would vastly improve Java performance, among other things. HSA is not just for AMD hardware; pretty much all of the major ARM players are involved as well. It's designed to let the CPU, GPU, and DSP all work together, and it will be coming to a cellphone near you soon enough.

                    Single-core IPC has pretty much hit a wall, even on the Intel side; there aren't large gains every other generation like in the old days. Most tasks that can be broken up into multiple threads don't gain a whole lot from a handful of general-purpose CPU cores, but they make huge gains when broken up into hundreds of pieces and sent off to the GPU.

                    What kills the performance of current GPGPU implementations is the lack of unified memory and cache coherency between the CPU and GPU. The data has to be copied into main system memory, the CPU has to decide to send it out to the GPU, it has to travel across the PCIe bus into GPU memory, get worked on and placed back into GPU memory, then be sent back across PCIe to CPU memory, where the CPU finally checks it. All of this adds latency, wiping out a lot of the gains from processing the data on the GPU and limiting GPGPU to non-real-time tasks like Bitcoin mining.
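
                    For contrast with the shared-pointer sketch above, here is roughly what that round trip looks like in today's copy-based OpenCL model (again assuming an existing context, queue, and placeholder "scale" kernel; each enqueue below is one leg of the trip just described):

                    /* Sketch of the copy-based flow: separate GPU allocation,
                     * copy out, compute, copy back. Every step adds latency. */
                    #include <CL/cl.h>
                    #include <stddef.h>

                    void run_with_copies(cl_context ctx, cl_command_queue q,
                                         cl_kernel scale_kernel,
                                         float *host_data, size_t n)
                    {
                        size_t bytes = n * sizeof(float);

                        /* 1. Allocate a separate buffer in GPU memory. */
                        cl_mem dev_buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                                        bytes, NULL, NULL);

                        /* 2. Copy the host data across PCIe into GPU memory. */
                        clEnqueueWriteBuffer(q, dev_buf, CL_TRUE, 0, bytes,
                                             host_data, 0, NULL, NULL);

                        /* 3. The GPU works on its private copy. */
                        clSetKernelArg(scale_kernel, 0, sizeof(cl_mem), &dev_buf);
                        clEnqueueNDRangeKernel(q, scale_kernel, 1, NULL, &n, NULL,
                                               0, NULL, NULL);

                        /* 4. Copy the results back across PCIe so the CPU
                         * can check them. */
                        clEnqueueReadBuffer(q, dev_buf, CL_TRUE, 0, bytes,
                                            host_data, 0, NULL, NULL);

                        clReleaseMemObject(dev_buf);
                    }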

                    One big gain for end users would be GPU-based physics in games, and not just the crappy, non-persistent, eye-candy-only kind you see with PhysX. A game engine written to make use of the GPU in an APU would allow all kinds of fun things to be done to the game environment; think of a war game with a fully destructible city and more realistic (or not) hit effects on characters' bodies.

                    Got a multimedia task? The GPU would absolutely destroy any CPU on the market here. Think of how VDPAU/VA-API help with video playback, then apply that to editing and transcoding files, which is very time-consuming, especially as we move to 4K and eventually 8K video.



                    • #20
                      Questions for Mr. Bridgman: what are the next steps after the kernel driver for comprehensive HSA support? Will there be patches to compilers like GCC or LLVM? What modifications would existing code need to make use of HSA?
                      Probably too wide a question, but let's try...
