AMD Publishes Open-Source Linux HSA Kernel Driver

  • Bucic
    replied
    (Sea Islands and up)
    Well shiiit... :/

  • ShFil
    replied
    "Does that mean that before, if you had an APU, some of your RAM was allocated to GPU tasks at startup, and when the CPU needed the GPU to do something then it had to transfer the data from the memory addresses used by the CPU to the parts used by the GPU (even if that was on the same physical stick of RAM)?"

    Nope, Kaveri and the CPU on AM1 don't have this problem. Older AMD APUs and all Intel ones do, but when the Kaveri GPU computes something and puts the results in RAM, the CPU can read them directly afterwards instead of asking the GPU for the outcome.
    All of the gain comes from removing those round trips.
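
    As a rough illustration of that zero-copy pattern, here is a minimal OpenCL 2.0 shared-virtual-memory sketch, not taken from the driver itself; the device selection, the toy "square" kernel and the fine-grained SVM assumption are all placeholders, and error checking is omitted:

    Code:
    #include <stdio.h>
    #include <CL/cl.h>

    #define N 1024

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueueWithProperties(ctx, device, NULL, NULL);

        /* One allocation that the CPU and GPU both see at the same address. */
        float *data = clSVMAlloc(ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                 N * sizeof(float), 0);

        /* CPU fills the buffer in place -- no clEnqueueWriteBuffer copy. */
        for (int i = 0; i < N; ++i)
            data[i] = (float)i;

        const char *src =
            "__kernel void square(__global float *d) {"
            "    int i = get_global_id(0); d[i] = d[i] * d[i];"
            "}";
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel kern = clCreateKernel(prog, "square", NULL);

        clSetKernelArgSVMPointer(kern, 0, data);
        size_t gsize = N;
        clEnqueueNDRangeKernel(queue, kern, 1, NULL, &gsize, NULL, 0, NULL, NULL);
        clFinish(queue);

        /* CPU reads the GPU's result directly -- no clEnqueueReadBuffer copy. */
        printf("data[3] = %f\n", data[3]);

        clSVMFree(ctx, data);
        return 0;
    }

    OpenCL 2.0 with SVM support is exactly the "tbd" userspace piece discussed elsewhere in this thread.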

  • pixo
    replied
    Originally posted by Nille View Post
    I doubt that HSA is a big benefit for you here. The GPUs have a dedicated hardware encoder, and OpenCL and H.264 didn't work well together.
    Wasn't the problem that some parts of the work suit the parallel power of the GPU while others suit a single-threaded CPU, and that moving the data between them was expensive?
    Or was that only the case for decoding?
    HSA removes this problem, and you can also use the CPU+GPU combination for VP9, H.265, or other codecs that don't exist yet and so can't be incorporated into an ASIC.
    Also, the hardware encoder is not that great, so people prefer x264 (software encoding) over the ASIC in GPUs.

  • Nille
    replied
    Originally posted by RoboJ1M View Post
    whose primary heavy-lifting task is video transcoding, could be replaced with an HSA-enabled APU.
    I doubt that HSA is a big benefit for you here. The GPUs have a dedicated hardware encoder, and OpenCL and H.264 didn't work well together.

  • Kraut
    replied
    Originally posted by kaprikawn View Post
    So if I understand this correctly, it means that the CPU and GPU portions of an APU can both access the same memory (like they've been banging on about for the PS4 and Xbone 180)?

    Does that mean that before, if you had an APU, some of your RAM was allocated to GPU tasks at startup, and when the CPU needed the GPU to do something then it had to transfer the data from the memory addresses used by the CPU to the parts used by the GPU (even if that was on the same physical stick of RAM)?

    If my understanding is correct, I'm guessing it has no benefit for users with a CPU and a dedicated GPU where, obviously, the GPU has its own RAM on the card?
    You can access the same memory with non-HSA AMD or Intel APUs, too. The problem is cache coherency.
    After writing data with the CPU you must flush the CPU cache lines to main memory and then invalidate the GPU cache lines before issuing GPU commands.
    The same goes for reading GPU-generated data with the CPU: first flush the GPU cache lines, then invalidate the CPU cache lines.

    You have to do the same with discrete (slot) GPUs that hold their own memory.

    The CPU cache flushing/invalidating is done automatically by the driver/OS, but in OpenGL with ARB_buffer_storage, for example, you can choose to trigger glMemoryBarrier manually.
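
    To make that last point concrete, here is a rough C sketch of the ARB_buffer_storage pattern: a persistently mapped buffer where the application itself issues glMemoryBarrier and a fence. Context creation, extension loading (e.g. via GLEW) and the compute program are assumed, and the dispatch size is just a placeholder:

    Code:
    #include <GL/glew.h>

    #define BUF_SIZE (1024 * sizeof(float))

    /* 'program' is assumed to be a compute shader that writes BUF_SIZE bytes
     * into the SSBO bound at binding point 0. */
    void read_back_gpu_results(GLuint program)
    {
        GLuint buf;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf);

        /* Immutable storage that stays mapped on the CPU side. */
        glBufferStorage(GL_SHADER_STORAGE_BUFFER, BUF_SIZE, NULL,
                        GL_MAP_READ_BIT | GL_MAP_PERSISTENT_BIT);
        float *cpu_view = glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, BUF_SIZE,
                                           GL_MAP_READ_BIT | GL_MAP_PERSISTENT_BIT);

        /* GPU writes its results into the buffer. */
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf);
        glUseProgram(program);
        glDispatchCompute(1024 / 64, 1, 1);

        /* Make the shader writes visible to the persistent mapping ... */
        glMemoryBarrier(GL_CLIENT_MAPPED_BUFFER_BARRIER_BIT);

        /* ... and wait until the GPU has actually finished writing. */
        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000);
        glDeleteSync(fence);

        /* Only now is it safe for the CPU to read through cpu_view[]. */
        float first = cpu_view[0];
        (void)first;
    }

    HSA's coherent shared memory is meant to remove the cache flush/invalidate part of this bookkeeping; you would still need to wait for the GPU to finish before reading.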

  • RoboJ1M
    replied
    I've been hoping for so long that my old Phenom II based server, whose primary heavy-lifting task is video transcoding, could be replaced with an HSA-enabled APU.

    Watching the required building blocks slowly build from the hardware up to userspace is very exciting.

    So:
    1. Hardware support - check
    2. Kernel support - check
    3. OpenCL 2 - tbd
    4. Compilers - tbd
    5. libav/x265 + parallelism - tbd


    Did I miss anything?

    So damn exciting!

    The only question that remains is: do I risk buying an APU now and lose out on getting the latest-gen kit when the blocks are in place?

    If I knew that whatever mobo I get would have a compatible socket for a few years, I'd sell my cat for an APU right now.

    (=o.O= -- mrrOw?)

  • kaprikawn
    replied
    So if I understand this correctly, it means that the CPU and GPU portions of an APU can both access the same memory (like they've been banging on about for the PS4 and Xbone 180)?

    Does that mean that before, if you had an APU, some of your RAM was allocated to GPU tasks at startup, and when the CPU needed the GPU to do something then it had to transfer the data from the memory addresses used by the CPU to the parts used by the GPU (even if that was on the same physical stick of RAM)?

    If my understanding is correct, I'm guessing it has no benefit for users with a CPU and a dedicated GPU where, obviously, the GPU has its own RAM on the card?

  • fithisux
    replied
    Beneficial for others?

    Will it be beneficial, for example, for TI's multicore CPU/DSP hybrids? I think they also have an HSA platform.

  • RussianNeuroMancer
    replied
    Thanks, Bridgman! Can't wait to see HSA in real applications

  • Nille
    replied
    Originally posted by Kivada View Post
    Got any multimedia tasks? The GPU would absolutely destroy any CPU on the market with ease in these tasks. Think how VDPAU/VA-API help with video playback, and apply that to editing and transcoding files, which are very time-consuming tasks, especially as we move to 4K and eventually 8K video.
    For this task the GPUs have a separate ASIC. Only things like deinterlacing or other filters are processed on the GPU shaders, not the decoding and encoding itself.
