AMDGPU Patches Prepping JPEG Support For "Video Core Next"


    Phoronix: AMDGPU Patches Prepping JPEG Support For "Video Core Next"

    AMD's Boyuan Zhang has sent out an initial set of 18 patches adding JPEG handling to the AMDGPU kernel driver for VCN "Video Core Next", the new media encode/decode block found with Raven Ridge APUs...


  • #2
    Though most of you don't care about JPEG capabilities, there is some sorcery going on here.



    • #3
      It's not that we don't care; we just don't have any way to use it, because no software supports it. Intel has had JPEG encode/decode hardware for some time, but I have yet to see any program on Windows or Linux actually use that block, because nobody has incorporated code to use it instead of the CPU-only method that has been in use for decades.



      • #4
        JPEG encode would be interesting for some kind of CMS doing image resizing on the fly. I see that it also supports the new kid on the block, HEVC?



        • #5
          Those who shoot high-res MJPEG videos certainly do care about hardware JPEG decoding. For single JPEG images, the overhead of sending it to the hardware and then fetching the result is probably larger than just decoding the thing on the CPU. But decoding a 4K or even 8K MJPEG stream is something different.



          • #6
            Originally posted by TheLexMachine View Post
            It's not that we don't care, we just don't have any way to use it because no software uses it. ... because nobody has incorporated code to use it instead of the CPU-only method that has been used for decades.
            Indeed. It might be interesting for mass JPEG creation, though one could wonder whether, in e.g. darktable, the bottleneck really is the final JPEG compression or rather the filters being applied.


            Originally posted by Gusar View Post
            Those who shoot high-res MJPEG videos
            Well, MJPEG was the first thing that came to mind when I read the news, but I have hardly seen that codec in use lately. Is there still hardware out there that creates MJPEG movies by default? If so, this would be really helpful. Ideally decode would be exposed via VDPAU and VA-API (and maybe OpenMAX), which would be nice and universal; encoding, on the other hand, has little to no hardware support in most programs. It may not be as configurable as software compression, but it is definitely far faster.




            • #7
              Originally posted by Gusar View Post
              Those who shoot high-res MJPEG videos certainly do care about hardware JPEG decoding.
              High-res MJPEG is pretty much non-existent - outside of MJPEG 2K's use in digital cinema - and MJPEG has largely been dropped from still cameras as a video format, with a few oddball exceptions like Canon's continued use of it in some of their DSLRs, despite the overwhelming use of H.264 in their video and still camera products. MJPEG's primary use - outside of digital cinema - is in networked video cameras, because it's a frame-by-frame codec with minimal encoding/decoding overhead. The whole point of adding JPEG encode/decode hardware was to give GPUs the ability to deal with high-resolution JPEG content from professional-grade cameras and to utilize MJPEG with built-in camera modules, while keeping almost all image processing on the GPU, so CPU power could be directed elsewhere, allowing low-end configurations like Intel's Atom netbook/nettop hardware to be significantly more useful to consumers.



              • #8
                Originally posted by Gusar View Post
                Those who shoot high-res MJPEG videos certainly do care about hardware JPEG decoding. For single JPEG images, the overhead of sending it to the hardware and then fetching the result is probably larger than just decoding the thing on the CPU. But decoding a 4K or even 8K MJPEG stream is something different.
                It seems like even on the consumer side there could be some benefit, assuming the setup cost isn't so high that it's not worth it when you're decoding, say, 200-300 thumbnails, especially if it means the bitmaps never have to touch main memory.



                • #9
                  Originally posted by Gusar View Post
                  Those who shoot high-res MJPEG videos certainly do care about hardware JPEG decoding. For single JPEG images, the overhead of sending it to the hardware and then fetching the result is probably larger than just decoding the thing on the CPU. But decoding a 4K or even 8K MJPEG stream is something different.
                  Consider the web-browser use case: Chromium doesn't do much in terms of hardware acceleration on GNU/Linux by default, but you can manually change flags and make "Graphics Feature Status" under chrome://gpu show "Hardware accelerated" for every item, at which point you're drawing the canvas and doing most of the work on the GPU. Most use cases won't fetch the result back - they just send it directly to the screen, the same way VA-API and VDPAU are mostly used to hardware-decode and directly present video frames.

                  Interestingly, both my Intel iGPU laptop and my AMDGPU desktop already show VAProfileJPEGBaseline under vainfo.

