
Linux's Stateless H.264 Decode Interface Ready To Be Deemed Stable


  • Linux's Stateless H.264 Decode Interface Ready To Be Deemed Stable

    Phoronix: Linux's Stateless H.264 Decode Interface Ready To Be Deemed Stable

    The Linux kernel's stateless video decoder interface is used for video decoding where no state needs to be kept between processed video frames and allows for independently decoding each video frame. The H.264 stateless decode interface for the Linux kernel has been in the works for a few years and is now deemed ready and stable for dealing with modern stateless codecs...


  • #2
    no state needs to be kept between processed video frames and allows for independently decoding each video frame
    I don't understand... why would I want a single frame from a video, or multiple frames but throwing away all state in between? I think I'm misunderstanding why/how this is useful.


    • #3
      Originally posted by Mathias View Post
      I don't understand... why would I want a single frame from a video, or multiple frames but throwing away all state in between? I think I'm misunderstanding why/how this is useful.
      That's the point: because each frame is stateless, the decoder is able to INDEPENDENTLY decode each and every frame that makes up a whole video/clip.


      • #4
        Originally posted by Mathias View Post
        I don't understand... why would I want a single frame from a video, or multiple frames but throwing away all state in between? I think I'm misunderstanding why/how this is useful.
        To quote the official docs:

        A stateless decoder is a decoder that works without retaining any kind of state between processed frames. This means that each frame is decoded independently of any previous and future frames, and that the client is responsible for maintaining the decoding state and providing it to the decoder with each decoding request. This is in contrast to the stateful video decoder interface, where the hardware and driver maintain the decoding state and all the client has to do is to provide the raw encoded stream and dequeue decoded frames in display order.
        So basically it simplifies the decoder side of the equation by making it the client's responsibility, rather than the decoder's, to keep the state information around.
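        To get a feel for what "the client owns the state" means, here is a minimal toy sketch in C. It is not the real V4L2 request API; the struct and function names are made up for illustration. The decode call is a pure function of (client-held state, frame data), so the driver side keeps nothing between calls:

        ```c
        #include <stdio.h>

        /* Toy stand-in for the per-request state a stateless decoder is
         * handed, e.g. SPS/PPS data and the reference frame list in H.264.
         * (Hypothetical names, not the kernel's actual structs.) */
        struct decode_state {
            int frame_num;       /* stands in for sequence parameters   */
            int ref_frame_pixel; /* our one-pixel "reference frame"     */
        };

        /* Stateless decode: pure function of (state, input). Nothing is
         * retained inside the decoder between calls. */
        static int decode_frame_stateless(const struct decode_state *st,
                                          int delta)
        {
            return st->ref_frame_pixel + delta; /* "predict from reference" */
        }

        int main(void)
        {
            int deltas[] = {5, -2, 7};           /* toy "encoded" stream   */
            struct decode_state st = { 0, 100 }; /* client-held state      */

            for (int i = 0; i < 3; i++) {
                int out = decode_frame_stateless(&st, deltas[i]);
                printf("frame %d -> %d\n", i, out);
                /* The client, not the decoder, updates the state that will
                 * accompany the next decode request. */
                st.frame_num++;
                st.ref_frame_pixel = out;
            }
            return 0;
        }
        ```

        In the stateful model, the `decode_state` equivalent would live inside the driver/hardware and the client would only submit the raw bitstream.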


        • #5
          Thanks airminer, that explains it.


          • #6
            Why now? H264 was a decade ago. It's dying already.


            • #7
              Originally posted by eydee View Post
              Why now? H264 was a decade ago. It's dying already.
              Dunno where it is dying; it is still the most popular codec. Starting with H.264 was a smart decision, and porting other codecs will be much easier.


              • #8
                Why is this in the kernel?


                • #9
                  Originally posted by GruenSein View Post
                  Why is this in the kernel?
                  Because it is for hardware decoding.


                  • #10
                    Originally posted by Mathias View Post
                    I don't understand... why would I want a single frame from a video, or multiple frames but throwing away all state in between? I think I'm misunderstanding why/how this is useful.
                    I am not a developer in this field, so I have no real understanding of the related concepts, but from what I could gather by glancing at other resources, "decoder" here means the hardware, and moving state management into software makes the hardware much simpler.

                    Additionally it allows the software to basically do any kind of magic on its own, whereas the hardware only takes a frame and renders it, and does not have to care about the surrounding circumstances. This essentially allows re-using existing, old hardware decoders for any imaginable scenario, in contrast to earlier when you would be limited by the feature set supported by the individual chips.
                    Last edited by curfew; 17 November 2020, 10:37 AM.
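                    One practical consequence of the decoder holding no state (again a toy sketch with made-up names, not the real driver API): a single hardware instance can be time-shared between several streams, because each request carries its own stream's state and there is nothing inside the decoder to flush or context-switch:

                    ```c
                    #include <stdio.h>

                    /* Per-stream state held entirely by the client. */
                    struct stream_state { int ref; };

                    /* Pure function: state in, decoded "frame" out. */
                    static int decode(const struct stream_state *st, int delta)
                    {
                        return st->ref + delta;
                    }

                    int main(void)
                    {
                        struct stream_state a = { .ref = 100 };
                        struct stream_state b = { .ref = 500 };

                        /* Interleave two streams on the "same decoder";
                         * no decoder-side context save/restore needed. */
                        a.ref = decode(&a, 5);   /* stream A, frame 0 */
                        b.ref = decode(&b, 1);   /* stream B, frame 0 */
                        a.ref = decode(&a, -2);  /* stream A, frame 1 */
                        b.ref = decode(&b, 4);   /* stream B, frame 1 */

                        printf("A=%d B=%d\n", a.ref, b.ref); /* A=103 B=505 */
                        return 0;
                    }
                    ```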
