A Video Decoding Interface For Gallium3D

  • A Video Decoding Interface For Gallium3D

    Phoronix: A Video Decoding Interface For Gallium3D

    Yesterday we talked about Nouveau Gallium3D video improvements that are allowing 1080p video clips to now play with this open-source driver stack. Today there's an ongoing discussion about a proper video decoding interface for Gallium3D. Younes Manton, the one responsible for some of the Nouveau work and Generic GPU Video Decoding using shaders, has proposed a proper video decoding interface to this new driver infrastructure...

  • #2
    I think this is the wrong approach, because video decoding should be done on the dedicated hardware that comes on the card, not on the general-purpose 3D engines. Decoding video on the 3D engines will degrade 3D performance (i.e. you wouldn't be able to spin the cube while a video is running).

    • #3
      I think you'll see that in the closed drivers; not sure yet how practical it will be to use dedicated hw in the open drivers.

      Note that the 3D engine does all the back-end processing anyway (colour space conversion, scaling, deinterlacing, post-processing, etc.) even when running with dedicated hardware, so extending that a bit further up the pipe is not a big deal. The dedicated hardware is mostly useful for the very front-end work: performing bitstream decoding and managing spatial prediction.
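
      For the curious, the colour space conversion part of that back-end work boils down to a bit of per-pixel math the shader runs for every output pixel. A rough sketch in plain C (BT.601 video-range coefficients assumed; in the driver this would be a fragment shader, and the function names here are purely illustrative):

      /* Per-pixel YCbCr -> RGB conversion, BT.601 video range assumed.
       * In the driver this math runs as a fragment shader on the 3D
       * engine; plain C here is just for illustration. */
      #include <stdint.h>

      static uint8_t clamp8(float v)
      {
          return v < 0.0f ? 0 : v > 255.0f ? 255 : (uint8_t)(v + 0.5f);
      }

      void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                        uint8_t *r, uint8_t *g, uint8_t *b)
      {
          float yf  = 1.164f * (y - 16);    /* expand video-range luma   */
          float cbf = (float)cb - 128.0f;   /* centre the chroma samples */
          float crf = (float)cr - 128.0f;

          *r = clamp8(yf + 1.596f * crf);
          *g = clamp8(yf - 0.392f * cbf - 0.813f * crf);
          *b = clamp8(yf + 2.017f * cbf);
      }
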
      Last edited by bridgman; 21 January 2009, 09:46 AM.

      • #4
        Bridgman, if Gallium3D video decoding pans out, could Linux users soon have *ALL* video formats accelerated through the GPU shaders? Assuming we disregard the software patents and whatnot.

        • #5
          I'm not bridgman, but it should be possible to accelerate everything, though parts of it could then be illegal in the US ($DEITY bless living in Europe).

          • #6
            Yes, although some low end and older midrange cards would probably not have enough shading power to keep up with HD resolutions -- they have enough throughput for Xv but not a lot left over. It would depend on format but I would assume anything below an X1650 or X800 wouldn't have the shader power. For more recent cards, you'd probably want an HD2600, 36xx or 46xx to have some shader power left over for decoding.

            An ideal implementation would let you choose where to switch processing from CPU to GPU, so that even low-end hardware could at least accelerate motion compensation (MC). The nice thing is that the last stages in the pipe (MC, colour space conversion, scaling, etc.) tend to be the hardest for the CPU and easiest for the GPU, so the idea of "everything past this point is done on GPU" works well.
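
            To make that concrete, you can picture the knob as "the first stage handed to the GPU", with everything downstream staying on the GPU. A hypothetical sketch in C -- not the actual proposed Gallium3D interface, just the idea, with illustrative names:

            /* Hypothetical pipeline-split description: everything from
             * gpu_entry onwards runs on the GPU, everything before it
             * stays on the CPU. Names are illustrative only. */
            enum decode_stage {
                STAGE_BITSTREAM,   /* entropy / bitstream decoding          */
                STAGE_IDCT,        /* inverse transform                     */
                STAGE_MC,          /* motion compensation                   */
                STAGE_CSC          /* colour space conversion, scaling, ... */
            };

            struct decode_pipeline {
                enum decode_stage gpu_entry;  /* first stage done on the GPU */
            };

            /* e.g. a low-end card might only take over at motion
             * compensation, while a faster one starts at the IDCT: */
            /* struct decode_pipeline slow = { .gpu_entry = STAGE_MC   }; */
            /* struct decode_pipeline fast = { .gpu_entry = STAGE_IDCT }; */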

            This is just a guess, so don't buy hardware based on it.
            Last edited by bridgman; 24 January 2009, 08:22 AM.