X.Org SoC: Gallium3D H.264, OpenGL 3.2, GNU/Hurd

  • X.Org SoC: Gallium3D H.264, OpenGL 3.2, GNU/Hurd

    Phoronix: X.Org SoC: Gallium3D H.264, OpenGL 3.2, GNU/Hurd

    There are a few months left until summertime in the northern hemisphere, but Google is already preparing for its annual Summer of Code (SoC), as are the projects involved. X.Org will once again be part of the Summer of Code program, where Google pays student developers to work on various free software projects...

    http://www.phoronix.com/vr.php?view=ODA0Ng

  • #2
    People still remember HURD?

    • #3
      Can somebody post a link to how this is going to be implemented on Hurd?

      • #4
        Originally posted by V!NCENT
        Can somebody post a link to how this is going to be implemented on Hurd?
        People still care about HURD?

        (That's a joke. Sorta.)

        • #5
          "People still care about Linux?" said the Windows user.

          "People still care about Hurd?" said the Linux user.

          "People still care?" said the Hurd user...

          • #6
            Originally posted by Wyatt
            People still care about HURD?

            (That's a joke. Sorta.)
            Care? Not really.
            Interested? Definitely.

            • #7
              Hurd's a fun target. It's roughly as far away from Linux as you can get while still being Unixy. If anybody wants to suggest a better target for DRM devel, we're up for it, but...

              * FreeBSD and OpenBSD each have a one-person team already porting DRM features; I'm sure they could use some help, but they've already started
              * OpenSolaris isn't interested in anything written outside of Oracle
              * Darwin is meant to use Apple proprietary code

              That leaves Hurd. Well, and NetBSD, I guess. At any rate, Hurd's the one that's got a possible mentor, so that's that.

              I kind of find it funny that Hurd's drawing all the attention. Why not show XCB some love?

              • #8
                I think you forgot Minix and Haiku.

                I think both would appreciate an accelerated Mesa port.

                I know there is at least one person who has worked on the Haiku Gallium port ("aljen" in the Trac has implemented the software rasterizer for Gallium so far) and has gotten that working; as I understand it, only the DRM still needs porting for Haiku.

                Minix should be more *nixy, and Haiku has more POSIX support than BeOS had, so both should be familiar enough...

                • #9
                  I would love a MINIX port. I need to run MINIX for an operating systems course, and I want my wobbly windows...

                  But the only thing I'm really interested in is OpenGL development. Well, that and power management. On Linux, of course.

                  • #10
                    Could someone explain to me ...

                    Why the H.264 acceleration if the dedicated decoders are ... unavailable? Are they attempting a shader implementation?

                    Thanks/Liam

                    • #11
                      Yeah, I think that would be accelerating the parts which are a good fit with shader processing. The nice things about going with shaders are (a) to a large extent the same code can run on hardware from multiple vendors and generations, (b) the same framework can be used for codecs such as Theora which do not have support in the dedicated decoder hardware anyways. I think ON6 (Flash) falls into this category as well but not 100% sure.
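
                      To make this concrete, here is a minimal sketch in plain C (hypothetical function, not from any actual driver) of the kind of stage that fits shaders well. In motion compensation, every output pixel depends only on the reference frame and the motion vector, so each loop iteration below is independent and could become one shader invocation:

                      #include <stdint.h>

                      /* One 16x16 macroblock, full-pel motion only; a real decoder would
                         also clamp to the frame edges and interpolate sub-pel positions. */
                      void mc_copy_block(uint8_t *dst, const uint8_t *ref, int stride,
                                         int mb_x, int mb_y, int mv_x, int mv_y)
                      {
                          for (int y = 0; y < 16; y++)
                              for (int x = 0; x < 16; x++)
                                  dst[(mb_y + y) * stride + (mb_x + x)] =
                                      ref[(mb_y + y + mv_y) * stride + (mb_x + x + mv_x)];
                      }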

                      • #12
                        Thanks, Bridgman!

                        Originally posted by bridgman
                        Yeah, I think that would be accelerating the parts which are a good fit with shader processing. The nice things about going with shaders are (a) to a large extent the same code can run on hardware from multiple vendors and generations, (b) the same framework can be used for codecs such as Theora which do not have support in the dedicated decoder hardware anyways. I think ON6 (Flash) falls into this category as well but not 100% sure.
                        I'd assumed this, but, well, I was hoping they were privy to some specs that haven't been released yet.
                        Regardless, this will be a nice thing to have: a fairly general-purpose, block-based graphics accelerator. I really hope the work they do is general enough that it can be applied to other DCT codecs.

                        Best/Liam

                        • #13
                          I really believe that the "missing link" so far has been someone grafting libavcodec onto the driver stack so that processing can be incrementally moved from CPU to GPU.
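
                          A sketch of what that grafting might look like (gpu_color_convert() is hypothetical, standing in for a shader-based colorspace/scaling stage; only the libavcodec calls are real API): libavcodec keeps doing the serial bitstream and entropy work on the CPU, and each completed frame is handed off for the pixel-parallel steps:

                          #include <libavcodec/avcodec.h>

                          extern void gpu_color_convert(const AVFrame *frm);  /* hypothetical GPU stage */

                          /* Decode one packet on the CPU; if a complete frame comes out,
                             push the pixel-parallel work (YUV->RGB etc.) to the shaders. */
                          static void decode_packet(AVCodecContext *ctx, AVPacket *pkt, AVFrame *frame)
                          {
                              int got_frame = 0;
                              if (avcodec_decode_video2(ctx, frame, &got_frame, pkt) >= 0 && got_frame)
                                  gpu_color_convert(frame);
                          }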

                          Also, I forgot to mention that the other benefit of a shader-based implementation is that there are a lot of cards in use today which have a fair amount of shader power but which do not have dedicated decoder HW (ATI 5xx, for example).

                          • #14
                            BTW, rather than ON6 in my earlier post I should have said VP6. On2 is the company; VP6 is the codec.

                            • #15
                              I actually think that this shader-based approach is much more important than using the dedicated video decoder, specifically because it IS compatible with cards without a dedicated video decoder AND it is applicable beyond the capabilities of the dedicated video decoder.

                              In addition, there won't be any need to deal with IP issues in the event that the video decoder components on future cards are just totally incompatible with older versions... so one less item of critical importance when dealing with new hardware support.

                              Here's a question: the SoC website says that CABAC is not suitable for GPU acceleration, and Wikipedia says that CABAC is horribly CPU-intensive... Does this mean that high-bitrate videos that use CABAC will be beyond the decoding capability of lower-end machines? Any idea what proportion of the overall video decoding process (in a typical software decoder) is spent on CABAC? Is it something manageable, like 5%, or something overwhelming, like 95%?
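
                              To see why it resists the GPU, here is a toy adaptive binary arithmetic decoder in C (illustrative only, not the H.264 CABAC state tables). Each decoded bin updates both the interval and the probability context, so bin N+1 cannot start until bin N has finished; that strictly serial dependency chain is what keeps CABAC on the CPU:

                              #include <stdint.h>
                              #include <stdio.h>

                              typedef struct {
                                  uint32_t range, offset;   /* current interval and code value */
                                  const uint8_t *buf;
                                  int bitpos, nbits;
                              } ArithDecoder;

                              static int read_bit(ArithDecoder *d)
                              {
                                  if (d->bitpos >= d->nbits)
                                      return 0;                          /* past end: zero-fill */
                                  int bit = (d->buf[d->bitpos >> 3] >> (7 - (d->bitpos & 7))) & 1;
                                  d->bitpos++;
                                  return bit;
                              }

                              /* Decode one bin against an adaptive context *p = P(LPS) in 1/256ths. */
                              static int decode_bin(ArithDecoder *d, uint8_t *p)
                              {
                                  uint32_t r_lps = (d->range * *p) >> 8; /* split the interval */
                                  uint32_t r_mps = d->range - r_lps;
                                  int bin;

                                  if (d->offset >= r_mps) {              /* less probable symbol */
                                      d->offset -= r_mps;
                                      d->range = r_lps;
                                      bin = 1;
                                      if (*p < 253) *p += 2;             /* adapt the context */
                                  } else {                               /* more probable symbol */
                                      d->range = r_mps;
                                      bin = 0;
                                      if (*p > 2) *p -= 2;
                                  }
                                  while (d->range < 0x8000) {            /* renormalize */
                                      d->range <<= 1;
                                      d->offset = (d->offset << 1) | read_bit(d);
                                  }
                                  return bin;
                              }

                              int main(void)
                              {
                                  static const uint8_t bits[] = { 0xB3, 0x5A, 0x21, 0x7E, 0x99, 0x42 };
                                  ArithDecoder d = { 0x10000, 0, bits, 0, 8 * sizeof bits };
                                  uint8_t ctx = 128;                     /* 50/50 starting point */

                                  for (int i = 0; i < 16; i++)           /* prime the code value */
                                      d.offset = (d.offset << 1) | read_bit(&d);
                                  for (int i = 0; i < 8; i++)            /* each bin depends on the last */
                                      printf("bin %d = %d\n", i, decode_bin(&d, &ctx));
                                  return 0;
                              }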
