Lightspark May Work Towards A Gallium3D State Tracker


  • #21
    Originally posted by fabiank22:
    I know, which is why I'm kind of surprised nobody has come up with a state tracker that does what DirectShow + GPU back-ends do (minus the codecs, of course). I know that NVIDIA has their own thing going and won't write Gallium code, but I don't really get why you ATI guys don't simply write "one API to work with all" for your cards, give the nouveau guys the option of hooking their stuff in, and support it for a while. It would basically give you a monopoly on cheap open-source Linux multimedia solutions, and if the vendor added Blu-ray support there would even have been a slice of that web-enhanced TV market.
    Simple; the devs (both AMD and non-AMD) are all working on higher priority stuff first. Until the underlying Gallium3D hardware drivers are implemented and being used in all the major distros it doesn't make much sense building more upper layers and expecting them to be used.

    Originally posted by fabiank22:
    Probably. Doesn't change my point though.
    Sure it does; if he meant Direct3D then he was right-ish, and if he meant to include DirectShow then he was wrong-ish.

    • #22
      Originally posted by BlackStar:
      The fact that everyone seems to be considering the raw Gallium3d API for acceleration goes to show how much OpenGL sucks. Think about it for a moment.
      Keep trolling...

      • #23
        Originally posted by bash:
        Since when does Internet Explorer 9 support WebM natively? It will still require installing additional codec packs.
        Since when does IE support Flash natively? Requiring the user to download and install something to view a web page isn't that obscure or prohibitive; it's just swapping Flash for codecs. To the end user it's the same thing: download and install something.

        Not ideal, but still workable.

        • #24
          I don't understand why he doesn't use the Clutter toolkit for this. It has textures for image files, cairo textures for vector graphics and other drawing operations, and video textures for video playback. They can respond to events, and you can combine and composite them in any way you want. And you can achieve all this with a minimal amount of code.

          I'm afraid that by trying to reinvent the wheel and extending the scope of his project instead of focusing on the core, it will never reach a level where it's a viable alternative to the proprietary Flash player.

          • #25
            Read the replies.

            Basically nobody thinks that writing a new state tracker is his best option. The best idea I heard was for him to keep using cairo and write shaders for the remainder. According to the commenters, he hasn't come close to using all the features OpenGL already offers.
            Having said that, I think the idea of a simple compositing state tracker is kind of neat.

            • #26
              Maybe I misread things, but I don't think the developers were looking for "more features than OGL", but rather "something much smaller and simpler" where available.

              If you want to run a few shaders, Gallium3D will probably be the simplest way to do it... and if you use Gallium3D, the thing you write ends up being called a state tracker.

              • #27
                Originally posted by bridgman:
                Simple; the devs (both AMD and non-AMD) are all working on higher priority stuff first. Until the underlying Gallium3D hardware drivers are implemented and being used in all the major distros it doesn't make much sense building more upper layers and expecting them to be used.
                Like I said, I don't get your focus. 3D requires a rather large set of parts to work: shaders, advanced OpenGL, memory management, and so on. Even if you reach a milestone like glxgears (which took quite an amount of work), it won't mean that much, because hey, it's just basic 3D to the user.

                Video acceleration, on the other hand, would actually give you something of a selling point. An example: my dad owns a 1080p HDMI plasma TV, and we both have a rather large collection of music DVDs, Blu-rays, and rips of those for convenience. In an ideal world I would build a simple, silent, Shuttle-esque multimedia PC in the 300-400€ range, put in a low-end ATI card for HD playback, install Linux and MPlayer, and simply bring my Blu-rays or external hard drives over whenever I want to watch or show something.
                This is pointless right now because the license costs for Windows 7 plus DVD playback software alone run to half the hardware cost.

                I get that the Gallium work needs to be done and that new GPU generations are knocking, and I don't mean to belittle your efforts in any way, but right now every ATI generation is in a state of "half-working": half-working 3D (getting there, but not yet fast enough to be considered reasonable), half-working video acceleration (tearing, APIs, software), OpenGL 2.1 only, no OpenCL.

                Originally posted by bridgman:
                Maybe I misread things, but I don't think the developers were looking for "more features than OGL", but rather "something much smaller and simpler" where available.
                The problem with discussing whether OpenGL is simple/okay enough for Lightspark's needs is that there is no consensus on what OpenGL actually means: some are talking about the OpenX family (OpenGL ES, OpenGL, OpenCL, that audio thing), some mean OpenGL the spec (i.e. 4.x), and some mean the Mesa/Gallium implementation. So our discussion could be misinformed, because the Lightspark devs may simply want to skirt the problems of the latter, facing the same problems KDE has right now.

                • #28
                  I think I understand the disconnect now. You think the work required for 3D support on new GPUs is larger than the work required to implement a new GPU-based video decode framework and drivers and wonder why we work on 3D as a priority.

                  In fact, nearly everything you listed is shared from one generation to the next, and what we are doing to enable 3D on new hardware is *less* than what it would take to implement a new video decode framework... and the video decode framework would require pretty much everything we do for 3D anyway (e.g. the ability to compile and run shaders, to draw primitives, to access memory, etc.). Remember that it's the upper levels of Mesa, shared across all GPU hardware vendors, which deal with all of the GL-specific nasties... and it's that common code which needs most of the work to support higher levels of GL.

                  The proprietary drivers already have GL 4.x and OpenCL; video decode acceleration is a work in progress but already working for some users. MPlayer supports GL output, which doesn't normally have tearing issues unless you are running a compositor, and I doubt you would want one on an HTPC system.
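For reference, a minimal sketch of what GL playback with MPlayer looks like on the command line; the file name is illustrative:

```shell
# Play through MPlayer's OpenGL video output (normally tear-free without a compositor).
mplayer -vo gl concert.mkv

# List the video outputs this MPlayer build was compiled with, to confirm gl is available.
mplayer -vo help
```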

                  • #29
                    Originally posted by bridgman:
                    Maybe I misread things, but I don't think the developers were looking for "more features than OGL", but rather "something much smaller and simpler" where available.

                    If you want to run a few shaders, Gallium3D will probably be the simplest way to do it... and if you use Gallium3D, the thing you write ends up being called a state tracker.
                    Assuming you were responding to me, I was referencing the commenter named Sean. He seemed to be suggesting using only the OpenCL and OpenGL APIs. OpenCL, to my knowledge, doesn't have even the beginnings of a state tracker yet, but it is clearly something Linux could make use of. By doing things this way he would be able to make better use of the graphics card, so he wouldn't simply be using it for framebuffer/blitting.
                    As for the rest of your post, I wasn't suggesting he not use Gallium, just that this poster made an interesting case against writing a Lightspark state tracker.

                    • #30
                      Originally posted by Svartalf:
                      Originally posted by BlackStar
                      The fact that everyone seems to be considering the raw Gallium3d API for acceleration goes to show how much OpenGL sucks. Think about it for a moment.
                      Keep trolling...
                      You, of all people, should understand how ill-suited OpenGL is for the sort of work Lightspark is doing. For instance, efficient DMAs to VRAM are essential. In OpenGL you can:

                      a. set up an async transfer through a PBO (if that's available)
                      b. use glTexImage2D(..., NULL), discarding the previous data (requires efficient garbage collection inside the driver)
                      c. use glTexSubImage2D and ping-pong between two or more textures
                      d. use glTexSubImage2D alone
                      e. use glDrawPixels with an FBO
                      f. invoke Cthulhu(*)

                      Each method may be faster or slower depending on the hardware and driver configuration. Testing this falls somewhere between "nightmare" and "impossible".

                      Then you have to generate OpenGL shaders on the fly (ugh(**)), read back to system memory if necessary (double ugh(***)), compose, re-upload, and display. And all this in OpenGL 2.1, with 1.x fallbacks for older hardware.

                      [Add obligatory joke about how Adobe's engineers *still* haven't managed to get this right after so many years]

                      I can see the appeal of Gallium for these tasks. Can't you? Off-hand: dead-simple DMA; sane, low-level shader instructions; an efficient software fallback if the hardware is not up to the task.

                      (*) this is said to increase upload speeds, but I haven't confirmed it. Use at your own risk.
                      (**) GLSL isn't really suited to this task, and ARB programs are deprecated and stuck at 2004-level capabilities.
                      (***) Think texture uploads, only more random. For example, an Nvidia 480 achieves half the download speed of a 280. Why? No one knows; have fun optimizing your code.
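The choice among methods (a)-(d) above has to be made at runtime, per driver. A minimal sketch of that selection logic, with hypothetical names: `choose_upload_path` and its inputs are not real GL API (`has_pbo` would come from checking GL_ARB_pixel_buffer_object in the extension string; the others from benchmarking each driver). The actual GL calls are only named in comments:

```c
#include <stdbool.h>

/* One constant per upload strategy from the list above. */
typedef enum {
    UPLOAD_PBO_ASYNC,   /* (a) glBufferData into a PBO, then glTexSubImage2D  */
    UPLOAD_ORPHAN,      /* (b) glTexImage2D(..., NULL) to discard old storage */
    UPLOAD_PING_PONG,   /* (c) glTexSubImage2D, alternating between textures  */
    UPLOAD_PLAIN        /* (d) glTexSubImage2D into a single texture          */
} upload_path;

/* Hypothetical helper: pick the fastest path the driver supports.
 * In practice each branch would have to be benchmarked per driver and
 * hardware combination, which is exactly the "nightmare to impossible"
 * testing burden described above. */
upload_path choose_upload_path(bool has_pbo, bool driver_orphans_cheaply,
                               int spare_textures)
{
    if (has_pbo)
        return UPLOAD_PBO_ASYNC;
    if (driver_orphans_cheaply)
        return UPLOAD_ORPHAN;
    if (spare_textures > 0)
        return UPLOAD_PING_PONG;
    return UPLOAD_PLAIN;
}
```

A Gallium state tracker would sidestep this table entirely, since the transfer interface is uniform across drivers.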
