
  • #41
Originally posted by mirv
Sorry, I'm apparently ignorant of some of the developments here - just how is lack of people working on an open source driver the fault of Novell or ATI? I would imagine that the lack of people is more the responsibility of the community (which demanded documentation) as a whole.
Perfectly good questions... The problem comes down to the size of the Linux market share. Simply put, AMD does not have enough people working on the Linux portion of their proprietary code base -- maybe 5 people, max. That's simply not enough people to maintain a proprietary driver. It's just not. FGLRX is doomed. On the other hand, if AMD paid those same 5 people to work on the open driver, that would almost double the number of people working on the open drivers.

Can you imagine how much further along we'd be today if we had almost double the number of open source driver developers? Clearly it would have to be well organized. Throwing more people at the problem in general isn't going to fix it, but throwing people at specific problems will. Set aside a group of people to work on the DDX driver, another to work on the DRI driver, and maybe another to work on Mesa... There --IS-- enough parallelism to support more developers. Graphics drivers are big hairy monsters with many different components of many different complexities.

    And none of this even mentions those folks that Novell is using ATi's money to work around in worthless circles.



    • #42
Originally posted by bridgman
Yeah, it's easy to forget that X11 has been evolving for 21 years already
I was thinking what a good option the open driver will be, just from witnessing how much X sucks.



      • #43
Originally posted by Zhick
Seriously, I don't understand why you guys get so worked up about video decode acceleration. I just played 8 720p videos at the same time just for fun, and things were still smooth and CPU usage wasn't even 90% for most of the time. Now I'll admit my system is probably not the slowest one out there, but it's also far from top-notch/up-to-date (C2D E6600).
        Sorry, but I can give you ten more examples where the opposite happens. The truth is that HD video playback in Linux is a mess no matter what graphics platform is used -- none has any acceleration.

HD is a complex issue, and it's not only resolution but also bit-rate and the amount of movement that influence the outcome. For example, I currently use an E8400 (3 GHz) with mplayer on an Intel G35, and guess what, it completely chokes on high-bitrate 1080p -- just try the (in)famous birds scene from Planet Earth.
Another machine, a 2.7 GHz Athlon X2 with nVidia 6150 on-board video, chokes even on 720p content with lots of movement (various animation movies come to mind).

So basically, if you want flawless playback of HD content nowadays, you must either overclock to something above 4 GHz or use video acceleration, the way it's done in *win.

Part of the problem is that we don't have decent software players either; that is a sad truth and perhaps an F/OSS shortcoming, seeing as commercial alternatives do it better (for example, *none* of the available free back-ends uses multiple cores... so even with an octa-core 3 GHz processor you're stuck at the same performance as a single-core).
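The single-threaded decoding bottleneck described above can be checked from the command line. This is only a sketch: `sample_1080p.mkv` is a placeholder file name, and the `threads` option is honored only by builds and codecs that actually support multithreaded decoding (at the time, mainly the ffmpeg-mt branch); on other builds it is simply ignored.

```shell
# Pure software decode benchmark: no display, no audio, just the decoder.
# Watch 'top' in another terminal -- a single-threaded decoder will peg
# one core while the others sit idle.
ffmpeg -benchmark -i sample_1080p.mkv -an -f null - 2>&1 | grep bench

# mplayer exposes the decoder thread count via lavdopts; if the decoder
# in use is single-threaded, this makes no difference.
mplayer -benchmark -nosound -vo null -lavdopts threads=2 sample_1080p.mkv
```

Comparing the benchmark times with `threads=1` and `threads=2` shows directly whether the decode path scales across cores.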
On the other hand, having some sort of hardware acceleration would go a long way towards helping the situation, and it would allow people to use a low-cost processor and GPU in an HTPC, for example -- things you just can't do today (certainly a $40 GPU can do a lot more for acceleration than a $1000 quad-core CPU, but you just don't have the option).

Here's to hoping either Intel or AMD comes up with hardware HD support in Linux; I for one don't have much hope for the green guys, even though I always appreciate nice surprises.



        • #44
Hurray to mgc8. I am all for decent HD playback, although I don't know who is to blame, AMD or X.



          • #45
            I really hope they get off their asses and fix this mess.



            • #46
Hahaha, the opening post was funny. I'll tell you why some people are so pissed: it's because of the hype around some new, up-and-coming ATI Linux drivers that was touted around a while back. If it had all been quiet, and if nvidiaphobic fanboys hadn't spread lies around, there would be more people seeing the glass as half full regarding fglrx. Even so, they won't admit that this driver fails.



              • #47
Originally posted by mgc8
Sorry, but I can give you ten more examples where the opposite happens. The truth is that HD video playback in Linux is a mess no matter what graphics platform is used -- none has any acceleration.
Not really true. Accelerated Dirac playback is available on Nvidia cards supporting CUDA.



                • #48
I'm running my Linux box with the radeon driver instead of fglrx because of xorg 1.5, and it is a lot of pain...

surely it will not be supported in this month's release...

what a frak!!! shame on you ATI.



                  • #49
Originally posted by deanjo
Not really true. Accelerated Dirac playback is available on Nvidia cards supporting CUDA.
                    http://www.cs.rug.nl/~wladimir/sc-cuda/
                    Accelerating a codec nobody's ever heard of hardly counts. When that same research can be applied to h.264 or vc-1 or any of the common codecs out there *and* in a F/OSS application like ffmpeg/gstreamer/etc., the situation will be different. Until then my comment still stands.

I also happen to think (as I've stated in my original post) that the OSS community has failed to deliver this type of functionality. As your link points out, it would be possible to do it using CUDA or Stream or other methods that don't necessarily use the hardware acceleration path on the card, but nobody has done it so far -- while a proof-of-concept CUDA h.264 encoder exists in the commercial world, nothing like that has appeared on our side, even though CUDA works perfectly well in Linux too... That is unfortunate, of course.



                    • #50
Originally posted by mgc8
Accelerating a codec nobody's ever heard of hardly counts.
If you're suggesting a codec that the pirate "scene" doesn't use, then you're right. Then again, they don't know about Ogg either, and only just clued into h.264 and MKV as a container format.

                      When that same research can be applied to h.264 or vc-1 or any of the common codecs out there *and* in a F/OSS application like ffmpeg/gstreamer/etc., the situation will be different. Until then my comment still stands.
                      There are a few XBMC contributors working on that as we speak with Cuda.

I also happen to think (as I've stated in my original post) that the OSS community has failed to deliver this type of functionality. As your link points out, it would be possible to do it using CUDA or Stream or other methods that don't necessarily use the hardware acceleration path on the card, but nobody has done it so far -- while a proof-of-concept CUDA h.264 encoder exists in the commercial world, nothing like that has appeared on our side, even though CUDA works perfectly well in Linux too... That is unfortunate, of course.

Well, the CUDA h.264 encoder isn't just a proof of concept; it exists commercially and is available. But we are not talking about encoding here, we're talking about decoding. A large part of the issue is that in the #1 used OS such a solution (GPU-assisted decoding) is not needed, as it has had good acceleration for quite some time, so it falls pretty much on the *nix crew to come up with one. Until something like OpenCL gets adopted as the de facto GPGPU standard on the *nix side of things, I can't see it improving. Nobody in the *nix world wants to develop for one specific piece of hardware (OK, the scientific community excluded -- they seem to have embraced CUDA). Let's face it, a developer doesn't want to have to write code for every single variation of GPU computing. Things will improve, I'm sure, once a standard is adopted; until then, developers could very well be painting themselves into a corner if they choose a solution that serves some configurations but not the large majority of them.

