Has AMD Finally Fixed Tearing With Its Linux Driver?


  • #51
    Originally posted by GoremanX View Post
    Getting back on topic: I finally got around to installing this new driver and enabling the "tear-free" thing.

    I still get tearing on my desktop windows now that I've enabled the feature, but it seems to "fix" itself when I move the mouse cursor around. That part's new.

    I hate my ATI HD5750.
    Same card, works fine for me on Debian Testing with Compiz and the Liquorix kernel. I also accidentally did a suspend to RAM and was surprised that the machine woke up again -- it never did before. But generally, fglrx seems to work better for me than for other people.

    Does glxgears report 60 fps for you? It does for me, and that's the refresh rate of my monitor.

    Comment


    • #52
      Originally posted by mugginz View Post
      I guess ultimately it'd depend on how they pack their pixels, but if you simply want to change frames with the flick of a register, then my guess is they'd probably store the alpha channel.
      No modern hardware supports 24bpp formats anymore. They are padded to 32bpp.

      IIRC, the last one that did 24bpp natively was some Matrox card from the previous century (the Millennium series, I think?)
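      The padding overhead described above is easy to quantify. A quick sketch for a 1920x1200 screen (the figures are purely illustrative):

```python
# Compare a packed 24bpp framebuffer with the padded 32bpp layout
# modern GPUs use, for a single 1920x1200 screen.
width, height = 1920, 1200
pixels = width * height          # 2,304,000 pixels

packed_24bpp = pixels * 3        # RGB, 3 bytes per pixel
padded_32bpp = pixels * 4        # XRGB/ARGB, 4 bytes per pixel

print(f"24bpp packed: {packed_24bpp / 2**20:.2f} MiB")   # ~6.59 MiB
print(f"32bpp padded: {padded_32bpp / 2**20:.2f} MiB")   # ~8.79 MiB
```

      The extra byte per pixel costs about a third more memory, but it keeps every pixel aligned to a 4-byte boundary, which is what the hardware is optimized for.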

      Comment


      • #53
        I tried it

        I tried it. When I played HD videos (Sintel) with mplayer, everything was out of sync and got progressively worse (using -vo gl). When I disabled it, everything went back to normal.

        Comment


        • #54
          You should be able to use xv or vaapi.

          Comment


          • #55
            Originally posted by selmi View Post
            I tried it. When I played HD videos (Sintel) with mplayer, everything was out of sync and got progressively worse (using -vo gl). When I disabled it, everything went back to normal.
            So in other words: fail by AMD, and nothing's changed. If you own an ATI card, do not expect a tear-free experience on the Linux desktop. Better to get an NVIDIA card, where you know the damned basics (like watching a glitch-free video) will work.

            AMDfail

            Comment


            • #56
              So you tried a beta feature that is by no means ready for consumption, and then you complain.

              You need a reality check. Quickly.

              Also - try a different player.

              Comment


              • #57
                Originally posted by bugmenot3 View Post
                So in other words: fail by AMD, and nothing's changed. If you own an ATI card, do not expect a tear-free experience on the Linux desktop. Better to get an NVIDIA card, where you know the damned basics (like watching a glitch-free video) will work.

                AMDfail
                Nope, as I wrote above: it works fine for me, using the gl and xv outputs.

                Comment


                • #58
                  Originally posted by mugginz View Post
                  Insufficient memory....

                  1920x1200 pixels = 2304000 pixels

                  At 4 bytes per pixel, that makes 9216000 bytes, yeah?

                  divide by 1048576 for megabytes and I get the magic number of

                  8.8 MBytes per frame buffer.

                  For three screens that gives me 26.37 MBytes for frame store.
                  Make that 79.10MBytes for a triple buffered configuration without rendering overhead.

                  Now consider the "alternative" vendor which did provide a frame locked Compiz desktop with frame locked video playback.

                  nVidia 9800GT 512M driving 2 x 1920x1200

                  That's 512M driving 2/3 of the pixels the ATI's 1GB is driving.

                  Going by nVidia's numbers, there's the possibility of being able to drive four of those screens with 1GB.

                  Am I missing something in my numbers?
                  In truth, you don't have the entire card's memory available for the framebuffer. That's why I said I needed to think about what to put up instead... but you've saved me the trouble of setting the stage for this.

                  If you think the WHOLE of the card's memory is available for anything whatsoever, you'd be mistaken. With both NVidia and AMD there are fixed pools, and when you exhaust a pool, you're done: out of memory. Run your figures there and you've got in excess of 80 MB for nothing other than the screen rendering context. (Always round up -- you DO NOT get to use fractional values for memory use; it's rounded to the nearest KB or MB, and in some cases to the nearest power-of-two size. The silicon on these things is optimized for peak speed and doesn't work like a CPU at all in that respect.)

                  Add anything else, such as a pbuffer rendering target, and the pool diminishes further. Add 2D windows, most of them triple buffered, and it diminishes further still. Hit roughly 300 MB of that sort of thing and you're out of RAM on a 1 GB card.

                  Now, in your example of the NVidia card, I strongly suspect you'd have difficulty supporting a third monitor's worth of resolution, even if the adapter handled it or you hacked something in with a triple-monitor adapter from Matrox.

                  9 MB for the single plane, single monitor.
                  18 MB for the single plane, double monitor.
                  27 MB for the single plane, triple monitor.

                  52 MB for the triple monitor, double buffered.

                  At this point, on 512 MB, you're going to find you're at half or less of the render-target pool for just the base framebuffer. If they're triple buffering, you're toast at that point: not enough memory.
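                  The arithmetic being traded back and forth above can be reproduced in a few lines (this sketches the figures only, not how fglrx actually pools its allocations):

```python
# Framebuffer cost for the triple-head 1920x1200 setup discussed above.
BYTES_PER_PIXEL = 4                                  # 32bpp scanout format
frame_mib = 1920 * 1200 * BYTES_PER_PIXEL / 2**20    # one screen, in MiB

configs = {
    "single head":                  1 * frame_mib,
    "triple head":                  3 * frame_mib,
    "triple head, double buffered": 2 * 3 * frame_mib,
    "triple head, triple buffered": 3 * 3 * frame_mib,
}
for label, mib in configs.items():
    print(f"{label}: {mib:.2f} MiB")
# triple head comes out at 26.37 MiB and triple buffered at 79.10 MiB,
# matching the MByte figures in the post quoted above.
```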

                  Comment


                  • #59
                    My observation may not be very representative, but I just tested the tearing option with

                    openSUSE 11.4 Milestone 6, Catalyst 11.1

                    After enabling the fix, the system locks up hard; rebooting leads to a black screen, and I had to reinstall the driver to get back to normal behaviour.

                    On the pro side, window moving and resizing work very smoothly.

                    Comment


                    • #60
                      Originally posted by Svartalf View Post
                      In truth, you don't have the entire card's memory available for the framebuffer. That's why I said I needed to think about what to put up instead... but you've saved me the trouble of setting the stage for this.

                      If you think the WHOLE of the card's memory is available for anything whatsoever, you'd be mistaken. With both NVidia and AMD there are fixed pools, and when you exhaust a pool, you're done: out of memory. Run your figures there and you've got in excess of 80 MB for nothing other than the screen rendering context. (Always round up -- you DO NOT get to use fractional values for memory use; it's rounded to the nearest KB or MB, and in some cases to the nearest power-of-two size. The silicon on these things is optimized for peak speed and doesn't work like a CPU at all in that respect.)

                      Add anything else, such as a pbuffer rendering target, and the pool diminishes further. Add 2D windows, most of them triple buffered, and it diminishes further still. Hit roughly 300 MB of that sort of thing and you're out of RAM on a 1 GB card.
                      I assumed issues such as those you raise would be in play, but I was under the impression that not all of the 80 MB needed to be active: only a portion of it needed to be "switchable to" as far as the scanouts were concerned, and only another frame's worth of memory needed to be selected for rendering, so it didn't seem quite right. I guess their architecture doesn't support that kind of target-buffer selection from the entire pool, as you highlight. Since you can get a frame-locked Eyefinity config like mine under Windows, I was hoping that might be possible under Linux as well, but as I've found in the past, Windows' current architecture seems better suited to my requirements. Perhaps I'll end up switching to it.

                      I had issues simply by booting to a Compiz desktop and enabling "TearFree", so this machine's use of video card RAM would be about as low as it's going to get for this box.

                      My complaint was that TearFree wasn't available to me for my triple head system. If I need to go back to a dual head config for TearFree then I might as well go nVidia. Basically it's a solution to my problem as long as I don't have my current config :-( Very frustrating!

                      Originally posted by Svartalf View Post
                      Now, in your example of the NVidia card, I strongly suspect you'd have difficulty supporting a third monitor's worth of resolution, even if the adapter handled it or you hacked something in with a triple-monitor adapter from Matrox.

                      9 MB for the single plane, single monitor.
                      18 MB for the single plane, double monitor.
                      27 MB for the single plane, triple monitor.

                      52 MB for the triple monitor, double buffered.

                      At this point, on 512 MB, you're going to find you're at half or less of the render-target pool for just the base framebuffer. If they're triple buffering, you're toast at that point: not enough memory.
                      You're assuming nVidia uses triple buffering to get their frame locking. I don't know if they do, but either way it's not a config I'm going to throw at that nVidia card.

                      It does bring up the question of what happens on a Windows box running a DX game configured for triple buffering in an Eyefinity setup though.

                      It's looking more and more like I'll switch to two nVidia cards: one driving a TwinView desktop and the other driving a separate X session. This gets me some of what I'm after, and it's a configuration I tested before going AMD, when I borrowed a 9600GT to put with my 9800GT. It worked happily, but I wanted to be able to move windows between all screens. That may not be an option I'll have available, at least in a frame-locked way, until a few more fglrx releases down the track, if at all.

                      Comment
