Has AMD Finally Fixed Tearing With Its Linux Driver?


  • #61
    My observation may not be very representative, but I just tested the tearing option with

    openSUSE 11.4 Milestone 6, Catalyst 11.1.

    After enabling the fix the system locks up hard, and a reboot leads to a black screen; I had to reinstall the driver to get back to normal behaviour.

    On the plus side, window moving and resizing works very smoothly.



    • #62
      Originally posted by Svartalf View Post
      In truth, you don't have the entire card's memory available for framebuffer. That's why I said I needed to think about what to put up instead... But... you saved me the trouble of setting the stage for this...

      If you think that the WHOLE card's memory is available for anything whatsoever, you'd be mistaken. With both NVidia and AMD there are pools, and when you exhaust a pool you're done: out of memory. If you run your figures there, you've got in excess of 80 MB for the framebuffer (always round up; you DO NOT get to use fractional values for memory use, it's to the nearest KB or MB, and in some cases it's rounded to the nearest POT value of the same, since the silicon on these things is optimized for peak speed and doesn't work like a CPU at all in that respect) for nothing other than the screen rendering context. Add anything else, such as a pBuffer rendering target, and it diminishes further. Add 2D windows and it diminishes further, most of them being triple buffered. If you hit roughly 300 MB or so of that sort of thing, you're out of RAM on a 1 GB card.
      I assumed issues such as those you raise above would be in play, but I was under the impression that not all of the 80 MB was active at once: only a portion needed to be "switchable to" as far as the scanouts were concerned, and only another frame's worth of memory needed to be "selected" for rendering, so it didn't seem quite right. I guess their architecture doesn't support that kind of "target" buffer selection from the entire pool, as you highlight. As you're able to get a frame-locked Eyefinity config like mine via Windows, I was hoping that might be possible under Linux as well, but as I've found in the past, Windows' current architecture seems to be better suited to my requirements. Perhaps I'll end up switching to it.

      I had issues simply by booting to a Compiz desktop and enabling "TearFree", so the machine's use of video card RAM would be about as low as it's going to get for this box.

      My complaint was that TearFree wasn't available to me for my triple-head system. If I need to go back to a dual-head config for TearFree then I might as well go nVidia. Basically it's a solution to my problem as long as I don't have my current config :-( Very frustrating!

      Originally posted by Svartalf View Post
      Now, in your example of the NVidia card, I strongly suspect you will have difficulty supporting a third monitor's worth of resolution, even if the adapter handled it or you hacked something in with a triple-monitor adapter from Matrox.

      9 MB for the single plane, single monitor.
      18 MB for the single plane, double monitor.
      27 MB for the single plane, triple monitor.

      52 MB for the triple monitor, double buffering.

      At this point, for 512 MB, you're going to find you're at half or less of the render-target pool for just the base framebuffer. If they are triple buffering, you're toast at that point: not enough memory.
      You're assuming nVidia are using triple buffering to get their frame lockage. I don't know if they are, but either way, it's not a config I'm going to throw at that nVidia card.

      It does bring up the question of what happens on a Windows box running a DX game configured for triple buffering in an Eyefinity setup though.

      It's looking more and more like I'll switch to two nVidia cards: one will drive a TwinView desktop and the other a separate X session. This will get me some of what I'm after, and is a configuration I tested before going AMD, when I borrowed a 9600GT to put with my 9800GT. It worked happily, but I wanted to be able to move windows between all screens. That may not be an option I'll have available, at least in a frame-locked way, until a few more fglrx releases down the track, if at all.
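As an aside, the per-monitor figures quoted above line up with 1920×1200 at 32 bpp; here is a quick back-of-the-envelope sketch (the resolution is my assumption, the thread doesn't state it):

```python
import math

def scanout_mb(width, height, monitors=1, buffers=1, bytes_per_pixel=4):
    """Raw scanout memory in MB: one width*height*4-byte plane per
    monitor, times the number of buffers in the swap chain."""
    return width * height * bytes_per_pixel * monitors * buffers / 2**20

# Single plane, rounded up per the "always round up" rule:
#   1 monitor  -> ~8.8 MB  (quoted as 9 MB)
#   3 monitors -> ~26.4 MB (quoted as 27 MB)
one = math.ceil(scanout_mb(1920, 1200, monitors=1))
three = math.ceil(scanout_mb(1920, 1200, monitors=3))

# Triple monitor, double buffered: ~52.7 MB, in line with the ~52 MB figure
tm_db = scanout_mb(1920, 1200, monitors=3, buffers=2)
```

Triple buffering the same three heads would push that to roughly 79 MB, which is why the pool pressure grows so quickly.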



      • #63
        Originally posted by mugginz View Post
        I assumed issues such as those you raise above would be in play, but I was under the impression that not all of the 80 MB was active at once: only a portion needed to be "switchable to" as far as the scanouts were concerned, and only another frame's worth of memory needed to be "selected" for rendering, so it didn't seem quite right.
        Wow...just...wow...

        If you allocate it out of card memory, it's ALLOCATED. It's not swapped out. It's not placed on the host machine. It's "active" the moment you go to use it, even if it's not being displayed at the time, and it stays that way until you release it. That's the way it is on Linux. That's the way it is on OSX. That's the way it is on Windows. That 80 MB is used the moment you fire up the display, as it's the final canvas that you push bits into. It's the framebuffer. Any off-screen or overlaid rendering targets (windows, pbuffers, etc.) are peeled out of that space. You can't pull memory out of the vertex pool to give to the framebuffer pool. You can't pull texture memory out to hand it to framebuffers, but for some operations you can use it as an offscreen render target to provide dynamic textures for 3D operations. NVidia's probably doing something slightly different such that they don't need triple buffering, but you're still going to need LOTS of card RAM to do three screens at peak resolutions well, even with their cards.

        I guess their architecture doesn't support that kind of "target" buffer selection from the entire pool, as you highlight. As you're able to get a frame-locked Eyefinity config like mine via Windows, I was hoping that might be possible under Linux as well, but as I've found in the past, Windows' current architecture seems to be better suited to my requirements. Perhaps I'll end up switching to it.
        As an observation, you're going to encounter the same thing under Windows. The pools and code are the same on both OSes as far as the bulk of the drivers are concerned for both AMD and NVidia... It's part of why they've got drivers for Linux in the first place. If they didn't have a bunch of code recycling, there'd not be enough money in it for them, even with people like Dreamworks and the like insisting that they make the drivers possible. If you think Windows will do much better there, you're kidding yourself.

        You're assuming nVidia are using triple buffering to get their frame lockage. I don't know if they are, but either way, it's not a config I'm going to throw at that nVidia card.
        No, I wasn't. I was indicating that you're pushing limits with the double buffering and two monitors on the 512 MB card. If you had to do triple buffering, you'd not make it.

        It does bring up the question of what happens on a Windows box running a DX game configured for triple buffering in an Eyefinity setup though.
        Same thing. Same rules apply. If you're pushing triple monitors at peak resolutions like that, you're going to need a 1.5 GB RAM pool at minimum to do anything with it in triple-buffering mode, or do what you're thinking of doing, which is two cards for the triple-monitor config.
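The "allocated means allocated" point can be caricatured with a toy fixed-pool model (purely illustrative; the numbers reuse the figures quoted earlier in the thread, and treating the whole card as one pool is a simplification):

```python
class VramPool:
    """Toy model of an on-card memory pool: an allocation either fits in
    the remaining pool or fails outright; nothing spills to host RAM."""

    def __init__(self, size_mb):
        self.free_mb = size_mb

    def alloc(self, mb):
        if mb > self.free_mb:
            raise MemoryError("pool exhausted: out of card memory")
        self.free_mb -= mb

pool = VramPool(512)   # a 512 MB card, simplified to a single pool
pool.alloc(52)         # triple-head, double-buffered scanout
pool.alloc(27)         # a third plane per head for triple buffering
# every extra window, pbuffer, etc. keeps shrinking pool.free_mb until
# some allocation raises MemoryError
```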



        • #64
          Originally posted by Svartalf View Post
          Wow...just...wow...

          If you allocate it out of card memory, it's ALLOCATED. It's not swapped out.
          Wow..just..wow...

          Durr, I know that. You need to stop assuming that everyone who's not you is stupid.

          Some hardware has registers you know. Sometimes those registers can tell the hardware where to look for certain things.

          Sometimes memory for certain uses is in fixed places as well, but I don't think a 5870 works that way.

          So I was assuming you can have several render targets or frame buffers and can switch which one is scanned out via a register write or two... or maybe a few more.


          Originally posted by Svartalf View Post
          As an observation, you're going to encounter the same thing under Windows. The pools and code are the same on both OSes as far as the bulk of the drivers are concerned for both AMD and NVidia... It's part of why they've got drivers for Linux in the first place. If they didn't have a bunch of code recycling, there'd not be enough money in it for them, even with people like Dreamworks and the like insisting that they make the drivers possible. If you think Windows will do much better there, you're kidding yourself.
          Yes, the hardware is the same whether you're using Windows, Linux or OSX. It doesn't magically and completely transmogrify depending on the OS at hand.

          But...

          The results you get under Windows are much better. Due to drivers. That's the issue at hand.

          I'm looking at things from the outside and wondering, "Why can a Windows box do this properly and not a Linux box?", and surely it's safe to assume there's some Xorgness driving some of this, which is causing the driver writers to need slightly different approaches. That's why I wonder whether, under Windows, Microsoft uses the hardware in a triple-buffered or a double-buffered manner to achieve v-sync. I doubt they're hanging around waiting for the v-blank before they begin modifying the frame buffer (as in single buffering).

          Originally posted by Svartalf View Post
          No, I wasn't. I was indicating that you're pushing limits with the double buffering and two monitors on the 512 MB card. If you had to do triple buffering, you'd not make it.
          My point is that nVidia are doing what I want (for two screens) and they're doing it with 512 MB. I believe they're doing v-sync with double, not triple, buffering. Either way AMD loses though. Ultimately, I get a frame-locked desktop with nVidia and AMD can't manage that. We can talk about the implementation specifics, but at the end of the day it's "No V-Sync for you".


          Originally posted by Svartalf View Post
          Same thing. Same rules apply. If you're pushing triple monitors at peak resolutions like that, you're going to need a 1.5 GB RAM pool at minimum to do anything with it in triple-buffering mode, or do what you're thinking of doing, which is two cards for the triple-monitor config.
          And this is surely a reason for AMD to try to make things v-sync with double, not triple, buffering then.

          As for the two cards for a triple-monitor config: I can drive four monitors with two 512 MB cards if I can drive two with one card. To have them without Xinerama (the desktop-unifying one, not the "this screen is this big and it's over there" one) you have to have two separate desktops, one per card.

          As proof, I ran a double desktop via a 9800GT and a single via a 9600GT in the same box. They weren't unified, which is what I wanted. The 5870 gives me that, but with their particular bugs as opposed to nVidia's bugs. I like nVidia's bugs better, but I like a unified and accelerated desktop as well.

          So I have to decide, do I do two separate desktops in the one box, or do I keep the unified desktop and wait for AMD to make it smoother in operation.
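The double- versus triple-buffering question being argued here can be sketched with a toy timing model (a simplification of real swap-chain behaviour, not driver code; the 20 ms render time is an arbitrary example):

```python
import math

def effective_fps(render_ms, refresh_hz=60, triple=False):
    """Toy v-sync model. With double buffering, a swap can only happen on
    a vblank, so a frame that misses one vblank waits for the next; with
    triple buffering the GPU renders into a spare buffer and never stalls,
    so output is capped only by the refresh rate."""
    vblank_ms = 1000.0 / refresh_hz
    if triple:
        frame_ms = max(render_ms, vblank_ms)
    else:
        frame_ms = vblank_ms * math.ceil(render_ms / vblank_ms)
    return 1000.0 / frame_ms

# A frame that takes 20 ms at 60 Hz: double buffering snaps down to
# 30 fps, triple buffering sustains 50 fps (at the cost of a third buffer).
double_fps = effective_fps(20)
triple_fps = effective_fps(20, triple=True)
```

For a desktop that always renders within one refresh interval, double buffering with v-sync already holds the full rate, which is presumably why a frame-locked desktop shouldn't strictly need the third buffer.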



          • #65
            I might give a manual installation a try if I can't find an Ubuntu 10.10 PPA with this driver version somewhere.

            A few questions:
            1) Xv is now the same as you would get with the Nvidia proprietary drivers, i.e. tearless?
            2) YouTube/Vimeo/etc. Flash Player is still not vsynced?
            3) YouTube/Vimeo/etc. HTML5 video is tearless also?



            • #66
              I took the risk and tried a manual install following these instructions. It ended in a blank screen after reboot. Fortunately, a second try running ati-driver-installer-11-1-x86.x86_64.run without any deb options was a success.

              Right now I can only say that, yes indeed, the Xv tearing issues are gone. And yes indeed, the desktop likes to freeze while turning the "tear-free" option on/off (the workaround is a quick switch to Ctrl-Alt-F1/Ctrl-Alt-F7).



              • #67
                2) YouTube/Vimeo/etc. Flash Player is still not vsynced?
                3) YouTube/Vimeo/etc. HTML5 video is tearless also?
                Alright, I'm not an expert on this, but to my eyes they both benefit from the "tear free desktop".

                Here's an example video where tearing is easily noticeable (especially in fullscreen mode) when "tear free" and "force vsync" are both disabled:
                http://www.youtube.com/watch?v=bv5IqCbJucc



                • #68
                  Worked for me

                  Two 4890s in CrossFire. I followed the instructions in the Phoronix OP, and video no longer tears, even in fast-moving scenes. I'm happy.



                  • #69
                    OK. I installed 11.1 on my gentoo system and enabled tear-free desktop. I only got to play with it for 20 minutes but here are my impressions:

                    - No more tearing!
                    - CPU usage playing videos jumped from 10% to ~60%
                    - Instead of tearing, a frame or two is now dropped exactly every 0.5 seconds
                    - Playing video on the OpenGL output still dominates all workspaces
                    - Mouse cursor on 2nd monitor still corrupted

                    Honestly, I'm going back to the 10.x series because I'd rather have tearing than frame-dropping and ridiculous CPU usage (i5-2500K @ 4.3 GHz; 4 GB 1600 MHz 6-8-6 RAM; ATI 5770). I don't have a "sour attitude"; I've just given up on the idea of fglrx ever providing the bare-minimum basic features that all other drivers on all other operating systems have had since 1996. I'm just gonna get an Nvidia card and a watercooling system to kill the heat and noise... that's a matter I can take into my own hands... unlike these inexcusably piss-poor drivers.
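A speculative reading of the "dropped exactly every 0.5 seconds" symptom is a small rate mismatch between the source and the v-synced output: the interval between drops is the reciprocal of the excess rate. The 62/60 numbers below are hypothetical, chosen only to match the observed 0.5 s period:

```python
def drop_period_s(source_fps, display_hz):
    """If a v-synced path can only present display_hz frames per second,
    a source producing more than that loses the excess; drops then recur
    every 1 / (source_fps - display_hz) seconds."""
    excess = source_fps - display_hz
    return float('inf') if excess <= 0 else 1.0 / excess

# e.g. a source effectively running 2 fps faster than the display
# would lose one frame every 0.5 s
period = drop_period_s(62, 60)
```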



                    • #70
                      11.3 + Mobility HD2600 + slow videos

                      Hi!
                      I have a Mobility HD2600 Pro card in my laptop. The tear-free option in the Catalyst Control Center was a pleasure for me, but if I enable it, every video (avi, wmv, mkv, mp4, mov...) is a bit slow or erratic.
                      Without tear-free enabled there is tearing, but glxgears gives 10000 frames per 5 seconds. How can I enjoy my movies again?



                      • #71
                        Originally posted by jonyibandi View Post
                        Hi!
                        I have a Mobility HD2600 Pro card in my laptop. The tear-free option in the Catalyst Control Center was a pleasure for me, but if I enable it, every video (avi, wmv, mkv, mp4, mov...) is a bit slow or erratic.
                        Without tear-free enabled there is tearing, but glxgears gives 10000 frames per 5 seconds. How can I enjoy my movies again?
                        Sadly I get the same thing with tear free: the video is not as smooth with tear-free enabled as with it disabled. Also, using Xine with tear free, the audio and video slowly lose sync.

                        This is on an HD4890 and on an E-350.



                        • #72
                          Originally posted by jonyibandi View Post
                          Without tear-free enabled there is tearing, but glxgears gives 10000 frames per 5 seconds. How can I enjoy my movies again?
                          Simple: use the open ATI/AMD drivers; tear-free video playback is included in the package.
                          Tested on a Mobility Radeon X2300 with r300g and an RV770 CE [Radeon HD 4710] with r600g. Not too many fps with gears yet, but zero X crashes and leaks too.



                          • #73
                            Originally posted by jonyibandi View Post
                            Hi!
                            I have a Mobility HD2600 Pro card in my laptop. The tear-free option in the Catalyst Control Center was a pleasure for me, but if I enable it, every video (avi, wmv, mkv, mp4, mov...) is a bit slow or erratic.
                            Without tear-free enabled there is tearing, but glxgears gives 10000 frames per 5 seconds. How can I enjoy my movies again?
                            Which version of the driver are you using?



                            • #74
                              Originally posted by mugginz View Post
                              Which version of the driver are you using?
                              I'm using the 11.3 Catalyst driver. I can only play videos normally with XBMC. If I run XBMC it overrides everything, so if I want to check my mail, I have to close XBMC...

