Has AMD Finally Fixed Tearing With Its Linux Driver?


  • #61
    Originally posted by mugginz View Post
    I assumed issues such as those you raise above would be in play, but I was under the impression that not all of the 80 megs was active at once: only a portion of it needed to be "switchable to" as far as the scanouts were concerned, and only another frame's worth of memory needed to be "selected" for rendering, so it didn't seem quite right.
    Wow...just...wow...

    If you allocate it out of card memory, it's ALLOCATED. It's not swapped out. It's not placed on the host machine. It's "active" the moment you go to use it, even if it's not being displayed at the time, and it stays that way until you release it. That's the way it is on Linux. That's the way it is on OSX. That's the way it is on Windows. That 80MB is used the moment you fire up the display: it's the final canvas that you push bits into. It's the framebuffer. Any off-screen or overlaid rendering targets (windows, pbuffers, etc.) are peeled out of that space. You can't pull memory out of the vertex pool to give to the framebuffer pool. You can't pull texture memory out to hand it to framebuffers, but for some operations you can use it as an offscreen render target to provide dynamic textures for 3D operations. NVidia's probably doing something slightly different such that they don't need triple buffering, but you're still going to need LOTS of card RAM to do three screens at peak resolutions well, even with their cards.

    I guess their architecture doesn't support that kind of "target" buffer selection from the entire pool as you highlight. As you're able to get a frame-locked Eyefinity config like mine via Windows, I was hoping that might be possible under Linux as well, but as I've found in the past, Windows' current architecture seems better suited to my requirements. Perhaps I'll end up switching to it.
    As an observation, you're going to encounter the same thing under Windows. The pools and code are the same on both OSes as far as the bulk of the drivers are concerned, for both AMD and NVidia... It's part of why they've got drivers for Linux in the first place. If they didn't have a bunch of code recycling, there'd not be enough money in it for them, even with companies like Dreamworks insisting that they make the drivers possible. If you think Windows will do much better there, you're kidding yourself.

    You're assuming nVidia are using triple buffering to get their frame lockage. I don't know if they are, but either way, it's not a config I'm going to throw at that nVidia card.
    No, I wasn't. I was indicating that you're pushing the limits with double buffering and two monitors on the 512MB card. If you had to do triple buffering, you wouldn't make it.

    It does bring up the question of what happens on a Windows box running a DX game configured for triple buffering in an Eyefinity setup though.
    Same thing. Same rules apply. If you're pushing triple monitors at peak resolutions like that, you're going to need a 1.5GB RAM pool at minimum to do anything with it in triple buffering mode, or do what you're thinking of doing, which is two cards for the triple monitor config.
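
    To put rough numbers on the memory budget being described in this post, here's a quick back-of-the-envelope sketch in Python. The 2560x1600 panel resolution and 32-bit colour depth are my own assumptions for illustration, not figures anyone here gave.

    Code:
    # Rough VRAM budget for the scanout/back-buffer chain only; textures,
    # vertex data, etc. compete for whatever card memory is left over.
    BYTES_PER_PIXEL = 4            # assume 32-bit XRGB
    MB = 1024 * 1024

    def buffer_chain_bytes(width, height, monitors, buffers):
        """Bytes for `buffers` full-desktop copies spanning `monitors` panels."""
        return width * height * BYTES_PER_PIXEL * monitors * buffers

    # Two 2560x1600 heads, double buffered (front + back) -> 62.5 MB
    print(buffer_chain_bytes(2560, 1600, monitors=2, buffers=2) / MB)

    # Three 2560x1600 heads, triple buffered -> 140.625 MB
    print(buffer_chain_bytes(2560, 1600, monitors=3, buffers=3) / MB)

    The buffer chain is only one slice of card memory, of course; whatever game or desktop content is being rendered still has to fit in what's left over, which is presumably where a total pool figure like 1.5GB comes from.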

    Comment


    • #62
      Originally posted by Svartalf View Post
      Wow...just...wow...

      If you allocate it out of card memory, it's ALLOCATED. It's not swapped out.
      Wow..just..wow...

      Durr, I know that. You need to stop assuming everyone who's not you is stupid.

      Some hardware has registers, you know. Sometimes those registers tell the hardware where to look for certain things.

      Sometimes memory for certain uses is in fixed places as well, but I don't think a 5870 works that way.

      So I was assuming you can have several render targets or frame buffers, and you can switch which one is scanned out via a register write or two... or maybe a few more.
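
      What I'm describing is basically page flipping: several full frames sit in VRAM and a register tells the scanout engine which one to read. A toy sketch of the idea below; the class, the register and the addresses are all made up for illustration and have nothing to do with real 5870 register programming.

      Code:
      # Toy model of page flipping. The "register" is a made-up stand-in; on
      # real hardware the base-address write is latched at vblank so the switch
      # never lands mid-scanout, which is what avoids tearing.
      class FakeCrtc:
          def __init__(self):
              self.scanout_base = None          # pretend base-address register

          def set_scanout_base(self, address):
              self.scanout_base = address       # "a register write or two"

      buffers = [0x0000_0000, 0x0100_0000]      # two fake VRAM offsets, allocated up front
      crtc = FakeCrtc()
      front, back = 0, 1
      for frame in range(4):
          # ... render the new frame into buffers[back] ...
          crtc.set_scanout_base(buffers[back])  # flip: show the freshly drawn buffer
          front, back = back, front             # swap roles for the next frame
          print(f"frame {frame}: scanning out {crtc.scanout_base:#x}")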


      Originally posted by Svartalf View Post
      As an observation, you're going to encounter the same thing under Windows. The pools and code are the same on both OSes as far as the bulk of the drivers are concerned for both AMD and NVidia... It's part of why they've got drivers for Linux in the first place. If they didn't have a bunch of code recycling, there'd not be enough money in it for them, even with people like Dreamworks and the like insisting that they make the drivers possible. If you think Windows will do much better there, you're kidding yourself.
      Yes, the hardware is the same whether you're using Windows, Linux or OSX. It doesn't magically and completely transmogrify dependent on the OS at hand.

      But...

      The results you get under Windows are much better. Due to drivers. That's the issue at hand.

      I'm looking at things from the outside and wondering, "Why can a Windows box do this properly and not a Linux box?" Surely it's safe to assume there's some Xorgness driving some of this, which forces the driver writers to use slightly different approaches. That's why I wonder whether, under Windows, Microsoft uses the hardware in a triple-buffered or double-buffered manner to achieve v-sync. I doubt they're hanging around waiting for the v-blank before they begin modifying the frame buffer (as in single buffering).
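
      To spell out the distinction I'm drawing, here's a rough sketch of the three approaches. The helpers are trivial stand-ins so it actually runs; they're not any real driver API, and the 60 Hz refresh rate is just an assumption.

      Code:
      import time

      REFRESH_INTERVAL = 1.0 / 60.0             # assume a 60 Hz display

      def wait_for_vblank():
          time.sleep(REFRESH_INTERVAL)          # pretend the next vblank just arrived

      def render(buffer_name):
          print("rendering into", buffer_name)

      def single_buffered(n_frames):
          # One buffer only: wait for vblank, then race the beam while drawing
          # straight into what is currently on screen.
          for _ in range(n_frames):
              wait_for_vblank()
              render("front")

      def double_buffered(n_frames):
          # Draw into the back buffer, flip at vblank. If rendering finishes
          # early, everything idles until the flip.
          front, back = "A", "B"
          for _ in range(n_frames):
              render(back)
              wait_for_vblank()
              front, back = back, front         # the flip

      def triple_buffered(n_frames):
          # A third buffer means there is always a spare to draw into, so the
          # renderer never stalls waiting for the flip, at the cost of one more
          # full frame of VRAM.
          displayed, queued, spare = "A", "B", "C"
          for _ in range(n_frames):
              render(spare)
              displayed, queued, spare = queued, spare, displayed   # rotate at vblank

      double_buffered(3)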

      Originally posted by Svartalf View Post
      No, I wasn't. I was indicating that you're pushing limits with the double-buffering and two monitors on the 512Mb card. If you had to do triple buffering, you'd not make it.
      My point is that nVidia are doing what I want (for two screens) and they're doing it with 512MB. I believe they're doing v-sync with double, not triple, buffering. Either way, AMD loses. Ultimately, I get a frame-locked desktop with nVidia and AMD can't manage that. We can talk about the implementation specifics, but at the end of the day it's "No V-Sync for you".


      Originally posted by Svartalf View Post
      Same thing. Same rules apply. If you're pushing triple monitors at peak resolutions like that you're going to need a 1.5Gb RAM pool at minimum to do anything with it in triple buffering mode- or do what you're thinking of doing which is two cards for the triple monitor config.
      And this is surely a reason for AMD to try to make things v-sync with double, not triple, buffering then.

      As for the two cards for the triple monitor config: I can drive four with two 512MB cards if I can drive two with one card. To have them without Xinerama (the desktop-unifying one, not the "this screen is this big and it's over there" one) you have to have two separate desktops, one per card.
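
      Roughly what that looks like in xorg.conf terms, for anyone curious: one Device section per card, each with its own Screen, and no Xinerama, which gives you two independent desktops. The driver names, identifiers and BusIDs below are placeholders, not a tested config; take the real BusIDs from lspci.

      Code:
      # Hypothetical xorg.conf fragment: two cards, two separate X screens
      # (:0.0 and :0.1), each card free to drive its own pair of monitors.
      Section "Device"
          Identifier "Card0"
          Driver     "nvidia"
          BusID      "PCI:1:0:0"
      EndSection

      Section "Device"
          Identifier "Card1"
          Driver     "nvidia"
          BusID      "PCI:2:0:0"
      EndSection

      Section "Screen"
          Identifier "Screen0"
          Device     "Card0"
      EndSection

      Section "Screen"
          Identifier "Screen1"
          Device     "Card1"
      EndSection

      Section "ServerLayout"
          Identifier "TwoDesktops"
          Screen 0 "Screen0"
          Screen 1 "Screen1" RightOf "Screen0"
      EndSection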

      As proof, I ran a double desktop via a 9800GT and a single via a 9600GT in the same box. They weren't unified, and unified is what I wanted. The 5870 gives me that, but with its particular bugs as opposed to nVidia's bugs. I like nVidia's bugs better, but I like a unified and accelerated desktop as well.

      So I have to decide: do I run two separate desktops in the one box, or do I keep the unified desktop and wait for AMD to make it smoother in operation?

      Comment


      • #63
        I might give a manual installation a try if I can't find an Ubuntu 10.10 PPA with this driver version somewhere.

        A few questions:
        1) Xv is now the same as you would get with the Nvidia proprietary drivers, i.e. tearless?
        2) Youtube/vimeo/etc... flashplayer is still not vsynced?
        3) Youtube/vimeo/etc... HTML5 video is tearless also?

        Comment


        • #64
          I took the risk and tried a manual install following these instructions. It ended up in a blank screen after reboot. Fortunately, a second try running ati-driver-installer-11-1-x86.x86_64.run without any deb options was a success.

          Right now I can only tell you that, yes indeed -- the Xv tearing issues are gone. And yes indeed -- the desktop likes to freeze while turning the "tear-free" option on/off (the workaround is a quick switch to ctrl-alt-f1/ctrl-alt-f7).

          Comment


          • #65
            2) Youtube/vimeo/etc... flashplayer is still not vsynced?
            3) Youtube/vimeo/etc... HTML5 video is tearless also?
            Alright, I'm not an expert on this, but to my eyes they both do benefit from the "tear free desktop" option.

            Here's an example video where tearing is easily noticeable (especially in fullscreen mode) when "tear free" and "force vsync" are both disabled:
            [YouTube link]

            Comment


            • #66
              Worked for me

              Two 4890s in CrossFire. I followed the instructions in the Phoronix OP, and video no longer tears, even in fast-moving scenes. I'm happy.

              Comment


              • #67
                OK. I installed 11.1 on my Gentoo system and enabled the tear-free desktop. I only got to play with it for 20 minutes, but here are my impressions:

                - No more tearing!
                - CPU usage playing videos jumped from 10% to ~60%
                - Instead of tearing, a frame or two is now dropped exactly every 0.5 seconds
                - Playing video with the OpenGL output still dominates all workspaces
                - Mouse cursor on 2nd monitor still corrupted

                Honestly, I'm going back to the 10.x series because I'd rather have tearing than frame-dropping and ridiculous CPU usage. (i5-2500K @ 4.3 GHz; 4 GB 1600 MHz 6-8-6 RAM; ATI 5770) I don't have a "sour attitude"; I've just given up on the idea of fglrx ever providing the bare minimum of basic features that all other drivers on all other operating systems have had since 1996. I'm just gonna get an Nvidia card and a watercooling system to kill the heat and noise... that's a matter I can take into my own hands, unlike these inexcusable, piss-poor drivers.

                Comment


                • #68
                  11.3 + Mobility HD2600 + slow videos

                  Hi!
                  I have a Mobility HD2600 Pro card in my laptop. The tear-free option in the Catalyst Control Center was a pleasure for me, but if I enable it, every video (avi, wmv, mkv, mp4, mov...) is a bit slow or erratic.
                  Without the tear-free option enabled there is tearing, but glxgears gives 10000 frames per 5 seconds. How can I enjoy my movies again?

                  Comment


                  • #69
                    Originally posted by jonyibandi View Post
                    Hi!
                    I have a Mobility HD2600 Pro card in my laptop. The tear-free option in the Catalyst Control Center was a pleasure for me, but if I enable it, every video (avi, wmv, mkv, mp4, mov...) is a bit slow or erratic.
                    Without the tear-free option enabled there is tearing, but glxgears gives 10000 frames per 5 seconds. How can I enjoy my movies again?
                    Sadly, I get the same thing with tear free: the video is not as smooth with tear free enabled as with it disabled. Also, using Xine with tear free, the audio and video slowly lose sync.

                    This is on an HD4890 and on an E350.

                    Comment


                    • #70
                      Originally posted by jonyibandi View Post
                      Without the tear-free option enabled there is tearing, but glxgears gives 10000 frames per 5 seconds. How can I enjoy my movies again?
                      Simple: use the open ATI/AMD drivers. Tear-free video playback is included in the package.
                      Tested on a Mobility Radeon X2300 with r300g and an RV770 CE [Radeon HD 4710] with r600g. Not too many FPS with glxgears yet, but zero X crashes and leaks too.

                      Comment
