Has AMD Finally Fixed Tearing With Its Linux Driver?


  • #46
    Originally posted by Svartalf:
    Hm... (Note to self: don't try zinging off a response before going to a meeting... you make dumb mistakes like this.)

    You're right... but he's still got insufficient memory for attempting a triple-buffered configuration. I just got the calculations off. He's still probably going to need at least 1.5 GB. Let me mull over the best way to express what I'm trying to say here and I'll put it up in a bit.
    Insufficient memory....

    1920x1200 = 2,304,000 pixels

    @ 4 bytes per pixel, that makes 9,216,000 bytes, yeah?

    Divide by 1,048,576 for megabytes and I get the magic number of

    8.8 MB per frame buffer.

    For three screens that gives me 26.37 MB for frame store.
    Make that 79.10 MB for a triple-buffered configuration, without rendering overhead.

    Now consider the "alternative" vendor, which did provide a frame-locked Compiz desktop with frame-locked video playback:

    nVidia 9800GT 512MB driving 2 x 1920x1200.

    That's 512MB driving 2/3 of the pixels that the ATI's 1GB is driving.

    Going by nVidia's numbers, there's the possibility of driving four of those screens with 1GB.

    Am I missing something in my numbers?
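
    For anyone who wants to double-check me, here's a quick Python sketch of the same arithmetic (assuming 4 bytes per pixel; the names are mine):

    Code:
    # Back-of-the-envelope framebuffer sizing at 1920x1200, 32bpp
    MIB = 1024 * 1024            # 1,048,576 bytes
    frame = 1920 * 1200 * 4      # 9,216,000 bytes per buffer
    print(frame / MIB)           # ~8.79 MB per frame buffer
    print(3 * frame / MIB)       # ~26.37 MB for three screens
    print(9 * frame / MIB)       # ~79.10 MB for three screens, triple buffered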



  • #47
    Question: is it actually necessary to store the alpha channel?



  • #48
    Originally posted by Silverthorn:
    Question: is it actually necessary to store the alpha channel?
    I guess it'd ultimately depend on how they pack their pixels, but if you simply want to change frames with the flick of a register, my guess is they probably would store the alpha channel.
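
    To illustrate what I mean by "the flick of a register", here's a toy sketch (not real driver code, just my mental model): the scan-out engine displays whatever buffer a register points at, so a flip is a single register write, and uniform 4-byte pixels keep the addressing simple.

    Code:
    # Toy model of page flipping: 'scanout' stands in for the register
    # holding the address of the currently displayed buffer.
    buffers = [bytearray(1920 * 1200 * 4) for _ in range(2)]  # two 32bpp frames
    scanout = 0                    # index of the visible buffer

    def flip():
        global scanout
        scanout ^= 1               # one "register" write; no pixels are copied

    # draw into buffers[scanout ^ 1] (the back buffer), then:
    flip()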



  • #49
    Maybe AMD wants to sell new cards with more VRAM. Usually VRAM is only split when you use a multi-GPU card, so I guess that's a programming error. Without official beta builds of the next driver, you just have to wait several weeks until you can try again. I really hate that, because the error you found should have been caught much earlier. From the comments it seems the new tear-free mode has problems with power-saving states (otherwise it would be illogical that glxgears could help at all). I still won't use my AMD card in my main system until those bugs are resolved. On paper my HD 5670 would be faster than my GT 220, but there are still too many drawbacks.



          • #50
            Getting back on topic: I finally got around to installing this new driver and enabling the "tear-free" thing.

            I still get tearing on my desktop windows now that I've enabled the feature, but it seems to "fix" itself when I move the mouse cursor around. That part's new.

            I hate my ATI HD5750. I only bought it because of the promise of adequate open source drivers in the near future, but I've hated it ever since I first installed it. I miss my old Nvidia card. It was outdated and slow, but my desktop always looked great. I wish I could afford to buy a new video card and get rid of this trash.



  • #51
    Fuck blobs

    Originally posted by GoremanX:
    I hate my ATI HD5750. I only bought it because of the promise of adequate open source drivers in the near future, but I've hated it ever since I first installed it. I miss my old Nvidia card. It was outdated and slow, but my desktop always looked great. I wish I could afford to buy a new video card and get rid of this trash.
    Heh, isn't the entire 5xxx series supposed to be supported by the open graphics stack already? Like, install kernel 2.6.37 (or .36 if you can't), plus libdrm, xf86-video-ati, and mesa from git, and you're golden (did I mention it's tear-free?). Or maybe libdrm-2.4.23, xf86-video-ati-6.13.2, and mesa-7.10 will suffice.
    I bought my 4730 when r600g was only starting to emerge in private branches or something (and the 5xxx cards were only just hitting local shelves). Thankfully r600c was solid and 2D in the ddx was fine. Playing games may be a pain, but the desktop is uncrashable. You'd have to be a masochist to still be using the "official" driver.

    And now I think I'll reboot my main box into the alternative OS and play some recent game at my native 1920x1080 on my cheap-ass $100 card. And to hell with the slow old NV 7800 GTX!



              • #52
                pretty short sighted of you. You want to support a company that does not support open source drivers. Smart move. Not



  • #53
    Originally posted by GoremanX:
    Getting back on topic: I finally got around to installing this new driver and enabling the "tear-free" thing.

    I still get tearing on my desktop windows now that I've enabled the feature, but it seems to "fix" itself when I move the mouse cursor around. That part's new.

    I hate my ATI HD5750.
    Same card; works fine for me using Debian Testing and Compiz with the Liquorix kernel. I also accidentally did a suspend-to-RAM and was surprised that the machine woke up again -- it never did before. But generally fglrx seems to work better for me than for other people.

    Does glxgears report 60 fps for you? It does for me, and that's the refresh rate of my monitor.



  • #54
    Originally posted by mugginz:
    I guess it'd ultimately depend on how they pack their pixels, but if you simply want to change frames with the flick of a register, my guess is they probably would store the alpha channel.
    No modern hardware supports 24bpp formats anymore; they are padded to 32bpp.

    IIRC, the last one that did 24bpp natively was some Matrox card from the previous century (the Millennium series, I think?).
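
    To make the padding concrete, a little sketch (assuming little-endian XRGB8888, where the fourth byte is unused padding rather than meaningful alpha):

    Code:
    import struct

    r, g, b = 0x12, 0x34, 0x56
    packed_24 = struct.pack("BBB", b, g, r)                   # legacy RGB888: 3 bytes
    padded_32 = struct.pack("<I", (r << 16) | (g << 8) | b)   # XRGB8888: 4 bytes, X unused

    print(len(packed_24), len(padded_32))   # 3 vs 4 bytes per pixel
    print(1920 * 1200 / (1024 * 1024))      # the padding costs ~2.2 MB per 1920x1200 buffer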



                    • #55
                      I tried it

                      I tried it. when i played hd videos (sintel) with mplayer everything was out of sync and getting worse proogressively (using -vo gl). when i disabled it then everything went back to normal.



                      • #56
                        You should be able to use xv or vaapi.



  • #57
    Originally posted by selmi:
    I tried it. When I played HD videos (Sintel) with MPlayer, everything was out of sync and got progressively worse (using -vo gl). When I disabled it, everything went back to normal.
    So, in other words, a fail by AMD and nothing's changed. If you own an ATI card, do not expect a tear-free experience on the Linux desktop. Better to get an NVIDIA card, where you know the damned basics (like watching a glitch-free video) will work.

    AMDfail



                          • #58
                            so you tried a beta-feature that is by no means ready for consumption and then you complain.

                            You need a reality check. Quickly.

                            Also - try a different player.



  • #59
    Originally posted by bugmenot3:
    So, in other words, a fail by AMD and nothing's changed. If you own an ATI card, do not expect a tear-free experience on the Linux desktop. Better to get an NVIDIA card, where you know the damned basics (like watching a glitch-free video) will work.

    AMDfail
    Nope. As I wrote above, it works fine for me, using both gl and xv output.



  • #60
    Originally posted by mugginz:
    Insufficient memory....

    1920x1200 = 2,304,000 pixels

    @ 4 bytes per pixel, that makes 9,216,000 bytes, yeah?

    Divide by 1,048,576 for megabytes and I get the magic number of

    8.8 MB per frame buffer.

    For three screens that gives me 26.37 MB for frame store.
    Make that 79.10 MB for a triple-buffered configuration, without rendering overhead.

    Now consider the "alternative" vendor, which did provide a frame-locked Compiz desktop with frame-locked video playback:

    nVidia 9800GT 512MB driving 2 x 1920x1200.

    That's 512MB driving 2/3 of the pixels that the ATI's 1GB is driving.

    Going by nVidia's numbers, there's the possibility of driving four of those screens with 1GB.

    Am I missing something in my numbers?
    In truth, you don't have the entire card's memory available for the framebuffer. That's why I said I needed to think about what to put up instead... but you saved me the trouble of setting the stage for this.

    If you think the WHOLE of the card's memory is available for anything whatsoever, you'd be mistaken. Both NVidia and AMD carve the memory into pools, and when you exhaust a pool, you're done: out of memory.

    If you run your figures there, you've got in excess of 80 MB for the framebuffer, used for nothing other than the screen rendering context. (Always round up; you DO NOT get to use fractional values for memory use. It's allocated to the nearest KB or MB, and in some cases rounded up to the nearest power-of-two value. The silicon on these things is optimized for peak speed, and they don't work like a CPU at all in that respect.) Add anything else, such as a pbuffer rendering target, and the pool diminishes further. Add 2D windows, most of them triple buffered, and it diminishes further still. If you hit roughly 300 MB or so of that sort of thing, you're out of RAM on a 1GB card.

    Now, in your example of the NVidia card, I strongly suspect you would have difficulty supporting a third monitor's worth of resolution, even if the adapter handled it or you hacked something in with a triple-monitor adapter from Matrox:

    9 MB for a single plane, single monitor.
    18 MB for a single plane, double monitor.
    27 MB for a single plane, triple monitor.

    53 MB for triple monitor, double buffered.

    At that point, on 512 MB, you're going to find you're at half or less of the render-target pool for just the base framebuffer. If they're triple buffering, you're toast: not enough memory.
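
    If it helps, here's a rough sketch of the budgeting I'm describing (the pool size is an illustrative guess on my part, not a vendor figure):

    Code:
    MIB = 1024 * 1024
    frame = 1920 * 1200 * 4 / MIB    # ~8.79 MB per buffer

    scanout = 3 * 3 * frame          # triple-head, triple buffered: ~79.1 MB
    extras = 300                     # pbuffers + triple-buffered 2D windows (figure from above)
    POOL = 350                       # hypothetical render-target pool on a 1GB card; a GUESS

    used = scanout + extras
    print(f"pool {POOL} MB, used {used:.0f} MB ->",
          "out of memory" if used > POOL else "ok")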

