AMD Radeon HD 5750/5770

  • #61
    For standard content just use OpenGL as the video output and amdcccle to force quality mode and vsync on. The only interesting case is 1080p.
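
    As a rough sketch of that setup with mplayer (the filename is just a placeholder; the quality and vsync switches live in amdcccle's GUI):

    mplayer -vo gl video.mkv    # OpenGL video output; tearing is then handled by the driver's vsync setting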



    • #62
      Hi Yall & Kano,

      OpenGL does get rid of the video tearing and works fine on my new build. I think I tried it with my "old build," and the hardware was too slow.

      VLC barfs. Xine and SMPlayer are good, but with .mkv files SMPlayer will sometimes shrink the video playback size in 16:9 aspect, which is odd.

      Again, thanx for the heads up. :-)

      Greekgeek. :-)



      • #63
        Well, I often use VLC 1.0.2. You can set it to OpenGL output as well. My favorite player is mplayer, however, as I don't know how to enable something similar to -af volnorm with VLC (Xine has a way to enable that too, just a different one). Xine works with OpenGL with an additional override in .xine/config:

        video.output.opengl_renderer:2D_Tex

        It took a while till I found that out - otherwise I had no GUI for VDR. But for 1080p even fast systems get out of sync sometimes when rendering to OpenGL. Sadly mplayer has that problem too when the input is not 100% optimal: a/v sync is lost and it does not resync. Windows players do that, and I really want that feature on Linux too; it would fix lots of problems. It is useless to test 1080p documentaries when you cannot see the speaker to spot those issues, however.
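
        As a rough sketch of the two setups mentioned above (the filename is a placeholder, and the xine key should only be appended if it is not already present):

        mplayer -vo gl -af volnorm movie.mkv                            # OpenGL output plus volume normalisation
        echo "video.output.opengl_renderer:2D_Tex" >> ~/.xine/config    # the xine override; edit the file while xine is not running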
        Last edited by Kano; 10-16-2009, 08:28 AM.



        • #64
          Originally posted by rohcQaH View Post
          The ability to connect three monitors is the reason I'd choose this over the 4770. No more crawling under the desk to attach the projector.
          And you know what's REALLY funny:
          If you have more than 1 monitor connected, the idle power consumption goes from 18W to 50W!

          Source (in German):
          http://ht4u.net/news/21031_multi-mon..._stromfresser/

          Unfortunately this is not ATi-specific. Nvidia has the same problem (in the article the 275 and 295 are named).

          Thank you, graphics vendors, for lying to your advanced customers! Your idle power is NOT the specified idle power!

          And thank you to all websites doing so-called reviews! You ALL failed!



          • #65
            Originally posted by Hasenpfote View Post
            And you know what's REALLY funny:
            If you have more than 1 monitor connected, the idle power consumption goes from 18W to 50W!

            Source (in German):
            http://ht4u.net/news/21031_multi-mon..._stromfresser/

            Unfortunately this is not ATi-specific. Nvidia has the same problem (in the article the 275 and 295 are named).

            Thank you, graphics vendors, for lying to your advanced customers! Your idle power is NOT the specified idle power!

            And thank you to all websites doing so-called reviews! You ALL failed!
            For driving displays you need a certain amount of memory bandwidth so the display controllers can read out memory and still leave headroom for 3D and video clients to do initial rendering (until the clocks ramp up). This is immutable.

            For the single-monitor case, you can use very low clocks quite safely. The more monitors you add, the higher the memory clock (and possibly the engine clock) will need to be. Nothing sinister, just fundamental resource constraints.
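
            As a rough back-of-the-envelope illustration (the numbers are illustrative, not AMD figures): a single 1920x1080 display at 60 Hz with 32-bit pixels needs about 1920 * 1080 * 4 bytes * 60 Hz ≈ 0.5 GB/s of guaranteed scan-out bandwidth, so three such displays need roughly 1.5 GB/s before any 2D/3D client touches memory - which is why the memory clock cannot drop as far as in the single-monitor case.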

            Regards,

            Matthew



            • #66
              Originally posted by Hasenpfote View Post
              And you know what's REALLY funny:
              If you have more than 1 monitor connected, the idle power consumption goes from 18W to 50W!
              I was expecting it to increase, but that's more than I imagined. Thanks for the link! And thanks for the clarification, mtippett!

              I'd hope the drivers are smart enough to power down when the monitors are off or not connected, though (for example when the computer is doing some simulations while I'm AFK).

              As that's a technical problem that won't change, I'd still pick the 5770. It's not 13W with 2 monitors, but still less than other comparable cards. And as I just bumped my head against the desk when switching connectors, I really want that third output.

              As soon as phoronix says "drivers are ready", I'm buying.



              • #67
                Originally posted by mtippett View Post
                For the single-monitor case, you can use very low clocks quite safely. The more monitors you add, the higher the memory clock (and possibly the engine clock) will need to be.
                But why not be honest with the customer and say "Our cards need 18W at idle when you only use one monitor! Otherwise it's 50W (or 60W or whatever)!"? All the reviews I read about the 5700/5800 series were saying how great the idle power consumption is (which is true!). Especially with a feature like Eyefinity and prices for a 22" monitor starting at 125 euros, one can assume that people are using more than one monitor (if the girlfriend/wife does not have a different opinion).

                And suddenly it isn't so great anymore, and people are asking why their cards are so loud when they were silent in the reviews. (In the link they say that the fans are running at 1400 RPM.) The reviews are bought by AMD, AMD sucks, never again AMD, buy Nvidia, etc. pp. In the comments on heise.de people are still complaining about driver issues with AMD (personally I have never had any problems since my 9500 Pro). Things like that can become a long-lived urban legend, which can cost customers.

                So to quote rohcQaH: are the drivers (now or in the future) intelligent enough to determine whether a monitor is active or not and adjust the clocks as needed (with steps like 50% for 2 monitors, 75% for three, etc.)? Are there any plans to let the user decide whether he wants lower clocks and problems (on 3dcenter.org they say something about flicker) or higher clocks and no problems?

                Thank you in advance for an answer.



                • #68
                  Originally posted by mtippett View Post
                  The more monitors you add, the higher the memory clock (and possibly the engine clock) will need to be. Nothing sinister, just fundamental resource constraints.
                  What if it were possible to lower the refresh rate when it's not needed? I assume there are at least some LCDs out there that can run at 24Hz for 1080p24 and the like. It could be an extra timeout in the same way the DPMS stuff works.
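
                  A rough sketch of how that could look today with xrandr (the output name and mode are assumptions about a particular setup):

                  xrandr --output DVI-0 --mode 1920x1080 --rate 24    # drop to 24 Hz when full refresh isn't needed
                  xrandr --output DVI-0 --mode 1920x1080 --rate 60    # and back to 60 Hz afterwards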



                  • #69
                    Originally posted by Hasenpfote View Post
                    But why not being honest to the customer and saying "Our cards need idle 18W, when you only use one monitor! Otherwhise its 50W (or 60W or whatever)!"
                    Because that isn't the full truth either. Since memory bandwidth has to match output bandwidth, I'd expect a single 2560x1600 monitor (dual-link) to draw more power than your tiny 10" netbook display (if you can fit the 5870 into your netbook - or the other way round).

                    ATi releases idle watts and TDP; those are two interesting baseline values that are useful enough to compare different cards (even against Nvidia's cards, no cheating there). Marketing presentations just won't have huge tables with wattages under different conditions. Very few people would even read them.



                    • #70
                      Originally posted by rohcQaH View Post
                      Because that isn't the full truth either. Since memory bandwidth has to match output bandwidth, I'd expect a single 2560x1600 monitor (dual-link) to draw more power than your tiny 10" netbook display (if you can fit the 5870 into your netbook - or the other way round).
                      OK, that is a fair point.

                      ATi releases idle watts and TDP; those are two interesting baseline values that are useful enough to compare different cards (even against Nvidia's cards, no cheating there). Marketing presentations just won't have huge tables with wattages under different conditions. Very few people would even read them.
                      But marketing can prepare slides to show how Nvidia currently sucks (http://tinyurl.com/yjzx42s ), so AMD could also prepare huge tables of technical information - and review sites and advanced users are interested in that technical information. Of course, if it was released in public documents, it's no fault of ATi's but of the reviewing sites and the users.

                      Well, in the end the difference in power consumption is not really a relevant buying factor if it affects both ATi and Nvidia. But I was quite shocked that this was not discussed earlier (or I just overlooked it).
                      Last edited by Hasenpfote; 10-17-2009, 01:59 PM.



                      • #71
                        I really would like to know why ATI cards are soft-limited to H264 L4.x decode and you have to google to find hacks around that to use L5.1 (for DXVA). BD only needs L4.1, that's fine, but there are lots of L5.1 files out there which Nvidia cards can play - even using VDPAU on Linux. That's a major drawback for video playback - even on the "mainstream" OS.
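
                        For reference, one quick way to check which H.264 level a given file uses (assuming ffprobe is installed; the filename is a placeholder):

                        ffprobe -show_streams video.mkv 2>/dev/null | grep -i "level="    # e.g. level=41 (L4.1) or level=51 (L5.1)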



                        • #72
                          If by "major drawback" you mean "99.9% of the customers simply won't care", then you are right. (You and I may be downloading H264 L5.1 videos, but the vast majority will never watch anything other than BD and Youtube "HD").

                          That said, I'd like to know the reason as well.

                          Edit (disclaimer): I just bought a fanless 9500 for my XBMC HTPC. VDPAU works, but I'm rather underwhelmed on the whole (sync issues, mainly, as well as not-very-good 2D performance; my previous 7600GS was much, *much* better in 2D).
                          Last edited by BlackStar; 10-20-2009, 04:28 AM.



                          • #73
                            For YouTube you should enable vsync in nvidia-settings and load

                            nvidia-settings -l

                            on startup. VDPAU should already be using vsync - and be sure you really use the latest beta drivers!
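
                            One way to make that load happen at login (the file below is just an example; any session autostart mechanism works):

                            # in ~/.xprofile: reload the saved nvidia-settings configuration, including the vsync option
                            nvidia-settings -l &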



                            • #74
                              Originally posted by Kano View Post
                              For YouTube you should enable vsync in nvidia-settings and load

                              nvidia-settings -l

                              on startup. VDPAU should already be using vsync - and be sure you really use the latest beta drivers!
                              Sync issues, not vsync issues! As in audio falling out of sync with video, or video speeding up or slowing down. I don't know whether this is an XBMC or an Nvidia issue, but I've disabled VDPAU until this can be resolved (my single-core Athlon 64 can handle 720p just fine and I don't have any 1080p content right now).

                              That said, I also have vsync issues - this card refuses to sync to the HDMI monitor as long as a VGA monitor is connected (using separate X screens). This causes pretty bad tearing on my TV unless I disconnect the VGA monitor first - not very fun.

                              I'm using the 190.36 drivers from the VDPAU team, but I've tried several other versions from 180.xx, 185.xx and 190.xx with no change. If I had known that I'd encounter these kinds of problems, I'd probably have gone with an Ati card and the open drivers instead...
                              Last edited by BlackStar; 10-20-2009, 10:36 AM.



                              • #75
                                You know that you can control which display VDPAU syncs to via an environment variable? VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE should be set to the device name nvidia-settings reports for it.
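
                                For example (the device name here is only illustrative; use whatever nvidia-settings reports for your display, such as DFP-1 or CRT-0):

                                VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE="DFP-1" mplayer -vo vdpau movie.mkv    # sync VDPAU presentation to that display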
