Catalyst 10.1 and Xorg 7.5 / 1.7.x?


  • #31
    Originally posted by HardBall:
    bridgman, Thanks for the tips.

    I'm not that familiar with current versions of Fedora. The last version I used on a regular basis was FC5, and for the last few years I've mostly stuck to Debian/Ubuntu/Mint, so I'm a bit out of the loop on the RH world these days. The Fedora install I used had, I think, kernel 2.6.31-16, xserver 1.6.3 or 1.6.4 (don't remember which one), and KDE 4.3, with radeonhd installed manually. I don't remember using any experimental packages. I will try that again in the near future; I don't currently have FC installed, but I will set it up and let you know.

    I see that KMS definitely works with the radeon driver on both FC12 and any version of Ubuntu with a 2.6.31 or newer kernel, although I haven't seen it working with radeonhd or fglrx yet. Paradoxically, it actually seems to work fine with fglrx on the desktop R4850, which is supposedly not supported by ATI.

    I can confirm the graphics corruption issue. On GNOME systems it is a rare occurrence, only involves transparency and/or SVG rendering, and often crops up after waking from sleep (ACPI related). On KDE systems it is much more severe, especially on openSUSE 11.2, where the entire screen sometimes becomes corrupted and unusable.

    The only system-specific issue that I'm aware of is that certain kernels (earlier ones are more problematic) do not mesh with the DSDT. The really early ones, like 2.6.24, refuse to boot at all. Kernels around 2.6.29 seem to use a workaround and boot without certain ACPI options (CPU P-states), and as far as I can tell 2.6.33 has fixes, since those errors are gone from the boot process. But the issues with fglrx and radeonhd seem to be the same regardless of how compatible the kernel version is with the DSDT, so I didn't think that was the likely culprit. But I guess you never know.

    If you or anyone else has other ideas, please let me know. I'll be glad to try and see what happens.
    BTW, don't use radeonhd; use xf86-video-ati. radeonhd used to have an advantage over the normal OSS ati driver, but it no longer does. In fact the ati driver has surpassed the radeonhd driver; I'm not sure whether its data and findings were merged into the ati driver and radeonhd was then forgotten, or what, but it is nowhere near as good as the xf86-video-ati driver. I've tested both, only to find this out myself. radeonhd used to be special because it supported HDMI audio, but that has been implemented in the kernel for a while now, so radeonhd lost its "special" place long ago.
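
    If anyone wants to try the switch, it's really just the Driver line in the Device section of xorg.conf. Rough sketch only (the Identifier is whatever your existing config uses, and most distros of this era will pick the radeon driver automatically with no xorg.conf at all):

        Section "Device"
            Identifier "ATI Card"          # use the name from your existing config
            Driver     "radeon"            # xf86-video-ati; previously "radeonhd"
        EndSection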



    • #32
      Originally posted by bridgman:
      Have to disagree with you here. Leakage current won't go down without reducing voltages, but dynamic current goes down more-or-less linearly with clock speeds. If voltage is constant then power dissipation goes down in line with the total current.
      However, heat generation scales with the square of the voltage, so the effect of lowering it is much more apparent. I don't know what the ForceLowPowerMode option in the OSS driver does, but I know that using fglrx in low power mode is much more effective than using rovclock with radeon to simply downclock the card (even when pushing it to the very low limit, just short of screen corruption).

      The same goes for the CPU: I undervolted my laptop's processor by a tiny amount at each frequency, and now the fan only kicks in when it gets stuck at 100% usage (read: Flash, heh).
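
      To put rough numbers on that (made-up but plausible figures, using the usual CMOS dynamic-power approximation P ≈ C·V²·f):

          750 MHz -> 500 MHz at constant voltage: dynamic power scales by 500/750 ≈ 0.67
          add a 1.25 V -> 1.05 V drop:            a further (1.05/1.25)² ≈ 0.71
          combined:                               ≈ 0.47 of the original

      Leakage, on the other hand, only really comes down once the voltage does.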
      Last edited by yotambien; 16 January 2010, 08:35 AM. Reason: bah



      • #33
        Lowering the clock speed in Catalyst for Windows while keeping the fan speed constant doesn't really offer much improvement. Maybe 2 or 3 °C, but that's it. Lowering the voltage is what gives 15 °C lower temps. We're talking 90 °C vs 75 °C here.

        Maybe the effect is not as high with other cards, I don't know, but the 4870 gets *HOT*. It uses AMD's reference cooler, btw.



        • #34
          Before you lower voltage you normally need to drop engine and memory clocks, so you would presumably be seeing the accumulated benefit from all of those changes.

          Just to be clear, I'm not saying that lowering clocks alone is all you ever need, just disagreeing with the statement that lowering clocks alone makes no difference or is not worth doing. Lowering clocks is what the driver can do today, and if your GPU is running too hot you should be doing it even if *additional* power saving functionality can be added to the driver in the future.



          • #35
            Originally posted by bridgman:
            Just to be clear, I'm not saying that lowering clocks alone is all you ever need, just disagreeing with the statement that lowering clocks alone makes no difference or is not worth doing. Lowering clocks is what the driver can do today, and if your GPU is running too hot you should be doing it even if *additional* power saving functionality can be added to the driver in the future.
            My point is that the OSS driver is not good for my card because of this, and that's why I keep using Catalyst. The OSS drivers don't do the things AMD's/ATI's hardware designers intended a driver to do, and correct driver-level power management is an important part of that. I do not want the card to be useless a year from now; I intend to use it in older machines once I get a new card at some point. I do not trust the OSS drivers at this point; I believe they will result in hardware damage in the long run unless they implement power management correctly.



            • #36
              No question, the current power management code imposes a performance penalty in exchange for reduced power. The point here is that some power saving features are available today, and advising someone not to use those features, and saying they don't make any difference just because you don't see a big change on *your* system, is not helping anyone.



              • #37
                I didn't give any advice on whether to enable low power mode or not. I just pointed out that power management in the OSS drivers is lacking big time, and that if it matters to someone they should stick with fglrx for now.



                • #38
                  Originally posted by RealNC:
                  It doesn't do anything other than downclock (last time I checked). Voltages stay at default values. Since only voltages matter for temps, downclocking doesn't do anything helpful; temps stay the same if the voltage doesn't go down, no matter how much you downclock the card.
                  Perhaps I misread your comment above, but it seemed to me you were saying that the ForceLowPowerMode option would not have any effect on temps.
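
                  For reference, on the open driver that option is just an xorg.conf entry in the Device section. Sketch from memory only, so check the radeon(4) man page for your release for the exact names and defaults:

                      Section "Device"
                          Identifier "ATI Card"                # whatever your existing config uses
                          Driver     "radeon"
                          Option     "ForceLowPowerMode" "on"  # static low-power mode (drops clocks)
                          # Option   "DynamicPM" "on"          # IIRC there is also a dynamic variant
                      EndSection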



                  • #39
                    It's like a dude I know here who tries to sell me stuff with a 1% rebate compared to other stores. Yes, 1% is a number, but a fairly insignificant one.



                    • #40
                      Sure, so is zero, but that's not the point. The ForceLowPowerMode option usually makes a useful difference in power consumption and GPU temps, a lot more than 1%, even if you measure in kelvins rather than degrees Celsius.
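
                      (Using the 90 °C vs 75 °C figures from earlier in the thread: that's 363 K vs 348 K, so roughly a 4% drop on the Kelvin scale but about 17% in degrees Celsius, and a lot more than 1% either way.)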

