AMD Puts Catalyst 13.10 GPU Driver Into Beta State

  • #21
    Originally posted by gradinaruvasile View Post
    ...................
    Originally posted by pandev92 View Post
    Same with an AMD A8-5600K APU; fglrx simply is not for me.
    Same here with an A10-5800K. For 90% of the 80 games I have in my Steam library, it destroys fglrx: smooth, no stuttering, and about 20-30% higher frame rates in Source games. I think the APUs are really in the sweet spot for performance with the FOSS R600 driver. One or two games have issues (due to known Mesa bugs), but almost everything else works beautifully. And that's not even getting into the night-and-day difference in general desktop performance.

    No special repositories or anything compiled from source - I'm just using Arch with the standard repos, on the 3.11 kernel and Mesa 9.2.



    Also, maybe it's just me, but it looks like the VDPAU/UVD implementation in the R600 driver actually supports more video formats than fglrx.



    • #22
      Originally posted by benmoran View Post
      No special repositories or anything compiled from source - I'm just using Arch with the standard repos, on the 3.11 kernel and Mesa 9.2.
      I bought a brand-new HD 8330 (new Toshiba laptop with an A4-5000). After compiling the 3.12 kernel and Mesa 9.3.0-devel (with radeonsi and LLVM 3.4) on a Debian system, the FOSS driver works like a charm. I tried the latest fglrx drivers, but they are terrible (and, of course, broke all dependencies).

      AMD is doing great work with its contributions to the FOSS drivers.



      • #23
        Hmm, what about Mir/XMir? AMD hasn't said anything about supporting Mir and XMir.



        • #24
          Originally posted by mmstick View Post
          Then why are they 20x faster than the open-source drivers? I don't have any outstanding problems with them on my 7950, other than performance in games.
          I have an HD 7970M and Intel HD 4000 graphics ("Enduro") in my notebook. AMD supports that combination in their official driver, except that you can't use more than one monitor without graphical corruption, and you have to restart X (or start a second X server, though that didn't work the last time I tried) each time you want to use the discrete GPU.

          So bumblebee/virtualgl is currently my only reference. Unigine Heaven runs at 11 fps. On an HD 7970M.

          When you say that fglrx is 20x faster: if Xonotic runs at 190 fps (http://www.phoronix.com/scan.php?pag...st_11way&num=6), are you saying that fglrx will run it at 3800 fps?



          • #25
            Originally posted by benmoran View Post

            Also, maybe it's just me, but it looks like the VDPAU/UVD implementation in the R600 driver actually supports more video formats than fglrx.
            The real difference (if any) is only visible if you compare native xvba (the fglrx driver's UVD interface) against native radeon/VDPAU. And that can only be tested with XBMC's FernetMenta branch, because that's the only player that supports both VDPAU and xvba natively (for VDPAU on radeon, XBMC needs a few as-yet-unmerged gl_interop Mesa patches to work properly).

            Anyway, I used xvba with fglrx before, and now I use mplayer with VDPAU (plain mplayer works best; smplayer and the other *mplayer GUI wrappers sometimes show artifacts). Both seem to work well in general.
            The problem with fglrx is that it sometimes breaks xvba or even OpenGL (as in the 13.8 series), and the devs don't really listen.
            VDPAU on radeon also has more of a future, because the devs are actively improving it, whereas xvba has been basically unchanged for a long time. The VDPAU spec is also implemented by almost all media players.

            PS: As I said before, the widely used xvba-va-driver (used, for example, by VLC) is NOT real xvba - it uses xvba as a backend via the libva interface, which is inefficient (for example, ~35% CPU via libva/xvba vs. ~5% CPU with native xvba) and buggy (the same goes for nvidia-va-driver).
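For readers who want to try the plain-mplayer VDPAU route described above, mplayer selects hardware decoders by name; a minimal sketch, assuming mplayer's standard ff*vdpau decoder names (the vdpau_vc helper function is my own illustration, not something from this thread):

```shell
# Map a stream type to mplayer's VDPAU decoder name (standard mplayer
# names; the vdpau_vc helper itself is illustrative only).
vdpau_vc() {
    case "$1" in
        h264)  echo "ffh264vdpau" ;;
        mpeg2) echo "ffmpeg12vdpau" ;;
        vc1)   echo "ffvc1vdpau" ;;
        *)     echo "" ;;   # unknown type: fall back to software decoding
    esac
}

# A typical hardware-decoded invocation would then look like:
#   mplayer -vo vdpau -vc "$(vdpau_vc h264)," movie.mkv
vdpau_vc mpeg2    # prints ffmpeg12vdpau
```

The trailing comma in the -vc list lets mplayer fall back to its default software decoders when the VDPAU one cannot handle the stream.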



            • #26
              Originally posted by gradinaruvasile View Post
              The real difference (if any) is only visible if you compare native xvba (the fglrx driver's UVD interface) against native radeon/VDPAU. And that can only be tested with XBMC's FernetMenta branch, because that's the only player that supports both VDPAU and xvba natively (for VDPAU on radeon, XBMC needs a few as-yet-unmerged gl_interop Mesa patches to work properly).

              Anyway, I used xvba with fglrx before, and now I use mplayer with VDPAU (plain mplayer works best; smplayer and the other *mplayer GUI wrappers sometimes show artifacts). Both seem to work well in general.
              The problem with fglrx is that it sometimes breaks xvba or even OpenGL (as in the 13.8 series), and the devs don't really listen.
              VDPAU on radeon also has more of a future, because the devs are actively improving it, whereas xvba has been basically unchanged for a long time. The VDPAU spec is also implemented by almost all media players.

              PS: As I said before, the widely used xvba-va-driver (used, for example, by VLC) is NOT real xvba - it uses xvba as a backend via the libva interface, which is inefficient (for example, ~35% CPU via libva/xvba vs. ~5% CPU with native xvba) and buggy (the same goes for nvidia-va-driver).
              Yes, I'm using the XVBA branch of XBMC for my AMD E450-based HTPC right now with FGLRX. To the credit of those devs, it actually does work really well - I've not had any issues with corruption for anything I've tried to play. The only issue is that it doesn't support MPEG2, which sucks because HDTV here is super high bitrate MPEG2. The E450 CPU can handle it, but it runs at full tilt.

              It looks like the open-source R600 VDPAU/UVD does support MPEG2, so I plan to switch my HTPC over to it as soon as it's possible to do so. It already seems to work great on my desktop with the A10-5800K.
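Whether a given VDPAU driver exposes MPEG2 decoding can be checked with the vdpauinfo tool, which prints a table of supported decoder profiles; a minimal sketch of grepping that table (the sample lines below are illustrative only, not captured from real hardware):

```shell
# Count MPEG2 decoder profiles in a VDPAU decoder table. The sample text
# is illustrative; on a real system replace it with the output of:
#   vdpauinfo
sample='MPEG1             0 2048 2048
MPEG2_SIMPLE      3 2048 2048
MPEG2_MAIN        3 2048 2048
H264_MAIN        41 2048 2048'
mpeg2_profiles=$(printf '%s\n' "$sample" | grep -c '^MPEG2')
echo "$mpeg2_profiles"    # prints 2 for this sample
```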



              • #27
                Originally posted by benmoran View Post
                Yes, I'm using the XVBA branch of XBMC for my AMD E450-based HTPC right now with FGLRX. To the credit of those devs, it actually does work really well - I've not had any issues with corruption for anything I've tried to play. The only issue is that it doesn't support MPEG2, which sucks because HDTV here is super high bitrate MPEG2. The E450 CPU can handle it, but it runs at full tilt.

                It looks like the open-source R600 VDPAU/UVD does support MPEG2, so I plan to switch my HTPC over to it as soon as it's possible to do so. It already seems to work great on my desktop with the A10-5800K.
                Make sure you also patch Mesa with the gl_interop patches, otherwise you might get nasty surprises (I had massive memory leaks after ~20 minutes of VDPAU/radeon playback with XBMC on my A8-5500). This happens only with XBMC, because it is a fully OpenGL-accelerated application, and displaying VDPAU content properly requires these patches (both the nvidia and fglrx drivers have this).
                MPlayer is fine; it uses the VDPAU display path directly, so no patches are needed.



                • #28
                  Originally posted by benmoran View Post
                  Same here with an A10-5800K. For 90% of the 80 games I have in my Steam library, it destroys fglrx: smooth, no stuttering, and about 20-30% higher frame rates in Source games. I think the APUs are really in the sweet spot for performance with the FOSS R600 driver. One or two games have issues (due to known Mesa bugs), but almost everything else works beautifully. And that's not even getting into the night-and-day difference in general desktop performance.

                  No special repositories or anything compiled from source - I'm just using Arch with the standard repos, on the 3.11 kernel and Mesa 9.2.

                  Also, maybe it's just me, but it looks like the VDPAU/UVD implementation in the R600 driver actually supports more video formats than fglrx.
                  Strange. I don't even get 30% of fglrx's performance, and that's an optimistic estimate. Source games are unplayable; even very simple titles like "Thomas Was Alone" or "Red Orchestra" are very laggy.
                  (Arch Linux, everything up to date, xorg configs tweaked.)
                  Using AMD APUs with the RAM set to 1600 MHz and the VRAM set to 512 MB (in UEFI) is a common mistake. Perhaps with that bottleneck there isn't that much of a performance difference?
                  Personally, I did not buy the A10-5800K just to be unable to make use of its performance. Regardless of the driver, the situation is far, far from satisfying.

                  I forgot to mention: even with DPM, the APU gets significantly hotter at idle. Catalyst: 36°C, radeon: 45°C.
                  Last edited by Kemosabe; 01 October 2013, 05:41 AM.



                  • #29
                    After one day, Xv still runs very slowly, whereas the open-source radeon drivers and the nvidia drivers are fine; Flash is slow where the open-source drivers run fine, and Steam games suffer from heavy stuttering. I'm very tired of Catalyst's performance.



                    • #30
                      Originally posted by Kemosabe View Post
                      Strange. I don't even get 30% of fglrx's performance, and that's an optimistic estimate. Source games are unplayable; even very simple titles like "Thomas Was Alone" or "Red Orchestra" are very laggy.
                      (Arch Linux, everything up to date, xorg configs tweaked.)
                      Using AMD APUs with the RAM set to 1600 MHz and the VRAM set to 512 MB (in UEFI) is a common mistake. Perhaps with that bottleneck there isn't that much of a performance difference?
                      Personally, I did not buy the A10-5800K just to be unable to make use of its performance. Regardless of the driver, the situation is far, far from satisfying.

                      I forgot to mention: even with DPM, the APU gets significantly hotter at idle. Catalyst: 36°C, radeon: 45°C.
                      Something isn't right with your setup somewhere. Do you actually use DPM? You have to manually add radeon.dpm=1 to the kernel command line to enable it.

                      I use 64-bit Debian, compiled everything from git (mesa, drm, xf86-video-ati, kernel), and it runs Source games very well - FASTER than fglrx. Others, such as HL1-based ones, are actually a tiny bit slower (maybe the CPU doesn't throttle up enough?). And I don't have any xorg.conf tweaks.

                      "sensors" shows me exactly the same temperatures as with fglrx (the sensor seems to misreport temperatures when the CPU is idle, but it's exactly the same with fglrx and radeon). I have an A8-5500.

                      Notes (from personal experience):
                      - If you have a 64-bit system, you have to install/compile both the 64-bit AND the 32-bit Mesa components to be able to play 32-bit games like the Steam ones.
                      - There is a bug in the radeon driver that, for some reason, sets the APU's voltage to its maximum most of the time when I use the ondemand cpufreq governor, resulting in ~10°C more heat. I worked around it by using the conservative governor and acpi=strict on the kernel command line.
                      Check the sensors output - the APU voltage should stay between 0.9 and 1.34 V. After logging into GDM on my system, the voltage jumped up to 1.34 V and stayed there until I did the DPMS trick, at which point it came down to 0.9 V and started changing as it should.

                      A quick local workaround is to do a DPMS off/on cycle on the monitor (the "xset dpms force off" command) - sometimes switching to a text VT and back does the same.

                      PS: Why is it a mistake to use 1600 MHz memory clocks and 512 MB of VRAM? I have exactly that, and everything is fine.
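The radeon.dpm=1 step mentioned above amounts to a one-line edit of the bootloader config; a minimal sketch, assuming a GRUB-based distro (it edits a stand-in file named demo_grub rather than the real /etc/default/grub, and on a real system you would still regenerate the GRUB config, e.g. with update-grub, and reboot afterwards):

```shell
# Append radeon.dpm=1 to the kernel command line in a GRUB default file.
# demo_grub stands in for /etc/default/grub in this sketch.
grub_file=demo_grub
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$grub_file"
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 radeon.dpm=1"/' "$grub_file"
cat "$grub_file"    # GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.dpm=1"
# After rebooting, whether DPM is active can be checked via
#   /sys/class/drm/card0/device/power_dpm_state
```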
                      Last edited by gradinaruvasile; 01 October 2013, 06:46 AM.

