r6xx 3D games


  • Can anybody give me an update on the latest drivers with Heroes of Newerth?
    If needed, I can provide you with a beta account.

    • Apparently, QuakeLive also exists in 64-bit, and that was not the problem. Yesterday's blit commit fixed it, and now it's working perfectly!

      Of course, I will have the 32-bit problem when the drivers become capable of running Doom 3 and Quake 4, but so far I'm a happy camper: QuakeLive works!
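
      For the record, a quick way to check for the 32-bit problem in advance (a sketch; the game path here is an assumption, not something from this thread):

      Code:
      file ~/doom3/doom.x86     # hypothetical install path; "ELF 32-bit" confirms a 32-bit binary
      ls /usr/lib32/libGL.so*   # 32-bit GL libraries on a typical 64-bit system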

      • Newerth

        Originally posted by Pfanne View Post
        Can anybody give me an update on the latest drivers with Heroes of Newerth?
        If needed, I can provide you with a beta account.
        I've been trying on Arch 64 for the last few days with all the mesa/libgl/drm/video-ati/etc. git trees and the stock 2.6.32 Arch kernel, without any success. In fact, the last few days I have had zero 3D, whereas when I first tried some time last week, games like Alien Arena and World of Padman worked amazingly well.

        HoN was giving me ARB_shader_something errors.

        I'm currently compiling (via SSH from work) a 2.6.33-rc4 kernel without the power-management patches located in this PKGBUILD (which I initially tried):

        http://bbs.archlinux.org/viewtopic.p...690059#p690059

        It was failing at the depmod stage, as someone commented later in that thread.
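
        Roughly what I'm doing, as a sketch (the package name and the exact patch lines in that PKGBUILD are assumptions; check the thread):

        Code:
        mkdir kernel26 && cd kernel26
        # save the PKGBUILD from the link above here, then delete the
        # power-management "patch -p1 < ..." lines from its build() function
        makepkg -s                             # -s pulls in build deps via pacman
        sudo pacman -U kernel26*.pkg.tar.gz    # install the built package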

        I will try to update this thread tonight with my results.

        EDIT: This is on an ATI RV730XT [Radeon HD 4670].
        Last edited by xris; 01-19-2010, 01:18 AM.

        • I don't know what's going on, but I don't have squat working these days...

          Had to revert to Catalyst.
          Last edited by xris; 01-20-2010, 09:14 AM.

          • Now that xorg-edgers has moved to Mesa 7.8, I now have OpenGL 2.0 with radeon. Pretty much all my games are playable in some manner. GLSL in Nexuiz and Alien Arena no longer causes crashes and/or massive slowdowns, but it makes the games render garbage. Blur is usable in Nexuiz as well, just as AA is (it doesn't do anything, but neither does it slow the game to a crawl anymore). Warsow still crashes at start-up, Tremulous is slow and unplayable, and Neverwinter Nights is too slow to play. Wine I haven't tested much, but performance with it didn't seem to change much (games run, but too slowly).
            Too bad enabling KMS causes Firefox scrolling to be choppy. I just can't do without DRI2 anymore, so it's back to fglrx for now.

            • Originally posted by Melcar View Post
              Too bad enabling KMS causes Firefox scrolling to be choppy. I just can't do without DRI2 anymore, so it's back to fglrx for now.
              Try disabling smooth scrolling in FF. Also, if you are using an older xf86-video-ati, try git master. Command batching support for r6xx+ EXA/Xv was added around the end of November and improves 2D performance significantly.
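
              For example (a sketch; the about:config preference name and the git URL are from memory, so double-check them):

              Code:
              # Firefox: set general.smoothScroll to false in about:config
              git clone git://anongit.freedesktop.org/xorg/driver/xf86-video-ati
              cd xf86-video-ati
              ./autogen.sh --prefix=/usr    # match your distro's X prefix
              make && sudo make install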

              • Now I get OpenGL 2.0 as well. Great!
                But KMS won't work for me. I can turn it on (with radeon.modeset=1), but then I can't use compiz. If I try to turn it on, my screen goes totally white. dmesg says KMS is enabled. I tried it out with Ubuntu Karmic + the xorg-edgers PPA + a 2.6.32-4 kernel on an ATI Radeon HD 2600.
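
                For reference, this is how I'm checking that KMS is actually active (a sketch; the sysfs path is an assumption on my part):

                Code:
                cat /sys/module/radeon/parameters/modeset   # "1" means KMS is on
                dmesg | grep -i modeset                     # radeon logs whether kernel modesetting is enabled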

                • Originally posted by Boerkel View Post
                  Now I get OpenGL 2.0 as well. Great!
                  But KMS won't work for me. I can turn it on (with radeon.modeset=1), but then I can't use compiz. If I try to turn it on, my screen goes totally white. dmesg says KMS is enabled. I tried it out with Ubuntu Karmic + the xorg-edgers PPA + a 2.6.32-4 kernel on an ATI Radeon HD 2600.
                  Make sure your r600 mesa driver is built with kms support. Also, make sure compiz is using an indirect context. Try running:
                  Code:
                  LIBGL_ALWAYS_INDIRECT=1 compiz
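
                  To confirm the context really is indirect (a quick check, assuming glxinfo is installed):

                  Code:
                  LIBGL_ALWAYS_INDIRECT=1 glxinfo | grep "direct rendering"   # should say "No"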

                  • After recent updates, Darwinia is very playable (good framerate, only minor artifacts), and Doom 3, although with heavy shadow artifacts, has a decent framerate (30 fps average in timedemo demo1).

                    This is with an RV670, kernel 2.6.33-rc5, and git libdrm, mesa, and drivers. Great work, devs!
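
                    In case anyone wants to compare numbers, the benchmark is just run from the Doom 3 console (the binary name depends on how you installed the Linux port):

                    Code:
                    doom3 +timedemo demo1    # or open the in-game console and type "timedemo demo1"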

                    • I have to say that CS 1.6 is working quite well (some shadow glitches, but only on specific maps), just not with a decent framerate... It sometimes drops to 5-10 FPS and is totally unplayable. I have to note that with fglrx it didn't drop that much (maybe sometimes to 30-40). I'm using an RV670, everything graphics-related built from git, and a 2.6.33-rc4 kernel (KMS enabled). With UMS it's still painfully slow. Is there any way to change this situation? Will the Gallium driver solve this issue?

                      • Originally posted by Wielkie G View Post
                        I have to say that CS 1.6 is working quite well (some shadow glitches, but only on specific maps), just not with a decent framerate... It sometimes drops to 5-10 FPS and is totally unplayable. I have to note that with fglrx it didn't drop that much (maybe sometimes to 30-40). I'm using an RV670, everything graphics-related built from git, and a 2.6.33-rc4 kernel (KMS enabled). With UMS it's still painfully slow. Is there any way to change this situation? Will the Gallium driver solve this issue?
                        I believe Gallium will only work with KMS. But by the time the Gallium driver is usable, using KMS will be painless, like where the Intel driver is now.

                        • You didn't understand me. UMS is as slow as KMS - there is no performance difference between them in my case.

                          • FWIW I read your previous post the same way as pvtcupcakes did, i.e. that KMS was slow in places but that UMS was worse.

                            Unless one of the devs is familiar with exactly what the app is doing when framerates are low, it's going to be hard to do more than guess about what development work is most likely to make a difference. Right now the focus is still more on making the apps run in the first place and accelerating commonly used functions than on doing any specific optimization work.

                            What is the app doing when it gets slow, i.e. large amounts of detail, specific effects, etc.?

                            • It's a Half-Life-based game (GoldSource engine); it's very simple and derived from the Quake engine. FPS is low when I look at many triangles (for example, it's a bit higher when I look at the floor). It may be connected to unoptimized CPU->GPU transfers. AFAIK this engine batches all triangles every frame, so more triangles -> lower performance because of this bottleneck. This was definitely the case with fglrx, but mesa could have introduced other bottlenecks. Also note that I play this game through Wine (but still OpenGL). I'll try to start the game from a console to find some interesting Wine messages (if any).

                              Edit: Nothing much interesting in the console, only this (may be produced by mesa):
                              Code:
                              warning: Unknown nb_ctl request:  4
                              repeated a few times.
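
                              For completeness, this is how I'm launching it to capture the output (the install path is just my own, obviously):

                              Code:
                              cd ~/.wine/drive_c/Half-Life                   # hypothetical install path
                              wine hl.exe -game cstrike 2>&1 | tee ~/hl.log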
                              Last edited by Wielkie G; 01-24-2010, 04:20 PM.

                              • Originally posted by bridgman View Post
                                FWIW I read your previous post the same way as pvtcupcakes did, i.e. that KMS was slow in places but that UMS was worse.

                                Unless one of the devs is familiar with exactly what the app is doing when framerates are low, it's going to be hard to do more than guess about what development work is most likely to make a difference.
                                Which is why some company interested in graphics should fund getting better OpenGL debug tools. I'm about ready to just give up entirely on OpenGL/Linux after seeing the DirectX tools like PIX and NVPerfHUD. I can't even put into words the difference they make; it's like taking regular programming from working in raw machine code to writing everything in plain English, and that's not an exaggeration.

                                Not only can you trace and profile every last bit of the entire graphics pipeline from your app down into the hardware execution of the shaders for any given pixel, you can play back frames and step through execution, you can get hotspots in your API usage, and the tools can give you errors that pretty much say "this thing right here is what's wrong with your performance, and here's how to fix it."

                                As a driver developer, the ability to see the call graphs through the API down to the hardware execution of shaders would help you identify hotspots and performance issues in the drivers without needing to look at the app's source at all. Even if you don't own the app, it would make it possible for users to submit logs with the profiling results.
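
                                The closest thing Mesa offers today is a handful of debug environment variables (LIBGL_DEBUG and MESA_DEBUG are real knobs, but nowhere near PIX-level tooling; the game command is just a placeholder):

                                Code:
                                LIBGL_DEBUG=verbose glxgears   # log libGL driver-loading problems
                                MESA_DEBUG=1 ./yourgame        # Mesa prints a message whenever a GL error occurs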
