E-450 graphics performance issues

  • #41
    Originally posted by bridgman
    IIRC the algorithm of choice ended up breaking the rectangles into narrow horizontal slices the size of the vertical scroll distance, blitting rectangles one at a time, and flushing caches between rectangles. That was reliable and reasonably fast; not sure if any improvement since then has been figured out.
    FYI, the current algorithm seems to be a bit better - it does the copy in two steps with a temporary buffer. Still doesn't explain why it is so slow.
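    For anyone trying to picture the older scheme bridgman describes above, here it is roughly sketched in C. This is a minimal illustration only - the helper names (blit_rect, flush_2d_caches) are hypothetical stand-ins, not the actual radeon code:

    #include <stdio.h>

    /* Hypothetical stand-ins for the driver's real blit and cache-flush
     * hooks -- illustration only, not the actual radeon code. */
    static void blit_rect(int sx, int sy, int dx, int dy, int w, int h)
    {
        printf("blit %dx%d from (%d,%d) to (%d,%d)\n", w, h, sx, sy, dx, dy);
    }

    static void flush_2d_caches(void)
    {
        printf("cache flush\n");
    }

    /* Scroll a w x h region at (x,y) upward by scroll_dy pixels, one
     * slice at a time.  Each slice is exactly scroll_dy tall, so no
     * slice's source rows overlap a destination that has already been
     * written; the flush between slices is the reliability workaround
     * described above.  (h is the copy height, i.e. the window height
     * minus the scroll distance.) */
    static void scroll_region(int x, int y, int w, int h, int scroll_dy)
    {
        for (int done = 0; done < h; done += scroll_dy) {
            int slice_h = (h - done < scroll_dy) ? (h - done) : scroll_dy;

            blit_rect(x, y + done + scroll_dy,   /* source slice */
                      x, y + done,               /* destination slice */
                      w, slice_h);
            flush_2d_caches();
        }
    }

    int main(void)
    {
        scroll_region(0, 0, 1024, 768 - 16, 16);  /* scroll a 1024x768 window up by 16 px */
        return 0;
    }

    Each flush is a synchronization point, which fits the "reliable and reasonably fast" description rather than outright fast.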



    • #42
      I have performance issues on r200 as well - the lines test from gtkperf, for example. With DRI disabled it is fine, and the same goes for the x11perf tests, all compared against UMS/EXA. I think it has something to do with loading new content; I see the same behaviour when loading some games - a new scene needs 10 or more seconds to load (hard to explain - it does load, but as slowly as if it were using swrast for those 10+ seconds). Again, when I compare that with UMS/EXA and Mesa 7.5.2, everything is fine and smooth.

      Maybe something is set up wrongly in KMS, or in EXA with KMS - who knows.

      Also, textured video has a stairway effect - diagonal tearing, maybe. So these are the main bugs for me.

      And some alternative GUI toolkits are much slower: scrolling in FLTK, minimizing/maximizing windows in the FOX toolkit, menus in SoftMaker Office, etc.
      Last edited by dungeon; 16 July 2012, 06:08 AM.



      • #43
        Originally posted by brent
        FYI, the current algorithm seems to be a bit better - it does the copy in two steps with a temporary buffer. Still doesn't explain why it is so slow.
        Well, OK, it's simple: the whole command stream is flushed two times for that. No batching = slow.
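        If that's what happens, each scroll step presumably boils down to something like the sketch below - C with made-up names (emit_blit, flush_command_stream), not the actual driver code:

        #include <stdio.h>
        #include <stdlib.h>

        /* Made-up stand-ins for buffer allocation, blit emission and
         * command-stream submission -- illustration only. */
        typedef struct { int w, h; } surface_t;

        static surface_t screen = { 1024, 768 };

        static surface_t *alloc_temp(int w, int h)
        {
            surface_t *s = malloc(sizeof(*s));
            s->w = w;
            s->h = h;
            return s;
        }

        static void emit_blit(surface_t *src, int sx, int sy,
                              surface_t *dst, int dx, int dy, int w, int h)
        {
            (void)src;
            (void)dst;
            printf("blit %dx%d (%d,%d) -> (%d,%d)\n", w, h, sx, sy, dx, dy);
        }

        static void flush_command_stream(void)
        {
            printf("flush: submit command buffer to the kernel and wait\n");
        }

        /* One overlapping copy (one scroll step) bounced through a
         * temporary buffer, with the whole command stream flushed after
         * each half. */
        static void copy_via_temp(int sx, int sy, int dx, int dy, int w, int h)
        {
            surface_t *tmp = alloc_temp(w, h);

            emit_blit(&screen, sx, sy, tmp, 0, 0, w, h);  /* step 1: screen -> temp */
            flush_command_stream();                       /* flush #1 */

            emit_blit(tmp, 0, 0, &screen, dx, dy, w, h);  /* step 2: temp -> screen */
            flush_command_stream();                       /* flush #2 */

            free(tmp);
        }

        int main(void)
        {
            copy_via_temp(0, 16, 0, 0, 1024, 752);  /* one 16-pixel scroll step */
            return 0;
        }

        Two full submissions and stalls per scroll step, with nothing batched across neighbouring steps - a terminal scrolling line by line pays that cost over and over.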



        • #44
          Originally posted by bridgman
          I don't think my description talked about CPU load, did it ?

          IIRC all that rectangle-blitting and cache-flushing (and waiting for cache flushing) did eat up some CPU time, but not sure if 60% is reasonable.
          That was the impression I got from your description - that all the work is done on the video chip. I just ran the test on my 6-core Phenom with an HD 6870 and I get similar results (all cores between 17% and 25%). Playing a YouTube video without hardware acceleration uses less CPU.
          Something is weird with either my hardware or your drivers.



          • #45
            The problem is mostly twofold, and I've already addressed both points on various threads:
            1. Modern toolkits are using more advanced RENDER features. It's not possible to accelerate these and still be RENDER spec compliant on older GPUs, because RENDER semantics don't map well to 3D hardware. It is possible to accelerate them on modern GPUs, but the complexity starts to rival a 3D driver. In that case it starts to make more sense to take advantage of the 3D driver (better state tracking, integrated shader compiler) with something like glamor.
            2. EXA was designed years ago and does not provide the necessary infrastructure to accelerate more advanced RENDER features without an overhaul.
            Thus, you end up with SW fallbacks for certain operations, which means data ping-ponging between GPU and CPU buffers, and that almost always ends up being slower than pure CPU rendering or pure GPU rendering. You can try the glamor support in git, which should improve things going forward as glamor picks up support for accelerating more and more operations using OpenGL.
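            To make the ping-pong concrete, here is a purely conceptual sketch in C - the names and structures are invented for illustration and are not the real EXA or driver code paths:

            #include <stdio.h>
            #include <stddef.h>

            /* Invented types and names -- this shows the shape of the
             * problem, not the real EXA code. */
            typedef struct {
                const char *name;
                int in_vram;    /* 1 = pixmap currently lives in GPU memory */
            } pixmap_t;

            typedef struct {
                pixmap_t *src, *mask, *dst;
            } composite_op_t;

            /* Pretend this is one of the advanced RENDER operations the
             * hardware/EXA combination can't accelerate. */
            static int driver_can_accelerate(const composite_op_t *op)
            {
                (void)op;
                return 0;
            }

            static void migrate_to_system_memory(pixmap_t *p)
            {
                if (p && p->in_vram) {
                    printf("migrate %s: VRAM -> system RAM (slow readback)\n", p->name);
                    p->in_vram = 0;
                }
            }

            static void composite(composite_op_t *op)
            {
                if (driver_can_accelerate(op)) {
                    printf("GPU composite, everything stays in VRAM\n");
                    return;
                }

                /* Software fallback: the CPU has to see the pixels, so every
                 * pixmap involved is pulled out of VRAM first... */
                migrate_to_system_memory(op->src);
                migrate_to_system_memory(op->mask);
                migrate_to_system_memory(op->dst);

                printf("CPU composite in system RAM\n");

                /* ...and the next accelerated operation that touches these
                 * pixmaps pays again to move them back. */
            }

            int main(void)
            {
                pixmap_t src = { "src", 1 }, dst = { "dst", 1 };
                composite_op_t op = { &src, NULL, &dst };
                composite(&op);
                return 0;
            }

            That round trip in both directions is the ping-pong, and it is why the mixed path usually loses to either pure CPU or pure GPU rendering.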



            • #46
              Originally posted by brent
              Yes, I am aware of that, but on Intel and NVidia, 2D rendering through the 3D hardware does not have these performance issues.
              Have you checked modern NVidia hardware ? You mentioned performance on "an older NVidia GPU" which probably had 2D hardware acceleration.

              Originally posted by kobblestown
              It is to be commended that AMD has an OSS policy and is releasing documentation that helps in this regard. Yes, it's a pity that the OSS Radeon driver is so far behind in terms of performance and features. And those are both mostly due to the incompleteness of the documentation.
              Sounds pretty dramatic, but I don't think it's actually *true*.

              We try to focus our developer support efforts where developers are actually working, so in general we don't let documentation (or code or info delivered in other forms) get in the way. If you want to pick on UVD I guess that works, but we said at the start *not* to assume UVD support unless/until we say otherwise, and that's still the case.

              The other area where this is probably true is power management for APUs, where it's tough to get clocks running at full speed with currently available information. We are working on that. The whole power management area changed drastically while we were planning the open source graphics project and it's pretty clear we underestimated the challenges there.

              For the rest of power management, it's actually one of those perverse situations where the *possibility* of documentation is holding things back... there's enough info out there to make significant improvements in power management, but the devs are holding off because *if* we can release more info (and in fairness we probably will have to if only for APUs) that would obsolete some of the work they could do now. I can't say I disagree with the devs, but it is certainly one of those "unintended consequences".

              Originally posted by kobblestown
              So bridgman, please tell this to your superiors. Your efforts are not in vain. If only things could happen a bit quicker...
              What, specifically, do you think should be "quicker" (ie what parts of the chip do you think should be progressing faster in terms of us writing the code, learning the hardware, releasing code and writing documentation) ?
              Last edited by bridgman; 16 July 2012, 09:52 AM.



              • #47
                The simple fact is that AMD just doesn't provide the same level of performance as either NVidia or Intel (on Linux, I mean). I have mostly had AMD hardware because it was always cheaper than Intel. I bought a 13-inch Lenovo ThinkPad Edge with an AMD Athlon Neo X2 (L325), only to discover later that it doesn't support CPU frequency scaling, which every Intel CPU can do; to make things sweeter, Lenovo shipped a buggy BIOS that they have no intention of fixing, so I still get a firmware bug message when my laptop boots. Fglrx has always been very slow on this laptop, but thanks to the OSS drivers the laptop is usable now and I get a tear-free desktop.

                As for AMD's policy of supporting open source, it's just what the word says: politics, meaning lots of promises and only a few facts. Although I have been using AMD CPUs and GPUs for about 10 years, I'm now tired of waiting for AMD to catch up. So I think every AMD fanboy should at least once give NVidia and Intel a try to see how it goes, just to keep an open mind. And again, many kudos to the OSS developers - without the work they put in, I wouldn't be typing this from a Linux box right now.



                • #48
                  Originally posted by adriankx
                  As for AMD's policy of supporting open source, it's just what the word says: politics, meaning lots of promises and only a few facts.
                  Can you be a bit more specific ? What "promises" are you talking about ? You know we have 4 full time developers contributing to the open source graphics drivers, right ?
                  Last edited by bridgman; 16 July 2012, 02:11 PM.



                  • #49
                    Yes, I know, and they do quite a lot! But everybody expected that, with AMD opening up their specs, development of the OSS drivers would go very fast. For example, my HD 3200 Mobility is pretty old and I still don't think it has reached 70% feature parity with the blob. I have UVD1, so no video acceleration for me. All I'm trying to say is that many of us got very enthusiastic when AMD released specs and then hired OSS developers, but nobody knew the huge amount of work required to play catch-up with fglrx. So in my opinion, with Intel GPUs getting stronger with each generation, they are worth considering when purchasing a new laptop.

                    I will still support and use the OSS driver with my current AMD laptop, just without setting my hopes so high this time. And if I want to look on the bright side of things, I can consider my video card pretty future-proof with the OSS driver, only getting better over time; but for those not willing to wait, I would still recommend an Intel Ivy Bridge rig - it has good OSS support across the board from what I've read, though I don't own such hardware, so I might be mistaken. I didn't mean to offend in any way; this is just a statement of facts and some constructive criticism. In the end we all want the same thing: a trouble-free, smooth Linux experience.



                    • #50
                      Originally posted by adriankx
                      But everybody expected that, with AMD opening up their specs, development of the OSS drivers would go very fast.
                      This is the part I don't understand. Anyone with graphics driver development experience knew how hard it was going to be, although there was some disagreement about which parts were hardest. The proprietary Linux drivers probably have north of 1,000 engineer-years invested in them (very rough guess)... why would anyone expect open source driver development to be any easier ?

                      The *only* things that make any of this work are (a) if you work on functionality in the right order you can get a fair amount of usability with the first (relatively) small amount of development work, (b) making source code or per-commit builds available allows non-developer "power users" to help locate commits which broke something, and (c) the nature of open source work tends to attract some unusually capable and motivated developers.

                      I'm still impressed with how *quickly* the development has gone, given all of the architectural transitions (KMS, GEM/TTM, DRI2, Gallium3D etc...) that had to happen in parallel.

