Retro thread - seeking 2d performance

  • Retro thread - seeking 2d performance

    In brief - I need/want fast 2d performance with good visual quality, and don't really care about 3d. Am I better off dusting off an ancient Matrox G400 to replace a less-ancient nVidia 7600 GT?

    Seems to me that all of the performance emphasis these days is on 3d. 2d performance is assumed, neglected, denigrated, whatever. The recent article with the OSS vs. Catalyst numbers on ATI highlights that.

    My job has no need whatsoever for 3d, and I don't run any sort of eye candy that requires it, either. For my job, 2d performance and visual quality are everything. The company buys high-performance graphics cards, but these days that means fast 3d, which is irrelevant. Hopefully fast 3d implies fast 2d, but looking at the recent ATI article, that's not necessarily the case.

    I've seen that the nouveau drivers are faster for 2d than the proprietary nVidia drivers, but properly installing nouveau is rather disruptive for now. I'm sure it will get easier as the new DRM/DRI stuff goes mainline, but it isn't there yet. I also have an old Matrox G400 gathering dust in a cabinet, and I'm wondering if it might actually be faster for 2d. The video quality of the Matrox G400 was legendary in its time, though I've now got an LCD instead of a CRT, so that probably doesn't matter as much.

    Does anyone know of collections of 2d X11 benchmarks that might help me sort this out? A bit of fiddling with Google hasn't helped so far. There just isn't much emphasis on 2d performance these days.

  • #2
    I think raw 2d speed is overrated. Many use Compiz or KDE 4 effects, which depend on 3d speed as well. If you disable those effects (or even disable the composite extension in xorg.conf), then in most cases 2d speed should be fine. It's telling that many of the nvidia complaints come from KDE 4 users; KDE 3 without effects usually did not suffer from those problems.
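
    For reference, disabling the composite extension is a small xorg.conf change (standard Xorg syntax; restart X after editing):

        Section "Extensions"
            Option "Composite" "Disable"
        EndSection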

    Comment


    • #3
      I run plain old icewm, at the moment. None of the fancy desktops.

      As for 2d speed, I'm in chip design. So where others may be interested in rendering shaded triangles, I simply want polygons - millions of them. Not that you can meaningfully paint millions of polygons on the screen at any given moment in any visually significant way, but they're all there in the data, and they need some form of high-level visual representation. Then we start zooming in. And for that matter, when looking at a full chip, millions might be a bit of an understatement. You can do a transistor with under a dozen polygons, but not a very interesting one. Nor can you see that transistor at the full-chip level, but there are times when you'd like to navigate around the full chip, find an area of interest, and zoom in to where you can see individual transistors.

      Comment


      • #4
        I would say yes, based on my own experience with a G450, but YMMV. Why not just try both?

        gtkperf, cairoperf (is it named that?), x11perf come to mind as 2d benchmarks.
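
        If you'd rather get a quick rough number than run the whole x11perf suite, a minimal Xlib timing loop is easy to throw together (just a sketch; error handling omitted; compile with something like g++ lines.cpp -lX11):

            // Rough XDrawLine throughput check.
            #include <X11/Xlib.h>
            #include <cstdio>
            #include <cstdlib>
            #include <ctime>

            int main() {
                Display* dpy = XOpenDisplay(nullptr);
                if (!dpy) { std::fprintf(stderr, "cannot open display\n"); return 1; }
                Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                                 0, 0, 800, 600, 0, 0, 0);
                XMapWindow(dpy, win);
                GC gc = XCreateGC(dpy, win, 0, nullptr);
                XSync(dpy, False);                 // wait until the window exists server-side

                const int N = 1000000;             // random short lines to draw
                timespec t0, t1;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int i = 0; i < N; ++i)
                    XDrawLine(dpy, win, gc, std::rand() % 800, std::rand() % 600,
                                            std::rand() % 800, std::rand() % 600);
                XSync(dpy, False);                 // block until the server has processed everything
                clock_gettime(CLOCK_MONOTONIC, &t1);

                double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                std::printf("%d lines in %.2f s (%.0f lines/s)\n", N, sec, N / sec);
                XCloseDisplay(dpy);
                return 0;
            }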

        Comment


        • #5
          That's one way, and it might well be what I do. I was more looking to see if someone would declare it a slam-dunk one way or the other first. Thanks for the benchmark suggestions, though. I may as well start x11perf some time before I leave; last I saw, it takes a while.

          Comment


          • #6
            There are a multitude of 2D operations. Make sure you know which ones you need before evaluating. From what I read, you don't want "Fast 2D". You want "Fast line drawing".

            Current GPUs are capable of drawing a few hundred million transformed and filled triangles per second, so a couple of million lines should be easy.

            But when it comes to drawing a short, straight non-AA line, it's often more work to send the drawing commands to the GPU than to draw the line on your CPU. If you really need speed, you could either
            - ignore your GPU and do full software rendering. Use whatever card you like, but stay away from anything old enough to have no digital outputs. If anything kills visual quality, it's an analog VGA connection.
            or
            - make sure your program utilizes the GPU efficiently and isn't stuck in layers of APIs.

            For the latter, you need to find out which API your software uses. X11, XRender, OpenGL, ...? As server roundtrips are to be avoided, OpenGL/DRI would be fastest. If you can use GL_LINE_STRIP instead of GL_LINES, even better, as you'll send fewer vertices to the card. If you care to write custom specialized shader programs, performance shouldn't be an issue at all.
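
            To make the GL_LINES vs. GL_LINE_STRIP point concrete: a connected path of N segments needs 2N vertices with GL_LINES but only N+1 with GL_LINE_STRIP. A minimal sketch with legacy client-side vertex arrays (assumes a current GL context is already set up):

                #include <GL/gl.h>
                #include <cstddef>
                #include <vector>

                // pts holds x0,y0, x1,y1, ... : the N+1 points of an N-segment path.
                void draw_path_strip(const std::vector<float>& pts)
                {
                    glEnableClientState(GL_VERTEX_ARRAY);
                    glVertexPointer(2, GL_FLOAT, 0, pts.data());
                    // One vertex per point: N+1 vertices in total.
                    glDrawArrays(GL_LINE_STRIP, 0, GLsizei(pts.size() / 2));
                    glDisableClientState(GL_VERTEX_ARRAY);
                }

                void draw_path_lines(const std::vector<float>& pts)
                {
                    // GL_LINES wants each segment as its own endpoint pair,
                    // so the same path has to be expanded to 2N vertices.
                    std::vector<float> expanded;
                    for (std::size_t i = 0; i + 3 < pts.size(); i += 2)
                        expanded.insert(expanded.end(),
                                        { pts[i], pts[i + 1], pts[i + 2], pts[i + 3] });
                    glEnableClientState(GL_VERTEX_ARRAY);
                    glVertexPointer(2, GL_FLOAT, 0, expanded.data());
                    glDrawArrays(GL_LINES, 0, GLsizei(expanded.size() / 2));
                    glDisableClientState(GL_VERTEX_ARRAY);
                }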

            Comment


            • #7
              Find a cheap, used Radeon(HD). Fast 2D with EXA.

              Comment


              • #8
                I think you'll need OpenGL with vertex array and/or buffer object support. The key to success is caching and grouping the graphics primitives, not drawing everything one by one (in immediate mode). It may also be worth looking at http://www.toped.org.uk/index.html, both for the code and for the tool itself. Last time I checked, the open source Radeon drivers didn't support the necessary operations, but that may have changed by now.
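
                As a rough illustration of the caching-and-grouping idea (a sketch only; it assumes a GL context and an extension loader such as GLEW are already in place): upload the endpoints once into a buffer object, then redraw the whole batch with a single draw call per frame instead of per-line immediate-mode calls.

                    #include <GL/glew.h>
                    #include <vector>

                    // Upload line endpoints (x0,y0,x1,y1,...) once; reuse the VBO every frame.
                    GLuint upload_lines(const std::vector<float>& xy)
                    {
                        GLuint vbo = 0;
                        glGenBuffers(1, &vbo);
                        glBindBuffer(GL_ARRAY_BUFFER, vbo);
                        glBufferData(GL_ARRAY_BUFFER, xy.size() * sizeof(float),
                                     xy.data(), GL_STATIC_DRAW);
                        return vbo;
                    }

                    // One draw call for the whole batch, no per-line glVertex traffic.
                    void draw_lines(GLuint vbo, GLsizei vertex_count)
                    {
                        glBindBuffer(GL_ARRAY_BUFFER, vbo);
                        glEnableClientState(GL_VERTEX_ARRAY);
                        glVertexPointer(2, GL_FLOAT, 0, nullptr);   // sourced from the bound VBO
                        glDrawArrays(GL_LINES, 0, vertex_count);
                        glDisableClientState(GL_VERTEX_ARRAY);
                        glBindBuffer(GL_ARRAY_BUFFER, 0);
                    }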

                Regards,
                -Vesa

                Comment


                • #9
                  Originally posted by vesa
                  I think you'll need OpenGl with vertex array and/or buffer object support. ... the open source Radeon drivers didn't support the necessary operations, but that may have changed now.
                  (links to a Phoronix article)

                  Comment


                  • #10
                    I'm picking up prebuilt applications, some commercial, some internal. It is possible to find out which APIs both of them use, but it's likely not possible to change that. The dominant commercial application uses Qt 3, if that gives any hints.

                    Comment
