Is Windows 7 Actually Faster Than Ubuntu 10.04?


  • #91
    Originally posted by BlackStar View Post
    I'd wager that if you are seeing larger than 10% differences running the same code on D3D vs GL, then either the engine is badly written or you are hitting a driver bug. This 10% figure is something I've seen come up relatively frequently on opengl.org forums.
    Without a benchmark run across various applications, that 10% figure might as well be anywhere from 0% to 200%.

    • #92
      Originally posted by allquixotic View Post
      I notice dramatic differences in the performance of the ATI Catalyst Linux driver on Ubuntu depending on whether compositing is enabled or disabled. For instance, with compositing disabled, 2D performance of the Catalyst driver is as sluggish as the open source Intel driver was in 2007! It takes up to a second for each refresh when scrolling a window in Chrome. I notice a similar tanking of FPS in 3D games without compositing. It seems that the Catalyst Linux driver is only competitive with compositing *enabled*, which you'd think would actually decrease performance, but due to the architecture of the graphics system, it actually makes everything way faster.
      Now that's strange: compositing used to reduce fglrx 2D performance significantly in the past. IIRC, there are some xorg.conf options that can improve performance significantly (Textured2D, I think?).

      For what it's worth, the OSS driver provides *significantly* faster 2D than fglrx on my 4850, regardless of the compositor in use. Better Xv, too, with less tearing and better colors. Now the only thing I need is proper OpenGL 2.1 / GLSL 1.20 support and I'll switch over completely.
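
      Aside for anyone trying to reproduce these with/without-compositing comparisons: by EWMH convention, a running compositing manager owns the _NET_WM_CM_Sn selection for its screen, so a small check like the sketch below (my own illustration, not something from the posts above) can confirm which state you are actually benchmarking.

      /* compositor_check.c -- build with: cc compositor_check.c -lX11 */
      #include <stdio.h>
      #include <X11/Xlib.h>

      int main(void)
      {
          Display *dpy = XOpenDisplay(NULL);
          if (!dpy) {
              fprintf(stderr, "cannot open display\n");
              return 1;
          }

          /* A compositing manager owns _NET_WM_CM_S<screen> while it runs. */
          char name[32];
          snprintf(name, sizeof(name), "_NET_WM_CM_S%d", DefaultScreen(dpy));
          Atom sel = XInternAtom(dpy, name, False);

          printf("compositing %s\n",
                 XGetSelectionOwner(dpy, sel) != None ? "enabled" : "disabled");

          XCloseDisplay(dpy);
          return 0;
      }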

      • #93
        Originally posted by deanjo View Post
        Without a benchmark run across various applications, that 10% figure might as well be anywhere from 0% to 200%.
        I'm not aware of any modern D3D10+ / GL3.x+ applications, though. Only Unigine fits the bill, and a single application isn't really enough. WoW is far too old to be meaningful now (plus it's not cross-platform).

        The best I can come up with is a potential Half-Life 2 release that supports an OpenGL renderer in addition to Direct3d (which might or might not happen). Starcraft 2 might also work but I doubt they will make the OpenGL renderer available on Windows (why should they? It will be optimized around Apple's OpenGL implementation, which means OpenGL 2.1 and workarounds for Apple bugs).

        Any better ideas?

        • #94
          Originally posted by BlackStar View Post
          Now that's strange: compositing used to reduce fglrx 2D performance significantly in the past. IIRC, there are some xorg.conf options that can improve performance significantly (Textured2D, I think?).

          For what it's worth, the OSS driver provides *significantly* faster 2D than fglrx on my 4850, regardless of the compositor in use. Better Xv, too, with less tearing and better colors. Now the only thing I need is proper OpenGL 2.1 / GLSL 1.20 support and I'll switch over completely.
          I'd love to use the OSS driver, but they are not delivering support for the HD5000 series fast enough. I have an HD5970. Unless I use fglrx, it's just an expensive SVGA card.

          That said, the performance of everything -- 2D and 3D alike -- is acceptable with fglrx. The composited desktop experience is excellent, and the performance on well-optimized engines like Unigine is good enough for gaming. The performance on the CrystalSpace engine (e.g. PlaneShift) is abysmal, but I can't pin all of that blame on ATI, since CrystalSpace uses a lot of older (deprecated) OpenGL APIs.
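
          To illustrate what "older (deprecated) OpenGL APIs" means in practice, here is a generic sketch (not CrystalSpace's actual code): the immediate-mode path below was removed from the GL 3.x core profile and tends to be a slow path in drivers, while the buffer-object path is the one modern renderers use. The vbo handle, its contents and a suitable context/shader are assumed.

          #define GL_GLEXT_PROTOTYPES
          #include <GL/gl.h>
          #include <GL/glext.h>

          /* Deprecated immediate mode: one driver call per vertex. */
          static void draw_triangle_immediate(void)
          {
              glBegin(GL_TRIANGLES);
              glVertex3f(-1.0f, -1.0f, 0.0f);
              glVertex3f( 1.0f, -1.0f, 0.0f);
              glVertex3f( 0.0f,  1.0f, 0.0f);
              glEnd();
          }

          /* Non-deprecated path: vertex data lives in a buffer object and is
           * drawn with a single call. 'vbo' is assumed to hold three vec3s. */
          static void draw_triangle_buffered(GLuint vbo)
          {
              glBindBuffer(GL_ARRAY_BUFFER, vbo);
              glEnableVertexAttribArray(0);
              glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
              glDrawArrays(GL_TRIANGLES, 0, 3);
              glDisableVertexAttribArray(0);
          }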

          Rant below; watch out!

          The summary judgment I pass on all open source 3D graphics drivers to date is: too little, too late. Even in their mature, finished form, like the Intel G965 drivers that have been in development for many years, they are too little; and they're much, much too late (the G965 hardware I used to use has since been replaced with a more modern system).

          By the time the open source radeon stack can do interesting things like OpenGL 2.1 with an HD5970, I'll be ready to upgrade to an HD6000 series card. Only the people who upgrade once every 6-8 years are going to see any benefit from this unimpressive rate of development.

          FOSS works great for things that can be done once and done well, like a web browser, office suite, or kernel. The required rate of progress for these products doesn't depend on external factors such as changes in hardware. If it takes you 10-15 years to develop a solid core technology like the Linux kernel, that's fine, because a lot of the effort that went into the Linux kernel in the 90s is still being used today.

          But when you're fighting against Moore's Law, as you are with hardware drivers, FOSS simply can't keep up -- until the people creating the hardware start to put a proper priority (i.e. >= 50% of the development manpower) on open source drivers. Until then, Linux and other FOSS operating systems will be behind the curve on the most complicated device drivers, such as 3D.

          • #95
            Originally posted by BlackStar View Post
            You are mixing up API functionality with API design. Please don't; they are not the same thing: OpenGL 3.3/4.0 is modern in functionality but legacy-ridden in design. The original OpenGL 3.0 proposal would have fixed both issues in one go, but the design was scrapped as too ambitious. The resulting spec only fixed 50% of the issue (functionality), leaving the broken design intact.

            Why is the design broken? Because it forces you to either stall the pipeline or aggressively reset state for every single operation. Want to stream in model data without disrupting rendering? You have to save the current array object binding (introducing a stall), bind the new array object, stream the data and finally restore the original array object. Same thing for element array objects, vertex array objects, textures, renderbuffers, framebuffers, pixel buffers and programs, multiplied by the number of middleware libraries you are currently using (want to use a 3d model loader? Maybe a text renderer or a UI library? Better save all your state because they might trample everything).

            None of this should be necessary in a modern API. None of this is necessary in Direct3d.

            Just take a look at the suggestion thread for OpenGL n+1. Direct state access is in nearly every single post there.
            NB: I don't understand the performance implications of the state machine, so I'm going by the thread here.

            The thread tells me that DSA doesn't open up any new performance possibilities; it just makes good performance easier to achieve. I'm sure it's a very nice thing to have, but it shouldn't create a 10% difference in an optimized renderer such as Unigine. It's closed source, so you can't check. But it's reasonable to assume that they went through the trouble to get rid of the stalls.
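
            For readers who want to see the pattern BlackStar describes, here is a minimal sketch (my own, assuming a streaming vertex buffer update; the buffer name, size and data pointer are placeholders): the first function is the save-bind-upload-restore dance a well-behaved library has to do through the classic bind-to-edit API, while the second uses glNamedBufferSubDataEXT from GL_EXT_direct_state_access to touch the object directly, which is roughly what Direct3D lets you do by addressing the resource interface itself.

            #define GL_GLEXT_PROTOTYPES
            #include <GL/gl.h>
            #include <GL/glext.h>

            /* Classic bind-to-edit: every update goes through a shared binding
             * point, so a polite library saves and restores whatever was bound. */
            static void stream_vertices_bind_to_edit(GLuint streaming_vbo,
                                                     GLsizeiptr size, const void *data)
            {
                GLint previous = 0;
                glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &previous); /* query current binding */
                glBindBuffer(GL_ARRAY_BUFFER, streaming_vbo);      /* clobber shared state */
                glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);   /* upload the new data */
                glBindBuffer(GL_ARRAY_BUFFER, (GLuint)previous);   /* put things back */
            }

            /* Direct state access (GL_EXT_direct_state_access): the object is
             * addressed by name and no other state is disturbed. The function
             * pointer is fetched with glXGetProcAddress on most platforms. */
            static void stream_vertices_dsa(PFNGLNAMEDBUFFERSUBDATAEXTPROC named_buffer_sub_data,
                                            GLuint streaming_vbo,
                                            GLsizeiptr size, const void *data)
            {
                /* named_buffer_sub_data points at glNamedBufferSubDataEXT. */
                named_buffer_sub_data(streaming_vbo, 0, size, data);
            }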

            • #96
              Originally posted by Remco View Post
              It's closed source, so you can't check. But it's reasonable to assume that they went through the trouble to get rid of the stalls.
              Of course it's possible to check: use deanjo's suggested Direct3D benchmark.

              But the only problem there is that they are completely different renderers. If OpenGL is 10% slower, we could still say that the OpenGL renderer is bad. If OpenGL is 10% faster, we could say that the Direct3D renderer is bad.

              • #97
                Originally posted by BlackStar View Post
                I'm not aware of any modern D3D10+ / GL3.x+ applications, though. Only Unigine fits the bill, and a single application isn't really enough. WoW is far too old to be meaningful now (plus it's not cross-platform).

                The best I can come up with is a potential Half-Life 2 release that supports an OpenGL renderer in addition to Direct3d (which might or might not happen). Starcraft 2 might also work but I doubt they will make the OpenGL renderer available on Windows (why should they? It will be optimized around Apple's OpenGL implementation, which means OpenGL 2.1 and workarounds for Apple bugs).

                Any better ideas?
                I don't think it has to be as "modern" as D3D10+ / GL3.x+, since many games are still written at the D3D 9.0c / GL 2.1 level. If an OS is capable of running multiple renderers, chances are that the default renderer used in that game/app will be the one that offers better performance. I'm simply saying that if a Brand A vs Brand B comparison is to be done, it shouldn't be limited to the lowest common denominator. If you do, it's like taking a UDMA 66 drive, slapping it on a 40-wire IDE cable and then benchmarking it.

                • #98
                  One other side effect of doing such a benchmark, even if it were Unigine-only, is that it could provide additional motivation to bring their product up to equivalent levels. We have seen many times in the past the power of a benchmark review: performance found lacking gets fixed later precisely because it was brought to light in such a review.

                  • #99
                    Just to point out: OpenGL is also used as a more industry-standard rendering system (CAD, professional rendering tools, etc.) and as a result has some requirements that differ from DirectX. I could give examples, but they could be a bit dated, as I don't have much to do with anything that's not cross-platform these days.
                    That being said, I do like where OpenGL is headed, and I actually find it useful that not everything was updated/changed/overhauled for 3.x and 4.x, simply because it allows for an easier transition to new ways of doing things.

                    • Originally posted by deanjo View Post
                      Not really sure where you get the idea that Ubuntu's default kernel has been optimized for servers. The kernel configs in Ubuntu are aimed at a multi-role configuration; they are not optimized specifically for server or desktop use. If you want an "in the can" default desktop kernel, take a look at openSUSE, which defaults to a kernel that is "desktop tuned".
                      A while back, I recall looking into which options are used, and the key four or five options that tend to determine whether a kernel fits a server role or a desktop role were set to the choices that best suit a server. I believe those are the defaults from kernel.org.
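
                      For anyone who wants to check their own kernel: the options usually pointed at for the desktop-versus-server split are the preemption model (CONFIG_PREEMPT*) and the timer frequency (CONFIG_HZ*); which four or five options the poster had in mind isn't stated, so treat those as examples. A quick sketch that prints them from the packaged config file (assumes a distro that ships /boot/config-<release>):

                      #include <stdio.h>
                      #include <string.h>
                      #include <sys/utsname.h>

                      int main(void)
                      {
                          struct utsname u;
                          if (uname(&u) != 0)
                              return 1;

                          /* Distro kernels usually install their config here. */
                          char path[256];
                          snprintf(path, sizeof(path), "/boot/config-%s", u.release);

                          FILE *f = fopen(path, "r");
                          if (!f) {
                              perror(path);
                              return 1;
                          }

                          char line[256];
                          while (fgets(line, sizeof(line), f)) {
                              /* Matches both "CONFIG_FOO=y" and "# CONFIG_FOO is not set". */
                              if (strstr(line, "CONFIG_PREEMPT") || strstr(line, "CONFIG_HZ"))
                                  fputs(line, stdout);
                          }
                          fclose(f);
                          return 0;
                      }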
