Intel Ivy Bridge: UXA vs. SNA - Updated Benchmarks


  • #16
    Originally posted by Rexilion View Post
    Same here, just too bad that there are only integrated variants of this great product :/ .
    Yeah, IGPs are bad, but APUs (on-die graphics) are advancing at a tremendous rate. That's probably where the future lies: CPUs with lots of cores and fully integrated on-die graphics. Cheers!

    Comment


    • #17
      Originally posted by mendieta View Post
      Exactly! I'm a physicist, so I said "frequency", but the concept is exactly as you stated it.
      And to be concrete: you can use a convenient time unit. For instance, for the kernel, you may want to report "number of kernel compiles per hour", which will hopefully be a few.
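The conversion being proposed is just an inversion: turn a "less is better" elapsed time into a "more is better" rate. A minimal sketch (the function name and the 20-minute figure are illustrative, not from the thread):

```python
def to_rate_per_hour(seconds_per_run: float) -> float:
    """Convert an elapsed time in seconds into runs per hour
    (a 'less is better' time becomes a 'more is better' rate)."""
    return 3600.0 / seconds_per_run

# A hypothetical 20-minute kernel compile:
print(to_rate_per_hour(1200.0))  # -> 3.0 compiles per hour
```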

      We may want to pull Michael into this conversation.

      Comment


      • #18
        These benchmarks show an incredible improvement on some operations, but I wonder how that translates in real-world use; it seems that 2D operations are already very fast on my desktop (my netbook on the other hand is slow but I always thought it was due to the processor, not the GPU). The only thing that I (think I) understand is the Firefox canvas test; a 3 times improvement in drawing speed could be useful at times.

        With regards to PTS graphs, +1 for always using units where “more is better”, or (maybe simpler) drawing bars in a different color when “less is better”.

        Comment


        • #19
          Originally posted by stqn View Post
          These benchmarks show an incredible improvement on some operations, but I wonder how that translates in real-world use; it seems that 2D operations are already very fast on my desktop (my netbook on the other hand is slow but I always thought it was due to the processor, not the GPU). The only thing that I (think I) understand is the Firefox canvas test; a 3 times improvement in drawing speed could be useful at times.
          Indeed, the reality shown by those benchmarks is that the application and toolkit are more often the rate-limiting factor in 2D tasks. For example, the qgears2 "XRender" benchmark does all the image processing and shape rasterisation client-side and fails to use XRender at all for GPU offload. The gtkperf demos spend more time doing runtime type checking of pointers than actually rendering. About the only time the DDX affects those results at all is when it performs atrociously.
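The gtkperf point is about per-call overhead: GTK's C cast macros validate the pointer's type on every call, and when the actual drawing work is tiny, that bookkeeping dominates. Not the thread's actual benchmark, but a minimal Python analogy (class and function names are invented for illustration):

```python
import timeit

class Widget:
    def draw(self):
        return 1  # stand-in for a trivial 2D operation

def draw_unchecked(w, n):
    """Call the trivial operation n times with no type checking."""
    total = 0
    for _ in range(n):
        total += w.draw()
    return total

def draw_checked(w, n):
    """Same loop, but validate the object's type on every call,
    roughly analogous to GTK's per-call cast-checking macros."""
    total = 0
    for _ in range(n):
        if not isinstance(w, Widget):
            raise TypeError("not a Widget")
        total += w.draw()
    return total

w = Widget()
n = 100_000
t_plain = timeit.timeit(lambda: draw_unchecked(w, n), number=10)
t_check = timeit.timeit(lambda: draw_checked(w, n), number=10)
print(f"unchecked: {t_plain:.3f}s  checked: {t_check:.3f}s")
```

Both versions produce identical results; only the per-call overhead differs, which is exactly why such benchmarks measure the toolkit more than the driver.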

          Firefox is the standout example; everything from page loading to scrolling to canvas noticeably benefits from improvements in the DDX. What is harder to measure are the latency improvements that result in X requiring less CPU time to do the same amount of work - especially on these "big core" processors. Where this work matters most is on those slow devices, such as the Atom netbook and its descendants. You would be surprised by how much you ascribed to poor hardware that was in fact atrocious software and drivers.

          Comment


          • #20
            Originally posted by ickle View Post
            Where this work matters most is on those slow devices, such as the Atom netbook and its descendants. You would be surprised by how much you ascribed to poor hardware that was in fact atrocious software and drivers.
            This. Of course, on my overclocked 2600K + discrete GPU the refresh rate of my screen is the limit, but I have some Atom netbooks and an HTPC too, and they could live with a bit more snappiness.

            Comment
