Intel Ivy Bridge: UXA vs. SNA - Updated Benchmarks


  • Intel Ivy Bridge: UXA vs. SNA - Updated Benchmarks

    Phoronix: Intel Ivy Bridge: UXA vs. SNA - Updated Benchmarks

    With the testing of the very latest Intel X.Org graphics driver, the SNA 2D acceleration back-end for the Ivy Bridge graphics is now the clear-cut winner for the Linux desktop over using the default UXA back-end...

    http://www.phoronix.com/vr.php?view=MTM4MTI

  • #2
    I think two simple things would make tests easier to digest:

    1. Always show results in a way where more is better. For instance, if something is measured in seconds to completion, show it as a frequency (1 / time-to-completion).
    2. Once a battery of tests is performed, each configuration gets the geometric average of all its measurements, so you get one global number for each configuration.

    Optionally, one could add one final plot where all averages are normalized to the slowest. Tom's Hardware shows its reviews like that, and it's pretty awesome. (A rough sketch of both ideas follows below.)
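
    As a minimal sketch of both suggestions (the configuration names and times below are made up for illustration, not measured):

    Code:
    from statistics import geometric_mean  # Python 3.8+

    # Hypothetical times-to-completion in seconds, per configuration.
    times = {
        "UXA": [12.0, 3.5, 48.0],
        "SNA": [9.0, 2.1, 40.0],
    }

    # 1. Turn "seconds to completion" into a more-is-better frequency.
    rates = {cfg: [1.0 / t for t in ts] for cfg, ts in times.items()}

    # 2. One global number per configuration: the geometric average.
    scores = {cfg: geometric_mean(rs) for cfg, rs in rates.items()}

    # Optional final plot: normalize every score to the slowest one.
    slowest = min(scores.values())
    normalized = {cfg: s / slowest for cfg, s in scores.items()}

    print(scores)      # one global number per configuration
    print(normalized)  # the slowest configuration sits at 1.0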

    Back to Intel, I am so glad to see full open source support. My next desktop rig will be Intel (for the first time, ever)

    Cheers!



    • #3
      Originally posted by mendieta View Post
      1. Always show results in a way where more is better. For instance, if something is measured in seconds to completion, show it as a frequency (1 / time-to-completion).
      Agreed, that would make reading easier.

      Originally posted by mendieta View Post
      Back to Intel, I am so glad to see full open source support. My next desktop rig will be Intel (for the first time, ever)
      Same here, just too bad that there are only integrated variants of this great product :/



      • #4
        I would be interested in a benchmark comparing SNA and the GLAMOR library. I've always wondered how much of a benefit doing 2D over OpenGL brings to the table, as opposed to the techniques used by SNA.



        • #5
          Originally posted by jrdls View Post
          I would be interested in a benchmark comparing SNA and the GLAMOR library. I've always wondered how much of a benefit doing 2D over OpenGL brings to the table, as opposed to the techniques used by SNA.
          Very little. Last I checked, GLAMOR was slower than UXA. It's meant to be hardware-independent, and it's better than whatever people used on radeon before, but it's not particularly aimed at better performance.



          • #6
            Originally posted by GreatEmerald View Post
            Very little. Last I checked, GLAMOR was slower than UXA. It's meant to be hardware-independent, and it's better than whatever people used on radeon before, but it's not particularly aimed at better performance.
            Give it a little time. The AMD folks are now busy bringing radeonSI up to par with r600g. GLAMOR will stay with their driver for quite a long time; they will tweak it eventually.



            • #7
              Originally posted by Rexilion View Post
              Agreed, that would make reading easier.
              Same here, just too bad that there are only integrated variants of this great product :/
              I do wonder what a separate Intel GPU, freed from the space and heat constraints of a CPU die, could do. (Compete with a three-year-old mid-range NVIDIA card, I guess - but with much nicer drivers.)

              Moving it off to PCIe instead of on-die would require changing some assumptions in the driver, though; GPGPU especially would have to readjust. Sharing the memory controller with the CPU is so much nicer than having to shuffle things over a comparatively slow PCIe connection, but OTOH you can use different RAM tech (GDDR5 has higher latency but more bandwidth than DDR3, I believe?).



              • #8
                Originally posted by mendieta View Post
                1. Always show results in a way where more is better. For instance, if something is measured in seconds to completion, show it as a frequency (1 / time-to-completion).
                Showing completion time as frequency would be really annoying. Frequency tends to mean that something is repeating, which is not what's going on in the tests here.



                • #9
                  Originally posted by kigurai View Post
                  Showing completion time as frequency would be really annoying. Frequency tends to mean that something is repeating, which is not what's going on in the tests here.
                  Agreed, but that does not make frequency inferior as a method to display everything as 'more is better'.



                  • #10
                    Originally posted by kigurai View Post
                    Showing completion time as frequency would be really annoying. Frequency tends to mean that something is repeating, which is not what's going on in the tests here.
                    How about normalizing it so the fastest (or slowest, or alphabetically first, or whatever) is at 1, and the others show how much faster/slower they are (defined e.g. as "how many times could they do this in the time the reference needed to do it once")? It should be fairly easy to read - "2" means it is twice as fast/uses half the time. (See the sketch below.)

                    Alternatively, each test could have a canonical reference (sort of like 1 VAX MIPS being defined as the benchmark result of a VAX 11/780), but I'm not sure whether that would be useful or just messy.
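
                    A minimal sketch of that normalization (hypothetical times, using the slowest as the reference so every value reads as "times faster"):

                    Code:
                    # Hypothetical completion times in seconds; not measured data.
                    times = {"UXA": 12.0, "SNA": 6.0, "GLAMOR": 24.0}

                    # How many times each configuration could finish the test in
                    # the time the reference (the slowest) needs for one run.
                    reference = max(times.values())
                    relative = {cfg: reference / t for cfg, t in times.items()}

                    print(relative)  # {'UXA': 2.0, 'SNA': 4.0, 'GLAMOR': 1.0}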



                    • #11
                      Originally posted by kigurai View Post
                      Showing completion time as frequency would be really annoying. Frequency tends to mean that something is repeating, which is not what's going on in the tests here.
                      Well, [nb of tasks]/[time] is not only a frequency but also a speed. And that is meaningful even for single test units (how much time does it need to complete a given test => at what speed does it run through that given test).



                      • #12
                        Originally posted by mendieta View Post
                        I think two simple things would make tests easier to digest:

                        1. Always show results in a way where more is better. For instance, if something is measured in seconds to completion, show it as a frequency (1 / time-to-completion).
                        2. Once a battery of tests is performed, each configuration gets the geometric average of all its measurements, so you get one global number for each configuration.

                        Optionally, one could add one final plot where all averages are normalized to the slowest. Tom's Hardware shows its reviews like that, and it's pretty awesome.

                        Back to Intel, I am so glad to see full open source support. My next desktop rig will be Intel (for the first time, ever)

                        Cheers!
                        +1 for 2., but for 1., if the test is "Less is better", the bars and the "Less is better" text could be colored dark red instead of dark blue (see the sketch below).
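
                        Something like this, as a sketch of the color-coding idea (made-up data; assumes matplotlib is available):

                        Code:
                        import matplotlib.pyplot as plt

                        # Hypothetical completion times in seconds; here less is better.
                        results = {"UXA": 12.0, "SNA": 9.0}
                        less_is_better = True
                        color = "darkred" if less_is_better else "darkblue"

                        fig, ax = plt.subplots()
                        ax.barh(list(results.keys()), list(results.values()), color=color)
                        ax.set_xlabel("Seconds, Less Is Better", color=color)
                        plt.show()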



                        • #13
                          Originally posted by mendieta View Post
                          I think two simple things would make tests easier to digest:

                          1. Always show results in a way where more is better. For instance, if something is measured in seconds to completion, show it as a frequency (1 / time-to-completion).
                          2. Once a battery of tests is performed, each configuration gets the geometric average of all its measurements, so you get one global number for each configuration.
                          I disagree that making the article easier to digest in that particular regard is a worthwhile goal. It would just make it easier for readers to skim the article, counting wins and losses. In my opinion, readers need to be educated that benchmark numbers have to be viewed in context, with their validity and significance understood, and that it is not possible to boil the measured performance down to a single number. Figuring out whether more or less is better in a figure is the first tiny step toward this.

                          Readers who are not willing to read and understand the article can simply skip to the conclusion, they just won't be able to delude themselves into thinking that they formed their opinion based on the data.



                          • #14
                            Originally posted by erendorn View Post
                            Well, [nb of tasks]/[time] is not only a frequency but also a speed. And that is meaningful even for single test units (how much time does it need to complete a given test => at what speed does it run through that given test).
                            Some publications just measure time to completion and then calculate an arbitrary score from it. E.g. you sometimes find a Linux kernel compile score given as 1000/(time in seconds): if the compile finishes in 1000 seconds one gets a score of 1, and if it finishes in 500 seconds, a score of 2 (see the one-liner below).
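
                            That scoring convention as code (the 1000 is just the arbitrary scale factor from the example above):

                            Code:
                            def compile_score(seconds: float) -> float:
                                # Arbitrary score: 1000 / time-to-completion in seconds.
                                return 1000.0 / seconds

                            print(compile_score(1000.0))  # 1.0
                            print(compile_score(500.0))   # 2.0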



                            • #15
                              Originally posted by erendorn View Post
                              Well, [nb of tasks]/[time] is not only a frequency but also a speed. And that is meaningful even for single test units (how much time does it need to complete a given test => at what speed does it run through that given test).
                              Exactly! I'm a physicist, so I said "frequency", but the concept is exactly as you stated it.

