What Linux Benchmarks Would You Like To See Next?

  • #41
    Originally posted by Michael View Post
    The replay is one part, yes. But then having the game exit / dump the information is the other important part (and also being able to start the replay from the command-line when launching the game), so no user-interaction needed at all.

    PHP-CLI code experience isn't even needed to make test profiles, they're just a simple bash script and XML file. If anyone can get Spring to fit the above requirement of being able to launch it, run replay, and quit, then I can easily whip up the test profile.
    Ok, what info needs to be dumped exactly? Render time of each frame?
    The latest Spring release shows off the new "Frame Grapher" with lots of little details about Spring performance. I suspect it would not be hard to dump that info. http://springrts.com/phpbb/viewtopic.php?t=32624

    Just to give some pointers to anyone who feels the need:
    Spring can be run in portable and isolation mode, so it's one package (fixed game version and content) separate from any other Spring(s) installed on the system.
    Man page source for the command-line arguments: https://github.com/spring/spring/blo...s/spring.6.txt
    A start script is used to give all the game settings (which game, map and players); replays could be a special case: https://github.com/spring/spring/blo...riptFormat.txt
    Spring looks for a config file with (graphical) settings once it starts: http://springrts.com/wiki/Springsettings.cfg
    There are also --benchmark and --benchmarkstart command-line arguments; I don't know if they're useful - they might be related to this benchmark tool: http://springrts.com/phpbb/viewtopic.php?f=12&t=30397

    That covers starting Spring and loading a game; still to do is writing a script to drive the camera and dumping the yet-to-be-defined information.
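
    A very rough sketch of what the non-interactive launch could look like, assuming a portable install in the working directory and a pre-recorded demo; the demo path is made up and whether --benchmark can be combined with a demo argument this way is a guess based on the man file linked above:

    # Hypothetical: play back a recorded demo with no user interaction and capture the output
    cd ./spring-portable
    ./spring --benchmark ./demos/pts-run.sdfz > spring-benchmark.log 2>&1
    # A PTS test profile's bash wrapper would then parse spring-benchmark.log
    # (or the frame-grapher dump) for per-frame timings.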

    Comment


    • #42
      @Michael,

      I was having a recent discussion on the Gentoo Forums about add-in HBA/HCA PCIe boards for older motherboards, to add multi-port SATA-3 (6G) support. I was having no end of issues with a Marvell-controller-based Addonics PCIe x8, 8-port SATA/SAS HBA card and had to pull it from my desktop rig in the end. I was getting signalling errors from one of my SSDs on this card and was unable to get suspend (to RAM)/resume working - a big deal for a high-wattage Nehalem-based system! A benchmark of some HBA cards, to see whether they bottleneck the latest generation of SSDs, plus general card reliability (suspend support, signalling, transfer reliability, etc.), would be very welcome.
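
      A minimal sketch of how such a bottleneck check might look, assuming fio is available; /dev/sdX is just a placeholder for a drive attached to the card under test:

      # Sequential read straight from the device, bypassing the page cache, to see
      # whether the controller caps throughput below SATA-3 speeds.
      fio --name=hba-seqread --filename=/dev/sdX --readonly --direct=1 \
          --ioengine=libaio --rw=read --bs=1M --runtime=60 --time_based
      # Repeat on the motherboard's own ports for a baseline.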

      The benchmarks you currently run are largely irrelevant to me. For graphics, AMD is dead to me (following the debacle they pulled on Radeon HD 4xxx series card owners, and previously on my X1950 Pro card) - driver support is awful. AMD support for games run under Wine is very spotty in my experience (and I help maintain a few WineHQ AppDB game pages). Therefore graphical benchmarks are irrelevant, as I have no choice but to buy an Nvidia GPU or make do with Intel.

      A look at GCC compiler & linker optimisations (LTO, Graphite optimisations, etc.) would be very interesting. I do play around with these a bit (as one can with a source-based distro) - but haven't actually gotten around to crunching any figures... I guess you don't have time to do these (as they would probably need quite a bit of depth).
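
      As an illustration only (flag names shift between GCC releases, so treat these as placeholders rather than a recommendation), the sort of make.conf fragment I mean on a source-based distro:

      # Illustrative Gentoo-style make.conf fragment enabling LTO and some Graphite passes
      CFLAGS="-O2 -march=native -pipe -flto -fgraphite-identity -floop-nest-optimize"
      CXXFLAGS="${CFLAGS}"
      LDFLAGS="${LDFLAGS} -flto"
      # A benchmark run would compare builds with and without the -flto / Graphite flags.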

      Keep up the great site!

      Bob

      Comment


      • #43
        Originally posted by edgar_wibeau View Post
        Hi Michael,

        I know it depends on the test hardware you do or do not have at hand, but I'd like to see more benchmarks with integrated GFX regarding 3D (games) and OpenCL. Especially the latter seems very interesting, as you can possibly get decent compute power while only investing $100-200 for a middle-class CPU or APU. Also, many notebooks and booksize PCs don't sport dedicated graphics, and that way Linux users of such devices get little information about how their system could benefit from newer driver developments. For example, I always miss iGPUs when I look at those comparisons between Catalyst and the newest Radeon drivers.
        Originally posted by Tomin View Post
        Is there already a comparison of AMD APU graphics against low and mid-range discrete graphics cards (using the APU as the processor, of course)? I'd like to know if it would be sensible to replace my Core 2 Duo + Radeon HD 6670 with a modern APU (one could also argue that I should go the Intel Pentium / Core i3 + Radeon HD 6670 route, and they may be right). I doubt Dual Graphics is possible on Linux, but that would certainly be interesting. I'm currently using the radeon driver but I could switch to Catalyst if it gives me added value.
        Like these posters, I'd like comparisons between the graphics/OpenCL implementations in APUs and dGPUs, including both closed and open source drivers.
        Also, maybe it's just me, but you seem to have stopped benchmarking the not-so-new R600 GPUs? Finally, you could also benchmark UVD on these platforms.

        Comment


        • #44
          Originally posted by Phoronixria View Post
          Just tossing around ideas:
          • Graphics application benchmarks (e.g. GIMP, Krita, Blender), with a bit of a stress test as well (e.g. applying a blur at 300 dpi)
          I definitely like the idea of having Blender rendering tested in PTS. You can benchmark OpenCL, CUDA and other parallel languages that way. I'm going to the Blender conference on Saturday; I can ask the devs for input if you (Michael) want me to?
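
          For reference, a rough sketch of a fully non-interactive render run, assuming a suitable scene file (scene.blend and the output path are placeholders; GPU/OpenCL device selection flags differ between Blender versions):

          # Background-mode render of one frame, timed; no window or keyboard input needed
          time blender -b scene.blend -o /tmp/pts_render_ -F PNG -f 1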

          Comment


          • #45
            Originally posted by bobwya View Post
            @Michael,

            The benchmarks you currently run are largely irrelevant to me. For graphics, AMD is dead to me (following the debacle they pulled on Radeon HD 4xxx series card owners, and previously on my X1950 Pro card) - driver support is awful. AMD support for games run under Wine is very spotty in my experience (and I help maintain a few WineHQ AppDB game pages). Therefore graphical benchmarks are irrelevant, as I have no choice but to buy an Nvidia GPU or make do with Intel.
            Bob
            Care to elaborate on that poor support? I'm an AMD owner with a Radeon HD 4870, two Radeon HD 6850s and one Radeon HD 7950. Radeon HD 6000 series and prior have the best open source graphics driver performance -- far better than Catalyst ever was -- so I'm not sure why you would think that it is poor. Why should AMD support those graphics cards in Catalyst anyway when they run better on the open source graphics driver stack?

            Comment


            • #46
              Originally posted by Tim Blokdijk View Post
              I definitely like the idea of having Blender rendering tested in PTS. You can benchmark OpenCL, CUDA and other parallel languages that way. I'm going to the Blender conference on Saturday; I can ask the devs for input if you (Michael) want me to?
              A way to automatically run Blender tests would be great. Last time I talked with any Blender developers, there was a performance test of sorts (or at least a way of reporting the performance of rendering a .blend file), but it required keyboard input to work, etc.
              Michael Larabel
              https://www.michaellarabel.com/

              Comment


              • #47
                Originally posted by Michael View Post
                A way to automatically run Blender tests would be great. Last time I talked with any Blender developers, there was a performance test of sorts (or at least a way of reporting the performance of rendering a .blend file), but it required keyboard input to work, etc.
                Cycles, the "new" Blender ray-tracer, has some standalone support: http://wiki.blender.org/index.php/De...les/Standalone

                Comment


                • #48
                  The articles are great. I'd like to see more relevant reporting and better charts.

                  The work done here and on OpenBenchmarking by the users of PTS is interesting and valuable.

                  I feel that a lot of both the readability and the value of that information has become lost in an endless babble of self-promotion. We all know how much work goes into the articles, the benchmarking, the test suite, and the site, but saying it over and over and over and over and over and over and over and over and over and over ... does it get annoying when I repeat myself? Yes. Okay, so point made, right? Maybe just once per review page, and definitely no more than once per paragraph, k?

                  As for relevant reporting - sometimes articles make an effort to put the results in context. Other times, not so much. Without context it's just a bunch of numbers, even to the regular viewers. There's loads of context available at OpenBenchmarking. Or rather, there *was*, but now it's so locked down that it's pointless to follow the numerous links to the OpenBenchmarking site for related info. Maybe it's been a while since our beloved PTS one-man-band took a moment to view the site as a regular user, because he is obviously deluded as to how enjoyable that experience is; if he were aware how unenjoyable it has become, he might recognise that he's probably losing a lot of readers to the frustration of these basically dead links he's giving us. "Yeah, go check it out, right now, lots of information there! (just not for you!)"

                  Now my suggestion to improve the relevance is pretty simple: when presenting results, put up two results from OpenBenchmarking within the same category for the same class of hardware - one from around the 50th percentile and one from the 95th percentile - so people have a clue how the reviewed hardware performs compared to average and excellent hardware.
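
                  A toy sketch of what pulling those two reference points out of a pile of submitted scores could look like (results.txt is a placeholder, one score per line):

                  # Crude 50th/95th percentile lookup over sorted scores
                  sort -n results.txt | awk '{ v[NR]=$1 }
                      END { p50=int(0.50*NR); if (p50 < 1) p50=1;
                            p95=int(0.95*NR); if (p95 < 1) p95=1;
                            printf "p50=%s  p95=%s\n", v[p50], v[p95] }'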

                  My suggestions to improve the charts: First off, change the colour palette, or use SVG and let the users pick a palette. Ye gods, the current set of colours is difficult to tell apart when there are more than 4 things being compared. SVG allows different styles to be applied to the graphs - different dash patterns, different widths, different fills - and there are freely available libraries which allow zooming and so on to help differentiate large result sets (Raphael, HighCharts, etc.). Second, either make it much clearer that more = better, or normalise the results and flip the base so that more is always better. I.e. instead of reporting A=16254361, B=16624361, C=16716423, how about reporting A=100%, B=102.3%, C=102.8%, "normalised linearly to 100 = 16254361"? Nobody cares about eight digits of raw data - we want to know, relatively speaking, how much better or worse something is, and percent-of-norm is the best way to represent that.
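
                  A trivial sketch of that normalisation, using the made-up numbers above:

                  # Normalise raw results so the first value becomes 100%
                  printf '%s\n' 16254361 16624361 16716423 | \
                      awk 'NR==1 { base=$1 } { printf "%.1f%%\n", 100*$1/base }'
                  # -> 100.0%, 102.3%, 102.8%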

                  When deriving performance per watt, the max clock rate of a device rarely delivers it. CPUs use voltage to overcome the capacitance of their transistor gates, to reliably switch states within a given time period (a portion of a single clock interval), and higher clocks require higher voltages. Each state change requires draining or charging billions of gates - an amount of energy that scales with the capacitance times the voltage squared - and that happens on every clock cycle. After a certain point power usage starts to climb roughly with the square of the clock, and faster still while approaching "thermal runaway", which means lowering the clock by 10% could in some circumstances reduce power usage by as much as 50%. But we rarely if ever see benchmarks of performance per watt with throttled cores. I think the Kaveri series is the only one I've ever seen benchmarked throttled here, but trust me, the people who really care about performance per watt are either laptop users, server farm administrators, or perhaps a handful of BTC miners - and there are a lot more of the first group than of the other two. A wise person trying to monetize their audience might try catering to the larger one. ;-)
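
                  As a back-of-the-envelope illustration of where that "up to 50%" can come from (the 25% voltage reduction is an assumed figure, not a measurement):

                  # Dynamic power scales roughly as alpha * C * V^2 * f, and V has to rise with f near the top end.
                  # If a 10% lower clock also allows ~25% lower core voltage:
                  awk 'BEGIN { printf "relative dynamic power: %.2f\n", 0.9 * 0.75 * 0.75 }'
                  # -> relative dynamic power: 0.51, i.e. roughly half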

                  Comment


                  • #49
                    Benchmark Gimp and other easily scripted, popular apps

                    @Phoronixria:

                    Yes, benching GIMP is actually easy since it's scriptable. So is Firefox, for that matter. Both perform much better on 64-bit distros, and in some cases with hardware acceleration enabled. These numbers are meaningful to a lot of users!
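
                    A rough sketch of the kind of scripted GIMP run that could be timed (image.png and the blur radius are placeholders; Script-Fu batch mode needs no UI interaction):

                    # Load an image, apply a heavy Gaussian blur, and quit - all from the command line
                    time gimp -i -b '(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE "image.png" "image.png")))
                                            (drawable (car (gimp-image-get-active-drawable image))))
                                       (plug-in-gauss RUN-NONINTERACTIVE image drawable 300 300 0))' \
                                 -b '(gimp-quit 0)'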

                    Comment


                    • #50
                      Computational Chemistry

                      Originally posted by Prosthetic Head View Post
                      I'd like to see benchmarks with the CP2K, QuantumEspresso or Abinit open source computational chemistry packages.
                      A bit of a niche interest I know, but I run a lot of DFT calculations so it's what matters to me.

                      More generally, some measure of power efficiency would be useful. Idle power, and performance per watt under different load types.

                      Anyway, keep up the good work!
                      Seconded! Some computational chemistry would be really good.
                      Another niche area that would be interesting is revisiting benchmarks under VMware/Xen/other hypervisors vs. native, especially with the VM on a ramdisk.
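
                      For the ramdisk case, a rough sketch with KVM/QEMU (the hypervisor choice, sizes and image name are all placeholders):

                      # Put the guest image on tmpfs so disk I/O inside the VM is RAM-backed
                      mount -t tmpfs -o size=20G tmpfs /mnt/ramdisk
                      cp benchmark-guest.qcow2 /mnt/ramdisk/
                      qemu-system-x86_64 -enable-kvm -m 8192 -smp 4 \
                          -drive file=/mnt/ramdisk/benchmark-guest.qcow2,if=virtio
                      # The same test suite would then be run inside the guest and on bare metal.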

                      Comment
