Suldal Improvements; New OpenCL, VDPAU Benchmarks

  • Suldal Improvements; New OpenCL, VDPAU Benchmarks

    Phoronix: Suldal Improvements; New OpenCL, VDPAU Benchmarks

    The latest Phoronix Test Suite 4.0-Suldal improvements are now available, and there are also some new test profiles for those interested in measuring OpenCL/GPGPU performance under Linux along with NVIDIA's VDPAU video decoding speed...


  • #2
    Cool.

    For anyone who doesn't have an Intel CPU or an AMD/NVIDIA GPU, AMD's OpenCL runtime also runs on any SSE2-capable CPU (as far as I know). There's also a GPL'd OpenCL 1.0 implementation available from Seoul National University (SNU) and Samsung: http://opencl.snu.ac.kr/
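
    If you want to double-check which OpenCL runtime(s) your system actually exposes before trying the new test profiles, here's a minimal sketch in C that just enumerates platforms and devices. It assumes the OpenCL headers and an ICD loader are installed and is built with something like gcc check_cl.c -lOpenCL (the file name is just an example):

    /* Minimal OpenCL platform/device enumeration: shows which CL runtimes
       (AMD's CPU runtime, the SNU/Samsung implementation, a GPU driver, etc.)
       are actually visible to applications on this system. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;

        if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS ||
            num_platforms == 0) {
            fprintf(stderr, "No OpenCL platforms found.\n");
            return 1;
        }

        for (cl_uint p = 0; p < num_platforms; ++p) {
            char name[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
            printf("Platform %u: %s\n", p, name);

            cl_device_id devices[8];
            cl_uint num_devices = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                               devices, &num_devices) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < num_devices; ++d) {
                char dev_name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dev_name), dev_name, NULL);
                printf("  Device %u: %s\n", d, dev_name);
            }
        }
        return 0;
    }

    If AMD's CPU runtime is installed, it should show up here as a platform with your CPU listed as a device.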

    • #3
      Michael, can you speak to the possibility of reducing the download size of the benchmarks? One of my biggest pet peeves when trying almost any of your benchmarks is the unnecessarily enormous size of most of them.

      I can understand if a game is hard-coded to require ALL of its assets to pass scrutiny before it even agrees to run, but IMHO those kinds of benchmarks shouldn't even be used. Or if they are, then we need good alternatives that use a minimum of on-disk resources (after all, you're not playing through an entire game when you benchmark, and you aren't benchmarking how fast your HDD can load data; you're benchmarking how fast your GPU can render!)

      I have a fairly constrained internet connection but an extremely powerful system -- it's tough for me to run lots of benchmarks because the downloads kill me. Ideally I'd like to see you either recommend a set of CPU, OpenGL and OpenCL benchmarks that are very easy on the download size, or else start to modify some existing benchmarks to "trim the fat", by making the games/demos accept that not all the files are there, and start anyway with a reasonable subset of the data.

      I understand that big textures are... well, big, but maybe the procedural-generation demoscene can help us out a little bit. Surely you've heard of kkrieger, the amazing DirectX 9 demo from a few years ago that rendered a fully playable game about the length of two levels of Doom, with graphics quality rivaling (or IMHO, exceeding) that of Quake 3? The kicker: the executable was only 96 kilobytes! Surely this type of thing can be done on Linux, and there's a lot of research going into procedural generation that will make it less of a hand-optimization feat and more along the lines of a procedural generation SDK that can produce "good" results (50-75% space savings versus shipping the actual rasters in typical compressed formats), even if not as perfect as hand optimization. (A rough sketch of the idea follows at the end of this post.)

      Anyway, I can probably handle benchmarks that have to download 50 to 100 MB each. That's "reasonable" to me. So the Linux kernel compile one is fine. But once you start talking a gig and up for a benchmark, I'm leaning towards "not worth my time".

      Everyone, let me know if you think of anything that might help us make progress in this area. And just to shoot down any smart alecks in advance, please don't mention something silly like xz or 7-Zip compression... I'm pretty sure the binaries Michael ships with PTS are already compressed with bzip2 or similar, so throwing xz --best at them isn't going to save gigabytes of data; it might save about 100 MB on a 2 GB download compared to bzip2. It's something, but 1.9 GB is still a lot.
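
      To make the procedural-generation point concrete, here's a toy sketch in C (purely hypothetical, not part of PTS or any real test profile) that writes a 512x512 noise texture as a PPM image from nothing but a seed, so only a few kilobytes of code would need to be downloaded instead of the texture itself. Build with something like gcc gen_texture.c -lm (again, the file name is just an example):

      /* Hypothetical illustration only -- not part of PTS or any real test.
         Generates a 512x512 "cloudy" RGB texture as a plain PPM on stdout
         using a few octaves of value noise, so no image data needs to be
         shipped with the benchmark at all. */
      #include <stdio.h>
      #include <math.h>

      /* Cheap integer hash -> pseudo-random value in [0,1) for lattice point (x, y). */
      static double lattice(int x, int y, unsigned seed)
      {
          unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u
                     + seed * 2246822519u;
          h = (h ^ (h >> 13)) * 1274126177u;
          return (h & 0xffffffu) / (double)0x1000000;
      }

      /* Bilinearly interpolated value noise at (x, y). */
      static double value_noise(double x, double y, unsigned seed)
      {
          int xi = (int)floor(x), yi = (int)floor(y);
          double fx = x - xi, fy = y - yi;
          double a = lattice(xi, yi, seed),     b = lattice(xi + 1, yi, seed);
          double c = lattice(xi, yi + 1, seed), d = lattice(xi + 1, yi + 1, seed);
          double top = a + (b - a) * fx, bottom = c + (d - c) * fx;
          return top + (bottom - top) * fy;
      }

      int main(void)
      {
          const int w = 512, h = 512;
          printf("P3\n%d %d\n255\n", w, h);   /* plain-text PPM header */
          for (int y = 0; y < h; ++y) {
              for (int x = 0; x < w; ++x) {
                  /* Sum four octaves of noise for a cloudy/stone look. */
                  double v = 0.0, amp = 0.5, freq = 1.0 / 64.0;
                  for (int o = 0; o < 4; ++o) {
                      v += amp * value_noise(x * freq, y * freq, 1234u + (unsigned)o);
                      amp *= 0.5;
                      freq *= 2.0;
                  }
                  int g = (int)(v * 255.0);
                  printf("%d %d %d\n", g, g, (int)(g * 0.8));
              }
          }
          return 0;
      }

      kkrieger obviously goes far beyond this, but the principle is the same: spend a bit of CPU time at load time instead of shipping (and downloading) gigabytes of baked assets.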

      • #4
        I'm not seeing the point of the VDPAU test from the example on OpenBenchmarking.org.

        Is it to compare different versions of the drivers?

        If it were comparing VDPAU against XVideo I'd understand, or maybe comparing different cards, but the test example you've given just reports FPS for a few profiles.

        • #5
          Originally posted by allquixotic View Post
          Michael, can you speak to the possibility of reducing the download size of the benchmarks? [...]
          The only really large test profiles are for games... As far as trimming them down, there are probably some game assets that could be stripped out and the data repackaged, etc. However, there are no plans at all for that. There have been no commercial customers even mentioning download size as an issue, and I think you're the first one bringing it up within the community.

          The other issue with repackaging the games is that I would then need to host the files on the PTS servers, which are generally much slower for downloads than the various other mirrors/project servers out there. So if anyone is really concerned about trimming the download size, they're welcome to do the repackaging and whatever else and I'll happily use the updated binaries, but I don't think many other people view the current download sizes as a problem.
          Michael Larabel
          https://www.michaellarabel.com/
