Benchmarks Of AMD's Newest Gallium3D Driver


  • #51
    Originally posted by nikai View Post
I had a quick look at the usage of gettimeofday in Mesa, and apparently
it is never used for accessing the wall time, but instead for timeouts
like in pb_bufmgr_cache, or for benchmarks elsewhere.

    So my thought was that it might make sense to replace it, if available,
    with POSIX clock_gettime and a cheaper monotonic clock with an
    unspecified starting point.

    Also according to the manpage of gettimeofday,

    Googling a bit I found an interesting patch on the Xorg-devel mailing list,
    http://lists.x.org/pipermail/xorg-de...st/012483.html
    which points to a Linux patch,
    http://marc.info/?l=linux-kernel&m=125073483507611&w=2

Accordingly I replaced gettimeofday in Mesa's os_time_get() (src/gallium/auxiliary/os/os_time.c)
with clock_gettime and CLOCK_REALTIME_COARSE.

    It's working nicely, but the problem is that I don't see a significant difference.
    I don't understand why people still use gettimeofday(), everybody knows this is a slow syscall. Only Linux/x86_64 (and probably a few others) had a vsyscall for it. IIRC, clock_gettime() is better implemented and goes through a vsyscall even on Linux/i386 nowadays. Besides, the gettimeofday() situation is even worse on *BSD as I recall.



    • #52
      Originally posted by gbeauche View Post
      I don't understand why people still use gettimeofday(), everybody knows this is a slow syscall. Only Linux/x86_64 (and probably a few others) had a vsyscall for it. IIRC, clock_gettime() is better implemented and goes through a vsyscall even on Linux/i386 nowadays. Besides, the gettimeofday() situation is even worse on *BSD as I recall.
      Because documentation sucks, mainly. "Everybody knows"? Oh, please. People have always recommended gettimeofday for accurate timing (just check gamedev.net for instance). Seriously, this is the first time I've ever seen someone recommend against it.



      • #53
        Originally posted by airlied View Post
        It's about 100 developers working full time.

        Dave.
        How many trained monkeys at typewriters == one dev? I can probably get you several thousand of those willing to work double time.



        • #54
          Originally posted by Smorg View Post
          How many trained monkeys at typewriters == one dev? I can probably get you several thousand of those willing to work double time.
          100 developers and each one of those probably a few orders of magnitude better than you are. Keep dreaming.



          • #55
            Originally posted by gbeauche View Post
            I don't understand why people still use gettimeofday(), everybody knows this is a slow syscall. Only Linux/x86_64 (and probably a few others) had a vsyscall for it.
            Ah, that explains why I don't see a difference. Looks like on amd64 I was already using this monotonic clock.



            • #56
              @The clock issue:

              I just benched this (100 million calls to each), and plain old gettimeofday is the fastest for me?

              gettimeofday 0.635s
              clock_gettime with CLOCK_MONOTONIC 1.193s
              clock_gettime with CLOCK_MONOTONIC_COARSE 4.747s



              • #57
                Originally posted by Michael View Post
                PTS automatically sets vblank_mode to 0.
                This isn't enough. The DDX still syncs to vblank unless you disable that in the source code.
                Just enable vsync in Catalyst, it'll lead to fairer benchmarks. :P
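For context, the knob PTS flips is Mesa's vblank_mode environment/drirc setting; the DDX's own wait-for-vblank on buffer swaps is a separate setting. The snippet below is a sketch of a typical setup (the radeon SwapbuffersWait option exists in newer DDX releases; older ones, as noted above, need a source patch):

```shell
# Mesa swap-interval sync off (what PTS sets):
vblank_mode=0 glxgears

# radeon DDX: disable the wait-for-vblank on buffer swaps,
# in xorg.conf, Section "Device":
#   Option "SwapbuffersWait" "off"
```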



                • #58
                  Addendum: vblank and vsync are two different things. Both need to be disabled to get higher fps than refresh rate as far as I've seen.



                  • #59
                    Originally posted by Qaridarium
Really? I don't think so. The job for the 5 best devs should be to get OpenCL working and develop a ray-tracing, OpenCL-based graphics engine.


And yes, porting Wayland to OpenCL.
                    Why, so we could all play a ray-traced game at 1 fps? I'll stick with something that will actually be playable, thanks.

                    OpenCL support in the binary drivers still sucks enough that no one is committing to it yet; let's wait for someone to at least start using the API before the OSS devs move their attention away from OpenGL.



                    • #60
                      Originally posted by Qaridarium
Really? I don't think so. The job for the 5 best devs should be to get OpenCL working and develop a ray-tracing, OpenCL-based graphics engine.


And yes, porting Wayland to OpenCL.
                      Yeah, I think we should be porting Unity to OpenCL also.

                      Dave.

