Mesa Threaded OpenGL Dispatch Finally Landing, Big Perf Win For Some Games


  • #31
    Originally posted by atomsymbol
    ... today's GPUs are capable of executing only a small number of threads in parallel.
    “Small number” = 1.

    On a GPU the finest details displayed on screen are thanks to SIMD operations...
    Think what the “S” stands for in “SIMD”.



    • #32
      Originally posted by ldo17 View Post

      Still, a multi-threaded API is going to be bottlenecked when it gets to a single-threaded GPU.
      The GPU can be the bottleneck, but what does that have to do with the number of threads on either the CPU or the GPU?

      A multi-threaded API is meant to allow using multiple cores on the CPU, aside from providing architectural advantages on the CPU side.

      In games or applications where the GPU is the bottleneck, additional available CPU performance might also be used, to some extent, to optimize the GPU calls so that there is less work for the GPU. In at least some cases there are multiple ways to implement something: one that is more work for the CPU and less work for the GPU, or the other way around.
      Last edited by indepe; 07 February 2017, 06:14 PM.



      • #33
        Originally posted by indepe View Post

        A multi-threaded API is meant to allow using multiple cores on the CPU...
        Which is only important if the CPU has a lot of work to do. Otherwise all those threads are just going to be waiting for their turn at the GPU.



        • #34
          Originally posted by ldo17 View Post

          Which is only important if the CPU has a lot of work to do. Otherwise all those threads are just going to be waiting for their turn at the GPU.
          If that's your general situation, then it may still matter, insofar as you might be able to buy a less expensive CPU next time.



          • #35
            Originally posted by atomsymbol
            The implementation is quite simple
            It is an idea, not an implementation, and we have no idea whether it will work.



            • #36
              Originally posted by atomsymbol
              Just a note 1: If a thread is defined to mean an UTM (Universal Turing Machine)
              No, we define it as something doing calculations on separate data.



              • #37
                Originally posted by atomsymbol

                That is false.
                What is “false”, exactly? Particularly since you were the one who used the term “SIMD”. Do you retract that now?



                • #38
                  Originally posted by atomsymbol
                  You know nothing about the precision of mathematical definitions.
                  well, you know nothing about being right



                  • #39
                    Originally posted by ldo17 View Post
                    “Small number” = 1.
                    Generally speaking, a "thread" is assumed to mean "context of execution".

                    With regards to GPUs, each pixel's individual value can be seen as an independent execution context for the instance of a shader that runs over it, as well as over the millions of other pixels in the image (or data samples in the offscreen buffer), each possibly running in its own GPU thread.

                    Those hundreds or thousands of shader cores you see in today's GPUs are there to run one shader instance each, in parallel. So yes, GPUs are massively multithreaded.

                    OTOH (and probably what you were referring to), there's the problem of how the GPU exposes itself: the OS (and even less so userland) doesn't see a thousand processing cores; what it sees is a device on the bus, capable of receiving commands.
                    And although these commands are queued so the OS doesn't stall on each command sent, until not too long ago GPUs had a single command queue/buffer.
                    This has changed in recent GPUs.
                    Think what the “S” stands for in “SIMD”.
                    Single Instruction, Multiple Data means you are operating on multiple operands in a single instruction, i.e. a vector operation, as opposed to a scalar operation (Single Instruction, Single Data, or SISD).
                    But this is a matter of architectures and ISAs, and orthogonal to the subject of multithreading and GPUs in particular.
                    You can build an array of scalar units, or a single large vector unit, and process pixels in parallel with either one; in fact, some GPUs are hybrids of the two.



                    • #40
                      Originally posted by atomsymbol

                      In my mind a "thread" is defined as something capable of emulating any computable function. I don't know a better definition than this.
                      "Thread" usually is a software concept, a sequence of instructions that may be executed in parallel to other sequences of instructions.

                      For hardware, I think it corresponds to terms like "processing unit", something that can be switched between multiple threads, but at any given time executes a single thread / sequence of instructions, possibly in parallel to other processing units, where "thread" would correspond to something having its own "next-instruction" pointer.

                      EDIT: In practice, a thread also goes with a stack pointer, but the way I think about threads, the instruction pointer is the more essential part.
                      Last edited by indepe; 09 February 2017, 04:16 PM.

