
LLVM's Clang Compiler Is Now C++11 Feature Complete


  • #21
    Originally posted by Ericg View Post
    It's only hard right now because we lack a programming language that makes it easy for us. I have no doubt that as time goes on, a programming language will come along that makes multi-threading desktop apps very common and easy.
    There is Go ( http://en.wikipedia.org/wiki/Go_(programming_language) ), Google's systems language. It encourages you to use thousands of parallel routines, if that is meaningful for your application.



    • #22
      Originally posted by pingufunkybeat View Post
      As long as you have hundreds of I-frames in your video, of course it does.
      LOL, right... And the user will simply have to wait until the video is decoded before watching it, assuming 1080p (6 MB per decoded frame) or even 2160p/4320p, maybe at 60 fps or more?



      • #23
        Originally posted by erendorn View Post
        On the other hand, video codecs divide the pixel space within a given frame for that very reason (parallelism).
        Video decoding is mostly done on dedicated hardware, but it could be done via GPGPU, hence it is massively parallelisable.

        Actually, anything that could benefit from GPGPU will use hundreds of threads (mostly, scientific calculations and image/video processing).
        I am surprised this BS is still being spread around. Compression is inherently serial. It cannot be done efficiently on a GPU; that is the reason for dedicated hardware! All you can do is run image transforms and motion estimation in parallel, but that doesn't buy you much, as the heavy part is the decompression.



        • #24
          Originally posted by log0 View Post
          LOL, right... And the user will simply have to wait until the video is decoded before watching it, assuming 1080p (6 MB per decoded frame) or even 2160p/4320p, maybe at 60 fps or more?
          No way. Unless you have TBs of RAM to store the decompressed video in a cache, there is no point in decoding the whole video in advance.

          P.S.: This was meant as a response to pingufunkybeat.



          • #25
            Guys, the idea about parallelizing video mentioned "render", not encode/decode.

            Maybe he meant generating the original video source data, e.g. in Blender, where the frames first need to be created and only then encoded with a codec. In the rendering stage, the frames can be independent.



            • #26
              Originally posted by log0 View Post
              LOL, right... And the user will simply have to wait until the video is decoded before watching it, assuming 1080p (6 MB per decoded frame) or even 2160p/4320p, maybe at 60 fps or more?
              I thought we were talking about ENCODING? Or even rendering, where each frame (each pixel, even) is calculated separately.

              In the case of decoding, you're right, but it's perfectly possible to parallelise at the block or pixel level. It won't be a linear speedup, but you can gain a lot if you parallelise properly.
              Last edited by pingufunkybeat; 04-20-2013, 08:49 AM.



              • #27
                Originally posted by aceman View Post
                Guys, the idea about parallelizing video mentioned "render", not encode/decode.

                Maybe he meant generating the original video source data, e.g. in Blender, where the frames first need to be created and only then encoded with a codec. In the rendering stage, the frames can be independent.
                ^ (10 characters)



                • #28
                  Originally posted by wargames View Post
                  Sorry, but what the hell... how many programs use hundreds of threads ?
                  I'm sure database-backed apps can make good use of many threads. Parallelized code also uses threads to handle the various concurrent operations going on.



                  • #29
                    Originally posted by Drago View Post
                    There is Go ( http://en.wikipedia.org/wiki/Go_(programming_language) ), Google's systems language. It encourages you to use thousands of parallel routines, if that is meaningful for your application.
                    By default it's all single-threaded. You have to manually set GOMAXPROCS to make it use more than one thread.

                    It may encourage goroutines, but they are not threads.



                    • #30
                      Originally posted by AJenbo View Post
                      It looks like they want to stay away from OpenMP and instead find a solution that works for hundreds of threads, they mention that OpenMP is only good for dozens of threads.
                      Link?

                      OpenMP is meant to start one thread per CPU and then keep them in a pool. I'd need more info on what they say doesn't scale, as I see nothing obvious that wouldn't work on a 100-core CPU.

