LLVM's Clang Compiler Is Now C++11 Feature Complete

  • #21
    Originally posted by elanthis View Post
    Only if you think proprietary compilers start and end with Microsoft and Intel. There are others, though much less frequently used.
    Ok, so according to http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport, we have [1]:

    GCC -- 100% (46/46 features implemented)
    Clang -- 96% (44/46 features implemented)
    Intel C++ -- 74% (34/46 features implemented)
    MSVC -- 65% (30/46 features implemented)
    IBM XLC++ -- 50% (23/46 features implemented)
    EDG eccp -- 39% (18/46 features implemented)
    Embarcadero C++ Builder -- 35% (16/46 features implemented)
    Sun/Oracle C++ -- 22% (10/46 features implemented)
    HP aCC -- 20% (9/46 features implemented)
    Digital Mars C++ -- 17% (8/46 features implemented)
    [1] This does not count the features in Clang SVN/3.3, which is not yet released; with those counted, Clang will be at 100%. It also excludes Concepts, which appears on that list but is not part of C++11.

    So according to this, the Intel and Microsoft compilers are ahead of the other proprietary compilers in terms of C++11 support, and the open source compilers are (or soon will be) feature complete.

    Comment


    • #22
      Originally posted by Ericg View Post
      It's only hard right now because we lack the programming language to make it easy for us. I have no doubt that as time goes on a programming language will come along that will make multi-threading desktop apps very common and easy.
      There is Go ( http://en.wikipedia.org/wiki/Go_(programming_language) ), Google's systems language. It encourages you to use thousands of concurrent goroutines, if that is meaningful for your application.
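
      For what it's worth, a minimal sketch of that model is below (the workload is a made-up placeholder): goroutines are cheap because the runtime multiplexes them onto a small pool of OS threads, so spawning thousands is routine.

      ```go
      // Spawn thousands of goroutines and collect their results over a channel.
      package main

      import (
          "fmt"
          "sync"
      )

      func main() {
          const n = 10000 // thousands of goroutines are fine; each starts with a tiny stack
          results := make(chan int, n)

          var wg sync.WaitGroup
          for i := 0; i < n; i++ {
              wg.Add(1)
              go func(id int) { // the runtime schedules these onto OS threads
                  defer wg.Done()
                  results <- id * id // stand-in for real per-task work
              }(i)
          }

          wg.Wait()
          close(results)

          sum := 0
          for r := range results {
              sum += r
          }
          fmt.Println("combined result from", n, "goroutines:", sum)
      }
      ```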

      Comment


      • #23
        Originally posted by pingufunkybeat View Post
        As long as you have hundreds of I-frames in your video, of course it does.
        LOL, right... And the user will simply have to wait until the video is decoded before watching it, assuming 1080p (6 MB per decoded frame), or even 2160p/4320p, maybe at 60fps or more?

        Comment


        • #24
          Originally posted by erendorn View Post
          On the other hand, video codecs divide the pixel space within a given frame for that very reason (parallelism).
          Video decode is mostly done on dedicated hardware, but it could also be done with GPGPU, hence it is massively parallelisable.

          Actually, anything that could benefit from GPGPU will use hundreds of threads (mostly scientific calculations and image/video processing).
          I am surprised this BS is still being spread around. Compression is inherently serial. It cannot be done efficiently on a GPU; that is the whole reason for dedicated hardware! All you can do is run the image transforms and motion estimation in parallel, but that doesn't buy you much, since the heavy part is the decompression.
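
          To make the serial-dependency point concrete, here is a toy sketch in Go (decodeSymbol and its state update are invented for illustration, not a real codec): each output symbol depends on decoder state written by the previous one, so the loop cannot be split across threads.

          ```go
          // Entropy decoding as a dependency chain: iteration i needs state from i-1.
          package main

          import "fmt"

          // state stands in for a range coder's internal state; every symbol
          // both reads and rewrites it.
          type state struct{ low, rng uint32 }

          // decodeSymbol is a hypothetical single range-coder step.
          func decodeSymbol(s *state, b byte) int {
              s.low = s.low*31 + uint32(b)
              s.rng ^= s.low
              return int(s.rng & 1)
          }

          func main() {
              input := []byte{1, 0, 1, 1, 0, 0, 1}
              s := &state{rng: 0xFFFF}
              for _, b := range input { // inherently sequential: no safe way to run iterations in parallel
                  fmt.Print(decodeSymbol(s, b))
              }
              fmt.Println()
          }
          ```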

          Comment


          • #25
            Originally posted by log0 View Post
            LOL, right... And the user will simply have to wait until the video is decoded before watching it, assuming 1080p (6 MB per decoded frame), or even 2160p/4320p, maybe at 60fps or more?
            No way. Unless you have TBs of RAM to store the decompressed video in a cache, there is no point in decoding the whole video in advance.

            P.S.: This was meant as a response to pingufunkybeat.

            Comment


            • #26
              Guys, the idea about parallelizing video mentioned "render", not encode/decode.

              Maybe he meant generating the original video source data, e.g. in Blender, where it first needs to be rendered and only then encoded with a codec. In the rendering stage, the frames can be independent.
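
              As a hedged sketch of that idea (renderFrame is a hypothetical stand-in for the expensive per-frame work, not a real Blender API): when frames are independent, each one can be produced by its own goroutine, and the serial encode pass happens afterwards.

              ```go
              // Render independent frames in parallel, then hand them to a (serial) encoder.
              package main

              import (
                  "fmt"
                  "sync"
              )

              // renderFrame is a placeholder: a real renderer would rasterise the scene for frame n.
              func renderFrame(n int) []byte {
                  return []byte{byte(n)}
              }

              func main() {
                  const frames = 240
                  out := make([][]byte, frames) // each goroutine writes its own slot, so no locking is needed

                  var wg sync.WaitGroup
                  for i := 0; i < frames; i++ {
                      wg.Add(1)
                      go func(n int) {
                          defer wg.Done()
                          out[n] = renderFrame(n)
                      }(i)
                  }
                  wg.Wait()

                  fmt.Println("rendered", len(out), "independent frames")
                  // encoding with a codec would follow here as a separate, mostly serial pass
              }
              ```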

              Comment


              • #27
                Originally posted by log0 View Post
                LOL, right... And the user will simply have to wait until the video is decoded before watching it, assuming 1080p (6 MB per decoded frame), or even 2160p/4320p, maybe at 60fps or more?
                I thought we were talking about ENCODING? Or even rendering, where each frame (each pixel, even) is calculated separately.

                In the case of decoding, you're right, but it's perfectly possible to parallelise down to the block or even pixel level. It won't be a linear speedup, but you can gain a lot if you parallelise properly.
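
                A rough sketch of block-level parallelism on a single frame (processBand is hypothetical, and the serial entropy-decoding stage is assumed to have run already): the frame is split into horizontal bands with no cross-band dependency, and each band is handled by its own goroutine.

                ```go
                // Process one decoded frame in independent horizontal bands.
                package main

                import (
                    "fmt"
                    "sync"
                )

                // processBand stands in for a per-block stage such as an inverse transform.
                func processBand(pixels []byte) {
                    for i := range pixels {
                        pixels[i]++ // placeholder work on this band's pixels
                    }
                }

                func main() {
                    const width, height, bands = 1920, 1080, 8
                    frame := make([]byte, width*height)
                    rowsPerBand := height / bands // 135 rows per band

                    var wg sync.WaitGroup
                    for b := 0; b < bands; b++ {
                        wg.Add(1)
                        go func(b int) {
                            defer wg.Done()
                            start := b * rowsPerBand * width
                            processBand(frame[start : start+rowsPerBand*width])
                        }(b)
                    }
                    wg.Wait()
                    fmt.Println("processed", bands, "bands in parallel")
                }
                ```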
                Last edited by pingufunkybeat; 20 April 2013, 08:49 AM.

                Comment


                • #28
                  Originally posted by aceman View Post
                  Guys, the idea about parallelizing video mentioned "render", not encode/decode.

                  Maybe he meant generating the original video source data, e.g. in Blender, where it first needs to be rendered and only then encoded with a codec. In the rendering stage, the frames can be independent.
                  ^ (10 characters)
                  All opinions are my own not those of my employer if you know who they are.

                  Comment


                  • #29
                    Originally posted by wargames View Post
                    Sorry, but what the hell... how many programs use hundreds of threads?
                    I'm sure database-backed apps can make good use of many threads. Parallelized code also uses threads to handle the various concurrent operations going on.
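
                    As a hedged sketch of that (handleQuery is an invented placeholder, not a real database call): a server-style app can give each incoming query its own goroutine/thread of control.

                    ```go
                    // Handle several queries concurrently, one goroutine per query.
                    package main

                    import (
                        "fmt"
                        "sync"
                    )

                    // handleQuery is a stand-in for real database work.
                    func handleQuery(q string) string {
                        return "result of " + q
                    }

                    func main() {
                        queries := []string{"SELECT 1", "SELECT 2", "SELECT 3"}

                        var wg sync.WaitGroup
                        for _, q := range queries {
                            wg.Add(1)
                            go func(q string) { // one concurrent handler per query
                                defer wg.Done()
                                fmt.Println(handleQuery(q))
                            }(q)
                        }
                        wg.Wait()
                    }
                    ```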

                    Comment


                    • #30
                      Originally posted by Drago View Post
                      There is Go ( http://en.wikipedia.org/wiki/Go_(programming_language) ), Google's systems language. It encourages you to use thousands of concurrent goroutines, if that is meaningful for your application.
                      By default it's all single-threaded. You have to manually set GOMAXPROCS to make it use more than one thread.

                      It may encourage goroutines, but they are not threads.
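
                      To illustrate (this reflects Go releases of that era, before Go 1.5 changed the default): GOMAXPROCS defaulted to 1, so goroutines were concurrent but all ran on one OS thread unless you raised it.

                      ```go
                      // Raise GOMAXPROCS so goroutines can run in parallel across CPUs.
                      package main

                      import (
                          "fmt"
                          "runtime"
                      )

                      func main() {
                          fmt.Println("CPUs available:", runtime.NumCPU())
                          // GOMAXPROCS sets the new value and returns the previous one.
                          fmt.Println("previous GOMAXPROCS:", runtime.GOMAXPROCS(runtime.NumCPU()))
                          // From here on, goroutines may execute in parallel on all CPUs.
                      }
                      ```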

                      Comment
