HOPE: The Ease Of Python With The Speed Of C++

  • #71
    Originally posted by alcalde View Post
    I found the original mention of Nuitka misleading. While it does convert to C++ there's been little focus on speed yet, only compatibility. As such, it's not close to the performance of Cython or Numba.
    I disagree that it was "misleading" to mention Nuitka, and here's why:

    Yes, compatibility has been the focus over raw speed, but there has been plenty of attention to speed as well. Read the release notes on the project's main page: it's already much faster than standard CPython by a large margin. It's not intended as a direct Cython/etc. competitor, since the main point of Nuitka is to be able to use pure Python code, as-is. Cython is already a perfectly fine alternative for those willing to write non-standard code, but Nuitka is focused on pure Python code. Obviously this implies certain limitations, but it can already compile code into something much faster than the standard CPython implementation, so it's quite useful as it stands. Still not fast enough? Try something else. Don't need compiled code but want something faster than CPython? Use PyPy (and maybe someday Pyston, HOPE, etc.).

    The option to get a single executable with no dependence on Python library files isn't working at the moment either. As such, there's little real benefit to it at the moment, although there can certainly be significant benefit in the future. It's a bold and impressive project.
    This is incorrect -- I use the standalone mode all the time. It's been working fine for me since at least version 5.4 (at least on my several Arch boxes).
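    For anyone curious what that looks like in practice, here is a minimal sketch (the file name fib.py is just an example, and the exact flags can vary between Nuitka versions):
    Code:
    # fib.py -- plain, unmodified Python; no Cython-style type annotations needed
    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    if __name__ == "__main__":
        print(fib(30))

    # Compile it as-is (run from a shell):
    #   nuitka fib.py               # compiled binary that still uses the installed Python
    #   nuitka --standalone fib.py  # standalone mode: bundles what the program needs
    #                               # so the result does not depend on the system Python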



    • #72
      Originally posted by hoohoo View Post
      I do not disagree with you, Erendorn, that's the best way to go if it's within one's abilities.........

      - Many grad students and younger professors who were spoon fed Python or Matlab or Mathematica as undergrads. They were essentially incapable of writing their own C or C++ code to do simulations - dangling pointers, memory leaks, etc.
      - If you are such a person, then Python (Matlab, Mathematica) will let you write runnable code in a reasonable amount of time, whereas you might spend weeks getting some C or C++ code to work.
      - Some of these people were aware enough that Python, say, was slower than well written C/C++, and they sought to leverage Python bindings to CUDA, or used specialized C++ libraries with Python bindings......
      I think these points really illustrate why languages like Python are popular.
      A sloppily written C++ program that performs better but has memory leaks and crashes after a week is not as good as a slow Python program that simply takes longer to run. Common computations are mostly already available in the big common libraries, so speed has become good enough. Does this mean a lot of these folks will never learn C or C++? Probably. Can't really blame them, though.
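      To make the "common libraries" point concrete, here is a minimal sketch (assuming NumPy is installed) of the kind of computation where the heavy lifting already happens in compiled code, so the Python layer barely matters:
      Code:
      import numpy as np

      x = np.random.rand(1000000)

      # Pure-Python loop: every iteration goes through the interpreter.
      total = 0.0
      for v in x:
          total += v * v

      # Library call computing the same sum of squares, but the loop runs
      # in compiled code inside NumPy.
      total_np = np.dot(x, x)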



      • #73
        Originally posted by hoohoo View Post
        Perhaps I misunderstand what JIT compilers do. As I understand they will compile code fragments to binary and then cache the translated binary code - so they do not repeatedly compile the same stuff over and over. If this is the case then once all the frequently executed code paths in a program have been compiled we should see the overhead of a JIT compiler fall close to zero.

        Is this incorrect?
        No, this is correct. I have even heard of engines (a JS one? can't remember) that have several levels of optimization depending on the time spent running a fragment, and that recompile the most-used parts with higher levels of optimization. But even then, most (all?) JITs spend less time optimizing the code in the end than an offline compiler does.
        Additionally, most non-compiled languages have automatic memory management, one of the very features that makes such languages easier to write, but slower.
        Theoretically, a language with full code optimizations added to the engine, plus unmanaged code for the key parts, could perform just as well as compiled ones (C# can, I think; quite a nice language btw), but given that more money is usually poured into offline compilers than into JIT ones for that category of lengthy code optimization, in practice you don't see it.

        I do agree with what you said in your other (previous) post btw.
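        If you want to see that warm-up behaviour yourself, here is a minimal sketch (not tied to any particular engine): run the same function in batches and time each one. Under a JIT such as PyPy the later batches typically get faster once the hot loop has been compiled, while under CPython the times stay roughly flat.
        Code:
        import time

        def hot_loop(n):
            s = 0
            for i in range(n):
                s += i * i
            return s

        # Time the same work repeatedly; a JIT compiles the loop once it is "hot",
        # so later batches benefit from the cached, optimized machine code.
        for batch in range(5):
            start = time.time()
            hot_loop(1000000)
            print("batch", batch, "took", time.time() - start, "seconds")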



        • #74
          Originally posted by aidanjt View Post
          I'd rather the ease of C++, with the speed of C++. Python is a horrible language. If the standard C++ library isn't bloated enough for you, there's a bajillion libraries and frameworks to pick from to 'fix' that 'problem'. But you can't make python unsuck, its suckiness is built right into its very bones.
          Well, the project was created to speed up mathematical computations (and similar).
          Python is a great language for that, since it doesn't really require its user to be a programmer, and it uses bignums by default (an arbitrary-precision number representation limited only by available memory, unlike a fixed-width type such as C's uint64_t).
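          A quick illustration of that point, in plain CPython:
          Code:
          # Python integers are arbitrary precision: they grow as needed instead of
          # wrapping around like a fixed-width C type such as uint64_t.
          print(2 ** 64)    # 18446744073709551616 -- already one past uint64_t's maximum
          print(2 ** 200)   # still exact, limited only by available memory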

          Too many people here already spam "my X is better than your Y"; it just isn't constructive.



          • #75
            Originally posted by erendorn View Post
            No, this is correct. I have even heard of engines (a JS one? can't remember) that have several levels of optimization depending on the time spent running a fragment, and that recompile the most-used parts with higher levels of optimization. But even then, most (all?) JITs spend less time optimizing the code in the end than an offline compiler does.
            Additionally, most non-compiled languages have automatic memory management, one of the very features that makes such languages easier to write, but slower.
            Theoretically, a language with full code optimizations added to the engine, plus unmanaged code for the key parts, could perform just as well as compiled ones (C# can, I think; quite a nice language btw), but given that more money is usually poured into offline compilers than into JIT ones for that category of lengthy code optimization, in practice you don't see it.

            I do agree with what you said in your other (previous) post btw.
            Honestly, I think the media should pay more attention to PyPy's software transactional memory (STM) work than to these bazillion new Python implementations. If STM flies, you will not only get JIT'ed Python but also true parallelism for the Python code being run. This research project has been ongoing for years and may take years more to produce results. The publications also happen so seldom that I can understand how it could drive a sensationalist nuts.
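            To illustrate what the STM work is chasing, here is a minimal sketch of ordinary threaded Python (the function name crunch is just for illustration): under CPython the GIL means these threads never actually compute in parallel, and the promise of pypy-stm is that code written exactly like this could scale across cores without explicit locking.
            Code:
            import threading

            def crunch(data, results, index):
                # CPU-bound work; under CPython the GIL serializes this across threads.
                results[index] = sum(x * x for x in data)

            chunks = [range(1000000)] * 4
            results = [0] * len(chunks)
            threads = [threading.Thread(target=crunch, args=(chunk, results, i))
                       for i, chunk in enumerate(chunks)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            print(sum(results))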



            • #76
              Originally posted by erendorn View Post
              If it runs for months, don't use "faster python", use C++ or even Fortran, profile guided optimizations, and builds targeting the hardware you are using.
              Python is largely fast enough for 99% of tasks, but at the time scales you are mentioning even 1.5x perf impact (some jit vs compiled) is a damn lot.
              As I mentioned before, I work in a quite computationally heavy part of science. My workflow is to start by writing everything in Python. While I feel quite at home with C++, it can never replace Python for me in the initial, exploratory part of the work. With Python I can think about *what I want to get done* and then just do it. I don't have to worry about data structures (lists, dicts, sets, ...), and most of the stuff I need to parse my input is already available. Not to mention the interactive shell or the IPython notebook.
              C++ can never replace that for me, regardless of whether you throw Boost or C++11 at it.
              Scientific work is very different from "normal" programming work in that you might not always know exactly what you want in the end, so refactoring speed is king.

              Then, when things have stabilised sufficiently, I start replacing the slowest parts with a mix of Cython and/or C or C++.

              Taste in syntax is obviously subjective, but personally none of the C++ versions of the sample function posted in this thread have been close to as readable as the Python version.
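              In case it helps anyone following the same route, the "find the slowest parts" step can be as simple as this (a sketch; the module name sim and the function run_experiment are hypothetical placeholders for your own prototype):
              Code:
              import cProfile
              import pstats

              import sim   # hypothetical: the pure-Python prototype being profiled

              cProfile.run("sim.run_experiment()", "profile.out")
              stats = pstats.Stats("profile.out")
              # The top entries by cumulative time are the candidates worth
              # rewriting in Cython, C, or C++.
              stats.sort_stats("cumulative").print_stats(10)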



              • #77
                Originally posted by kigurai View Post
                Yeah, well, the thing is that anyone who knows Python would pretty much directly see what the previous Python function did.
                Now, I am by no means a C++ expert, but I know my way around it, and that line is horribly unreadable.
                And no, even though Boost is common, it is not standard C++.
                1. Everyone who knows C++ would pretty much directly see what the previous C++ function did.
                2. The only difference from the Python version is the boost::adaptors::transformed(std::function<std::size_t(char)>) part, and it could be shortened to one letter.
                3. Boost is fully standard C++; it does not require a non-standard compiler. You have some ideas without basis in reality. On the other hand, no part of Python is "standard", because a Python standard does not exist.



                • #78
                  Originally posted by TheBlackCat View Post
                  No, I put the last one last because it is much slower (using two loops instead of one loop and one hash table). Whether the first or second is faster depends on the string in question, but the third is always slower.
                  I already told you that your first and third options have equivalent complexity, so you clearly ordered them by simplicity.
                  Originally posted by TheBlackCat View Post
                  And I don't think
                  Code:
                  counts[ch]++
                  is that much uglier than
                  Code:
                  counts[ch] += 1
                  Well, your third variant had a 7-line loop body instead of one line. That's the ugly part.
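                  For readers skimming the thread, here is roughly the kind of difference being argued about (a sketch only; the exact variants posted earlier may differ): a single pass with a hash table versus rescanning the string once per distinct character.
                  Code:
                  def count_with_dict(s):
                      # One pass over the string plus one hash table: O(n).
                      counts = {}
                      for ch in s:
                          counts[ch] = counts.get(ch, 0) + 1
                      return counts

                  def count_with_rescan(s):
                      # One full scan of the string for every distinct character:
                      # O(n * k), where k is the number of distinct characters.
                      return {ch: s.count(ch) for ch in set(s)}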



                  • #79
                    Originally posted by hoohoo View Post
                    Perhaps I misunderstand what JIT compilers do. As I understand they will compile code fragments to binary and then cache the translated binary code - so they do not repeatedly compile the same stuff over and over. If this is the case then once all the frequently executed code paths in a program have been compiled we should see the overhead of a JIT compiler fall close to zero.

                    Is this incorrect?
                    You misunderstood what toy languages do: they prevent you from writing a fast program no matter what compiler you use. So for long-running calculations you start with a proper language and then compile it with profile-guided optimizations.
                    Last edited by pal666; 21 October 2014, 07:35 AM.



                    • #80
                      Originally posted by kigurai View Post
                      As I mentioned before, I work in a quite computationally heavy part of science. My workflow is to start by writing everything in Python. While I feel quite at home with C++, it can never replace Python for me in the initial, exploratory part of the work. With Python I can think about *what I want to get done* and then just do it. I don't have to worry about data structures (lists, dicts, sets, ...), and most of the stuff I need to parse my input is already available. Not to mention the interactive shell or the IPython notebook.
                      C++ can never replace that for me, regardless of whether you throw Boost or C++11 at it.
                      Scientific work is very different from "normal" programming work in that you might not always know exactly what you want in the end, so refactoring speed is king.

                      Then, when things have stabilised sufficiently, I start replacing the slowest parts with a mix of Cython and/or C or C++.

                      Taste in syntax is obviously subjective, but personally none of the C++ versions of the sample function posted in this thread have been close to as readable as the Python version.
                      Seems like a very reasonable workflow. Even though I personally like the language, I certainly wouldn't advise C++ as a prototyping or gluing language.

