Python 3.11 Performance Benchmarks Show Huge Improvement

Originally posted by RahulSundaram:
I don't know which other mainstream or popular program uses it.
As pointed out above, Python is also used a lot on SBCs and microcontrollers: Python on Linux Raspberry Pis, MicroPython or CircuitPython on other controllers. In fact, the simulated Star Trek computer (from the Original Series) that I built has two boards in it: an RPi 4 talking to a Grand Central M4 Express, with Python being the language used on both boards.
Here is the popular IEEE programming language chart for 2022, if anyone cares.
Last edited by rclark; 25 October 2022, 09:03 PM.
-
Originally posted by Danny3:
Nice performance improvements! But outside of qBittorrent, which uses Python for its search engine, I don't know which other mainstream or popular program uses it.
-
Originally posted by binarybanana:
What about memory usage? Does this version use more memory as a trade-off for faster algorithms? I remember that memory usage between (IIRC) 3.6 and 3.7 increased by about 30%. This can suck on lower-end machines like SBCs.
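For anyone who wants to check this on their own machine rather than rely on memory, here is a minimal sketch using the standard-library tracemalloc module to compare allocation between interpreter versions. The workload and sizes are arbitrary examples, not figures from the article; run the same script under each Python version and compare the numbers.

```python
# Sketch: measure Python heap allocations with the stdlib tracemalloc module.
# Run this same file under python3.10 and python3.11 to compare versions.
import tracemalloc

tracemalloc.start()
data = [str(i) for i in range(100_000)]  # arbitrary measurable allocation
current, peak = tracemalloc.get_traced_memory()  # bytes traced so far
tracemalloc.stop()

print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")
```

Note that tracemalloc only tracks Python-level allocations; interpreter baseline overhead (which is what changed between 3.6 and 3.7) is better compared with an external tool such as /usr/bin/time or smem.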
-
Originally posted by hamishmb:
I wonder how much closer this brings Python 3 to, for example, fairly unoptimized Rust code.
Not intended as a sarcastic comment - I know any compiled language will be way faster, just curious.
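One rough way to get a feel for the gap is to time a CPU-bound micro-benchmark with the stdlib timeit module under each interpreter, and port the same loop to the compiled language for comparison. This is just a sketch I am suggesting, not a methodology from the thread; naive recursive fib is chosen because it stresses function-call overhead, one of the things Python 3.11 optimized.

```python
# Sketch: a CPU-bound micro-benchmark you can run under different
# interpreters (python3.10, python3.11) or port to Rust for comparison.
import timeit

def fib(n):
    # deliberately naive recursion: dominated by call overhead
    return n if n < 2 else fib(n - 1) + fib(n - 2)

elapsed = timeit.timeit(lambda: fib(20), number=10)
print(f"fib(20) x10: {elapsed:.3f}s")
```

Micro-benchmarks like this exaggerate call overhead relative to real workloads, so treat the ratio as an upper bound on the per-operation gap, not a whole-program prediction.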
-
Originally posted by atomsymbol:
Just a note: dynamic programming languages (Lisp, Python, Java) can be much faster than statically compiled languages if the application's performance benefits from generating code at run time.
Lisp also refers to a family of languages, not a specific one, and not all Lisp dialects have a JIT compiler. Clojure is a famous Lisp dialect that runs on the JVM, so it does have JIT compilation.
-
Originally posted by atomsymbol:
I do not mean JIT in this case. I mean: code generated by the software developer.
Code:
$ python
>>> a = "1+2"
>>> eval(a)
3
JIT solves part of the issue by generating machine code optimized for the current usage profile, so only the first few runs will be slow. But the current Python implementation cannot do that.
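To flesh out the developer-driven code-generation point above: a minimal sketch of the pattern, where a formula that only becomes known at run time is compiled once with the built-in compile() and then evaluated repeatedly, skipping re-parsing on every call. The formula and function names here are my own illustrative choices, not from the thread.

```python
# Sketch: developer-driven run-time code generation with compile()/eval().
# The formula string stands in for something only known at run time
# (e.g. read from a config file or user input).
formula = "a * x + b"

# Parse and compile the source string into a code object exactly once.
code = compile(formula, "<generated>", "eval")

def apply_formula(x, a=2, b=1):
    # Evaluating a pre-compiled code object skips re-parsing the string.
    return eval(code, {"x": x, "a": a, "b": b})

print(apply_formula(10))  # 2*10 + 1 = 21
```

The speedup here comes only from avoiding re-parsing; the generated code is still interpreted bytecode, which is exactly why a JIT that emits machine code would help further.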