Fedora To Stop Providing i686 Kernels, Might Also Drop 32-Bit Modular/Everything Repos
Originally posted by Mattia_98: I don't get this whole war against 32-bit. 32-bit-only machines are still fine... The whole fun of using Linux is that it runs everywhere. So why try to kill a part of all machines, just because some hipsters think others should not use it?
No, seriously. They don't think others should not use it; they just don't want to be bothered maintaining something they themselves do not use.
I don't get this whole war against 32-bit. 32-bit-only machines are still fine... The whole fun of using Linux is that it runs everywhere. So why try to kill a part of all machines, just because some hipsters think others should not use it?
Originally posted by aht0: Ehm... Android games? Why test against something with such small performance requirements?
Originally posted by aht0: Literally all it needs is CPU cycles for its scripting engine running on one of the cores; the GPU could be a potato. I am playing on a GTX 660 and, according to MSI Afterburner, the game is still not using the whole GPU resource pool.
Originally posted by aht0: I know what microstutter is. My point is, yeah, you could get it when running multi-GPU setups. AND you can have it when playing CPU-intensive games where you have 'funky things' going on with your CPU: increased latencies, thermal throttling, some other process trying to grab priority, etc. I am not too sure that such a "synthesized single thread" would translate into an all-that-smooth gaming experience in a CPU-intensive game.
Yes, 8 cores are required to speed up a single-threaded program by that much, but you then also need other cores for background processes. Please note: the 8 cores are for the single-threaded program only; your other programs need somewhere else to run. AMD's upcoming 16-core processor will be the first in the x86 space that can really do it. A lot of the demos were done on a 12-core ARM chip: 8 unified cores for the single-thread boost and 4 OS cores.
Originally posted by Weasel: Because it doesn't work in the real world. Don't forget you can even have self-modifying code (copy protection and anti-cheat use it for obfuscation), so the translation can't have too much "freedom" or creativity if it doesn't want to break.
Originally posted by Weasel: You mean one core? But multi-threads are also limited to the resources of the CPU.
Originally posted by Weasel: To make such statements shows a clear lack of logical understanding. How do you make a long dependency chain 4x faster? Not even OoO helps there.
Originally posted by Weasel: Many single-threaded apps are just plainly badly coded, where not even increasing the OoO window can improve performance that much, hence neither can splitting them into threads unless the code is logically rewritten (i.e. with different algorithms/data structures). No binary translation will ever do this, deal with it.
Emulation does not mean the program's exact structures or algorithms are perfectly respected. As long as the final result is the same, everything works.
The examples are not using just one method to gain these speeds; 4x is the total of many methods used in combination.
The reality is that this path has not been possible on x86 or ARM platforms for general users because the required CPU die configuration has not been common.
Doing what I am talking about on general consumer hardware has not been possible because our systems either did not have enough cores or the cores were not laid out on the silicon die in a way that allows it.
Please note I said at the start that this is if you are after performance. It works out to something like 65 watts per thread, so this method can be fast, but it is not power efficient.
Weasel, the big thing the emulation route has over letting the CPU do OoO and speculative execution is that it can save, between runs, information on which paths are taken and what refactoring can be done, the same way GPUs these days keep a disk cache of pre-built shaders. So instead of rewriting from source code, you rewrite from the binary.
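To make the shader-cache analogy concrete, here is a minimal C sketch of a persistent translation cache, assuming a made-up on-disk format: a hot guest code block is hashed, and the (expensive) optimised translation for it is written to disk so the next run can start from it instead of from the raw binary. The function names, cache file naming, and the stand-in "translate" step are all hypothetical, not taken from any real emulator.

```c
/* Minimal sketch of the idea above: a binary translator that persists
 * what it learned about a program between runs, the way GPU drivers keep
 * an on-disk cache of compiled shaders. All names here (block_key, the
 * xlate-*.bin files, the fake "translate" step) are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* FNV-1a hash of a guest code block: the cache key. */
static uint64_t block_key(const uint8_t *code, size_t len)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= code[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Look for a previously translated block on disk. */
static int load_cached(uint64_t key, char *out, size_t outsz)
{
    char path[64];
    snprintf(path, sizeof path, "xlate-%016llx.bin", (unsigned long long)key);
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    size_t n = fread(out, 1, outsz - 1, f);
    out[n] = '\0';
    fclose(f);
    return 1;
}

/* Save the expensively optimised translation so the next run starts
 * from it instead of from the raw guest binary. */
static void store_cached(uint64_t key, const char *translated)
{
    char path[64];
    snprintf(path, sizeof path, "xlate-%016llx.bin", (unsigned long long)key);
    FILE *f = fopen(path, "wb");
    if (!f)
        return;
    fwrite(translated, 1, strlen(translated), f);
    fclose(f);
}

int main(void)
{
    /* Stand-in for a hot block of 32-bit guest code. */
    const uint8_t guest_block[] = { 0x8b, 0x45, 0x08, 0x03, 0x45, 0x0c, 0xc3 };
    uint64_t key = block_key(guest_block, sizeof guest_block);

    char translated[256];
    if (load_cached(key, translated, sizeof translated)) {
        printf("warm start: reused cached translation: %s\n", translated);
    } else {
        /* First run: pretend to do the slow profiling + refactoring pass. */
        snprintf(translated, sizeof translated,
                 "optimised-host-code-for-block-%016llx",
                 (unsigned long long)key);
        store_cached(key, translated);
        printf("cold start: translated and cached block %016llx\n",
               (unsigned long long)key);
    }
    return 0;
}
```

On the first run this prints the cold-start message and writes a cache file; on the second run it reports a warm start and reuses the stored translation, which is the between-runs memory being described.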
Originally posted by oiaohm: https://www.researchgate.net/publica...lative_slicing
This is an older one from 2009. I cannot find the newer one.
Originally posted by oiaohm: Normal out-of-order is restricted to the resources of one CPU.
Originally posted by oiaohm: Do note even the 2009 one notes that your speed-up is capped. With the 2009 version, no matter how many resources you throw at it, you are not going to gain above a 3x gain in performance. The new version gives you 4x, then you are tapped out.
Many single-threaded apps are just plainly badly coded, where not even increasing the OoO window can improve performance that much, hence neither can splitting them into threads unless the code is logically rewritten (i.e. with different algorithms/data structures). No binary translation will ever do this, deal with it.
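As a toy illustration of the "long dependency chain" point, here is a small C example (the structures and sizes are made up for the example): summing a linked list is a chain where every load depends on the previous one, so no OoO window or binary translator can overlap the iterations, while summing the same values from a flat array is the "logically rewritten" version that hardware and compilers can actually speed up.

```c
/* Toy illustration: a loop whose every iteration depends on the previous
 * one, versus the algorithmically rewritten equivalent. */
#include <stdio.h>
#include <stdlib.h>

struct node {
    long value;
    struct node *next;
};

/* Pointer chasing: the address of each load comes from the previous
 * load, so the chain is strictly serial. */
static long sum_list(const struct node *n)
{
    long sum = 0;
    while (n) {
        sum += n->value;
        n = n->next;        /* the next load depends on this one */
    }
    return sum;
}

/* The "logically rewritten" version: independent loads from an array,
 * which hardware (and compilers) can overlap and vectorise freely. */
static long sum_array(const long *v, size_t count)
{
    long sum = 0;
    for (size_t i = 0; i < count; i++)
        sum += v[i];
    return sum;
}

int main(void)
{
    enum { N = 1000 };
    struct node *nodes = malloc(N * sizeof *nodes);
    long *values = malloc(N * sizeof *values);
    if (!nodes || !values)
        return 1;

    for (int i = 0; i < N; i++) {
        nodes[i].value = i;
        nodes[i].next = (i + 1 < N) ? &nodes[i + 1] : NULL;
        values[i] = i;
    }

    printf("list sum  = %ld\n", sum_list(nodes));
    printf("array sum = %ld\n", sum_array(values, N));

    free(nodes);
    free(values);
    return 0;
}
```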
Ehm... Android games? Why test against something with such small performance requirements?
Look up the Arma series: either the original Operation Flashpoint, ArmA: Armed Assault, or the later ArmA 2. A3 already has 64-bit support and pseudo-multithreading (certain sets of tasks are offloaded across cores). I guarantee ArmA 2 would crush a contemporary CPU when you get creative in its Mission Editor. It happens in ArmA 3 too, despite 64-bit, core offloading, etc.: set some hundred AIs to fight each other, OR go onto a large multiplayer server, and the game's performance drops to complete shit. I've seen 25 fps on my rig: Ryzen 5 [email protected] running 16GB 3333MHz DDR4 (an overclocked 3200 kit) + a 512GB NVMe SSD. Literally all it needs is CPU cycles for its scripting engine running on one of the cores; the GPU could be a potato. I am playing on a GTX 660 and, according to MSI Afterburner, the game is still not using the whole GPU resource pool.
I'd be grateful as all %¤(/%&* if there were a way to get around this single-thread limitation business, ditch Windows, AND still have a functional anticheat engine running inside the emulator too. Until then, color me sceptical.
I know what microstutter is. My point is, yeah, you could get it when running multi-GPU setups. AND you can have it when playing CPU-intensive games where you have 'funky things' going on with your CPU: increased latencies, thermal throttling, some other process trying to grab priority, etc. I am not too sure that such a "synthesized single thread" would translate into an all-that-smooth gaming experience in a CPU-intensive game.
Last edited by aht0; 20 July 2019, 05:27 AM.