Fedora To Stop Providing i686 Kernels, Might Also Drop 32-Bit Modular/Everything Repos


  • aht0
    replied
    Originally posted by rbmorse View Post
    Because 32-bit was invented by old white men and must die...die...die!
    "Old white men" literally build basis to contemporary civilization. Let's kill that off too?



  • smitty3268
    replied
    Originally posted by Mattia_98 View Post
    I don't get this whole war against 32-bit. 32-bit-only machines are still fine... The whole fun of using Linux is that it runs everywhere. So why try to kill support for a whole class of machines, just because some hipsters think others should not use them?
    That's an odd way of looking at it. I'm quite certain Red Hat doesn't care if you want to use 32-bit software on your machine, so they aren't trying to stop you. But supporting 32-bit software isn't free for them, either. They have to decide whether the cost to them is worth the benefit, and clearly it's reaching the point where they think the answer is no. They aren't out there rooting for 32-bit software to die and trying to convince you of anything, though; they just want to stop spending money on it.



  • rbmorse
    replied
    Originally posted by Mattia_98 View Post
    I don't get this whole war against 32-bit. 32-bit-only machines are still fine... The whole fun of using Linux is that it runs everywhere. So why try to kill support for a whole class of machines, just because some hipsters think others should not use them?
    Because 32-bit was invented by old white men and must die...die...die!

    No, seriously. They don't think others should not use it; they just don't want to be bothered maintaining something they themselves do not use.



  • nanonyme
    replied
    Originally posted by Mattia_98 View Post
    I don't get this whole war against 32-bit. 32-bit-only machines are still fine... The whole fun of using Linux is that it runs everywhere. So why try to kill support for a whole class of machines, just because some hipsters think others should not use them?
    Because people barely care about it enough to even make sure it keeps building (you can imagine it's quite inconvenient if you routinely get package releases that don't build on a distro that wants to build that architecture), let alone test it early enough that you can assume your machine still works after the next update. If you assume everyone stops software development now, 32-bit is fine. But that's a pretty weird assumption.



  • Mattia_98
    replied
    I don't get this whole war against 32-bit. 32-bit-only machines are still fine... The whole fun of using Linux is that it runs everywhere. So why try to kill support for a whole class of machines, just because some hipsters think others should not use them?



  • aht0
    replied
    Originally posted by oiaohm View Post
    Emulation does not mean the program's exact structures or algorithms are perfectly respected. As long as the final result is the same, everything works.
    Anticheat engines would probably throw a fit in such a case.



  • Weasel
    replied
    Originally posted by oiaohm View Post
    Emulation does not mean the program's exact structures or algorithms are perfectly respected. As long as the final result is the same, everything works.
    Sorry buddy, this part says it all. Not even compilers -- which have access to the source code -- can change the algorithm, and you expect binary translation to do it?



  • oiaohm
    replied
    Originally posted by aht0 View Post
    Ehm... Android games? Why test against something with such small performance requirements?
    Partly funding, partly hardware requirements. Eight cores on one piece of silicon with a proper shared L3 have not been that common. AMD's chiplets in the current Zen 2 are about the first x86 design to do this properly. On ARM you have been able to get chips to test this with for the last six years.

    Originally posted by aht0 View Post
    Literally all it needs is CPU cycles for its scripting engine running on one of the cores; the GPU could be a potato. I am playing on a GTX 660 and, according to MSI Afterburner, the game is still not using the whole GPU resource pool.
    This kind of issue is why this has been researched. If you could get the scripting engine to run on a vCPU that happened to have 8 real cores hiding behind it, programs jammed up like that could be sped up.

    Originally posted by aht0 View Post
    I know what microstutter is. My point is, yeah, you can get it when running multi-GPU setups, AND you can have it when playing CPU-intensive games where you have 'funky things' going on with your CPU: increased latencies, thermal throttling, some other process trying to grab priority, etc. I am not too sure that such a "synthesized single thread" would translate into all that smooth a gaming experience in a CPU-intensive game.
    Microstutter in multi-GPU setups is largely caused by the physical distance between the cores. Say you have 2 cards that, joined, give the same number of cores as 1 card: the single card does not microstutter, the 2 cards do. There is no difference in core count here, but core placement has a huge effect on micro-stutter.

    Yes, 8 cores are required to speed up a single-threaded program a lot, but you then also need other cores for background processes. Please note that the 8 cores are for the single-threaded program only; your other programs need other places to run. AMD's upcoming 16-core processor will be the first in the x86 space that can really do it. A lot of the demos were done on a 12-core ARM part: 8 unified cores for the single-thread boost and 4 OS cores.

    Originally posted by Weasel View Post
    Because it doesn't work in the real world. Don't forget you can even have self-modifying code (copy protection and anti-cheat use it for obfuscation), so the translation can't have too much "freedom" or creativity if it doesn't want to break.
    Emulators like QEMU already contain code to deal with that; it's just a matter of speeding it up.
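
    To make "already contain code to deal with that" concrete, here is a minimal sketch of the usual approach in dynamic binary translators: remember which guest pages each translated block was built from, write-protect those pages, and drop the translations when the guest writes to them so they get rebuilt from the modified code. The names here (translation_entry, invalidate_page) are invented for illustration; this is not QEMU's actual internals.

    #include <stdint.h>
    #include <stdbool.h>

    #define MAX_BLOCKS    1024
    #define PAGE_SHIFT    12
    #define PAGE_OF(addr) ((addr) >> PAGE_SHIFT)

    /* One translated block: a range of guest code and the host code generated from it. */
    struct translation_entry {
        uint64_t guest_pc;   /* first guest byte the block was translated from */
        uint64_t guest_end;  /* one past the last guest byte that was read     */
        void    *host_code;  /* generated host code for this block             */
        bool     valid;
    };

    static struct translation_entry cache[MAX_BLOCKS];

    /* Called when the guest stores to a page that is write-protected because it
     * contains translated code. Every block built from that page may now be
     * stale, so it is dropped and will be re-translated on next execution. */
    void invalidate_page(uint64_t guest_addr)
    {
        for (int i = 0; i < MAX_BLOCKS; i++) {
            if (cache[i].valid &&
                PAGE_OF(cache[i].guest_pc) <= PAGE_OF(guest_addr) &&
                PAGE_OF(guest_addr) <= PAGE_OF(cache[i].guest_end - 1)) {
                cache[i].valid = false;  /* stale translation: rebuild later */
            }
        }
    }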

    Originally posted by Weasel View Post
    You mean one core? But multiple threads are likewise limited to the resources of the CPU.
    I should have written core. The problem is the way the ARM white paper is worded: they use CPU for a single core and die for the complete part.

    Originally posted by Weasel View Post
    To make such statements shows a clear lack of logical understanding. How do you make a long dependency chain 4X faster? Not even OoO helps there.
    Because it's not out-of-order alone. A pure out-of-order speed-up would need something like 4 cores' worth of resources behind 1 thread to go 4 times faster. This is speculative execution plus JIT optimisations on top.

    Originally posted by Weasel View Post
    Many single-threaded apps are just coded plainly badly where not even increasing the OoO can improve performance that much, hence not even splitting it into threads unless the code is logically rewritten (i.e. different algorithms/data structures). No binary translation will ever do this, deal with it.
    You are missing why it needs double the processing power just to match the same speed: the overhead. You have speculative execution and OoO, but you also have JIT runtime optimisation, and the profiling information from the JIT can be saved between runs.

    Emulation does not mean the program's exact structures or algorithms are perfectly respected. As long as the final result is the same, everything works.

    The examples are not using just one method to gain these speeds; the 4x is the total of many methods used in combination.

    The reality is that this path has not been possible on the x86 or ARM platforms for general users because the required CPU die configuration has not been common: our systems either did not have enough cores, or the cores were not laid out on the silicon die in a way that allows it.

    Please note I said at the start that this is if you are after performance. It works out to something like 65 watts per thread, so this method can be fast, but it is not power efficient.

    Weasel, the big thing the emulated route has over letting the CPU do OoO and speculative execution is that it can save, between runs, information about which paths are taken and what refactoring can be done, the same way GPUs these days keep a disk cache of pre-built shaders. So instead of rewriting from source code, you rewrite from the binary.
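
    A toy illustration of the "save it between runs" idea, in the spirit of a shader disk cache: hash the guest binary, and store per-block execution counts in a file named by that hash, so the next run can start from last run's hot-path knowledge instead of re-profiling from scratch. Everything here (the file naming, profile_entry, the FNV-1a hash choice) is made up for the sketch and is not taken from any real translator.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Per-block profile record: how hot a guest block was last run, so the
     * translator can optimise it aggressively up front on the next run. */
    struct profile_entry {
        uint64_t guest_pc;
        uint64_t exec_count;
    };

    /* FNV-1a over the guest binary; the hash names the cache file, so a changed
     * binary simply misses the cache instead of loading stale data. */
    static uint64_t hash_binary(const uint8_t *data, size_t len)
    {
        uint64_t h = 1469598103934665603ull;
        for (size_t i = 0; i < len; i++) {
            h ^= data[i];
            h *= 1099511628211ull;
        }
        return h;
    }

    static void save_profile(uint64_t binary_hash,
                             const struct profile_entry *prof, size_t n)
    {
        char path[64];
        snprintf(path, sizeof path, "profile-%016llx.bin",
                 (unsigned long long)binary_hash);
        FILE *f = fopen(path, "wb");
        if (!f)
            return;
        fwrite(prof, sizeof *prof, n, f);
        fclose(f);
    }

    static size_t load_profile(uint64_t binary_hash,
                               struct profile_entry *prof, size_t max)
    {
        char path[64];
        snprintf(path, sizeof path, "profile-%016llx.bin",
                 (unsigned long long)binary_hash);
        FILE *f = fopen(path, "rb");
        if (!f)
            return 0;                       /* first run: no hints yet */
        size_t n = fread(prof, sizeof *prof, max, f);
        fclose(f);
        return n;
    }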



  • Weasel
    replied
    Originally posted by oiaohm View Post
    https://www.researchgate.net/publica...lative_slicing

    This is an older one, from 2009. I cannot find the newer one.
    Because it doesn't work in the real world. Don't forget you can even have self-modifying code (copy protection and anti-cheat use it for obfuscation), so the translation can't have too much "freedom" or creativity if it doesn't want to break.

    Originally posted by oiaohm View Post
    Normal out of order is restricted to the resources of one CPU.
    You mean one core? But multiple threads are likewise limited to the resources of the CPU.

    Originally posted by oiaohm View Post
    Do note even the 2009 one notes that your speed-up is capped. In the 2009 version, no matter how many resources you throw at it, you are not going to get more than a 3x gain in performance. The new version gives you 4x, then you are tapped out.
    To make such statements shows a clear lack of logical understanding. How do you make a long dependency chain 4X faster? Not even OoO helps there.

    Many single-threaded apps are just coded plainly badly where not even increasing the OoO can improve performance that much, hence not even splitting it into threads unless the code is logically rewritten (i.e. different algorithms/data structures). No binary translation will ever do this, deal with it.
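
    To make the dependency-chain point concrete, here is a hypothetical worst case: a loop in which every iteration consumes the value produced by the previous one. Nothing (wider out-of-order windows, speculation, extra cores, or a binary translator) can start step N before step N-1 has produced x, so the run time is the chain length times the latency of one step; only rewriting the algorithm changes that.

    #include <stdint.h>
    #include <stdio.h>

    /* A long serial dependency chain: each iteration needs the previous result,
     * so the iterations cannot overlap no matter how much hardware you add. */
    int main(void)
    {
        uint64_t x = 0x12345678u;

        for (int i = 0; i < 100000000; i++) {
            /* the next value depends on the current one -> zero parallelism */
            x = x * 6364136223846793005ull + 1442695040888963407ull;
        }

        printf("%llu\n", (unsigned long long)x);
        return 0;
    }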



  • aht0
    replied
    Ehm... Android games? Why test against something with such small performance requirements?

    Look up the Arma series: either the original Operation Flashpoint, ArmA: Armed Assault, or the later ArmA 2. A3 already has 64-bit support and pseudo-multithreading (certain sets of tasks are offloaded across cores). I guarantee ArmA 2 would crush a contemporary CPU when you get creative in its Mission Editor. It happens in ArmA 3 too, despite 64-bit, core offloading, etc.: set a few hundred AIs to fight each other, OR go onto a large multiplayer server, and the game's performance drops to complete shit. I've seen 25 fps on my rig: Ryzen 5 [email protected] running 16GB 3333MHz DDR4 (overclocked 3200 kit) + 512GB NVMe SSD. Literally all it needs is CPU cycles for its scripting engine running on one of the cores; the GPU could be a potato. I am playing on a GTX 660 and, according to MSI Afterburner, the game is still not using the whole GPU resource pool.

    I'd be grateful as all %¤(/%&* if there was a way to get around this single-thread limitation business, ditch Windows, AND still have a functional anticheat engine running inside an emulator too. Until then, color me sceptical.

    I know what microstutter is. My point is, yeah, you can get it when running multi-GPU setups, AND you can have it when playing CPU-intensive games where you have 'funky things' going on with your CPU: increased latencies, thermal throttling, some other process trying to grab priority, etc. I am not too sure that such a "synthesized single thread" would translate into all that smooth a gaming experience in a CPU-intensive game.
    Last edited by aht0; 20 July 2019, 05:27 AM.

