Red Hat / Fedora To Work On Bringing Up Arm Laptops Under Linux
Originally posted by schmidtbag: In my experience, the slowness is only due to insufficient driver support. Relative to x86, it'll still be less bloated.
Originally posted by schmidtbag: Actually it can handle x86-exclusive tasks too, you're just going to sacrifice a lot of performance in doing so.
But yeah, I agree. I've even found that old quad core Cortex A9 CPUs are perfectly adequate at handling most everyday CPU-bound tasks. The thing is, most software isn't compiled to use most of the special instructions you find in x86, which Clear Linux really helps exemplify. So, with the exception of a few libraries accelerated by certain x86 instructions (or by other hardware, like GPUs), you can actually have a pretty fluid ARM experience.
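The point about unused special instructions is easy to check on a running Linux box: the kernel reports every ISA feature the hardware offers in /proc/cpuinfo, whether or not your distro's binaries were built to use any of them. A minimal sketch (parsing logic only, nothing distro-specific):

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Collect the ISA feature flags the kernel reports for this CPU.

    On x86 the relevant lines start with "flags"; on ARM they start
    with "Features". Returns an empty set if the file is missing
    (e.g. on non-Linux systems).
    """
    flags = set()
    try:
        with open(path) as f:
            for line in f:
                if line.startswith(("flags", "Features")):
                    flags |= set(line.split(":", 1)[1].split())
    except FileNotFoundError:
        pass
    return flags

# e.g. "avx2" in cpu_flags() tells you whether the hardware offers AVX2,
# independently of whether the software you run was compiled for it.
```

Comparing that set against the `-march` target a distro actually builds for is essentially what the Clear Linux comparison above is about.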
Originally posted by schmidtbag: I've considered this concept, since they kinda did the same thing with x86-64 (which is arguably a hybrid architecture). But I think it'd add way too much bloat and complexity to run both ARM and x86-64 binaries at the same time. Keep in mind that the separate cores would have to run their own kernel and basically run totally independent of each other, which also implies there'd be some performance issues getting them to communicate with each other.
I figure the bloat and complexity can't be any worse than targeted binaries that support multiple x86_64 instruction sets as far as launching programs is concerned. I don't think they'd need multiple kernels, just a kernel that understands both architectures and can interoperate between the two. Any graphics changeover from ARM to Radeon shouldn't be that different from the various laptop setups where the Intel GPU switches to the AMD or Nvidia GPU when you play a game. An ARM/x86 hybrid isn't that far out of the box considering all the work AMD has put into HSA, APUs, Infinity Fabric, and whatnot.
Originally posted by schmidtbag: I think it'd make more sense for AMD to make an x86 equivalent of big.LITTLE, where you have 4 low-power cores without SMT, with short pipelines and limited instructions. They'd likely remain below 1GHz and wouldn't have boost clocks. Then you have another set of cores (of varying quantities depending on model) that are much more beefy, and can adjust their clock speeds independently of the low-power cores.
The low-power cores would be used for background processes and foreground tasks that barely demand any CPU power, like a text editor or a calculator. Then all of your main foreground and CPU-intensive tasks would be run by the other cores. All this would take is an adjustment to the CPU scheduler, and there could even be a profiler implemented so the scheduler automatically knows which core to assign the process to. In most systems, I'm sure you could just set the affinity based on the user (so for example, root-run processes would be run by the low-power cores).
Using a big.LITTLE-like approach for x86 could really help improve efficiency without the need to recompile software or take a nasty latency hit. It's the same idea ARM uses.
I'm actually surprised x86 big.LITTLE hasn't been done via software by now. Seems like something that one of those Android governor people would make for a desktop or laptop CPU.
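The affinity idea above is in fact already scriptable on Linux today: `os.sched_setaffinity` wraps the same `sched_setaffinity(2)` syscall that `taskset` uses, so a userspace policy daemon could pin processes to a "little" cluster without kernel changes. A minimal sketch (the idea of cores 0-3 being the low-power ones is a made-up example; real core enumeration depends on the chip and firmware):

```python
import os

def pin_to_cores(pid, cores):
    """Restrict process `pid` (0 = the calling process) to `cores`,
    a set of CPU IDs, and return the resulting affinity mask."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# On a hypothetical big.LITTLE-style x86 part, a scheduler policy
# could pin background daemons to the low-power cluster, e.g.:
#   pin_to_cores(daemon_pid, {0, 1, 2, 3})  # "little" core IDs (assumed)
```

This is roughly what the Android governor tooling mentioned below does, just driven by kernel scheduler classes rather than a userspace script.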
Originally posted by debianxfce: A much better and cheaper laptop for real (no company connection) open source Linux users:
Originally posted by schmidtbag: I wasn't aware that the servers have a UEFI (never had access to one), but I'm not entirely sure if the Windows phones and tablets use it. It could just be a pseudo-UEFI, kinda like what some of the Chromebooks and Android devices have, where you get a primitive boot menu but you can't really do anything else with it.
UEFI is the internal firmware structure and the API it offers to the OS and to EFI applications (usually boot managers, tools, and filesystem drivers, as well as the native GUI application that lets you change hardware settings from the UEFI environment, which is the part you mistakenly thought was UEFI itself).
And yes, Windows phones/tablets use true UEFI firmware.
https://www.windowslatest.com/2018/04/06/developer-runs-uefi-boot-manager-on-microsoft-lumia-950-xl/
After weeks of trying to get Windows RT on Lumia phones up and running, a developer has managed to boot UEFI on Microsoft Lumia 950 XL; here's evidence that at least one such experiment has succeeded. Microsoft introduced UEFI for Windows operating systems with Windows Server 2008 R2 and Windows 7. Unified Extensible Firmware Interface (UEFI) defines an interface between […]
Worth noting is that u-boot is also capable of UEFI boot, and it can do without ACPI (and its many evils) since it can pass the Linux kernel the board's flattened device tree at boot.
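Which of these interfaces the kernel was actually handed is visible from userspace on Linux via standard sysfs/procfs paths, so you can tell a real UEFI boot from a device-tree boot on any given board. A small probe:

```python
import os

def firmware_interfaces():
    """Check which firmware interfaces the running kernel was handed.

    /sys/firmware/efi exists only when the kernel booted via UEFI;
    /proc/device-tree appears when a flattened device tree was passed
    (e.g. by u-boot); /sys/firmware/acpi appears on ACPI systems.
    """
    return {
        "uefi": os.path.isdir("/sys/firmware/efi"),
        "devicetree": os.path.isdir("/proc/device-tree"),
        "acpi": os.path.isdir("/sys/firmware/acpi"),
    }
```

On an ARM board booted by u-boot with a device tree you'd typically see devicetree without acpi; a UEFI+ACPI server shows the opposite.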
Kernel and VPU drivers have been the main issue on ARM.
The two ARM products worth hacking on now are the Odroid N2 and the RockPro64. Fedora was quick to fix the one application bug I found on ARM, but they will also need to start compiling against gl4es if they want the distro to work out of the box. Example: https://forum.odroid.com/viewtopic.php?f=150&t=30170
Originally posted by skeevy420: I just meant when compared to a 4 or 5 GHz Intel or AMD CPU. x86 has the brute force that ARM seems to be lacking.
I have an older Westmere, one generation before UEFI and AVX. ARM is like my CPU: it gets the job done fast enough, but there are better and faster ways.
I figure the bloat and complexity can't be any worse than targeted binaries that support multiple x86_64 instruction sets as far as launching programs is concerned. I don't think they'd need multiple kernels, just a kernel that understands both architectures and can interoperate between the two. Any graphics changeover from ARM to Radeon shouldn't be that different from the various laptop setups where the Intel GPU switches to the AMD or Nvidia GPU when you play a game. An ARM/x86 hybrid isn't that far out of the box considering all the work AMD has put into HSA, APUs, Infinity Fabric, and whatnot.
But if you're expecting the ARM CPUs to work as secondary co-processors (in the same way that a GPU can be) then sure, a lot of what you said is true, and a lot of the potential issues I brought up will be irrelevant. But then you've got a whole new set of problems, such as:
* You'll probably have to access those cores through PCIe, even if they share the same DIMMs. Depending on what you want the cores to do and what program you're running, this could dramatically slow down certain tasks.
* Since the ARM cores would be distinctly separate, whatever they run is somewhat hidden and abstracted from the user. So for example, you can't open up htop or task manager to view or modify the processes (or at least, you'd need a separate version to do so).
* If the ARM cores are meant to handle most of the low-level background and/or system tasks, that becomes a logistical issue if it's treated as a secondary processor.
* You need binaries compiled specifically for those cores if you actually want them to run efficiently. Assuming some of these binaries are needed by both the x86 and ARM cores, this will add a lot of bloat (since, to my knowledge, the associated libraries need to be ARM-compatible too). If you don't recompile them, then you're kinda defeating the purpose of using ARM.
Some of these issues could be alleviated if you use the ARM cores as your primary cores with x86 as secondary workhorse cores. But then you get another whole new set of issues that I don't feel like writing out.
Last edited by schmidtbag; 09 April 2019, 09:22 AM.
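Even with a kernel that understands both architectures, userspace would still need a launcher that picks the right compiled binary per process, which is where the per-arch bloat in the list above comes from. A toy sketch of that launcher-side choice (the install paths and the BINARIES table are invented purely for illustration):

```python
import platform

# Hypothetical install layout for a dual-architecture package;
# these paths are made up for illustration.
BINARIES = {
    "x86_64": "/usr/libexec/myapp/myapp.x86_64",
    "aarch64": "/usr/libexec/myapp/myapp.aarch64",
}

def pick_binary(arch=None):
    """Choose which compiled binary to exec for the given (or current)
    machine architecture, as a hybrid system's launcher might."""
    arch = arch or platform.machine()
    if arch not in BINARIES:
        raise RuntimeError(f"no binary built for {arch}")
    return BINARIES[arch]
```

Every package shipped this way carries both builds plus both sets of dependent libraries, which is exactly the doubling being objected to.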
Originally posted by schmidtbag: Well yeah, but nobody in their right mind would use ARM as such a workhorse. An ARM CPU below 2GHz is plenty fast enough for everyday needs. And yes, there are "workhorse" servers, but they're not built to churn data at high speeds; they're built to handle dozens of small tasks.
Originally posted by schmidtbag: Better is relative. If all you care about is performance, ARM is not the right choice.
Originally posted by schmidtbag: I guess it really depends on how you're looking at this. To me, I was picturing the ARM cores to be running in tandem with the x86 cores at the same level in the system. In other words, if you had 4x ARM cores and 4x x86 cores, you'd see 8 total cores in your task manager.