if sandboxing is not an issue, why not just use GNU Guix?
Clear Linux Developers Weigh Supporting Snaps
Not directly related to the topic, but in a perfect world I can imagine a detection mechanism during the installation process so that the installer could download and install prebuilt packages optimized for the exact target machine. I wonder why this is not yet standard practice for the bigger performance-oriented distros. Maybe the burden of maintaining that for hundreds of different CPU variants is still too high? They could restrict it to the most popular recent architectures or find a cleverer way (JIT?) to handle this.
It still hurts my feelings that we cannot use all of the fancy new features of each new CPU generation in a seamless way, leaving performance and capabilities unused for ages.
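For x86-64, the detection step imagined above is fairly tractable today: the psABI (and glibc's hwcaps mechanism) already defines the x86-64-v2/v3/v4 micro-architecture levels, and an installer could map the CPU's feature flags onto one of them to pick a matching optimized repository. A minimal sketch, using simplified flag subsets; the installer integration itself is hypothetical:

```python
# Sketch: map CPU feature flags (as named in /proc/cpuinfo on Linux)
# to an x86-64 micro-architecture level per the x86-64 psABI.
# The flag sets below are simplified subsets of the full definitions.
LEVELS = [
    ("x86-64-v2", {"cx16", "popcnt", "sse4_1", "sse4_2", "ssse3"}),
    ("x86-64-v3", {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe"}),
    ("x86-64-v4", {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"}),
]

def x86_64_level(flags):
    """Return the highest level whose required flags are all present,
    falling back to baseline x86-64. Levels are cumulative, so we stop
    at the first level that is not satisfied."""
    satisfied = "x86-64"
    for name, required in LEVELS:          # v2, then v3, then v4
        if required <= flags:
            satisfied = name
        else:
            break
    return satisfied

def cpuinfo_flags(path="/proc/cpuinfo"):
    """Collect the feature flags of the first CPU listed, if readable."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

if __name__ == "__main__":
    print(x86_64_level(cpuinfo_flags()))
```

A repository layout keyed on the returned level is roughly what glibc's `glibc-hwcaps` library subdirectories already do for individual shared libraries.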
Clear Linux was really conceived as a cloud-centric OS with datacenters and containers in mind. While they do supply a desktop GUI package to install, my impression is that the desktop really isn't their development goal.
It seems more people want to take advantage of the performance it offers, but in a desktop GUI format, which is clearly not in sync with datacenters and containers.
Interesting that people send Intel requests to make its DE more Ubuntu-like, instead of sending requests to Canonical to make Ubuntu more Clear Linux-like in performance.
Originally posted by oleid:
Are there really many proprietary snap-only apps? Oo
Originally posted by VanCoding:
Why is Flatpak much better than Snap? The only "advantage" that I hear often is that Flatpak has "runtimes" to build an app on. The intention is to save space: when multiple apps build on the same runtime, they share its dependencies. But that's stupid. When an app builds on a runtime, it won't need all of the stuff the runtime provides, which means I'm getting stuff I don't need. Additionally, devs will just lazily build their apps on a runtime that provides much more than the app really needs instead of carefully picking the real dependencies. In the end, it results in higher storage usage.
Originally posted by mv.gavrilov:
A pity; it would be better if everyone switched to Flatpak. Flatpak is much better than Snap.
Originally posted by VanCoding:
Additionally, devs will just lazily build their apps on a runtime that provides much more than the app really needs instead of carefully picking the real dependencies. In the end, it results in higher storage usage.
Instead, they should deduplicate application files and require apps to bring every single dependency themselves. If two applications contain files with the same contents, those could be linked to just one file. Apps would also be more stable, because they'd be using the libs they were tested against, rather than the ones from the runtime, which could change at any given time.
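The "deduplicate instead of share runtimes" idea is essentially what content-addressed stores like OSTree (which Flatpak uses underneath) do. A toy sketch of file-level deduplication via hard links; a real store would also handle permissions, xattrs, and concurrent updates:

```python
import hashlib
import os

def dedupe_hardlinks(root):
    """Walk `root`; when two regular files have identical contents
    (same SHA-256), replace the later one with a hard link to the
    first. Returns bytes saved. All files must live on one filesystem,
    since hard links cannot cross filesystem boundaries."""
    first_copy = {}   # content digest -> path of the canonical copy
    saved = 0
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            digest = h.hexdigest()
            canonical = first_copy.get(digest)
            if canonical is None:
                first_copy[digest] = path
            elif not os.path.samefile(canonical, path):
                saved += os.path.getsize(path)
                os.unlink(path)             # drop the duplicate ...
                os.link(canonical, path)    # ... and hard-link the original
    return saved
```

One caveat the comment glosses over: with plain hard links, an app that modifies a shared file in place would silently change it for every other app, which is why real stores keep the deduplicated objects read-only.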
Originally posted by stargazer:
Cost of storage as an argument against Flatpak is one I've never fully understood given modern storage media; can you help me understand it? I understand it for content, where storage needs can grow rapidly, but the application area is usually tiny by comparison. The argument made sense when HDDs were measured in GB, and again when SSDs first came out with small capacities, but now 500 GB SSDs are <= $100 US and dropping. Root partitions (excluding home and content spaces) are usually well under 100 GB. Even with multiple VMs, the cost doesn't seem all that high. So where is the concern about duplicating a few GB in runtimes coming from?
Said another way: not all devices can be upgraded, especially those with soldered-in eMMC memory, like tablets. And why run Linux on a tablet? Because you like to use Linux, because you can find or write the drivers you need, and because you can.
Originally posted by ms178:
Not directly related to the topic, but in a perfect world I can imagine a detection mechanism during the installation process so that the installer could download and install prebuilt packages optimized for the exact target machine. I wonder why this is not yet standard practice for the bigger performance-oriented distros. Maybe the burden of maintaining that for hundreds of different CPU variants is still too high? They could restrict it to the most popular recent architectures or find a cleverer way (JIT?) to handle this.
It still hurts my feelings that we cannot use all of the fancy new features of each new CPU generation in a seamless way, leaving performance and capabilities unused for ages.
Will any distribution build packages for every possible CPU architecture variant, "to the Nth degree"? I doubt it.
Why? While package creation and testing are commonly automated now, there is still the manual process of merging patches, resolving code conflicts, and managing package dependencies and versioning. The alternative is to load up the package creation and testing system with endless lists of "ifdef" rules so the compile farm can build the packages as humans intend, but such endless lists are an easy avenue for introducing errors.
Given the current state of technology, the problem is not impossible to solve. It's simply a very steep series of mountains to climb that would cost a lot of time and person-power. Who has the skilled resources to devote to a project like that, where the ROI ("return on investment") is negligible at best?
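To put the "Nth degree" burden in concrete terms: every extra CPU target multiplies the whole build-and-test matrix. A back-of-the-envelope sketch; every number here is an assumption for illustration, not real distro data:

```python
# Illustrative arithmetic only: the build/test matrix grows
# multiplicatively with each extra CPU target supported.
packages = 30_000                 # assumed size of a large distro repo
targets = ["x86-64", "x86-64-v2", "x86-64-v3", "x86-64-v4"]
minutes_per_build = 5             # assumed average build + test time

total_builds = packages * len(targets)
cpu_days = total_builds * minutes_per_build / 60 / 24

print(f"{total_builds} builds, ~{cpu_days:.0f} CPU-days per full rebuild")
```

Even at these modest assumptions that is roughly 417 CPU-days per full rebuild, quadruple the baseline, before counting the human effort of triaging per-target build failures.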