Calamares 1.0 Distribution-Independent Installer Framework Released


  • teo_m
    replied
Originally posted by r_a_trip
    It's just an installer framework. Used only during installation and it can install Qt and GTK based desktops equally fine. No need for the outdated KDE vs Gnome purism wars here.
    Calamares founder/project manager/lead developer/etc here. Thank you everyone for the amazing feedback!
    I just wish to point out a few things which might have been unclear in the original announcement.

    Firstly, Calamares itself is a pure Qt application, not a KF5 application. The only KF5 bits are those required by the partitioning module, since it uses KDE PartitionManager code. While I would personally be happy if Calamares became the "usual installer" on KDE-oriented distributions, Calamares is not a KDE project, and we try to support and cater to all environments. I'm a KDE user and a member of the KDE community (I've been hacking on KDE projects for years now), but if Calamares doesn't look or work right in a GTK-based environment, I would consider that a bug and try to fix it.

Secondly, I find it odd when someone calls a piece of software "bloated", because "bloat", whatever that might actually be, has to be measured against what the software does. I try to keep our dependencies to a minimum, but I also need to meet a specific set of requirements, and I make hard choices about which tools to use to accomplish that. For a versatile, modular, and brandable installer framework, Qt + yaml-cpp + Python + Boost.Python + a small part of KF5 (limited to a single module) doesn't seem unreasonable to me. Installers with different feature sets will require different sets of dependencies, and I'm sure some distributions will have their requirements met much better by some other installer.
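To make the modularity concrete: Calamares job modules can be written in Python, and follow a simple contract where `run()` returns `None` on success or an error tuple on failure. The sketch below is illustrative only; real modules import `libcalamares` for global storage and job configuration, while here the configuration is passed in directly so the example stays self-contained.

```python
# Hypothetical sketch of a Calamares-style Python job module.
# Real modules import libcalamares; the configuration dict here
# stands in for the module's job configuration.

def run(configuration=None):
    """Set a hostname, as an installer job module might.

    Returns None on success, or an (error message, details) tuple
    on failure -- the contract Calamares job modules follow.
    """
    configuration = configuration or {"hostname": "examplehost"}
    hostname = configuration.get("hostname")
    if not hostname:
        return ("No hostname configured",
                "The job configuration did not provide a hostname.")
    # A real module would write the target system's /etc/hostname;
    # this sketch just validates the configuration and reports success.
    return None
```

Because every step of the installation is a module like this, a distribution can swap, reorder, or brand steps without touching the core application.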



  • anda_skoa
    replied
Originally posted by Luke_Wolf
    As specified by the LSB, they're called RPMs

    The problem is that unless you statically link everything in, which you'll be yelled at for, distros are too different at the core for sharing packages between distribution families.
    Not necessarily.
The problem with LSB RPMs is that people wrongly assume they need to be installed by the system package tool, especially people on an RPM-based distribution.

    The LSB defines RPM as the package format, but the packages should be handled by each distribution's LSB RPM tool.

That approach has several advantages:
- it doesn't require running as root, since the LSB installation directory can have very different permissions from the system's main installation tree
- such a package can, at worst, interfere with another LSB package, not with the system itself
- installation can be made accessible to users who would not be granted system-level installation privileges.

It is closer in concept to how the Steam runtime works: a well-defined base platform that third-party developers can target, but which distributions can provide in any way they see fit.
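A rough sketch of what such a per-user LSB install tool might do, for illustration: the tool name and paths are hypothetical, but the `rpm` flags used (`--install`, `--dbpath`, `--prefix`) are real, and `--prefix` works for relocatable packages. With a private database and a user-owned prefix, no root privileges are involved.

```python
# Illustrative sketch of the point above: an LSB package in RPM
# format handled by a per-user tool instead of the system package
# manager. Paths are invented; the rpm flags are real.

import os

def lsb_install_command(package, user_root="/home/alice/.lsb"):
    """Build an rpm invocation that installs into a user-owned tree
    with a private database, so no root privileges are needed."""
    return [
        "rpm", "--install",
        "--dbpath", os.path.join(user_root, "rpmdb"),  # private rpm database
        "--prefix", os.path.join(user_root, "opt"),    # user-owned install tree
        package,
    ]

cmd = lsb_install_command("example-app-1.0.lsb.rpm")
```

The key design point is the separate `--dbpath`: the system's package database never sees the LSB package, so the two worlds cannot corrupt each other.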

    Cheers,
    _



  • Luke_Wolf
    replied
Originally posted by nanonyme
Well, in an ideal world, distros would only be a concept for defaults: everything would be in common package repositories, and API/ABI guarantees would be globally agreed on. Tight dependencies would be avoided where possible and solved through shims and such. GNU/Linux as it is is not exactly an inviting platform, in all its fragmentation. It's just silly how many people seem to think forking will resolve issues. With a good package system and proper alternatives support there should never be a need to fork: just provide an alternative package, and a mechanism exists which the user can use to choose even between packages with identical versions.
    You know.... That sounds a lot like https://www.freebsd.org/



  • duby229
    replied
Originally posted by pythoneer
As I mentioned, the Windows approach (thus the WinSxS mess) is at one extreme. WinSxS does nothing more than throw every lib into a huge, unmaintainable bucket and forget about it – and dare you try to clean up that mess: you have to reinstall your system.

The approach you mention reminds me of the latest one from GNOME. There is nothing wrong with it in the first place – we can see it in action with Steam – maintaining a runtime that every distro just installs alongside its own packages. But what about libs not maintained in the runtime? Are they too minor to think about, so that everything beyond the runtime needs to be statically linked or shipped as a shared lib with rpath "hacks"? Or is it necessary to maintain something like "maven", "npm", "gems", "homebrew", "biicode" ... for that runtime, so that A) you register your lib_xy_v1.2 as a lib vendor on that repo, and B) every program can fetch its desired version from there and be sure that it is maintained?

Then there is no need for 10 distro maintainers to each maintain lib_xy_v1.2 on their distro. They can maintain it in this "global" runtime, and everybody benefits, regardless of which distro they use.

But then you have the problem that you need to be online to fetch those packages whenever you want to install a program whose libs have to be fetched
    I think you're probably right.

Personally, I wouldn't worry too much about backporting security fixes or bug fixes. If application developers don't want to keep their code updated against the latest version of the runtime, then it is their own fault when their application falls out of use.

On the other hand, if a significant group of contributors got involved, it might be reasonable to at least backport security fixes, something like Gentoo's GLSAs.



  • pythoneer
    replied
Originally posted by duby229
That sounds an awful lot like WinSxS, though. Not a big fan of that idea. Static binaries would be bad too; as you said, security fixes and so on.

I think the solution comes from a version-controlled common runtime environment. Instead of targeting distributions with their library dependencies, target a version-controlled common runtime with its library dependencies. From that base point, everything else could be containerized. Then the only dependency is the version of the runtime that was targeted.
As I mentioned, the Windows approach (thus the WinSxS mess) is at one extreme. WinSxS does nothing more than throw every lib into a huge, unmaintainable bucket and forget about it – and dare you try to clean up that mess: you have to reinstall your system.

The approach you mention reminds me of the latest one from GNOME. There is nothing wrong with it in the first place – we can see it in action with Steam – maintaining a runtime that every distro just installs alongside its own packages. But what about libs not maintained in the runtime? Are they too minor to think about, so that everything beyond the runtime needs to be statically linked or shipped as a shared lib with rpath "hacks"? Or is it necessary to maintain something like "maven", "npm", "gems", "homebrew", "biicode" ... for that runtime, so that A) you register your lib_xy_v1.2 as a lib vendor on that repo, and B) every program can fetch its desired version from there and be sure that it is maintained?

Then there is no need for 10 distro maintainers to each maintain lib_xy_v1.2 on their distro. They can maintain it in this "global" runtime, and everybody benefits, regardless of which distro they use.

But then you have the problem that you need to be online to fetch those packages whenever you want to install a program whose libs have to be fetched
    Last edited by pythoneer; 01 February 2015, 12:53 PM.
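The "register your lib, fetch your version" idea in the post above can be sketched as a registry that keeps multiple versions of the same library side by side, so two programs can depend on different versions without conflict. The library name follows the post (lib_xy); the registry class and paths are invented for illustration.

```python
# Hypothetical sketch: a "global runtime" library registry where a
# vendor registers lib_xy once, and every program fetches exactly
# the version it was built against.

class LibRegistry:
    def __init__(self):
        self._libs = {}  # name -> {version: install path}

    def register(self, name, version, path):
        """A lib vendor publishes one version of one library."""
        self._libs.setdefault(name, {})[version] = path

    def fetch(self, name, version):
        """Return exactly the version a program asked for."""
        try:
            return self._libs[name][version]
        except KeyError:
            raise LookupError(f"{name} {version} not registered")

registry = LibRegistry()
registry.register("lib_xy", "1.2", "/runtime/lib_xy/1.2/lib_xy.so")
registry.register("lib_xy", "2.0", "/runtime/lib_xy/2.0/lib_xy.so")
```

Because versions live under distinct paths, one maintainer serves every distro at once, which is exactly the de-duplication argument made above.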



  • duby229
    replied
Originally posted by pythoneer
But you don't want every program to ship with its own version of every lib, or at least not an un-updateable version of it. Package managers are good for delivering security updates but bad for third-party devs who want to deliver one package/bundle/binary that runs on every distro without shipping all the libs. If you ship all the libs, there should be a mechanism to "register" those libs with the system package manager so they can be updated and shared by other programs; if you don't ship them, the package manager needs to pull down those libs in every version out there and must provide a way to use them side by side. This is a really hard problem. The "solutions" out there (Windows/OS X/Linux) each sit at one extreme: either ship all the libs and forget about security (Windows, OS X), or go through dependency hell and build your program for every single distro in every single version out there. You are not building for Linux; you are building for Ubuntu 12.04 and Ubuntu 14.04, Fedora 21, etc. We need something that combines the best of these two worlds: ship a "bundle" that runs on all distros, but don't introduce security holes when that bundle and its libs stop getting updates (e.g. the programmer quits maintaining their own program).

It's "nice" that the distro package maintainers do so much work maintaining packages, but it should not be their responsibility. In the end they "own" the software, rather than providing a platform on which others can run their software. If I write software for "Linux", I need a kind maintainer on every distro to ship my software downstream. In my opinion, that is not how it should be in every situation. I really honor the hard work of the maintainers, but they are not responsible for every shitty piece of software (mine included) on their distro.

jm2c
That sounds an awful lot like WinSxS, though. Not a big fan of that idea. Static binaries would be bad too; as you said, security fixes and so on.

I think the solution comes from a version-controlled common runtime environment. Instead of targeting distributions with their library dependencies, target a version-controlled common runtime with its library dependencies. From that base point, everything else could be containerized. Then the only dependency is the version of the runtime that was targeted.
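A minimal sketch of that suggestion: applications declare which runtime version they target, and the host resolves that declaration to an installed runtime before launching the containerized app. The runtime name, versions, and paths below are invented for illustration.

```python
# Hypothetical sketch: apps target a version-controlled common
# runtime, so the only dependency they carry is the runtime version.

# Runtimes the host has installed, keyed by (name, version).
installed_runtimes = {
    ("common-runtime", "1.0"): "/runtimes/common-runtime/1.0",
    ("common-runtime", "1.1"): "/runtimes/common-runtime/1.1",
}

def resolve_runtime(app_manifest):
    """Return the path of the runtime an app targets, or None if
    that runtime version is not installed."""
    key = (app_manifest["runtime"], app_manifest["runtime_version"])
    return installed_runtimes.get(key)

app = {"name": "example-app",
       "runtime": "common-runtime",
       "runtime_version": "1.1"}
```

The design point is that distributions only have to ship the runtime versions; everything above the runtime travels inside the app's own container, so the underlying distribution stops mattering.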



  • pythoneer
    replied
Originally posted by duby229
As far as this universal package installer goes... um, does anybody else think packages should have been containerized a long time ago? The underlying distribution wouldn't matter so much if packages were installed and executed straight from the package.
But you don't want every program to ship with its own version of every lib, or at least not an un-updateable version of it. Package managers are good for delivering security updates but bad for third-party devs who want to deliver one package/bundle/binary that runs on every distro without shipping all the libs. If you ship all the libs, there should be a mechanism to "register" those libs with the system package manager so they can be updated and shared by other programs; if you don't ship them, the package manager needs to pull down those libs in every version out there and must provide a way to use them side by side. This is a really hard problem. The "solutions" out there (Windows/OS X/Linux) each sit at one extreme: either ship all the libs and forget about security (Windows, OS X), or go through dependency hell and build your program for every single distro in every single version out there. You are not building for Linux; you are building for Ubuntu 12.04 and Ubuntu 14.04, Fedora 21, etc. We need something that combines the best of these two worlds: ship a "bundle" that runs on all distros, but don't introduce security holes when that bundle and its libs stop getting updates (e.g. the programmer quits maintaining their own program).

It's "nice" that the distro package maintainers do so much work maintaining packages, but it should not be their responsibility. In the end they "own" the software, rather than providing a platform on which others can run their software. If I write software for "Linux", I need a kind maintainer on every distro to ship my software downstream. In my opinion, that is not how it should be in every situation. I really honor the hard work of the maintainers, but they are not responsible for every shitty piece of software (mine included) on their distro.

jm2c
    Last edited by pythoneer; 01 February 2015, 10:18 AM.



  • duby229
    replied
As far as this universal package installer goes... um, does anybody else think packages should have been containerized a long time ago? The underlying distribution wouldn't matter so much if packages were installed and executed straight from the package.



  • r_a_trip
    replied
Originally posted by curaga
The size added by the installer matters. If all that bloat means (for example) a 50 MB larger installer, that's 50 MB less room for installable software.
I still don't see the problem. Linux is an OS that practically assumes network connectivity; whatever isn't on the DVD ISO can be downloaded from the repository.



  • cocklover
    replied
But why? What problem needs to be solved with a new development? It would be better for KDE distro maintainers to submit fixes for the KDE bugs their users hit. A new installer is not really needed; at least I never had any problem installing KDE distros. But I have had bugs on every KDE distro I've used, ever since the first KDE version I used (4.4), and some of them are still present. Right now I'm waiting for a Synaptic replacement in Qt, but that is not really needed either; I can use Synaptic in KDE (Mint). I would rather have more GUI tools for managing the distro, like I have on Windows.

    Last edited by cocklover; 01 February 2015, 09:41 AM.

