It's Looking Like Debian 9.0 Stretch Won't Support OwnCloud


  • anda_skoa
    replied
    Originally posted by jospoortvliet View Post
    I'm not the one who ran the numbers, but I was told that searching for the location of configuration files on each server request creates overhead. Measurable and significant, which probably means 2% or more...
    If by searching you mean looking into several locations, then OK, but that is something entirely different from just locating the file in a different place.

    Obviously the real problem is that the software has to read the config at every request in the first place, but I guess you are out of luck there as the stateless design of PHP doesn't allow you to do anything more performant.

    Cheers,
    _
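    The overhead being debated above can be sketched roughly as follows (in Python rather than PHP, with made-up paths; this is an illustration of the probing pattern, not ownCloud's actual code): checking several candidate config locations on every request costs extra stat() calls compared with assuming one fixed path.

    ```python
    import os
    import timeit

    # Hypothetical candidate locations an app might probe on every
    # request when the config file may live in more than one place.
    CANDIDATES = [
        "/etc/myapp/config.php",
        "/usr/local/etc/myapp/config.php",
        "/var/www/myapp/config/config.php",
    ]

    def probe_candidates():
        """Stat several locations until one exists (flexible layout)."""
        for path in CANDIDATES:
            if os.path.exists(path):
                return path
        return None

    def fixed_path():
        """Assume one fixed location (the single-folder layout)."""
        return "/var/www/myapp/config/config.php"

    # Each extra probe is another stat() syscall per request; with hot
    # disk caches it is cheap, which is exactly the disagreement above.
    n = 100_000
    t_probe = timeit.timeit(probe_candidates, number=n)
    t_fixed = timeit.timeit(fixed_path, number=n)
    print(f"probing: {t_probe:.3f}s, fixed: {t_fixed:.3f}s over {n} calls")
    ```

    Whether the difference is "2% or more" of a full PHP request is the open question; the syscalls are real, but so is the cache-warm argument.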



  • jospoortvliet
    replied
    Originally posted by anda_skoa View Post
    I doubt it.
    Unless /etc is on a different disk, the seek and read time will still be the same, especially once you have hot disk caches.

    Cheers,
    _
    I'm not the one who ran the numbers, but I was told that searching for the location of configuration files on each server request creates overhead. Measurable and significant, which probably means 2% or more...

    With regards to "would Debian users prefer that over breaking their config" - I am sure of it, but users on every other platform would also incur this performance hit if we'd enable it in the normal ownCloud core. So it is unlikely to go in. We always told users: use our packages...



  • anda_skoa
    replied
    Originally posted by W.Irrkopf View Post
    I lack the knowledge to understand whether performance is a sound argument here:
    I doubt it.
    Unless /etc is on a different disk, the seek and read time will still be the same, especially once you have hot disk caches.

    Cheers,
    _



  • W.Irrkopf
    replied
    Originally posted by jospoortvliet View Post
    In the meantime, we build our own Debian packages, as Kano just suggested, too. Sadly, the packages from Debian can't upgrade to those because they split things up and put some files outside of the ownCloud folder and all that. We don't support that for performance reasons, so that's a unique patch - breaking upgrades.
    I lack the knowledge to understand whether performance is a sound argument here:
    a) What type of performance are you talking about?
    b) Are there numbers to substantiate the performance arguments?
    c) If Debian users have lived with the bad performance so far, wouldn't it make sense to assume that they will prefer to live with it rather than lose their configuration? Especially when looking at the argument about religiously never moving stuff (or maybe I misinterpret this because there is a distinction between users and admins?). If you create packages intended for existing Debian installations of ownCloud, you should perhaps consider including a patch in the debian/ directory.



  • jospoortvliet
    replied
    Originally posted by boltronics View Post
    Why is splitting the package up a problem for ownCloud? If it's because one or more libraries you use are already packaged and are older than current ownCloud releases support, can't we just add updates to the applicable package(s) in debian-backports? As for Debian testing, I can't imagine this being a problem there.
    Two things. First, and I don't know the technical details here so I might be at least partly off the mark, but there's a performance impact to having files in 'flexible' locations that was measured to be not insignificant, sadly. So config.php in /etc simply costs performance on each server request - uncool. That might or might not be the case for the split-up libraries we currently ship as part of ownCloud - not sure about that.

    Second, if you want to upgrade from a version that has config.php in a non-standard place to a version that is from us, stuff breaks unless you know what you're doing. That's a problem for users, especially considering we strive to make ownCloud EASY EASY EASY (that's why we have so many users - it is simple to install and manage an ownCloud server). Don't underestimate the ability of users to ignore every warning and written instruction and release note and get themselves stuck ;-)

    I've actually been in some heavy discussions with founder and maintainer Frank where we wanted to do X, which would require a tiny, eensy-weensy change from users to adopt. One example was the new endpoints for calendar & contacts. I argued we could do it - we can put it in the release blog, social media, documentation, release notes, heck, even pop up a warning. If it is in all our communication, most users would read it, right? Frank refused - either we kept the endpoints the same or we didn't do the change, period. You can't expect users to read ANYTHING you write. And my experience has proven him right. Plenty of changes we simply, technically, could not manage to avoid have forced users to take some kind of manual action, and no amount of documentation and communication could stop users from asking me over social media again and again about it.

    Frank brings to ownCloud an almost religious hatred of breaking anything and a commitment to keeping stuff simple. I've recently described them in our 'three priorities' blog https://owncloud.org/blog/the-three-...nt-priorities/ and you can and will see it all over our GitHub.

    Our decisions, including many that people deride us for, can invariably be traced back to those priorities. That includes notorious examples like the disabling of automatic upgrading in packages (oh boy, do people hate it. Except for those who ctrl-C'ed their upgrade because 'apt-get took so long, it was probably stuck' and lost their entire ownCloud database) and many other things.

    Originally posted by boltronics View Post
    I can't speak for Debian, but I have thought about this and I think containerisation has a lot of worrying drawbacks. For example, there is talk about concerns over security. Most developers are never going to package themselves, and if they do it won't be for every platform, so distro maintainers will always need to take on the bulk of the packaging work (unless we drop a lot of packages). Now say there is a critical vulnerability found in libjs-bootstrap today, and everyone has to upgrade. Maybe there are 10 or more different applications all using that library. So while only one package historically had to be updated, package maintainers now have 10x+ the amount of work to do.

    But maybe somehow the burden will shift onto application developers, because it will make life so easy for them to do so. Well, how many of those developers are security experts? Do they watch the security mail lists for all the libraries and tools they need and include in their package? Can they be counted on to promptly push out releases when such issues are discovered? Very unlikely.

    Well maybe the distro security teams can step in in those examples, right? I'm not convinced. I don't believe it will always be easy to figure out how a package was built from scratch if things are released pre-bundled. It may not be possible for such a security team to exist, or at least be very meaningful. Even if all the source code is included and the build instructions are obvious, it may not be obvious how the bundled libraries came into being. Deb and RPM packages have a recipe that can be followed to reproduce the result* (eg. take this specific library release from this specific repository, apply these patches, copy this file to this location with these permissions and this file ownership, etc). Can all the other solutions say the same? Surely it's possible to write Docker containers that way (for example), but is it enforced, or just something which is possible if the developer decided to do it that way?

    *RedHat has historically had great difficulty providing the correct source RPMs for their binary package (probably in violation of various licenses), as previously documented by CentOS developers. If a company as big as RedHat can't even supply source code correctly, despite a package management system that basically mandates it, how the heck can we expect anyone else to get it right?

    And then there are all the efficiency issues: wasted space and memory usage due to packages not sharing libraries, etc.

    So in short, I think some of these upcoming packaging formats will be a big slap in the face for free software advocates, the security conscious and many others, and I feel they only aim to benefit developers who don't care about any of that. If any of them take off, it'll be a big step backwards.

    Edit: BTW, if you want my idea of an upcoming package management solution done right, take a look at Nix or GNU Guix. You can install whatever package versions you need without worrying about conflicts, and it's perfectly clear to the user/administrator what is going on. Flexibility and transparency combined to make a very powerful solution for all.
    Don't think I'm arguing for containerisation; I'm entirely with you in what you write. I personally think it's a horrible solution!

    But other solutions require wide adoption and collaboration between distributions and, sadly, they have shown themselves incapable of that. People, including myself, have been kicking that dead horse for a few decades now. The arbitrary differences that still exist between packaging guidelines for, say, Fedora and openSUSE, or the differences in naming of packages between distributions - that stuff kills technological solutions like the Open Build Service or cross-distribution packaging technologies. And if the distributions can't get their act together, then - well, desperate measures...

    I find it sad and frustrating, often even infuriating, hence my whining about this on social media lately ;-)



  • boltronics
    replied
    Originally posted by jospoortvliet View Post
    For example, to have Calendars work well with Windows 10, we need a fix in Sabre/DAV. Sabre has integrated our patch in their current stable branch - but that only supports PHP 7 and we still support PHP 5.4 so we can't upgrade to that release. It isn't clear yet if they want to backport. If not, well, we'll apply it to the version of Sabre/DAV we ship...
    Well that's an interesting point and a good argument (and I didn't see any mention of this when I was reading through the Debian mail list!).

    When packages have dependencies outside of Debian that are causing problems, there are only three options that I know of: Add the newer version (if applicable) to debian-backports, make a policy exception like the one made for Firefox, or drop the package. Adding backports of the php7.0 and Sabre/DAV library would unfortunately force ownCloud itself into debian-backports as well, but that's still preferable to the current situation IMO. It sounds like you may prefer ownCloud to be in debian-backports anyway. Perhaps putting that question to the mail list would give you further suggestions (if that's the main problem).

    Originally posted by jospoortvliet View Post
    Sadly, the packages from Debian can't upgrade to those because they split things up and put some files outside of the ownCloud folder and all that.
    Why is splitting the package up a problem for ownCloud? If it's because one or more libraries you use are already packaged and are older than current ownCloud releases support, can't we just add updates to the applicable package(s) in debian-backports? As for Debian testing, I can't imagine this being a problem there.

    Originally posted by jospoortvliet View Post
    Perhaps Debian also needs to start thinking about the future?
    I can't speak for Debian, but I have thought about this and I think containerisation has a lot of worrying drawbacks. For example, there is talk about concerns over security. Most developers are never going to package themselves, and if they do it won't be for every platform, so distro maintainers will always need to take on the bulk of the packaging work (unless we drop a lot of packages). Now say there is a critical vulnerability found in libjs-bootstrap today, and everyone has to upgrade. Maybe there are 10 or more different applications all using that library. So while only one package historically had to be updated, package maintainers now have 10x+ the amount of work to do.

    But maybe somehow the burden will shift onto application developers, because it will make life so easy for them to do so. Well, how many of those developers are security experts? Do they watch the security mail lists for all the libraries and tools they need and include in their package? Can they be counted on to promptly push out releases when such issues are discovered? Very unlikely.

    Well maybe the distro security teams can step in in those examples, right? I'm not convinced. I don't believe it will always be easy to figure out how a package was built from scratch if things are released pre-bundled. It may not be possible for such a security team to exist, or at least be very meaningful. Even if all the source code is included and the build instructions are obvious, it may not be obvious how the bundled libraries came into being. Deb and RPM packages have a recipe that can be followed to reproduce the result* (eg. take this specific library release from this specific repository, apply these patches, copy this file to this location with these permissions and this file ownership, etc). Can all the other solutions say the same? Surely it's possible to write Docker containers that way (for example), but is it enforced, or just something which is possible if the developer decided to do it that way?

    *RedHat has historically had great difficulty providing the correct source RPMs for their binary package (probably in violation of various licenses), as previously documented by CentOS developers. If a company as big as RedHat can't even supply source code correctly, despite a package management system that basically mandates it, how the heck can we expect anyone else to get it right?

    And then there are all the efficiency issues: wasted space and memory usage due to packages not sharing libraries, etc.

    So in short, I think some of these upcoming packaging formats will be a big slap in the face for free software advocates, the security conscious and many others, and I feel they only aim to benefit developers who don't care about any of that. If any of them take off, it'll be a big step backwards.

    Edit: BTW, if you want my idea of an upcoming package management solution done right, take a look at Nix or GNU Guix. You can install whatever package versions you need without worrying about conflicts, and it's perfectly clear to the user/administrator what is going on. Flexibility and transparency combined to make a very powerful solution for all.
    Last edited by boltronics; 31 March 2016, 09:03 AM.
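    The "recipe" property described above can be sketched, hypothetically, as an abbreviated RPM spec: every input (source tarball, patch) and every transformation (install path, mode) is declared, so anyone can rebuild the same result. Package name, URL, and file paths below are made up for illustration:

    ```spec
    # Hypothetical, abbreviated .spec fragment illustrating the
    # reproducible "recipe": declared source, declared patch,
    # declared install location with explicit permissions.
    Name:           libjs-example
    Version:        1.2.3
    Source0:        https://example.org/libjs-example-%{version}.tar.gz
    Patch0:         fix-xss.patch

    %prep
    %autosetup -p1          # unpack Source0, apply Patch0

    %install
    install -D -m 0644 dist/example.min.js \
        %{buildroot}%{_datadir}/javascript/example/example.min.js

    %files
    %{_datadir}/javascript/example/example.min.js
    ```

    Nothing in a container image format forces this level of declared provenance; it is possible to build images this way, but, as the post argues, it is not enforced.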



  • jospoortvliet
    replied
    Note that we try to upstream as much code as we can - but our first priority is to get a good product to our users, as I explained in the email Phoronix linked to. And not all upstreams are as easy to deal with as we'd like. For example, to have Calendars work well with Windows 10, we need a fix in Sabre/DAV. Sabre has integrated our patch in their current stable branch - but that only supports PHP 7 and we still support PHP 5.4 so we can't upgrade to that release. It isn't clear yet if they want to backport. If not, well, we'll apply it to the version of Sabre/DAV we ship...

    If we're special, I don't know. For sure, users should demand newer versions than 7.0 from their distribution - with tools like CUPS or Apache, old might be stable and do a fine job. With ownCloud, we're moving and growing so quickly that using a version that is more than a year old means your version is less tested (we grew our user base by a factor of 3 last year...) and gets no backported fixes beyond security. So you're not gaining any stability by staying on an old version; on the contrary.

    Now I get that people don't want to change anything on their server, and hey - I'm not saying we don't WANT to provide these things! It'd be great. But somebody has to do the work, and our customers pay us for features (yeah, the n'th authentication mechanism or storage system...) while our contributors prefer to work on supporting PHP 7 in ownCloud 9 rather than doing anything to ownCloud 7... Can you blame them? Maybe, but you can't force them to do what you want unless you pay them ;-)

    If somebody really wants to fix these problems, awesome, really. That's what I said.

    In the meantime, we build our own Debian packages, as Kano just suggested, too. Sadly, the packages from Debian can't upgrade to those because they split things up and put some files outside of the ownCloud folder and all that. We don't support that for performance reasons, so that's a unique patch - breaking upgrades.

    Yeah, perhaps we have to move to containers or something like that. We're actually working on a Ubuntu Snap...

    I hope some of the Debian developers can join us at the conf and we can improve things somehow. Meanwhile, other distributions are looking for other solutions like Project Atomic and XDG-apps at Fedora, rolling stuff with Tumbleweed at openSUSE, Snappy at Ubuntu. Perhaps Debian also needs to start thinking about the future?



  • Kano
    replied
    If you consider just the needed upgrade path, it is logical that it cannot be in a package named owncloud. It seems that not even owncloudX would be enough, but rather owncloudX.Y, with all needed upgrade steps. Basically somebody could put all of those in a repo, but the practical use seems limited. If you bundle libs, that is OK with me as long as you build everything from source, but it is always possible that those contain security-related bugs. If you run a system with critical data, then you need to know when updates are needed and that the bundled libs are fixed too. It is usually better not to bundle your own patched libs but to try to upstream the changes. If that is not possible, then better ask yourself why not. ownCloud should really work on direct upgrades, upstream 3rd-party changes, and maybe build their own Debian packages until these issues are resolved.



  • c117152
    replied
    Originally posted by chrisq View Post
    The solution for owncloud is to create a docker image with all their modifications, and make that the only supported platform.
    The more projects turn to Docker as a deployment solution, the more it becomes clear how problematic apt and the LSB are. Don't get me wrong, you're absolutely right to suggest Docker to circumvent distribution-mandated library restrictions. But for Debian it means their package system is simply outdated. Maybe the NixOS solution of packaging multiple library versions and incorporating containment right into the packaging system is the right technical choice. Maybe it's time for Debian to start thinking about leaving the aging apt behind and seeking greener pastures...
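    What chrisq suggests could look something like the minimal, hypothetical Dockerfile below: the application is shipped together with its vendored (possibly patched) libraries in one image, so the distribution's library policy no longer applies. The image tag, paths, and layout here are assumptions for illustration, not ownCloud's actual recipe:

    ```dockerfile
    # Hypothetical sketch: bundle the app with its patched dependencies
    # into a single supported image, sidestepping distro packaging.
    FROM php:5.6-apache

    # Ship the app including its vendored third-party libraries
    # (e.g. a patched Sabre/DAV) instead of depending on the
    # distribution's packaged versions.
    COPY owncloud/ /var/www/html/
    COPY config/config.php /var/www/html/config/config.php
    RUN chown -R www-data:www-data /var/www/html

    EXPOSE 80
    ```

    The trade-off is exactly the one debated elsewhere in this thread: the vendor now owns security updates for every bundled library in the image.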



  • Michael_S
    replied
    For the use-case of ownCloud, I've been playing with hosting my own instance of the open source sandstorm.io software. So far it works great, though I haven't done any heavy lifting with it.

    The Sandstorm developers are up front that their project is in heavy development beta, so they change things too quickly to make it worthwhile to make a software package for Debian or Fedora or similar. They have their own security update mechanism.

