
Ubuntu Still Unsure On Using XZ Packages


  • #16
    Originally posted by mercutio
    does xz do multithreaded decompression yet? last i saw it was "coming sometime".

    i've always found ubuntu kind of slow with package installs, even on multicore cpus.

    although the time to update package lists is unrelated to the decompression, i wish that would be improved too.
    Last time I used it, no.
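For what it's worth, multithreaded compression has been available via `xz -T` for some time; parallel decompression arrived later (xz 5.4+) and only helps when the `.xz` stream contains multiple blocks. A minimal sketch, assuming a reasonably recent `xz` on the PATH (file names are just examples):

```shell
# Sketch: parallel compression, and (with xz >= 5.4) parallel
# decompression. Threads only help decompression when the stream
# has multiple blocks, so force small blocks with --block-size.
head -c 10485760 /dev/zero > sample.bin          # 10 MiB test input

# -T0: use all cores; --block-size yields a multi-block stream
xz -T0 --block-size=1MiB -c sample.bin > sample.bin.xz

# Older xz accepts -T here but decodes single-threaded;
# 5.4+ decodes the blocks in parallel.
xz -d -T0 -c sample.bin.xz > roundtrip.bin

cmp sample.bin roundtrip.bin && echo "round trip OK"
```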


    • #17
      Someone already mentioned lzham in the previous thread; it has a compression ratio similar to lzma but much faster decompression.


      • #18
        Originally posted by mercutio
        heh i tried debian on 64mb of ram, apt-get is very bloated! ssh seemed fine though? you could try dropbear, and/or a uclibc based distribution.
        apt-get update caused a bunch of I/O, but that machine had a CF card for storage, so random I/O was no issue. I'm not sure what it would have been like with a physical HD. Do you use public key crypto for ssh auth? If not, then you're not going to have all the RSA/DSA calculation at login. Maybe I shouldn't be using 4K keys?


        • #19
          Originally posted by grotgrot
          Package files can't change so they could be set to be forever cacheable.
          That's not necessarily true:
          • If a package turns out to contain something that can do serious harm to your computer, the admins can remove it, or revoke its read permissions, to limit further damage until a fixed package is available. Marking it eternally cacheable would take away that option.
          • A file might also change if an earlier copy contained wrong or incomplete data (e.g. as the result of I/O errors) and is later overwritten with a correct copy.

          BTW: if you run your own caching server, you can always override those caching headers if you prefer.
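As a concrete illustration of overriding headers in your own cache, here is a minimal nginx proxy sketch (the port, cache path, and upstream host are hypothetical examples, not anything prescribed in this thread):

```nginx
# Hypothetical apt caching proxy: ignore the origin's cache headers
# and keep .deb files for 30 days regardless.
proxy_cache_path /var/cache/apt-proxy levels=1:2 keys_zone=aptcache:10m
                 max_size=10g inactive=60d;

server {
    listen 3142;
    location ~ \.deb$ {
        proxy_pass http://archive.ubuntu.com;
        proxy_cache aptcache;
        proxy_ignore_headers Cache-Control Expires;   # override origin headers
        proxy_cache_valid 200 30d;
    }
}
```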


          • #20
            Removing read or similar permissions won't make much difference. Files are already cached based on how long ago they were modified, and it has no effect on machines that have already installed the package. The correct fix is to release a new package version.

            Remember that it is only the caching of the packages (dpkg) we are talking about - the catalog of packages is not cached, and it is the catalog that gets updated to point at the new package version. The same applies if the package was borked; note also that the signature check will fail if the file was corrupted after build time. Ubuntu's PPA servers do not let you overwrite an existing uploaded package, and I'd assume the main archive works the same way. So my original premise stands: package issues are addressed by a version bump, not by overwriting, and unless the packages are marked never-cacheable, the approaches you mentioned will have little effect.

            As for the last point, I don't see why it makes more sense for every cache administrator to go in and add extra configuration than for the source to set the headers correctly in the first place.
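To sketch what "set them correctly at the source" could look like, here is a hypothetical origin-side nginx fragment (paths and lifetimes are illustrative assumptions, not the archive's actual configuration): versioned package files are effectively immutable, since a fix means a new filename, so they can be long-cacheable, while the index files that do change stay short-lived.

```nginx
# Hypothetical archive origin: long-lived versioned .deb files,
# short-lived package indexes.
location ~ ^/pool/.*\.deb$ {
    add_header Cache-Control "public, max-age=2592000, immutable";
}
location ^~ /dists/ {
    add_header Cache-Control "public, max-age=300";
}
```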