Docker Benchmarks: Ubuntu, Clear Linux, CentOS, Debian & Alpine

  • #31
    Originally posted by bug77 View Post
    But to my knowledge, you still start with a minimal Docker image and add what you need. I could be wrong though, my knowledge about Docker is almost 100% theoretical. One more thing that's been sitting on my to do list since forever...
    Docker deploys containers; a container is a blob of userspace stuff that shares the kernel with the host system but is otherwise isolated from it.

    Point is, you need a host system to provide the kernel, the bootloader and the Docker infrastructure (to install and remove containers), and to do the standard hypervisor-style duties.
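    A quick way to see the shared-kernel point in practice (a rough sketch, assuming a Linux host with Docker installed; the image names are just examples):

    docker run --rm alpine uname -r               # prints the host's kernel version
    docker run --rm debian uname -r               # same kernel, different userspace
    docker run --rm alpine cat /etc/os-release    # the container's own userspace (Alpine)

    The containers report the host's kernel because they have none of their own; only the userspace differs.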



    • #32
      Originally posted by starshipeleven View Post
      Docker deploys containers; a container is a blob of userspace stuff that shares the kernel with the host system but is otherwise isolated from it.

      Point is, you need a host system to provide the kernel, the bootloader and the Docker infrastructure (to install and remove containers), and to do the standard hypervisor-style duties.
      Funny, I knew that, but now that you spelled it out for me, it became obvious the container doesn't need a complete OS image to run. Du-uh.
      I'm still stuck in the VM world, I guess.



      • #33
        Originally posted by starshipeleven View Post
        Note that you would need to split distros again into hard categories like 32-bit/64-bit (just with "x86 CPUs 2010-2012", "x86 CPUs 2013-2014", "x86 CPUs 2015-2016" and so on, for example), as every few years new instructions are added and you can't run binaries that expect those instructions on hardware that does not support them. This means more annoyances for users, duplication in build servers, and so on. I personally think it is doable, but I doubt that for most use cases it is going to be worth it.

        Most half-serious distros offer source packages; you can easily download them, change the CFLAGS to match your system and recompile, if you really need it.

        Clear Linux devs have implemented some trickery that detects the type of system and switches binaries for some applications, but I suspect this would be a pain in the ass if done at any decent scale. Debian has something like 60k packages; even if 80% of that is obsolete shovelware that doesn't need this, it's still a ton of work, and debugging issues will get so much more fun afterwards.
        One thing I was thinking: isn't it possible to have one binary that supports multiple instruction sets? If it isn't, why not? It sounds like something doable, no?
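        As an aside on the recompile-from-source point quoted above: on a Debian-based distro that rebuild looks roughly like the commands below (the package name is a placeholder, and it assumes deb-src entries are enabled in your apt sources):

        apt-get source somepackage
        sudo apt-get build-dep somepackage
        cd somepackage-*/
        DEB_CFLAGS_APPEND="-march=native" dpkg-buildpackage -b -uc -us

        DEB_CFLAGS_APPEND is read by dpkg-buildflags, so for packages that use the standard build-flags machinery the extra flags get added on top of the distro defaults.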



        • #34
          Originally posted by Goddard View Post
          One thing I was thinking: isn't it possible to have one binary that supports multiple instruction sets? If it isn't, why not? It sounds like something doable, no?
          Theoretically yes, but I don't think there is an automated way to do that (yet).
          If you add this feature to the compiler, so that it adds a processor feature check and multiple code paths automatically at compile time, it would probably work.

          You get bloated binaries, of course, as they have to carry more code paths.
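          For what it's worth, GCC already has something close to this: the target_clones attribute makes the compiler emit several versions of a function plus a small resolver that picks one for the running CPU when the program starts. A minimal sketch, assuming GCC 6 or newer on x86-64 (the function and the instruction-set list are just illustrative):

          /* fmv_demo.c -- build and run with: gcc -O2 fmv_demo.c -o fmv_demo && ./fmv_demo */
          #include <stdio.h>

          /* GCC emits one clone per listed target plus a resolver that
             selects the best clone for the CPU when the program starts. */
          __attribute__((target_clones("avx2", "sse4.2", "default")))
          double dot(const double *a, const double *b, int n)
          {
              double s = 0.0;
              for (int i = 0; i < n; i++)
                  s += a[i] * b[i];
              return s;
          }

          int main(void)
          {
              double a[] = {1, 2, 3, 4}, b[] = {4, 3, 2, 1};
              printf("%f\n", dot(a, b, 4));
              return 0;
          }

          The clones show up as separate symbols in the (unstripped) binary, which is also where the extra size comes from.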



          • #35
            Originally posted by starshipeleven View Post
            Theoretically yes, but I don't think there is an automated way to do that (yet).
            If you add this feature to the compiler, so that it adds a processor feature check and multiple code paths automatically at compile time, it would probably work.

            You get bloated binaries, of course, as they have to carry more code paths.
            Seems like that is what everyone is already doing now with snaps/flatpaks and the like, making the required download larger.



            • #36
              Originally posted by Goddard View Post
              Seems like that is what everyone is already doing now with snaps/flatpaks and the like, making the required download larger.
              It's not a matter of download size (storage space is cheap) but of memory used at runtime: the executables have to be loaded to be executed, and a fat binary uses more RAM.
              Snap/Flatpak bundle a ton of libraries to keep shitty (or proprietary) apps happy and to keep them isolated from the actual system, so that even if their security is debatable they won't compromise the system.
              Admittedly, the Snap/Flatpak way also uses more RAM, as the bundled libraries in use get loaded too, but the problem they are solving is more important, I guess.

