
Contemplating A New, Public Linux Daily Kernel Build Server For Ubuntu/Fedora


  • #11
    Originally posted by macemoneta View Post
    Yes, I use the Rawhide nodebug kernel currently for those daily tests, but that repository doesn't seem to be updated daily (only a few times a week), and if doing both Ubuntu and Fedora builds I could keep to using the same configuration.
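
    (For reference, a minimal sketch of how the same configuration could be reused across the Ubuntu and Fedora builds, assuming a kernel Git checkout; the /boot path is just the usual location of the running kernel's config.)

    # seed the build with the currently running kernel's configuration
    cp /boot/config-$(uname -r) .config
    # accept defaults for any options added to the tree since that config was generated
    make olddefconfig
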
    Michael Larabel
    https://www.michaellarabel.com/



    • #12
      Originally posted by [wrd] View Post
      Have you actually considered using AWS EC2 for CPU-intensive tasks like this one? I'm curious because I would assume that this could be cheaper and could also be a matter of targeted donations. As far as I know, GKH does this.

      The other thing I'm interested in: it would be a lot more useful if there were global metrics about the resources people use, to get an idea of where the highest user value can be added to the kernel.

      I.e.: statistics about

      * what hardware one uses
      * what distribution one uses
      * ideally, performance indicators that show the most-used code on the platform.
      I don't think it would be any cheaper since I own the hardware already, etc. OpenBenchmarking.org does keep stats about that, but they're not public at the moment.
      Michael Larabel
      https://www.michaellarabel.com/



      • #13
        Ubuntu has their PPAs that you could use for free, I guess other distros offer build machines too, and even Red Hat gives out free, but restricted, AWS instances with OpenShift (sadly, you'll probably lack the tools needed to build anything). Why don't you talk to companies/distros and ask them for help?
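
        (A rough sketch of the PPA route, assuming a Launchpad account and an already-prepared Debian source package; the PPA name and .changes file name are made up for illustration.)

        # sign the source upload, then hand it to Launchpad's builders, which compile it for free
        debsign ../linux-upstream_4.4~rc2-1_source.changes
        dput ppa:some-user/kernel-daily ../linux-upstream_4.4~rc2-1_source.changes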

        Also, aren't you afraid of breaking your hardware when testing new kernels? I guess the possibility is very small, but this COULD happen...



        • #14
          Originally posted by asdfblah View Post
          Ubuntu has their PPAs that you could use for free, I guess other distros offer build machines too, and even Red Hat gives out free, but restricted, AWS instances with OpenShift (sadly, you'll probably lack the tools needed to build anything). Why don't you talk to companies/distros and ask them for help?

          Also, aren't you afraid of breaking your hardware when testing new kernels? I guess the possibility is very small, but this COULD happen...
          I don't really need the help in that regard. I have the computing power available, producing Debian kernel packages can be done from Git in just a couple of commands, I have the web servers, and Phoromatic can fill in the rest of the automation.
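
          (For the curious, a minimal sketch of those couple of commands, assuming a mainline Git checkout with a .config already in place.)

          # pull the latest mainline changes
          git pull
          # build installable .deb packages (kernel image + headers) straight from the tree
          make -j$(nproc) deb-pkg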

          I've been running Git kernels for many years... In the past 11 years I think I've only run into two rare cases of hardware damage from bad kernels/drivers, out of the hundreds of systems.
          Michael Larabel
          https://www.michaellarabel.com/



          • #15
            Why? The only thing you'll be doing is distributing software not intended for public use to the public without any responsibility for the security concerns that brings.



            • #16
              I'd buy a Phoronix branded USB stick that contained the live distro.



              • #17
                Originally posted by |wrd| View Post
                Have you actually considered using AWS EC2 for CPU-intensive tasks like this one? I'm curious because I would assume that this could be cheaper and could also be a matter of targeted donations. As far as I know, GKH does this.
                There is a certain flaw in your logic. Do you honestly think Amazon would run their servers without making a profit? Needless to say, this implies that they both cover the servers' running costs AND make some profit. So there is room to make it cheaper: if you run the server yourself, only the bare running cost remains. You no longer have to cover datacenter staff salaries, and the third party's profit is no longer part of the equation either.

                Of course it can happen that someone manages to cut running costs further, BUT still, believing in silver bullets and marketing BS is just plain stupid. Clouds aren't cheap. They are making a profit out of this activity, dammit. Which implies you can do it cheaper by taking their margins out of the equation.

                The other thing I'm interested in: it would be a lot more useful if there were global metrics about the resources people use, to get an idea of where the highest user value can be added to the kernel.
                The highest user value of the Linux kernel is already here: it supports a shitload of hardware, is damn configurable, and is open source, all at once. Nothing else on this planet is in any way comparable to this powerhouse. It's just like ffmpeg, the one and only thing that supports so many formats at once.

                * ideally, performance indicators that show the most-used code on the platform.
                Use "perf top", Luke. This is rather profiling/tracing than just benchmarking. And it would be much smarter to investigate system performance while running task where you think you do not have enough performance, taking a look around what is getting stuck and if there is good way to improve it.

                If you want a couple of random picks:
                1) In my case I can admit the system spends a lot of time in read_hpet() (see the sketch after this list). Hell yeah, if you look at the clock way too often, it can get slow. I'm not really sure why some programs or libs want high-precision clocks so much, but it accounts for something like 10% of the CPU cycles spent by the system while running a browser, some stuff like geany, and so on. The actual system load is only around 1-5% CPU, so it's not a major issue, but spending most of that time just looking at the clock still looks funny. This was measured on a more or less usual 64-bit kernel using the usual Ubuntu lowlatency config.

                2) Uhm, well, memcpy routines can easily be the dominant code as well. Yes, the world is still far from being zero-copy, and in some tasks memcpy can easily account for something like 30% of the total time. Hopefully that explains why so much effort is being put into optimizing it.

                3) Filesystems. OK, you can't have too much speed, especially when there are turbo-fast SSDs, etc.
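
                (On point 1, a hedged sketch of checking and changing the clocksource, assuming the usual sysfs layout; switching to tsc is only sensible if the TSC is stable on that machine.)

                # show the clocksource currently in use (hpet in the case above)
                cat /sys/devices/system/clocksource/clocksource0/current_clocksource
                # list the alternatives the kernel considers usable
                cat /sys/devices/system/clocksource/clocksource0/available_clocksource
                # switch to tsc at runtime so timer reads stop going through read_hpet()
                echo tsc | sudo tee /sys/devices/system/clocksource/clocksource0/current_clocksource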

                P.S. And hmm, your nickname is rather funny: it breaks vBB quoting. Sure, vBB5 is utter shit, but it is still funny that it lets one register nicknames it can't handle.
                Last edited by SystemCrasher; 29 November 2015, 06:00 AM.



                • #18
                  Originally posted by yogi_berra View Post
                  Why? The only thing you'll be doing is distributing software not intended for public use to the public without any responsibility for the security concerns that brings.
                  Already explained: because I need to do it anyway for my testing. If these packages are public, other enthusiasts/capable people are able to reproduce and validate my results too... Or run them from the new PTS Desktop Live.
                  Michael Larabel
                  https://www.michaellarabel.com/



                  • #19
                    It's not hard to build a live image daily/nightly if everything is automated. For kernels you can use the kernel's default packaging or adapt the Ubuntu packages, which may be harder to maintain yourself. The Linux tree has basic build support for rpm, so it would be very simple. Don't forget to strip the debug symbols or the packages will be extremely huge. Take a look there:
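
                    (A minimal sketch of that built-in rpm packaging, assuming a configured kernel source tree; turning off DEBUG_INFO is what keeps the resulting packages from ballooning.)

                    # drop debug info so the packages stay small
                    scripts/config --disable DEBUG_INFO
                    make olddefconfig
                    # build binary rpm packages straight from the kernel tree
                    make -j$(nproc) binrpm-pkg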



                    In case you want to use Kanotix for benchmarks, add the latest AUFS patchset on top of the Linux tree. I've never built a Fedora live image, but Kanotix Special even provides NV 340, 353 and fglrx in live mode (gfxdetect). It would be simple to replace the kernel; for new Mesa Git you would need to build it yourself, maybe against:



                    I haven't tried this repo yet.



                    • #20
                      If it is not a lot of work, you could configure several kernel flavors, such as CK or Manjaro-style, and then publish benchmarks comparing different common kernel configurations for different tasks, as you recently did.
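
                      (A rough sketch of how two such flavors could come out of one tree, assuming the alternative configs and the ck patch are already downloaded; the file names here are made up.)

                      # generic flavor
                      cp ../config-generic .config && make olddefconfig
                      make -j$(nproc) LOCALVERSION=-generic deb-pkg
                      # -ck flavor: apply the ck patchset, switch configs, rebuild
                      patch -p1 < ../patch-4.4-ck1
                      cp ../config-ck .config && make olddefconfig
                      make -j$(nproc) LOCALVERSION=-ck deb-pkg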

