Contemplating A New, Public Linux Daily Kernel Build Server For Ubuntu/Fedora
-
Michael Larabel
https://www.michaellarabel.com/
-
Originally posted by [wrd]:
have you actually considered using AWS EC2 for CPU-intensive tasks like this one? I'm curious because I would assume this could be cheaper, and it could also be a matter of targeted donations. As far as I know, GKH does this.
The other thing I'm interested in: it would be a lot more useful if there were global metrics about the resources in use, to get an idea of what would add the highest user value to the kernel.
I.e., statistics about:
* what hardware one uses
* what distribution one uses
* ideally, performance indicators that show the most-used code on the platform.
Michael Larabel
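For what it's worth, the per-system statistics [wrd] asks for can mostly be collected from standard Linux interfaces. A minimal sketch (the file paths are the usual ones, but not guaranteed on every distribution, e.g. some ARM systems lack a "model name" field in /proc/cpuinfo):

```shell
#!/bin/sh
# Minimal hardware/distribution inventory, sketching the metrics
# suggested above. Uses only standard Linux interfaces.
echo "kernel: $(uname -r)"
echo "cpu:    $(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2-)"
# /etc/os-release is the systemd-era standard for distro identification
if [ -r /etc/os-release ]; then
    . /etc/os-release
    echo "distro: $NAME $VERSION_ID"
fi
```

Aggregating such reports server-side would give exactly the "what hardware / what distribution" breakdown; the "most-used code" part would need profiling data as well.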
-
Ubuntu has their PPAs, which you could use for free; I guess other distros offer machines too, and even Red Hat gives out free but restricted AWS instances with OpenShift (sadly, you'll probably lack the tools needed to build anything). Why don't you talk to companies/distros and ask them for help?
Also, why aren't you afraid of breaking your hardware when testing new kernels? I guess the possibility is very small, but this COULD happen...
-
Originally posted by asdfblah:
Ubuntu has their PPAs, which you could use for free; I guess other distros offer machines too, and even Red Hat gives out free but restricted AWS instances with OpenShift (sadly, you'll probably lack the tools needed to build anything). Why don't you talk to companies/distros and ask them for help?
Also, why aren't you afraid of breaking your hardware when testing new kernels? I guess the possibility is very small, but this COULD happen...

I've been running Git kernels for many years... In the past 11 years, out of the hundreds of systems here, I think I've run into only two rare cases of hardware damage from a bad kernel/driver.
Michael Larabel
-
Originally posted by |wrd|:
have you actually considered using AWS EC2 for CPU-intensive tasks like this one? I'm curious because I would assume this could be cheaper, and it could also be a matter of targeted donations. As far as I know, GKH does this.

Of course it can happen that someone manages to cut running costs, BUT still, believing in silver bullets and marketing BS is just plain stupid. Clouds aren't cheap: providers make a profit on this activity, dammit. Which implies you can do it cheaper by taking their margins out of the equation.
The other thing I'm interested in: it would be a lot more useful if there were global metrics about the resources in use, to get an idea of what would add the highest user value to the kernel.
* ideally, performance indicators that show the most-used code on the platform.
If you want a couple of random picks:
1) In my case, I can admit the system spends a lot of time in read_hpet(). Hell yeah: if you look at the clock way too often, it can get slow. I'm not really sure why some programs or libraries want high-precision clocks so much, but it accounts for something like 10% of the CPU cycles spent by a system running a browser, some stuff like geany, and so on. Actual system load is only around 1-5% CPU, so it's not a major issue, but still, spending most of that time just looking at the clock is funny. This was measured on a more or less usual 64-bit kernel with the usual Ubuntu lowlatency config.
2) Uhm, well, memcpy routines can easily be dominant code as well. Yes, the world is still far from zero-copy, and in some tasks memcpy can easily account for something like 30% of total time. Hopefully that explains why so much effort goes into optimizing it.
3) Filesystems. OK, you can't have too much speed, especially when there are turbo-fast SSDs, etc.
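The read_hpet() observation above is easy to check for yourself with perf plus the clocksource sysfs interface. A sketch, assuming perf is installed and you can run it with sufficient privileges (the sysfs path is standard; tool availability is an assumption):

```shell
# Which clocksource is the kernel using? (hpet, tsc, acpi_pm, ...)
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Sample the whole system for 10 seconds, then list the hottest symbols.
# If read_hpet shows up near the top, clock reads are a real cost.
perf record -a -g -- sleep 10
perf report --sort symbol --stdio | head -n 20
```

If hpet is in use and the hardware has a stable TSC, switching the clocksource to tsc typically makes clock reads far cheaper.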
P.S. Hmm, your nickname is rather funny: it breaks vBB quoting. Sure, vB5 is utter shit, but it's still funny that it lets one register nicknames it can't handle.
Last edited by SystemCrasher; 29 November 2015, 06:00 AM.
-
Originally posted by yogi_berra:
Why? The only thing you'll be doing is distributing software not intended for public use to the public, without taking any responsibility for the security concerns that brings.
Michael Larabel
-
It's not hard to build a live image daily/nightly if everything is automated. For kernels you can use the kernel's default packaging, or adopt the Ubuntu packages, which may be harder to maintain yourself. The Linux tree has basic build support for RPM, so it would be very simple. Don't forget to strip the debug symbols or the packages will be extremely huge. Take a look there:
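A sketch of that kernel-tree packaging route (the make targets are long-standing mainline kbuild targets, and scripts/config ships with the kernel source; the checkout path is an assumption). Disabling DEBUG_INFO is what keeps the packages from ballooning:

```shell
# Build installable kernel packages straight from a kernel Git checkout.
cd linux                               # path to your kernel tree (assumed)
make defconfig
scripts/config --disable DEBUG_INFO    # debug symbols make packages huge
make -j"$(nproc)" deb-pkg              # Debian/Ubuntu .deb packages
# ...or, for Fedora:  make -j"$(nproc)" binrpm-pkg
```

The resulting packages install with dpkg -i or rpm -i like any distro kernel, which is exactly what a daily build server would publish.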
In case you wanted to use Kanotix for benchmarks, add the latest AUFS (2) patchset on top of the Linux tree. I never built a Fedora live image, but Kanotix Special even provides NVIDIA 340, 353 and fglrx in live mode (gfxdetect). It would be simple to replace the kernel; for new Mesa Git you would need to build it yourself, maybe against:
I didn't try this repo yet.
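For the AUFS step, the usual pattern is to fetch the standalone patchset matching your kernel version and apply it before building. A sketch (the repository URL, branch layout, and patch file names are assumptions; check the aufs project for the branch matching your kernel):

```shell
# Apply an out-of-tree AUFS patchset to a kernel source tree.
git clone https://github.com/sfjro/aufs4-standalone.git   # assumed repo
cd linux                                # your kernel tree (assumed path)
for p in ../aufs4-standalone/aufs4-*.patch; do
    patch -p1 < "$p"                    # apply each patch in order
done
```

After the patches apply cleanly, the packaging steps above work unchanged, so an AUFS-enabled live kernel fits into the same automated build.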