The whole point of Gentoo is to provide a framework that makes it easy to create a distribution that is exactly what you want it to be (which dependencies are compiled in, which compiler flags are used, and so on), instead of having to adhere to someone else's vision of how a distribution should be built.
It is not about speed, though it is a nice side effect that you can compile with -march=native and set up optimizations on a per-package basis.
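For anyone unfamiliar with how that per-package tuning works in Portage, here is a minimal sketch; the flags, the env-file name, and the package atom are just illustrative examples, not recommendations:

```shell
# /etc/portage/make.conf -- global defaults applied to every package
COMMON_FLAGS="-O2 -march=native -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"                      # parallel build jobs; tune to your CPU

# /etc/portage/env/heavy-opt.conf -- an alternate flag set (name is arbitrary)
CFLAGS="-O3 -march=native -pipe"
CXXFLAGS="${CFLAGS}"

# /etc/portage/package.env -- map specific packages to that alternate env file
# media-video/ffmpeg heavy-opt.conf
```

Portage reads package.env after make.conf, so only the listed packages get the overriding flags.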
Interesting; I wonder if I would have been better off starting with Arch or Gentoo.
I read many of these comments, and they are quite interesting, especially the points about Arch and Gentoo giving users a lot more control. My own systems, and quite a number I have given to friends, all use a personal distro of mine based on a mix of UbuntuStudio (alpha) and Mint, but it is diverging more and more from the Ubuntu base. I have debs for all custom packages.
Kernel, X, and Mesa from PPA versions: the whole stack
PulseAudio replaced with the Volti volume control and a dummy package to keep APT happy. I would quickly add it back for any machine with a good CPU but no hardware mixer.
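For anyone who wants to do the same, the usual Debian/Ubuntu way to build such a dummy package is the equivs tool; the package name and fields below are just an illustrative sketch, not the exact package I use:

```shell
# Generate an empty .deb whose Provides: line satisfies anything that
# depends on pulseaudio, without installing PulseAudio itself.
sudo apt-get install equivs
cat > pulseaudio-dummy.ctl <<'EOF'
Section: sound
Priority: optional
Standards-Version: 3.9.2

Package: pulseaudio-dummy
Provides: pulseaudio
Description: empty stand-in to keep APT happy without PulseAudio
 Satisfies dependencies on pulseaudio on machines where it was removed.
EOF
equivs-build pulseaudio-dummy.ctl
sudo dpkg -i pulseaudio-dummy_*.deb
```

One caveat: a plain Provides: does not satisfy versioned dependencies, so packages that demand a specific pulseaudio version still need other handling.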
Desktop options: Cinnamon (default), GNOME Shell, KDE, and IceWM, selectable from LightDM
Brasero from Debian Unstable to avoid pulling in Unity packages
FFmpeg 2.2, compiled myself some time back; had to pull mpv from a PPA to go with it
My own GTK3 and icon themes, based on a mix of old UbuntuStudio and even older GNOME2 graphics.
My own Plymouth theme using my desktop wallpaper and "armed penguin" guards for encrypted systems
My own multi-encrypted-disk boot-time unlocker calling cryptsetup, integrated with Plymouth, in both Upstart and systemd versions that can be simultaneously installed for either boot system
I've switched the init system entirely over to systemd, with systemd/dracut for the initramfs. This was necessary to get ahead of porting my encrypted disk unlocker to systemd
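For context, the systemd path needs far less custom glue than Upstart did: systemd-cryptsetup generates an unlock unit per line of /etc/crypttab, and dracut bakes the needed pieces into the initramfs. A minimal sketch; the volume name and UUID are placeholders, not values from my setup:

```shell
# /etc/crypttab -- one line per encrypted volume; at boot systemd
# generates a systemd-cryptsetup@<name>.service for each entry.
# <name>   <source device>                            <keyfile> <options>
cryptroot  UUID=00000000-0000-0000-0000-000000000000  none      luks

# Rebuild the initramfs so dracut includes LUKS support and the
# Plymouth password prompt:
# dracut --force --add "crypt plymouth"
```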
Ubuntu's version of Plymouth finally caught up with Debian Unstable, allowing me to stop using a custom version of Plymouth
Sometimes I wonder whether I should be on another distro, given Ubuntu's direction, but I only see the top-level Unity distro doing anything that would be troublesome for my purposes. I do a lot of work to revert unwanted UI changes, etc., but many of those come from upstream of any distro. Of course the pending Ubuntu systemd changeover is upstream too, as Debian is switching over along with many other distros. Very controversial, but I've been happy with it and have enjoyed hacking on it. It's only on the machines I use myself until the next Ubuntu release, though, as I don't redistribute alphas, to avoid creating nasty problems for non-hackers. I take a snapshot at each final release and that's what goes out the door.
If I were starting from scratch, maybe Gentoo would be the way to go, or maybe something like Arch, but right now I've got this heavily modified distro based on an Ubuntu and Ubuntu PPA core, and it would be one hell of a job to port all of my changes to a new install from a different distro. On a new install of Mint, Ubuntu, or UbuntuStudio, I need only drop in my .deb packages and pull the appropriate other Debian packages, and it's done in minutes. My "master" install has been around since 2011, installed as Oneiric 64-bit; it was laboriously set up to clone an earlier install that went all the way back to Ubuntu Jaunty, which in turn was a remake of what went in as Hardy in 2008. All updates have continuously followed the Ubuntu alpha repos as a rolling release.
Luke, knowing that you focus on security of your systems this might interest you when it comes to Gentoo: https://wiki.gentoo.org/wiki/Hardened_Gentoo
And check out the Arch wiki as well; most security tweaks there can be transferred to any distro.
Originally Posted by Vim_User
Thanks to both of the last posters for these tips
So far, I've had to concern myself with three main threat models: attempts to crack an encrypted disk stolen by police (already beaten them once it seems), the browser as an attack vector, and possible hostile access to powered-down machines. Fortunately, most of the security features available in Gentoo and Arch are also present in Ubuntu, though I use them more aggressively.
Originally Posted by alaviss
It looks like Ubuntu uses most of the main stack-smashing and ASLR countermeasures by default, with programs deemed "sensitive" built as position-independent executables.
Some of the steps recommended by Arch, such as BIOS passwords, are of limited value against governmental attackers due to presumed master passwords. This was proven to be the case with ATA disk locking, where the FBI trivially unlocked a disk protected by an ATA security password in a notorious court case.
I don't have to concern myself with local privilege escalation attacks, as none of these are multi-user machines with some users protected from other users. Remote privilege escalation attacks as payloads of browser exploits are quite another matter! No matter how hardened a distro is, if it is connected to a network and a browser is run, that is the main threat vector on any otherwise single-user machine. I've heard of people running the browser as its own user, so that normal hardening to protect one user against another can protect the /home directory. I run all browsers in RAM only: the entire .mozilla directory is copied into a tmpfs on opening Firefox, and I simply drop a copy of Tor Browser's entire folder into /tmp to fire it up from there, replacing the folder each time a change is made.
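The profile-in-tmpfs idea can be sketched in a few lines of shell. This is not the poster's actual script, just a minimal illustration under assumed paths; on a real system you would point mktemp at a tmpfs mount such as /dev/shm:

```shell
#!/bin/sh
# run_from_ram: copy a browser profile directory into a fresh scratch
# directory and print the copy's path. The caller launches the browser
# from that copy and deletes it afterwards, so nothing persists on disk.
run_from_ram() {
    src=$1
    # On a real system use a tmpfs, e.g.: mktemp -d -p /dev/shm
    ram=$(mktemp -d "${TMPDIR:-/tmp}/ramprofile.XXXXXX") || return 1
    cp -a "$src/." "$ram/"
    printf '%s\n' "$ram"
}

# Typical use (the firefox invocation is illustrative, hence commented out):
# ram=$(run_from_ram "$HOME/.mozilla/firefox/default")
# firefox --profile "$ram"
# rm -rf "$ram"
```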
One thing Ubuntu does that I don't like is disabling AppArmor on Firefox by default; I enable it and use a more restrictive custom profile to limit the danger from browser exploits.
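For anyone who wants to do the same: Ubuntu ships the Firefox profile but marks it disabled with a symlink, so enabling it is a matter of removing that link and reloading the profile. Paths below are the usual Ubuntu ones; check your release before running them:

```shell
# Remove the "disabled" marker and reload the profile in enforce mode
sudo rm /etc/apparmor.d/disable/usr.bin.firefox
sudo apparmor_parser -r /etc/apparmor.d/usr.bin.firefox

# Equivalent shortcut if apparmor-utils is installed:
# sudo aa-enforce /etc/apparmor.d/usr.bin.firefox
```

Local tightening is best kept in /etc/apparmor.d/local/usr.bin.firefox so package updates don't overwrite your changes.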
The rise of HTML5 has helped a lot by permitting me to disable Flash by default, and in fact all NPAPI plugins. Flash is untrustworthy closed code, though its behavior is very closely watched by everyone from security experts to the publishers of DRM'ed media, who themselves have concerns about encryption keys. I don't install Java at all.
If the decline of non-US-based free email services continues, I may have to set up a dedicated mail server, which would require a minimized (headless, no X) as well as hardened OS on dedicated hardware, plus encryption with the keys kept in CPU cache instead of RAM to defeat cold boot attacks. That would be a from-scratch install; hardened Gentoo might be the way to go for it, and it could be optimized for the old hardware it would run on, something much harder to do in Ubuntu. It would use very old (pre-P4) hardware set up for low clock speeds and low power consumption.
Man, and I thought the distro holy warz were long since a thing of the dim past...
I've never had problems with APT. RPM, OTOH: back in the day I used RH and Yellow Dog (PowerPC), and christ, back then I would end up in circular dependency loops from hell, which prompted me to switch x86 to some other non-RPM-based distro, probably Debian, and then I moved on to Ubuntu. Fink (APT-based, IIRC) on OS X was pretty problem-free as well.
Originally Posted by Vim_User
I USED to build my own kernels ALL of the time, as back in the day most kernels were i386-specific for maximal compatibility. OTOH, I also remember my first Linux install with craploads of floppies, having to bootstrap gcc, build XFree86 from scratch, and figure out wtf config settings actually worked with yer gfx card & display. I'm just glad all this crap is behind me now. I ran FreeBSD on my servers (read: older machines retired from desktop usage); Ports was pretty awesome, never had problems with it.
Tried FC20 on a Sager 7330, don't care for it much, and have been thinking of replacing it. Might give one of these a shot. (Playing with Optimus & Linux is bringing back bad memories ATM though...)
Docs: hell, I remember when the best docs were scattered all over hell and gone; then came the Linux Documentation Project, and Ubuntu's docs grew up fairly well, although they're not updated very frequently. But for quick answers without digging, there are always enough Ubuntu people on IRC to get a quick answer or lead without wasting time. Can't really say that about many other distros.
I never understood how people can claim Arch is faster when it uses all the same code. I use Arch and love it because of the rolling release system and getting updates ASAP without the hassle of compiling software myself.
The only possible performance difference I found with Arch over Mint and Fedora is that the base RAM usage was marginally lower, as there were fewer processes running by default. I suppose on some low-memory machines this would have an impact on performance, but for the average PC with 4 GB+ of RAM I can't see it ever making a difference.
For my part, my system always felt more responsive during multitasking. That was likely also due to the low-end UIs I was running; performance-wise, I wouldn't actually have a clue.
Aaaand, totally off-topic: does anyone know how to set the date format on these forums to the rest-of-world convention, i.e. DD/MM/YY? The American one confuses the shit out of me (I can't tell if 7/6 is June or July, but I'm not here to argue the point!). I can't ever seem to find the setting when I think of it.