Linux 3.12 Kernel Released; Linux 4.0 Planning Talked Up


  • edoantonioco
    replied
It seems that the kernel works better for games, but it now heats up a lot, even with DPM enabled. Kernel 3.11 works great in that respect, but it seems that's not the case for 3.12.



  • tomato
    replied
Originally posted by leif81
For a guy who runs such a tight ship it's pretty surprising how useless the Linux version numbers are.

I personally really like semantic versioning http://semver.org
All changes added to Linux are done in a backwards-compatible manner. If Linux used semantic versioning we would still be at 1.x.y, with x being a three-digit number.

In other words, it doesn't make sense for the kernel.
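To make that concrete, here is a minimal sketch of the semver bump rule as a toy function (my own illustration, not any real kernel or semver tooling):

Code:
# Semantic versioning bump rule as a toy function (illustrative only).
def bump(version, breaking=False, feature=False):
    major, minor, patch = version
    if breaking:                      # incompatible API change -> new major
        return (major + 1, 0, 0)
    if feature:                       # backwards-compatible feature -> new minor
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)  # bugfix only -> new patch

# The kernel (almost) never breaks userspace, so 'breaking' stays False:
v = (1, 0, 0)
for _ in range(300):                  # ~300 feature releases later...
    v = bump(v, feature=True)
print(v)                              # (1, 300, 0) -- still major version 1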



  • DrYak
    replied
Originally posted by DanL
Kernel version numbers are fairly meaningless. You can't really tell anything about the kernel just from the version number unless you follow Phoronix or some other kernel news source. If the pattern continues to repeat, you may be able to tell which kernel is an LTS and which kernel a distro will use, but that's subject to change very quickly.
Well, it depends: before the late 2.6.x cycle and the switch to 3.0, the kernel version numbers DID mean something precise.


last:
2.2.x = mainly bugfixes or micro-features over the previous one.

middle:
with the split: odd = devel/unstable, even = production (a tiny sketch of this rule follows after the list).
2.3.0 = starting to develop a bunch of new features, introducing new filesystems, introducing new sub-systems into the kernel.
2.4.0 = the new features are deemed "stable", making a stable release for the world to use and benefit from all the improvements of the 2.3 series.

first:
0.9 = Linus uploads the kernel to an FTP server and lets the rest of the world back it up for him.
1.0 = first stable kernel.
2.0 = complete revamping of the architecture. It's not simply adding a brand-new USB sub-system, or rewriting the packet-filtering system. It's about re-doing the modularity of the kernel.
3.0 = following the old version scheme, this should have come when the 2.x series got completely rewritten and re-architected beyond a few subsystems (Linus himself jokingly said that would be when he rewrites the kernel in a special dialect of Message-Passing Visual Basic).
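As a minimal sketch of that even/odd convention (illustrative only, not any historical tool):

Code:
# Pre-3.0 kernel convention: even middle number = stable/production series,
# odd middle number = development series (illustrative sketch only).
def kernel_series(version_string):
    major, middle = version_string.split(".")[:2]
    kind = "stable" if int(middle) % 2 == 0 else "development"
    return f"{major}.{middle} is a {kind} series"

print(kernel_series("2.4.18"))  # 2.4 is a stable series
print(kernel_series("2.3.0"))   # 2.3 is a development series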

This did work at the beginning, because the kernel was much smaller and development was done in lock-step.
All new features, subsystems, etc. were developed together in an odd version, like 2.1, and then all the new stuff was released at once in 2.2, with subsequent 2.2.x releases being only bugfixes.
2.2.0 and 2.2.47 are more or less compatible.

What happened is that Linux actually grew bigger. Subsystems weren't all written at the same time. The kernel didn't change completely with each release. Instead, some subsystems were urgently needed and couldn't wait for a next 2.7.x development kernel, so they were thrown straight into the 2.6.x cycle. Other things showed signs that they were NOT GOING to move at all for a long time, because they were good enough for now. Still others were developed, but not at the same pace.

Thus the 2.6.x series was a weird one. Between two closely related 2.6.x numbers (say from 2.6.4 to 2.6.5), not much had changed (as was always the case in the previous 2.4 and 2.2 series, where such steps were only bugfixes), but if you took numbers further apart (say 2.6.1 and 2.6.8 and 2.6.18, etc.) things weren't compatible anymore. So many "small things" changed that they all added up, and you ended up with almost as many changes as between 2.0 and 2.2.

The development model organically changed from unstable/stable alternation with strong versioning to a continuous development model.

If things had been kept the same, you would have ended up with a kernel version 2.6.876 pretty quickly, one which wouldn't have anything in common with 2.6.0.
And with no exact transition on the timeline. Instead, a long continuum of versions between 2.6.1 and 2.6.876, each only adding a small part (a change to one subsystem, the addition of one or two filesystems), but overall so many small things adding up that after 875 intermediate steps nothing even barely resembles the beginning (whereas 2.2.0 and 2.2.47 are all mostly the same).
(Again: no clear transition, just lots of cumulative small changes.)

The alternative would have been bumping the middle number each time a new sub-system is added. But then you would also get lots of aberrations, like reaching 2.451.0 pretty soon. (At least you would know that this one is perfectly compatible with 2.451.1 and 2.451.3, while 2.450.4 and 2.452.0 each have a rewrite in one of the subsystems; say 2.452.0 introduces btrfs.) You would also get bizarre number sequences. Say you hear that one team has decided to develop a new packet-filtering system, so you call that 2.317.0 - odd number because it's still in development. But before they finish and stabilise (and you thus introduce 2.318.0 to the world), another team finishes porting a GPLed ZFS: what will you call that?! 2.320.0? 2.318.0, and change the net-filter team's version?! Now multiply this by the number of subsystems that are developed in parallel.

That's why lots of software moves to a different numbering scheme after a while:
- before, development happened in discrete big leaps.
- now it's just a lot of micro-jumps. No new version is radically different; each is only an improvement of a subpart. But it all adds up.

Linux moved to 3.x versions. Firefox decided to bump major numbers (and scare the shit out of some sysadmins, even though the difference between, say, 21 and 22 isn't as big as between 1.0, 2.0 and 3.0, and deploying 22 should be as smooth as deploying 21, except for the odd specific feature; it should be possible for your IT department to track exactly which specific features you depend on and test only for those).

Originally posted by leif81
I personally really like semantic versioning http://semver.org
Semantic versioning works nicely for small to mid-size projects where all the development happens in lock-step.
Pidgin version 2.6 added quite a few new things on top of 2.5, but didn't break anything.
Pidgin version 2.0 was a complete rewrite using GTK2 and changed pretty much everything - up to the point that not one single 1.x plugin could work in 2.x.
Pidgin version 2.10.7 fixed a bug in a few of the protocols.

Firefox and the Linux kernel have become much larger than that (and I feel this will also be the case with Pidgin after 3.x).
Development doesn't happen in lock-step.
Nobody will change everything at once (Linus will NOT start rewriting the whole kernel from scratch in message-passing VB.NET, as he joked before). Only small parts change at once, but all parts end up changing *eventually*.
You can't do semantic versioning (at least not on the whole; subsystems could have their own internal semantic versions).

Either you end up bumping the major incredibly fast, because almost every new version you publish changes some subsystem and thus fucks up everything depending on it (but only the things depending on it; the other 99.7% of plugins/software are completely unaffected, and even unaware of the fact). That's the way Mozilla chose for Firefox. (Firefox 24 introduced "asm.js", thus completely changing the way you do optimised JavaScript for it. But if you're not a developer of Emscripten, you absolutely don't care about this. Adblock will work in 24 just as fine as in 23, and nobody will notice - just like about every other piece of software/plugin/app which is not BananaBread.)

Or you only ever bump the minor, because overall things don't change that much. Only 0.245% of all software will be affected by the change and break; beside these few exceptions, you're not breaking anything, only introducing new features. And you end up with a major that will never ever move up. That was the case for the Linux kernel during the 2.6.x cycle: things were heading toward a kernel version 2.6.91094 (which wouldn't add much over 2.6.91093, but over time would have grown completely incompatible with 2.6.3).

Or you scratch everything and move to a different numbering scheme, new and completely arbitrary (that's the situation you have now with Linux 3.x). And most end-users don't care. Only integrators care, and they follow specifically what interests them. If you're into graphics, you'll be following when the kernel makes its move from DRI2 to DRI3 (hey, a subsystem with almost-semantic versioning!) and watching out for those changes because you depend on them. If you're using the API to interface with joysticks, you don't give a fuck about anything, because it hasn't moved since kernel 2.6 introduced /dev/input, and it won't be introducing anything else for a while.

Originally posted by ciplogic
But compared with real software like Windows, which in one year made the "desktop is a tile" and decided everyone should write WinRT, Linux is more predictable. GTK+ 2 to 3 was a minor change by all standards, KDE 3 to 4 too.
...and don't forget "ribbons", the previous stuff Microsoft was in love with.



  • mrugiero
    replied
Originally posted by mercutio
Ubuntu numbers after the release date, so 13.04, 13.10, etc.
I think Ubuntu has the best version naming model: clear and concise. And even though most of their code names sound silly, they are a simple and user-friendly way to identify releases, so I like that, too.
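The scheme is trivial to derive from the release date; a minimal sketch (the function name is mine, purely illustrative):

Code:
from datetime import date

# Ubuntu's version number is just the release date rendered as YY.MM.
def ubuntu_version(release: date) -> str:
    return f"{release.year % 100:02d}.{release.month:02d}"

print(ubuntu_version(date(2013, 4, 25)))   # 13.04
print(ubuntu_version(date(2013, 10, 17)))  # 13.10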



  • ciplogic
    replied
Originally posted by madjr
If they just plan 19 releases of a kernel version and the 20th is a new one, maybe they should make it more predictable.

    for example from v4 to v5, it would be more predictable to go by multiples of 5:

4.00, 4.05, 4.10, 4.15, 4.20 [...] 4.85, 4.90, 4.95, 5.00


Going from 2.60 to 3.00 and from 3.19 to 4.00, without it being an "awesome" big release and just a normal one, is kinda weird IMO.

    But who said FOSS has ever been "predictable" ...
Who said it should be predictable? Which software package is 100% predictable?

GCC 4 was a major rewrite of how the optimizer works, and for at least one or maybe two revisions it was catching up to version 3. KDE 4 was not predictable, and GNOME 3.0 certainly wasn't. So why expect something predictable from OSS?

Those who expect predictable releases mostly do so because they want the same stuff, just more polished. The Linux kernel is mature but also very active as a project, so I do see the point of updating the version from time to time to mark "feature levels".

But compared with real software like Windows, which in one year made the "desktop is a tile" and decided everyone should write WinRT, Linux is more predictable. GTK+ 2 to 3 was a minor change by all standards, KDE 3 to 4 too.



  • arti
    replied
    Linux 4.0

So, some time in the future we will have Linux NT 4.0?



  • leif81
    replied
For a guy who runs such a tight ship it's pretty surprising how useless the Linux version numbers are.

    I personally really like semantic versioning http://semver.org



  • s_j_newbury
    replied
    http://www.loc.gov/rr/scitech/battle.html



  • s_j_newbury
    replied
Originally posted by BSDude
Actually, back in 2011, when Linus announced his desire to end the 2.6 series and move to 3.0 once the kernel hit its 20th anniversary, it seemed to me a good cut-off mark. Why don't they just move to 4.0 once the kernel turns 30? 4 would stand for the 4th decade of the Linux kernel. Then the point release would stand for the year it was released, from 0 to 9, and the third point release would stand for the sequential kernel releases in that year.

So it would look like this
Code:
a.b.c
where 'a' would stand for a release in the 2020s, 'b' would stand for the fourth year of the '20s (i.e. 2024), and 'c' would signify the second stable release of the kernel in 2024. I would assume there would be no more than 5 releases in a particular year. If a new kernel is started at the end of the previous year and released in the next, then 'b' is bumped to the next number and 'c' goes back to 1.

This versioning scheme definitely won't reach crazy numbers. We will definitely be long gone by the time the kernel is in its twenties. The negative aspect is the lack of simplicity at a glance: it has more dots. Also, there is the discrepancy of the actual anniversary being in 1991, which puts the previous year in a bit of a dilemma, but I think that can be ignored: just consider the entire decade from 0 to 9 as counting how many years the kernel has existed.
Of course you realise that decades, centuries, etc. technically end/begin on year 1.

We're currently in the second decade of the 21st century, and will be until Jan 2021, 2020 being the last year of the second decade. This irritated the hell out of me at the turn of the century...

    /pedantic_mode
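Incidentally, BSDude's scheme above is easy to mechanise. A minimal sketch of one reading of it (my interpretation only, counting the kernel's decades from 1991):

Code:
# One reading of BSDude's decade-based scheme (illustrative sketch):
#   a = decade of the kernel's life (1991-2000 -> 1, ..., 2021-2030 -> 4)
#   b = last digit of the release year
#   c = sequential stable release within that year
def decade_version(year, release_in_year):
    a = (year - 1991) // 10 + 1
    b = year % 10
    return f"{a}.{b}.{release_in_year}"

print(decade_version(2024, 2))  # 4.4.2 -- the second stable release of 2024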



  • drev
    replied
Linus doesn't want to have a Linux 3.(some-large-number).
What's wrong with that?

