The Thousands Of FIXMEs & TODOs In The Linux Kernel


  • profoundWHALE
    replied
    Originally posted by ElectricPrism View Post
    You're just not getting me, I mean no offense but is English your first language?
    No, you're just wrong.

    Originally posted by ElectricPrism View Post
    I mean no offense but is English your first language?
    Yes, and even if it wasn't, you're still wrong.

    However, I will give credit where credit is due:

    Originally posted by ElectricPrism
    ✓Unity 8 being so far behind announced schedule
    Yes, people who are looking forward to it were disappointed.

    Originally posted by ElectricPrism
    Originally posted by profoundWHALE
    Originally posted by ElectricPrism
    ✓ Abandoned projects like ✓ Ubuntu Software Center
    Ubuntu Software Center IIRC is getting a complete re-write. That's like saying AMD is abandoning Catalyst or Mesa because of amdgpu work.
    Catalyst - A Kernel Driver
    Mesa - A Graphics Library
    AMDGPU - A Kernel Driver
    That's true

    Originally posted by ElectricPrism
    I had no idea that Catalyst, Mesa and AMDGPU were the same thing - please let me bask in your wisdom as apparently all three are interchangeably the same thing.
    They aren't, and I never said they were. Apparently, you lack the ability to properly discern between what is meant, and what you thought was meant.

    As in, you missed the point. In this case, I'm thinking it was willful misinterpretation.

    Originally posted by ElectricPrism
    Apparently you lack the ability to properly interpret and discern the intent, flavor and full meaning of what people write.
    You do have the ability to get creative with words, so I'll give you that. Regardless, what I've perceived from you so far is that you are hypocritical and creative with words, so you shouldn't be surprised when you are 'called names'.

    Originally posted by ElectricPrism
    Are you aware that when you call someone names you shame yourself?
    For instance, someone who lies can be called a liar. However, 'liar' generally implies someone with an established reputation for lying. Of course, a liar could lie about someone lying, but then we get into paradoxes and finger-pointing.

    So I didn't call you a troll directly. A troll (in my use of the word) is someone who posts things simply to aggravate people, as their #1 purpose. I haven't checked your posting history because I couldn't care less what you said before; you're just wrong now, and your blatant disdain for Ubuntu is something that resembles trolling, or having an ego.
    Last edited by profoundWHALE; 14 January 2016, 08:07 PM.



  • northar
    replied
    Originally posted by ColinIanKing View Post

    Well, I want to refute that. In the past year alone I have made over 2,000 commits to various projects [1]. Some of these I developed during my paid hours at Canonical, and they are used by various other distros. I've also been submitting patches to the kernel rather frequently, as I focus on fixing small bugs that creep into the kernel.

    Here are some projects I've been creating for the greater good: http://kernel.ubuntu.com/~cking/
    Note that one of the projects, the Firmware Test Suite, is a UEFI-recognised de facto firmware test suite used by many companies [2]

    So to say "Instead they use all their engineering force to fork stuff and create a software stack incompatible with all the existing FOSS solutions :/ " is tiresome, as you clearly need to check your facts rather than keep recycling this fallacy.

    [1] https://github.com/ColinIanKing
    [2] http://www.uefi.org/testtools
    Nice to see a serious reply. There are a few trolls who seem more intent on kicking Linux-based development (and Ubuntu in particular) than anything else. Just ignore them; the rest of us know the good work Canonical and others do. Without Canonical, desktop Linux would be in a sad, sad state.



  • Passso
    replied
    Originally posted by ColinIanKing View Post

    Well, I want to refute that...

    So to say "Instead they use all their engineering force to fork stuff and create a software stack incompatible with all the existing FOSS solutions :/ " is tiresome, as you clearly need to check your facts rather than keep recycling this fallacy.
    Thank you for kicking nasty trolls and haters; I just wish I could see his blue face when he reads your correct answer XD



  • Luke_Wolf
    replied
    Originally posted by linuxcbon View Post
    Is it possible for you to discuss without calling people trolls? Grow up.
    You made an extremely ignorant statement to the point where I really wasn't sure if you were seriously saying that.

    Originally posted by linuxcbon View Post
    I know, but in my opinion a stable ABI is better because it needs less effort.
    Only if you consider nothing but the initial time investment is a single ABI less effort than a pluggable system that can load ABIs as appropriate. The overall cost of a single ABI is significantly higher, because you have to make sure it never, ever breaks, whereas with a pluggable system you can version the interface and dispatch as appropriate.

    Originally posted by linuxcbon View Post
    If the libraries didn't change too much, the old programs should work (more or less well). And it has nothing to do with the ABI: it is also the case for BSD and OSX, right?
    You have no idea what the term ABI means, do you? The ABI, or Application Binary Interface, is how one piece of compiled code talks to another piece of compiled code, and it has everything to do with whether a program is compatible with a library or OS. I also hope you realize that, outside of Microsoft Windows, any actively maintained compiled software is unlikely to still work with 15-year-old binaries.

    The only reason Windows has maintained compatibility that long in a usable way is that Win32 is a huge surface that was largely set in stone after Windows 98 and provided much of the common functionality programmers needed. On top of that, the usual distribution mechanism is for vendors to ship their program together with all the dynamically linked libraries it relies upon (OS X developers do this too, but there the system libraries aren't set in stone, so programs can still break between versions). On Linux and the BSDs, by contrast, you usually have one version of each library installed in /usr and all binaries link against those. That means a binary that worked on, say, Fedora 23 may not work on Fedora 24 without significant modification to pull in Fedora 23's libraries, if anything the binary links against has broken ABI in any way.



  • linuxcbon
    replied
    Originally posted by Luke_Wolf View Post
    I'm not sure if you're serious or trolling...
    Is it possible for you to discuss without calling people trolls? Grow up.

    Originally posted by Luke_Wolf View Post
    Okay... History Lesson Time. Macintoshes originally ran on the Motorola 68k architecture, which was succeeded by PowerPC; PowerPC persisted until a few years after OS X was adopted, when Steve Jobs decided to switch Apple's CPU supplier from IBM to Intel, bringing in x86. During the PPC days, OS X actually was compatible with OS 9 programs, but the switch to x86 broke that compatibility, because x86 is not compatible with PPC and OS 9 programs would thus have to be virtualized in order to run. In fact, the only reason OS X was compatible with OS 9 at all is its pluggable ABI system, since they are completely and utterly different operating systems.
    I know, but in my opinion a stable ABI is better because it needs less effort.

    Originally posted by Luke_Wolf View Post
    Also, good luck getting those 15-year-old Linux binaries to run on a modern Linux distro: the kernel is just one of the ABIs you need to worry about; all of the libraries the binary links against must also have retained compatibility, which in most cases they won't have. That means a lot of modifying the system outside the purview of the package manager to try to force it to work.
    If the libraries didn't change too much, the old programs should work (more or less well). And it has nothing to do with the ABI: it is also the case for BSD and OSX, right?



  • Luke_Wolf
    replied
    Originally posted by linuxcbon View Post
    It's a good idea, so old programs compiled for Linux can still work 15 years later... But Mac OS 9 programs are not compatible with Mac OS X... Which is more convenient?
    I'm not sure if you're serious or trolling...

    Okay... History Lesson Time.
    Macintoshes originally ran on the Motorola 68k architecture, which was succeeded by PowerPC; PowerPC persisted until a few years after OS X was adopted, when Steve Jobs decided to switch Apple's CPU supplier from IBM to Intel, bringing in x86. During the PPC days, OS X actually was compatible with OS 9 programs, but the switch to x86 broke that compatibility, because x86 is not compatible with PPC and OS 9 programs would thus have to be virtualized in order to run. In fact, the only reason OS X was compatible with OS 9 at all is its pluggable ABI system, since they are completely and utterly different operating systems.

    Also, good luck getting those 15-year-old Linux binaries to run on a modern Linux distro: the kernel is just one of the ABIs you need to worry about; all of the libraries the binary links against must also have retained compatibility, which in most cases they won't have. That means a lot of modifying the system outside the purview of the package manager to try to force it to work.



  • linuxcbon
    replied
    Originally posted by Luke_Wolf View Post
    (The userspace ABI is a huge example: Linux has a single ABI that is never allowed to break, whereas BSD derivatives and Windows both use a pluggable ABI system, allowing them to version their ABIs instead of maintaining one ABI that must never break.)
    It's a good idea, so old programs compiled for Linux can still work 15 years later... But Mac OS 9 programs are not compatible with Mac OS X... Which is more convenient?



  • Luke_Wolf
    replied
    Originally posted by unixfan2001 View Post
    I wouldn't necessarily place all those TODOs into the comments. In fact, I don't understand why there are TODO.txt files. Seems to me they should invest in better infrastructure and relegate most TODOs to an indexable scrum/kanban board or some other virtual representation, then restrict TODO comments to small, apparent issues.
    Unfortunately, the Linux kernel doesn't follow anything that could be considered a modern development methodology on the upstream side of things, which is rather ridiculous given that the Linux Foundation should be more than capable of buying CI servers and the rest. It's on the growing list of reasons why I'm switching over to FreeBSD in the long run. Not that FreeBSD doesn't have its own rough edges and faults, but most of the ones I care about come down to much more primitive desktop support (which PC-BSD is working to fix), as opposed to things I as a developer consider ridiculous in a modern development environment (like the lack of CI and testing) or stupid compared to what everyone else is doing. (The userspace ABI is a huge example: Linux has a single ABI that is never allowed to break, whereas BSD derivatives and Windows both use a pluggable ABI system, allowing them to version their ABIs instead of maintaining one ABI that must never break.)



  • unixfan2001
    replied
    Originally posted by ElectricPrism View Post
    It sounds like the kernel has excellent comment notations suggesting improvements above and beyond necessary functioning.
    I wouldn't necessarily place all those TODOs into the comments. In fact, I don't understand why there are TODO.txt files. Seems to me they should invest in better infrastructure and relegate most TODOs to an indexable scrum/kanban board or some other virtual representation, then restrict TODO comments to small, apparent issues.

    ✓ Mir
    Contrary to popular belief, Mir isn't the same thing as Wayland.
    It's also tailored towards Ubuntu, which makes sense, considering Convergence is an Ubuntu feature, not a general GNU/Linux concept.

    ✓Unity 8 being so far behind announced schedule
    I won't disagree there.

    ✓ Abandoned projects like ✓ Upstart, ✓ Ubuntu Software Center, ✓Ubuntu One
    Upstart is still being used but, quite frankly, isn't all that good for desktop use cases. Besides, the end user couldn't care less about what init system is being used.
    Meanwhile, the Ubuntu Software Center is being replaced with Gnome Software, which seems like a good choice that will benefit both sides.

    As for Ubuntu One: cloud infrastructure and personnel aren't all that inexpensive. If it had been more profitable for Canonical, I'm sure they wouldn't have eliminated it.

    ✓ The way the Ubuntu Council & Mark Shuttleworth forced Jonathan Riddell out of leadership on Kubuntu
    Both Riddell and Canonical can be blamed equally here. Neither acted professionally; Riddell possibly even less so.

    ✓ Kickstarting Ubuntu Edge and then not doing it after 12 million backing.
    First of all, it was an IndieGoGo campaign, not a Kickstarter.
    Secondly, why would they? The campaign was for 32 million USD and they failed to reach that goal, thus returning all the funds to the original owners.

    ✓ Waiting on full Ubuntu Convergence and a non-rebranded Android
    While Ubuntu Convergence is still not on par with Windows Continuum, it's already looking impressive.
    Development simply takes time. The same is true for offering a "non-rebranded Android" (I assume you're referring to phones that aren't rebranded Android phones, not Android phones without rebranding) experience.

    Unfortunately, offering your own custom hardware also requires solid sales and a continuous revenue stream.

    ✓ Waiting on Canonical to sell Ubuntu Phone directly to consumers via Telecommunication Stores
    See above. The numbers simply aren't there yet.
    The current, rebranded offerings aren't selling all that well, so no point in custom hardware.


