Ubuntu Hit By A Vulnerability In "Eject"

  • schmidtbag
    replied
    Originally posted by trek View Post
    Apart from the fact that it is suid root?
    Yeah, and? Have you ever taken a look inside /sbin? Those are binaries meant for root, and many of them don't offer any obvious leverage to hackers. That doesn't mean they can't be used that way (case in point: eject), but again, without the source code, nobody would waste their time attempting to figure out how.
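
    For illustration, here is a rough C sketch of what "taking a look inside /sbin" turns up, assuming a POSIX system (the directory path and file name are just examples): it prints the binaries there that are setuid root, i.e. the ones that run with root privileges no matter who invokes them.

    /* list_suid.c - rough sketch: print setuid-root binaries in one directory */
    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        const char *dir = "/sbin";          /* example directory to scan */
        DIR *d = opendir(dir);
        if (!d) {
            perror("opendir");
            return 1;
        }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            char path[PATH_MAX];
            snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);

            struct stat st;
            if (stat(path, &st) != 0 || !S_ISREG(st.st_mode))
                continue;                   /* skip unreadable entries and non-files */

            /* setuid bit set and owned by root: runs with root's privileges */
            if ((st.st_mode & S_ISUID) && st.st_uid == 0)
                printf("%s\n", path);
        }

        closedir(d);
        return 0;
    }
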
    Last edited by schmidtbag; 08 April 2017, 10:47 AM.



  • trek
    replied
    Originally posted by schmidtbag View Post
    There is nothing about the eject program that says "I offer great potential to steal data".
    Apart from the fact that it is suid root?



  • schmidtbag
    replied
    Originally posted by trek View Post
    You start with a bad assumption: the source code is always accessible in the form of assembly language. This is why there are so many exploits for Microsoft Windows, even though the original C source code was not published.
    ...are you actually serious right now? This is eject we're talking about here. Do you honestly think someone would go out of their way to reverse-engineer or analyze assembly code to find security flaws in something as obscure as this? There is nothing about the eject program that says "I offer great potential to steal data". It's not like what you're suggesting can be done in a matter of hours, let alone with any guarantee of success. Do you seriously think the average black-hat hacker wakes up one day and says "y'know what'd be a great use of the next couple of days? Ripping my hair out while I attempt to find any semblance of a security hole in the low-level code of a program most people never use"? Even a hacker knows they'd have a better chance of getting somewhere by looking elsewhere.

    Even if hackers knew about this flaw in eject before it was patched, I don't think any of them would have bothered to take advantage of it.



  • trek
    replied
    Originally posted by schmidtbag View Post
    But it isn't the best guarantee. Who knows how long this "eject" exploit actually existed - it could've been years. If hackers didn't have access to the source code, figuring this out likely would never have been possible for them.
    You start with a bad assumption: the source code is always accessible in the form of assembly language. This is why there are so many exploits for Microsoft Windows, even though the original C source code was not published.



  • schmidtbag
    replied
    Originally posted by trek View Post
    This is also the best guarantee: if no one has found a bug, the software can be considered secure after some time, depending on the complexity of the code.
    But it isn't the best guarantee. Who knows how long this "eject" exploit actually existed - it could've been years. If hackers didn't have access to the source code, figuring this out likely would never have been possible for them.

    There are two ways to look at the security of your code:
    A. You can close the source code and lower the chances of anyone discovering a problem, but that relies on the rash assumption that the highly limited set of eyes reading the code will catch every flaw.
    B. You can open the source code and give an indefinite number of people the ability to look at it, but that also means anyone with malicious intent can start causing damage before anyone else spots the problem.

    Depending on the application, one situation may be better than the other. In the case of the "eject" program, option A is the clear winner, because without the source code there's a good chance nobody would've discovered the security flaw at all. Security flaws hardly matter if they're never exploited - kind of like locking your doors when you live miles away from civilization. "eject" also isn't exactly the most complex application, so it wouldn't surprise me if fewer than 100 people in the world have actually looked at its source code. Without enough eyes on the application, the primary benefit of option B is gone.

    On the other hand, look at something like PHP. That is something people have managed to hack successfully without needing to view the source code. It's a case where option B is by far the best route, since PHP has a massive user base and is a high-priority target for hackers. If PHP were closed source, it would surely have failed a long time ago.

    There is no one-size-fits-all. I know people around here hate closed source to the point that they think using it will send them to hell, but the fact of the matter is, there are benefits to it, even for the user.



  • nanonyme
    replied
    Originally posted by bellamyb View Post

    From a similar bug report for GlusterFS (http://lists.gluster.org/pipermail/b...st/032052.html)

    "Note: there are cases where setuid() can fail even when the caller is UID 0; it is a grave security error to omit checking for a failure return from setuid(). if an environment limits the number of processes a user can have, setuid() might fail if the target uid already is at the limit."

    Uhm, so, what: first fork-bomb your own account, then trick a binary that doesn't check setuid success into doing setuid to it?
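
    Presumably something like the sketch below (an illustration of that pattern, not eject's actual code): a setuid-root helper that tries to drop privileges but ignores the result. If setuid() fails the way the quoted note describes, everything after the intended drop still runs as root.

    /* sketch of the unchecked-drop pattern (an illustration, not eject's code) */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* running setuid root: real uid is the invoking user, effective uid is 0 */
        uid_t real_uid = getuid();

        /* BUG: return value ignored. Per the quoted note, setuid() can fail
         * when the target uid is already at its process limit (e.g. after the
         * user fork-bombs their own account), leaving the effective uid at 0. */
        setuid(real_uid);

        /* everything from here on was meant to run unprivileged */
        printf("effective uid is now %d\n", (int)geteuid());
        return 0;
    }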



  • trek
    replied
    Originally posted by schmidtbag View Post
    This is one of the few disadvantages of open-source software: a malicious attacker can spend the time analyzing code for security flaws and get away with performing the attack, at least for a little while.
    This is also the best guarantee: if no one has found a bug, the software can be considered secure after some time, depending on the complexity of the code.

    This is why stable and development branches are kept separate: to let stable versions mature and earn that guarantee.

    With closed source you can never be sure.



  • Vistaus
    replied
    Originally posted by dh04000 View Post

    Because a platform that never reports finding bugs is safer than one that finds and reports them. /s
    I agree that moving from Ubuntu to Solus doesn't really make things safer per se, but Solus does report the bugs it finds. I frequently see them in the Updates tab of the Software Center (click the Information icon) or on git.solus-project.com.



  • bellamyb
    replied
    Originally posted by M@yeulC View Post
    Good report there: https://bugs.launchpad.net/ubuntu/+s...t/+bug/1673627


    Doesn't look too bad, in my limited understanding. Unless there is a way to deliberately make the setuid and setgid calls fail? Anyone? I would be curious.
    From a similar bug report for GlusterFS (http://lists.gluster.org/pipermail/b...st/032052.html)

    "Note: there are cases where setuid() can fail even when the caller is UID 0; it is a grave security error to omit checking for a failure return from setuid(). if an environment limits the number of processes a user can have, setuid() might fail if the target uid already is at the limit."




  • TheBlackCat
    replied
    Originally posted by monraaf View Post
    Debian never ever shipped eject from util-linux, so it was never "removed".
    Eject is enabled by default in util-linux and manually disabled in Debian, so I don't think it is inaccurate to say it is "removed".

