Stallman: If you want freedom don't follow Linus Torvalds


  • #31
    Originally posted by stevea View Post
    In the last para of my previous post I was thinking anti-TiVo, but wrote anti-DRM. The point is that any term that restricts the rights of the copyright owner more in GPLv3 than in GPLv2 would seem to make them incompatible ... but IANAL. What if someone licensed his copyrighted code under GPLv2 based on the idea that his code COULD be locked into hardware where it is not accessible to user update? Let's say a home-product vendor uses GPLv2 code to (among other things) track how many hours your HDTV has run, for warranty validation. Getting the source code is fine so long as the company does NOT have to disclose an installation method. Clearly (to me) this GPLv2 code is incompatible with the GPLv3 anti-TiVo provisions, because the intent of the copyright holder is incompatible w/ GPLv3.

    Linus is right that it would take a lot of cajoling and arm-twisting to make the ~hundred kernel copyright holders agree that GPLv3 is compatible with their intentions and their GPLv2 license ... and then there are the thousands of application copyright holders.
    I do not understand your point.
    GPLv2 was written to grant certain freedoms (the ones that RMS happens to deem important) to the licensee. By choosing that license the kernel developers (and others who chose that license) acknowledged the importance and validity of these freedoms.
    When GPLv2 was written, DRM and locked-down hardware were not an issue. GPLv3 simply extends the *very same* freedoms to those new scenarios.
    Accepting GPLv2 but not v3 is hypocritical.



    • #32
      Originally posted by linuxhansl View Post
      I do not understand your point.
      GPLv2 was written to grant certain freedoms (the ones that RMS happens to deem important) to the licensee. By choosing that license the kernel developers (and others who chose that license) acknowledged the importance and validity of these freedoms.
      When GPLv2 was written, DRM and locked-down hardware were not an issue. GPLv3 simply extends the *very same* freedoms to those new scenarios.
      Accepting GPLv2 but not v3 is hypocritical.
      That statement is ridiculous. GPLv2 and GPLv3 are two similar but different licenses. It is completely sensible that some people will wish to license their software according to one license and not the other, since they have different terms. I would agree that the general intent of GPLv3 is similar to GPLv2, and that in some part it extends and details the notions of GPLv2. So completely sane and rational humans may agree or disgree with the differences or extensions based on their intentions.

      GPLv2 generally states that anyone should get source w/ the binary and that anyone is free to modify and use the source for derivative work, so long as they pass along the source with the binary. That is a basic statement of FOSS.

      GPLv3 attempts to prevent Tivo-ization, may prevent the enforcement of patents (see patent retaliation), forces authors to give up all patent rights for GPLv3-derived work, and greatly restricts the terms that may be added to the license while remaining compatible. I can see that there may be cases where an author does not wish these additional restrictions to apply to his code.
      ==
      "If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission".
      Last edited by stevea; 02 January 2008, 05:09 AM.



      • #33
        Well, without going further into my reasons, but to give others something to think about: I agree with both Linus and Stallman, each in "one kind of" way of my own, as everybody else would, kind of. Sorry if that sounds a bit stupid.
        But I'm still trying to get people to think about alternative ways of working, and so on. And, like anyone else would say: I know that you would agree, but then you are butt ugly.

        And sorry about my imperfect English.
        Last edited by Mota_boy; 08 January 2008, 02:04 PM.



        • #34
          Originally posted by hobophobe View Post
          (Wikipedia)

          Is my scenario unlikely? Yes. Do things like this happen? Yes.
          The researchers also found several engineering issues:

          * The design did not have any hardware interlocks to prevent the electron-beam from operating in its high-energy mode without the target in place.
          * The engineer had reused software from older models. These models had hardware interlocks that masked their software defects. Those hardware safeties had no way of reporting that they had been triggered, so there was no indication of the existence of faulty software commands.
          * The hardware provided no way for the software to verify that sensors were working correctly (see open-loop controller). The table-position system was the first implicated in Therac-25's failures; the manufacturer gave it redundant switches to cross-check their operation.
          * The equipment-control task did not properly synchronize with the operator-interface task, so race conditions occurred if the operator changed the setup too quickly. This was evidently missed during testing, since it took some practice before operators were able to work quickly enough to trigger the problem.
          * The software set a flag variable by incrementing it. Occasionally an arithmetic overflow occurred, causing the software to bypass safety checks.
          I am not sure that the Therac-25 example is the best one to use... it was a failure of both software and hardware.
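
          Though, to be fair, that last bullet is a pure software defect, and an easy one to demonstrate. Here's a minimal C sketch of that kind of flag-overflow bug (hypothetical code, NOT the actual Therac-25 source):

          Code:
          #include <stdint.h>
          #include <stdio.h>

          /* The software "set" a flag by incrementing an 8-bit counter;
           * after 256 increments the counter wraps back to zero, which
           * reads as "flag clear" -- and the safety check is bypassed. */
          static uint8_t setup_incomplete;     /* nonzero = "unsafe to fire" */

          static void mark_setup_incomplete(void)
          {
              setup_incomplete++;              /* bug: increment instead of = 1 */
          }

          int main(void)
          {
              for (int i = 0; i < 256; i++)    /* the 256th increment wraps to 0 */
                  mark_setup_incomplete();

              if (setup_incomplete == 0)
                  puts("safety check bypassed: beam would fire");
              return 0;
          }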



          • #35
            Originally posted by indigo196 View Post
            I am not sure that the Therac-25 example is the best one to use... it was a failure of both software and hardware.
            Let me speak to this. I've spent most of my career working on embedded software, much of it involving medical instrumentation (mostly CT scanners, but also MRI) and much of it involving avionics (aircraft electronics). Fortunately most of my work was not on a safety-critical path, tho' some was.

            Yes, such problems still happen, tho' they're far less likely today than in the 1970s and 1980s.

            In the US both the FDA and the FAA have developed similar, very rigid methodologies for software development. The FAA's DO-178B, for example, requires that the entire design be detailed and documented, and it goes through a design review with outside auditors. During code development each module undergoes a review that requires the code to conform precisely to the approved spec. If problems discovered during implementation require design changes, then the work goes back to the paper-design phase, and the entire design, not just the changes, is re-audited.

            The problem is that subtle errors in software can result in horrible and mostly unpredictable repercussions. I was recently working on a real-time kernel approved for avionics applications. The kernel had recently been modified to support dual processors (SMP, like the Intel Core 2 Duo), and a subtle race condition was introduced: if you attempted to set the system time at an instant when another thread was in the middle of reading the time, the whole notion of time could be fouled up. The design was fine, and the designer understood race conditions, but several sets of human eyes read the code and missed this one.
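
            To illustrate the failure mode (a hypothetical sketch, not the actual kernel code): if the time is kept as two 32-bit words, a reader preempted between the two halves while a writer updates them can see a wildly wrong value.

            Code:
            #include <stdint.h>

            /* System time kept as two 32-bit words (common on 32-bit hardware). */
            static volatile uint32_t time_hi, time_lo;

            uint64_t read_time(void)
            {
                uint32_t hi = time_hi;
                /* ...set_time() runs on the other CPU right here... */
                uint32_t lo = time_lo;
                return ((uint64_t)hi << 32) | lo;   /* torn read: mixes old and new halves */
            }

            void set_time(uint64_t t)               /* the "set system time" path */
            {
                time_hi = (uint32_t)(t >> 32);
                /* ...or a reader preempts here and pairs new hi with old lo... */
                time_lo = (uint32_t)t;
            }

            The usual cures are a sequence lock (the reader retries if a writer was active) or a spinlock held around both halves on both paths.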

            It's relatively easy to add independent and redundant safety interlocks to systems to prevent Therac-type overdose problems; the independence of the redundant calculation helps guard against systemic failures. But there are much bigger problems than the safety interlocks on the "guns". Much of what the physician observes is also based on calculations that may have gone awry. What if your physician gets an incorrect report of the amount of radiation applied (tho' within safety limits per application)? What if they get an incorrect report of the tumor's size or location? What if the clock in a bit of avionics stalls, so the system wildly over-reports rates of turn and velocities, and those over-reports cause some safety locks to kick in?

            There was a nice article in American Scientist (the Sigma Xi publication) a number of years ago on faults in engineered systems. The specific example was bridge failure, long before the Minnesota tragedy. The gist was that we generally learn new solutions only from such failures, b/c we don't have the ability to foresee new failure modes except by experience.

            The current software challenge is to PROVE that we have at least eliminated the known failure modes. This is not currently possible except in a few axiomatically proven code implementations.
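
            As a toy example of what such a proof buys you, take the overflow bug quoted earlier in this thread: rewritten so the safety property holds in every reachable state, it becomes an invariant that a model checker or proof assistant can discharge (hypothetical code, just to show the shape of the property):

            Code:
            #include <assert.h>
            #include <stdint.h>

            static uint8_t flag;     /* nonzero = "condition latched" */

            void set_flag(void)
            {
                if (flag < UINT8_MAX)
                    flag++;          /* saturate: can never wrap back to 0 */
                assert(flag != 0);   /* invariant provable for ALL call sequences */
            }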

