Glibc 2.39 Should Be Out On 1 February & Might Drop Itanium IA64 Linux Support


  • #11
    Originally posted by AlanTuring69 View Post

    "I love my OpenVMS!" said at least one person, probably
    I had the misfortune of having to work with that one person a long time ago. He was also the one insisting on over-engineered MIL-spec solutions at many times the price of a merely satisfactory, specifications-met solution. He never understood the concept of "perfect is the enemy of good enough". He probably would have fit right in with the Itanium team: overly complex solutions to narrowly targeted problems. Unfortunately, those targeted "perfect" solutions weren't answers to the questions the overall industry was asking at the time. AMD64 as an extension to x86 was the right answer at the time. Intel recognized its blunder when AMD started threatening its server-box cash cow. Opteron (don't confuse the physical CPU with the architecture) didn't kill Itanium. Itanium killed itself, and it was Intel's answer, Xeon, that went on to dominate the server and data center market, not Opteron.

    That said, OpenVMS isn't a bad OS. It has some useful features modern OSes still don't have. But again, it's proprietary and expensive. Good enough won with x86(-64), Windows, Linux, and to some extent FreeBSD, and "good enough" ARM and/or RISC-V platforms may well win again in certain market segments in the near future. Just don't conflate an open ISA with open, fully documented hardware. It's just as possible to hide the kind of hardware backdoor behind the Triangulation iPhone malware in RISC-V or OpenPOWER as it is on ARM, or in any other bolted-on closed hardware like the cell modems protected by Qualcomm's patents.



    • #12
      Originally posted by Dawn View Post
      Wave of "lol itanic, so shitty and pathetic" from people who have never touched an IPF system and don't understand what they were used for in 3... 2... 1...
      Does it even matter whether it was good or bad? It's dead and buried at this point, so they're removing it.
      Chip designers may learn a thing or two from it, but for the rest of us it's a meaningless discussion.



      • #13
        Itanium gave us a base for a common ABI for C++. I'd say that counts for something.
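To make that concrete, here is a minimal sketch (toy code with made-up names, nothing from glibc or any real project) of the most visible thing the Itanium C++ ABI standardizes: the name-mangling rules that GCC and Clang share on Linux, which is a large part of why object files from different compilers can link against each other. The mangled names in the comments are what the Itanium scheme produces and can be checked with nm and c++filt.

```cpp
// Minimal sketch: two overloads whose linker-level names are fixed by the
// Itanium C++ ABI name-mangling rules (the scheme GCC and Clang share).
#include <iostream>

namespace demo {
    int area(int side) { return side * side; }         // mangles to _ZN4demo4areaEi
    double area(double w, double h) { return w * h; }  // mangles to _ZN4demo4areaEdd
}

int main() {
    // Because compilers following the ABI agree on the names above, an object
    // built with GCC can call these functions in a library built with Clang.
    std::cout << demo::area(3) << ' ' << demo::area(2.5, 4.0) << '\n';
    return 0;
}
```

Mangling is just the easiest piece to see directly; the same ABI also pins down vtable layout, RTTI and the exception-handling interface.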



        • #14
          TBH, I get the impression this website is financed by Microsoft to promote the deletion of Open Source.



          • #15
            Originally posted by rene View Post
            TBH, I get the impression this website is financed by Microsoft to promote the deletion of Open Source.
            Eh, it's just an effect of how contemporary ad-funded websites work. Itanium support getting dropped is something that affects 0.000% of users, but some people rage-click and need to argue about it anyway. So that's what we discuss, and that's what drives ad income to Phoronix.



            • #16
              Originally posted by vladpetric View Post

              Actually, it was far worse than that.

              The wished-for compilers simply couldn't be written (this is what Knuth said first, btw).

              […]

              The issue is Rice's theorem
              As you seem to realize later in the same post, it is not. Rice's theorem doesn't preclude you from giving an algorithm that decides, for a property P, either "provably true", "provably false" or "don't know". As long as the "don't know" for a desirable property is rare enough (in places where the optimization matters) or can be worked around, this is completely fine in practice. Of course, the problem is that it may be hard to find an algorithm that achieves this, or, even worse, the desirable property may be provably false for many practical problems, or a desirable property may be hard to formulate in the first place; but none of these has anything to do with Rice's theorem.
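To make the tri-state idea concrete, here is a deliberately toy sketch (made-up names, nothing from any real compiler): a conservative "is this value positive?" analysis over a miniature expression language. Rice's theorem rules out a total decider for a semantic property like this; it says nothing against an analysis that is allowed to answer "don't know".

```cpp
// Toy tri-state analysis: decide whether an expression's value is provably
// positive, provably not positive, or unknown.
#include <iostream>
#include <memory>

enum class Verdict { ProvablyTrue, ProvablyFalse, Unknown };

// A tiny expression language: integer literals, additions, and opaque
// runtime inputs the analysis cannot see through.
struct Expr {
    enum class Kind { Literal, Add, RuntimeInput } kind;
    int value;                       // used when kind == Literal
    std::shared_ptr<Expr> lhs, rhs;  // used when kind == Add
};

std::shared_ptr<Expr> lit(int v) {
    return std::make_shared<Expr>(Expr{Expr::Kind::Literal, v, nullptr, nullptr});
}
std::shared_ptr<Expr> input() {
    return std::make_shared<Expr>(Expr{Expr::Kind::RuntimeInput, 0, nullptr, nullptr});
}
std::shared_ptr<Expr> add(std::shared_ptr<Expr> a, std::shared_ptr<Expr> b) {
    return std::make_shared<Expr>(Expr{Expr::Kind::Add, 0, std::move(a), std::move(b)});
}

// "Is this expression's value > 0?", answered conservatively. The explicit
// Unknown verdict is the escape hatch that keeps a sound analysis compatible
// with Rice's theorem: it never claims to decide every case.
Verdict isPositive(const std::shared_ptr<Expr>& e) {
    switch (e->kind) {
        case Expr::Kind::Literal:
            return e->value > 0 ? Verdict::ProvablyTrue : Verdict::ProvablyFalse;
        case Expr::Kind::Add:
            // positive + positive is positive; anything else, give up rather than guess
            if (isPositive(e->lhs) == Verdict::ProvablyTrue &&
                isPositive(e->rhs) == Verdict::ProvablyTrue)
                return Verdict::ProvablyTrue;
            return Verdict::Unknown;
        default:                     // opaque runtime input
            return Verdict::Unknown;
    }
}

int main() {
    std::cout << static_cast<int>(isPositive(add(lit(2), lit(3)))) << '\n';  // 0 = provably true
    std::cout << static_cast<int>(isPositive(lit(-1))) << '\n';              // 1 = provably false
    std::cout << static_cast<int>(isPositive(add(lit(2), input()))) << '\n'; // 2 = unknown
}
```

Real analyses such as value-range propagation or alias analysis have the same overall shape, just with far richer lattices, and the practical question is exactly the one debated here: how often the honest answer ends up being "don't know".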



              • #17
                Originally posted by archkde View Post

                As you seem to realize later in the same post, it is not. Rice's theorem doesn't preclude you from giving an algorithm that decides, for a property P, either "provably true", "provably false" or "don't know". As long as the "don't know" for a desirable property is rare enough (in places where the optimization matters) or can be worked around, this is completely fine in practice. Of course, the problem is that it may be hard to find an algorithm that achieves this, or, even worse, the desirable property may be provably false for many practical problems, or a desirable property may be hard to formulate in the first place; but none of these has anything to do with Rice's theorem.
                If you're talking about me, FYI, I realized certain things a long time ago (including, but not limited to, my PhD).

                How exactly would you quantify how many times "don't know" happens? It happens a lot more often than you'd think if one doesn't cheat - by cheat I mean using the same training and test sets (containing source code), something that optimizing compiler writers are very keen on doing for some reason, but which is outright laughable in any real scientific context, btw.

                My point is that for Itanium these limitations simply can't be worked around.

                And as for saying that these limitations have nothing to do with Rice's theorem - it is exactly Rice's theorem that says that "don't know" must exist. Aren't you now trying to summarily dismiss the theorem?



                • #18
                  Originally posted by vladpetric View Post
                  How exactly would you quantify how many times "don't know" happens? It happens a lot more often than you'd think if one doesn't cheat - by cheat I mean using the same training and test sets (containing source code), something that optimizing compiler writers are very keen on doing for some reason, but which is outright laughable in any real scientific context, btw.
                  I do not disagree that "don't know" may happen frequently in practice, or even that it can be hard to quantify in the first place. All I'm saying is that it is perfectly compatible with Rice's theorem that "don't know" is so rare it only happens once every 10 years in the entire world in practice.

                  My point is that for Itanium these limitations simply can't be worked around.
                  And what makes Itanium so different here? My understanding was that it is much more about coming up with a reasonable desired property in the first place, but you seem to be much more knowledgeable in that area.

                  And as for saying that these limitations have nothing to do with Rice's theorem - it is exactly Rice's theorem that says that "don't know" must exist. Aren't you now trying to summarily dismiss the theorem?
                  OK, I'll grant you this one - that formulation was bad. What I meant to say is that yes, Rice's theorem says that "don't know" will always exist, but it doesn't make any statement about how big it must be (I'm sure there are refinements that actually give a lower bound on that, but in practice it's probably much larger than any such bound), and that a very small "don't know" would not necessarily be a problem in practice. Nor does a "don't know" in practice necessarily arise from undecidability; it can also come from prohibitive computational complexity.



                  • #19
                    Originally posted by vladpetric View Post

                    If you're talking about me, FYI, I realized certain things a long time ago (including, but not limited to, my PhD).

                    How exactly would you quantify how many times "don't know" happens? It happens a lot more often than you'd think if one doesn't cheat - by cheat I mean using the same training and test sets (containing source code), something that optimizing compiler writers are very keen on doing for some reason, but which is outright laughable in any real scientific context, btw.

                    My point is that for Itanium these limitations simply can't be worked around.

                    And as for saying that these limitations have nothing to do with Rice's theorem - it is exactly Rice's theorem that says that "don't know" must exist. Aren't you now trying to summarily dismiss the theorem?
                    Elbrus (ExpLicit Basic Resources Utilization Scheduling) uses VLIW: https://en.wikipedia.org/wiki/Elbrus_(computer)
                    Last edited by Svyatko; 04 January 2024, 03:51 PM.



                    • #20
                      Originally posted by vladpetric View Post

                      Actually, it was far worse than that.

                      The wished-for compilers simply couldn't be written (this is what Knuth said first, btw).

                      It's not that Intel didn't try - they actually put together a pretty good team to get it done (I personally know 1, maybe 2 people, both top notch, who were on that team). But they failed miserably.

                      Why? Simple really, if one is actually willing to consider mathematical evidence (in the political sphere, of course, everything that is not convenient can and will be summarily dismissed).
                      In theory, theory and practice are the same. In practice, they are different. There's nothing wrong with your theory here, but the practical issues are completely different.

                      Consider the halting theorem (since it is more familiar to more people, and the argument is basically the same for Rice's theorem). You can't make a "halt tester" program that will take any arbitrary program and tell you whether it will ever halt. Proving this is quite simple if you have studied computation theory. And yet in practice, people test code, use debuggers, and produce software that they can confidently claim will do what they intend - whether that be halting or not. You do this with test suites and software test engineers, testing the halting behaviour of your program and aiming for reasonable confidence in the result rather than a full proof of it. You can't make a single set of testing programs and procedures that will handle any program - but you can do so for a given program, if it is reasonably amenable to testing.
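As a toy illustration of that trade-off (a made-up sketch, nothing to do with glibc or any real test framework): a bounded "halt check" that answers either "reached the halting state within the budget" or "don't know", which is essentially what a test suite with a timeout gives you. The Collatz iteration is used only because whether it always reaches 1 is itself an open question.

```cpp
// Toy sketch: a bounded "halt check". It cannot decide halting in general
// (Turing), but within a step budget it gives either a definite "halted"
// or an honest "don't know" - the same trade-off a test with a timeout makes.
#include <cstdint>
#include <iostream>
#include <optional>

// One step of the Collatz iteration.
std::uint64_t collatzStep(std::uint64_t n) {
    return (n % 2 == 0) ? n / 2 : 3 * n + 1;
}

// Number of steps needed to reach 1, or nullopt ("don't know") if the budget
// runs out first.
std::optional<std::uint64_t> haltsWithin(std::uint64_t n, std::uint64_t maxSteps) {
    for (std::uint64_t steps = 0; steps < maxSteps; ++steps) {
        if (n == 1) return steps;
        n = collatzStep(n);
    }
    return std::nullopt;
}

int main() {
    for (std::uint64_t start : {6ULL, 27ULL}) {
        if (auto steps = haltsWithin(start, 50))
            std::cout << start << " reaches 1 after " << *steps << " steps\n";
        else
            std::cout << start << ": don't know (step budget exhausted)\n";
    }
}
```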

                      In both theory and practice, perfect compilers cannot be made. But in practice, very good compilers can be made. However, in practice it proved impossible to make compilers for the Itanium good enough that the result was remotely competitive (in speed, cost, or power consumption) with alternative processors. Rice's theorem applies just as much to x86-64 or ARM as to IA-64 - it has the same theoretical significance, and the same practical insignificance.

                      The problem with the IA-64 ISA was very simple. It aimed to schedule instructions at compile time, so that they could be executed at very high speed and parallelism by a (relatively) simple and efficient execution unit. You could avoid all the complications of hardware scheduling, speculative execution, register renaming, branch prediction, etc., that make x86 processors so complicated and mean that a large proportion of the power used by x86 devices goes to "bureaucracy" rather than actually executing instructions. The trouble is that for general-purpose software (unlike the kernels of DSP algorithms) you simply don't have the information at compile time to do this well - far too much depends on the inputs, and these change all the time. When the best scheduling and instruction flow change dynamically at run time, there is no way a compiler can find a good solution at compile time.
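A toy sketch of that point (made-up code, not from any compiler documentation): in the two loops below, the useful instruction-level parallelism depends entirely on runtime data, namely where the allocator happened to place the list nodes and which elements pass the comparison, so no schedule fixed at compile time suits all inputs, whereas an out-of-order core discovers the available slack as it runs.

```cpp
// Toy example of why purely static scheduling struggles: the exploitable
// instruction-level parallelism below depends on runtime data that is simply
// not visible at compile time.
#include <iostream>
#include <vector>

struct Node { int value; Node* next; };

// Pointer chasing: each load's address depends on the previous load, and
// whether it hits cache depends on where the allocator put the nodes. A
// fixed compile-time schedule cannot know how long each load will take.
long sumList(const Node* head) {
    long total = 0;
    for (const Node* p = head; p != nullptr; p = p->next)
        total += p->value;
    return total;
}

// Data-dependent branch: the taken/not-taken pattern is a property of the
// input, so a schedule chosen at compile time (or predication of both paths)
// cannot adapt the way a run-time branch predictor does.
long sumLarge(const std::vector<int>& v, int threshold) {
    long total = 0;
    for (int x : v)
        if (x > threshold)
            total += x;
    return total;
}

int main() {
    Node c{3, nullptr}, b{2, &c}, a{1, &b};  // tiny hand-built list
    std::vector<int> data{5, -2, 9, 0, 7};
    std::cout << sumList(&a) << ' ' << sumLarge(data, 1) << '\n';  // prints "6 21"
}
```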

                      The result was that throughput was poor, only a fraction of the theoretical maximum speed of the processors. So Intel had to add all the things they had wanted to avoid - hardware re-ordering, speculative execution, and so on. This was vastly harder to do on the IA-64 than in the x86 world, with the result that the chips were monsters with massive power consumption and very mediocre performance. Some kinds of software worked well - things involving lots of predictable and repetitive calculations, such as engineering and simulation software. But apart from that, they were only of interest to people wanting their mainframe-style robustness features, or those forced into using them by software lock-in. Thus they had a chicken-and-egg situation - almost no one made software for them, and the only thing that ran slower than IA-64 binaries on an Itanium was x86 binaries on an Itanium.


