Ubuntu Linux Evaluating x86-64-v3 Based Build - AVX & Newer Intel/AMD CPUs

  • #91
    Originally posted by coder View Post
    Again, you've failed to provide any data to support that claim.
    You are the one defending the few snowflakes out there that were released during the past four years (= a reasonable product life cycle), and you want to persuade Ubuntu to care about them by implementing the hwcaps solution. The burden of proof here is on you. Show me (or them) data that these CPUs represent a meaningful market share of Ubuntu Server or Ubuntu Desktop, or we can end our discussion, as I get tired of your ignorance of that fact while you provide no data yourself. I've made my best guess on the relevance of these systems.
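Since "the hwcaps solution" keeps coming up: this refers to glibc-hwcaps (glibc 2.33 and newer), where the dynamic loader searches per-ISA-level subdirectories such as glibc-hwcaps/x86-64-v3 before the generic library directory, so a distribution can ship a baseline build and an optimized build side by side. A minimal sketch of that search order; the paths and the library name below are made up for illustration, not real Ubuntu packaging:

```python
# Sketch of glibc-hwcaps resolution (glibc >= 2.33): the loader tries
# per-ISA-level subdirectories from highest to lowest before falling back
# to the generic directory.
HWCAPS_ORDER = ["x86-64-v4", "x86-64-v3", "x86-64-v2"]

def resolve(libname, base_dir, cpu_level, available):
    """Return the path the loader would pick.

    available: stand-in for the filesystem, a set of existing paths.
    cpu_level: highest ISA level the CPU supports, e.g. "x86-64-v3".
    """
    if cpu_level in HWCAPS_ORDER:
        for level in HWCAPS_ORDER[HWCAPS_ORDER.index(cpu_level):]:
            candidate = f"{base_dir}/glibc-hwcaps/{level}/{libname}"
            if candidate in available:
                return candidate
    return f"{base_dir}/{libname}"  # generic build keeps older CPUs working

fs = {
    "/usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v3/libfoo.so.1",
    "/usr/lib/x86_64-linux-gnu/libfoo.so.1",
}
# A v3-capable CPU gets the optimized build, a v2-only CPU the baseline one:
print(resolve("libfoo.so.1", "/usr/lib/x86_64-linux-gnu", "x86-64-v3", fs))
print(resolve("libfoo.so.1", "/usr/lib/x86_64-linux-gnu", "x86-64-v2", fs))
```

The key property is that the generic build stays as a fallback, so nothing gets de-supported; the cost Ubuntu weighs is building and shipping the libraries twice.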

    Originally posted by coder View Post
    As a "legal person", you should appreciate the value of precedent, here. With no prior precedent to point to, it's really hard to make the case that the risk is one worth mitigating against. Especially when doing so hurts the bottom line of device makers.
    I practice in the continental system; sorry, I don't value precedent as much as you Anglo-Americans do in the common law system. We also like to formulate abstract rules, but abstract thinking is not for everyone, I guess. Hence I cannot see a significant difference to similar discussions in the past. It is not unheard of that software requirements tend to rise over time, at a slower pace in the Linux ecosystem than on Windows or Apple, but rising nonetheless. We had similar discussions in the past when going from i386 -> i586 or from i686 to x86-64. It is a technical and a business decision for Canonical in the end; they might find the trade-off acceptable if the market share of these systems is as low as I guessed ("rounds to zero"). They might have better data on this. Let's wait and see. As I have no stake in their decision, I am fine either way, as I already use the performance-oriented CachyOS which provides v3 and v4 repos.

    In essence, I wanted to point out these users' poor decisions in the past: that CPU features and system specs matter in the long run, that there were enough alternatives out there that they could have gone with (since 2018 even in the embedded low-power x86-compatible segment), and that some media advice and even common business practices (driven by bean counters) out there were just short-sighted, especially for people looking for a longer than usual product life span. All others got a fair amount of life out of their systems and need to change their OS or invest in newer hardware (as an alternative for consumers, there are enough cheap AVX2-capable Xeons and cheap Chinese motherboards to support these nowadays, or even newer AMD Ryzen parts on the second-hand market). My Xeon 2696 v3 / 64 GB DDR3 / modded Chinese X99 board of questionable quality, a 2014-era platform, is still very powerful to this date.

    Merry Christmas nonetheless!
    Last edited by ms178; 24 December 2023, 11:28 AM.

    Comment


    • #92
      Originally posted by ms178 View Post
      you want to persuade Ubuntu to care about them by implementing the hwcaps solution.
      No, I'm just debunking your BS claims that this is necessary.

      Originally posted by ms178 View Post
      The burden of proof here is on you.
      Again, you're the one advocating for changing the status quo and alienating part of the market.

      Originally posted by ms178 View Post
      I get tired of your ignorance
      People in glass houses...

      Originally posted by ms178 View Post
      I've made my best guess on the relevance of these systems.
      Nobody asked for your guesses, which are effectively noise.

      Originally posted by ms178 View Post
      We've had similar discussions in the past when going from i386 -> i586 or from i686 to x86-64.
      As I've repeatedly pointed out, those 32-bit-only CPUs are decades old. It's not remotely comparable.

      Originally posted by ms178 View Post
      In essence, I wanted to point out these user's poor decisions in the past,
      It's painfully obvious you're no engineer. For an engineer, overbuilding something at added cost, resource footprint, and complexity is also a poor decision. In your world, everything would cost a lot more and be even less efficient. If you ran a company, it'd probably go out of business by being too cautious relative to its competition. The market favors efficiency.

      Originally posted by ms178 View Post
      there are enough cheap AVX2-capable Xeons and cheap Chinese motherboards to support these nowadays or even newer AMD Ryzen stuff on the second hand market)
      Maybe that's fine for home hobbyists, but not businesses. They require things like certifications, warranties, and support. Not to mention that it's unclear just how much scale exists in that market. Businesses also like homogeneity. A hodgepodge of whatever you can find on the used market can easily end up costing more time & trouble than the savings are worth.

      Even for my own personal use, I don't trust used hardware. The cost savings just aren't worth any trouble I might run into. I've seen enough failures in old hardware, at my job, to know it's not a free lunch.

      Comment


      • #93
        Originally posted by coder View Post

        Nobody asked for your guesses, which are effectively noise.
        Ditto.

        Originally posted by coder View Post
        As I've repeatedly pointed out, those 32-bit-only CPUs are decades old. It's not remotely comparable.
        You narrow your view to minor details again. Yes, at that point in time, axing support for 32-bit CPUs might have mattered less than dropping the AVX2-incapable CPUs you bring up repeatedly without showing any market data that they actually matter today. You still fail to recognize the common theme: a rise in software requirements that leads to the obsolescence of such systems over time. You made it look like this is the first time such a rise in minimum OS requirements has happened. The next requirement doesn't need to be a vector extension (think of AI acceleration for future Windows products). It doesn't even need to be a hard requirement: if the software experience were terrible without any form of AI accelerator, people might feel the need to upgrade for the best experience.

        Originally posted by coder View Post
        It's painfully obvious you're no engineer. For an engineer, overbuilding something at added cost, resource footprint, and complexity is also a poor decision. In your world, everything would cost a lot more and be even less efficient. If you ran a company, it'd probably go out of business by being too cautious relative to its competition. The market favors efficiency.
        If you really think so, this tells me that you cannot think outside of your small box as a software engineer. It is painfully obvious that there are people out there - even more serious engineers - that need to make more responsible decisions because other people's lives depend on them making the right design choices (e.g. car safety, pharmaceuticals, etc.), and these cannot be fixed as simply as shipping a software update.

        A simple case to illustrate my point: The Ford Pinto had a design flaw in its fuel tank placement, which made it susceptible to rupturing and catching fire in rear-end collisions. The fuel tank was located in a vulnerable position, and in certain accidents, it could be punctured by relatively minor impacts. The controversy gained significant attention when it was revealed that Ford was aware of the safety issues but had decided not to make design changes due to cost considerations. Internal documents, including a cost-benefit analysis, suggested that Ford had calculated that it would be cheaper to pay for potential lawsuits resulting from injuries and deaths than to implement design modifications. As a result, Ford faced numerous lawsuits from victims and their families who were injured or killed in accidents. In 1978, a widely publicized case, Grimshaw v. Ford Motor Co., resulted in a jury awarding significant damages to a plaintiff who had been severely burned in a Pinto accident. This case and others contributed to increased public awareness about product safety and led to changes in regulations and corporate practices in the automotive industry.

        The Ford Pinto case remains a landmark example of the ethical and legal issues associated with prioritizing cost savings over consumer safety in product design. And the line of thinking behind it is not limited to product design but can be generalized to other industries and even personal life.

        Usually it is the sales people or accountants that ignore such risks that are hard to quantify or aren't reflected in any price tag. I was taught to anticipate and to mitigate such risks (remember Donald Rumsfeld's comments about the known unknowns and unknown unknowns?), as you don't want to end up in a bad situation when such a risk materializes (and people always assume these won't happen, but have to call us lawyers when something has gone terribly wrong, while they could have hired our services beforehand to mitigate most of these problems). While I am biased by profession here, I think that proper risk management matters to everyone regardless of which job you have. And a proper solution against the most relevant scenarios might come with a price tag of its own, which might nonetheless be worth it in the end if the damage it helps to mitigate is severe enough.

        In our case, the higher upfront and operating costs for going with an i3 or an AMD equivalent would have been negligible compared to these Elkhart Lake+ systems. On the other hand, the cost of the shorter life span, with a now-mandatory change in hardware or the effort of changing the operating system, wasn't taken into account properly when the purchase decision was originally made. A reasonable IT expert could have pointed out such a risk to the decision maker, as such a change in requirements could have been anticipated since 2019 (even before a company gives any kind of formal notification). If the company is on a three to four year product life span, I would agree that it won't matter that much to mitigate such a risk. But as you've brought up embedded and industrial systems as a use case for these recent non-AVX2-capable CPUs, which are typically on a longer life span with a higher cost of replacement than usual (e.g. a cell phone tower), it makes this purchase decision even more obviously bad. These people now either need to bite the bullet or are at the mercy of third parties to invest in a solution. That's not a situation businesses want to be in at all. This also could have been avoided: as I showed you, AMD's embedded lineup with AVX2 existed even before Elkhart Lake launched.
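To make concrete what these parts are missing: on top of the v2 level, x86-64-v3 requires AVX, AVX2, BMI1, BMI2, F16C, FMA, LZCNT, MOVBE and OSXSAVE. A rough self-check against a /proc/cpuinfo flags line can be sketched as follows; note that Linux reports the LZCNT requirement under the "abm" flag, "xsave" stands in for OSXSAVE here, and the sample flags line is trimmed and made up for illustration:

```python
# Rough check of a /proc/cpuinfo "flags" line against the x86-64-v3 additions.
V3_FLAGS = {
    "avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe",
    "abm",    # Linux reports the LZCNT requirement under "abm"
    "xsave",  # stands in for the OSXSAVE requirement
}

def missing_v3_flags(cpuinfo_text):
    """Return the v3-level flags absent from a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present = set(line.split(":", 1)[1].split())
            return V3_FLAGS - present
    return set(V3_FLAGS)  # no flags line found at all

# Trimmed, made-up flags line in the style of an AVX-less Atom-class part:
sample = "flags\t\t: fpu sse sse2 ssse3 sse4_1 sse4_2 popcnt movbe xsave aes"
print(sorted(missing_v3_flags(sample)))  # all of the AVX/BMI-era flags
```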

        Originally posted by coder View Post
        Maybe that's fine for home hobbyists, but not businesses. [...] Even for my own personal use, I don't trust used hardware. The cost savings just aren't worth any trouble I might run into. I've seen enough failures in old hardware, at my job, to know it's not a free lunch.
        Fair enough, going the used hardware route is not suitable for everyone and obviously not meant for businesses. I brought it up as a cost-effective option for Linux hobbyists and tinkerers, as people using Linux on the desktop still tend to be more tech-savvy. Businesses should learn from such a mistake and think through their options more carefully in the future, even if the alternatives come with a cost. This story also underlines the value of a form factor that allows for socketed CPUs, a platform that scales, the availability of replacement parts, and serviceability in general. There is an industry trend to design throw-away technology, which I find wasteful.
        Last edited by ms178; 26 December 2023, 05:45 AM.

        Comment


        • #94
          Originally posted by ms178 View Post
          Ditto.
          No, not ditto. I'm not the one claiming to provide data that's actually devoid of any substance.

          Bad data is worse than no data, because it gives you false confidence.

          Originally posted by ms178 View Post
          AVX2-incapable CPUs you bring up repeatedly without showing any market data that they actually matter today.
          You haven't shown any market data that they don't matter, yet you're the one advocating to de-support them.

          Originally posted by ms178 View Post
          You made it look like this is the first time such a rise in minimum OS requirements has happened.
          No, but it's virtually unprecedented for a commercial operating system to de-support such new CPUs. It's a false equivalence.

          In fact, the entire x86 (32-bit) argument is a red herring, which I mean in the purest sense of the term.

          The problem we face is that you're thinking and arguing like a lawyer, rather than an engineer or business person. You just want to convince a jury, regardless of whether your client is truly innocent. That's the problem with lawyers. If an engineer or business person convinces themselves of an untruth, that doesn't make it untrue. There could be real consequences that arise, hence why it's in their interest to seek & know the truth. Lawyers are successful if they win the case, regardless of what the actual truth is.

          Originally posted by ms178 View Post
          Ford was aware of the safety issues but had decided not to make design changes due to cost considerations. Internal documents, including a cost-benefit analysis, suggested that Ford had calculated that it would be cheaper to pay for potential lawsuits resulting from injuries and deaths than to implement design modifications.
          This was taken out of context, which is just the sort of thing lawyers love to do. The issue Ford faced was that their product's safety didn't differ much from that of their peers. If you change the design, you increase the product cost, thereby causing more consumers to buy (what are now less-safe) competing vehicles, and meanwhile hurting the company's bottom line. In the end, have you really saved lives?

          Now, the underlying problem was that automobile safety wasn't a selling point, which was a consequence of the fact that there wasn't high-quality independent safety data available for consumers to use in making purchasing decisions. Had that been the case, there could've been added value from the safer design and it wouldn't be hard to justify. That's what really changed, between then and now. But, people like you love to stand on their high moral horse.

          I guarantee you that, even today, automotive & other companies continue to make cost vs. safety tradeoffs. It's just that the market value of safety has increased to the point where these decisions are more at the margins than around major deficiencies.

          Originally posted by ms178 View Post
          going the used hardware route is not suitable for everyone and obviously not meant for businesses. I brought that up as an option for Linux hobbyists and tinkerers as a cost-effective solution as people using Linux on the desktop tend to be more tech-savvy still.
          BTW, I also believe it's not worthwhile to use old hardware. We've seen numerous examples documented on this site of long-standing bugs affecting legacy hardware. Therefore, the case could be made to prefer a newer CPU of lesser capabilities than something much older. My personal policy is to try to stay within a window of 1 to 10 years. By the time hardware is 1 year old, software support is usually very good. By the time it's more than 10 years old (which, coincidentally, lines up with Haswell's initial launch), vastly fewer people are using it and bugs are more likely to go unnoticed or fixing them de-prioritized.

          TBH, I don't have reservations about using Haswell, today. One of my home machines is a Haswell and I even run Ubuntu on it. However, I'd start to get uneasy about running machines much older than that (such as Core 2 or Nehalem). I think Sandybridge is probably a good cutoff point, but I'm unsure how long that will continue to be the case.

          Comment


          • #95
            Originally posted by coder View Post
            The problem we face is that you're thinking and arguing like a lawyer, rather than an engineer or business person. You just want to convince a jury, regardless of whether your client is truly innocent. That's the problem with lawyers. If an engineer or business person convinces themselves of an untruth, that doesn't make it untrue. There could be real consequences that arise, hence why it's in their interest to seek & know the truth. Lawyers are successful if they win the case, regardless of what the actual truth is.

            This was taken out of context, which is just the sort of thing lawyers love to do. The issue Ford faced was that their product's safety didn't differ much from that of their peers. If you change the design, you increase the product cost, thereby causing more consumers to buy (what are now less-safe) competing vehicles, and meanwhile hurting the company's bottom line. In the end, have you really saved lives?

            Now, the underlying problem was that automobile safety wasn't a selling point, which was a consequence of the fact that there wasn't high-quality independent safety data available for consumers to use in making purchasing decisions. Had that been the case, there could've been added value from the safer design and it wouldn't be hard to justify. That's what really changed, between then and now. But, people like you love to stand on their high moral horse.

            I guarantee you that, even today, automotive & other companies continue to make cost vs. safety tradeoffs. It's just that the market value of safety has increased to the point where these decisions are more at the margins than around major deficiencies.
            Please, I don't twist anyone's words but simply point out different perspectives on a problem that might shed a different light on the whole discussion. You bring up a lot of philosophical and legal questions here, but I cannot go into too much detail. I wouldn't use the word truth to begin with, as this is a morally loaded term; the questions in front of the court would be about negligence. And what your peers do might be an indicator, but it does not constitute a legal threshold of its own (as they could also do too little to meet the requirements that people can reasonably expect). Hence the important part is to define what the duty of care is in each case. When a company, such as an automobile manufacturer, designs and produces vehicles, it is under a legal obligation to take reasonable steps to ensure that the products are safe for their intended use. This duty encompasses several key principles:
            1. Design Defects: Manufacturers are expected to design products that are reasonably safe when used as intended. A design defect exists when a product's design is inherently unsafe, posing a risk of harm to consumers. In the context of vehicle design, this may involve issues such as structural weaknesses, inadequate safety features, or placement of critical components like fuel tanks.
            2. Foreseeability of Risks: Manufacturers are expected to anticipate and address foreseeable risks associated with the use of their products. If a risk is foreseeable, and steps could reasonably be taken to mitigate or eliminate that risk, a failure to do so may be considered a breach of the duty of care.
            3. Testing and Quality Control: Manufacturers are obligated to implement adequate testing and quality control processes to identify and rectify any defects or dangers in their products. This includes conducting thorough testing during the design and manufacturing phases and addressing any issues that arise.
            4. Industry Standards: Compliance with industry safety standards is often used as a benchmark for determining whether a manufacturer has met its duty of care. If a manufacturer fails to adhere to established industry standards, it may be considered evidence of negligence.
            5. Consumer Warnings: Manufacturers have a duty to provide clear and adequate warnings about any potential dangers associated with the use of their products. If a product has inherent risks that users may not be aware of, the manufacturer should provide sufficient warnings or instructions to mitigate those risks.
            Any breach in any of these areas might be enough to constitute negligent behavior. That's why companies need to invest certain amounts of money to reach that expected level of care (or they would cut these efforts to the bare minimum or even below that to get a product out the door, as these costs always cut into their profit margins). But if they do only the bare minimum, the risk is high that a jury or a judge out there might find their efforts insufficient, and they would have to pay for this in court, paying more in the end than if they had made the right decision from the start (which is the expected minimum in the view of the market plus a safety margin). Hence there needs to be such a safety margin to be certain to meet the legal expectations everywhere (which is hard to quantify beforehand but needs to follow the key principles mentioned above).

            Apply this standard to the market expectation of the products at hand. You will realize that these Elkhart Lake systems were not meeting that threshold for the anticipated product life span, which could have been mitigated by choosing a competitor's product with better features. This was a foreseeable risk for these customers, as it had been talked about since 2019.

            I am aware that this risk-mitigating thinking might conflict with the way of thinking of engineers and sales people. After all, a mixed team that offers different perspectives might yield the best advice for the decision maker. I just want to remind you that it is not up to them to define the necessary duty of care in the end; that's in the hands of a judge or a jury. An unhappy management might be even less forgiving.

            Originally posted by coder View Post
            BTW, I also believe it's not worthwhile to use old hardware. We've seen numerous examples documented on this site of long-standing bugs affecting legacy hardware. Therefore, the case could be made to prefer a newer CPU of lesser capabilities than something much older. My personal policy is to try to stay within a window of 1 to 10 years. By the time hardware is 1 year old, software support is usually very good. By the time it's more than 10 years old (which, coincidentally, lines up with Haswell's initial launch), vastly fewer people are using it and bugs are more likely to go unnoticed or fixing them de-prioritized.

            TBH, I don't have reservations about using Haswell, today. One of my home machines is a Haswell and I even run Ubuntu on it. However, I'd start to get uneasy about running machines much older than that (such as Core 2 or Nehalem). I think Sandybridge is probably a good cutoff point, but I'm unsure how long that will continue to be the case.
            I was hesitant at first about used hardware (or even sketchy new hardware from China). But I got addicted to used Xeons in the Westmere era. I could overclock an X5670 (unlocked 6-core, X58) to 4.2 GHz, which was insane considering the low base clock; my ASUS motherboard took a 200 MHz BCLK with ease. With that upgrade, the system lasted me a very long time, and I could even sell the used CPU and motherboard for a great price as they were highly sought after.

            As Haswell is still used as a common optimization target today, I still feel comfortable using it despite its age. It is interesting that there have been no microarchitectural leaps important enough to set a newer baseline in the consumer space to this date. With AVX10.2, APX and AI accelerators on the horizon, I doubt that AVX-512 will gain as much relevance in the consumer desktop market as Haswell/AVX2 did.
            Last edited by ms178; 26 December 2023, 06:37 PM.

            Comment


            • #96
              Originally posted by ms178 View Post
              Apply this standard to the market expectation of the products at hand. You will realize that these Elkhart Lake systems were not meeting that threshold for the anticipated product life span
              Again, that's simply not true. Point to another example of such a new CPU that was subsequently de-supported, so soon after shipping in substantial volume.

              Originally posted by ms178 View Post
              As Haswell is still used as a common optimization target today, I still feel comfortable using it despite its age. It is interesting that there have been no microarchitectural leaps important enough to set a newer baseline in the consumer space to this date.
              You're seeing only the CPU ISA/feature level stuff, and not system-level support. There's a lot more to a computer than just the CPU. Controllers for USB, SATA, PCIe, ACPI, and lots of other bits and bobs that have to work properly, for a trouble-free computing experience. Even at the CPU level, there are lots of details the kernel needs to worry about that userspace programs don't interact with. So, it's really kernel bugs in all of those numerous areas that I tend to worry about.

              Comment
