Intel CPUs Reportedly Vulnerable To New "SPOILER" Speculative Attack


  • torsionbar28
    replied
    Originally posted by Wojcian View Post
    Stop playing strawman now. It's as relevant as mentioning Russia and Iran in this thread.
    You were saying? https://www.theinquirer.net/inquirer...g-eu-elections



  • Weasel
    replied
    Originally posted by wswartzendruber View Post
    By that logic, guns shouldn't have safeties and cars don't need seatbelts.
    Except I can easily show you that car accidents and gun accidents do happen, which invalidates the entire point. Try harder.



  • wswartzendruber
    replied
    Originally posted by Weasel View Post
    It's just for PR. Or they do it for *drumroll* servers.

    Show me a single normal (non-server) user hit by one of these speculative execution exploits and the extent of the damage (many disable the protections for performance btw).

    I'm waiting.
    By that logic, guns shouldn't have safeties and cars don't need seatbelts.



  • Weasel
    replied
    Originally posted by Wojcian View Post
    So, following your logic, every serious operating system is made by paranoid wackos. Simply wonderful, genius.
    It's just for PR. Or they do it for *drumroll* servers.

    Show me a single normal (non-server) user hit by one of these speculative execution exploits and the extent of the damage (many disable the protections for performance btw).

    I'm waiting.



  • Wojcian
    replied
    Originally posted by Weasel View Post
    You mean not satisfying paranoid wackos. And the analysts rightfully don't care about paranoia.
    So, following your logic, every serious operating system is made by paranoid wackos. Simply wonderful, genius.



  • Weasel
    replied
    Originally posted by wswartzendruber View Post
    Interestingly, stock analysts continue to rate Intel as a "buy" despite disturbing patterns in their security practices.
    You mean not satisfying paranoid wackos. And the analysts rightfully don't care about paranoia.



  • wswartzendruber
    replied
    Originally posted by Wojcian View Post

    Yep, they're the result of insecurity by design, which makes me wonder whether there are really so many idiots in the CPU business or whether, perhaps, these vulnerabilities were introduced intentionally.
    I think it likely that profits drove CPU design rather than security. If the engineers say the preferred CPU design is also slow, I much doubt the executives would go for that. But if the engineers design something fast, but possibly insecure, I think the executives would go for it. So long as profits continue to come in.

    Interestingly, stock analysts continue to rate Intel as a "buy" despite disturbing patterns in their security practices. To customers, the product and its utility are the whole reason for spending money. But to executives, the product is merely a means to an end, and the end is only about making more money.



  • Wojcian
    replied
    Originally posted by torsionbar28 View Post
    Lolwhat? How is an attack on a ship that occurred in 1967 relevant to a discussion about security vulnerabilities in IT equipment?
    Stop playing strawman now. It's as relevant as mentioning Russia and Iran in this thread.

    Don't like Intel CPUs? Feel free to switch to Loongson or KaiXian or Dhyana. Or just use AMD, as these new SPOILER vulnerabilities don't apply there. Never attribute to malice what can plausibly be explained by incompetence. Thinking like yours is what creates conflicts.

    Anyone who's been around IT for a while (not you, it seems) clearly recognizes these flaws for what they are. Sometimes a cigar is just a cigar, i.e. no tin foil hat required.
    Yep, they're the result of insecurity by design, which makes me wonder whether there are really so many idiots in the CPU business or whether, perhaps, these vulnerabilities were introduced intentionally.



  • torsionbar28
    replied
    Originally posted by Wojcian View Post
    No, I've just seen analyses and reports, so I have enough knowledge to say it was a false flag. When it comes to the USS Liberty, there's something more to it. I respect every soldier who serves his country well and who doesn't harm innocent people (being a soldier is not just about killing, but about saving and helping others). By mentioning the X-Files I can safely say you have neither honor nor respect for the victims of these acts of war.

    P.S. It seems you didn't bother to acknowledge that it's a US CPU company that polluted most of the countries in the world with potential backdoors in every possible IT segment. By accusing China or Iran in this case you just show your words are trash.
    Lolwhat? How is an attack on a ship that occurred in 1967 relevant to a discussion about security vulnerabilities in IT equipment?

    Don't like Intel CPUs? Feel free to switch to Loongson or KaiXian or Dhyana. Or just use AMD, as these new SPOILER vulnerabilities don't apply there. Never attribute to malice what can plausibly be explained by incompetence. Thinking like yours is what creates conflicts.

    Anyone who's been around IT for a while (not you, it seems) clearly recognizes these flaws for what they are. Sometimes a cigar is just a cigar, i.e. no tin foil hat required.



  • Hugh
    replied
    Originally posted by Weasel View Post
    I mean in some cases, for specific pieces of code. Nothing to do with probability.

    If an app is overall fast but dog slow when filling some table then a user isn't going to give a shit that it's "fast on average" because he will rage when he has to fill the table, even if he only fills it once per month.

    The point was that branch prediction is essential even to instruction scheduling, which can only be done at runtime, because branch prediction is done at runtime. When you compile the code, no matter how smart the compiler is (you could even hand-code it in asm!), you will NOT be able to schedule it effectively for BOTH situations, and BOTH can happen, depending on what the user does.

    Well, other than rewriting the code and duplicating the function to handle each case (but the example was over-simplified btw). We all know most people don't code or accept such code, so real world code will be written "poorly" and thus Itanium suffers in practice on real world code.
    Of course probabilities matter in performance analysis. CPU designers (now) make their decisions based on this. Just look at the title of the classic text "Computer Architecture: A Quantitative Approach" https://www.elsevier.com/books/compu...-0-12-811905-1

    Every once in a while a low-use, high-impact instruction is invented and added to an architecture. Famously, a three-letter agency got Seymour Cray to add "population count" to the CDC 6600 (it was already in the IBM Stretch / Harvest machine for the same reason). Intel added AES-NI for a similar reason.

    CISC machines often had cute instructions that made some things faster, but were not worth the silicon in performance terms. The VAX gets picked on for the subroutine linkage instructions and the polynomial evaluation instructions. But it was also true of the IBM/360 SS instructions; the IBM/360 model 44 had anomalously good price/performance because it didn't implement them (as a result of its reduced instruction set, it didn't need microcode).

    The difficulty of branch prediction varies. In GPU workloads, branches are very expensive -- typically both sides of an IF are evaluated, sequentially. So workloads that allow for static prediction are heavily favoured.

    In "commercial" workloads, branches are quite heavily used and are hard to predict.

    In interpreters for dynamic languages (e.g. JavaScript), a lot is up in the air until runtime (e.g. the datatype of a variable), so JITs that handle this are favoured. Note: people have repeatedly invented architectures that handled this and yet failed in the marketplace. Think of LISP machines, for example. The SPARC architecture has support for tagged data, and yet I don't think it was much used (I could be wrong).

    If your problem matters a lot, and it isn't supported efficiently by current processors, you might have to use FPGAs or even ASICs. Examples: bitcoin mining; network packet processing at speed; the fancy math done by modems.

    Bad prediction causes pipeline bubbles. One way to reduce the cost is to have shorter pipelines. My impression is that what killed the Pentium 4 was its long pipelines (in an attempt to allow higher clock rates). It's a tough trade-off. But CISC usually adds a pipeline stage just for cracking the instruction, and perhaps an earlier stage for gathering it (since instructions are variable-length).
    Last edited by Hugh; 10 March 2019, 11:03 AM.

