Intel Rolls Out 10nm Pentium/Celeron CPUs, Previews Rocket Lake


  • #31
    Originally posted by vladpetric View Post
    You know, there is actually a way to test it out to some degree. With an AMD Ryzen retail, on an x570 pro mobo or similar, you try vanilla rowhammer (not ECCploit) ... If you're successful at vanilla rowhammer, it means that ECC doesn't work. If you're not successful ... well, you haven't really learnt much.
    A common tactic is to just overclock the RAM until you start getting memory errors, and then see if ECC corrects and reports them.

    Another option is to try an ECC DIMM that's known to be defective.

    Probably the best (and most difficult) option is to use the kernel's support for injecting faults and see whether they're reported. A guy in the RealWorldTech thread where Linus posted was discussing this, but hadn't updated with his results, last I checked.

    BTW, someone posted a link to an article showing that rowhammer is still possible (though much more difficult), even with ECC. But, what should be a lot easier is to try rowhammer and see if it causes an ECC event. That's all you need -- for it to trigger a correctable error. However, if your ECC support is indeed broken, then I guess you'd get a successful rowhammer, instead.
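    For the reporting side of that experiment, here's a minimal sketch (my own illustration, assuming the platform's EDAC driver is loaded) that reads the kernel's standard EDAC sysfs counters for corrected (CE) and uncorrected (UE) memory errors after a rowhammer or overclock stress run:

    ```python
    # Read the kernel's EDAC error counters. The ce_count/ue_count files are
    # the standard EDAC sysfs attributes; if no mc* directories exist, the
    # kernel has no EDAC driver loaded and ECC events won't be logged here.
    from pathlib import Path

    def read_edac_counts():
        counts = {}
        for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
            ce = int((mc / "ce_count").read_text())
            ue = int((mc / "ue_count").read_text())
            counts[mc.name] = (ce, ue)
        return counts

    if __name__ == "__main__":
        counts = read_edac_counts()
        if not counts:
            print("No EDAC memory controllers found -- ECC events won't be reported here.")
        for name, (ce, ue) in counts.items():
            print(f"{name}: corrected={ce} uncorrected={ue}")
    ```

    If the CE counter ticks up during the stress run, ECC is both working and being reported; if you get silent bit flips instead, the ECC path is broken somewhere.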



    • #32
      Originally posted by Inopia View Post
      TBH they kind of skipped the 5th gen by not really releasing more than token Broadwell desktop offerings.
      It was still a design iteration and had its own 5000-series numbering. And as a partial substitute, they released the Haswell-Refresh series (although that was still numbered as part of the 4000-series).



      • #33
        Originally posted by tildearrow View Post
        What happened to the fused Intel+AMD GPU laptop chip?
        Long-ago discontinued. What did you expect?
        Last edited by coder; 11 January 2021, 10:23 PM.



        • #34
          Nobody has yet mentioned the 14nm Rocket Lake desktop CPUs. Anandtech noticed a mention of a 125 W TDP (with 250 W turbo) in the footnotes of one of the slides. I wonder where that'll put its base clocks.
          Last edited by coder; 11 January 2021, 10:24 PM.



          • #35
            Originally posted by coder View Post
            Not exactly. Fedora is drawing the line at "near-Nehalem", which is the ISA level before AVX and a bar which these definitely clear.
            RHEL 9 is doing this, but when did Fedora decide to switch to that?



            • #36
              I know what STEM stands for... I want to know what exactly is 78% faster.



              • #37
                Originally posted by Space Heater View Post
                RHEL 9 is doing this, but when did Fedora decide to switch to that?
                You're right -- my bad. I'll fix my original post.

                Here's the article: https://www.phoronix.com/scan.php?pa...86-64-v2-Plans



                • #38
                  Originally posted by coder View Post
                  They could still split them in half and execute them as 2x 128-bit parts. Didn't AMD do that in Zen 1? I know Intel did that with SSE, in the Pentium 4.

                  In terms of area, AVX bloats the register file, but I wonder if that's even enough to bother about.

                  Anyway, let's not forget this is a 10-way core with 6-wide decode! So, we're not exactly talking about a microcontroller or IoT core. And its ancestors had SSE/SSE2 going at least as far back as the 22 nm days, so you'd expect at least basic AVX/AVX2-support wouldn't be completely off the table.

                  In case you missed it: https://en.wikichip.org/wiki/intel/m...t#Architecture
                  Register file, ALUs, and bypass networks cost significantly more power when things are 256-wide.

                  Sure, they could split them into 2x 128 bits, but then you might as well just use regular SSE4.2. (AMD has a few instructions that do work on their early processors, but performance sucks to the point that the specialization isn't worth using; my main beef is with pext/pdep.)
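                  To make the pext/pdep gripe concrete, here's a pure-Python model of what those two BMI2 instructions compute (my own illustration -- the x86 versions do this in a cycle or two in hardware, whereas early AMD implementations fall back to slow microcode, which is the complaint):

                  ```python
                  def pext(value: int, mask: int) -> int:
                      """Gather the bits of `value` selected by `mask` into the low bits."""
                      result, out_bit = 0, 0
                      while mask:
                          low = mask & -mask          # lowest set bit of the mask
                          if value & low:
                              result |= 1 << out_bit
                          out_bit += 1
                          mask &= mask - 1            # clear that bit and move on
                      return result

                  def pdep(value: int, mask: int) -> int:
                      """Scatter the low bits of `value` into the positions set in `mask`."""
                      result, in_bit = 0, 0
                      while mask:
                          low = mask & -mask
                          if value & (1 << in_bit):
                              result |= low
                          in_bit += 1
                          mask &= mask - 1
                      return result
                  ```

                  The two are inverses over the masked bits: pdep(pext(x, m), m) == x & m. Doing this bit-by-bit in software (or microcode) is exactly why a slow implementation makes the instructions not worth specializing for.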

                  Look, you're generally making good points, I don't mean to nitpick.



                  • #39
                    Originally posted by vladpetric View Post
                    Look, you're generally making good points, I don't mean to nitpick.
                    Thanks. No worries.



                    • #40
                      Originally posted by coder View Post
                      They could still split them in half and execute them as 2x 128-bit parts. Didn't AMD do that in Zen 1? I know Intel did that with SSE, in the Pentium 4.

                      In terms of area, AVX bloats the register file, but I wonder if that's even enough to bother about.

                      Anyway, let's not forget this is a 10-way core with 6-wide decode! So, we're not exactly talking about a microcontroller or IoT core. And its ancestors had SSE/SSE2 going at least as far back as the 22 nm days, so you'd expect at least basic AVX/AVX2-support wouldn't be completely off the table.

                      In case you missed it: https://en.wikichip.org/wiki/intel/m...t#Architecture
                      I'm curious what out-of-order decode actually means. Do you know?

                      I kinda' doubt that they're doing out-of-order fetch.

                      Decoding instructions is for the most part stateless (the first instruction doesn't affect the decoding of the second, unless you do some fusion at decode), so the order in which you decode them doesn't matter much, as long as you rename them in the correct order.
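                      A toy sketch of that point (purely illustrative, not any real core's pipeline): decoding each instruction is an independent, order-free step, but renaming has to consume them in program order, because each rename depends on the register mapping left by earlier instructions:

                      ```python
                      def decode(raw: str) -> dict:
                          # Stateless: each instruction parses on its own, in any order.
                          op, dst, src = raw.replace(",", " ").split()
                          return {"op": op, "dst": dst, "src": src}

                      def rename(decoded: list) -> list:
                          mapping = {}                # architectural reg -> physical reg
                          next_phys = 0
                          renamed = []
                          for ins in decoded:         # must iterate in program order
                              src = mapping.get(ins["src"], ins["src"])
                              phys = f"p{next_phys}"
                              next_phys += 1
                              mapping[ins["dst"]] = phys
                              renamed.append((ins["op"], phys, src))
                          return renamed

                      program = ["mov r1, r0", "add r2, r1", "add r1, r2"]
                      # Decode in any order (here: backwards), then reassemble program order:
                      decoded = list(reversed([decode(r) for r in reversed(program)]))
                      renamed = rename(decoded)
                      ```

                      Decoding backwards gives the same result as decoding forwards, but feeding `rename` out of order would wire up the wrong dependencies.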

