Intel + Microsoft Continue Work On Replacing More SMM "Black Boxes" With PRM

  • #31
    Originally posted by uid313 View Post
    I don't know if you are trolling, but I presume you do.
    Were you looking at a mirror while typing this?

    Originally posted by uid313 View Post
    You can't compare fabrication processes, because the transistor is not uniform, so it can be measured at different points; it kind of looks like a triangle. Also, nowadays it is largely a marketing thing. Intel's 10 nm and 14 nm are widely considered better than other companies' processes that are also labeled 10 and 14 nm. So Intel's 14 nm is probably like Samsung's 12 nm. That said, Intel has lost their lead, and it is true that TSMC's 5 nm is superior to Intel's 10 nm.
    Count the number of transistors in each, then. Why even mention this when you yourself said it's at least one generation ahead anyway? If not two, since even AMD's 7nm is one generation ahead, and this is ahead of AMD's.

    Originally posted by uid313 View Post
    Intel's x86 is older than the ARM, Alpha, POWER and SPARC architectures and predates the RISC revolution, so x86 has its legacy in the CISC era, even though internally it works much like a RISC processor nowadays.
    Yeah, that's what makes it good: the fact that it's CISC at the ISA level. It's easily extensible, commonly used simple instructions get small encodings, and you get direct operations with immediates (even at the micro-op level) plus single instructions that read/write memory and still decode to one uop. If you count that as CISC, as opposed to separate memory loads, then x86 is CISC even at the uop level (there's a quick sketch of this below). To get equivalent efficiency, a RISC would have to fuse two ISA instructions into one uop, which is beyond dumb and shows why the ISA itself is pure garbage.

    x86 has fusion as well, but mostly for common things like "compare then jump", which are already encoded efficiently at the ISA level (one byte for jumps, not counting the offset).

    Vector instructions may be long and large, but they do a lot and so it doesn't really matter.
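
    To make the memory-operand point concrete, here's a tiny C function together with the kind of code compilers commonly emit for it on each ISA (shown as comments). This is only an illustrative sketch; exact output depends on the compiler and flags.

    ```c
    /* A trivial read-modify-write with an immediate operand. */
    void bump(int *p)
    {
        /* Typical x86-64 output: one instruction combining the load,
         * the add with an immediate, and the store:
         *     add dword ptr [rdi], 5
         *
         * Typical AArch64 output: separate load / modify / store:
         *     ldr w8, [x0]
         *     add w8, w8, #5
         *     str w8, [x0]
         *
         * Likewise, an x86 cmp + conditional-jump pair is commonly
         * macro-fused by the front end into a single uop.
         */
        *p += 5;
    }
    ```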

    Originally posted by uid313 View Post
    While the old ARMv6 and v7 had their drawbacks and legacy baggage, ARMv8 is a very modern, legacy-free architecture; there is no doubt that it is superior to x86, nobody debates it.
    You talk about knowing CPU designs and then throw bullshit techno-babble buzzwords to prove the opposite? You clearly don't have the slightest clue what you're talking about. RISC is shit. Deal with it. They're just butthurt that x86 was/is under patents.



    • #32
      Originally posted by mdedetrich View Post
      Completely wrong. The numbers behind the fabrication process have been meaningless for a while; they don't represent a true metric anymore (especially since 3D gates started being used), and the only thing they do indicate is a fab's current generation of node. der8auer actually used a special laser/microscope to cut physical transistors to compare them (in this case AMD's 7nm and Intel's 14nm), and if you look at the physical size of the gates, the 7nm has no relation to the 14nm.

      You can see the video here: https://www.youtube.com/watch?v=1kQUXpZpLXI . His basic summary of the video is that "doing such comparisons is stupid" for many reasons (one of which is that since the gates are 3D, you are only using one axis as a comparison).
      Not as stupid as using a fab process 2 generations ahead and denying it's superior like a rabid fanboy.

      If you don't like it, then simply don't do it. Either all comparisons must be rejected since they're on different generations, period, or "doing such comparisons" is the only way to go and der8auer's advice is bullshit.



      • #33
        Originally posted by Weasel View Post
        Not as stupid as using a fab process 2 generations ahead and denying it's superior like a rabid fanboy.

        If you don't like it, then simply don't do it. Either all comparisons must be rejected since they're on different generations, period, or "doing such comparisons" is the only way to go and der8auer's advice is bullshit.
        Step 1: Watch the video (the guy literally cuts the transistors to physically compare the size)
        Step 2: Say less stupid things



        • #34
          Originally posted by mdedetrich View Post
          You do realize you are comparing a chip that is passively cooled versus chips that require a heatsink/fan/vapor chamber, on top of which they have to do ridiculous workarounds to make sure the thermals are in check, right?
          Yes I do, and that's because the uplift from 7nm to 5nm is insane, and also AMD's desktop chips have a GlobalFoundries 14nm part as the I/O die. Yes, that I/O die alone can be putting out 100°C+. Yes, the I/O die can cause the complete Ryzen chip to thermal throttle if you don't have good enough cooling. AMD's mobile chips put the I/O and the processor cores on a single die, all on the same node, which is why there is such a huge difference in power needs between AMD desktop and mobile parts. GlobalFoundries 14nm is close to TSMC 14nm, so the I/O die in the desktop part is 3 generations behind 7nm and 4 generations behind 5nm. The price of having parts that far behind is that you have to drive them harder, producing more heat, to get equal performance.

          AMD, on the desktop parts, made a time- and cost-cutting move to get their desktop CPUs to market. Of course this cost-cutting move has a price: bad thermals. Really bad thermals, from one point of view. But still way better thermals than Intel gets with their 14nm+++++, because at least some of the AMD silicon in the chip is closer to modern.

          This is not as bad as one historic node uplift, where reproducing the next node's passively cooled performance on the prior node required liquid nitrogen. Replicating next-generation performance on the prior generation does mean an insane difference in cooling and an insane difference in power usage as well. Sometimes this is worse than at other times; a heatsink/fan/vapor chamber to replicate the next generation is quite a light bit of cooling, really.

          Originally posted by mdedetrich View Post
          ARM's ISA is more modern and power efficient than x86/64; x86/64 has so much historical baggage that it hurts the chips' power efficiency. The only saving grace for x86/64 is the ecosystem built on it.
          This is more guesswork. The ARM ISA has a lot of baggage as well.



          • #35
            Originally posted by mdedetrich View Post
            Step 1: Watch the video (the guy literally cuts the transistors to physically compare the size)
            Step 2: Say less stupid things
            The problem here is that silicon nodes are 3D structures. The 3D structure defines their properties.

            https://hexus.net/tech/news/cpu/1456...icro-compared/

            This does not show well in the video. Note the bottom picture: 12+++ vs TSMC 7nm. Note what is boxed in the bottom picture: on the 12+++ all the same features are boxed, but only one of the TSMC 7nm features lines up. This is not a bad cut. TSMC 7nm is about one generation ahead of Intel 12+++, so TSMC 7nm would be expected to line up with Intel 10nm.

            Using these numbers as labels really does make it hell to work out which fab process is equal to which. Intel's nm numbers being about one generation ahead of where you would think used to make people compare Intel with AMD and conclude that, since these are the same nm, AMD's worse performance must be bad AMD design, when it was simply a production method difference.

            Production method differences are a very big game changer and should not be taken lightly. With the M1 being TSMC 5nm and next-generation AMD also going to be TSMC 5nm, this will be a rare case where designs can be compared on an equal footing for once.



            • #36
              Originally posted by oiaohm View Post

              The problem here is that silicon nodes are 3D structures. The 3D structure defines their properties.

              https://hexus.net/tech/news/cpu/1456...icro-compared/

              This does not show well in the video. Note the bottom picture: 12+++ vs TSMC 7nm. Note what is boxed in the bottom picture: on the 12+++ all the same features are boxed, but only one of the TSMC 7nm features lines up. This is not a bad cut. TSMC 7nm is about one generation ahead of Intel 12+++, so TSMC 7nm would be expected to line up with Intel 10nm.

              Using these numbers as labels really does make it hell to work out which fab process is equal to which. Intel's nm numbers being about one generation ahead of where you would think used to make people compare Intel with AMD and conclude that, since these are the same nm, AMD's worse performance must be bad AMD design, when it was simply a production method difference.

              Production method differences are a very big game changer and should not be taken lightly. With the M1 being TSMC 5nm and next-generation AMD also going to be TSMC 5nm, this will be a rare case where designs can be compared on an equal footing for once.
              Yes, I literally said this in my original post: because the gates are now 3D, any such metric comparison is meaningless. Even der8auer in the video said that such comparisons are meaningless; he literally said at the end of the video (paraphrasing), "what you should learn from this is that such comparisons are stupid".


              Originally posted by oiaohm View Post
              Yes I do, and that's because the uplift from 7nm to 5nm is insane, and also AMD's desktop chips have a GlobalFoundries 14nm part as the I/O die. Yes, that I/O die alone can be putting out 100°C+. Yes, the I/O die can cause the complete Ryzen chip to thermal throttle if you don't have good enough cooling. AMD's mobile chips put the I/O and the processor cores on a single die, all on the same node, which is why there is such a huge difference in power needs between AMD desktop and mobile parts. GlobalFoundries 14nm is close to TSMC 14nm, so the I/O die in the desktop part is 3 generations behind 7nm and 4 generations behind 5nm. The price of having parts that far behind is that you have to drive them harder, producing more heat, to get equal performance.

              AMD, on the desktop parts, made a time- and cost-cutting move to get their desktop CPUs to market. Of course this cost-cutting move has a price: bad thermals. Really bad thermals, from one point of view. But still way better thermals than Intel gets with their 14nm+++++, because at least some of the AMD silicon in the chip is closer to modern.

              This is not as bad as one historic node uplift, where reproducing the next node's passively cooled performance on the prior node required liquid nitrogen. Replicating next-generation performance on the prior generation does mean an insane difference in cooling and an insane difference in power usage as well. Sometimes this is worse than at other times; a heatsink/fan/vapor chamber to replicate the next generation is quite a light bit of cooling, really.



              This is more guesswork. The ARM ISA has a lot of baggage as well.
              Original ARM, yes; not ARM64 (ARM64 removed a lot of baggage and it's actually closer to MIPS). Even taking original ARM into account, it was designed from a clean slate in the late 80s, so it has a lot less baggage than Intel; furthermore, it was specifically designed for power efficiency (x86/64 was not). ARM ISAs in general also have had a lot more revisions (they don't carry the same backwards compatibility as x86/64).

              Also, you are exaggerating the difference between 5nm and 7nm; you can check Apple's own benchmarks across node sizes to figure this out. In reality, people were complaining that the performance improvement from 7nm to 5nm was more disappointing than expected, e.g. see https://kenyannews.co.ke/technology/...ked-benchmark/ .

              Even AMD's most recent performance improvement with Zen3 was only due to architectural reasons (i.e. having one global cache per CCD) rather than node size.

              So to set your expectations, don't expect the jump from TSMC's 7nm to 5nm to achieve anything more than 15-20% from the node shrink alone, and such changes will not meaningfully account for the current disparity. Of course AMD can make architectural improvements for Zen4, but Apple can do the same with the M2, so yeah (also, there are rumors that AMD is planning to release an ARM chip, so Apple has kind of opened the floodgates here).

              BTW, there is a good article here, with references, detailing how the ARM64 ISA has meaningful effects on performance/efficiency: https://www.extremetech.com/computin...m1-performance . For example, I just realized that hyperthreading was a gimmick for x86/64 to squeeze out more performance because of limitations in the ISA; ARM64 has 1 thread per core, which makes things a lot more efficient and easier to scale.
              Last edited by mdedetrich; 08 December 2020, 11:17 AM.



              • #37
                Originally posted by mdedetrich View Post
                Also, you are exaggerating the difference between 5nm and 7nm; you can check Apple's own benchmarks across node sizes to figure this out. In reality, people were complaining that the performance improvement from 7nm to 5nm was more disappointing than expected, e.g. see.
                Yet going from 7nm to 5nm, Snapdragon got a 25% uplift.


                This is not the only vendor that did this. Apple was really the odd one out: they dropped the node size but did not gain performance. Did they reduce power usage instead?

                Originally posted by mdedetrich View Post
                So to set your expectations, don't expect the jump from TSMC's 7nm to 5nm to achieve anything more than 15-20% from the node shrink alone, and such changes will not meaningfully account for the current disparity.
                It has been a serious mixed bag. Apple on 5nm has not been that great: the A14 on TSMC 5nm loses to a Snapdragon 865 that is 7nm, which means it loses by miles to a Snapdragon 875 that is 25% faster. There are multiple ARM vendors other than Apple who, going from their 7nm to their 5nm parts, got anywhere between a 25% and a 40% boost. The 40% boost is linked to which TSMC silicon node your design uses, not to architectural improvements. Yes, the fact that other vendors were getting a boost from the change is why the Apple example was disappointing.


                Originally posted by mdedetrich View Post
                BTW, there is a good article here, with references, detailing how the ARM64 ISA has meaningful effects on performance/efficiency: https://www.extremetech.com/computin...m1-performance . For example, I just realized that hyperthreading was a gimmick for x86/64 to squeeze out more performance because of limitations in the ISA; ARM64 has 1 thread per core, which makes things a lot more efficient and easier to scale.
                The reality is that even having an ISA advantage on paper does not mean Apple has not goofed their implementation; the A14 for phones at 5nm is either clearly goofed or optimised for power.

                Hyperthreading/SMT is not just a gimmick.


                There have been experiments with RISC-V; a 20-30% boost is in theory possible with SMT, even with ARM. It's the ability to have less inactive silicon. If you run two related threads on an SMT processor, you can need fewer cache operations to run those threads than when running them on two single-threaded cores. This is the horrible trap here: SMT's advantage is in multi-threaded workloads, at a price in single-threaded ones. So even if the ISA is not ideal single-threaded, putting SMT on it makes it perform very well in multi-threaded workloads.
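
                A rough way to see the SMT sharing effect on your own hardware is sketched below: it runs the same two busy integer threads once pinned to two separate physical cores and once pinned to two SMT siblings, then prints the wall-clock times. This is just a minimal Linux-specific sketch of my own (not from the thread), assuming glibc's pthread_setaffinity_np; the CPU numbers are placeholders, so check /sys/devices/system/cpu/cpu0/topology/thread_siblings_list for your machine's real topology.

                ```c
                /* smt_pair.c: compare two pinned busy threads on separate cores
                 * versus on SMT siblings. Build with: gcc -O2 -pthread smt_pair.c
                 * Linux-only sketch; the CPU numbers below are placeholders. */
                #define _GNU_SOURCE
                #include <pthread.h>
                #include <sched.h>
                #include <stdint.h>
                #include <stdio.h>
                #include <time.h>

                #define ITERS 200000000ULL

                static void *spin(void *arg)
                {
                    int cpu = *(int *)arg;
                    cpu_set_t set;
                    CPU_ZERO(&set);
                    CPU_SET(cpu, &set);
                    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

                    volatile uint64_t acc = 0;          /* integer busy work */
                    for (uint64_t i = 0; i < ITERS; i++)
                        acc += i ^ (acc >> 3);
                    return NULL;
                }

                static double run_pair(int cpu_a, int cpu_b)
                {
                    pthread_t a, b;
                    struct timespec t0, t1;
                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    pthread_create(&a, NULL, spin, &cpu_a);
                    pthread_create(&b, NULL, spin, &cpu_b);
                    pthread_join(a, NULL);
                    pthread_join(b, NULL);
                    clock_gettime(CLOCK_MONOTONIC, &t1);
                    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                }

                int main(void)
                {
                    /* Placeholder CPU ids: 0/2 assumed to be distinct physical
                     * cores, 0/1 assumed to be SMT siblings. Verify via sysfs. */
                    printf("two physical cores: %.2fs\n", run_pair(0, 2));
                    printf("SMT siblings:       %.2fs\n", run_pair(0, 1));
                    return 0;
                }
                ```

                On a typical SMT machine the sibling run takes noticeably longer per thread, but the combined throughput of one physical core running both threads is higher, which is exactly the multi-threaded-gain-at-single-threaded-cost trade-off described above.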



                • #38
                  Originally posted by uid313 View Post
                  Since Apple's new M1 CPU seems to execute emulated x86 code faster than Intel and AMD execute native x86 code, maybe it is time for Intel and AMD to just ditch x86 and build an ARM CPU or a RISC-V CPU with emulation for x86?
                  The M1 is a RISC CPU with on-chip capabilities to deal with x86 instructions efficiently. (Not that good, though; the performance is just in the acceptable range.)

                  It might surprise you to know that Intel and AMD, as well as other x86 vendors, have been doing that since the mid-90s.

                  The main speed and efficiency benefits of the M1 are its 5nm node and the fact that Apple has failed to cool their CPUs for nearly 20 years now; what can the newest i9 do if it hits 95°C the moment there is some load and clocks far under its base clock? Now they have a CPU that can actually clock up in their bad designs.



                  • #39
                    Yeah, no, still not gonna trust Intel crap.

                    Bad enough my 2014-era i5 has it built in, and that system stays off the net.

                    And if Microsoft wants to finally convince me that, yes, they've really changed their ways and, no, they're not gaslighting their employees and us all... they can release under FLOSS licenses the code they use to bring up the x86/x86_64 platforms and handle ACPI, so that those systems which still can't boot Linux from the hard drive without disabling ACPI -- like my FX-8800p-based HP Envy 15h -- can finally and forever be fixed.

                    That'll be a big enough shooting of themselves in the foot that I'll start trusting them a little.

                    Still wouldn't trust their code somewhere so delicate, though.



                    • #40
                      Originally posted by Alexmitter View Post

                      The M1 is a RISC CPU with on-chip capabilities to deal with x86 instructions efficiently. (Not that good, though; the performance is just in the acceptable range.)

                      It might surprise you to know that Intel and AMD, as well as other x86 vendors, have been doing that since the mid-90s.

                      The main speed and efficiency benefits of the M1 are its 5nm node and the fact that Apple has failed to cool their CPUs for nearly 20 years now; what can the newest i9 do if it hits 95°C the moment there is some load and clocks far under its base clock? Now they have a CPU that can actually clock up in their bad designs.
                      So you're saying that once Intel and AMD get on 5nm they will have performance per watt like the Apple M1?
