Linux 5.8 Will Finally Be Able To Control ThinkPad Laptops With Dual Fans

  • #31
    Originally posted by torsionbar28 View Post
    1. Non-x86 architectures (like POWER) own a sizable portion of the HPC market. numacross disputed this claim, saying "Do you have any numbers to back those claims up?" So I provided the numbers which back the claim, showing 4 of the top 10 are non-x86. i.e. "a sizable portion".
    In the entire TOP500 list you posted there's:
    • 1 Sunway - 0.2%
    • 2 ARM - 0.4%
    • 3 SPARC - 0.6%
    • 4 PowerPC - 0.8%
    • 10 POWER - 2%
    • 480 x86/amd64 - 96%
    This is not "a sizable portion of the HPC market".
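
    For reference, here is a quick way to recompute those shares from the counts above (a minimal sketch; the 500-system total is simply the size of the TOP500 list):

    Code:
counts = {
    "Sunway": 1,
    "ARM": 2,
    "SPARC": 3,
    "PowerPC": 4,
    "POWER": 10,
    "x86/amd64": 480,
}
total = 500  # the full TOP500 list
for arch, n in counts.items():
    print(f"{arch}: {n} systems = {n / total:.1%}")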

    Comment


    • #32
      Originally posted by Sonadow View Post

      Not a fan of thick laptops anymore after having bought and used three Chinese fanless laptops with Apollo Lake and Gemini Lake processors.

      And many mainstream mobile Intel processors can now run fanless. These days it's pretty much only Ryzen laptops that still need fans.
      If a ULV chip is enough for you, that's absolutely fine, but cooling becomes a real problem once you get into mobile workstation territory, which is what I'm looking for in a laptop. I don't think I'd even enjoy doing basic things on a slow CPU.

      Comment


      • #33
        Originally posted by uid313 View Post
        I am not a Mac user, but I would kind of like one of those new upcoming ARM-based MacBooks to run Linux on.
        Zen 3 can't do anything; sure, RDNA2 can be great and the graphics capabilities may turn out to be impressive, and while Zen 3 will be better than Zen 2, it is still based on the inferior x86 architecture. So while it can be less bad than before, it's still stuck with all the legacy baggage, outdated design and poor design decisions of the x86 ISA.



        Well, modern x86 systems don't even use a BIOS anymore, they use UEFI. There is some UEFI work going on in the ARM world too, but the biggest manufacturer is Qualcomm, and they don't want any standard because they want short product life cycles for ARM-based products so they can sell new SoCs.

        The lack of standardized init for ARM is quite unfortunate, as the architecture is very well designed and hence has great potential for good performance.
        That's a huge number of assumptions and big claims from you.

        ARM has existed for a huge number of years - it should have been running rings around x86 processors for years already if your claims were true.

        And how can you even write "Zen 3 can't do anything" without blushing? Is there really nothing that makes you blush?

        Lots of claims of an inferior x86 architecture. Well, lots and lots of years ago - maybe even before you were born - quite a number of other people claimed x86 was inferior. The 680xx died. PPC hasn't had an easy time. MIPS? The Alpha processor? They must all have gone away because of the poor design decisions of the x86... When AMD took the memory interface from the Alpha and merged it with their x86 cores, they suddenly had an Alpha killer. The interesting thing about x86 is its small instructions - it means that lots of extra instructions fit in the cache.

        Next thing - you claim x86 is insecure. But you don't seem to realize that it isn't the x86 instruction set that makes the Intel processors suffer - it's the implementation of the cache layers. And that isn't something that relates to the x86 architecture itself. And we haven't seen it all yet - we might see more processors with similar problems once enough time is invested in reverse-engineering their behavior. I'm not sure if you realize it, but smart cards and other security-oriented designs need additional work to keep timing and current consumption identical whatever the processor is doing - all to avoid leaking information about the internal encryption steps. Real workstation-class processors need the same, if the goal is to protect them from leaking information to hostile software. But that kind of mitigation comes at a cost - when a cache miss costs 100 times as much as a cache hit, you can't just run everything in constant time unless you are willing to slow the processor down to a crawl.

        So in the end, the security problem isn't an x86 problem. It's a question of caching, segmentation of memory, etc. And the solution - on x86 or ARM or anything else - is to either have the processor switch between a fast mode and a secure mode, or to offload secure operations to a secondary processor. And to introduce RAM encryption, etc. But again - the mitigations aren't for x86. The mitigations are needed for any general-purpose architecture.
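
        To illustrate the timing-leak point with something concrete (just a minimal Python sketch, not code from any of the affected products): a naive byte-by-byte comparison returns as soon as it finds a mismatch, so its running time leaks how many leading bytes were correct, while a constant-time comparison touches every byte regardless.

        Code:
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as the first mismatching byte is found.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest is written to take the same time
    # regardless of where (or whether) the inputs differ.
    return hmac.compare_digest(a, b)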

        Finally - if you are now the world's premier processor designer, then let us know what design decisions you would have made differently when Intel designed the x86 processor. They had the best engineers in the world available. But they didn't have you. So enlighten us. Let the beam of light shine on the uneducated.

        The biggest problem with x86 is Intel. And at Intel, the biggest problem isn't the engineers but the management. Intel has had very shitty management for a huge number of years. Like the morons who decided that Intel chipsets should only support RAMBUS memory. Or the criminals who demanded that Dell ship only Intel computers. Or the fools who ordered the development of the P4 processor with the requirement "maximize frequency at all cost". Or the greedy bastards who have milked the company for years just because Intel was so far ahead of AMD, instead of letting the engineers rework the implementation based on today's knowledge and tools. But none of these issues are x86 problems - only Intel management fuckups.

        Comment


        • #34
          Originally posted by Sonadow View Post

          2.5" bays serve no purpose if M.2 slots are present. Especially if the M.2 slot supports NVMe, which is miles faster than SATA3.

          And there's no way a 1TB or 2TB M.2 SSD is going to get filled up by 50,000 RAW shots, so that 4TB 2.5" SSD has no advantage at all over the 1TB M.2 SSD.
          My aging Canon 5D Mark II needs about 25-30 MB per RAW image - more if configured to save both JPEG + RAW. So 50,000 shots at 25 MB each is roughly 1.25 TB. Real cameras have a tendency to eat disk like crazy.
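
          A quick back-of-the-envelope check (a minimal sketch; the per-file sizes are the rough figures mentioned above, and the JPEG size is an assumed guess):

          Code:
shots = 50_000
raw_mb = 25       # rough size of one 5D Mark II RAW file, in MB
jpeg_mb = 8       # assumed rough size of a full-size JPEG, if saving RAW + JPEG

raw_only_tb = shots * raw_mb / 1_000_000
raw_jpeg_tb = shots * (raw_mb + jpeg_mb) / 1_000_000
print(f"RAW only:   {raw_only_tb:.2f} TB")   # ~1.25 TB
print(f"RAW + JPEG: {raw_jpeg_tb:.2f} TB")   # ~1.65 TB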

          Comment


          • #35
            Originally posted by zyxxel View Post
            ARM has existed for a huge number of years - it should have been running rings around x86 processors for years already if your claims were true.
            ARM has traditionally targeted the embedded market, not performance markets such as workstations and servers.
            Also, while ARM has existed for a very long time, ARMv8 is not to be confused with ARMv7. ARMv8 (AArch64) is a modern, clean architecture and a departure from the earlier ARM designs.
            ARM now has the new Cortex-X1 core, which is aimed squarely at performance. It looks very promising. Expect a Cortex-X1 at 3 GHz to outperform anything from Intel and AMD at 4.5 GHz.

            Originally posted by zyxxel View Post
            Lots of claims of an inferior x86 architecture. Well, lots and lots of years ago - maybe even before you were born - quite a number of other people claimed x86 was inferior. The 680xx died. PPC hasn't had an easy time. MIPS? The Alpha processor? They must all have gone away because of the poor design decisions of the x86... When AMD took the memory interface from the Alpha and merged it with their x86 cores, they suddenly had an Alpha killer. The interesting thing about x86 is its small instructions - it means that lots of extra instructions fit in the cache.
            Intel was able to come out on top despite a worse architecture thanks to enormous resources, deep pockets and a multi-year lead in manufacturing technology. Intel has since lost that lead, and companies like TSMC have caught up; Samsung and GlobalFoundries have caught up as well.

            Originally posted by zyxxel View Post
            Next thing - you claim x86 is insecure. But you don't seem to realize that it isn't the x86 instruction set that makes the Intel processors suffer - it's the implementation of the cache layers. And that isn't something that relates to the x86 architecture itself. And we haven't seen it all yet - we might see more processors with similar problems once enough time is invested in reverse-engineering their behavior. I'm not sure if you realize it, but smart cards and other security-oriented designs need additional work to keep timing and current consumption identical whatever the processor is doing - all to avoid leaking information about the internal encryption steps. Real workstation-class processors need the same, if the goal is to protect them from leaking information to hostile software. But that kind of mitigation comes at a cost - when a cache miss costs 100 times as much as a cache hit, you can't just run everything in constant time unless you are willing to slow the processor down to a crawl.

            So in the end, the security problem isn't an x86 problem. It's a question of caching, segmentation of memory, etc. And the solution - on x86 or ARM or anything else - is to either have the processor switch between a fast mode and a secure mode, or to offload secure operations to a secondary processor. And to introduce RAM encryption, etc. But again - the mitigations aren't for x86. The mitigations are needed for any general-purpose architecture.
            The x86 architecture is so inefficient that it needs simultaneous multithreading (SMT); Intel calls their implementation Hyper-Threading. Architectures like ARMv8 (AArch64) and RISC-V are efficient enough that they don't need SMT, and without SMT they don't suffer from many of the vulnerabilities tied to speculative execution.
            A CPU has a front-end and a back-end; the back-end on x86 is inefficient, so the front-end sits idle, which is why Intel uses SMT to run two threads simultaneously so the front-end doesn't stay idle. With efficient ISA designs this is not needed.
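
            For anyone who wants to see what SMT actually looks like on their own machine, here is a minimal sketch (Linux-only; it just reads the sysfs topology files, nothing vendor-specific is assumed):

            Code:
# List which logical CPUs share a physical core (SMT siblings) on Linux.
import glob

seen = set()
for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")):
    with open(path) as f:
        siblings = f.read().strip()
    if siblings not in seen:
        seen.add(siblings)
        print("physical core with logical CPUs:", siblings)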

            Originally posted by zyxxel View Post
            Finally - if you are now the world's premier processor designer, then let us know what design decisions you would have made differently when Intel designed the x86 processor. They had the best engineers in the world available. But they didn't have you. So enlighten us. Let the beam of light shine on the uneducated.
            Intel had a good design with x86 compared to other designs of the time when it was launched. It was a nicer architecture than the earlier VAX designs.

            Originally posted by zyxxel View Post
            The biggest problem with x86 is Intel. And at Intel, the biggest problem isn't the engineers but the management. Intel has had very shitty management for a huge number of years. Like the morons who decided that Intel chipsets should only support RAMBUS memory. Or the criminals who demanded that Dell ship only Intel computers. Or the fools who ordered the development of the P4 processor with the requirement "maximize frequency at all cost". Or the greedy bastards who have milked the company for years just because Intel was so far ahead of AMD, instead of letting the engineers rework the implementation based on today's knowledge and tools. But none of these issues are x86 problems - only Intel management fuckups.
            Those are just a few of the mismanagement examples. Others include not pursuing 64-bit, dual-core, etc. - we got 64-bit and dual-core x86 processors thanks to AMD - and not supporting new versions of USB because they wanted to push Thunderbolt. But AMD will also be limited in what they can do with the legacy x86 architecture.

            Comment


            • #36
              Originally posted by uid313 View Post
              The x86 architecture is so inefficient that it needs simultaneous multithreading (SMT); Intel calls their implementation Hyper-Threading. Architectures like ARMv8 (AArch64) and RISC-V are efficient enough that they don't need SMT, and without SMT they don't suffer from many of the vulnerabilities tied to speculative execution.
              A CPU has a front-end and a back-end; the back-end on x86 is inefficient, so the front-end sits idle, which is why Intel uses SMT to run two threads simultaneously so the front-end doesn't stay idle. With efficient ISA designs this is not needed.
              No, this isn't true. The name used is totally irrelevant. Not all instructions have the same processing needs - and having a floating "cloud" of computational units inside the core that the pipeline can feed instructions into is an advantage for keeping a high computation rate without requiring silly amounts of memory bandwidth.

              And hyperthreading still has nothing to do with the x86 architecture. It's an implementation choice. Another choice would be fully separate processor cores with their own individual pipelines - letting each individual core stall when it lacks data, or when the pipeline has to wait because an execution unit isn't ready yet.

              Doing out-of-order execution into a "cloud" of execution units is a rather common way to build superscalar processors that can use a very wide memory interface to bring in and execute multiple "sequential" instructions concurrently. It especially doesn't make sense to have an FPU adder and an FPU multiplier just hanging around in case the current instruction stream happens to need some floating-point processing - that's where you get the gain from pooling the internal execution units instead of "hard-coding" them into a single core.

              The bothersome thing here isn't the hyperthreading - it's that Intel management hasn't allowed the engineers to move from two threads to more threads, all feeding work into an even larger pool of execution units. But again, that is not x86 architecture - that's implementation.


              Intel had a good design with x86 compared to other designs of the time when it was launched. It was a nicer architecture than the earlier VAX designs.


              Those are just a few of the mismanagement examples. Others include not pursuing 64-bit, dual-core, etc. - we got 64-bit and dual-core x86 processors thanks to AMD - and not supporting new versions of USB because they wanted to push Thunderbolt. But AMD will also be limited in what they can do with the legacy x86 architecture.
              Hm. In your previous post, you told us how bad the design of the x86 architecture is. You also used the term ISA several times.

              You did write things like:
              - "Yes, the Intel x86 is the worst of all established architectures."
              - "still stuck with the shitty ass x86 architecture"
              - "poor design decisions of the x86 ISA"

              When you get cornered, you suddenly start to discuss bad Intel management - i.e. that Intel hasn't invested in keeping the actual implementation updated. But that wasn't what the previous posts were about. You have been very clear throughout this thread that it's the x86 ISA you think is so totally inferior, while completely failing to separate the ISA from implementations of the ISA.

              An ISA is - by definition - something very abstract. It's just what the instructions look like. And it's the short, variable-size x86 instructions that have let x86 spank so many RISC processors. For years, the claim was that it's too complicated to decode variable-length instructions. But the x86 ISA has never really suffered from this. It's the variable-length instructions that have given x86 denser binaries - which means less memory bandwidth and less cache needed.

              And no - it wasn't because of AMD that we got dual-core x86 processors. It was because of physics. Around the P4 era, it was quite obvious that physics wouldn't allow us to keep stepping up clock frequencies. For many years, high-end machines had already been running multi-processor solutions. The logical next step was that, with improved cooling and higher integration (reducing the energy consumption per transistor switch), you could start packing two processors into the same socket.

              Next again - exactly what does "supporting new versions of USB" have to do with the x86 architecture? Again and again you are mixing up the x86 ISA with actual implementations in actual systems. x86 as a concept has nothing to do with USB support. In most x86 systems, the USB implementation isn't even in the processor but in an external chipset, even though there are x86 chips that integrate memory interfaces, peripheral devices, etc.

              As long as you aren't able to understand and debate the actual x86 ISA, I recommend that you stop writing "x86" in this thread and instead talk about specific Intel processors - because your complaints seem to be about specific implementations in specific Intel chips and not (!) about the x86 ISA.

              Comment


              • #37
                Originally posted by zyxxel View Post
                No, this isn't true. The name used is totally irrelevant. Not all instructions have the same processing needs - and having a floating "cloud" of computational units inside the core that the pipeline can feed instructions into is an advantage for keeping a high computation rate without requiring silly amounts of memory bandwidth.

                And hyperthreading still has nothing to do with the x86 architecture. It's an implementation choice. Another choice would be fully separate processor cores with their own individual pipelines - letting each individual core stall when it lacks data, or when the pipeline has to wait because an execution unit isn't ready yet.

                Doing out-of-order execution into a "cloud" of execution units is a rather common way to build superscalar processors that can use a very wide memory interface to bring in and execute multiple "sequential" instructions concurrently. It especially doesn't make sense to have an FPU adder and an FPU multiplier just hanging around in case the current instruction stream happens to need some floating-point processing - that's where you get the gain from pooling the internal execution units instead of "hard-coding" them into a single core.

                The bothersome thing here isn't the hyperthreading - it's that Intel management hasn't allowed the engineers to move from two threads to more threads, all feeding work into an even larger pool of execution units. But again, that is not x86 architecture - that's implementation.



                Hm. In your previous post, you told us how bad the design of the x86 architecture is. You also used the term ISA several times.

                You did write things like:
                - "Yes, the Intel x86 is the worst of all established architectures."
                - "still stuck with the shitty ass x86 architecture"
                - "poor design decisions of the x86 ISA"

                When you get cornered, you suddenly start to discuss bad Intel management - i.e. that Intel hasn't invested in keeping the actual implementation updated. But that wasn't what the previous posts were about. You have been very clear throughout this thread that it's the x86 ISA you think is so totally inferior, while completely failing to separate the ISA from implementations of the ISA.

                An ISA is - by definition - something very abstract. It's just what the instructions look like. And it's the short, variable-size x86 instructions that have let x86 spank so many RISC processors. For years, the claim was that it's too complicated to decode variable-length instructions. But the x86 ISA has never really suffered from this. It's the variable-length instructions that have given x86 denser binaries - which means less memory bandwidth and less cache needed.

                And no - it wasn't because of AMD that we got dual-core x86 processors. It was because of physics. Around the P4 era, it was quite obvious that physics wouldn't allow us to keep stepping up clock frequencies. For many years, high-end machines had already been running multi-processor solutions. The logical next step was that, with improved cooling and higher integration (reducing the energy consumption per transistor switch), you could start packing two processors into the same socket.

                Next again - exactly what does "supporting new versions of USB" have to do with the x86 architecture? Again and again you are mixing up the x86 ISA with actual implementations in actual systems. x86 as a concept has nothing to do with USB support. In most x86 systems, the USB implementation isn't even in the processor but in an external chipset, even though there are x86 chips that integrate memory interfaces, peripheral devices, etc.

                As long as you aren't able to understand and debate the actual x86 ISA, I recommend that you stop writing "x86" in this thread and instead talk about specific Intel processors - because your complaints seem to be about specific implementations in specific Intel chips and not (!) about the x86 ISA.
                The reason I brought up Intel management and mentioned USB was that you stated Intel suffered from poor management, and I agreed, giving a couple of reasons why I thought their management was poor.
                I am very well aware that USB has nothing to do with the ISA, and I am also aware that implementations are microarchitectures, which are distinct from the ISA.
                The microarchitecture deals with things such as fabrication technology, core count, cache levels, cache size, pipeline width and pipeline length.
                The ISA deals with operands, instructions, instruction lengths, and instruction encoding.

                The 64-bit support in x86 was an extension tacked on as an afterthought. AArch64 and RISC-V were explicitly designed as 64-bit from the start.

                Even Intel knows that the x86 architecture is shitty, which is why they tried to create a new ISA when they made Itanium.

                Comment


                • #38
                  Originally posted by uid313 View Post

                  The reason I brought up Intel management and mentioned USB was that you stated Intel suffered from poor management, and I agreed, giving a couple of reasons why I thought their management was poor.
                  I am very well aware that USB has nothing to do with the ISA, and I am also aware that implementations are microarchitectures, which are distinct from the ISA.
                  The microarchitecture deals with things such as fabrication technology, core count, cache levels, cache size, pipeline width and pipeline length.
                  The ISA deals with operands, instructions, instruction lengths, and instruction encoding.

                  The 64-bit support in x86 was an extension tacked on as an afterthought. AArch64 and RISC-V were explicitly designed as 64-bit from the start.

                  Even Intel knows that the x86 architecture is shitty, which is why they tried to create a new ISA when they made Itanium.
                  The change to 64-bit wasn't just an afterthought. It was a very carefully made design that made it possible to seamlessly run 32-bit and 64-bit code at the same time.

                  And more specifically - it isn't really an extension that has caused any real problems.

                  Next thing - Intel did plan both a 64-bit x86 and the Itanium. Their progress on the Itanium was not (!) great. They had huge problems getting it up to speed, and it was one hell of a task to try to write a compiler for it. Have you looked at it? Were you an active professional when Intel was fighting to get it running?

                  The Itanium wasn't really designed to replace the Pentium line of processors - it was intended to be a high-end workstation/server chip, so a similar market to the Intel Xeon line. But the big difference between Itanium and x86 is that Itanium was designed for compiler-side optimization. All the smartness (trying to figure out stalls, out-of-order possibilities, etc.) was supposed to be handled by the compiler. That was a monster task to take on, and it ended up a disaster that they had to kill off.

                  The big success of x86, on the other hand, is that all the speed optimizations that have arrived over the years are mostly automagic. You don't get a totally shitty binary if you compile for "686". Building for an older "common" ancestor produces a program that today's x86 chips will still load and execute - and the pipeline will analyze it in real time and apply on-the-fly optimizations. If I use an x86 compiler, I can probably select 20 different processor models to optimize for. But for 98% of the code it doesn't matter, so it's best to use a "generic" processor variant. That's also why multimedia applications tend to ship a few hot loops compiled in multiple variants, and the first time you run the program it checks whether the innermost compress/decompress loop should use code compiled with a specific optimization or code that uses specific processor instructions.
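
                  As a rough illustration of that kind of runtime dispatch (just a minimal Python sketch; the flag names come from /proc/cpuinfo on Linux, and the two decode variants are only placeholders, not anyone's real codec code):

                  Code:
# Pick a code path at startup based on the CPU features the kernel reports.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def decode_generic(data):
    return bytes(data)      # placeholder for a portable fallback loop

def decode_avx2(data):
    return bytes(data)      # placeholder for an AVX2-optimized routine

decode = decode_avx2 if "avx2" in cpu_flags() else decode_generic
print("using", decode.__name__)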

                  That today's 64-bit x86 instruction set happens to be named after AMD wasn't because Intel didn't intend to make one. They did produce one - but several years after AMD, since their view was that 64-bit was (at the time) something for workstations and servers, where Itanium was intended to be king. When they realized their problems with Itanium - and at the same time noticed that interest in 64-bit wasn't limited to super-expensive machines - they ended up a bit too late. Microsoft said they refused to support two different 64-bit x86 dialects, so Intel had to talk with AMD and adjust their encoding. Then Intel held a large press conference where they introduced *their* new instruction set under *their* new name.

                  In the end, it was the genius of the smooth move from 32-bit to 64-bit that let x86 keep its market share. The 64-bit x86 did not end up a failure, so POWER etc. could not move in with any "superior" alternatives.

                  You are much too focused on the age of the x86 heritage and on the shortcomings that come from Intel twiddling their thumbs for the last 5+ years, and you see a need to claim how bad x86 must be because of this. In reality, it's still not the instruction set that is the most important part of what ends up being a good processor - it's how it's implemented. But the code density of x86 continues to be a big problem for other architectures to compete with, however "clean" they may be.

                  Comment


                  • #39
                    Originally posted by zyxxel View Post
                    The Itanium wasn't really designed to replace the Pentium line of processors - it was intended to be a high-end workstation/server chip, so a similar market to the Intel Xeon line. But the big difference between Itanium and x86 is that Itanium was designed for compiler-side optimization. All the smartness (trying to figure out stalls, out-of-order possibilities, etc.) was supposed to be handled by the compiler. That was a monster task to take on, and it ended up a disaster that they had to kill off.
                    Monster task or not, it was static optimization (done once at compile time) rather than at runtime.

                    In many real-world cases you literally cannot optimize something statically, because it depends on runtime conditions (branch prediction and speculative execution). This is why Itanium was a failure. You need "smartness" at runtime, not at compile time. And Intel found that out the hard way.

                    Comment


                    • #40
                      Originally posted by Weasel View Post
                      Monster task or not, it was static optimization (done once at compile time) rather than at runtime.

                      In many real-world cases you literally cannot optimize something statically, because it depends on runtime conditions (branch prediction and speculative execution). This is why Itanium was a failure. You need "smartness" at runtime, not at compile time. And Intel found that out the hard way.
                      It was a monster task too, because it was still very hard to figure out the expected latencies of memory accesses, which made it hard to know how many cycles away a specific execution unit would actually be ready for its next task. There was just too much guessing needed, which is why Intel had to give up on it. The x86 pipeline instead adapts on the fly as it dynamically gets feedback from the cache and memory subsystems. Lots of hairy logic is involved, but the Intel engineers took on that cost so the rest of the world didn't have to.

                      In older 386/486 code, the better developers did keep count of memory-access cycles to hand-optimize assembler routines. But then the world moved to memory modules with varying access times, where the timings suddenly weren't fixed anymore. It's no fun supplying three different hand-optimized variants to be selected after trying to auto-detect the RAS/CAS behavior of the specific system.
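
                      A minimal sketch of that pick-the-fastest-variant-at-startup idea (the two variants here are just placeholders; real code would benchmark the actual hand-optimized routines):

                      Code:
# Benchmark the candidate implementations once at startup and keep the fastest.
import timeit

def variant_a(buf):
    return sum(buf)            # placeholder implementation

def variant_b(buf):
    total = 0
    for x in buf:
        total += x
    return total               # placeholder implementation

data = bytes(range(256)) * 1024
candidates = [variant_a, variant_b]
best = min(candidates, key=lambda fn: timeit.timeit(lambda: fn(data), number=50))
print("selected", best.__name__)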

                      Comment
