Linux 5.8 Will Finally Be Able To Control ThinkPad Laptops With Dual Fans


  • #21
    Originally posted by rlkrlk View Post

    I want a big laptop (like my current ThinkPad P70, no less) if for no other reason than to load it to the gills with storage. I've had my P70 for less than 3 years, so it has a way to go yet, but I don't like the trends toward eliminating 2.5" bays and killing off decent cooling. If I were to buy a laptop, I'd want a reasonably powerful CPU (say, one of the Ryzen 7 or Ryzen 9 Zen2 chips). I don't mind being a pack mule.

    Now, I'm admittedly not likely to shoot 50,000 frames next football and basketball seasons, thanks to our fiend COVID-19, but that's the kind of thing I do a lot of. And I was still hoping Canon would introduce a replacement for the 7DmkII, presumably with a somewhat higher pixel count, further consuming storage. I have copies of everything on my server, of course, but it's convenient having them in both places (not to mention that I'm a pack rat).
    2.5" bays serve no purpose if M.2 slots are present. Especially if the M.2 slot supports NVMe, which is miles faster than SATA3.

    And there's no way a 1TB or 2TB M.2 SSD is going to get filled up by 50,000 RAW shots, so that 4TB 2.5" SSD has no advantage at all over the 1TB M.2 SSD.
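Whether 50,000 RAW shots fit in 1TB depends entirely on the per-frame size, which varies a lot by camera. A rough back-of-the-envelope sketch (the ~30MB average is an assumed figure, not a measurement):

```python
# Rough storage estimate for 50,000 RAW frames.
# The per-frame size is an assumed average; real RAW files range
# from roughly 20 MB to 80+ MB depending on camera and compression.
frames = 50_000
mb_per_frame = 30                              # assumed average RAW size, in MB
total_tb = frames * mb_per_frame / 1_000_000   # decimal terabytes
print(f"~{total_tb:.1f} TB for {frames:,} frames")
```

At that assumed size the total already exceeds 1TB, while at ~15MB per frame it would fit comfortably - so the conclusion really hinges on what camera is producing the files.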

    Comment


    • #22
      Originally posted by uid313 View Post
      The IPC is way ahead of that of any x86 processor, so even at lower frequencies it can outperform them while using less energy
      The second part of your statement does not follow from the first part, i.e. it is wishful thinking. And the first part of your statement doesn't follow from anything but your imagination.

      Comment


      • #23
        Originally posted by Sonadow View Post
        2.5" bays serve no purpose if M.2 slots are present. Especially if the M.2 slot supports NVMe, which is miles faster than SATA3.

        And there's no way a 1TB or 2TB M.2 SSD is going to get filled up by 50,000 RAW shots, so that 4TB 2.5" SSD has no advantage at all over the 1TB M.2 SSD.
        That's true in some cases. But the alternative viewpoint is that with NVMe you're spending 5-10 times as much for nothing but photo storage space. In fact, even more in most cases, since most of us probably have multiple spare 2.5" drives lying around that we could throw in there and swap out for nothing. For photo storage (music collection storage, etc.), multiple cheap backup copies beat raw speed for a lot of uses.
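The price argument can be made concrete with some purely hypothetical numbers (the capacities and dollar figures below are illustrative assumptions, not current market prices):

```python
# Cost-per-terabyte comparison for bulk photo storage.
# Prices are hypothetical placeholders, not real market data.
drives = {
    'NVMe M.2, 2TB':      (2, 300.00),  # (capacity in TB, assumed price in USD)
    'SATA 2.5" SSD, 4TB': (4, 350.00),
    'spare 2.5" on hand': (1, 0.00),    # already-owned drive, zero marginal cost
}
for name, (tb, usd) in drives.items():
    print(f"{name}: ${usd / tb:.2f}/TB")
```

Whatever the exact prices, the per-TB gap is what matters for cold photo storage, and an already-owned spare drive has effectively zero marginal cost.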

        Comment


        • #24
          Originally posted by numacross View Post
          Do you have any numbers to back those claims up? Because it doesn't seem like it:
          Originally posted by carewolf View Post
          Maybe 20 years ago, but those days are long over. AMD64 killed the non-x86 CPUs in the server and HPC markets; what is left is all legacy stuff. Now there are just two markets, x64 and ARM, with some minor overlap between the low end of x64 and the high end of ARM.
          Sounds like neither of you have any HPC market experience.
          See here: https://www.top500.org/lists/2019/11/
          3 of the top 10 fastest supercomputers in the world are POWER based, including the #1 and #2 fastest. The #3 fastest is also non-x86. FYI, your article about commodity server sales is not relevant in a discussion about HPC. Edit: ARM is also not relevant in HPC.
          Last edited by torsionbar28; 05-04-2020, 01:54 AM.

          Comment


          • #25
            Originally posted by torsionbar28 View Post
            Sounds like neither of you have any HPC market experience.
            See here: https://www.top500.org/lists/2019/11/
            3 of the top 10 fastest supercomputers in the world are POWER based, including the #1 and #2 fastest. The #3 fastest is also non-x86. FYI, your article about commodity server sales is not relevant in a discussion about HPC.
            Modern supercomputing is all about maximizing the GPU capability per node. The list you are quoting doesn't mean what you think it means. Yes, Power9 holds its own, but it's reducing the number of nodes and maximizing the GPUs per node that makes the difference. Power9 is useful in that regard because of the massive number of ultra-high-speed channels that can be made available - it's not about Power9 being somehow superior in terms of raw CPU processing speed.

            Comment


            • #26
              Originally posted by torsionbar28 View Post
              Sounds like neither of you have any HPC market experience.
              See here: https://www.top500.org/lists/2019/11/
              3 of the top 10 fastest supercomputers in the world are POWER based, including the #1 and #2 fastest. The #3 fastest is also non-x86. FYI, your article about commodity server sales is not relevant in a discussion about HPC. Edit: ARM is also not relevant in HPC.
              As andyprough noted, they are only POWER because of support for NVLink used by the GPUs. The vast majority of their speed comes from the GPUs, not the CPUs.

              It will be interesting to see what NVIDIA does next, because POWER10 is still MIA and the bandwidth available in the "old world" of PCIe is catching up - AMD EPYC has 128 lanes of PCIe 4.0, which is ~512GB/s and exceeds the dedicated NVLink 2.0 in POWER9 at 300GB/s.
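For what it's worth, the ~512GB/s figure is the aggregate (both directions combined) number. A quick sanity check of the arithmetic, using PCIe 4.0's 16 GT/s per lane and 128b/130b encoding:

```python
# Aggregate PCIe 4.0 bandwidth across EPYC's 128 lanes.
gt_per_s = 16.0                         # PCIe 4.0 raw rate per lane (GT/s)
encoding = 128 / 130                    # 128b/130b line-code efficiency
gb_per_lane = gt_per_s * encoding / 8   # GB/s per lane, per direction
lanes = 128                             # single-socket EPYC lane count
per_direction = lanes * gb_per_lane
aggregate = 2 * per_direction           # both directions combined
print(f"{per_direction:.0f} GB/s each way, ~{aggregate:.0f} GB/s aggregate")
```

NVLink bandwidth figures like the 300GB/s for POWER9 are usually quoted the same bidirectional way, so the comparison is roughly like-for-like.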

              ARM might not be relevant yet, but that might change with, for example, the Fujitsu A64FX.

              How many new supercomputers are going to be based on POWER/SPARC?

              Comment


              • #27
                Originally posted by uid313 View Post
                No it is not; x86 was designed as a CISC architecture, then came RISC, which was deemed superior. So Intel implements modern x86 microprocessors in a way that is more like a RISC architecture. Later, DEC Alpha, ARM, SPARC, POWER and MIPS were released and were considered superior, but Intel could still stay competitive due to fabrication technology that was years ahead and due to its enormous resources.

                But all of these architectures are rather inefficient and do not scale well, and to be able to perform they had to resort to implementing SMT because the architecture was inefficient.

                Then RISC-V was designed, which is very clean and was designed by brilliant people with lots of experience; it is much more efficient and has much higher instructions per clock cycle. Then ARM designed ARMv8, which, while it carries the ARM name, has little to do with the old ARMv7; it's actually a clean new architecture, unlike x86-64, which was just 64-bit extensions shoehorned onto the x86 architecture.
                I think you need to learn the difference between Instruction Set Architecture (ISA) and microarchitecture. x86 can even have fused memory load+op at the µarch level, while stupid RISC has to have separate loads, which can be a big bottleneck. Not to mention x86's much more compact code - and cache is another big bottleneck.

                There's a reason no ARM can ever compete with x86, and they always shift their goal posts to "power efficiency", which is completely irrelevant when it comes to performance.

                Comment


                • #28
                  Originally posted by Sonadow View Post

                  2.5" bays serve no purpose if M.2 slots are present. Especially if the M.2 slot supports NVMe, which is miles faster than SATA3.

                  And there's no way a 1TB or 2TB M.2 SSD is going to get filled up by 50,000 RAW shots, so that 4TB 2.5" SSD has no advantage at all over the 1TB M.2 SSD.
                  That's 50K per football and basketball season (and those are JPEGs, not RAWs, admittedly). I currently have, in toto, about 350,000 frames. Plus video and other stuff. It adds up. All told, I currently have something over 5TB of stuff on my laptop.

                  4TB M.2 drives are starting to become available, but they are extremely expensive. 4TB 2.5" SSDs are considerably cheaper, and 8TB (well, 7.68TB) drives also exist. The higher performance of NVMe doesn't much matter for this unless I want to rebuild my database. Of course, laptops (and others) could support U.2 drives, which would be the best of both worlds.

                  Comment


                  • #29
                    Originally posted by Weasel View Post
                    I think you need to learn the difference between Instruction Set Architecture (ISA) and microarchitecture. x86 can even have fused memory load+op at the µarch level, while stupid RISC has to have separate loads, which can be a big bottleneck. Not to mention x86's much more compact code - and cache is another big bottleneck.

                    There's a reason no ARM can ever compete with x86, and they always shift their goal posts to "power efficiency", which is completely irrelevant when it comes to performance.
                    From what I have heard, the new ARMv8 64-bit CPUs have a higher IPC than Skylake, and at a lower clock rate and power draw they perform like a Core 2 Duo.

                    Comment


                    • #30
                      Originally posted by andyprough View Post
                      Modern supercomputing is all about maximizing the GPU capability per node. The list you are quoting doesn't mean what you think it means. Yes, Power9 holds its own, but it's reducing the number of nodes and maximizing the GPUs per node that makes the difference. Power9 is useful in that regard because of the massive number of ultra-high-speed channels that can be made available - it's not about Power9 being somehow superior in terms of raw CPU processing speed.
                      The list means *exactly* what I think it means, and it conveys precisely the point I was aiming to convey. HPC clusters tend not to remain in the top 10 for very long, due to national competition and the rate of advancement in HPC. Therefore, the top 10 is most informative as a current snapshot of the state of the art in the HPC world.

                      To reiterate the points I stated previously:

                      1. Non-x86 architectures (like POWER) own a sizable portion of the HPC market. numacross disputed this claim, saying "Do you have any numbers to back those claims up?" So I provided the numbers which back the claim, showing 4 of the top 10 are non-x86. i.e. "a sizable portion".

                      2. Non-x86 architectures are still very relevant in the HPC world. carewolf disputed this claim, saying "Maybe 20 years ago, but those days are long over. AMD64 killed the non-x86 CPUs in the server and HPC markets, what is left is all legacy stuff." So I provided data proving him wrong, by about 20 years.

                      Contrary to what you may think, I am aware of GP-GPU computing, LOL. I never said a word about "raw CPU processing speed", as you say. I asserted that POWER was alive and relevant in the HPC market, that is all. I did not dive into the details of why it remains relevant. You made an assumption, and an incorrect one.
                      Last edited by torsionbar28; 05-04-2020, 03:55 PM.

                      Comment
