Linux Prepares For Next-Gen AMD CPUs With Up To 12 CCDs


  • #31
    Originally posted by Sonadow View Post
    LibreOffice is still stuck using only one CPU thread for everything except Calc, while MS Office has had proper multithreading and multicore support for who knows how long. Running LO on anything slower than an i3-grade processor is outright frustrating, to the point of being just barely usable.
    So you're saying that you find it weird and possibly unacceptable that community-driven software, which can probably muster the equivalent of a few dozen full-time people for the *whole* project, has not yet achieved what a company that *earns dozens of BILLIONS in net profit PER YEAR*, with probably several thousand people dedicated to its office suite, only managed a few years ago?

    Mindblowing news, really.

    Originally posted by Sonadow View Post
    That, and the fact that much software still sucks at proper multithreading, makes the multicore race for anything above 8C16T practically pointless for general desktop computing.
    Good thing, then, that *humans* are already multithreaded and "multi-activity"! Ask any random, average user: he'll tell you he's on Windows with at least a dozen or two active services he's not even aware of, plus at least one browser (all of which are multithreaded now, as far as I remember) with at least half a dozen resource-consuming services, very probably a music streaming app too, and possibly a few open documents.
    And that's before considering any "public archetype" like the gamer (game plus, more and more, streaming) or the multimedia designer (file explorers, one to three instances of a music/photo/video editor), etc...

    So, yeah, you probably don't benefit *enough* from going from 8 to 12 cores to justify a change, for now. But that's no reason to dismiss higher total core counts under the assumption that only one application is active and under heavy load at any given time.



    • #32
      Originally posted by Sonadow View Post
      A 32C64T mainstream processor with > 64GB of non-ECC memory works great as a dedicated headless build machine for personal use. There is no reason to be limited to HEDT and server hardware for such configurations.
      Why wouldn't you want ECC? Given how inexpensive the HEDT and server stuff is these days, I see no reason to step down to consumer grade stuff.

      Anytime you're talking content creation (like compiling code), you want stability and reliability first and foremost, so ECC is a must. For content consumption this is less important. You will not see a 32C consumer CPU any time soon, for the simple reason that content-consumption (consumer) applications have no need for that many cores, and consumer software typically doesn't scale well across cores anyway. The price point of consumer-grade software is such that developers don't have the budget to implement proper multithreading; see the vast majority of PC games as an example.
      Last edited by torsionbar28; 24 November 2021, 09:29 AM.



      • #33
        Originally posted by Sonadow View Post
        while MS Office has had proper multithreading and multicore support for who knows how long.
        They have? AFAIK the entire Office suite except Excel (since Excel 2007) is single-threaded. Microsoft's own docs for Office 2022 state: "Code in Office solutions always runs on the main UI thread."

        And I also have a hard time understanding what parts outside of Excel/Calc would benefit from using multiple threads.



        • #34
          Originally posted by F.Ultra View Post

          They have? AFAIK the entire Office suite except Excel (since Excel 2007) is single-threaded. Microsoft's own docs for Office 2022 state: "Code in Office solutions always runs on the main UI thread."

          And I also have a hard time understanding what parts outside of Excel/Calc would benefit from using multiple threads.
          I've been using LibreOffice professionally on an i3-class machine for years, and I don't really have any need for threading. Maybe file import/export could use it, but that's it.



          • #35
            Originally posted by Sonadow View Post

            cmake, make and ninja still do not know how to automatically scale jobs according to the number of CPU threads available, and always default to building on a single thread unless -j or --parallel is passed to the build; this never happens when building .sln projects in Visual Studio, where the build is always spread across all available threads by default. Rustc claims to be multithreaded, and yet it only occupies one CPU thread when invoked in a Firefox compile.
            First, cmake is only a replacement for configure; on some packages you can use either make or ninja as the backend. Yes, with make you need to pass -jN. Ninja, however, usually runs one job per core (or what the Linux kernel regards as a core), and if you have more than 4 such cores it will schedule N+2 jobs. Even on my 3400G I get 10 jobs by default when building packages that use ninja. Note that on almost all large packages there are dependencies which mean that at various points in the build only one core (100% CPU in top) can be used.
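
            For reference, here's a minimal sketch of how each tool is typically driven in parallel; it assumes GNU make, CMake 3.12 or newer, and coreutils' nproc:

              # GNU make: parallelism is opt-in; ask for one job per CPU thread
              make -j"$(nproc)"

              # CMake 3.12+: --parallel forwards a job count to the chosen backend
              cmake --build . --parallel "$(nproc)"

              # Ninja: parallel by default (roughly one job per core, plus a couple
              # of extra jobs on larger machines), so a bare invocation is enough
              ninja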

            Note that configuring modern versions of top to actually show useful information takes some time; the default settings, with whatever you use for init on the top line, are somewhat useless for watching what is actually running.

            Back in the day, normal packages used to use up to 100% of CPU per thread (according to top). With rust, some of the build can use all available CPUs on one item (top at times shows towards 800% used by one rust job on the 3400G, but more often several rust jobs using 100%, or a mix of one bigger rust job and some smaller ones). That is for Firefox; I don't know where you got the information that rust only uses one CPU thread in a Firefox compile.
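
            If you want to watch that per-thread breakdown yourself, here's a quick sketch with stock tools; it assumes procps top and sysstat's pidstat are installed, with rustc as the example process:

              # top: -H shows individual threads instead of whole processes;
              # pressing '1' inside top shows per-core load instead of an aggregate
              top -H

              # pidstat: per-thread CPU usage of the newest rustc process,
              # sampled every second (-t = list threads, trailing 1 = interval)
              pidstat -t -p "$(pgrep -n rustc)" 1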



            • #36
              Originally posted by torsionbar28 View Post
              Why wouldn't you want ECC? Given how inexpensive the HEDT and server stuff is these days, I see no reason to step down to consumer grade stuff.

              Anytime you're talking content creation (like compiling code), you want stability and reliability first and foremost, so ECC is a must. For content consumption this is less important. You will not see a 32C consumer CPU any time soon, for the simple reason that content-consumption (consumer) applications have no need for that many cores, and consumer software typically doesn't scale well across cores anyway. The price point of consumer-grade software is such that developers don't have the budget to implement proper multithreading; see the vast majority of PC games as an example.
              And why would someone building FOSS for personal use as a hobby require ECC?



              • #37
                Originally posted by Sonadow View Post
                LibreOffice is still stuck using only one CPU thread for everything except Calc, while MS Office has had proper multithreading and multicore support for who knows how long. Running LO on anything slower than an i3-grade processor is outright frustrating, to the point of being just barely usable.
                This caught my attention. The only place I write docs of any substantial size and complexity is at my job, which uses MS Office simply because it always has. So I wonder: what sort of docs cause performance problems?

                I'm intrigued that these sorts of WYSIWYG document editors have been with us since the mid-to-late '80s (on the Mac), running on CPUs with literally a thousandth of today's single-thread performance, or less. And yet you do still sometimes see performance problems (even in the vaunted MS Office suite). I know it's not exactly fair to compare '80s-era word processors with their modern descendants, but it does make you think.



                • #38
                  Originally posted by Citan View Post
                  So, yeah, you probably don't benefit *enough* from going from 8 to 12 cores to justify a change, for now. But that's no reason to dismiss higher total core counts under the assumption that only one application is active and under heavy load at any given time.
                  For more software to make heavier use of multiple cores, we need optimizations around work-stealing. Intel has an interesting inter-core interrupt scheme in Ice Lake or Sapphire Rapids (I forget which), which is a start.

                  But the main thing we need is better OS support, so that the OS maintains and manages application-level work queues instead of each process having its own pool of worker threads. You don't want a worker thread to start working on something only to get preempted right after and not run again until long into the future. That will bottleneck attempts at multi-core scaling, except when a single app has the entire CPU to itself and none of the libraries it uses have their own private worker-thread pools.
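
                  The competing-pool problem is easy to reproduce with nothing but make; a rough sketch, assuming GNU make and two placeholder project directories:

                    # Two independent builds, each spawning one worker per hardware
                    # thread: on an 8-thread CPU that's 16 runnable jobs fighting
                    # over 8 threads, so workers keep getting preempted mid-task,
                    # as described above
                    make -C projectA -j"$(nproc)" & make -C projectB -j"$(nproc)" &

                    # GNU make's -l (--max-load) is a crude cooperative mitigation:
                    # stop spawning new jobs once the 1-minute load average passes
                    # the given threshold
                    make -j"$(nproc)" -l"$(nproc)"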



                  • #39
                    Originally posted by Sonadow View Post
                    And why would someone building FOSS for personal use as a hobby require ECC?
                    I use ECC in my own machines when possible (i.e. except for my laptop... grrr). My reason is simple: I value stability and my time more than the price delta between an ECC RAM/platform and non-ECC.

                    Where I consider ECC to be a must is in work on high-value data and in servers.

                    Speaking specifically of FOSS, I'd say anyone building packages for redistribution should consider the cost to downstream users if they produce a bad build due to memory errors. For that reason, it's probably also a good idea to use a filesystem with checksums, like BTRFS. That said, most distros seem to have their own build service, which presumably runs on appropriately spec'd server hardware.
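
                    For anyone who wants to check both angles on an existing box, a small sketch using standard Linux tools; it assumes an EDAC-capable platform and a BTRFS volume mounted at the hypothetical /srv/build:

                      # ECC: corrected (ce) and uncorrected (ue) error counters that
                      # the kernel's EDAC driver exposes per memory controller
                      grep -H . /sys/devices/system/edac/mc/mc*/ce_count \
                                /sys/devices/system/edac/mc/mc*/ue_count

                      # BTRFS: re-read all data and verify checksums (-B = foreground)
                      sudo btrfs scrub start -B /srv/build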



                    • #40
                      BTW, "make -jN" is a feature I use many times each day. Sure "make -j32" is quick, but plain old default single-threaded "make -j1" stops much closer to the actual command-line compiler error. I usually do "make -j32; make" ftw.
