Windows 10 vs. Eight Linux Distributions On The Threadripper 3970X

  • #21
    Originally posted by schmidtbag View Post
    Well, depends how you look at it. Does it matter when you're watching the Olympics?
    The overall performance is more important, but that doesn't always tell the complete story. Windows, for example, performs the worst on average, but there are moments where it placed first, or at least better than everything other than Clear (and Clear isn't exactly representative of the average Linux setup). That shouldn't be ignored - it means Linux has room for improvement.
    Exactly. When everything is close enough, first versus last is moot. Then you have Manjaro's dav1d results.

    Comment


    • #22
      Originally posted by orangemanbad
      You obviously don't understand how caches work. At all.
      No, I do. You, however, are frustrated and can't get past it.

      Features aren't only configured at run-time; many can be configured at compile-time, too. When a feature is disabled at compile-time, the compiler can leave its code out completely and optimise the remaining code more effectively. And even when a feature is compiled into an executable, it may not show up as a single additional code block but as many code blocks spread throughout the entire executable - for example, the compiler may decide to inline the feature at every location where it is used. A cache has no influence on any of this, because caches only take effect at run-time.
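
      As a rough C sketch of the compile-time case (the ENABLE_CHECKSUM flag and the function are invented for illustration): built with -DENABLE_CHECKSUM the feature's code exists in the binary and may be inlined at every call site; built without it, the compiler emits nothing for it at all.
      Code:
      /* feature.c -- hypothetical compile-time feature flag */
      #include <stdio.h>
      #include <stddef.h>

      #ifdef ENABLE_CHECKSUM
      /* Only exists in builds made with -DENABLE_CHECKSUM; the compiler
       * may also inline it at every call site instead of emitting one
       * self-contained code block. */
      static unsigned checksum(const unsigned char *p, size_t n)
      {
          unsigned sum = 0;
          for (size_t i = 0; i < n; i++)
              sum += p[i];
          return sum;
      }
      #endif

      void process(const unsigned char *buf, size_t n)
      {
      #ifdef ENABLE_CHECKSUM
          printf("checksum: %u\n", checksum(buf, n));
      #else
          (void)buf; (void)n;   /* feature disabled: no code emitted for it */
      #endif
      }

      int main(void)
      {
          unsigned char buf[16] = {0};
          process(buf, sizeof buf);
          return 0;
      }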

      A feature can also cause an application to require more memory for its data, which is likewise something a cache has no influence over. When a feature requires additional bytes per data record, e.g. +1000 bytes per record, and your application manages millions of records, then the data cache has to carry those extra bytes, too.
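
      A back-of-the-envelope sketch of that per-record cost (the field names and the 1000-byte payload are hypothetical, matching the numbers above):
      Code:
      #include <stdio.h>

      struct record {                /* feature disabled */
          int id;
          double value;
      };

      struct record_with_feature {   /* feature enabled: extra per-record payload */
          int id;
          double value;
          char feature_data[1000];   /* the +1000 bytes per record from above */
      };

      int main(void)
      {
          const size_t n = 1000000;  /* "millions of records" */
          printf("without feature: %zu MB\n",
                 n * sizeof(struct record) / (1024 * 1024));
          printf("with feature:    %zu MB\n",
                 n * sizeof(struct record_with_feature) / (1024 * 1024));
          return 0;
      }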

      Compilers often don't know the exact size and associativity of the target's caches at compile-time, nor are all of a compiler's optimisations cache-aware. As a result, many applications don't perform as well as one would like. Worse, even in cases where cache-specific tuning would be possible, it often isn't done, because compilers are usually instructed to apply only general optimisations suitable for a wide range of hardware.
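
      For example (a hypothetical sum.c; the gcc flags themselves are real), a generic build may only assume the x86-64 baseline instruction set, while a native build can use the vector units and cache characteristics of the machine it is built on - the very same loop can compile to quite different code:
      Code:
      /* sum.c -- one loop, two targets:
       *   generic: gcc -O2 -march=x86-64 -c sum.c   (baseline SSE2 only)
       *   tuned:   gcc -O2 -march=native -c sum.c   (may use AVX/AVX2 etc.)
       */
      #include <stddef.h>

      int sum(const int *a, size_t n)
      {
          int s = 0;
          for (size_t i = 0; i < n; i++)
              s += a[i];   /* vectorised differently depending on -march */
          return s;
      }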

      Caches themselves are not perfect. They are finite in size, and so are their associativity and granularity. That makes them effective on average and for a large number of applications, but caches can be thrashed, and when that happens it can cause regressions from nothing more than a simple feature, a function, a code block or even a single statement.
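
      A minimal sketch of thrashing (sizes and the 4096-byte stride are illustrative): both loops below read the same 64 MiB and do the same additions, but the power-of-two stride makes successive accesses land in the same few cache sets and evict each other, so the second loop typically runs far slower.
      Code:
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (64UL * 1024 * 1024)   /* 64 MiB working set */
      #define STRIDE 4096              /* power-of-two stride: few cache sets */

      int main(void)
      {
          unsigned char *buf = calloc(N, 1);
          if (!buf) return 1;
          long sum = 0;

          clock_t t0 = clock();
          for (size_t i = 0; i < N; i++)        /* sequential, cache-friendly */
              sum += buf[i];
          clock_t t1 = clock();
          for (size_t s = 0; s < STRIDE; s++)   /* strided, cache-hostile */
              for (size_t i = s; i < N; i += STRIDE)
                  sum += buf[i];
          clock_t t2 = clock();

          printf("sequential: %.2fs  strided: %.2fs  (sum=%ld)\n",
                 (double)(t1 - t0) / CLOCKS_PER_SEC,
                 (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
          free(buf);
          return 0;
      }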

      Comment


      • #23
        Originally posted by tildearrow View Post
        Does a Spigot server approximate the workload performed in the H2 test? If so, I'll consider switching to Windows if I buy this processor.

        Oh no, Windows beating Linux at x264... Does this mean I have to set it up as a Windows machine, keep using my current Linux one, transfer 4K frames at 60fps over 25GbE using RDMA, and find a way to mount my XFS drive on Windows, just to enjoy that FPS boost?
        Brilliant. You'll also want to deploy some VMs running WebLogic and SOA Suite, so you can manage this workflow via a web app.

        Comment


        • #24
          Really informative comparative benchmark. Although I run Gentoo, I can relate to the Clear Linux results because of my compilation configuration.

          Although Windows 10 is starting to look better, I think it's slow compared to the Windows 95/XP era.

          Comment


          • #25
            Originally posted by orangemanbad
            No it doesn't. At least not on the order you're suggesting. I suggest you go back to the drawing board and actually learn how cacheline loads work. If CPU caches operated the way you're trying to suggest, there'd be literally no reason for them to even exist.
            It has got nothing to do with cache lines. A cache can only accelerate access to memory; it cannot stop an application from requiring more memory in the first place.

            Take audio applications, for example. Some can process up to 24-bit/96kHz audio samples and as a result require significantly more memory and time to process them. This is considered a feature. Other applications don't have this feature and can only process up to 16-bit/48kHz samples. How do you propose a cache line changes that?
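
            The arithmetic, as a small sketch (stereo and packed samples assumed; many applications store 24-bit samples in 32-bit words, which costs even more):
            Code:
            #include <stdio.h>

            int main(void)
            {
                /* bytes per second of audio = rate * bytes-per-sample * channels */
                const long lo = 48000L * 2 * 2;   /* 16-bit/48kHz, stereo */
                const long hi = 96000L * 3 * 2;   /* 24-bit/96kHz, stereo */
                printf("16-bit/48kHz: %ld bytes/s\n", lo);
                printf("24-bit/96kHz: %ld bytes/s\n", hi);
                printf("ratio: %.1fx more data to move and cache\n", (double)hi / lo);
                return 0;
            }
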
            Last edited by sdack; 02-19-2020, 04:18 PM.

            Comment


            • #26
              Originally posted by Spooktra View Post
              Here's what I would like to know: do all distros run the CPU at the same clock speed?
              You may be onto something. I recently noticed that default installations of Manjaro now include TLP (a power-saving configuration tool mainly intended for laptops, but usually not harmful on desktop PCs). Clear Linux certainly does not install TLP by default, just as most distros don't.
              It could be that TLP applies some default settings that hinder maximum performance, or that TLP activates battery-saving settings because some USB device's battery makes it think the machine is a laptop.

              Another issue is the different CPU governors used by the different distros. As listed on the first page, some use ondemand and some use performance. Manjaro's setting strangely isn't listed; AFAIK it applies either ondemand or schedutil by default on AMD systems, not sure which.
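
              For anyone who wants to check this on their own machine, a quick sketch that prints each CPU's governor by reading the standard Linux sysfs path (assumes a kernel with cpufreq enabled):
              Code:
              #include <stdio.h>

              int main(void)
              {
                  char path[128], buf[64];
                  for (int cpu = 0; ; cpu++) {
                      snprintf(path, sizeof path,
                               "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor",
                               cpu);
                      FILE *f = fopen(path, "r");
                      if (!f) break;                      /* no more CPUs */
                      if (fgets(buf, sizeof buf, f))
                          printf("cpu%d: %s", cpu, buf);  /* buf keeps its '\n' */
                      fclose(f);
                  }
                  return 0;
              }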

              Comment


              • #27
                Originally posted by orangemanbad
                Now you're just changing the goalposts ...
                Right back at you. Whatever you are trying to say, you haven't said it. If you cannot find the words, then say nothing; making it personal is not worth my time.

                Comment
