AMD Threadripper 1950X Linux Benchmarks


  • #41
    Thanks a lot for those benchmarks.
    Can you also share the kernel log (dmesg) from the Threadripper so we can see how it boots on Linux?



    • #42
      Originally posted by bigletter View Post

      ASUS explicitly lists only Windows as supported for that motherboard. So if you run Linux and hit BIOS issues (not that unlikely for a brand-new product) and report them to ASUS, all you will get back is a reply that your operating system isn't supported. ASUS does support Linux on some of their high-end workstation boards, but not on their gamer boards. I have personally had this issue on two occasions with ASUS: the first time with IOMMU implementation issues, the second time with on-board sound issues. After reading the virtio mailing list, I got the impression I am not the only one getting this treatment from ASUS. This might or might not be relevant to you. I would say that if you plan to use workstation-oriented features such as the IOMMU, virtualization and so on, you should be aware of the possibility of getting a broken BIOS, and a vendor that doesn't see fixing it as a priority. In that case you should pick a motherboard from a more Linux-friendly vendor, say ASRock.

      Then, the obvious OSS-zealot remark: the chance of getting a working full-speed OSS driver for NVIDIA cards is about as likely as Trump and Kim Jong-un becoming best buddies. With hardware this new you will want to run pretty bleeding-edge software, and then the binary-only drivers will give you a headache; heck, they give everyone a headache. A solution here might be to buy a cheap extra GPU with good OSS support. That way you can boot into Wintendo to game, or boot a gaming-tailored Linux setup, and still be able to run bleeding-edge software for a good Linux graphics experience.



      just my two öre.

      Thanks for the suggestion; I have replaced the ASUS board with the ASRock X399 Professional Gaming.
      After searching a bit, I read comments that a lot of people had problems with the ASUS board even on Windows, that others had problems fitting components on the motherboard, and that the fan was noisy. For memory I went with the G.Skill Trident Z RGB 32GB DDR4 32GTZR kit, 3600 CL16 (4x8GB).
      I will have the PC delivered in one month
      (some parts are not available at the moment).

      Thank you all for your suggestions!



      • #43
        Hello everyone,

        I am building a new Threadripper system as well.

        Below are the parts I have ordered so far, or will order in the near future:

        Motherboard: ASUS ROG Zenith Extreme
        CPU: Threadripper 1950X
        CPU Block: EKWB TR4
        Memory: 64 GB G.Skill Trident Z RGB 3600 (16 GB per stick)
        PSU: I already have a five-year-old AX1200 from my old system, but I may order a new one.
        SSD: Samsung 960 Evo Pro - 1TB
        I may add two 6 TB WD Blacks for additional storage
        Case: Thermaltake View 71 TG

        Radiators: Not sure if I should go with EKWB or HW.
        1x 360
        1x 420

        GPU:

        2x ASUS GTX 1080 Ti Poseidon

        Pump & Reservoir:

        2x EKWB D5

        Fittings & Tube: After so much research, I think I will go with the Bitspower fittings, but I'm not too sure about the tubing; definitely PETG, though.

        Coolant: Mayhem's White Pastel

        Fans: All Corsair ML120 and ML140, white LED

        I am going with a black and white theme.

        Any thoughts? I will take any feedback so I can improve my build as much as possible. My last rig is 5 years old:
        still running a 3930K overclocked to 4.2 GHz with 32 GB of RAM.



        • #44
          Originally posted by BF90X View Post
          Any thoughts? I will take any feedback so I can improve my build as much as possible.
          Due to the problems other people had with the ASUS ROG Zenith Extreme, I have ordered the ASRock X399 Professional Gaming.
          Other than that, I find your system a bit too extreme; I mean, 64 GB of RAM is too much, but of course that's up to you.
          The same goes for the 2x 1080 Ti: I think one GTX 1080 Ti will make your PC future-proof for 4K and VR gaming, but as I said, it's up to you how you will use it.
          Also, I don't trust pumps or any water-cooling system; in many cases the flow is not enough to cool the system, so I prefer fans.
          Last edited by wolfyrion; 29 August 2017, 07:08 AM.



          • #45
            Originally posted by wolfyrion View Post
            Also I dont trust any water cooling system. In many cases the flow is not enough to cool the system so I prefer fans.
            LOL, don't trust him: a strong custom loop is way better than *ANY* air cooling system.



            • #46
              Originally posted by k1e0x View Post
              What a chip.. super impressive.. I want one.



              You probably don't want to use ZFS on an M.2 drive. The entire concept of ZFS was based on the (at the time) fact that CPUs are a lot faster than storage, which lets ZFS make passes over the data, and a lot of its features are tuned for spinning disks. That being said, you *can* do this, and people do use it on SSDs. You'd get some benefits like checksums, snapshots and compression, so there may be reasons to do it, but personally I'd just use UFS / ext4 and back it up to a ZFS NAS.

              Whether there would be a performance hit with ZFS, and how large it would be, depends on the CPU and the data rate of the M.2 drive.. not sure.. you can be safe knowing there isn't one with a more basic file system, though.
              Well, this was true at some point, but as far as ZFS on Linux goes (it has extra features and optimizations that, as far as I know, are missing on Mac/OpenIndiana), it plays really well with SSDs now (as long as you set ashift=12 and the I/O scheduler to noop).

              I haven't tested M.2 slots specifically, but I have tested NVMe PCIe enterprise SSDs (which should be the same in theory) on servers in RAID1 (as well as caches for spinning pools), and the performance is overwhelming (not so much with older SAS SSDs, where the controller matters more, so there is not much ZFS can do), especially in cases like virtualization or NoSQL databases; really, in any scenario where you need low latency under extremely parallel read loads.

              For example, when testing (on dual E5 Xeons, if I remember right) we created 8 VMs with 10Gb cards (PCIe passthrough), installed MongoDB with 100 GB databases (each filled with random precomputed data for testing), and used 8 workstations with 10Gb NICs running the client app (it performs a series of automated reports and operations resulting in 1 GB of data), each pointing at one of those VMs. In summary, we got something like this:

              1.) 1 PCIe SSD: each VM's network usage was around 300 MB/s, with random latency spikes on every VM
              2.) 2 PCIe SSDs in RAID1: the latency spikes were gone, and each VM was around 800 MB/s over the network
              3.) 2 PCIe SSDs in RAID1 + compression: every 10Gb card was saturated at its maximum transfer rate

              Long story short, we switched the client to Mellanox to have room to grow when SSDs get even faster. Now, I agree with you that RAID0 on NVMe makes no sense, and I also agree that in scenarios where your reads are not close to these extremes, RAID1 (or RAID in general) makes no sense for a user, and I totally agree that for a regular user, PCIe passthrough of the extra SSD to the VM is a much better option.

              I posted here simply to clarify that ZFS is not for spinning disks only anymore, and that while it's true you trade a performance hit for some features, that hit disappears under the right workloads (which are more server-oriented, though). So while for a normal desktop user the hit may be noticeable, under the right load ZFS performs great (probably second to none); it's about the right tool for the job (this could apply to Btrfs as well, but I haven't tested it).
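              A minimal sketch of that tuning, assuming a hypothetical pool name `tank` and device `nvme0n1` (note the scheduler is called `noop` on legacy block-layer kernels and `none` on blk-mq kernels):

              ```shell
              # ashift=12 forces 4 KiB (2^12-byte) alignment, matching the native
              # page size of most modern SSDs regardless of what they advertise.
              zpool create -o ashift=12 tank /dev/nvme0n1

              # lz4 is cheap enough that on fast NVMe it usually raises effective
              # throughput instead of lowering it.
              zfs set compression=lz4 tank

              # Let the SSD's firmware handle request ordering instead of the
              # kernel's elevator.
              echo none > /sys/block/nvme0n1/queue/scheduler
              ```

              These commands require root and a ZFS installation, so treat them as a starting point rather than a recipe.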



              • #47
                Originally posted by darkbasic View Post

                LOL, don't trust him: a strong custom loop is way better than *ANY* air cooling system.
                I was not sure if he was serious or not. My old system has been running between 65°C and 71°C on an AIO (Corsair H100) for 5 years, so I know I want something similar or better-performing, and a custom water loop may help a bit.

                I am going for a two-loop system: one loop for the CPU and one loop for the two graphics cards.



                • #48
                  Originally posted by jrch2k8 View Post

                  I'm with you here, and I'm aware. Do you think it's faster than UFS / ext4, though? If speed is the point, go for speed. Like I said before, people do use ZFS on SSDs and it's fine, for the right use cases.



                  • #49
                    Originally posted by wolfyrion View Post
                    Any suggestions?
                    Wrong video card vendor.



                    • #50
                      Originally posted by schmidtbag View Post
                      RAIDing M.2 drives will very likely cause worse performance except in synthetic sequential read/write tests
                      extraordinary claims require extraordinary evidence
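                      One way to settle it is to measure rather than argue: run the same random-read workload against a single drive and against the array with `fio` (a sketch; the device paths `/dev/nvme0n1` and `/dev/md0` are placeholders for a single M.2 drive and a two-drive md RAID0):

                      ```shell
                      # 4K random reads at real queue depth: the case where RAID
                      # overhead would actually show, unlike sequential benchmarks.
                      fio --name=single --filename=/dev/nvme0n1 --readonly \
                          --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
                          --ioengine=libaio --runtime=30 --time_based --group_reporting

                      # Identical workload against the RAID0 md device.
                      fio --name=raid0 --filename=/dev/md0 --readonly \
                          --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
                          --ioengine=libaio --runtime=30 --time_based --group_reporting
                      ```

                      Compare the reported IOPS and completion latencies between the two runs; if the RAID0 numbers are worse anywhere outside sequential tests, the claim holds.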

