
The Sandy Bridge Core i7 3960X Benchmarked Against Today's Six-Core / 12 Thread AMD/Intel CPUs


  • perpetually high
    replied
    Originally posted by yokem55
    And I generally agree with you - but you do have to be pretty mindful of where the code you're running is coming from. I.e. making sure all your repos are signed, not installing random .debs, tarballs, AppImages, etc. Even building Electron apps from source can be risky because of the security shit-show that npm is. The same problem applies to apps installed with pip/PyPI. You also have to cross your fingers that this stuff never becomes easily exploitable via JavaScript in your browser...
    Yeah, I haven't seen any reports of Spectre/Meltdown/etc. exploits in the wild, but then again, who knows how many zero-days or unknown attacks exist at the moment (on a side note: the Zero Days documentary was a great watch).

    What brings me peace at home is that a router gives a good line of defense. I only have one port open, and that's for OpenVPN via the Pi-hole (using the PiVPN software). If, for example, there's an exploit in OpenVPN, there's a clear point of entry for an attacker at my IP. Any other interface I can hopefully safely assume is inaccessible. I imagine if you run a web server/SSH/etc., that's even more points of entry. Shellshock, Heartbleed - I'm sure more will come out.

    The Pi-hole is also nice because you get an overview of the network requests across all the devices on your network. So if a Windows PC on the network were phoning home, you'd know right away. In terms of software, I get all my apps from official websites, Git, and package repositories, but like you said, that's a huge trust system.

    The web browser is definitely the easiest point of entry. I'm using Firefox + the Pi-hole to block ads/malicious domains + DuckDuckGo Privacy Essentials. I should probably go a step further, be responsible, and install the NoScript extension. Firefox also released a Firefox Private Network add-on the other day, which is pretty cool.



  • yokem55
    replied
    Originally posted by perpetually high
    Agree 100%. In my opinion, the cloud providers are the ones that really got screwed by the Intel mitigations, because they have to prioritize security and keep them on. But for regular ol' you and me, they should be off (on the condition, like you said, that you're being mindful and not running malicious code).
    And I generally agree with you - but you do have to be pretty mindful of where the code you're running is coming from. I.e. making sure all your repos are signed, not installing random .debs, tarballs, AppImages, etc. Even building Electron apps from source can be risky because of the security shit-show that npm is. The same problem applies to apps installed with pip/PyPI. You also have to cross your fingers that this stuff never becomes easily exploitable via JavaScript in your browser...



  • perpetually high
    replied
    Originally posted by atomsymbol

    I agree that mitigations=off is good for notebook & desktop machines (assuming the user ensures that the machine never runs malicious code).

    On the other hand, cloud machine providers shouldn't boot the machines with mitigations=off.
    Agree 100%. In my opinion, the cloud providers are the ones that really got screwed by the Intel mitigations, because they have to prioritize security and keep them on. But for regular ol' you and me, they should be off (on the condition, like you said, that you're being mindful and not running malicious code).

    (nice -n12 make -j$(nproc), NVMe SSD)
    Btw: that nice command is inverted. It gives the process a niceness of +12 (lower priority) instead of -12 (higher priority), which I'm sure is what you wanted.

    I find "nice -n -12" easier to remember and more intuitive than the double-hyphen form (nice --12), and it prevents accidentally using a positive number instead of a negative one.

    On a quick side note: with Feral's GameMode, I find a nice value of -4 works best. Since PulseAudio runs at -11, -12 through -20 seemed to cause some stuttering in games.
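    The sign convention is easy to sanity-check from a shell. A quick sketch (the positive direction needs no privileges; lowering niceness below 0 requires root):

```shell
# `nice` with no arguments prints the current niceness (typically 0).
nice

# "nice -n 12 <cmd>" ADDS 12 to the niceness, i.e. runs <cmd> at LOWER priority:
nice -n 12 nice        # prints 12 when starting from the default niceness of 0

# A negative adjustment raises priority and needs root:
# sudo nice -n -12 nice   # would print -12
```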



  • perpetually high
    replied
    Originally posted by pmorph
    May I ask, in which use cases are you noticing the difference? I'm still running 250 Hz with CONFIG_PREEMPT off, since I can't really notice the latency even with this setup. Maybe I've just adapted to not "feel" it, but I still think there can't be much room for improvement that would be meaningful in my desktop experience.
    Just in general day-to-day multitasking use: web browsing, application start times, etc. I don't attribute it all to the 1000 Hz timer or preemption; the BFQ scheduler helps a lot as well (I was previously using 'none', which was quick too, but kinda dumb).
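    (For anyone wanting to try this: the active I/O scheduler is exposed per device in sysfs, shown in brackets. A quick sketch; device names like sda vary per machine:)

```shell
# Show the available I/O schedulers for each block device; the active one is in [brackets].
for q in /sys/block/*/queue/scheduler; do
  [ -e "$q" ] || continue    # skip the literal glob pattern if no devices match
  printf '%s: ' "$q"
  cat "$q"
done

# Switch an example device (sda) to BFQ at runtime -- needs root:
# echo bfq | sudo tee /sys/block/sda/queue/scheduler
```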

    GNOME 3.34 helps even more. I posted a GIF the other day of just opening apps to give you an example. Also, using "preload" (sudo apt install preload) still makes a lot of sense for today's desktops, but again, you're not going to find that kind of stuff installed and enabled by default.

    For compiling software, using "make -j4" (or however many threads you have) and ccache speeds things up in that department. Compiling the kernel with ccache takes only about 6-7 minutes for me (and that's with the standard Ubuntu config plus my changes).
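    A minimal sketch of that setup; the CC override is the usual way to route a kernel build through ccache (the make lines are shown commented since they assume you're inside a kernel source tree with ccache installed):

```shell
# nproc reports the thread count to hand to make's -j flag:
nproc

# Kernel build through ccache (run inside a kernel source tree);
# the first build populates the cache, rebuilds hit it:
# make -j"$(nproc)" CC="ccache gcc"

# Afterwards, ccache's statistics confirm the cache is actually being hit:
# ccache -s
```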

    Things like Feral's GameMode and the "performance" governor (intel_pstate driver) make sense for gaming too. When I'm not gaming, though, I leave it on powersave and find it performs well.

    My main point though in this thread was to highlight booting with the kernel parameter "mitigations=off" on Intel systems.
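    Whether the parameter actually took effect is easy to verify on a running system; a quick check:

```shell
# Confirm mitigations=off made it onto the running kernel's command line:
cat /proc/cmdline

# Per-vulnerability mitigation status; with mitigations=off most entries read "Vulnerable":
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null || true
```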



  • atomsymbol
    replied
    Originally posted by perpetually high

    In my situation I'm more than happy to give up a little IPC for lower latency. To me, latency is king on the desktop. On a server, throughput probably is. It really all depends on the use cases of desktops vs. servers, and I agree everyone has to assess their own situation.

    The kernel is built "generic-x86_64" to satisfy everyone, which makes sense. But if you're an enthusiast trying to make your system as efficient and optimized as possible, feeding it a hearty "-march=native" does a kernel good. I've mentioned before: there are lots of goodies in the kernel config that improve performance and latency that wouldn't make sense to enable by default, but that doesn't mean they shouldn't be enabled on your system.

    About the 1000 Hz kernel, here's what Linus/the kernel says about it, and I find it works well for my 4c/4t:

    (image)

    With respect to the preemptive kernel, it suits my needs on the desktop, as the image below explains. The combo of this + 1000 Hz has made my system incredibly responsive. Then you couple that with the BFQ scheduler and maybe mitigations=off for good measure, and the system really flies.

    (image)
    I agree that mitigations=off is good for notebook & desktop machines (assuming the user ensures that the machine never runs malicious code).

    On the other hand, cloud machine providers shouldn't boot the machines with mitigations=off.

    ----

    If the user presses a keyboard key, moves the mouse, or a network packet arrives at the machine's network card, the machine will start processing the event as soon as possible (without unnecessarily waiting up to 1 ms, the tick period at a 1000 Hz timer frequency), provided there are free hardware threads available.

    With an idle machine, a 300 Hz timer, and preempt_none, I get an average ping latency of 1.753 ms to my local WiFi router:

    Code:
    $ ping 192.168.2.1
    --- 192.168.2.1 ping statistics ---
    21 packets transmitted, 21 received, 0% packet loss, time 49ms
    rtt min/avg/max/mdev = 1.090/1.753/2.279/0.245 ms
    With the same config, but with a full Linux kernel 5.2 rebuild running in another terminal window (nice -n12 make -j$(nproc), NVMe SSD), I still get nearly the same average ping latency (1.799 ms):

    Code:
    $ ping 192.168.2.1
    --- 192.168.2.1 ping statistics ---
    18 packets transmitted, 18 received, 0% packet loss, time 57ms
    rtt min/avg/max/mdev = 0.794/1.799/2.336/0.331 ms



  • Guest
    replied
    I'm still waiting for a decent (non-Nvidia) GPU with path tracing to come out before I consider building a new PC. Right now I'm stuck with my i7-3720QM, which is still OK for development work, but single-threaded games murder that CPU.



  • dispat0r
    replied
    For best performance, I would consult the Clear Linux kernel defaults:
    https://github.com/clearlinux-pkgs/l.../master/config
    On my desktop I just use these with full preempt. You can also apply the patches from that repo for better performance.



  • pmorph
    replied
    Originally posted by perpetually high
    Then you couple that with the BFQ scheduler and maybe mitigations=off for good measure, and the system really flies.
    May I ask, in which use cases are you noticing the difference? I'm still running 250 Hz with CONFIG_PREEMPT off, since I can't really notice the latency even with this setup. Maybe I've just adapted to not "feel" it, but I still think there can't be much room for improvement that would be meaningful in my desktop experience.



  • xpue
    replied
    Originally posted by perpetually high
    1000hz timer
    ... Degrades performance.



  • perpetually high
    replied
    Originally posted by atomsymbol

    Some notes:

    ...
    Good info - thanks.

    In my situation I'm more than happy to give up a little IPC for lower latency. To me, latency is king on the desktop. On a server, throughput probably is. It really all depends on the use cases of desktops vs. servers, and I agree everyone has to assess their own situation.

    The kernel is built "generic-x86_64" to satisfy everyone, which makes sense. But if you're an enthusiast trying to make your system as efficient and optimized as possible, feeding it a hearty "-march=native" does a kernel good. I've mentioned before: there are lots of goodies in the kernel config that improve performance and latency that wouldn't make sense to enable by default, but that doesn't mean they shouldn't be enabled on your system.

    About the 1000 Hz kernel, here's what Linus/the kernel says about it, and I find it works well for my 4c/4t:



    With respect to the preemptive kernel, it suits my needs on the desktop, as the image below explains. The combo of this + 1000 Hz has made my system incredibly responsive. Then you couple that with the BFQ scheduler and maybe mitigations=off for good measure, and the system really flies.

