Ampere Altra Announced - Offering Up To 80 Cores Per Socket


  • archsway
    replied
    Originally posted by AndyChow View Post
    And just like the bulldozer, I hear this Altra chip is great with INT but not that good with FP. From anandtech "Ampere didn’t provide similar numbers for SPEC2017_fp, because the company states that the SoC has been developed with INT workloads in mind."
    Arm seems to have been focusing on integer performance for a while.

    IIRC even some older (2015) ARM chips are pretty competitive with Zen for integer division performance.



  • edwaleni
    replied
    Originally posted by AndyChow View Post

    I hear this Altra chip is great with INT but not that good with FP. From anandtech "Ampere didn’t provide similar numbers for SPEC2017_fp, because the company states that the SoC has been developed with INT workloads in mind."

    Fortunately, Ampere is much more upfront and isn't afraid of benchmarks, unlike Cavium with the ThunderX2.

    They gladly send out units for press kickoffs, whereas Cavium hides in the shadows and only allows review units when they know they'll get good press.

    If a company wants to market to the Xeon/Epyc crowd, then you gotta have the balls to let it get tested. That means everything...the good, the bad and the ugly.



  • pal666
    replied
    Originally posted by willmore View Post
    Unless you're renting out cores and charge the same for all cores, I don't know of a time this would be true. You can always have one core do two jobs, but you can't always have two cores do one job faster. Given 2 cores at speed 1 or 1 core at speed 2, you're a fool to pick the 2 cores.
    This works both ways: you can't get half a core when the fast core is faster than you need.



  • AndyChow
    replied
    "But now with more resources and engineering talent under their belt". Wow, they've really taken the concept of failing forward to a whole new level. I kid, I kid.

    What's with the hate against AMD's Bulldozer? I still run an FX-8150. Sure, it's not really worth the electricity it consumes, but if I changed it, I'd also have to buy new RAM, a new motherboard, etc. It was one of the first chips on the market with an IOMMU and full passthrough capability, it's a great virtualization chip, and it accepts basic ECC RAM. It's really a classic.

    And just like Bulldozer, I hear this Altra chip is great with INT but not that good at FP. From AnandTech: "Ampere didn’t provide similar numbers for SPEC2017_fp, because the company states that the SoC has been developed with INT workloads in mind."





  • Spacefish
    replied
    Originally posted by Britoid View Post
    But will it run Crysis ?
    Yes, but unfortunately the framerate won't be that good: llvmpipe can use that many cores, but it's still not that well optimized.

    I ran some benchmarks on a 64-core Zen 2 CPU with 16 DDR4 DIMMs, so theoretically it can push up to 410 GB/sec, but even a 5-year-old midrange GPU with much less bandwidth and compute power blows it out of the water...

    I don't know where the bottleneck is, as the CPU load was still around ~10% or less when running the benchmark. Probably some per-core bandwidth limitation or other issues.

    Btw, Win 10 sucks hard at handling 128 threads: it can only handle 64 logical processors per "CPU" (a processor group), so an EPYC with hyperthreading enabled gets split into two "CPU domains". A process in Win 10 can only be executed on one "CPU domain", so you can't really use all threads from one process, lol (see the sketch after this post).

    This gets massively stupid if you have a 72-core CPU with 4-way hyperthreading (dual-socket Xeon Phi), as Windows will split the CPU into 5 domains, since that is the minimum domain count that yields clusters of < 64 logical CPUs. They seriously fucked this up in their kernel...

    Edit: Running Windows on machines with a complex NUMA architecture or lots of threads is stupid in the first place anyway, and no one who thinks straight would do this...
    Last edited by Spacefish; 03 March 2020, 05:58 PM.
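
    For anyone curious, here is a minimal, untested sketch (assuming a Windows 7+ SDK) of how that 64-logical-processor split shows up in the Win32 API: Windows exposes it as "processor groups", and a thread has to be moved between groups explicitly. The target group and mask below are hypothetical values, not taken from the setup described in this post.

    /* Sketch: enumerate processor groups and pin the current thread to one group. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        WORD groups = GetActiveProcessorGroupCount();
        printf("Processor groups: %u\n", groups);

        for (WORD g = 0; g < groups; g++)
            printf("  group %u: %lu logical processors\n", g, GetActiveProcessorCount(g));

        /* A thread runs inside exactly one group at a time; to use another group,
         * it has to be moved there explicitly. Group 0 is a hypothetical target. */
        DWORD n = GetActiveProcessorCount(0);
        GROUP_AFFINITY ga = {0};
        ga.Group = 0;
        ga.Mask  = (n >= 64) ? ~(KAFFINITY)0 : (((KAFFINITY)1 << n) - 1);
        if (!SetThreadGroupAffinity(GetCurrentThread(), &ga, NULL))
            fprintf(stderr, "SetThreadGroupAffinity failed: %lu\n", GetLastError());

        return 0;
    }

    On the Win 10 builds current at the time of this post, all threads of a process started out in a single group, which is why a 128-thread EPYC effectively looks like two separate 64-CPU machines to one process.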



  • Luke
    replied
    I am antifa and proud of it. If you are a cop, it's no wonder you want people using chips with backdoors in them, while you no doubt rely on Intel's "quality assurance" IME switch to make it harder for serious operators to get into your servers and encrypt all your warrants with ransomware (which was done near Boston a few years ago).



  • Luke
    replied
    Originally posted by torsionbar28 View Post
    What are you rattling on about? This is a server chip designed for cloud workloads. This is not a desktop peecee. And Bulldozer, WTF? Kick that obsolete trash to the curb. No one doing any kind of real work is using that today, and no sane person has any desire to keep using it for another ten years, lmao.
    A server chip may not be DESIGNED for other uses, but that doesn't mean it CAN'T be used for them, especially when it's old and being sold off as surplus.

    I myself use Bulldozer to this day: I am not employed, so I'm not about to trash a machine that does today exactly what it did in 2012, as well as it ever did, and I am not shooting video in 4K, so I don't NEED more. I don't play closed-source, paid games, so video editing is my main high-performance workload. Compiling MATE, Compiz, GTK, etc. goes plenty fast on Bulldozer, and I'm not building kernels every day. Don't need more power. Speaking of power, the big Threadripper chips use even more than Bulldozer, and Bulldozer idles (e.g. sitting on a webpage with no JS running) at a theoretical 35 W and actually about 50 W at the proc. Ryzen would have to get down to less than 20 W at idle, for a chip with the same full-power performance, to even begin to pay back the electrical and materials cost of manufacturing a brand-new chip. Same as buying a new car that saves gas but burns over 10,000 pounds of coal or fracked gas to smelt the metals, roll the sheet metal, cast the engine and transmission parts, etc.

    By comparison, running my existing proc until it quits years down the road, then buying an old server (e.g. the one under discussion here, which is new today) at an auction or computer show doesn't use any new fabrication resources whatsoever.

    Also, Bulldozer doesn't have the untrusted AMD PSP or Intel IME that can compromise security on an encrypted machine handling sensitive raw clips that must be carefully edited to use only the parts that can be publicly released. I once had to burn a grand jury subpoena for raw video clips after the big Aug 2018 counterprotest against Nazis in DC. They withdrew it, knowing I would never cooperate and that they could not defeat my encryption. I'm not about to pay money to add an additional potential back door to my encrypted disks.



  • existensil
    replied
    Originally posted by phoronix View Post
    Phoronix: Ampere Altra Announced - Offering Up To 80 Cores Per Socket
    http://www.phoronix.com/vr.php?view=28933
    This article contains a typo:

    In the 4th paragraph, the 2nd sentence begins:

    On a power efficiency basis with SPEC int rate they claim 1.14x the perf-per-Watt
    However, the included chart shows a 1.41x perf-per-watt improvement. Looks like you transposed the numbers in the article body.



  • torsionbar28
    replied
    Originally posted by Luke View Post
    Thus, a cluster of 80 overclocked ARM cores that ran as fast as overclocked Bulldozer (4.3 GHz here) should end up being at least 4x as fast real-world for a perfectly scaling multithreaded job. This would require that no one job force all the others to wait while using more than 1/80th of the total resources and being single-threaded. If that worked, we would have realtime rendering of 4K video to H264 (rejecting patent-troll favorite H265, which is twice as CPU intensive).

    Right now, this might be an expensive server core. Ten years from now, that same rack-mount server box with everything in it might sell at a computer show for a few hundred bucks, if even that, as something even faster comes along. Assuming my Bulldozer chip lives that long, this could make a replacement for it.
    What are you rattling on about? This is a server chip designed for cloud workloads. This is not a desktop peecee. And Bulldozer, WTF? Kick that obsolete trash to the curb. No one doing any kind of real work is using that today, and no sane person has any desire to keep using it for another ten years, lmao.



  • willmore
    replied
    Originally posted by waxhead View Post
    It is not always about the total performance in synthetic benchmarks. Sometimes many slower cores may be beneficial instead of fewer faster cores.
    Unless you're renting out cores and charging the same for all of them, I don't know of a time this would be true. You can always have one core do two jobs, but you can't always have two cores do one job faster. Given 2 cores at speed 1 or 1 core at speed 2, you're a fool to pick the 2 cores. The only exception may be latency-sensitive jobs or realtime work, and that's not likely something you'd see a machine like this used for.
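
    To put a rough number on that tradeoff, here is a toy Amdahl's-law calculation (a sketch with an assumed 10% serial fraction, not anything measured in this thread): the serial part of a single job never gets faster by adding cores, so a core that is twice as fast can beat two slower cores outright.

    /* Toy Amdahl's-law numbers: speedup of one job split across n equal cores,
     * with an assumed (hypothetical) serial fraction. */
    #include <stdio.h>

    static double amdahl_speedup(double serial_fraction, double n_cores)
    {
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores);
    }

    int main(void)
    {
        double s = 0.10; /* assumed serial fraction of the job */

        printf(" 2 cores @ speed 1: %.2fx\n", amdahl_speedup(s, 2.0));  /* ~1.82x */
        printf("80 cores @ speed 1: %.2fx\n", amdahl_speedup(s, 80.0)); /* ~8.99x */
        printf(" 1 core  @ speed 2: %.2fx\n", 2.0); /* the faster core speeds up everything */
        return 0;
    }

    With any serial fraction at all, the single faster core wins the two-core comparison on one job, which is the point above; the many-core case only pays off when the work is close to perfectly parallel or there are many independent jobs.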

