Intel Core i5 14600K & Intel Core i9 14900K Linux Benchmarks


  • uid313
    replied
    Originally posted by TemplarGR:

    You were the one who compared them first, dude... Also, you are ignorant of how CPUs are made. You do not simply increase or decrease the clock rate by flipping a switch; for the clock rate to go higher, you need to change the architecture and/or the process node. x86 architectures are able to clock high and use wider execution units per core because of their architecture; ARM cores CANNOT. So the raw performance of Intel is higher than ARM's because Intel has designed it that way. Yes, it uses more power, but that is to be expected; if ARM ever develops a desktop CPU, it will use similar levels of power. So you can't say Intel's architecture is "bad". Also, Apple cores are severely overrated/overhyped and are not really better than Intel's either. And Apple also benefits from compiling the OS and software for their own architecture, unlike Intel and x86 software, which is more generic.
    No, I never mentioned the iPhone at all.
    I never compared them. I compare Intel and AMD CPUs against Apple's desktop M3 CPU and the Qualcomm Snapdragon X Elite for laptops and desktops, not the Snapdragon 8, their mobile chipset.



  • TemplarGR
    replied
    Originally posted by uid313:

    You're comparing a several-generations-old, sub-1 W iPhone system-on-a-chip against a... what? A 100 W Intel/AMD CPU? And saying that the iPhone 12 mini performs badly?

    You can't compare a mobile system-on-a-chip against a desktop CPU; you would have to compare an Apple M3 CPU against AMD or Intel.
    You were the one who compared them first, dude... Also, you are ignorant of how CPUs are made. You do not simply increase or decrease the clock rate by flipping a switch; for the clock rate to go higher, you need to change the architecture and/or the process node. x86 architectures are able to clock high and use wider execution units per core because of their architecture; ARM cores CANNOT. So the raw performance of Intel is higher than ARM's because Intel has designed it that way. Yes, it uses more power, but that is to be expected; if ARM ever develops a desktop CPU, it will use similar levels of power. So you can't say Intel's architecture is "bad". Also, Apple cores are severely overrated/overhyped and are not really better than Intel's either. And Apple also benefits from compiling the OS and software for their own architecture, unlike Intel and x86 software, which is more generic.



  • TemplarGR
    replied
    Originally posted by uid313:
    Intel and AMD both have shit CPUs which are very energy inefficient, so in order to increase performance they have to feed them a lot more power to yield only a little more performance. The more expensive high-end CPUs in particular are shitty, like the Intel Core i9 and Ryzen 9 series, so it is much better to buy an Intel i5 or Ryzen 5 rather than the very power-thirsty i9 or Ryzen 9.

    I look forward to Qualcomm launching their new Snapdragon X Elite, which is going to crush both Intel and AMD. Soon Nvidia will join too with their new high-performance ARM CPU, and Intel and AMD will be left behind with their shitty x86 CPUs.

    Intel and AMD really need to make either an ARM or a RISC-V CPU. The x86 architecture is at a dead end. Neither Intel nor AMD can make x86 CPUs that are good enough to compete with the ARM-based offerings of Apple, Qualcomm and Nvidia.

    If HP and Dell want to sell something, they need to bring some ARM-based products to market, because their current products are inferior. We all hate Apple, but they have the best laptops on the market. Microsoft knows it; their Surface Book is shit and cannot compete, they need ARM. Samsung knows it; their laptops are shit, they need ARM.
    This is actually not true. ARM CPUs have far worse IPC potential than x86 CPUs. While ARM CPUs are "good enough" for many daily tasks, once you have software that can really push the CPU, or CPU-bottlenecked games, there is no comparison, really...



  • ktecho
    replied
    Originally posted by Classical:
    Compare e.g. the results of the iPhone 12 with the Intel 12600K
    Wait, are you comparing the performance of a mobile phone designed to last for hours on a lithium battery to a desktop CPU that can heat my house?



  • uid313
    replied
    Originally posted by Classical:

    In my opinion, ARM is actually not that much better than what AMD and Intel currently use. Compare e.g. the results of the iPhone 12 with the Intel 12600K that I published here:
    [embedded link to another forum thread containing those iPhone 12 / i5-12600K benchmark results]


    As you can see, the performance of the iPhone 12 mini (as fast as the standard iPhone 12) is really very weak in the 'animation & skinning', 'particles' and 'AI agents' sections.
    In some cases the Intel is more than 5 times, or even more than 10 times, faster. That has a big effect on the averages.
    As long as ARM performs so weakly, Intel and AMD have no real competition in the desktop segment.

    What I am wondering is what RAM speeds were used for this 14th-gen test?
    That is essential to know, and it is not mentioned.

    Another thing I wonder is whether Speedometer 2.0 or 2.1 was used for the tests. Again, this is hard to find out and would best be stated explicitly.

    Same thing with JetStream: was version 2.0 or 2.1 tested?
    For many people, version 2.1 is going to score higher.
    You're comparing a several-generations-old, sub-1 W iPhone system-on-a-chip against a... what? A 100 W Intel/AMD CPU? And saying that the iPhone 12 mini performs badly?

    You can't compare a mobile system-on-a-chip against a desktop CPU; you would have to compare an Apple M3 CPU against AMD or Intel.



  • Slartifartblast
    replied
    How do the E-cores work with KVM? I don't own an Intel CPU that has them, which is why I ask.
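    For reference, a minimal sketch of one way to see which logical CPUs are the E-cores and keep a KVM guest off them. It assumes a hybrid CPU whose kernel exposes /sys/devices/cpu_core/cpus and /sys/devices/cpu_atom/cpus, and the QEMU PID below is just a placeholder; libvirt users would express the pinning with <cputune>/<vcpupin> in the domain XML instead.

    Code:
        # List P-core vs E-core logical CPUs on a hybrid Intel CPU, then pin an
        # already-running QEMU/KVM guest (placeholder PID) to the P-cores only.
        import subprocess
        from pathlib import Path

        def read_cpu_list(path):
            # Return the kernel's CPU list string (e.g. "20-27"), or None if absent.
            p = Path(path)
            return p.read_text().strip() if p.exists() else None

        p_cores = read_cpu_list("/sys/devices/cpu_core/cpus")  # P-cores (incl. HT siblings)
        e_cores = read_cpu_list("/sys/devices/cpu_atom/cpus")  # E-cores
        print("P-cores:", p_cores, "E-cores:", e_cores)

        qemu_pid = 12345  # hypothetical PID of the qemu-system-x86_64 process
        if p_cores:
            # Move every thread of the guest onto the P-cores.
            subprocess.run(["taskset", "-a", "-c", "-p", p_cores, str(qemu_pid)],
                           check=True)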
    Last edited by Slartifartblast; 01 November 2023, 07:12 AM.



  • Classical
    replied
    Originally posted by uid313:
    Intel and AMD both have shit CPUs which are very energy inefficient, so in order to increase performance they have to feed them a lot more power to yield only a little more performance.
    In my opinion, ARM is actually not that much better than what AMD and Intel currently use. Compare e.g. the results of the iPhone 12 with the Intel 12600K that I published here:
    [embedded link to another forum thread containing those iPhone 12 / i5-12600K benchmark results]


    As you can see, the performance of the iPhone 12 mini (as fast as the standard iPhone 12) is really very weak in the 'animation & skinning', 'particles' and 'AI agents' sections.
    In some cases the Intel is more than 5 times, or even more than 10 times, faster. That has a big effect on the averages.
    As long as ARM performs so weakly, Intel and AMD have no real competition in the desktop segment.

    What I am wondering is what RAM speeds were used for this 14th-gen test?
    That is essential to know, and it is not mentioned.

    Another thing I wonder is whether Speedometer 2.0 or 2.1 was used for the tests. Again, this is hard to find out and would best be stated explicitly.

    Same thing with JetStream: was version 2.0 or 2.1 tested?
    For many people, version 2.1 is going to score higher.
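
    For what it's worth, the configured DRAM speed can be read out of the SMBIOS tables, so benchmark write-ups could state it explicitly. A rough sketch, assuming dmidecode is installed and run as root (older dmidecode versions label the configured speed "Configured Clock Speed" instead):

    Code:
        # Print the rated and actually-configured speed of each DIMM.
        import subprocess

        out = subprocess.run(["dmidecode", "-t", "memory"],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            line = line.strip()
            if line.startswith(("Speed:", "Configured Memory Speed:",
                                "Configured Clock Speed:")):
                print(line)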



  • drakonas777
    replied
    Originally posted by uid313:
    Intel and AMD both have shit CPUs which are very energy inefficient, so in order to increase performance they have to feed them a lot more power to yield only a little more performance. The more expensive high-end CPUs in particular are shitty, like the Intel Core i9 and Ryzen 9 series, so it is much better to buy an Intel i5 or Ryzen 5 rather than the very power-thirsty i9 or Ryzen 9.

    I look forward to Qualcomm launching their new Snapdragon X Elite, which is going to crush both Intel and AMD. Soon Nvidia will join too with their new high-performance ARM CPU, and Intel and AMD will be left behind with their shitty x86 CPUs.

    Intel and AMD really need to make either an ARM or a RISC-V CPU. The x86 architecture is at a dead end. Neither Intel nor AMD can make x86 CPUs that are good enough to compete with the ARM-based offerings of Apple, Qualcomm and Nvidia.

    If HP and Dell want to sell something, they need to bring some ARM-based products to market, because their current products are inferior. We all hate Apple, but they have the best laptops on the market. Microsoft knows it; their Surface Book is shit and cannot compete, they need ARM. Samsung knows it; their laptops are shit, they need ARM.
    Nice wish list.

    The reality is that most of the premium thin-and-lights (the X1 and the like) will go for 9 W Meteor Lake next year, and it won't look as bad against the Snapdragon X Elite as the 13th-gen P/H-series trash does. Furthermore, the Snapdragon X Elite is going to be expensive. We will be very lucky if entry-level models start around 1000 USD, and even then last-gen Apple Silicon will give better value, since for 1K you will get some shit-tier Asus model with the Elite. So yeah, we will see.

    But it's an exciting piece of hardware, I agree. Also, fucking abominations like AVX10.x certainly won't do the x86 ecosystem any favors.
    Last edited by drakonas777; 01 November 2023, 04:42 AM.



  • coder
    replied
    Originally posted by fitzie:
    I wonder if 14th gen will work with W680 motherboards.
    Yes. All LGA 1700 boards should support the 14th gen, although a newer BIOS version is probably required. I don't know whether there are any boards that won't even boot into the BIOS with a 14th-gen CPU installed, but older motherboards sometimes had this problem; motherboards with a BMC can usually work around it by updating the BIOS out-of-band.

    Originally posted by fitzie:
    would certainly spec out an ECC-based 14th-gen Core box to compare with, if it's compatible
    Just be aware that W680 boards don't support ECC with all CPU models. In the 12th gen, Intel put the cut-off at the i5-12500 and above; if you used an i5-12400 or below, ECC wasn't supported. It has nothing to do with which die the CPU uses, either. It's pure market segmentation.

    The same distinction exists in the 13th gen, but I don't know where the cutoff is. I'm not sure about the 14th gen either, so check the motherboard's documentation and online support information for details.
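
    One way to confirm that ECC is actually in effect on a given CPU/board combination, rather than trusting the spec sheets, is to look for an EDAC memory controller in sysfs. A small sketch, assuming the platform's EDAC driver is loaded:

    Code:
        # Report any ECC-capable memory controllers the kernel has registered.
        from pathlib import Path

        mc_dir = Path("/sys/devices/system/edac/mc")
        controllers = sorted(mc_dir.glob("mc[0-9]*")) if mc_dir.exists() else []

        if not controllers:
            print("No EDAC memory controllers found; ECC is likely disabled or unsupported.")
        for mc in controllers:
            name = (mc / "mc_name").read_text().strip()
            ce = (mc / "ce_count").read_text().strip()  # corrected errors seen so far
            print(f"{mc.name}: {name}, corrected errors: {ce}")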



  • coder
    replied
    Originally posted by AkulaMD:
    Does the 14th gen come with a brand-new efficiency core (small core), or are they just using the efficiency core that launched with the 12th gen?
    As I said above, the CPUs branded as 14th gen are actually using the exact same dies as 12th gen and 13th gen, depending on the model in question.

    Between the 12th and 13th gen, the only apparent change in the E-cores is a doubling of the L2 cache (from 2 MB to 4 MB per quad-core cluster). Perhaps the speed of their ring bus port also increased.
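
    A rough way to check which E-core L2 configuration (and hence which die) a given chip actually has is to read the cache size from sysfs. A sketch, assuming a hybrid CPU that exposes /sys/devices/cpu_atom/cpus and the usual Intel cache layout where index2 is the unified L2:

    Code:
        # Read the first E-core's L2 size: ~2048K per cluster matches the Alder Lake
        # die, ~4096K matches Raptor Lake.
        from pathlib import Path

        atom_cpus = Path("/sys/devices/cpu_atom/cpus")  # present only on hybrid CPUs
        if not atom_cpus.exists():
            print("No E-cores reported (not a hybrid CPU, or kernel too old).")
        else:
            first_ecore = atom_cpus.read_text().strip().split(",")[0].split("-")[0]
            l2 = Path(f"/sys/devices/system/cpu/cpu{first_ecore}/cache/index2/size")
            print(f"E-core cpu{first_ecore} L2 size: {l2.read_text().strip()}")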

