Intel Core Ultra 7 155H Meteor Lake vs. AMD Ryzen 7 7840U On Linux In 300+ CPU Benchmarks


  • #31
    Originally posted by andyprough View Post

    Intel doesn't have to pay them. That's the beauty of YouTube - YouTube pays people to do fake reviews all by itself.
    So any review where AMD does not come out as the clear winner must clearly be "fake"?

    Got it.

    Comment


    • #32
      Originally posted by Michael View Post

      I said that out of the box on Ubuntu 23.10 with Linux 6.5, networking and other support on the Acer Swift Go 14 was missing. But on Linux 6.6+ there is Meteor Lake graphics by default, etc. All major functionality should be there with Linux 6.7 Git as tested, but it's a matter of whether any more improvements -- like CPUfreq / scheduler optimizations -- come in the near future. So far I haven't seen any major optimizations/improvements for MTL pending beyond what's in 6.7 Git.
      I have seen enough articles over the years on this website where performance improves over time with newer kernel revisions. Add to that benchmarks like the NAS Parallel Benchmarks and GraphicsMagick, where there are huge swings between AMD and Intel winning different tests, not to mention MTL losing in Intel's own benchmarks, and it tells me something is wrong here.

      I would not be surprised to see a review from you in 6 months where MTL performance has evened out quite a bit.

      Not to mention the real jewel of this CPU, the Arc graphics, which I expect to offer excellent OpenCL performance.

      I'm looking forward to seeing your Windows vs. Linux test on this laptop. I will go on record right now and predict that MTL fares much better against AMD's offerings under Windows, only because I expect Windows support to be better than Linux support at this point.
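
      If anyone wants to sanity-check what their own install is doing in the meantime, here's a minimal sketch (assuming the standard cpufreq sysfs attributes are exposed by your kernel; the EPP attribute is only there with intel_pstate in HWP mode):

      Code:
      # Print the active cpufreq driver/governor - the knobs where
      # kernel-version swings like the ones discussed above would show up.
      from pathlib import Path

      def read(p: Path) -> str:
          try:
              return p.read_text().strip()
          except OSError:
              return "n/a"  # attribute not exposed on this kernel/driver

      cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")
      print("driver:  ", read(cpu0 / "scaling_driver"))
      print("governor:", read(cpu0 / "scaling_governor"))
      # energy_performance_preference only exists with intel_pstate in HWP mode
      print("epp:     ", read(cpu0 / "energy_performance_preference"))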

      Comment


      • #33
        Originally posted by andyprough View Post
        PCMag has a review of this same Acer Swift Go 14 laptop up and is saying that the results are limited by the 28 watt restriction for the laptop, and that you won't see the true Core Ultra 7 155H performance until a laptop with the full 45 watts is available. What's that all about?

        I see on the Intel spec site these power listings:
        Code:
        Processor Base Power     28 W
        Maximum Turbo Power     115 W
        Minimum Assured Power    20 W
        Maximum Assured Power    65 W
        Is this chip available in a 45 watt laptop configuration?
        Laptops will always be limited by the power/thermal envelope the package is designed to tolerate. Some have better thermal and power tolerances than others, depending on the engineering details. This is why otherwise near-identical hardware performs differently across OEMs, and even across different models from the same OEM.

        As an example, the original M1 MacBooks came in two versions: Air and Pro. The Air quickly reaches thermal throttling under load because it lacks active cooling, while the Pro has active cooling, so the Pro outperforms the Air once throttling kicks in. That's not a big deal if you're mostly doing productivity tasks with the Air, but it does make a difference for long-lived tasks where thermal throttling becomes an issue: media encoding, real-time signal analysis, and just about any game, for example.

        Likewise, I have identical Intel NVMe SSDs in a Dell laptop and an MSI desktop, on the same PCIe generation. The Dell hits thermal constraints quickly, killing the SSD's performance. The desktop, which doesn't have those constraints because of better cooling, vastly outperforms the Dell in all I/O metrics with that SSD because it never hits thermal throttling.

        It's also why laptops are not a good benchmark platform for the individual components that make up the whole. What's being compared is the overall package's design performance, not the CPU, GPU, I/O, RAM, etc. per se. Benchmarking thermally constrained designs without controlling for those design differences misrepresents absolute component performance.

        That being said, if all you're doing is comparing one Intel generation to another, the last generation (13th gen) seems to have consistently better single-core performance than the just-released generation (14th), while multicore performance has increased, as reported by other review channels. How much depends on the application. I don't remember the reports on power usage as a practical matter. Battery life as a practical issue varies a lot by user, but I wouldn't expect any quantum leaps - and I'd expect OEMs to still lie about their battery life metrics by a factor of 2.
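
        If you're curious what PL1/PL2 an OEM actually shipped on a given machine, a rough sketch like this reads them back from the kernel's powercap interface (assuming the intel-rapl driver is loaded; usually needs root):

        Code:
        # Read the package power limits (PL1/PL2) the firmware/OEM configured,
        # via the intel-rapl powercap sysfs interface.
        from pathlib import Path

        pkg = Path("/sys/class/powercap/intel-rapl:0")
        if not pkg.exists():
            raise SystemExit("intel-rapl not available on this system")

        print("domain:", (pkg / "name").read_text().strip())
        for n in (0, 1):  # constraint 0 = long_term (PL1), 1 = short_term (PL2)
            name = (pkg / f"constraint_{n}_name").read_text().strip()
            uw = int((pkg / f"constraint_{n}_power_limit_uw").read_text())
            print(f"{name}: {uw / 1_000_000:.0f} W")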

        Comment


        • #34
          Originally posted by sophisticles View Post
          So any review where AMD does not come out as the clear winner must clearly be "fake"?

          Got it.
          I don't know, but I do know that YouTubers are paid by view count, size of audience and so forth - not based on the honesty of their reviews.

          I'm personally hoping this generation of Intel chips is a real winner, as I know of at least two laptop makers that will be offering corebooted versions with IME disabled.

          Comment


          • #35
            Originally posted by Radtraveller View Post

            Many VMs, multiple k8s clusters, 32 TB of NVMe drives... on my dual 20/40 c/t CPU workstation with 256 GB RAM... just figure something with half the CPU, RAM and storage and I could take some work with me... go work at a coffee shop or something and get out of the "office" once in a while... ;-)
            You could remote in via NoMachine Workstation.

            Comment


            • #36
              Originally posted by andyprough View Post

              I don't know, but I do know that YouTubers are paid by view count, size of audience and so forth - not based on the honesty of their reviews.

              I'm personally hoping this generation of Intel chips is a real winner, as I know of at least two laptop makers that will be offering corebooted versions with IME disabled.
              The architecture is so complicated, a mess of compromises. The buyer of this CPU is paying for a lot of transistors that don't work together very well: there are high-efficiency cores that can't help much when there is work to be done, and high-power cores that are ostracised when there is not much work to be done. There are probably some sweet-spot loads where the compromises fit nicely, but there will surely be a lot of other loads where this architecture is inefficient. Imagine the poor scheduler working out which cores to use.
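
              For what it's worth, the kernel does at least tell you which logical CPUs are which; a tiny sketch (assuming the hybrid perf PMU interface that recent kernels expose):

              Code:
              # On hybrid Intel parts the kernel exposes the two core types as
              # separate perf PMUs; their "cpus" files list P- vs E-core CPU ids.
              from pathlib import Path

              for pmu, label in (("cpu_core", "P-cores"), ("cpu_atom", "E-cores")):
                  f = Path(f"/sys/devices/{pmu}/cpus")
                  if f.exists():
                      print(f"{label}: CPUs {f.read_text().strip()}")
                  else:
                      print(f"{label}: not reported (non-hybrid CPU or older kernel)")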

              Comment


              • #37
                In general, Intel laptops have a better connectivity package than AMD's. So if you only want to use the laptop on its own, without connecting a docking station, Thunderbolt devices, etc., AMD might be a good choice. So far, new AMD laptops support only USB4, and only if the manufacturer bothered to wire it.
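
                If you want to check what a given machine actually exposes, a quick sketch over the kernel's thunderbolt bus (which also covers USB4; assumes the driver is loaded):

                Code:
                # List whatever the thunderbolt/USB4 bus reports, if anything is wired up.
                from pathlib import Path

                bus = Path("/sys/bus/thunderbolt/devices")
                if not bus.exists():
                    raise SystemExit("no Thunderbolt/USB4 support exposed by this kernel")

                for dev in sorted(bus.iterdir()):
                    name = dev / "device_name"  # present on devices, not on domains
                    label = name.read_text().strip() if name.exists() else "(domain/host)"
                    print(dev.name, "->", label)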

                Comment


                • #38
                  Originally posted by ptrwis View Post
                  Stupid big.LITTLE architecture
                  100% this. People constantly underestimate dimensional complexity.
                  Instead of fixing their P-cores, they added the E-cores, which look nice on paper but are an absolute nightmare to optimize for.
                  The core types differing in instruction set support really does not help either.

                  Honestly, I think the only way back for Intel is to go back to the drawing board and start beating AMD at basic metrics again.
                  Not more dimensional complexity, obscure instruction sets, accelerators that no one uses, etc.
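
                  To illustrate the "nightmare to optimize for" part: one blunt workaround applications end up using is parsing the P-core list out of sysfs and pinning themselves to it (a sketch, assuming the /sys/devices/cpu_core PMU interface present on hybrid parts):

                  Code:
                  # Parse the kernel's P-core cpulist and pin this process to it.
                  import os
                  from pathlib import Path

                  def parse_cpu_list(s: str) -> set[int]:
                      """Expand a kernel cpulist like '0-11,16' into a set of CPU ids."""
                      cpus: set[int] = set()
                      for part in s.strip().split(","):
                          a, _, b = part.partition("-")
                          cpus.update(range(int(a), int(b or a) + 1))
                      return cpus

                  pcores = parse_cpu_list(Path("/sys/devices/cpu_core/cpus").read_text())
                  os.sched_setaffinity(0, pcores)  # restrict this process to P-cores only
                  print("pinned to:", sorted(os.sched_getaffinity(0)))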

                  Comment


                  • #39
                    Originally posted by stormcrow View Post

                    Unless you intend to replace those OLED laptops in the next 2 years, don't buy them. OLEDs have a built-in lifetime that no burn-in protection will entirely solve. It's a physical limitation: organic components have a limited lifetime. Just turning them on burns lifetime, even if you don't display static imagery. This is basic chemistry/physics, and no mitigations currently in use will extend their lifetime to that of LCD/LED panels (OLEDs last months versus LCD backlights lasting years). LCD screens merely grow dim; OLEDs change color over their lifetime as the organic components degrade at different rates, leading to inaccurate color representation and additional eye strain. Since I have monitors and laptops that are over 5 years old and still have decent time left on their clocks, you couldn't even pay me to take OLEDs if I'd end up having to replace the monitor every 2-3 years because the organic components have degraded to an unacceptable degree. No thanks.
                    I have a 6-year-old OLED smartphone which has zero burn-in and looks as good as new.

                    Comment


                    • #40
                      The weird thing is, ARM has shown how big.LITTLE can be done right: identical ISA on all cores (even when that requires a more complex decoder), good pipeline-usage indicators, and switchable cache coherency across clusters.
                      ARM makes it easy for the scheduler to decide when to switch to a high-performance core, and also makes the switch itself easy.
                      Intel went the opposite way.

                      I remember when ARM patched big.LITTLE support into the kernel: it took like 2-3 revisions until everything was smooth, including hot-plugging and such.
                      Intel now seems to bake more and more intelligence into the SoC's firmware, which "should magically" improve performance and efficiency, in a way that's somewhat opaque to the OS.
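
                      You can at least peek at what the firmware is feeding the scheduler; a small sketch (assuming the ITMT sysctl and the ACPI CPPC attributes exist on your machine):

                      Code:
                      # Check the ITMT knob plus the per-CPU performance rankings
                      # the scheduler derives its core priorities from.
                      from pathlib import Path

                      itmt = Path("/proc/sys/kernel/sched_itmt_enabled")
                      print("ITMT:", itmt.read_text().strip() if itmt.exists() else "n/a")

                      for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
                          hp = cpu / "acpi_cppc" / "highest_perf"
                          if hp.exists():
                              print(cpu.name, "highest_perf =", hp.read_text().strip())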

                      Comment
