Apple M2 vs. AMD Rembrandt vs. Intel Alder Lake Linux Benchmarks


  • Originally posted by qarium View Post

    > And be sure a 55% higher multicore score is better than a 6-8% higher single-core score...

    That depends on the task, but for gaming it's IPC. Yes, 55% is better for Blender and HandBrake, but core count isn't everything.

    > I think you already learned a lot... for example, if you want to OC RAM you want ECC RAM... or the point that if you want a faster system you have to optimise the bottleneck, and an 8% faster core in single-core performance does not help you at all if the bottleneck at 4K gaming is the GPU...

    If I wanted to know why this is happening to your CPU, I would ask the people at overclocked.net, which I also visit. I learned a lot from those people about tweaking Ryzen systems. Your machine losing single-threaded performance from an overclock is odd.

    Do you really think that 55% better multicore performance translates into 55% better gaming performance? IPC and GPU performance are what mostly matter. Here's my Unigine Heaven benchmark. Whatever that overclock did will probably make things worse, not better.

    > I think you can learn a lot on the phoronix.com forum ;-)

    You are aware I've been here since 2010 and have posted as much as you have?



    • Originally posted by mdedetrich View Post
      There is no point in arguing about this, because you are doing the no true Scotsman fallacy and claiming that "web developers" are not "real" programmers.
      No, I said "even if you DO include them". I was giving you every possible advantage, because I'm much less invested in this and it seemed reasonable to be generous. If you *do* want to restrict it to only the developers who "really need ALL the performance of the M1", which is what the specific point was, then that's up to you.

      > I don't care about the barristers.

      A barristER is a lawyer. A baristA (also, only one R) is someone who works at a cafe.

      > If you check wikipedia

      That's not what either of us was talking about. You said "ridiculous number of installs", and that's the only number relevant to this question. Changing it to "contributors to this project", or even "people who've uploaded a makefile or whatever", that's also going to scale your population down by such a huge factor that you couldn't possibly be right even if you otherwise would have been.

      Your citation, incidentally, has apparently long since rotted.

      > In regards to other statistics such as installs (even on formula level) see https://formulae.brew.sh/analytics/

      Thanks. I found that page when I went looking before, but I can't get it to show any numbers. I get row and column headings, but no actual data - and the column headings are all in homebrew-specific jargon, which is literally useless to anyone who isn't already involved with the ecosystem enough to be able to translate that into reality. If we claimed that every Jenkins run was "a Linux install", there'd be a trillion Linux users by now, which is all very nice but doesn't actually provide the data we need to be able to do the math with.

      I was hoping you were making your argument in enough good faith to provide those numbers in the first place, and I'm STILL hoping that, so let's give this one last try.
      The ballpark figure for M1/M2 MacBook sales over the last year is 6.6-6.8M/quarter, i.e. ~26.8M/yr. How many Homebrew installs - that's "Homebrew" itself, not "recipes" or whatever other cutesy name they give each package / script / whatever that IT downloads - were there in the last 365 days?
      If it's, say, 20M+, you've resoundingly proven your case, and I award you an entire Internets. (Feel free to rub it in!)
      If it's 2.5M+, that's 1 in 10, which would make your guess very optimistic, but still far less wrong than mine.
      If it's ~1M, that's 1 in 27, and we're both terrible at this.
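
As an aside, the 1-in-N arithmetic above can be sketched in a few lines of Python (the 26.8M/yr sales figure is the poster's ballpark, not an official Apple number):

```python
# Rough ratio of M1/M2 MacBook sales to Homebrew installs.
MACBOOK_SALES_PER_YEAR = 26.8e6  # poster's ballpark, not an official figure

def one_in_n(homebrew_installs: float) -> int:
    """Roughly 1 in N new MacBooks would have Homebrew installed."""
    return round(MACBOOK_SALES_PER_YEAR / homebrew_installs)

print(one_in_n(2.5e6))  # 11 -> close to the "1 in 10" case above
print(one_in_n(1.0e6))  # 27 -> the "1 in 27" case above
```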

      So, what's the answer?



      • Originally posted by arQon View Post
        Thanks. I found that page when I went looking before, but I can't get it to show any numbers. I get row and column headings, but no actual data - and the column headings are all in homebrew-specific jargon, which is literally useless to anyone who isn't already involved with the ecosystem enough to be able to translate that into reality. If we claimed that every Jenkins run was "a Linux install", there'd be a trillion Linux users by now, which is all very nice but doesn't actually provide the data we need to be able to do the math with.
        Comparing it with Jenkins is like comparing apples with oranges.

        Homebrew is typically only ever installed once per Mac system, so the stats are as accurate as you can get. Pretty much no one reinstalls Homebrew unless they format their Mac or the installation is completely broken (extremely rare). Homebrew is installed once and you forget about it.

        Originally posted by arQon View Post
        I was hoping you were making your argument in enough good faith to provide those numbers in the first place, and I'm STILL hoping that, so let's give this one last try.
        The ballpark figure for M1/M2 MacBook sales over the last year is 6.6-6.8M/quarter, i.e. ~26.8M/yr. How many Homebrew installs - that's "Homebrew" itself, not "recipes" or whatever other cutesy name they give each package / script / whatever that IT downloads - were there in the last 365 days?
        If it's, say, 20M+, you've resoundingly proven your case, and I award you an entire Internets. (Feel free to rub it in!)
        If it's 2.5M+, that's 1 in 10, which would make your guess very optimistic, but still far less wrong than mine.
        If it's ~1M, that's 1 in 27, and we're both terrible at this.

        So, what's the answer?
        This question is meaningless because the original hypothesis you made is completely opaque and ill-defined. Many pages back you claimed that MacBooks are not suited for "real work". Unless you properly define what "real work" means (which you haven't), this entire argument is pointless, because you can massage stats to prove whatever you want. You are also approaching the problem from the wrong direction: your theory that MacBooks aren't for "real work" rests on the small proportion of "real" programmers who use Macs, which is probably the weakest possible way to prove anything. There are many different types of "real work", and there are other reasons why people may not get a Mac (e.g. in developing countries Macs are extremely expensive, so a lot of programmers who would like one simply cannot afford it - but that doesn't mean MacBook Pros are ill-suited to "real work").

        Welcome to the world of statistical analysis, which almost all internet/armchair commentators and politicians do a terrible job of, often because they just want to prove their point - and you are in that set too. I am not going to argue with you to "prove" that MacBook Pros are for "real work", because that is something that would require a paper/thesis/dissertation. You asked for stats on Homebrew; I gave them to you. Make of that what you will, but I am sure you will always find a way to prove your point because of your ill-defined hypothesis.

        All I can say is that I have worked in a company with >3k developers (and by developers I mean "real" programmers), and >60% of those machines were MacBooks. I also know a few people who have worked at the big tech companies, and they report similar usage rates for MacBook Pros, and these are "real" programmers. There are also obvious exceptions (e.g. Microsoft/Red Hat).

        You are the one who started with the implication that only hipster coffee baristas used MacBook Pros, but that's not the case.

        And if you are wondering why a lot of "real" programmers use MacBook Pros, it's because they want a *nix/POSIX system that "just works" (i.e. they don't have to spend a non-trivial amount of time configuring and fiddling with DEs/window managers) without stuff breaking all the time. I'm not saying that macOS doesn't break, but as someone who has used both OSes constantly over decades of full-time work, I can count the times macOS broke on one hand, whereas with Linux distros, well, don't get me started. Unfortunately, a lot of these people get paid to do "real" work, not to fiddle with their systems all the time (and where I work, which is a Linux shop, we are now also adding Macs as an option for similar reasons).
        Last edited by mdedetrich; 14 August 2022, 07:24 AM.



        • Originally posted by Dukenukemx View Post
          That depends on the task but for gaming it's IPC. Yes 55% is better for blender and handbrake, but core count isn't everything.
          If I wanted to know why this is happening to your CPU I would ask the people at overclocked.net, which I also visit. I learned a lot from those people about tweaking Ryzen systems. Your machine losing single threaded performance from an overclock is odd.
          The real professionals, as I already told you, do not aim for higher clocks; they aim to optimise bottlenecks...
          meaning the Infinity Fabric... some people even run the RAM at a 1:1 ratio only to get higher Infinity Fabric clocks.
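
For readers unfamiliar with the 1:1 setting mentioned here: on Ryzen, DDR4 memory transfers twice per memory clock (MCLK), and running the Infinity Fabric clock (FCLK) equal to MCLK ("1:1") avoids an extra latency penalty. A minimal sketch of the arithmetic, using an illustrative DDR4-3600 kit:

```python
# DDR4 is double data rate: the memory clock is half the transfer rate.
# "1:1" on Ryzen means FCLK (Infinity Fabric) == UCLK == MCLK.
def clocks_1_to_1(ddr4_transfer_rate_mts: int) -> dict:
    mclk = ddr4_transfer_rate_mts // 2   # DDR4-3600 -> 1800 MHz
    return {"MCLK_MHz": mclk, "UCLK_MHz": mclk, "FCLK_MHz": mclk}

print(clocks_1_to_1(3600))  # {'MCLK_MHz': 1800, 'UCLK_MHz': 1800, 'FCLK_MHz': 1800}
```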

          Originally posted by Dukenukemx View Post
          Do you really think that 55% multicore performance translates into 55% better gaming performance? IPC and GPU performance are what mostly matters. Here's my Unigine Heaven Benchmark. Whatever that overclock did will probably be worse than without.
          I already told you this depends on what game you play and also what you do inside that game.
          If you play Ashes of the Singularity with 20,000 units on the map, I am 100% sure 55% higher multicore performance translates into 55% better gaming performance.

          Originally posted by Dukenukemx View Post
          You are aware I've been here since 2010 and have posted as much as you have?
          I did ask [email protected] in the past to delete 8,000 phoronix.com forum posts of mine.
          Phantom circuit Sequence Reducer Dyslexia



          • Originally posted by Dukenukemx View Post
            That depends on the task but for gaming it's IPC.
            About your Unigine benchmark: you benchmark at 2K to make it look like your 6-8% faster single-core CPU matters.
            But it does not. If you benchmark at 4K, nothing of your 6-8% faster CPU will show up in the result.

            Also, your benchmark is OpenGL, which, like DX11, was historically single-core technology, so your 6-8% faster single-core CPU counts... but as soon as you use DX12 or Vulkan, the multicore result is much better, even to the point that someone with 55% higher multicore performance outperforms your result.

            This means that benchmarking old OpenGL stuff to declare a winner between a system that is 55% faster in multicore tasks and one that is 6-8% faster in single-core tasks is complete nonsense.
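
The disagreement about resolutions boils down to a simple bottleneck model: each frame takes roughly max(CPU time, GPU time), so a faster CPU only shows up while the CPU is the limit. A toy sketch with made-up frame times (the timings are illustrative, not measurements from either machine):

```python
# Toy model: whichever of CPU or GPU is slower sets the frame rate.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical timings: an ~8% faster CPU (9.26 ms vs 10.0 ms per frame).
print(fps(10.0, 8.0))   # CPU-bound (low res): 100.0 fps
print(fps(9.26, 8.0))   # the faster CPU helps: ~108 fps
print(fps(10.0, 25.0))  # GPU-bound (4K): 40.0 fps
print(fps(9.26, 25.0))  # the faster CPU changes nothing: still 40.0 fps
```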

            But this all shows the state of your thinking: you repeat outdated mantras because your knowledge is outdated and your flexibility to learn new stuff is absent.



            • Originally posted by qarium View Post

              > About your Unigine benchmark: you benchmark at 2K to make it look like your 6-8% faster single-core CPU matters. But it does not. If you benchmark at 4K, nothing of your 6-8% faster CPU will show up in the result.

              I ran it at 1080p because that's what my monitor displays and what the majority of people playing games run at.

              > Also, your benchmark is OpenGL, which, like DX11, was historically single-core technology... but as soon as you use DX12 or Vulkan, the multicore result is much better, even to the point that someone with 55% higher multicore performance outperforms your result.

              Does Unigine Heaven have Vulkan support? Pick a benchmark and I will run it. It doesn't matter anyway: you have a Vega 64, so you should be faster regardless, unless you borked your setup and it runs slower than my Vega 56. I noticed you didn't run the benchmark - or did you?

              > This means that benchmarking old OpenGL stuff to declare a winner between a system that is 55% faster in multicore tasks and one that is 6-8% faster in single-core tasks is complete nonsense.

              Pick a benchmark that's going to use the GPU as well as the CPU for a gaming test and I'll run it.

              > But this all shows the state of your thinking: you repeat outdated mantras because your knowledge is outdated and your flexibility to learn new stuff is absent.

              Everything in computers is cumulative; nothing gets outdated. Even though Vulkan and DX12 make better use of multicore CPUs, they do not load the cores evenly. Pure math like Cinebench, compression, or video encoding parallelizes just fine. Logic where things need to happen in a certain order can't simply spread itself across all your CPU threads. Game developers can throw sound code onto one core, maybe physics onto another, and so on, while making sure it all stays in sync and doesn't introduce a ton of bugs in the process. This is why IPC is still important, unless you lack cores and some tasks have to share the same CPU thread. Coding for multiple cores in games is hard.
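
The serialized-logic point above is essentially Amdahl's law: if a fraction of each frame's work must run in order, extra cores only accelerate the remainder. A sketch (the 40% serial fraction is an arbitrary illustration, not a measured number):

```python
# Amdahl's law: overall speedup on n cores when a fraction s is serial.
def speedup(serial_fraction: float, cores: int) -> float:
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / cores)

# With 40% of a frame serialized (illustrative), doubling 8 cores to 16
# gains very little - which is why a 55% higher multicore benchmark score
# doesn't translate into 55% more fps.
print(round(speedup(0.4, 8), 2))   # 2.11
print(round(speedup(0.4, 16), 2))  # 2.29
```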



              • Originally posted by Dukenukemx View Post
                I ran it at 1080p because that's what my monitor displays and what majority of people playing games runs at.
                This means I need to set my 4K display to 2K and benchmark it to prove anything to you?

                LOL... there are many reasons why this benchmark makes no sense. First, OpenGL is outdated technology that serves only legacy purposes; Mantle/Metal/Vulkan/DX12 technology and hardware is already 10 years old.

                Second, benchmarking at 2K is in fact nonsense, because all future devices of relevance have 4K or 5K.
                This means you only do 2K benchmarking because you sit on outdated tech: your monitor is outdated, and no one in the market cares about people who run outdated stuff. Even smartphones have a higher resolution than 2K today.

                Third, Unigine Heaven - indeed all the Unigine benchmarks - is not a game at all; you could record a video and play it back with the same visual effect. This benchmark has no practical relevance at all. It's an outdated engine on outdated tech.

                If you want state-of-the-art tech, run some Proton benchmarks or Godot 4.0 engine benchmarks.

                Originally posted by Dukenukemx View Post
                Does Unigine Heaven have Vulkan support?
                Unigine is an OpenGL engine without any modern game to play. In the past I had Oil Rush, the only game using this engine...

                Originally posted by Dukenukemx View Post
                Pick a benchmark and I will run it. Doesn't matter anyway you have a Vega64, so you should be faster regardless, unless you borked your setup and it runs slower than my Vega56. I noticed you didn't run the benchmark, or did you?
                No. To be honest, running 10-years-outdated OpenGL tech on an engine (Unigine) that does not have a single game to play has no relevance in my life. It's a waste of time.

                If you want any relevant data, maybe we should run the Ashes of the Singularity benchmark in Proton...
                but your 2K display makes it null and void as a GPU benchmark...

                It might work as a CPU benchmark, ignoring the display resolution and running at the lowest resolution...

                Originally posted by Dukenukemx View Post
                Pick a benchmark that's going to use the GPU as well as CPU for a gaming test and I'll run it.
                Again, we could run Ashes of the Singularity in Proton, but at 2K resolution it is no GPU benchmark at all...
                It could only work as a CPU benchmark at the lowest resolution, like benchmarking at 800x600 pixels...

                Originally posted by Dukenukemx View Post
                Everything in computers is cumulative, nothing gets outdated.
                That's plainly and simply wrong... or do you use the 3dfx Glide API? Do you use DirectX 1.0, 2.0, or 3.0? The oldest still in use is DirectX 9.0... thanks to Vulkan, OpenGL is de facto obsolete... X11 is obsolete because of Wayland...

                I am not going to benchmark a 25+-year-old 3dfx Glide API game with you, because that's pointless.

                Get the point: Ashes of the Singularity runs on Mantle/DX12/Vulkan. This game is from 2016; it is 6 years old.

                Why should we benchmark anything older than 6 years?... You never buy new hardware to benchmark 10-year-old or older stuff.

                This makes no sense at all.

                Originally posted by Dukenukemx View Post
                Even though Vulkan and DX12 make better use of multicore CPU's, they do not max load the cores evenly.
                On Ashes of the Singularity it does in fact max out your CPU...

                Originally posted by Dukenukemx View Post
                Pure math like Cinebench, compression, or video encoding will do this just fine. Logic where things need to happen in a certain order can't just spread itself out to all your CPU threads. Game developers could throw sound code onto one core, and maybe physics code to another, and etc while making sure it all stays in sync and doesn't introduce a ton of bugs in the process. This is why IPC is still important unless you lack cores and then some tasks need to share the same CPU thread. Coding for multiple cores in games is hard.
                In Ashes of the Singularity every unit is an NPC with its own AI, which runs as a separate thread on a CPU core.

                You can play with 20,000 units on the map...

                Ashes of the Singularity is very relevant even today... because Proton runs it very well.



                • Originally posted by Dukenukemx View Post
                  I ran it at 1080p
                  I did download and test Ashes of the Singularity...

                  DX12 mode crashed at startup.
                  Vulkan mode crashed at startup.
                  The CPU benchmark mode only works in DX12 and Vulkan.
                  What does run: DX11 mode and the GPU benchmark under DX11 in Proton.

                  You need DX12/Vulkan to get any benefit from a system with a higher multicore score...

                  The GPU benchmark at 2K is pointless... and you do not have a 4K display to benchmark with.

                  It was a good idea, but sadly the current state makes it useless.

                  As for your point about the 2K GPU benchmark: yes, your CPU is 6-8% faster, but your Vega 56 is slower, so there is no meaningful result at all.

                  At 4K, the result is that the 6-8% faster single-core CPU does not matter at all, and the Vega 64 is faster than the Vega 56...

                  Nothing to prove here.

                  The only thing that would be interesting, but does not work, is the CPU benchmark mode in DX12/Vulkan... because that would be the proof that a 55% higher multicore score beats a 6-8% higher single-core result.



                  • Originally posted by ldesnogu View Post
                    I showed you a section that explicitly explains how ARMv4/v5 instructions behave differently from ARMv7, and you still claim compatibility is ensured?
                    So I did a bit of research, because I've seen v5 code run on newer ARMs and couldn't believe you. It turns out you are right: it's not 100% compatible in theory. But code compiled with GCC's or Clang's default settings targets v5 without unaligned accesses, thereby avoiding the incompatibility, so in practice v5 code runs on any AArch32 variant.
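
For context on the incompatibility being discussed: on ARMv4/v5, an LDR from an unaligned address loads the aligned word and rotates it, while later architectures (with alignment checking off) perform a true unaligned load. A small Python simulation of the two behaviours, assuming little-endian 32-bit memory:

```python
import struct

MEM = bytes(range(16))  # bytes 0x00, 0x01, ..., 0x0f

def ldr_armv5(addr: int) -> int:
    """Old ARMv4/v5 LDR: load the word at the aligned address, then
    rotate it right by 8 * (addr & 3) bits (the 'rotated load')."""
    word = struct.unpack_from("<I", MEM, addr & ~3)[0]
    rot = 8 * (addr & 3)
    if rot == 0:
        return word
    return ((word >> rot) | (word << (32 - rot))) & 0xFFFFFFFF

def ldr_armv7(addr: int) -> int:
    """ARMv7 LDR with alignment checking disabled: true unaligned load."""
    return struct.unpack_from("<I", MEM, addr)[0]

# Aligned loads agree; unaligned loads differ. This is why v5 code that
# avoids unaligned accesses (the compilers' default) runs fine on both.
assert ldr_armv5(0) == ldr_armv7(0)
print(hex(ldr_armv5(1)))  # 0x30201   (rotated)
print(hex(ldr_armv7(1)))  # 0x4030201 (true unaligned)
```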



                    • Originally posted by mdedetrich View Post
                      Homebrew is typically only ever installed once per mac system, so they are as accurate as you can get in terms of stats.
                      Then it's ideal for this, which is what I was trying to get established, because this isn't generally a place where Apple users hang out.

                      > This question is meaningless because the original hypothesis you made is completely opaque and ill defined. Originally, many pages back you claimed that Macbooks are not suited for "real work".

                      Untrue, and untrue. I said that Macbooks generally weren't *used* for "real work" - as you know, since you quoted that piece in your comment about how they were good development machines - which you and I both defined as "actually pegging the CPU".

                      Let's cut to the chase: you've now repeatedly made claims as to what I've "said" that are not just false, they're literally the exact opposite of what I actually wrote. Your reinterpretations of the words are getting more disconnected each time, with your replies also getting further away from the topic each time, and turning into increasingly desperate attempts to dodge or reframe the question.

                      You've put *way* more effort into avoiding the issue than answering it would have taken, which is generally a very solid hint that you've realized you made a mistake but are too proud to admit it, so now you're floundering and throwing out as many excuses as you can in the hope someone will fall for it. (Which seems a waste of effort, really: only you and I care at all, and since I can read, you're unlikely to convince me.)

                      Maybe you're just really confused. It happens. You did at least accept that the Airs are vanity items, so that's something.
                      I'd have *liked* something a bit more concrete than my wild guess at what the numbers are, because I'm endlessly curious that way, but I get the impression from your response that I was at worst in the right orders of magnitude, so I guess that'll have to do.

                      > And if you wondering why a lot of "real" programmers use Macbook Pro's

                      I'm not wondering about it at all. Most of my friends have ever since Apple abandoned 68K, which is long before laptops became jewelry. But they're a tiny minority of Macbook users overall - which was my point, despite your recent attempts to pretend it wasn't.

                      If you ever change your mind, feel free to let us know what the answer was. Nobody's going to ridicule you for simply being wrong on a subject where everyone involved was just making their best guess in the first place.
                      Last edited by arQon; 16 August 2022, 03:51 AM.

