AMD Ryzen 7 PRO 6850U "Rembrandt" Linux Benchmarks


  • #21
    TL;DR Can you turn this into a Steam Deck dev machine ++ AMD HIP/ROCm/HSA dev machine and then test gaming and compute performance?

    =-_-======\Original ADHD Rant Below/===================================

    Can you test:

    1) AMD HIP performance in Blender with this laptop? That's the big kicker for a few friends (and friends' kids) who are finally going to school, or back to school, for their hobby.

    I feel they need a gaming laptop with a dedicated GPU for basic Blender stuff for classes and hobbies -- but I'm curious to see if HIP can use the GPU + CPU together to render decently well (within a budget) compared to a gaming rig or something with a low-end Nvidia MX GPU.

    2) Darktable 4.0 OpenCL performance with the latest ROCm OpenCL drivers from the 5.2.1 release?

    3) Gaming performance with FSR enabled, using HoloISO to install the latest SteamOS. zomg. (OK, maybe that is a specific drool-worthy request that should go to a gaming-focused site? But not sure where else to ask lul)

    Or even better: HoloISO installed, gaming tests under the Gamescope/SteamOS compositor with FSR enabled vs. under KDE Plasma ++ AMD ROCm/HIP/HSA SDKs installed ++ testing Blender Cycles HIP performance ++ Darktable OpenCL performance --- BASICALLY turning this into a gaming ++ productivity ++ "life/photo management" laptop.
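For anyone wanting to try the HIP/OpenCL side of this at home, a minimal sketch of the relevant commands (assumes ROCm and Blender 3.2+ are installed; `scene.blend` is a placeholder file):

```shell
# Confirm ROCm can actually see the iGPU before anything else
rocminfo | grep -i gfx

# Render a single frame headless with Cycles on the HIP backend
blender -b scene.blend -E CYCLES -f 1 -- --cycles-device HIP

# Sanity-check darktable's OpenCL support against the installed runtime
darktable-cltest
```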

    =-_-=======\Side Note/========

    -- I've done this with a Raven Ridge-based Athlon 3050U laptop (2 cores ++ 2 Vega CUs ++ 32GB RAM ++ 1TB NVMe) and am curious to see what an actually unleashed mobile platform can do... Yeah, it's cool watching radeontop and bashtop show my compiler pushing its doodads into those 2 Vega CUs (platform-limited to 1100MHz + 2400MHz DDR4 RAM) --- but it would be even cooler with this laptop!

    Only thing I have not done... benchmarks... mostly because this is like Mendocino but worse. It seems like it would be unreasonable for someone to use an Athlon 3050U as a gaming platform. Then again -- if there is an OpenBenchmarking.org run I can duplicate, screw it, might as well xD
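Duplicating a published run is a one-liner with the Phoronix Test Suite; the result ID below is a placeholder for whatever OpenBenchmarking.org result you want to reproduce:

```shell
# Fetch an existing OpenBenchmarking.org result set and re-run it side by side
phoronix-test-suite benchmark <openbenchmarking-result-id>
```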
    Last edited by Eirikr1848; 22 July 2022, 05:57 AM.



    • #22
      Originally posted by hamishmb

      Cool, I think running Linux helps a lot too; I imagine it's not so great on e.g. Windows 10. I was slightly wrong also, it's a 3632QM - might be the exact same CPU as you.
      Yes, this exact one. I ran it with Win 10 for a few months; as long as Windows doesn't decide to do updates "in the background" it's equally fast, or at least not noticeably slower. This CPU is not some old Celeron shit. It competes with more modern i5s and even the Core i7-1180G7 in multithreaded applications. And single-threaded performance isn't too bad either compared with more modern 15W CPUs from Intel. Got mine for 70 €. The T430 is the last ThinkPad generation where you can change the CPU.

      EDIT: Apparently the CPU can take 32GB of RAM
      Look it up beforehand; the platform needs to support it too. On my T430, 16 GB is the official max, although it might work with more.
      Even with a VM running I never needed more than 16GB, but of course it depends on your use case.



      • #23
        Originally posted by rabcor
        <rant>
        New hardware is terribly unexciting these days; I'm on a 3-, pushing 4-year-old laptop right now...
        </rant>
        Performance at the same TDP has pretty much tripled in the last 3 years, and battery life for high-performance devices has nearly doubled. I got a new laptop this year; the same price bought me double the performance of my 2-year-old laptop.

        You complain about the absence of hype, but the "hype" buyers commonly buy new computers over a meager 30-50% upgrade. Double the performance is quite a substantial upgrade, and if you can't tell the difference, then you probably don't need anything extra over what you have anyway.

        x86 is a very mature platform, and prior to Zen it was in a decade-long stagnation. Comparing it to a novelty product by a novelty company with a novelty SoC design inside is ridiculous. There's lots of low-hanging fruit to be picked there, something already evident from the barely incremental M2. Yeah, it ain't gonna be impressive every gen. A car is quick to gain speed when accelerating from a standstill, but very slow when it is already moving fast.

        For x86 laptops in particular, things have been pretty exciting. We actually have competition now; mobile Zen started out iffy but is now dominant in terms of performance. There are thin, light and affordable laptops with 4 times the power of my HPC workstation from 10 years ago. There are generic x86 handheld devices that can play AAA games.

        Complaining about chips getting into the hundreds of watts... Are you aware that for a time, x86 CPUs didn't even require a heatsink, much less a fan? Those things used to draw single-digit watts, and they got significantly more useful as they moved into the tens of watts, so hundreds of watts is but a logical continuation. In a few years' time wafer-scale chips will become more feasible and TDPs will move into the kilowatt range; then you can mesh wafer-scale chips into memory-coherent, megawatt-range single logical systems. That's already foreseeable today.

        As for the legitimate portion of your complaints - that's pretty much the "CGI effect" in movies: having more hardware to do more effects than ever doesn't really result in better movies, just in more shallow movies that put more emphasis on cheap, abundant CGI. More power means you can be careless with what you do. The bulk of the perf increases we get are being negated by inefficient software: lousy programming languages, poorly designed libraries... Making it run well on limited hardware is what pushes developers to get creative. I have yet to play an FPS with better multiplayer dynamics than Quake 3; that's got nothing to do with software maturity or hardware performance, it was just written well, and even the same devs pushing it further didn't really make it any better. Quite often simply having "more" can make you bloated, sloppy, buggy, lazy and inefficient.

        There's also "auxiliary cost amortization". A 512 GB SSD is more affordable per gigabyte than a 256 GB SSD, which is more affordable than a 128 GB SSD. That's because while the three have different numbers of memory chips, they all still need the controller as a common component regardless of capacity. It is the same with power: sure, they could make a lot more smaller, less power-hungry chips, but that would require too many additional sockets and motherboards and whatnot.

        So they go all out on what's economically feasible as a die size, and the power is whatever it can handle at the clocks it can reach. If you get double the performance per socket, then your data center requires only half the rack space, motherboards, auxiliary components and whatnot. If I need 1000 watts' worth of performance, I'd rather have it in a single brick than an entire wall. Many people make do with one PC at a time, but some require more than any single system can provide. And things scale much more easily across the same CPU die than across different sockets, which in turn scale much better than different systems.

        And it is not like lower TDPs are being neglected either. 15-watt platforms have moved into desktop performance; desktop platforms have moved into HEDT performance. So while the industry keeps introducing bigger and bigger power targets, all existing market demand is being met with upgrades as well. It is certainly much better than the quad-core DECADE that barely saw CPU perf double over an actual decade. The last few years we are getting a doubling more or less annually. 5-7 watt x86 is now becoming a thing too, and I mean without being abysmal like previous attempts.

        So yeah, there seem to be more computers than cool, exciting things to do with them. Creativity is less constrained by hardware than ever. So we use the hardware to... complain there's no exciting-enough hardware? Good job!
        Last edited by ddriver; 22 July 2022, 07:19 AM.



        • #24
          Originally posted by Anux
          Yes, this exact one. I ran it with Win 10 for a few months; as long as Windows doesn't decide to do updates "in the background" it's equally fast, or at least not noticeably slower. This CPU is not some old Celeron shit. It competes with more modern i5s and even the Core i7-1180G7 in multithreaded applications. And single-threaded performance isn't too bad either compared with more modern 15W CPUs from Intel. Got mine for 70 €. The T430 is the last ThinkPad generation where you can change the CPU.


          Look it up beforehand; the platform needs to support it too. On my T430, 16 GB is the official max, although it might work with more.
          Even with a VM running I never needed more than 16GB, but of course it depends on your use case.
          Good to know, I didn't realise it was still that competitive.

          Yeah, that's worth considering - it probably maxes out at 16 GB. The real limits are often higher than the manufacturer says, though.

          At any rate, I don't see myself needing 32GB of RAM on this for the foreseeable future. It's already 10 years old and the hinges are slightly broken, so it might not last long enough anyway, but we'll see. I like keeping things working for as long as possible rather than replacing them.



          • #25
            Originally posted by hamishmb
            Good to know, I didn't realise it was still that competitive.
            Yeah, something good had to come out of Intel's 10-year-long quad core +3% strategy.



            • #26
              Originally posted by Anux
              Yes, this exact one. I ran it with Win 10 for a few months; as long as Windows doesn't decide to do updates "in the background" it's equally fast, or at least not noticeably slower. This CPU is not some old Celeron shit. It competes with more modern i5s and even the Core i7-1180G7 in multithreaded applications. And single-threaded performance isn't too bad either compared with more modern 15W CPUs from Intel. Got mine for 70 €. The T430 is the last ThinkPad generation where you can change the CPU.


              Look it up beforehand; the platform needs to support it too. On my T430, 16 GB is the official max, although it might work with more.
              Even with a VM running I never needed more than 16GB, but of course it depends on your use case.
              I have a T430 too. At work a colleague handed me an older Dell laptop that someone had thrown in the trash, asking if I wanted it. I said "sure", just thinking about the HDD. To my complete surprise, it was (I imagine) a maxed-out machine with that i7, 8GB of RAM and a Blu-ray player. It must have cost a small fortune when new in 2012. So I grabbed the CPU, RAM, HDD (for an external case) and the BR drive, all for free!!! Like they say: one man's trash is another man's treasure.
              Last edited by [email protected]; 22 July 2022, 08:21 AM.



              • #27
                Originally posted by rabcor
                <rant>
                I mean, if I could upgrade to something like the Apple M1 or M2 I'd actually be interested, more for the power saving & battery capacity aspects than anything else. But those products have no competitor on the market, and I'm not dropping 5 trillion dollars on a laptop just because there's a picture of an apple on it (if anything that's a reason for me not to buy it at all, since I know the manufacturer will only provide me with a garbage operating system and make it as hard as they can to switch to a better OS)
                </rant>
                Having gotten an M1 Pro from work as a work laptop, I will say that even though it's really expensive, it's probably the first time a Mac is worth that massive price tag. I was using a ThinkPad X1 Carbon Gen 9 with an Intel CPU before, and even though the MacBook M1 Pro is only slightly thicker/heavier, it clean-compiles non-trivial codebases around 7-10x faster, which is frankly ridiculous.

                And don't get me started on the battery life.
                Last edited by mdedetrich; 22 July 2022, 04:30 PM.



                • #28
                  Awesome benchmark review. I'm on aging laptops right now: a Dell D7378 (i5 7500) and a Lenovo Carbon X1 Gen 6 (i5 8350). The Dell probably can't even handle my DJI Mini 2's video footage, while the Lenovo does. The AMD Ryzen 6850U is on my shortlist for my next laptop, though I'm looking to see how USB4 support plays out on laptops for peripherals.

                  Originally posted by mdedetrich

                  Having gotten an M1 Pro from work as a work laptop, I will say that even though it's really expensive, it's probably the first time a Mac is worth that massive price tag. I was using a ThinkPad X1 Carbon Gen 9 with an Intel CPU before, and even though the MacBook M1 Pro is only slightly thicker/heavier, it clean-compiles non-trivial codebases around 7-10x faster, which is frankly ridiculous.

                  Yes, definitely tempting. I've never used a Mac myself, but ever since starting to use the DJI Mini 2 drone and messing with video, the MacBook has my eye too.



                  • #29
                    Originally posted by mdedetrich

                    Having gotten an M1 Pro from work as a work laptop, I will say that even though it's really expensive, it's probably the first time a Mac is worth that massive price tag.


                    One thing to note is that the new M2 MacBooks with 256GB of storage have SSDs half the speed of the original M1's. So you're losing some value even compared to older M1 devices, unless you get a 512GB model, for which you're paying $200 more for another 256GB of storage -- storage which you can't upgrade or replace.

                    Part of the reason I think Apple did this is the expense that goes into making the M-series chips. Compared to AMD and Intel chips, the M series is a good 4x to 5x larger in transistor count, and it's all on one single die. That means these chips are expensive to manufacture, especially on 3nm. The defect rate must be high, as the bigger the chip, the more likely a defect will occur. This is why AMD went chiplet: to improve yields, lower cost and improve performance -- something Apple isn't doing. So Apple has to charge a higher price and offer less storage in order to maintain profits, compared to going with Intel chips.

                    By the end of the year AMD will have Zen 4 chips running on 5nm, with the laptop versions out in 2023 on 4nm. Intel will be using 3nm next year as well, though for what I don't know. Whatever the case, Apple won't have the power advantage anymore by next year; AMD's and Intel's x86 will have caught up in power efficiency and will be a lot cheaper as well. Technically AMD and Intel chips are already a lot cheaper but lack the power advantage, though I would argue that really depends on what you do with these laptops, as the M series is only really good at video editing and single-threaded applications in terms of power efficiency.
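The die-size-versus-yield argument above can be made concrete with the classic Poisson yield model; the defect density and die areas below are illustrative placeholders, not Apple's or TSMC's actual numbers:

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_mm2: float) -> float:
    """Fraction of dies expected to have zero defects under a Poisson defect model."""
    return math.exp(-die_area_mm2 * defect_density_per_mm2)

D = 0.001  # 0.1 defects per cm^2 -- an illustrative, made-up defect density

small_die = poisson_yield(120, D)   # roughly chiplet-sized
large_die = poisson_yield(480, D)   # one monolithic die of 4x the area
print(f"{small_die:.2f} vs {large_die:.2f}")  # 0.89 vs 0.62
```

Quadrupling the die area drops the zero-defect fraction from about 89% to about 62% under these assumptions, which is the economic pressure the post describes.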

                    Remember that AMD, Intel and Nvidia have a much larger market share to fund R&D, and have years of engineering behind their products. Apple, though, has basically taken ARMv8, made some tweaks and "borrowed" Imagination's GPU tech to make its SoCs -- all for what is essentially 10% market share. Eventually Apple will have to shift gears and either pay Nvidia or Qualcomm to make ARM SoCs for them (and maybe go AMD for discrete GPUs), or go right back to Intel and x86. We Linux users have a hard time getting Windows applications working on x86, so imagine what Apple M1 users have to deal with on macOS, on ARM, without functional OpenGL or Vulkan. Unless you have a very specific reason to use an Apple M-based product, like video editing, you're going to have a bad time.



                    • #30
                      Originally posted by Dukenukemx


                      One thing to note is that the new M2 MacBooks with 256GB of storage have SSDs half the speed of the original M1's. So you're losing some value even compared to older M1 devices, unless you get a 512GB model, for which you're paying $200 more for another 256GB of storage -- storage which you can't upgrade or replace. Part of the reason I think Apple did this is the expense that goes into making the M-series chips. Compared to AMD and Intel chips, the M series is a good 4x to 5x larger in transistor count, and it's all on one single die. That means these chips are expensive to manufacture, especially on 3nm. The defect rate must be high, as the bigger the chip, the more likely a defect will occur. This is why AMD went chiplet: to improve yields, lower cost and improve performance -- something Apple isn't doing. So Apple has to charge a higher price and offer less storage in order to maintain profits, compared to going with Intel chips. By the end of the year AMD will have Zen 4 chips running on 5nm, with the laptop versions out in 2023 on 4nm. Intel will be using 3nm next year as well, though for what I don't know. Whatever the case, Apple won't have the power advantage anymore by next year; AMD's and Intel's x86 will have caught up in power efficiency and will be a lot cheaper as well. Technically AMD and Intel chips are already a lot cheaper but lack the power advantage, though I would argue that really depends on what you do with these laptops, as the M series is only really good at video editing and single-threaded applications in terms of power efficiency.

                      Remember that AMD, Intel and Nvidia have a much larger market share to fund R&D, and have years of engineering behind their products. Apple, though, has basically taken ARMv8, made some tweaks and "borrowed" Imagination's GPU tech to make its SoCs -- all for what is essentially 10% market share. Eventually Apple will have to shift gears and either pay Nvidia or Qualcomm to make ARM SoCs for them (and maybe go AMD for discrete GPUs), or go right back to Intel and x86. We Linux users have a hard time getting Windows applications working on x86, so imagine what Apple M1 users have to deal with on macOS, on ARM, without functional OpenGL or Vulkan. Unless you have a very specific reason to use an Apple M-based product, like video editing, you're going to have a bad time.
                      So thanks for the massive wall of text/rant about something that is tangentially, but not really, relevant to what I was saying.

                      For starters, the laptop I have is a MacBook Pro M1 14", not the MacBook Air M2 13" (which is the laptop that has the SSD problem you are complaining about). Also nice that you are cherry-picking what is probably the only minor negative thing.

                      Secondly, if you normalize for power efficiency, nothing from AMD/Intel comes close to the M1 Pro (14 or 16 inch). To put things into perspective, as stated before, the M1 Pro compiles codebases 2-10x faster than laptops in the same class, and while doing so it generates so little heat that to date the fans on my M1 Pro haven't even spun up, all while delivering ~12-18 hours of battery life.

                      It's not just that the M1 CPU happens to be faster; since it's an SoC, the memory sits on the same package as the CPU/GPU, which means the memory latency is ultra low. That's on top of the fact that the M1 Pro's memory is 256-bit LPDDR5.
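For scale, the peak bandwidth of a 256-bit LPDDR5 interface works out as follows (assuming the LPDDR5-6400 speed grade; Apple quotes roughly 200 GB/s for the M1 Pro, which matches):

```python
bus_width_bits = 256
transfers_per_sec = 6400e6  # LPDDR5-6400: 6400 MT/s (assumed speed grade)

# Peak bandwidth = bytes per transfer * transfers per second
bytes_per_transfer = bus_width_bits / 8
bandwidth_gb_s = bytes_per_transfer * transfers_per_sec / 1e9
print(bandwidth_gb_s)  # 204.8
```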

                      To be blunt, hardware-wise nothing even comes close to the Apple M1 Pro for laptops of the same size. It's unfortunate that for now you need to use macOS, but progress on Asahi Linux is insanely fast, so I wouldn't be surprised if a few years down the line you can get the best of both worlds (best-in-class hardware with Apple Silicon + Linux as the OS).
                      Last edited by mdedetrich; 24 July 2022, 06:21 AM.

