Blender 4.1 Will Further Expand Linux's CPU Rendering Performance Lead Over Windows

  • Panix
    replied
    Originally posted by SciK View Post

    Since VAT is the same across the country, consumer prices are shown with tax included. Having to remember to add a “sales tax” on top of the advertised price is more of a US thing.

    A guide to the value-added tax (VAT) for German freelancers and businesses.
    Oh okay - well, that was in reference to Mr. Q 'bragging' (?) about how cheap the 7900 XTX is in his country (Germany) - but when you check PCPartPicker, the prices look high there, too. Doing the conversion, the cheapest card looks like 950 Euros (and it wasn't at that price for long), and the average looks to be around 1000 Euros - which comes out similar to average prices in Canada - around $1452 CAD - while the cheapest 7900 XTX is currently $1249 (and yes, these are before tax) - so the final price is always way higher than the price you're looking at.
    So, $1470 before tax, and other cards are over $1500. It seems like a similar situation in Germany.

    I was arguing that these AMD GPUs (which lack productivity features) are overly expensive for what they are, and that AMD didn't reduce prices to increase sales - which caused a lot of consumers, including those who actually bought one of these higher-tier AMD cards, to chastise AMD for it.



  • SciK
    replied
    Originally posted by Panix View Post
    Edit: You don't have any taxes in Germany? I didn't know that. Wow. /s
    Since VAT is the same across the country, consumer prices are shown with tax included. Having to remember to add a “sales tax” on top of the advertised price is more of a US thing.

    A guide to the value-added tax (VAT) for German freelancers and businesses.

    VAT is always included in the advertised price. If the price label says 20€, the customer pays 20€ including VAT.
    Last edited by SciK; 15 February 2024, 06:11 AM.



  • sentry66
    replied
    I think I read someone saying Cinema 4D runs on Linux, but there is no Linux version of C4D; it is Mac and Windows only. Linux pretty much always renders faster than Windows for CG render engines. It's not just that it processes faster: a lot of the highest-end networked storage technologies like WekaIO run natively on Linux, and Linux offers container technology for the OS and applications to further maximize render performance. For a render farm of hundreds of render nodes and many CG artists working with large files over the network, going through Samba to Windows is a bottleneck in terms of latency. Windows real-time GPU rendering performance is decent, but for offline rendering Linux GPU performance wins, and CPU performance is pretty much always faster on Linux.



  • tenchrio
    replied
    Originally posted by qarium View Post

    You see, you cannot render this scene on your 16GB VRAM 4070 Ti... you cannot even render it on a 4090.
    This is also the answer to whether the 7900 XTX is useful for Blender or not.
    And the clear answer is no, it is not. All your babbling about buying a consumer GPU for Blender is complete nonsense.
    I have to jump in here, since neither of you two (yes, Panix included) seems to do any 3D or AI work yourselves, and you are both now making assumptions that are just untrue.

    Workstation cards aren't magically better than consumer cards; in fact, in many cases they share the exact same chip as consumer cards.
    For instance, the AD104 is used in the RTX 4070 and 4070 Ti, but also in the workstation cards RTX 4000 Ada and RTX 4500 Ada.
    So yes, they are kind of just consumer/gaming cards with more RAM (and those two examples have 20GB and 24GB of VRAM, but we will come back to that).
    However, in many cases the consumer cards outpace their workstation equivalents thanks to a bigger profile (so more cooling), more shader cores and a larger power budget. You mentioned the RTX 6000 Ada, but in terms of performance it would still lose to an RTX 4090. Hence some artists prefer buying the gaming cards, because the VRAM difference might not matter to them. The consumer cards also tend to have better viewport performance and arguably allow for a better sculpting experience at the same budget (especially with dyntopo on).

    With that being said, VRAM is important and can even be leveraged for better performance, for instance through baking, which precalculates part of the scene so it doesn't have to be computed over and over (and I would rant about the lack of benchmarks on this front, much like CPUs only ever being Blender-benchmarked on anything but simulation times, which usually can't be run on a GPU, but I will digress before I discover the word limit on Phoronix). This can be useful for assets you reuse, or in animation where particular assets stay in frame for a while or are viewed from different angles (so characters, or some McGuffin that is in frame half the time). Baking can also lower VRAM usage, for instance by baking the normal maps of complex meshes onto simplified versions (really easy to do with the Multires modifier).
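
    A minimal, hypothetical bpy sketch of that kind of normal-map bake (the object names, resolution and cage offset are made up, and context details will vary per project):

    ```python
    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'  # baking runs through Cycles

    # Hypothetical names: bake the high-poly detail onto the low-poly mesh
    low = bpy.data.objects["character_lowpoly"]
    high = bpy.data.objects["character_highpoly"]
    high.select_set(True)
    low.select_set(True)
    bpy.context.view_layer.objects.active = low

    # Cycles bakes into the image of the active Image Texture node on the active material
    img = bpy.data.images.new("baked_normal", width=4096, height=4096)
    nodes = low.active_material.node_tree.nodes
    tex_node = nodes.new("ShaderNodeTexImage")
    tex_node.image = img
    nodes.active = tex_node

    bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True, cage_extrusion=0.05)

    img.filepath_raw = "//character_normal.png"
    img.file_format = 'PNG'
    img.save()
    ```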

    If you keep running out of memory despite optimized meshes with baked normals and textures, then you either resort to CPU rendering (which is significantly slower than even HIP without RT and would result in a much bigger time difference than OptiX vs. HIP; I don't even know if APUs are finally supported in Blender, as no benchmarks of their iGPUs seem to turn up when googling) or you use a render farm. And if you are getting a subscription for the latter because your Nvidia cards keep throwing out-of-CUDA-memory errors, you might as well have opted for AMD, since AMD's viewport performance isn't affected by RT acceleration (and in fact seems more optimized than Nvidia's, at least for Blender 3.6; price/performance-wise it would be very interesting for a workflow that uses cloud rendering for the final image/sequence). You still need to build the scene/objects/sequence before it ever reaches the render farm, and outfitting every workstation with an RTX 6000 Ada would be rather insane; as mentioned before, the gaming cards tend to have better viewport performance anyway.
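
    As an aside, switching which backend Cycles uses (CPU, HIP, CUDA, OptiX) is scriptable; a rough sketch via Blender's Python API, assuming a build where the chosen backend is actually available:

    ```python
    import bpy

    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "HIP"   # or "CUDA", "OPTIX", "ONEAPI", depending on build/driver
    prefs.get_devices()                 # refresh the detected device list

    for dev in prefs.devices:
        dev.use = (dev.type != "CPU")   # enable every GPU, leave the CPU unchecked

    bpy.context.scene.cycles.device = 'GPU'
    print([d.name for d in prefs.devices if d.use])
    ```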

    With that being said, I find it a bit misleading to use a benchmark from what may be the biggest animation company in the world to demonstrate VRAM usage.
    For a more nuanced yet real example there is the Cosmos Laundromat/Victor benchmark, with its recommended VRAM size of 12GB. It baffles me that this benchmark isn't more front and center; it is the perfect example of real-life performance, since it comes straight from an indie/short film. I remember, back when I couldn't render it on my GTX 1080 Ti, thinking this was a prime example of a benchmark for knowing whether a card has sufficient VRAM for indie projects in the future, a beacon to guide me to an upgrade (and an excuse to point and say "no, my scene isn't un-optimized, my GPU just lacks VRAM"). Yet I have found it to be mostly absent from just about every review site, I guess because very few GPUs would make the cut.

    A scene can still render with a shortage of VRAM, but render time suffers, as demonstrated by Blender Rookie (the 3060 12GB beats the 3070 by minutes). However, most if not all of the Blender benchmarks used by techtubers and GPU reviewers tend to be extremely lenient on the VRAM side, almost unrealistically so. For example, Scanlands can be rendered even by a 4GB 6500 XT. So naturally I dived into the Scanlands file and found procedural brick textures, and mix shaders on top of mix shaders for simple Principled BSDF nodes that only differed in their color input (and one with a different roughness value, which had me scratching my head as it was called "islands3" but was attached to the shader of a mesh that was clearly a building; I guess a reused texture). Those could honestly be baked for better/faster performance (but more VRAM usage). Rendering the scene only used 2GB of VRAM, and I know from experience that baking materials to 4K textures definitely increases VRAM usage by varying amounts depending on the outputs. Hell, I might deep-dive into this and see how hard it would be to optimize versus what it would net me in performance.

    However, I also feel like I need to point out that not every render engine supports AMD. You two have been talking mostly about Cycles, but there are many more, and depending on your type of project you could opt for another one. Eevee, for instance, doesn't care about RT cores (at least not yet; Eevee Next should fully release soon, and since it uses Vulkan instead of HIP or CUDA it will have AMD RT from day one) and would be used for more cartoon-esque creations (VRAM usage can still skyrocket with particle emitters). V-Ray, on the other hand, is used more for architectural renders and unfortunately only runs on CUDA hardware, as do other popular render engines like Maxon Redshift and OctaneRender. LuxCore is an alternative that does support AMD, but people who work with V-Ray might still not be willing to switch, to avoid having to re-adjust their workflow.

    Additionally, not every Cycles scene seems to benefit from OptiX RT acceleration as much as the next. This was demonstrated in the last Blender deep dive on Techgage, where the White Lands render benchmark barely benefited from RT acceleration at all (except for that RX 6500 XT somehow, but at this point I feel like that card is mostly an indicator of whether a benchmark uses even 2K textures or not).

    In summary, consumer/gamer cards are used by 3D artists, and while AMD isn't unusable, they aren't a 100% certain choice, nor are they 100% excluded from being an option. There is a myriad of reasons why you would opt for one over the other.
    Yes, AMD HIP-RT should have been working by now (Intel's oneAPI does, even on Linux), but yes, Nvidia's lower VRAM can impact performance and sometimes outright prevent rendering a scene that it otherwise should handle.

    I would write about what I remember about AI from college, but honestly the short answer is: more VRAM, better results/performance. Just google deep learning optimizers. Even for hobby or freelance projects, like making a LoRA on a 16GB card, they can net better results faster, as they make the learning rate adaptive. You won't make anything groundbreaking, but you might make a LoRA that gets to the top of Civitai. Image size also affects VRAM usage, as Puget Systems has demonstrated (at 512x512, which is pretty common, VRAM usage tops out at 13.7GB for Nvidia, so even the 4070 Ti Super could do it; 1024x1024 is still perfectly handled by the RTX 4090, but the 4080 falls off at network dim 64 and higher without gradient checkpointing).
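
    Since gradient checkpointing came up: it trades extra compute for lower VRAM by recomputing activations during the backward pass instead of storing them all. A toy PyTorch sketch, not tied to any particular LoRA trainer, with arbitrary layer sizes:

    ```python
    import torch
    from torch import nn
    from torch.utils.checkpoint import checkpoint_sequential

    # Toy stand-in for a bigger network; sizes are arbitrary
    model = nn.Sequential(*[nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(8)])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # adaptive per-parameter learning rates

    x = torch.randn(16, 1024, requires_grad=True)

    # Split the model into 4 segments: only segment-boundary activations are kept,
    # the rest are recomputed during backward, cutting peak memory at the cost of time.
    out = checkpoint_sequential(model, 4, x)
    loss = out.pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    ```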



  • bytemaniak
    replied
    Originally posted by sophisticles View Post
    I guess once again I need to be the voice of reason.
    [...]
    The developers want to keep propagating the myth of Linux superiority so they concentrate their efforts on the Linux version.
    What a pile of nonsense. Anyone can go contribute to Blender and make the Windows version perform better. This is not some imaginary QAnon shadowy cabal delusion, this is real life.



  • qarium
    replied
    Originally posted by Panix View Post
    I'm under the impression that Blender can't implement the ray-tracing acceleration because the library/runtime there is closed, at the moment?
    Perhaps Michael can clear that up on the site sometime, when Blender and HIP-RT are better implemented on Linux?

    I never implied that 16GB was optimal for Blender or AI - it's just that it's 'ok' for a consumer card - and with Nvidia, at least, things will work until a project requires more VRAM - unlike AMD, where Blender is limited in performance because HIP lacks the ray-tracing acceleration part, which would speed up performance. That part has been missing for quite a while - and there are other issues, or accusations of stutters/crashes/problems - of not being able to start renders, etc.
    Whereas my point with the Nvidia GPU with less VRAM is that it'll work more quickly with OptiX - despite the lack of VRAM.
    The AMD GPUs are gaming cards, even though the flagship RDNA 3 GPU might have 24GB of VRAM.
    Does DaVinci Resolve on Linux - if using an AMD card (e.g. RDNA 3) - require the closed driver? I think it requires a hybrid - meaning both - with the closed components on top... afaik.
    Just stick to a current real-world example to see whether a given VRAM capacity is useful for Blender work or not:


    After our extended tour through where pbrt-v4 spends its time getting ready to render the Moana Island scene, we finally look at rendering, comparing perform...

    "Disney’s Moana Island scene​"

    Keep this in mind: "the scene requires about 29GB of VRAM for a 1920x1080 render."

    And you do not even want to make a production in 2K; you want 4K or 5K...

    You see, you cannot render this scene on your 16GB VRAM 4070 Ti... you cannot even render it on a 4090.
    This is also the answer to whether the 7900 XTX is useful for Blender or not.
    And the clear answer is no, it is not. All your babbling about buying a consumer GPU for Blender is complete nonsense.

    It's plain and simple a lie. Any serious project will not fit in the VRAM.

    If you want to do any serious work in Blender you need a 48GB VRAM GPU like the Nvidia PNY RTX 6000 Ada Generation (48GB GDDR6, 4x DP, Smallbox).

    This is 8500€, as you can see on geizhals.at.

    And this is what you really don't get... the cheapest AMD Radeon PRO W7900 I have ever seen was 3.829€:
    https://www.computeruniverse.net/de/...als&utm_campaign=cpc&utm_medium=katalog&utm_content=artikel&APID=727

    Keep in mind that even an APU like the AMD Ryzen 8700G with Radeon 780M graphics will do a better job in Blender than a 4070 Ti with only 16GB of VRAM.

    This all shows that you should not give people advice when what you say is clearly wrong.

    "it'll work more quickly with OptiX"

    It does not matter if something works more quickly with OptiX if it does not work at all, because the project needs more VRAM than your card has.

    And keep in mind this example: "29GB VRAM for a 1920x1080 render". If you want 4K or 5K you need much, much more than 29GB.



  • Panix
    replied
    Originally posted by qarium View Post

    Just stop spreading lies. For many years now the AMDGPU-PRO driver by default just installs the very standard open-source stack: the AMDGPU kernel part, the RadeonSI OpenGL driver and the RADV Vulkan driver.
    This means you do not need a closed-source AMDGPU-PRO driver to do something like HIP-RT...

    And as for "either don't work at all or are 'experimental' - and you experience bugs, crashes - in other words, it doesn't work":
    for many years now I have not had crashes, bugs and so on, and of course I have not had the case that it does not work.
    HIP in Blender with my Vega 64 and W7900 has just worked in Fedora for over a year now.

    People even report that ROCm/HIP works for their 8700G with Radeon 780M; everything better than a Vega 64 just works now.
    You can say that in the case of the Vega 64 it is maybe 5-6 years too late, the 5700 XT/RDNA1 3-4 years too late, the 6900 XT/RDNA2 2-3 years too late and RDNA3 1 year too late, of course, but in 2024 it just works.

    First you admit that 16GB isn't enough for AI/deep learning, and then you claim an Nvidia 16GB VRAM card is better than a 24GB VRAM 7900 XTX.
    No dude, these 16GB VRAM cards are not for AI/deep learning professionals; they are gaming cards. The PlayStation 5 has 16GB of memory, which means these PlayStation 5 games run well.

    That's the reason why these 16GB VRAM cards like the Nvidia 4070 Ti and AMD 7900 GRE are cheap: AMD/Nvidia know they are not fit for AI/deep learning.

    You cannot afford it? An AMD Ryzen 8700G system with 192GB of RAM is really cheap:
    AMD Ryzen 7 8700G = 338€
    GIGABYTE A620M DS3H = 106€
    Corsair Vengeance black DIMM kit 192GB, DDR5-5200 = 700€
    Prices from: geizhals.de
    Many poor man's AI/deep learning researchers buy exactly this, and it is the cheapest way to get into the field.

    Keep in mind that even if you buy these poor man's researchers an Nvidia 4090, they will not be happy with only 24GB of VRAM.

    Keep in mind that modern Nvidia products like the GH200 are also APUs/SoCs.

    "whether it's a 16gb or 24gb gpu"

    They aim for the Nvidia GH200 with 96GB of VRAM.

    "At least, the crappy 16gb Nvidia card would work"

    Who cares, if you cannot do the AI/deep learning research you want to perform?
    All the 16GB research was done 4-5 years ago with the Radeon VII...

    A 16GB 4070 is a gaming card. Accept this fact.
    I'm under the impression that Blender can't implement the ray-tracing acceleration because the library/runtime there is closed, at the moment?
    Perhaps Michael can clear that up on the site sometime, when Blender and HIP-RT are better implemented on Linux?



    I never implied that 16GB was optimal for Blender or AI - it's just that it's 'ok' for a consumer card - and with Nvidia, at least, things will work until a project requires more VRAM - unlike AMD, where Blender is limited in performance because HIP lacks the ray-tracing acceleration part, which would speed up performance. That part has been missing for quite a while - and there are other issues, or accusations of stutters/crashes/problems - of not being able to start renders, etc.

    Whereas my point with the Nvidia GPU with less VRAM is that it'll work more quickly with OptiX - despite the lack of VRAM.

    The AMD GPUs are gaming cards, even though the flagship RDNA 3 GPU might have 24GB of VRAM.

    Does DaVinci Resolve on Linux - if using an AMD card (e.g. RDNA 3) - require the closed driver? I think it requires a hybrid - meaning both - with the closed components on top... afaik.



  • qarium
    replied
    Originally posted by Panix View Post
    You're so full of crap as usual. 1) I can dislike Nvidia as a company and still say that their features, especially on Linux, are better and more expansive than AMD's - which offers FOSS but often requires amdgpu-pro, or features like HIP-RT that either don't work at all or are 'experimental' - and you experience bugs, crashes - in other words, it doesn't work - but 'they're working on it' forever.

    Just stop spreading lies. For many years now the AMDGPU-PRO driver by default just installs the very standard open-source stack: the AMDGPU kernel part, the RadeonSI OpenGL driver and the RADV Vulkan driver.
    This means you do not need a closed-source AMDGPU-PRO driver to do something like HIP-RT...

    And as for "either don't work at all or are 'experimental' - and you experience bugs, crashes - in other words, it doesn't work":
    for many years now I have not had crashes, bugs and so on, and of course I have not had the case that it does not work.
    HIP in Blender with my Vega 64 and W7900 has just worked in Fedora for over a year now.

    Originally posted by Panix View Post
    ROCm - we talked about it - only some cards are supported, and that's WIP too. Everything I read about AI/SD/ML says 'go with Nvidia' - that is not my bias or my recommendation - that's the overwhelming consensus from people in that field.

    People even report that ROCm/HIP works for their 8700G with Radeon 780M; everything better than a Vega 64 just works now.
    You can say that in the case of the Vega 64 it is maybe 5-6 years too late, the 5700 XT/RDNA1 3-4 years too late, the 6900 XT/RDNA2 2-3 years too late and RDNA3 1 year too late, of course, but in 2024 it just works.
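
    A quick way to sanity-check that a ROCm build of PyTorch actually sees such a card (assuming the ROCm wheels are installed; on ROCm, PyTorch exposes HIP through the torch.cuda API):

    ```python
    import torch

    print(torch.cuda.is_available())          # True on a working ROCm/HIP setup as well
    print(torch.version.hip)                  # ROCm/HIP version string, None on CUDA builds
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # the reported AMD GPU name
    ```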

    Originally posted by Panix View Post
    The other BS you are spewing now - that 16GB isn't enough - yes, but that's only half right, and just another example of you twisting the truth in favour of your AMD shilling. It depends on the project - yes, some will require more than 16GB - but hey, it's better than a 24GB 7900 XTX that is problematic in that app/field - with crashing software or features that aren't working.
    First you admit that 16GB isn't enough for AI/deep learning, and then you claim an Nvidia 16GB VRAM card is better than a 24GB VRAM 7900 XTX.
    No dude, these 16GB VRAM cards are not for AI/deep learning professionals; they are gaming cards. The PlayStation 5 has 16GB of memory, which means these PlayStation 5 games run well.

    That's the reason why these 16GB VRAM cards like the Nvidia 4070 Ti and AMD 7900 GRE are cheap: AMD/Nvidia know they are not fit for AI/deep learning.

    Originally posted by Panix View Post
    "for you everything is redundant what is in favor of AMD..." - yes, I should just concede on a 'maybe' - or a future theory - and shell out $1000 on something that is a 'promise to be.' That's how everyone should buy their hardware nowadays. LOL! Are you really serious? Stop making jokes, man.
    Sure, people who are really serious and can afford 256GB of RAM and M1 computers, sure - but I am talking about intro AI hardware
    You cannot afford it? An AMD Ryzen 8700G system with 192GB of RAM is really cheap:
    AMD Ryzen 7 8700G = 338€
    GIGABYTE A620M DS3H = 106€
    Corsair Vengeance black DIMM kit 192GB, DDR5-5200 = 700€
    Prices from: geizhals.de
    Many poor man's AI/deep learning researchers buy exactly this, and it is the cheapest way to get into the field.

    Keep in mind that even if you buy these poor man's researchers an Nvidia 4090, they will not be happy with only 24GB of VRAM.

    Keep in mind that modern Nvidia products like the GH200 are also APUs/SoCs.

    Originally posted by Panix View Post
    - and they typically get Nvidia, whether it's a 16GB or 24GB GPU, since AMD's only 24GB offering is the 7900 XTX - which is pretty unreliable so far, though it might be 'getting there' one day. I can sometimes find them (used) for about $1k (many loonies), but the extra power and the 'uncertainty' in so many software areas - and the likelihood of it 'not working properly' in various software - make it a difficult sell.
    At least, the crappy 16GB Nvidia card would work - and things would work (out of the box) in most cases. That doesn't mean I like Nvidia as a company - people are sometimes forced to go with the solution that works, you know?
    "whether it's a 16gb or 24gb gpu"

    They aim for the Nvidia GH200 with 96GB of VRAM.

    "At least, the crappy 16gb Nvidia card would work"

    Who cares, if you cannot do the AI/deep learning research you want to perform?
    All the 16GB research was done 4-5 years ago with the Radeon VII...

    A 16GB 4070 is a gaming card. Accept this fact.



  • tenchrio
    replied

    Originally posted by sophisticles View Post
    I guess once again I need to be the voice of reason.

    Let's assume that Windows sucks and Linux is great; then what is the excuse for this improvement not being included in the Mac version of Blender?

    The reason this change only seems to apply to the Linux version is that that's the way the developers want it. Blender is GPL, Linux is GPL, and Windows and Mac are not.
    "Voice of reason", yet you list other 3D rendering software without bothering to check whether there is a performance difference between Windows and Linux for those.
    Spoiler alert: there is. V-Ray on Linux also outperforms V-Ray on Windows (example 1, example 2), and OctaneBench GPU compute tells the same story. I can't find comparisons for the Arnold render engine off the bat, but considering how clearly your comment is a bad-faith argument, I don't see why I should bother with the trouble.

    Considering Blender is also open source, why isn't there a magical Windows fork that runs better by making it not-GPL, as you imply? Hell, make it yourself; prove the Blender Foundation is intentionally making the software worse on Windows because of the GPL (or whatever delusion you got yourself into).
    macOS also performed close to Linux, sometimes even better, in Blender back when Macs were on Intel CPUs; must have been a slip-up in the GPL cabal.

    Both Cinema4D and Maya also run on Linux, and Linux's growing presence in Hollywood is undeniable. Additionally, that has to be the laziest search for Blender-made movies ever.
    Blender was used together with Maya for Spider-Verse, specifically for the Grease Pencil feature.
    The Japanese animation studio Khara has moved to Blender (but who ever heard of Evangelion).
    But I guess neither of those projects generated a dime? It is also rare these days in Hollywood that only one piece of software is used in a production pipeline; the 3D animation team might use Maya or Cinema4D, while the VFX team at the same company might be using Houdini (which also runs better on Linux, BTW; must all be into that GPL conspiracy of yours).

    You also realize Blender doesn't have a sales department like the other 3D software companies do? So of course paid software is more prevalent in Hollywood blockbusters; big Hollywood executives sign multi-million-dollar, multi-year contracts with those vendors' salespeople. The Blender Foundation is a non-profit, and yet it has somehow made its way into Hollywood; if anything, that is a testament that it is pretty solid software.

    And yes, Alike took 5 years to create; that tends to be the norm for passion projects compared to for-profit ones. But maybe you should take 10 before you ever hit post again - quality over quantity and all that.



  • Panix
    replied
    Originally posted by qarium View Post

    "I am not a fan of Nvidia"

    Give me a break from this nonsense, really.

    "RDNA 4 is redundant in this discussion as a I told you before"

    for you everything is redundant what is in favor of AMD...

    "Nvidia uses OptiX for Blender"

    Nvidia only uses OptiX for Blender for people who don't care about the correctness of the output.
    For all other people who want correct output, they of course expect the customers to use CUDA.

    "and its performance is way better than AMD - if you had a 4070 Ti which is cheaper than any 7900 XT or XTX right now - it'll be much better performance for the $$."

    A 4070 Ti only has 16GB of VRAM, and experts in the deep learning field reported here on Phoronix that even many years ago they ran out of memory with 16GB VRAM cards for their projects. This is not something new; people who had the Radeon VII with 16GB of VRAM reported this 2-3 years ago.
    So it does not matter if the 4070 Ti has better performance; you are limited in what models you can use with 16GB of VRAM.

    People even buy an AMD Ryzen 8700G with the Radeon 780M iGPU to do AI work, just to make sure they have a unified memory model similar to Apple's M1/M2/M3 SoCs and can put in 192 or even 256GB of RAM so they never run out of memory.

    This is the reason why no real expert would buy a 4070 Ti for AI/deep learning.

    The stuff you can do with 16GB of VRAM has already been outdated for many years now. The Radeon VII with 16GB of VRAM came out in 2019; it is 2024 now.

    "You don't have any taxes in Germany?"

    We have 19% tax on graphics cards.

    By the way, the 24GB cards are also not a good fit for AI/deep learning; the Nvidia cards and also the AMD cards for this market all have 48GB of VRAM or more.
    You're so full of crap as usual. 1) I can dislike Nvidia as a company and still say that their features, especially on Linux, are better and more expansive than AMD's - which offers FOSS but often requires amdgpu-pro, or features like HIP-RT that either don't work at all or are 'experimental' - and you experience bugs, crashes - in other words, it doesn't work - but 'they're working on it' forever.

    ROCm - we talked about it - only some cards are supported, and that's WIP too. Everything I read about AI/SD/ML says 'go with Nvidia' - that is not my bias or my recommendation - that's the overwhelming consensus from people in that field.

    The other BS you are spewing now - that 16GB isn't enough - yes, but that's only half right, and just another example of you twisting the truth in favour of your AMD shilling. It depends on the project - yes, some will require more than 16GB - but hey, it's better than a 24GB 7900 XTX that is problematic in that app/field - with crashing software or features that aren't working.

    "for you everything is redundant what is in favor of AMD..." - yes, I should just concede on a 'maybe' - or a future theory - and shell out $1000 on something that is a 'promise to be.' That's how everyone should buy their hardware nowadays. LOL! Are you really serious? Stop making jokes, man.

    Sure, people who are really serious and can afford 256GB of RAM and M1 computers, sure - but I am talking about intro AI hardware - and they typically get Nvidia, whether it's a 16GB or 24GB GPU, since AMD's only 24GB offering is the 7900 XTX - which is pretty unreliable so far, though it might be 'getting there' one day. I can sometimes find them (used) for about $1k (many loonies), but the extra power and the 'uncertainty' in so many software areas - and the likelihood of it 'not working properly' in various software - make it a difficult sell.

    At least, the crappy 16GB Nvidia card would work - and things would work (out of the box) in most cases. That doesn't mean I like Nvidia as a company - people are sometimes forced to go with the solution that works, you know?

