Asahi Linux Update Brings Experimental Apple M2 Support


  • qarium
    replied
    Originally posted by Dukenukemx View Post
Nvidia got a middle finger from Linus Torvalds over their approach to open source while using ARM. This isn't ARM's fault, but many companies are trying to use ARM to make a closed ecosystem where you can do anything you want, so long as it's within their rules.
Right, it is not ARM's fault... and I am fine with it as long as I am not forced to buy the product.

All systems I use and buy are completely open, with a custom ROM on my smartphone and Fedora on my PC.

    Originally posted by Dukenukemx View Post
    Most small devices run on open source code but most of them won't let you touch it with a 10 foot pole. Just cause they use open source code doesn't mean they're open, or their platform is open.
Right, but who cares? If they do not force you to buy it, you can just ignore it.

I ignore it too. I do not care about these fools and idiots, to be honest. If people buy closed devices, it is their own fault.

    Originally posted by Dukenukemx View Post
I can install anything I want on an x86 machine, including Android and Mac OSX. For the most part it works. For most Android phones I need someone in Russia living in their parents' basement to port LineageOS over to a specific device, assuming there's a way to flash a new ROM onto that device. Usually with some problems because again it's mostly done by a single guy. There are thousands of ARM based Android devices that each need to be catered to just to get LineageOS onto them, and some of them just never get that treatment. What good is using open source code if the platform is inherently locked down?
Right, but why not just buy a smartphone with LineageOS or /e/OS pre-installed? Both are descendants of CyanogenMod...
for example from this company: https://murena.com/products/smartphones/

Be sure this fixes all your problems: you get distance from evil Google and you also distance yourself from greedy smartphone manufacturers... My smartphone is made in Germany by Gigaset, but I have /e/OS on it and I get security updates years after Gigaset stopped making security patches.

This is the point about all these closed devices: you are not forced to buy any of them. You can buy your smartphone with CyanogenMod/LineageOS//e/OS on it and ignore the closed market altogether.

    Originally posted by Dukenukemx View Post
    Basically x86 is living off the back of IBM compatibles from decades ago, and ARM just doesn't have that legacy to benefit from. Even though ARM has been around as far back as the 3DO, it just never created a central method of loading an OS. And yea most devices that use x86 have closed source OS's but for many years one of them was Apple's Mac OSX.
Right, that's all true, but again: are you forced to buy these poorly supported, closed ARM devices?
I truly don't care about people who buy closed and incompatible crap.
I plainly and simply do not buy it.

    "it just never created a central method of loading an OS"

Isn't it a joke that Apple did exactly this? If you buy an Apple M2, you get exactly that...

    Originally posted by Dukenukemx View Post
I paid $100 for my Moto X4 and it runs the latest LineageOS. How much did you pay for a truly open device? It's only truly open if you pay extra, am I right? Having to buy a specific device to benefit from ARM's open nature means I have to go out of my way. I haven't found an x86 device that won't let me install Linux. Maybe ChromeOS devices, but I hate those, and they run mostly open source too.
When I bought it two years ago, the Gigaset GS290 with Gigaset's own ROM was 130€.
The same smartphone with /e/OS from murena.com was 230€.
Now you say three things:
1. Your $100 Moto X4 is cheaper. Right.
2. The Gigaset version is cheaper. Right.
3. You pay extra for the /e/OS one from murena.com. Right.
But there are benefits too: you do not have to do it yourself, and you get much longer updates, because you pay for the service, and so on and so on...

    Originally posted by Dukenukemx View Post
    I'm not doing your homework to prove you're right.
Who cares? It does not mean you are right...
It is what I read in hundreds of specific articles on this topic and in thousands of forum posts about it.
It is just not worth it to prove anything to you.
Be sure it is as I told you, and no one cares about your opinion, even if that does not sound very friendly to you.

    Originally posted by Dukenukemx View Post
Also, Apple M chips have some extra transistors to make it possible to emulate x86 fast. Again I don't fault Apple for this since you can't just dump x86 and go ARM straight away. As for transistor count, I don't know which part is ARM and which part is not. All I can do is compare a mobile x86 AMD chip with one from Apple and look at performance and realize that Apple is using twice as many transistors. Where those transistors are going is unknown.
No, it's not unknown; there are die shots and area annotations from Apple,
and the ARM general-purpose CPU part is small compared to the rest.
    https://semianalysis.substack.com/p/...d-architecture


    Originally posted by Dukenukemx View Post
I'm not sure if you mean tile based rendering is using x264, which wouldn't make sense
The only connection between x264 and tile-based rendering is the patent problem,
and I was writing about the patent problem.

Originally posted by Dukenukemx View Post
or it's inferior because AMD is using an inferior decoder and encoder and that somehow translates over to rendering? Just in case, tile based rendering breaks up the image into tiles and determines if a texture that's hidden needs to be rendered. Turns out GPU's will render every texture in view and not in view and waste memory storage and bandwidth doing so. There were other methods to avoid rendering unneeded textures but it wasn't until Maxwell for Nvidia and Polaris for AMD. Even then, AMD's Polaris implementation of tile rendering wasn't as good as Nvidia's.

No, there is no encoder and decoder in rendering, lol... I am only talking about the patent problem.

    as you say: "it wasn't until Maxwell for Nvidia and Polaris for AMD. Even then, AMD's then Polaris implementation of tile rendering wasn't as good as Nvidia's"

The AMD version was only not as good as Nvidia's because AMD used a patent-free implementation, while Nvidia simply licensed the original patent.

Originally posted by Dukenukemx View Post
That was the situation, because now AMD's RDNA2 is far better than Nvidia's. The power usage on AMD's RDNA2 based GPU's is far better than Nvidia's Ampere. None of AMD's GPU's make use of GDDR6X, and they still do great in performance. Assuming that the Apple M2 Max uses half of its 57 billion transistors for the GPU, the AMD RX 6900 uses only 26.8 billion. Intel's 12900K, which is faster than the M1 Ultra, has an estimated transistor count of around 8.2 billion.

Sure, it is not impossible for AMD to avoid the patent fee and at the same time make a better product and develop a better technology... sure, it's all within the realm of possibility.
But AMD still does not have the relevant tile-based rendering patent.

You write about transistors but not about clock speed,
but the number of switching events in the chip is the transistor count multiplied by the clock speed.
That means 26.8 billion transistors × 2.7 GHz for the RX 6900...
The Apple M2 runs at only 3.2 GHz, compared to the 12900K at 5.5 GHz...

But if you calculate 8.2 × 5.5 GHz = 45.1,
and then 45.1 / 3.2 ≈ 14,
this means that at 3.2 GHz Intel would need about 14 billion transistors.
Apple's is 20 × 3.2 = 64.

I think the difference between these 14 billion transistors @ 3.2 GHz and Apple's 20 billion transistors is the ASIC part: transistors that are not used in most benchmarks, like the x264 encoder and decoder and other similar ASIC tech.
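For readers who want to check the arithmetic, here is a minimal sketch of the back-of-envelope normalization used above. The transistor counts and clocks are the rough figures quoted in this thread, not measured data, and the "transistors × clock" metric is this thread's own simplification:

```typescript
// Back-of-envelope "transistor count × clock" normalization from the post above.
// All numbers are the rough figures quoted in this thread.
type Chip = { name: string; transistorsBillion: number; clockGHz: number };

const chips: Chip[] = [
  { name: "Intel 12900K", transistorsBillion: 8.2, clockGHz: 5.5 },
  { name: "Apple M2", transistorsBillion: 20, clockGHz: 3.2 },
  { name: "AMD RX 6900", transistorsBillion: 26.8, clockGHz: 2.7 },
];

// Normalize every chip to a common 3.2 GHz reference clock:
// equivalent transistors = transistors * (clock / 3.2 GHz)
const referenceClock = 3.2;
for (const c of chips) {
  const equivalent = c.transistorsBillion * (c.clockGHz / referenceClock);
  console.log(`${c.name}: ~${equivalent.toFixed(1)} B "transistor-equivalents" @ ${referenceClock} GHz`);
}
// Intel 12900K: 8.2 * 5.5 / 3.2 ≈ 14.1 billion at 3.2 GHz,
// versus the M2's 20 billion; the gap is attributed above to ASIC blocks.
```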

    Originally posted by Dukenukemx View Post
I know similar videos: companies like Intel or Nvidia cheat people into buying much more powerful hardware because inefficiencies in the software burn all the performance to ashes.

    Originally posted by Dukenukemx View Post
    As a Linux user you think I want DXVK or VKD3D over a native implementation of DX11 and DX12? I want a Gallium Eleven and Twelve because it would yield better performance. Don't get me wrong, DXVK and VKD3D are pretty fast, but it would be faster if Mesa has a native working DX11 and DX12 implementation, like it was done with Gallium Nine. Even Valve knew that getting that to work hassle free is just not gonna happen. Would be nice if every developer made games in Vulkan, but that isn't gonna help without a strong bridge for them to come over to Linux.
I think the end consumer does not care whether it is Metal or DX12 or Vulkan or DXVK or WebGPU.

Really, what is your problem if Microsoft agrees on WebGPU, Apple agrees on WebGPU and Google agrees on WebGPU? Then in the end you use WebGPU directly on Linux, without the browser.

What you want, everything "native" on Linux/Vulkan, "is just not gonna happen", to use your own words.


    Originally posted by Dukenukemx View Post
    Now Apple could make a Vulkan driver for Mac OSX but doesn't because they want to lock in developers. We are talking about the same company that doesn't let Emulators or Web Browsers onto iOS, due to it being able to compete against the app store. And yes, FireFox for iOS is not the same FireFox for Android and everything else as because by Apple's policies, all web browsers must use the built-in WebKit rendering framework and WebKit JavaScript on iOS. Firefox can't even use their own Gecko layout engine.
    Also please don't say WebKit is better anyway. It's like I can read your mind.
Apple think they are smart, but it is clear that they are not.
But I am 100% sure that WebGPU becomes the one and only standard and that even Apple will drop Metal in favour of WebGPU.
I don't care whether Apple does native Vulkan on macOS; what I care about is whether they go WebGPU...

    Originally posted by Dukenukemx View Post
    Apple is one of many who supported WebGPU. By this logic, Microsoft also supports WebGPU so I'm sure Microsoft is also open to open standards. If Apple went on their own to force developers to use Metal for webpages then the industry would laugh at them and never support it. Even Apple realizes that they have limitations in how far their influence can reach. Also again, why did Apple release Metal when they have WebGPU? Because WebGPU doesn't replace Metal, Vulkan, and DX12.
No, that's all wrong; you cheat on the timeline... You claim that Metal and WebGPU came into this world at the same time and that Apple chose Metal to hurt people... That's complete bullshit, really.

At the time of the Metal release there was no WebGPU.

Apple Metal is from 2014.

WebGPU is from 2018:
"On June 1, 2018, citing "resolution on most-high level issues" in the cross-browser standardization effort, Google's Chrome team announced intent to implement the future WebGPU standard.[2]"
In other words, the WebGPU standard is not even released yet...

So yes, there is a possibility that WebGPU will replace Metal on macOS...



  • Dukenukemx
    replied
    Originally posted by qarium View Post
I was talking about ARM in general, and Apple is the only one who does not follow this open-source ecosystem rule.
Nvidia got a middle finger from Linus Torvalds over their approach to open source while using ARM. This isn't ARM's fault, but many companies are trying to use ARM to make a closed ecosystem where you can do anything you want, so long as it's within their rules.
Most ARM devices run in an open-source ecosystem... and for x86 it is the opposite: most of them run in a closed-source ecosystem.
Most small devices run on open source code but most of them won't let you touch it with a 10 foot pole. Just cause they use open source code doesn't mean they're open, or their platform is open. I can install anything I want on an x86 machine, including Android and Mac OSX. For the most part it works. For most Android phones I need someone in Russia living in their parents' basement to port LineageOS over to a specific device, assuming there's a way to flash a new ROM onto that device. Usually with some problems because again it's mostly done by a single guy. There are thousands of ARM based Android devices that each need to be catered to just to get LineageOS onto them, and some of them just never get that treatment. What good is using open source code if the platform is inherently locked down?

    Basically x86 is living off the back of IBM compatibles from decades ago, and ARM just doesn't have that legacy to benefit from. Even though ARM has been around as far back as the 3DO, it just never created a central method of loading an OS. And yea most devices that use x86 have closed source OS's but for many years one of them was Apple's Mac OSX.
You see, you just have the wrong mindset, and you know it is a fact that ARM is mostly used in an open-source ecosystem and x86 is mostly used in a closed-source ecosystem.

And again you claim you are forced to buy such closed devices, but the fact is you are not forced; you can use an open one.
I paid $100 for my Moto X4 and it runs the latest LineageOS. How much did you pay for a truly open device? It's only truly open if you pay extra, am I right? Having to buy a specific device to benefit from ARM's open nature means I have to go out of my way. I haven't found an x86 device that won't let me install Linux. Maybe ChromeOS devices, but I hate those, and they run mostly open source too.
You can google it yourself, but multiple insiders from AMD and Intel have also admitted it in this forum:
the legacy parts of x86 cost you 5% of the transistors.
And do not again mix up the transistor count of the SoC with the transistor count of the general-purpose CPU core.
I'm not doing your homework to prove you're right. Also, Apple M chips have some extra transistors to make it possible to emulate x86 fast. Again I don't fault Apple for this since you can't just dump x86 and go ARM straight away. As for transistor count, I don't know which part is ARM and which part is not. All I can do is compare a mobile x86 AMD chip with one from Apple and look at performance and realize that Apple is using twice as many transistors. Where those transistors are going is unknown.

That's the point of why I say Linux fits the Apple M1/M2: because of the large open-source ecosystem.

AMD did not have the original tile-based rendering patents, similar to the x264 encoding and decoding ASIC patents.
Companies like Apple and Nvidia just go and license the original patents, similar to the S3TC texture compression patents.
AMD is notorious for not paying for patent licenses; instead they develop inferior versions that are patent-free.

"Their tile based rendering though is far above and beyond what Apple has"

https://en.wikipedia.org/wiki/Tiled_...op_and_console
The patent runs out in 2026.
Yes, AMD has tile-based rendering, but I am sure that, as with x264, it is an inferior solution.
I'm not sure if you mean tile based rendering is using x264, which wouldn't make sense, or it's inferior because AMD is using an inferior decoder and encoder and that somehow translates over to rendering? Just in case, tile based rendering breaks up the image into tiles and determines if a texture that's hidden needs to be rendered. Turns out GPU's will render every texture in view and not in view and waste memory storage and bandwidth doing so. There were other methods to avoid rendering unneeded textures but it wasn't until Maxwell for Nvidia and Polaris for AMD. Even then, AMD's Polaris implementation of tile rendering wasn't as good as Nvidia's.

That was the situation, because now AMD's RDNA2 is far better than Nvidia's. The power usage on AMD's RDNA2 based GPU's is far better than Nvidia's Ampere. None of AMD's GPU's make use of GDDR6X, and they still do great in performance. Assuming that the Apple M2 Max uses half of its 57 billion transistors for the GPU, the AMD RX 6900 uses only 26.8 billion. Intel's 12900K, which is faster than the M1 Ultra, has an estimated transistor count of around 8.2 billion.
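As a side note for readers unfamiliar with the term, here is a rough, hypothetical sketch of the binning idea behind tile-based rendering; the tile size and types are invented for illustration, and real GPUs implement this in fixed-function hardware:

```typescript
// Toy illustration of tile-based binning: split the screen into tiles and
// record which triangles touch each tile, so each tile can later be shaded
// entirely in fast on-chip memory instead of streaming the whole framebuffer.
type Triangle = { minX: number; minY: number; maxX: number; maxY: number }; // screen-space bounding box

const SCREEN_W = 1920, SCREEN_H = 1080, TILE = 32; // hypothetical 32x32 pixel tiles
const tilesX = Math.ceil(SCREEN_W / TILE);
const tilesY = Math.ceil(SCREEN_H / TILE);

function binTriangles(tris: Triangle[]): Triangle[][] {
  const bins = Array.from({ length: tilesX * tilesY }, (): Triangle[] => []);
  for (const t of tris) {
    // Conservative binning: every tile overlapped by the bounding box gets the triangle.
    const x0 = Math.max(0, Math.floor(t.minX / TILE));
    const y0 = Math.max(0, Math.floor(t.minY / TILE));
    const x1 = Math.min(tilesX - 1, Math.floor(t.maxX / TILE));
    const y1 = Math.min(tilesY - 1, Math.floor(t.maxY / TILE));
    for (let ty = y0; ty <= y1; ty++)
      for (let tx = x0; tx <= x1; tx++)
        bins[ty * tilesX + tx].push(t);
  }
  return bins; // each tile is then rasterized and shaded on-chip, one tile at a time
}
```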


We agree to disagree here. Compile-time overhead does not result in run-time overhead...
    As a Linux user you think I want DXVK or VKD3D over a native implementation of DX11 and DX12? I want a Gallium Eleven and Twelve because it would yield better performance. Don't get me wrong, DXVK and VKD3D are pretty fast, but it would be faster if Mesa has a native working DX11 and DX12 implementation, like it was done with Gallium Nine. Even Valve knew that getting that to work hassle free is just not gonna happen. Would be nice if every developer made games in Vulkan, but that isn't gonna help without a strong bridge for them to come over to Linux. Now Apple could make a Vulkan driver for Mac OSX but doesn't because they want to lock in developers. We are talking about the same company that doesn't let Emulators or Web Browsers onto iOS, due to it being able to compete against the app store. And yes, FireFox for iOS is not the same FireFox for Android and everything else as because by Apple's policies, all web browsers must use the built-in WebKit rendering framework and WebKit JavaScript on iOS. Firefox can't even use their own Gecko layout engine.

    Also please don't say WebKit is better anyway. It's like I can read your mind.
WebGPU is a standard everyone agrees on... why not? It's the best consensus that everyone can agree to.

But you claim Apple does not do open standards; you claim they only do Metal as a walled garden.

You are plainly and simply wrong: Apple does in fact do open standards, and it's called WebGPU.
    Apple is one of many who supported WebGPU. By this logic, Microsoft also supports WebGPU so I'm sure Microsoft is also open to open standards. If Apple went on their own to force developers to use Metal for webpages then the industry would laugh at them and never support it. Even Apple realizes that they have limitations in how far their influence can reach. Also again, why did Apple release Metal when they have WebGPU? Because WebGPU doesn't replace Metal, Vulkan, and DX12.



  • qarium
    replied
    Originally posted by Dukenukemx View Post
    That makes no sense. Apple isn't running open source ecosystem. Apple won't even let you side load on their iOS devices,
I was talking about ARM in general, and Apple is the only one who does not follow this open-source ecosystem rule.

Most ARM devices run in an open-source ecosystem... and for x86 it is the opposite: most of them run in a closed-source ecosystem.

    Originally posted by Dukenukemx View Post
    unlike Android. Even calling Android a Linux OS is pushing it. You obvious don't have experience putting custom roms onto an Android phone like I do. It's a nightmare. HTC phones are probably the worst as you gotta sign onto their website to get a code to unlock the boot loader. It's worse if you wanna SOFF your device. It's still a problem even for the easy devices, because it's never as easy as just install it like you do with x86 devices.
I have a made-in-Germany Gigaset GS290 with the custom ROM /e/OS.
https://murena.com/products/smartphones/
It's very easy if you buy your phone with the custom ROM pre-installed.
If someone wants this, they should not buy an HTC and install a custom ROM; instead they should buy one with the custom ROM pre-installed.

    Originally posted by Dukenukemx View Post
    I don't care for the OEM/manufacturer. I care about the end user.
You see, you just have the wrong mindset, and you know it is a fact that ARM is mostly used in an open-source ecosystem and x86 is mostly used in a closed-source ecosystem.

And again you claim you are forced to buy such closed devices, but the fact is you are not forced; you can use an open one.

    Originally posted by Dukenukemx View Post
    Citation needed. What sources?
You can google it yourself, but multiple insiders from AMD and Intel have also admitted it in this forum:
the legacy parts of x86 cost you 5% of the transistors.
And do not again mix up the transistor count of the SoC with the transistor count of the general-purpose CPU core.

    Originally posted by Dukenukemx View Post
    Again, citation needed. Even if Intel did, what's the problem with that? I can run old software. That's a good thing. Apple M* owners can run older software very slowly.
You are free to google it for yourself.
The problem with that is that you only need it if you have large legacy closed-source apps, like a Steam game library on x86_64.
If you control the ecosystem like Apple does, you do not need it. That's why they use ARM without the legacy bits.
If you are on an open-source ecosystem like Linux devices, closed or open, you also do not need it.
Wintel, Microsoft plus Intel, are the only ones who need this legacy cruft, because of all the closed-source apps compiled for the outdated x86 ISA.
Apple does not need it and Linux also does not need it.

That's the point of why I say Linux fits the Apple M1/M2: because of the large open-source ecosystem.

    Originally posted by Dukenukemx View Post
I'm confused here because you talk about tile based rendering and then video codecs. These two are not the same. If you're talking about AMD's video decoding and encoding ASIC then yes it's inferior. Their tile based rendering though is far above and beyond what Apple has. Apple's GPU tech is really dated and bad. They don't even have Ray-Tracing. Intel's new GPU will have Ray-Tracing.
AMD did not have the original tile-based rendering patents, similar to the x264 encoding and decoding ASIC patents.
Companies like Apple and Nvidia just go and license the original patents, similar to the S3TC texture compression patents.
AMD is notorious for not paying for patent licenses; instead they develop inferior versions that are patent-free.

    "Their title based rendering though is far above and beyond what Apple has"

    https://en.wikipedia.org/wiki/Tiled_...op_and_console
The patent runs out in 2026.
Yes, AMD has tile-based rendering, but I am sure that, as with x264, it is an inferior solution.

    Originally posted by Dukenukemx View Post
    I didn't say you can't use it outside of a browser, but that it's ideally meant for a browser.
    It's more overhead either way. MoltenVK works the same way, and you still lose performance. Not a lot, as you might lose anywhere from 1% to 3%, but that's enough reason to get Apple to make a Vulkan driver for Mac OSX.
    Vulkan can do that, and already has. That is not the purpose of WebGPU but if that were the case then WebGPU is just making the situation worse. There are a few new games coming to Mac OSX like Resident Evil 8. It's using Metal, and using Apple's Metal 3 API with MetalFX. Apple wouldn't be working on Metal if WebGPU was to replace it, and that's because it isn't meant to.
We agree to disagree here. Compile-time overhead does not result in run-time overhead...
Also, "it's ideally meant for a browser" does not mean that it cannot replace all the other standards…

WebGPU is a standard everyone agrees on... why not? It's the best consensus that everyone can agree to.

But you claim Apple does not do open standards; you claim they only do Metal as a walled garden.

You are plainly and simply wrong: Apple does in fact do open standards, and it's called WebGPU.



  • Dukenukemx
    replied
    Originally posted by qarium View Post
    "ARM is typically found on machines that lock themselves down unless you some trickery to get around it."

    i have no problem with that as long i am not forced to buy this crap. i count them as open-source ecosystem because they are mostly based on linux.
    That makes no sense. Apple isn't running open source ecosystem. Apple won't even let you side load on their iOS devices, unlike Android. Even calling Android a Linux OS is pushing it. You obvious don't have experience putting custom roms onto an Android phone like I do. It's a nightmare. HTC phones are probably the worst as you gotta sign onto their website to get a code to unlock the boot loader. It's worse if you wanna SOFF your device. It's still a problem even for the easy devices, because it's never as easy as just install it like you do with x86 devices.
    "Nothing open there"

It is not open for you as an end user, but it is an open ecosystem from the viewpoint of an OEM/manufacturer, because it runs Linux and the open stack.
    I don't care for the OEM/manufacturer. I care about the end user.
You just don't read what I write: you talk about the transistor count of the SoC, I only talk about the general-purpose CPU part.
According to multiple sources, ARM saves 5% on that part.
    Citation needed. What sources?
According to multiple sources, Intel keeps around 5% of legacy-cruft transistors.
    Again, citation needed. Even if Intel did, what's the problem with that? I can run old software. That's a good thing. Apple M* owners can run older software very slowly.
All I said is this: Apple has a patent license for this technology and AMD did not have it; AMD developed their own patent-free version that does not give the same result.
It is a similar situation with the h264/h265 patents: Intel and Nvidia have the patent license and AMD developed their own inferior ASIC to avoid the patent fee, with the result that AMD's encoder produces ugly output.

All I said is: AMD does not have the Imagination Technologies PowerVR tile-based rasterization patent license.
I'm confused here because you talk about tile based rendering and then video codecs. These two are not the same. If you're talking about AMD's video decoding and encoding ASIC then yes it's inferior. Their tile based rendering though is far above and beyond what Apple has. Apple's GPU tech is really dated and bad. They don't even have Ray-Tracing. Intel's new GPU will have Ray-Tracing.
Your claim that you cannot use WebAssembly/WebGPU outside of the browser is a complete lie.
    " WASI: how to run WebAssembly code outside of your browser"
    https://tomassetti.me/wasi-how-to-ru...-your-browser/
    I didn't say you can't use it outside of a browser, but that it's ideally meant for a browser.
    "It's a higher level API"

Wrong, it is a low-level API with a high-level representation language (like Metal's).
The high-level language is translated to SPIR-V bytecode, just like with Vulkan.
"WebGPU causes overhead"
Wrong, it does not produce overhead at run time; it is like Rust:
it is translated into SPIR-V bytecode at compile time, and the end result is the same as Vulkan.
    It's more overhead either way. MoltenVK works the same way, and you still lose performance. Not a lot, as you might lose anywhere from 1% to 3%, but that's enough reason to get Apple to make a Vulkan driver for Mac OSX.
    they all agree that over the long run WebGPU can replace all other APIs...
    Vulkan can do that, and already has. That is not the purpose of WebGPU but if that were the case then WebGPU is just making the situation worse. There are a few new games coming to Mac OSX like Resident Evil 8. It's using Metal, and using Apple's Metal 3 API with MetalFX. Apple wouldn't be working on Metal if WebGPU was to replace it, and that's because it isn't meant to.




  • qarium
    replied
    Originally posted by Dukenukemx View Post
That makes no sense. ARM is typically found on machines that lock themselves down unless you use some trickery to get around it. Even then, no one OS can be used for every ARM device. This isn't ARM's fault but due to it having no central unifying force, no two ARM devices behave the same. The only exception is the RPI and even they don't have a UEFI boot loader like you see on x86 PCs. ARM is still licensed either way, but unlike x86 you can buy the license. Nothing open there.
These devices are not "open" in the sense of GPLv3, right.

But they count as an open-source ecosystem because they run Linux instead of closed-source Windows.
That means they are Linux in the sense of GPLv2.

    "ARM is typically found on machines that lock themselves down unless you some trickery to get around it."

    i have no problem with that as long i am not forced to buy this crap. i count them as open-source ecosystem because they are mostly based on linux.

    "Nothing open there"

It is not open for you as an end user, but it is an open ecosystem from the viewpoint of an OEM/manufacturer, because it runs Linux and the open stack.

    Originally posted by Dukenukemx View Post
    You do know that Apple's M series chip dwarf x86 chips in transistor count. Not sure where you get 5% from but AMD and Intel seem to be more efficient in transistor count. Much more so in GPU transistors as well, including from Nvidia.
You just don't read what I write: you talk about the transistor count of the SoC, I only talk about the general-purpose CPU part.
According to multiple sources, ARM saves 5% on that part.

    Originally posted by Dukenukemx View Post
    As far as I'm aware modern x86 chips simulate older instructions through some software built into the CPU. They don't keep the transistors around for them.
There is evidence that Intel emulates MMX in software now, because MMX was faster in the past than it is today, but no one blames Intel for this because no one uses MMX anymore.
And what about x87 floating point and SSE1 and SSE2?
According to multiple sources, Intel keeps around 5% of legacy-cruft transistors.

    Originally posted by Dukenukemx View Post
You do know everyone uses tile based rendering right? This was the main reason why Nvidia's Maxwell was so efficient. AMD introduced it I believe by Polaris.
    https://www.anandtech.com/show/10536...ation-analysis
All I said is this: Apple has a patent license for this technology and AMD did not have it; AMD developed their own patent-free version that does not give the same result.
It is a similar situation with the h264/h265 patents: Intel and Nvidia have the patent license and AMD developed their own inferior ASIC to avoid the patent fee, with the result that AMD's encoder produces ugly output.

All I said is: AMD does not have the Imagination Technologies PowerVR tile-based rasterization patent license.

    Originally posted by Dukenukemx View Post
    WebGPU is a web browser based API that you wouldn't want to use for gaming. It's a higher level API that makes it easier to work on multiple API's. The only reason we have multiple API's is because of Microsoft and Apple, but Windows does support Vulkan so this is mostly a problem from Apple. WebGPU causes overhead which is the reason why low level API's like Vulkan and Metal were created.
Your claim that you cannot use WebAssembly/WebGPU outside of the browser is a complete lie.
    " WASI: how to run WebAssembly code outside of your browser"
    https://tomassetti.me/wasi-how-to-ru...-your-browser/

    "It's a higher level API"

Wrong, it is a low-level API with a high-level representation language (like Metal's).
The high-level language is translated to SPIR-V bytecode, just like with Vulkan.
"WebGPU causes overhead"
Wrong, it does not produce overhead at run time; it is like Rust:
it is translated into SPIR-V bytecode at compile time, and the end result is the same as Vulkan.

    Originally posted by Dukenukemx View Post
    API's like Vulkan are too low level for web browsers for security and validation reasons. WebGPU fixes that but isn't meant to replace Vulkan or Metal. If Apple alone made a standard that only they used then nobody would use it. It's literally made for a web browser hence why Apple, Mozilla, Microsoft, Google, support it.
It is similar to Rust: all the high-level parts of Rust are handled at compile time, with zero overhead at run time.
WebGPU is translated into SPIR-V bytecode at compile time, and the result is then the same as Vulkan, with zero overhead at run time.

It's not "literally made for a web browser"; they just use the web browser as a norms and standardisation vehicle.
Similar to WebAssembly and WASI, you can run it outside the browser.

    "Apple, Mozilla, Microsoft, Google,"

    they all agree that over the long run WebGPU can replace all other APIs...


    Originally posted by Dukenukemx View Post
    Stop it. WebGPU is meant for the web and not for graphically intense tasks. It isn't replacing Vulkan.
    OpenGL and Vulkan are pretty open standards. Apple is fixing a problem that doesn't exist. It's only a problem on Apple's platforms.
    It causes more draw calls so yes it does reduce performance. Why are all the Apple apologists like this?
Similar to Rust, all the overhead is at compile time, with zero overhead at run time.
It does not cause more draw calls if you use it outside the browser.



  • Dukenukemx
    replied
    Originally posted by qarium View Post
In theory ARM is legacy too and also closed source too... but if you do market-share research, the ARM ecosystem tends to be more open source than x86.
That makes no sense. ARM is typically found on machines that lock themselves down unless you use some trickery to get around it. Even then, no one OS can be used for every ARM device. This isn't ARM's fault but due to it having no central unifying force, no two ARM devices behave the same. The only exception is the RPI and even they don't have a UEFI boot loader like you see on x86 PCs. ARM is still licensed either way, but unlike x86 you can buy the license. Nothing open there.
In my opinion the benefit of ARM compared to x86 is that ARM general-purpose CPU cores (I do not talk about the ASIC parts here, only about the general-purpose CPU parts) save 5% of the transistors.
Now you think: 5% of transistors that consume power?? Then you say there is proof that it does not consume power... Right, I am talking about dead transistors that do nothing but are there for legacy-compatibility reasons.
    You do know that Apple's M series chip dwarf x86 chips in transistor count. Not sure where you get 5% from but AMD and Intel seem to be more efficient in transistor count. Much more so in GPU transistors as well, including from Nvidia.
Intel or AMD could do the same: just cut out the MMX and x87 floating-point units and SSE1 and even SSE2 and make it an SSE3/AVX-only CPU... It looks like ARM is just faster at dropping legacy to spare transistors.
    As far as I'm aware modern x86 chips simulate older instructions through some software built into the CPU. They don't keep the transistors around for them.
Right... but Apple did not license AMD GPU tech like Intel did; instead they licensed Imagination Technologies' PowerVR tile-based rasterization, because this is the one that uses the least energy compared to all the other variants.
You do know everyone uses tile based rendering right? This was the main reason why Nvidia's Maxwell was so efficient. AMD introduced it I believe by Polaris.
    https://www.anandtech.com/show/10536...ation-analysis
Yes, right, a wrapper, but it is a simple one compared to (OpenGL vs DirectX 11).
And your second thought is wrong: you claim you cannot use Metal code for an open standard.
That's plainly and simply wrong, because the Metal code is the same as in WebGPU.
    WebGPU is a web browser based API that you wouldn't want to use for gaming. It's a higher level API that makes it easier to work on multiple API's. The only reason we have multiple API's is because of Microsoft and Apple, but Windows does support Vulkan so this is mostly a problem from Apple. WebGPU causes overhead which is the reason why low level API's like Vulkan and Metal were created.
Yes, right, this sounds right and sane, but it is not...
All your babbling is about Metal and Metal only, without the knowledge that Apple works on WebGPU too.
How does WebGPU fit into your theory? Even Google and Microsoft support it.
    API's like Vulkan are too low level for web browsers for security and validation reasons. WebGPU fixes that but isn't meant to replace Vulkan or Metal. If Apple alone made a standard that only they used then nobody would use it. It's literally made for a web browser hence why Apple, Mozilla, Microsoft, Google, support it.
All your toxic anti-Apple babbling about how evil Metal is dissolves into thin air if you see WebGPU as the future standard for graphics APIs and GPU compute.
    Stop it. WebGPU is meant for the web and not for graphically intense tasks. It isn't replacing Vulkan.
This means Apple does not force a walled garden ("force developers to make stuff exclusive for them");
instead they push developers to develop for the best future we can get, with WebGPU.
And they need to push it to defeat CUDA and OpenCL and OpenGL and DirectX and so on and so on.

WebGPU will be the one standard to defeat all of them.
    OpenGL and Vulkan are pretty open standards. Apple is fixing a problem that doesn't exist. It's only a problem on Apple's platforms.
I do not think you lose performance, because in WebGPU you translate the Metal-like high-level language to Vulkan's SPIR-V... and from that point on it is exactly the same as Vulkan... because Vulkan does the same.
    It causes more draw calls so yes it does reduce performance. Why are all the Apple apologists like this?



  • qarium
    replied
    Originally posted by Dukenukemx View Post
    That's for streaming. I'm the kind of person who uses what's best for quality and file size. I still convert videos to Xvid due to my cars flip down display only support that format. If Google decides that you must use VP10 or whatever then you will use it.
That you convert videos for the old flip-down display in your car is exactly the same case as Google converting streaming video into multiple formats on the server side.
Google discovered that hard-drive space does not cost them much, but I/O bandwidth to the consumers hurts them a lot. That's why, if you upload a video, they convert it to h264 (for legacy hardware), VP9 (also legacy, but compared to h264 it still saves internet bandwidth) and AV1, and I think they even do other codecs too, like h265 and so on. Every customer gets the stream that is best for them. Google's absolute favourite is AV1, because it saves them a lot of hard-drive space: one AV1 file can serve multiple resolution streams at the same time, while h264 is mostly kept at low resolution because it is very expensive to store it in 0.5K, 1K, 2K and 4K versions. AV1 is much better: one file handles all resolutions for the streaming customer.

    Originally posted by Dukenukemx View Post
    Excuse me but isn't ARM also legacy closed source unless open by the developer? Isn't Mac OSX closed source?
In theory ARM is legacy too and also closed source too... but if you do market-share research, the ARM ecosystem tends to be more open source than x86.

    Originally posted by Dukenukemx View Post
    You aren't making sense as I've already told you that the M2 drains battery fast when playing demanding games. As fast or faster than some x86 machines.
You have a misconception about what the benefit of ARM compared to x86 is.
It's not battery drain while playing games... It maybe does not save power in games, but their GPU does tile-based rasterization, which really saves energy; that is not about the CPU, though.
There are many more metrics than power consumption alone...

In my opinion the benefit of ARM compared to x86 is that ARM general-purpose CPU cores (I do not talk about the ASIC parts here, only about the general-purpose CPU parts) save 5% of the transistors.
Now you think: 5% of transistors that consume power?? Then you say there is proof that it does not consume power... Right, I am talking about dead transistors that do nothing but are there for legacy-compatibility reasons.

Intel or AMD could do the same: just cut out the MMX and x87 floating-point units and SSE1 and even SSE2 and make it an SSE3/AVX-only CPU... It looks like ARM is just faster at dropping legacy to spare transistors.

    Originally posted by Dukenukemx View Post
    Max settings or a game that you barely get a playable frame rate would have the same results in battery life. The only laptops that have this luxury are ones with discrete GPU's, and ones that are fairly modern. Also WoW was getting 35fps regardless of what the person did on the M1, and 35fps is pushing it for playable.
Right... but Apple did not license AMD GPU tech like Intel did; instead they licensed Imagination Technologies' PowerVR tile-based rasterization, because this is the one that uses the least energy compared to all the other variants.

Just ask yourself: what could Apple do better than this? They use the lowest-nm node, they use the best IP tech they can license, and so on...

It is similar to S3TC texture VRAM compression: in the past this was the relevant patent for performant and power-saving graphics... Sure, the S3TC patent expired, but the Imagination Technologies PowerVR tile-based rasterization patent still exists, and AMD does not have this patent, which means they cannot use it.

The only technology Apple could add here is something like the FSR3 of the Radeon RX 7000 XT series, meaning a 4/8-bit machine-learning engine to perform a similar task to Nvidia's DLSS 2.3 with their 4/8-bit machine-learning hardware engine.

    Originally posted by Dukenukemx View Post
    You can't use Vulkan on Metal without a wrapper. Doesn't matter how related they are, because the code you use for Metal can't be use for anything else.
Yes, right, a wrapper, but it is a simple one compared to (OpenGL vs DirectX 11).
And your second thought is wrong: you claim you cannot use Metal code for an open standard.
That's plainly and simply wrong, because the Metal code is the same as in WebGPU.

WebGPU is a zombie-like creature: it has the high-level language of Metal, it also has Vulkan's SPIR-V bytecode (Standard Portable Intermediate Representation), and then it compiles into machine code via ACO or LLVM...

The official way to develop for WebGPU is: you use the high-level language from Metal, which is the same in WebGPU, and the first-pass compiler translates it to SPIR-V bytecode, which is then compiled into a binary.

The not-so-official way is: you use any language you want, compile it to SPIR-V, and there is an official SPIR-V-to-Metal reverse-engineering tool that produces easy-to-read high-level code.

    Originally posted by Dukenukemx View Post
    The purpose of Metal is to push developers to learn something that can only be used on their platforms and thus hoping to force developers to make stuff exclusive for them.
Yes, right, this sounds right and sane, but it is not...
All your babbling is about Metal and Metal only, without the knowledge that Apple works on WebGPU too.
How does WebGPU fit into your theory? Even Google and Microsoft support it.

Your theory also does not fit the fact that the high-level representation language is part of the WebGPU standard, which is then translated to SPIR-V bytecode and then compiled to a binary format.

If WebGPU becomes the de facto standard of the internet and is combined with WebAssembly, you can run this code everywhere... it is even a fact that with WebGPU you no longer need DirectX or Vulkan or Metal, and you also do not need OpenGL anymore.
The joke about WebGPU is this: you do not even need CUDA or OpenCL anymore,

because WebGPU can do compute too.
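For what it's worth, a minimal sketch of GPU compute through the WebGPU browser API looks roughly like this (a WGSL kernel doubling an array; error handling and result read-back are omitted, the types assume something like @webgpu/types, and the API surface may still shift, so treat it as illustrative only):

```typescript
// Minimal WebGPU compute dispatch: double every element of a small array.
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error("WebGPU not available");
const device = await adapter.requestDevice();

const shader = device.createShaderModule({
  code: /* wgsl */ `
    @group(0) @binding(0) var<storage, read_write> data: array<f32>;
    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
      if (gid.x < arrayLength(&data)) {
        data[gid.x] = data[gid.x] * 2.0;
      }
    }`,
});

const input = new Float32Array([1, 2, 3, 4]);
const buffer = device.createBuffer({
  size: input.byteLength,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(buffer, 0, input);

const pipeline = device.createComputePipeline({
  layout: "auto",
  compute: { module: shader, entryPoint: "main" },
});
const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer } }],
});

const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(Math.ceil(input.length / 64)); // one workgroup of 64 threads here
pass.end();
device.queue.submit([encoder.finish()]);
```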

All your toxic anti-Apple babbling about how evil Metal is dissolves into thin air if you see WebGPU as the future standard for graphics APIs and GPU compute.

This means Apple does not force a walled garden ("force developers to make stuff exclusive for them");
instead they push developers to develop for the best future we can get, with WebGPU.
And they need to push it to defeat CUDA and OpenCL and OpenGL and DirectX and so on and so on.

WebGPU will be the one standard to defeat all of them.

    Originally posted by Dukenukemx View Post
    They're willing to bet they have so much market share and influence that it doesn't matter if you want Vulkan. Apple even banned MoltenVK for using a non-public interface.
They did not ban MoltenVK in general; they banned the use of a non-public interface.
To my knowledge, MoltenVK was patched to not use this non-public interface.


    Originally posted by Dukenukemx View Post
    This is not a new tactic as it's been tried by Sony. In the end open standards win. Even Microsoft ported DX12 to Linux because the writing is on the wall. It's not open source which makes it useless but they know you need other platforms to support something for it to stay relevant.

WebGPU is a wrapper for other API's and therefore, like MoltenVK, will lose some performance.
I do not think you lose performance, because in WebGPU you translate the Metal-like high-level language to Vulkan's SPIR-V... and from that point on it is exactly the same as Vulkan... because Vulkan does the same.

    Originally posted by Dukenukemx View Post
    Everyone wants to shoot the messenger.
    You mean my opinion would be bought then yes. As it stands my opinion is my own.
    We would not be better off for it. Microsoft kinda did and we all know there's going to be a version of Windows using the Linux kernel. I'm not going to think of the what possibilities there is with Apple or Microsoft. Their past is more than enough for me to pass judgement. You cannot change what Apple has done with what Apple could do. They're evil and anti-consumer.
All these evil and anti-consumer people have an education problem: they don't know that they do not need to be evil and anti-consumer. They swallowed their own evilness propaganda over time to the point that they think it is true, but it is nothing more than an illusion.

The good players like IBM/Red Hat, Valve and AMD make more money than the bad, evil players...

Even this: my last Apple product was a Macintosh Performa 5200 from 1994... This alone is proof that Apple burns so many people that they never again buy any Apple product...

It would be much better for them to be on the good and right side of history, like Valve...

Do you know the Steve Jobs video about Internet Explorer from Microsoft?
Microsoft paid Apple money to integrate Internet Explorer into macOS,
and the reaction of the public and the audience was shocking...
https://youtu.be/WxOp5mBY9IY?t=143
Watch this part of the video...

If you are on the evil side they don't cheer you on, they boo you out...

Maybe Apple will change more than just the USB Type-C stuff in the near future.



  • Dukenukemx
    replied
    Originally posted by qarium View Post

To be honest you are not against it, but you sound like it...

The problem of new codecs emerging that do not run on old ASICs is not a problem at all, and I can explain why. In the old offline world this would have been a problem, because a DVD or Blu-ray holds only one movie in only one video format, and if your general-purpose hardware is too slow and your ASIC does not support the format, it does not play. In the time of Netflix and YouTube this is no longer true. Why? Simple: they detect which codecs your machine supports and then send you the right video in the right codec.
If your hardware only supports h264, you get the h264 data.
If your hardware supports h264 and VP9, YouTube chooses to send you VP9 to save bandwidth.
If your hardware also supports h265 and you choose 4K resolution, Netflix sends you the right video data in h265.
If your hardware also supports AV1, these companies send you the right video data...

This means you claim there is a problem on the client side because the ASIC does not support a newly emerging codec, but all of this is fixed server-side at Netflix and YouTube, and the clients always get the video data they need.
    That's for streaming. I'm the kind of person who uses what's best for quality and file size. I still convert videos to Xvid due to my cars flip down display only support that format. If Google decides that you must use VP10 or whatever then you will use it.
If the games were not legacy closed-source apps, I am sure Valve would be better off using M2 hardware from Apple... but the problem is that these are all legacy x86 closed-source apps, so ARM is not an option.
    Excuse me but isn't ARM also legacy closed source unless open by the developer? Isn't Mac OSX closed source? You aren't making sense as I've already told you that the M2 drains battery fast when playing demanding games. As fast or faster than some x86 machines.
Well, gaming on a mobile device at "max settings" like on a desktop... who in the real world would ever do this?
    Max settings or a game that you barely get a playable frame rate would have the same results in battery life. The only laptops that have this luxury are ones with discrete GPU's, and ones that are fairly modern. Also WoW was getting 35fps regardless of what the person did on the M1, and 35fps is pushing it for playable.
Well, that's all true, but it also shows that they are two years behind Apple...
    Not gonna disagree with you here.
That's not an excuse at all. In my point of view Mantle, Vulkan, DX12, Metal and WebGPU are all nearly the same.
It is not like OpenGL vs DirectX 11, which was a huge difference...
All these technologies, Mantle, Vulkan, DX12, Metal and WebGPU, are historically copies of AMD's Mantle,
and the translation between Mantle, Vulkan, DX12, Metal and WebGPU is simple...
It is not like translating DirectX 11 to OpenGL, which is a hard task because of the big architectural differences.
    You can't use Vulkan on Metal without a wrapper. Doesn't matter how related they are, because the code you use for Metal can't be use for anything else. The purpose of Metal is to push developers to learn something that can only be used on their platforms and thus hoping to force developers to make stuff exclusive for them. They're willing to bet they have so much market share and influence that it doesn't matter if you want Vulkan. Apple even banned MoltenVK for using a non-public interface.

    This is not a new tactic as it's been tried by Sony. In the end open standards win. Even Microsoft ported DX12 to Linux because the writing is on the wall. It's not open source which makes it useless but they know you need other platforms to support something for it to stay relevant.

And there is also a real possibility that WebGPU will replace all these standards in the future,
because WebGPU is what they all agree on. The difference between WebGPU and Metal is small... remember, WebGPU comes directly from Apple...
WebGPU is a wrapper for other API's and therefore, like MoltenVK, will lose some performance.
You are not toxic because you tell the truth; you are toxic in how you handle the topic.
    Everyone wants to shoot the messenger.
I think even if you got a phone call from Apple making you technical director of a new all-in-on-Linux group, you would not stop being toxic...
    You mean my opinion would be bought then yes. As it stands my opinion is my own.
But Apple could change and join the Linux movement at any time...
    We would not be better off for it. Microsoft kinda did and we all know there's going to be a version of Windows using the Linux kernel. I'm not going to think of the what possibilities there is with Apple or Microsoft. Their past is more than enough for me to pass judgement. You cannot change what Apple has done with what Apple could do. They're evil and anti-consumer.



  • qarium
    replied
    Originally posted by hamishmb View Post
    Thanks for sharing the things on power use, those look interesting. It's still significant that battery life is excellent for many everyday uses, however.

    People from any community/group can be toxic, and sadly a lot are from IT communities in general as far as I can tell - it's not unique to Linux/macOS/Windows/BSDs/etc but I agree that it isn't good.

    EDIT: Also, for the record, I mainly use Linux on a 3rd gen i7 (laptop) and a Ryzen-powered desktop. Windows runs in VMs, and I have a few Macs around for macOS.
It's not telling the truth and pointing to facts that makes it toxic; it is how you handle the topic.

For example, acting like people like me are irrational Apple fanboys even though I do not own any Apple product...

Or his attack on Apple Metal because it is not a standard, when it is a fact that Apple goes all in on WebGPU as a standard, and WebGPU will replace standards like DX12 or Vulkan in the long run.

Many of these toxic people claim you are not allowed to talk about Apple M2 CPUs because Apple itself is evil, even if you have never bought any Apple product and are plainly and simply interested in the technology of the Apple M2 CPU...

Rockchip does the same as Apple; if you like this way of doing things you can buy Rockchip shares or Rockchip hardware. It is not as fast as the Apple M1/M2, but it is not x86 and it also has a ton of ASICs on top of the ARM cores.



  • qarium
    replied
    Originally posted by Dukenukemx View Post
    You're missing the point. I'm not against it, I'm just saying don't confuse ARM vs ASIC. There are limitations that comes inherently with ASIC's as well, such as not able to encode in a new or unsupported video format.
To be honest you are not against it, but you sound like it...

The problem of new codecs emerging that do not run on old ASICs is not a problem at all, and I can explain why. In the old offline world this would have been a problem, because a DVD or Blu-ray holds only one movie in only one video format, and if your general-purpose hardware is too slow and your ASIC does not support the format, it does not play. In the time of Netflix and YouTube this is no longer true. Why? Simple: they detect which codecs your machine supports and then send you the right video in the right codec.
If your hardware only supports h264, you get the h264 data.
If your hardware supports h264 and VP9, YouTube chooses to send you VP9 to save bandwidth.
If your hardware also supports h265 and you choose 4K resolution, Netflix sends you the right video data in h265.
If your hardware also supports AV1, these companies send you the right video data...

This means you claim there is a problem on the client side because the ASIC does not support a newly emerging codec, but all of this is fixed server-side at Netflix and YouTube, and the clients always get the video data they need.
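A rough sketch of the client-side half of that negotiation, as a web player might do it (the MIME/codec strings are illustrative examples; real services use MSE/DASH manifests and more elaborate capability checks):

```typescript
// Pick the "best" codec the browser says it can handle, then request that rendition.
// Ordered from most to least preferred; codec strings are illustrative examples.
const candidates = [
  'video/mp4; codecs="av01.0.05M.08"',   // AV1
  'video/mp4; codecs="hvc1.1.6.L93.B0"', // HEVC / h265
  'video/webm; codecs="vp9"',            // VP9
  'video/mp4; codecs="avc1.42E01E"',     // h264 baseline (legacy fallback)
];

function pickSupportedCodec(): string | undefined {
  // MediaSource.isTypeSupported reflects what the browser/OS/hardware can decode.
  return candidates.find(
    (type) => typeof MediaSource !== "undefined" && MediaSource.isTypeSupported(type)
  );
}

const chosen = pickSupportedCodec();
console.log(chosen ?? "no supported codec reported, fall back to h264");
// A real player would now fetch the manifest/rendition that matches `chosen`.
```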

    Originally posted by Dukenukemx View Post
    Depends on the game but Breath of the Wild gets at best 3 hours on the Nintendo Switch. Switch has a 4310mAh battery vs 5,313mAh on the Deck. Not a world of difference, plus you can emulate the Switch on the Deck.
If the games were not legacy closed-source apps, I am sure Valve would be better off using M2 hardware from Apple... but the problem is that these are all legacy x86 closed-source apps, so ARM is not an option.

    Originally posted by Dukenukemx View Post
    No they haven't. But this again does largely depend on the game. For example, this person gets 2 hours and 40 minutes in World of Warcraft at max settings. He was able to get 10 hours but he also lowered the settings down all the way and limited the frame rate. Though getting 35fps in WoW is rather pathetic. Here's a gaming laptop also getting 10hours with lowered settings. Here's another person running Fortnite and only getting 1 hour and 36 minutes.
Well, gaming on a mobile device at "max settings" like on a desktop... who in the real world would ever do this?

    Originally posted by Dukenukemx View Post
    These are valid, yes. AMD is going 5nm for desktop this year, and next year they'll be on 4nm on laptop. Intel bought a lot of TSMC 3nm, just like Apple so expect Intel to have 3nm parts next year, just like Apple.
Well, that's all true, but it also shows that they are two years behind Apple...

    Originally posted by Dukenukemx View Post
    What you want to do can be done on a Chromebook. You're also avoiding my points. Use the CPU hard and it will eat the same power as a x86. If the point of owning a 8 or 10 core CPU with 10+ GPU's is to remote access a workstation then that's fiscally irresponsible, also a poor excuse.
Yes, right. That's all right, but I just showed you how developers do this...
None of these competent people buy a light, small laptop and then compile kernels on it on a regular basis.
Of course you can do it on a Chromebook, but I think if you have the money the Apple M2 is the better option...

    Originally posted by Dukenukemx View Post
    You're making excuses for Apple. They could and should make a native Vulkan driver. If given the choice then nobody would use Metal if Vulkan was available. Also like Rosetta2, you're losing performance with MoltenVK.
That's not an excuse at all. In my point of view Mantle, Vulkan, DX12, Metal and WebGPU are all nearly the same.
It is not like OpenGL vs DirectX 11, which was a huge difference...
All these technologies, Mantle, Vulkan, DX12, Metal and WebGPU, are historically copies of AMD's Mantle,
and the translation between Mantle, Vulkan, DX12, Metal and WebGPU is simple...
It is not like translating DirectX 11 to OpenGL, which is a hard task because of the big architectural differences.

And there is also a real possibility that WebGPU will replace all these standards in the future,
because WebGPU is what they all agree on. The difference between WebGPU and Metal is small... remember, WebGPU comes directly from Apple...




    Originally posted by Dukenukemx View Post
    That was a good thing. I care not which wire I use to charge my device so long as they are all the same wires.
    I'm sensing a lot of potential.
Yes, in rare cases politics sometimes does something good.
Like in the Type-C case, politics could force Apple to make a second operating system possible on Apple iPhones.
They could force Apple to also support Linux on their hardware.

Politics, if not occupied by cabal deep-state members, could do a lot of good.

    Originally posted by Dukenukemx View Post
    So I'm toxic for pointing out how toxic Apple is? The irony. Apple is a bad company for so many reasons, even making Microsoft look like a saint. Not that Microsoft is pro open source or pro consumer rights but Microsoft wishes they had what Apple has. The things Microsoft does that's evil is because they want to become Apple. When Microsoft released Windows for ARM they only let users get apps from the app store, just like on iOS. Now Microsoft is preventing Linux install, just like Apple's T2 chip. Microsoft's Pluton is no different from Apple's T2 Security Chip, just that Microsoft is many years late to the party.
    Remember that you're defending a company who again doesn't use open standards and forces their proprietary standards. You support a company who has not contributed to Linux. You support a company who has employee's that jump off building because they're making your crap. This is also the same company that had a revolt from their employee's in India due to pay. You know, the same company who dodges taxes but moving it around the world. You defending Apple makes you no saint.
You are not toxic because you tell the truth; you are toxic in how you handle the topic.
I think even if you got a phone call from Apple making you technical director of a new all-in-on-Linux group, you would not stop being toxic...

Dude, the last Apple product I ever touched was a Macintosh Performa 5200 from 1994... so I am not an Apple fanboy.

And to be honest I think Microsoft is more evil, because of the monopolies they have, and all the good stuff from Microsoft is not because of Microsoft but because of the antitrust regulations put on them.

Apple in fact impacts me zero times, because I just do not have Apple products...

But Microsoft hurts me very much even though I do not have Windows installed... In every job you come into contact with Microsoft Office, and even if you want to do professional training in the computer sector, the final exam is held on Microsoft Windows.

Yes, right, Apple is evil and Microsoft is even more evil...

But Apple could change and join the Linux movement at any time...

