Asahi Linux Update Brings Experimental Apple M2 Support


  • #71
    Originally posted by qarium View Post
    in theory ARM is legacy too, and also closed source too... but if you do market-share research compared to x86, the ARM ecosystem tends to be more open source than x86
    That makes no sense. ARM is typically found on machines that lock themselves down unless you use some trickery to get around it. Even then, no one OS can be used for every ARM device. This isn't ARM's fault, but because it has no central unifying force, no two ARM devices behave the same. The only exception is the RPi, and even that doesn't have a UEFI boot loader like you see on x86 PCs. ARM is still licensed either way, but unlike x86 you can buy the license. Nothing open there.
    in my opinion the benefit of ARM compared to x86 is the fact that ARM's general-purpose cpu cores (i am not talking about the ASIC parts here, only the general-purpose cpu parts) save 5% of the transistors.
    now you think: 5% of transistors, and they consume power?? then you say there is proof that they do not consume power... right, i am talking about dead transistors that do nothing but are kept around for legacy compatibility reasons.
    You do know that Apple's M-series chips dwarf x86 chips in transistor count. Not sure where you get 5% from, but AMD and Intel seem to be more efficient in transistor count, and much more so in GPU transistors as well, including compared to Nvidia.
    intel or amd could do the same: just cut out the MMX and x87 floating-point units and SSE1 and even SSE2 and make it an SSE3/AVX-only cpu... it looks like ARM is just faster at dropping legacy to spare transistors.
    As far as I'm aware, modern x86 chips simulate older instructions through microcode built into the CPU. They don't keep the transistors around for them.
    right... but apple did not license AMD GPU tech like intel did; instead they licensed Imagination Technologies' PowerVR tile-based rasterization, because that is the one that uses the least energy compared to all the other variants.
    You do know everyone uses tile-based rendering, right? This was the main reason why Nvidia's Maxwell was so efficient. AMD introduced it, I believe, with Polaris.

    yes, right, a wrapper, but it is a simple one compared to (OpenGL vs DirectX 11)
    and your second thought is wrong: you claim you cannot use Metal code for an open standard.
    that is plain and simple wrong, because the Metal code is the same as in WebGPU.
    WebGPU is a web-browser-based API that you wouldn't want to use for gaming. It's a higher-level API that makes it easier to work across multiple APIs. The only reason we have multiple APIs is because of Microsoft and Apple, but Windows does support Vulkan, so this is mostly a problem from Apple. WebGPU causes overhead, which is the reason why low-level APIs like Vulkan and Metal were created.
    yes, right, this sounds right and sane but it is not....
    all your babbling is about Metal and Metal only, without the knowledge that apple works on WebGPU too.
    how does WebGPU fit into your theory? even google and microsoft support it
    APIs like Vulkan are too low-level for web browsers for security and validation reasons. WebGPU fixes that but isn't meant to replace Vulkan or Metal. If Apple alone made a standard that only they used, then nobody would use it. It's literally made for a web browser, hence why Apple, Mozilla, Microsoft and Google support it.
    all your toxic anti-apple babbling about how evil Metal is dissolves into thin air if you see WebGPU as the future standard for graphics APIs and GPU compute..
    Stop it. WebGPU is meant for the web and not for graphically intense tasks. It isn't replacing Vulkan.
    this means apple does not force a walled garden or "force developers to make stuff exclusive for them"
    instead they force the developers to develop for the best future we can get, with WebGPU.
    and they need to force it to defeat CUDA and OpenCL and OpenGL and DirectX and so on and so on.

    WebGPU will be the one standard to defeat all of them.
    OpenGL and Vulkan are pretty open standards. Apple is fixing a problem that doesn't exist. It's only a problem on Apple's platforms.
    i do not think you lose performance, because in WebGPU you translate the Metal high-level language to Vulkan SPIR-V... and from this point on it is exactly the same as vulkan... because vulkan does the same.
    It causes more draw calls so yes it does reduce performance. Why are all the Apple apologists like this?



    • #72
      Originally posted by Dukenukemx View Post
      That makes no sense. ARM is typically found on machines that lock themselves down unless you use some trickery to get around it. Even then, no one OS can be used for every ARM device. This isn't ARM's fault, but because it has no central unifying force, no two ARM devices behave the same. The only exception is the RPi, and even that doesn't have a UEFI boot loader like you see on x86 PCs. ARM is still licensed either way, but unlike x86 you can buy the license. Nothing open there.
      these devices are not "open" in the sense of GPLv3, right.

      but they count as an open-source ecosystem because they run linux instead of closed-source windows.
      that means they are linux in the sense of GPLv2

      "ARM is typically found on machines that lock themselves down unless you some trickery to get around it."

      i have no problem with that as long i am not forced to buy this crap. i count them as open-source ecosystem because they are mostly based on linux.

      "Nothing open there"

      it is not open for you as an end user, but it is an open ecosystem from the viewpoint of an OEM/manufacturer, because it runs linux and the open stack.

      Originally posted by Dukenukemx View Post
      You do know that Apple's M-series chips dwarf x86 chips in transistor count. Not sure where you get 5% from, but AMD and Intel seem to be more efficient in transistor count, and much more so in GPU transistors as well, including compared to Nvidia.
      you just don't read what i write: you talk about the transistor count of the SOC, i only talk about the general-purpose cpu part.
      according to multiple sources ARM saves 5% on that part.

      Originally posted by Dukenukemx View Post
      As far as I'm aware, modern x86 chips simulate older instructions through microcode built into the CPU. They don't keep the transistors around for them.
      there is evidence that intel emulates MMX in software now, because MMX was faster in the past than it is today, but no one blames intel for this because no one uses MMX anymore.
      and x87 floating point and SSE1 and SSE2?
      according to multiple sources intel keeps 5% of legacy transistor cruft around.

      Originally posted by Dukenukemx View Post
      You do know everyone uses tile-based rendering, right? This was the main reason why Nvidia's Maxwell was so efficient. AMD introduced it, I believe, with Polaris.
      https://www.anandtech.com/show/10536...ation-analysis
      all i said is this: apple has a patent license for this technology and AMD does not have it; AMD developed their own patent-free version, which does not give the same result.
      it is a similar situation to the h264/h265 patents: intel and nvidia have the patent license, AMD developed their own inferior ASIC to avoid the patent fee, and the result is that AMD's encoder produces ugly output.

      all i said is: AMD does not have the Imagination Technologies PowerVR tile-based rasterization patent license.

      Originally posted by Dukenukemx View Post
      WebGPU is a web-browser-based API that you wouldn't want to use for gaming. It's a higher-level API that makes it easier to work across multiple APIs. The only reason we have multiple APIs is because of Microsoft and Apple, but Windows does support Vulkan, so this is mostly a problem from Apple. WebGPU causes overhead, which is the reason why low-level APIs like Vulkan and Metal were created.
      your claim that you cannot use WebAssembly/WebGPU outside of the browser is a complete lie.
      "WASI: how to run WebAssembly code outside of your browser"
      WebAssembly can now also be used to run applications outside of the browser, thanks to WASI.
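      For illustration, a minimal sketch of that claim (my example, not from the linked article), assuming Rust's wasm32-wasi target (recently renamed wasm32-wasip1) and the standalone wasmtime runtime: an ordinary program compiled to WebAssembly and run with no browser involved.

      ```rust
      // build: rustup target add wasm32-wasi && cargo build --target wasm32-wasi
      // run:   wasmtime target/wasm32-wasi/debug/hello.wasm   (no browser anywhere)
      use std::env;

      fn main() {
          // WASI supplies the system interface (stdout, args, clock, files...)
          // that a browser would otherwise provide to the WebAssembly module.
          let args: Vec<String> = env::args().collect();
          println!("hello from WebAssembly outside the browser, args: {:?}", args);
      }
      ```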


      "It's a higher level API"

      wrong, it is a low-level API with a high-level representation language (Metal)
      the high-level language is translated to SPIR-V bytecode just like with vulkan.
      "WebGPU causes overhead"
      wrong, it does not produce overhead at run time, it is like "Rust":
      it is translated into SPIR-V bytecode at compile time and the end result is the same as vulkan.
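      To make that ahead-of-time translation concrete, here is a sketch (mine, for illustration): WebGPU's portable shader language is WGSL, and implementations lower it to SPIR-V (or MSL/HLSL) before anything runs on the GPU. This assumes the naga crate, the shader translator used inside wgpu, with the "wgsl-in" and "spv-out" cargo features; exact API details can vary by naga version.

      ```rust
      // Cargo.toml (assumed): naga = { version = "...", features = ["wgsl-in", "spv-out"] }
      // WGSL source is parsed and validated on the CPU, then lowered to SPIR-V,
      // i.e. the same bytecode a Vulkan driver consumes; nothing is interpreted at run time.

      const WGSL: &str = r#"
      @fragment
      fn main() -> @location(0) vec4<f32> {
          return vec4<f32>(1.0, 0.0, 0.0, 1.0); // plain red
      }
      "#;

      fn main() {
          // Parse the WGSL text into naga's intermediate representation.
          let module = naga::front::wgsl::parse_str(WGSL).expect("WGSL parse failed");

          // Validate it, the same step a browser or native WebGPU implementation performs.
          let info = naga::valid::Validator::new(
              naga::valid::ValidationFlags::all(),
              naga::valid::Capabilities::empty(),
          )
          .validate(&module)
          .expect("validation failed");

          // Lower to SPIR-V words ahead of time.
          let spirv = naga::back::spv::write_vec(
              &module,
              &info,
              &naga::back::spv::Options::default(),
              None,
          )
          .expect("SPIR-V generation failed");

          println!("compiled {} SPIR-V words ahead of time", spirv.len());
      }
      ```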

      Originally posted by Dukenukemx View Post
      API's like Vulkan are too low level for web browsers for security and validation reasons. WebGPU fixes that but isn't meant to replace Vulkan or Metal. If Apple alone made a standard that only they used then nobody would use it. It's literally made for a web browser hence why Apple, Mozilla, Microsoft, Google, support it.
      it is similar to Rust: all the high-level parts of Rust are handled at compile time, with zero overhead at run time.
      WebGPU is translated into SPIR-V bytecode at compile time, and the result is then the same as vulkan, with zero overhead at run time.

      it is not "literally made for a web browser"; they just use the web browser as a norm and standardisation vehicle.
      similar to WebAssembly and WASI, you can run it outside the browser (see the sketch below).
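      A small sketch of "WebGPU outside the browser", assuming the wgpu crate (the Rust-native WebGPU implementation used by Firefox); enumerate_adapters and get_info are real calls, though their exact signatures have drifted between wgpu releases, so treat this as illustrative.

      ```rust
      // Cargo.toml (assumed): wgpu = "..."  -- the WebGPU API as a plain desktop library.
      fn main() {
          let instance = wgpu::Instance::default();

          // On native targets this talks straight to Vulkan, Metal, DX12 or GL,
          // with no browser in the loop.
          for adapter in instance.enumerate_adapters(wgpu::Backends::all()) {
              let info = adapter.get_info();
              println!("{} via {:?}", info.name, info.backend);
          }
      }
      ```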

      "Apple, Mozilla, Microsoft, Google,"

      they all agree that over the long run WebGPU can replace all other APIs...


      Originally posted by Dukenukemx View Post
      Stop it. WebGPU is meant for the web and not for graphically intense tasks. It isn't replacing Vulkan.
      OpenGL and Vulkan are pretty open standards. Apple is fixing a problem that doesn't exist. It's only a problem on Apple's platforms.
      It causes more draw calls so yes it does reduce performance. Why are all the Apple apologists like this?
      similar to Rust, all the overhead is at compile time, with zero overhead at run time.
      it does not cause more draw calls if you use it outside the browser.
      Phantom circuit Sequence Reducer Dyslexia



      • #73
        Originally posted by qarium View Post
        "ARM is typically found on machines that lock themselves down unless you some trickery to get around it."

        i have no problem with that as long i am not forced to buy this crap. i count them as open-source ecosystem because they are mostly based on linux.
        That makes no sense. Apple isn't running an open source ecosystem. Apple won't even let you side-load on their iOS devices, unlike Android. Even calling Android a Linux OS is pushing it. You obviously don't have experience putting custom ROMs onto an Android phone like I do. It's a nightmare. HTC phones are probably the worst, as you have to sign onto their website to get a code to unlock the boot loader. It's worse if you want to S-OFF your device. It's still a problem even for the easy devices, because it's never as easy as just installing it like you do with x86 devices.
        "Nothing open there"

        it is not open for you as an end user, but it is an open ecosystem from the viewpoint of an OEM/manufacturer, because it runs linux and the open stack.
        I don't care for the OEM/manufacturer. I care about the end user.
        you just don't read what i write: you talk about the transistor count of the SOC, i only talk about the general-purpose cpu part.
        according to multiple sources ARM saves 5% on that part.
        Citation needed. What sources?
        according to multiple sources intel keeps 5% of legacy transistor cruft around.
        Again, citation needed. Even if Intel did, what's the problem with that? I can run old software. That's a good thing. Apple M* owners can run older software very slowly.
        all i said is this: apple has a patent license for this technology and AMD does not have it; AMD developed their own patent-free version, which does not give the same result.
        it is a similar situation to the h264/h265 patents: intel and nvidia have the patent license, AMD developed their own inferior ASIC to avoid the patent fee, and the result is that AMD's encoder produces ugly output.

        all i said is: AMD does not have the Imagination Technologies PowerVR tile-based rasterization patent license.
        I'm confused here because you talk about tile-based rendering and then video codecs. These two are not the same. If you're talking about AMD's video decoding and encoding ASIC, then yes, it's inferior. Their tile-based rendering though is far above and beyond what Apple has. Apple's GPU tech is really dated and bad. They don't even have ray tracing. Intel's new GPU will have ray tracing.
        your claim that you cannot use WebAssembly/WebGPU outside of the browser is a complete lie.
        "WASI: how to run WebAssembly code outside of your browser"
        https://tomassetti.me/wasi-how-to-ru...-your-browser/
        I didn't say you can't use it outside of a browser, but that it's ideally meant for a browser.
        "It's a higher level API"

        wrong, it is a low-level API with a high-level representation language (Metal)
        the high-level language is translated to SPIR-V bytecode just like with vulkan.
        "WebGPU causes overhead"
        wrong, it does not produce overhead at run time, it is like "Rust":
        it is translated into SPIR-V bytecode at compile time and the end result is the same as vulkan.
        It's more overhead either way. MoltenVK works the same way, and you still lose performance. Not a lot, as you might lose anywhere from 1% to 3%, but that's enough reason to get Apple to make a Vulkan driver for Mac OSX.
        they all agree that over the long run WebGPU can replace all other APIs...
        Vulkan can do that, and already has. That is not the purpose of WebGPU but if that were the case then WebGPU is just making the situation worse. There are a few new games coming to Mac OSX like Resident Evil 8. It's using Metal, and using Apple's Metal 3 API with MetalFX. Apple wouldn't be working on Metal if WebGPU was to replace it, and that's because it isn't meant to.




        • #74
          Originally posted by Dukenukemx View Post
          That makes no sense. Apple isn't running an open source ecosystem. Apple won't even let you side-load on their iOS devices,
          i was talking about ARM in general, and apple is the only one that does not follow this open-source-ecosystem rule.

          most ARM devices run on an open source ecosystem... and for x86 it is the opposite: most of them run on a closed source ecosystem.

          Originally posted by Dukenukemx View Post
          unlike Android. Even calling Android a Linux OS is pushing it. You obviously don't have experience putting custom ROMs onto an Android phone like I do. It's a nightmare. HTC phones are probably the worst, as you have to sign onto their website to get a code to unlock the boot loader. It's worse if you want to S-OFF your device. It's still a problem even for the easy devices, because it's never as easy as just installing it like you do with x86 devices.
          I have a made-in-germany gigaset gs290 with the custom ROM /e/OS.

          it's very easy if you buy your phone with the custom ROM pre-installed.
          if someone wants this, they should not buy an HTC and install a custom ROM; instead they should buy one with a custom ROM pre-installed.

          Originally posted by Dukenukemx View Post
          I don't care for the OEM/manufacturer. I care about the end user.
          you see, you just have the wrong mindset, and you know it is a fact that ARM is mostly used in an open-source ecosystem and x86 is mostly used in a closed-source ecosystem.

          and again, you claim you are forced to buy such closed devices, but the fact is you are not forced, you can use an open one.

          Originally posted by Dukenukemx View Post
          Citation needed. What sources?
          you can google it yourself, but it is also in this forum: multiple insiders from amd and intel have admitted it here.
          the legacy parts of x86 cost you 5% of the transistors.
          and do not again mix up the transistor count of the SOC with the transistor count of the general-purpose cpu core.

          Originally posted by Dukenukemx View Post
          Again, citation needed. Even if Intel did, what's the problem with that? I can run old software. That's a good thing. Apple M* owners can run older software very slowly.
          you are free to google it for yourself.
          the problem with that is that you only need it if you have large legacy closed-source apps, like a steam game library on x86_64.
          if you control the ecosystem like apple does, you do not need it. that's why they use ARM without the legacy bits.
          if you are on an open-source ecosystem like linux devices, closed or open, you also do not need it.
          Wintel, microsoft+intel, are the only ones who need this legacy cruft, because of all the closed-source apps compiled to the outdated x86 isa.
          apple does not need it and linux also does not need it.

          that's why i say linux fits the apple m1/m2: because of the large open-source ecosystem.

          Originally posted by Dukenukemx View Post
          I'm confused here because you talk about tile-based rendering and then video codecs. These two are not the same. If you're talking about AMD's video decoding and encoding ASIC, then yes, it's inferior. Their tile-based rendering though is far above and beyond what Apple has. Apple's GPU tech is really dated and bad. They don't even have ray tracing. Intel's new GPU will have ray tracing.
          AMD does not have the original tile-based rendering patents, similar to the x264 encoding and decoding ASIC patents.
          companies like apple and nvidia just go and license the original patents, similar to the S3TC texture compression patents.
          AMD is notorious for not paying for patent licenses; they instead develop inferior versions that are patent-free.

          "Their tile-based rendering though is far above and beyond what Apple has"

          the patent runs out in 2026
          yes, amd has tile-based rendering, but i am sure it is similar to x264: an inferior solution.

          Originally posted by Dukenukemx View Post
          I didn't say you can't use it outside of a browser, but that it's ideally meant for a browser.
          It's more overhead either way. MoltenVK works the same way, and you still lose performance. Not a lot, as you might lose anywhere from 1% to 3%, but that's enough reason to get Apple to make a Vulkan driver for Mac OSX.
          Vulkan can do that, and already has. That is not the purpose of WebGPU but if that were the case then WebGPU is just making the situation worse. There are a few new games coming to Mac OSX like Resident Evil 8. It's using Metal, and using Apple's Metal 3 API with MetalFX. Apple wouldn't be working on Metal if WebGPU was to replace it, and that's because it isn't meant to.
          we agree to disagree here. compile-time overhead does not result in run-time overhead...
          also, "it's ideally meant for a browser" does not mean that it cannot replace all the other standards…

          WebGPU is a standard all of them agree on... why not? it's the best consensus that all of them agree to.

          but you claim apple does not do open standards, you claim they only do metal as a walled garden..

          you are plain and simple wrong, apple does in fact do open standards, it's called WebGPU
          Phantom circuit Sequence Reducer Dyslexia



          • #75
            Originally posted by qarium View Post
            i was talking about ARM in general, and apple is the only one that does not follow this open-source-ecosystem rule.
            Nvidia got a middle finger from Linus Torvalds over their approach to open source while using ARM. This isn't ARM's fault, but many companies are trying to use ARM to make a closed ecosystem where you can do anything you want, so long as it's within their rules.
            most ARM devices run on an open source ecosystem... and for x86 it is the opposite: most of them run on a closed source ecosystem.
            Most small devices run on open source code, but most of them won't let you touch it with a 10-foot pole. Just because they use open source code doesn't mean they're open, or their platform is open. I can install anything I want on an x86 machine, including Android and Mac OSX. For the most part it works. For most Android phones I need someone in Russia living in their parents' basement to port LineageOS over to a specific device, assuming there's a way to flash a new ROM onto that device. Usually with some problems, because again it's mostly done by a single guy. There are like thousands of ARM-based Android devices that each need to be catered to to get LineageOS onto them, and some of them just never get that treatment. What good is using open source code if the platform is inherently locked down?

            Basically x86 is living off the back of IBM compatibles from decades ago, and ARM just doesn't have that legacy to benefit from. Even though ARM has been around as far back as the 3DO, it just never created a central method of loading an OS. And yeah, most devices that use x86 have closed source OSes, but for many years one of them was Apple's Mac OSX.
            you see, you just have the wrong mindset, and you know it is a fact that ARM is mostly used in an open-source ecosystem and x86 is mostly used in a closed-source ecosystem.

            and again, you claim you are forced to buy such closed devices, but the fact is you are not forced, you can use an open one.
            I paid $100 for my Moto X4 and it runs the latest LineageOS. How much did you pay for a truly open device? It's only truly open if you pay extra, amirite? Having to buy a specific device to benefit from ARM's open nature means I have to go out of my way. I haven't found an x86 device that won't let me install Linux. Maybe ChromeOS devices, but I hate those, and they run mostly open source too.
            you can google it yourself, but it is also in this forum: multiple insiders from amd and intel have admitted it here.
            the legacy parts of x86 cost you 5% of the transistors.
            and do not again mix up the transistor count of the SOC with the transistor count of the general-purpose cpu core.
            I'm not doing your homework to prove you're right. Also, Apple M chips have some extra transistors to make it possible to emulate x86 quickly. Again, I don't fault Apple for this since you can't just dump x86 and go ARM straight away. As for transistor count, I don't know which part is ARM and which part is not. All I can do is compare a mobile x86 AMD chip with one from Apple, look at performance, and realize that Apple is using twice as many transistors. Where those transistors are going is unknown.

            that's why i say linux fits the apple m1/m2: because of the large open-source ecosystem.

            AMD does not have the original tile-based rendering patents, similar to the x264 encoding and decoding ASIC patents.
            companies like apple and nvidia just go and license the original patents, similar to the S3TC texture compression patents.
            AMD is notorious for not paying for patent licenses; they instead develop inferior versions that are patent-free.

            "Their tile-based rendering though is far above and beyond what Apple has"

            https://en.wikipedia.org/wiki/Tiled_...op_and_console
            the patent runs out in 2026
            yes, amd has tile-based rendering, but i am sure it is similar to x264: an inferior solution.
            I'm not sure if you mean tile-based rendering is using x264, which wouldn't make sense, or that it's inferior because AMD is using an inferior decoder and encoder and that somehow translates over to rendering? Just in case: tile-based rendering breaks up the image into tiles and determines whether hidden geometry actually needs to be rendered. Turns out GPUs would otherwise render every texture in view and not in view, and waste memory storage and bandwidth doing so. There were other methods to avoid rendering unneeded textures, but it wasn't until Maxwell for Nvidia and Polaris for AMD. Even then, AMD's Polaris implementation of tile rendering wasn't as good as Nvidia's.
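            To make the "breaking the image into tiles" part concrete, here is a toy sketch of the binning step (my illustration only, not any vendor's hardware): each triangle's screen-space bounding box decides which tiles it touches, and each tile is then rasterized and shaded independently out of fast on-chip memory, which is where the bandwidth savings come from.

            ```rust
            // Toy tile binning: assign each triangle to the screen tiles its bounding box overlaps.
            const TILE: usize = 32;            // tile size in pixels (illustrative)
            const W: usize = 1920;
            const H: usize = 1080;

            #[derive(Clone, Copy)]
            struct Tri { min: (f32, f32), max: (f32, f32) } // screen-space bounding box

            fn bin(tris: &[Tri]) -> Vec<Vec<usize>> {
                let (tx, ty) = (W / TILE, H / TILE);
                let mut bins = vec![Vec::new(); tx * ty];
                for (i, t) in tris.iter().enumerate() {
                    let x0 = (t.min.0.max(0.0) as usize / TILE).min(tx - 1);
                    let y0 = (t.min.1.max(0.0) as usize / TILE).min(ty - 1);
                    let x1 = (t.max.0.max(0.0) as usize / TILE).min(tx - 1);
                    let y1 = (t.max.1.max(0.0) as usize / TILE).min(ty - 1);
                    for y in y0..=y1 {
                        for x in x0..=x1 {
                            bins[y * tx + x].push(i); // triangle i touches tile (x, y)
                        }
                    }
                }
                bins
            }

            fn main() {
                let tris = [Tri { min: (100.0, 100.0), max: (300.0, 250.0) }];
                let bins = bin(&tris);
                let touched = bins.iter().filter(|b| !b.is_empty()).count();
                println!("triangle touches {} of {} tiles", touched, bins.len());
            }
            ```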

            That was the situation then; now AMD's RDNA2 is far better than Nvidia's. The power usage on AMD's RDNA2-based GPUs is far better than Nvidia's Ampere. None of AMD's GPUs make use of GDDR6X, and they still do great in performance. Assuming that the Apple M2 Max, which is at 57 billion transistors, uses half the transistor count for the GPU, the AMD RX 6900 uses only 26.8 billion. Intel's 12900K, which is faster than the M1 Ultra, has an estimated transistor count of around 8.2 billion.


            we agree to disagree here. compile-time overhead does not result in run-time overhead...
            As a Linux user, you think I want DXVK or VKD3D over a native implementation of DX11 and DX12? I want a Gallium Eleven and Twelve because it would yield better performance. Don't get me wrong, DXVK and VKD3D are pretty fast, but it would be faster if Mesa had a native working DX11 and DX12 implementation, like it was done with Gallium Nine. Even Valve knew that getting that to work hassle-free is just not gonna happen. It would be nice if every developer made games in Vulkan, but that isn't gonna help without a strong bridge for them to come over to Linux. Now, Apple could make a Vulkan driver for Mac OSX but doesn't, because they want to lock in developers. We are talking about the same company that doesn't let emulators or web browsers onto iOS, due to them being able to compete against the app store. And yes, FireFox for iOS is not the same FireFox as on Android and everything else, because by Apple's policies all web browsers must use the built-in WebKit rendering framework and WebKit JavaScript on iOS. Firefox can't even use their own Gecko layout engine.

            Also please don't say WebKit is better anyway. It's like I can read your mind.
            WebGPU is a standard all of them agree on... why not? it's the best consensus that all of them agree to.

            but you claim apple does not do open standards, you claim they only do metal as a walled garden..

            you are plain and simple wrong, apple does in fact do open standards, it's called WebGPU
            Apple is one of many who supported WebGPU. By this logic, Microsoft also supports WebGPU so I'm sure Microsoft is also open to open standards. If Apple went on their own to force developers to use Metal for webpages then the industry would laugh at them and never support it. Even Apple realizes that they have limitations in how far their influence can reach. Also again, why did Apple release Metal when they have WebGPU? Because WebGPU doesn't replace Metal, Vulkan, and DX12.



            • #76
              Originally posted by Dukenukemx View Post
              Nvidia got a middle finger from Linus Torvalds over their approach to open source while using ARM. This isn't ARM's fault, but many companies are trying to use ARM to make a closed ecosystem where you can do anything you want, so long as it's within their rules.
              right, it is not ARM's fault... and i am fine with it as long as i am not forced to buy the product.

              all systems i use and buy are completely open, with a custom ROM on my smartphone and fedora on my pc.

              Originally posted by Dukenukemx View Post
              Most small devices run on open source code, but most of them won't let you touch it with a 10-foot pole. Just because they use open source code doesn't mean they're open, or their platform is open.
              right, but who cares? if they do not force you to buy it, you can just ignore that.

              i ignore it too. i do not care about these fools and idiots, to be honest. if people buy closed devices, it is their fault.

              Originally posted by Dukenukemx View Post
              I can install anything I want on an x86 machine, including Android and Mac OSX. For the most part it works. For most Android phones I need someone in Russia living in their parents' basement to port LineageOS over to a specific device, assuming there's a way to flash a new ROM onto that device. Usually with some problems, because again it's mostly done by a single guy. There are like thousands of ARM-based Android devices that each need to be catered to to get LineageOS onto them, and some of them just never get that treatment. What good is using open source code if the platform is inherently locked down?
              right, but why not just buy a smartphone with LineageOS or /e/OS pre-installed? both are based on CyanogenMod...
              like this company: https://murena.com/products/smartphones/

              be sure this fixes all your problems: you get distance from evil google and you also distance yourself from greedy smartphone manufacturers... my smartphone is made in germany by gigaset, but i have /e/OS on it and i get security updates years after gigaset stopped making security patches.

              this is the point about all these closed devices: you are not forced to buy any of these products, you can buy your smartphone with CyanogenMod/LineageOS//e/OS on it and ignore the closed market altogether.

              Originally posted by Dukenukemx View Post
              Basically x86 is living off the back of IBM compatibles from decades ago, and ARM just doesn't have that legacy to benefit from. Even though ARM has been around as far back as the 3DO, it just never created a central method of loading an OS. And yeah, most devices that use x86 have closed source OSes, but for many years one of them was Apple's Mac OSX.
              right, that's all true, but again: are you forced to buy these poorly supported, closed ARM devices?
              i truly don't care at all about other stupid people who buy closed and incompatible crap.
              i plain and simple do not buy it.

              "it just never created a central method of loading an OS"

              isn't it a joke that apple did exactly this? if you buy an apple m2, you get exactly this...

              Originally posted by Dukenukemx View Post
              I paid $100 for my Moto X4 and it runs the latest LineageOS. How much did you pay for a truly open device? It's only truly open if you pay extra, amirite? Having to buy a specific device to benefit from ARM's open nature means I have to go out of my way. I haven't found an x86 device that won't let me install Linux. Maybe ChromeOS devices, but I hate those, and they run mostly open source too.
              at the time, 2 years ago, i bought a gigaset gs290 with the ROM from gigaset for 130€
              the same smartphone with /e/OS from murena.com was 230€
              now you say 3 things:
              1. your 100 dollar moto x4 is cheaper: well, right
              2. the gigaset version is cheaper: well, right
              3. you pay extra for the /e/OS one from murena.com: well, right.
              but there are benefits too: you do not have to do it yourself, and you get much longer updates because you pay for the service, and so on and so on...

              Originally posted by Dukenukemx View Post
              I'm not doing your homework to prove you're right.
              who cares? it does not mean you are right...
              it is what i read in hundreds of specific articles on this topic and also in thousands of forum posts about this topic.
              it is just not worth it to prove anything to you.
              be sure it is like i told you, and no one cares about your opinion, even if that does not sound very friendly to you.

              Originally posted by Dukenukemx View Post
              Also, Apple M chips have some extra transistors to make it possible to emulate x86 quickly. Again, I don't fault Apple for this since you can't just dump x86 and go ARM straight away. As for transistor count, I don't know which part is ARM and which part is not. All I can do is compare a mobile x86 AMD chip with one from Apple, look at performance, and realize that Apple is using twice as many transistors. Where those transistors are going is unknown.
              no, it's not unknown, there are die shots and zone descriptions from apple.
              and the ARM general-purpose cpu part is small compared to the rest.
              "Apple announced their new 20 billion transistor M2 SoC at WWDC. Unfortunately, it's quite a minor uplift in performance in some areas such as CPU. Apple's gains mostly came from the GPU and video editing side of things."



              Originally posted by Dukenukemx View Post
              I'm not sure if you mean tile-based rendering is using x264, which wouldn't make sense
              the only connection between x264 and tile-based rendering is the patent problem
              and i am writing about the patent problem

              Originally posted by Dukenukemx View Post
              or that it's inferior because AMD is using an inferior decoder and encoder and that somehow translates over to rendering? Just in case: tile-based rendering breaks up the image into tiles and determines whether hidden geometry actually needs to be rendered. Turns out GPUs would otherwise render every texture in view and not in view, and waste memory storage and bandwidth doing so. There were other methods to avoid rendering unneeded textures, but it wasn't until Maxwell for Nvidia and Polaris for AMD. Even then, AMD's Polaris implementation of tile rendering wasn't as good as Nvidia's.

              no, there is not an encoder and decoder in rendering, lol... i only talk about the patent problem

              as you say: "it wasn't until Maxwell for Nvidia and Polaris for AMD. Even then, AMD's Polaris implementation of tile rendering wasn't as good as Nvidia's"

              the AMD version was only not as good as the one from Nvidia because AMD used a patent-free implementation, while Nvidia just got the original patent license.

              Originally posted by Dukenukemx View Post
              That was the situation then; now AMD's RDNA2 is far better than Nvidia's. The power usage on AMD's RDNA2-based GPUs is far better than Nvidia's Ampere. None of AMD's GPUs make use of GDDR6X, and they still do great in performance. Assuming that the Apple M2 Max, which is at 57 billion transistors, uses half the transistor count for the GPU, the AMD RX 6900 uses only 26.8 billion. Intel's 12900K, which is faster than the M1 Ultra, has an estimated transistor count of around 8.2 billion.

              sure, it is not impossible for AMD to avoid the patent fee and at the same time make a better product and develop a better technology... sure, it's all within the realm of possibility.
              but AMD still does not have the relevant tile-based rendering patent.

              you write about transistors and not about clock...
              but the amount of on/off switching in the chip is the transistor count multiplied by the clock speed.
              that means 26.8 billion transistors × 2.7 GHz for the rx6900...
              the apple m2 is only 3.2 GHz, compared to the 12900K which is 5.5 GHz...

              but if you calculate 8.2 × 5.5 GHz = 45.1,
              and then 45.1 / 3.2 ≈ 14.1,
              this means at 3.2 GHz intel would need about 14 billion transistors.
              apple's is 20 × 3.2 = 64.

              i think the difference between these (14 billion transistors @ 3.2 GHz) and apple's 20 billion transistors is the ASIC part: there are transistors not used in most benchmarks, like the h264 encoder and decoder and other similar ASIC tech.
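              For what it's worth, here is the same back-of-the-envelope normalisation spelled out (my sketch, using qarium's own assumption that transistor count × clock is a rough proxy for switching work; that is not an established metric):

              ```rust
              // qarium's rough "transistors scaled by clock" comparison, spelled out.
              // Assumption (his, not an established metric): work ~ transistors * clock.
              fn main() {
                  let intel_12900k = (8.2_f64, 5.5_f64);  // (billions of transistors, GHz)
                  let apple_m2     = (20.0_f64, 3.2_f64);

                  // Normalise the 12900K to the M2's 3.2 GHz clock:
                  let intel_at_m2_clock = intel_12900k.0 * intel_12900k.1 / apple_m2.1;
                  println!("12900K scaled to 3.2 GHz: ~{:.1}B transistors", intel_at_m2_clock); // ~14.1

                  // The gap to the M2's 20B budget is what he attributes to ASIC blocks
                  // (video encode/decode etc.) that most CPU benchmarks never touch.
                  println!("gap vs M2: ~{:.1}B", apple_m2.0 - intel_at_m2_clock); // ~5.9
              }
              ```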

              i know similar videos: companies like intel or nvidia cheat people into buying much more powerful hardware because some inefficiencies in the software burn all the performance to ashes

              Originally posted by Dukenukemx View Post
              As a Linux user, you think I want DXVK or VKD3D over a native implementation of DX11 and DX12? I want a Gallium Eleven and Twelve because it would yield better performance. Don't get me wrong, DXVK and VKD3D are pretty fast, but it would be faster if Mesa had a native working DX11 and DX12 implementation, like it was done with Gallium Nine. Even Valve knew that getting that to work hassle-free is just not gonna happen. It would be nice if every developer made games in Vulkan, but that isn't gonna help without a strong bridge for them to come over to Linux.
              i think the end consumer does not care if it is metal or dx12 or vulkan or DXVK or WebGPU

              really, what is your problem if microsoft agrees on WebGPU and apple agrees on WebGPU and google agrees on WebGPU...??? then in the end you use WebGPU directly on linux without the browser.

              what you want, everything "native" to linux/vulkan, is, as you say, "just not gonna happen"

              Originally posted by Dukenukemx View Post
              Now, Apple could make a Vulkan driver for Mac OSX but doesn't, because they want to lock in developers. We are talking about the same company that doesn't let emulators or web browsers onto iOS, due to them being able to compete against the app store. And yes, FireFox for iOS is not the same FireFox as on Android and everything else, because by Apple's policies all web browsers must use the built-in WebKit rendering framework and WebKit JavaScript on iOS. Firefox can't even use their own Gecko layout engine.
              Also please don't say WebKit is better anyway. It's like I can read your mind.
              apple thinks they are smart, but it is clear that they are not.
              but i am 100% sure WebGPU becomes the one and only standard, and that even apple will drop metal in favour of WebGPU..
              i don't care if apple goes native vulkan on macOS; instead i care whether they go WebGPU...

              Originally posted by Dukenukemx View Post
              Apple is one of many who supported WebGPU. By this logic, Microsoft also supports WebGPU so I'm sure Microsoft is also open to open standards. If Apple went on their own to force developers to use Metal for webpages then the industry would laugh at them and never support it. Even Apple realizes that they have limitations in how far their influence can reach. Also again, why did Apple release Metal when they have WebGPU? Because WebGPU doesn't replace Metal, Vulkan, and DX12.
              no, that's all wrong, you cheat on the timeline... you claim something like: metal and WebGPU came into this world at the same time and apple chose metal to hurt people... that's complete bullshit, really.

              at the time of the metal release there was no WebGPU

              apple metal is from 2014

              WebGPU is from 2018:
              "On June 1, 2018, citing "resolution on most-high level issues" in the cross-browser standardization effort, Google's Chrome team announced intent to implement the future WebGPU standard.[2]"
              in other words, the WebGPU standard is not yet released...

              So yes, there is a possibility that WebGPU will replace metal on macOS...

              Phantom circuit Sequence Reducer Dyslexia

