Tiny Corp Puts Their AMD-Powered Compute Boxes "On Hold"


  • #51
    Originally posted by Quackdoc View Post
    No, it doesn't; AMD GPUs will randomly hit various ring timeouts, causing the card to need a reset, which usually fails - and even when it doesn't fail, it will still usually kill the compositor and all child processes.
    OK, I stand corrected regarding it just being VFIO, but I haven't seen that with an AMD card yet. I don't stand corrected on the ridiculous "made of rice" comment.
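    For anyone wanting to check whether their own card is hitting these resets, the kernel does log the events; below is a minimal sketch of filtering a dmesg/journal dump for them. The sample lines are illustrative, not captured from a real system, and the exact message wording varies across kernel versions, so the pattern is an assumption rather than an exhaustive match.

```python
import re

# Pattern for amdgpu ring-timeout / GPU-reset kernel messages.
# The exact wording varies across kernel versions; this is an assumption
# based on commonly reported amdgpu log formats, not an exhaustive match.
PATTERN = re.compile(r"amdgpu.*(ring .* timeout|GPU reset)", re.IGNORECASE)

def find_gpu_resets(kernel_log_lines):
    """Return the log lines that look like amdgpu ring timeouts or resets."""
    return [line for line in kernel_log_lines if PATTERN.search(line)]

# Illustrative (hypothetical) dmesg excerpt:
sample = [
    "[  123.4] usb 1-2: new high-speed USB device",
    "[  456.7] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout",
    "[  456.9] amdgpu 0000:0b:00.0: amdgpu: GPU reset begin!",
]
print(find_gpu_resets(sample))  # matches the two amdgpu lines
```

    In practice you would feed it `dmesg` or `journalctl -k` output rather than a hard-coded list.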



    • #52
      Originally posted by Panix View Post
      AMD's gpu division is a joke and a clown show.
      Suggested addition: EVERY GPU maker (and, really, every CPU / motherboard maker) in the "consumer electronics" market is a clown inflicting painful bad jokes on us all.

      On the one hand, look at the state of the technology: hundreds of GB of RAM easily attainable, GPUs with 1 TB/s+ of VRAM bandwidth, O(10k) SIMD processors, and O(70 TFLOPS) of FP32 compute. It is truly an era of personal "supercomputers", at least by the standards of supercomputing that prevailed for most of my life.

      Sure, the giant HPC clusters / data centers et al. have several orders of magnitude more than a single "high end" consumer PC, but my point is about what each SMB / personal desktop / server can handle.

      And yet HOW and WHY do we have most of these capabilities in the consumer market at all?
      ONLY so kids & beyond can play the latest glitzy, meaningless video games.
      Apparently it's not enough to have a personal "supercomputer"; it's got to have flashing RGB lights on everything from the RAM DIMMs to the GPU to the cooling fans. Better get rid of that utilitarian steel case, too; make it either glass or thin mesh so you can see the lights blink!

      So here we are with system technology we'd have drooled over ~20 years ago, and it's "for gamers", made at "race to the bottom" quality levels. The best (very limited) warranty one is going to get on one's $2000 GPU is 3 years, and probably much worse for many other parts and the rest of the system components.

      One is pretty damn lucky if one can even build such a system without something being physically / functionally broken on day #1, and after that, well, good luck with the HW / FW / SW not being so buggy that your system glitches / crashes / whatever on a regular basis.

      Here, obviously, even experts can't get a box full of AMD's top consumer-line GPUs to run HPC loads stably / predictably.

      Intel can't even be bothered to release the code / documentation / APIs to support the most basic GPU monitoring & control sensors / performance counters on Linux.

      NVIDIA can't be bothered to make their settings / monitoring / control utility work on Wayland without X.

      AMD can't be bothered to support ROCm across their consumer GPU line-up, whereas NVIDIA at least has CUDA working on all GPU models.
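      (As an aside for anyone stuck on an officially unsupported consumer card: a commonly shared, unofficial workaround is to override ROCm's target detection via an environment variable. This is community folklore rather than AMD-documented advice, and the value must match your GPU's architecture family; 10.3.0 is the one usually quoted for RDNA2 cards.)

```shell
# Unofficial, commonly shared ROCm workaround for unsupported consumer GPUs:
# make the runtime treat the card as a supported gfx target.
# The value is architecture-specific; 10.3.0 (gfx1030) is the one usually
# quoted for RDNA2 consumer cards. Use at your own risk.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```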

      None of these clowns can figure out how to run a business in any way other than keeping things as broken (architecturally, feature-wise) as possible, so that their customers stay in a perpetual state of suffering with limited / broken things, paying through the nose every couple of years for incremental improvements while the foundational things go neglected.

      How have we "let" the modern GPU rise to such ridiculous dominance over the past 30 years?
      The system CPU is almost an irrelevant dot of compute OPs / bandwidth compared to the GPU.

      The only good thing one can say about the desktop x86 PC platform is that it is "an open standard", unlike Macs or servers, so at least one has some vendor choice for each piece of the system and, thanks to FOSS, for the SW on top of it besides the drivers.

      But otherwise? The SW/FW/HW/mechanics are so broken they'd seem like a ridiculous kid's project compared to any actual quality piece of computing gear, today or from the past 5 decades.

      The vast RAM bandwidth should be core to the system, not a black box isolated on some plug-in toy (literally!) GPU. Same for the multi-thousand-core SIMD processors.

      What is a computer besides its RAM and its processing system's ability to process data in RAM at high throughput, reliably?

      We've fallen off the edge of "computers should compute correctly", so these super-computing GPUs and consumer systems don't generally even support / enable ECC.
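      The ECC point is easy to make concrete: one flipped bit in memory silently changes a stored value, sometimes catastrophically. A toy sketch simulating a single-bit error in an IEEE-754 double (the helper name is mine, for illustration):

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 binary64 representation of a double."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    (corrupted,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return corrupted

x = 1.0
print(flip_bit(x, 51))  # top mantissa bit flipped: 1.5 (off by 50%)
print(flip_bit(x, 62))  # top exponent bit flipped: inf
```

      Without ECC, nothing detects or corrects the flip; the wrong value just propagates through the computation.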

      The only access to GPU compute sits behind so many layers of legacy DRI, DRM, black-box HW/FW, etc., that companies like this one give up on trying to optimize their code for HPC / dataflow / DSP. Actually using commodity computers / GPUs for computing is almost an astonishing afterthought to Intel, AMD, and NVIDIA, who'd have us "bread and circuses" folk only play video games and leave the computing to the data center / cloud entirely.

      How are we going to get computers that are actually POWERFUL but also RELIABLE in this mess?
      The thought of GPUs dominating the marketplace for years to come as the only real computing option is bleak.

      Intel / AMD / NVIDIA / Microsoft clearly have no real incentive to "fix" the consumer PC platform.
      Apple went off in their own direction and would happily be a consumer-facing old-IBM, without competition or any pressure to moderate costs and allow consumer choice.

      Forget open source; what about open COMPUTERS, the things you need to, you know, actually RUN open source? And how about a sustainable approach where HW is actually made with quality, modularity, and durability, and isn't EOLed into e-waste in 2-5 years, before its huge defects / bugs are ever worked out, leaving us in a state of perpetual alpha?

      Something has to replace the PC architecture, scale it, and also accommodate diversity, going beyond x86 to RISC-V, ARM, Hopper-style super-chips, whatever.





      • #53
        I really wonder how AMD itself develops the firmware if there is no way to debug it on a crash. Maybe they have some special debugging hardware not available to outsiders?



        • #54
          pong Very well said. You should have a blog, forum, or site about this or something... I and many others would join your ranks.



          • #55
            Originally posted by cend View Post
            It has been noted that whoever is working on AMD's GPGPU software has a very weird mindset...
            It's probably because they keep quitting and moving on to greener pastures?



            • #56
              Originally posted by NeoMorpheus View Post
              I see that the Ngreedia secret agents disguised as FOSS followers are out in full force, trashing AMD.

              Let's see the following:
              Yeah… switching to Nvidia is an empty threat. They're never gonna support this, and would quite likely attempt legal action, especially over using gaming GPUs (which is mentioned in their EULA). Intel has less to lose in the GPU space, but the A770 isn't exactly equivalent in compute or memory capacity?


              I’m not an expert… but allowing unsigned firmware seems like a questionable idea these days?
              Especially for a product TinyCorp is selling to enterprise customers?



              This feels like something that would take a while, if it's possible at all? I'd think it would need to be combed over to make sure it doesn't leak trade secrets somehow, then run by legal for patent and licensing concerns. And again, please do make the same bullshit "demands" of Ngreedia, I double dare you.

              I can understand that AMD doesn't like the optics of telling an AI company to pound sand, but these guys really seem to think the world revolves around them, and that buying gaming cards entitles them to a priority line to engineering to assist with their off-label use case.

              Also, AMD might be making a superhuman effort not to tell this manchild to get enterprise-level hardware (and support), since they know that everything they say or do will be used against them, just like all the Ngreedia rabid followers in this thread have already done.

              Honestly, I wish AMD gave FOSS the middle finger, like Ngreedia, just to see what else all of you would complain about then.
              Did you even read the article? They claim the driver is unstable/buggy - it crashes - and it sounds like they just want something reliable - maybe Nvidia serves that, I dunno - but the problem is they're unable to debug/diagnose/troubleshoot because of the closed firmware. Anyway, it just sounds like the FOSS part isn't really benefiting them enough to stop them from looking at the other 'brands.'



              • #57
                Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

                For general desktop and gaming use, your life is going to be a lot easier on AMD unless you stick with a limited number of distros that have really good NVIDIA support (e.g. Pop!_OS). If GPU compute is super important to you, NVIDIA / CUDA may be worth the pain. But again, if you want things to "just work" on any distro for basic desktop needs or gaming, you can save yourself a lot of pain by staying with AMD (or Intel).
                Is this going to be the same answer once explicit sync works with Nvidia GPUs (thus improving the Wayland experience)?

                AMD GPUs have a pro/con situation despite the so-called FOSS driver benefit and its 'interaction' with the Linux kernel ecosystem. There's also poor support for GPU compute software and other software stack problems. AMD seems too concerned with the gaming industry, i.e. their console market, and although they might give AI some attention, it's only because they don't want to be left behind (with Intel and Nvidia chasing the AI industry) - it's not because they want to improve *your* experience with that software area, i.e. ROCm/HIP-RT etc.



                • #58
                  Originally posted by pong View Post
                  Suggested addition: EVERY GPU maker (and, really, CPU / motherboard maker) for "consumer electronics" market devices are clowns inflicting painful bad jokes on us all. […]
                  I really do not understand tiny-corp; it is very well known why AMD can't open up the firmware fully, and that's the DRM/Hollywood part: YouTube / Netflix / Amazon Prime Video demand robust DRM, and that's always a closed-source black box. That's why the firmware is closed source.

                  It looks like tiny-corp is just trolling AMD...

                  It's a mystery why tiny-corp does not go with OpenPOWER / LibreSOC / solidsilicon.com / redsemiconductor.com.


                  Red Semiconductor - mission is to develop & deliver a new class of microprocessor chipset optimised for vector instructions.


                  For many years AMD did not need to compete with other GPU companies on price on Linux, because they were the only ones with a good open-source driver.

                  But now Nvidia, with their big-firmware approach, also gives people an open-source driver and an open-source kernel module.

                  So right now AMD has lost nearly all of that advantage; the only difference left is that the Nvidia firmware is a big 62 MB blob while the AMD firmware is much smaller.

                  Now there is a battle over the AI / deep-learning people, and all the closed-source AI people are already on Nvidia and CUDA...

                  Also, the HDMI Forum, which nuked HDMI 2.1 support in AMD's open-source driver, shows that there is no middle ground anymore.

                  AMD now needs to choose: either go full open source, or keep playing along with the HDMI Forum / MPEG LA people, go down, and become the next dinosaurs to go extinct...

                  If AMD does not do open hardware, I am sure the solidsilicon.com / redsemiconductor.com people will, and tiny-corp should join them instead of waiting for an open-source firmware.

                  For many years, for companies like Nvidia and AMD, DRM and copy protection were more important than the open-source compute people, but the wind has turned 180 degrees: now the big money is in AI, and of course in open source and open hardware.

                  One thing is for sure: the copyright and patent people will be crushed by AI, because modern AI can create any kind of art and any kind of tech, and copyright and patent law does not protect AI-created artwork or tech.

                  This means the copyright laws that outlaw breaking DRM and copy protection are more or less dead... who needs to break copyright if an AI can create the same thing in seconds?

                  This means big corporations like Sony or Disney are defeated, and their draconian copyright laws and anti-circumvention laws are more or less rendered outdated.
                  Phantom circuit Sequence Reducer Dyslexia

