ZLUDA Takes On Third Life: Open-Source Multi-GPU CUDA Implementation Focused On AI


  • Developer12
    replied
    Originally posted by A1B2C3 View Post

The company's lawyers are apparently intoxicated by the opportunity to sue over anything. Understand what I'm saying: if I buy a company's product, I can't thereby cause the company damage; I can only benefit it. I'd cause damage by *not* buying its product. Clearly, people who buy a company's products benefit the company. But for those products to sell, there has to be convenient, extensive software; with such software the product becomes more popular, more people buy it, and the company makes more money. Writing software costs money, though, and that's an expense. So if a community creates free software for an Nvidia product, the company gets a double benefit: first, it has to pay its own developers less; second, users know best what they need, and nobody will make the software more convenient than they will, so a product with such software becomes very popular, sells better, and brings in more money. Tell me, what could Nvidia claim damages for? For making Nvidia's product more popular? Am I making an Nvidia product more marketable? Are they all fools there?
    1) community makes open replacement for cuda

    2) nvidia loses out on opportunities to milk CUDA users for more money, especially enterprise customers

    3) nvidia sues the community members that made the open replacement for patent infringement

    it's that simple



  • Developer12
    replied
    Originally posted by A1B2C3 View Post

    We don't make money from it. Look, I bought a video card. I can throw it away, give it away, do anything with it. It's mine; I paid the price for it. Do you know how that price is formed? It includes everything: taxes, royalties, and much more. I paid for all of that. I can program for the video card I paid money for. No one can tell me I can't use CUDA when it's part of a video card I paid a decent price for. I didn't steal it. I'm not breaking anything. I'm using what I paid money for. Is that a crime? I think you've lost your mind.
    You have an astoundingly poor understanding of patent law. None of what you said is a defence, ESPECIALLY if your GPU isn't an Nvidia one.

    The closest you came was not personally making money, but Nvidia can still seek damages if you enable someone else to infringe.



  • Developer12
    replied
    Originally posted by A1B2C3 View Post
    We won't fight Nvidia, their policies, or their patents. Let their technology belong only to them. We just wanted to build on it; everything they have needs a lot of refinement, but they don't allow their product to be developed. That's a big stupidity. I think we'll soon lose interest in them. I understand it doesn't matter to them right now, because they're drawing money from the AI sector, but it won't always be like this. Then what? Will they try to come back to us? No, we won't need them anymore. We're tired of forgiving their deceptions and their stupidity.
    Too bad; building on what they have REQUIRES you to have a licence. Don't have a licence? It doesn't matter how good your intentions are: you get massive fines.



  • Developer12
    replied
    Originally posted by wertigon View Post

    Should expire soon though, CUDA came out in 2006 and patents only last for 20 years. Although, to be fair that's 20 years of Nvidia holding the tech hostage.
    Nvidia files new patents every year. You get to contend with the patents from every year from 2007 through 2024.

    Oh, and for your implementation of CUDA to actually be *useful*, you need to implement features from no more than 3-5 years ago. Even that is a stretch, since the AI/ML workloads this explicitly targets routinely require the latest version of CUDA to pick up support for whatever new neural-network layer the research papers just cooked up.



  • Developer12
    replied
    Originally posted by A1B2C3 View Post

    Guys, look at this example. You go to the store and buy a salad, then sit down at a cozy table, ready for dinner. You see green peas in the salad and think, OK, I'll try them now, but it suddenly turns out you can't eat those peas, because they're patented and are only in the salad to give you an appetite. Well, OK, there's very tasty corn in the salad, but it turns out that's patented too, and the patents don't allow you to use it as food. OK, there's a delicious pickled onion left; I'll eat that. And so, having bought a salad, you ate only the onion, and it turns out the onion is only leased: you'll have to return it. The conditions were written in small print next to the list of ingredients. All these devices being sold to us now remind me of that salad. The question is: what am I paying money for? The opportunity to smell the salad? The opportunity to take a photo with it?
    That's a lot of words for someone who clearly would rather be on recipes.com.



  • wertigon
    replied
    Originally posted by Developer12 View Post

    Patents. Lots and lots of CUDA patents.
    Should expire soon though, CUDA came out in 2006 and patents only last for 20 years. Although, to be fair that's 20 years of Nvidia holding the tech hostage.



  • Developer12
    replied
    Originally posted by schmidtbag View Post
    I don't really see how CUDA's license agreement is relevant regardless of where he's from. If you haven't installed CUDA and/or don't agree to the license, and you aren't using any of Nvidia's CUDA code, then what legal power do they even have? Unlike a Nintendo emulator, ZLUDA's primary use obviously isn't piracy, and since it isn't [yet] targeting Nvidia GPUs, at that point it has practically nothing to do with Nvidia at all.

    Think of it like the Mono project vs .NET. MS would have an incentive to shut that down, but they didn't, probably because they had no power to do so.
    Patents. Lots and lots of CUDA patents.



  • DiamondAngle
    replied
    Regardless of whether ZLUDA is legal (it is), ZLUDA is a bad idea.

    GPU compute code is not performance-portable; CUDA isn't even really performance-portable across Nvidia generations.
    By restricting yourself to executing PTX compiled from kernel code written with Nvidia GPUs in mind, you hand Nvidia a performance advantage you cannot recover from.
    HIP already has this problem to a large degree: since it is source-compatible with CUDA via hipify, many projects "support" HIP by simply translating CUDA code to HIP. This results in extremely performance-deficient code on AMD GPUs, especially on CDNA and GCN, which differ from Nvidia GPUs more than RDNA does (the whole point of RDNA being that it is more like an Nvidia GPU, so this kind of code runs better).
    ZLUDA makes this much, much worse.
    It would be a huge strategic loss for AMD if ZLUDA became widespread and accepted as the "default" way to do compute on AMD GPUs.
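
    To make the performance-portability point concrete, here is a minimal, hypothetical CUDA kernel of the kind this post is describing (not taken from ZLUDA or any project mentioned in the thread). It bakes Nvidia's 32-lane warp width into its reduction logic; translated 1:1 to an AMD GPU with 64-lane wavefronts, it can still run correctly, but the hard-coded width leaves half of each wavefront's shuffle range unused, so performance is lost even though the result is right.

    ```cuda
    // Hypothetical sum-reduction kernel, written the way much real-world
    // CUDA is: the constants 16, 31, and the full 32-bit mask all assume
    // Nvidia's 32-lane warps. On 64-wide AMD wavefronts the same source
    // is still correct, but tuned for the wrong execution width.
    __global__ void warp_sum(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        float v = (i < n) ? in[i] : 0.0f;

        // Butterfly reduction across one 32-lane warp.
        for (int offset = 16; offset > 0; offset >>= 1)
            v += __shfl_down_sync(0xffffffffu, v, offset);

        if ((threadIdx.x & 31) == 0)   // lane 0 of each assumed 32-wide warp
            atomicAdd(out, v);
    }
    ```

    Idiomatic portable code would branch on `warpSize` (or a compile-time width) instead of literal 32s, but PTX translated after the fact, as ZLUDA does, inherits whatever width the original author assumed.
    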



  • Panix
    replied
    I give it a year, two tops, before the project is dumped. AI/ML is Nvidia territory, and reading the original blog post, they can't utilize RT. They're going back to pre-AMD code? What is the point of all this? It's funny someone brought up Tiny Corp: that's the company that was trying to use AMD GPUs but getting no assistance or support from AMD. So even when a company was buying a crapload of AMD GPUs and using them for its business, AMD provided little to no support, and the software was full of bugs and problems.

    AMD's hardware can't use the software they have, and/or it's not supported or can't be fully utilized. AMD GPUs have only a limited use case and too many limits and restrictions to be worth using or buying. I don't care if people are gamers; go buy AMD GPUs to your heart's content, then. ML/AI/productivity is where the money is, whether you like it or not, and that demand is the only reason the developer is continuing the project. However, he'll need a lot of luck, since it's Nvidia territory: people working in that field buy Nvidia.



  • ssokolow
    replied
    Originally posted by kurkosdr View Post

    Software is copyrightable in the EU, which means you can't distribute copies of algorithms or source code verbatim without permission; what you can't do in some EU countries is enforce software patents (patents are a different thing from copyright). So it's possible to reverse-engineer software in some EU countries without fear of software-patent lawsuits, but you can't copy software or its source code verbatim.

    For example, the ZLUDA guy can't copy verbatim any parts of the source code of Nvidia's proprietary drivers, even if he somehow stumbles upon such source code; ZLUDA is exclusively reverse-engineered code.



    The fact that what you are describing is the hypothetical process of patenting a character archetype, while what you are talking about is copyright, hints to me that you have misread something, but I won't go over the basics of copyright law and patent law here (for example, how a copyrighted work differs from a patent).
    The other responders have said what I'd have said. An algorithm is an idea, like a character archetype. Copyright protects implementations of ideas, like novels that use those character archetypes.

    My best guess is that English isn't your first language and you were unaware that, among English programmer vernacular, "not a specific implementation" is literally part of the understood definition of "algorithm".
    Last edited by ssokolow; 05 October 2024, 04:45 AM.

