Nvidia bans using translation layers for CUDA software to run on other chips

marklar423
28 replies
6d

I think they're trying to do the trick where they don't ban the emulation per se, but they make it a violation of the license to disassemble the code to understand how it works. Then they go after emulating projects for copyright infringement, since they must have violated copyright to get a compatible version running.

It's a dirty trick IMO, but it has seen some mixed success.

samus
8 replies
6d

Clean-room reverse engineering is specifically not forbidden, and has been successfully done in the past to develop the Nouveau driver. Disassembling Nvidia's binaries is plausibly forbidden, but it would be very surprising if they could extend that ban to binaries produced by their compiler. However, the trademark could be used to attack statements about the emulation layer. I see parallels to Google vs. Oracle.

salawat
7 replies
5d23h

Note Nouveau is dead in the water due to DMCA 1201, plus Nvidia's use of FALCON security units to cryptographically verify firmware signatures.

samus
5 replies
5d21h

Recently, development has picked up again though. They basically surrendered and now use the proprietary blob. The Mesa driver supports Vulkan 1.3 and Zink will be used for OpenGL going forward. Now on to more performance optimization and maybe bringing up some of the other proprietary IP blocks...

salawat
4 replies
5d21h

Realistically, all that means is that Nouveau is giving in to the death of first-sale rights, which is a fight I'm not willing to give up on yet, even if every deep-pocketed interest and its sister seems to be swinging that way.

anticensor
2 replies
5d7h

First sale is not a thing in my country. Resale defaults to forbidden, and when allowed by the original creator, it is simply subject to the same royalty and licence protections as the initial sale of the copyrighted work. Clean-room reverse engineering is, however, allowed.

gpderetta
1 replies
5d2h

Interesting, which country?

edit: Never mind.

anticensor
0 replies
4d23h

My home country, Turkey. We also have a vignette system for books (books are an exception to the resale royalties, as long as resold copies bear the vignette; never-sold works and school textbooks are exempt from the vignette system) and compulsory registration for video games.

samus
0 replies
5d20h

I fear I don't completely get your point. Using the blob is simply accepting the fact that the cards are hamstrung without it. They won that round. Let's hope that AMD and Intel eventually get it together and come up with products that can slowly erode Nvidia's moat.

kmeisthax
0 replies
4d22h

To those wondering what salawat is talking about: a while back Nvidia started signing the firmware that loads onto the auxiliary processors on their GPUs. Nouveau asked Nvidia for a minimal signed blob they could legally redistribute and load but was basically ghosted by Nvidia. Without that blob they can only drive the GPU at minimum power, which is often worse than Intel integrated graphics.

The bit about DMCA 1201 is mostly hypothetical. i.e. Nvidia's code signing looks like a DRM scheme if you squint. If Nvidia really wanted to, they could argue that, say, cracking the code signing is illegal no matter what you do with it. My gut feeling is that the courts probably wouldn't be too keen on Nvidia suing over this in the case of, say, someone writing their own power management firmware and using an exploit to load it onto a GPU. DMCA 1201 is supposed to prevent you from copying other people's work, not writing your own.

All of this is a moot point because Nvidia more recently released an "open source driver" that loads a single unified firmware blob. This has allowed building a new driver called NVK, which aside from having to load said blob is FOSS, and actually usable for Nvidia GPUs.

j1elo
6 replies
5d23h

I wonder why it wouldn't be a trivial workaround to have a separate, anonymous project with the sole purpose of disassembling the code (and thus violating the license) while acting as a knowledge source for emulators.

This way, emulators wouldn't be violating any license, just using technical details learned from third-party sources. The anonymous project itself would be attacked by Nvidia, and if that succeeded, replicas of the repo would probably pop up quickly and easily. But good luck having a Chinese Gitee repo closed for American copyright infringement!

mech422
4 replies
5d15h

It appears from the comments here that just disassembling the Nvidia code would NOT be a violation? It seems that only disassembling it for use in a compatibility layer would be?

I believe this two-stage/two-project system was how DeCSS for DVDs worked? Someone cracked it and posted the code, which could then be picked up and used by others as it was 'public knowledge' or some such?

AstralStorm
3 replies
5d5h

It is already prohibited on copyright grounds. They added the extra clause to "close" some of the reverse-engineering provisions not otherwise covered.

mech422
0 replies
4d19h

I thought that in the EU, reversing is (was?) expressly allowed for 'interoperability' - so if someone in the EU reversed something and published it, others could then use it as it was 'public knowledge'?

loup-vaillant
0 replies
5d1h

Copyright grounds? Copying stuff can violate copyright, but documenting stuff? No no no, you'd need a specific law to prevent the disclosure of information. Maybe misappropriation of trade secrets?

justinclift
0 replies
4d18h

It is already prohibited on copyright grounds.

What makes you think that?

imtringued
0 replies
5d10h

Nothing about this prevents you from developing a CUDA conformance test suite that can be used to verify that any given CUDA implementation is functioning properly. Nvidia wouldn't even be allowed to take it down.
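
As a rough sketch of what such a conformance check could look like (a minimal example only, exercising a couple of runtime API calls; it assumes whatever library provides the CUDA runtime interface - Nvidia's cudart or a reimplementation - is linked in at build time):

    // Minimal CUDA runtime API round-trip check: allocate device memory,
    // copy a buffer there and back, and verify the bytes are unchanged.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    static bool check(cudaError_t err, const char* what) {
        if (err != cudaSuccess) {
            std::printf("FAIL: %s: %s\n", what, cudaGetErrorString(err));
            return false;
        }
        return true;
    }

    int main() {
        int devices = 0;
        if (!check(cudaGetDeviceCount(&devices), "cudaGetDeviceCount")) return 1;
        std::printf("devices reported: %d\n", devices);

        const size_t n = 1 << 20;
        std::vector<unsigned char> host(n), back(n, 0);
        for (size_t i = 0; i < n; ++i) host[i] = static_cast<unsigned char>(i);

        void* dev = nullptr;
        if (!check(cudaMalloc(&dev, n), "cudaMalloc")) return 1;
        if (!check(cudaMemcpy(dev, host.data(), n, cudaMemcpyHostToDevice), "cudaMemcpy H2D")) return 1;
        if (!check(cudaMemcpy(back.data(), dev, n, cudaMemcpyDeviceToHost), "cudaMemcpy D2H")) return 1;
        if (!check(cudaFree(dev), "cudaFree")) return 1;

        std::printf(host == back ? "PASS: round-trip intact\n" : "FAIL: data corrupted\n");
        return host == back ? 0 : 1;
    }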

ActionHank
4 replies
6d

So they just put up their hand and said they also want to give the EU money?

brookst
3 replies
5d23h

At this point with the level of demand they have, it might be easier and cheaper to just not do business in the EU.

jijijijij
2 replies
5d21h

Funny how nobody does that, eh?! Could be because the EU is the second biggest consumer market on this planet. But of course, they could totally focus on China, now third and rising, instead. Maybe they can even ask the CCP to help them with their IP claims and stuff.

brookst
1 replies
3d3h

Or it could be because the EU has not yet reached the tipping point where it’s not worth it, or it could be that they have but companies are adapting by reducing investment and it will take years for patterns to emerge.

Not sure where the weird China stuff came from. Are you saying that as long as the EU is even slightly better than China we should celebrate it?

jijijijij
0 replies
2d3h

Are you saying that as long as the EU is even slightly better than China we should celebrate it?

No. I am saying that as long as the American market isn't enough for the growth targets these giga-corporations have set for themselves, they pretty much have no choice but to go with the European market, since it's the second largest consumer market. And since the argument is about Nvidia being silly about IP stuff, the Chinese market - the third largest and therefore the next best alternative to the European market - isn't exactly known to give a fuck about IP, at all, so good luck with that.

Also:

EU has not yet reached...

EU is even slightly better than China...

Jesus Christ. Throw your unread Atlas Shrugged copy into the cringe bin, and fucking touch some grass, for real.

colechristensen
2 replies
6d

The alternative is NVIDIA charging for what they're spending money on and making CUDA as big a cost as the hardware. I don't know where I stand on it, but hardware-supported software being free is a good thing; your competitors being able to use it for free is a bad thing.

I wonder, legally, if it would work to instead have licensing that charged a reasonable fee to run CUDA on non-NVIDIA hardware. Just don't enforce it for developers, but make it big enough and enforced enough that corporations would hesitate or pay... or just make it large enough that running on non-NVIDIA cards doesn't make sense.

shadowgovt
1 replies
5d23h

hardware-supported-software being free is a good thing, your competitors being able to use it for free is a bad thing.

... bad thing for whom? Especially having been in the unenviable position of needing to debug implementation errors in shader compilation, I'd argue that for the end user, it's much better if the software is not only usable for free, but open-source. To the extent that the ownership the hardware vendors place upon their software prevents this, I'd say that's a bad thing.

If NVIDIA can't compete on hardware they're just bailing water from a sinking ship. All their competition has to do is provide and adopt a just-as-good open software standard on cheaper hardware and people will flock to it, NVIDIA will be forced to provide compatibility or become an also-ran, and they'll lose anyway.

colechristensen
0 replies
5d20h

"All their competition has to do" is quite the assumption, when in fact CUDA and drivers is a very significant portion of the work NVIDIA does. NVIDIA is winning because of their software work, this is where they've added the most value to their product. NVIDIA can compete on hardware, but the reason AI/ML is where it is today rests largely on the investment in software NVIDIA has done in the last 20 years enabling these kinds of things. Everyone could be using the open source alternatives, but they're not because the OSS isn't as good because... nobody paid for it.

I'm all for open source, you should be able to run whatever you want on your NVIDIA card (and you can, no walled gardens there), but it doesn't go as far as insisting that if someone wrote software it should be free.

wakawaka28
0 replies
5d22h

Isn't CUDA documentation thorough enough to reimplement it without reverse-engineering? (Serious question.)

lunfard000
0 replies
6d

I don't think China cares too much about what NVIDIA thinks, considering the ban already in place.

hfgjbcgjbvg
0 replies
4d20h

So dirty. Makes the whole industry look bad.

dontupvoteme
0 replies
5d23h

So what happens when someone in, e.g., China or Russia, whom they can't practically prosecute, releases something? Can everyone else then use it?

paulryanrogers
11 replies
6d

APIs aren't subject to copyright. So unless CUDA is more than an API they're just going to make some lawyers rich and waste everyone's time.

laweijfmvo
3 replies
6d

Making lawyers rich to waste time is how super rich companies stifle competition...

samus
2 replies
6d

That works against small fish, but it would be a different story if AMD or Intel took part.

laweijfmvo
1 replies
5d20h

Then it wastes double the time, and makes double the lawyers rich :D

samus
0 replies
5d12h

It at least increases the amount of precedent regarding the subject. That would eventually decrease the time such trials take.

jsheard
3 replies
6d

The API may not be copyrightable, but the widely-used official libraries (cuDNN, cuBLAS, OptiX, etc.) are, so in practice I think they can at least say that anything which uses those libraries can't be run on competitors' hardware.

zozbot234
2 replies
6d

AIUI, these libraries are generally implemented as shared objects that the resulting binary has to link to. So it should be quite possible to reimplement the interface that they expose; moreover, something like ROCm has to do that anyway in order to compile HIP source code from scratch. Looks like that's what https://github.com/ROCm/hipDNN/ does.
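
As a sketch of what "reimplementing the interface they expose" means in practice: a drop-in shared object exports the same C symbols as the original library, so existing binaries resolve their calls into the replacement. The entry point below follows the cuBLAS v2 GEMM symbol name and argument order, but the type definitions are simplified stand-ins and the body just runs a naive host-side loop for illustration; a real replacement has to match Nvidia's ABI exactly, cover the full set of entry points, and dispatch to its own device kernels.

    // Sketch of an ABI-shaped stand-in for one cuBLAS entry point.
    // Types are simplified placeholders; a real shim must mirror cublas_v2.h,
    // and the pointers would normally be device memory handled by real kernels.
    #include <cstddef>

    extern "C" {

    typedef int cublasStatus_t;                  // 0 stands in for CUBLAS_STATUS_SUCCESS
    typedef struct cublasContext* cublasHandle_t;
    typedef int cublasOperation_t;               // 0 stands in for CUBLAS_OP_N

    cublasStatus_t cublasSgemm_v2(cublasHandle_t /*handle*/,
                                  cublasOperation_t transa, cublasOperation_t transb,
                                  int m, int n, int k,
                                  const float* alpha,
                                  const float* A, int lda,
                                  const float* B, int ldb,
                                  const float* beta,
                                  float* C, int ldc) {
        if (transa != 0 || transb != 0) return 1;  // non-zero signals failure in this sketch
        // Naive column-major GEMM: C = alpha*A*B + beta*C, computed on the host
        // purely for illustration.
        for (int j = 0; j < n; ++j) {
            for (int i = 0; i < m; ++i) {
                float acc = 0.0f;
                for (int l = 0; l < k; ++l)
                    acc += A[i + (std::size_t)l * lda] * B[l + (std::size_t)j * ldb];
                C[i + (std::size_t)j * ldc] = (*alpha) * acc + (*beta) * C[i + (std::size_t)j * ldc];
            }
        }
        return 0;
    }

    }  // extern "C"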

jsheard
1 replies
6d

Possible, yes, but reimplementing the CUDA runtime and the official libraries and getting them up to par with the originals is a much bigger task than just doing the former and running Nvidia's libraries on top of it. AMD did try to do a complete reimplementation of cuDNN with hipDNN, but "last commit 5 years ago" doesn't inspire much confidence in it being competitive with cuDNN proper.

imtringued
0 replies
5d10h

They don't have any tensor cores so there is not much reason for them to work on this.

bitwize
1 replies
6d

Per the Federal Circuit, APIs are subject to copyright. The Supreme Court just ruled that Google's use of APIs was fair use.

ddtaylor
0 replies
5d23h

You mean the Oracle vs. Google case, right?

colechristensen
0 replies
6d

CUDA (and plenty of things lumped in with it that one would think of as CUDA) is not simply an API. It sits somewhere between a programming language and a low-level operating system.

plussed_reader
2 replies
6d

An arms race of the update whack-a-mole as the encrypted blob grows and encompasses more...

lunfard000
1 replies
6d

Wouldn't that require breaking the ABI? Many enterprises won't be happy if they do so. Also, there is no way to prevent using old CUDA compilers.

reactordev
0 replies
6d

Not a way to prevent it, but they can make damn sure it's difficult to find.

Example: Where is Newtonsoft's Physics Library v1.x? It was awesome, easy, fast, and worked with my engine (or rather, my engine worked with it?). Gone. Nowhere to be found, not even on the Internet Archive's Wayback Machine.

It's rather trivial for a juggernaut like NVidia to wipe the earth of older CUDA compilers by tweaking a driver and making CUDA compilation cloud-based.

mordae
2 replies
5d23h

Reversing in order to achieve interop is explicitly legal in the EU. Well, unless you have to break DRM in the process.

rerdavies
0 replies
5d19h

Does that render EULA clauses that forbid reverse engineering invalid?

mech422
0 replies
5d15h

I believe this two-stage/two-project system was how DeCSS for DVDs worked? Someone cracked it and posted the code (for interop, in the EU), which could then be picked up and used by other projects as it was 'public knowledge' or some such and didn't expose them to legal risk?

jsheard
0 replies
6d

I think there are two issues at play: re-implementing the CUDA APIs, and using that re-implementation to run Nvidia's own libraries like cuDNN on non-Nvidia hardware. Precedent around emulation and the Google vs. Oracle case probably protects the former, but Nvidia is probably entitled to say how you're allowed to use their own software in the latter.

bitwize
0 replies
6d

given the legal history of emulation.

Funny you should say this just after Nintendo shut down Yuzu and collected hefty damages in the settlement.

jstanley
24 replies
6d1h

People keep using proprietary software and keep getting burned by it. When will we learn?

moffkalast
8 replies
6d

Hopefully Vulkan eventually gets a headless compute kernel version.

jsheard
3 replies
6d

Vulkan has always supported headless mode, everything related to graphics and presentation is optional.

It has a way to go before its compute model is as powerful and easy to use as CUDA is though.

moffkalast
1 replies
5d23h

Wait, really? How does one set that up without just getting llvmpipe? I've turned over half the internet and I could never get it to work without installing some kind of window system.

jsheard
0 replies
5d23h

I'm just speaking from the perspective of the spec, which defines surfaces and swapchains as extensions that are never guaranteed to be available, so a compliant driver may work in the absence of any kind of GUI by just reporting those extensions as not supported. Whether your Vulkan driver actually supports running in a headless context is another question though, and of course the Vulkan software you're trying to run has to gracefully handle the case where surface/swapchain aren't available.
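
To make the headless path concrete, a minimal sketch: create an instance with no surface or swapchain extensions at all and look for a compute-capable queue family. Whether a given driver actually enumerates your GPU without a display server is, as noted above, implementation-dependent.

    // Headless Vulkan: no VK_KHR_surface, no swapchain; just enumerate
    // physical devices and find a queue family with compute support.
    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_3;

        VkInstanceCreateInfo info{};
        info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pApplicationInfo = &app;   // note: zero extensions requested

        VkInstance instance = VK_NULL_HANDLE;
        if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
            std::printf("vkCreateInstance failed\n");
            return 1;
        }

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::vector<VkPhysicalDevice> gpus(count);
        vkEnumeratePhysicalDevices(instance, &count, gpus.data());

        for (VkPhysicalDevice gpu : gpus) {
            VkPhysicalDeviceProperties props{};
            vkGetPhysicalDeviceProperties(gpu, &props);

            uint32_t qcount = 0;
            vkGetPhysicalDeviceQueueFamilyProperties(gpu, &qcount, nullptr);
            std::vector<VkQueueFamilyProperties> families(qcount);
            vkGetPhysicalDeviceQueueFamilyProperties(gpu, &qcount, families.data());

            for (uint32_t i = 0; i < qcount; ++i) {
                if (families[i].queueFlags & VK_QUEUE_COMPUTE_BIT) {
                    std::printf("%s: compute queue family %u\n", props.deviceName, i);
                    break;  // creating a device on this family needs no WSI at all
                }
            }
        }

        vkDestroyInstance(instance, nullptr);
        return 0;
    }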

zozbot234
0 replies
6d

There are various efforts to compile OpenCL and SYCL to Vulkan; the Mesa folks are working on this as part of the RustiCL project. But full capability will require some extensions beyond what plain Vulkan provides.

solardev
2 replies
6d

Could WebGPU fill that need?

kllrnohj
1 replies
6d

WebGPU can't fill any need that GL, DirectX, or Vulkan can't. WebGPU isn't a native GPU API, but an abstraction over existing ones. As such its feature set is at best comparable, but in practice will always be worse than the native ones (always worse due to needing to be implementable by multiple platforms - at a minimum both Vulkan and Metal).

solardev
0 replies
5d22h

I see. Thanks for the explanation!

mandarax8
0 replies
6d

You can already do compute without creating a swapchain or presenting, right?

superkuh
4 replies
6d

What's the alternative? AMD only supports their (consumer) GPUs for ~4 years via ROCm in some instances. If you buy the card at any time except release day you only get a couple years of compute support.

To answer my own question: OpenCL, and it's just as bad as it was in 2014. Or, slowly, people are starting to do compute with Vulkan. This might be the best way forward even if it's an awkward choice.

whatshisface
3 replies
6d

Investment is not rewarded until the platform is used, which won't happen until investments are made... it's a multi-billion-dollar chicken-and-egg problem that could only have been averted a decade ago by consumers refusing to be locked into a proprietary standard.

dotnet00
2 replies
6d

A decade ago, CUDA was still offering a more usable platform than the competition. The problem could only have been averted if AMD had properly committed to investing in their platform at least a decade ago, just as NVIDIA has been doing for almost two decades now.

whatshisface
1 replies
5d23h

A decade ago, CUDA was better, but I don't think the industry had crossed the threshold of being stuck on it. AMD still had the option of investing to catch up, and there would have been the possibility of an industry consortium. Now, with the whole ML stack having been optimized over ten years of rapid development for a single proprietary standard, and especially with the deliberate obstructionism exemplified by the linked article, it is much less likely that a consortium or competitor could catch up.

TeMPOraL
0 replies
5d20h

A decade ago we still called it GPGPU and it was somewhat niche stuff used for specialized applications. There wasn't enough interest in the field to warrant that heavy investment. Fast-forward ten years, and we're in a rapid inflation phase (the cosmological kind, not the financial kind) - whatever choices were there at the start got baked in, and are sinks for all the free money that's coming in and demanding fast results. AMD could've solved this 10 years ago if they had invested in it, but they didn't have future knowledge about ML on GPUs being the next big thing.

andersa
2 replies
6d

There are no viable alternatives if you require high performance or even just the software to actually work properly.

jvanderbot
0 replies
6d

Yes - this is a competition problem. If there were viable alternatives, then interoperability wouldn't be something we really talk about. Instead it'd be abstraction layers to run on the chip-specific runtimes. And then it's just CUDA trying to "beat" the alternative.

bee_rider
0 replies
6d

Trying to best Nvidia’s language on Nvidia’s hardware seems pretty hopeless.

I wonder if that's the wrong abstraction layer? There exists stuff like cuBLAS, which is of course using CUDA under the hood, but it is also something like a BLAS. Maybe as the AI/ML world keeps developing, people will grow more stubborn about sticking to frameworks. We probably just need a couple rounds of people getting burned by vendor lock-in, I guess.
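
To illustrate sticking to the BLAS-shaped layer rather than CUDA itself, a minimal sketch: code written against the generic CBLAS interface can be linked against OpenBLAS, MKL, or any other conforming implementation without source changes, so the backend becomes a link-time choice (assuming a cblas.h is available on the system).

    // Application code written against the vendor-neutral CBLAS interface.
    // Which implementation actually runs is decided at link time; nothing
    // here is CUDA- or vendor-specific.
    #include <cblas.h>
    #include <cstdio>

    int main() {
        // Row-major 2x2 example: C = 1.0 * A * B + 0.0 * C
        const float A[4] = {1.0f, 2.0f,
                            3.0f, 4.0f};
        const float B[4] = {5.0f, 6.0f,
                            7.0f, 8.0f};
        float C[4] = {0.0f, 0.0f, 0.0f, 0.0f};

        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    /*M=*/2, /*N=*/2, /*K=*/2,
                    /*alpha=*/1.0f, A, /*lda=*/2, B, /*ldb=*/2,
                    /*beta=*/0.0f, C, /*ldc=*/2);

        std::printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  // expect 19 22 / 43 50
        return 0;
    }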

Karellen
2 replies
6d

If people were going to learn not to use proprietary software/hardware like the stuff that the wankers at Nvidia churn out, they'd have learned it over a decade ago when Torvalds gave them a big "fuck you" - which had been a long time coming even back then.

People who keep picking Nvidia don't want to learn not to use proprietary tools.

whatshisface
1 replies
6d

That's a bit like saying "the slaves in the roman silver mines didn't want to rebel." Maybe a decade ago it was because they didn't care about abstract ideals like software freedom, but now they are well and truly stuck, and can serve only as a cautionary tale to other industries.

bee_rider
0 replies
6d

Nvidia offers an attractive product with strings attached, we very well could call it anti-competitive. But it is not very similar to slavery, nobody is getting beaten or chained up, the “victims” are willing participants.

Maybe some comparison to company stores could be warranted…

renewiltord
1 replies
5d23h

Who's getting burned? It's the best stack to build on.

The only ones getting burned are the third-party guys who want adoption.

wmf
0 replies
5d22h

Nvidia customers are paying $30K for GPUs that should be under $10K; that's how they're getting burned.

sevagh
0 replies
6d

Who's burned by what? NVIDIA's customers are happily training their models on superior hardware, so much so that they're begging to be able to order and spend more.

hackerlight
0 replies
5d15h

Collective action problem. So, never, because structurally there are game-theory reasons pushing individual customers to behave this way. Therefore you can't blame the customer; you have to blame the regulator and the system.

dotnet00
18 replies
6d

This just seems to be a poorly researched knee-jerk article written based on a single off-hand tweet from 2 weeks ago, citing a clause that has been around for years, and thus obviously is not in response to something talked about 3 weeks ago.

As an example, the license here has the exact phrase, on a file last changed 2 years ago: https://github.com/NVIDIA/spark-rapids-container/blob/dev/NO...

The EULA here, listed as last updated in 2021, also has this clause: https://docs.nvidia.com/cuda/eula/index.html

tzs
7 replies
5d22h

The article has been updated to clarify:

Nvidia has banned running CUDA-based software on other hardware platforms using translation layers in its licensing terms listed online, but those warnings previously weren't included in the documentation placed on a host system during the installation process. This language has now been added to newer versions of CUDA.

dotnet00
6 replies
5d22h

That still doesn't seem accurate. I have CUDA 11.6 on my machine from Dec 2021, just checked the EULA included there, and it did have the mentioned clause there too.

Aloisius
5 replies
5d22h

It was in fact added to the EULA.txt file in 11.6. The header says it was updated October 8, 2021.

    %  diff EULA.txt-11.5 EULA.txt-11.6 |grep -2 trans
    >   8. You may not reverse engineer, decompile or disassemble
    >     any portion of the output generated using SDK elements for
    >     the purpose of translating such output artifacts to target
    >     a non-NVIDIA platform.

That said, I believe 11.6 was released in January, 2022.

dotnet00
2 replies
5d21h

That said, I believe 11.6 was released in January, 2022.

Ah that's right, the Dec 2021 timestamp was for 'last modified', but 'created' is Jan 2022.

rootkea
1 replies
5d16h

How can 'created' timestamp > 'last modified' timestamp?

dotnet00
0 replies
5d14h

Last modified was probably carried over from what the file had when it was packaged into the installer, with created being the time for when I actually installed the toolkit.

aneutron
0 replies
5d19h

This clause would absolutely not fly in a French court, and IANAL but IIRC US case law does allow for reverse engineering for interoperability.

AstralStorm
0 replies
5d5h

Emulation via an API wrapper is neither of these though.

mvdtnz
5 replies
6d

I think you're focused too much on when this clause was added. Most of us are not so concerned with the chronology.

dotnet00
1 replies
6d

I'm focused on what the article, and the full headline claims.

"Nvidia bans using translation layers for CUDA software to run on other chips — new restriction apparently targets some Chinese GPU makers and ZLUDA"

The time the clause was added also matters, because if it's ~3 years old and since then various translation layers backed by other large competitors have been released without any open lawsuit, it's a lot less concerning than if the clause were added right now, as, at least to me, it suggests that they're not talking about simply copying the API based on documentation and known quirks, but reverse engineering how the internals function for the purposes of a translation layer (eg same thing as MS banning reverse engineering of Windows for the purpose of adding the functionality to Wine, but tolerating clean room reimplementations).

AstralStorm
0 replies
5d5h

I'm pretty sure that would be a dead clause since it is more restrictive than reverse engineering for compatibility, and every GPU and CPU is internally a translation layer...

So, it's essentially a clause flying in the face of antitrust law. NVidia wants to try it in court and pay more fines? Let's go. :)

HarHarVeryFunny
1 replies
5d23h

Well, never mind the date, the clause just doesn't say what the article author claims it to say, not even close ...

It doesn't say you can't run CUDA code via a CUDA compatibility layer - it says you can't reverse engineer CUDA code in order to translate it to something else (i.e. NOT CUDA) to run on non-NVIDIA hardware.

It's almost the exact opposite of what the article claims - the only way to legally run CUDA software on non-NVIDIA hardware would in fact be to leave it as-is and run it via a compatibility layer!!

paulmd
0 replies
5d17h

It's a constant problem with Nvidia articles: sites love to make up unapologetic bullshit or cite ancient stuff as if it's new.

A few months ago all the internet sites were abuzz with Jensen's "recent" comment that "nvidia is fully focused on AI now", and the article that it's from makes it clear that it happened in the "mid 2010s".

Gamers Nexus actually had the citation, with "by the mid 2010s" literally on screen, yet they still went with "recently" and refused to issue a correction or retraction (I asked).

https://youtu.be/VSSb-t76EpU?t=147

Similarly, the "nvidia sells directly to mining farms!" articles from a few years ago were citing an article that was an estimate of mining sales based on the hash rate… it never accused anyone of selling anything to anyone, but tech media didn't bother reading the source, and once the first article got rolling everyone just cited that instead.

https://twitter.com/dylan522p/status/1332502890104188929

Just like with Apple "parts/defect stories", news sites know these are potent money-makers that drive a lot of clicks, and frankly I think there is a lot of open fanboyism and hostility among tech media today. GN didn't "accidentally" refuse to do a correction; they want to push that narrative. Just like the "long-term value is mathematically impossible" stuff, etc. - we have been in a world where reviewers are openly feuding with one of the vendors for over half a decade over the direction of product development and the death of Moore's law in terms of pricing and performance increases.

Winning rhetorical points in that debate is more important to tech media nowadays than little things like journalistic integrity, or reading your sources.

https://www.youtube.com/watch?v=tu7pxJXBBn8&t=273s

germandiago
0 replies
6d

Yet the article, as I understand it, seems to imply they did it recently, which is misleading.

byteknight
3 replies
6d

Agreed that that may be deceptive, but the underlying issue remains. It is prohibited.

loup-vaillant
2 replies
5d2h

Is it? I mean, is the clause enforceable to begin with?

thethimble
1 replies
5d1h

Feels similar to the Microsoft vs. Sun lawsuit over Java, which Sun won and which eventually led to Microsoft building .NET.

ImprobableTruth
0 replies
4d23h

MS vs Sun was about MS doing their usual EEE thing and adding parts to MS Java that weren't part of the spec (which is why it was also a trademark lawsuit).

Google vs Oracle is about whether APIs are copyrightable and Google won.

justinclift
8 replies
6d1h

On the face of it, this sounds like abusive behaviour by a monopolist.

Wonder if it'll be seen that way legally though?

nottorp
7 replies
6d

For the US, didn't Oracle win the Java API lawsuit?

jjice
5 replies
6d

Looks like they lost [0], with Google winning 6-2, but maybe Oracle is trying to appeal it? I'm not familiar with the remanding process so I can't comment on that part of this quote.

In April 2021, the Supreme Court ruled in a 6–2 decision that Google's use of the Java APIs fell within the four factors of fair use, bypassing the question on the copyrightability of the APIs. The decision reversed the Federal Circuit ruling and remanded the case for further review.

[0] https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America%2....

nottorp
4 replies
6d

Oh interesting. The last piece of news I read was probably the decision before that, that ruled in Oracle's favour. Blame Covid.

So is it final, or they can still drag it on?

From wikipedia:

"Justice Stephen Breyer wrote the majority opinion. Breyer's opinion began with the assumption that the APIs may be copyrightable, and thus proceeded with a review of the four factors that contributed to fair use:"

That doesn't look so good.

vidarh
2 replies
5d23h

It's fairly typical that the court wanted to make the narrowest decision possible. By concluding that even under the assumption that they may be copyrightable, Google didn't violate copyright, they saved themselves the hassle of deciding on the copyright issue.

nottorp
1 replies
5d23h

IANAL. So basically the assumption language means there is no ruling either way yet on whether APIs are copyrightable...

Could have been worse, I guess.

vidarh
0 replies
5d22h

Well. There's no Supreme Court ruling on the copyrightability. The Federal Circuit did hold that APIs are copyrightable, and as far as I understand the Supreme Court carefully avoided deciding whether or not they were right about that.

So it's not great, as it does leave the Federal Circuit finding that the APIs were copyrightable standing so far, but as you say it could have been worse - it does not have remotely the same weight.

ender341341
0 replies
5d23h

My (non-lawyer) take on that is when they say

"began with the assumption that the APIs may be copyrightable, and thus proceeded with a review of the four factors that contributed to fair use"

is that they're not saying APIs are copyrightable, and basically ignored that question because they ruled that even if they are copyrightable, Google's use would be fair use and Oracle doesn't have a case.

It's a fairly common way cases are resolved: you say, "assuming the plaintiff's claims are all true, do they actually have cause of action for a lawsuit?"

salawat
3 replies
6d

Sounds anti-competitive af. How has this not bubbled up to the FTC for anti-trust action yet?

ronsor
1 replies
6d

You can always report it to the FTC yourself, or Nvidia's competitors can sue them for antitrust violations.

muragekibicho
0 replies
6d

Be the change you want to see in the world

segasaturn
0 replies
6d

The FTC is not going to take action that would hurt a $2T American corporation and help Chinese hardware manufacturers/reverse engineers, no matter how valid such a case would be.

chatmasta
3 replies
6d

I don't understand why Nvidia is so obstinate on this front. They would solidify their lead in hardware if they open sourced the entire CUDA software stack. Their hardware competitors are going to reverse it anyway, so they may as well open source the thing and benefit from all the momentum that comes with owning the community's favored software and the hardware that it runs on.

roughly
0 replies
6d

Nvidia is worth more money than God because every ML pipeline out there uses CUDA and the only way to use CUDA is on Nvidia hardware - they already own the community’s favored software and the hardware it runs on. CUDA’s not the product, it’s the moat.

gjsman-1000
0 replies
6d

Exclusivity works. As much as Hacker News likes to dismiss it.

bluedevil2k
0 replies
6d

Really? All these companies have their code written in CUDA and when it comes time to buy more GPUs they can make a decision - buy more Nvidia chips that will “just work”, or buy AMD/Intel and spend time and money writing new, potentially buggy, software to duplicate the software I’ve already written. Seems like an easy decision for the buyers, and Nvidia’s vendor lock in is complete.

Tistel
3 replies
6d

People should check out Google's JAX. Work in a high level language and run anywhere. Nvidia should just be commodity hardware if people avoid vendor lock in.

fisf
1 replies
6d

That's fine and dandy, until you realize that JAX only has a limited number of backends. E.g. ROCm support is still experimental.

Somebody has to build those optimized backends -- it's not just a matter of people picking the wrong stack.

nerpderp82
0 replies
5d23h

I just looked at JAX and XLA; it's odd to me that they aren't targeting SPIR-V directly.

nerpderp82
0 replies
6d

Shimming CUDA is a waste of effort that only reinforces Nvidia's market dominance. Targeting higher-level interfaces (JAX, Taichi, ArrayFire, etc.) is IMHO a better strategy. We have already seen systems like llama.cpp and their ilk support alternative backends for training and inference.

Now that the vast majority of compute cycles have centered on a handful of model architectures, implementing those specific architectures in whatever bespoke hardware isn't difficult.

Target specific applications, not the whole complex library/language layer.

wzdd
2 replies
6d

This doesn't appear to ban using translation layers.

The text is "You may not reverse engineer, decompile or disassemble any portion of the output generated using Software elements for the purpose of translating such output artifacts to target a non-Nvidia platform".

That would appear to (attempt to -- it may not be enforceable) restrict the creation of translation layers. I don't understand how you could infer "bans using translation layers" from the above clause, and indeed the tweet they're referencing does not.

AIUI Zluda is something like Wine, in that it's an API reimplementation. It would be weird to call running Wine reverse engineering, decompilation or disassembling -- it's effectively just linking.
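
A minimal sketch of what "effectively just linking" means here: a Wine-style reimplementation ships its own library exporting the same C symbols applications already call, so the dynamic linker resolves them into the replacement. Two CUDA driver API entry points are shown; the CUresult typedef and the returned values are simplified placeholders, and a real layer like ZLUDA has to cover the full API and forward the work to another backend.

    // Stand-in "libcuda": exports driver API symbols an existing binary expects.
    // Bodies are placeholders, not a real backend.
    extern "C" {

    typedef int CUresult;               // 0 stands in for CUDA_SUCCESS

    CUresult cuInit(unsigned int /*flags*/) {
        // A real translation layer would initialize its own backend here
        // (e.g. a Vulkan or HIP context) rather than Nvidia hardware.
        return 0;
    }

    CUresult cuDriverGetVersion(int* version) {
        if (!version) return 1;         // non-zero signals failure in this sketch
        *version = 12000;               // placeholder: advertise a CUDA 12.0-era driver
        return 0;
    }

    CUresult cuDeviceGetCount(int* count) {
        if (!count) return 1;
        *count = 1;                     // placeholder: pretend one device is present
        return 0;
    }

    }  // extern "C"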

marshray
0 replies
5d23h

It's almost like they wrote it specifically to be invalid under a compatibility exception.

williamDafoe
2 replies
5d23h

They will lose in court. I am reminded of IBM trying to ban 3rd-party disk drive makers from making disks that fit the IBM disk interface in the 1960s. They lost, too. However, ZLUDA may have to do a clean-room reimplementation of all of CUDA, like Google did with their Java reimplementation...

rerdavies
1 replies
5d19h

I'm reminded of Apple modifying their hard drives to return "Copyright (c) Apple Computers" to appropriate prodding, and refusing to mount any drive that didn't do so. That one never went to court. These days, you just do the same thing with a little rudimentary crypto.

tamimio
1 replies
6d

I don’t think Nvidia can enforce this in the US let alone China.

zoobab
0 replies
5d1h

Not in the EU; there are interoperability exceptions in the EU copyright directive. There are even more exemptions in French law, which goes further than other countries'.

raggi
1 replies
6d

go all in on webgpu compute

astlouis44
0 replies
6d

Been pondering over this a ton recently. WebGPU not only represents higher-end rendering in a browser, but true cross-platform compute that will increasingly get closer and closer to native performance. This is huge, because it comes with the portability aspect as well.

Where I think WebGPU has the most promising role to play is in inference of smaller optimized AI models, on client hardware. Users expect software to run anywhere, and for developers, being able to deploy a portable binary that "just works" is huge. Not to mention the immense cost savings... now you won't get a massive model; we're going to need the cloud for those for a while yet. But if you can run it locally, why not? And end users spend most of their time in the browser these days, so it's easy to see where this is all headed.

mindcrime
1 replies
6d1h

Ironically, this is just going to increase interest in ROCm and other alternatives.

foobarian
0 replies
6d

Wonder if they know and expect that. It's brilliant! Maybe AMD should announce a ban on developing CUDA compatibility layers for their kit

mawadev
1 replies
5d20h

This is how nvidia will slowly rot

rerdavies
0 replies
5d19h

In the meantime, NVIDIA has quickly become the world's third most valuable company.

hagbard_c
1 replies
6d

Nvidia can go bite my shiny metal ass as far as I'm concerned. Hey, European Commission, once you're done taking 0.5% of the fruit factory's yearly income for their gatekeeping tendencies, here's another juicy company for you to investigate.

amelius
0 replies
5d23h

It would be great if nVidia would charge Apple 30% of their revenue for the use of CUDA.

andersa
1 replies
6d

a new clause in CUDA 11.5 reads

Huh? That was released like 3 years ago.

woadwarrior01
0 replies
5d2h

Translation layers would only move up the stack and target DSLs like Triton, Numba, etc.

varbhat
0 replies
5d2h

Khronos, please come up with something and let it be good.

nindalf
0 replies
6d

Their ban is a legal one, not enforced in code. I wish them the best of luck enforcing those terms and conditions against Chinese hardware makers in China. And even Nvidia would know that the European competition regulator would take a dim view of it. I guess there is some benefit to dragging it out, benefiting from the CUDA monopoly for a year or two more.

nimbius
0 replies
5d2h

Looks like nvidia is trying to keep the lynchpin of their entire business model from crumbling underneath them. ZLUDA lets you run unmodified CUDA applications with near-native performance on AMD GPUs.

https://github.com/vosen/ZLUDA

With Triton looking to eclipse CUDA entirely, I'm not sure this prohibition does anything more than placate casual shareholders.

https://openai.com/research/triton

nhggfu
0 replies
5d13h

The author doesn't even bother to say what CUDA is, just refers to it by an acronym.

HTML even has an <acronym> element.

would be fab if people would communicate effectively when they are writing on the web, as a job.

mikefallen
0 replies
5d2h

This should be raising some eyebrows from regulators regarding antitrust, no?

loup-vaillant
0 replies
5d2h

Can they? The article quotes:

You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to target a non-NVIDIA platform.

Okay, so what if my purpose is just to explain to the world how it works? Maybe I'm just interested in the precise semantics of CUDA and all its possibly undocumented edge cases? Maybe what people do with this knowledge is none of my business? Maybe if I write a Vulkan translation layer, I only do it so I can run it on NVIDIA hardware?

And maybe, just maybe, their clause is an overreach and unenforceable? Though at this point I'd rather seek legal advice from a registered attorney.

jrepinc
0 replies
5d

Fsck you nvidia even more. Just going the same evil ways as Nintendo I see. Good thing I don't waste my money on your products.

joshscholar
0 replies
4d21h

ROCm contains AMD's version of a translation layer for CUDA (though I've read people saying that it has some restrictions to get around Nvidia's language, not sure what. Maybe it has to be precompiled.)

But ROCm has a DIFFERENT trick. It has its own language (HIP) that can be translated into CUDA or compiled for other targets. If you use THAT language instead of CUDA, you can get the effect of CUDA on Nvidia without the source being IN CUDA, and you can compile that language for AMD or Intel.

hfgjbcgjbvg
0 replies
4d20h

This red team vs green team thing needs to stop.

hfgjbcgjbvg
0 replies
4d20h

All this does is make computing more expensive. I'm sure there are some people who want that, but it's not great for humanity IMO.

hangonhn
0 replies
6d

The stated reason from the article makes little sense. The Chinese will just tell Nvidia to go take a hike, if the legal agreement holds any water at all in Chinese law. The only people this will affect would be AMD and Intel. Something isn't quite adding up.

fransje26
0 replies
5d23h

Well, that's excellent news. It means competitors can now concentrate on coming up with a real alternative, instead of on a half-arsed compatibility solution. Looking at you, AMD.

dooglius
0 replies
5d23h

This appears to apply only to binaries compiled with NVCC; compiling CUDA code with LLVM/clang for non-Nvidia hardware would not be a license violation (disclaimer: IANAL).

cm2187
0 replies
5d3h

Even for large ML/LLM models, how large is the part of the code that interacts with CUDA? In other words, how much of a moat does Nvidia have if a better competitor comes up? My intuition is that the code itself could probably be ported in a few weeks, i.e. if there is a moat, it doesn't consist of the CUDA implementation.

anonymousDan
0 replies
5d2h

Surely this is anticompetitive?

Zambyte
0 replies
5d2h

You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to target a non-NVIDIA platform.

Does this actually have any legal weight besides being conveyed by a multi-trillion dollar company?

LispSporks22
0 replies
6d

Well I think we just witnessed the end of CUDA then. We usually code around such brain damage.

K0balt
0 replies
5d7h

In a way, this is great news for less regulated markets because it might give them a cost advantage in ML services commensurate with their lower overall development costs. This might help developing markets compete with established market leading countries.

JonChesterfield
0 replies
5d23h

Translating machine code between architectures is a dubious proposal in the first place. Taking compiled CUDA shaders, disassembling them, and recreating shaders for some other architecture from the pieces really should be more effort than compiling the source directly to that other architecture.

FredFS456
0 replies
5d2h

Nvidia is trying to prevent adversarial interop by using EULAs. I doubt the Chinese companies will care; Intel and AMD may be sued if they don't comply. My understanding is that this means Intel/AMD will have to spend more time writing middleware (e.g. ROCm's replacements for cuBLAS and the cu* libraries) rather than run Nvidia's existing middleware on their own CUDA translation layer.