
U.S. clears way for antitrust inquiries of Nvidia, Microsoft and OpenAI

roenxi
121 replies
3d13h

Nvidia solved a hard problem and their competitors all failed. I spent years with an AMD card growing progressively more annoyed at their self-inflicted apparent inability to multiply matrices on demand. Nvidia had nothing to do with their failure, unless they had some sort of high-level mole in AMD's driver teams.

Hitting the only successful company in a difficult field with legal assaults is not the obvious path to success. Nvidia has nothing close to a monopoly; there is no moat and they operate in a commodity space that will behave like a normal competitive market. It just happens that their competitors were wall-to-wall complacent about how quickly the hardware world can change and they stupidly thought graphics cards were for displaying graphics. It turns out graphics cards are for building AI and crypto markets.

vlovich123
34 replies
3d13h

The thing with monopoly laws is it doesn’t matter how you got into pole position. The point is that you’re illegally utilizing tangential benefits of the pole position to maintain that position as well as enter new markets.

including how the company’s software locks customers into using its chips, as well as how Nvidia distributes those chips to customers.

The lock-in is probably a losing argument on technical merits unless NVIDIA is doing something nefarious there (which the lawsuit will potentially reveal). If NVIDIA is distributing chips in preferred ways to partners it wants to succeed (e.g. those partnered with MSFT, with which it has a very cozy, tight-knit relationship), that could be an antitrust issue.

Microsoft structured its minority stake in OpenAI in part to avoid antitrust scrutiny, The Times has reported. Microsoft also reached a deal in March to hire most of the staff of Inflection AI, another A.I. start-up, and license its technology. Because the deal was not a standard acquisition, it may be harder for regulators to scrutinize.

So there is an argument to be made that MSFT is acutely aware that its behavior could be viewed as anticompetitive. If there’s a law against structuring financial transactions, why isn’t similar behavior in the business world viewed the same way: as trying to bypass the spirit of the law?

roenxi
31 replies
3d12h

Just on that last paragraph, because the frame annoys me - that is bullying. It isn't fair to insinuate that a company did something wrong by attempting to obey the law.

MS knows all about antitrust. They've hired a bunch of lawyers and told them to do what needs to be done to be compliant with the regulations. That shouldn't then be taken as evidence that they are guilty of antitrust activities! What are the lawyers supposed to do, structure the company to maximise the likelihood of an antitrust lawsuit being brought? Nobody should want that: regulator, company or customer.

elefanten
12 replies
3d12h

You ignore “the spirit of the law” with fancy acrobatics around “the letter of the law”… at your own peril.

I can’t comment on the claims here, but if MSFT is skirting the intent brazenly enough, the predictable regulatory reaction is not bullying. Everyone involved is well aware of that dynamic. No need to pity MSFT.

pointlessone
11 replies
3d9h

Is the spirit of the law ever a concern in the USA? I’m asking because it is in the EU, and every US company seems to struggle with the concept, judging by how they engage with the GDPR, the DMA, and similar regulations in the EU.

vlovich123
8 replies
3d9h

The US Constitution has largely been interpreted in a “spirit of the law” manner since the founding. Originalism has made significant, durable gains in terms of SCOTUS appointments since the 1980s; prior to that, originalists had a good chance of changing their POV once appointed. I suspect similar viewpoints are reflected in judge appointments as well. Letter of the law benefits the powerful because they can always stay ahead of legislative attempts to fix the letter of the law (or lobby to change the letter). Spirit of the law is much harder to corrupt.

Spirit of the law isn’t perfect itself, obviously, because it is legitimate to point out that it becomes hard to know what the law actually is, especially since governments have gotten into legislating a lot of regulatory nuance and it’s hard to distinguish malicious compliance from a good-faith effort. It’s also corruptible by overzealous prosecutors looking to make a name for themselves by taking on unpopular entities that aren’t actually doing anything wrong.

AnthonyMouse
7 replies
3d8h

Letter of the law benefits the powerful because they can always stay ahead of legislative attempts to fix the letter of the law (or lobby to change the letter). Spirit of the law is much harder to corrupt.

This is precisely the opposite. "Spirit of the law" makes the rules squishy and indeterminate, providing opportunities for fancy lawyers to bend the result to their own interests.

"Letter of the law" often leads to harsh results when the law is drafted poorly, because if they wrote something dumb then you get something dumb instead of a judge rewriting the law to make people happy. But the people they're making happy are usually the powerful, so pick your poison.

vlovich123
4 replies
3d7h

No law can ever be written to capture every possible application of the underlying spirit. Thus your ability to escape the spirit is directly correlated with how many lawyers you can hire to find loopholes in the text (or just flat out lie).

It’s also important to remember that societies naturally undergo shifts over time. It’s impossible to continuously update a codified set of laws when the underlying mores of the time have shifted; you’ll just be constantly arguing over the updates to add. Any law written perfectly today becomes imperfect simply through the passage of time. That’s why the Bible and other prescriptive religious texts feel so outdated on many recommendations: each is a snapshot in time of the values of a culture, but those values change. There was even a fantastic sci-fi short story on this exact point of cultural shift [1] that’s worth a read.

[1] https://qntm.org/mmacevedo

AnthonyMouse
3 replies
3d7h

No law can ever be written to capture every possible application of the underlying spirit. Thus your ability to escape the spirit is directly correlated with how many lawyers you can hire to find loopholes in the text (or just flat out lie).

The assumption here is that the rules would be complicated and provide lots of opportunities for gamesmanship. Now suppose the rule is "no company shall have more than 30% market share in any market, any that does shall be broken into no fewer than twelve independent pieces."

No loopholes, if you exceed 30% market share you get broken up. And if they find a loophole then you amend the law and take it out.

It’s impossible to continuously update a codified set of laws when the underlying mores of the time have shifted; you’ll just be constantly arguing over the updates to add.

That's just politics. Somehow you need a process to decide what the law should be. The output of that process is the new law. If the output sucks then get a new process. But whether people can agree on what the law should be is a separate issue than whether we should even know what the law as enacted is supposed to mean.

That’s why the Bible and any prescriptive religious text feels so outdated on many recommendations - it’s a snapshot in time of the values of a culture but those values change.

That's fine, nobody is saying that you can't change the law if a case comes out in a bad way. But it should be the legislature rather than the courts to do it, and the new understanding shouldn't be applied to past behavior ex post facto.

ickelbawd
2 replies
3d3h

Please clearly define the meaning of “you” in your simple law.

Literally me? Okay, I don’t own 50% of the market—my company does.

Oh you mean my company? Okay, my company doesn’t own 50% of the market—each of my companies only control 25%.

Oh you mean me and my companies? Okay, well I only own one company and my wife owns the other one.

Oh you mean… I think you get the point…

Now move on to all the other words you used: what defines a “market”? What does “broken” mean here? What does “independent” mean here? I’m sure it’s quite clear to you—that you know it when you see it. I’m also quite sure others have different interpretations.

I agree with the spirit of your comment otherwise, but simple laws rapidly become complex laws because people are complicated and language is flawed.

roenxi
0 replies
2d18h

That isn't a loophole, it is an interpretation. It isn't possible to have laws without an assumption that a reasonable person is interpreting them (which is 95% of the reasons why judges are involved).

But we know that the interpretation phase has a lot of wiggle room (the US constitution is a regular parade of this sort of thing - the Roe v. Wade fight or the abuse of the commerce clause for example). When they go bad these things aren't loopholes as much as they are just ignoring the written law with a polite fiction and it is up to different interest groups to work out where the power lies to get what they want. Political reality vs. the theoretical rule of law ideal.

But that is necessarily independent of what is written in the law itself. If a group has enough power to overrule the written law then what you write in the law won't be able to stop them.

AnthonyMouse
0 replies
2d10h

Please clearly define the meaning of “you” in your simple law.

"no company shall have more than 30% market share in any market, any that does shall be broken into no fewer than twelve independent pieces."

It doesn't contain the word "you".

Okay, I don’t own 50% of the market—my company does.

Then your company would be violating the law. "Company" means a set of entities that share a common ownership.

what defines a “market”?

A set of products or services that serve as fungible substitutes for one another.

What does “broken” mean here?

It means they no longer share common ownership. This is also what independent means.

I agree with the spirit of your comment otherwise, but simple laws rapidly become complex laws because people are complicated and language is flawed.

But all of those things are just their ordinary meaning. Writing them down would make the law more explicit but it doesn't make it any more complicated. The definitions aren't each a separate set of criteria that have to be complied with separately, they're just a clarification to reduce possible ambiguity.

In particular, what you're doing is resolving edge cases. But the basic law has already addressed 99% of cases, because they're not ambiguous. An independent restaurant in a major city does not have >30% market share for food because there are many, many competitors. Microsoft has >30% market share for desktop operating systems because Microsoft has ~70% market share for desktop operating systems.

And we have to distinguish between two things here. One is, you see the word "company" and the dictionary says one of the meanings is a military unit, and then Microsoft claims that they aren't a company because they aren't a military unit. But if something has two meanings and one of them doesn't make sense in context, that's not the intended meaning. Using the "spirit of the law" for this kind of resolution is inherently necessary.

The other is, the law isn't ambiguous, but the unambiguous result is undesirable and the only way for a judge to fix it is to disregard the text and make something up. They shouldn't do this.

bee_rider
1 replies
3d7h

Seems like either system is exploitable by the rich.

AnthonyMouse
0 replies
3d7h

The difference is this: If you use the letter of the law, an ordinary person has the same chance to find a loophole as the rich. Which will tend to cause them to get closed, because the rich have more money but ordinary people are more numerous, and the ability for ordinary people to exploit them would pressure the government to fix them.

Sometimes this would make the law more complicated, because the situation has intrinsic complexity and you have to enumerate the edge cases. Sometimes it would make the law simpler, because the existing complexity is extraneous and only an opportunity for gamesmanship. But either way it creates an evolutionary pressure for improvement.

nabla9
1 replies
3d9h

Spirit of the law is more important in the US.

In common law systems (US, UK), there is greater emphasis on the spirit or intent behind the law. Judges have more flexibility to interpret laws and use general principles.

Civil law systems place more importance on the letter of the law. The spirit behind the law is secondary. The written legal codes are more important.

EU law is a mixture of both, but most continental European countries have civil law systems (Ireland does not; the Nordic countries have a mix of civil and common law).

vlovich123
0 replies
3d9h

Since the 1980s letter of the law has seen a remarkable upswing in power. I wouldn’t say spirit of the law applies anymore to SCOTUS rulings & I suspect federal appointments followed similar biases so it’s up & down the federal court. Not sure about state judges. Do you agree?

vlovich123
9 replies
3d11h

This is a confusing argument to me. Are you saying that structuring and smurfing should then be legal? Structuring and smurfing are nothing more than attempts to keep financial transactions below the statutory reporting threshold.

Also, I think you’re conflating my statement, which was meant to be prescriptive, with a descriptive statement. I’m saying that if structuring and smurfing are illegal in a financial context, MSFT doing the equivalent to skirt antitrust laws should be similarly illegal; if the laws don’t prohibit it, the laws should be updated. But I’m not a lawyer, so it’s possible the laws already prohibit it or that MSFT violated some of them. Commenting in either direction on an information-free article isn’t wise.

As for the spirit of the law vs. the letter of the law, that’s an ongoing debate as old as history. It’s personally weird to me to encounter letter-of-the-law people, given that the spirit of the law has a much richer history behind it, to my view, and seems more defensible. Some amount of rules-lawyering is required, but the spirit of the law is a more robust legal principle, one that can withstand participants trying to find creative workarounds.

roenxi
5 replies
3d11h

Ah I see, I was in two minds about including the quote but decided against it. I was talking about "Microsoft structured its minority stake in OpenAI in part to avoid antitrust scrutiny, The Times has reported. Microsoft also reached a deal in March to hire most of the staff of Inflection AI, another A.I. start-up, and license its technology. Because the deal was not a standard acquisition, it may be harder for regulators to scrutinize.", not what you wrote. Obviously they structure the deal to avoid scrutiny; they aren't going to structure it to invite scrutiny; the deal is designed to avoid anything that would allow a random regulator to get involved. The Times is insinuating without cause.

I don't think you're in a position to bully Microsoft. I doubt they care about your opinion.

kortilla
4 replies
3d10h

If you need to intentionally structure a deal to avoid scrutiny and have it described that way, it implies what you really wanted was much worse.

“We packed the goods in nondescript boxes to avoid scrutiny by the border guards…”

Scrutiny isn’t something you need to avoid as a big company if you’re not fucking around. The lawyers are there to handle the scrutiny, not to hide stuff from it.

roenxi
3 replies
3d10h

If you need to intentionally structure a deal to avoid scrutiny and have it described that way, it implies what you really wanted was much worse.

It doesn't. That is like saying that if someone structures their affairs to pay less tax, it implies they wanted to do something illegal. They had to choose some structure, and they picked the safest and easiest one. In both the hypothetical tax case and the real MS case, I suspect.

The incentive is to choose structures that minimise regulatory engagement. The fact that MS is following incentives isn't a red flag. Not a green one either, it isn't interesting except as something that the NYTimes can hook insinuations on.

jsnell
1 replies
3d9h

This is a funny discussion, because "structuring" has a very specific meaning when it comes to finance. It's about executing your financial transactions in a way that avoids regulatory scrutiny (for example sizing transactions such that they fall just under reporting thresholds).

Structuring is explicitly illegal in the US.

This is the point that vlovich123 was making, and given you totally missed it, it seems like you might not be familiar with what the term means in this context.

roenxi
0 replies
3d8h

Sorry for being unclear on this, I should have quoted. But I'm not, and never was, talking about vlovich123's point in any comment in this thread. I didn't have an opinion on what vlovich123 said.

SpaceNugget
0 replies
3d7h

It does. You can't replace the X and Y in "Structuring X to avoid Y" and still end up with the same implication.

Structuring your business to avoid breaking the law is good.

Structuring your business to avoid some otherwise regular audits that would reveal whether you are breaking the law is obviously suspicious. One might think that there is a reason why getting audited might result in negative consequences for your business.

chollida1
0 replies
2d23h

This is an easy answer.

With banking, structuring laws mean banks need to report any transaction over $10,000.

So if I need to send you $100,000 and I split it up into 11 payments so that each payment comes in under the $10,000 reporting limit, that breaks the law, because my payment was for $100,000 but I structured it to get around the $10,000 reporting limit.

If Microsoft approaches antitrust limits when it owns more than 50% of a company and it buys 49%, then no structuring took place, as they only bought 49%. Now if they bought 10% through each of 6 different shell companies that they own, such that they'd control 60% of the company, then that is illegal, as it works around the limits.

But owning up to the limit is perfectly fine as it doesn't in anyway break the law or work around it. That's the very reason for having a limit, to say you can go up to this limit, but not over.

I'm guessing you don't deal in finance at all as we deal with this all the time. You'll see funds owning up to 9.9% of companies to avoid the 10% reporting threshold. Again, nothing illegal here as the government has said they are perfectly fine with funds owning up to <10% of a company.
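The arithmetic above can be sketched in a few lines. This is a toy illustration only: the $10,000 figure matches the US currency-transaction reporting threshold mentioned in the comment, but the "reporting" logic here is purely hypothetical, not how any real compliance system works.

```python
# Toy sketch: a naive per-transaction reporting rule misses a $100,000
# transfer split into 11 sub-threshold payments -- the gap that
# anti-structuring law exists to criminalize.
REPORTING_LIMIT = 10_000  # per-transaction reporting threshold (USD)

def reportable(payments):
    """Return the payments a naive per-transaction rule would flag."""
    return [p for p in payments if p > REPORTING_LIMIT]

single = [100_000]            # one payment: flagged
split = [100_000 / 11] * 11   # ~$9,090.91 each: none flagged

print(reportable(single))      # [100000]
print(reportable(split))       # []
print(round(sum(split)))       # 100000 -- same total moved either way
```

The point of the example is that the naive rule sees nothing wrong with the split payments even though the economic substance is identical, which is why the law targets the splitting itself.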

bri3d
0 replies
2d23h

I think the issue here is the overloaded use of the term "structuring."

If I think I might be doing something that antitrust regulators won't like, so I hire a lawyer to tell me what I can do, and they tell me "you can buy up to 49.9% of this company," and I buy 49.9% of the company, that's fine. It's neither illegal nor unethical. It's also not "structuring" in the technical, financial sense, but a layman might say "you structured the deal to avoid falling afoul of regulation," even though it's the wrong use of the word. I think that's what's happening in this conversation (I have no idea about what MSFT actually did or didn't do, just the direction the conversation here is going).

AnthonyMouse
0 replies
3d7h

You picked a particularly bad example. Structuring is a notoriously bad law because innocent people make transactions that would be considered structuring all the time, and making it illegal has no legitimate purpose when the government could just use the aggregate amount of deposits over some period of time for its reporting threshold instead of the amount of each individual deposit. It's like creating a rule with a loophole that gets exercised through ordinary behavior, and then instead of closing the loophole they impose criminal penalties on the ordinary behavior.

But then you consider that it's used for civil asset forfeiture (i.e. law enforcement stealing money from innocent people without ever proving a real crime) and it suddenly makes sense:

https://www.washingtonpost.com/news/the-watch/wp/2014/03/24/...

Whereas with MSFT it's a different situation. There is nothing illegal or wrongful about depositing a large amount of money, there is just a heinous law that makes it illegal to do it in a common way. Whereas monopolizing a market is intended to be illegal regardless of how you do it, so there shouldn't be a way to structure things to monopolize the market without breaking the law. The way you're intended to comply with the law is by not monopolizing the market.
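The aggregate-over-time alternative proposed above can be sketched concretely. The 30-day window and the threshold are illustrative assumptions, not anything in actual statute; the point is only that a rolling-total rule catches split payments without criminalizing the splitting itself.

```python
from collections import deque

REPORTING_LIMIT = 10_000  # illustrative threshold, not statutory

def aggregate_flags(deposits, window=30):
    """Return the day on which the rolling `window`-day total first
    exceeds the limit, or None. `deposits` is a list of (day, amount)
    pairs in chronological order."""
    recent = deque()  # (day, amount) pairs still inside the window
    total = 0.0
    for day, amount in deposits:
        recent.append((day, amount))
        total += amount
        # drop deposits that have aged out of the window
        while recent and recent[0][0] <= day - window:
            _, old = recent.popleft()
            total -= old
        if total > REPORTING_LIMIT:
            return day
    return None

# Eleven sub-threshold deposits on consecutive days still get caught:
split = [(d, 9_090.91) for d in range(11)]
print(aggregate_flags(split))  # 1 -- the second deposit tips the total
```

Under this design, depositing $9,090.91 once is unremarkable, and only the aggregate behavior triggers reporting, which is the comment's argument for why the per-transaction rule is the wrong primitive.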

iforgotpassword
7 replies
3d12h

Because laws are always flawless and never contain any loopholes, and it has never happened ever that lawyers were specifically instructed by a company to find those loopholes and exploit them as hard as possible.

mike_hearn
6 replies
3d9h

The idea of law as having loopholes is relevant for politicians, who are tasked with ensuring that the written law reflects their intent. It's not relevant for courts, regulators, people or companies, who are all tasked with following the law as actually written. For those people there is no "spirit" or "true law".

The reason this is important is it's a sword that cuts both ways. It's always tempting to argue that whatever you'd personally like to happen is what politicians really intended, and any gap between reality and their preferred outcome is therefore a "loophole". But once you get into saying people should follow intent, not written law, others can easily argue that politicians never intended the law to be interpreted like that against them, and so therefore they are morally justified in ignoring it. It can be used against you as easily as you can use it against them.

A common example of this problem is income vs. capital gains taxes. One ideological tribe is very fond of arguing that people whose income comes mostly from investments rather than wages are exploiting a "loophole" in tax law, but of course the reason there are different rates to begin with is exactly that politicians wanted to encourage investment. There are plenty of cases where this intent is discussed in written literature, and there's no other reason to distinguish between income sources and then set differing rates. There is no "loophole", and nor is the "spirit" of the law being violated. But you hear such claims all the time.

The other reason it's problematic is because you can't really know what the intent of lawmakers was. The law was their best collective effort at writing down what they wanted, as a result of numerous compromises and disagreements between different people. If they didn't write it down properly or the resulting compromise was a mess, that's on them, but a working legal system doesn't allow people to just blow off their written instructions and assume they know what was really meant.

eru
3 replies
3d8h

For those people there is no "spirit" or "true law".

Actually, there is. Many judges take various interpretations of the 'spirit' into account. See eg https://en.wikipedia.org/wiki/Originalism for one example.

And if the courts use some 'spirit' to guide them, companies and their lawyers had better try to predict what that 'spirit' recommends.

If they didn't write it down properly or the resulting compromise was a mess, that's on them, but a working legal system doesn't allow people to just blow off their written instructions and assume they know what was really meant.

Well, then by that definition the real life legal system of the US ain't working?

mike_hearn
2 replies
3d6h

Originalism is the opposite of that, no? Originalists are all about following what was actually written down, using an understanding of English as it was used at the time. It rejects modern re-interpretations of the law and avoids speculating about the 'spirit' of the constitution, even if that would be more convenient. The competing view is that of a 'living' body of law in which the meaning continually changes without the wording itself changing.

eru
1 replies
3d5h

Well, either way: then the 'living body of law' theory is an example of such a 'spirit'.

mike_hearn
0 replies
2d10h

It is and you'll notice a lot of people really, really hate it. Trump just broke all fundraising records by miles, because a lot of people who previously didn't support him feel that New York has ignored the written letter of the law in favor of a tribalistic "spirit". There are similar society-rending controversies happening in Europe due to the activist lawmaking of the ECHR.

These two perspectives aren't both equally valid: the courts are not allowed under any system of civics to simply do whatever the judge feels like. They are only given leeway to interpret the law when they have no other choice because the statutes are unclear. It's a last resort, and often the result will be people "getting away with it" because there's no law against what they did.

iforgotpassword
1 replies
3d2h

Yes, I'm aware of that distinction. However, I read the comment I was replying to as suggesting the antitrust case is unfounded because Nvidia didn't break any laws. That's precisely why the antitrust case is valid/needed: if Nvidia were breaking any laws, you wouldn't need antitrust, you'd just take them to court.

ethbr1
0 replies
2d22h

Antitrust is law and court.

tzs
0 replies
3d7h

The thing with monopoly laws is it doesn’t matter how you got into pole position. The point is that you’re illegally utilizing tangential benefits of the pole position to maintain that position as well as enter new markets.

The following is for US law.

Your first sentence is not really needed because US monopoly law really doesn't care if you are actually in pole position. It's more about monopolization than it is about monopoly.

To a first approximation think of it as being about fair competition. You could have a complete 100% monopoly in some particular market but if you got that monopoly by simply outcompeting everyone else by making a better product and you were not trying to use that to expand into other markets by doing things like tying you would probably not have an antitrust problem.

eru
0 replies
3d8h

The thing with monopoly laws is it doesn’t matter how you got into pole position. The point is that you’re illegally utilizing tangential benefits of the pole position to maintain that position as well as enter new markets.

This depends on jurisdiction, and what judges you get.

E.g. Standard Oil was hit with antitrust law despite neither having a monopoly nor exploiting its market share like a monopolist. See e.g. https://en.wikipedia.org/wiki/Standard_Oil#Legacy_and_critic... or https://fee.org/articles/the-myth-that-standard-oil-was-a-pr...

Some economic historians have observed that Standard Oil was in the process of losing its monopoly at the time of its breakup in 1911. Although Standard had 90 percent of American refining capacity in 1880, by 1911, that had shrunk to between 60 and 65 percent because of the expansion in capacity by competitors.

aydyn
13 replies
3d10h

Still have no idea how this dude is relevant other than that he wrote some sci-fi books one time. What are his technical accomplishments? Oh, nothing?

Anyone can talk, especially nowadays because of AI, ironically.

hgomersall
7 replies
3d6h

I think he is relevant because he writes stuff that resonates. You might disagree, but I am judging him based on his very real accomplishments.

aydyn
6 replies
3d1h

If his ideas resonate and are so profound, tell us what they are and why. Don't point to his blog and worship him like some minor deity.

hgomersall
5 replies
3d1h

How does one worship a minor deity? You're not obliged to read what I post any more than I'm obliged to persuade you. Feel free to ignore the links, or feel free to read them. You might gain something from them as I have, but I'm not going to expend any efforts on trying to enlighten you or any such nonsense.

aydyn
4 replies
3d1h

I didn't read what you said because you didn't say anything. You just linked to his blog while carefully making sure to point out it was CD who wrote it.

That's literally the dictionary definition of idolatry.

hgomersall
3 replies
3d1h

Idolatry: noun; the worship of idols.

It has a decent number of up votes, so clearly some people think it worth posting. As I said, you don't have to read it, and I don't really care if you do. I'm not sure how linking to something with a reference is worship, but ok.

You seem very angry.

aydyn
2 replies
2d20h

I'm not angry, I'm simply pointing out that you place undue value on a name the vast majority of people don't care about.

What you're doing, explicitly, is placing more value on name than idea. A fallacy. I'm obviously not saying "worship of a minor deity" literally, but colloquially.

You're right, I can read it or not. Same with you, you can take valid criticism or ignore it and be all the poorer for it.

You can also continue admitting you have no rebuttal as you hyper-focus on the _diction of the conversation_ instead of the _conversation_, like how to define worship and idolatry in a casual conversation.

hgomersall
1 replies
2d11h

You have no criticism. You object to the fact that I linked to an article I liked. You think I did that because I assign undue weight to the author. I did not. I referenced the author because he is well known and it allows others to know what they are linking to. The fact you assert I did it to invoke some kind of authority is flat out wrong. I didn't summarise because the linked article is, in my opinion, a very well argued article - a summary is only going to be a poor facsimile, so better to link to the article itself.

Generally, discourse would be better if we linked to well-argued and well-reasoned articles rather than having a load of blowhards who like the sound of their own voice rather too much.

If you're going to say things like "literally the dictionary definition of idolatry" when it's nothing of the sort, expect people to pick up on it. Your reasoning appears to be very much in the domain of a cheese.

I shall no longer be replying to or monitoring this thread. Take care.

aydyn
0 replies
1d18h

Interesting, I get it now. Just one question, how many times a day do you pray to your doctorow shrine?

eru
0 replies
3d8h

Many people like his writing, and his political stances.

criddell
0 replies
3d7h

Many of us appreciate the work he’s put in with the EFF. When you are right about things over-and-over for 20 or 30 years, people tend to listen to you.

WheatMillington
0 replies
2d22h

Why are you attacking the author instead of addressing the message?

EasyMark
0 replies
2d22h

why is any journalist, blogger, or critic important? They are popular the vast majority of the time, and have earned it via their blog/podcast/magazine articles. It's not because they're the best SME in the field or even ever practiced in the field, it's "can they bring things to light?" for the masses, and he often does.

torginus
13 replies
3d12h

Honestly, I don't get how the current situation could have developed.

Video games have used compute shaders forever, and those work just fine on other cards.

I remember doing some quite involved stuff for my thesis using OpenCL a decade ago, and it worked just fine. Nowadays OpenCL is dead for some reason...

I just don't get what it is in CUDA that makes it so special. As far as I remember, a GPGPU API consists of a shader language, a way to copy stuff to and from the GPU, a way of scheduling and synchronizing work, atomics, groupshared variables, and some interop stuff.

Is the vendor lock-in due to CUDA libs? In that case, the problem is not CUDA, but the libraries themselves. Not sure about today, but performance-portability basically didn't exist back then. You needed to specialize your code for the GPU arch at hand. It didn't even exist between different generations of cards by the same vendor. Even if you could run CUDA code on AMD, it would be slow, so you need to do a rewrite anyways.

roenxi
7 replies
3d10h

There probably isn't anything in CUDA that makes it special. They are well optimised math libraries and the math for most of the important stuff is somewhat trivial. AI seems to be >80% matrix multiplication - well optimised BLAS is tricky to implement, but even a bad implementation would see all the major libraries support AMD.
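
To illustrate how small the core operation is, here is the textbook matrix multiply in plain Python. This is a sketch, not how any real BLAS is written; libraries like cuBLAS and OpenBLAS compute exactly this loop, with tiling, vectorisation, and hardware-aware scheduling layered on top.

```python
# Naive triple-loop matrix multiply. The math is trivial; the hard,
# valuable part of a BLAS is making this fast on real hardware.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

# Multiplying by the identity leaves a matrix unchanged.
assert matmul([[1, 0], [0, 1]], [[5, 6], [7, 8]]) == [[5, 6], [7, 8]]
```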

The vendor "lock in" is because it takes a few years for decisions to be expressed in marketable silicon and literally only Nvidia was trying to be in the market 5 years ago. I've seen a lot of AMD cards that just crashed when used for anything outside OpenGL. I had a bunch of AI related projects die back in 2019 because initialising OpenCL crashed the drivers. If you believe the official docs everything would work fine. Great card except for the fact that compute didn't work.

At the time I thought it was maybe just me. After seeing geohot's saga trying to make tinygrad work on AMD cards, and having a feel for how badly unsupported AMD hardware is by the machine learning community, it makes a lot of sense to me that it is a systemic issue and that AMD didn't have any corporate sense of urgency about fixing those problems.

Maybe there is something magic in CUDA, but if there is it is probably either their memory management model or something quite technical like that. Not the API.

jb1991
2 replies
3d7h

What do you think about SYCL as a viable cross-platform GPU API?

roenxi
0 replies
3d6h

I'd have been happy to use OpenBLAS if it worked on a GPU. Any API is good enough for me. I have yet to see anything in the machine learning world that required real complexity; the pain seems to be in figuring out black box data and models, and deciphering what people actually did to get their research results.

The problem I had with my AMD card was that SYCL, like every other API, will involve making calls to AMD's kernel drivers and firmware that would crash the program or the computer (the crash was inevitable, but how it happened depended on circumstances).

The AMD drivers themselves are actually pretty good overall, if you want a desktop graphics card for linux I recommend AMD. Open source drivers have a noticeably higher average quality than the binary stuff Nvidia puts out. Rock solid most of the time. But for anything involving OpenCL, ROCm or friends I had a very rough experience. It didn't matter what, because the calls eventually end up going through the kernel and whatever the root problem is lives somewhere around there.

paulmd
0 replies
2d23h

The biggest problem with SyCL is that AMD doesn’t want to back a horse that they don’t control (same reason they opposed streamline) so they won’t support it. When the #2 player in a 2-player market won’t play ball, you don’t have a standard.

Beyond that, AMD’s implementation is broken.

Same as Vulkan Compute - SPIR-V could be cool but it’s broken on AMD hardware, and AMD institutionally opposes hitching their horse to a wagon they didn't invent themselves.

This is why people keep saying that NVIDIA isn't acting anticompetitively. They're not, it's the Steam/Valve situation where their opponents are just intent on constantly shooting themselves in the head while NVIDIA carries merrily on along getting their work done.

eru
2 replies
3d8h

The vendor "lock in" is because it takes a few years for decisions to be expressed in marketable silicon and literally only Nvidia was trying to be in the market 5 years ago.

It's crazy, because even 10 years ago it was already obvious that machine learning was big and is only going to become more important. AlphaGo vs Lee Sedol happened in 2016. Computer vision was making big strides.

5 years ago, large language models hadn't really arrived on the scene yet, at least as impressively as today, but I think e.g. Google was already using machine learning for Google Translate?

hot_gril
1 replies
2d18h

Goes to show how difficult and important execution is. There was also Hangouts vs Zoom.

eru
0 replies
2d15h

Or Skype vs Zoom. Skype had a lot of mindshare, it was basically synonymous with calling people via the internet for a while.

But somehow Zoom overtook them during the pandemic.

mschuetz
0 replies
3d10h

The magic is that CUDA actually works well. There is no reason to pick OpenCL, ROCm, SYCL or others if you get a 10x better developer experience with CUDA.

physicsguy
1 replies
3d7h

Writing and porting kernels between different GPU paradigms is relatively trivial, that's not the issue (although I find the code much clunkier in everything other than CUDA). The problem is that the compiler toolchains and GPU accelerated libraries for FFT, BLAS, DNN, etc. which come bundled with CUDA are pretty terrible or non-existent for everything else, and the competitors are so far away from having a good answer to this. Intel have perhaps come closest with OneAPI but that can't target anything other than NVidia cards anyway, so it's a moot point.

hot_gril
0 replies
2d18h

I'm not very familiar with this area, so this is a useful insight. Otherwise I was really confused what's so special about CUDA.

skocznymroczny
0 replies
3d10h

I don't know specifically about OpenCL, but the big thing about CUDA is that it's supported across operating systems and across the whole spectrum of NVidia GPUs. AMD has ROCm, but they have to be dragged kicking and screaming to support any of the consumer grade GPUs. ROCm is still pretty much Linux only, with limited support on Windows and only the high-end consumer offerings (7900XT and 7900XTX) are officially supported for ROCm. Because they want to push compute users (and AI users) into workstation grade GPUs. In comparison, CUDA is usable and supported even on mid range NVidia GPUs such as RTX 3060. And most of the software around things like AI is done by enthusiasts with gaming PCs, so they write their software for the interface that supports them.

I have an AMD card that I try to use for AI stuff and it's an uphill battle. The most popular workloads such as Stable Diffusion or Llama more or less work. But as soon as you go into other, less mainstream workloads it starts to fall apart. Projects install CUDA versions of torch by default. To make them work you have to uninstall them manually and install the ROCM versions of torch. Then it turns out some Python dependency also uses CUDA, so you also have to find a ROCM fork for that dependency. Then there's some other dependency which only has a CUDA and CPU version and you're stuck.
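
The CUDA-vs-ROCm wheel problem described above can be sketched with a small, hypothetical helper (not part of any project named here). The same "torch" package name can be a CUDA build or a ROCm build, and `torch.version.cuda` / `torch.version.hip` are the attributes PyTorch exposes to tell them apart.

```python
import importlib.util

# Hypothetical helper: report which GPU backend an installed torch
# wheel was built for, or note that torch is absent entirely.
def torch_backend():
    if importlib.util.find_spec("torch") is None:
        return "not installed"
    import torch  # only import if present
    if getattr(torch.version, "hip", None):
        return "rocm"
    if getattr(torch.version, "cuda", None):
        return "cuda"
    return "cpu"

print(torch_backend())
```

A project that assumes the CUDA wheel will break silently on the ROCm one; a check like this is the kind of thing users of AMD cards end up writing by hand.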

jbenjoseph
0 replies
3d10h

I think a lot of it is customer service. If you work in ML you probably have some NVIDIA engineer's number in your phone. NVIDIA is really good at reaching out and making sure their products work for you. I am not a Torch maintainer but I am sure they have multiple NVIDIA engineers on speed dial that can debug complicated problems gratis.

jb1991
0 replies
3d7h

Nowadays OpenCL is dead for some reason...

Where does SYCL fit into this picture, is it a viable replacement for cross-platform GPU access?

nimish
13 replies
3d13h

Yep. AMD and Intel had two decades to come up with a CUDA alternative or even a parallel implementation and both failed to do anything. Incompetence.

oytis
10 replies
3d10h

The alternative to CUDA is called OpenCL. There has been speculation that the poor performance of NVIDIA GPUs with OpenCL compared to CUDA is an intentional anticompetitive practice, but I don't feel confident enough to say for sure.

mschuetz
6 replies
3d10h

OpenCL is an alternative to CUDA just like Legos are an alternative to bricks. The problem with OpenCL isn't even the performance, it's everything. If OpenCL were any good, people could use it to build similarly powerful applications on cheaper AMD GPUs.

jb1991
3 replies
3d7h

OpenCL is just a spec. It's up to companies to implement it in a successful way or not. There is no reason in and of itself that OpenCL can't compete with CUDA on performance. The fact that Apple's Metal, which is pretty good, is actually implemented with a private OpenCL system is proof that the spec is not to blame.

jonas21
1 replies
2d23h

The fact that Apple's Metal, which is pretty good, is actually implemented with a private OpenCL system is proof that the spec is not to blame.

I don't understand. If OpenCL was so good, why did Apple create Metal instead of just using OpenCL?

jb1991
0 replies
2d13h

For similar reasons why Microsoft created DirectX. It allows them to have a system where the software and hardware are more tightly integrated than using a cross-platform spec. It also allows them to situate the API within the context of the operating system and other languages that are used on Apple platforms, making things easier on developers. And at least in that regard, they certainly succeeded. Metal is probably the easiest GPU API offered on any platform. Not necessarily the most powerful, but it’s almost trivial to hand write a compute kernel and spin it off.

talldayo
0 replies
2d22h

Well, performance isn't the issue. Like the parent said, the problem is mostly that CUDA is such a radically mature API and everything else isn't. You might be able to reimplement all the PCIe operations using OpenCL, but how long would that take and who would sponsor the work? Nvidia simply does it, because they know the work is valuable to their customers and will add to their vertical integration.

OpenCL isn't bad, and I'd love to see it get to the point where it competes with CUDA as originally intended. The peaceable sentiment of "let's work together to kill the big bad demon" seems to be dead today, though. Everyone would rather sell their own CUDA-killer than work together to defeat it.

paulmd
1 replies
2d23h

AMD’s first attempt at displacing CUDA-as-a-runtime was called AMD APP (Accelerated Parallel Processing), so this layer did indeed exist. It just sucked as badly as the rest of AMD’s slop.

https://john.cs.olemiss.edu/heroes/papers/AMD_OpenCL_Program...

Bonus points: the rest of the software libraries intended to compete with the CUDA ecosystem are still online in the “HSA Toolkit” GitHub repo. Here’s their counterpart to the Thrust library (last updated 10 years ago):

https://github.com/HSA-Libraries/Bolt

Nvidia had multiple updates in the last year the last time I checked. That’s the problem.

lmeyerov
1 replies
3d6h

And, much more importantly for OpenCL's success: poor compatibility of OpenCL implementations on Nvidia hardware, from both Nvidia and Microsoft at the platform level.

E.g., OpenCL on Nvidia GPUs for Windows is/was missing (I have a years-old GitHub issue with them + Microsoft), which matters for individuals, and Nvidia does not support OpenCL for its core convenience analytics libraries like the RAPIDS Python ecosystem, which is core to its massive data center market. We initially built for compatibility/distribution, but it didn't matter: that gap closed a lot of doors for us as a small ISV choosing what and how to build, and in turn ultimately prevents our customers from buying AMD, Intel, etc.

I'm not up to snuff on whether that qualifies as using its position anti-competitively anywhere, but it's a real market issue

EasyMark
0 replies
2d22h

The horrible/non-support of openCL is why I thoroughly support the government looking into Nvidia's monopoly practices.

roenxi
0 replies
3d6h

That theory doesn't really make sense. Nvidia didn't need AMD's permission to write a high-quality compute interface. Why would AMD need Nvidia's? AMD has their own internal opinions for how well they want compute to work; if OpenCL isn't doing it for them they should have figured that out quickly and then built their own layer as a stopgap. But we've seen what happens when they do that; ROCm. The problem is that they don't really know how and it is a glaring capability gap (didn't know I suppose - I expect they're learning fast).

If anything, the situation with OpenCL suggests AMD and friends were the ones dragging their feet. Nvidia correctly identified that being led by the OpenCL committee would lock them out of billions (trillions, if you believe the stock market) of profit and routed around the blockage rather than compromise their engineering standards.

deadbabe
1 replies
3d10h

How do we know the strategy wasn’t to avoid releasing a CUDA alternative until NVIDIA is wounded by an anti-trust lawsuit and then when they get broken up deploy your CUDA alternative and speed past them? Could be playing the long game.

throwthrowuknow
0 replies
3d9h

That’s not the case at all. AMD and Intel could be making money hand over fist right now if they had products that worked with the ML ecosystem. If they had any magic bullet they’d be idiots to not have used it years ago, they’re missing out on tens of billions of dollars on purpose for what? Even if NVIDIA got broken up their cards would still have massive demand and profit margins. I don’t think you understand how much of a shortage of compute there currently is, anyone who isn’t blatantly incompetent or ignoring ML can make money. AMD and Intel are just fat corporations who didn’t see the potential and thought business as usual would stay number one. Now they’re standing on the sidelines with their dicks in their hands while Jensen is leading the parade.

fnordpiglet
13 replies
3d11h

Some companies, such as Standard Oil, had extraordinarily effective vertically integrated monopolies. The issues came into play when they used their market dominance to secure things like rebate structures with railroads, and made agreements to buy up transit capacity at very favorable rates that left competitors at a serious disadvantage.

A real question might be how Nvidia prioritizes capacity, delivery, and pricing to OpenAI and Microsoft for their superior product that has no meaningful competitor. It’s not an issue that Nvidia, like Standard Oil, had a superior business and product. It’s an issue when their scale and the scale of their adjacent partners lead to agreements that strangle others that need access to the supply chain. For instance, if Anthropic is at a disadvantage to OpenAI for Nvidia capacity due to agreements between them that are mutually beneficial to the exclusion of Anthropic, that’s anticompetitive.

fasa99
3 replies
3d

Another point with nvidia isn't "better product" but "what API does everyone use" and that's CUDA. Owned by NVIDIA. It would be like if SGI, who created OpenGL as an open standard based on their proprietary IRIS GL, had kept it proprietary, and now everyone uses SGI hardware because all games and 3D software use IRIS GL. In that scenario SGI would also be very keen to make sure IRIS GL has quirks that strongly tie it to SGI-only hardware to keep dominance. That seems a little more "okay, you're overleveraging your position" versus "you put out a better product in a fair market".

talldayo
2 replies
2d3h

Another point with nvidia isn't "better product" but "what API does everyone use" and that's CUDA. Owned by NVIDIA.

One could very easily argue this is the status-quo; Microsoft ships DirectX which they withhold from competitors, and Apple does the same with Metal. Both of those monopolize their respective markets (PC gaming and mobile gaming) yet we see zero attempts to reconcile the two.

Because of the perceived hostility between vendors, I would not be surprised to see Nvidia argue that they're already as open as they could possibly be.

fnordpiglet
1 replies
1d14h

Except this isn’t actually even true on the surface.

Apple doesn’t have a monopoly on mobile gaming or even mobile. Windows is more of a monopoly than iOS, but it’s not a monopoly in gaming platforms by a large margin. OpenGL is a totally viable alternative. Etc.

CUDA is definitely a big part of the lock Nvidia has on their market. Other companies could, with investment, outclass them hardware-wise. But because of the API lock there's no point.

talldayo
0 replies
1d4h

OpenGL is a totally viable alternative.

Not really, given that Apple platforms have more or less entirely deprecated the featureset, with no guarantee or expectation of keeping it around.

Other companies could, with investment, outclass them hardware-wise.

I don't even believe you. Who else can compete with Mellanox networking and custom ARM v9 cores? Even Apple doesn't really ship anything worth comparing to Nvidia's server offerings, and they're arguably best-positioned to usurp Nvidia's TSMC friendship.

eru
3 replies
3d8h

Standard oil wasn't a monopoly.

See https://en.wikipedia.org/wiki/Standard_Oil#Legacy_and_critic...

They even lost market share during the time they were alleged of having and exploiting a monopoly:

Although Standard had 90 percent of American refining capacity in 1880, by 1911, that had shrunk to between 60 and 65 percent because of the expansion in capacity by competitors.

See https://fee.org/articles/the-myth-that-standard-oil-was-a-pr... for a more opinionated take.

DarkNova6
1 replies
3d7h

Standard oil wasn't a monopoly.

Bro...

eru
0 replies
3d5h

?

fnordpiglet
0 replies
2d17h

If you read the article rather than selectively quoting it: the antitrust movement against Standard Oil started in the 1880s and culminated in 1911. Likewise, Netscape vs Microsoft concluded long after Netscape was very dead. The fact that litigation takes a long time to unfold is not a material fact in such a case. A large part of Standard Oil's erosion of market share was an attempt to prop up their case against breakup, as well as headwinds from the government's pursuit.

But it’s a bizarre misreading to point at the market share they had lost by the time things hit the fan and conclude they were never a monopoly at their peak.

The specific issues Standard Oil ran up against were their deals with railroads that solely benefited them by virtue of their scale, and their ability to effectively dictate their own pricing for logistics. Likewise they would buy up all capacity on lines crucial to competitors and only partially use it, purely to block market access for competitors. There is also a view that being incredibly efficient can reduce competition in markets adjacent to you. Success in business for an individual company isn’t the only goal of a capitalist state; rather, competition in markets is considered a goal of the state in itself - survival of the fittest implies there’s something to survive against, and an ever expanding monopoly growing into new markets and dominating them starts to retard innovation and competition systemically. You don’t have to agree with the thesis, but that’s the way the system works.

JamesBarney
2 replies
3d

Why is it illegal to choose who you sell to? That's not bullying their competitors out of their own market.

wmf
1 replies
2d23h

If they wanted to, Nvidia could pick winners and losers in AI by withholding GPUs. That would be using their power in one market to influence another market. I haven't seen much evidence of that though.

fnordpiglet
0 replies
2d17h

Except that, since there’s a backlog, if they enter into collusive agreements as a monopoly they are impacting adjacent markets (AI, and companies competing with OpenAI and Microsoft). Also, bulk pricing agreements can be seen as collusive even without inventory scarcity, because they structurally advantage a picked winner.

When you hold a monopoly the same rules don’t apply. That’s why it’s often advantageous to allow material competition in your own market: your adjacency influence can be outsized and cause unintended consequences -even if you normally would have been allowed to do such things-. Those unintended consequences don’t require malicious intent; they just have to exist and be materially a result of your monopoly. The more you maliciously collude, though, the worse the remedy will be for you.

foota
1 replies
3d10h

Say that Nvidia partners with OpenAI because they think OpenAI is the best positioned to succeed (and hence drive increased demand for Nvidia), as opposed to getting some direct financial kickback - is that legal?

fnordpiglet
0 replies
2d17h

Probably not, if Nvidia has a monopoly. Because competitors of OpenAI would have no alternatives, they've killed competition. If doing so is to their advantage, that doesn't make it better; that just makes it rational from their perspective. But regulation against anticompetitive behavior targets behavior that structurally kills competition. Most collusion isn't direct financial transfers but cooperative agreements to dominate adjacent markets for mutual benefit.

trhway
3 replies
3d12h

Nvidia had nothing to do with their failure, unless they had some sort of high-level mole in AMD's driver teams.

I wonder why Jensen wouldn't help his cousin Lisa Su with some good advice :)

In general it isn't surprising that hardware companies like Intel and AMD fail at software - just talk to any programmer at such a hardware shop. What's striking is that Jensen is able to sustain such a prominent and huge software development effort in a naturally hardware company. NVDA is the only hardware company where my programmer acquaintances don't complain about it being a hardware company.

Dalewyn
2 replies
3d12h

it isn't surprising that hardware companies like Intel and AMD fail at software

A lot of what keeps Intel relevant is their prowess in both hardware and software, bringing properly refined products to market when it matters. Intel's failure against Nvidia had far more to do with their business strategy of CPU First missing the broadside of a barn; it's like bringing a knife to a gunfight, of course you're going to get your ass wrecked.

AMD fucking sucks at software though, no disagreement there.

7f4784e3f1
1 replies
3d11h

Intel has some great software like MKL and related libraries, but, speaking from experience, due to its internal politics Intel is unable to innovate in a meaningful way or build a reasonable SW strategy for GPUs.

oneAPI heavily contributed to their GPU failures, IMO. Choosing SYCL, promoting non-existent performance portability, and pouring resources into trying to shove CPU and GPU (and at some point FPGA!) into the same programming model while disinvesting from OpenCL was a big mistake, but it was probably the only way to please both CPU and GPU management.

And the problem is not SYCL per se but the fact that they tried to make it an open standard rather than a way to extract maximum performance from their HW. They needed to build a CUDA alternative where, after jumping through considerable hoops, those who need it can get to actually programming the tensor cores. Not surprisingly, Intel is not using SYCL for their GPU kernels (look at the oneDNN sources). With Raja gone there may be hope, but I'm not holding my breath.

The only thing that they have that is competitive is Habana's stuff which is outside of the oneAPI.

I would not be surprised if AMD has a similar issue of infighting due selling both CPUs and GPUs.

And NVIDIA does not have this problem.

orhmeh09
0 replies
3d11h

What are your thoughts on the move from ICC to the LLVM-based ICX? Does that fit in with this product strategy somehow?

samplatt
3 replies
3d11h

Nvidia had nothing to do with their failure, unless they had some sort of high-level mole in AMD's driver teams.

You're pretty close, but it's one level of abstraction away from the truth: NVidia spends absolutely earth-shakingly stonking amounts of money working directly with the devs at major studios, fixing issues with their drivers as they arise, and in some cases paying studios to commit to talking only to them about the bugs they find.

It's why (and how) they release new driver patches for every AAA release, and part of the reason why it triggers a redownload of the shader cache on a whole slew of Steam games every major version release.

AMD and intel don't have a chance; NVidia is inside the design loop for game development far more thoroughly than other hardware vendors.

reissbaker
1 replies
3d10h

Of course AMD and Intel could just do the same thing. But they don't. That's not Nvidia's fault.

In fact I wish Intel and AMD would do the same! It's great customer service to know that games are going to work well out of the box with Nvidia, because Nvidia spent tons of money working directly with devs on optimization and even per-game bugfixes.

As a longtime PC gamer I wish AMD released better GPUs, but they seem incapable of doing so generation after generation. Intel is getting better at the low end, so there's at least some hope there. But Nvidia is just the best in class, because they put in a huge amount of polish and work.

My understanding is this is the same approach Nvidia took with CUDA: investing a massive amount to make it fast, accurate, and broadly available, while AMD continued to push out barely-tested drivers — for example, see hotz's woes with tinygrad and finally giving up and labeling the "red" (aka AMD) tinybox driver quality as "Mediocre" on their product page, with the "green" (aka Nvidia) tinybox driver quality listed as "Great" https://tinygrad.org/ — and to this day restricts their competing library (ROCm) to specific ultra-expensive server-class GPUs.

Dalewyn
0 replies
3d10h

In fact I wish Intel and AMD would do the same!

Intel at least works very closely with Microsoft to iron out the details for Windows.

WheatMillington
0 replies
2d22h

You're treating AMD and Intel as resource-poor upstarts, which is ridiculous. They have every opportunity to provide the same support as nvidia.

mort96
3 replies
3d9h

There is no moat

Isn't the CUDA API a fairly significant moat?

roenxi
2 replies
3d6h

I don't think so, no. I had an AMD card; I didn't feel like the lack of CUDA was slowing me down. I managed to implement everything I tried on the CPU just fine. My problem was that every approach I tried to implement my own stuff on the GPU using things like Vulkan or OpenCL led to crashes or frustration. In fairness I didn't know how to do basic operations on a graphics card with OpenGL and would be relying on super basic programming material out of AMD with titles like "how to multiply a matrix on an AMD GPU" [0]. They can be forgiven for not supporting amateurs I suppose ... but now I own a Nvidia card and they have stuff like https://docs.nvidia.com/cuda/cublasdx/introduction1.html.

But I stress that the issue there isn't that CUDA has tutorials. It is that I expect the tutorial code not to lead to a crash or system lockup. There is a well supported path to do those basic operations.

If you look at the CUDA webpage [1] you see things like "cuRAND", "cuFFT" and "cuBLAS". That isn't a moat - those are first year software engineering topics. AMD managed to make it look like a moat by not taking the compute market seriously; anyone else can compete at highly parallel compute.

[0] As far as I can tell; doesn't exist.

[1] https://developer.nvidia.com/cuda-zone

hot_gril
1 replies
2d18h

You and a few others have said that OpenCL is buggy on AMD. Is it buggy on Nvidia too?

roenxi
0 replies
2d18h

I don't know. I'd use CUDA rather than experimenting with it; CUDA seems to be where all the investment is and my experience with OpenCL on AMD was so bad I don't feel the cross-platform argument would make any sense.

It is a bit of a chicken-egg problem for me. I couldn't make OpenCL work on an AMD GPU, so I didn't manage to learn that much about OpenCL. At the time I assumed it was just me, but in hindsight I never saw an OpenCL-based approach to a compute problem that worked reliably on my machine so maybe it wasn't.

But I don't think it really matters. The algorithms in the field don't seem to be hard and I never felt like I was struggling when implementing them on the CPU without any special API at all. My issues were conceptually similar to George Hotz's famous rants where he had crashes when running the demo app in a loop. In the experimenting phase I found I couldn't run code on the GPU with any API.

I'm sure the situation improved and part of it was just me; towards the end of my time with AMD I could run stable diffusion inference and it'd work great for 10-40 minutes before the kernel panicked or whatever - so it was definitely technically possible to get a "hello world++" style thing running. But I never felt it was the APIs that were holding me back.

Dalewyn
2 replies
3d12h

not the obvious path to success.

It is if success is defined as deliberate failure.

Personally, a lot of US government actions of the past few decades in hindsight do not look like they had the interests of America or Americans in mind.

pas
0 replies
3d9h

Can you name a few examples please?

HPsquared
0 replies
3d10h

I wonder if anyone connected to this is short NVDA.

uoaei
1 replies
3d11h

there is no moat and they operate in a commodity space that will behave like a normal competitive market

Sure, until you consider the massive difference in accessible capital vs their rivals. Sure, VCs can fund startups, but Nvidia is publicly traded, which is just investment of a different form, and largely a form that you are barely accountable to. And that's on top of all the capital they directly control. That's an incredibly advantageous position to be in, and one that would be too tempting for most to resist exploiting.

paulmd
0 replies
2d23h

AMD actually had more resources than NVIDIA for a good long while (although they did have more places to spread them over, that was their choice to do so), and the balance of power didn't really shift until the 2014-era when AMD really "took their foot off the gas" on R&D. 290X was the last time AMD could be described as "in the lead" and Maxwell opened up a gap that AMD never managed to close.

Raja actually specifically called out 2012 as being the timeframe when that happened, because AMD decided the future of GPUs was iGPUs and not dGPUs, that the latter segment was going to go away entirely as iGPUs got faster and faster. Oops.

https://youtu.be/590h3XIUfHg?t=1956

rafaelmn
1 replies
3d9h

It's interesting that Jensen seems to be selling this "software development is dead to AI" story when that means his profitability moat (software) is dead as well. All AMD needs to do is "hey Copilot rewrite this CUDA code to run on AMD hardware". Somehow that's not happening.

belter
0 replies
3d9h

AMD had 10 years to get competent on software. It's not happening.

jb1991
1 replies
3d7h

self-inflicted apparent inability to multiply matrices on demand

Isn't it one of the fundamental applications of a GPU, whether for ML compute or more commonly for 3D graphics, to multiply matrices? In what way did AMD GPUs fail to do this?

sebzim4500
0 replies
3d7h

He means big matrices.

And when you try to do compute on AMD GPUs they crash your kernel if you look at them wrong.

belter
1 replies
3d9h

It's a bit ironic to talk about lawsuits over monopolies in an emerging AI market, that hardly existed 24 months ago, while ignoring the two worst monopolies in different domains, led by Microsoft and Google.

hot_gril
0 replies
2d18h

They aren't ignoring Microsoft and Google, that's ongoing too. But I agree that they should not be messing with the AI market.

layla5alive
0 replies
3d4h

You're grossly misinformed. CUDA is a huge anti-competitive moat, and that's no accident.

deadbabe
0 replies
3d10h

AMD is more than just cards.

blueboo
0 replies
3d10h

Their competitors didn’t win. That’s not the same as failing, insofar as building technology goes. Business, sure

bayindirh
0 replies
3d8h

Nvidia solved a hard problem and their competitors all failed.

And openly undermined technologies and standards that aimed to do the same in an interoperable way.

This is, in my eyes, enough to hit with a litigation, but we'll see.

WheatMillington
0 replies
2d22h

All of that can be true AND they're engaging in anti-competitive behavior, which should not be tolerated.

EasyMark
0 replies
2d23h

Well, I'm sure the courts will agree with your assertion if it's that clear-cut. That's why it's not left up to just one regulatory agency to define an illegal monopoly. I, for one, am glad they are coming under scrutiny. If there's "no there there" then they'll be just fine.

mdorazio
33 replies
3d13h

It's difficult for me to understand the thinking here. Nvidia is dominant because AMD and Intel completely dropped the ball on investing in frameworks to compete with CUDA. Microsoft... invested in OpenAI when it was clear that the newer batch of generative models had legs. OpenAI isn't really anticompetitive as far as I can tell - they just have the most money with which to vacuum up the most data and hire the top engineers.

Investigating is fine, but if the concern is the direction of AI in general then that's a job for legislation, not antitrust action. I.e. a job that Congress kind of sucks at.

shmerl
27 replies
3d13h

CUDA is pure lock-in, so it only makes sense for antitrust regulators to evaluate if it's causing anti-competitive damage to the market. Hint - it is. Lock-in is never good.

arvinsim
14 replies
3d13h

Wouldn't that logic also apply to, say, Apple?

HWR_14
8 replies
3d12h

Apple doesn't control an entire industry. You can buy an Android phone and fully participate in the mobile phone world without ever talking to Apple. You cannot participate in the AI world without NVIDIA.

shmerl
5 replies
3d12h

You don't need to control the entire industry to cause anti-competitive damage. You just need to have enough leverage that your influence can't be ignored.

Examples of Apple doing that include banning competing browsers on iOS and then pushing the W3C and developers in the direction they want because "you can't ignore us". There is a whole list of bad examples: touch events, fighting against SPIR-V in WebGPU, fighting against adoption of Media Source Extensions (to benefit their own video solutions), and so on.

They very clearly cause a ton of damage to the market by slowing down and sabotaging the progress of interoperable technology to harm competition.

threeseed
2 replies
3d9h

Apple doesn't prevent competing browsers. Just different engines.

And just because they don't rush to incorporate every web feature doesn't make them anti-competitive. Especially when most of the time they are right to do so, because either (a) the feature impacts security or battery life or (b) it is non-standard.

Case in point: SPIR-V, which unless I am mistaken is exclusively controlled by Khronos.

shmerl
1 replies
3d4h

> Apple doesn't prevent competing browsers. Just different engines

And you should know well that it's the same thing, since the issues listed above are defined by the engine. Whatever label you put on top doesn't change the essence of the problem.

Basically, you completely missed the point.

threeseed
0 replies
2d20h

No I stated a fact.

There is far more to a browser than just the engine as we've seen with Arc.

HWR_14
1 replies
3d4h

Apple is the only competitor to Google's 65% (direct) plus reskinned Chromium dominance of the browser market.

Meanwhile, I'm happy with the direction they push W3C and developers. Without them, Google would just push the web to whatever they want (more than they already do).

shmerl
0 replies
3d4h

That doesn't excuse all the garbage Apple is doing for the sake of its lock-in. They should have been blasted by antitrust years ago for banning competing browsers on iOS. And for many other things.

Seems like the EU finally started getting the point before US regulators did.

throwthrowuknow
1 replies
3d9h

And why can’t you just use an AMD GPU? Is it because stores don’t sell them? Is it because motherboards don’t support them? Is it because you can’t rent one from a cloud provider? No, it’s because they don’t fucking work when you try to use them for ML.

HWR_14
0 replies
3d5h

That's not really relevant. Antitrust laws tend to focus on abusing your monopoly position, not having it. Microsoft was sued for using its Windows monopoly to force IE on people, not for having the monopoly by itself.

bmacho
3 replies
3d9h

It applies to game consoles as well. Nintendo/Sony/Microsoft sell hardware and require you to pay THEM when you want to run something on it, which is outrageous.

qeternity
2 replies
3d7h

I don't really understand this argument. There are tons of alternatives. If I want to build and design a product a certain way, and you don't like it, simply don't buy it? This is a genuine question because I realize I am probably in the minority on HN, so please don't downvote just because of that.

I just think that regulations in general should be applied only when necessary, and then be applied with great force.

The user experience for game consoles largely falls back on the manufacturer. If Xbox had loads of unvetted buggy games and malware swimming in the ecosystem, people might be less inclined to buy an Xbox. So Microsoft sets about establishing some control over the ecosystem. Apple's App Store was really the first time that your average Joe could download an application from the internet and not have to worry about viruses. It was a big deal that added a lot of value to the user experience.

bmacho
0 replies
3d7h

It was, probably, but it is not anymore. It forces itself to be a middle man by every possible method, and just leeches on transactions.

Terretta
0 replies
22h43m

It's particularly annoying that those who either don't understand or actively dislike a product Maker's core values or principles poured into that Maker's product, keep trying to tear apart people's choice to buy a product built that way.

shmerl
0 replies
3d13h

Of course it would. And should be applied.

lostmsu
6 replies
3d13h

How can a product on its own cause anti-competitive damage?

shmerl
5 replies
3d13h

When it's used as a tool that makes it impossible or very costly to switch away from whatever the tool is tied to.

To give an example: if CUDA weren't tied to Nvidia hardware, developers could use CUDA on any competing hardware. Being tied limits the choice. That's the essence of lock-in damage. Development tools should be development tools, not ways to control the market.

That's why there is value in something that breaks lock-in - that improves competition.

jonathankoren
1 replies
3d12h

But what prevents anyone from developing an OpenCUDA?

It's been almost 25 years, but this was the crux of the Sun Microsystems v. Microsoft lawsuit. It's why OpenJDK exists. And just three years ago, in Google v. Oracle, the Supreme Court ruled that reimplementing an API can be fair use.

I mean, it feels breathtakingly obvious. Not only is there historical precedent for just doing this, there is legal precedent as well.

shmerl
0 replies
3d11h

Cost. What prevents someone from making something new to fight an incumbent? Cost of entry, most of the time. That's the whole point of lock-in: make switching too expensive and difficult to even bother with. Possible? Yes, but hard enough.

When someone manages to overcome the cost and commoditize what the lock-in was walling off, it improves things. Even better if they manage to break the lock-in itself in the process, sort of like writing a wrapper to run CUDA on non-Nvidia GPUs (what ZLUDA is doing).
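The wrapper idea is simple to sketch: expose the incumbent's interface, back it with different hardware. A toy illustration in Python (the class and method names here are hypothetical; real ZLUDA reimplements the CUDA driver ABI in a shared library that sits where libcuda would):

```python
# Toy drop-in-replacement sketch: callers were written against one
# vendor's interface; a shim provides the same interface on top of
# a different backend. Names are illustrative, not a real API.
class CudaLikeDevice:
    """The interface existing code was written against."""
    def malloc(self, n): raise NotImplementedError
    def launch(self, kernel, *args): raise NotImplementedError

class OtherVendorBackend(CudaLikeDevice):
    """Same interface, different implementation underneath."""
    def malloc(self, n):
        return bytearray(n)      # stand-in for a device allocation
    def launch(self, kernel, *args):
        return kernel(*args)     # stand-in for a kernel launch

dev = OtherVendorBackend()
buf = dev.malloc(16)
result = dev.launch(lambda x: x * 2, 21)
print(len(buf), result)  # 16 42
```

The code calling `dev` never needs to change, which is exactly why such shims threaten lock-in and why incumbents tend to discourage them.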

DaiPlusPlus
1 replies
3d12h

If CUDA wouldn't have been tied to NVIDIA hardware...

Initially I thought that AMD didn't offer a CUDA reimplementation out of NIH-syndrome (as it would be a marketing coup for NVIDIA), but then I saw that NVIDIA seem to be actively trying to shut-down independent attempts at running CUDA on non-NVIDIA hardware: https://www.techpowerup.com/319984/nvidia-cracks-down-on-cud...

jonathankoren
0 replies
3d12h

They should lose. As I said in another comment, SCOTUS just ruled in Google v. Oracle that reimplementing an API is fair use, and there's a HUGE parallel with Microsoft and Sun Microsystems over Java. In the end, Sun settled for Microsoft to stop saying "Java compatible" plus $20 million. The community, on the other hand, got OpenJDK four years later.

Just do it. Fuck the lawyers. When you win, you toss them some cash and they'll thank ya for it.

https://www.cnet.com/tech/tech-industry/sun-microsoft-settle...

1123581321
0 replies
3d12h

There would have to be more to it. If cuda is just the software that Nvidia customers use, or nvidia hardware is chosen because cuda is better software, it's not going to pass a market welfare test nor would monopolistic practices be found. Expensive switching costs per se aren't considered damage to welfare.

mschuetz
2 replies
3d10h

Sure it is, but there is nothing stopping AMD or Intel from building a working alternative to CUDA, so how is it anti-competitive? The problem with OpenCL, SYCL, ROCm, etc. is that the developer experience is terrible: cumbersome to set up, difficult to get working across platforms, lacking major quality-of-life features, etc.

akamaka
1 replies
3d7h

One example of how they might be abusing their monopoly is by forcing data centers to pay an inflated price for hardware that is similar to consumer GPUs: https://github.com/DualCoder/vgpu_unlock

_aavaa_
0 replies
3d3h

If this is abuse, I have bad news for you about AMD and their arbitrary restriction of rocm for consumer GPUs.

faeriechangling
1 replies
3d2h

Making a must-have product doesn’t constitute being an anti-competitive monopolist. Coca Cola isn’t a monopolist because classic coke just tastes so good and yet they refuse to share the recipe with Pepsi Cola.

Anti-competitiveness is more about leveraging the must-have nature of CUDA to shelter Nvidia from competition. E.g. an OEM can't sell professional laptops that support CUDA if it sells AMD gaming laptops, just to illustrate. Or Coca-Cola refusing to sell Coke to a store that also sells Pepsi, causing the store to not stock Pepsi.

Monopoly law isn’t meant to simply punish success and exclusivity.

shmerl
0 replies
3d

It's not about being a must-have. It's about the must-have being tied to them. That tying is what harms the market, by preventing interoperability. It only makes sense to prevent such traps.

I.e. a must-have can be a must-have without the artificially added tying (lock-in). Then it's not causing harm.

faeriechangling
1 replies
3d13h

Nvidia has done many anticompetitive things over the years even if what you’re saying is generally true. Gameworks comes to mind.

paulmd
0 replies
2d23h

It's not anticompetitive to try and foster fields of software in which your competitors are underinvested/underperforming/disadvantaged. AMD is trying to do the same thing right now - AI/NPU is an area where they think they have the lead over Intel, so guess what, they're selling AI now. AI hardware, AI software...

As far as typical Gameworks examples go, I am specifically thinking back to tessellation: everyone freaked out about Crysis 2 (despite the fact that it didn't actually use max-LOD or no-culling during actual gameplay) or HairWorks. All of a sudden it stopped being a thing, around the Polaris/Vega era, right? You know why that was?

Because AMD finally fixed their tessellation performance. Their hardware was deficient and underperforming, and the problem went away when they fixed their hardware.

The same is true of the software today. AMD's problem isn't that NVIDIA is stomping on their fingers, it's that ROCm crashes when you run the tutorial/sample programs.

shrimp_emoji
0 replies
2d22h

OpenAI: neither open (they're closed) nor A (they train on the output of natural intelligence) nor I (okay, this is a stretch, but it's a schizophrenic black box that nobody can explain or debug deterministically, so some "intelligence" you got there nyah).

moose_man
0 replies
3d13h

It seems like the investigations are linked so it could be they are acting in concert in a way that prevents or disrupts competition. Say for instance, OpenAI was using Microsoft money to buy chips and Nvidia was favoring it in a way that hurt other AI startups.

kersplody
0 replies
3d9h

It's fine to be a monopoly because you have the best product. It's not OK to use that monopoly position to stifle competitors.

So has enough anticompetitive behavior occurred to distort the market? Unless I'm missing something, I'm just not seeing this investigation having legs.

osnium123
11 replies
3d13h

I don’t understand why the US government is moving to cripple NVidia. It’s not Jensen’s fault that he’s a visionary CEO who anticipated this market and invested in CUDA. Jensen also didn’t have anything to do with the fact that Intel had incompetent leadership for 15 years before Pat Gelsinger joined in 2021.

vlovich123
8 replies
3d13h

It’s questionable whether or not he anticipated this specific market vs he just positioned the company to eat up sales to high energy particle physics & similar super-computer applications and then got lucky that this accelerator also turned out to work pretty well for AI. At best, he saw that he needed a more general compute platform for GPUs but OpenCL was a thing so he wasn’t alone in recognizing this. There’s even an argument to be made that NVidia intentionally undermined OpenCL to cause it to fail; since NVidia has the fastest cards & mind share, if they make CUDA perform better than OpenCL, they encourage developers to buy into that development environment and thus lock out the standard from adoption.

bcatanzaro
3 replies
3d1h

People have been saying that GPUs randomly happened to be good at ML since at least 2014 when Nervana was founded. It was clear 12 years ago to anyone paying attention that DL was revolutionizing the world and that NVIDIA GPUs were at the center of the revolution. Whatever random chance factored into NVIDIA's success is outweighed by the decade+ of pedal-to-the-metal development NVIDIA has undertaken while its competitors decried AI as an overhyped bubble.

I have been there for this history, working on ML on GPUs at NVIDIA for a few years before Jensen decided to productize my little research project, CUDNN.

vlovich123
1 replies
2d23h

The history page for CUDA is pretty accessible [1]. It originated from experiments at Stanford in 2000 with Ian taking on leadership of CUDA development in 2004.

In pushing for CUDA, Jensen Huang aimed for the Nvidia GPUs to become a general hardware for scientific computing. CUDA was released in 2006. Around 2015, the focus of CUDA changed to neural networks.[8]

Credit to Jensen for pivoting, but I recall hearing about neural networks trained on CUDA in Google tech talks in 2009 and realizing they would be huge. It wasn't anything unique to realize NNs were a huge innovation, but it did take another 5 years for the field to mature and for it to become clear that GPUs could be useful for training and whatnot. Additionally, it's important to remember that Google had a huge early lead here and worked closely with Nvidia, since CUDA was much more mature than OpenCL (due to intentional sabotage or otherwise) and Nvidia's chips satisfied the compute needs of that early development.

So it was more like Google leading Nvidia to the drinking well and Nvidia eventually realizing it was potentially an untapped ocean and investing some resources. Remember, they also put resources behind cryptocurrency when that bubble was inflating. They're good at opportunistically taking advantage of those bubbles. It was also around this time period that Google realized they should start investing in dedicated accelerators with their TPUs because Nvidia could not meet their needs due to lack of focus (+ dedicated accelerators could outperform) leading to the first TPU being used internally by 2015 [2].

Pretending that Jensen is some unique visionary who saw something no one else in the industry did is insane. It was a confluence of factors, and Jensen was adept at steering his resources to take advantage of it. You can appreciate Nvidia's excellence here without pretending Jensen is some kind of AI messiah.

[1] https://en.wikipedia.org/wiki/CUDA

[2] https://en.wikipedia.org/wiki/Tensor_Processing_Unit

bcatanzaro
0 replies
2d18h

I was at NVIDIA at that time working on ML on GPUs. Jensen is indeed a visionary. It’s true as you point out that NVIDIA paid attention to what its customers were doing. It’s also true that Ian Buck published a paper using the GPU for neural networks in 2005 [1], and I published a paper using the GPU for ML in 2008 while I did my first internship at NVIDIA [2]. It’s just not true that NVIDIA’s success in AI is all random chance.

[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1575717 [2] https://dl.acm.org/doi/pdf/10.1145/1390156.1390170

talldayo
0 replies
3d

Yeah, you really get this sense when you page through their research history on the topic: https://research.nvidia.com/publications?f%5B0%5D=research_a...

CUDA won big because it made a big bet. Were it that OpenCL was ubiquitous and at feature-parity with CUDA, maybe there would be more than one player dealt-in at the table today. But everyone else folded while Nvidia ran the dealer for 10 long years.

paulmd
2 replies
2d23h

It’s questionable whether or not he anticipated this specific market vs he just positioned the company to eat up sales to high energy particle physics & similar super-computer applications and then got lucky

it's actually specifically not questionable because it's been covered by journalists etc.

https://www.newyorker.com/magazine/2023/12/04/how-jensen-hua...

Within a couple of years, every entrant in the ImageNet competition was using a neural network. By the mid-twenty-tens, neural networks trained on G.P.U.s were identifying images with ninety-six-per-cent accuracy, surpassing humans. Huang’s ten-year crusade to democratize supercomputing had succeeded. “The fact that they can solve computer vision, which is completely unstructured, leads to the question ‘What else can you teach it?’ ” Huang said to me.

The answer seemed to be: everything. Huang concluded that neural networks would revolutionize society, and that he could use CUDA to corner the market on the necessary hardware. He announced that he was once again betting the company. “He sent out an e-mail on Friday evening saying everything is going to deep learning, and that we were no longer a graphics company,” Greg Estes, a vice-president at Nvidia, told me. “By Monday morning, we were an A.I. company. Literally, it was that fast.”

Around the time Huang sent the e-mail, he approached Catanzaro, Nvidia’s leading A.I. researcher, with a thought experiment. “He told me to imagine he’d marched all eight thousand of Nvidia’s employees into the parking lot,” Catanzaro said. “Then he told me I was free to select anyone from the parking lot to join my team.”

vlovich123
1 replies
2d20h

I mean, your article literally suggests that CUDA was a misstep. It failed to bring supercomputing to the masses until Nvidia got bailed out by AI, which happened to need exactly that kind of compute.

In marketing cuda, Nvidia had sought a range of customers, including stock traders, oil prospectors, and molecular biologists. At one point, the company signed a deal with General Mills to simulate the thermal physics of cooking frozen pizza. One application that Nvidia spent little time thinking about was artificial intelligence. There didn’t seem to be much of a market.

Despite the snub, Hinton encouraged his students to use cuda, including a Ukrainian-born protégé of his named Alex Krizhevsky, who Hinton thought was perhaps the finest programmer he’d ever met. In 2012, Krizhevsky and his research partner, Ilya Sutskever, working on a tight budget, bought two GeForce cards from Amazon. Krizhevsky then began training a visual-recognition neural network on Nvidia’s parallel-computing platform, feeding it millions of images in a single week. “He had the two G.P.U. boards whirring in his bedroom,” Hinton said. “Actually, it was his parents who paid for the quite considerable electricity costs.”

Basically Hinton thought CUDA was a better platform and kept encouraging his acolytes to utilize it with NVidia completely ignoring it. For what it’s worth ChatGPT and LLMs are not the start of the story. I remember seeing Hinton present his research in 2009 and thought it was a revolutionary step forward. It took a much longer amount of time for NVidia to notice that they should be marketing CUDA to AI. They did so around the same time that Google started building their own inference accelerators for Tensorflow. So at that point NVidia was getting multiple signals already from a big customer that there’s something here.

paulmd
0 replies
2d20h

You seem to be tangentially attempting to refute a claim I didn't make / responding to a comment I didn't make? Are you lost?

Yes, it's probably true that in 2009 or whatever it wasn't on NVIDIA's radar as being a major R&D priority. It was by 2014-2015, which is what I cited.

You claimed it couldn't be proven that Jensen specifically targeted AI as a priority, and it's clear that by 2015 or so he was, at a time when AMD was not even taking GPGPU itself seriously as a development priority let alone focusing on a specific niche or field like AI.

It's questionable whether or not he anticipated this specific market ...

again: no, it is not, literally he's been focused on this specific market for a decade and we have the receipts.

if your point is that he didn't see it fifteen years ago instead of ten years ago... ok I guess you win then, discussion concluded.

radicalbyte
0 replies
3d12h

It seemed like a no-brainer to me to open the hardware up to more general-purpose compute.. but I'm a software guy who started on machines which were completely open and can "do magic" thanks to that.

bmacho
0 replies
3d9h

Nvidia always tries to be anticompetitive, creating and exploiting vendor lock-in, instead of giving back to the world like, e.g., Sun did by open-sourcing everything.

It is definitely Jensen's fault.

I think it is a law of nature that shitty and harmful behaviour pays off, but that's not a problem if we can correct it from time to time. With antitrust inquiries, for example.

Vespasian
0 replies
3d10h

Laws in this field do not make a moral judgement on how a company got where they are.

They look at what you are doing with the power you actually have. Once you are the size and importance of MS, OpenAI or Nvidia, the rules change and everyone knows that.

zooq_ai
4 replies
3d12h

what's with democrats and destroying success? It stems from their fundamental philosophy that if you are successful then you must be evil or have done something illegal.

threeseed
3 replies
3d9h

DOJ operates independently of the White House.

And Merrick Garland is about as bi-partisan a choice as you could pick.

zooq_ai
2 replies
3d1h

The DOJ and FTC are appointed by the White House, which makes sure they align with its ideology.

There is a reason Lina Khan goes after companies while her Republican counterpart won't.

Thank God the supreme court will kneecap all these useless agenda-driven agencies

threeseed
1 replies
2d20h

They are recommended by the White House and confirmed by Congress.

It is far closer to a bi-partisan choice than you are making it seem.

zooq_ai
0 replies
2d20h

Confirmed by the Senate, which Democrats have controlled since 2021.

null_investor
2 replies
3d9h

People attribute great leadership to Satya, but to me it seems to be about being willing to fight with regulators, and funnily enough the regulators haven't been doing much recently.

Just have a look at the size that Google, Microsoft, Meta and others have gotten to, not because they have an outstanding product (like CUDA), but because they can use their dominant market position to buy smaller businesses and monetize their products as part of a bundle (WhatsApp, the Activision Blizzard acquisition, and the list goes on). Or just build a new product and shove it at their customers, like with MS Teams/Threads.

The US government seems to have actually aided those US companies to keep doing their monopolist practices, with many other countries around the world fighting against them alone, like Europe.

The situation is so bad right now that Europe itself is distancing itself from the US to potentially fight even harder against those companies, which bring barely any jobs to Europe but have market share in the 90-percent range and beyond.

I'm pretty certain that companies in Europe could build their own Facebook, Instagram, WhatsApp, their own Android fork, etc. They don't do it because for a long time Europe has been a US pseudo-vassal.

vinyl7
0 replies
3d7h

After WW2, the US was the manufacturing superpower of the world, and it made the US very wealthy. China has since taken that spot, leaving the US without a major play in the global economy. Starting with Obama, the US invested heavily in tech via regulatory dismemberment and fiscal policy (ZIRP). The tech monopoly is intended so that the US controls a market in the global economy.

Synaesthesia
0 replies
3d8h

Yes, the US government is definitely assisting US tech companies and their global monopoly. Look at how they went after Huawei.

exabrial
2 replies
3d13h

Ffs, how about Apple, Google, Facebook? Quit jacking around.

passwordoops
0 replies
3d13h

Please go to your favorite search engine and copy the following:

-DOJ v Google

-DOJ v Facebook

-DOJ v Apple

-DOJ V Amazon

moose_man
0 replies
3d13h

Google and Apple are currently in antitrust litigation, and the government is looking into Facebook's acquisitions of Insta and WhatsApp.

robotnikman
1 replies
3d12h

Microsoft and OpenAI I might see something, but Nvidia? The only reason they are where they are right now was by investing a decade beforehand into CUDA

surfingdino
0 replies
3d11h

It's about using their status today and how, not where they were yesterday.

pciexpgpu
1 replies
3d8h

I am surprised the path to AGI isn’t first paved with a simple request to the AI gods to replace cuda.

On the other hand, Microsoft does seem to have used mere mortals to write a driver/gpu independent way to run workloads in azure (Maia/Triton).

Microsoft is truly unstoppable no matter which way this whole AI race goes.

talldayo
0 replies
3d

I mean, I own an Nvidia GPU and I'd rather CUDA get replaced too. Very few people besides Nvidia shareholders truly love the status-quo here.

But really, the blame has to be put on the rest of the industry. If you want to kill CUDA, you have to attack it where Nvidia won't defend; make something open source and cross-platform. The problem is getting everyone to sit at the same discussion table. Microsoft has a half-dozen accelerator programs in the works and benefits from the ecosystem fracture. AMD is desperate for anyone but themselves to do their work for them. Apple is hedging their bet on piecemeal acceleration while trying their hardest not to fall behind. Google is trying their hardest to distance themselves from hardware and focuses mostly on software. And Nvidia couldn't care less what everyone does, because they're shipping the largest servers out of everyone mentioned.

Microsoft is truly unstoppable no matter which way this whole AI race goes.

I can foresee one way they get crushed. If OpenAI keeps dealing in risky business with celebrities and intellectual property, Microsoft might be forced to divest one way or another. Without their OpenAI deal, Microsoft is in a much worse position, more similar to Google. If everyone else is willing to ante up on an open-source CUDA alternative, then Microsoft's window to exploit the demand for a solution closes. Slim chance that everyone buries their hatchets, though.

bicepjai
1 replies
3d12h

Sometimes we just need to understand that we do not have all the information before we give a subjective opinion.

ineedaj0b
0 replies
3d12h

we all work in the industry. you can tell if something is fishy you don't need to wait.

where there is smoke there is fire

teh_infallible
0 replies
3d11h

Sounds to me like some regulators saw dollar signs and wanted a cut.

surfingdino
0 replies
3d11h

Microsoft seems to have found themselves with more than they bargained for when they decided to go all in on AI. OTOH, Microsoft is quite familiar with antitrust inquiries, because they've been there before https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....

maeln
0 replies
3d8h

Seeing some of the comments here, you could make a conspiracy theory that NVidia/MS/OAI used their generative algorithms to write comments defending them on HN :p

All jokes aside, I know that antitrust is not just about monopoly, but if we just focus on that part, it really does not matter if a company became a monopoly fair and square by being better/smarter/luckier than everyone else; you still probably don't want it to be a monopoly. Making sure it still has competitors is usually beneficial in the long run. A monopoly might act "fair" for a long time (although it would be hard to know, since you won't have anything to compare it with), but its power allows it to turn against the consumer whenever it wants with little to no consequence. As an example, you can't really complain about an inquiry against NVidia and then also complain about their pricing and/or how hard it is to get some of their cards (especially the ones meant for training), because they don't care about you if you don't have a good relationship with them (and a lot of $$).

All in all, pretty sure this inquiry will lead nowhere, or just a slap on the wrist, as is usually the case. It is just a signal to those company to be careful and to avoid using their dominant position on one market to push themselves unfairly in another.

jb1991
0 replies
3d7h

Some talk of OpenCL vs. Cuda in this thread, but I'm wondering what, if anything, does SYCL have to offer in this area? Do Nvidia GPUs support SYCL?

ineedaj0b
0 replies
3d12h

initially tech was an apolitical asleep bear. the bear is waking up and both sides are not happy there is a bear in the room.

the bear needs to pick a side or even better... be a bear. be competent. bears are strong and can command a room themselves.

you have to accept responsibility you're a bear now.

blackeyeblitzar
0 replies
3d11h

Microsoft for sure needs to be split up, fined for what they did with Teams, and regulated for how they are bundling AI. I’m less concerned about Nvidia

XiS
0 replies
3d12h

Can't wait for al the upcoming internal tech emails, hehe

SilverBirch
0 replies
3d9h

It's interesting to me that Microsoft and Nvidia are being lumped together in this reporting. To me there seem to be two very different issues. Nvidia managed to execute on a combined software/hardware product category that has yielded fantastic results and is difficult to copy. That's great. It's nothing to do with antitrust. Are they charging a premium for scarce products now? Sure, but all you can do is damage the market by attacking them; there's no real way to increase supply. At best you can limit their profits, just handing margin over to their customers.

The Microsoft situation is very different, however. You've got this behemoth that has basically just decided to take its magic money tree and use it to essentially buy growth through a huge number of complex deals designed to circumvent antitrust. Sorry, but "we're not buying you, we're just handing your investors a tonne of money, letting you shut down your company, and then hiring all your employees" is bullshit. So is "we're investing in you <oh, and here are all these terms about how we own X, Y, Z of your technology and you must buy A, B, C of our resources>".

You can make a credible argument that Nvidia uniquely earned the success they're seeing with the AI stuff, I don't think you can make that argument with Microsoft.

Now, if you can join the dots (Nvidia has a cosy deal to give MSFT GPUs, and small companies then have to deal with MSFT because they have all the access to compute), then I think that would be a slam dunk: leveraging Microsoft's cloud business to hoover up AI startups.