
AMD records its highest server market share in decades

treprinum
95 replies
5d2h

Still wondering how AMD has only ~25% of desktop market share given recent Intel issues.

MostlyStable
29 replies
5d2h

My lab is trying to purchase a new office computer for workloads heavy enough to justify a mid-to-high tier CPU. I recommended that we buy an AMD machine, primarily because of the current Intel issues.

The first problem I ran into is that the majority of companies selling pre-built computers do not offer an AMD option (and pre-built is a must. We are not allowed to use grant money to buy all the parts and assemble ourselves, which would have been my preferred option). And when they do, they are often limited in selection and might only be older generation or budget-tier CPUs.

I did eventually manage to find an AMD option that fit our needs and my boss forwarded it to billing for approval. We got significant pushback, with IT recommending various Intel machines that they claimed were "better" (although also several hundred dollars more expensive), but which were also available through the university's preferred sales partners.

We could probably fight and get them to allow us to purchase what we want (the fact that they have to "allow" it when 100% of our funding is non-university dollars is a separate, ridiculous issue), but honestly, at this point we have probably already spent more staff time on this than the real-world differences, even including the microcode problems, are worth.

I'd bet that a lot of corporate contexts, and even home enthusiast customers, operate on the same momentum: they have always bought Intel and so they will continue to buy Intel, even in the absence of some strong belief that Intel is "better".

pdimitar
8 replies
4d23h

pre-built is a must. We are not allowed to use grant money to buy all the parts and assemble ourselves, which would have been my preferred option

Huh. Why? This seems arbitrary.

CoastalCoder
3 replies
4d23h

I spent probably $20k on labor costs alone to order 4 developer laptops for my team when I worked for the U.S. Navy.

Procurement rules often exist for good reasons, but they're one of the reasons I moved back to the private sector.

pdimitar
2 replies
4d22h

Apologies for my ignorance, but can you describe in practical terms what it actually means to spend $20k on labor costs just to order 4 developer laptops, please?

mandevil
0 replies
4d20h

That's the number of hours it took to do all of the paperwork necessary for the acquisition, times the hourly rates charged to the government (as a general rule for USG cost-plus contracts, 1.5x the developer salary to cover benefits, contractor overhead, etc.) by the people doing the paperwork.
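
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is a made-up assumption (base rate, hours per task); only the hours-times-loaded-rate structure comes from the comment above:

    # Hypothetical back-of-the-envelope estimate of procurement labor cost.
    # All figures below are invented for illustration; only the structure
    # (total paperwork hours times a ~1.5x loaded hourly rate) comes from
    # the description above.

    base_hourly_rate = 75.0                  # assumed base pay, $/hr
    loaded_rate = base_hourly_rate * 1.5     # ~1.5x for benefits + overhead

    paperwork_hours = {
        "sanctions/embargo certifications": 40,
        "competitive bidding and appeals": 80,
        "procurement exception requests": 40,
        "misc. forms and approvals": 20,
    }

    total_hours = sum(paperwork_hours.values())    # 180 hours
    total_cost = total_hours * loaded_rate         # 180 * 112.50 = 20,250

    print(f"{total_hours} hours x ${loaded_rate:.2f}/hr = ${total_cost:,.2f}")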

The one I remember best from when I was working for gov't contractors was that there were forms certifying we were currently complying with all current US trade embargoes and sanctions regimes. So we had to investigate and make sure that we were complying with them all - the State Department has a searchable list of all the people and companies that are under various levels of sanctions. But that paperwork, times all of the other forms we had to fill out, times a competitive bidding process (with possibility for appeals) to make sure the government wasn't getting ripped off, etc.

This can happen in private industry as well: I remember once seeing a whole bunch of level 2 managers in a conference room battling it out over exactly which laptop to buy for ~15 developers who needed special extra-beefy hardware for a project, and commenting to my L1 manager that the price difference between the two laptops was less than the cost to the company of that meeting, so it would have been wiser to just buy the more expensive laptops and forget the argument.

CoastalCoder
0 replies
4d22h

Soliciting multiple bids, getting exceptions to normal computer procurement rules, navigating the other more mainstream rules, and some other stuff, IIRC.

Happily it was a while ago and I purged most of the details from my brain.

metalliqaz
0 replies
4d23h

because the bean counters track assets, which means computers, not various parts.

jpeloquin
0 replies
4d20h

It often is arbitrary. Also, the rules as enforced by the awardee institution often differ from the funding agency's written rules. The people who are responsible for reviewing purchasing are only responsible for making sure no rules are broken, not with producing anything, so they tend to interpret the rules conservatively and just say "no" if they're not sure. The IT department also may try to block anything they didn't pick even if the funding agency would allow it. (Speaking from experience with university bureaucracy.)

It's not out of concern for cost-benefit ratio (overhead from parts selection and assembly) or anything like that. Getting something useful done is solely the researcher's problem.

bluGill
0 replies
4d22h

How much time is spent choosing and assembling parts? The less time you spend on tasks like that, the more time you have for something useful (or so it is assumed).

CJefferson
0 replies
4d23h

Because if you assemble it yourself and it doesn't turn on, it's your job to debug which part is failing. If you buy pre-built, you get a warranty for the whole thing.

We had some people at work buy parts and it was fine for a while, then it became a nightmare: unstable machines where we could never figure out which part was to blame. It isn't worth paying devs' salaries to have them debug their faulty hardware.

vlovich123
4 replies
5d1h

Why not start your own company (even non profit) that offers prebuilt machines by buying the parts you want and assembling them?

dijit
3 replies
5d

Doing it for his current employer would likely open him up to liability regarding embezzlement.

But it’s not a bad idea, though the margins on PC building for the general public are really low.

vlovich123
2 replies
5d

I don't see how a non-profit where the cost of the machine is the cost of the parts only could even realistically be embezzlement.

ineptech
0 replies
4d23h

It's not that it would be embezzlement, it's that it pattern matches to embezzlement. People with purchasing authority generally can't choose vendors they are involved with because that's such a popular way to steal money.

bluGill
0 replies
4d22h

That is a very common way to embezzle money. Well, not the PC business specifically - generally you pick something that looks higher margin and/or harder to get into, but really anything will work so long as it looks like a legitimate business that does something real. The scam is easy: buy something cheap, and charge a lot of $$$ for labor or support costs. It is very easy to hide overhead that can go back to you.

You also typically are not directly involved in the business - that is both too much work and too obvious. Instead you get your brother-in-law/uncle/cousin/... to start the business so it looks independent. Then the relative pays for expensive family vacations, football games and other such things that you happen to be invited to along with all your friends in common. (That is, you need to make it look like money is never given to you, but because things are paid for you, you save a lot of money.)

The other common thing to do is a charity. If a politician's family is in any way involved with a charity you should assume it is a scam - the charity might otherwise do good work but the family member is there as a way to hide bribes. (again, the money never reaches the politician, but it reaches family members who the politician wants to do well). This of course makes life hard for honest family members of politicians - everything they do looks like the politician (not them!) asking for a bribe.

etempleton
4 replies
4d23h

I think there is a lot of trust of Intel. It is the only processor company most people know.

For IT departments I think a lot of it is that they want you to have processors they know work with their software stack. In addition, Intel has its own software stack that works with its processors and includes features such as remote desktop and security. Intel, like Microsoft, has become very good at catering to the corporate IT crowd.

pdimitar
2 replies
4d21h

For IT departments I think a lot of it is that they want you to have processors they know work with their software stack

I thought the world solved this problem with compilers like 30 years ago.

Also you're vastly overestimating the knowledge of the average computer users. Many of them can't even tell the difference between the CPU and GPU.

aljgz
1 replies
4d21h

Sorry, even you are overestimating. A lot of users think the CPU is the case.

p_l
0 replies
4d19h

It actually used to be a somewhat valid term, yes. The case of the computer was the Central Processing Unit, and the rest was peripherals.

minkles
0 replies
4d20h

It’s also that most IT people doing corp stuff don’t give a crap or even know what half the hardware is these days. I mean, our IT department bought developers laptops with 8GB of soldered-in RAM and no expansion, then bitched at the dev team when they were told to piss off.

f1shy
3 replies
5d1h

Do I remember correctly that Intel was paying companies not to offer AMD? Like Dell?

pdimitar
0 replies
4d21h

Don't be naive. This only means that they wised up and have made sure they haven't been caught for 15 years now.

Alupis
0 replies
5d

The effects live on, unfortunately. Justice was not served as an outcome of that case.

Today, people's mindshare is still largely with Intel, mostly due to nearly two decades of associating brand-new computers, and performance, with Intel, since that was the only option offered 90% of the time.

Think about your typical non-technical person shopping for a computer. "I want Intel, they've always done well for me".

Sn0wCoder
2 replies
4d14h

System 76 has some AMD desktop / laptop options https://system76.com/desktops

Sounds like it's a no-go anyway, but just in case this is not the one you already found.

dagw
1 replies
4d9h

But even there you see the problem. The Ryzen CPUs are only in their lower end budget systems, and the Threadrippers are only in their expensive high end systems. If you need a 'mid-level' system (like we buy where I work), with say around a dozen cores, an RTX 4090, and 128 GB of RAM, then you have to go with Intel even here. Their cheapest Threadripper computer with matching specs is literally $3000 more.

paulmd
0 replies
4d1h

the ryzen 7/9 model is literally more expensive than every intel system they offer on that page?

and yes, threadripper is unfortunately quite expensive now, that's a decision AMD made back in 2019 with the 3000 series to go heavily upmarket on that. In most cases if you are a customer who is in the market for threadripper, you are better off just using epyc now, unless your highly-parallel workload also needs very high max clocks.

renewiltord
0 replies
5d

You need to go to an integrator and get an Epyc-based machine. You can do $7k-ish per machine.

diabllicseagull
0 replies
4d21h

I had a similar experience a few years ago. At the time, enthusiast Ryzen CPUs had just started outperforming the Xeons that Dell and others were exclusively pushing. A sub-$1000 box edging out a ten-thousand-dollar workstation wasn't enough to move the needle until Threadripper workstations started to show up. Even then I wasn't able to push our IT to consider AMD as a serious replacement. Maybe that's partially why we still see ten Intel options for every AMD one.

acchow
0 replies
4d22h

the majority of companies selling pre-built computers do not offer an AMD option

Don’t you just need to find 3-4 sellers to do a shopping comparison? Don’t need anything close to the “majority”

belval
16 replies
5d2h

Well there are several reasons:

- Intel is still competitive for gaming and productivity (albeit at a somewhat higher power consumption)

- AMD does not have a good budget offering (competing with the i3 at ~$120)

- People don't refresh their desktop as often as a laptop, especially for gaming where realistically an i7-7700k from 2017 coupled with a modern GPU will comfortably output 60fps+ at 1440p.

Drew_
14 replies
5d1h

i7-7700k from 2017 coupled with a modern GPU will comfortably output 60fps+ at 1440p

I can say firsthand this is not true for any modern MP game.

In general, I hate these "it does X FPS at Y resolution" claims. They're all so reductive and usually provably false with at least a few examples.

Zambyte
5 replies
5d1h

On the contrary, I can say firsthand that it is true (with a 3700X, which is two years newer, but on benchmarks it is a toss-up between the two). What modern GPU are you using?

WarOnPrivacy
3 replies
5d1h

I generally rule out all Intel CPUs between the 4770 and 11th Gen.

The exception is when I need dirt-cheap + lower-ish power consumption (which isn't gaming)

Ekaros
1 replies
5d1h

And now 13th and 14th Gen... at least until it is absolutely certain the issues are fixed... So that leaves 12th?

baq
0 replies
5d

12700k here, absolute unit of a CPU.

Zambyte
0 replies
5d

Your preference does not change whether or not it can achieve > 60fps at 1440p in modern multiplayer games when paired with a modern GPU

Drew_
0 replies
4d23h

The 3700X's 8 cores are a generational leap above the 7700k's quad core for modern games. They aren't comparable at all.

I ran a GTX 1080 and then an RTX 3080. The performance was not very good in modern games designed for current gen consoles like pretty much any BR game for example. Some games got high FPS at times but with low minimum FPS.

Macha
2 replies
5d

I would be shocked if a 7700K with a modern GPU does not get 60fps at 1440p in Rainbow Six, Rocket League, LoL, Dota, CS, Fortnite, etc.

jajko
0 replies
4d23h

I game on an archaic i5-4460 (4 cores, max 3.2GHz) paired with an RTX 2070 Super and 16GB; I can run literally everything new in the 50-100 fps range, and coupled with a 34" 1440p VRR display there's not much more to desire for single player. I've been running this for maybe 6 years with only a graphics card and corresponding PSU change.

E.g. Cyberpunk 2077 with everything maxed apart from the RTX stuff is rarely below 80fps, and with DLSS on it looks good. Baldur's Gate 3 slightly less but still smooth. God of War the same. Usually the first thing I install is some HD texture pack, and so far there has never been any performance hit to speak of. Literally, why upgrade?

Consoles and their weak performance have effectively been throttling PC games for the last few years (and the decade before); there's not much reason to invest in a beefy, expensive, power-hungry, noisy setup for most gaming, just to get a few extra shiny surface reflections.

Drew_
0 replies
4d23h

These are all old games so I wouldn't be surprised either

ineptech
1 replies
4d23h

It so happens that Tom's Hardware recently tested this exact question (how much performance you lose pairing a modern GPU with an older CPU, compared to the same GPU with a modern CPU). Full results at [0], but the short answer is that a $900 RTX 4080 with a 2017 CPU will generally do 60+fps at 1440p in most games, but as low as 55 in a few.

0: https://www.tomshardware.com/pc-components/gpus/cpu-vs-gpu-u...

Drew_
0 replies
4d20h

Not super surprising for single player games since they're usually much easier on the CPU than multiplayer. I was not getting minimum 60 FPS in Warzone for example.

replygirl
0 replies
4d23h

you'd be amazed what runs at 60fps in 4k if you simply turn down the settings

belval
0 replies
4d22h

not true for any modern MP game

Pretty big claim

at least a few examples

Reasonable claim

Comfortably doesn't mean everything will be at 60fps; it means that most things will be at 60fps, so someone with a 7700k will not be feeling pressure to change their CPU (the entire point of the thread).

abhinavk
0 replies
5d1h

A big chunk of gamers just play games like Valorant, CS, Fifa and CoD which usually run much better.

saltcured
0 replies
4d22h

Aren't there Ryzen models as the budget offering that overlaps with i3?

WarOnPrivacy
7 replies
5d1h

Still wondering how AMD has only ~25% of desktop market share given recent Intel issues.

I sometimes pick Intel over AMD because I don't have AMD's CPU naming conventions memorized - and I'm in a hurry.

Panzer04
2 replies
4d17h

Especially in laptops, AMD has a downright insane naming scheme designed to confuse buyers - so this is honestly kind of understandable..

Sakos
1 replies
4d9h

And Intel is somehow better? They're all terrible, including Nvidia, and it always takes a ridiculous amount of research to figure out what's what.

WarOnPrivacy
0 replies
4d4h

You're mostly right. During the post-4770 wasteland years, I found I often couldn't ID mobile chips using my mental desktop crib notes.

jhickok
1 replies
5d

Like you don't have time to research the CPUs for a personal machine? Man I usually go overboard reading up on the hardware going into my own desktop.

WarOnPrivacy
0 replies
4d21h

Like you don't have time to research the CPUs for a personal machine?

No. It's when I have to quickly line up hardware for my customers.

I'll have 5 projects in play and I get an email from a client saying they need a workstation or notebook for a new hire ASAP. I can skim Intel CPUs among Lenovo offerings fairly quickly.

Back when I was a system builder, I knew every capability of every chip. When Intel was hobbling virtualization (VT-x, etc.), I knew which chips were capable. In those days, I often chose AMD to save time because every AMD CPU was fully capable.

dukeyukey
1 replies
5d

Hah, I felt that urge last time I bought a PC - I could easily make a rough comparison of Intel performance vs price, but with AMD I had to look up half the models.

WarOnPrivacy
0 replies
4d21h

I could easily make a rough comparison of Intel performance vs price, but with AMD I had to look up half the models

That's sort of what I'm up against.

AMD makes excellent products. If I have a day or two I can ask my son for recommendations. He knows AMD lines off the top of his head. If I have 10 minutes, I go with what I know.

mdasen
6 replies
5d1h

Honestly, this is one of the reasons I think Intel's stock is likely undervalued at this point (not that I want to invest my money with management I don't feel confident in).

Intel is facing a lot of threats and they rested on their laurels. At the same time, Intel is still 75-80% of the market - even as AMD started killing Intel in performance for several years. I'm less confident than Intel's press releases that Intel will hit its fab milestones, but I'm also not sure that AMD can really take advantage of that. AMD is reliant on TSMC's capacity and, as we've seen recently, AMD often has to wait a year or two before it can use TSMC's latest processes. Even AMD's upcoming Zen 5 processors will be using TSMC's 4nm N4P process (with future Zen 5 processors expected to be 3nm). If Intel is able to get 18A out the door in 2025, Intel's fabrication will certainly be competitive with what AMD can get (and likely quite a bit better).

From what I've read (and I might be wrong), it sounds like Intel is moving to High NA EUV soon (now-ish/2025) while TSMC is waiting until around 2030 (https://www.tomshardware.com/tech-industry/manufacturing/tsm...). Is TSMC falling into the trap of letting a competitor out-invest them - the same trap that Intel fell into ~2012-2022? Probably not, but it does seem likely that the advantage TSMC had will taper away over the next couple years - assuming Intel isn't full of crap about their progress.

Even if TSMC maintains a lead against Intel, can AMD take marketshare? Specifically, can AMD produce as many chips as demanded or are they somewhat limited by how much capacity they can get from TSMC? For example, if people want X number of AMD server chips, does AMD manufacture 25% of that number and 75% of customers buy Intel chips because that's what is available due to AMD's lack of fab capacity? I'm not saying that is the case - it's a genuine question for someone that knows more about the industry than I do.

We do know that TSMC has limited capacity. Apple tied up TSMC's 3nm capacity such that Apple is the only company shipping 3nm parts. It looks like Intel will be the next company shipping 3nm parts, with Lunar Lake using TSMC's N3B process this fall.

I'd also note that Intel's fabs aren't unlimited either. Meteor Lake moved to Intel 4 in December 2023, but Emerald Rapids (server) remained on Intel 7 at the same time. Intel can't just churn out as many Intel 4 chips as they'd like.

I too wonder how/why AMD is still below 25% marketshare given Intel's issues. I wonder if AMD has limitations on how much fab capacity it can get which is stifling attempts at gaining marketshare. At the same time, it looks like Intel is on a course to fix its issues - if one can believe what Intel is saying. If Intel is able to ship 18A chips in 2025 at volume, it seems like it should fix the big issue that Intel has been facing - inferior fabs. AMD has been able to out-do Intel in part because it's been able to rely on TSMC's better fabrication. Is this just a blip where Intel looks bad 2020-2024? That's a genuine question because I don't really know enough about the industry. Maybe someone will say that Intel's PR is mostly bluster and that they're still being run by beancounters trying to optimize for short-term profits. Maybe someone will say that Intel is investing in the right direction, but there's still a lot of risk around pulling off High NA EUV for 2025. Maybe someone will say that Intel is likely to hit their milestones and will regain the title of best-fab.

However, given that AMD hasn't been able to really hammer Intel's marketshare, it feels like Intel has more staying power than I'd think. Again, like the parent comment, it's always puzzled me that AMD hasn't gained a ton of marketshare over the past 5 years.

treprinum
3 replies
5d

All these points about Intel are just waiting for an answer to "Is Intel capable of changing its prevailing culture or not?". There is a reason why Intel fell behind and it wasn't due to a lack of engineering geniuses inside. You might bet on the stock market and see. AMD obviously has a SPOF in TSMC and its limited capacity.

mdasen
2 replies
4d23h

Yep, that's a lot of the essence. I would point out a few things, though.

Unlike many other cases of a company getting a bad culture, Intel hasn't seen its marketshare destroyed. They still have time to turn things around. Even if things go in a very bad direction, they could become a fabless company like AMD and compete for TSMC's capacity on equal terms with AMD.

If you look at a company like Yahoo, Google had already become dominant when it was trying to turn itself around. AMD hasn't even hit a third of the market.

Not only that, but it's probable that AMD isn't capable of becoming even 50% of the market due to capacity constraints. If AMD can make 25-30% of the market's processors and no more due to capacity constraints, even if Intel processors are inferior, they're still going to be the bulk of sales. By contrast, Google had basically infinite capacity to dominate search, Facebook had basically infinite capacity to crowd out MySpace and other rivals, etc. The point is that Intel has a lot more runway to turn itself around if it's essentially guaranteed 70% of the market.

In some ways, this resembles the Boeing/Airbus situation. While Boeing has been in a bad place recently, Airbus can't take advantage of it. That gives Boeing a long time to change its trajectory and is probably why Airbus is only worth around 15% more than Boeing. Even if airlines want to buy Airbus planes, they can't. Likewise, even if OEMs and datacenters want to buy AMD chips, maybe they can't because AMD doesn't have the capacity to make them.

In some cases, you need to rapidly change the culture and direction of a company because you're quickly dying. In Intel's case, it seems likely that they could waddle along for a long time if AMD isn't able to quickly grab marketshare. I think it's easier to gradually change the culture over a longer period of time if you have the ability to stave off collapse in the meantime.

For example, maybe we'll look back on Intel in a decade and say "yea, the change started around 2020 when Intel realized it had fallen behind. It continued to fall behind while it tried to correct things, but then turned a corner and in 2025 they'd regained the fab crown." Are we already close to that culture change showing up in the public view?

I just think I don't know enough about the industry to really know. Maybe someone would say "Intel has already done the culture change and we've seen that in things like Intel 4 and you're really going to notice it in 2025 and 2026 with Intel beating TSMC to High NA EUV." Maybe someone would say "No, intel hasn't changed its culture and you can see that Intel 4 is a niche where they can't even make enough volume to put out a server product with it." Or maybe Intel is focused on the more medium-term with 18A and not wanting to waste resources on a less-than-stellar interim process.

I guess my question isn't just whether Intel is capable of changing its culture, but whether that's already happened. The public perception is a lagging indicator. Most of the public didn't have a bad perception of Boeing 1996-2017. We only recognized the problems in hindsight. Likewise, we recognized Intel's shortcomings in hindsight. We won't recognize whether Intel has changed for years after it has already changed.

So, has Intel's culture already changed (and we're going to see some pretty awesome stuff from them in 2025 and 2026) or has Intel been on the same path to mediocrity for the past 4 years (and they've just gotten better at press releases announcing that they'll be better soon)?

Wytwwww
1 replies
4d21h

maybe they can't because AMD doesn't have the capacity to make them.

Or Intel just cut prices on their server CPUs so that they are sort of competitive despite much higher power usage per core?

acchow
0 replies
4d21h

If the Generative AI growth continues, then physical real estate and power usage will be top priorities.

The data center shortage means hyperscalers want as much performance as possible per server (and ideally the best performance/watt, because cooling is also a concern). This is why AMD has double the ASP of Intel in the server market.

qwytw
0 replies
4d22h

Isn't Meteor Lake made by TSMC (at least some parts like the GPU)? Which would explain the capacity constraints.

high_na_euv
0 replies
4d22h

Ain't High NA EUV for 14A?

Der_Einzige
5 replies
5d1h

To this day, Intel is still deeply ahead in software support for AI.

MKL, oneAPI, Intel optimizations for sklearn, for PyTorch, etc. (there are many more). No equivalents exist from AMD, but NVIDIA has an equivalent for all of these (e.g. CuPy).

Using AMD for AI stuff only makes sense if you don't believe you will ever want to do any kind of ML-like workload on or with the CPU's assistance. That is true for many folks, but those it's not true for will buy blue until Intel no longer exists.

treprinum
4 replies
5d

MKL runs on Ryzens just fine, and the newest Ryzens all have AVX-512, which all new desktop/laptop Intels lack. "AI" on the CPU is a gimmick; everybody does it on the GPU anyway (outside the 128GB RAM M3 Max, which has some LLM use, but that's still due to shared memory with the GPU).

0xDEADFED5
2 replies
4d20h

I'm doing fast AVX512 embeddings on Ryzen, and fast ONNX AVX512 reranking on Ryzen. Though I do the actual heavy lifting on GPU, doing all the RAG stuff in CPU is helpful. AI on CPU is still mostly a gimmick, but as models get smaller and more capable it's becoming less of a gimmick.
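
For what it's worth, a minimal sketch (assuming Linux, where /proc/cpuinfo is available) of how you might check for the AVX-512 flags before enabling an optimized embedding/reranking path:

    # Quick check (Linux only) for the AVX-512 foundation flag before
    # enabling an AVX-512-optimized embedding/reranking code path.
    def has_avx512():
        try:
            with open("/proc/cpuinfo") as f:
                flags = f.read()
        except OSError:
            return False
        # avx512f is the foundation subset; extensions such as avx512_vnni
        # or avx512_bf16 vary by chip and should be checked separately.
        return "avx512f" in flags

    print("AVX-512 available:", has_avx512())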

treprinum
1 replies
4d19h

Yeah, but the "AI" on CPU means some basic NPU doing (sparse) matrix multiplication, not AVX512.

0xDEADFED5
0 replies
2d18h

I'm handling dense, sparse, and colbert vectors generated by BGE-M3 model on CPU, pretty sure that counts as "AI"

Der_Einzige
0 replies
5d

Certain workloads aren't total "gimmicks" (i.e. small embeddings models), and often those CPU optimizations are on things that absolutely do take advantage of the CPU well (i.e. graph analysis) or on traditional ML algorithms like random forests (intel optimizations for sklearn) which are still important and do still run well on CPUs.

Not to mention that most people have more RAM than they do VRAM.

Running MKL on your datacenter-grade AMD processor technically violates some license, and it relies on tricks like this - https://danieldk.eu/Posts/2020-08-31-MKL-Zen

Technically, if Intel finds you allowing this as a cloud provider, they have grounds to sue you. So yes, you can use MKL on a Ryzen at home, but you put yourself at risk of lawsuits by doing this in a data center at scale.
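
For context, the linked post is about forcing MKL's fast code paths on AMD. A minimal sketch of the older environment-variable override (it only worked on MKL releases before 2020.1, which removed the variable; the post covers workarounds for newer builds, and as noted above this may run afoul of the license):

    import os

    # Sketch only: MKL_DEBUG_CPU_TYPE worked on MKL releases before 2020.1,
    # which removed it; the linked post covers workarounds for newer builds.
    # It must be set before MKL is loaded, i.e. before importing numpy here.
    os.environ["MKL_DEBUG_CPU_TYPE"] = "5"   # force the AVX2 code path on AMD

    import numpy as np   # assumes a NumPy build that is linked against MKL

    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    c = a @ b   # this GEMM now hits the AVX2 kernels instead of the generic path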

bugbuddy
4 replies
5d2h

Because Intel is still top dog when it comes to single threaded performance even against Zen 5.

treprinum
3 replies
5d2h

Yeah, but that's at the cost of a 50% probability of a CPU failure within a year. Moreover, it doesn't hold true in games vs X3D CPUs.

bugbuddy
2 replies
5d2h

That’s hyperbole. There’s no data to back such a claim.

Cold_Miserable
1 replies
4d6h

Indeed. Intel is trash right now. No AVX512 thus AMD is infinitely faster.

bugbuddy
0 replies
3d12h

Define infinitely

Rinzler89
4 replies
5d2h

Intel N100 NUCs with 16GB RAM and 512GB SSDs can be had for less than $200 on Amazon. What does AMD have in that price point?

NUCs with AMD APUs are nearly double the price. AMD is top dog in the high performing PCs but not everyone buys high performance PCs.

tracker1
1 replies
5d1h

There aren't really any AMD options as cheap as the N100/95/97 mini-PC options. That said, there are some truly great options in the sub-$600 mini-PC segment. The Beelink SER8 with an 8845HS is pretty great, to say the least. Bought one a few weeks ago for HTPC/emulation usage running ChimeraOS, and it's done a very good job, though I might replace the BT/WiFi controller with something better supported. It's only hung up once, but still annoying.

I wouldn't consider it "high performance" but it's definitely good performance and far better than any Intel mini-pc options I've looked into.

theropost
1 replies
4d20h

This is true, though an AMD 5700U mini PC might go for $300, or less if you can find a bargain. However, it will hands-down dominate an N100: 2.5 to 3x faster, not to mention the APU (Vega 7) for video/gaming capabilities. I was always Intel, but AMD has really stepped up in the last few years. Their APU lineup is top notch, and pretty incredible for the mini PC market (and laptops). The newer gen are great too, but for the best bargains, the Zen 3 lineup is the way to go.

Panzer04
0 replies
4d17h

There are a lot of options from both manufacturers above the bottom end. Alder Lake CPUs (e.g. the 12600H is similarly priced with 12 cores) are pretty solid in the same price class as Zen 3. I think both have pretty competitive mini-PC offerings (with Intel usually offering more cores) in the midrange.

Definitely agree that if you plan on actually using the mini PC, springing for a real mobile chip with 3x the perf of N100-class processors is almost certainly worth it.

jeffbee
2 replies
5d2h

Intel sells a complete package with highly integrated laptop things such as Wi-Fi (CNVio2) and webcam (IPU6) for a very low price, and they have things that fleet buyers want, like vPro.

treprinum
1 replies
5d2h

The article had a separate "mobile" category for that where AMD had <20% market share.

jeffbee
0 replies
5d2h

While true, people still want things like Wi-Fi and vPro in a desktop. And USB4 and what not.

graton
2 replies
5d2h

Prebuilt PC sales are likely the cause. I'm sure AMD has a much higher share among those who build their own computers, but that is most likely a small share of the overall market of computers sold.

jtriangle
0 replies
5d1h

Most computers sold are prebuilts, and most computers sold are for commercial use, i.e. office bees, so video-game-oriented PCs are a relatively small corner of the market.

adrian_b
0 replies
5d2h

The Amazon statistics show much higher sales of AMD CPUs than of Intel CPUs, so this supports your belief about DIY computers.

sweca
1 replies
5d2h

I just bought an AMD chip for my new server build for this reason. But then I found out about the Sinkclose vulnerability...

WarOnPrivacy
0 replies
5d1h

Before a Sinkclose package can be deployed, it needs some multi-stage exploits to succeed first. I think its value is against exclusive targets.

I've had more than 0 clients in my career that might draw that kind of attention but not a lot more.

bangaladore
1 replies
5d

My next CPU will be AMD. Why I'm not upgrading yet:

1. I generally use the same CPU for 2-3 years. I have a 11700k right now.

2. AMD hasn't released the beefier versions of its new CPU. High-end consumer Intel chips have many more cores/threads (P + E), while AMD's top right now has 16 threads.

#2 is why I'm holding off for now. I'm waiting for the X3D version of their chip with ~2x the core count and more threads.

BeefWellington
0 replies
3d20h

#2 is just incorrect, the top end ThreadRipper (7995WX) has 96 Cores / 192 Threads.

Intel has nothing that comes close to it.

xboxnolifes
0 replies
4d22h

Because, like you said, the issues are recent.

pathless
0 replies
5d2h

Intel has had very good marketing for decades, and they've also surrendered to marginless sales just for the sake of appearing dominant. AMD's strategy of late seems to be for higher margins, but lower share.

mrweasel
0 replies
4d23h

Depending on your definition of recent - it's too recent to really have made any significant impact yet.

If you're thinking recent as in the last few years, then because the performance and power issues with Intel aren't large enough to make any difference for regular desktop use. Most will be on a 3 - 5 year upgrade cycle, so your new office PC just needs to be better than a five year old one, which it will be.

What it might do is damage the used market. Prices on Intel-based refurbished PCs need to drop, by a lot, now that we know that many/most of those CPUs are damaged beyond repair and will continue to degrade.

jtriangle
0 replies
5d1h

We changed our current order for workstations to AMD parts. They arrive today or tomorrow I think.

Odds are it would have been fine, they were just i3 worker bee boxes, but, it's not really worth the risk when there's a viable alternative.

deelowe
0 replies
4d21h

Because AMD's general-purpose offerings are pretty poor in comparison to Intel's (e.g. perf per watt, productivity, etc.). AMD only really shines in high-end applications like gaming or high-core-count cloud computing.

dagw
0 replies
5d1h

It's really hard to buy pre-built high end desktop computers from big brands with the latest AMD CPUs.

JasonSage
0 replies
4d20h

I think it's going to change a lot in coming years, but it's early days.

I feel like the writing has been on the wall for Intel's downward trajectory and AMD's substantial improvements for a few years now, but I think a lot of trust and brand loyalty has papered over the signs for Intel enthusiasts.

On a hardware news site covering the recent news, I saw for the first time in Intel buyers... shock. Incredulity. Disappointment. It's the first time I've seen self-admitting Intel fans come out in numbers questioning their beliefs and perceptions.

I think in the next 4-year cycle as home PC builds turn over and Intel buyers are coming back to market, there's going to be a large influx of AMD converts. The Intel disaster lately will have turned erosion into an exodus. Maybe not a monumental one. But the AMD numbers are going to grow seriously over the next half decade.

jmakov
26 replies
5d3h

Will be interesting to see how Intel recovers, if at all. Actually, is there today (or has there been in the last year or two) any reason to go with Intel on desktop or server?

foobiekr
6 replies
5d2h

Strategically, in global terms from a US-centric point of view, it would be very good if Intel could recover, as having the entire world reliant on TSMC would be bad for everyone. Also there really isn't any other org like Intel at the moment in the broader sense in a few ways (mostly advanced research).

That said, Intel really has nothing to fall back on. Internally, it is very much like many of the other big companies - run by executive whim papered over with justifications that are tissue thin. There are a lot of companies like this - Cisco, Intel, whatever the hell is left of IBM, Nokia, Lattice, NetApp, etc. - that will probably die this decade if they fail to reboot their executive and managerial cultures.

I have a lot of friends who have been in the great parts of Intel (the CPU design teams and, believe it or not, fabrication) and the crap parts (everything else, especially networking and storage [what's left of it], except maybe the perennially suffering ICC team). Pat has failed to make things materially better because the system that is the body of Intel is resilient to disruptions of the sort he is trying. A massive replacement and flattening of culture is required there and he has not been able to execute on it.

Turnover is coming for all the companies I list above. If I had to go out on a limb and suggest a longterm survivor from the list above, it might be Lattice. Otherwise everyone listed is driving hard straight to the ground internally, whether the market sees it or not.

Miraste
3 replies
5d2h

IBM has been heading toward the ground for thirty years, and they never seem to hit it. Whatever problems they have, they are both diversified and deeply entrenched in all kinds of government and S&P 500 systems and processes, to the point that their performance barely matters. Intel doesn't have anything like that to paper over their deficiencies.

robotnikman
1 replies
4d23h

Plenty of places still running AS/400 on crazy high support contracts, and no plans of moving on.

marcosdumay
0 replies
4d19h

crazy high

Not high enough that it can't go higher.

Maybe somebody should make a betting site around when the next order-of-magnitude increase on the rent of AS/400 hardware will come. With 6 months granularity.

IMO, the people without a plan are crazy. But yeah, there's plenty of them.

bob1029
0 replies
4d14h

IBM runs much of US banking and insurance.

Their hardware is actually fairly remarkable. If you have a certain kind of mission, it really makes sense to build a bunker around some mainframes.

insane_dreamer
0 replies
4d18h

IBM has successfully pivoted to consulting and software solutions these days. It also did $9B in net profit in 2023, while Cisco did $12B in net profit. I'd say that's still pretty healthy.

bgnn
0 replies
4d10h

Oh, Nokia is a good example. It's mainly a network infrastructure company now, similar to Cisco, Juniper and Huawei. It's not in the news like Intel, but it's dying a similar death. The stories I hear from friends working there remind me of Intel culture.

IMHO the best outcome for these companies is to go the spin-off route. Philips did that with a lot of business units they had and struggled to run (ASML, NXP, their semiconductor fabs, etc.) and all of those businesses turned out to be successful after 4-5 years of tough times. Freshly spun-off companies sometimes went for selling a whole project and its dev team to others, or in some cases those projects became spin-offs in their own right. A lot of dead weight (mid-to-high management layers) got fired in the process. It was the right thing to do. Now Philips is mainly a medical device company, tiny compared to what it used to be, and it is still struggling.

epolanski
6 replies
5d2h

I think Intel still takes the crown when it comes to maximizing every single fps.

Also, for a while, it was more competitive in the mid range than AMD's equivalent from a price/performance point of view.

tracker1
5 replies
5d1h

This is where I think AMD really messed up on the current releases... If they'd called the 9600X a 9400 and the 9700X a 9600X, and charged like $100 less, I think the reception would have been much stronger. As it stands, it's more about wait and see. You can use PBO and get like 10-15% more performance at double the energy, but even then compared to the rest of the market, it just comes up short.

The 9800X3D and 9950X(3D) options will really carry this cycle if they're good and the pricing adapts appropriately. I'm not holding my breath. I've been holding on with my 5950X since its release and will likely continue unless the 9950X(3D) is compelling enough. Not to mention the DDR5 memory issues with larger sizes or more sticks; 96GB is probably enough, but at what cost.

adrian_b
4 replies
4d23h

The 9600X and 9700X are not a useful upgrade for gamers, but they are excellent for anyone else.

Their energy efficiency is much better than that of any previous x86 CPU, and for those who use applications that benefit from AVX-512, desktop Zen 5 brings a bigger increase in throughput than any new CPU of the last five years (the last time a desktop CPU doubled the throughput of its predecessor was in 2019, with Zen 2 over Coffee Lake Refresh). Also, in applications like Web browsers or MS Office, which prefer single-thread performance, they beat even the top 6.0 GHz Raptor Lake Refresh, which is much more expensive and consumes several times more power.

Moreover, as shown by TechPowerUp, the performance of Zen 5 under Windows is suboptimal in comparison with Intel in programs that use a small number of active threads, like games, because the Windows scheduler uses a policy that favors power savings over performance, even though that is a bad choice for a desktop CPU. That means the scheduler prefers to make both threads of a core active while keeping the other cores idle, even though the right policy (which is used on Intel) is to begin using the second threads of the cores only after all the cores have one active thread.

This should be easy to correct in the Windows scheduler, which does the right thing for Intel: first one thread is made active on each P-core, then all the E-cores are made active, and only if more active threads are required are the second threads of the P-cores made active too.
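
A toy illustration of the two placement policies being described (purely schematic, not how the actual Windows scheduler is implemented):

    # Toy model of the two SMT placement policies described above.
    # Each core is a [thread0_busy, thread1_busy] pair; both functions
    # place one new software thread and return the updated core list.

    def place_spread_first(cores):
        # Policy described for Intel: one thread per core first,
        # second SMT threads only once every core has work.
        for core in cores:
            if not core[0]:
                core[0] = True
                return cores
        for core in cores:
            if not core[1]:
                core[1] = True
                return cores
        return cores

    def place_pack_first(cores):
        # Policy described for Zen 5 under Windows: fill both threads of
        # a core before waking the next core (saves power, costs fps).
        for core in cores:
            if not core[0]:
                core[0] = True
                return cores
            if not core[1]:
                core[1] = True
                return cores
        return cores

    cores = [[False, False] for _ in range(4)]
    for _ in range(3):
        place_spread_first(cores)
    print(cores)  # three cores each running one thread, fourth core idle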

tracker1
3 replies
4d23h

I think it would depend on the cost of electricity, as the performance for most is similar to the prior gen, which costs significantly less.

Alupis
2 replies
4d21h

Electricity cost is perhaps the last thing a PC gamer considers when choosing a new CPU. That's if they consider it at all...

That would be like asking what MPG a rebuilt 1969 Ford Mustang Boss 429 gets... it was not built for efficiency - it was built for performance.

tracker1
1 replies
4d18h

FTR, I drive a 2016 Dodge Challenger RT 392 Hemi Scat Pack Shaker... No the MPG isn't my top concern... that said I don't actually drive it that much as I WFH.

Is a 1969 Ford Mustang Boss 429 really something to compare an x600 class CPU to all the same? Even then, a gamer is more likely to get 6000/6400 memory, properly balance the fclk and run with PBO. I still wouldn't suggest the 9600X, given the current pricing compared to other options... Just like I wouldn't recommend people pay $20k markup on a Ford Bronco.

Alupis
0 replies
4d

I was more making the point that a person building/choosing a computer for gaming is highly unlikely to consider efficiency at all. People who tweak timings, voltages, etc. are typically doing so to eke out marginal performance benefits - not to become more efficient.

So, much like a classic muscle car, people buy them to go fast, not to be efficient.

ISV_Damocles
4 replies
5d3h

I personally only use AMD (excepting one test machine), but Intel does have the best single-thread performance[1] so if you have some crufty code that you can't parallelize in any way, it'll work best with Intel.

[1]: https://www.tomshardware.com/reviews/cpu-hierarchy,4312.html...

adrian_b
2 replies
5d2h

The new Zen 5 has a much better single-thread performance than any available Intel CPU.

For instance a slow 5.5 GHz Zen 5 matches or exceeds a 6.0 GHz Raptor Lake in single-thread performance. The faster Zen 5 models, which will be launched in a couple of days, will beat easily any Intel.

Nevertheless, in a couple of months Intel will launch Arrow Lake S, which will have a very close single-thread performance to Zen 5, perhaps very slightly higher.

Because Arrow Lake S will be completely made by TSMC, it will have a much better energy efficiency than the older Intel CPUs and also than AMD Zen 5, because it will use a superior TSMC "3 nm" process. On the other hand, it is expected to have the same maximum clock frequency as AMD Zen 5, so its single thread performance will no longer be helped by a higher clock frequency, like in Raptor Lake.

listic
1 replies
5d1h

in a couple of months Intel will launch Arrow Lake S, which will have a very close single-thread performance to Zen 5

Will they? Intel Innovation event was postponed "until 2025"[1], so I assumed there is not going to be any big launch like that in 2024, anymore? Arrow Lake S was supposed to debut at Intel Innovation event in September [2]

[1] https://www.intel.com/content/www/us/en/events/on-event-seri...

[2] https://videocardz.com/newz/intel-says-raptor-lake-microcode...

adrian_b
0 replies
5d

The Intel Innovation event was canceled to save money. This has nothing to do with the actual launch of future products, which are expected to bring more money. Intel can make a cheap on-line product launch, like most such launches after COVID.

Since the chips of Arrow Lake S are made at TSMC and Intel does only packaging and testing, like also for Lunar Lake, there will be no manufacturing problems.

The only thing that could delay the Arrow Lake S launch would be a bug so severe that it would require a new set of masks. For now there is no information about something like this.

Rohansi
0 replies
5d2h

Unless your workloads are not very cache optimized like most games, then AMD's 3D V-cache CPUs take the lead.

etempleton
2 replies
5d3h

For some specific workloads. Many people in the Plex community will point to Quicksync as being superior for media encoding than anything that AMD offers.

The next couple of years will be interesting. If Intel can pull off 18a they will very likely retake the lead, however if 18a underwhelms Intel will be in real trouble. There is also the possibility that AMD chooses Intel as a fab in the future, which would feel a bit like when Sonic showed up on a Nintendo console (for some reason it just feels wrong).

russelg
0 replies
4d8h

The Quicksync preference seems mostly to be because, up until the 7000 series, AMD didn't even include integrated graphics on their chips (barring the G variants), meaning no video encoder.

I would think the grounds are much more even now.

darknavi
0 replies
4d23h

I was going to say Quicksync. I just spec'd and bought a new server build, and the iGPUs on a few-year-old Intel chips are hard to beat for price and TDP.

treprinum
1 replies
5d2h

N100 & N305 have no competition in their own space IMO. For the rest I don't see any advantage outside momentum and "nobody is going to be fired for using IBM" approach.

hangonhn
0 replies
4d21h

This is fine for Intel actually. What really matters for the fortunes of Intel is their foundry service now. Intel could very well make those chips for NVidia if they can figure out the foundry service.

wmf
0 replies
5d1h

Intel could come back once all their chips are on 3 nm later this year.

llm_nerd
0 replies
5d2h

At some price points and for some scenarios, or if a particular pre-built has everything you need for the right price and happens to have Intel, sure there are plenty of situations where Intel is the right choice.

It is amazing to see how Intel has faltered. This was all laid in stone a decade ago when Intel was under delusions that its biggest competitor was itself, and the only thing it cared about was ensuring that their lower power and lower cost devices didn't compete with their cash-cow high end products. They optimized for the present and destroyed their future.

GeekyBear
10 replies
4d22h

I'm happy to see an AMD with plenty of income that they can invest in the development of future products. We've seen how little advancement we get when Intel doesn't have effective competition to spur them along.

AtlasBarfed
8 replies
4d20h

I'm no Intel fan, but the last time AMD had market leadership in anything, which was under Hector Ruiz, they basically completely stopped innovating. I don't think that'll happen under Lisa, but AMD's track record with market leadership is that they just kind of sit on it. We'll see what happens this time around.

Of course, last time Intel was still sitting on a metric pile of fab technology and other stuff that was still in the pipeline. Intel appears a lot more starved for such potential innovations now.

The path forward appears to be radical ISA switches or something besides yet another x86 node shrink. But Intel's track record for doing anything outside of x86 processors is very very very very poor.

Deukhoofd
4 replies
4d4h

The path forward appears to be radical ISA switches or something besides yet another x86 node shrink

What I'm worried about is that the path they'll choose is the Apple route, and begin releasing SoCs, hurting repairability and upgradability.

paulmd
2 replies
4d2h

"what I'm worried about is that the path they'll choose is the NVIDIA route and begin releasing incremental updates while segmenting/upcharging based on RAM capacity".

like the null hypothesis is: maybe these things aren't "Intel things" or "NVIDIA things" but "the way the market works"/"the way the technology is progressing".

maybe there are good engineering reasons to package memory, maybe there are good reasons vram capacity stalled out, maybe there are good reasons nobody is racing to launch a $179 8GB card anymore.

but people would rather believe that every single company is just choosing to leave revenue and marketshare on the table by "joining oligopoly" or something, idk

the null hypothesis is that maybe the engineers are actually making good decisions and this is just what they can deliver given the constraints they're operating under, but conspiracies feel better.

this idea that "tech is taking a direction I don't like, therefore it's malicious/obviously marketing-driven and there's no technical reasoning or merit" idea is really noxious and toxic, and unfortunately quite pervasive. there are good engineering reasons to package memory. you will see more of it in the future.

hakfoo
1 replies
3d15h

There was a time when video card memory was sort of an anti-segmentation thing. It used to be so unimportant that manufacturers would design cards with more RAM on them than the GPU could use effectively.

For example, there was no technical justification for the 4GB GT710-- any software that could use 4GB of video memory would crap out for the anemic core performance. But it was a great market differentiator-- we'll give you a larger number for the same basic price.

paulmd
0 replies
2d15h

yep, this really wound to a close once the Titan series started. There were some models of 780 6GB - but there were never any models of 780 Ti 6GB, because that would have been basically a GTX Titan Black without the professional drivers. And then you had the actual workstation cards with double that much again (Quadro K6000 is 12GB).

Another one people don't realize - the reason AMD put 8GB on the 390 series is because it was cheaper. The Hawaii die uses 512b, which is 16x GDDR modules... in 2014/2015 the price for 4 gigabit (=512 megabyte) modules was actually cheaper than the price for 2 gigabit (=256 megabyte) modules. Production naturally tends to roll over to the newer, higher-capacity modules over time, and the 2Gb modules were falling off the tail end of the production curve.
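
As a quick sketch of the capacity arithmetic referenced here (assuming standard 32-bit-wide GDDR modules and no clamshell):

    # Each GDDR5/GDDR6 module has a 32-bit interface, so the bus width
    # fixes the module count (without clamshell), and the per-module
    # density fixes the total capacity.

    def vram_gigabytes(bus_width_bits, module_gigabits):
        modules = bus_width_bits // 32
        return modules * module_gigabits / 8

    print(vram_gigabytes(512, 2))   # Hawaii with 2Gb modules: 16 x 2Gb = 4 GB
    print(vram_gigabytes(512, 4))   # Hawaii with 4Gb modules: 16 x 4Gb = 8 GB (390 series)
    print(vram_gigabytes(384, 16))  # 384-bit bus with 16Gb (2GB) modules = 24 GB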

The reason I'm bringing this up is that it's kind of illustrative of an underlying difference between then and now. Memory density was actually outstripping the actual needs at that point - and today GPU VRAM capacity is actually heavily limited by the density of the memory modules.

By the time you hit Pascal in 2016, the GTX 1080 was using 1GB GDDR5X modules, and that was actually a very new tech at the time. Quadro P4000 and P6000 actually used clamshell because the 2GB modules weren't available yet.

2GB modules first really hit the market with the turing quadro series (RTX 6000 used a non-clamshell 12x2GB, RTX A8000 used a clamshell 2x12x2GB) but GDDR6X lagged behind again and didn't hit the market until midway through Ampere. The 3090 was a clamshell of 1GB modules (2x12x1GB GDDR6X) and the 3090 Ti moved to 2GB modules in a non-clamshell configuration.

And there still isn't anything bigger than 2GB (=16 gbit) modules available today. There probably won't be until at least middle of next year (super refresh?). The density increases in DRAM started stalling out about the same time as everything else, and you can't double the module size and then clamshell anymore, so there simply is a lot less flexibility.

Ironically the MCD idea was quite timely imo. For an Infinity Link (not infinity fabric!) that's smaller than a single GDDR6 PHY, they move two entire memory controllers and PHYs to an external die. That's part of the reason "AMD is less stingy with VRAM" in the internet discourse... they built the technology that disaggregates the memory config from the memory bus config. They paid some huge design penalties (in total area, idle power, data movement, physical package dimensions, higher BOM costs for that memory, etc) and then... just didn't bother to exploit any of the advantages it gave them, and immediately abandoned it in RDNA4.

There is no reason AMD couldn't put 4 memory PHYs on a new MCD instead of 2. It's "disaggregated" for a reason beyond just yields, right? But AMD never bothered to make any other MCDs, so there is only the 2x PHY MCD die. Obviously you'd need probably some new packages especially for the big one - you could do similar "reverse 7900GRE" interposers that let you put a N32 into a N31 package while still driving all the memory, just wouldn't increase actual bandwidth, but the big die would need a bigger package to route twice the memory. But actually the larger physical size becomes an advantage at that point - bigger die is more shoreline. ctrl-f shoreline for some discussion on how that's affecting current designs: https://www.semianalysis.com/p/cxl-is-dead-in-the-ai-era I personally think there is a similar advantage with PHY pinout, the PHY pins need to be on the edge and AMD simply has a bigger package with more edge area (without driving up costs too much).

And the thing is, that bigger package area and higher idle power are also big downsides in other scenarios. AMD had big big hopes of making inroads into the laptop market with RDNA3/7900M, and they got almost zero uptake, because they were offering a bigger package with higher idle power, at a moment in time when laptop OEMs are actually looking at ditching dGPUs entirely to cram in more battery life (most are not against the FAA limits yet) so they can compete with Apple. AMD literally made exactly the diametrically opposite/wrong product for that market, and yet also failed to do any of the interesting high-VRAM configurations that disaggregation could have allowed too. Typical red-team moment right there - and this is the whole reason 7900GRE exists, to move the sorts of yields they would have shunted to 7900M.

That's sort of the unfortunate reality, is that memory capacity has just slowed to a crawl, and the nature of die shrinks (PHY area doesn't shrink, which makes it preferable to shift from more PHYs to fewer+more cache instead, or use faster memory like 5X or 6X) has pushed actual bus width down over time to contain costs. And this is probably not going to change very quickly either - maybe we get 24gbit modules in 2025, but when is 32gbit coming, like 2026 or 2027? You can't just keep increasing bus width forever either, especially with wafer costs rising significantly every gen. And AMD actually had an answer to that, and just chose not to do anything with it, or use it ever again!

The server market is a lot less afflicted because they've got CoWoS stacking and HBM. Like if you have 8-hi or 16-hi HBM3E then realistically you've got more than the gaming market will need for a long time... but it's too expensive to use on gaming cards for now (maybe if AI crashes it will lead to gaming HBM cards to soak up the capacity). Like yeah, it's kinda not just that NVIDIA doesn't care about the gaming market because AI is more profitable... it's that these are the solutions they can offer at this point in time, based on the tech that is available to them. And AMD had a workaround for that.

--

anyway: yeah I was the sucker who bought the double VRAM cards, but I actually did get good usage out of it. ;) I had a Radeon 7850 2GB I got late in 2012 for $150 on black friday, and that was actually a great little card for 3 years or so. I think that's the appeal, that not only do you not ever have to worry about VRAM, but that it will have a useful second life as a downmarket card in your spare PC or kid's PC.

I also had a GT 640 4GB (GDDR5, at least!)... and that was pretty nifty back in 2014 for this new-fangled CUDA stuff! GK208 had all the CUDA capabilities of a Tesla K40 wedged into a $60 package. I do totally understand where people are coming from on cost, I see it too... but right now the cost increases mean that if you're not willing to spend more, you're stepping down product tiers, which means things like smaller memory buses and lower total capacity. And it's actually when things slow down that "futureproofing" becomes possible - just like the 2600K became a legendary CPU because CPU progress slowed to a crawl after it. Something like a 3090 24GB is both cheap and still doesn't really have any major downsides, other than the 40-series having frame generation (and AMD implemented non-DLSS framegen anyway) and generally being much better at path tracing. Buy the higher-tier card and keep it longer; that makes more sense than stepping down product tiers so you can upgrade every year.

--

And my overall point with the first section is that people really tend to discount those technical factors when they analyze the market. It's not just that "NVIDIA is being stingy" or whatever. Sure they are reserving clamshell for workstation cards (because besides drivers, that's really the only segmentation they have left), but actually this is just what the tech can deliver right now without costs going mad.

Turing and Ampere used older/cheaper nodes with much lower density - and they were massively large, inefficient dies by historical standards. The GTX 1070 was a cut-down ~300mm2 die just like the RTX 4070... and people forget the accusations of "fake MSRP" at the Pascal launch too. NVIDIA tried to position the Founders Edition as an above-MSRP premium card, and nobody else followed the MSRP either. It was a $449 card at launch, in 2016 dollars. Similarly, the GTX 670 was a $399 card at launch, for a cut-down ~300mm2 die.
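
To put those launch prices in today's money, a quick sketch; the CPI multipliers below are rough assumptions, not official figures.

    # Rough conversion of historical GPU launch prices to 2024 dollars.
    # The cumulative inflation factors are approximate assumptions, not official data.
    cpi_factor_to_2024 = {2012: 1.37, 2016: 1.30}

    launches = [
        ("GTX 670, 2012, cut-down ~300mm2 GK104", 399, 2012),
        ("GTX 1070 FE, 2016, cut-down ~300mm2 GP104", 449, 2016),
    ]

    for name, usd, year in launches:
        print(f"{name}: ${usd} then ~= ${usd * cpi_factor_to_2024[year]:.0f} now")
    # -> roughly $547 and $584 in today's dollars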

It's just that Turing and Ampere had obscenely large dies by historical standards. The RTX 2060 used a ~445 mm2 die, bigger than the GTX 1080's ~314 mm2 and nearly as big as the 1080 Ti's ~471 mm2! The 2080 Ti's ~754 mm2 die was roughly 60% larger than the 1080 Ti's, and the 3090's ~628 mm2 die was about a third larger.

People tend to make simplistic analyses like "the die isn't big enough [compared to the last two gens, with their atypically large, low-density dies on old/cheap nodes]" and complain that the memory bus isn't wide enough and the capacity isn't high enough... but there are also all these technical forces pushing the engineering in that direction.

https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...

I often wish that gamers and gamer-adjacent media had a little more of a sense of shou ga nai/shikata ga nai ("it can't be helped"). If it doesn't make sense to upgrade, then don't - I don't think anyone expects 1-gen upgrade cycles to be worthwhile anymore. It's just somewhat silly how many tears are cried over things that ultimately can't be helped, and nobody directs the hate at, say, Micron or Hynix or Samsung for not keeping up with their own cadences. This is just how the market is in this post-post-post-Moore's-law era - even the basic shrinks and the basic improvements in other components are falling badly behind, and obviously that has implications for the downstream products that want to use those technologies. Shikata ga nai.

DDR5 isn't great either: 2DPC (two DIMMs per channel) is basically not worthwhile anymore due to the performance hit, and honestly RAM clocks are not increasing all that much either unless you run 1DPC, at which point 6000 or 6400 is doable. DDR6 is supposedly not going to be that great either. Things are just falling apart; the tech is not progressing very well anymore. We're hitting the physical limits.
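
For a rough sense of why 2DPC hurts, a minimal sketch of theoretical peak bandwidth; the 2DPC downclock used below is an illustrative assumption, since actual limits vary by platform, memory controller, and DIMM ranks.

    # Theoretical peak DRAM bandwidth: transfer rate (MT/s) * 8 bytes per 64-bit channel * channels.
    # The drop to 4400 MT/s at 2DPC is an illustrative assumption, not a spec.

    def peak_gbs(mt_s, channels=2, bytes_per_channel=8):
        return mt_s * bytes_per_channel * channels / 1000  # GB/s

    print(peak_gbs(6400))  # 1DPC at DDR5-6400: ~102.4 GB/s
    print(peak_gbs(4400))  # 2DPC forced down to ~4400 MT/s: ~70.4 GB/s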

timschmidt
0 replies
4d2h

SoCs are inevitable. The CPU is a black hole which will eventually absorb all parts of the computer. As SoCs become more capable, discrete components will inevitably find themselves less commonly used outside of datacenter and workstation. This is a natural outcome of increasingly dense transistors and was observed by the CEO of Centaur Technologies in the 90s.

One of the neat aspects of multi-chip modules and specifically AMD's implementation of chiplets, is that they are able to share chiplets across segments, improving yields, volume, binning, and profitability all at the same time. Their 3D VCache has also been very exciting, as we're starting to see Epyc CPUs with L3 cache measured in gigabytes.

I think Apple made very smart decisions about jumping to on-package DRAM in terms of taking advantage of the increased bandwidth to bolster performance, and in the ways it worked with their pricing model. Though I think it's clear that 8gb system memory is about the minimum anyone could get away with shipping in such a system. As things get denser, numbers will only go up, and the pain of being stuck with a fully integrated system should decrease. 10 years from now, a SoC with 256gb DRAM integrated will feel very different from a bottom tier M1, while likely occupying a similar price point.
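
A rough peak-bandwidth sketch of why the wide on-package memory helps; the bus widths and data rates below are approximate public figures, treated here as assumptions rather than exact specs.

    # Peak bandwidth = data rate (MT/s) * bus width (bytes).
    # Figures are approximate public specs, treated as assumptions.
    configs = {
        "2-channel DDR5-5600 desktop (128-bit)":  (5600, 16),
        "M1-class, 128-bit LPDDR4X-4266":         (4266, 16),
        "M1 Max-class, 512-bit LPDDR5-6400":      (6400, 64),
    }
    for name, (mt_s, width_bytes) in configs.items():
        print(f"{name}: ~{mt_s * width_bytes / 1000:.0f} GB/s")
    # ~90, ~68, and ~410 GB/s respectively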

marcosdumay
1 replies
4d20h

The path forward appears to be radical ISA switches

Why, to this day, can't people make x86 clones? The amd64 architecture is nearly 20 years old, and lots of compilers target ancient processors.

What kind of legal protection is there that is lasting for that long?

(Anyway, I'm glad RISC-V is taking off.)

mlyle
0 replies
4d20h

* SSE and other architectural extensions combined with supporting patents

* Intellectual property minefields about how to adapt every new micro-architectural innovation to the mess that is x86 architecture.

Finally, barriers to entry in developing a competitive processor are fundamentally high. Combined with the risk of litigation, that makes it untenable.

Panzer04
0 replies
4d17h

I don't see an ISA switch being worth much. As I understand it, most cores today are basically an ISA decoder (translating instructions into internal micro-ops) bolted onto the front of a bespoke core. There's not much to be gained from using a specific ISA or arch (unless it's so different as to not be the same thing, like GPU vs CPU).

Probably they could also go the way of making the guarantees provided by the processor as weak as possible, but that's even harder to switch to, I'd imagine.

Narhem
0 replies
4d18h

Intel's market share is pretty close to a monopoly.

scrapcode
7 replies
5d3h

Is there any indication at all that this is due to businesses bringing their hardware back on-prem? Do you think "history repeats itself" will hold true in the on-prem v. cloud realm?

jeffbee
4 replies
5d2h

Why do you believe that could be a related phenomenon? How much server market is going to non-cloud vs. cloud? Some analysts estimate the global server market last year at $95 to $150 billion. AWS spent $48 billion on capital last year. Google bought $32 billion in capital, most of which is "on-prem" from their perspective. A switch in the machine-of-the-day from Intel to AMD at one of these large buyers could easily cause the slight change in market share mentioned in the article.

Overall I think a lot of people continue to underestimate the size of the heavy hitters on the buy side of the server market.

scrapcode
3 replies
5d

Why wouldn't it be a viable reason? Every customer that AWS and Google have has the potential to make that decision and I believe it gets more appetizing with every data breach / funnel failure / etc.

jeffbee
2 replies
5d

Every indicator we have is that movement to the cloud is a massive tidal wave, and all we have on the retreat to on-prem is a handful of anecdotes and vibes. It's really up to the proponent of such an argument to provide some evidence for the extraordinary claim that the return to on-prem is significant enough to influence server market share (or even to establish that such revanchists would prefer AMD more than cloud buyers do).

marcosdumay
0 replies
4d19h

the retreat to on-prem

On-prem only makes sense for really large organizations, exactly the ones that can negotiate with cloud providers.

What I do expect to see eventually is a large movement towards commodity IaaS providers. And those tend to be middle-sized and not design their own hardware, so they act like the on-premises market. (But I can't say I'm seeing any movement here either.)

bluGill
0 replies
4d22h

I doubt there will be a move to on-prem so much as that places which already knew the cloud didn't make sense for their workload will be a little more careful and so not move. For many servers the cloud does make sense, though, and I think we will see places move more and more as they see the value of the cloud.

A large company with accountants can do the accounting math, and a rack or two of servers is expensive - it needs real estate, trained employees, managers for those employees, HVAC, power, backup servers, backup power, .... This adds up faster than on-prem advocates think. Many of those costs scale better in a data-center-sized warehouse than in a smaller company, so the cloud can often come out to a lower price once you really add everything up. This is particularly true if you can share cloud resources in some way.

wongarsu
0 replies
5d3h

Anecdotally it seems like the rose-tinted glasses are coming off in terms of the amount of money and work saved by moving towards the cloud. Of course those aren't the only reasons to move to the cloud. But a lot of companies moved to the cloud due to promises and expectations that didn't pan out, so at least some of them moving back is only natural. Especially with many new solutions around private cloud and serverless allowing them to bring the "good parts" of the cloud to hardware they own or rent.

That's just the normal hype cycle playing out.

keyringlight
0 replies
5d2h

One thing I wonder about is whether it's the long-term effects of things like Spectre/Meltdown coming home to roost. I can only speculate, as I'm not in the server industry, but IIRC, with the mitigations that reduced performance, Intel benefited a lot from companies buying more servers to make up the shortfall, while AMD's Zen was still proving itself to what seems like a conservative market. Has the "no one ever got fired for buying X" reputation shifted?

kens
6 replies
4d23h

The article states that Intel has 75.9% of datacenter CPU shipments and AMD has 24.1%. This implies that ARM has 0% of the server market, which is not the case. I suspect this article neglected to state the important restriction to "*x86* server market", which makes a big difference to the conclusions.

FuriouslyAdrift
5 replies
4d22h

By unit quantities, yes... by revenue, however, they are nearly neck and neck.

"While Intel earned $3.0 billion selling 75.9% of data center CPUs (in terms of units), AMD earned $2.8 billion selling 24.1% of server CPUs (in terms of units), which signals that the average selling price of an AMD EPYC is considerably higher than the ASP of an Intel Xeon."

Sammi
3 replies
4d22h

If companies are really paying nearly three times as much per AMD chip, as these numbers imply, then an AMD server chip should have roughly 3x the perceived value of an Intel one.

nicoburns
1 replies
4d22h

I think there are AMD chips with twice as many cores as the Intel ones, so this kinda makes sense.

timschmidt
0 replies
4d2h

Epyc is also vastly ahead of Xeon in terms of number of PCIe lanes provided, which can be critical in many applications.

Panzer04
0 replies
4d17h

Probably depends on the tier of processor too. Not every server needs a $10k, 64- or 128-core CPU.

hakfoo
0 replies
3d15h

I'm curious how they count a server/data-centre CPU.

The Xeon range covers everything from servers, to HEDT, to basically "desktop workstation that requires ECC" levels of functionality and pricing.

For AMD those are three separate ranges, so I'm curious if they can effectively differentiate between a Ryzen bought for a desktop and one going in a server rack.

cletus
2 replies
5d

It's hard to believe it's been roughly two decades since the Athlon/Opteron almost killed Intel, which would've been the last time AMD did so well in server market share.

The short version of this story is that Intel had licensed the x86 instruction set back in the 80s to second-source manufacturers, most notably AMD (and others like Cyrix made compatible chips too). Intel didn't like this as time went on. First, they couldn't trademark numbers, which is why the 486 gave way to the Pentium. Second, they didn't want competitors producing compatible chips.

So Intel entered into a demonic pact with HP to develop EPIC. That's the architecture name; Itanium was the chip, and Merced was one of the early code names. This was in the 90s, when it wasn't clear whether RISC or CISC would dominate. As we now know, this effort ran years late, with huge cost overruns, and by the time it shipped it was too expensive for too little performance.

At the same time, on the consumer front we had the Megahertz Wars. Intel moved from the Pentium 3 to the Pentium 4, which scaled really well with clock speed but wasn't great on IPC. It also suffered from its very long pipeline and costly branch mispredictions (IIRC). But from a marketing perspective it killed AMD (and Cyrix).

Why is this important? Because 64-bit was around the corner and Intel wanted to move the market to EPIC. AMD said to hell with that and released the x86-64 instruction set (which, by the terms of the licensing agreement, Intel had a right to use as well), along with the Opteron server chips and, shortly after, the Athlon 64 desktop chips. These were wildly successful.

The Pentium 4 hit a clock ceiling around 3-4 GHz, which still pretty much exists today. In the 90s, many thought chips would scale up to 10 GHz or beyond.

What saved Intel? The Pentium 3. You see, the Pentium 3 had morphed into a mobile platform because it was very energy efficient: first as the Pentium M, and later as the Core Duo and Core 2 Duo. This was the Centrino platform. Some early hackers took Centrino boards and built desktops; the parts were hard to get if you weren't a laptop OEM, so they probably salvaged laptops.

Anyway by the mid to late 2000s, Intel had fully embraced this and it became the Core architecture that has evolved largely ever since to what we have today.

But back then Intel was formidable at bringing new, smaller processes online. This was a core competency right up until the 10nm transition in the 2010s, which was years late. TSMC (and even Samsung) came along and ate their lunch. I can't tell you why that happened, but Intel never recovered, to this day.

As for AMD, after a few years they never seemed to capitalize on their Opteron head start. Maybe it was that Intel caught up. I'm not really sure. But they were in the wilderness for probably 10-15 years, right up until Ryzen.

Intel needs to be studied for how badly they dropped the bag. Nowadays, their CEO seems to be reduced to quoting the Bible on Twitter [1].

I'm glad to see AMD back. I still believe ARM is going to be a huge player in the coming decade.

[1]: https://www.indiatoday.in/technology/news/story/intel-facing...

reginald78
0 replies
4d23h

IIRC Intel never wanted to license x86. IBM required it so they'd have a second supplier. And Intel has been trying to correct that ever since.

bluGill
0 replies
4d22h

What saved Intel? The Pentium-3.

The Pentium Pro, which became the Pentium 2, then the Pentium 3, then... Of course each change in name came with some interesting new features, but the Pentium Pro was where that lineage started.

nickdothutton
1 replies
5d1h

I would be amazed if Intel has completely abandoned the kind of business practices they are famous for, and were fined roughly $1.5 billion for, with respect to system builders and retailers.

wmf
0 replies
5d1h

It usually takes 5-10 years for the evidence to come out. Whatever anti-competitive practices they're using now don't seem to be working very well.

more_corn
0 replies
4d20h

Sometimes all you gotta do to win is stay on your feet when your opponent stumbles.

lvl155
0 replies
5d

At this point, I am asking why AMD doesn’t buy Intel. Sure, antitrust police will be all over it but it would be an epyc irony for AMD.

foobarian
0 replies
5d

Wait a little more and maybe we won't need to add more choices to AWS amd64 or arm64! :-)

albertopv
0 replies
5d

We're soon migrating part of our DC to Oracle Cloud (don't ask...); a consultancy firm will do the job, and all the VMs will be on AMD CPUs.

Sammi
0 replies
4d22h

At this point I avoid Intel for the same reason I avoid Boeing.