Everyone acts as though Intel should have seen everything coming. Where was AMD? Was AMD really competitive before Ryzen? Nope. The Core 2 series blew them out of the water. Was ARM really competitive until recently? Nope. Intel crushed them. The problem for Intel is the inertia of laziness that comes from a lack of competition. I wouldn’t count them out just yet, however. The company’s first true swing at a modern GPU was actually good for a first attempt. Their recent CPUs, while not quite as good as Ryzen, aren’t exactly uncompetitive. Their foundry business faltered because they were trying a few things never attempted before, not because they were incompetent. Also, 20A and 18A are coming along. I am not an Intel fan at all. I run AMD and ARM. My dislike isn’t technological, though; it’s just that I hate their underhanded business practices.
ARM has been really competitive since, well, 2007, when the first iPhone hit the market, and when Android followed in 2008. That is, for the last 15 years or so. Not noticing a hugely growing segment that was bringing insane reams of cash to Apple, Qualcomm, Samsung, and others involved is not something I could call astute.
Pretty certainly, Intel is improving, and of course should not be written off. But they did get themselves into a hole to dig out from, and not just because the 5nm process was really hard to get working.
And it's not like they didn't notice either. Apple literally asked Intel to supply the chips for the first iPhone, but the Intel CEO at the time "didn't see it".
https://www.theverge.com/2013/5/16/4337954/intel-could-have-...
I agree mobile was a miss, but the linked article actually quotes Intel's former CEO making a pretty good argument for why they missed:
In that circumstance, I think most people would have made the same decision.
I’m not so sure; he made a choice purely on “will it make money now”, not “well, let’s take a chance and see if this pays off big, and if not we’ll lose a little money”.
It’s not like they couldn’t afford it, and taking chances is important.
Ok, but you have to view this through the lens of what was on the market at the time and what kind of expectations Intel likely would have had. I can't imagine that Apple told Intel what they were planning. Therefore, it would have been reasonable to look around at the state of what existed at the time (basically, iPods, flip phones, and the various struggling efforts that were trying to become smartphones at the time) and conclude that none of that was going to amount to anything big.
I'm pretty sure most people here panned the iPhone after it came out, so it's not as if anyone would have predicted it prior to even being told it existed.
And that statement is hilarious in light of the many failed efforts (e.g., subsidies for Netbooks and their embedded x86 chips) where they lit billions on fire attempting to sway the market.
FWIW I don't buy his explanation anyway. Intel at the time had zero desire to be a fab. Their heart was not in it. They wanted to own all the IP for fat margins. They have yet to prove anything about that has changed despite the noise they repeatedly make about taking on fab customers.
Intel also had a later chance when Apple tried to get off the Qualcomm percentage-per-handset model. This was long after the original iPhone. Apple also got sued for allegedly sharing proprietary Qualcomm trade secrets with Intel. And Intel still couldn’t pull it off despite all these tailwinds.
Kind of speaks to how Intel was not competitive in the space at all. If it was truly that the marginal cost per part was higher than the requested price, either Apple was asking for the impossible and settled for a worse deal with an ARM chip, or Intel did not have similar capabilities.
In that circumstance, I think most MBAs would have made the same decision.
Fixed that for you
"We couldn't figure out how much a chip design would cost to make" is pretty damning, in my book.
That was very lucky for Apple though. Nokia made deals with Intel to provide the CPU for upcoming phone models, and had to scramble to redesign them when it became clear Intel was unable to deliver.
Not quite true - the Intel projects were at a pretty early stage when Elop took over and the whole Microsoft thing happened - and the projects got canned as part of the cleanup and the move to Windows for the phones.
The CPUs were indeed horrible, and would've caused a lot of pain if the projects had actually continued. (Source: I was working on the software side for the early Nokia Intel prototypes.)
Thanks for the insights. The N9 was originally rumored to use Intel, and was still being speculated about [1] half a year before the release. Was that then also switched by Elop as part of the whole lineup change, or were these rumors unfounded in the first place?
[1] https://www.gottabemobile.com/meego-powered-nokia-n9-to-laun...
Pretty much all rumors at that time were very entertainingly wrong.
I think at the time that article got published we didn't even have the Intel devboards distributed (that is, a screen and macroboards, way before it starts looking like a phone). We did have some Intel handsets from a 3rd party for MeeGo work, but that was pretty much just proof of concept - nobody ever really bothered looking into trying to get the modem working, for example.
What became the N9 was planned all along as an ARM-based device - the exact name and specs changed a few times, but it still pretty much was developed as a Maemo device, just using the MeeGo name for branding, plus having some of the APIs (mainly Qt Mobility and QML) compatible with what was scheduled to become MeeGo. The QML stuff was a late addition there - originally it was supposed to launch with MTF, and the device was a wild mix of both when it launched, with QML having noticeable issues in many areas.
Development on what was supposed to be proper MeeGo (the cooperation with Intel) happened with only a very small team (which I was part of) at that time, and was starting to slowly ramp up - but the massive developer effort from Nokia to actually make a "true" MeeGo phone would've started somewhere around mid-2011.
Very interesting, thanks for setting the record straight!
And a few years prior to that Intel made the most competitive ARM chips (StrongARM). Chances are that an Intel chip would have powered the iPhone had they not scrapped their ARM division due to “reasons”
Intel had purchased/gotten StrongARM from DEC.
DEC had started developing ARM chips after concluding it was a bad idea to try and scale down their Alpha chips to be more energy efficient.
Then, after the success of these ARM chips in the BlackBerry and most of the Palm PDAs, as well as MP3 players and HTC smartphones, Intel sold it off so it could focus on trying to make its big chips more energy efficient - making the mistake DEC avoided.
The iPhone was a defining moment, but at the time it was completely obvious that smartphones would be a thing; it's just that people thought the breakthrough product would come from Nokia or Sony Ericsson (who were using ARM SoCs from TI and Qualcomm respectively). Selling off the ARM division would not have been my priority.
So it's a string of unforced errors. Nevertheless, Intel remains an ARM licensee - they didn't give that up when selling StrongARM - so it seems some people still saw the future.
Sounds like the classic Innovator's Dilemma. There wasn't a lot of margin in the ARM chips, so Intel doubled down on their high-margin server and desktop chips. ARM took over the low end in portable devices and is now challenging in the datacenter.
Apple has been working with Arm since 1987, when work on the Apple Newton started: https://www.cpushack.com/2010/10/26/how-the-newton-and-arm-s...
Intel never "crushed" ARM. Intel completely failed to develop a mobile processor and ARM has a massive marketshare there.
ARM has always beaten the crap out of Intel at performance per watt, which turned out to be extremely important both in mobile and at data center scale.
I got curious about how ARM is doing in the data center and found this:
https://www.fool.com/investing/2023/09/23/arm-holdings-data-...
ARM would be even more popular in the datacenter if getting access to Ampere CPUs was possible.
I can get a top-of-the-line Xeon Gold basically next day, with incredibly high-quality out-of-band management, from a reputable server provider (HP, Dell).
Ampere? Give it 6 months and €5,000, and maybe you can get one, from Gigabyte - not known for server quality.
(Yes, I'm salty; I have 4 of these CPUs and it took a really long time to get them while costing just as much as AMD EPYC Milans.)
HPE has an Ampere server line that is quite good, especially considering TCO, density, and the IO it can pack. But yeah you'll have to fork some cash.
ARM server CPUs are great, I'd move all of our stuff to them once more competition happens. Give it a few more years.
You can get them on Oracle cloud servers for whatever you choose to do, last I looked and used them.
Since roughly the first year of COVID the supply generally has been quite bad. Yes, I can get _some_ Xeon or EPYC from HPE quickly, but if I care about specific specs it's also a several-month wait. For midsized servers (up to about 100 total threads) AMD still doesn't really have competition if you look at price, performance, and power - I'm currently waiting for such a machine, and the Intel option would've been 30% more expensive at worse specs.
I'm using Ampere powered servers on Oracle cloud and boy, they're snappy, even with the virtualization layer on top.
Amazon has its own ARM CPUs on AWS, and you can get them on demand, too.
Xeons and EPYCs are great for "big loads", however some supercomputer centers also started to install "experimental" ARM partitions.
The future is bright not because Intel is floundering, but because there'll be at least three big CPU producers (ARM, AMD and Intel).
Also, don't have prejudices about "brands". Most motherboard brands can design server-class hardware if they wish. They're just making different trade-offs because of the market they're in.
I used servers which randomly fried parts of their motherboard when they saw some "real" load. Coming in one morning and having no connectivity because a top-of-the-line 2-port gigabit onboard Ethernet fried itself on a top-of-the-line, flagship server is funny in its own way.
It'll probably get there, but it'll probably be a slow migration. After all, there's no point in tossing out all the Xeons that still have a few years left in them. But I believe Google is now also talking about, or is already working on, their own custom chip similar to Graviton. [1]
[1] https://www.theregister.com/2023/02/14/google_prepares_its_o...
I'm guessing this has increased since 2021. I've moved the majority of our AWS workloads to ARM because of the price savings (it mostly 'just works'). If companies are starting to tighten their belts, this could accelerate even more ARM adoption.
Oracle, and even Microsoft, have decently large arm64 deployments now too (compared to nothing).
The Amazon Graviton started by using stock ARM A72 cores.
They certainly tried selling their chips below cost to move into markets ARM dominated, but "contra revenue" couldn't save them.
https://www.fool.com/investing/general/2016/04/21/intel-corp...
The name they chose to try and make it not sound like anti-competitive practices just makes it sound like Iran-Contra
The curse of having weak enemies is that you become complacent.
You're right: AMD wasn't competitive for an incredibly long time and ARM wasn't really meaningful for a long time. That's the perfect situation for some MBAs to come into. You start thinking that you're wasting money on R&D. Why create something 30% better this year when 10% better will cost a lot less and your competitors are so far behind that it doesn't matter?
It's not that Intel should have seen AMD coming or should have seen ARM coming. It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle. Intel should have been smart enough to understand that backing off of R&D would mean giving up the moat they'd created. Even if it looked like no one was coming for their crown at the moment, you need to understand that disinvestment doesn't get rewarded over the long run.
Intel should have understood that trying to be cheap about R&D while extracting as much money from customers as possible wasn't a long-term strategy. It wasn't the strategy that built them into the dominant Intel we knew, and it wouldn't keep them as that dominant Intel.
To be fair, they should have seen Ryzen coming; any long-term AMD user knew years before Ryzen landed that it was going to be a good core, because AMD were very vocal about how badly wrong they bet with Bulldozer (the previous core family).
AMD bet BIG on the software industry leaning heavily on massive thread counts over high-throughput, single-threaded usage... but it never happened, so the cores tanked.
It was never a secret WHY that generation of core sucked, and it was relatively clear what AMD needed to do to fix the problem, and they were VERY vocal about "doing the thing" once it became clear their bet wasn't paying off.
(I'm curious about this story, as I am unfamiliar with it.)
Why did that generation of core (Bulldozer) suck?
What was it that AMD needed to do to fix the problem?
(Links to relevant stories would be sufficient for me!)
There's been lots written about this but this is my opinion.
Bulldozer seemed to be designed under the assumption that heavy floating-point work would be done on the GPU (APU), which all early construction cores had built in. But no one is going to rewrite all of their software to take advantage of an iGPU that isn't present in existing CPUs and isn't present in the majority of CPUs (Intel), so it sort of smelt like Intel's Itanic moment, only worse.
I think they were desperate to see some near-term return on the money they spent buying ATI. ATI wasn't a bad idea for a purchase, but they seemed to heavily overpay for it, which probably really clouded management's judgement.
I thought it was a bad idea when I first read of it. It reminded me of Intel's Netburst (Pentium 4) architecture.
Chips and Cheese has probably the most in-depth publicly available dive into the technical reasons why Bulldozer was the way it was:
https://chipsandcheese.com/2023/01/22/bulldozer-amds-crash-m...
https://chipsandcheese.com/2023/01/24/bulldozer-amds-crash-m...
---
From a consumer perspective, Bulldozer and revisions as compared to Skylake and revisions were:
+ comparable on highly multi-threaded loads
+ cheaper
- significantly behind on less multi-threaded loads
- had 1 set of FPUs per 2 cores, so workloads with lots of floating point calculations were also weaker
- Most intensive consumer software was still focused on a single thread or a very small number of threads (this was also a problem for Intel in trying to get people to buy more expensive i7s/i9s over i5s in those days)
The Bulldozer design had a few main issues.
1. Bulldozer had a very long pipeline, akin to a Pentium 4. This allows for high clocks but comparatively little work done per cycle vs. the competition. Since clocks have a ceiling around 5GHz, they could never push the clocks high enough to compete with Intel.
2. They used an odd core design with 1 FPU for every 2 integer units instead of the normal 1:1 that we have seen on every x86 since the i486. This led to very weak FPU performance, which is needed for many professional applications. Conversely, it allowed for very competitive performance on highly threaded integer applications like rendering. This decision was probably made under the assumption that APUs would integrate their GPUs better and software would be written with that in mind, since a GPU easily outdoes a CPU's FPU but requires more programming. This didn't come to be.
3. They were stuck using GlobalFoundries due to previous contracts from when they spun it off requiring AMD to use GloFo. This became an anchor as GloFo fell behind market competitors like TSMC, leaving AMD stuck on 32nm for a long while, until GloFo got 14nm and eventually AMD got out of the contract between Zen 1 and 2.
Bonus: Many IC designers have bemoaned how much of Bulldozer's design was automated with little hand modification, which tends to lead to a less optimized design.
They've seen all that. You don't have to have an MBA or an MIT degree to plot the projected performance of your or your competitors' chips.
It was process failures. Their fabs couldn't fab the designs. Tiger Lake was what, 4 years late?
> It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle.
Their third employee, who later went on to become their third CEO and guide Intel through the memory-to-processor transition, literally coined the phrase and wrote a book called "Only the Paranoid Survive" [1]. It's inexcusable that management degraded that much.
[1] https://en.wikipedia.org/wiki/Andrew_Grove#Only_the_Paranoid...
Yes, I agree. However, I don’t necessarily see this book title as an imperative to innovate. Patent trolling can also be a way to deal with competitors.
After all, Apple and ARM came from the idea of building better end-user products around softer factors than sheer CPU power. Since Intel's products are neither highly integrated phones nor assembled computers, Intel had no direct stake.
It is complex.
Apple came from the recreational “there is now a 10 times cheaper CPU than anything else and I can afford to build my video terminal into a real computer in my bedroom” and “maybe we can actually sell it?”. [1]
ARM literally came from “we need a much better and faster processor” and “how hard can this be?” [2]
[1] https://en.wikipedia.org/wiki/History_of_Apple_Inc.#1971%E2%...
[2] https://en.wikipedia.org/wiki/ARM_architecture_family
This sounds like Google. Some bean counter is firing people left and right, and somehow they think that's going to save them from the fact that AI answers destroy their business model. They need more people finding solutions, not fewer.
Intel's flaw was trying to push DUV to 10nm (otherwise known as Intel 7).
Had Intel adopted the molten tin of EUV, the cycle of failure would have been curtailed.
Hats off to SMIC for the DUV 7nm which they produced so quickly. They likely saw quite a bit of failed effort.
And before we discount ARM, we should remember that Acorn produced a 32-bit CPU with a 25k transistor count. The 80386 was years later, with 275k transistors.
Intel should have bought Acorn, not Olivetti.
That's a lot of mistakes, not even counting Itanium.
N7, TSMC's competitor to Intel 7, does not use EUV either.
There are multiple versions of N7. The N7 and N7P are DUV while the N7+ is EUV.
Acorn’s original ARM chip was impressive, but it didn’t really capture much market share. The first ARM CPU competed against the 286, and did win. The 386 was a big deal though. First, software was very expensive at the time, and the 386 allowed people to keep their investments. Second, it really was a powerful chip. It managed 11 MIPS vs the ARM3’s 13, but the 486 achieved 54 MIPS; ARM6 only hit 28 MIPS. It’s worth noting that the 386 also used 32-bit memory addressing and a 32-bit bus, while ARM was 26-bit addressing with a 16-bit bus.
At the same time, it had unquestioned performance dominance until ARM made the decision to focus on embedded.
ARM would have been much more influential under Intel, rather than pursuing the i960 or the iAPX 432.
Just imagine Intel ARM Archimedes. It would have crushed the IBM PS/2.
Whoops.
Seriously, even DEC was smart enough.
https://en.m.wikipedia.org/wiki/StrongARM
Coincidentally, ARM1 and 80386 were both introduced in 1985. I'm a big fan of the ARM1 but I should point out that the 386 is at a different level, designed for multitasking operating systems and including a memory management unit for paging.
Intel had StrongARM though. IIRC they made the best ARM CPUs in the early 2000s and were designing their own cores. Then Intel decided to get rid of it, because obviously they were just wasting money and could design a better x86 mobile chip…
The problem is that Intel has had a defensive strategy for a long time. Yes, they crushed many attempts to breach the x86 moat but failed completely and then gave up attempts to reach beyond that moat. Mobile, foundry, GPUs etc have all seen half-hearted or doomed attempts (plus some bizarre attempts to diversify - McAfee!).
I think that, as Ben essentially says, they put too much faith in never-ending process leadership and the ongoing supremacy of x86. And when that came to an end the moat was dry.
Part of the problem is Intel is addicted to huge margins. Many of the areas they have tried to enter are almost commodity products in comparison, so it would take some strong leadership to convince everyone to back off those margins for the sake of diversification.
They should have been worried about their process leadership for a long time. IIRC even the vaunted 14nm that they ended up living on for so long was pretty late. That would have had me making backup plans for 10nm but it looked more like leadership just went back to the denial well for years instead. It seemed like they didn't start backport designs until after Zen1 launched to me.
100%! Also in a way reversing what Moore/Grove did when they abandoned commodity memories. Such a hard thing to do.
On the contrary, they tried to pivot so many times and enter different markets; they bought countless small companies, and some big ones, but nothing seemed to stick except the core CPU and datacenter businesses. IIRC Mobileye is one somewhat successful venture.
Except they weren't real pivots. Mobile / GPU were x86-centric, and foundry was half-hearted, without buying into what needed to be done. Buying a company is the easy bit.
Trying to breathe as Intel was pushing their head under water.
We saw AMD come back after their lawsuit against Intel got through and Intel had to stop paying everyone to not use AMD.
Kind of, but not really in laptops :( they're doing great on handhelds though.
I think they're doing better (disclaimer: writing this from a Ryzen laptop) and their latest chip has better thermals and consumption, with a decent reputation compared to 10 years ago for instance. But yes, it's a long road ahead.
If your process nodes are going wayyyy over schedule, it shouldn't take much intelligence to realize that TSMC is catching up FAST.
You should probably have some intel (haha) on ARM and AMD chips. They didn't care.
Why? It's monopoly business tactics, except they didn't realize they weren't Microsoft.
It's not like this was overnight. Intel should have watched AMD like a hawk after that slimeball Ruiz was deposed and a real CEO put in charge.
And the Mac chips have been out, what, two years now, and the Apple processors on the iPhones at least 10?
Come on. This is apocalyptic scale incompetence.
Microsoft also got its lunch eaten during this time by mobile. They have a new CEO who’s had to work hard to reshape the place as a services company.
Intel's recent CPUs are not as good as Ryzen? That hasn't been correct for a few years now.
The problem is this... The cash cow is datacenters, and especially top-of-the-line products where there is no competition.
The fastest single-core and multi-core x86 CPUs that money can buy will go to databases and similar vertically scaled systems.
That's where you can put up the most extreme margins. It's "winner takes all the margins". Being somewhat competitive, but mostly a bit worse, is the worst business position. Also...
I put money on AMD when they were taking over the crown.
Thank you for this wakeup call. I'll watch closely to see if Intel can deliver on this and take it back, and I'll have to adjust accordingly.
Well, yes.
Look at Japan generally and Toyota specifically. In Japan the best award you can get for having an outstanding company in terms of profit, topline, quality, free-cash, people, and all the good measures is the Deming Award. Deming was our guy (an American) but we Americans in management didn't take him seriously enough.
The Japanese, to their credit, did... they ran with it and made it into their own thing in a good way. The Japanese took 30% of the US auto market in our own backyard. Customers knew Hondas and Toyotas cost more but were worth every dollar. They resold better too. (Yes, some noise about direct government investment in Japanese companies was a factor too, but not the chief factor in the long run.)
We Americans got it "explained to us." We thought we were handling it. Nah, it was BS. But we eventually got our act together. Our Deming award is the Malcolm Baldrige award.
Today, unfortunately, the Japanese economy isn't rocking like it was in the 80s and early 90s. And Toyota isn't the towering example of quality it once was. I think -- if my facts are correct -- they went too McDonald's and got caught up in cutting costs in their materials and supply chain, with bad net effects overall.
So things ebb and flow.
The key thing: is management, through action or inaction, allowing stupid inbred company culture to make crappy products? Do they know their customers, etc., etc.? Hell, mistakes, even screw-ups, are not life-ending for companies the size of Intel. But recurring stupidity is. A lot of the time the good guys allow themselves to rot from the inside out. So when is enough enough already?
The writing was on the wall 6 years ago; Intel was not doing well in mobile, and it was only a matter of time until that tech improved - the same way Intel unseated the datacenter chips before it. Ryzen I will give you was a surprise, but in a healthy competitive market, "the competition outengineered us this time" _should_ be a potential outcome.
IMO the interesting question is basically whether Intel could have done anything differently. Clayton Christensen's sustaining vs. disruptive innovation model is well known in industry, and ARM slowly moving up the value chain is obvious in that framework. Stratechery says they should have opened up their fabs to competitors, but how does that work?
No, but ARM should've rung many bells.
Intel poured tons of billions into mobile.
They didn't understand that the future, from smartphones to servers, was about power efficiency and scale.
Eventually their lack of power efficiency made them lose ground in all their core business. I hope they will get this back, and not just by competing on manufacturing but on architecture too.
Only the Paranoid Survive
Crushed by Intel's illegal anticompetitive antics?
And Sandy Bridge all but assured AMD wouldn't be relevant for the better part of a decade.
It's easy to forget just how fast Sandy Bridge was when it came out; over 12 years later and it can still hold its own as far as raw performance is concerned.
They should have known it was coming because of how many people they were losing to AMD, but there is a blindness in big corps when management decide they are the domain experts and the workers are replaceable.
Intel’s problem was the cultural and structural issues in their organization, plus their decision to bet on strong OEM partner relationships to beat competition. This weakness would prevent them from being ready for any serious threat and is what they should’ve seen coming.