
Intel's Humbling

BirAdam
72 replies
16h26m

Everyone acts as though Intel should have seen everything coming. Where was AMD? Was AMD really competitive before Ryzen? Nope. Core 2 series blew them out of the water. Was ARM really competitive until recently? Nope. Intel crushed them. The problem for Intel is the inertia of laziness that comes from a lack of competition. I wouldn't count them out just yet, however. The company's first true swing at a modern GPU was actually good for a first attempt. Their recent CPUs, while not quite as good as Ryzen, aren't exactly uncompetitive. Their foundry business faltered because they were trying a few things never attempted before, not because they were incompetent. Also, 20A and 18A are coming along. I am not an Intel fan at all. I run AMD and ARM. My dislike isn't technological, though; it's just that I hate their underhanded business practices.

nine_k
18 replies
16h16m

ARM has been really competitive since, well, 2007, when the first iPhone hit the market, and when Android followed in 2008. That is, the last 15 years or so. Not noticing a hugely growing segment that was bringing insane reams of cash to Apple, and Qualcomm, Samsung and others involved is not something I could call astute.

Pretty certainly, Intel is improving, and of course should not be written off. But they did get themselves into a hole to dig out from, and not just because the 5nm process was really hard to get working.

vineyardmike
13 replies
15h22m

> Not noticing a hugely growing segment that was bringing insane reams of cash to Apple, and Qualcomm, Samsung and others involved is not something I could call astute.

And it's not like they didn't notice, either. Apple literally asked Intel to supply the chips for the first iPhone, but the Intel CEO at the time "didn't see it".

https://www.theverge.com/2013/5/16/4337954/intel-could-have-...

eslaught
7 replies
14h57m

I agree mobile was a miss, but the linked article actually quotes Intel's former CEO making a pretty good argument for why they missed it:

"The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."

In that circumstance, I think most people would have made the same decision.

katbyte
1 replies
10h6m

I'm not so sure. He made a choice purely on "will it make money now," not "let's take a chance and see if this pays off big, and if not we'll lose a little money."

It's not like they couldn't afford it, and taking chances is important.

eslaught
0 replies
37m

Ok, but you have to view this through the lens of what was on the market at the time and what kind of expectations Intel likely would have had. I can't imagine that Apple told Intel what they were planning. Therefore, it would have been reasonable to look around at what existed then (basically, iPods, flip phones, and the various struggling efforts trying to become smartphones) and conclude that none of it was going to amount to anything big.

I'm pretty sure most people here panned the iPhone after it came out, so it's not as if anyone would have predicted it prior to even being told it existed.

xenadu02
0 replies
5h41m

And that statement is hilarious in light of the many failed efforts (e.g. subsidies for netbooks and their embedded x86 chips) where they lit billions on fire attempting to sway the market.

FWIW I don't buy his explanation anyway. Intel at the time had zero desire to be a fab. Their heart was not in it. They wanted to own all the IP for fat margins. They have yet to prove anything about that has changed despite the noise they repeatedly make about taking on fab customers.

isignal
0 replies
13h54m

Intel also had a later chance when Apple tried to get off the Qualcomm percent per handset model. This was far after the original iPhone. Apple also got sued for allegedly sharing proprietary Qualcomm trade secrets with Intel. And Intel still couldn’t pull it off despite all these tailwinds.

epistasis
0 replies
14h25m

Kind of speaks to how Intel was not competitive in the space at all. If it was truly that the marginal cost per part was higher than the requested price, either Apple was asking for the impossible and settled for a worse deal with an ARM chip, or Intel did not have similar capabilities.

cpursley
0 replies
6h13m

> In that circumstance, I think most people would have made the same decision.

In that circumstance, I think most MBAs would have made the same decision.

Fixed that for you

GeekyBear
0 replies
11h51m

"We couldn't figure out how much a chip design would cost to make" is pretty damning, in my book.

distances
4 replies
8h9m

That was very lucky for Apple, though. Nokia made deals with Intel to provide the CPU for upcoming phone models, and had to scramble to redesign them when it became clear Intel was unable to deliver.

finaard
3 replies
7h31m

Not quite true - the Intel projects were at a pretty early stage when Elop took over and the whole Microsoft thing happened - and the projects got canned as part of the cleanup and the move to Windows for the phones.

The CPUs were indeed horrible, and would've caused a lot of pain if the projects had actually continued. (source: I was working on the software side for the early Nokia Intel prototypes)

distances
2 replies
6h28m

Thanks for the insights. The N9 was originally rumored to use Intel, and that was still being speculated [1] half a year before the release. Was that also switched by Elop as part of the whole lineup change, or were these rumors unfounded in the first place?

[1] https://www.gottabemobile.com/meego-powered-nokia-n9-to-laun...

finaard
1 replies
6h13m

Pretty much all rumors at that time were very entertainingly wrong.

I think at the time that article got published we didn't even have the Intel devboards distributed (that is, a screen and macroboards, way before it starts looking like a phone). We did have some Intel handsets from a 3rd party for meego work, but that was pretty much just proof of concept - nobody ever really bothered trying to get the modem working, for example.

What became the N9 was planned all along as an ARM-based device - the exact name and specs changed a few times, but it was still pretty much developed as a maemo device, just using the meego name for branding, plus having some of the APIs (mainly Qt Mobility and QML) compatible with what was scheduled to become meego. The QML stuff was a late addition there - originally it was supposed to launch with MTF, and the device was a wild mix of both when it launched, with QML having noticeable issues in many areas.

Development on what was supposed to be proper meego (the cooperation with Intel) happened with only a very small team (which I was part of) at that time, and was starting to slowly ramp up - but the massive developer effort from Nokia to actually make a "true" meego phone would've started somewhere around mid-2011.

distances
0 replies
5h37m

Very interesting, thanks for setting the record straight!

ffgjgf1
2 replies
11h48m

> ARM has been really competitive since

And a few years prior to that Intel made the most competitive ARM chips (StrongARM). Chances are that an Intel chip would have powered the iPhone had they not scrapped their ARM division due to “reasons”

bux93
1 replies
8h45m

Intel had purchased/gotten StrongARM from DEC.

DEC had started developing ARM chips after concluding it was a bad idea to try to scale down their Alpha chips to be more energy efficient.

Then, after the success of these ARM chips in the BlackBerry and most of the Palm PDAs, as well as MP3 players and HTC smartphones, Intel sold it off so it could focus on trying to make its big chips more energy efficient, making the very mistake DEC avoided.

The iPhone was a defining moment, but at the time it was already completely obvious that smartphones would be a thing; it's just that people thought the breakthrough product would come from Nokia or Sony Ericsson (who were using ARM SoCs from TI and Qualcomm respectively). Selling off the ARM division would not have been my priority.

So it's a string of unforced errors. Nevertheless, Intel remains an ARM licensee; they didn't give that up when selling StrongARM, so it seems some people still saw the future.

matwood
0 replies
7h30m

Sounds like the classic Innovator's Dilemma. There wasn't a lot of margin in ARM chips, so Intel doubled down on their high-margin server and desktop chips. ARM took over the low end in portable devices and is now challenging in the datacenter.

ako
0 replies
11h35m

Apple has been working with Arm since 1987, when work on the Apple Newton started: https://www.cpushack.com/2010/10/26/how-the-newton-and-arm-s...

duped
13 replies
16h6m

> Was ARM really competitive until recently? Nope. Intel crushed them.

Intel never "crushed" ARM. Intel completely failed to develop a mobile processor, and ARM has massive market share there.

ARM has always beaten the crap out of Intel at performance per watt, which turned out to be extremely important both in mobile and at data center scale.

hollerith
10 replies
15h41m

I got curious about how ARM is doing in the data center and found this:

> Arm now claims to hold a 10.1% share of the cloud computing market, although that's primarily due to Amazon and its increasing use of homegrown Arm chips. According to TrendForce, Amazon Web Services (AWS) was using its custom Graviton chips in 15% of all server deployments in 2021.

https://www.fool.com/investing/2023/09/23/arm-holdings-data-...

dijit
5 replies
14h39m

ARM would be even more popular in the datacenter if getting access to Ampere CPUs was possible.

I can get a top-of-the-line Xeon Gold basically next day, with incredibly high-quality out-of-band management, from a reputable server provider (HP, Dell).

Ampere? Give it 6 months, €5,000 and maybe you can get one, from Gigabyte. Not known for server quality.

(Yes, I'm salty. I have 4 of these CPUs, and it took a really long time to get them while they cost just as much as AMD EPYC Milans.)

touisteur
0 replies
11h15m

HPE has an Ampere server line that is quite good, especially considering TCO, density, and the IO it can pack. But yeah, you'll have to fork over some cash.

otabdeveloper4
0 replies
13h36m

ARM server CPUs are great, I'd move all of our stuff to them once more competition happens. Give it a few more years.

jjtheblunt
0 replies
12h44m

You can get them on Oracle cloud servers for whatever you choose to do, last I looked and used them.

finaard
0 replies
7h24m

Since roughly the first year of COVID, supply has generally been quite bad. Yes, I can get _some_ Xeon or EPYC from HPE quickly, but if I care about specific specs it's also a several-month-long wait. For midsized servers (up to about 100 total threads) AMD still doesn't really have competition if you look at price, performance, and power - I'm currently waiting for such a machine, and the Intel option would've been 30% more expensive at worse specs.

bayindirh
0 replies
11h8m

I'm using Ampere powered servers on Oracle cloud and boy, they're snappy, even with the virtualization layer on top.

Amazon has its own ARM CPUs on AWS, and you can get them on demand, too.

Xeons and EPYCs are great for "big loads"; however, some supercomputer centers have also started to install "experimental" ARM partitions.

The future is bright not because Intel is floundering, but because there'll be at least three big CPU producers (ARM, AMD, and Intel).

Also, don't have prejudices about "brands". Most motherboard brands can design server-class hardware if they wish. They're just making different trade-offs because of the market they're in.

I've used servers which randomly fried parts of their motherboard under "real" load. Coming in one morning and having no connectivity because the top-of-the-line 2-port gigabit onboard Ethernet fried itself on a top-of-the-line, flagship server is funny in its own way.

tcmart14
0 replies
12h39m

It'll probably get there, but it'll probably be a slow migration. After all, there's no point in tossing out all the Xeons that still have a few years left in them. But I believe Google is now also talking about, or is already working on, their own custom chip similar to Graviton. [1]

[1] https://www.theregister.com/2023/02/14/google_prepares_its_o...

matwood
0 replies
7h27m

> Amazon Web Services (AWS) was using its custom Graviton chips in 15% of all server deployments in 2021

I'm guessing this has increased since 2021. I've moved the majority of our AWS workloads to ARM because of the price savings (it mostly 'just works'). If companies are starting to tighten their belts, this could accelerate ARM adoption even more.

geerlingguy
0 replies
15h6m

Oracle, and even Microsoft, have decently large arm64 deployments now too (compared to nothing).

chasil
0 replies
15h34m

The Amazon Graviton started by using stock ARM A72 cores.

GeekyBear
1 replies
11h47m

> Intel never "crushed" ARM.

They certainly tried selling their chips below cost to move into markets ARM dominated, but "contra revenue" couldn't save them.

Intel Corp.’s Contra-Revenue Strategy Was a Huge Waste of Money

https://www.fool.com/investing/general/2016/04/21/intel-corp...

offices
0 replies
2h19m

The name they chose to try to make it not sound like anti-competitive practices just makes it sound like Iran-Contra.

mdasen
11 replies
15h28m

The curse of having weak enemies is that you become complacent.

You're right: AMD wasn't competitive for an incredibly long time and ARM wasn't really meaningful for a long time. That's the perfect situation for some MBAs to come into. You start thinking that you're wasting money on R&D. Why create something 30% better this year when 10% better will cost a lot less and your competitors are so far behind that it doesn't matter?

It's not that Intel should have seen AMD coming or should have seen ARM coming. It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle. Intel should have been smart enough to understand that backing off of R&D would mean giving up the moat they'd created. Even if it looked like no one was coming for their crown at the moment, you need to understand that disinvestment doesn't get rewarded over the long-run.

Intel should have understood that being cheap about R&D and extracting as much money as possible from customers wasn't a long-term strategy. It wasn't the strategy that built them into the dominant Intel we knew, and it wouldn't keep them as that dominant Intel.

llamaLord
6 replies
13h40m

To be fair, they should have seen Ryzen coming. Any long-term AMD user knew years before Ryzen landed that it was going to be a good core, because AMD were very vocal about how badly their bet on Bulldozer (the previous core family) had gone wrong.

AMD bet BIG on the software industry leaning heavily on massive thread counts over high-throughput, single-threaded usage... but it never happened, so the cores tanked.

It was never a secret WHY that generation of core sucked, and it was relatively clear what AMD needed to do to fix the problem, and they were VERY vocal about "doing the thing" once it became clear their bet wasn't paying off.

bmer
4 replies
9h12m

(I'm curious about this story, as I am unfamiliar with it.)

Why did that generation of core (Bulldozer) suck?

What was it that AMD needed to do to fix the problem?

(Links to relevant stories would be sufficient for me!)

bonton89
0 replies
4h24m

There's been lots written about this but this is my opinion.

Bulldozer seemed to be designed under the assumption that heavy floating point work would be done on the GPU (APU), which all the early construction cores had built in. But no one is going to rewrite all of their software to take advantage of an iGPU that isn't present in existing CPUs and isn't present in the majority of CPUs (Intel), so it sort of smelled like Intel's Itanic moment, only worse.

I think they were desperate to see some near-term return on the money they spent buying ATI. ATI wasn't a bad purchase idea, but they seemed to heavily overpay for it, which probably really clouded management's judgement.

StuffMaster
0 replies
51m

I thought it was a bad idea when I first read of it. It reminded me of Intel's NetBurst (Pentium 4) architecture.

Macha
0 replies
4h31m

Chips and Cheese has probably the most in depth publicly available dive for the tech reasons why Bulldozer was the way it was:

https://chipsandcheese.com/2023/01/22/bulldozer-amds-crash-m...

https://chipsandcheese.com/2023/01/24/bulldozer-amds-crash-m...

---

From a consumer perspective, Bulldozer and revisions as compared to Skylake and revisions were:

+ comparable on highly multi-threaded loads

+ cheaper

- significantly behind on less multi-threaded loads

- had 1 set of FPUs per 2 cores, so workloads with lots of floating point calculations were also weaker

- Most intensive consumer software was still focused on a single thread or a very small number of threads (this was also a problem for Intel in trying to get people to buy more expensive i7s/i9s over i5s in those days)

King1st
0 replies
1h15m

The Bulldozer design had a few main issues.

1. Bulldozer had a very long pipeline, akin to a Pentium 4. This allows for high clocks but comparatively little work done per cycle vs. the competition. Since clocks have a ceiling around 5GHz, they could never push the clocks high enough to compete with Intel.

2. They used an odd core design with 1 FPU for every 2 integer units instead of the normal 1:1 that we have seen on every x86 since the i486. This leads to very weak FPU performance, which is needed for many professional applications. Conversely, it allowed for very competitive performance on highly threaded integer applications like rendering. This decision was probably made under the assumption that APUs would integrate their GPUs better and software would be written with that in mind, since a GPU easily outdoes a CPU's FPU but requires more programming. This didn't come to be.

3. They were stuck using GlobalFoundries due to contracts from when they spun it off requiring AMD to use GloFo. This became an anchor as GloFo fell behind market competitors like TSMC, leaving AMD stuck on 32nm for a long while, until GloFo got 14nm and AMD eventually got out of the contract between Zen 1 and 2.

Bonus: many IC designers have bemoaned how much of Bulldozer's design was automated with little hand modification, which tends to lead to a less optimized design.

baq
0 replies
7h24m

They'd seen all that. You don't have to have an MBA or an MIT degree to plot the projected performance of your own or your competitors' chips.

It was process failures. Their fabs couldn't fab the designs. Tiger Lake was, what, 4 years late?

throwup238
2 replies
14h29m

> It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle.

Their third employee, who later went on to become their third CEO and guide Intel through the memory-to-processor transition, literally coined the phrase and wrote a book called "Only the Paranoid Survive" [1]. It's inexcusable that management degraded that much.

[1] https://en.wikipedia.org/wiki/Andrew_Grove#Only_the_Paranoid...

_the_inflator
1 replies
12h28m

Yes, I agree. However, I don’t necessarily see this book title as an imperative to innovate. Patent trolling can also be a way to deal with competitors.

After all, Apple and ARM came from the idea of building better end-user products around softer factors than sheer CPU power. Since Intel's products are neither highly integrated phones nor assembled computers, Intel had no direct stake.

It is complex.

IndrekR
0 replies
8h58m

Apple came from a recreational "there is now a CPU 10 times cheaper than anything else and I can afford to build my video terminal into a real computer in my bedroom" and "maybe we can actually sell it?" [1]

ARM literally came from “we need a much better and faster processor” and “how hard can this be?” [2]

[1] https://en.wikipedia.org/wiki/History_of_Apple_Inc.#1971%E2%...

[2] https://en.wikipedia.org/wiki/ARM_architecture_family

nox101
0 replies
12h37m

This sounds like Google. Some bean counter is firing people left and right, and somehow they think that's going to save them from the fact that AI answers destroy their business model. They need more people finding solutions, not fewer.

chasil
6 replies
15h48m

Intel's flaw was trying to push DUV to 10nm (otherwise known as Intel 7).

Had Intel adopted the molten tin of EUV, the cycle of failure would have been curtailed.

Hats off to SMIC for the DUV 7nm which they produced so quickly. They likely saw quite a bit of failed effort.

And before we discount ARM, we should remember that Acorn produced a 32-bit CPU with a 25k transistor count. The 80386 was years later, with 275k transistors.

Intel should have bought Acorn, not Olivetti.

That's a lot of mistakes, not even counting Itanium.

cherryteastain
1 replies
8h58m

N7, TSMC's competitor to Intel 7, does not use EUV either.

Sakos
0 replies
2h52m

There are multiple versions of N7. The N7 and N7P are DUV while the N7+ is EUV.

BirAdam
1 replies
15h9m

Acorn's original ARM chip was impressive, but it didn't really capture much market share. The first ARM CPU competed against the 286, and did win. The 386 was a big deal though. First, software was very expensive at the time, and the 386 allowed people to keep their investments. Second, it really was a powerful chip. It delivered 11 MIPS vs the ARM3's 13, but the 486 achieved 54 MIPS while the ARM6 only hit 28 MIPS. It's worth noting that the 386 also used 32-bit memory addressing and a 32-bit bus, while ARM used 26-bit addressing with a 16-bit bus.

chasil
0 replies
14h49m

At the same time, it had unquestioned performance dominance until ARM made the decision for embedded.

ARM would have been much more influential under Intel, rather than pursuing the i960 or the iAPX 432.

Just imagine Intel ARM Archimedes. It would have crushed the IBM PS/2.

Whoops.

Seriously, even DEC was smart enough.

https://en.m.wikipedia.org/wiki/StrongARM

kens
0 replies
14h3m

> Acorn produced a 32-bit CPU with a 25k transistor count. The 80386 was years later, with 275k transistors

Coincidentally, ARM1 and 80386 were both introduced in 1985. I'm a big fan of the ARM1 but I should point out that the 386 is at a different level, designed for multitasking operating systems and including a memory management unit for paging.

ffgjgf1
0 replies
11h41m

> Intel should have bought Acorn, not Olivetti.

Intel had StrongARM though. IIRC they made the best ARM CPUs in the early 2000s and were designing their own cores. Then Intel decided to get rid of it because obviously they were just wasting money and could design a better x86 mobile chip…

klelatti
4 replies
9h47m

> Nope. Intel crushed them.

The problem is that Intel has had a defensive strategy for a long time. Yes, they crushed many attempts to breach the x86 moat, but they failed completely at, and then gave up on, attempts to reach beyond that moat. Mobile, foundry, GPUs, etc. have all seen half-hearted or doomed attempts (plus some bizarre attempts to diversify - McAfee!).

I think that, as Ben essentially says, they put too much faith in never-ending process leadership and the ongoing supremacy of x86. And when that came to an end the moat was dry.

bonton89
1 replies
4h13m

Part of the problem is Intel is addicted to huge margins. Many of the areas they have tried to enter are almost commodity products in comparison, so it would take some strong leadership to convince everyone to back off those margins for the sake of diversification.

They should have been worried about their process leadership for a long time. IIRC even the vaunted 14nm that they ended up living on for so long was pretty late. That would have had me making backup plans for 10nm, but it looked more like leadership just went back to the denial well for years instead. It seemed to me like they didn't start backporting designs until after Zen 1 launched.

klelatti
0 replies
3h3m

100%! Also in a way reversing what Moore/Grove did when they abandoned commodity memories. Such a hard thing to do.

baq
1 replies
9h41m

On the contrary, they tried to pivot so many times and enter different markets. They bought countless small companies and some big ones, but nothing seemed to stick except the core CPU and datacenter businesses. IIRC Mobileye is one somewhat successful venture.

klelatti
0 replies
9h34m

Except they weren't real pivots. Mobile and GPU were x86-centric, and foundry was half-hearted, without buying into what needed to be done. Buying a company is the easy bit.

makeitdouble
2 replies
15h50m

> Where was AMD?

Trying to breathe as Intel was pushing their head under water.

We saw AMD come back after their lawsuit against Intel got through and Intel had to stop paying everyone to not use AMD.

mjevans
1 replies
14h7m

Kind of, but not really in laptops :( they're doing great on handhelds though.

makeitdouble
0 replies
11h9m

I think they're doing better (disclaimer: writing this from a Ryzen laptop); their latest chips have better thermals and power consumption, and a decent reputation compared to 10 years ago, for instance. But yes, it's a long road ahead.

AtlasBarfed
1 replies
15h8m

If your process nodes are going wayyyy over schedule, it shouldn't take much intelligence to realize that TSMC is catching up FAST.

You should probably have some intel (haha) on ARM and AMD chips. They didn't care.

Why? It's monopoly business tactics, except they didn't realize they weren't Microsoft.

It's not like this was overnight. Intel should have watched AMD like a hawk after that slimeball Ruiz was deposed and a real CEO put in charge.

And the Mac chips have been out, what, two years now, and the Apple processors on the iPhones at least 10?

Come on. This is apocalyptic scale incompetence.

edgyquant
0 replies
2h34m

Microsoft also got its lunch eaten by mobile during this time. They have a new CEO who's had to work hard to reshape the place as a services company.

vdaea
0 replies
8h5m

Intel's recent CPUs are not as good as Ryzen? That hasn't been correct for a few years now.

treffer
0 replies
10h59m

The problem is this... The cash cow is datacenters, and especially top-of-the-line products where there is no competition.

The fastest single-core and multi-core x86 CPUs that money can buy will go to databases and similar vertically scaled systems.

That's where you can put up the most extreme margins. It's "winner takes all the margins". Being somewhat competitive but mostly a bit worse is the worst business position. Also...

I put money on AMD when they were taking over the crown.

Thank you for this wakeup call. I'll watch closely to see if Intel can deliver on this and take it back, and I'll have to adjust accordingly.

scrubs
0 replies
14h53m

Well, yes.

Look at Japan generally and Toyota specifically. In Japan, the best award you can get for having an outstanding company in terms of profit, topline, quality, free cash, people, and all the good measures is the Deming Prize. Deming was our guy (an American), but we Americans in management didn't take him seriously enough.

The Japanese, to their credit, did... they ran with it and made it into their own thing in a good way. The Japanese took 30% of the US auto market in our own backyard. Customers knew Hondas and Toyotas cost more but were worth every dollar. They resold better too. (Yes, some noise about direct government investment in Japanese companies was a factor too, but not the chief factor in the long run.)

We Americans got it "explained to us." We thought we were handling it. Nah, it was BS. But we eventually got our act together. Our equivalent of the Deming Prize is the Malcolm Baldrige Award.

Today, unfortunately, the Japanese economy isn't rocking like it was in the 80s and early 90s. And Toyota isn't the towering example of quality it once was. I think, if my facts are correct, they went too McDonald's and got caught up in lowering the cost of their materials and supply chain, with bad net effects overall.

So things ebb and flow.

The key thing: is management, through action or inaction, allowing a stupid, inbred company culture to make crappy products? Do they know their customers, etc.? Hell, mistakes, even screw-ups, are not life-ending for companies the size of Intel. But recurring stupidity is. A lot of the time the good guys allow themselves to rot from the inside out. So when is enough enough already?

jldugger
0 replies
9h25m

> Was ARM really competitive until recently?

The writing was on the wall 6 years ago; Intel was not doing well in mobile, and it was only a matter of time until that tech improved - the same way Intel unseated the datacenter chips before it. Ryzen, I will give you, is a surprise, but in a healthy competitive market, "the competition out-engineered us this time" _should_ be a potential outcome.

IMO the interesting question is basically whether Intel could have done anything differently. Clayton Christensen's sustaining vs. disruptive innovation model is well known in industry, and ARM slowly moving up the value chain is obvious in that framework. Stratechery says they should have opened up their fabs to competitors, but how does that work?

epolanski
0 replies
8h8m

> Was AMD really competitive before Ryzen?

No, but ARM should've rung many bells.

Intel poured tons of billions into mobile.

They didn't understand that the future, from smartphones to servers, was about power efficiency and scale.

Eventually their lack of power efficiency made them lose ground across their core business. I hope they will get this back, and not just by competing on manufacturing but on architecture too.

djmips
0 replies
5h0m

Only the Paranoid Survive

chx
0 replies
12h5m

> Where was AMD?

Crushed by Intel's illegal anticompetitive antics?

Dalewyn
0 replies
15h49m

> Core 2 series blew them out of the water.

And Sandy Bridge all but assured AMD wouldn't be relevant for the better part of a decade.

It's easy to forget just how fast Sandy Bridge was when it came out; over 12 years later and it can still hold its own as far as raw performance is concerned.

ActionHank
0 replies
8h37m

They should have known it was coming because of how many people they were losing to AMD, but there is a blindness in big corps when management decide they are the domain experts and the workers are replaceable.

1123581321
0 replies
16h8m

Intel’s problem was the cultural and structural issues in their organization, plus their decision to bet on strong OEM partner relationships to beat competition. This weakness would prevent them from being ready for any serious threat and is what they should’ve seen coming.

noopl
64 replies
17h56m

As someone who doesn't really understand the space and the stuff the article is talking about, I was surprised that "Apple Silicon" didn't appear here, since from my perspective, having used both Intel and Apple Silicon Macs, it's a huge "wow" change. Was Apple leaving, and Apple Silicon Macs being so incredibly much better than Intel ones, not actually a big deal for Intel?

timschmidt
55 replies
17h39m

Apple had been sandbagging the Intel chips for several generations of MacBook with anemic cooling solutions, poor thermal throttling curves, and fans that wait until the last moment to turn on - to the extent that the fastest CPUs available in MacBook Pros did not outperform lower models, because both became thermally saturated. When properly cooled, the Intel silicon tends to perform a lot better.

Additionally, both Intel and AMD manufacture CPUs for HEDT and servers which are far beyond anything Apple is fabricating at the moment. Apple has no response to EPYC, Threadripper, or higher-end Xeons. Similarly, Apple has no equivalent to Intel, AMD, and Nvidia discrete GPUs.

Apple made a quality cell phone chip, and managed to exploit chiplets and chip-to-chip interconnects to double and quadruple it into a decent APU. But it's still an APU, just one segment addressed by current x86 designs.

whynotminot
22 replies
16h30m

Intel stopped making good chips for the things Apple cared about: efficient, cool designs that can run well with great battery life in attractive consumer products.

Apple "sandbagging" was them desperately trying to get a poor Intel product to work in laptops they wanted to be thin and light.

Even today, though, their designs haven't really changed all that much - in fact the MacBook Air is just as thin and now even completely fanless. It just has a chip in it that doesn't suck.

orangecat
15 replies
16h9m

It was Apple's decision to put an 8-core i9 in an MBP chassis that was utterly incapable of dealing with the heat. Yes, the M1 is far more efficient and Apple deserves a lot of credit for it, but their last Intel laptops were much worse than they needed to be.

whynotminot
14 replies
16h3m

You can definitely point out some processor decisions that seemed poor in hindsight, but the fact is nothing in Intel's lineup at the time was any good for a laptop.

Maybe they were worse than they needed to be, but the best they could have been with Intel would have still left so much to be desired.

timschmidt
13 replies
15h59m

> but the fact is nothing in Intel's lineup at the time was any good for a laptop.

Simply false brand-oriented thinking.

whynotminot
12 replies
15h52m

Why so defensive about Intel? I think you're the one doing the brand-oriented thinking here.

timschmidt
11 replies
15h46m

> nothing in Intel's lineup at the time was any good for a laptop.

You leave no room for nuance here, friend. And somehow I doubt you're familiar with every single one of Intel's tens of thousands of SKUs over decades. Intel has made a lot of CPUs. They're in lots of laptops. You might consider your wording if you're not trying to be corrected.

whynotminot
10 replies
15h20m

Please correct me then. I'm willing to learn about the Intel laptop chip that Apple could have used to achieve M1 performance and efficiency at the time they switched. You're right that I am not familiar with every one of Intel's SKUs they produced at the time.

timschmidt
9 replies
15h12m

The M1 was released in 2020, which puts it in line with Tiger Lake and Comet Lake, for which I'm seeing 7 different SKUs at 7W TDP. The fastest of those seems to be the Core i7-1180G7, which seems to perform almost as well as the M1 while using about half as much power: https://nanoreview.net/en/cpu-compare/intel-core-i7-1180g7-v...

Just the first example I grabbed. Intel made quite a few more chips in the 15W and higher envelopes, still under the M1's ~20W TDP.

I didn't check their embedded SKUs yet. Nor enterprise.

whynotminot
5 replies
14h56m

Did you make a mistake in your comparison link? Because I've been looking at it for a few minutes now and I don't get it.

It doesn't support your contention that Intel was making comparable chips at the time to the M1, I'll say that at the least.

timschmidt
4 replies
14h49m

I guess we're reading different graphs. The one I'm seeing shows Intel producing 70% of the performance with 35% of the power of an M1. That's... checks math... better performance per watt, and lower overall power consumption. If you want more performance, you are free to step up to a 15W version which outperforms the M1 at 70% of its TDP.

whynotminot
3 replies
14h19m

I think I have a better sense of some of the misunderstanding here. I think you're taking Intel's TDP figures as honest and straightforward, when they really truly aren't.

Anandtech has talked about some of Intel's TDP shenanigans before, even for Tiger Lake: https://www.anandtech.com/show/16084/intel-tiger-lake-review...

timschmidt
2 replies
14h12m

I wouldn't call them shenanigans. At least Intel publishes numbers (Apple does not). There are simply a lot of power states Intel CPUs are capable of engaging, and many configuration options about when and under what circumstances to do so. Much of this is configurable by the OEM and is not hard-set by Intel, or even under Intel's control after the processor leaves the fab. The Anandtech article seems to indicate that it's perfectly possible to run this Intel CPU within the advertised 7W TDP, and that it can also be allowed to turbo up to 50W, which I'm sure most OEMs enable. My favorite OEMs provide the knobs to adjust these features.

whynotminot
1 replies
4h51m

So you’re aware of this Intel TDP chicanery but trying to say with a straight face that these chips achieve similar performance and efficiency to the M1. I think that’s pretty disingenuous.

timschmidt
0 replies
3h13m

Again, there's not chicanery going on. You asked for examples of chips Intel produced which were suitable for laptops as you claimed they produced none. I provided one.

nemothekid
2 replies
14h43m

It's hard for me to square your statements with actual real-world products. The Core i7-1180G7 was the CPU in the ThinkPad X1 Nano Gen 2 (IIRC), and that laptop got half the battery life of an M2 Pro and a third of the battery life of the M1 Air.

TDP doesn't seem to be the whole story - the datasheet is one thing, but I've yet to see a reviewer get 30 hours from an Intel laptop.

timschmidt
0 replies
14h27m

For sure there's interesting stuff to talk about here. Intel allows OEMs a lot of configuration with regard to TDP and various ways to measure chip temp accounting for thermal mass of the cooling solution and skin temperature of the device. Many OEMs screw it up. Even the big ones. Lenovo and Apple included.

Screen backlight usually has as much to say about battery life as the CPU does. And it's hard to deny Apple's advantage of being fully vertically integrated. Their OS only needs to support a finite number of hardware configurations, and they are free to implement fixes at any layer. Most of my PC laptops also make choices which use more power in exchange for modularity and upgradeability, like using SODIMMs for RAM, and M.2 or SATA removable storage, all of which consume more power than Apple's soldered-on components.

mike_hearn
0 replies
7h59m

Lots of confounders there. macOS is probably a lot more power-optimized than Windows, given the shared iOS codebase, and OS quality can impact battery life hugely (useless wakeups, etc.).

timschmidt
5 replies
16h25m

Intel makes plenty of chips in the 15W and even 5W envelopes to this day. MSI just built a gaming handheld with one.

whynotminot
4 replies
16h21m

At those envelopes they are nowhere near as good as what Apple makes.

Even if Intel makes a thing that doesn't mean it's actually any good.

I had some of those Intel ULV parts back in the day. They sucked.

timschmidt
3 replies
16h17m

I'm not sure what you had back in the day, but even Intel's E cores these days are sprightly. Gone are the days of slow Atom cores. Pretty sure an equivalently clocked Intel E core of today beats the fastest Intel core ever shipped in an Apple product.

whynotminot
2 replies
16h15m

Neither here nor there--when Apple needed Intel to ship a decent, performant core that didn't overheat and chew through battery life, they could not.

Apple switched to their own silicon, and life for their customers has radically improved.

Maybe Intel has a great E Core now. Good for them if they do.

timschmidt
1 replies
16h3m

It sounds like Apple just wanted to vertically integrate, to me. Which is a fine reason to do something. But doesn't require misrepresenting competitors or constructing a past which never happened. You do you though.

evilduck
0 replies
15h4m

Intel laptops can't even sleep anymore. I have an 11th Gen Intel space heater that invalidates every claim you've made in this thread. People aren't stupid; we've owned these shitty Intel products for years. You're fooling nobody.

jdewerd
9 replies
16h54m

Absolutely ridiculous. No, Apple did not juice the M1 by giving it better cooling than x86. Quite the opposite, they took a big chunk of their design+process wins to the bank and exchanged them for even cooler and quieter operation because that's what they wanted all along. Cool and quiet was a deliverable.

It's absurd to point out that Apple could have gotten higher performance from x86 at higher power as if it's some kind of win for Intel. Yes, obviously they could have; that's true of every chip ever. They could take it even further and cool every MacBook with liquid nitrogen, and require lugging around a huge tank of the stuff just to read your email! They don't, because that's dumb. Apple goes even further than most laptop manufacturers and thinks that hissing fans and low battery life are dumb too. This leads them to low power as a design constraint, and what matters is what performance a chip can deliver within that constraint. That's the perspective from which the M1 was designed, and that's the perspective from which it delivered in spades.

sangnoir
6 replies
16h9m

> Absolutely ridiculous. No, Apple did not juice the M1 by giving it better cooling than x86.

This is such an uncharitable and adversarial interpretation of the parent. Sandbagging Intel != juicing the M1.

whynotminot
5 replies
16h6m

The parent didn't deserve any charity - Apple didn't sandbag anyone. They made the laptop they felt was the future based on a processor roadmap Intel failed to deliver on.

The fact that the exact same laptop designs absolutely soared when an M1 was put in them with no changes tells you everything you need to know about how Intel dropped the ball.

timschmidt
4 replies
16h2m
whynotminot
3 replies
16h1m

> The fact that the exact same laptop designs absolutely soared when an M1 was put in them with no changes tells you everything you need to know about how Intel dropped the ball.
timschmidt
2 replies
15h53m

Intel did screw up and got stuck on 14nm for far too long. But then, does Apple deserve credit for TSMC's process advantage? AMD was never stuck in the way Intel was, and they have held the performance crown since then for most kinds of workloads. I suppose Apple figured that if they were going to switch chip vendors again, it might as well be to themselves.

whynotminot
1 replies
15h44m

True, it's not just a story of Apple besting Intel. AMD has been beating them too.

Rough recent history for Intel.

I agree that Apple figured if they were going to switch, they should just go ahead and switch to themselves. But the choice was really to switch to either themselves or AMD. Sticking with Intel at the time was untenable. 14nm is certainly a big part of that story, and I'm glad you at least finally recognize there was a serious problem.

If Intel had been able to deliver on their node shrink roadmap, perhaps Apple never would have felt the need to switch--or may have at least delayed those plans. Who knows, that's alternate history speculation at this point.

The article in question is about Intel potentially getting back to some level of process parity, perhaps even leadership. I'm looking forward to that because I think a competitive market is important.

But pretending Intel's laptop processors weren't garbage for most of the last 8 or so years is kind of living in an alternate reality.

timschmidt
0 replies
15h21m

I think a lot has happened in Intel land since Apple folk stopped paying attention, as well. Intel still has a lot of work to do to catch up to AMD, but they have been fairly consistently posting gains in all areas. Apple really doesn't have a power advantage other than that granted by their process node at this point, against either AMD or Intel. AMD has seemingly delayed the Strix Halo launch because it wasn't necessary to compete at the moment. And Qualcomm is taking the same path Apple has, but is willing to sell to anyone, and as a result has chips in all standalone VR headsets other than Apple's.

It remains to be seen if Apple is willing or able to scale their architecture to something workstation-class (the last Intel Mac Pro supported 1.5TB of RAM; it's easy to build a 4TB EPYC workstation these days).

timschmidt
1 replies
16h27m

Maybe it's not clear to everyone... hot transistors waste more power as heat. It's a feedback loop, and it doesn't require liquid nitrogen to nip in the bud. Running the chip hot benefits no one. Kicking on a fan and racing to idle without throttling would use less battery power.

I'm not sure what's so upsetting about the assertion that chips composed of a similar number of transistors, on a similar process node, cooled similarly, might function similarly. Because when all variables are controlled for, that's what I see.

zamadatix
0 replies
1h34m

Even taking the latest Intel has to offer, the Core Ultra 9 185H in a system with active cooling, and putting it up against the fanless model of the M1 MacBook Air, it comes out equal in performance at more than triple the power usage - 3 years after the fact. It's got nothing to do with a feedback loop; there is less cooling capacity and not even a fan on the Apple model, so the fan wasn't somehow needed for that performance level. Not to mention that a conspiracy by Apple to run all their MacBooks like shit for many years prior to switching, all to make Intel look bad, is a bit ludicrous - if the problem was ever that the fans weren't on enough, they would have just turned them up. Turns out, they can remove them instead!

The one place I agree with you is the >~16-core space (server style) with TBs of RAM, where total performance density is more important than power; they don't bother to really compete there. Where I differ slightly is that I don't think there is anything about the technical design that prevents this - just look how EPYC trounced Intel in the space by using a bunch of 8-core chip modules instead of building a monolith; rather, Apple just doesn't have interest in serving that space. If Apple was able to turn a phone chip into something with the multicore performance of a 24-core 13900K, it doesn't exactly scream "architectural limitation" to me.

cookingmyserver
9 replies
16h59m

> When properly cooled, the Intel silicon tends to perform a lot better.

Of course, but the average Joe does not want to wear ear protection when running their laptop. Nor do they want the battery to last 40 minutes, or have it be a huge brick, or have to pour liquid nitrogen on it to keep it from thermal throttling.

Apple innovated by making chips that fit the form and function most people need in their personal devices. They don't need to be the absolute fastest, but innovation isn't solely tied to the computing power of a processor. It makes sense that Intel excels in the market segment where people do need to wear ear protection to go near their products. If they need to crank in an extra 30 watts to achieve their new, better compute, then so be it.

We don't know the specifics of the conversations between Apple and Intel. Hopefully for Intel it was just the fact that they didn't want to innovate for personal computing processors and not that they couldn't.

timschmidt
7 replies
16h35m

It seems like you think I'm trying to dunk on Apple. I am not. Apple Silicon is a great first showing for them. Performance simply isn't better than Ryzen APUs running in the same power envelope. And power usage is what you'd expect of silicon running on the latest node. Further, some of Apple's choices - bringing memory on package, only two display outputs - caused regressions for their users compared to the previous Intel offerings.

I wouldn't call what Apple did innovation - they followed predictable development trajectories - more integration. They licensed ARM's instruction set, Imagination's PowerVR GPU, and most of the major system buses (PCIe, Thunderbolt, USB 3, DisplayPort, etc.); they bonded chiplets together with TSMC's packaging and chip-to-chip communication technologies; and they made extensions (like optional x86 memory ordering for all ARM instructions, which removes a lot of the work of emulation). Incidentally, Apple kicked off its chip design efforts by purchasing PA Semi. Those folks had all the power management chip design expertise already.

But again, it's been a good first showing for Apple. I think they were smart to ship on-package DRAM in a consumer device. Now is about the right time for the CPU to be absorbing DRAM, as can be seen in another form with AMD's 3D V-Cache. And it's cool for Apple folks to have their own cool thing. Yay y'all. But I've run Linux for 20 years, I've run it on every computer I could get my hands on in that time, and through that lens, Apple Silicon performs like any x86 APU in a midrange laptop or desktop. And as regards noise, I never hear the fans on my 7800X3D / 3090 Ti, and it is very, very noticeably faster than my M1 Mac. Apple Silicon's great; it's just for laptops and midrange desktops right now.

JumpCrisscross
3 replies
16h5m

> I wouldn't call what Apple did innovation - they followed predictable development trajectories - more integration

By this yardstick, nobody in semiconductors has ever innovated.

timschmidt
2 replies
16h0m

Well to be fair there is an awful lot of copying and extending in place.

JumpCrisscross
1 replies
15h59m

> to be fair there is an awful lot of copying and extending in place

That's how technology proliferates. The point is if the M1 wasn't innovative, that rules out pretty much everything AMD, Intel and potentially even NVIDIA have done in the last three decades.

timschmidt
0 replies
15h51m

Did they do anything in those three decades that hadn't been dreamt of and projected out sometime in the 60s? Architecturally, sure doesn't seem like it.

I'd say a lot more innovation happens on the process side. Manufacturing.

All the architecture changes look like things mainframes did decades ago.

charrondev
1 replies
15h45m

Somehow you are comparing Apple's first-gen laptop/iPad chip to a desktop setup requiring 10x the power consumption and 10x the physical size (for the chips and all the cooling required). The power envelopes for these chips are very different, and they prioritize different things.

timschmidt
0 replies
15h41m

That's my point. You got it. Go you.

aurareturn
0 replies
11h47m

> Apple Silicon is a great first showing for them. Performance simply isn't better than Ryzen APUs running in the same power envelope. And power usage is what you'd expect of silicon running on the latest node.

Do you have a source for this other than Cinebench R23, which is hand-optimized for x86 AVX instructions through Intel's Embree engine?

From all sources, Apple Silicon has 2-3x more perf/watt than AMD's APUs in multithread and a bigger gap in single thread.

ericmay
0 replies
16h47m

It's always curious to me how Apple's superior products are somehow some other company's fault.

kristianp
6 replies
17h37m

> Apple had been sandbagging the Intel chips for several generations of macbook with anemic cooling solutions

I don't think that was deliberate. Apple has a long history of not cooling their computers enough.

yakz
2 replies
17h30m

I had never even considered it was deliberate. In hindsight, doing it deliberately seems pretty smart, at least from a “let’s juice the intro” perspective, but that would have been a really big bet.

timschmidt
0 replies
17h9m

It's an interesting question right? Do you think Apple never tested multiple configurations of their leading computer (specifically the higher end ones) over multiple generations, or do you think they knew what they were doing?

evilduck
0 replies
14h58m

Microsoft Surface Laptops were trying for similar form factors and having the same thermal problems at the same time. This isn't a grand conspiracy; all Intel laptops were suffering.

alwayslikethis
2 replies
17h9m

They still do it. Apple Silicon Macs run at 100+ degrees under load. Apple would rather run their chips hot than turn on the fans, which may or may not be justified. We don't really see chips dying from high temps, especially for the typical workloads a casual user would run.

gryn
0 replies
16h29m

Just wait for them to get a little older and you'll probably see a wave of motherboards dying in the future, like what happened with the MBPs from around 2011-2013, with Apple denying it for years until the threat of class-action lawsuits, before offering replacements with other motherboards that also failed.

doublepg23
0 replies
16h23m

That’s pretty common for modern CPUs. Look at the temps for Ryzen 7xxx series.

GrumpySloth
1 replies
16h15m

They didn’t sabotage them. The CPUs just didn’t align with Apple’s goals. It’s not like they gave the M1 CPUs the cooling that Intel didn’t get. My Mac Mini M1 never spins up its fans and everything is fine. I love it. If it did, I’d consider it a downgrade.

jeffbee
0 replies
15h48m

I don't think Apple "sabotaged" them but it is true that the M1 series came with very different thermal design parameters than the terminal Intel models. Apple's Intel laptops would ramp chip power to the max as soon as anything happened. Apple's M1 ramps very, very slowly, dwelling for hundreds of milliseconds at each voltage before stepping up. These are decisions that are in the OEM's hands and not dictated by Intel.

eigen
0 replies
14h37m

The 2019 15" MacBook Pro with the i9-9980HK has a Geekbench score of 1383/6280 [1]. A generic user-submitted Intel Core i9-9980HK has a Geekbench score of 1421/6269 [2]. A <3% difference doesn't seem like Apple sandbagging with anemic cooling.

[1] https://browser.geekbench.com/macs/macbook-pro-15-inch-mid-2...

[2] https://browser.geekbench.com/processors/intel-core-i9-9980h...

doublepg23
0 replies
16h22m

I don't think that was the case. I believe Apple trusted Intel's roadmap as much as anyone, and Intel's repeated failure to deliver inspired them to move.

UniverseHacker
0 replies
15h19m

It's not possible to cool those Intel CPUs better and still have good battery life and quiet operation in a compact laptop form factor. The last Intel MacBooks would get very hot and very loud when running hard, and kill the battery in minutes... my M1 never makes a sound or feels even slightly warm to the touch when running hard, yet it is a heck of a lot more powerful and can run hard all day long on a single battery charge.

spenczar5
5 replies
17h53m

TSMC manufactures Apple Silicon. It’s a good example of the foundry model that the article is talking about.

jsheard
4 replies
17h41m

Not only do they manufacture Apple Silicon, but Apple usually buys out the first runs of their new processes so they have a head-start against even other TSMC customers. I believe every 3nm wafer that TSMC makes is still going to Apple, hence the just-released Qualcomm SD8G3 flagship chip still being made on TSMC 4nm.

timschmidt
2 replies
17h20m

Nvidia's AI $trillions may influence this arrangement in the near future. In the recent past Nvidia has bid Samsung against TSMC in attempts to save costs, but Apple's strategy works well for as long as one foundry has a process advantage.

mdasen
0 replies
15h40m

I think Apple's advantages are that it has a lot of cash and a very predictable business. I'm not arguing that Nvidia doesn't have a good business, but it seems to be a bit less predictable and a bit less regular than Apple's business. Apple really knows how many chips it's going to want years in advance.

Even if Nvidia also wants TSMC's latest process, that could work to Apple's advantage. Right now it's looking like Apple might end up with TSMC's 3nm process for 18 months. If Apple and Nvidia split the initial 2nm capacity, it could be 3+ years before AMD and Qualcomm can get to 2nm.

If Nvidia launches the RTX 50 series in late 2024 or early 2025 on TSMC's 3nm (which seems to be the rumor), what does that do for availability for AMD and Qualcomm? Maybe what we'll see going forward is Apple taking the capacity for the first year and Nvidia taking the capacity for the second year leaving AMD and Qualcomm waiting longer.

That would certainly benefit Apple. Apple isn't competing against Nvidia. If Nvidia uses up TSMC capacity leaving Apple's actual competitors at a bigger disadvantage, that's pretty great for Apple.

frankchn
0 replies
14h52m

The chips Nvidia requires are a lot bigger (>800 mm2 sometimes) and they are much more expensive to make on a cutting edge process with relatively low yields compared to the 100-150 mm2 chips Apple wants for its Axx iPhone chips.

Rapzid
0 replies
16h49m

Not a good deal for consumers, IMHO. Intel is floundering and AMD is now always a node behind. Seems very anti-competitive.

sillywalk
0 replies
16h42m

I don't think Macs really amounted to much $ in sales for Intel, compared to e.g. Dell etc.

javier2
0 replies
17h36m

There are many different reasons. Firstly, Apple built experience over a decade designing and building the chips, taking over more and more of the design and IP. Then they made these chips at an Intel competitor, TSMC, which has a different business model to Intel. Apple were also willing to compromise. The first years had weird experiences for customers with Rosetta, broken apps, and Macs that could only drive a single external monitor and connect to few devices. Yet we clearly saw the power efficiency from the better foundry tech at TSMC, coupled with a decade of saving watts for mobile phone batteries.

autokad
60 replies
17h43m

Intel's 2010 $7.6B purchase of McAfee was a sign that Intel doesn't know what it's doing. In the CEO's words: the future of chips is security on the chip. I was like no, no it's not! I wanted them to get into mobile and GPUs at the time. Nvidia's market cap was about $9B at the time. I know it would have been a larger pill to swallow, and they likely would have had to bid a bit more than $9B, but I thought it was possible for Intel at the time.

brutus1213
19 replies
16h29m

I remember when they did random stuff like the whole IoT push (frankly, their offerings made no sense to humble me .. Microsoft had a better IoT than Intel). They did drone crap .. gave a kick ass keynote at CES I recall .. also .. made little sense. Finally, the whole FPGA thing .. makes little sense. So much value being destroyed :(

stefan_
14 replies
15h51m

I remember when they bought a smart glasses company then refunded every buyer ever the full retail price. There hasn’t been an Intel acquisition that has worked out in some 20 years now it seems. Just utterly unserious people.

TylerE
12 replies
15h12m

Isn't that true for virtually EVERY big tech merger? Like, which ones have actually worked?

landryraccoon
3 replies
14h31m

Facebook bought Instagram and WhatsApp and they were both home runs. Zuckerberg knows how to buy companies.

bee_rider
1 replies
13h33m

That’s a different type of acquisition, right? Buying your competition. If nothing else you’ve wiped out a competitor.

TylerE
0 replies
13h22m

Even that sometimes flops (HP/Compaq)

baq
0 replies
7h19m

Facebook bought their eventual competitors by making an offer they couldn't refuse. Zuck knows Metcalfe's law.

dangrossman
1 replies
14h32m

Google built its core advertising ecosystem on acquisitions (Applied Semantics, DoubleClick, AdMob, etc) and extended it into the mobile space by buying Android.

erik
0 replies
9h37m

Youtube was also an acquisition.

Haven't heard much about successful Google acquisitions lately though.

FullyFunctional
1 replies
13h14m

Mostly true, but there are exceptions:

Apple does really well on its rare acquisitions, but they aren't very public as they get successfully absorbed. PA Semi, Intrinsity, more I can't remember.

ATi and Xilinx have by all accounts worked out really well for AMD.

scns
0 replies
3h4m

The iPod

photonbeam
0 replies
14h15m

There are the occasional good ones, like instagram.

But I guess thats the problem - I had to provide an example

nsteel
0 replies
58m

Broadcom is a good example of successful mergers.

murderfs
0 replies
14h53m

Android and PA Semi have worked out pretty well...

moondev
0 replies
13h5m

Nvidia Mellanox

CoastalCoder
0 replies
15h21m

There hasn’t been an Intel acquisition that has worked out in some 20 years now it seems.

Maybe Habana Labs?

I can't really tell if it's working out for Intel, but I do hear them mentioned now and then.

sam_bristow
2 replies
15h21m

The Altera (FPGA) acquisition could have made sense, but they never really followed through and now it's being spun off again.

pclmulqdq
1 replies
13h49m

There were some technical issues with the follow-through that they didn't foresee. CPUs need to closely manage their power usage to be able to extract maximum computing power, and leaving a big chunk of static power on the table in case the FPGA needs it undermines that. The idea of putting an FPGA on a die was mostly killed by that.

Regarding other plans, QPI and UPI for cache coherent FPGAs were pretty infeasible to do at the sluggish pace that they need in the logic fabric. CXL doesn't need a close connection between the two chips (or the companies), and just uses the PCIe lanes.

FPGA programming has always been very hard, too, so the dream of them everywhere is just not happening.

FullyFunctional
0 replies
13h18m

That was not the point of the Altera acquisition. The point was to fill Intel's fabs, but the fab fiasco left Altera/Intel-FPGA without a product to sell (Stratix 10 -- 10nm -- got years of delay because of that). Meanwhile Xilinx was racing ahead on TSMC's ever-shrinking process.

krautt
0 replies
13h50m

I was a process engineer there in the early 2000s, and they did crazy random shit then too! They had an 'internet TV' PC that was designed to play MP4s in 2001.

jomohke
8 replies
16h47m

There's definitely a lot that can be critiqued about that period.

Famously they divested their ARM-based mobile processor division just before smartphones took off.

The new CEO, as the article mentions, seems to have a lot more of a clue. We just hope he hasn't arrived too late.

dralley
3 replies
16h25m

Famously they divested their ARM-based mobile processor division just before smartphones took off.

Wasn't that AMD (perhaps also Intel)? Qualcomm Adreno GPUs are ATi Radeon IP, hence the anagram.

tverbeure
1 replies
16h22m

Intel divested their StrongARM/XScale product line.

KerrAvon
0 replies
15h38m

Yes, just before the iPhone came out and with Apple newly fully engaged as a major Intel CPU customer (for x86 Macs) for the first time ever.

Kind of like Decca Records turning down The Beatles.

tirant
0 replies
12h32m

Intel sold their XScale family of processors to Marvell in 2006.

I remember it very well, as back then I was working at university porting Linux to an Intel XScale development platform we had recently gotten.

After I completed the effort, Android was released as a public beta and I dared to port it to that development board too as a side project. I thought back then that Intel was making a big mistake by missing that opportunity. But Intel were firm believers in the x86 architecture, especially in their Atom cores.

Those little Intel PXA chips were actually very capable. Back then I had my own Sharp Zaurus PDA running a full Linux system on an Intel ARM chip and I loved it. Great performance and great battery life.

TylerE
2 replies
15h14m

It's really sort of been downhill since they decided to play the clock speed numbers game over all else with the Pentium 4. Even the Core i7/i9 lines that were good for a long time have gone absolutely crazy lately with heat and power consumption.

toast0
1 replies
14h19m

Intel's market reality is that (perceived) speed sells chips.

It's embarrassing when they go to market and there's no way to say it's faster than the other guy. Currently, they need to pump 400W through the chip to get the clock high enough.

But perf at 200W or even 100W isn't that far below perf at 400W. If you limit power to something like 50W, the compute efficiency is good.

Contrast that to Apple, they don't have to compete in the same way, and they don't let their chips run hot. There's no way to get the extra 1% of perf if you need it.

TylerE
0 replies
13h27m

Oh, I'm quite well aware. I traded a space heater of an i9/3090 tower for an M1 Studio.

The difference in performance for 95% of what I do is zero. I even run some (non-AAA) Windows games via Crossover, and that's driving a 1440p 165Hz display. All while it sits there consuming no more than about 35W (well, plus a bit for all my USB SSDs, etc) and I've never seen the thermals much past 60C, even running natively accelerated LLMs or highly multithreaded chess engines and the like. It usually sits at about 40C at idle.

It's exactly what almost-40-year-old me wants out of a computer. It's quiet, cool, and reliable - but at the same time I'm very picky about input devices, so a bring-your-own-peripherals desktop machine with a ton of USB ports is non-negotiable.

huppeldepup
0 replies
12h35m

  a lot that can be critiqued about that period.
Like the time they appointed Will.I.Am?

https://youtu.be/gnZ9cYXczQU

mise_en_place
7 replies
15h19m

Intel pivoting to GPUs was a smart move but they just lacked the tribal knowledge needed to successfully ship a competitive GPU offering. We got Arc instead.

solarkraft
4 replies
14h46m

Isn't Arc actually pretty okay?

inversetelecine
1 replies
14h25m

It's getting better and drivers are improving all the time. I personally liked the Arc for the hardware AV1 encoding. Quicksync (I use qsvencc) is actually pretty decent for a hardware encoder. It won't ever beat software encoding, but the speed is hard to ignore. I don't have any experience using it for streaming, but it seems pretty popular there too. Nvidia has nvenc, and reviews say it's good as well but I've never used it.

FullyFunctional
0 replies
13h23m

This. If you follow GamersNexus, there are stories every month about just how much the Arc drivers have improved. If this rate continues and the next-gen hardware (Battlemage) actually ships, then Intel might be a serious contender for the midrange. I really hope Intel sticks with it this time as we all know it takes monumental effort to enter the discrete GPU market.

mastax
0 replies
13h53m

They mostly work now and they are decent options at the low-end (what used to be the mid-range: $200) where there is shockingly little competition nowadays.

However, they underperform greatly compared to competitors' cards with similar die areas and memory bus widths. For example the Arc A770 is 406mm^2 on TSMC N6 and a 256-bit bus and performs similarly to the RX 6650XT which is 237mm^2 on TSMC N7 with a 128-bit bus. They're probably losing a lot of money on these cards.

dralley
0 replies
14h24m

When it works, perhaps

mjevans
1 replies
14h18m

Arc seems more like where the GPU market will 'be' in another 2-6 years; Arc's second or third iteration might be more competitive. It's Vulkan/future focused, and fast enough that translation layers for old <= DX11 / OpenGL titles are worth it.

If you're hoping for an Nvidia competitor, that market may bring in more per unit, but there's already a 1-ton gorilla there and AMD can't seem to compete either. Rather, Arc makes sense as an in-house GPU unit to pair with existing silicon (CPUs), and as low/mid range dGPUs to compete where Nvidia has left that market and where AMD has a lot of lunch to undercut.

moondev
0 replies
13h18m

One unfortunate note on Nvidia data center GPUs is to fully utilize features such as vgpu and multi-instance GPU, there is an ongoing licensing fee for the drivers.

I applaud Intel for providing fully capable drivers at no additional cost. Combined with better availability for purchase they are competing in the VDI space.

https://www.intel.com/content/www/us/en/products/docs/discre...

dboreham
5 replies
17h13m

MBAs eating the world one acquisition at a time.

selimthegrim
4 replies
16h55m

He was a process engineer

NtochkaNzvanova
2 replies
16h47m

The CEO at the time of the McAfee acquisition was Paul Otellini -- an MBA: https://en.wikipedia.org/wiki/Paul_Otellini.

FullyFunctional
1 replies
13h48m

Intel has an amazing track record with acquisitions -- almost none of them work out. Even for the tiny fraction of actually good companies they acquired, the Intel culture is one of really toxic politics and it's very hard for acquired people to succeed.

I wish Pat well and I think he might be the only one who could save the company, if it's not already too late.

Source: worked with many ex-Intel people.

POSTSCRIPT: I have seen from the inside (not at Intel) how a politically motivated acquisition failed utterly spectacularly due to that same kind of internal power struggle. I think there are some deeply flawed incentives in corporate America.

tcmart14
0 replies
12h26m

Not gonna lie, I had a professor who retired from Intel as a director or something like that. Worst professor I had the entire time. We couldn't have class for a month because he 'hurt his back,' then half of us saw him playing a round of golf two days later.

kibwen
0 replies
16h51m

It's never too late to go back to school.

bee_rider
4 replies
13h28m

Intel should have bought Nvidia.

And acqui-hired Jensen as CEO.

moffkalast
1 replies
5h46m

Would that make them Ntel or Invidia?

bee_rider
0 replies
40m

Invidia is sort of how nvidia gets pronounced anyway, so I’d go with that one. Ntel sounds like they make telecommunications equipment in 1993.

bonton89
1 replies
4h3m

I've heard the reason AMD bought ATI instead of Nvidia is that Jensen wanted to be CEO of the combined company for it to go through. I actually think AMD would be better off if they had taken that deal.

Prior to the ATI acquisition, Nvidia had actually been the motherboard chipset manufacturer of choice for AMD CPUs for a number of years.

edgyquant
0 replies
2h0m

AMD is doing fantastic and its CEO is great. It would be a big letdown if they had bought Nvidia, as we'd have a single well-run company instead of two.

captn3m0
3 replies
17h37m

It would have had the same fate as the NVIDIA ARM deal.

tw04
1 replies
17h20m

Unlikely with AMD owning ATI. The reason NVidia was blocked from buying ARM was because of the many, many third parties that were building chips off ARM IP. Nvidia would have become their direct competitor overnight with little indication they would treat third parties fairly. Regulators were rightly concerned it would kill off third party chips. Not to mention the collective lobbying might of all the vendors building ARM chips.

There were and are exactly zero third parties licensing nvidia IP to build competing GPU products.

paulmd
0 replies
14h4m

one example would be the semicustom deal with mediatek

https://corp.mediatek.com/news-events/press-releases/mediate...

like it’s of course dependent on what “build competing products” means, but assuming you mean semicustom (like AMD sells to Sony or Samsung) then nvidia isn’t as intractably opposed as you’re implying.

regulators can be dumb fanboys/lack vision too, and nvidia very obviously was not spending $40b just to turn around and burn down the ecosystem. Being kingmaker on a valuable IP is far more valuable than selling some more Tegras for a couple years. People get silly when nvidia is involved and make silly assertions, and most of the stories have become overwrought and passed into mythology. Bumpgate is… something that happened to AMD on that generation of GPUs too, for instance. People baked their 7850s to reflow the solder back then too - did AMD write you a check for their defective GPU?

https://m.youtube.com/watch?v=iLWkNPTyg2k

adventured
0 replies
17h29m

Maybe, however, the GPU market was not considered so incredibly valuable at the time (particularly by eg politicians in the US, Europe or China). Today it's a critical national security matter, and Nvidia is sitting on the most lucrative semiconductor business in history. Back then it was overwhelmingly a segment for gaming.

anyfoo
3 replies
17h28m

The future of chips is security on the chip. I was like no, no it's not!

Putting aside whether the statement is considered true or not, buying McAfee under the guise of the kind of security meant when talking about silicon is... weird, to say the least.

lmm
2 replies
17h21m

McAfee makes their money from people being required to run it for certification. Imagine government/healthcare/banking/etc. customers being obliged to use only Intel chips because they'll fail their audits (which mandate on-chip antivirus) otherwise. I hate it, but I can see the business sense in trying.

wredue
0 replies
14h14m

I’m not sure McAfee is the go-to for this requirement any longer. Maybe. But definitely across the 4 enterprises I’ve worked at, they all migrated away from McAfee.

mikepurvis
0 replies
14h41m

Still, $7.6B is ludicrous money for a "try", especially when everyone in the room should have known how shaky the fundamentals were for such a pitch.

bombcar
2 replies
17h34m

He was right but for the wrong reasons.

Had Intel figured out hyperthreading security and avoided all the various exploits that later showed up …

quatrefoil
1 replies
17h8m

Then they would have worse-performing chips and the market wouldn't care about the security benefits. Cloud providers may grumble, but they aren't the most important market anyway.

nottorp
0 replies
8h34m

Has there ever been an exploit in the wild for rowhammer/whatever the other vulnerabilities were?

thijson
0 replies
3h20m

McAfee was Renee James's idea; she was two in a box (Intel speak for sharing a management spot) with Brian Krzanich.

tgtweak
53 replies
14h0m

I'm kind of bullish on Intel right now. They've moved up so many process nodes so quickly and have made some earnest headway in being an actual fab. Let's ignore the elephant in the room, which is Taiwan and its sovereignty, and only focus on the core R&D.

Intel flopped so hard on process nodes for 4 years up until Gelsinger took the reins... it was honestly unprecedented levels of R&D failure. What happened over the 8 years prior was hedge funds and banks had saddled up on Intel stock which was paying healthy dividends due to cost cutting and "coasting". This sudden shock of "we're going to invest everything in R&D and catch back up" was news that a lot of Intel shareholders didn't want to hear. They dumped the stock and the price adjusted in kind.

Intel's 18A is roughly 6 months ahead of schedule, set to begin manufacturing in the latter half of 2024. Most accounts put this ahead of TSMC's equivalent N2 node...

Fab investments have a 3 year lag on delivering value. We're only starting to see the effect of putting serious capital and focus on this, as of this year. I also think we'll see more companies getting smart about having all of their fabrication eggs in one of two baskets (Samsung or TSMC), both within a 500 mile radius circle in the South China Sea.

Intel has had 4 years of technical debt on its fabrication side, negative stock pressure from the vacuum created by AMD and Nvidia, and is still managing to be profitable.

I think the market (and analysts like this) are all throwing the towel in on the one company that has quite a lot to gain at this point after losing a disproportionate amount of share value and market.

I just hope they keep Pat at the helm for another 2 years to fully deliver on his strategy or Intel will continue where it was headed 4 years ago.

RCitronsBroker
14 replies
11h4m

I honestly don’t see what you are seeing in terms of Taiwan’s future sovereignty. Of course, China would like to do something about Taiwan, especially now with their economy kind of in the dumps and a collapsing real estate bubble. But when you look at the facts of it all, there’s absolutely ZERO chance China can muster up what it takes to hold their own in such a conflict. Their military isn’t up to snuff and they are one broken dam away from a huge mass casualty event.

rapsey
8 replies
9h55m

there’s absolutely ZERO chance China can muster up what it takes to hold their own in such a conflict.

However, China is now a full-fledged dictatorship. I'm not sure you can count on them being a rational actor on the world stage.

They can do a lot of damage, but would also get absolutely devastated in return. They are food- and energy-insecure and entirely dependent on exports, after all.

RCitronsBroker
7 replies
9h26m

True, but the elite class that’s currently profiting from and in control of said country would devastate themselves if they dared. Skepticism about the West’s self-inflicted dependency on China is at an all-time high. Terms like "on-" or "friend-shoring" are already coming up now.

You’re not wrong; maybe all the scaremongering in the West about China overtaking us got them delusional enough, in a Japanese-nationalist type of way, to behave this irrationally, but I highly doubt it. But that can also change pretty quickly if they feel like their back is against the wall, you’re not wrong in that regard.

rapsey
6 replies
9h2m

How much is that elite independent of Xi? A relatively independent elite is probably a more stable system. But an elite completely subservient to the fearless leader is much more dangerous.

RCitronsBroker
5 replies
5h37m

I don’t think Xi is as independent as you believe, but that’s a matter of personal opinion.

I just don’t think it’s very likely for just about any leader to put themselves into the position you are describing. This is a recurring narrative in Western media, and I’m not here to defend dictators, but I feel like reality is less black and white than that.

Many of the "crazed leaders" we are told are acting irrationally often are not. It’s just a very, very different perspective, often a bad one, but regardless.

Let me try to explain what I mean: in the run-up to the Gulf War, Saddam Hussein was painted as this sort of crazed leader, irrationally deciding to invade Kuwait. But that’s not the entire truth. Hussein may have been an evil man, but with the way the borders of Iraq were re-drawn, Iraq was completely cut off from any sources of fresh water. As expected, their neighbors cut off their already wonky water supplies and famine followed. One can still think it’s not justified to invade Kuwait over this, but there’s a clear gain to be had from this "irrational" act. Again, not a statement of personal opinion, just that there IS something to be had. I’m not trying to say that I am certain Hussein had the prosperity of his people at heart, but I do think that it isn’t entirely irrational to acknowledge that every country in human history is 3 missed meals away from revolution. That’s not good, even if you are their benevolent god and dictator for lifetime(tm).

Russia "irrationally" invading Ukraine may seem that way to us, but let’s see. Russia’s economy is just about entirely dependent on their petrochem industry. Without it, they are broke. The reason why they can still compete in this market is their asset of Soviet infrastructure and industry. A good majority of USSR-era pipelines run through Ukraine. I’m not saying it’s okay for them to invade, but I can see what they seek to gain and why exactly they fear NATO expansion all that much.

I personally don’t see a similar gain to be had from China invading Taiwan, at least right now. They have lots to lose and little to gain. Taiwan’s semiconductor industry is useless without Western IP, lithography equipment and customers. There are even emergency plans to destroy Taiwan’s fabs in case of invasion. And that’s beside the damage done to mainland China itself.

But as I stated, this may very well change if they get more desperate. Hussein fully knew the consequences of screwing with the West’s oil supply, but the desperation was too acute.

I just don’t buy irrationality; there’s always something to be had or something to lose. It may be entirely different from our view, but there’s gotta be something.

dgroshev
1 replies
2h25m

Problem is, "rational" is not objective. "Rational" is more like "consistent with one's goals (subjective) under one's perception of reality (subjective)".

When you're saying "Putin invaded Ukraine irrationally" you're implicitly projecting your own value system and worldview onto him.

Let's take goals. What do you think Putin's goals are? I don't think it's too fanciful to imagine that the welfare of ordinary Russians is less important to him than going down in history as someone who reunited the lost Russian Empire, or even than just staying in power and adored. It's just a fact that the occupation of Crimea was extremely popular and raised his ratings, so why not try the same thing again?

What about the worldview? It is well established that Putin didn't think much of Ukraine's ability to defend itself, having been fed overly positive reports by his servile underlings. Hell, even the Pentagon thought Ukraine would fold, shipping weapons that would work well for guerrilla warfare (Javelins) and dragging their feet on stuff regular armies need (howitzers and shells). Russians did think it'd be a walk in the park; they even had a truck of crowd control gear in that column attacking Kyiv, thinking they'd need police shields.

So when you put yourself into Putin's shoes, attacking Ukraine Just Makes Sense: a cheap & easy way to boost ratings and raise his profile in history books, what's not to like? It is completely rational — for his goals and his perceived reality.

Sadly, people often fall into the trap of overextending their own worldview/goals onto others, finding a mismatch, and trying to explain that mismatch away with semi-conspiratorial thinking (Nato expansion! Pipelines! Russian speakers!) instead of reevaluating the premise.

andrewflnr
0 replies
52m

I don't accept the subjectivity w.r.t. "perceived reality". Russia's military unreadiness was one of the big reasons I consider the invasion irrational, and I put the blame squarely on Putin because he could have gotten accurate reports if he wasn't such a bad leader. You are responsible for your perceived reality, and part of rationality is acting in a way that it matches real reality.

(But yeah, clearly his actual goal was to increase his personal prestige. Is that not common knowledge yet?)

datadrivenangel
1 replies
3h54m

Also Saddam was told by the US ambassador that the US has no opinion on Arab-Arab conflicts...

RCitronsBroker
0 replies
3h12m

yup. There are more examples than I can muster up to write, each more gut-wrenching than the last. The US calling anyone irrational is pretty rich anyways. After all, invoking "brainwashing" in war after war, instead of accepting the existence of differing beliefs, isn’t the pinnacle of rationality either. Neither is kidnapping your own people in an attempt to build your own brand of LSD-based brainwashing. Neither is infiltrating civil rights movements, going so far as attempting to bully MLK into suicide. Neither is spending your people’s tax money on 638 foiled assassinations of Castro. Neither is committing false-flag genocides in Vietnam, or PSYOPing civilians into believing they are haunted by the souls of their relatives.

None of those claims are anything but proven, historical facts, by the way.

Wanna lose your appetite? The leadership in charge of the described operations in Vietnam gleefully talked about their management genius. They implemented kill quotas.

This list is also anything but exhaustive.

sd1010
0 replies
30m

Russia doesn't fear NATO - see their reaction to Finland joining it. Also, the pipelines were not the reason for the invasion. They were the opposite - a deterrent. As soon as Russia built pipelines that circumvented Ukraine, they decided to invade, thinking that the gas transmission wouldn't be in danger anymore.

bigbillheck
2 replies
5h14m

they are one broken dam away from a huge mass casualty event.

Are there any dam-having countries for which this isn't the case?

RCitronsBroker
1 replies
2h49m

None, or VERY few, are even remotely close to the impact a potential breach of the Three Gorges Dam would have. [1] Seriously, it’s worth reading up on; it’s genuinely hard to overstate.

[1]: https://www.ispsw.com/wp-content/uploads/2020/09/718_Lin.pdf

"In this case, the Three Gorges Dam may become a military target. But if this happens, it would be devastating to China as 400 million people live downstream, as well as the majority of the PLA's reserve forces that are located midstream and downstream of the Yangtze River."

maxglute
0 replies
2h19m

It's grossly overstated because TW doesn't have the type or numbers of ordnance to structurally damage a gravity dam the size of Three Gorges. And realistically they won't, because the amount of conventional munitions needed is staggering, more than TW can muster in a retaliatory strike, unless it's a coordinated preemptive strike, which TW won't do since it's suicide by war crime.

The entire Three Gorges meme originated from Falun Gong / Epoch Times propaganda, including in the linked article (an interview with Simone Gao) and all the dumb Google Maps photos of a deformed dam due to lens distortion. PRC planners there aren't concerned about a dam breach, but about general infra terrorism.

The one piece of infra PRC planners are concerned about is the coastal nuclear plants under construction, which are a much better ordnance trade for TW anyway, and just as much of a war crime.

riffraff
1 replies
10h44m

I think "economy in the dumps" is a bit too harsh.

China is facing a deflating real estate bubble, but they still managed to grow the last year (official sources are disputed but independent estimates are still positive).

RCitronsBroker
0 replies
10h17m

It’s where the growth is coming from. China’s growth (or even just sustenance) isn’t coming from a healthy job market and consumer spending. It’s mostly fueled by SOEs and prefectures going into debt to keep on investing; many local administrations have found out they can get around debt limits by forming state-owned special purpose vehicles that aren’t bound by them. That’s not good at all. There’s a reason we are seeing tons of novel Chinese car brands being pushed here in Europe: they massively overproduced and cannot sell them in their own market anymore. It’s really not looking great atm.

edit: one also should keep in mind that the Chinese real estate market is entirely different in its importance to the population’s wealth. "Buying" real estate is pretty much the only sanctioned market in which to invest your earnings. They still pretend to be communist after all.

throwaway4good
9 replies
9h2m

What I worry about with Intel is that they have gotten too much into politics; relying on CHIPS act and other subsidies, encouraging sanctions on Chinese competitors while relying on full access to the Chinese market for sales.

It is not a good long term strategy: The winds of politics may change, politicians may set more terms (labour and environment), foreign market access may become politicized too (US politicians will have to sell chips like they sell airplanes on foreign trips).

So Intel will end up like the old US car makers or Boeing - no longer driven by technological innovation but instead by its relationship to Washington.

davidy123
5 replies
4h0m

"This investment, at a time when … wages war against utter wickedness, a war in which good must defeat evil, is an investment in the right and righteous values that spell progress for humanity"

That is not a partner for creating logical systems. Very clear their current decisions are political.

throwaway4good
3 replies
1h55m

That bananas quote is from an Israeli minister.

Imagine what it would do if Intel became strongly associated with one side in the Israel-Palestine conflict. It could really hurt their business.

Usually business leaders are smart enough to stay out of politics.

davidy123
2 replies
1h24m

Apparently they have become strongly associated, since that quote is part of the release for their new plant. It is sickening to me that this kind of hard-right religious zealotry is part of tech companies' decisions. I am avoiding Intel as much as possible now, and I hope others will consider this too.

UberFly
1 replies
54m

That quote didn't come out of Intel's mouth.

davidy123
0 replies
38m

It is directly associated with the deal they are making. Would you let that quote be used with a deal your company is making? Intel knows full well how this is being spun.

sentinalien
0 replies
42m

Basically all the big semiconductor companies do R&D in Israel

hardware2win
1 replies
8h2m

Have you read chip war?

It will challenge your concerns

throwaway4good
0 replies
7h20m

Yeah. Too much of the cold war angle. I think he overstates the role of government/military and underestimates how much the consumer market has driven the process innovations that have made computing cheap and ubiquitous.

King1st
0 replies
1h53m

Intel has used political incentives often throughout its history to great effect. I think it's a much smaller issue than you think. It's been part of their standard game plan for over 30 years. The issue with Boeing is becoming a contract company that contracts out all of its work, which is self-defeating and leads to brain drain. EX: the door lacking bolts, because Boeing doesn't even build its own fuselages anymore and has let its standards fall, wholly depending on contractors with little oversight.

adrian_b
7 replies
7h50m

There is a good chance for Intel to recover, but that remains to be proven.

From their long pipeline of future CMOS manufacturing processes with which Intel hopes to close the performance gap between them and TSMC, for now there exists a single commercial product: Meteor Lake, which consists mostly of chips made by TSMC, with one single Intel 4 die, the CPU tile.

The Meteor Lake CPU seems to have finally reached the energy efficiency of the TSMC 5-nm process of almost 4 years ago, but it also has obvious difficulties in reaching high clock frequencies, exactly like Ice Lake in the past, so once more Intel has been forced to accompany Meteor Lake with Raptor Lake Refresh made in the old technology, to cover the high-performance segment.

Nevertheless, Meteor Lake demonstrates reaching the first step with Intel 4.

If they succeed in launching their Intel 3-based server products on time and with good performance later this year, that will be a much stronger demonstration of their real progress than this Meteor Lake preview, which has also retained their old microarchitecture for the big cores, so it shows nothing new there.

Only by the end of 2024, after seeing the Arrow Lake microarchitecture and the Intel 20A manufacturing process, will it become known whether Intel has really become competitive again.

ajross
4 replies
4h48m

The Meteor Lake CPU [...] has obvious difficulties in reaching high clock frequencies,

Not sure where that's coming from? The released parts are mobile chips, and the fastest is a 45W TDP unit that boosts at 5.1GHz. AMD's fastest part in that power range (8945HS) reaches 5.2GHz. Apple seems to do just fine at 4GHz with the M3.

I'm guessing you're looking at some numbers for socketed chips with liquid cooling?

adrian_b
3 replies
4h21m

The 5.1 GHz Intel Core Ultra 9 processor 185H is the replacement for the 5.4 GHz Intel Core i9-13900H Processor of previous year. Both are 45-W CPUs with big integrated GPUs and almost identical features in the SoC.

No liquid cooling needed for either of them, just standard 14" or 15" laptops without special cooling, or NUC-like small cases, because they do not need discrete GPUs.

Both CPUs have the same microarchitecture of the big cores.

If Intel had been able to match the clock frequencies of their previous generation, they would have done so, because it is embarrassing that Meteor Lake wins only the multi-threaded benchmarks, due to the improved energy efficiency, but loses in the single-threaded benchmarks, due to lower turbo clock frequency, when compared to last year's products.

Moreover, Intel could easily have launched a Raptor Lake Refresh variant of the i9-13900H, with a clock frequency increased to 5.6 GHz. They have not done this only in order to avoid internal competition for Meteor Lake, so they have launched only HX models of Raptor Lake Refresh, which do not compete directly with Meteor Lake (because they need a discrete GPU).

During the last decade, the products made at TSMC with successive generations of their processes had a continuous increase of their clock frequencies.

On the other hand Intel had a drop in clock frequency at all switches in the manufacturing processes, at 14-nm with the first Broadwell models, then at 10-nm with Cannon Lake and Ice Lake (and even Tiger Lake could not reach clock frequencies high enough for desktops), and now with Meteor Lake in the new Intel 4 process.

With 14-nm and 10-nm (now rebranded as Intel 7), Intel succeeded in greatly increasing the maximum clock frequencies after many years of tuning and tweaking. Now, with Meteor Lake, this will not happen, because they will pass immediately to different, better manufacturing processes.

According to rumors, the desktop variant of Arrow Lake, i.e. Arrow Lake S, will be manufactured at TSMC in order to ensure high-enough clock frequencies, and not with the Intel 20A, which will be used only for the laptop products.

Intel 18A is supposed to be the process that Intel will be able to use for several years, like their previous processes. It remains to be seen how much time will pass until Intel is again able to reach 6.0 GHz, on the Intel 18A process.

ajross
2 replies
3h18m

That's getting a little convoluted. I still don't see how this substantiates that Intel 4 "has obvious difficulties in reaching high clock frequencies".

Intel is shipping competitive clock frequencies on Intel 4 vs. everyone in the industry except the most recent generation of their own RPL parts, which have the advantage of being up-bins of an evolved and mature process.

That sounds pretty normal to me? New processes launch with conservative binning and as yields improve you can start selling the outliers in volume. And... it seems like you agree, by pointing out that this happened with Intel 7 and 14nm too.

Basically: this sounds like you're trying to spin routine manufacturing practices as a technical problem. Intel bins differently than AMD (and especially Apple, who barely distinguish parts at all), and they always have.

adrian_b
1 replies
2h14m

I have also pointed out that while for Intel this repeats their previous two process launches, which is not a good sign, TSMC has never had such problems recently.

While one reason why TSMC did not have such problems is that they have made more incremental changes from one process variant to another, avoiding any big risks, the other reason is that Intel has repeatedly acted as if they had been unable to estimate from simulations the performance characteristics of their future processes and they have always been caught by surprise by inferior experimental results compared to predictions, so they always had to switch the product lines from plan A to plan B during the last decade, unlike the previous decade when all appeared to always go as planned.

A normal product replacement strategy is for the new product to match most of the characteristics of the old product that is replaced, but improve on a few of them.

Much too frequently in recent years, new Intel products have improved some characteristics only at the price of making other characteristics worse. For example, raising the clock frequency at the price of increased power consumption, increasing the number of cores but removing AVX-512, or, like in Meteor Lake, raising the all-cores-active clock frequency at the price of lowering the few-cores-active clock frequency.

While during the last decade Intel has frequently progressed, in the best case, by making two steps forward and one step backward, all competitors have marched steadily forward.

ajross
0 replies
1h37m

I have also pointed out that while for Intel this repeats their previous two process launches, which is not a good sign, TSMC has never had such problems recently.

I'll be blunt: you're interpreting a "problem" where none exists. I went back and checked: when Ivy Bridge parts launched the 22nm process (UNDENIABLY the best process in the world at that moment, and by quite a bit) the highest-clocked part from Intel was actually a 4.0 GHz Sandy Bridge SKU, and would be for a full 18 months until the 4960X matched it.

This is just the way Intel ships CPUs. They bin like crazy and ship dozens and dozens of variants. The parts at the highest end need to wait for yields to improve to the point where there's enough volume to sell. That's not a "problem", it's just a manufacturing decision.

formerly_proven
1 replies
4h59m

TSMC 5-nm process of almost 4 years ago

N5 is interesting because it's the first process fully designed around EUV and because it was pretty much exclusive to Apple for almost two years. It launched in Apple products in late 2020, then crickets until about late 2022 (Zen 4, RTX 4000, Radeon 7000). Launches of the other vendors were still on N7 or older processes in 2020 - RTX 3000 for example used some 10nm Samsung process in late 2020. All of those were DUV (including Intel 7 / 10ESF). That's the step change we are looking at.

ajross
0 replies
2h55m

Exactly. N5 is sort of an outlier, it's a process where a bunch of technology bets and manufacturing investment all came together to produce a big leap in competitive positioning. It's the same kind of thing we saw with Intel 22nm[1], where Ivy Bridge was just wiping the floor with the rest of the industry.

Improvements since have been modest, to the extent that N3 is only barely any better (c.f. the Apple M3 is... still a really great CPU, but not actually that much of an upgrade over the M2).

There's a hole for Intel to aim at now. We'll see.

[1] Also 32nm and 45nm, really. It's easy to forget now, but Intel strung together a just shocking number of dominant processes in the 00's.

jillesvangurp
5 replies
7h24m

There are a few areas where they are under pressure:

- The wintel monopoly is losing its relevance now that ARM chips are creeping into the windows laptop market and now that Apple has proven that ARM is fantastic for low power & high performance solutions. Nobody cares about x86 that much any more. It's lost its shine as the "fastest" thing available.

- The AI & GPU market is where the action is and Intel is a no-show there so far. It's not about adding AI/GPU features to cheap laptop chips but about high end workstations and dedicated solutions for large scale compute. Intel's GPUs lack credibility for this so far. Apple's laptops seem popular with AI researchers lately and the go-to high performance solutions seem to be provided by Nvidia.

- Apple has been leading the way with ARM based, high performance integrated chips powering phones, laptops, and recently AR/VR. Neither AMD nor Intel have a good answer to that so far. Though AMD at least has a foothold in the door with e.g. Xbox and the Steam Deck depending on their integrated chips and them still having credible solutions for gaming. Nvidia also has lots of credibility in this space.

- Cloud computing is increasingly shifting to cheap ARM powered hardware. Mostly the transition is pretty seamless. Cost and energy usage are the main drivers here.

hardware2win
3 replies
6h41m

They will be manufacturing those ARM CPUs.

jillesvangurp
2 replies
4h5m

Has that been announced? Or is it more a matter of Intel producing some unannounced product on an unannounced timeline with a feature set that has yet to be announced on an architecture that may or may not involve arm? Intel walking away from x86 would be a big step for them. First they don't own arm and second all their high end stuff is x86.

nsteel
0 replies
3h21m

hardware2win
0 replies
3h44m

They will be manufacturing them for their customers.

Also, the ARM ISA is just an ISA.

You seem to focus on it too much when it isn't THAT relevant.

An ISA doesn't imply perf/energy characteristics.

ajross
0 replies
4h37m

Apple has proven that ARM is fantastic for low power & high performance solutions

Apple has proven that Apple Silicon on TSMC's best process is great. There are no other ARM vendors competing well in that space yet. SOCs that need to compete with Intel and AMD on the same nodes are still stuck at the low margin end of the market.

dur-randir
3 replies
11h12m

have made some earnest headway in being an actual fab

In terms of the end product - not really. The last 3-4 gens are indistinguishable to the end user. It's a combined effect of marketing failure and really underwhelming gains - when marketing screams "breakthrough gen" but what you get is +2% ST perf for another *Lake, you can't sell it.

They might've built a foundation, and that might be a deliberate tactic to get back into the race; we'll see. But I'm not convinced for now.

paulmd
0 replies
9h24m

raptor lake is the same as coffee lake/comet lake? nah

TechnicalVault
0 replies
8h8m

Depends who your user is. From the desktop side you're probably not going to notice, because desktop CPU requirements have been stagnant for years; desktop is all about the GPU. On the server side, Sapphire Rapids and Emerald Rapids are Intel getting back in the game, and the game is power and market share.

See, there are only 2 or 3 more obvious generations of die shrinks available. Beyond those generations we'll have to innovate some other way, so whoever grabs the fab market for these nodes now gets a longer period to enjoy the fruits of their innovation.

Meanwhile server CPU TDPs are hitting the 400W+ mark and DC owners are looking dubiously at big copper busbars, so die shrinks, which tend to reduce the watts per calculation, are appealing. In the current power market, more efficient computing translates into actual savings on your power bill. There's still demand for better processors, even if we are sweating those assets for 5-7 years now.

14u2c
0 replies
9h32m

What you quoted refers to Intel's efforts to act as a fab for external customers.

rjzzleep
1 replies
8h6m

You should be bullish on Intel; they got so many TSMC and Samsung trade secrets through the CHIPS Act that it would be a miracle to mess that up.

jonplackett
0 replies
3h52m

How did that work?

mcint
1 replies
11h46m

I appreciate the deep cut. I definitely do not follow companies internally closely enough to see this coming.

(samsung or tsmc) both within a 500 mile radius circle in the south china sea.

Within a 500 mile radius of a great power competitor, perhaps. The closest points on mainland Taiwan and Korea are 700 miles apart. The fabs are about 1000 miles apart, by my loose reckoning.

hencoappel
0 replies
11h25m

A 500 mile radius circle has a diameter of 1000 miles, so you're both correct.

dchftcs
1 replies
10h52m

What happened over the 8 years prior was hedge funds and banks had saddled up on Intel stock which was paying healthy dividends due to cost cutting and "coasting"

Not clear about what the role of activist hedge funds is here but Intel's top shareholders are mutual funds like Vanguard which are part of many people's retirement investments. If an activist hedge fund got to run the show, it means that they could get these passive shareholders on their side or to abstain. It would have meant those funds along with pension funds, who should have been in a place to push back against short term thinking, didn't push back. These funds should really be run much more competently given their outsized influence, but the incentives are not there.

pas
0 replies
5h36m

There's probably no need to imagine these conspiracy-like machinations of shareholders. Intel fucked up bad, and process development is a certified crazytrain to la la land.

(dropping molten tin 1000 times a second and then shooting it with a laser just to get a lamp that can bless you with the hard light you need for your fancy fine few nanometers thin shadows? sure, why not, but don't forget to shoot the plasma ball with a weaker pulse to nudge it into the shape of a lens, cheerio.

and you know that all other parts are similarly scifi sounding.

and their middle management got greedy and they were bleeding talent for a decade.)

lr1970
0 replies
3h12m

Intel's biggest problem is that a lot of good people left over the previous years of shitty management. Pouring money into R&D certainly helps, but with the wrong people in key positions the efficiency of the investments will be low.

epolanski
0 replies
8h10m

I think the market (and analysts like this) are all throwing the towel

1) Intel is up 100% from ten years ago when it was at $23. All that despite revenue being flat/negative, inflation and costs rising, and margins collapsing.

2) Intel is up 60% in the last 12 months alone.

Doesn't look to me like they're throwing in the towel at all.

SchumerGrift
0 replies
12h23m

They've moved up so many process nodes so quickly and have made some earnest headway in being an actual fab.

I'd buy this if they'd actually built a fab, but right now this seems too-little, too-late for a producer's economy.

The rest frankly doesn't matter much. Intel processors are only notable in small sections of the market.

And frankly—as counter-intuitive as this may seem to such an investor-bullish forum—the death knell was the government chip subsidy. I simply can't imagine American government and private enterprise collaborating to produce anything useful in 2024, especially when the federal government has shown such a deep disinterest in holding the private economy accountable to any kind of commitment. Why would Intel bother?

PaulHoule
0 replies
2h39m

This blog

https://semiaccurate.com/

has told the story for more than a decade that Intel has been getting high on its own supply and that the media has been uncritical of the stories it tells.

In particular I think when it comes to the data center they’ve forgotten their roots. They took over the data center in the 1990s because they were producing desktop PCs in such numbers they could afford to get way ahead of the likes of Sun Microsystems, HP, and SGI. Itanium failed out of ignorance and hubris but if they were true evil geniuses they couldn’t have made a better master plan to wipe out most of the competition for the x86 architecture.

Today they take the desktop for granted and make the false claim that their data center business is more significant (not what the financial numbers show). It's highly self-destructive because when they pander to Amazon, Amazon takes the money they save and spends it on developing Graviton. There is some prestige in making big machines for the national labs but it is an intellectual black hole because the last thing they want to do is educate anyone else on how to simulate hydrogen bombs in VR.

So we get the puzzle that most of the performance boost customers could be getting comes from SIMD instructions and other "accelerators", but Intel doesn't make a real effort to get this technology working for anyone other than Facebook and the national labs and, in particular, they drag their feet in getting it available on enough chips that it is worth it for mainstream developers to use this technology.

A while back, IBM had this thing where they might ship you a mainframe with 50 cores and license you to use 30, and if you had a load surge you could call them up and they could turn on another 10 cores at a high price.

I was fooled when I heard this the first time and thought it was smart business but after years of thinking about how to deliver value to customers I realized it’s nothing more than “vice signaling”. It makes them look rapacious and avaricious but really somebody is paying for those 20 cores and if it is not the customer it is the shareholders. It’s not impossible that IBM and/or the customer winds up ahead in the situation but the fact is they paid to make those 20 cores and if those cores are sitting there doing nothing they’re making no value for anyone. If everything was tuned up perfectly they might make a profit by locking them down, but it’s not a given at all that it is going to work out that way.

Similarly, Intel has been hell-bent on fusing away features on their chips, so often you get a desktop part that has a huge die area allocated to AVX features that you're not allowed to use. Either the customer or the shareholders are paying to fabricate a lot of transistors the customer doesn't get to use. It's madness, but except for Charlie Demerjian the whole computer press pretends it is normal.

Apple bailed out on Intel because Intel failed to stick to its roadmap to improve their chips (they’re number one why try harder?) and they are lucky to have customers that accept that a new version of MacOS can drop older chips which means MacOS benefits from features that were introduced more than ten years ago. Maybe Intel and Microsoft are locked in a deadly embrace but their saving grace is that every ARM vendor other than Apple has failed to move the needle on ARM performance since 2017, which itself has to be an interesting story that I haven’t seen told.

this_user
31 replies
17h0m

Intel has truly been on a remarkable spree of extremely poor strategic decisions for the last 20 years or so. Missed the boat on mobile, missed the boat on GPUs and AI, focused too much on desktop, and now AMD and ARM-based chips are eating their lunch in the data centre area.

ternaryoperator
8 replies
16h17m

You're missing the big one: they missed the boat on 64-bits. It was only because they had a licensing agreement in place with AMD that they were able to wholesale adopt AMD's extensions to deliver 64-bit x86 processors.

FullyFunctional
6 replies
13h40m

That's not at all what happened. Intel's 64-bit story was EPIC/IA-64/Itanium and it was an attempt to gain monopoly and keep x86 for the low-end. AMD64 and Itanic derailed that idea so completely that Intel was forced by Microsoft to adopt the AMD64 ISA. Microsoft refused to port to yet-another incompatible ISA.

Had Itanium been a success then Intel would have crushed the competition (however it did succeed in killing Alpha, SPARC, and workstation MIPS).

mgerdts
3 replies
12h58m

I don’t think it was Itanium that killed SPARC. On workstations it was the improved reliability of Windows and, to some extent, Linux. Sun tried to combat this with lower cost systems like the Ultra 5, Ultra 10, and Blade 100. Sun fanatics dismissed these systems because they were too PC-like. PC fanatics saw them as overpriced and unfamiliar. With academic pricing, a $3500 Ultra 10 with 512 MB of RAM and an awful IDE drive ran circles around a $10000 HP C180 with 128 MB of RAM and an OK SCSI drive, because the Sun never had to hit swap. I think Dell and HP x86 PC workstations with similar specs as the Ultra 10 were a bit cheaper.

On servers, 32 bit x86 was doing wonders for small workloads. AMD64 quickly chipped away at the places where 1-4 processor SPARC would have previously been used.

FullyFunctional
2 replies
11h18m

Fair point, Itanium's impact on SPARC might be less than I stated, but Alpha is very clearly documented.

silvestrov
1 replies
9h3m

I think that Itanium had zero impact on anything (besides draining Intel of money) due to its high cost and low performance.

It could not run x86 apps faster than x86 CPUs, so it didn't compete in the MS Windows world. Itanium was a headache for compiler writers as it was very difficult to optimize for, so it was difficult to get good performance out of Itanium and difficult to emulate x86.

Itanium was introduced after the dot-com crash, so the market was flooded with cheap, slightly used SPARC systems, putting even more pressure on price.

This is unlike when Apple introduced Macs with PowerPC CPUs: they had much higher performance than the 68040 CPU they replaced. PowerPC was price-competitive and easy to write optimizing compilers for.

panick21_
0 replies
7h53m

Itanium itself did nothing. But the Itanium announcement basically killed MIPS, Alpha and PA-RISC. Why invest money into MIPS and Alpha when Itanium is going to come out and destroy everything with its superior performance?

So ironically, announcing Itanium was genius, but then they should have just canceled it.

xenadu02
1 replies
5h33m

Microsoft did port to Itanium. Customers just didn't buy the chips. They were expensive, the supposed speed gains from "smarter compilers" never materialized, and their support for x86 emulation was dog slow (both hardware and later software).

No one wanted Itanium. It was another political project designed to take the wind out of HP and Sun's sails, with the bonus that it would cut off access to AMD.

Meanwhile AMD released AMD64 (aka x86-64) and customers started buying it in droves. Eventually Intel was forced to admit defeat and adopt AMD64. That was possible because of the long-standing cross-licensing agreement between the two companies that gave AMD rights to x86 way back when nearly all CPUs had to have "second source" vendors. FWIW Intel felt butt-hurt at the time, thinking the chips AMD had to offer (like I/O controllers) weren't nearly as valuable as Intel's CPUs. But a few decades later the agreement ended up doing well for Intel. At one point Intel made some noise about suing AMD for something or another (AVX?) but someone in their legal department quickly got rid of whoever proposed nonsense like that because all current Intel 64-bit CPUs rely on the AMD license.

FullyFunctional
0 replies
3h20m

Maybe I wasn't clear; I meant that after Itanium failed, Microsoft refused to support yet another 64-bit extension of x86, as they already had AMD64/x64 (and IA-64, obviously).

oldgradstudent
0 replies
14h24m

They didn't simply miss the boat on 64 bit.

That was an intentional act to force customers into Itanium, which was 64 bit from the outset.

whalesalad
6 replies
16h9m

They still dominate PCs and the server market.

danielmarkbruce
2 replies
14h20m

The trend ain't their friend in those markets though. Many folks running server workloads on ARM and their customer base is drastically more concentrated and powerful than it once was. Apple has shown the way forward on PC chips.

They are a dead company walking.

whalesalad
1 replies
3h46m

Very hyperbolic take. I agree there is serious competition elsewhere, but "dead company walking" is far from true.

danielmarkbruce
0 replies
3h25m

Time will tell. It's not meant to be hyperbolic - I'm short them, as are lots of others, and I expect it will be a disaster going forward. There are obviously people on the other side of that trade with different expectations, so we will see.

djha-skin
1 replies
15h58m

I run all my workloads in AWS on ARM chips. It's half the cost for just as good an experience on my side.

paulmd
0 replies
11h56m

Cloud providers are very careful to make sure of that - they have deliberately eschewed performance increases that are possible (and have occurred in the consumer market) in favor of keeping the "1 vCPU = 1 Sandy Bridge core" equivalence. The latest trend is those "dense cores" - what a shocking coincidence that it's once again Sandy Bridge performance, and you just get more of them.

They don't want to be selling faster x86 CPUs in the cloud; they want you to buy more vCPU units instead, and they want those units to be ARM. And that's how they've structured their offerings. It's not the limit of what's possible, just what is currently the most profitable approach for Amazon and Google.

makeitdouble
0 replies
10h55m

Looking at Statista [0], I see Intel at 62% and AMD at 35% on desktop for 23Q2. That's a significant gap, and having more than half of the market is nothing to sneeze at, but I think they've moved from being dominant to just being a major player.

IF (big IF) the trend continues, we might see Intel and AMD get pretty close, and a lot more competition in the market again (I hope).

On the server side I don't have the numbers, but that's probably much harder turf for Intel to protect going forward, if they're even still ahead?

[0] https://www.statista.com/statistics/735904/worldwide-x86-int...

Mistletoe
6 replies
16h41m

Is it just the fate of large successful companies? The parallels with Boeing always come to mind. We’ve seen this play out so many times through history, it’s why investing in the top companies of your era is a terrible idea.

https://money.cnn.com/magazines/fortune/fortune500_archive/f...

I wonder how many of our companies will survive at the top for even 20 more years?

https://companiesmarketcap.com/

Berkshire is the only one I feel sure about because they hold lots of different companies.

tru3_power
3 replies
16h23m

Market forces are hard to fight sometimes. Look at Meta - they were beaten up badly because of their metaverse bet.

HDThoreaun
1 replies
13h24m

Meta was beaten down because investors were worried Mark had gone rogue. Now that he's laid a bunch of people off to show that the investors are in control, they're cool with the metaverse.

lotsofpulp
0 replies
7h8m

Or maybe it has something to do with their amazing net income trend in recent quarters:

https://www.macrotrends.net/stocks/charts/META/meta-platform...

scarface_74
0 replies
15h38m

Am I missing something? Meta looks like it's at an all-time high.

zeusk
1 replies
8h34m

Apple makes up 40% of Berkshire's investment portfolio.

lotsofpulp
0 replies
7h2m

https://www.cnbc.com/berkshire-hathaway-portfolio/

As of Nov 2023, BRK holds ~915.5M AAPL shares, worth about $172B at market. BRK's own market cap is $840B.

Per the CNBC link, the AAPL stake is 46% of the market value of BRK's publicly listed holdings, which puts those listed holdings at roughly $172B / 0.46 ≈ $374B. BRK's market cap being much higher than that means there is $840B - $374B = $466B of market cap attributable to BRK's non publicly listed assets.

I would say $172B / $840B ≈ 20% is more representative of BRK's AAPL proportion.
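
A quick sanity check of that arithmetic, as a throwaway Python sketch using only the figures quoted above (so everything is approximate):

    aapl_stake = 172              # $B, market value of BRK's AAPL shares, as quoted above
    brk_market_cap = 840          # $B, BRK's own market cap, as quoted above
    aapl_share_of_listed = 0.46   # AAPL's share of BRK's publicly listed holdings (per CNBC)

    listed_holdings = aapl_stake / aapl_share_of_listed  # ~374: $B of all listed holdings
    non_listed = brk_market_cap - listed_holdings        # ~466: $B attributable to everything else
    aapl_fraction = aapl_stake / brk_market_cap          # ~0.20: AAPL as a share of BRK itself

    print(round(listed_holdings), round(non_listed), round(aapl_fraction * 100))  # 374 466 20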

nine_k
3 replies
16h15m

Xeon is not something people normally run on desktops.

Datacenters were and still are full of monstrous Xeons, and for a good reason.

nullhole
2 replies
15h56m

Datacenters were and still are full of monstrous Xeons, and for a good reason.

To ask the foolish question, why? My guess would be power efficiency. I've only ever seen them in workstations, and in that use-case it was the number of cores that was the main advantage.

nine_k
0 replies
14h34m

It's a compact package with many cores, lots of cache memory, lots of RAM channels, and lots of PCIe lanes, all within a large but manageably hot die.

Space and power in datacenters are at a premium; packing so many pretty decent cores into one CPU lets you run a ton of cloud VMs on a physically compact server.

AMD EPYC, by the way, follows the same datacenter-oriented pattern.

asd4
0 replies
15h19m

They seem to gate ECC support behind Xeon on higher-end parts. You see ECC memory in a lot of workstation-class machines.

megablast
0 replies
13h37m

This is extremely common with successful companies. They don't have to make good decisions; they have more than enough cash to keep making bad decision after bad decision, whereas a smaller company would collapse.

Apple and Microsoft have both managed to avoid this, and been the exception.

danielmarkbruce
0 replies
16h48m

They ran it for max profit in an era when strategic investments needed to be made.

It's remarkably common and heavily incentivized.

RachelF
0 replies
16h47m

No, they were on the boat, they just mismanaged it. They used to make ARM chips, but sold that business off just before the first iPhone was released because they saw no future in mobile CPUs. Same with network processors for routers around the same time.

They have been trying half-heartedly with GPUs on and off since the i740 series in the late 1990s.

The root cause is probably the management mantra "focus on core competencies". They had an effective monopoly on fast CPUs from 2007 until 2018. This monopoly meant very little improvement in CPU speed.

B1FF_PSUVM
0 replies
16h41m

And the manufacturing failed to save them:

[...] when Intel’s manufacturing prowess hit a wall Intel’s designs were exposed. Gelsinger told me: 'So all of a sudden, as Warren Buffet says, “You don’t know who’s swimming naked until the tide goes out.” [...]'

mmaunder
17 replies
14h24m

Intel is just like Boeing: a company with legendary engineering roots taken over by myopic financial engineers who can't think further than next quarter's stock price, which is their only measure, their only target, and the thing to which all their bonuses are attached.

rpmisms
3 replies
14h14m

While it's sad now, I'm looking forward to long-term thinking being emphasized in business school again.

rectang
1 replies
14h2m

Why would people pay attention even if long-term thinking is taught? Only chumps think about the long-term interest of the company. Your long-term interest as an individual is in extracting as much money from short-term bonuses as possible, because once that money is in your bank account it doesn't matter if the company craters.

rpmisms
0 replies
13h56m

Well, the idea of giving value to society has been evaporating of late, but it should still be taught as the ideal.

runeblaze
0 replies
13h50m

I am still trying to maximize my promo chances and that has been very hard. Long term thinking -- that's like rocket science

markus_zhang
2 replies
14h14m

They all look like that, don't they? It's like cancer, late stage. It's everywhere. I'm trying to be optimistic here, but I don't see much light at the end of the tunnel. Best case, a second depression wipes away everyone's wealth and we start from a clean slate.

chrisco255
1 replies
13h51m

I mean, clearly the competitors eating Intel's lunch do not look like that. Nvidia does not look like it. Apple doesn't. AMD doesn't. Just seems like ordinary competitive churn in the marketplace to me.

markus_zhang
0 replies
13h48m

That's a pretty good point.

kortilla
2 replies
13h42m

This is oversimplified and actually underplays how serious the issue is.

These companies didn’t fail because of myopic financial engineers. The ones focused on quarter to quarter tend to bomb the company relatively quickly and do get flushed out.

These companies failed because of long term financial visionaries. These are the worst because they are thinking about how the company can capture markets at all costs, diversify into all adjacent markets, etc. They are a hard cancer to purge because on the surface they don’t sacrifice stuff for the current quarter. They sacrifice laser focus for broad but shallow approaches to all kinds of things.

“Sure, we’ll build a fab here. And we’ll also do enterprise NICs… And also storage… and also virtualization… and also AI… and also acquire some startups in these fields that might have something interesting…”

The company becomes a big, listless monster coasting on way too many bets spread out all over the place. There is no core engineering mission or clear boundary between what is the business and what isn't.

Intel is far from “myopic”. If it was something as obvious as a next quarter driven CEO, the board would have made a change immediately.

mmaunder
1 replies
1h7m

These companies failed because of long term financial visionaries. These are the worst because they are thinking about how the company can capture markets at all costs, diversify into all adjacent markets, etc.

Long-term financial visionaries? No, they're simply plundering the business to reward shareholders and executives through buybacks and dividends. They rationalize this as "maximizing shareholder value", but it is destroying long-term value, growth, and innovation.

kortilla
0 replies
8m

Don't be naive. You missed the entire point of what I highlighted, and looking for "dividends and buybacks" as your signal will lead you nowhere.

CEOs who aren't focused on product vision and instead chase "company growth" as a vague goal are what led to Intel, GE, etc. There is no greedy raiding of the coffers like your caricature implies in these scenarios.

kaliqt
2 replies
14h19m

Funnily enough, this is EXACTLY what has happened to the video game and movie/series industry.

They jump onto the latest trend, e.g. ESG, to get in good with the banks and funds, without thinking about what long-term damage it is doing to their brands and products.

matheusmoreira
1 replies
14h12m

To be fair, they jump on that particular trend because the banks will punish them if they don't. The banks have tens of trillions in assets under management, and they make sure that capital won't flow to companies that refuse their agenda. In effect, the banks dictate the direction society moves in. And people say it's a conspiracy theory.

ESG investment funds in the United States saw capital inflows of $3.1 billion in 2022, while non-ESG investment funds saw capital outflows of $370 billion during the stock market decline that year.

rpmisms
0 replies
14h8m

It's not a conspiracy theory if it's on their website...

borissk
2 replies
14h11m

And furthermore, they lost most of their best engineers and scientists during the years the company was run by useless MBAs. Now each new technology process is a huge struggle. Intel has secured some of the first new-gen EUV machines from ASML, but whether they'll have the talent to quickly start using them at scale is not yet clear.

selimthegrim
1 replies
11h54m

Somebody from Intel who had a PhD from my school and manages a bunch of process engineers was on sabbatical, teaching a class at his old department, some of whose undergrads I taught. I visited an info session he was presenting and pointed out that I had been a green badge in JF5 for Validation and that Intel had a reputation for not matching ASML and Nvidia offers. He went ballistic on me, told me to go work for them if I got an offer, and said he wouldn't want me on his team with that attitude. I'm sure all the other people in the room, who were F1s and undergrads, still wanted jobs, and he answered their questions honestly as far as I could tell (saying Intel would be bankrupt without CHIPS Act money, etc.), but that can't have been a good look for him. I did leave the room, though only after telling him I wanted his company to succeed.

borissk
0 replies
8h58m

You're right - this is another problem for Intel. They have a reputation for underpaying engineers, so the best and the brightest go elsewhere.

danielmarkbruce
0 replies
14h23m

Good analogy.

lvl102
5 replies
16h40m

Intel is a product of the corporate cancer that is management consulting.

tru3_power
3 replies
16h21m

What does this even mean? (not sarcastic)

jbm
1 replies
15h57m

I worked at (American drink company in Japan) previously and saw what the poster may be referring to.

Management Consulting sells ideas, many of them silly or unimportant, that are marketed and packaged as era-defining. A manager who implements #FASHIONABLE_IDEA can look forward to upward mobility, while boring, realistic, business-focused ideas from people in the trenches usually get ignored (unless you want a lateral transfer to a similar job). A hashtag-collection-of-ideas is much easier to explain when the time comes for the next step up.

This explains why you get insane things like Metaverse fashion shows that barely manage to look better than a LiveLeak beheading video. These sorts of things might seem like minor distractions, but getting these boondoggles up and running creates stress and drowns out other concerns. Once the idea is deployed, its success or failure is of minimal importance; it must be /made/ successful so that $important_person can get their next job.

These projects starve companies of morale, focus and resources. I recall the struggle for a ~USD $20k budget on a project to automate internal corporate systems, while some consultants received (much) more than 10 times that amount for a report that basically wound up in the bin.

Oddly, this sort of corporate supplication to management consultants worked out for me (personally). I was a dev who wound up as a manager and was able to deliver projects internally, while other decent project managers could not get budget and wound up looking worse for something that wasn't their fault.

I don't think any of the projects brought by management consultants really moved the needle in any meaningful way while I worked for any BigCos.

rrr_oh_man
0 replies
15h23m

Oh the PTSD... *twitch*

yCombLinks
0 replies
12h49m

People come in on a short term basis. They don't know the company, the business, or the employees. They make long term decisions by applying generic and simplified metrics without deep understanding.

pm90
0 replies
16h27m

tbh that applies to every Fortune 500 company at this point

ycsux
2 replies
14h34m

Boycott Intel

koshergweilo
1 replies
14h32m

Why

selimthegrim
0 replies
11h53m

Israeli fabs?

wolfspider
2 replies
15h44m

My own humble opinion is that Intel has always suffered from market cannibalization. They are a brand I look for, but the pace of product iteration often forces me to go a generation or two older, because I can't argue with the price and features. By the time I was sold on a NUC, they were discontinued. I wanted a discrete GPU when they announced Xe, but that has since become Xe, Arc Alchemist, Battlemage, Celestial, and Druid; usually, by the time I'm ready to spend some money, it has become something else. Also, they should have snapped up Nuvia. I'm still rooting for them, but if they could streamline their products and be willing to take a leap of faith on others in the same space, it would help a lot.

rudedogg
0 replies
14h26m

I felt that way about their Optane persistent RAM memory (https://arstechnica.com/gadgets/2018/05/intel-finally-announ...).

Night_Thastus
0 replies
1h30m

I wanted a discrete GPU when they announced Xe but it has become Xe ARC alchemist, battlemage, celestial, and druid.

They've made this situation fairly clear, in my eyes.

Alchemist is the product line for their first attempt at true dedicated GPUs like those Nvidia and AMD produce. It's based on the Intel Xe GPU architecture.

It's done decently well, and they've been very diligent about driver updates.

Battlemage is the next architecture that will replace it when it's ready, which I believe was targeted for this year - similar to how Nvidia's 4000 series replaced the 3000 series before it. Celestial comes a couple of years later, then Druid a couple of years after that, and so on. They don't exist simultaneously; they're just the names Intel uses for successive generations of its GPUs.

mixedbit
2 replies
9h2m

Isn't the current GPU-based stack that drives the progress of AI an early-stage architecture that will become suboptimal and obsolete in the longer term?

Having separate processors with separate memory and a separate software stack for matrix operations works, but it would be much more convenient and productive to have a system with one pool of RAM and CPUs that can do matrix operations efficiently, without requiring the programmer to delegate those operations to a separate stack. Even the name 'Graphics Processing Unit' suggests that the current approach to AI is rather hacky.

Because of this, in the long run there could be an opportunity for Intel and other CPU manufacturers to regain the lucrative market from NVIDIA.
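
To make the delegation concrete, here is a minimal sketch assuming a PyTorch-style stack (the library is just an illustration, not a claim about any particular vendor): the matrices start in CPU RAM, get copied into the accelerator's separate memory, are multiplied there, and the result is copied back.

    import torch

    a = torch.randn(4096, 4096)   # matrices allocated in ordinary CPU RAM
    b = torch.randn(4096, 4096)

    if torch.cuda.is_available():
        a_dev = a.to("cuda")               # explicit copy into the GPU's own memory
        b_dev = b.to("cuda")
        c = (a_dev @ b_dev).to("cpu")      # multiply on the device, copy the result back
    else:
        c = a @ b                          # same math, one address space, no transfers

With one pool of RAM and CPUs that handle matrix math efficiently, the explicit copies in the first branch would simply disappear.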

mike_hearn
0 replies
8h25m

The software stack co-evolves with the hardware, if the hardware can't do something fast then the software guys can't necessarily even try it out.

People have been trying to break NVIDIA's moat by creating non-GPU accelerators. The core maths operations aren't that varied, so it's an obvious thing to try. Nothing has worked, not even NVIDIA's own attempts. AI researchers have a software stack oriented around devices with separate memory, and it's abstracted, so unifying CPU and GPU RAM doesn't seem to make a big difference for them.

Don't be fooled by the name GPU; it's purely historical. "GPUs" used for AI don't have any video output capability - I think you can't even render with them. They are just specialised computers that come with their own OS ("driver"), compilers, APIs, and dev stack, happen to need a CPU to bring them online, and communicate via PCIe instead of Ethernet.

goriloser
0 replies
6h16m

Fast memory is very expensive.

Which is why you only use it when you really need it - on a GPU/AI accelerator.

topspin
1 replies
16h30m

"Intel’s argument is that backside power will make chips much easier to design, since the power is separated from the communications layer, eliminating interference"

That makes a lot of sense to me: it's why PCBs are usually designed the way they are. How true is it that there is an actual advantage vs TSMC?

mdasen
0 replies
16h3m

It all depends on timing. TSMC is also working on backside power delivery.

Intel's roadmap looks great. However, I'm skeptical of whether they're actually meeting that roadmap. Meteor Lake was launched last month using Intel 4, but it looks like Intel 4 has lower transistor density than TSMC's 5nm. Intel 3 is listed on their roadmap as second-half 2023, but we've yet to see any Intel 3 parts.

Realistically, there won't be too much of an advantage for Intel. It's pretty clear that even when Intel ships things, they aren't shipping these new nodes in volume. Intel 4 is only being used for some laptop processors and they're even using TSMC's 5nm and 6nm processes for the graphics and IO on those chips. They canceled the desktop version of Meteor Lake so desktops are staying on Intel 7 for now. Intel's latest server processors launched last month are also Intel 7.

If Intel were able to get a year or two ahead of TSMC, then I could see a nice advantage. However, it looks like Intel's a year behind its roadmap and even being a year behind they're only shipping laptop parts (and not even able to fab the whole chip themselves).

But past success/failure doesn't predict the future. Maybe Intel will be shipping 20A with backside power in 2024 and it'll be 2025 before TSMC gives Apple a 2nm chip with backside power.

But given that we haven't seen them shipping Intel 3 and they haven't taken Intel 3 off their roadmap, I'm going to be a bit skeptical. Intel is doing way better than they had been doing. However, I've yet to see something convincing that they're doing better than TSMC. That's not to say they aren't going to do better than TSMC, but at this point Intel is saying "we're going to jump from slightly worse than 5nm (Intel 4) to 2nm in a year or less!" Maybe Intel is doing that well, but it's a tall ask. TSMC launched 5nm in 2020 and 3 years later got to 3nm. It doesn't take as long to catch up because technology becomes easier over time, but Intel is kinda claiming it can compress 5-6 years worth of work into a single year. Again, maybe Intel has been pouring its resources into 20A and 18A and maybe some of it is more on ASML and maybe Intel has been neglecting Intel 4 and Intel 3 because it knows it's worth putting its energy toward something actually better. But it also seems reasonable to have a certain amount of doubt about Intel's claims.

I'd love for Intel to crush its roadmap. Better processors and better competition benefit us all. But I'm still wondering how well that will go. TSMC seems to be having a bit of trouble with their 3nm process. 2024's flagship Android processors will remain on 4nm and it looks like the 2024 Zen 5 processors will be 4nm as well (with 3nm parts coming in 2025). So it looks like 3nm will basically just be Apple until 2025 which kinda indicates that TSMC isn't doing wonderfully with its 3nm process. Android processors moved to 5nm around 6 months after Apple did, but it looks like they'll move to 3nm around 18 months after Apple. But just because TSMC isn't doing great at 3nm doesn't mean Intel will face similar struggles. It just seems likely that if TSMC (a company that has been crushing it over the past decade) is facing higher struggles at 3nm, it's a bit of wishful thinking to believe Intel won't face similar struggles at 3nm and below.

nimbius
1 replies
16h42m

Honestly, I think the real deathwatch started during Spectre/Meltdown. Intel had a real choice: they could have owned the issue, patched it, and used it as a chance to retool a largely broken architecture while corporate money was still at 0% or negative interest.

They didn't. Every press conference downplayed, deflected, and denied the performance issues, and every patch to the Linux kernel was "disabled by default." They lied through their teeth, and major players like Amazon, Vultr, and other hosting providers in turn left for AMD.

orev
0 replies
16h23m

I think you're vastly overestimating how much Wall Street cares about technical issues like this. Spectre/Meltdown barely registers on the radar of issues that Intel has. People are still mostly buying Intel CPUs in laptops, desktops, and servers, and the N100 seems to be gaining ground in the Raspberry Pi space. Maybe it gave an opening to AMD in some markets, but Intel's reputation really hasn't been hugely tarnished by it.

The far greater threats are that they missed the phone market, and are missing the GPU/AI market. Those are entirely new markets where growth would happen, instead of the mostly fixed size of the current CPU market where the same players are just trading small percentage points.

fortran77
1 replies
15h50m

I was a huge Intel fanboy for years...and then I got my first Windows Arm laptop (Lenovo X13s). Intel has a lot of catching up to do.

StillBored
0 replies
13h2m

I take that to mean you haven't tried any recent (Zen 3/4 U-series) AMD laptops, or for that matter the M1/2/3s. I've heard this from a number of people who are comparing the x13s or a new Mac with their 14nm Intel laptop from 2017/8 or so, which also has a discrete Nvidia GPU eating another 10W+.

I have a pile of laptops including the x13s, and that part burns a good ~30W for a couple minutes, while running ~30% (or worse) slower, then thermally throttles and drops another ~30% in performance.

For light browsing it manages to stay under 5W and gets 10h from its 50Wh battery. This is also doable on just about any AMD or Mac laptop. The AMD machine I'm typing this on tells me I have more than 12 hours remaining, and I'm sitting at 81% battery (67Wh). And this machine is a good 3x faster compiling code or running simple benchmarks like Speedometer when it's plugged in or allowed to burn 35W. The trade-off is that it also has a fan that turns on under heavy load to keep it from thermally throttling.

Yes the x13s is cute, it is a thin/light laptop and is probably the first windows on arm machine that isn't completely unusable. But, its going to take another leap or two to catch-up with current top of the line machines.

Everyone loves Geekbench, so here is an obviously unthrottled x13s vs a similarly rated 28W AMD 7840U.

https://browser.geekbench.com/v5/cpu/21596449 https://browser.geekbench.com/processors/amd-ryzen-7-7840u

That AMD part has 2x the single-threaded perf and is 50% faster multithreaded, and anything graphics-related is going to look even worse.

andrewstuart
1 replies
15h33m

Off topic but I find it weird that Intel CEO does so much religious quoting under the Intel corporate logo:

Here's one example, but there's a pile of them.

https://twitter.com/PGelsinger/status/1751653865009631584

I guess Intel does need the help of a higher power at this stage.

MrBuddyCasino
0 replies
5h22m

Almost as if "doing the right thing" requires an underlying moral framework.

RicoElectrico
1 replies
17h17m

Humble people don't allude to snake oil when dissing competition's naming scheme.

Edit: relevant video, because whoever downvoted did not get what I was referring to: https://www.youtube.com/watch?v=xUT4d5IVY0A

timerol
0 replies
13h21m

TFA is about Intel's humbling, not Intel's humility. If Intel were already humble, there would be no need for a humbling.

zer00eyz
0 replies
17h24m

>> Notice what is happening here: TSMC, unlike its historical pattern, is not keeping (all of its) 5nm capacity to make low-cost high-margin chips in fully-depreciated fabs; rather, it is going to repurpose some amount of equipment — probably as much as it can manage — to 3nm, which will allow it to expand its capacity without a commensurate increase in capital costs. This will both increase the profitability of 3nm and also recognizes the reality that is afflicting TSMC’s 7nm node: there is an increasingly large gap between the leading edge and “good enough” nodes for the vast majority of use cases.

My understanding is that 5nm has been and continues to be "problematic" in terms of yield. The move to 3nm seems to not be afflicted by as many issues. There is also a massive drive to get more volume (and a die shrink will do that), due to the demands of all things ML.

I suspect that TSMC's move here is a bit more nuanced than the (valid) point that the article is making on this step...

quickthrower2
0 replies
14h26m

Good. Evolution! Competition still works and regulatory capture can’t help every corp.

oldgradstudent
0 replies
14h59m

I thought that Krzanich should do something similar: Intel should stop focusing its efforts on being an integrated device manufacturer (IDM) — a company that both designed and manufactured its own chips exclusively — and shift to becoming a foundry that also served external customers.

That would only work if Intel has a competitive foundry. Intel produces very high-margin chips. Can it be competitive with TSMC in low-margin chips where costs must be controlled?

The rumors I've heard (not sure about their credibility) is that Intel is simply not competitive in terms of costs and yields.

And that's even before considering that it doesn't really have a process competitive with TSMC's.

It's easy to say it should become a foundry, it's much harder to actually do that.

mjevans
0 replies
14h54m

Wasn't the 12-7nm region where flash and RAM-style memory were more reliable? If the new process works well for those but is cheaper, that could be very good for bulk (value-focused) non-volatile and volatile memory.

krautt
0 replies
13h56m

I worked at Intel between '97 and '07. MFG was absolute king. Keeping that production line stable and active was the priority that eclipsed all others. I was a process engineer, and to change a gas flow rate on some part of the process by a little bit, I'd have to design an experiment, collect data for months, work with various upstream/downstream teams, and write a change control proposal that would exceed a hundred pages of documentation. AFAIK, that production line was the most complex human process to date. It was mostly run by 25-30 year old engineers. That in itself was a miracle.

ken47
0 replies
16h10m

This reminds me of the situation at Boeing, albeit with less fatal consequences. For a long time, it's been a company focused first and foremost on maximizing profits through "innovative" business practices rather than innovative R&D. It's completely unsurprising that Intel has been struggling lately against its genuinely inventive competition.

kbutler
0 replies
15h35m

Andy Grove's philosophy was "Only the paranoid survive". Intel seemed to rest on its laurels, divesting its ARM business, casually developing casual GPUs, and being complacent about its biggest direct competitor.

They needed (and need) a lot more paranoia.

drumhead
0 replies
4h17m

With the US re-industrialising, semiconductors are a strategic priority, so Intel will be at the heart of that effort. They're going to be a major beneficiary of it, here and in Europe. They'll be a significant player.

They got lazy and sat on their laurels when AMD was struggling, and they didn't view ARM as a threat. TSMC was probably a joke to them... until everyone brought out killer products and Intel had no response. They could have been way ahead of the pack by now, but they decided to harvest the market instead of innovating aggressively. Right now they're worth less than $200bn, which is less than half of Broadcom or TSMC, 30% less than AMD, and about 10% of Nvidia. Is Intel intrinsically worth that little? Probably not; I think it's a buy at this price.

dcdc123
0 replies
12h14m

Stop humbling them, I still own a bunch of stock.

culebron21
0 replies
11h11m

That much text, of no use to most in the tech field... Discussions of big tech seem to me like discussions of geopolitics.

bernardlunn
0 replies
10h38m

Brilliant article. He is totally right that what Pat Gelsinger is doing is as brave as what Andy Grove did and just as essential. In hindsight Andy Grove was 100% right and I hope Pat Gelsinger is proved right.

The fact that Intel stock went up during the Brian Krzanich tenure as CEO is simply a reflection of that being the free money era that lifted all boats/stocks. Without that we would be writing Intel’s epitaph now.

You cannot play offense in tech when there is a big market shift.

Sparkyte
0 replies
12h43m

You cannot innovate without competition. AMD will stagnate too without Intel or another company competing.

I am hoping Intel rights its wrongs so they can stay competitive. It takes a good amount of competition to keep businesses honest.

BonoboIO
0 replies
14h47m

Intel: The Boeing of semiconductors