What is the financial benefit to Intel in artificially limiting its CPU sockets like this? Logic and reason would have me believe they'd want to sell as many CPUs as possible, and keeping the socket compatible for as long as possible would seem logical.
My thoughts on reasons they might have done this are below. I honestly have no idea; these are just uninformed guesses.
- The Charitable Answer: There actually is some sort of minor incompatibility like a voltage difference to some pin where it can still boot but it's maybe not good for the CPU?
- The Practical Answer: They make more off the associated sockets/compass direction bridges/etc than they would off increased numbers of CPU upgrades.
- The Cynical Answer: Motherboard manufacturers paid them off
- The Cynical Practical Answer: They have a schedule for socket changes as some sort of deal with motherboard manufacturers and some engineers decided to do so in the laziest way possible
- The Silly Answer: They're an evil corporation and want you to suffer
Intel basically made the same CPU for about 6 years straight because of 10nm process issues.
They had to keep pretending the next-gen "Lake" CPU was substantially different from the last, so they just took the last-gen product, made some minor tweaks to break compatibility, and called it a new generation.
Same goes for most cars. No real revolution, just tweaks or changes driven by regulatory demands, but nothing groundbreaking.
Still, when you’re due for a new car and look for the newest of the newest, would you go with manufacturer A, who released their latest car 8 years ago, or manufacturer B, who released it 1 year ago?
Incremental upgrades get so much hate around the internet (mostly about phones) from people who own the version right before it, saying things like "ah, they changed almost nothing! Why would I upgrade?!" Meanwhile someone like me, only on my 3rd smartphone EVER, loves all the incremental updates that have accumulated over the years when I finally decide I need a new one, because when I buy, I always get the latest and greatest. If a company then doesn’t release anything for a few years, I’d go somewhere else.
The one with reliability data for the past 8 years.
It's surprising to me that people would want to make a major financial decision like a car without knowing about its reliability history.
8 years of the same parts, repair knowledge, and continued software support?
Sign me up immediately.
Unfortunately, due to the extremely minimal software rights that exist (see: proprietary software), this is pretty much nonexistent in cars AFAIK.
I would rather get a car that is old enough to not be limited by software constraints. Which is pretty disappointing, because I actually really like electric cars. I think they would work well for my needs. But they are all so intentionally kneecapped, I have no interest in any particular model that's available.
I would love a super bare bones electric car. One that functions the same as any late 90s/early 2000s era car would, except with an electric power train and maybe cruise control.
2011-2013? Nissan Leaf fits the bill.
This.
It's so thoroughly reverse engineered that if you decided you wanted to reconfigure the software so it would only drive at prime numbered miles per hour, you could.
Those cars are essentially illegal to sell in many countries.
> It's surprising to me that people would want to make a major financial decision like a car without knowing about its reliability history.
Some people will always be surprising, but it is pretty clear that the pickup truck is the most purchased type of vehicle (in North America) exactly because pickups have a much better reliability track record compared to most cars. This idea doesn't escape the typical buyer.
The pickup truck is also deeply ingrained in American culture as masculine, even if the owner does nothing that requires it.
Yes, it seems most products that gain a reputation for reliability end up taking on a "masculine persona", of sorts. Which I guess stands to reason as in popular culture reliability is what defines the "manly man".
What if the reliability is like "these bearings are known to fail every 10k miles or so, but we have no product refresh planned for at least 3 years so the problem will remain unresolved?"
This is what incremental improvements are supposed to be. Well, that and discovering that the vehicle can last till the end of the warranty period with one fewer bolt in that spot, so you can eliminate it.
You are wrong and right. Wrong because continuous improvement is not about making a vehicle only survive a 3-5 year warranty period. Right because the masters of continuous improvement have a 10 year warranty (20000 km, 12500 miles) where I live if you do service at a dealership or authorised service centre, and I think this extended warranty influences the decisions about what minimum level of quality the manufacturer will accept.
Buying first-gen models is always a crapshoot. Often the same goes for the last gen, if they try to squeeze new capabilities into a platform it wasn’t intended for.
Tesla is particularly terrible but this has been true for every manufacturer.
You want a couple years for them to work out the kinks.
Case in point: a model released 8 years ago, even with incremental upgrades (e.g. the Mitsubishi RVR in NA), still won’t have the same fundamental design considerations around safety or fuel efficiency as a more recent model.
Honestly, from a reliability standpoint, the ideal new car is one that had a major refresh ~2 years ago. By then most of the kinks should be worked out.
Or just pay attention to the warranty. If they guarantee it for 10 years, they probably expect it to run for 10+ years.
In the case of cars and CPUs it's not that people mind incremental upgrades, it's that they mind incremental upgrades sold as big upgrades.
For phones the mindset of people who upgrade when they have to and/or buy cheaper phones is very different from those who regularly upgrade to the latest flagship phone.
Car buyers aren't always so dumb. When we bought our car, I was fully aware that major updates to models only happen so often. We bought used (of course), and the "major update" was our main criterion, more so than the specific year of release. (We bought a 2014 model in 2018; 2014 was the year they released significant safety improvements over the 2013 model.)
For sure A. I would never buy a car that is the first model year of a revamp. I would give them at least a year to work out the bugs.
New models every year are fine if they're honestly labeled and have technologically reasonable compatibility.
Cars and phones meet those criteria a lot better than Intel CPUs. The problem isn't releasing things, it's the way they pretend every release is a big advance and the way they make the motherboards almost never able to upgrade.
Luckily it’s not common to need to replace your garage every time you get a new car.
In this analogy, Intel sells the parts to make garages too
I mean, I can't speak to ICE cars, but electric car ranges seem to scale pretty dramatically with how new they are.
There haven't been significant combustion engine efficiency changes in a long time. My scrapbox from 2007 still goes 550 miles on a tank of diesel, about the same as my 1997 car did before it.
This is arguably exactly what most people actually need in a vehicle they're spending thousands of dollars on: accumulated refinements seamlessly incorporated over time.
Year over year this typically results in good outcomes on a purely practical basis. However it just inherently makes for very boring publicity/promotional material.
Edit to add: admittedly, it can also result in older solutions getting baked in, which prevents larger beneficial changes. (Toyota's North American 4Runner and Tacoma models might be good real-world examples of this approach resulting in generally high reliability, but also in larger, "riskier" changes eventually becoming necessary.)
Cars don't get a new model every year. They are even called "facelifts" to make it clear that it is essentially the same with minor modifications and upgrades.
Also, there isn't much "groundbreaking" you can do in a car; except for the EV switch, the industry has existed on many small upgrades over time. (Like many other industries.)
It costs money to validate a new processor on an old motherboard, and no corp wants to waste money on a product they have already sold.
It costs money to validate a new processor on an older motherboard design. How much does it cost to make a completely new motherboard design?
They're going to make new motherboards every year regardless, so any support for old motherboards is in addition to that.
The new processor isn't sold yet.
Sure, but why pay to validate on old boards when you can instead get many of their users to pay you for new ones?
AMD ran into significant compatibility issues with AM4, with some motherboards not being able to supply the amount of power needed by the newer CPUs and PCI-E Gen 4 support being removed in the final BIOS release due to reliability issues. A lot of motherboards also didn't have enough flash space to contain the firmware needed for the whole CPU range, so the update to support the newer gen had to remove support for the oldest gen.
Turns out it's really hard to guarantee compatibility with several years of third-party products which weren't designed with your new fancy features in mind.
Newer doesn't usually mean more power, though. You can just have a power limit, it's fine. And I don't expect PCIe to get faster with a CPU upgrade anyway.
The flash space issue was pretty foreseeable and they dealt with it; it's more of an excuse than anything.
This is my thought, too.
I'd bet customer perception is also a factor; there's a risk the old boards (not even made by Intel) die and cause problems, and Intel wouldn't want to deal with press/comments like "This CPU stopped working after 2 months" or "I installed this new CPU, and within 2 months it killed my motherboard that had been working fine for 7 years".
They've released CPU upgrades with the same socket before, I'm sure they have the sales data to know how that performs vs new socket.
Laptops have outsold desktops for well over a decade, and their CPUs are pretty much non-upgradable. I can't easily find a nice chart to reference, but intuition tells me the desktop industry is similarly trending towards complete "system" sales vs individual parts. In other words: most people don't upgrade their CPU, they upgrade by replacing the entire system. If true, this also means the socket would be almost entirely irrelevant to sales performance.
Motherboard manufacturers don't seem to mind doing it for AMD chips
Intel actually intended for LGA1151 to remain unchanged for Coffee Lake but found out late in the testing process that many existing motherboards did not have enough power delivery capability to support the planned 6 and 8 core parts. Hence the decision to lock them out in software only. They are probably aware of the bad optics but decided that it’s better than trying to deal with the RMAs later.
It’s very similar to what had happened in 2006 when the 65nm Core 2 series were released in the same LGA775 package used by 90nm Pentium 4s; however, the former mandated a specific VRM standard that not all contemporary motherboards supported. Later 45nm parts pretty much required a new motherboard despite having the same socket, again due to power supply issues.
AMD went the other route when they first introduced their 12 and 16 core parts to the AM4 socket. A lot of older motherboards were clearly struggling to cope with the power draw but AMD got to keep their implicit promise of all-round compatibility. Later on AMD tried to silently drop support for older motherboards when the Ryzen 5000 series were introduced but had to back down after some backlash. Unlike the blue brand they could not afford to offend the fanboys.
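To put very rough numbers on the power-delivery problem discussed above (all figures are made up for illustration; real boards and CPUs vary a lot), here's a quick sketch:

```python
# Toy estimate of the current a motherboard's VRM has to deliver to the CPU.
# Hypothetical numbers only; ignores conversion losses, transients, and
# per-phase limits, which are what actually kill marginal boards.
def vrm_current_amps(package_watts: float, core_voltage: float) -> float:
    """Current = power / voltage (very rough steady-state approximation)."""
    return package_watts / core_voltage

# A board sized around a hypothetical ~90 W quad-core...
print(f"{vrm_current_amps(90, 1.2):.0f} A")    # ~75 A
# ...asked to feed a hypothetical ~150 W eight-core at the same voltage.
print(f"{vrm_current_amps(150, 1.2):.0f} A")   # ~125 A
```

Whether an older board's VRM can actually sustain that extra current, thermally and electrically, is exactly the kind of thing that only shows up in validation.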
P.S. Despite the usual complaints, most previous Intel socket changes actually had valid technical reasons for them:
- LGA1155: Major change to integrated GPU, also fixed the weird pin assignment of LGA1156 which made board layout a major pain.
- LGA1150: Introduction of on-die voltage regulation (FIVR)
- LGA1151: Initial support for DDR4 and separate clock domains
This leaves the LGA1200 as the only example where there really isn’t any justification for its existence.
Why does a socket that can support DDR3 or DDR4 need to be different from a socket that only supports DDR3?
And with the current socket being 1700, they're going to change it again for the next generation to 1851, and with a quick look I don't see any feature changes that are motivating the change. (Upgrading 4 of the PCIe lanes to match the speed of the other 16 definitely does not count as something that motivates a socket change.)
So by my reckoning, half their desktop socket changes in the last decade have been unnecessary.
Because DDR4 is electrically different and memory controllers are all on-die.
Intel could get away with doing that pre-Nehalem because the memory was connected via the northbridge rather than directly to the CPU (a direct connection is what AMD was doing at the time; their CPUs outperformed Intel's partially due to that), so the CPU could be memory-agnostic.
AMD would later need to switch to a new socket to run DDR3 RAM, but that socket was physically compatible with AM2 (AM3 CPUs would have both DDR2 and DDR3 memory controllers and switch depending on which memory they were paired with; AM3+ CPUs would do away with that though).
There were some benefits to doing that; the last time Intel realized them was in 2001 when RD-RAM turned out to be a dead-end. Socket 423 processors would ultimately prove compatible with RDRAM, SDRAM, and DDR SDRAM.
Them being on-die is exactly why you don't need a socket change to take full advantage of DDR4, since they directly showed a socket can support both at once. Unless you're particularly worried about people trying to buy a new motherboard for their existing CPU, but who does that? You can tell them no without blocking CPU upgrades for everyone else.
LGA1151 supported both DDR3 and DDR4.
Of course, the P67 chipset was trivially electrically compatible with LGA1156 CPUs; Asrock's P67 Transformer motherboard proved that conclusively.
That said, the main problem with 1155 was their locking down the clock dividers, so the BCLK overclocking you could do with 1156 platforms was completely removed (even though every chip in the Sandy Bridge lineup could do 4.4GHz without any problem). This was the beginning of the "we're intentionally limiting our processor performance due to zero competition" days.
Which they would proceed to remove from the die in later generations, if I recall correctly. (And yes, Haswell was a generation with ~0% IPC uplift so no big loss there, but still.)
Well, it’s possible for the determined to shoehorn in support, but iGPU support is definitely out of reach, and I am not sure what segment of the market that is targeted at. Seems like an excuse for AsRock to get rid of their excess stock. The socket change was actually very well received by everybody in the industry.
You are right that FIVR did not last long in that particular iteration. However Haswell does have a 10% to 30% IPC advantage over the previous gen depending on the test[1].
Haswell also added AVX2 instructions, which means it will still run the latest games, whereas anything older is up to the whims of the developer (and sometimes Denuvo, sadly).
https://www.anandtech.com/show/9483/intel-skylake-review-670...
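As a side note, if you want to check whether a machine exposes AVX2 before assuming a given binary will run, a minimal sketch (Linux-only, reading the kernel's CPU flag list; other platforms would query CPUID or use a library) might look like this:

```python
# Minimal check for AVX2 support on Linux by scanning /proc/cpuinfo.
def has_avx2() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx2" in line.split()
    except OSError:
        pass  # non-Linux or unreadable; assume unsupported
    return False

if __name__ == "__main__":
    print("AVX2 supported:", has_avx2())
```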
Coffee Lake's 8700K had 6 cores, not 8; the Coffee Lake-S Refresh parts did feature 8 cores. The release TDP of the 8700K was the same as the 6700K's, 95W; of course it consumed more at peak and when overclocked.
The support would still depend on the motherboard manufacturers adding the CPU support (along with the microcode) to their BIOS. Many high-end boards, e.g. the ASRock Z170 Extreme6/7, would have supported the 8700K.
The situation is no different today in that many (most) boards do not support top-end processors anyway, due to poor VRMs, even when the socket is the same (or if they do support the CPUs, they are effectively power throttled).
Thank you for providing valuable insight. I wish these kinds of comments would end up at the top instead of the usual low-quality "hurr, $COMPANY evil, it's all greedy obsolescence, durr" from people who have no idea how CPUs and motherboards work together, or the compatibility challenges that come with spinning new CPU designs with big differences that aren't visible to the layman, who just counts the number of cores and thinks there can't possibly be more under-the-hood changes beyond their $DAYJOB comprehension.
Here's a video from Gamers Nexus on AMD's HW testing lab, just to understand the depth and breadth of how much HW and compatibility testing goes into a new CPU, and that's only what they can talk about in public. https://www.youtube.com/watch?v=7H4eg2jOvVw
The x86 market is near-monopolistic, with two companies cornering most of it. Monopolies can afford to sustain irrational/inefficient practices as long as it helps them squeeze the consumer. I hope RISC-V succeeds in breaking this duopoly. Perhaps now, with the latest round of sanctions against China, if their full industrial might is thrown behind open designs, we may have some hope of crashing the duopoly.
If you’re hoping RISC-V will get market share then it’s three companies, not two. Intel, AMD, and Apple.
If you're going to bring in RISC-V, why not mention ARM? They're currently more of a threat to Intel than RISC-V is, and likely will be over the next decade. They have a near monopoly in anything that isn't a computer but requires more than a low-performance microcontroller, and they are supported to varying degrees by the three major general-purpose operating systems.
RISC-V likely has a promising future, but the foundations are still being laid.
Coming of age in the 90s and witnessing the sheer audacity of greed that followed, I can tell you that the cynical answer tends to be the right one.
Especially considering Intel came of age in the 90’s as well.
Intel sells the chipset that goes along with the processor, as well as selling their own motherboards. I think the profit incentive here is obvious.
Why sell just a CPU when you can sell a CPU and a chipset and a motherboard?
Intel stopped selling motherboards in 2013.
They (Intel) also make the chipsets that go on the motherboards. So anyone who disposes of their old motherboard and buys a new motherboard because of this limitation hands Intel two sales instead of one.
Given that a "new CPU" sale plus a "new motherboard chipset" sale is more revenue to Intel than just a "new CPU" sale alone, the financial benefit becomes obvious. Not to mention that they know people have to upgrade, and in not that long of a time (5 years-ish?).
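Just to make that arithmetic concrete, with entirely made-up prices (real ASPs aren't public at this granularity):

```python
# Hypothetical average selling prices; the real figures are not public.
cpu_asp = 350       # what Intel gets for the CPU
chipset_asp = 40    # what Intel gets for the chipset on a new board

upgrade_keeping_board = cpu_asp
upgrade_forcing_new_board = cpu_asp + chipset_asp

print(upgrade_keeping_board)      # 350
print(upgrade_forcing_new_board)  # 390, ~11% more revenue per upgrade
```

With these toy numbers it's roughly 11% extra revenue per upgrade, multiplied across however many people actually swap CPUs; the real figure depends on ASPs nobody outside Intel knows.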
The strategy wouldn't work if it were something like a car, where the lifespan is 20+ years, but with high turnover they have people over a barrel. Now AMD competes, but I think we forget that this is a new thing (likely because, if you're on HN, you're deep in this environment). So the big question is: will Intel continue, or will they recognize that they don't have that choice anymore? That is, of course, as long as AMD decides not to play the same game.
I would say that Intel is a company, not a person, and isn't motivated but directed. If the same people own both Intel and the mobo manufacturers, they win by forcing new purchases of new products primarily distinguished by a higher price.
One computer owner in a thousand upgrades their processor alone, or would even know how to.
You know, it's a matter of perspective, but I'd disagree.
I think we'd like to think companies are directed, but as they get larger and older, especially public companies, they operate less by direction; systemic forces take over, and they function more as the aggregate sum of all the motivations of the actors involved.
I think it's true that some companies remain primarily "directed", perhaps by sheer force of their leaders' personalities, but I feel that's more the exception than the rule.
To be fair I don't upgrade PC all that often (my 1080ti still going strooong!)
So when I do upgrade (every 6 years or so; it's currently been since 2018 and I still feel no need), not only does the CPU technology need a bump, but the bridges on the mobo also need an update. Going from a 2018->2024 mobo is guaranteed to get you things like more/faster M.2 slots etc. as well.
I suppose they could make it compatible with both old and new boards, but I imagine it's much easier for them to design and test 1:1 (new CPU + new chipset) than 1:* (new CPU against the new chipset, old chipset 1, old chipset 2, etc.).
My i7-6700K was going strong until the motherboard died for the second time. I suspect a solder crack caused by vibration from a damaged fan (my only evidence being that the fan is damaged and vibrates, and that two different motherboards died). Might try reflowing it to revive it, eventually.
After reviewing my options I would have either bought a third motherboard or upgraded to something like a Threadripper (I did the latter). Upgrading to a current desktop Intel system just didn't seem really worth it when I already had most of a working one and it was still fast enough.
You actually have FEWER expansion slots on the newer desktop motherboards because they've pre-decided what you want: a desktop has a 16x GPU slot and a 4x NVMe slot, and that's it. Gone are the days of generic uncommitted expansion slots, mostly.
I am going to use this listing in my pre-prompt
Intel makes money selling motherboard chipsets. There's no excuse, just greed.