LPCAMM2 is a modular, repairable, upgradeable memory standard for laptops

baby_souffle
32 replies
1d2h

This is fantastic news. Hopefully the cost to manufacturers is only marginal and they find a suitable replacement for their current "each tier in RAM comes with a 5-20% price bump" pricing scheme.

Too bad Apple is almost guaranteed not to adopt the standard. I miss being able to upgrade the RAM in MacBooks.

sliken
16 replies
1d1h

Apple ships 128 bit, 256 bit, and 512 bit wide memory interfaces on laptops (up to 1024 bit wide on desktops).

Is it feasible to get memory bandwidth like the M3 Max's (512-bit-wide LPDDR5-6400) with LPCAMM2 in a thin/light laptop?

pja
6 replies
1d1h

This PDF[1] suggests that an LPCAMM2 module has a 128 bit wide memory interface, so the epic memory bandwidth of the M3 max won’t be achievable with one of these memory modules. High end devices could potentially have two or more of them arranged around the CPU though?

[1] https://investors.micron.com/node/47186/pdf
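
A quick back-of-envelope check, in Python (peak theoretical numbers; the 7500 MT/s module speed is what Micron quotes for its first LPCAMM2 parts, so treat these as illustrative):

  # Peak DRAM bandwidth: bus width (bits) / 8 * transfer rate (GT/s) = GB/s
  def peak_gbs(bus_width_bits, gt_per_s):
      return bus_width_bits / 8 * gt_per_s

  print(peak_gbs(512, 6.4))  # M3 Max, 512-bit LPDDR5-6400: ~409.6 GB/s
  print(peak_gbs(128, 7.5))  # one LPCAMM2, 128-bit LPDDR5X-7500: ~120 GB/s
  # -> you'd need roughly four modules to land in the same ballpark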

7speter
5 replies
20h36m

Apple could just make lower-tier MacBooks, but then Mac fanboys wouldn't be able to ask "but what about Apple's quarterly profits?"

Most MacBooks don't need high memory bandwidth; most users are using their Macs for word processing, Excel, and VS Code.

teaearlgraycold
1 replies
19h2m

Yes, but Apple's trying to build an ecosystem where users get high-quality, offline, low-latency AI computed on their device. Today there's not much of that. And I don't think they even really know what's going to justify all of that silicon in the neural engine and the memory bandwidth.

Imagine 5 years from now people have built whole stacks on that foundation. And then competing laptops need to ship that compute to the cloud, with all of the unsolvable problems that come with that: privacy, service costs (ads?), latency, reliability.

jwells89
0 replies
15h41m

Apple is also deliberately avoiding having “celeron” type products in their lineup because those ultimately mar the brand’s image due to being kinda crap, even if they’re technically adequate for the tasks they’re used for.

They instead position midrange products from 1-2 gens ago as their entry level which isn’t quite as cheap but is usually also much more pleasant to use than the usual bargain basement stuff.

sliken
1 replies
12h44m

Even low-end gaming, simulations, and even fun WebGL toys can require a fair amount of memory bandwidth with an iGPU like Apple's M series. It also helps quite a bit for inference. An MBP with an M3 Max can run models requiring multiple GPUs on a desktop and still get decent perf for single users.

consp
0 replies
11h54m

An MBP with an M3 Max can run models requiring multiple GPUs on a desktop and still get decent perf for single users.

Good for your niche case; the other 99.8% still only do web and low-performance desktop applications (which includes IDEs).

pmontra
0 replies
11h39m

As a non-Mac reference, I work on an HP laptop from 2014. It was a high-end laptop back then. It goes for between 300 and 600 Euro refurbished now.

I expanded it to 32 GB RAM and a 3 TB SSD, but it's still an i7 4xxx with 1666 MHz RAM. And yet it's OK for Ruby, Python, Node, PostgreSQL, Docker. I don't feel the need to upgrade. I will when I get a major failure and no spare parts to fix it.

So yes, low end Macs are probably good for nearly everything.

jauntywundrkind
5 replies
1d1h

Hoping we see AMD Strix Halo with its 256-bit interface crammed into an aggressively cooled, fairly-thin, fairly-light laptop. But it's going to require heavy cooling to make full use of it.

Heck, make it only run full tilt when on an active cooling dock. Let it run half power when unassisted.

seanp2k2
4 replies
18h41m

Kinda hilarious to see gamers buying laptops that can't actually leave the house in any practical, meaningful way. I feel like some of them would be better off with SFF PCs and the external monitors they already use. I guess the biggest appeal I've seen is the ability to fold up the gaming laptop and put the dock away to get it off the desk; but an SFF on the ground, plus the wireless gaming keyboard and mouse they already use with the laptop, plus one of those compact "portable" monitors, seems like it'd solve the same problem.

jwells89
2 replies
15h34m

I’ve been wondering for a while now why ASUS or some other gaming laptop manufacturer doesn’t take one of their flagship gaming laptop motherboards, put some beefy but quiet cooling on it, put it in a pizza-box/console enclosure, and sell it as a silent compact gaming desktop.

A machine like that could be relatively small but still dramatically better cooled than even the thickest laptop, since it wouldn't have to make space for a battery, keyboard, etc.

antonkochubey
1 replies
12h46m

ZOTAC does these - there are ZBOX Magnus models with laptop-grade RTX 4000 series GPUs in 2-3 liter chassis. However, their performance and acoustics are rather compromised compared to a proper SFF desktop (which can be built in ~3x the volume).

jwells89
0 replies
12h33m

Yeah, those look like they’re too small to be reasonably cooled. What I had in mind is shaped like the main body of a laptop but maybe 2-3x as thick (to be able to fit plenty of heatsink and proper 120/140mm fans), stood up on its side.

kristianp
0 replies
18h31m

My wife can get an hour of gaming out of her gaming laptop. They're good for being able to game in an area of the house where the rest of the family is, even if that means being plugged in at the dining table. Our home office isn't close enough.

Also a gaming laptop is handy if you want to travel and game at your hotel.

wmf
1 replies
1d1h

For 512 bits you would need four LPCAMM2s. I could imagine putting two on opposite sides of the SoC but four might require a huge motherboard.

kristianp
0 replies
19h6m

Perhaps future LPCAMM generations will offer wider interfaces? I still can't imagine Apple using them unless required by right-to-repair laws. But those laws probably don't extend to making RAM upgradeable.

AnthonyMouse
0 replies
8h28m

Apple does this because their CPU and GPU use the same memory, and it's generally the GPU that benefits from more memory bandwidth. Whereas in a PC optimized for GPU work you'd have a discrete GPU that has its own memory which is even faster than that.

cjk2
6 replies
1d2h

Given enough pressure ...

colinng
4 replies
1d

They will maliciously comply. They might even have 4 sockets for the 512-bit wide systems. But then they’ll keep the SSD devices soldered - just like they’ve done for a long time. Or cover them with epoxy, or rig it with explosives. That’ll show you for trying to upgrade! How dare you ruin the beautiful fat profit margin that our MBAs worked so hard to design in?!?

7speter
2 replies
20h33m

Apple lines the perimeter of the NAND chips on modern Mac minis with an array of tiny capacitors, so even the crazy people with heater boards can't desolder the NAND and replace it with higher-density NAND.

wtallis
0 replies
18h50m

Have you not looked at the NAND packages on any regular SSDs? Tiny decoupling caps alongside the NAND is pretty standard practice.

cjk2
0 replies
11h45m

This is normal. They are called decoupling capacitors and are there to provide energy when the SSD requires short bursts of it. If you put them any further away, the bit of wire between them and the gate turns into an inductor, with somewhat undesirable characteristics.

Also replacing them is not rocket science. I reckon I could do one fine (used to do rework). The software side is the bugbear.

cjk2
0 replies
11h42m

This is hyperbole. They are replaceable. It's just more difficult.

armarr
0 replies
1d2h

You mean pressure from regulators, surely. Because 99% of consumers will not notice or know the difference in a spec sheet.

Aurornis
4 replies
1d

Too bad Apple is almost guaranteed not to adopt the standard.

Apple would require multiple LPCAMM2 modules to provide the bus width necessary for their chips. Up to 4 x LPCAMM2 modules depending on the processor.

Each LPCAMM2 module is almost as big as the entire Apple CPU package combined with the unified RAM chips, so putting 2-4 LPCAMM2 modules on the board is completely infeasible without significantly increasing the size of the laptop.

Remember, the Apple architecture is a combined CPU/GPU architecture and has memory bandwidth to match. It's closer to your GPU than to the CPU in your non-Mac machine. Asking for upgradeable RAM on Apple laptops is almost like asking for upgradeable RAM on your GPU (which would not be cheap or easy).

For every 1 person who thinks they'd want a bigger MacBook Pro if it enabled memory upgrades, there are many, many more people who would gladly take the smaller size of the integrated solution we have today.

kokada
1 replies
23h2m

Up to 4 x LPCAMM2 modules depending on the processor.

The non-Pro/Max versions (e.g. M3) use 128 bits, and those are arguably the notebooks that most need to be upgraded later, since they commonly come with only 8GB of RAM.

Even the Pro versions (e.g. M3 Pro) use up to 256 bits; that would be 2 x LPCAMM2 modules, which seems plausible.

For the M3 Max in the MacBook Pro, yes, 4 x LPCAMM2 would (probably) be impossible. But I think you could have something like the Mac Studio use them; that is arguably also the kind of device where you'd want to increase memory in the future.

throwaway48476
0 replies
18h39m

It would only need to be 2x per board side.

coolspot
1 replies
23h54m

like asking for upgradeable RAM on your GPU

Can I please have upgradeable RAM on GPU? Pwetty pwease?

thfuran
0 replies
23h39m

Sure, as long as you're willing to pay in cost, size, and performance.

j16sdiz
1 replies
16h20m

Unified memory is basically L3 cache speed with zero copy between CPU and GPU.

There are engineering differences. Depending on who you ask, it may or may not be worth it.

enragedcacti
0 replies
15h37m

Assuming you mean latency, Apple's unified memory isn't lower latency than other soldered or socketed solutions e.g. M1 Max with 111ns latency on cache miss vs 13900k with 93ns latency. Certainly not L3 level latency. Zero copy between CPU/GPU is great but not unique to unified memory or soldered ram.

As far as bandwidth goes, you would only need one or two LPCAMM2 modules to match or exceed the bandwidth of non-Max M series chips. Accommodating Max chips in a macbook with LPCAMM2 would definitely be a difficult packaging problem.

https://www.anandtech.com/show/17024/apple-m1-max-performanc...

https://www.anandtech.com/show/17047/the-intel-12th-gen-core...

redeeman
0 replies
20h12m

And they won't, so long as people buy regardless.

orev
29 replies
1d1h

I’m glad they explained why RAM has become soldered to the board recently. It’s easy to be cynical and assume they were doing it for profit motive purposes (which might be a nice side effect), but it’s good to know that there’s also a technical reason to solder it. Even better to know that it’s been recognized and a solution is being worked on.

drivingmenuts
10 replies
23h41m

The problem is getting manufacturers to implement the new RAM standard. While the justifications given are great for the consumer, I didn't see any reason for a manufacturer to sign on.

They are going to lose money when people buy new RAM, rather than a whole new laptop. While processor speeds and sizes haven't plateaued yet, it's going to take a while to develop significant new speed upgrades, and in the meantime the only other upgrade is disk size/long-term storage, which, aside from Apple, they don't totally control.

So, why should they relinquish that to the user?

cesarb
2 replies
21h47m

While the justifications given are great for the consumer, I didn't see any reason for a manufacturer to sign on. [...] So, why should they relinquish that to the user?

It makes sense that the first ones to use this new standard would be Dell and Lenovo. They both have "business" lines of computers, which usually offer on-site repairs (they send the parts and a technician to your office) for a somewhat long time (often 3 or 5 years). To them, it's a cost advantage to make these computers easier to repair. Having the memory (a part that fails not infrequently) in a separate module means they don't have to replace and refurbish the whole logic board, and having it easy to remove and replace means less time used by the on-site technician (replacing the main logic board or the chassis often means dismantling nearly everything until it can be removed).

masklinn
0 replies
19h47m

To them, it's a cost advantage to make these computers easier to repair.

Alternatively, it allows them to use more efficient RAM in computer lines they can't make non-repairable so they can boast of higher battery life.

babypuncher
0 replies
16h10m

They also charge a lot more for these "business-class" machines. That higher margin captures the revenue lost to DIY repairs and upgrades.

AnthonyMouse
2 replies
8h42m

They are going to lose money when people buy new RAM, rather than a whole new laptop.

You're thinking about this the wrong way around.

Suppose the user has $800 to buy a new laptop. That's enough to get one with a faster processor than they have right now or more memory, but not both. If they buy one and it's not upgradable, that's not worth it. Wait another year, save up another $200, then buy the one that has both.

Whereas if it can be upgraded, you buy the new one with the faster CPU right away and upgrade the memory in a year. Manufacturer gets your money now instead of later, meanwhile the manufacturer who didn't offer this not only doesn't sell to you in a year, they just lost your business to the competition.

petemir
1 replies
8h18m

I doubt the mass of consumers that actually matters to manufacturers' earnings understands RAM value, or whether the computer they are buying is RAM-upgradable or not.

They are going to buy the $800 laptop, either of the two, complain when it inevitably "works slower" in a couple of years (if they are lucky), and then buy a new $800 one again. I don't see the manufacturer's motivation to offer upgradable RAM.

AnthonyMouse
0 replies
8h14m

They don't have $800 to buy another one so soon. So they take the one that "works slower" to some tech who knows the deal and tells them this machine sucks because you can't upgrade it, and now they think your brand is crap (because it is), curse you for the next however many years until they have the money and then buy the next one from someone else.

rock_artist
0 replies
6h47m

Unlike Apple, which is only in indirect competition on computer hardware, PC makers compete head to head: if Lenovo starts doing it, then it's a marketing point, and Asus, HP, and Dell would try to match it.

So it's a chicken-and-egg situation: if it turns out to be important to consumers, it might end up with everyone catching up.

makeitdouble
0 replies
18h52m

I'd see two angles:

- the manufacturer themselves benefit from easier-to-repair machines. If Dell can replace the RAM and send back the laptop in a matter of minutes, instead of replacing the whole motherboard and having it salvaged somewhere else, it's a clear win.

- prosumers will be willing to invest more in a laptop that has a better chance of surviving a few years. Right now we're all expecting parts to fail within 2 to 3 years on the higher end, and budget accordingly. You need a serious reason to buy a 3000$/€ laptop that might be dead in 2 years. Knowing it could weather a RAM failure without manufacturer repair is a plus.

bugfix
0 replies
23h6m

Even if it's just Lenovo using these new modules, I still think it's a win for the consumer (if the modules aren't crazy expensive).

7speter
0 replies
20h42m

These companies did plenty well 12+ years ago, when users could upgrade their systems' memory.

yread
4 replies
10h36m

If they soldered a decent amount, enough that you could be sure you'd never need to upgrade, it would be fine (seriously, 64GB of RAM costs like 100eur, a non-issue in a 1000eur laptop). 8 is not enough already, and 16 will soon be limiting too.

brookst
1 replies
6h18m

Is the goal to not have any computers that are limited to a single task? Tons of corporate IT purchases go to someone only using e.g. Word all day. Do we really care if they are provisioned with “enough” memory for you or me?

pathartl
0 replies
3h14m

The baseline 14" MacBook Pro that costs $1600 has 8GB of shared RAM. That's not enough. I don't believe OP is talking about machines better suited for your task, machines in the $1k range.

orev
0 replies
2h58m

No matter how much the specs increase, developers find a way to use it all up. This approach would just accelerate that process.

nuancebydefault
0 replies
6h12m

10 percent is not negligible. Also, 64GB is a lot _today_ but most probably not 5 years from now. The alternative of buying a new laptop feels like a big waste.

tombert
4 replies
21h30m

Yeah, I was actually surprised to learn there was a reason other than "Apple wants you to buy a new Macbook or overspec your current one". It's annoying, but at least there's a plausible reason to why they do it.

klausa
2 replies
15h56m

Apple's RAM is not soldered to the _motherboard_, it's part of the SoC package.

Vogtinator
1 replies
11h26m

Only recently. It started out as soldered to the main board.

brookst
0 replies
6h15m

No, it started out as chips in sockets. I (dimly) remember upgrading my II+, I think from 32KB to 48KB?

A lot has changed.

seanp2k2
0 replies
18h47m

"...and they charge 4x what the retail of premium RAM would otherwise be per GB"

do storage next.

OJFord
4 replies
23h20m

I didn't find that a particularly complete explanation - and the slot can't be closer to the CPU because? - I think it must be more about parasitic properties of the card edge connector on DIMMs being problematic at lower voltage (and higher frequencies) or something. Note the solution is a ball grid connection and the whole thing's shielded.

I suppose in fairness and to the explanation it does give, the other thing that footprint allows is a shorter path for the pins that would otherwise be near the ends of the daughter board (e.g. on a DIMM), since they can all go roughly straight across (on multiple layers) instead of a longer diagonal according to how far off centre they are. But even if that's it, that's what I mean by it seeming incomplete. :)

Tuna-Fish
1 replies
20h5m

and the slot can't be closer to the CPU because?

All the traces going into the slot need to be length-matched to obscene precision, and the physical width of the slot and the room required by the "wiggles" made in the middle traces to length-match them restrict how close you can put the slot. Most modern boards are designed to place it as close as possible.

LPCAMM2 fixes this by having a lot of the length-matching done in the connector.
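
To put rough numbers on "obscene precision", a sketch with assumed values (the 5% skew budget and the FR4 delay figure are illustrative, not from any spec):

  # How much trace-length mismatch a 7.5 GT/s LPDDR5X pin might tolerate
  unit_interval_ps = 1e12 / 7.5e9            # one bit time: ~133 ps
  skew_budget_ps = 0.05 * unit_interval_ps   # allow, say, 5% of the bit time
  fr4_delay_ps_per_mm = 6.7                  # rough propagation delay in FR4

  print(skew_budget_ps / fr4_delay_ps_per_mm)  # ~1 mm of allowed mismatch
  # ...across dozens of signals, hence the "wiggles" eating board space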

ansible
0 replies
6h48m

Generally speaking, layout for modern DRAM (LPDDRx, etc.) is a giant pain. Trace width, differential trace length matching, spacing, number of vias, and more.

And all this is needed even though the DRAM signaling standard has extensive measurement and analysis of the traces built right into the hardware of the DRAM and the memory controller on the processor. They negotiate the speed and latency at runtime.

Giant pain.

throwaway48476
0 replies
19h47m

It competes with space for VRMs.

smolder
0 replies
21h0m

Yeah, you can only make the furthest RAM chip on a DIMM be so close to the CPU given the form factor, and the other traces need to match that length. Distance is critical, and edge connectors sure don't help.

kjkjadksj
1 replies
20h51m

They can have their technical fig leaf to hide behind, but in practice, how many watts are we really saving between LPDDR5 and DDR5? Is it worth the e-waste tradeoff of a laptop we can't modularly upgrade to meet our needs? I would guess not.

masklinn
0 replies
19h48m

how many watts are we really saving between LPDDR5 and DDR5?

From what I gathered, it's around a watt per stick when idling (which is when it's most critical): the sources I found seem to indicate that DDR5 always runs at 1.1V (or more, but probably not in laptops), while LPDDR5 can be downvolted. That's an extra 10% idle power consumption per stick.
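
As a minimal sketch of why the voltage matters (assumed rails, and roughly C x V^2 x f switching-power scaling; not from any datasheet):

  # I/O switching power scales with the square of the supply voltage
  ddr5_vddq, lpddr5_vddq = 1.1, 0.5  # volts, assumed
  print((lpddr5_vddq / ddr5_vddq) ** 2)  # ~0.21 -> roughly 5x less I/O power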

klysm
0 replies
18h55m

I didn’t really appreciate the insanity of the electrical engineering involved in high frequency stuff till I tried to design some PCBs. A simplistic mental model of wires and interconnects rapidly falls apart as frequencies increase

farmdve
23 replies
1d2h

Remember that Haswell laptops were the last to feature socketed CPUs.

RAM is nice to upgrade, for sure. As well as an SSD, but CPUs are still a must. I would even suggest upgradeable GPUs but I don't think the money is there for the manufacturers. Why allow you to upgrade when you can buy a whole new laptop?

zamadatix
7 replies
1d1h

I'm not sure I really get much value out of a socketed CPU, particularly in a laptop, vs something like a swappable MB+CPU combo where the CPU is not socketed.

RAM/Storage are great upgrades because 5 years from now you can pop in 4x the capacity at a bargain since it's the "old slow type". CPUs don't really get the same growth in a socket's lifespan.

farmdve
4 replies
1d

As I said in the comment above, it makes perfect sense. In 2014 we purchased a dual-core Haswell. Almost a decade later I revived the laptop by installing more RAM, an SSD, and the best possible quad-core CPU for that laptop. The gains in processing power were massive and made the laptop usable again.

zamadatix
3 replies
1d

I'm sure it's all subjective (e.g. I'm sure someone here even considers the original dual core Haswell more than fine without upgrade in 2024) but going from a dual core Haswell to a quad core Haswell (or even a generation or two beyond, had it been supported) as an upgrade a decade after the fact just doesn't seem worth it to me.

The RAM/SSD sure - a 2 TB consumer SSD wasn't even a possible thing to buy until a year after that laptop would have come out and you can get that for <$100 new now. It won't be the highest performing modern drive but it'll still max out the bus and be many times larger than the original drive. Swap equipment 3 years from now and that's also still a great usable drive rather than a museum piece. Upgrading to a CPU that you could have gotten around the time the laptop came out? Sure, it has twice as many cores... but it still has pretty bad multi core performance and a god awful perf/wattage ratio to be investing new money on a laptop for. It's also a bit of a dead end, in 3 years you'll now have 2 CPUs so ancient you can't really do much with them.

pavon
1 replies
23h7m

This matches my experience. Every PC I've built over the last 30 years has benefited from memory and storage upgrades through its life, and I've upgraded the GPU a few times. However, every time I've looked at upgrading to another CPU with the same socket, it is either not a big enough step up, or too much of a power hog relative to the midrange CPU I originally built with. The only time I've replaced CPUs is when I've fried them :)

seanp2k2
0 replies
18h24m

Yup, so I've adopted a strategy for my past few desktop builds like this:

  - Every time a new ToTL GPU comes out for a new family, buy it at retail price as soon as it launches (so, the first-available ToTL models that were big gains in perf: GTX 1080 Ti, RTX 2080 Ti, RTX 3090, RTX 4090)

  - Every other release cycle, upgrade CPU to the ToTL consumer chip (eg on a 12900KS right now, HEDT like ThreadRipper is super expensive and not usually better for gaming or normal dev stuff). I was with Ryzen since 1800x -> 3950x -> 5950x but Intel is better for the particular game I play 90% of the time.

  - Every time you upgrade, sell the stuff you've upgraded ASAP. If you do this right and never pay above MSRP for parts, you can usually keep running very high-end hardware for minimal TCO.

  - Buy a great case, ToTL >1000w PSU (Seasonic or be quiet!), and ToTL cooling system (currently on half a dozen 140mm Noctua fans and a Corsair 420mm AIO). This should last at least 3 generations of upgrading the other stuff.

  - Storage moves more slowly than the rest, and I've had cycles where I've re-used RAM as well, so again here go for the good stuff to maximize perf, but older SSDs work great for home servers or whatever else.

  - Monitor and other peripherals are outside of the scope of this but should hopefully last at least 3 upgrade generations. I bit when OLED TVs supported 4K 120hz G-Sync, so I've got a 55" LG G1 that I'm still quite happy with and not wanting to immediately upgrade, though I do wish they made it in a 42" size, and 16:10 would be just perfect.

farmdve
0 replies
5h20m

Maybe it is subjective. For me it made perfect sense. I could not afford a new laptop but could afford rejuvenating an old one.

immibis
1 replies
1d

Socket AM4 had a really good run. Maybe we just have to pressure manufacturers to make old-socket variations of modern processors.

The technical differences between sockets aren't usually huge. Upgrade the memory standard here, add or remove PCIe lanes there. Using new cores with an older memory controller may or may not be doable, but it's quite simple to not connect all the PCIe lanes the die supports.

seanp2k2
0 replies
18h23m

But then what excuse would you have to throw another $500 at Asus for their latest board that, while being the best chance the platform has, still feels like it runs a beta BIOS for the first 9 months of ownership?

leduyquang753
6 replies
1d1h

The Framework Laptop 16 features a replaceable GPU.

farmdve
3 replies
1d1h

These are very obscure, or perhaps I mean to say niche, laptop manufacturers. We need this standard for all of them: HP, Lenovo, Acer, etc.

nwah1
2 replies
1d

Framework open sources most of their schematics, if I understand correctly. So it should be possible for others to use the same standard, if they wanted to. (they don't want to)

Dylan16807
0 replies
14h14m

The form factor isn't great for being a vendor-neutral thing.

If we can convince the companies to actually try for compatibility, then a revival of MXM is probably a significantly better option.

freedomben
0 replies
19h58m

I'm writing this from my Framework 16 with GPU and it is the best laptop I've ever known. It's heavy and big and not the most portable, but I knew that would be the case going into it and I have no regrets

FloatArtifact
0 replies
22h48m

The Framework Laptop 16 features a replaceable GPU.

In a way I don't mind having non-replaceable RAM as an option in the Framework ecosystem, simply because the motherboard itself is modular and needs to be upgraded for the CPU anyway. At that point, though, I would prefer integrated RAM on the CPU/GPU package.

Night_Thastus
4 replies
1d

On a laptop it's not very practical.

Because you can't swap the motherboard, your options for CPUs are going to be quite limited. Generally, only higher-tier CPUs of that same generation - which draw more power and require more cooling.

Generally a laptop is designed to provide a specific budget of power to the CPU and has a limited amount of cooling.

Even if you could swap out the CPU, it wouldn't work properly if the laptop couldn't provide the necessary power or cooling.

farmdve
2 replies
1d

I can't say I agree. Back in 2014 a laptop was purchased with a dual-core Haswell CPU. 8 years later I revived the laptop by upgrading to almost the best possible CPU for it, a 4-core 8-thread part (or 4-core 4-thread, I am unsure which of these it was), and the speed boost was massive. This is how you keep old tech alive.

And the good thing about mobile CPUs is that they have almost the same TDP across the various dual-to-quad-core versions (or whatever is the norm today).

Rohansi
1 replies
20h26m

How old was the new CPU though? Probably the same or similar generation to what it originally came with since the socket needs to be the same.

IMO the switch to an SSD would have been the biggest boost.

farmdve
0 replies
3h28m

Same gen but with 2 more cores + Hyperthreading

yencabulator
0 replies
21h7m

On a laptop it's not very practical.

Because you can't swap the motherboard,

https://frame.work/ has entered the chat.

sojuz151
0 replies
1d1h

I would say it would make the most sense to have the entire RAM+CPU+GPU assembly be replaceable. Just have some standard form factors and connectors for the external I/O.

This way, you could keep power consumption low and be able to upgrade the CPU to a new generation.

seanp2k2
0 replies
18h36m

They've done upgradeable laptop GPUs before with MXM: https://en.wikipedia.org/wiki/Mobile_PCI_Express_Module

Looks like the best card they have out with MXM right now is a Quadro RTX 5000 Mobile which seem to be going for ~$1000 on eBay.

immibis
0 replies
1d

Laptops have always been trading size for upgradeability and other factors, and soldering everything is the way to make them tiny. If you ask me they've gotten too extreme in size. The first laptops were way too bulky, but they hit a sweet spot around 2005-2010, being just thick enough to hold all those D-Sub connectors (VGA, serial, etc).

And soldering stuff to the board is the default way to make something when upgradeability isn't a feature.

dvh
11 replies
1d2h

What's wrong with DIMM?

magicalhippo
4 replies
1d

The physical size of the socket and having the connections on the edge means you're forced to have much longer traces. Longer traces means slower signalling and more power loss due to higher resistance and parasitics.

This[1] Anandtech article from last year has a better look at how the LPCAMM module works. Especially note how the connectors are now densely packed directly under the memory chips, significantly reducing the trace length needed. Not just on the memory module itself but also on the motherboard due to the more compact memory module. It also allows for more pins to be connected, thus higher bandwidth (more bits per cycle).

[1]: https://www.anandtech.com/show/21069/modular-lpddr-becomes-a...

kjkjadksj
3 replies
20h50m

I'd wager that for most consumers capacity is more important than bandwidth, and the power losses are going to be small compared to the rest of the stack.

magicalhippo
1 replies
13h35m

power losses are going to be small compared to the rest of the stack

While certainly not the largest losses, they do not appear insignificant. In LPDDR4 they introduced[1] a new low-voltage signalling scheme, which I doubt they could have gotten working with SODIMMs due to the extra parasitics.

If you look at this[2] presentation you can see that at 3200 MT/s a DDR4 SODIMM would consume around 2 x 16 x 4 lines x 6.5 mW/Gbps x 3.2 Gbps ≈ 2.6W for signalling going full tilt. Thanks to the new signalling, LPDDR4 reduces this by 40% to around 1.6W.

Compared to a low-power CPU with a TDP of 10W or less, a full 1W reduction per SODIMM just due to signalling isn't insignificant.

To further put it into perspective, the recent Lenovo ThinkPad X1[3] uses around 4.15W average during normal usage, and that includes the screen.

Obviously the memory isn't going full tilt at normal load, but say an average of 0.25W x 2 sticks would reduce the X1's battery lifetime by 10%.

edit: yes, I'm aware the presentation is about LPDDR4 while the X1 uses LPDDR5; just trying to add context using available sources.

[1]: https://www.jedec.org/news/pressreleases/jedec-releases-lpdd...

[2]: https://www.jedec.org/sites/default/files/JY_Choi_Mobile_For...

[3]: https://www.tomshardware.com/reviews/lenovo-thinkpad-x1-carb...
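
Restating that back-of-envelope with explicit units, reading the 6.5 figure as mW per Gbps per signal line (~6.5 pJ/bit), which is the interpretation that makes the arithmetic work out; the 0.5W delta in the last step is an assumption:

  # DDR4 SODIMM signalling power at 3.2 GT/s, per the estimate above
  lines = 2 * 16 * 4                 # 128 active signal lines
  mw_per_gbps_per_line = 6.5         # ~6.5 pJ/bit, from the linked slides
  ddr4_w = lines * mw_per_gbps_per_line * 3.2 / 1000
  print(ddr4_w)                      # ~2.7 W going full tilt
  print(ddr4_w * 0.6)                # LPDDR4's 40% saving: ~1.6 W

  # Battery impact on the X1 example: 4.15 W average, +0.5 W assumed extra
  print(4.15 / (4.15 + 0.5))         # ~0.89 -> about 10% less battery life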

CoolCold
0 replies
3h28m

useful, thank you!

bmicraft
0 replies
20h23m

Bandwidth translates directly into better (iGPU) performance.

rangerelf
0 replies
1d2h

There's nothing _wrong_ with it, it performs according to spec, but it has limitations: trace length, power requirements, signal limitations, heat, etc.

mmastrac
0 replies
1d2h

The size, the sockets, the heat distribution, etc, etc, etc.

linsomniac
0 replies
1d2h

It requires too much power, according to the article. This allows "LP" (low-power) parts to be removable; they normally have to be soldered on the board close to the CPU because of the low voltage tolerances.

armarr
0 replies
1d2h

Larger footprint, taller, longer traces and signal degradation in the connectors.

adgjlsfhk1
0 replies
1d1h

One of the biggest problems is that edge connections don't give you enough density. Edge connections are great for servers where you stack 16 channels next to each other, but in a laptop form factor your capacity is already limited, so you can get more wires coming out of the RAM by connecting to the face rather than the edge.
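
A toy comparison of the two geometries (all pitches and dimensions made up for illustration; real connectors differ):

  # Pins along an edge grow with length; pads under the face grow with area
  edge_pins = 2 * int(67 / 0.5)              # ~67 mm edge, 0.5 mm pitch, 2 sides
  cols, rows = int(30 / 0.5), int(12 / 0.5)  # 30 x 12 mm pad field, 0.5 mm grid
  print(edge_pins, cols * rows)              # 268 vs 1440: the grid wins easily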

0x457
0 replies
1d

There is literally an entire section explaining why LPDDR needs to be soldered down as close as possible to the memory controller.

ThinkBeat
10 replies
1d1h

Meanwhile Apple bakes the RAM, CPU, and GPU all into the same "chip". Good luck with that.

colinng
6 replies
1d

Don’t forget - they solder in the flash too even though there is no technical reason to do so.

Unless “impossibly fat profit margin” is a technical requirement.

mschuster91
5 replies
23h48m

Don’t forget - they solder in the flash too even though there is no technical reason to do so.

There is: Apple uses flash memory as swap to get away with low RAM specs, and the latency and speed required for that purpose all but necessitate putting the flash memory directly next to the SoC.

wmf
4 replies
23h36m

This is not really true; Apple's SSDs are no faster than off-the-shelf premium NVMe SSDs.

Rohansi
2 replies
20h15m

Yeah, but some people need to justify their $1,800 USD purchase of a laptop that comes with only 8 GB of RAM. Even though most laptops manufactured today also come with NVMe flash storage (PCIe directly connected to the CPU, usually), which is used by all operating systems as swap.

mschuster91
1 replies
8h28m

NVMe is by no means always directly connected to the CPU; usually it's connected through at least one PCIe switch.

Rohansi
0 replies
5h19m

It's harder to confirm for laptops, but you can refer to motherboard manuals to see if any of your PCIe-related slots go through a switch or not. For example, my current PC has a PCIe x16 slot, an x1 slot, and two M.2 NVMe slots. It says everything is integrated into the CPU except the x1 slot, which goes through the motherboard chipset. I don't see why any laptop would make NVMe go through a PCIe switch unless the CPU doesn't provide enough lanes to support everything supported by the motherboard. Even at the lowest end, a dual-core Intel Core i3-10110U (laptop processor from 2019) has 16 lanes from the CPU, which could support at least one NVMe without going through a switch.

wtallis
0 replies
20h26m

And the latency of flash memory is several orders of magnitude higher than even the slowest interconnect used for internal SSDs.

0x457
2 replies
23h38m

Meanwhile, Apple ships machines with a 1024bit wide memory bus, while this solution offers just 128 bits per "stick".

Dylan16807
1 replies
13h49m

Compared to how big the CPU package is on those machines, 4 of these sticks on each side of the motherboard should fit acceptably.

And you'd be able to have a lot more than 192GB.

mmastrac
8 replies
1d2h

Ugh, finally. And it's not just a repurposed desktop memory standard either! The overall space requirements look to be similar to the BGA that you'd normally solder on (perhaps 2-3x as thick?). I'm sure they can reduce that overhead going forward.

I love the disclosure at the bottom:

Full Disclosure: iFixit has prior business relationships with both Micron and Lenovo, and we are hopelessly biased in favor of repairable products.

Aurornis
4 replies
23h58m

Ugh, finally.

FYI, the '2' at the end is because this isn't the first time this has been done. :)

LPCAMM spec has been out for a while. LPCAMM2 is the spec for next-generation parts.

Don't expect either to become mainstream. It's relatively more expensive and space-consuming to build an LPCAMM motherboard versus dropping the RAM chips directly on to the motherboard.

nrp
1 replies
22h43m

My recollection of this is that LPCAMM was a proposal from Dell that they put into the JEDEC standardization process, and LPCAMM2 is the resulting standard, named that way to avoid confusion with the non-standard LPCAMM that Dell trialed on a small number of commercial systems.

Tuna-Fish
0 replies
4h54m

Almost. The Dell proposal is called CAMM, which was slightly modified during the JEDEC process and standardized as CAMM2, which is then combined with the memory type the same way DIMM was; for example, LPDDR5X CAMM2 or DDR5 CAMM2. LPCAMM2 is not a name used in any JEDEC standard or even referred to anywhere on their site, but it seems to be used by both the memory manufacturers and the users because it's less of a mouthful, and they feel there needs to be something to distinguish LPDDR5 CAMM2 from DDR5 CAMM2 because they are not electrically compatible.

audunw
1 replies
8h56m

Not to mention putting the RAM directly on a System-in-Package chip like Apple does now. That's going to be unbeatable in terms of space and possibly have an edge when it comes to power consumption too. I wouldn't be surprised if future standards will require on-package RAM.

I kind of wish we could establish a new level in the memory hierarchy. Like, just make a slot where you can add slower more power hungry DDR RAM that acts as a big cache for the NVM storage, or that the OS can offload some of the stuff in main memory if it's not used much. It could be unpopulated in base models, and then you can buy an upgrade to stick in there to get some extra performance later if needed.
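
The usual way to reason about such a tier is average access time; a minimal sketch with assumed latencies (the 100/250 ns figures are illustrative):

  # Fast on-package RAM backed by a slower socketed tier
  def avg_ns(hit_rate, fast_ns=100, slow_ns=250):
      return hit_rate * fast_ns + (1 - hit_rate) * slow_ns

  print(avg_ns(0.95))  # ~107.5 ns: a rarely-touched slow tier costs little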

burutthrow1234
0 replies
5h44m

This is kind of what Optane was in some incarnations (it's really terrible branding that conflates multiple technologies).

cjk2
2 replies
1d2h

Yeah, they even gloss over Lenovo's crappy soldered-to-the-motherboard USB-C connectors, which are always the weak point on modern ThinkPads. Well, that and Digital River (Lenovo's distributor) carrying absolutely no spare parts at all for any Lenovos in Europe; if they do, they only rarely turn up, so you can't replace any of the replaceable bits because you can't get any.

sspiff
0 replies
21h24m

Digital River is shit at everything. From spare parts, to delivery and tracking, to customer communications, to warranty claims. Every single interaction with them is a nightmare. It is the single reason I prefer to buy Lenovo from resellers rather than directly.

oneplane
5 replies
1d

On the other hand, with a reflow station everything becomes modular and repairable.

I do hope that more widespread usage of compression attachment gives us some development in that area where projects promising modular devices failed (remember those 'modular' phone concepts? available physical interconnects were one of the failures...). Sockets for BGAs have existed for a while but were not really end-user friendly (not that LGA or PGA are that amazing), so maybe my hope is misplaced and many-contact connections will always be worse than direct attachment (be it PCB or SiP/SoC/CPU shared substrate).

jcotton42
2 replies
1d

On the other hand, with a reflow station everything becomes modular and repairable.

Not for the average person.

redeeman
1 replies
20h7m

True, but can the average person replace the inner tube on a bicycle wheel? :)

pezezin
0 replies
17h50m

Yes? I did it many, many times as a kid, it is not that difficult.

zokier
0 replies
1d

On the other hand, with a reflow station everything becomes modular and repairable.

Until you hit custom, undocumented, unobtainium proprietary chips. Good luck repairing anything with those.

RetroTechie
0 replies
23h19m

maybe my hope is misplaced and many-contact connections will always be worse than direct attachment

As much as I like socketed / user-replaceable parts, fact is that soldering down a BGA is a very reliable way to make those many connections.

On devices like smartphones & tablets RAM would hardly ever be upgraded even if possible. On laptops most users don't bother. On Raspberry Pi style SBCs it's not doable.

Desktops, workstations & servers are the exception here.

Basically the high-speed parts of a system need to be as close together as physically possible. Especially if low power consumption is important.

Want easy upgrades? Then compute module + carrier board setups might be the way to go. Keep your I/O connectors / display / SSD etc, swap out the CPU/GPU/RAM part.

sharpshadow
3 replies
21h39m

Is it possible to have both LPDDR and LPCAMM2 in use at the same time?

wtallis
2 replies
21h35m

LPCAMM2 is a connector and form factor standard for modules carrying LPDDR type memory chips.

masklinn
1 replies
19h41m

I assume they mean having some memory soldered and an expansion slot.

I've seen laptops like that, with e.g. 8GB soldered and a sodimm slot.

sharpshadow
0 replies
5h39m

That would be nice, since there's a rise of CPU+RAM (and even GPU, I think) all on one chip. It would be interesting to be able to upgrade RAM on machines like that.

doublextremevil
3 replies
1d2h

Can't wait to see this in a Framework laptop.

OJFord
2 replies
23h24m

For the presumed improvement to battery life? Because Fw already uses SO-DIMMs.

wmf
0 replies
22h40m

It's also faster (7500 vs. 5600).

universa1
0 replies
21h26m

That's also nice, but the memory speed is also higher, DDR5-7266 vs 5600 IIRC. The resulting higher bandwidth translates more or less directly into more performance for the iGPU.

zxcvgm
2 replies
1d1h

I remember when Dell was the first to introduce [1] these Compression Attached Memory Modules in their laptops in an attempt to move away from soldered-on RAM. Glad this is now being more widely adopted and standardized.

[1] https://www.pcworld.com/article/693366/dell-defends-its-cont...

AlexDragusin
1 replies
23h29m

The first iteration, known as CAMM, was an in-house project at Dell, with the first DDR5-equipped CAMM modules installed in Dell Precision 7000 series laptops. And thankfully, after doing the initial R&D to make the tech a reality, Dell didn’t gatekeep. Their engineers believed that the project had such a good chance at becoming the next widespread memory standard that instead of keeping it proprietary, they went the other way and opened it up for standardization.

jimbobthrowawy
0 replies
4h49m

Trying to make it a standard is one of the least surprising things about it. You want accessories/components in your product to be as commodity as possible to drive costs down.

sharpshadow
2 replies
5h39m

Would it be possible to have LPCAMM2 as an external device through Thunderbolt?

noodlesUK
1 replies
5h33m

No, RAM is not something that is exposed on the PCIe bus (which is what Thunderbolt is based on). RAM has a different protocol (DDR5 in this case) and, as it says in the article, is very sensitive to the distance between the CPU and the RAM. External RAM isn't really something that is viable in the modern era of computers, as far as I know.
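
Rough numbers on why (all assumed, for illustration only):

  # Latency stack-up for hypothetical "RAM over Thunderbolt"
  dram_ns = 100          # typical cache-miss-to-DRAM latency today
  fabric_rtt_ns = 600    # rough PCIe-style controller/protocol round trip
  cable_ns = 2 * 5 * 1   # ~5 ns/m in copper, 1 m cable, both directions

  print((fabric_rtt_ns + cable_ns) / dram_ns)  # ~6x slower on every miss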

simcop2387
0 replies
3h58m

Surprisingly, this is starting to show up in the server market lately with a new protocol/tech called CXL. The latency issue is still there over distance, but it'll let more remote-memory-type stuff start to happen. I doubt you'll ever do more than a few meters (i.e. within the same rack), but it'll likely end up getting used by so-called "hyperscaler" companies to more flexibly allocate resources, similar to how they're doing PCIe over Ethernet with DPU devices right now. It's unlikely this will end up at the consumer level anytime even medium-term, because that kind of flexibility is still just so niche, but we might eventually see some CXL connectivity for things like GPUs or other accelerators to have more memory or share it better between host and accelerator.

EDIT: article about a tech demo of it on a laptop actually, hadn't seen this before: https://www.techradar.com/pro/even-a-laptop-can-run-ram-exte...

zokier
1 replies
23h27m

I wonder if this will bring a new widely available high-performance connector to the wider market. SO-DIMM connectors have been occasionally repurposed for other uses, most notably by the Raspberry Pi Compute Modules 1-3 among other similar SOM/COM boards. The RPi CM4 switched to 2x 100-pin mezzanine connectors; maybe some future module could use CAMM connectors, I'd imagine they are capable enough.

wmf
0 replies
22h43m

The compression connector looks flimsier than a mezzanine so it should probably be a last resort for multi-gigahertz single-ended signaling.

quailfarmer
1 replies
10h29m

I'm sure this will find use in Business-Class "Mobile workstations", but having integrated DDR4 in my own hardware, I have a hard time seeing this as the mainstream path forward for mobile computing.

There's lots of value in tight integration. Improved signal integrity (ie, faster), improved reliability, better thermal flow, smaller packaging, and lower cost. Do I really want to compromise all of those things just to make RAM upgrades easier?

And how many times do I need to upgrade the RAM in a laptop, really? Twice? Why make all those sacrifices to use a connector, instead of just reworking the DRAM parts? A robotic reflow machine is not so complex that a small repair shop couldn't afford one, which is what you see if you go to parts of the world where repair is taken seriously. Why do I need to be able to do it at home? I can't re-machine my engine at home. It's the most advanced nanotechnology humanity can produce; why is a $5k repair setup unreasonable?

This is not to mention the direction things are really going, DRAM on Package/Die. The signaling speed and bus widths possible with co-packaged memory and HBM are impossible to avoid, and I'm not going to complain about the fact that I can't upgrade the RAM separately from the CPU, any more than I complain about not being able to upgrade my L2 cache today. The memory is part of the compute, in the same way the GPU memory is part of the GPU.

I hope players like iFixit and Framework aren't too stubborn in opposing the tight integration of modern platforms. "Repairable" doesn't need to mean the same thing it did 10 years ago, and there are so many repairability battles that are actually worth fighting, that being stubborn about the SOTA isn't productive.

Timshel
0 replies
10h8m

I'm sure this will find use in Business-Class "Mobile workstations", but having integrated DDR4 in my own hardware, I have a hard time seeing this as the mainstream path forward for mobile computing.

I don't know, I would say the reverse: workstations might need the performance of DRAM on package/die, but I don't believe that's the case for the mainstream user.

A robotic reflow machine

Same: maybe viable for servicing enterprise customers, but probably way too expensive for the mainstream.

I certainly hope that players continue to oppose tight integration, and I'll try to support them. I value the ability for anyone to swap RAM and disks to easily upgrade or repair their device more than an increase in performance or even battery life.

I recently cobbled together a computer for a friend's child with components from three different computers; any additional cost would have made the exercise worthless.

kristianp
1 replies
18h57m

So this is going into the ThinkPad P1 (Gen 7), which is too expensive and power hungry for my use cases. How long until it filters down into less expensive SKUs? Are we talking next year's generation?

iFixit also links to a repair guide:

https://www.ifixit.com/Device/Lenovo_ThinkPad_P1_Gen_7

CoolCold
0 replies
3h30m

My personal understanding: for ThinkPads, it's next year. I guess Lenovo is doing real-life tests with the P1 here, gathering feedback before addressing other families like the T14/T14s.

cryptonector
1 replies
20h50m

Yes please. Also, can we haz ECC?

seanp2k2
0 replies
18h21m

Why are you trying to bankrupt Intel??? Without being able to charge 5x as much for Xeons for ECC support, why would anyone ever pony up for one?

userbinator
0 replies
16h0m

A bit of a disingenuous argument intended to sell this as more revolutionary than it really is --- BGA sockets already exist for LPDDR, as well as for other things like CPUs/SoCs, but they're very expensive due to low volumes. If the volume went up, they'd go down in price significantly, just like LGA sockets for CPUs have.

https://www.ironwoodelectronics.com/products/lpddr/

snvzz
0 replies
15h56m

I see no mention of ECC.

It worries me.

p0w3n3d
0 replies
23h56m

Apple hates it

Tran84jfj
0 replies
23h11m

I would welcome something like the Raspberry Pi Compute Module, which contains CPU+RAM and communicates with other parts via PCIe. That kind of standard could last decades!

Yet another standard for memory will just fail.

PTOB
0 replies
17h17m

The current Dell version of this: an upgrade to 64GB is $1200. Found this out the hard way when trying to get my engineering team what I thought would be a $200-per-machine upgrade from their stock 32GB Precision laptop workstations.

Dwedit
0 replies
20h59m

Can it come loose and then suddenly not have all pins attached properly? That's unlikely to happen with SODIMM slots, but I've seen plenty of cases where screw receptacles fail.