"With these improvements to the CPU and GPU, M4 maintains Apple silicon’s industry-leading performance per watt. M4 can deliver the same performance as M2 using just half the power. And compared with the latest PC chip in a thin and light laptop, M4 can deliver the same performance using just a fourth of the power."
That's an incredible improvement in just a few years. I wonder how much of that is Apple engineering and how much is TSMC improving their 3nm process.
Apple usually massively exaggerates their tech spec comparison - is it REALLY half the power use of all times (so we'll get double the battery life) or is it half the power use in some scenarios (so we'll get like... 15% more battery life total) ?
IME Apple has always been the most honest when it makes performance claims. Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours. All the while, Dell or HP would claim 19 hours and you'd be lucky to get 2, e.g. [1].
As for CPU power use, of course that doesn't translate into doubling battery life because there are other components. And yes, it seems the OLED display uses more power so, all in all, battery life seems to be about the same.
I'm interested to see an M3 vs M4 performance comparison in the real world. IIRC the M3 was a questionable upgrade. Some things were better but some weren't.
Overall the M-series SoCs have been an excellent product however.
[1]: https://www.laptopmag.com/features/laptop-battery-life-claim...
EDIT: added link
That's just laughable, sorry. No one is particularly honest in marketing copy, but Apple is for sure one of the worst, historically. Even more so when you go back to the PPC days. I still remember Jobs on stage talking about how the G4 was the fastest CPU in the world when I knew damn well that it was half the speed of the P3 on my desk.
Have any examples from the past decade? Especially given how exaggerated the claims from the PC and Android brands they compete with are?
Apple recently claimed that RAM in their Macbooks is equivalent to 2x the RAM in any other machine, in defense of the 8GB starting point.
In my experience, I can confirm that this is just not true. The secret is heavy reliance on swap. It's still the case that 1GB = 1GB.
You are entirely (100%) wrong, but, sadly, NDA...
Memory compression isn't magic and isn't exclusive to macOS.
I suggest you go and look HOW it is done in apple silicon macs, and then think long and hard why this might make a huge difference. Maybe Asahi Linux guys can explain it to you ;)
I understand that it can make a difference to performance (which is already baked into the benchmarks we look at), I don't see how it can make a difference to compression ratios, if anything in similar implementations (ex: console APUs) it tends to lead to worse compression ratios.
If there's any publicly available data to the contrary I'd love to read it. Anecdotally I haven't seen a significant difference between zswap on Linux and macOS memory compression in terms of compression ratios, and on the workloads I've tested zswap tends to be faster than no memory compression on x86 for many core machines.
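For anyone who wants to eyeball this on their own machine, here's roughly how I've been checking (assuming zswap with debugfs mounted on the Linux side; counters and paths can differ by kernel version):

    # macOS: vm_stat reports the compressor's bookkeeping;
    # "Pages stored in compressor" / "Pages occupied by compressor" ~= compression ratio
    vm_stat | grep -i compressor

    # Linux with zswap: the equivalent counters live in debugfs (needs root)
    sudo grep . /sys/kernel/debug/zswap/stored_pages /sys/kernel/debug/zswap/pool_total_size
    # ratio ~= stored_pages * page_size / pool_total_size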
Regardless of what you can't tell, he's absolutely right regarding Apple's claims: saying that an 8GB Mac is as good as a 16GB non-Mac is laughable.
That was never said. They said an 8GB Mac is similar to a 16GB non-Mac.
My entry-level 8GB M1 Macbook Air beats my 64GB 10-core Intel iMac in my day-to-day dev work.
I do admit the "reliance on swap" thing is speculation on my part :)
My experience is that I can still tell when the OS is unhappy when I demand more RAM than it can give. MacOS is still relatively responsive around this range, which I just attributed to super fast swapping. (I'd assume memory compression too, but I usually run into this trouble when working with large amounts of poorly-compressible data.)
In either case, I know it's frustrating when someone is confidently wrong but you can't properly correct them, so you have my apologies
How convenient :)
There is also memory compression and their insane swap speed due to the SoC memory and SSD.
Every modern operating system now does memory compression
Some of them do it better than others though.
Apple uses Magic Compression.
This is a revolution
Not sure what Windows does, but the popular method on e.g. Fedora is to set aside a chunk of RAM as a compressed swap device (zram) and swap to that. It could be more efficient the way Apple does it, by not having to partition main memory.
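If you want to see what that looks like in practice, this is roughly how you can inspect the zram swap Fedora's zram-generator sets up by default (exact sizes and algorithms depend on the release and any local overrides):

    # the compressed swap device shows up as ordinary swap
    swapon --show

    # DISKSIZE vs. DATA vs. COMPR shows how much is swapped out and the effective compression ratio
    zramctl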
Yeah, that was hilarious; my basic workload borders on the 8GB limit without even pushing it. They have fast swap, but nothing beats real RAM in the end, and considering their storage pricing is as stupid as their RAM pricing, it really makes no difference.
If you go for the base model, you are in for a bad time, 256GB with heavy swap and no dedicated GPU memory (making the 8GB even worse) is just plain stupid.
This is what the Apple fanboys don't seem to get: their base models at a somewhat affordable price are deeply incompetent, and if you start to load them up the pricing just does not make a lot of sense...
You know that the RAM in these machines is quite different from the "RAM" in a standard PC? Apple's SoC RAM is more or less part of the CPU/GPU and is super fast. And for obvious reasons it cannot be added to later.
Anyway, I manage a few M1 and M3 machines with 256/8 configs and they all run just as fast as 16GB and 32GB machines EXCEPT for workloads that need more than 8GB for a process (virtualization) or workloads that need lots of video memory (Lightroom can KILL an 8GB machine that isn't doing anything else...)
The 8GB is stupid discussion isn't "wrong" in the general case, but it is wrong for maybe 80% of users.
I got the base model M1 Air a couple of years back and whilst I don't do much gaming I do do C#, Python, Go, Rails, local Postgres, and more. I also have a (new last year) Lenovo 13th gen i7 with 16GB RAM running Windows 11 and the performance with the same load is night and day - the M1 walks all over it whilst easily lasting 10hrs+.
Note that I'm not a fanboy; I run both by choice. Also both iPhone and Android.
The Windows laptop often gets sluggish and hot. The M1 never slows down and stays cold. There's just no comparison (though the Air keyboard remains poor).
I don't much care about the technical details, and I know 8GB isn't a lot. I care about the experience and the underspecced Mac wins.
Sure, and they were widely criticized for this. Again, the assertion I was responding to is that Apple does this "laughably" more than competitors.
Is an occasional statement that they get pushback on really worse than what other brands do?
As an example from a competitor, take a look at the recent firestorm over Intel’s outlandish anti-AMD marketing:
https://wccftech.com/intel-calls-out-amd-using-old-cores-in-...
FWIW: the language upthread was that it was laughable to say Apple was the most honest. And I stand by that.
Fair point. Based on their first sentence, I mischaracterized how “laughable” was used.
Though the author also made clear in their second sentence that they think Apple is one of the worst when it comes to marketing claims, so I don’t think your characterization is totally accurate either.
If someone is claiming “‹foo› has always ‹barred›”, then I don't think it's fair to demand a 10 year cutoff on counter-evidence.
Clearly it isn’t the case that Apple has always been more honest than their competition, because there were some years before Apple was founded.
Apple marketed their PPC systems as "a supercomputer on your desk", but it was nowhere near the performance of a supercomputer of that age. Maybe similar performance to a supercomputer from the 1970's, but that was their marketing angle from the 1990's.
It's certainly fair to say that twenty years ago Apple was marketing some of its PPC systems as "the first supercomputer on a chip"[^1].
That was not the claim. Apple did not argue that the G4's performance was commensurate with the state of the art in supercomputing. (If you'll forgive me: like, fucking obviously? The entire reason they made the claim is precisely because the latest room-sized supercomputers with leapfrog performance gains were in the news very often.)
The claim was that the G4 was capable of sustained gigaflop performance, and therefore met the narrow technical definition of a supercomputer.
You'll see in the aforelinked marketing page that Apple compared the G4 chip to UC Irvine’s Aeneas Project, which in ~2000 was delivering 1.9 gigaflop performance.
This chart[^2] shows the trailing average of various subsets of supercomputers, for context.
This narrow definition is also why the machine could not be exported to many countries, which Apple leaned into.[^3]
What am I missing here? Picking perhaps the most famous supercomputer of the mid-1970s, the Cray-1,[^4] we can see performance of 160 MFLOPS, which is 160 million floating point operations per second (with an 80 MHz processor!).
The G4 was capable of delivering ~1 GFLOP performance, which is a billion floating point operations per second.
Are you perhaps thinking of a different decade?
[^1]: https://web.archive.org/web/20000510163142/http://www.apple....
[^2]: https://en.wikipedia.org/wiki/History_of_supercomputing#/med...
[^3]: https://web.archive.org/web/20020418022430/https://www.cnn.c...
[^4]: https://en.wikipedia.org/wiki/Cray-1#Performance
This is marketing we're talking about, people see "supercomputer on a chip" and they get hyped up by it. Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.
The reason they marketed it that way was to get people to part with their money. Full stop.
In the first link you added, there's a photo of a Cray supercomputer, which makes the viewer equate Apple = Supercomputer = I am a computing god if I buy this product. Apple's marketing has always been a bit shady that way.
And soon after that period Apple jumped off the PPC architecture and onto the x86 bandwagon. Gimmicks like "supercomputer on a chip" don't last long when the competition is far ahead.
I can't believe Apple is marketing their products in a way to get people to part with their money.
If I had some pearls I would be clutching them right now.
That is also not in dispute. I am disputing your specific claim that Apple somehow suggested that the G4 was of commensurate performance to a modern supercomputer, which does not seem to be true.
This is why context is important (and why I'd appreciate clarity on whether you genuinely believe a supercomputer from the 1970s was anywhere near as powerful as a G4).
In the late twentieth and early twenty-first century, megapixels were a proxy for camera quality, and megahertz were a proxy for processor performance. More MHz = more capable processor.
This created a problem for Apple, because the G4's SPECfp_95 (floating point) benchmarks crushed the Pentium III's even though the G4 ran at lower clock speeds.
PPC G4 500 MHz - 22.6
PPC G4 450 MHz - 20.4
PPC G4 400 MHz - 18.36
Pentium III 600 MHz – 15.9
For both floating point and integer benchmarks, the G3 and G4 outgunned comparable Pentium II/III processors.
You can question how this translates to real world use cases – the Photoshop filters on stage were real, but others have pointed out in this thread that it wasn't an apples-to-apples comparison vs. Wintel – but it is inarguable that the G4 had some performance advantages over Pentium at launch, and that it met the (inane) definition of a supercomputer.
Yes, marketing exists to convince people to buy one product over another. That's why companies do marketing. IMO that's a self-evidently inane thing to say in a nested discussion of microprocessor architecture on a technical forum – especially when your interlocutor is establishing the historical context you may be unaware of (judging by your comment about supercomputers from the 1970s, which I am surprised you have not addressed).
I didn't say "The reason Apple markets its computers," I said "The entire reason they made the claim [about supercomputer performance]…"
Both of us appear to know that companies do marketing, but only you appear to be confused about the specific claims Apple made – given that you proactively raised them, and got them wrong – and the historical backdrop against which they were made.
That's right. It looks like a stylized rendering of a Cray-1 to me – what do you think?
The Cray-1's compute, as measured in GFLOPS, was approximately 6.5x lower than the G4 processor.
I'm therefore not sure what your argument is: you started by claiming that Apple deliberately suggested that the G4 had comparable performance to a modern supercomputer. That isn't the case, and the page you're referring to contains imagery of a much less performant supercomputer, as well as a lot of information relating to the history of supercomputers (and a link to a Forbes article).
All companies make tradeoffs they think are right for their shareholders and customers. They accentuate the positives in marketing and gloss over the drawbacks.
Note, too, that Adobe's CEO has been duped on the page you link to. Despite your emphatic claim:
The CEO of Adobe is quoted as saying:
How is what you are doing materially different to what you accuse Apple of doing?
They did so when Intel's roadmap introduced Core Duo, which was significantly more energy-efficient than Pentium 4. I don't have benchmarks to hand, but I suspect that a PowerBook G5 would have given the Core Duo a run for its money (despite the G5 being significantly older), but only for about fifteen seconds before thermal throttling and draining the battery entirely in minutes.
From https://512pixels.net/2013/07/power-mac-g4/: the ad was based on the fact that Apple was forbidden to export the G4 to many countries due to its “supercomputer” classification by the US government.
It seems that the US government was buying too much into tech hype at the turn of the millennium. Around the same period PS2 exports were also restricted [1].
[1] https://www.latimes.com/archives/la-xpm-2000-apr-17-fi-20482...
Blaming a company TODAY for marketing from the 1990s is crazy.
Interesting, by what benchmark did you compare the G4 and the P3?
I don't have a horse in this race, Jobs lied or bent the truth all the time so it wouldn't surprise me, I'm just curious.
I remember that Apple used to wave around these SIMD benchmarks showing their PowerPC chips trouncing Intel chips. In the fine print, you'd see that the benchmark was built to use AltiVec on PowerPC, but without MMX or SSE on Intel.
Ah so the way Intel advertises their chips. Got it.
Yeah, and we rightfully criticize Intel for the same and we distrust their benchmarks
You can claim Apple is dishonest for a few reasons.
1) Graphs are often unannotated.
2) Comparisons are rarely against latest-generation products. (Their argument for that has been that they do not expect people to upgrade yearly, so it's showing the difference along their intended upgrade path.)
3) They have conflated performance with performance per watt.
However, when it comes to battery life, performance (for a task), or the specifications of their components (screens, the ability to use external displays up to 6K, port speeds, etc.), there are almost no hidden gotchas and they have tended to be trustworthy.
The first wave of M1 announcements were met with similar suspicion as you have shown here; but it was swiftly dispelled once people actually got their hands on them.
*EDIT:* Blaming a guy who's been dead for 13 years for something he said 50 years ago, and primarily, it seems, for internal use, is weird. I had to look up the context, but it seems it was more about internal motivation in the '70s than anything relating to today, especially when referring to concrete claims.
"This thing is incredible," Jobs said. "It's the first supercomputer on a chip.... We think it's going to set the industry on fire."
"The G4 chip is nearly three times faster than the fastest Pentium III"
- Steve Jobs (1999) [1]
[1] https://www.wired.com/1999/08/lavish-debut-for-apples-g4/
That's cool, but literally last millennium.
And again, the guy has been dead for the better part of this millennium.
What have they shown of any product currently on the market, especially when backed with any concrete claim, that has been proven untrue?
EDIT: After reading your article and this one: https://lowendmac.com/2006/twice-as-fast-did-apple-lie-or-ju... it looks like it was true in floating point workloads.
Indeed. Have we already forgotten about the RDF?
No, it was just always a meaningless term...
While certainly misleading, there were situations where the G4 was incredibly fast for the time. I remember being able to edit video in iMovie on a 12" G4 laptop. At that time there was no equivalent x86 machine.
Worked in an engineering lab at the time of the G4 introduction and I can attest that the G4 was a very, very fast CPU for scientific workloads.
Confirmed here: https://computer.howstuffworks.com/question299.htm (and elsewhere.)
A year later I was doing bonkers (for the time) Photoshop work on very large compressed TIFF files, and my G4 laptop running at 400MHz was more than 2x as fast as the PIIIs on my bench.
Was it faster all around? I don't know how to tell. Was Apple as honest as I am in this commentary about how it mattered what you were doing? No. Was it a CPU that was able to do some things very fast vs others? I know it was.
Didn’t he have to use two PPC procs to get the equivalent perf you’d get on a P3?
Just add them up, it’s the same number of Hertz!
But Steve that’s two procs vs one!
I think this is when Adobe was optimizing for Windows/intel and was single threaded, but Steve put out some graphs showing better perf on the Mac.
If you have to go back 20+ years for an example…
BTW I get 19 hours from my Dell XPS and Latitude. It's Linux with a custom DE and Vim as the IDE, though.
can you share more details about your setup?
Arch Linux, mitigations (Spectre and the like) off, X11, Openbox, bmpanel with only a CPU/IO indicator. Light theme everywhere. Opera in power save mode. `powertop --auto-tune` and `echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo`. Current laptop is a Latitude 7390.
Right, so you are disabling all performance features and effectively turning your CPU into a low-end, low-power SKU. Of course you’d get better battery life. It’s not the same thing though.
This is why Apple can be slightly more honest about their battery specs: they don’t have the OS working against them. Unfortunately most Dell XPSes will be running Windows, so it is still misleading to provide specs based on what the hardware could do if not sabotaged.
I get about 21 hours from mine, it's running Windows but powered off.
I guess you weren't around during the PowerPC days... Because that's a laughable statement.
I have no idea who's down voting you. They were lying through their teeth about CPU performance back then.
A PC half the price was smoking their top of the line stuff.
It's funny you say that, because this is precisely the time I started buying Macs (I got a Pismo PowerBook G3 as a gift and then bought an iBook G4). And my experience was that, for sure, if you put as much money into a PC as into a Mac you would get MUCH better performance.
What made it worth it at the time (I felt) was the software. Today I really don't think so; software has improved overall in the industry and there is not a lot that is "Mac specific" to make it a clear-cut choice.
As for the performance, I can't believe all the Apple silicon hype. Sure, it gets good battery life provided you use strictly Apple software (or software heavily optimized for it), but in mixed-workload situations it's not that impressive.
Using the M2 MacBook Pro of a friend, I figured I could get maybe 4-5 hours out of it in its best-case scenario, which is better than the 2-3 hours you would get from a PC laptop, but also not that great considering the price difference.
And when it comes to performance it is extremely unequal and very lackluster for many things. Like, there is more lag launching Activity Monitor on a 2K++ MacBook Pro than launching Task Manager on a 500 PC. This is a small, somewhat stupid example, but it does tell the overall story.
They talk a big game but in reality, their stuff isn't that performant in the real world.
And they still market games when one of their 2K laptops plays Dota 2 (a very old, relatively resource-efficient game) worse than a cheapo PC.
All I remember is tanks in the commercials.
We need more tanks in commercials.
Oh those megahertz myths! Their marketing department is pretty amazing at their spin control. This one was right up there with "it's not a bug; it's a feature" type of spin.
Okay, but your example was about battery life:
And even then, they exaggerated their claims. And your link doesn't say anything about HP or Dell claiming 19 hour battery life.
Apple has definitely exaggerated their performance claims over and over again. The Apple silicon parts are fast and low power indeed, but they've made ridiculous claims like comparing their chips to an nVidia RTX 3090 with completely misleading graphs
Even the Mac sites have admitted that the nVidia 3090 comparison was completely wrong and designed to be misleading: https://9to5mac.com/2022/03/31/m1-ultra-gpu-comparison-with-...
This is why you have to take everything they say with a huge grain of salt. Their chip may be "twice" as power efficient in some carefully chosen unique scenario that only exists in an artificial setting, but how does it fare in the real world? That's the question that matters, and you're not going to get an honest answer from Apple's marketing team.
You're right, it's not 19 hours claimed. It was even more than that.
M1 Ultra did benchmark close to 3090 in some synthetic gaming tests. The claim was not outlandish, just largely irrelevant for any reasonable purpose.
Apple does usually explain their testing methodology and they don’t cheat on benchmarks like some other companies. It’s just that the results are still marketing and should be treated as such.
Outlandish claims notwithstanding, I don’t think anyone can deny the progress they achieved with their CPU and especially GPU IP. Improving performance on complex workloads by 30–50% in a single year is very impressive.
They are pretty honest when it comes to battery life claims, they’re less honest when it comes to benchmark graphs
I don't think "less honest" covers it, and I can't believe anything their marketing says after the 3090 claims. Maybe it's true, maybe not. We'll see from the reviews. Well, assuming the reviewers weren't paid off with an "evaluation unit".
Apple is always honest but they know how to make you believe something that isn’t true.
Yes and no. They'll always be honest with the claim, but the scenario for the claimed improvement will always be chosen to make the claim as large as possible, sometimes with laughable results.
Typically something like "watch videos for 3x longer <small>when viewing 4k h265 video</small>" (which means they adapted the previous gen's silicon which could only handle h264).
Maybe for battery life, but definitely not when it comes to CPU/GPU performance. Tbf, no chip company is, but Apple is particularly egregious. Their charts assume best case multi-core performance when users rarely ever use all cores at once. They'd have you thinking it's the equivalent of a 3090 or that you get double the frames you did before when the reality is more like 10% gains.
For literal YEARS, Apple battery life claims were a running joke on how inaccurate and overinflated they were.
Yeah, the assumption seems to be that using less battery by one component means that the power will just magically go unused. As with everything else in life, as soon as something stops using a resource something else fills the vacuum to take advantage of the resource.
Controlling the OS is probably a big help there. At least, I saw lots of complaints about my zenbook model’s battery not hitting the spec. It was easy to hit or exceed it in Linux, but you have to tell it not to randomly spin up the CPU.
Quickly looking at the press release, it seems to have the same comparisons as in the video. None of Apple's comparisons today are between the M3 and M4; they are ALL between the M2 and M4. Why? It's frustrating, but today Apple replaced a product that had an M2 with a product that has an M4, and Apple always compares product to product, never component to component, when it comes to processors. So those specs look far more impressive than numbers between the M3 and M4 would have.
Didn't they do extreme cherry-picking for their tests so they could show the M1 beating a 3090 (or the M2 a 4090, I can't remember)?
Gave me quite a laugh when Apple users started to claim they'd be able to play Cyberpunk 2077 maxed out with maxed out raytracing.
I'll give you that Apple's comparisons are sometimes inscrutable. I vividly remember that one.
https://www.theverge.com/2022/3/17/22982915/apple-m1-ultra-r...
Apple was comparing the power envelope (already a complicated concept) of their GPU against a 3090. Apple wanted to show that the peak of their GPU's performance was reached with a fraction of the power of a 3090. What was terrible was that Apple was cropping their chart at the point where the 3090 was pulling ahead in pure compute by throwing more watts at the problem. So their GPU was not as powerful as a 3090, but a quick glance at the chart would completely tell you otherwise.
Ultimately we didn't see one of those charts today, just a mention about the GPU being 50% more efficient than the competition. I think those charts are beloved by Johny Srouji and no one else. They're not getting the message across.
Plenty of people on HN thought that M1 GPU is as powerful as 3090 GPU, so I think the message worked very well for Apple.
They really love those kind of comparisons - e.g. they also compared M1s against really old Intel CPUs to make the numbers look better, knowing that news headlines won't care for details.
They compared against really old intel CPUs because those were the last ones they used in their own computers! Apple likes to compare device to device, not component to component.
You say that like it's not a marketing gimmick meant to mislead and obscure facts.
It's not some virtue that causes them to do this.
It's funny because your comment is meant to mislead and obscure facts.
Apple compared against Intel to encourage their previous customers to upgrade.
There is nothing insidious about this and is in fact standard business practice.
No, they compared it because it made them look way better for naive people. They have no qualms comparing to other competition when it suits them.
Your explanation is a really baffling case of corporate white knighting.
that's honestly kind of stupid when discussing things like 'new CPU!' like this thread.
I'm not saying the M4 isn't a great platform, but holy cow the corporate tripe people gobble up.
Yes, can't remember the precise combo either, there was a solid year or two of latent misunderstandings.
I eventually made a visual showing it was the same as claiming your iPhone was 3x the speed of a Core i9: Sure, if you limit the power draw of your PC to a battery the size of a post it pad.
Similar issues when on-device LLMs happened, thankfully, quieted since then (last egregious thing I saw was stonk-related wishcasting that Apple was obviously turning its Xcode CI service into a full-blown AWS competitor that'd wipe the floor with any cloud service, given the 2x performance)
Yes, kinda annoying. But on the other hand, given that Apple releases a new chip every 12 months, we can grant them some slack here, since from AMD, Intel, or Nvidia we usually see a 2-year cadence.
There’s probably easier problems to solve in the ARM space than x86 considering the amount of money and time spent on x86.
That’s not to say that any of these problems are easy, just that there’s probably more lower hanging fruit in ARM land.
And yet they seem to be the only people picking the apparently "Low Hanging Fruit" in ARM land. We'll see about Qualcomm's Nuvia-based stuff, but that's been "nearly released" for what feels like years now, and you still can't buy one to actually test.
And don't underestimate the investment Apple made - it's likely at a similar level to the big x86 incumbents. I mean AMD's entire Zen development team cost was likely a blip on the balance sheet for Apple.
> Qualcomm's Nuvia-based stuff, but that's been "nearly released" for what feels like years now
Launching at Computex in 2 weeks, https://www.windowscentral.com/hardware/laptops/next-gen-ai-...
Good to know that it's finally seeing the light. I thought they were still in a legal dispute with Arm about Nuvia's design?
That's more bound by legal than technical reasons...
Maybe for GPUs, but for CPUs both Intel and AMD release on a yearly cadence. Even when Intel has nothing new to release, the generation is bumped.
It’s an iPad event and there were no M3 iPads.
That’s all. They’re trying to convince iPad users to upgrade.
We’ll see what they do when they get to computers later this year.
I have a Samsung Galaxy Tab S7 FE, and I can't think of any use case where I'd need more power.
I agree that iPad has more interesting software than android for use cases like video or music editing, but I don't do those on a tablet anyway.
I just can't imagine anyone upgrading their M2 iPad for this except a tiny niche that really wants the extra power.
AI on the device may be the real reason for an M4.
Previous iPads have had that for a long time. Since the A12 in 2018. The phones had it even earlier with the A11.
Sure this is faster but enough to make people care?
It may depend heavily on what they announce is in the next version of iOS/iPadOS.
The A series was good enough.
I’m vaguely considering this but entirely for the screen. The chip has been irrelevant to me for years, it’s long past the point where I don’t notice it.
The A series was definitely not good enough. It really depends on what you're using it for. Netflix and web? Sure. But any old HDR tablet that can maintain 24Hz is good enough for that.
These are 2048x2732, 120Hz displays that support 6K external displays. Gaming and art apps push them pretty hard. For the iPad user in my house, going from the 2020 non-M* iPad to a 2023 M2 iPad made a huge difference for the drawing apps. Lower latency is always better for drawing, and complex brushes (especially newer ones), selections, etc. would get fairly unusable.
For gaming, it was pretty trivial to dip well below 60Hz with a non-M* iPad, with some of the higher-demand games like Fortnite, Minecraft (high view distance), Roblox (it ain't what it used to be), etc.
But, the apps will always gravitate to the performance of the average user. A step function in performance won't show up in the apps until the adoption follows, years down the line. Not pushing the average to higher performance is how you stagnate the future software of the devices.
I like the comparison between much older hardware and brand-new hardware to highlight how far we've come.
That's ok, but why skip the previous iteration then? Isn't the M2 only two generations behind? It's not that much older. It's also a marketing blurb, not a reproducible benchmark. Why leave out comparisons with the previous iteration even when you're just hand-waving over your own data?
In this specific case, it's because iPads never got the M3. They're literally comparing it with the previous model of iPad.
There were some disingenuous comparisons throughout the presentation going back to A11 for the first Neural Engine and some comparisons to M1, but the M2 comparison actually makes sense.
I wouldn't call the comparison to A11 disingenuous, they were very clear they were talking about how far their neural engines have come, in the context of the competition just starting to put NPUs in their stuff.
I mean, they compared the new iPad Pro to an iPod Nano, that's just using your own history to make a point.
Fair point—I just get a little annoyed when the marketing speak confuses the average consumer and felt as though some of the jargon they used could trip less informed customers up.
They know that anyone who has bought an M3 is good on computers for a long while. They're targeting people who have M2 or older Macs. People who own an M3 are basically going to buy anything that comes down the pipe, because who needs an M3 over an M2 or even an M1 today?
I’m starting to worry that I’m missing out on some huge gains (M1 Air user.) But as a programmer who’s not making games or anything intensive, I think I’m still good for another year or two?
I have an M1 Air and I test drove a friend's recent M3 Air. It's not very different performance-wise for what I do (programming, watching video, editing small memory-constrained GIS models, etc)
I wanted to upgrade my M1 because it was going to swap a lot with only 8 gigs of RAM and because I wanted a machine that could run big LLMs locally. Ended up going 8G macbook air M1 -> 64G macbook pro M1. My other reasoning was that it would speed up compilation, which it has, but not by too much.
The M1 air is a very fast machine and is perfect for anyone doing normal things on the computer.
Personally I think this is the comparison most people want. The M3 had a lot of compromises over the M2.
That aside, the M4 is about the Neural Engine upgrades more than anything (which probably should have been compared to the M3).
What are those compromises? I may buy an M3 MBP, so I'd like to hear more.
The M3 Pro had some downgrades compared to the M2 Pro: fewer performance cores and lower memory bandwidth. This did not apply to the M3 and M3 Max.
Well, the obvious answer is that those with older machines are more likely to upgrade than those with newer machines. The market for insta-upgraders is tiny.
edit: And perhaps an even more obvious answer: there are no iPads that contained the M3, so the comparison would be more useless. The M4 was just launched today exclusively in iPads.
I don't think this is true. When they launched the M3 they compared primarily to M1 to make it look better.
That's because the previous iPad Pros came with M2, not M3. They are comparing the performance with the previous generation of the same product.
Doesn't seem plausible to me that Apple will release a "M3 variant" that can drive "tandem OLED" displays. So probably logical to package whatever chip progress (including process improvements) into "M4".
And it can signal that "We are serious about iPad as a computer", using their latest chip.
Logical alignment to progress in engineering (and manufacturing), packaged smartly to generate marketing capital for sales and brand value creation.
Wonder how the newer Macs will use these "tandem OLED" capabilities of the M4.
Because the previous iPad was M2. So "remember how fast your previous iPad was?" Well, this one is N times better.
From their specs page, battery life is unchanged. I think they donated the chip power savings to offset the increased consumption of the tandem OLED
I’ve not seen discussion that Apple likely scales performance of chips to match the use profile of the specific device it’s used in. An M2 in an iPad Air is very likely not the same as an M2 in an MBP or Mac Studio.
Surprisingly, I think it is: I was going to comment that here, then checked Geekbench, and single-core scores match for the M2 iPad/MacBook Pro/etc. at the same clock speed. I.e. M2 "base" = M2 "base", but core count differs, and with the desktops/laptops you get options for M2 Ultra Max SE bla bla.
A Ryzen 7840U in a gaming handheld is not (configured) the same as a Ryzen 7840U in a laptop, for that matter, so Apple is hardly unique here.
The GeekBench [1,2] benchmarks for M2 are:
Single core:
  iPad Pro (M2): 2539
  MacBook Air (M2): 2596
  MacBook Pro (M2): 2645
Multi core:
  iPad Pro (M2 8-core): 9631
  MacBook Air (M2 8-core): 9654
  MacBook Pro (M2 8-core): 9642
So, it appears to be almost the same performance (until it throttles due to heat, of course).
1. https://browser.geekbench.com/ios-benchmarks 2. https://browser.geekbench.com/mac-benchmarks
Any product that uses this is more than just the chip, so you cannot get a proportional change in battery life.
Sure, but I also remember them comparing the M1 chip to an RTX 3090, and my MacBook M1 Pro doesn't really run games well.
So I've become really suspicious about any claims about performance done by Apple.
That is the fault of the devs, because optimization for dedicated graphics cards is either integrated into the game engine or they just ship a separate version for RTX users.
I mean, I remember Apple comparing the M1 Ultra to Nvidia's RTX 3090. While that chart was definitely putting a spin on things to say the least, and we can argue from now until tomorrow about whether power consumption should or should not be equalised, I have no idea why anyone would expect the M1 Pro (an explicitly much weaker chip) to perform anywhere near the same.
Also what games are you trying to play on it? All my M-series Macbooks have run games more than well enough with reasonable settings (and that has a lot more to do with OS bugs and the constraints of the form factor than with just the chipset).
They compared them in terms of perf/watt, which did hold up, but obviously implied higher performance overall.
I don't know, but the M3 MBP I got from work already gives the impression of using barely any power at all. I'm really impressed by Apple Silicon, and I'm seriously reconsidering my decision from years ago to never ever buy Apple again. Why doesn't everybody else use chips like these?
I have an M3 for my personal laptop and an M2 for my work laptop. I get ~8 hours if I'm lucky on my work laptop, but I have attributed most of that battery loss to all the "protection" software they put on my work laptop that is always showing up under the "Apps Using Significant Power" category in the battery dropdown.
I can have my laptop with nothing on screen, and the battery still points to TrendMicro and others as the cause of heavy battery drain while my laptop seemingly idles.
I recently upgraded my personal laptop to the M3 MacBook Pro and the difference is astonishing. I almost never use it plugged in because I genuinely get close to that 20-hour reported battery life. Last weekend I played a AAA video game through Xbox Cloud Gaming (awesome for Mac gamers, btw) with essentially max graphics (rendered elsewhere and streamed to me, of course). I got sucked into the game for like 5 hours and lost only 8% of my battery during that time, while playing a top-tier video game! It really blew my mind. I also use the GoLand IDE on there and have managed to get a full day of development done using only about 25-30% battery.
So yeah, whatever Apple is doing, they are doing it right. Performance without all the spyware that your work gives you makes a huge difference too.
For the AAA video game example, I mean, it is interesting how far that kind of tech has come… but really that’s just video streaming (maybe slightly more difficult because latency matters?) from the point of view of the laptop, right? The quality of the graphics there have more-or-less nothing to do with the battery.
I think the market will move to using chips like this, or at least have additional options. The new Snapdragon SOC is interesting, and I would suspect we could see Google and Microsoft play in this space at some point soon.
Isn't 15% more battery life a huge improvement on a device already well known for long battery life?
You wouldn’t necessarily get twice the battery life. It could be less than that due to the thinner body causing more heat, a screen that utilizes more energy, etc
I'm not sure what you mean by "of all times" but half the battery usage of the processor definitely doesn't translate into double the battery life since the processor is not the only thing consuming power.
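A quick back-of-the-envelope sketch, with numbers made up purely for illustration (not measurements from any real device):

    # say the SoC draws 3W out of a 10W total under a light load, with a 50Wh battery
    echo "before:           $(echo "scale=2; 50/(3+7)" | bc) hours"
    echo "SoC power halved: $(echo "scale=2; 50/(1.5+7)" | bc) hours"
    # roughly 18% more battery life, nowhere near 2x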
Apple might use simplified and opaque plots to drive their point home, but they all too often undersell the differences. Independent reviews, for example, find that they not only hit the mark Apple mentions for things like battery life but often do slightly better...
Apple is one of the few companies that underpromise and overdeliver and never exaggerate.
Compared to the competition, I'd trust Apple much more than the Windows laptop OEMs.
CPU is not the only way that power is consumed in a portable device. It is a large fraction, but you also have displays and radios.
Well battery life would be used by other things too right? Especially by that double OLED screen. "best ever" in every keynote makes me laugh at this point, but it doesn't mean that they're not improving their power envelope.
Potentially > 2x greater battery life for the same amount of compute!
That is pretty crazy.
Or am I missing something?
Sadly, this is only processor power consumption; you need to put power into a whole lot of other things to make a useful computer… a display backlight and the system's RAM come to mind as particular offenders.
The backlight is now the main bottleneck for consumption-heavy uses. I wonder what the main advancements happening there to optimize the wattage are.
Please give me an external ePaper display so I can just use Spacemacs in a well-lit room!
Onyx makes an HDMI 25" eInk display [0]. It's pricey.
[0] https://onyxboox.com/boox_mirapro
edit: 25", not 27"
I'm still waiting for the technology to advance. People can't reasonably spend $1500 on the world's shittiest computer monitor, even if it is on sale.
Dang, yeah, this is the opposite of what I had in mind
I was thinking, like, a couple hundred dollar Kindle the size of a big iPad I can plug into a laptop for text-editing out and about. Hell, for my purposes I'd love an integrated keyboard.
Basically a second, super-lightweight laptop form-factor I can just plug into my chonky Macbook Pro and set on top of it in high-light environments when all I need to do is edit text.
Honestly not a compelling business case now that I write it out, but I just wanna code under a tree lol
I think we're getting pretty close to this. The Remarkable 2 tablet is $300, but can't take video input and software support for non-notetaking is near non-existent. There's even a keyboard available. Boox and Hisense are also making e-ink tablets/phones for reasonable prices.
A friend bought it & I had a chance to see it in action.
It is nice for some very specific use cases. (They're in the publishing/typesetting business. It's… idk, really depends on your usage patterns.)
Other than that, yeah, the technology just isn't there yet.
If that existed as a drop-in screen replacement for the Framework laptop and with a high-refresh-rate color Gallery 3 panel, then I'd buy it at that price point in a heartbeat.
I can't replace my desktop monitor with eink because I occasionally play video games. I can't use a 2nd monitor because I live in a small apartment.
I can't replace my laptop screen with greyscale because I need syntax highlighting for programming.
Maybe the $100 nano-texture screen will give you the visibility you want. Not the low power of a epaper screen though.
Hmm, emacs on an epaper screen might be great if it had all the display update optimization and "slow modem mode" that Emacs had back in the TECO days. (The SUPDUP network protocol even implemented that at the client end and interacted with Emacs directly!)
Is the iPad Pro not yet on OLED? All of Samsung's flagship tablets have had OLED screens for well over a decade now. It eliminates the need for backlighting, has superior contrast, and is pleasant to use in low-light conditions.
I'm not sure how OLED and backlit LCD compare power-wise exactly, but OLED screens still need to put off a lot of light, they just do it directly instead of with a backlight.
The iPad that came out today finally made the switch. iPhones made the switch around 2016. It does seem odd how long it took for the iPad to switch, but Samsung definitely switched too early: my Galaxy Tab 2 suffered from screen burn in that I was never able to recover from.
QD-OLED reduces it by like 25% I think? But maybe that will never be in laptops, I'm not sure.
QD-OLED is an engineering improvement, i.e. combining existing researched technology to improve the resulting product. I wasn't able to find a good source on what exactly it improves in efficiency, but it's not a fundamental improvement in OLED electrical→optical energy conversion (if my understanding is correct.)
In general, OLED screens seem to have an efficiency around 20~30%. Some research departments seem to be trying to bump that up [https://www.nature.com/articles/s41467-018-05671-x] which I'd be more hopeful about…
…but, honestly, at some point you just hit the limits of physics. It seems internal scattering is already a major problem; maybe someone can invent pixel-sized microlasers and that'd help? More than 50-60% seems like a pipe dream at this point…
…unless we can change to a technology that fundamentally doesn't emit light, i.e. e-paper and the likes. Or just LCD displays without a backlight, using ambient light instead.
AMD GPUs have "Adaptive Backlight Management", which reduces your screen's backlight but then tweaks the colors to compensate. For example, my laptop's backlight is set at 33%, but with ABM it reduces my backlight to 8%. Personally I don't even notice it is on / my screen seems just as bright as before, but when I first enabled it I did notice some slight difference in colors, so it's probably not suitable for designers/artists. I'd 100% recommend it for coders though.
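In case it's useful, here's roughly how it can be enabled on Linux (assuming the amdgpu driver; ABM levels go 0-4, and the runtime sysfs knob only exists on newer kernels, with a path that may differ per machine):

    # at boot: add to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the grub config
    #   amdgpu.abmlevel=3

    # or, on newer kernels, toggle it at runtime per eDP panel (path is from my machine)
    echo 3 | sudo tee /sys/class/drm/card0-eDP-1/amdgpu/panel_power_savings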
Strangely, Apple seems to be doing the opposite for some reason (color accuracy?): dimming the display doesn't seem to reduce the backlight as much, and they're using software dimming on top of it, even at "max" brightness.
Evidence can be seen when opening up iOS apps, which seem to glitch out and reveal the brighter backlight [1]. Notice how #FFFFFF white isn't the same brightness as the white in the iOS app.
[1] https://imgur.com/a/cPqKivI
If the use cases involve working in dark terminals all day or watching movies with dark scenes, or if the general theme is dark, maybe the new OLED display will help reduce the display power consumption too.
that's still amazing, to me.
I don't expect an M4 macbook to last any longer than an M2 macbook of otherwise similar specs; they will spend that extra power budget on things other than the battery life specification.
Thanks. That makes sense.
Comparing the tech specs for the outgoing and new iPad Pro models, that potential is very much not real.
Old: 28.65 Wh (11") / 40.88 Wh (13"), up to 10 hours of surfing the web on Wi-Fi or watching video.
New: 31.29 Wh (11") / 38.99 Wh (13"), up to 10 hours of surfing the web on Wi-Fi or watching video.
Isn't this weird? A new chip consumes half the power, but the battery life is the same?
The OLED likely adds a fair bit of draw; they're generally somewhat more power-hungry than LCDs these days, assuming like-for-like brightness. Realistically, this will be the case until MicroLEDs are available for non-completely-silly money.
This surprises me. I thought the big power downside of LCD displays is that they use filtering to turn unwanted color channels into waste heat.
Knowing nothing else about the technology, I assumed that would make OLED displays more efficient.
OLED will use less for a screen of black and LCD will use less for a screen of white. Now, take whatever average of what content is on the screen and for you, it may be better or may be worse.
White background document editing, etc., will be worse, and this is rather common.
Can’t beat the thermodynamics of exciton recombination.
https://pubs.acs.org/doi/10.1021/acsami.9b10823
It's not weird when you consider that browsing the web or watching videos has the CPU idle or near enough, so 95% of the power draw is from the display and radios.
No, they have a "battery budget". If the CPU power draw goes down, that means the budget goes up and you can spend it on other things, like a nicer display or some other feature.
When you say "up to 10 hours" most people will think "oh nice that's an entire day" and be fine with it. It's what they're used to.
Turning that into 12 hours might be possible but are the tradeoffs worth it? Will enough people buy the device because of the +2 hour battery life? Can you market that effectively? Or will putting in a nicer fancy display cause more people to buy it?
We'll never get significant battery life improvements because of this, sadly.
Yeah double the PPW does not mean double the battery, because unless you're pegging the CPU/SOC it's likely only a small fraction of the power consumption of a light-use or idle device, especially for an SOC which originates in mobile devices.
Doing basic web navigation with some music in the background, my old M1 Pro has short bursts at ~5W (for the entire SoC) when navigating around, a pair of watts for mild webapps (e.g. checking various channels in discord), and typing into this here textbox it's sitting happy at under half a watt, with the P-cores essentially sitting idle and the E cores at under 50% utilisation.
With a 100Wh battery that would be a "potential" of 150 hours or so. Except nobody would ever sell it for that, because between the display and radios the laptop's actually pulling 10~11W.
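(If you want to check similar numbers on your own Mac, the built-in powermetrics tool gives this kind of breakdown; something like:)

    # sample SoC power once a second for 5 samples; reports per-cluster CPU power,
    # GPU power, and a combined CPU+GPU+ANE figure (needs root)
    sudo powermetrics --samplers cpu_power,gpu_power -i 1000 -n 5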
On my M1 air, I find for casual use of about an hour or so a day, I can literally go close to a couple weeks without needing to recharge. Which to me is pretty awesome. Mostly use my personal desktop when not on my work laptop (docked m3 pro).
So this could be a bit helpful for heavier duty usage while on battery.
A more efficient CPU can't improve that spec because those workloads use almost no CPU time and the display dominates the energy consumption.
Unfortunately Apple only ever thinks about battery life in terms of web surfing and video playback, so we don't get official battery-life figures for anything else. Perhaps you can get more battery life out of your iPad Pro web surfing by using dark mode, since OLEDs should use less power than IPS displays with darker content.
this
Ok, but is it twice as fast during those 10 hours, leading to 20 hours of effective websurfing? ;)
Wait a bit. M2 wasn't as good as the hype was.
That's because M2 was on the same TSMC process generation as M1. TSMC is the real hero here. M4 is the same generation as M3, which is why Apple's marketing here is comparing M4 vs M2 instead of M3.
Saying TSMC is a hero ignores the thousands of suppliers that improved everything required for TSMC to operate. TSMC is the biggest, so they get the most experience on all the new toys the world’s engineers and scientists are building.
It's almost as if every part of the stack -- from the uArch that Apple designs down to the insane machinery from ASML, to the fully finished SoC delivered by TSMC -- is vitally important to creating a successful product.
But people like to assign credit solely to certain spaces if it suits their narrative (lately, Apple isn't actually all that special at designing their chips, it's all solely the process advantage)
Saying TSMC's success is due to their suppliers ignores the fact that all of their competitors failed to keep up despite having access to the same suppliers. TSMC couldn't do it without ASML, but Intel and Samsung failed to do it even with ASML.
In contrast, when Apple's CPU and GPU competitors get access to TSMC's new processes after Apple's exclusivity period expires, they achieve similar levels of performance (except for Qualcomm because they don't target the high end of CPU performance, but AMD does).
And why are other PC vendors not latching on to the hero?
Apple pays TSMC for exclusivity on new processes for a period of time.
Apple often buys their entire capacity (of a process) for quite a while.
I thought M3 and M4 were different processes though. Higher yield for the latter or such.
Actually, M4 is reportedly on a more cost-efficient TSMC N3E node, where Apple was apparently the only customer on the more expensive TSMC N3B node; I'd expect Apple to move away from M3 to M4 very quickly for all their products.
https://www.trendforce.com/news/2024/05/06/news-apple-m4-inc....
This. Remember, folks: Apple's primary goal is PROFIT. They will tell you anything appealing before independent tests are done.
Is the CPU/GPU really dominating power consumption that much?
Nah, GP is off their rocker. For the workloads in question the SOC's power draw is a rounding error, low single-digit percent.
2x efficiency vs a 2 year old chip is more or less in line with expectations (Koomey's law). [1]
[1] https://en.wikipedia.org/wiki/Koomey%27s_law
And here it is in an OS that can't even max out an M1!
That said, the function keys make me think "and it runs macOS" is coming, and THAT would be extremely compelling.
We've seen a slow march over the last decade towards the unification of iOS and macOS. Maybe not a "it runs macOS", but an eventual "they share all the same apps" with adaptive UIs.
They probably saw the debacle that was Windows 8 and thought merging a desktop and touch OS is a decade-long gradual task, if that is even the final intention.
Unlike MS that went with the Big Bang in your face approach that was oh-so successful.
You're right about the reason but wrong about the timeline: Jobs saw Windows XP Tablet Edition and built a skunkworks at Apple to engineer a tablet that did not require a stylus. This was purely to spite a friend[0] of his that worked at Microsoft and was very bullish on XP tablets.
Apple then later took the tablet demo technology, wrapped it up in a very stripped-down OS X with a different window server and UI library, and called it iPhone OS. Apple was very clear from the beginning that Fingers Can't Use Mouse Software, Damn It, and that the whole ocean needed to be boiled to support the new user interface paradigm[1]. They even have very specific UI rules specifically to ensure a finger never meets a desktop UI widget, including things like iPad Sidecar just not forwarding touch events at all and only supporting connected keyboards, mice, and the Apple Pencil.
Microsoft's philosophy has always been the complete opposite. Windows XP through 7 had tablet support that amounted to just some affordances for stylus users layered on top of a mouse-only UI. Windows 8 was the first time they took tablets seriously, but instead of just shipping a separate tablet OS or making Windows Phone bigger, they turned it into a parasite that ate the Windows desktop from the inside-out.
This causes awkwardness. For example, window management. Desktops have traditionally been implemented as a shared data structure - a tree of controls - that every app on the desktop can manipulate. Tablets don't support this: your app gets one[2] display surface to present their whole UI inside of[3], and that surface is typically either full-screen or half-screen. Microsoft solved this incongruity by shoving the entire Desktop inside of another app that could be properly split-screened against the new, better-behaved tablet apps.
If Apple were to decide "ok let's support Mac apps on iPad", it'd have to be done in exactly the same way Windows 8 did it, with a special Desktop app that contained all the Mac apps in a penalty box. This is so that they didn't have to add support for all sorts of incongruous, touch-hostile UI like floating toolbars, floating pop-ups, global menus, five different ways of dragging-and-dropping tabs, and that weird drawer thing you're not supposed to use anymore, to iPadOS. There really isn't a way to gradually do this, either. You can gradually add feature parity with macOS (which they should), but you can't gradually find ways to make desktop UI designed by third-parties work on a tablet. You either put it in a penalty box, or you put all the well-behaved tablet apps in their own penalty boxes, like Windows 10.
Microsoft solved Windows 8's problems by going back to the Windows XP/Vista/7 approach of just shipping a desktop for fingers. Tablet Mode tries to hide this, but it's fundamentally just window management automation, and it has to handle all the craziness of desktop. If a desktop app decides it wants a floating toolbar or a window that can't be resized[4], Tablet Mode has to honor that request. In fact, Tablet Mode needs a lot of heuristics to tell what floating windows pair with which apps. So it's a lot more awkward for tablet users in exchange for desktop users having a usable desktop again.
[0] Given what I've heard about Jobs I don't think Jobs was psychologically capable of having friends, but I'll use the word out of convenience.
[1] Though the Safari team was way better at building compatibility with existing websites, so much so that this is the one platform that doesn't have a deep mobile/desktop split.
[2] This was later extended to multiple windows per app, of course.
[3] This is also why popovers and context menus never extend outside their containing window on tablets. Hell, also on websites. Even when you have multiwindow, there's no API surface for "I want to have a control floating on top of my window that is positioned over here and has this width and height".
[4] Which, BTW, is why the iPad has no default calculator app. Before Stage Manager there was no way to have a window the size of a pocket calculator.
Clip Studio is one Mac app port I’ve seen that was literally the desktop version moved to the iPad. It uniquely has the top menu bar and everything. They might have made an exception because you’re intended to use the pencil and not your fingers.
Honestly, using a stylus isn't that bad. I've had to support floor traders for many years and they all still use a Windows-based tablet plus a stylus to get around. Heck, even Palm devices were a pleasure to use. Not sure why Steve was so hell-bent against them; it probably had to do with his beef with Sculley/Newton.
At this point, there are two fundamentally different types of computing that will likely never be merged in a satisfactory way.
We now have 'content consumption platforms' and 'content creation platforms'.
While attempts have been made to enable some creation on locked-down touchscreen devices, you're never going to want to operate a fully featured version of Photoshop, Maya, Visual Studio, etc. on them. And if you've got a serious workstation with multiple large monitors and precision input devices, you don't want dumbed-down, touch-centric apps forced upon you Win8-style.
The bleak future that seems likely is that the 'content creation platforms' become ever more niche and far more costly. Barriers to entry for content creators are raised significantly as mainstream computing is mostly limited to locked-down content consumption platforms. And Linux is only an option for as long as non-locked-down hardware is available for sensible prices.
On the other hand, a $4000 mid-range MacBook doesn't have a touchscreen, and that's heresy. Granted, you can get the one with the emoji bar, but why interact using touch on a bar when you could touch the screen directly?
Maybe the end game for Apple isn’t the full convergence, but just having a touch screen on the Mac.
Why would you want greasy finger marks on your Macbook screen?
Not much point having a touchscreen on a Macbook (or any laptop really), unless the hardware has a 'tablet mode' with a detachable or fold-away keyboard.
People have complained about why Logic Pro / Final Cut wasn't ported to the iPad Pro line. The obvious answer is that getting those workflows done properly takes time.
I'd be very surprised if Apple is paying attention to anything that's happening with Windows, at least as a divining rod for how to execute.
Even with the advantage of time, I don't think Microsoft would have been able to do it. They can't even get their own UI situated, much less adaptive. Windows 10/11 is this odd mishmash of old and new, without a consistent language across it. They can't unify what isn't even cohesive in the first place.
Unfortunately, I think "they share all the same apps" will not include a terminal with root access, which is what would really be needed to make the iPad a general-purpose computer for development.
It's a shame, because it's definitely powerful enough, and the idea of traveling with just an iPad seems super interesting, but I imagine they will not extend those features to any devices besides Macs.
I mean, it doesn't even have to be true "root" access. Chromebooks have a containerized Linux environment, and aside from the odd bug, the high-end ones are actually great dev machines while retaining the "you spend most of your time in the browser, so we may as well bake that into the OS" base layer.
Been a while since I've used a Chromebook, but IIRC there's ALSO root access that's just a bit harder to get to, and you do actually need it from time to time for various reasons, or at least you used to.
I actually do use a Chromebook in this way! Out of all the Linux machines I've used, that's why I like it. Give me a space to work and provide an OS that I don't have to babysit or mentally maintain.
Maybe not an "it runs macOS", but an eventual "they share all the same apps" with adaptive UIs.
M-class MacBooks can already run many iPhone and iPad apps.
The writing was on the wall with the introduction of Swift, IMO. Since then it's been a matter of overcomplicating the iPad and dumbing down the macOS interfaces to attain this goal. So much wasted touch/negative space in macOS since Catalina to compensate for fingers and adaptive interfaces; so many hidden menus and long taps squirreled away in iOS.
Agreed, this will be the way forward in the future. I've already seen one of my apps (Authy) say "We're no longer building a macOS version, just install the iPad app on your Mac."
That's great, but you need an M-series chip in your Mac for that to work, so backward compatibility only goes back a few years at this point, which is fine for corporate upgrade cycles but might be a bit short for consumers right now. It will be fine in the future, though.
Until an "iPhone" can run brew, all my developer tools, Steam, the Epic Games launcher, etc., it's hardly interesting.
I think so too, especially after the split from iOS to iPadOS. Hopefully they'll show something during this year's WWDC.
I will settle for this: being able to connect two monitors to the iPad and select which audio device sound goes through. If I could run IntelliJ and compile Rust on the iPad, I'd promise to upgrade to the new iPad Pro as soon as it's released, every time.
Do you really want your OS using 100% of CPU?
Actually, TSMC's N3E process is somewhat of a regression from the first-generation 3nm process, N3. However, it is simpler and more cost-efficient, and everyone seems to want to get off that N3 process as quickly as possible. That seems to be the biggest reason Apple released the A17/M3 generation, and now the M4, the way they did.
The N3 process is in the A17 Pro, the M3, the M3 Pro, and the M3 Max. The A17 Pro name seems to imply you won't find it trickling down to the regular iPhones next year. So we'll see that processor only in this year's phones, since Apple discontinues its Pro range of phones every year; only the regular phones trickle downrange at lower prices.

The M3 devices are all Macs that needed an upgrade due to their popularity: the MacBook Pro and MacBook Air. They made three chips for them, but they did not make an M3 Ultra for the lower-volume desktops. With the announcement of an M4 chip in iPads today, we can expect to see the MacBook Air and MacBook Pro upgraded to M4 soon, with an M4 Ultra introduced to match later. We can now expect those M3 devices to be discontinued instead of going downrange in price.
That would leave one device with an N3-process chip: the iMac. At its sales volume, I wouldn't be surprised if all the M3 chips that will ever go into it are made this year, with the model staying around for a year or two running on fumes.
N3E still has a +9% logic transistor density increase over N3 despite a relaxation of the design rules, for reasons such as the introduction of FinFlex.[1] Critically, though, SRAM cell sizes remain the same as N5 (reversing the ~5% reduction in N3), and it looks like the situation with SRAM cell sizes won't be improving soon.[2][3] It appears more likely that designers, particularly for AI chips, will just stick with N5 as their designs become increasingly constrained by SRAM.
[1] https://semiwiki.com/semiconductor-manufacturers/tsmc/322688...
[2] https://semiengineering.com/sram-scaling-issues-and-what-com...
[3] https://semiengineering.com/sram-in-ai-the-future-of-memory/
SRAM has really stalled. I don't think 5nm was much better than 7nm. On ever-smaller nodes, SRAM will be taking up a larger and larger percentage of the entire chip, but the cost is much higher on the smaller nodes even if the performance is not better.
I can see why AMD started putting the SRAM on top.
It wasn't immediately clear to me why SRAM wouldn't scale like logic. This[1] article and this[2] paper shed some light.
From what I can gather, the key aspects are that decreased feature sizes lead to more variability between transistors, but also to less margin between the on-state and the off-state; a kind of double whammy. Logic circuits are constantly overwritten with new values regardless of what was already there, so they're not as sensitive to this, while the entire point of a memory circuit is to reliably keep values around.
Alternative transistor designs such as FinFET and gate-all-around can mitigate some of this, say by reducing transistor-to-transistor variability by some factor, but they can't get around the root issue (a rough sketch of the variability scaling is below the links).
[1]: https://semiengineering.com/sram-scaling-issues-and-what-com...
[2]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9416021/
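For a back-of-the-envelope sense of the variability point (my addition; this is the standard mismatch model, not something taken from the links above): Pelgrom's law says the threshold-voltage mismatch between two nominally identical transistors scales with gate area as

    σ(ΔVth) ≈ A_Vt / sqrt(W · L)

so halving the transistor area multiplies the mismatch by roughly sqrt(2), while the supply voltage, and with it the noise margin an SRAM cell has to live inside, keeps shrinking with each node.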
The signs certainly all point to the initial version of N3 having issues.
For instance, Apple supposedly negotiated a deal where they only paid TSMC for the usable chips per N3 wafer, not for the entire wafer.
https://arstechnica.com/gadgets/2023/08/report-apple-is-savi...
My read on the absurd number of MacBook M3 SKUs was that they had yield issues.
There is also the fact that we currently have an iPhone generation where only the Pro models got updated to chips on TSMC 3nm.
The next iPhone generation is said to be a return to form, with all models using the same SoC on the revised version of the 3nm node.
https://www.macrumors.com/2023/12/20/ios-18-code-four-new-ip...
Ultimately, does it matter?
Michelin-starred restaurants don't just have top-tier chefs. They have buyers who negotiate with food suppliers to get the best ingredients they can at the lowest prices they can. Having a preferential relationship with a good supplier is as important to the food quality and the health of the business as having a good chef to prepare the dishes.
Apple has top-tier engineering talent but they are also able to negotiate preferential relationships with their suppliers, and it's both those things that make Apple a phenomenal tech company.
Qualcomm also fabs with TSMC, and their newer 4nm processor is expected to stay competitive with the M series.
If the magic comes mostly from TSMC, there's a good chance these claims are true and that a series of better chips is coming on the other platforms as well.
Does Qualcomm have any new CPU cores besides the one ARM claims they can't ship due to licensing?
The one being announced on May 20th at Computex? https://news.ycombinator.com/item?id=40288969
“Stay” competitive implies they’ve been competitive. Which they haven’t.
I’m filing this into the bin with all the other “This next Qualcomm chip will close the performance gap” claims made over the past decade. Maybe this time it’ll be true. I wouldn’t bet on it.
This info is much more useful than a comparison to restaurants.
On one hand, it's crazy. On the other hand, it's pretty typical for the industry.
Average performance per watt doubling time is 2.6 years: https://newsroom.arm.com/blog/performance-per-watt#:~:text=T....
M2 was launched in June 2022 [1] so a little under 2 years ago. Apple is a bit ahead of that 2.6 years, but not by much.
[1] https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with...
If they maintain that pace, it will start compounding incredibly quickly. If we round to 2 years vs 2.5 years, after just a decade you're an entire doubling ahead.
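Rough arithmetic behind that compounding claim (my numbers, just rounding the cadences):

    10 yr / 2.0 yr per doubling = 5 doublings -> 2^5 = 32x
    10 yr / 2.5 yr per doubling = 4 doublings -> 2^4 = 16x

So the faster cadence ends the decade a full doubling (2x) ahead.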
Note that performance per watt is 2x higher at both chips' peak performance. This is in many ways an unfair comparison for Apple to make.
It is almost certainly half as much power in the RMS sense, not absolute.
Breathtaking
That doesn't seem to be reflected in the battery life, though: these have the exact same rated battery life. Does that mean the claim isn't entirely accurate? Since Apple doesn't list the battery capacity in the specs, it's hard to confirm.
Also the thousands of suppliers that have improved the equipment and materials feeding into the TSMC fabs.
It can deliver the same performance as itself at just a fourth of the power than it's using? That's incredible!
They don't mention which metric is 50% higher.
However, we have more CPU cores, a newer core design, and a newer process node, all of which would contribute to improving multicore CPU performance.
Also, Apple is conservative on clock speeds, but those do tend to get bumped up when there is a new process node as well.
I think Apple's design choices had a huge impact on the M1's performance, but from there on out I think it's mostly due to TSMC.