
Apple Intelligence for iPhone, iPad, and Mac

whalee
45 replies
23h12m

I am deeply disturbed they decided to go off-device for these services to work. This is a terrible precedent, seemingly inconsistent with their previous philosophies and likely a pressured decision. I don't care if they put the word "private" in there or have an endless amount of "expert" audits. What a shame.

wilg
8 replies
23h5m

You are deeply disturbed by the idea that some services can be better implemented server-side? Who do you think pressured them, the illuminati?

grishka
7 replies
22h57m

Here's a shocking suggestion: maybe wait until these services can be implemented on-device, and implement them there, instead of shipping this half-baked something? Apple seems like the perfect company to make it happen; they produce both the hardware and the software, tightly integrated with each other. No one else is this good at it.

wilg
5 replies
22h52m

They implemented way more on the device than anyone else is doing, and I don't see how it makes it "half-baked" that it sometimes needs to use an online service. Your suggestion is essentially just not shipping the product until some unspecified future time. That offers no utility to anyone.

grishka
3 replies
22h39m

It is, however, very much Apple's philosophy to wait it out and let others mature a technology before making use of it.

TeMPOraL
2 replies
22h21m

At the current rate of advancement, we might get a runaway AGI before the technology "matures".

grishka
1 replies
20h54m

Or we might not. LLMs are remarkably dumb and incapable of reasoning or abstract thinking. No amount of iterative improvement on that would lead to an AGI. If we are to ever get an actual AGI, it would need to have a vastly different architecture, at minimum allowing the parameters/weights to be updated at runtime by the model itself.

TeMPOraL
0 replies
20h50m

Right. But there's so much effort, money and reputation invested in various configurations, experimental architectures, etc. that I feel something is likely going to pan out in the coming months, enabling models with more capabilities for less compute.

dleink
0 replies
21h29m

It offers utility to user privacy.

qeternity
0 replies
22h45m

Here’s a shocking suggestion: if you’re not comfortable using it, don’t use it.

giancarlostoro
8 replies
23h9m

They prompt you before you go off-device, which makes the most sense.

Me1000
7 replies
23h5m

They prompt you before they send your data to OpenAI, but it's not clear that they prompt you before they send it to Apple's servers (maybe they do and I missed it?). And their promise that their servers are secure because it's all written in Swift is laughable.

Edit:

This line from the keynote is also suspect: "And just like your iPhone, independent experts can inspect the code that runs on the servers to verify this privacy promise."

First off, do "independent experts" actually have access to closed-source iOS code? If so, we already have evidence that this is insufficient (https://www.macrumors.com/2024/05/15/ios-17-5-bug-deleted-ph...).

The actual standard for privacy and security is open source software, anything short of that is just marketing buzz. Every company has an incentive to not leak data, but data leaks still happen.

ethbr1
6 replies
23h3m

They're promising to go farther than that.

> Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection. Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.
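
Mechanically, the "publicly logged" part is something the client can enforce on its own: refuse to release any data unless the server's software measurement appears in a public, append-only log. A toy sketch of the idea in Python (not Apple's actual protocol; every name here is made up):

```python
import hashlib

def measurement_of(software_image: bytes) -> str:
    """Digest of the exact software image a server claims to be running."""
    return hashlib.sha256(software_image).hexdigest()

# Hypothetical append-only transparency log of approved server releases,
# mirrored from a public source (think Certificate Transparency, but for
# server software images rather than TLS certificates).
public_software_log = {measurement_of(b"pcc-node-build-2024.06.10")}

def ok_to_send(claimed_image: bytes) -> bool:
    """Client-side gate: only talk to servers whose software is publicly logged."""
    return measurement_of(claimed_image) in public_software_log

assert ok_to_send(b"pcc-node-build-2024.06.10")         # logged build: allowed
assert not ok_to_send(b"pcc-node-build-with-backdoor")  # unlogged build: refused
```

The hard part, of course, is proving the server is actually running the image it claims to be running, which is where the attestation discussion below comes in.
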
pdpi
5 replies
22h59m

That promise made my ears perk up. If it actually stands up to scrutiny, it's pretty damn cool.

ethbr1
4 replies
22h50m

I look at things like that from a revenue/strategic perspective.

If Apple says it, do they have any disincentives to deliver? Not really. Their ad business is still relatively small, and already architected around privacy.

If someone who derives most of their revenue from targeted ads says it? Yes. Implementing it directly negatively impacts their primary revenue stream.

IMHO, the strategic genius of Apple's "privacy" positioning has been that it doesn't matter to them. It might make things more inconvenient technically, but it doesn't impact their revenue model, in stark contrast to their competitors.

Hizonner
3 replies
22h4m

Their disincentive to delivering it is that it's not actually possible.

warkdarrior
2 replies
21h43m

It's certainly possible through remote attestation of software. This is basically DRM on servers (i.e., the data is not decrypted on the server unless the server stack is cryptographically attested to match some trusted configuration).
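
In miniature, the flow is: the server's hardware root of trust signs a measurement ("quote") of the software stack, and the verifier only releases the data (or the key to decrypt it) if that signed measurement matches the expected, audited build. A toy sketch assuming the Python `cryptography` package; in a real system the signing key lives in a TPM or Secure Enclave rather than in software:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-in for the attestation hardware: in reality this key is fused into
# a hardware root of trust that measures the boot chain.
attestation_key = ed25519.Ed25519PrivateKey.generate()
attestation_pub = attestation_key.public_key()

def quote(running_stack: bytes) -> tuple[bytes, bytes]:
    """Signed digest of the software the server is actually running."""
    digest = hashlib.sha256(running_stack).digest()
    return digest, attestation_key.sign(digest)

# Verifier side (the client, or a key-release service acting on its behalf).
TRUSTED_MEASUREMENT = hashlib.sha256(b"audited-server-stack-v1").digest()

def release_data_key(digest: bytes, signature: bytes) -> bool:
    """Hand over the decryption key only for a genuine, matching quote."""
    try:
        attestation_pub.verify(signature, digest)
    except InvalidSignature:
        return False
    return digest == TRUSTED_MEASUREMENT

assert release_data_key(*quote(b"audited-server-stack-v1"))    # attested: key released
assert not release_data_key(*quote(b"tampered-server-stack"))  # mismatch: refused
```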

Hizonner
1 replies
21h36m

That requires trusting that the attestation hardware does what it says it does, and that the larger hardware system around it isn't subject to invasion. Those requirements mean that your assurance is no longer entirely cryptographic. And, by the way, Apple apparently plans to be building the hardware.

It could be a very large practical increase in assurance, but it's not what they're saying it is.

ethbr1
0 replies
20h24m

I haven't read all the marketing verbiage yet, but even 'Our cloud AI servers are hardware-locked and runtime-checked to only run openly auditable software' is a huge step forward, IMHO.

It's a decent minimum bar that other companies should also be aiming for.

Edit: ref https://security.apple.com/blog/private-cloud-compute/

tr3ntg
5 replies
22h57m

They didn't have a choice. Doing everything on-device would result in a horrible user experience. They might as well not participate in this generative AI rush at all if they hoped to keep it on-device. Which would have looked even worse for them.

Their brand is as much about creativity as it is about privacy. They wouldn't chop off one arm to keep the other, but that's what you're suggesting they should have done.

And yes, I know generative AI could be seen specifically as anti-creativity, but I personally don't think it is. It can help one be creative.

Terretta
2 replies
22h22m

> Doing everything on-device would result in a horrible user experience. They might as well not participate in this generative AI rush at all if they hoped to keep it on-device.

On the contrary, I'm shocked over the last few months how "on device" on a MacBook Pro or Mac Studio competes plausibly with last year's early GPT-4, leveraging Llama 3 70b or Qwen2 72b.

There are surprisingly few things you "need" 128GB of so-called "unified RAM" for, but with M-series processors and the memory bandwidth, this is a use case that shines.

From this thread covering performance of llama.cpp on Apple Silicon M-series …

https://github.com/ggerganov/llama.cpp/discussions/4167

"Buy as much memory as you can afford would be my bottom line!"

philjohn
1 replies
22h9m

Yes - but people don't want to pay $4k for a phone with 128GB of unified memory, do they?

And whilst the LLMs running locally are cool, they're still pretty damn slow compared to ChatGPT or Meta's LLM.

theshrike79
0 replies
21h3m

It depends on what you want to do, though.

If I want some help coding or ideas about playlists, Gemini and ChatGPT are fine.

But when I'm writing a novel about an assassin with an AI assistant and the public model keeps admonishing me that killing people is bad and he should seek help for his tendencies, it's a LOT faster to just use an uncensored local LLM.

Or when I want to create faces for some people in my RPG campaign and the online generator keeps telling me my 200+ word prompt is VERBOTEN. And finally I figure out that "nude lipstick" is somehow bad.

Again, it's faster to feed all this to a local model and just get it done overnight than fight against puritanised AIs.

roncesvalles
1 replies
22h4m

I don't think it would've looked bad for their brand to have not participated. Apple successfully avoided other memes like touchscreen laptops and folding phones.

adpirz
0 replies
20h35m

Siri is bad, and it's bad for their brand. This is making up that lost ground.

electriclove
5 replies
23h6m

I like their approach. Do everything possible on device and if it can only be done off-device, provide that choice.

NewJazz
3 replies
22h40m

You misunderstand.

They will go off-device without asking you; they just ask if you want to use ChatGPT.

rahkiin
2 replies
22h23m

No: they do it on device, and ask before doing it off-device in their private cloud. ChatGPT is then a separate integration/intent you can ask for.

NewJazz
1 replies
21h52m

I don't see anything to that effect in tfa, and a few people in the comments have claimed otherwise.

pertymcpert
0 replies
12h33m

Yeah that's how it works.

unshavedyak
0 replies
22h31m

Are they giving us a choice? I thought the choice was primarily about using ChatGPT? It sounded like everything in Apple's "Private Cloud" was being considered fully private.

cedws
5 replies
23h3m

Circa 2013 Snowden says the intelligence agencies are wiretapping everything and monitoring everyone.

In 2024 they don't have to wiretap anything. It's all being sent directly to the cloud. Their job has been done for them.

buildbuildbuild
3 replies
22h46m

I hear you but caution against such oversimplification. Advanced Data Protection for iCloud is a thing. Our culture of cloud reliance is truly dangerous, but some vendors are at least trying to E2E-encrypt data where possible.

There are big risks to having a cloud digital footprint, yet clouds can be used “somewhat securely” with encryption depending on your personal threat model.

Also, it’s not fair to compare clouds to wiretapping. Unless you are implying that Apple’s infrastructure is backdoored without their knowledge? One does not simply walk into an Apple datacenter and retrieve user data without questions asked. Legal process is required, and Apple’s legal team has one the stronger track records of standing up against broad requests.

theshrike79
0 replies
21h1m

Yes, because the UX is better that way.

With ADP if your mom loses her encryption keys, it's all gone. Forever. Permanently.

And of course it's Apple's fault somehow. That's why it's not the default.

ggamecrazy
0 replies
21h43m

Broadly, in the US, the Federal Wiretap Act of 1968 still applies. You're going to have to convince a judge otherwise.

Yes, perhaps broad dragnet-type surveillance might be scoffed at by some judges (outside of Patriot Act FISA judges ofc).

I would warn you about the general E2E encryption and encrypted-at-rest claims. They are in fact correct, but perhaps misleading? At some point, for most, the data does get decrypted server-side - cue the famous ":-)"

lancesells
0 replies
22h47m

It's been going to the cloud since at least 2013 as well.

evrenesat
2 replies
20h57m

I hope at some point they start selling a beefy Mac mini variant that looks like a HomePod to work as an actual private AI server for the whole family.

pram
1 replies
17h46m

This is called a Mac Studio

evrenesat
0 replies
6h10m

It would be great if they let us install the private cloud server on our Macs, but I’m not holding my breath. Then again, in the name of more privacy, maybe they want to sell a dedicated local AI hub as another hardware product. They could even offer it for an affordable upfront cost that can be amortized into a multi-year iCloud subscription.

29athrowaway
2 replies
23h4m

Siri and other assistants already do this no?

re
0 replies
21h29m

Yes. Siri debuted with the iPhone 4s (running iOS 5) in 2011. It wasn't until iOS 15 in 2021 that Siri gained the ability to do some things without an internet connection, on devices with the A12 Bionic chip (the 2018 iPhone XR/XS or later).

hrdwdmrbl
0 replies
22h25m

Yes

steve1977
0 replies
22h7m

You can’t charge for a service so easily if it runs on-device.

drexlspivey
0 replies
22h53m

That’s a necessary temporary step until these powerful LLMs are able to run locally. I’m sure Apple would be delighted to offload everything on device if possible and not spend their own money on compute.

curious_cat_163
0 replies
22h53m

I agree with you about this being a bad precedent.

However, to me, the off-device bit they showed today (user consent on every request) represents a strategic hedge as a $3T company.

They are likely buying time and trying to prevent people from switching to other ecosystems while their teams catch up with the tech and find a way to do this all in the “Apple Way”.

markus_zhang
41 replies
21h31m

TBH, I think the IT industry is too concentrated on eating itself. We are happily automating our own jobs away while the other industries basically just sleep through it.

I don't want generative AI in my phone. I want someone, or something to book a meeting with my family doctor, the head of my son's future primary school, etc. I don't need AI to do that. I need the other industries (medical/government/education) to wake up and let us automate them.

Do you know that my family doctor ONLY takes calls? Like in the 1970s, I guess? Do you know it takes hours to reach a government office, and they work maybe 6 hours a day? The whole world is f**ing sleeping. IT people, hey guys, slow down on killing yourselves.

AI is supposed to get rid of the chores; now it leaves us with the chores and takes the creative part away. I don't need such AI.

skilled
10 replies
21h27m

I wonder if Apple ever approached Google about using Gemini as the flagship integration. I say that because during the keynote I kept thinking to myself, this could be the moment that Google realises it needs to stick to what it knows best - Search - and all they have to do is sit back and watch the hype fade away.

But that’s in a perfect world.

Even to this day, post ChatGPT, I still can’t imagine how I would ever use this AI stuff in a way that really makes me want to use it. Maybe I am too simple of a mind?

Maybe the problem is in the way that it is presented. Too much all at once, with too many areas of where and how it can be used. Rewriting emails or changing invitations to be “poems” instead of text is exactly the type of cringe that companies want to push but it’s really just smoke and mirrors.

Companies telling you to use features that you wouldn’t otherwise need. If you look at the email that Apple rewrote in the keynote - the rewritten version was immediately distinguishable as robotic AI slop.

markus_zhang
5 replies
21h24m

TBF I was too harsh in my original comment. I did use ChatGPT to automate away the chore part of the coding (boilerplate, for example). But I have a gut feeling that in maybe 5-10 years this is going to replace some junior programmer's job.

My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.

nomel
2 replies
21h2m

> My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.

The first company to offer their models for offline use, preferably delivered in a shipping container you plug in, with the ability to "fine tune" (or whatever tech) with all their internal stuff, wins the money of everyone that has security/confidentiality requirements.

kolinko
1 replies
20h38m

Unless the company handles national security, the existing cloud TOS and infrastructure fulfill all the legal and practical requirements. Even banks and hospitals use the cloud now.

nomel
0 replies
17h4m

The context here is running third party LLM, not running arbitrary things in the cloud.

> the existing cloud TOS and infrastructure fulfill all the legal and practical requirements

No, because the practical requirements are set by the users, not the TOS. Some companies, for the practical purposes of confidentiality and security, DO NOT want their information on third party servers [1].

Top third-party LLMs are usually behind an API, with things like retention happening on those third-party servers for content-policy/legal reasons. An on-premise offering, while still maintaining content-policy/legal retention on premise for any needed retrospection (say, after some violation threshold), would allow a bunch of $$$ to use their services.

[1] Companies That Have Banned ChatGPT: https://jaxon.ai/list-of-companies-that-have-banned-chatgpt/

edit: whelp, or go this route, and treat the cloud as completely hostile (which it should be, of course): https://news.ycombinator.com/item?id=40639606

worldsayshi
1 replies
21h2m

If it can automate a junior away, it seems just as likely it will make that junior more capable.

Somebody still needs to make those decisions that it can't make well. And some of those decisions don't require seniority.

jonathankoren
0 replies
20h23m

That’s not what happens.

What happens is if you don’t need junior people, you eliminate the junior people, and just leave the senior people. The senior people then age out, and now you have no senior people either, because you eliminated all the junior people that would normally replace them.

This is exactly what has happened in traditional manufacturing.

barkerja
1 replies
21h6m

My understanding is that Apple's approach to this integration is adaptable; much like how you would change your browser's search engine, you'll be able to change which external AI model is utilized. ChatGPT, Gemini, Claude, etc.

rurp
0 replies
19h44m

I don't think the choice of integration really matters for GP's point. Regardless of which model is used, how useful is the ability to rewrite an email in AI Voice really going to be? If I'm struggling over how to word an email there's usually a specific reason for it; maybe I'm trying to word things for a very particular audience or trying to find a concise way to cover something complicated that I have a lot of knowledge of. General purpose language model output wouldn't help at all in those cases.

I'm sure there are use cases for this and the other GenAI features, but they seem more like mildly useful novelties than anything revolutionary.

There's risk to this as well. Making it easier to produce low value slop will probably lead to more of it and could actually make communication worse overall.

notpachet
0 replies
20h12m

> this could be the moment that Google realises it needs to stick to what it knows best - Search

You misspelled "ads"

TexanFeller
0 replies
16h39m

> this could be the moment that Google realises it needs to stick to what it knows best - Search

In my mind Google is now a second class search like Bing. Kagi has savagely pwned Google.

triyambakam
7 replies
21h27m

> AI is supposed to get rid of the chores; now it leaves us with the chores and takes the creative part away. I don't need such AI.

You know I hadn't considered that and I think that's very insightful. Thank you

matt-attack
6 replies
20h47m

This quote has been circulating recently:

> I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes

mensetmanusman
4 replies
20h39m

That’s not possible yet, moving atoms is much more difficult than moving bits.

__loam
2 replies
20h4m

I feel like you've hit this industry in the nose without realizing it. How much actual value is the tech industry producing?

tavavex
0 replies
14h51m

Given that almost every major thing in the world runs on some kinds of computers running some software - a lot, probably. The fact that we don't have perfect and infallible robots and universal ways to manipulate the unpredictable, chaotic environment that is the real world (see also - self-driving cars) simply doesn't affect a lot of industries.

mensetmanusman
0 replies
3h47m

Absolutely, doing easier things at massive scale can still have more value than harder things at small scale.

I actually think exactly what should happen is already happening. All the low hanging fruit from software should be completed over the next decades, then the industry will undergo massive consolidation and there will be a large number of skilled people not needed in the industry anymore and they can move onto the next set of low hanging hard(er) problems for humanity.

lioeters
0 replies
19h35m

Bits will move bots, and hopefully do laundry and dishes too.

Dakizhu
0 replies
20h11m

Seems kind of silly. Laundry machines and dishwashers exist. The issue with the last mile is more robotics and control engineering than strictly AI. It's getting annoying seeing AI used as an umbrella term for everything related to automation.

kolinko
3 replies
20h40m

As for government - it depends on the country. In Poland we have an mCitizen (mObywatel) mobile app that allows you to handle more things year by year, and we have internet sites with unified citizen login for most of the other government interactions.

The last time our IRS wanted something from me, they just e-mailed me, I replied, and the issue was solved in 5 minutes.

Oh, and you don't need any paper IDs within the country - your driver's license, car registration, and official citizen ID are apps on your phone, and if you don't have your phone when, say, the police stop you, you give them your data and they check it against their database and your photo to confirm.

tavavex
0 replies
14h56m

Sounds similar to what the Ukrainian government did with the Diya app (lit. "Act/Action" but also an abbreviation of "the country and me") a few years ago. It's an interesting trend to see Eastern Europe adopt this deep level of internet integration before the countries that pioneered that same internet.

nox101
0 replies
20h15m

> The last time our IRS wanted something from me, they just e-mailed me, I replied, and the issue was solved in 5 minutes.

Lol, that will never happen in the USA. We have companies like Intuit actively lobbying against making things easy because their entire business is claiming to deal with the complexity for you.

gambiting
0 replies
20h19m

You don't have to walk to the local government office to get car registration plates anymore? That was always annoying as hell.

whizzter
1 replies
21h0m

In Sweden doctors have a fair bit of automation/systems around them; the sad part is that much of it has been co-opted for more stringent record keeping, etc., that's just making doctors unhappy and ballooning administration costs instead of focusing on bringing better care to patients.

In essence, we've saved 50 lives a year by avoiding certain mistakes with better record keeping, and killed 5,000 because the medical queues are too long due to busy doctors, so people don't bother getting help in time.

mihaaly
0 replies
20h31m

I have a faint-to-noticeable but persistent back pain. It should be checked out, but I do not want to cause myself bigger pain and mental strain than the back pain itself by talking to 3-4 people who send me around and put me in phone queues weeks apart, just to see a doctor sometime in the future - with my embarrassingly low-priority issue - who makes mountains of paperwork, bored, with too little time to diagnose me (which risks leading to an even bigger pile of paperwork). It's a different country, but life is all the same.

segmondy
1 replies
21h25m

The world has never cared about what you want. Your life has always revolved around the world. Don't like it, you vs the world. Beat it if you can.

markus_zhang
0 replies
21h22m

I agree. It's just some rant. Whatever, better bury it under the other comments...

prepend
1 replies
20h18m

I've had some success with Google Assistant calling restaurants to make reservations, when they are phone-only. I expect it's a matter of time until they can camp on my doctor's office. Or call my insurance and pretend to be me.

sigmoid10
0 replies
20h15m

> some success with Google Assistant calling

The funny thing is, these auto-callers don't even need to be successful. They just need to become common enough for restaurants and doctors to get annoyed to the point where they finally bring their processes to the 21st century.

preezer
1 replies
21h23m

Ohhhh yes. That's why I was so hyped about Google Duplex or duo?! Never heard of it again....

themacguffinman
0 replies
21h2m

It's available today, it's just not a product called "Duplex". Android has call screening and "hold my call" and phone menu tree detection. On select Google Maps listings, you can make reservations by clicking a button which will make a phone call in the background to make a reservation.

iLoveOncall
1 replies
21h23m

> Do you know that my family doctor ONLY takes calls?

And despite that it's still your family doctor.

I fully agree with your vision. It's obvious once laid out in words and it was a very insightful comment. But the incentives are not there for other industries to automate themselves.

hot_gril
0 replies
20h58m

I like a family doctor who only takes calls. Good doctors are responsive or have responsive staff. One time a doctor was locked into booking and communicating via this One Medical app that's a total piece of shit and just made things harder, so I went elsewhere. If someone makes a truly better solution, AI or not, doctors will use it without being forced.

And government offices don't even care to begin with, you have no other choice.

xnx
0 replies
14h35m

> Do you know that my family doctor ONLY takes calls? ... Do you know it takes hours to reach a government office, and they work maybe 6 hours a day?

Google has a few different features to handle making calls on your behalf and navigating phone menus and holds.

runeb
0 replies
19h38m

I share your frustration with services that won't let you automate them, but to me that's precisely what generative AI will let you do. You don't need an API at the family doctor's to have AI automate it for you. It just rings them up and sorts it out at your command. AI is like obtaining an API to anything.

pms
0 replies
21h5m

Great points!

The only thing I'd add: I don't think the responsibility for the lack of automation is solely on these other industries. To develop this kind of automation, they need funds and IT experts, but (i) they don't have the funds, especially in the US, since they aren't as well funded as the IT industry, and (ii) for the IT industry this kind of automation is boring; they prefer working on AI.

In my view, the overall issue is that capitalism is prone to herding and hype, and resulting suboptimal collective decision-making.

heywire
0 replies
21h0m

I know this wasn’t really your point, but most physicians around me use Epic MyChart, so I can book all that online. I also almost exclusively use email to communicate with our school district, and we’re in a small town.

dionian
0 replies
20h49m

> I don't want generative AI in my phone. I want someone, or something to book a meeting with my family doctor, the head of my son's future primary school, etc. I don't need AI to do that.

If someone can do that more productively with Gen AI, do you care?

brundolf
0 replies
20h21m

Social problems are the hard ones, information problems are the easy ones. So the latter are the low-hanging fruit that gets solved first

acchow
0 replies
21h25m

AI is skipping software integrations the same way cell phone towers (and Starlink) skipped phone wire deployment.

TheKarateKid
0 replies
20h52m

I completely agree, especially about it taking away the creative part and leaving us with the chores.

Doctors have exams, residencies, and limited licenses to give out to protect their industry. Meanwhile, tech companies will give an engineering job to someone who took a 4 month bootcamp.

Nition
34 replies
20h42m

Aside from the search and Siri improvements, I'm really not sure about the usefulness of all the generative stuff Apple is suggesting we might use here.

If you spend an hour drawing a picture for someone for their birthday and send it to them, a great deal of the value to them is not in the quality of the picture but in the fact that you went to the effort, and that it's something unique only you could produce for them by giving your time. The work is more satisfying to the creator as well - if you've ever used something you built yourself that you're proud of vs. something you bought you must have felt this. The AI image that Tania generated in a few seconds might be fun the first time, but quickly becomes just spam filling most of a page of conversation, adding nothing.

If you make up a bedtime story for your child, starring them, with the things they're interested in, a great deal of the value to them is not in the quality of the story but... same thing as above. I don't think Apple's idea of reading an AI story off your phone instead is going to have the same impact.

In a world where you can have anything, the value of everything is nothing.

cchance
15 replies
19h47m

LOL, that image you painstakingly created is also forgotten by most people not long after being given; just because you know the effort that went in doesn't mean the receiving person does, 99.9% of the time.

Same thing for your kid: the kid likes both stories and gives 0 shits whether you used GenAI or sat up for 8 hours trying to figure out the rhyme. Those things are making YOU feel better, not the person receiving it.

tines
8 replies
19h43m

> those things are making YOU feel better, not the person receiving it

I don't think this is true at all. Love is proportional to cost; if it costs me nothing, then the love it represents is nothing.

When we receive something from someone, we estimate what it cost them based on what we know of them. Until recently, if someone wrote a poem just for us, our estimation of that would often be pretty high because we know approximately what it costs to write a poem.

In modern times, that cost calculation is thrown off, because we don't know whether they wrote it themselves (high cost) or generated it (low/no cost).

cchance
5 replies
19h27m

Love is proportional to cost?!?!?!?! Holy shit, that's fucking weird. It costs me 0 to love my mom; I love my mom lol, and that doesn't change that fact. A broke mother that can't afford to take care of her kid doesn't not love the kid, or vice versa.

If you're calculating "cost" to tell whether someone is showing love, that's nuts; I feel sad for you lol. If my wife buys or makes me something or just says "I love you", they are equivalent. I don't give a shit if she "does something for me that costs her something"; she loves me, she thought of me.

The thought is what matters. If you put extra worth on people struggling to do something meaning more love... that's... ya

its_ethan
2 replies
18h8m

I can't help but be baited into responding to this comment too lol

You are obviously willfully misinterpreting what the OP meant by "cost".

You say "the thought is what matters" - this is 100% true, and "the thought" has a "cost". It "costs" an hour of sitting down and thinking of what to write to express your feelings to someone. That's what he is saying is "proportional" to love.

It "costs" you mental space to love your mom, and that can definitely happen with $0 entering the equation.

And with respect to "extra worth on people struggling to do something meaning more love" - if you spend the time to sit down and write a poem, when that's something that you don't excel at, someone will think: "oh wow you clearly really love me if you spent the time to write a poem, I know this because I know it's not easy for you and so you must care a lot to have done so anyway". If you can't see that... that's... ya

raincole
1 replies
9h26m

You say "the thought is what matters" - this is 100% true, and "the thought" has a "cost". It "costs" an hour of sitting down and thinking of what to write to express your feelings to someone. That's what he is saying is "proportional" to love.

So if you sit down and think about what to write to express your love to your mom for two hours, then you love your mom twice as much as the person who only sits down for one hour loves his mom?

That's what "proportional" means. Words have meanings.

its_ethan
0 replies
2h42m

I never said that spending two hours means you love someone twice as much as spending one hour, I'm not sure where you're getting that from.

You also may be shocked to learn this, but "proportional" doesn't mean 1:1. It can mean 1:2, 5:1, or x:x^e(2*pi). All of those are proportions. Words do have meanings, and you'll note that - while I didn't even misuse the word proportional - the quotations also indicate I'm using it more loosely than its textbook definition. You know, like how a normal person might.

I'm getting the vibe from you and the other commentator that, to you, this is about comparing how much two people love their respective mothers. That's not at all what this is even about? You can't compare "how much" two people love a respective person/thing because love isn't quantifiable.

I'm really not sure what you're even taking issue with? The idea that more time and effort from the giver corresponds to the receiver feeling more appreciation? That is not exactly a hot take lmfao

tines
0 replies
12h58m

I think I see what tripped you up in my comment. I said

> if it costs me nothing, then the love it represents is nothing.

You could read this as meaning that every action has to be super costly or else the love isn't there. I admit that it's poorly phrased and it's not what I meant.

What I should have said is that if it costs you nothing, then it doesn't in itself indicate love. It costs me nothing to say "I love you" on its own, and you wouldn't believe me if I just walked up to you in the street and said that. But your mom has spent thousands of hours and liters of blood, sweat and tears caring for you, so when she says "I love you," you have all that to back those words up. It's the cost she paid before that imbues the words with value: high cost, deep love.

Hopefully that makes more sense.

robertjpayne
0 replies
18h23m

He uses cost in the context of time and effort, not directly financial.

raincole
1 replies
15h18m

Love is somewhat related to cost, but "proportional" is definitely not the word you want.

If love is proportional to cost, then rapists and psychos who kill their SOs are the true lovers, since the cost is 20 years of jail time to a life sentence. Do you want to live by this standard?

tines
0 replies
13h9m

Actually you prove my point; the psycho loves himself so much that he will risk prison to get what he wants (or keep others from having it), but he doesn't love his SO enough to pay the cost of letting her go.

frereubu
3 replies
19h40m

I think it would be clear that the picture was drawn for the person - I imagine most people would explicitly say something like "I drew this for you" in the accompanying message. And I don't know what kind of kids you've been hanging around, but my daughter would definitely appreciate a story that I spent some time thinking up rather than "here's something ChatGPT came up with". I guess that assumes you're not going to lie to kids about the AI-generated story being yours, but that's another issue entirely.

cchance
2 replies
19h25m

You go into "HOW" you write a poem for your daughter? Are you also explaining and rubbing in how hard you work to get her food on the table? Like wow, the amount of people here that want their "effort" calculated into the "love" cost of something is insane.

I was brought up that the thought is what matters; if I think to call my mom she appreciates it, and I don't need to make some excess effort to show her I love her or show her more love.

You read your daughter a book off your phone that you got for free; is that somehow worth less than a book you went to Barnes and Noble and paid full price for?

its_ethan
0 replies
18h14m

This is because there actually is a calculation that people do between "effort" and "love" (it's not some precise 1:1 ratio you can calculate, but it's real). At least for the vast, vast majority of people with functional interpersonal skills...

It's the difference between calling your mom and just saying "Hi mom, this is me thinking to call you. bye." vs calling her with a prepared thing to say/ask about that you had to take extra time to think about before calling. Effort went into that. You don't need to tell her "HOW" you came up with what you wanted to talk about, but there is a difference in how your call will be received as a result.

If you really believe that sending a text versus a handwritten card makes no difference in how the message is interpreted, you should just know that you are in the minority.

Nition
0 replies
15h58m

With my original bedtime story example, I was actually thinking about the kind of story you make up on the spot. Like the topic request comes at bedtime, and maybe the child even has feedback on how the story should go as you're making it up. The alternative of the parent quickly asking ChatGPT on their phone for a story on the selected topic just seems not as fun and meaningful.

I guess in Apple's example it looks like they're writing it as a document on MacOS, so I suppose they are writing it ahead of time.

cromka
0 replies
19h40m

What a cynical take!

Nition
0 replies
16h38m

I don't truly agree with your take here, but let's assume you are correct and creating real things in your life only benefits you and no-one else. If you create a painting or story or piece of furniture, others prefer the more professional AI or mass-produced version.

In that scenario certainly there'll be times when using the AI option will make more sense, since you usually don't have hours to spare, and you also want to make the stories that your kid likes the most, which in this scenario are the AI ones.

But even then there's still that benefit to yourself from spending time on creating things, and I'd encourage anyone to have a hobby where they get to make something just because they feel like it. Even if it's just for you. It's nice to have an outlet to express yourself.

rising-sky
6 replies
20h28m

You could say the same thing for sending a Happy Birthday text versus a handwritten letter or card. Nothing is stopping a person from sending the latter today, and yes, they are more appreciated, but people also appreciate the text. For example, if you're remote and perhaps don't have that deep of a relationship with them.

sensanaty
2 replies
11h10m

If a friend of mine sent me some AI generated slop for my birthday I'd be more offended than if they just sent me a text that only contains the letters "hb"

Almondsetat
1 replies
9h59m

Birthday cards are slop too

sensanaty
0 replies
7h56m

The messages inside of them, which are presumably not AI-generated, aren't however.

Nition
1 replies
20h18m

I guess the question is, is sending an AI Happy Birthday image better than sending a Happy Birthday text?

cchance
0 replies
19h46m

Nope, they're identical, but the AI one at least looks cool lol

anon22981
0 replies
20h19m

Your analogy does not apply at all.

bredren
6 replies
20h10m

I've got a fairly sophisticated and detailed story world I've been building up with my kid, it always starts the same way and there are known characters.

We've been building this up for some time, this tiny universe is the most common thing for me to respond to "will you tell me a story?" (something that is requested sometimes several times a day) since it is so deeply ingrained in both our heads.

Yesterday, while driving to pick up burritos, I dictated a broad set of detailed points, including the complete introductory sequence to the story to gpt-4o and asked it to tell a new adventure based on all of the context.

It did an amazing job at it. I was able to see my kid's reaction in the reflection of the mirrors and it did not take away from what we already had. It actually gave me some new ideas on where I can take it when I'm doing it myself.

If people lean on gen ai with none of their own personal, creative contributions they're not going to get interesting results.

But I know you can go to the effort to create and create and create and then on top of that layer on gen AI--it can knock it out of the park.

In this way, I see gen AI capabilities as simply another tool that can be used best with practice, like a synthesizer after previously only having a piano or organ.
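
For anyone curious about the mechanics: it really is just "put the established canon into the system prompt and ask for a new installment." A rough sketch with the OpenAI Python client; the model name is what I used, but the story details here are placeholders rather than our actual canon:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder canon: recurring characters, the fixed opening line, house rules.
story_world = """
Every story begins: 'On the edge of the Whispering Woods lived a fox named Pip...'
Recurring characters: Pip the fox, Grandmother Owl, the grumpy river troll.
Rules: gentle stakes, no scary endings, always end back home at the burrow.
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You help a parent tell bedtime stories in an established "
                    "story world:\n" + story_world},
        {"role": "user",
         "content": "Tell a new short adventure where Pip finds a mysterious "
                    "glowing pebble by the river."},
    ],
)
print(response.choices[0].message.content)
```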

Nition
5 replies
19h53m

That's a very valid rebuttal to my comment. I think this kind of "force multiplier" use for AI is the most effective one we have right now; I've noticed the same thing with GPT-4 for programming. I know the code well enough to double check the output, but AI can still save time in writing it, or sometimes come up with a strategy that I may not have.

Maybe the fact that you did the dictation together with your child present is also notable. Even though you used the AI, you were still doing an activity together and they see you doing it for them.

baapercollege
2 replies
12h55m

In fact, by allowing people to generate photos for birthday wishes, Apple is elevating the bottom line, not lowering the top line. The person who wants to put in the effort and send a hand-drawn image would often not want to resort to a ready-made machine creation. OTOH, the simple "HBD Mom" sender would now send "Happy Birthday Mom <genmoji>" and an image...

s3p
1 replies
3h32m

Oh god... if someone sent me AI-generated slop for my birthday I would be bothered. A simple happy birthday is fine!

solarmist
0 replies
3h7m

What about things like GIPHY reactions? I'm guessing you're not a fan of those either, or of using quotes from well-known people. These shortcuts have existed as long as people have been writing or drawing. They just get easier and more powerful over time.

I view this as just extending that to custom reactions and making them more flexible, expanding their range of uses.

tkgally
0 replies
17h59m

Meta comment: This back-and-forth between Nition and bredren is one of the best exchanges I’ve read on HN recently. Thanks to both of you.

bradgessler
0 replies
17h48m

The best articulation of what the industry is currently calling “AI” is “augmented intelligence”—this wording captures that these are tools that can enhance intelligence, not replace it or come up with its own ideas.

nperrier
1 replies
20h17m

I would argue the same thing applies when you buy a card from Hallmark

Nition
0 replies
20h4m

I sometimes think the physical world has been going through a similar time, where most of what we own and receive is ephemeral, mass-produced, lacking in real significance. We have a lot more now but it often means a lot less.

skybrian
0 replies
20h9m

The value of a gift isn't solely on how much you worked on it or what you spent on it. It can also be in picking out the right one, if you picked something good.

Context will be more important when the gift itself is easy.

rldjbpin
0 replies
42m

having been bombarded with forwards of "good morning" image greetings from loved ones on a daily basis, i can definitely attest to this sentiment.

ai spam, especially the custom emoji/stickers will be interesting in terms of whether they will have any reusability or will be littered like single-use plastic.

burningChrome
21 replies
23h6m

Am I the only person who's reached their threshold on companies forcing and shoving AI into every layer and corner of our lives?

I don't even look at this stuff anymore or see the upside to any of it. AI went from "This is kinda cool and quaint" to "You NEED this in every single aspect of your life, whether you want it or not." AI has become so pervasive and intrusive that I've stopped seeing the benefits of any of it.

wilg
7 replies
22h58m

The new generative AI stuff has barely been implemented in most products; I don't know how you are experiencing it as pervasive and intrusive. Are you sure you're not just cynical from the flood of negative news stories about AI?

mcpar-land
3 replies
22h43m

this being a news thread about Apple integrating AI into all their operating systems and apps aside... Chrome has started prompting me to use generative AI in text boxes. Twitter (X) has an entire tab for Grok that it keeps giving me popup ads for. Every single productivity suite (Notion, Monday, Jira) is perpetually prompting me to summarize my issue with AI. GitHub has banner ads for Copilot. It is everywhere.

ethbr1
2 replies
22h41m

Summarization was implemented everywhere because it was the easiest AI feature to ship when a VP screamed "Do AI, so our C-suite can tell investors we're an AI company!"

kristofferR
1 replies
20h9m

Summarization is damn useful, though. It has solved clickbait and TLDR-spam, now you can always know if something is worth watching/reading before you do.

ukuina
0 replies
14h25m

Agreed, the dehyping of article titles is one of the main reasons I built hackyournews.com, and the avoidance of clickbait via proactive summarization is consistently rewarding.

thuuuomas
1 replies
22h25m

Are you sure you’re not optimistic just bcuz you stand to materially benefit from widespread adoption of chatgpt wrappers?

wilg
0 replies
21h42m

How would I materially benefit?

lottin
0 replies
22h4m

AI doesn't have to be intrusive but this "personal assistant" stuff, which is what they're marketing to the general public at the moment, certainly is.

blibble
6 replies
22h53m

if I can't turn 100% of this botshit off then my iphone's going in the bin

I'll go back to a dumbphone before I feed the AI

dieortin
5 replies
22h28m

You’re not feeding anything by having this feature turned on

blibble
2 replies
22h21m

I have zero confidence in any privacy or contractual guarantees being respected by the parasitic OpenAI

theswifter01
0 replies
21h25m

And they're parasitic how exactly? Even if they do collect every single one of my prompts, the benefit of ChatGPT outweighs my data being sold.

ru552
0 replies
21h55m

You have to acknowledge a pop-up authorizing your request to be sent to OpenAI every single time it happens. It's not going to happen by mistake.

ethagnawl
1 replies
22h15m

Right. This thread on the other hand ...

blibble
0 replies
22h13m

I have curtailed my internet commenting considerably in the last 12 months

it is now almost exclusively anti-AI, which funnily enough I don't mind them training on

warkdarrior
0 replies
23h1m

They are not making it mandatory to use, just widely available through various interfaces. I see this closer to how spellcheck was rolled out in word processors, then editors, then browsers, etc.

pndy
0 replies
11h23m

> Am I the only person who's reached their threshold on companies forcing and shoving AI into every layer and corner of our lives?

After a random update, my bank's app received an AI assistant out of the blue, supposedly to help their clients.

At first I was interested in how these algorithms could enhance apps and services, but now this does indeed feel like shoving AI everywhere possible, even if it doesn't make any sense; as if companies are trying to shake a rattle over your baby's cradle to entertain it.

Aside from the above, I was hoping that after this WWDC Siri would get more languages so I could finally give it instructions in my native language and make it actually more useful. But instead there are generated emoticons coming (I wonder if people even remember that word). I guess chasing the hottest trends seems more important to Apple.

moralestapia
0 replies
22h9m

You're in for a ride.

This barely scratches the surface on how much AI integration there's going to be in the typical life of someone in the 2030s.

lancesells
0 replies
22h35m

I feel like this WWDC kind of solidified that these corporations really don't know what to do with AI or aren't creative enough. Apple presented much better AI features that weren't called AI than the "summarize my email" and "generate an ugly airbrushed picture you buy at the mall kiosk to send to your mom".

All of these "make your life easier" features really show that no tech is making our lives simpler. Task creation is maybe easier but task completion doesn't seem to be in the cards. "Hey siri, summarize my daughters play and let me know when it is and how to get there" shows there's something fundamentally missing in the way we're living.

epistasis
0 replies
22h26m

Currently, AI use has a "power user" requirement. You have to spend a lot of time with it to know what it is and is not capable of, how to access those hidden capabilities, and be very creative at applying it in your daily life.

It's not unlike the first spreadsheets. Sure, they will some day benefit the entire finance department, but at the beginning only people who loved technology for the sake of technology learned enough about them to make them useful in daily life.

Apple has always been great at broadening the audience of who could use personal computing. We will see if it works with AI.

I think it remains to be seen how broadly useful the current gen of AI tech can be, and who it can be useful for. We are in early days, and what emerges in 5-10 years as the answer is obvious to almost no one right now.

acjohnson55
0 replies
22h32m

I'm resistant, too. I think from a number of reasons:

- So far, the quality has been very hit or miss, versus places where I intentionally invoke generative AI.

- I'm not ready to relinquish my critical thinking to AI, both from a general perspective, and also because it's developed by big companies who may have different values and interests than me.

- It feels like they're trying to get me to "just take a taste", like a bunch of pushers.

- I just want more/better of the right type of features, not a bunch of inscrutable magic.

maz1b
17 replies
23h29m

Gotta say, from a branding point of view, it's completely perfect. Sometimes things as "small" as the letters in a company's name can have a huge impact decades down the road. AI == AI, and that's how Apple is going to play it. That bit at the end where it said "AI for the rest of us" is a great way to capture the moment, and probably suggests where Apple is going to go.

IMO, Apple will gain expertise to serve a monster level of scale for more casual users that want to generate creative or funny pictures, emojis, do some text work, and enhance quality of life. I don't think Apple will be at the forefront of new AI technology to integrate into user-facing features, but if they are to catch up, they will have to get to the forefront of the same technologies to support their unique scale.

It was a notable WWDC. I was curious to see what they would do with the Mac Studio and Mac Pro, but there was nothing about the M3 Ultra or M4 Ultra, or the M3/M4 Extreme.

I also predicted that they would use their own M2 Ultras and whatnot to support their own compute capacity in the cloud, and interestingly enough it was mentioned. I wonder if we'll get more details on this front.

peppertree
9 replies
22h47m

I think the biggest announcement was the private compute cloud with Apple Silicon. Apple is building up internal expertise to go after Nvidia.

dmix
4 replies
21h39m

Can you explain what that means for someone who missed part of the video today?

theshrike79
3 replies
21h33m

The Apple Intelligence cloud system uses Apple's own M-series chips, not Nvidia.

ismepornnahi
2 replies
21h5m

Because they will be running inference using much smaller models than GPT 4.

int_19h
1 replies
15h5m

At least they are honest about it in the specs that they have published - there's a graph there that clearly shows their server-side model underperforming GPT-4. A refreshing change from the usual "we trained a 7B model and it's almost as good as GPT-4 in tests" hype train.

(see "Apple Foundation Model Human Evaluation" here: https://machinelearning.apple.com/research/introducing-apple...)

theshrike79
0 replies
14h1m

Yea, their models are more targeted. You can't ask Apple Intelligence/Siri about random celebrities or cocktail recipes.

But you CAN ask it to show you all pictures you took of your kids during your vacation to Cabo in 2023 and it'll find them for you.

The model "underperforms", but not in the ways that matter. This is why they partnered with OpenAI, to get the generic stuff included when people need it.

yborg
1 replies
20h28m

Isn't it also that Nvidia chips are basically unobtainable right now anyway?

solarmist
0 replies
2h55m

Yeah, but Apple wouldn’t care either way. They do things for the principle of it. “We have an ongoing beef with NVIDIA so we’ll build our own ai server farms.”

teruakohatu
1 replies
20h27m

Apple have a long antagonistic relationship with NVIDIA. If anything it is holding Apple back, because they don't want to go cap in hand to NVIDIA and say "please sir, can I have some more".

We see this play out with the ChatGPT integration. Rather than Apple hosting GPT-4o themselves, OpenAI is. Apple is providing NVIDIA-powered AI models through a third party, somewhat undermining the privacy-first argument.

hmottestad
0 replies
20h10m

Rumours say that Apple has bought a lot of GPUs from Nvidia in the last year or so in order to train their own models.

tonynator
2 replies
16h15m

Yeah, real smart move to make your product's initials unusable and unsearchable. Apple has done it again.

iansinnott
1 replies
16h4m

Indeed. I suppose they are hoping people will associate the two letters with their thing rather than the original acronym.

zarzavat
0 replies
5h57m

People will just call it Apple AI like ATM machine.

buildbot
2 replies
23h14m

Yeah I feel like we are getting the crumbs for a future hardware announcement, like M4 ultra. They’ll announce it like “we are so happy to share our latest and greatest processor, a processor so powerful, we’ve been using it in our private AI cloud. We are pleased to announce the M4 Ultra”

samatman
1 replies
22h30m

It was speculated when the M4 was released only for the iPad Pro that it might be out of an internal need on Apple's part for the bulk of the chips being manufactured. This latest set of announcements gives substantial weight to that theory.

hawski
0 replies
21h31m

I see what they did here and it is smart, but can bring chaos. On one side it is like saying "we own it", but on the other hand it is putting a brand outside of their control. Now I only hope people will not abbreviate it with ApI, because it will pollute search results for API :P

windowshopping
16 replies
21h47m

The "Do you want me to use ChatGPT to do that?" aspect of it feels clunky as hell and very un-Apple. It's an old saw, but I have to say Steve Jobs would be rolling over in his grave at that. Honestly confused as to why that's there at all. Could they not come up with a sufficiently cohesive integration? Is that to say the rest ISN'T powered by ChatGPT? What's even the difference? From a user perspective that feels really confusing.

theshrike79
7 replies
21h31m

From a user perspective it's 100% clear.

If the system doesn't say "I'm gonna phone a friend to get an answer for this", it's going to stay either 100% local or at worst 100% within Apple Intelligence, which is audited to be completely private.

So if you're asking for a recipe for banana bread, going to ChatGPT is fine. Sending more personal information might not be.

windowshopping
4 replies
21h18m

I just don't think the average user cares enough to want this extra friction. It's like if every time you ran a google search it gave you lower-quality results and you had to click a "Yes, give me the better content" option every time to get it to then display the proper results. It's just an extra step which people are going to get sick of very fast.

You know what it's really reminiscent of? The EU cookies legislation. Do you like clicking "Yes I accept cookies" every single time you go to a new website? It enhances your privacy, after all.

IMTDb
1 replies
19h21m

In theory there isn't. In practice, >99% of the websites I visit have a cookie banner thingy. Including the EU's own website (https://european-union.europa.eu/index_en).

Think about it: even a government agency isn't able to produce a simple static web page without having to display that cookie banner. If their definition of "bad cookies that require a banner" is so wide that even they can't work around it to correctly inform citizens - without collecting any private data, displaying any ads, or reselling anything - maybe the definition is wrong.

For all intents and purposes, there is a cookie banner law.

theshrike79
0 replies
14h3m

The cookie banners are a cargo cult.

Someone somewhere figured out that it might be a good idea and others just copied it.

adrianmsmith
0 replies
10h42m

It's interesting you phrase it that way, because that's sort of how DuckDuckGo works with their !g searches. I'm not saying that's good or bad, it's just an observation.

rohitpaulk
1 replies
20h48m

Still involves friction. A more "seamless" way for Apple to do this would've been to license GPT-4's weights from OpenAI and run it on Apple Intelligence servers.

asadm
0 replies
19h58m

But that restricts it to just OpenAI.

I want to use Perplexity from Siri too!

dmix
1 replies
21h37m

I thought it was the smartest and most pragmatic thing they've announced.

Being best in class for on-device AI is a huge market opportunity. Trying to do it all themselves would be dumb, like launching Safari without a Google search homepage partnership.

Apple can focus on what they are good at, which is the on-device stuff and blending AI into their whole UX across the platform without compromising privacy, and then take advantage of a market leader for anything requiring large external server farms and data sent across the wire for internet access, like AI search queries.

FinnKuhn
0 replies
21h22m

I think they also announced the possibility of integrating Siri with AI platforms other than ChatGPT, so this prompt would be especially useful to make clear to the user which of these AIs Siri wants to use.

xanderlewis
0 replies
21h38m

I agree. Quite odd and not very Apple-ish. I wonder if there’s some good reason for it; it must have been debated internally.

fckgw
0 replies
21h24m

It's a clear delineation between "My data is on my device or within Apple's ecosystem" and "My data is now leaving Apple and going to a 3rd party"

empath75
0 replies
21h20m

They'll probably add an option to disable that prompt at some point. I'm glad it is the default behavior, though.

dag11
0 replies
21h43m

What? The original Siri asked if the user wanted to continue their search on the web if it couldn't handle it locally. It was one of the last things from the Jobs era.

chrisBob
0 replies
21h28m

Apple is touting the privacy focus of their AI work, and going out to ChatGPT breaks that. I would be reluctant to use any of their new AI features if it weren't for that prompt breaking the flow and making it clear when they are getting results from ChatGPT.

0xCMP
0 replies
21h30m

At the core of everything they presented is privacy. Yes the point is that most questions are answered locally or via the Private Compute system.

More specifically "is openai seeing my personal data or questions?" A: "No, unless you say it's okay to talk to OpenAI everything happens either on your iPhone or in Private Compute"

htrp
11 replies
22h22m

Apple Intelligence is free for users, and will be available in beta as part of iOS 18, iPadOS 18, and macOS Sequoia this fall in U.S. English. Some features, software platforms, and additional languages will come over the course of the next year. Apple Intelligence will be available on iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac with M1 and later, with Siri and device language set to U.S. English. For more information, visit apple.com/apple-intelligence.

iPhone 15 Pro: 8 GB RAM (https://www.gsmarena.com/apple_iphone_15_pro-12557.php)

iPhone 15: 6 GB RAM (https://www.gsmarena.com/apple_iphone_15-12559.php)

dudus
4 replies
21h35m

English only? That is surprising

theshrike79
2 replies
21h17m

As long as they don't geolock it to "english speaking" countries, I'm fine with that.

TillE
1 replies
20h49m

As far as I'm aware, the only time Apple has implemented that kind of restriction is with their DMA compliance. Like, I used the US App Store (with a US credit card) while physically in Europe for many years.

theshrike79
0 replies
14h11m

And Apple Fitness and Apple News.

I can follow workout instructions in English, as can my kids. But Apple has decided that Apple One is more shit over here for some reason.

c1sc0
0 replies
21h28m

The platform talk had a bit more architectural details and it looks like they heavily optimize / compress the Foundation model to run for specific tasks on-device. I'm guessing that sticking to US English allows them to compress the foundation model further?

elAhmo
3 replies
20h57m

I am quite disappointed that the 14 Pro is not supported. So much power, but they decided not to support any of the older chips.

hbn
1 replies
20h26m

The 15 Pro's SoC has an extra 2 GB of RAM, which could very well be make-or-break for running a local model, since those tend to be very memory-constrained.

luigi23
0 replies
16h34m

It's about the 15 Pro having a 2x more powerful Neural Engine.

kolinko
0 replies
10h48m

It’s a matter of RAM most likely - models require crazy amounts of ram, and I bet they had to struggle to fit them on 15’s pro 8GB.

iLoveOncall
0 replies
21h5m

That's a good reason not to upgrade my iPhone 13!

Jtsummers
0 replies
22h3m

Along with a 2GB RAM difference, they have different processors (A17 vs A16).

https://en.wikipedia.org/wiki/Apple_A17

Per the comparison table on that page, the "Neural Engine" has double the performance in the A17 compared to the A16, which could be the critical differentiator.

machinekob
9 replies
23h26m

Microsoft Recall => bad. Apple Recall => good.

minimaxir
4 replies
23h12m

The massive difference here is that Apple Recall is 100% on device. (for the use cases they demoed anyways)

EDIT: Yes, I'm wrong.

sseagull
2 replies
22h57m

Microsoft Recall is completely on-device (or so they say).

skydhash
1 replies
22h40m

It's mostly the screenshot thing that gets people. Semantic search is OK if the index is properly secured and privacy is treated as a concern, and localized context is OK too (summarizing one web page does not screenshot my whole screen). I believe Microsoft went with building the easiest option (recording everything) instead of thinking about better contextual integration.

anonbanker
0 replies
21h52m

Those are pretty big ifs when you have a WebKit- or Blink-based browser on the same device.

Foe
0 replies
22h32m

Isn't Microsoft Recall also 100% on device?

fh9302
1 replies
22h19m

Apple does not take screenshots every couple seconds, unlike Microsoft. That's what people were bothered about.

anonbanker
0 replies
21h53m

That was merely one aspect of what people were bothered about. The most obvious one.

samatman
0 replies
22h17m

Two companies who have earned very different reputations over the decades, will elicit rather different reactions when announcing similar features, yes.

I also missed the part of the linked article where it says that my Mac is going to take a screenshot every few seconds and store it for three months.

pbronez
0 replies
22h41m

Yup, this is the fascinating thing to me. Looking forward to some detailed comparisons between the two architectures.

zmmmmm
8 replies
19h28m

It's interesting to see Apple essentially throw in the towel on on-device compute. I fully expected them to announce a stupendous custom AI processor that would do state of the art LLMs entirely local. Very curious if they are seeing this as a temporary concession or if they are making a fundamental strategic shift here.

The problem is, regardless how hard they try, I just don't believe their statements on their private AI cloud. Primarily because it's not under their control. If governments or courts want that data they are a stroke of the pen away from getting it. Apple just can't change that - which is why it is surprising for me to see them give up on local device computing.

xcv123
6 replies
19h22m

Why would you expect that? State of the art LLMs need a GPU with hundreds of GB of RAM costing tens of thousands of dollars. Apple doesn't have magical supernatural powers. They are still bound by the laws of physics. Siri is still running on device (mostly) and is quite powerful with this new update.

password54321
3 replies
19h20m

"There is no way we can reduce the size of transistors" - You in the 20th century.

xcv123
2 replies
19h7m

Apple uses TSMC for fabrication. The roadmap for TSMC and Intel are planned years in advance.

Two orders of magnitude improvement in 6 months? Not possible. Have you heard of Moore's Law? Maybe in 20 years.

https://en.wikipedia.org/wiki/Moore%27s_law

password54321
1 replies
18h59m

- Doesn't read article

- Doesn't understand not every LLM needs to be ChatGPT

- Links Moores Law wikipedia

I give up.

xcv123
0 replies
18h56m

Try reading the thread before mindlessly replying.

"I fully expected them to announce a stupendous custom AI processor that would do state of the art LLMs entirely local."

State of the art LLM means GPT-4 or equivalent. Trillion+ parameters. You won't run that locally on an iPhone any time soon.
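As a rough back-of-the-envelope check on why (illustrative numbers only, not anything Apple or OpenAI has published), even an aggressively quantized trillion-parameter model's weights alone dwarf a phone's memory; a quick Swift calculation:

    // Hypothetical figures: a 1-trillion-parameter model at 4-bit quantization.
    let parameters = 1e12                 // ~1 trillion weights
    let bytesPerParameter = 0.5           // 4 bits = half a byte per weight
    let weightGigabytes = parameters * bytesPerParameter / 1e9
    print(weightGigabytes)                // ~500 GB of weights vs ~8 GB of RAM on an iPhone 15 Pro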

zmmmmm
1 replies
18h21m

In the past Apple has made the choice to gimp their functionality rather than send data off-device, which is one of the reasons Siri has sucked so badly. This seems like a distinct change, finally conceding that they just can't do it on device and be competitive. But I foresee they now have a much more challenging story to tell from a marketing point of view, now that they can no longer clearly and simply tell people information doesn't leave their device.

xcv123
0 replies
18h15m

I've been using Siri more often recently and surprised at how capable it is for something that runs entirely on a phone. The speech recognition is perfect and it can do basic tasks quite well. Send messages, lookup word definitions, set timers and alarms, check the weather, adjust timers/alarms, control Spotify, call people, adjust the brightness and sound level, control lighting in the lounge, create and read notes or reminders, etc. It all works.

password54321
0 replies
19h24m

"A cornerstone of Apple Intelligence is on-device processing, and many of the models that power it run entirely on device."

nerdjon
8 replies
23h22m

Said this in the other thread, but I am really bothered that image generation is a thing but also that it got as much attention as it did.

I am worried about the reliability: if you are relying on it giving important information without checking the source (like a flight), then that could lead to some bad situations.

That being said, the polish and actual usefulness of these features is really interesting. It may not have some of the flashiest things being thrown around but the things shown are actually useful things.

Glad that ChatGPT is optional each time Siri thinks it would be useful.

My only big question is, can I disable any online component and what does that mean if something can't be processed locally?

I also have to wonder, given their talk about the servers running the same chips: is it just that the models can't run locally, or is it possibly context related? I am not seeing anything about whether it is entire features or just some requests that go to the cloud.

I wonder if that implies that over time different hardware will run different levels of requests locally vs the cloud.

skybrian
2 replies
21h35m

Regarding image generation, it seems the Image Playground supports three styles: Animation, Illustration, or Sketch.

Notice what's missing? A photorealistic style.

It seems like a good move on their part. I'm not that wild about the cartoon-ification of everything with more memes and more emojis, but at least it's obviously made-up; this is oriented toward "fun" stuff. A lot of kids will like it. Adults, too.

There's still going to be controversy because people will still generate things in really poor taste, but it lowers the stakes.

maronato
1 replies
19h14m

I noticed that too, but my conclusion is that they probably hand-picked every image and description in their training data so that the final model doesn’t even know what the poor taste stuff is.

0xfae
0 replies
52m

Exactly. And, I'm assuming, it will reject any prompts or words in poor taste.

latexr
1 replies
23h7m

> I wonder if that implies that over time different hardware will run different levels of requests locally vs the cloud.

I bet that’s going to be the case. I think they added the servers as a stop-gap out of necessity, but what they see as the ideal situation is the time when they can turn those off because all devices they sell have been able to run everything locally for X amount of time.

solarmist
0 replies
2h48m

I’m betting 100% on this. And I think the new sidecar controlling your phone is an example of where they’re going in reverse.

If you have an M6 MacBook/iPad Pro, it'll run your AI queries there if you're on the same network, in two to four years.

skydhash
0 replies
23h15m

> I am worried about the reliability: if you are relying on it giving important information without checking the source (like a flight), then that could lead to some bad situations.

I think it shows the context for the information it presents, like the messages, events, and other stuff, so you can quickly check if the answer is correct. It's more about semantic search, but with more flexible text describing the result.

kylehotchkiss
0 replies
19h33m

> I am worried about the reliability: if you are relying on it giving important information without checking the source (like a flight), then that could lead to some bad situations.

I am worried about the infinite ability of teenagers to hack around the guardrails and generate some probably-not-safe-for-school images for the next 2 years while Apple figures out how to get them under control.

intended
0 replies
10h8m

They hid the workaround for this: it's going to be available in US English first, and then in other locations over the coming year.

That can mean never. LLMs degrade fast as you move away from high-resource languages.

jl6
7 replies
22h44m

I can see people using Rewrite all the time. In the grim darkness of the AI future, your friends speak only in language that is clean, sanitized, HR-approved, and soulless.

dinkleberg
0 replies
17h12m

That perfectly describes how I feel about all of this.

I'm sure that there will be lots of genuinely useful things that come out of this AI explosion, but can't help but be a bit saddened by what we're losing along the way.

Of course I can choose not to use all of these tools and choose to spend time with folks of a similar mindset. But in the working world it is going to be increasingly impossible to avoid entirely.

kylehotchkiss
1 replies
19h31m

Young people already seem bothered by how pristine/flawless modern photography looks and seem increasingly obsessed with using film cameras/camcorders to be more authentic or whatever pleasing attribute they find in that media. I think they'll respond with more misspellings and sloppier writing to appear more authentic

tavavex
0 replies
15h10m

As one of these young people, you're way overestimating the popularity of these trends. There are always some "we gotta go back"-centered communities lingering in the background, but digital vs analogue photography isn't even a close match-up. People who want to get more into photography are far more likely to buy a good digital camera than a film camera.

twoWhlsGud
0 replies
22h28m

At work, yes. However, it won't be long until the language you speak will become a feature of your ML driven consumer language service. There will likely be products that reflect your style/ identity/ whatever. And once you reach a certain socioeconomic level, you'll speak a highly customized bespoke dialect that reflects your station in life, just like today but much, much weirder…

glial
0 replies
21h13m

People already use words like 'product', 'content', 'feature', and 'vehicle' in everyday conversation. It makes me shudder every time.

TillE
0 replies
20h47m

I feel like this is an awful feature for your native language, but fantastically exciting for a second language where you're not quite fluent and need to be able to write coherently.

ayakang31415
7 replies
23h27m

There was one part that I didn't understand about AI compute: for certain tasks, server-side compute will be used because the on-device chip is not powerful enough, I suppose. How does this ensure privacy in a verifiable manner? How do you CONFIRM that your data is not shared when cloud computing is involved in AI tasks?

tom1337
6 replies
23h17m

Your data is being shared. But they've shown that it is being done in a way where only the required data leaves the device, and there are some protections in place which try to minimize misuse of the data (the OS will only communicate with publicly signed versions of the server, for example). The call to Apple's "Private Cloud Compute" is not transparent to the user; ChatGPT calls need permission, if I understood it correctly.

ayakang31415
2 replies
23h12m

So it is not really private then.

pertymcpert
0 replies
12h35m

What does private mean? If I store my children's photos on iCloud encrypted, that's not private?

Spivak
0 replies
23h1m

I think it's a semantic thing at this point. If for you private can't mean plaintext living on a computer you don't control then no. If it's private in the way your iCloud photos are private then yes, and seemingly more so.

AshamedCaptain
2 replies
22h55m

> the OS will only communicate with publicly signed versions of the server, for example

This hardly increases security, and does not increase privacy at all. If anything it provides Apple with an excuse that they will throw at you when you ask "why can't I configure my iOS device to use my servers instead of yours?" , which is one of the few ways to actually increase privacy.

This type of BS should be enough to realize that all this talk of "privacy" is just for the show, but alas...

theshrike79
1 replies
20h55m

Can you configure a Google phone to use your servers instead of theirs for Google Assistant requests?

AshamedCaptain
0 replies
19h47m

I don't know what your argument was going to be if I said "no", but in any case, the answer is yes, you can. You can even entirely uninstall Google Assistant and replace it with your own software, and you do not lose any functionality of the device nor require access to private hooks to do that. I do that myself.

doawoo
6 replies
22h6m

Happy as long as there is a switch to toggle it all off somewhere. I find very little of this useful. Maybe someone does, and that’s great!

And my concern isn’t from a privacy perspective, just a “I want less things cluttering my screen” perspective.

So far though it looks like it’s decent at being opt-in in nature. So that’s all good.

doutatsu
2 replies
21h34m

I feel like this is actually the thing you want when you say "less things cluttering my screen".

Siri can now be that assistant that summarises or does things which would otherwise make you go through various screens or apps. It feels like it reduces clutter rather than increasing it, imo.

doawoo
1 replies
21h2m

I simply cannot agree, but again, it's a personal thing. I never ever find voice interfaces useful though...

Aside: when the presenter showed the demo of her asking Siri to figure out the airport arrival time and then gloated that it "would have taken minutes" to do on her own... I sat there and just felt so strongly that I don't want to optimize out every possible opportunity to think or work out a problem in my life for the sake of "keeping on top of my inbox".

I understand the value of the tools. But I think overall nothing about them feels worth showing even more menus for me to tick through to make the magic statistical model spit out the tone of words I want... when I could have just sat there, thought about my words and the actual, real, human person I'm talking to, and rephrased my email by hand.

deergomoo
0 replies
19h56m

> I don't want to optimize out every possible opportunity to think or work out a problem in my life for the sake of "keeping on top of my inbox"

Completely agree. My first thought on seeing this stuff is that it suggests we, as an industry, have failed to create software that fulfils users’ needs, given we’re effectively talking about using another computer to automate using our computers.

My second thought is that it’s only a matter of time before AI starts pushing profitable interests just like seemingly all other software does. How long before you ask some AI tool how to achieve something and it starts pitching you on CloudService+ for only 4.99 per month?

theshrike79
0 replies
21h15m

It's actually taking LESS screen space, because "Siri" is now just a glowing edge on your screen.

And good news! You can clear your homescreen too fully from all icons now =)

Optimal_Persona
0 replies
21h29m

My thoughts exactly, as someone who manages 145 iPhones for a health-care org, all of this stuff needs to be completely blockable and granularly manageable in Mobile Device Management or things could go very, very wrong compliance-wise.

LogHouse
0 replies
21h59m

Strong agree here. Features are cool, but I value screen real estate and simplicity. Plus, the gpt app works fine for me. I don’t need it built into other things yet.

duskhorizon2
4 replies
21h51m

Some generative AI features are quite useful. I'm already using AI to generate icons for my apps and to write nonsense legalese. But it's one thing when I'm explicitly creating an image by prompting a third-party server, and another when AI indexes and uploads all my private documents to the cloud. Apple promised: "Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection." There are so many questions: Who are these experts? Can I be one of these experts? Will the server software be open sourced? Well, I will postpone my fears until Apple rolls out AI on devices, but for now this looks like a privacy nightmare, much like Microsoft's Recall. I'm afraid that without homomorphic encryption a private cloud is a sad joke.

JumpCrisscross
3 replies
21h46m

> write nonsense legalese

Oh boy. Someone is going to make a lot of money in court finding people who did this.

duskhorizon2
2 replies
21h37m

Nope. I'm not in USA ;)

JumpCrisscross
1 replies
21h27m

> not in USA

If you’re somewhere where contracts have meaning, it’s a true statement.

duskhorizon2
0 replies
21h23m

Well, if contracts have meaning, I will not use AI. But the App Store, for example, requires a privacy policy, and that one AI wrote.

zombiwoof
3 replies
18h47m

Apple promising in 8-12 months what others have today. Although Apple marketed it better.

Google didn't have the brass balls to call it "Alphabet Intelligence" !!!

Etheryte
2 replies
18h44m

No one else is doing this stuff without sending your data off to a remote server. This is a crucial distinction, especially when it comes to personal data.

Slyfox33
1 replies
15h19m

But Apple's implementation also sends stuff to remote servers.

"To run more complex requests that require more processing power, Private Cloud Compute extends the privacy and security of Apple devices into the cloud to unlock even more intelligence. With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests."

Etheryte
0 replies
9h2m

For complex queries, yes, but everything they can do on-device, they do on-device. No one else does that; even if you ask ChatGPT what 2 + 2 is, it goes to their servers.

resharpe105
3 replies
22h0m

The key question is, will there be a hard switch to only ever use on-device processing?

If not, and if you don't want practically every typed word to end up on someone else's computer (as the cloud is just that), you'll have to drop iOS.

As for me, that leaves a choice between a dumbphone and GrapheneOS. I'm just thrilled with these choices. :/

LogHouse
2 replies
21h57m

It’s not sending every word to the cloud. I think you must invoke the AI features. Am I wrong?

resharpe105
1 replies
21h47m

I understood that it will have the full context of the data on your phone, in order to be "useful".

We have yet to see if that means only the data you've invoked AI features for, or the totality of your emails, notes, messages, transcripts of your audio, etc.

dialup_sounds
0 replies
21h5m

From the presentation it sounds like the on-device model determines what portion of the local index is sent to the cloud as context, but is designed for none of that index to be stored in the cloud.

So (as I understand it) something like "What time does my Mom's flight arrive?" could read your email and contacts to find the flight on-device, but necessarily has to send the flight information and only the flight information to answer the arrival time.
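A very rough Swift sketch of that idea (hypothetical types and function names, not Apple's actual API): the on-device step narrows the personal index down to the one record the question needs, and only that snippet is handed to the server-side model.

    import Foundation

    // Hypothetical record produced by on-device indexing of email/messages.
    struct Flight {
        let number: String
        let arrival: Date
    }

    // Stand-in for a call to the larger server-side model; it only ever sees `context`.
    func cloudModelAnswer(question: String, context: String) async -> String {
        return "Answer derived from: \(context)"
    }

    func answerArrivalQuery(_ question: String, localIndex: [Flight]) async -> String {
        // Step 1 (on device): find the single relevant record in the personal index.
        guard let flight = localIndex.first(where: { question.contains($0.number) }) else {
            return "No matching flight found on device."
        }
        // Step 2 (cloud): send only that record as context, never the whole index.
        let snippet = "Flight \(flight.number) arrives at \(flight.arrival)"
        return await cloudModelAnswer(question: question, context: snippet)
    }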

losvedir
3 replies
22h38m

> Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.

Technically, the sentence could be read as saying that experts inspect the code, and the client uses TLS and CAs to ensure it's only talking to those Apple servers. But that's pretty much the status quo and uninteresting.

It sounds like they're trying to say that somehow iPhone ensures that it's only talking to a server that's running audited code? That would be absolutely incredible (for more things than just running LLMs), but I can't really imagine how it would be implemented.
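One general shape such a scheme can take (purely illustrative, not Apple's actual protocol) is hardware attestation plus a transparency log: the server presents a signed measurement of the software image it booted, and the client refuses to send anything unless that measurement appears in a publicly auditable log of released builds. A minimal Swift sketch of the client-side check:

    import Foundation

    // Hypothetical attestation report: a hash of the booted server image, assumed
    // here to already carry a verified signature chain from the server hardware.
    struct AttestationReport {
        let softwareMeasurement: Data
    }

    // Only talk to servers whose measurement appears in the public transparency log.
    func shouldSendRequest(to report: AttestationReport,
                           publishedMeasurements: Set<Data>) -> Bool {
        // A real system must also verify the report's signature against trusted
        // hardware roots and check freshness/nonces; this only shows the
        // log-membership check.
        return publishedMeasurements.contains(report.softwareMeasurement)
    }

The unavoidable catch, as the reply below notes, is that you still have to trust the hardware producing the measurement.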

Hizonner
2 replies
22h1m

> I can't really imagine how it would be implemented.

People do stuff that they claim implements it using trusted, "tamperproof" hardware.

What they're ignoring is that not all of the assurance is "cryptographic". Some of it comes from trusting that hardware. It's particularly annoying for that to get glossed over by a company that proposes to make the hardware.

You can also do it on a small scale using what the crypto types call "secure multiparty computation", but that has enormous performance limitations that would make it useless for any meaningful machine learning.

warkdarrior
1 replies
21h34m

There is no known solution to remote software attestation that does not depend on trusted hardware.

Hizonner
0 replies
21h30m

That's correct. But Apple is not making that clear, and is therefore misrepresenting what assurance can be offered.

hawski
3 replies
21h21m

This ramping-up AI war will take no prisoners. I am not an Apple customer in any way, I am in Google's ecosystem, but I feel that I need to make an exit, at least for some essentials, preferably this year.

My e-mail, my documents, my photos, my browsing, my movement. The first step for me was setting up Syncthing and it was much smoother than I initially thought. Many steps to go.

sircastor
1 replies
21h5m

I haven’t adopted passcodes, and moved all my email out of gmail to a private domain. Photos backup t to my NAS. I’m terrified of the automated systems deciding I’m a bad actor.

I can’t help but think it’ll get worse with AI

its_ethan
0 replies
20h36m

Not that you shouldn't do it, but too much of an active effort or obsession with not using standard e-mail services or photo back ups is probably a faster way to get flagged as suspicious lol

jwrallie
0 replies
19h55m

For things that don’t leave your system it’s ok, but the moment you send something to others it will go into the systems that you try to avoid anyway.

Mostly I see no point in things like email self hosting if half my contacts are on Gmail and the other half on Microsoft.

My suggestion (as someone that tried to escape for some time) is to build a private system for yourself (using private OS and networks) and use a common system to interface with everyone else.

fdpdkf
3 replies
22h2m

I find the removing-people-from-photos thing creepy. Yes, you can remove others to see only your family, but forging reality to conform to what you wish is disturbing, I think.

standardUser
0 replies
21h52m

Maybe it will remind people that we should never have been mistaking recorded media for reality in the first place, a lesson we've been learning since at least 1917...

https://en.wikipedia.org/wiki/Cottingley_Fairies

qeternity
0 replies
22h0m

Photos are already just one perspective on reality. Instagram has shown that to be painfully true. This is merely a continuation of that.

We all experience our own reality individually.

guyforml
0 replies
21h42m

We've had Photoshop for decades now.

tsunamifury
2 replies
23h8m

This isn't about giving Apple intelligence, this is about giving ChatGPT an understanding of the world via the eyes, ears, and thoughts on your phone.

Jtsummers
1 replies
22h58m

> This isn't about giving Apple intelligence, this is about giving ChatGPT an understanding of the world via the eyes, ears, and thoughts on your phone.

Except it doesn't do that. The ChatGPT integration is via Siri and opt-in (you ask Siri something, it prompts you to send that prompt to ChatGPT). The rest of the LLM and ML features are on device or in Apple's cloud (which is not OpenAI's cloud). The ChatGPT integration is also, by their announced design, substitutable in the future (or you'll be given a set of systems to select from, not just ChatGPT). They are not sending all data on your device to OpenAI.

tsunamifury
0 replies
22h29m

Yeah, I worked in partnership with Apple for years. I don't know what else to tell you, except that they lie through their teeth about privacy all the time.

mrkramer
2 replies
22h56m

Is it just me, or is this AI rush actually about to ruin the user experience on both Apple and Microsoft devices? The extra layer of complexity for users, who will now be introduced to endless AI features, is bloatware in the making.

password54321
0 replies
19h41m

Based on what they showed, most users won't even know whether the feature they are using involves AI or not. Most of it is local and just comes in the form of a button rather than typing out a prompt to make it do what you want. And I think those two are the big things to take away from this. Local means less of the clunkiness and lag you get from tools like Perplexity, and no "prompt engineering" means even someone's grandma could immediately start using AI. Apple just doing what Apple does best.

anonbanker
0 replies
21h51m

Just making Linux more appealing for the subset of the population that doesn't want to hook into Skynet's subpar UX.

chucke1992
2 replies
22h1m

Nothing really impressive. Let's see how the stock reacts.

qeternity
1 replies
21h59m

Out of curiosity, what would you have considered impressive?

chucke1992
0 replies
21h38m

Hard to tell. That's the whole point. I thought maybe Apple had come up with something, but by and large it is not different from Vision Pro: they have made X feel better, with no real one-generation-ahead stuff. Basically they are not introducing real innovation.

There are two challenges right now for AI: user monetisation and mass adoption. ChatGPT right now is basically a TikTok, a popular app and that's it. Yeah, it has a subscription, but by and large companies are failing to find a way to monetize the user. And at the same time there is no proper trigger, something that would make AI better than a glorified assistant. For people who are not used to relying on it, it won't be a game changer either, just a little bit of convenience.

So it remains to be seen what's going to happen with AI in the future. It seems like the biggest game changer introduced by AI is in the hardware space: the mass adoption of ARM, NPUs and the like. Plus it seems like the monetization of AI is working best inside companies: Adobe's AI features, Microsoft and their corporate features, and so on.

xnx
1 replies
23h25m

Credit where credit is due for co-opting the components of the "AI" acronym.

latexr
0 replies
23h14m

Agreed. Got to hand it to them that marketing was sharp on the name. Unless, of course, it doesn’t really work as advertised and then every “AI <negative>” search specifically bubbles Apple stories to the top.

vessenes
1 replies
22h45m

I wonder what developer support, if any, "AI" (I need a better way to write that... ahem, I) will have for accessing the personal data store. I've spent the last four years taking runs at collecting this data about myself, and it's hard, real hard, to do a good job of it.

I’d love to have an app I write be able to subscribe to this stream.

It feels like a sort of perfect moat for Apple - they could say no on privacy concerns, and lock out an entire class of agent type app competitors at the same time. Well, here’s hoping I can get access to the “YouSDK” :)

tavavex
0 replies
14h38m

Ah, yes, "F8FF I" lol

More seriously, I think the SDK they've teased is only really intended for making their ready-made features integrate with your programs. If you want complete control, you'd probably have to write it the old way, or integrate it with some existing local LLM backend.

thimabi
1 replies
23h21m

Oh, well, many apps will have a hard time competing with “Apple Intelligence” features. Why bother downloading a third-party app if some feature you want is included by default in the OS?

Better yet, no more dealing with overpriced subscriptions or programs that do not respect user privacy.

Kudos to the Apple software team making useful stuff powered by machine learning and AI!

ukuina
0 replies
13h56m

It was amusing to see the Duolingo product placement when their entire product is just a prompt in ChatGPT.

seabass
1 replies
22h1m

Adding ai features to the right-click menu is something I’ve been working on for the past year or so, and it’s always both exciting and disappointing to see one of the big players adopt a similar feature natively. I do strongly believe in the context menu being a far better ux than copying and pasting content into ChatGPT, but this release does have me questioning how much more effort to expend on my side project [1]. It doesn’t seem like Apple will support custom commands, history, RAG, and other features, so perhaps there is still space for a power-user version of what they will provide.

[1] https://smudge.ai

jbkkd
0 replies
20h1m

Love your extension! There's definitely room for it

notatoad
1 replies
21h51m

Is my iPhone 14 going to get none of this, then?

I understand I'm not going to get the on-device stuff, but something like Siri being able to call out to ChatGPT should be available on any device, right?

syspec
0 replies
6h8m

FWIW: You can do that on your phone today with Siri, if you have the ChatGPT app.

You just say "Hey Siri, ask ChatGPT..." and then it will.

newsclues
1 replies
23h29m

Let me run it locally on a Mac mini or whatever

mjamesaustin
0 replies
23h18m

A lot of the features do run locally, e.g. the Image Playground.

milansuk
1 replies
23h13m

This looks cool for v1! The only problem I see is most devices don't have much RAM, so local models are small and most requests will go to the servers.

Apple could use it to sell more devices: every new generation can have more RAM, which means more privacy. People will have a real reason to buy a new phone more often.

MVissers
0 replies
20h55m

Apple is starting to anticipate a higher RAM need in their M4+ silicon chips: there are rumors they are including more RAM than specified in their entry-level computers.

https://forums.macrumors.com/threads/do-m4-ipad-pros-with-8g...

One reason could be future AI models.

I'm not sure if this has been verified independently, but interesting nonetheless and would make sense in an AI era.

mihaaly
1 replies
21h10m

Cloud compute and privacy in the same sentence, this is a new low bar for corporate bull*hit. Almost worse than the Windows Recall nonsense.

theshrike79
0 replies
21h6m

It's also auditable, they mentioned it multiple times.

Apple specifically doesn't want to know your shit, they're jumping through weird hoops to keep it that way.

It would be a LOT easier just to know your shit.

karaterobot
1 replies
23h28m

> Apple sets a new standard for privacy in AI,

That does not necessarily mean better, just different. I reserve judgment until I see how it shakes out.

but if I don't like this feature, and can't turn it off, I guess it's sadly back to Linux on my personal laptops.

theshrike79
0 replies
20h48m

It's just Siri, but with better context.

If you don't specifically activate it, it won't do shit.

dombili
1 replies
23h10m

None of these features seem to be coming to Vision Pro, which I think is quite baffling. Arguably it's the device that can use them the most.

atlex2
0 replies
22h12m

Baffling indeed. Seems like they should be over-investing in AVP right now, not under-investing.

block_dagger
1 replies
23h19m

I wonder if the (free) ChatGPT integration will be so good that I won't need my dedicated subscription anymore?

blixt
1 replies
21h21m

Did I miss the explanation of how they trained their image generation models? It's brave of a company serving creative professionals to generate creative works with AI. I'm a fan of using generative AI, but I would have expected them to at least say a little about what they trained on to make their diffusion models capable of generating these images.

Other than that, using an LLM to handle cross-app functionality is music to my ears. That said, it's similar to what was originally promised with Siri and the like. I do believe this technology can do it well enough to be actually useful, though.

glial
0 replies
21h16m

I thought it was interesting that the only image generation styles they support are sketches (that look like a Photoshop styling) and goofy 3D cartoons -- not really competition for most creatives.

adrianmsmith
1 replies
9h11m

It seems they didn't address hallucination at all?

Presumably this hallucinates as much as any other AI (if it didn't, they'd have mentioned that).

So how can you delegate tasks to something that might just invent stuff, e.g. you ask it to summarize an email and it tells you stuff that's not in the original email?

audessuscest
0 replies
8h39m

You really have to try hard to make a model hallucinate when asked to summarize an email. I think they didn't mention it because they can't guarantee it 100%, but it's virtually a non-issue for such a task.

a_petrov
1 replies
19h15m

What would be interesting for me is whether I can develop an app for, let's say, macOS, and expose its context to Siri (with Apple Intelligence) in an easy way.

For example:

Imagine a simple Amazon price tracker I have in my menu bar. I pick 5 products whose prices I want tracked. I want that info to be exposed to Siri too. And then I can simply ask Siri a tricky question: "Hey Siri, check the Amazon tracker app and tell me if it's a good moment to buy that coffee machine." I'd even expect Siri to get that data from my app and be able to send it over email. It doesn't sound like rocket science.

At the end of the day, the average user doesn't like writing with a chatbot. The average user doesn't really like reading either (it can be overwhelming). But the average user could potentially like an assistant that offloads some basic tasks that are not mission critical.

By mission critical I mean asking the next best AI assistant to buy you a plane ticket.
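Something close to this already exists in the form of the App Intents framework; here's a minimal Swift sketch of how the hypothetical price-tracker app could expose a product's price to Siri and Shortcuts (PriceTracker and the intent are made-up names, and whether the new Siri reasons over such intents as smoothly as hoped remains to be seen):

    import AppIntents

    // Hypothetical data layer for the menu-bar price tracker.
    struct PriceTracker {
        static func currentPrice(for product: String) -> Double {
            // In the real app this would read the tracked-price database.
            return 199.0
        }
    }

    // Exposes "check the price of X" to Siri / Shortcuts via App Intents.
    struct CheckTrackedPriceIntent: AppIntent {
        static var title: LocalizedStringResource = "Check Tracked Price"

        @Parameter(title: "Product")
        var product: String

        func perform() async throws -> some IntentResult & ProvidesDialog {
            let price = PriceTracker.currentPrice(for: product)
            return .result(dialog: "\(product) is currently $\(price).")
        }
    }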

RyanAdamas
1 replies
22h18m

Be me, have iPhone 6s

Can't get many apps these days

Can't use AI apps at all

Battery last about 2 hours

Never used iCloud, barely used iTunes

Apple announces new "free" AI assistant for everyone

well...not everyone

timothyduong
0 replies
22h9m

iOS users need to have the iPhone 15 Pro... so everyone else is also cooked on iOS.

zx10rse
0 replies
21h51m

Jumping on the ChatGPT hype train is a mistake. I don't want anything from my devices to be accessible by OpenAI. It will bite them back big time.

whoiscroberts
0 replies
22h22m

For anyone who is technical and wants to play with AI but doesn't want to use cloud services, it's worth digging into LangChain, CrewAI, and OpenDevin, coupled with Ollama to serve the inference from your local network. You can scratch the AI itch without getting in bed with OpenAI.
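For a concrete taste, here's a minimal Swift sketch that queries a locally running Ollama server (assumes Ollama is listening on its default port 11434 and that a model such as llama3 has already been pulled):

    import Foundation

    // Request body for Ollama's /api/generate endpoint (non-streaming).
    struct OllamaRequest: Codable {
        let model: String
        let prompt: String
        let stream: Bool
    }

    // Minimal slice of the response: the generated text lives in "response".
    struct OllamaResponse: Codable {
        let response: String
    }

    func askLocalModel(_ prompt: String) async throws -> String {
        var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(
            OllamaRequest(model: "llama3", prompt: prompt, stream: false))
        let (data, _) = try await URLSession.shared.data(for: request)
        return try JSONDecoder().decode(OllamaResponse.self, from: data).response
    }

Nothing in that call ever leaves your own network.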

vladsolokha
0 replies
17h47m

So now my “Sent from iPhone” email signature will be replaced with “Sent with Apple Intelligence” smh. I don’t think we will have anything original to say anymore. It will all just be converted to what is proper and “right”

tunnuz
0 replies
8h57m

AI for short?

toisanji
0 replies
19h34m

Can you disable external LLM calls so everything stays on device?

todotask
0 replies
4h10m

Sequoia with 'ai', coinciding with Apple Intelligence, is a cleverly chosen code name for this release.

tekawade
0 replies
21h6m

"privacy in AI" - If apple is sharing with ChatGPT how does it work? Do they try to remove context information. But still it's sharing a lot more. + Anything that goes out can go anywhere in internet. Look at Facebook, Twitter and even Apple use of data.

simianparrot
0 replies
12h29m

It needs to be off by default in the EU, or this is an enormous and inevitable GDPR breach waiting to happen.

sidcool
0 replies
14h29m

No privacy concerns?

semireg
0 replies
13h46m

So that’s where all the M4s are going … to Apple’s private inference cloud.

sc077y
0 replies
9h36m

Impressive, not technically (nothing here is new) but because it's the first real implementation of "AI" for the average end consumer. You have semantic indexing, which allows Siri to basically retrieve context for any query. You have image gen, which gives you emoji generation and messaging using genAI images. Text generation within emails. The UX is world class, as usual.

However, the GPT integration feels forced and, dare I say, even unnecessary. My guess is that they really are interested in the 4o voice model, and they're expecting OpenAI to remain the front-runner in the AI race.

ryankrage77
0 replies
21h4m

The image generation seems really bad. Very creepy, off-putting, uncanny-valley images. And those are the best cherry-picked examples for marketing.

I'm curious to try some of the Siri integrations - though I hope Siri retains a 'dumb mode' for simple tasks.

rock_artist
0 replies
8h44m

What wasn't clear to me during the keynote is how this will work on pre-M1 / pre-iPhone 15 devices. (Also worth noting that the iPhone 14 Pro is almost identical to the iPhone 15 in terms of CPU... which is odd, especially for someone who bought the "Pro" tier...)

If I have some "AI" workflow on my MacBook Pro and it's then broken on my iPhone, I would most likely stop using it entirely, as it's unexpected (I cannot trust it), or in Apple's words... it lacks continuity...

rldjbpin
0 replies
38m

I hope there is a way to prevent the online processing from happening without consent.

resfirestar
0 replies
21h0m

The Image Playground demos contrast pretty strongly, in a bad way, with how image generation startups like Stability typically emphasize scifi landscapes and macro images in their marketing material. We're more open to strange color palettes and overly glossy looking surfaces in those types of images, so they're a good fit for current models that can run on smaller GPUs. Apple's examples of real people and places, on the other hand, look like they're deep in uncanny valley and I'm shocked anyone wanted them in a press release. More than any other feature announced today, that felt like they just got on the hype bandwagon and shipped image generation features to please AI-hungry investors, not create anything real people want to use.

rdl
0 replies
20h25m

I'm super excited about how the Apple Private Cloud Compute stuff works -- I tried to build this using Intel TXT (the predecessor of SGX) and then SGX, and Intel has fucked up so hard and for so long that I'm excited by any new silicon for this. AWS Nitro is really the direct competition, but having good APIs which let app developers do stuff on-device and in some trustworthy/private cloud in a fairly seamless way might be the key innovation here.

qmmmur
0 replies
20h36m

Did they touch on any AI features that might be able to help me create shortcuts? I really like them, but hate creating them with the kludgy block-based diagrams.

pcloadletter_
0 replies
23h23m

My MSFT stock is looking good.

pastyboy
0 replies
11h8m

Well, on the one hand it's very interesting... on the other, a little dystopian, but I guess I am a Luddite.

Everyone will now appear to be of a certain intelligence, with prescribed viewpoints. This is going to make face-to-face interviews interesting. Me, I think I'll carry on with my imperfections and flawed opinions; being human may become a trend again.

nsxwolf
0 replies
22h27m

"AI for the rest of us" is an interesting resurrection of the "The computer for the rest of us" Macintosh slogan from 1984.

nothrowaways
0 replies
17h3m

I hope it is an optional feature.

myko
0 replies
3h41m

This AI craze is very underwhelming. Surprised to see Apple go whole-hog into it.

mvkel
0 replies
23h22m

Kind of wild that "ChatGPT" is going to be the household term. It's such a mouthful! Goes to show that the name can kind of be anything if you have an incredible product and/or distribution.

Lobbying for the name to shorten to "chatty-g"

mihaaly
0 replies
20h56m

I can't wait until making tools for users is the centerpiece of device development again, instead of this enforced corporate crap about half-cooked whatevers acting on our behalf and pretending to be a different us (I intentionally avoid the word intelligence; what's going on all around is a mockery of the word).

Who will trust anything coming from anyone through electronic channels? Not me. I'd sooner start talking to a teddy bear or a yellow rubber duck.

This is a bad and dangerous tendency that corporate bigheads dress up with glitz and fanfare so the crowd is willing to drink it up in amazement.

The whole text is full of corporate bullsh*t: hollow, cloudy stock phrases poured at us from a thick pipe, instead of facts or data, like something a generative cloud computing server room could produce without a shred of thought.

mensetmanusman
0 replies
20h41m

“With onscreen awareness, Siri will be able to understand and take action with users’ content in more apps over time. For example, if a friend texts a user their new address in Messages, the receiver can say, “Add this address to his contact card.””

Little annoyances like this being fixed would be great. “Open the address on this page in google maps” better work :)

mellosouls
0 replies
23h29m

Given there is no mention of "Artificial", is this Apple rebranding AI, the same as they did with AR a year ago?

matrix87
0 replies
3h17m

the thing with "I have xyz ingredients, help me plan a 5-course meal for every taste bud"... I get the idea, it just doesn't feel like that's how people actually interact with computers. similarly with the bed time story thing. why would anyone waste time with some AI generated thing when they can just reference the works of a chef or author that they already know?

appropriating all of this information through legally dubious means and then attempting to replace the communication channels that produced it in the first place is hubris.

mark_l_watson
0 replies
15h29m

While I really enjoyed the “Apple-ification of AI” in the keynote today, I have been hoping for a purely personal AI ecosystem, one that I run on my own computer, using the open weight models I choose, and using open source libraries to make writing my own glue and application code easier.

The good news is that consumers can buy into Apple’s or Google’s AI solutions, and the relatively few of us who want to build our own experience can do so.

m3kw9
0 replies
17h21m

All the talk sounds really nice; it's when it comes out that we will see how much context it can actually see and how accurately it understands what the user says. Gonna be a fun few weeks of reviews.

lz400
0 replies
19h45m

I suppose it was to be expected, but IMHO this takes the wind out of the sails of the OpenAI / Apple deal. In the end they don't let OpenAI get into the internals of iOS / Siri; it's just a run-of-the-mill integration. They actually are competing with ChatGPT, and I assume eventually they expect to replace it and cancel the integration.

The OpenAI integration also seems set up to data mine ChatGPT. They will have data that says customer X asked question Q and got answer A from Siri, which he didn't like, so he went to ChatGPT instead and got answer B, which he liked. OK, there's a training set.

I'm always wrong in predictions and will be wrong here too, but I'd expect OpenAI is in a bad spot long term; it doesn't look like they have a product strong enough to withstand the platform builders really going in on AI. Once Siri works well, you will never open ChatGPT again.

lawlessone
0 replies
21h15m

Couldn't Siri already do some of these things without LLMs?

kaba0
0 replies
23h9m

I think this will be a complete game changer. This will be the first device (with reasonable capabilities) where the human-machine interaction can transform into an actual instruction/command only one. We no longer just choose between a set of predefined routes, but ask the machine to choose for us. If it really does work out even half as good as they show, this will fundamentally alter our connection to tech, basically having created “intelligence” known from our previous century sci-fi tales.

Of course LLMs will quickly show their bounds, like they can’t reason, etc - but for the everyday commands people might ask their phones this probably won’t matter much. The next generation will have a very different stance towards tech than we do.

jshowa
0 replies
4h24m

Microsoft delivers AI Recall. Everyone hates it.

Apple integrates AI into every facet of a device that is highly personal. Everyone loves it.

Please make it make sense.

jdeaton
0 replies
12h12m

Maybe we could just get a decent spam filter on iMessage?

iandanforth
0 replies
21h10m

I think the only way I would trust this is if they explicitly described how they would combat 5-eyes surveillance. If you're not willing to acknowledge that the most dangerous foe of privacy in the western world is the governments of the western world then why should I believe anything you have to say about your implementation?

iJohnDoe
0 replies
20h6m

Apple Intelligence = AI

Figgin’ brilliant.

hartator
0 replies
20h30m

It’s a little messy.

Local LLMs + Apple Private Cloud LLMs + OpenAI LLMs. It’s like they can’t decide on one solution. Feels very not Apple.

guhcampos
0 replies
20h25m

Somehow all this news about Apple Intelligence doesn't really make me think about Apple, but rather about just how badly Intel lost the branding battle forever.

gonzo41
0 replies
18h5m

Oddly, I find myself siding with Musk on this feature.

ge96
0 replies
18h29m

Heh, I see what they did there. Convenient name.

egypturnash
0 replies
23h26m

Uugggghhhh

dwighttk
0 replies
6h50m

If I can just get Siri to control music in my car I will be happy.

"Hey Siri, play classical work X" (a randomly selected version starts playing)

"Hey Siri, play a different version" (the same version keeps playing)

"Hey Siri, play song X" (some random song that might have one similar keyword in the lyrics starts playing)

"No, play song X" ("I don't understand")

"Hey Siri, play the Rangers game" ("Do you mean hockey or baseball?")

"Only one is playing today, and I've favorited the baseball team, and you always ask me this, and I always answer baseball" ("I can't play that audio anyway")

(car crashes off of bridge)

(All sequences shortened by ~5 fewer tries at different wordings to get Siri to do what I want)

dt3ft
0 replies
13h19m

Thanks, but no thanks.

All I wish for is a user-replaceable battery and a battery that lasts at least 2 full days.

If I can't opt out of any of this, this is where I stop using an iPhone.

daralthus
0 replies
20h28m

/Time for a good prompt injection email header/s

dakiol
0 replies
22h7m

I didn't watch the whole thing (will do), but could someone tell me already: can it be disabled on a Mac?

czierleyn
0 replies
23h7m

Nice, but my native language is Dutch, so I'll be waiting the next 5 years for this to arrive. If it arrives at all.

coolg54321
0 replies
5h27m

As a Pixel user I'm really impressed with their cleanup tool; it looks way ahead in UX compared to Magic Editor on the Pixel. Also, being able to select the distractions without altering the main subject looks really cool (at least in their demo). Magic Editor runs too slowly on the Pixel's underpowered SoC. In general, iPhones have superior hardware versus the Pixel (as per the benchmarks), so having this on-device should make for a really nice experience overall.

camcaine
0 replies
20h36m

Feels like Apple are super late to the party and are scrambling. And it showed.

blackeyeblitzar
0 replies
18h26m

The AI wave is showing us that the gains will keep going to the big tech companies and competition doesn’t really exist, not even at this moment. They need to be broken up and taxed heavily.

baxuz
0 replies
3h24m

I really hope that they'll enable other, less spoken languages. I'm not planning on talking with my phone in English.

agumonkey
0 replies
23h22m

It's not personal computing, it's personal intelligence now :)

adamtaylor_13
0 replies
21h40m

This is literally everything I've been hoping Siri would be since the very first GPT-3.5 demo over a year ago. I've never been more bullish on the Apple ecosystem. So exciting!

abrichr
0 replies
23h28m

> With onscreen awareness, Siri will be able to understand and take action with users' content in more apps over time. For example, if a friend texts a user their new address in Messages, the receiver can say, "Add this address to his contact card."

I wonder how they will extend this to business processes that are not in their training set. At https://openadapt.ai we rely on users to demonstrate tasks, then have the model analyze these demonstrations in order to automate them.

abaymado
0 replies
18h23m

I really just want Siri to perform simple tasks without me giving direct line-by-line orders. For example, I often use Siri to add reminders to my Calendar app but forget to mention the word “calendar” or replace it with “remind me,” and Siri ends up adding it to the Reminders app instead of the Calendar app. I want Siri to have an explicit memory that every time I use the phrase “remind me,” I want the task done in my Calendar app. Additionally, if most apps end up adopting App Intents like OpenAI’s Function Calling, I see a bright future for Siri.

__loam
0 replies
22h6m

Hope we can disable all this crap.

WillAdams
0 replies
23h25m

Nice to finally see a follow-on to the Assistant feature from the Newton MessagePad.

Tiktaalik
0 replies
23h7m

I think the genmoji is going to be tons of fun. Basically seems like https://emojikitchen.dev/ on steroids.

LelouBil
0 replies
18h22m

So, this looks great, but I don't get the criticism against Microsoft Recall and not against this.

Can someone explain what Apple has avoided that was such a problem with Recall?

KennyBlanken
0 replies
22h51m

Okay. And what about the terrible keyboard, predictive text, and autocorrect?

Hippocrates
0 replies
23h4m

The OpenAI/ChatGPT part of this looks pretty useless. Similar to what some shortcuts like “hey data” already do. I was shocked, and relieved that Apple isn't relying on their APIs more. Seems like a big L for OpenAI.

GeekyBear
0 replies
23h3m

One thing that I found thoughtful was that images could only be generated as cartoons, sketches or animations. There was no option for a more photorealistic style.

That seems like an effective guardrail if you don't want people trying to pass off AI generated images as real.

ENGNR
0 replies
19h51m

Ok I'm calling it. If NVIDIA releases a phone, and allows you to buy the hardware for the off-device processing too, I'll fully ditch Apple in a heartbeat.

I'm quite creeped out that it uses off-device processing for a personal context, and you can't host your own off-device processing, even if you have top of the line Apple silicon hardware (laptop or desktop) that could step in and do the job. Hopefully they announce it in one of the talks over the next few days.

BonoboIO
0 replies
21h58m

Only on the iPhone 15 Pro and up, or M1 Macs and later.

So only a very small percentage of users will be able to use it.

Aerbil313
0 replies
20h45m

About time. I've been saying for the past 1.5 years that Apple was cooking these features, especially an intelligent Siri. It was obvious, really.

You can clearly see that the only people objecting to this new technological integration are the people who don't have a use case for it yet. I am a college student and I can immediately see how my friends and I will be using these features. All of us have ChatGPT installed and subscribed already. We need to write professionally to our professors in email. A big task is locating a document sent over various communication channels.

Now is the time you'll see people speaking to their devices on the street. As an early adopter using the dumb Siri and ChatGPT voice chat far more than the average person, it has always been weird to speak to your phone in public. Surely normalization will follow general availability soon after.

65
0 replies
22h52m

Some stuff seems cool in the sense that you try it once and never use it again. Other stuff, like ChatGPT integration, seem like they'll produce more AI Slop and false information. It's always interesting to me to see just how many people blatantly trust ChatGPT for information.

I find most AI products to be counter-intuitive - most of the time Googling something or writing your own document is faster. But the tech overlords of Silicon Valley will continuously force AI down our throats. It's no longer about useful software, we made most of that already, it's about growth at all costs. I'm a developer and day by day I come to despise the software world. Real life is a lot better. Real life engineering and hardware have gotten a lot better over the years. That's the only thing keeping me optimistic about technology these days. Software is what makes me pessimistic.