IMO this really feels like the Facebook / Twitter integration from early iOS. That only lasted a few years.
Apple clearly thinks it needs a dedicated LLM service atm. But it still thinks it is only supplemental, as they handle a bunch of the core stuff without it. And they require explicit user consent to use OpenAI. And Apple clearly views it as a partial commodity, since they even said they plan to add others.
Tough to bet against OpenAI right now...but this deal does not feel like a 10 year deal...
Actually, within just three to five years, lots of "AI boxes" and those magical sparkling icons next to input fields summoning AI will be silently removed.
LLMs are not accurate; they aren't subject matter experts operating within, say, a 5% error margin.
People will gradually learn and discover this, and the cost of keeping a model updated and running won't drop drastically, so we'll most likely see the dust settle.
LLMs are not accurate; they aren't subject matter experts operating within, say, a 5% error margin.
You're asserting that the AI features will be removed in 3 to 5 years because they're not accurate enough today, but you actually need them to remain inaccurate in 3 years' time for your prediction to be correct.
That seems unlikely. I agree that people will start to realize the cost, but the accuracy will improve, so people might be willing to pay.
The same argument can be used for Tesla full self driving: basically it has to be (nearly) perfect, and after years of development, it's not there yet. What's different about LLMs?
They don't have to be perfect to be useful, and death isn't the price of being wrong.
Death actually can be the price of being wrong. Just wait for someone to do the wrong thing with an AI tool they weren't supposed to use for what they were doing, and the AI to spit out the worst possible "hallucination" (in terms of outcome).
What you say is true; however, with self-driving cars, death, personal injury, and property damage are much more immediate and much more visible, and many of the errors are of a kind where most people are qualified to immediately understand what the machine did wrong.
An LLM giving you a detailed plan for removing a stubborn stain in your toilet that involves mixing the wrong combination of drain cleaners and accidentally releasing chlorine gas is going to happen if it hasn't already, but a lot of people will read about this and go "oh, I didn't know you could gas yourself like that" and then continue to ask the same model for recipes or Norwegian wedding poetry, because "what could possibly go wrong?"
And if you wonder how anyone can possibly read about such a story and react that way, remember that Yann LeCun says this kind of thing despite (a) working for Facebook and (b) Facebook's algorithm getting flak not only for the current teen depression epidemic but also from the UN for not doing enough to stop the (ongoing) genocide in Myanmar.
It's a cognitive blind spot of some kind. Plenty smart, still can't recognise the connection.
Google’s recent AI assistant has already been documented recommending people mix bleach and white vinegar for cleaning purposes.
Someone’s going to accidentally kill themselves based on an AI hallucination soon if no one has already.
There are hundreds of companies making LLMs we can choose from, and the switching cost is low. There's only one company that can make self-driving software for Tesla. Basically, competition should lead to improvements.
Tesla aren't the only people trying to make self-driving cars, famously Uber tried and Waymo looks like they're slowly succeeding. Competition can be useful, but it's not a panacea.
Mercedes seems to be eating Tesla’s breakfast on FSD, in particular where safety and real-world implementation are concerned. Their self-driving vehicles are equipped with aqua-colored lights to alert other drivers that the car is under computer control, and Mercedes has chosen to accept liability for incidents/accidents.
GPT-4 is 1 year old; 3.5 is 1 and a half. Before 3.5, this wasn't really a useful technology. 7 years ago it was a research project that Google saw no value in pursuing.
Anyone claiming that accuracy of AI models WILL improve is either unaware of how they really work or a snake oil salesman.
Forget about a model that knows EVERYTHING. Let's just train a model that is an expert not in all United States law, not even in all of one state's law, but that FULLY understands the tax law of a single state, to the extent that whatever documents you throw at it, it beats a tax consultancy firm every single time.
If even that were possible, OpenAI et al. would be playing this game differently.
Why does a mobile app need to beat a highly trained professional every single time in order to be useful?
Is this standard applied to any other app?
Because it's taxation. Financial well-being is at stake. We're even looking at potential jail time for tax fraud, tax evasion, and what not.
My app is powered by GTPChatChat, the model beating all artificially curated benchmarks.
Still wanna buy?
This is one of those "perfect is the enemy of good" situations. Sure, for things where you have a legal responsibility to get things perfectly right, using an LLM as the full solution is probably a bad idea (although lots of accountants are using them to speed up processes already; they just check the outputs). That isn't the case for 99% of tasks, though. Something that's mostly accurate is good. People are happy with that, and they will buy it.
Those use cases are never sold as "mobile apps", but rather as "enterprise solutions" that cost the equivalent of several employees.
An employee can be held accountable, and fired easily. An AI? You'll have to talk to the Account Manager, and sit through their attempts to 'retain' you.
My experience suggests that LLMs become not less accurate, but less helpful.
Two years ago they would output a solution for my query [1] right away; now they try to engage the user in implementing the thing themselves. This is across the board, as far as I can see.
These LLMs are not about helping anyone, their goals are engagement and mining data for that engagement.
[1] The query is "implement blocked clause decomposition in haskell." There are papers (circa 2010-2012), there are implementations, but not in Haskell. BCD itself is easy, and can be expressed in a dozen or two lines of Haskell code.
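For the curious, here's roughly what I mean; a minimal Haskell sketch of the blocked-clause test at the core of BCD (the full decomposition, splitting the CNF into two blocked subsets, needs a bit more machinery; the names and clause representation here are mine, not from any of the papers):

    type Lit    = Int   -- DIMACS-style literal: non-zero Int, negation via negate
    type Clause = [Lit]
    type CNF    = [Clause]

    -- A clause is a tautology if it contains a complementary pair of literals.
    isTautology :: Clause -> Bool
    isTautology c = any (\l -> negate l `elem` c) c

    -- C is blocked on literal l in F if every resolvent of C on l is a tautology.
    blockedOn :: CNF -> Clause -> Lit -> Bool
    blockedOn f c l = all (isTautology . resolveWith) [d | d <- f, negate l `elem` d]
      where resolveWith d = filter (/= l) c ++ filter (/= negate l) d

    -- C is blocked in F if some literal of C blocks it.
    isBlocked :: CNF -> Clause -> Bool
    isBlocked f c = any (blockedOn f c) c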
Wow, this is a really interesting idea! A sneaky play for LLM providers is to be helpful enough to still be used, but also sufficiently unhelpful that your users give you additional training data.
I truly hope the reckless enthusiasm for LLMs will cool down, but it seems plausible that discretized, compressed versions of today's cutting-edge models will eventually be able to run entirely locally, even on mobile devices; there are no guarantees that they'll get better, but many promising opportunities to get the same unreliable results faster and with less power consumption. Once the models run on-device, there's less of a financial motivation to pull the plug, so we could be stuck with them in one form or another for the long haul.
I don't believe this scenario to be very likely because a lot of the 'magic' in current LLMs (emphasis on 'large') is derived from the size of the training datasets and amount of compute they can throw at training and inference.
Llama 3 8B captures that 'magic' fairly well and runs on a modest gaming PC. You can even run it on an iPhone 15 if you're willing to sacrifice floating point precision. Three years from now I fully expect GPT-4 quality models running locally on an iPhone.
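(For the unfamiliar, "sacrificing floating point precision" here usually means quantization. A toy Haskell sketch of the basic idea, symmetric int8 with a single scale per tensor; this is an illustration, not the actual scheme any particular runtime uses:)

    import Data.Int (Int8)

    -- Store weights as Int8 plus one scale factor.
    -- Assumes a non-empty tensor with at least one non-zero weight.
    quantize :: [Double] -> (Double, [Int8])
    quantize ws = (scale, map toQ ws)
      where
        scale = maximum (map abs ws) / 127
        toQ w = fromIntegral (max (-127) (min 127 (round (w / scale))) :: Int)

    -- Reconstruct approximate weights at inference time.
    dequantize :: (Double, [Int8]) -> [Double]
    dequantize (scale, qs) = map ((* scale) . fromIntegral) qs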
Three years is more than twice the time between GPT-4's release and now, and almost twice as long as ChatGPT has existed. At this rate, even if we do end up with GPT-4 equivalents runnable on consumer hardware, the top models made available by big players via API will make local LLMs feel useless. For the time being, the incentive to use a service will continue.
It's like a graphics designer being limited to choose between local MS Paint and Adobe Creative Cloud. Okay, so Llama 3 8B, if it's really as good as you say, graduates to local Paint.NET. Not useless per se, but still not even in the same class.
No one knows how it will all shake out. I'm personally skeptical scaling laws will hold beyond GPT-4-sized models. GPT-4 is likely severely undertrained given how much data Facebook is using to train their 8B parameter models. Unless OpenAI has a dramatic new algorithmic discovery or a vast trove of previously unused data, I think GPT-5 and beyond will be modest improvements.
Alternatively synthetic data might drive the next generation of models, but that's largely untested at this point.
The one thing people overlook is the user data on ChatGPT. That's OpenAI's real moat. That data is "free" RLHF data and possibly, training data.
They're being extremely pessimistic: 3 years is 200% of how long it took to get ChatGPT 3.5 quality running locally.
Llama 3 8B is at ChatGPT 3.5 level (3.5 launched 18 months before Llama 3), and it runs on every new iPhone released since October 2022 (19 months before Llama 3). That includes multimodal variants (built outside Facebook).
I know this isn’t really the point, but Adobe CC hasn’t really improved all that much from Adobe CS, which was purely local and perfectly capable. A better analogy might be found in comparing Encyclopedia Britannica to Wikipedia. The latter is far from perfect, but an astounding expansion of accessible human knowledge that represents a full, worldwide paradigm shift in how such information is maintained, distributed, and accessed.
By the same token, those of us who are sufficiently motivated can maintain and utilize a local copy of Wikipedia…frequently for training LLMs at this point, so I guess the snake has come around, and we’ve settled into a full-on ouroboros of digital media hype. ;-)
Just imagine if you had an accurately curated dataset.
I just want to sit down in front of my TV, put on my Bluetooth headphones and have the headphones and TV connect automatically.
Then, when I’m downstairs in my office and want to listen to music on my iPhone. I want my headphones to connect to my iPhone and not my TV upstairs!
I don’t need Skynet, I just need my devices to be a little less stupid.
I would consider that akin to magic at this point. Let’s start there and work our way up to handing over control of our nuclear arsenal.
The University of Washington is studying an AI application where a pair of headphones will isolate a single voice in a crowd when one simply looks at them. Amazing stuff…until you try it anywhere near your car, and then it starts playing the voice over your car stereo (presumably).
The Gell-Mann amnesia effect suggests people will have a very hard time noticing the difference. Even if the models never improve, they're more accurate than a lot of newspaper reporting.
So, you're betting on no significant cost reduction of compute hardware? Seems implausible to me.
Is that when they’re cribbing straight out of the newspaper pages, or is this just a cynical snipe at the poor state of media that, not for nothing, tech companies have had a fair hand in kneecapping?
The criticism of the performance of newspapers goes back well before Lovelace and Babbage:
"""I will add, that the man who never looks into a newspaper is better informed than he who reads them; inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods & errors. He who reads nothing will still learn the great facts, and the details are all false."""
- Thomas Jefferson (not Mark Twain), 1807, https://www.snopes.com/fact-check/mark-twain-read-newspaper-...
This is not about compute, but about data.
https://arxiv.org/abs/2404.04125
"...our study reveals an exponential need for training data which implies that the key to "zero-shot" generalization capabilities under large-scale training paradigms remains to be found."
How do you define a percent error margin on the typical output of something like ChatGPT? IIRC the image generation folks have started using metrics like subjective user ratings because this stuff is really difficult to quantify objectively.
IMHO the terribly overlooked issue with generative AI is that the views of people who only see the LLM's output often differ greatly from the opinion of the person actually interacting with the model
this is particularly evident with image generation, but I think it's true across the board. for example, you may think something I created on midjourney "looks amazing", whereas I may dislike it because it's so far from what I had in mind and was actually trying to accomplish when I was sending in my prompt
Your last paragraph is true regardless of how the image was generated.
One can find anything YOU produce to have different qualities from you.
True, but generally what art I produce IRL is objectively terrible, whereas I can come up with some pretty nice looking images on Midjourney.... which are still terrible to me when I wanted them to look like something else, but others may find them appealing because they don't know how I've failed at my objective
In other words, there are two different objectives in a "drawing": (1) portraying that which I meant to portray and (2) making it aesthetically appealing
People who only see the finished product may be impressed by #2 and never consider how bad I was at #1
As mentioned elsewhere, 3 to 5 years is some 3x to 5x as long as GPT-4 has existed, and some 2-3x as long as ChatGPT has existed and LLMs have been more than obscure research projects, suddenly graduated into general-purpose tools. Do you really believe the capability limit has already been hit?
Not to mention, there's lots of money and reputation invested in searching for alternatives to current transformer architecture. Are you certain that within the next year or two, one or more of the alternatives won't pan out, bringing e.g. linear scaling in place of quadratic, without loss of capabilities?
I'm pretty sure that the statistical foundations of AI, where something just 0.004 shy of a threshold value in a million-dimensional space can get miscategorized as something else, will not deliver AGI, or any usable and reliable AI for that matter, beyond sequence-to-sequence mapping applications (voice to text, text to voice, etc.).
As for money and reputation, there was a lot of both behind gold-making in medieval times too, and look where that led.
Scientific optimism is a thinking distortion and a fallacy too.
Tool seems like a strong term for whatever ChatGPT is right now. Absurdly overhyped curiosity? Insanely overengineered autocorrect? Dystopian MadLibs? Wall Street Wank Sock?
I’m not trying to downplay its potential, but I don’t know of anyone who trusts it enough for what I’d consider “tooling”.
Right now they're basically an improved search engine, but they aren't solving the hard problem of making money.
Had Google become a utility and frozen its search engine half a decade or more ago, we would actually have something you could add AI on top of and come out with an improved product.
As it stands, capitalism isn't going to fix GIGO with AI
I don't think that's really what Apple is going to do with it, though; it's not going to be for factual question-and-answer stuff. It will be used more like a personal assistant: what's on my calendar this week, who is the last person who called me, etc. I think it will more likely be an LLM in the background that uses tools to query iCloud and such, i.e., making Siri actually useful.
Ditto. They'll use it now while they stand to benefit and in 3 years they'll be lambasting OpenAI publicly for not being private enough with data and pretend that they never had anything to do with them.
This partnership is structured so that no data is logged or sent to OpenAI.
Some people here somehow think they will simultaneously outsmart:
* The CEO of a three trillion dollar company that employs 100,000+ of the best talent you could find around the world, with the best lawyers in the world one phone call away. Also, one of the best performing CEOs in modern times.
AND
* The CEO of the AI company (ok ... non-profit) that pretty much brought the current wave of AI into existence, and who has also spent the best part of his life building and growing 1,000s of startups in SF.
Lol.
You make it sound like it's merit or competence that landed Cook in that position, and that he somehow has earned the prestige of the position?
I could buy that argument about Jobs. Cook is just a guy with a title. He follows rules and doesn't get fired, but otherwise does everything he can with all the resources at his disposal to make as much money as possible. Given those same constraints and resources, most people with an IQ above 120 would do as well. Apple is an institution unto itself, and you'd have to repeatedly, rapidly, and diabolically corrupt many, many layers of corporate protections to hurt the company intentionally. Instead, what we see is simple complacency and bureaucracy chipping away at any innovative edge that Apple might once have had.
Maintenance and steady piloting is a far different skillset than innovation and creation.
Make no mistake, Cook won the lottery. He knew the right people, worked the right jobs, never screwed up anything big, and was at the right place at the right time to land where he is. Good for him, but let's not pretend he got where he is through preternatural skill or competence.
I know it's a silicon valley trope and all, but the c-class mythos is so patently absurd. Most of the best leaders just do their best to not screw up. Ones that actually bring an unusual amount of value or intellect to the table are rare. Cook is a dime a dozen.
I was with you until your last sentence. By all accounts Cook was one of the world's most effective managers of production and logistics -- a rare talent. He famously streamlined Apple's stock-keeping practices when he was a new hire at Apple. How much he exercises that talent in his day-to-day as CEO is not perfectly clear; it may perhaps have atrophied.
In any case, "dime a dozen" doesn't do him justice -- he was very accomplished, in ways you can't fake, before becoming CEO.
I look at it from a perspective of interchangeability - if you swapped Steve Ballmer in for Cook, nothing much would have changed. Same if you swapped Nadella in for Pichai, or Pichai for Cook. Very few of these men are exceptional; they are ordinary men with exceptional resources at hand. What they can do, what they should do, and what they can get away with, unseen, govern their impact. Leaders that actually impact their institutions are incredibly rare. Our current crop of ship steadying industry captains, with few exceptions, are not towering figures of incredible prowess and paragons of leadership. They're regular guys in extraordinary circumstances. Joe Schmo with an MBA, 120 IQ, and the same level of institutional knowledge and 2 decades of experience at Apple could have done the same as Cook; Apple wouldn't have looked much different than it does now.
There's a tendency to exaggerate the qualities of men in positions like this. There's nothing inherent to their positions requiring greatness or incredible merit. The extraordinary events already happened; their job is to simply not screw it up, and our system is such that you'd have to try really, really hard to have any noticeable impact, let alone actually hurt a company before the institution itself cuts you out. Those lawyers are a significant part of the organism of a modern mega corporation; they're the substrate upon which the algorithm that is a corporation is running. One of the defenses modern corporations employ is to limit the impact any individual in the organization can have, positive or otherwise, and to employ intense scrutiny and certainty of action commensurate with the power of a position.
Throw Cook into a start-up arena against Musk, Gates, Altman, Jobs, Buffet, etc, and he'd get eaten alive. Cook isn't the scrappy, agile, innovative, ruthless start-up CEO. He's the complacent, steady, predictable institutional CEO coasting on the laurels of his betters, shielded from the trials they faced through the sheer inertia of the organization he currently helms.
They're different types of leaders for different phases of the megacorp organism, and it's OK that Cook isn't Jobs 2.0 - that level of wildness and unpredictability that makes those types of leaders their fortunes can also result in the downfall of their companies. Musk acts with more freedom; the variance in behavior results in a variance of fortunes. Apple is more stable because of Cook, but it's not because he's particularly special. Simply steady and sane.
This is absolutely true. But that doesn’t imply that Tim Cook is so unexceptional that anyone with a 120 IQ could do the same job he does. The fact that Steve Jobs himself trusted Cook as his right hand man and successor when Apple probably has literally thousands of employees with at least a 120 IQ should be a sign of that.
Partly because little of this is really a question of intelligence. If you want to talk about it in psychometric terms, based on what I’ve read about the man he also seems to have extraordinarily high trait conscientiousness and extraordinarily low trait neuroticism. The latter of the two actually seems extremely common among corporate executive types—one gets the sense from their weirdly flat and level affect that they are preternaturally unflappable. (Mitt Romney also comes across this way.) I don’t recall where I read this, but I remember reading Jobs being quoted once that Cook was a better negotiator than he was because unlike Jobs, Cook never lost his cool. This isn’t the sign of an unexceptional person, just a person who is exceptional in a much different way than someone like Steve Jobs. And, contrary to what you claim at the top of your comment, someone like Tim Cook is pretty distinguishable from someone like Steve Ballmer in the sense that Ballmer didn’t actually do a good job running Microsoft. I don’t know if that was related to his more exuberant personality—being a weirdly unflappable corporate terminator isn’t the only path to success—but it is a point against these guys being fungible.
Jobs was growth stocks, Cook is fixed income. Each has their place, and there are good and bad versions of each.
Historians often debate whether Hitler was some supernatural leader, or a product of a culture looking for a scapegoat.
I'm on the side of culture. That's what I see with most of the business leaders.
The UI shows a "do you want your data to be sent to OpenAI?" popup.
The parent is partially right, the keynote mentioned that OpenAI agreed to not track Apple user requests.
I would like to see that codified in a binding agreement regulators can surface in discovery if needed. Trust but verify.
I'm reasonably sure you just described the SEC and the (paraphrasing Matt Levine) "everything is securities fraud"-doctrine. Yes Apple has some wiggle room if they rely on rule-lawyering, but.. I really don't think they can wide-spread ignore the intention of the statements made today.
California and EU law require keeping data like that to be opt-in afaik, so it doesn't need a promise to not do it.
The partnership is structured so that Apple can legally defend including language in their marketing that says things like "users’ IP addresses are obscured." These corporations have proven time and time again that we need to read these statements with the worst possible interpretation.
For example, when they say "requests are not stored by OpenAI," I have to wonder how they define "requests," and whether a request not having been stored by OpenAI means that the request data is not accessible or even outright owned by OpenAI. If Apple writes request data to an S3 bucket owned by OpenAI, it's still defensible to say that OpenAI didn't store the request. I'm not saying that's the case; my point is that I don't trust these parties and I don't see a reason to give them the benefit of the doubt.
The freakiest thing about it is that I probably have no way to prevent this AI integration from being installed on my devices. How could that be the case if there was no profit being extracted from my data? Why would they spend untold amounts on this deal and forcibly install expensive software on my personal devices at no cost to me? The obvious answer is that there is a cost to me, it's just not an immediate debit from my bank account.
Requests are not stored by OpenAI, but stored by Apple and available on request.
That's how I interpret it. It's similar to that OneDrive language which was basically allowing user-directed privacy invasion.
Inevitably, OpenAI will consume and regurgitate all data it touches.
It is not clean, and anyone thinking OpenAI won't brutalize your data for its race to general AI is delusional in one of several ways.
I’m not sure I understand the paranoia that Apple is secretly storing your data. Sure they could secretly do so but it doesn’t make any sense. Their whole schtick is privacy. What would Apple benefit from violating what is essentially their core value prop? They’d be one whistleblower away from permanent and irreparable loss of image.
They're not doing it secretly. They are storing it, and they admit it.
The question is: is it encrypted E2E everywhere, how controlled is it on device, how often is it purged?
The ubiquity of cloud means there's a huge privacy attack surface, and it's unclear how much of that is auditable.
Lastly, there's no reason to think Apple will avoid enshittification as the value of their ecosystem and users grows.
It just takes one bad quarter and a greedy MBA to tear down the walls.
Past privacy protection is no guarantee of future protection.
What's the worst possible interpretation of Apple and CloudFlare's iCloud Private Relay?
That won't stop Apple from lambasting them later
Sure
There’s a lot I don’t like about Sam Altman. There’s a lot I don’t like about OpenAI.
But goddamn they absolutely leapfrogged Google and Apple and it’s completely amazing to see these trillion dollar companies play catch-up with a start-up.
I want to see more of this. Big Tech has been holding back innovation for too long.
They "leapfrogged" Google on providing a natural language interface to the world knowledge we'd gotten used to retrieving throug web search. But Apple's never done more than toyed in that space.
Apple's focus has long been on a lifestyle product experience across their portfolio of hardware, and Apple Intelligence appears to be focused exactly on that in a way that has little overlap with OpenAI's offerings. The partnership agreement announced today is just outsourcing an accessory tool to a popular and suitably scaled vendor, the same as they did for web search and social network integration in the past. Nobody's leapfrogging anybody between these two because they're on totally different paths.
Siri is a toy, but I don't think that was Apple's intent. It's been a long-standing complaint that using Siri to search the web sucks compared to other companies' offerings.
Apple's product focus is on getting Siri to bridge your first-party and third-party apps, your 500GB of on-device data, and your terabyte of iCloud data with a nice interface, all of which they're trying to deliver using their own technology.
Having Siri answer your trivia question about whale songs, or suggest a Pad Thai recipe modification when you ran out of soy sauce, is just not where they see the value. Poor web search has been an easy critique to weigh against Siri for the last many years, and the ChatGPT integration (and Apple's own local prompt prep) should fare far better than that, but it doesn't have any relevance to "leapfrogging" because the two companies just aren't trying to do the same thing.
That's the complaint! They play in the same space, they just don't seem to be trying. Siri happily returns links to Pad Thai recipes, it's not like they didn't expect this to be a use-case. They just haven't made a UX that competes with others.
And it's not just web search! Siri's context is abysmal. My dad routinely has to correct the spelling of his own name. It's a common name, there are multiple spellings, but it's his phone!
My favorite thing with names is I have some people in my contacts who have names that are phonetically similar to English words. When I type those words in a text or email, Siri will change those words to people’s names.
Ah yes, them saying “we’re bad at it on purpose, but are scrambling to throw random features in our next release” is definitely a great defense.
Apple bought Siri 14 years ago, derailed the progress and promise it had by neglect, and ended up needing a bail out from Sam once he kicked their ass in assistants.
Call it whatever you want.
Big Tech is the only reason OpenAI can run. Microsoft is propping them up with billions of dollars worth of compute and infrastructure
And the foundational tech (Transformers) came from Big Tech, aka Google
It came from Google employees who left to found startups.
Google had technical founders, now it’s run by MBAs and they are having a Kodak Moment.
Isn’t MS heavily invested in them and also letting them use Azure pretty extensively? Rather, I think this is more like an interesting model of a big tech company actually managing to figure out exactly how hands off they need to be, in order to not suffocate any ember of innovation. (In this mixed analogy people often put out fires with their bare hands I guess, don’t think too hard about it).
Change is inevitable in the AI space, and the changes come in fits and starts. In a decade OpenAI too may become a hapless fiefdom lorded over by the previous generation's AI talent.
Disagree. This feels more like the Google search partnership with Apple's Safari that has lasted a long time. Except in this case, I think it is OpenAI who will get the big checks.
If Apple wasn't selling privacy, I'd assume it was the other way around. Or if anything, OpenAI would give the service out for free. There's a reason why ChatGPT became free to the public, and GPT-4o even more so. It's obvious that OpenAI needs whatever data it can get its hands on to train GPT-5.
ChatGPT was free to the public because it was a toy for a conference. They didn't expect it to be popular because it was basically already available in Playground for months.
I think 4o is free because GPT-3.5 was so relatively bad that people are constantly claiming LLMs can't do things that 4 does just fine.
This integration is way more limited and frictioned. Whereas with search Apple's fully outsourced and queries go straight to your 3rd-party default, Siri escalates to GPT only for certain queries and with one-off permissions. They seem to be calculating that their cross-app context, custom silicon, and privacy branding give them a still-worthwhile shot at winning the Assistant War. I think this is reasonable, especially if open source AI continues to keep pace with the frontier.
Why would Apple want to keep paying big checks while simultaneously weakening their privacy story?
If Apple were paying to use Google the partnership would not still exist today.
Apple doesn't even bother to highlight their cooperation with OpenAI. Instead they bury the integration of ChatGPT as the last section of their "Apple Intelligence" announcement: https://www.apple.com/newsroom/2024/06/introducing-apple-int...
It's a win for OpenAI and AI. I remember someone on Hacker News commented that OpenAI is a company searching for a market. This move might prove that AI, and OpenAI, has a legitimate way to be used and profitable. We'll see.
Steve Jobs famously said Dropbox is a feature not a product. This feels very much like it.
Well, Dropbox is a sub $8bn company now that hasn't really grown in 5 years, so maybe Steve was right?
Yea, I mean…if you’re only doing $3.5Bn in annual revenue at 83% gross margins…like, are you even a product bro?
If anything, your words prove he was absolutely wrong.
I think he was right - now you've got OneDrive automatically bundled into Windows, iCloud in MacOS, Google Cloud in the Google ecosystem and Dropbox down 25% from IPO with no growth. I get nagging emails from them every month or so asking me to upgrade to a paid plan because I'll definitely not regret it.
Looking at their stock performance and the amount of work they’ve put into features that aren’t Dropbox file sync, he appears to have been right. iCloud doc syncing is what DB offered at that time.
My gut says that it's a stopgap solution to implement the experience they want.
I think Apple's ultimate goal is to move as much of the AI functionality as possible on-device.
yup.. and that's good for consumers as well because they don't have to worry about their private data sitting on OpenAI servers.
The idea that they would give ChatGPT away to consumers for free without mining the data in some form or another is naive.
I doubt that Apple can ever come up with a better LLM than OpenAI's; they gave up trying to make Siri as good as Google Assistant after 10+ years. I don't think they are that good at cloud or ML compared to other big tech companies.
But they’ve also signalled they’ll probably support Google/Anthropic in the future
yeah, somehow it reminded me of the fb integration too. we'll see how well it works in practice. i was hoping for them to show the Sky demo with the new voice mode that openai recently demoed
Apple is also claiming they are going to do privacy-protecting AI.
I'm quite skeptical of Apple.
Not looking forward to the equivalent of the early Apple Maps years.