Lots of surprises on the downside from all the reviews. Passthrough is much more limited in quality than expected, with motion blur, pixelation, distortions, and limited color and dynamic range. The eye-tracking-driven input method, which was seen as the holy grail, turns out to be annoying after a while because people don't naturally always look at what they want to click on. Personas straight up aren't ready. The lack of AR features is the biggest surprise: they tried hard to avoid it being a VR device, but all the actual high-quality experiences, especially the ones people are impressed by, are the VR ones.
For me the biggest issue, though, is that it can't fulfill its primary use cases:
Want it for productivity? It can't run macOS applications, and if you want to use your actual Mac, it can't do multiple monitors.
Want it for entertainment? People want to enjoy photos, videos, and movies with other people, and it can't include them. Even if those people also have a Vision Pro, I haven't yet seen any sign of multiple people being able to do these things together.
All up, it seems far more immature and dev-kit-stage than I was expecting.
Reminds me of iPhone 1.
Everything you've said is reminiscent of the reviews of the first iPhone.
This iPhone trope has gotta die. I worked at Motorola when the iPhone came out. Every single engineer knew this thing would blow everything else out of the water. It was one of the largest leaps in consumer tech devices ever. I assure you the Vision Pro is nowhere close to that.
I love how people just rewrite history on the internet lol.
The iPhone 1 was a colossal PoC. It was slow, and most of the web didn't work on it. Its only appeal was the full touchscreen, which of course sucked to type on but looked cool (which is mostly the reason people bought it). Everyone who needed mobile compute functionality was still on BlackBerry and some other devices.
There was a time during the early 2010s when the iPhone was better than everything else due to native hardware, in-house software, and updated functionality. However, by 2016 Android had caught up, and since the first Pixel came out it has pretty much been ahead ever since.
Not sure what you're going on about with the original iPhone being a PoC.
Given what was available at that time (I was using a Windows Mobile O2 XDA and a BlackBerry), the iPhone was simply magical. The ability to browse the full web on the go, with a proper mail client, was amazing.
It was worth the money to travel to San Francisco from Singapore just to get one (plus the cost of the AT&T SIM masker to spoof it onto the local Singapore telco network).
I think you have rose-tinted glasses. I had one too, and the browser was garbage over 2G. You forget how much time was spent looking at that checkerboard pattern.
As for email: proper email client? It was POP3-only, and you had to manually tap to fetch new messages.
You're right about the email client. I had IMAP email clients on mobile for a while before the iPhone supported it. Email on the OG iPhone was terrible.
I wouldn't bother responding to the grandparent. Literally every time Apple releases a new product, there's a bunch of people collectively shrugging off whatever the product claims to be bringing, and along come the "the iPhone v1 was crap too, and look how that turned out" apologists. Not worth the discourse.
Again, no. You've somehow completely forgotten late-2000s tech, lol.
Mobile internet in general was pretty painful when the iPhone launched. Websites weren't optimized for mobile, and mobile data was unusably slow. Most people who wanted portability were using things like netbooks, which you could actually multitask on.
BlackBerry was the go-to for an actual phone because it was much easier to type on (it had the best keyboard at the time) and had well-developed software for things like email, a basic browser, etc.
I'll leave you with this staple piece of internet history: https://maddox.xmission.com/c.cgi?u=iphone/
That's the history I remember, though.
I remember the first time I saw an iPhone. It was 2am at a house party with a bunch of 19-25 year olds. Pretty much everyone stopped drinking or dancing and played with this dude's phone for three hours.
This was my first experience with an iPhone too. A rich guy (friend of a friend) had one, my friend and I spent the rest of the night trying it.
I had the first Oculus when it came out. Big hit at my workplace. Anything flashy is going to get attention. The problem with Apple is that they have prioritized, do prioritize, and will continue to prioritize flashiness over usability. It's actually pathetic that you can't install Linux on Apple silicon (and no, the reverse-engineered, hacked-together Asahi Linux does not count).
I think if you talked to someone working on Meta Quest they would say this is going to blow the competitors away. If you talk to a reviewer for a tech magazine they’re going to complain about any detail they can find in the 1.0 launch of a new product line as if it’s a colossal failure (aka PoC).
I think the difference is everyone knew market penetration on cell phones would be close to 90%. This may be better than the Quest but is it going to take AR/VR mainstream? Seems iffy. In which case drawbacks may never get ironed out.
Quest and Vive weren't great, but they actually had working VR that set the stage for subsequent development.
The iPhone 1 set the stage for tech jewelry.
I could've sworn that "it's easy to type on" was the one weird trick it did right? Though perhaps I'm just misremembering the media; my first iOS device of any kind was the iPod touch with retina display.
Something about Apple having a temporary monopoly (or possibly monopsony) on capacitive touch screens, where everyone else was stuck with resistive ones?
I don't think Apple had a monopoly on capacitive screens. A couple of other mobile devices used them and came out around the same time. Maybe Apple tied up all or most of the available production capacity for a bit?
Typing on the original iPhone wasn't perfect, but it was generally better than tiny physical keyboards in many cases.
Physical keyboards were better than any touchscreen one until swipe typing became standard. You could type faster on them, and they had more features, like arrow keys, which were useful on smaller screens.
There was a whole era of autocorrect, and the memes that came with it, due to how heavily it was used with touchscreen keyboards.
Of course, the advantage of a full screen for things like web and media mattered more, and making full-screen phones was cheaper, so physical keyboards died out. Screen sizes increased as well.
And motion controls. Between the Nintendo Wii and the first iPhone, people were obsessed with motion controls for some reason.
The internet is the new storytelling at the campfire, turning humble actions into historical myths for eternity.
The better analogy is probably something like the Apple Watch.
Apple certainly didn't make the first smartwatch, but anyone who owned one before the Apple Watch was obviously a geek (said lovingly). Apple made the first mainstream-acceptable smartwatch by smoothing over a lot of the complaints about their competitors, while adding some of their own in the process, just like with the Vision Pro. It took a few iterations, but today people from all walks of life wear smartwatches. They're certainly not as ubiquitous as smartphones, but Apple made smartwatches a standard piece of tech that millions of people own, and they made plenty of money along the way.
The Vision Pro will probably be similar. For example, anyone wearing a VR/AR headset on a plane today would likely get stares. I bet a few years from now there will be several people on every plane wearing one of these. That doesn't mean Apple will make the best VR/AR headset or that VR/AR headsets will be a piece of tech that everyone owns, but Apple is capable of mainstreaming a piece of technology in ways that the Facebooks and Googles of the world aren't, even if that is due to their marketing prowess and the strength of their brand just as much as their technical expertise. And in that sense, the thing that matters more than any of the reviews dropping today is the Super Bowl commercial Apple has almost assuredly bought to show this thing off in two weeks.
Really? Why do Apple fanboys make these kinds of claims? Same with wireless Bluetooth earbuds, fingerprint readers, and Face ID: there are ample examples of these done well on the hardware side beforehand. The main advantage Apple has is seamless integration with its software, which of course pairs well with iOS because nothing else is allowed to.
You clearly missed my point entirely. I'm not a fanboy saying Apple's products are the best. I even specifically said their success is "due to their marketing prowess and the strength of their brand just as much as their technical expertise."
They weren't the first smartwatch, but Apple is the company most responsible for changing arbitrary societal metrics of "mainstream acceptance," like the percentage of people who would wear a smartwatch on a first date. That seems like an obvious observation and a "win," even if smartwatches aren't as ubiquitous as smartphones. I think the Vision Pro will follow a similar trajectory of success, in that it will take years before anyone uses the word "success," but a few years from now you'll get on a plane, notice more than a few people wearing headsets, and that will be because of Apple.
I agree with you, but not about the Vision Pro. I could see the potential and use cases for the iPhone, iPad, AirPods, Apple Watch, Apple TV. This headset though? It's a gimmick. I can see some niche use cases for it in very specific industries and in gaming. But I don't see that "normal" people would want to spend significant money on this. Not even if the price dropped to $999.
I would consider my use case (desire) of comfortably working from, say, a coffee shop without having to bring my 24" screen pretty "normal" and non-niche.
"Done well" was the Nokia motto. They made solid phones with a ton of features. Look what happened to them.
Their problem was that none of the features were _usable_. It was like they released the first MVP the engineering team got done and forgot that people needed to use it too. But it gave them a bonus and another line on the spec sheet, so all was good.
For example, Nokia had copy & paste years before Apple. But it was shit. They _had_ it, but you could only copy very specific bits of text to other very specific locations. Even Android had the same issue: you could copy some bits, not others.
Apple isn't innovating; they haven't for a long time. They rarely come up with something "new" that _nobody_ has done yet.
What they are pretty much the best at is taking the tech everyone else has tried and packaging it into a usable form factor for the normal, non-Hacker-News consumer.
Wireless BT headphones existed before the AirPods, but Apple made them so seamless that even my mom could use them and hasn't needed any help with them. Open box, insert in ear, done.
To be fair, iOS copy and paste is still shit today; selecting and copying/pasting is really one of the worst experiences on iOS.
Correlation does not imply causation. I think smartwatches (and step trackers) were a thing independent of Apple.
I remember a friend talking about load balancers when they first came on the market 20 years ago. Cisco had this thing called "LocalDirector", which I believe couldn't handle load in the first place, while competitors did load balancing in hardware.
I was puzzled why people bought them.
My friend said, "Look, people buy $1M of cisco equipment, and they can just add a line item for one or 10 of these with no friction"
So, I think Apple made their watch a "line item". People buy a phone, they need cables, the watch is sitting there, and they say "OK!" and try one.
(Aside: I love my Garmin watch. I just put it on my wrist; I haven't hooked it to my phone or connected it to the internet. The battery life is great. I track my sleep, which seems to be when most people put their Apple Watch on a charger. I put my watch on the charger during my shower, which is all it needs.)
Same as the Apple Watch.
Every day, though. I charge my Garmin once a week, when it gets down to 50%. The Garmin is a fitness watch with a few basic smartwatch features, though; the Apple Watch is a smartwatch (with a lot of fitness features). The two aren't really comparable, I don't think.
...to top it up to 7 days of charge
$3,500, an external battery, and still "looking weird": I don't think you are going to see too many of them on planes, unless it's some die-hard Apple fanboy. If they manage to make the next iteration slimmer (like, half the size) and with a built-in battery, this might start to happen. But the market will be smaller than the smartwatch one anyway.
Prices will come down, and there'll be a non-Pro line. You think the market will be smaller, but you forget it replaces displays, so people with laptops will migrate to using this, and then Apple will come out with headless laptops.
Until the ergonomics radically improve, it will be a niche device for enthusiasts.
IF they manage to produce an AR device that 1) you almost won't notice you are wearing and 2) has light pass-through capabilities, being real AR and not VR mimicking AR, THEN it can get mass adoption, at least for office workers or as a replacement for big TV screens.
I still think it's weird how many people's eyes are glued to their phone screen in everyday situations and social settings. I think it's realistic that these headsets become acceptable and normal.
How can you assure us that Vision Pro is nowhere close to that? Do you have one?
As someone who bought the original iPhone: it was extremely impressive, but it had many, many flaws. The browser was practically unusable over 2G, and pinch-zooming the New York Times desktop site was never actually practical.
I think the parallels are clear.
I also bought the original MacBook Air. Now that truly was terrible and stupidly overpriced. More expensive than the Vision Pro when adjusted for inflation, and with major functional problems. Today it’s the world’s most popular laptop.
The iPhone was made from pretty much what was available to other manufacturers, plus some secret software sauce, and was priced like a BlackBerry of the time. There were immediately plenty of use cases that competitors kind of did, but not so well.
These goggles are as bespoke as it gets, are priced at 7x the competition (and more than a flagship laptop), steer clear of the most popular existing use case, which is games, and offer... what exactly, again?
That's quite some difference.
Neither of these statements is remotely true.
Samsung CPU, Samsung OLED display, a Balda touchscreen analogous to what LG had used previously, Marvell Wi-Fi, Skyworks cellular, and various Intel and Infineon aux chips. $499 vs. BlackBerry's 8320 at $449.
Not even remotely.
The iPhone was also subsidized by Cingular/AT&T at launch. The $449 was the retail cost of the BlackBerry.
Samsung OLED? The iPhone X was the first to feature an OLED display. Surely you meant LCD?
The iPhone used a capacitive touchscreen whereas competitors used resistive touchscreens in their smartphones, which instantly made the iPhone more usable than the Nokia N95/N97, which was already a fully featured pocket computer in a mobile case and likely much more powerful than the original iPhone. Apple also pulled a dirty logistics trick on other smartphone manufacturers by buying up the entire production of capacitive-touchscreen factories and similar key components three years ahead, leaving other phone manufacturers unable to respond.
https://en.wikipedia.org/wiki/LG_Prada
Except when the iPhone came out, all the reviewers were like "holy shit, this is mind-blowing", while with this one everyone is like "it's a shittier Oculus Quest with some Apple polish".
Almost every outlet is calling it a better Oculus Quest but one that is fantastically expensive.
Definitely looks better for productivity, but for gaming and social?
Nothing on the Vision Pro is comparable to things like Beat Saber, VRChat, or Pavlov.
Hard to tell if the hand tracking could handle those sort of experiences currently.
I mean, it's been over ten years since the new generation of VR headsets (I'm thinking of the Oculus Rift) came out; if at this point in modern VR development it weren't better than existing offerings, I'd be deeply disappointed in Apple's R&D.
Anyway, I bring that up because when the iPhone came out, it really did do something different from the locked-in feature phones of the time. I did have to look it up to refresh my memory (https://www.cnet.com/pictures/original-apple-iphone-competit...), but its competition that year was a lot of BlackBerry-style physical keyboards and resistive touch screens running Windows Mobile. I do want to highlight the LG Prada, the first capacitive-touch-screen smartphone: it came out the same year the iPhone was announced, and it, along with the HTC Touch on that page, had a similarly screen-focused form factor.
The Verge is probably the most critical. Here's what they say:
“marvelous display, great hand and eye tracking, and works seamlessly in the ecosystem, … The Apple Vision Pro is the best consumer headset anyone's ever made”
Yes, they also list a bunch of flaws. But the people trying to make out that the reviews are saying it's a shittier Oculus Quest are not being honest.
No one is saying shittier Oculus Quest. It has a much higher resolution, and I presume people will get used to letting their eyes linger a bit longer on what they want to click. We're so used to the mouse paradigm that we try to immediately apply it here.
It's not a trope if it's true. Most hardcore nerds didn't get it until they had tried it hands-on (including myself; I panned the device hard until I tried it). Then it was the exorbitant price point ($650 at a time when nobody really paid for phones). Then it was the lack of a hardware keyboard. No "real" apps. No copy and paste (even my older Symbian S60 devices of the time had that). The list goes on and on.
I get it: if you were at another phone manufacturer, you might've been scared, but the reality is the iPhone didn't really pick up steam in the market until the 3G or 3GS.
People thought Apple making a phone was odd. No one thought mobile phones, or even smartphones, were odd. Mobile phones were common when the iPhone came out, and even smartphones weren't uncommon. And once the iPhone did come out, there was immediate interest.
AR though, that's something the public hasn't shown much interest in yet. These products are looking more like the Segway (which was once supposedly going to revolutionize transportation) - cool, popular in a few niche markets, but not the revolution that people imagine them to be.
Nobody thought Apple making a phone was odd. Even prior to the announcement of the original iPhone and prior to any rumors about actually making a phone, the internet was full of amateur 3D mockups of what an "iPhone" could be like, including looking like an iPod with a click-wheel. They had conquered the personal media player market, and now people wanted a phone with the design of an iPod to carry less stuff in their pockets.
Everyone forgets that the nerds had Windows CE phones before Apple hit the market. It was the 3GS and third-party dev arcade games that converted me. Apple's challenge now is how to convince people who hate them to go work for them.
I was one of those nerds! I equally loved it and loathed it. Some very cool software available, but as a cellphone-like experience, it was neither one thing nor the other. Most of the time I had a feature-phone as a daily-driver alongside the clunky early Windows stuff. Wasn't a bad compromise as I didn't want or need the power of Pocket PC/Windows Mobile 24/7, but it was however a fairly expensive one!
People also forget that using a stylus with a touchscreen made for a pretty crappy phone, and a lot of the larger PDA-like ones weren't ergonomic to hold up to your face. The Windows devices with sliding keyboards were a pretty decent compromise in terms of size and features, though. But boy were they expensive in the UK at the time, as they were generally imports from the USA.
I loved my HTC with the slide-out keyboard! Sure, it would reboot occasionally when you slid the keyboard out, and it took 45 minutes to send an email over EDGE, but that was Back to the Future stuff back then.
Windows CE or Nokia N95 or Nokia 9500.
To be clear, not disagreeing with your core point, but as a reference comparison, the Meta Quest 2 sold more units in its first 2-2.5 years than Apple sold iPhones (and iPhone 3G) in the equivalent time period.
Yep. People forget that the original iPhone, which could only run built-in apps, really was a failure; it only exploded when they opened up the App Store (perhaps aided by this Cartmanland marketing strategy).
Nobody thought Apple making a phone was odd. Phone + MP3 players were already in everybody's pockets. Telcos were making money selling ringtones for $2.99 a pop. Apple already had a digital music store. It all lined up. And that was before they tested the waters with the Moto ROKR in 2005.
Back in 2010 there were rumours of Apple building TVs and cars. Those would have been weird because they're areas where Apple had no experience and no software content to provide.
These AR/VR devices are definitely doomed unless they're somehow given away basically for free. IMHO, success for any new technology usually comes down to:
1. Does it make you look more attractive? No.
2. Does it make you money? No, except for the YouTubers who will surely be pandering about how meme-worthy they are. How many streamers use VR? Oh yeah...
3. Does it make me more valuable to others? No. I can't see a mass-adoption area where having a computer strapped to your face enhances your productivity. This may be the area most attackable by some great use cases, but I don't see them today (once again, mass market in a way that the Apples of the world would give an F about).
I remember being angry that my Moto Razr V3 had the power to run awesome software but as a teenager I didn't have an easy way to boot stuff up on it. IMHO it was a perfect phone, except there was no way to run what I wanted. The best I could do was program text messaging services and use those. They discontinued the phone rather than give consumers the freedom to just use the hardware. I thought the iPhone was dumb, but when the App Store came out that was game over. I really missed the convenience of having buttons and being able to text with my phone in my pocket, but at least it was consumer programmable... even if you had to pay a $100 premium to become a "Developer" in order to do so.
Eventually the Moto X came along. I thought it was the perfect phone. Its voice-assist features worked better than most voice assistants do even today. You could easily do everything you wanted with the phone in your pocket and your earbuds in.
It had the perfect size screen. It had a great ambient-on watch-face screen that looked nice sitting on your desk among clutter. The dimple in the back was a really nice touch, it made it like a worry-stone[0] in your pocket. It had a lower resolution, but I kinda liked that about it. I think Motorola was bought or something, but whatever the reason, the next phone in the series ditched every single thing that made the Moto X special.
Those two devices were both my favorite mobile computing devices, and probably the closest I've come to getting fan-angry about a company screwing up their own magic.
More to the point: you might have been able to see it at Motorola, but even looking back, I don't understand why the Razr couldn't have won against the iPhone. The Razr was a surprisingly capable little machine!
[0] - https://en.m.wikipedia.org/wiki/Worry_stone
You're gonna love this: Motorola was bought by Google in an attempt to jump-start their hardware business, but it turned out to be a failed acquisition. They ended up grabbing the patent portfolio and selling off the rest of the business fairly quickly.
I wouldn't call Motorola a failed acquisition. Google bought Motorola as a shield in an increasingly litigious environment: this was the age of Apple going "thermonuclear", when Microsoft and patent trolls were wantonly shaking down Android vendors and beginning to circle Google itself.
Motorola (under Google) had the best value-for-money smartphones: their midrange was solid and reasonably priced, while everyone else was continuously shifting to flagships, with each release priced higher than the last.
From the outside looking in, Google appeared to dispose of Motorola to make Samsung happy. Samsung had been complaining loudly and widely about the Motorola acquisition, and openly flirted with other mobile platforms as a hedge.
Motorola shareholders got paid, Google got the patents it wanted, Samsung remained the 600 lb gorilla in Androidland, and Lenovo got a good brand and keeps making OK phones to this day. So, not a failed acquisition by any reasonable measure.
Yeah, I generally agree with that framing. That's a good detail about Samsung that I was not aware of. But I will say that, from an ex-insider perspective, what you describe is mostly a failure.
Google was searching desperately for revenue diversity and was acquiring pretty hard at the time. I think the intent with Moto was to acquire a hardware shop and establish a market-leading brand to compete directly with Apple. They eventually arrived at the Pixel by building it entirely in-house. That Moto got raided for IP and spun out to Lenovo did not meet those (high) expectations. There was no need for anyone to take a loss, but I think Google wanted to make a whole lot of money and did not.
I'll defer to you as an insider on the hardware efforts, but the timing of the Motorola acquisition, in the immediate aftermath of Google's failed bid[1] on Nortel's patent portfolio, made it seem like a patent play more than a hardware acquisition. I recall Motorola's then-CEO even threatened to sue Google over Android patent infringement just before the acquisition, so it most certainly wasn't just about building a hardware business.
1. It was a crazy time. The winning consortium, which included Apple and Microsoft, invited Google to join their $4.5B bid, which would have made the patents entirely useless as a defense of Google against Apple or Microsoft. Google wisely declined, but bought Motorola less than a year later for $12B. https://www.theguardian.com/technology/2011/jul/02/google-pi...
Soon after the acquisition, Google (really Google X) started working with Motorola on a new watch, prototyping it on existing hardware (the Motoactv?). Within a few weeks, the whole thing got cancelled. Not sure if that was because of the Samsungs of this world complaining or because of government agencies.
Source: I was supposed to run the dogfood program in the NYC office. I never got the watches, but somewhere I still have the USB extension cords and the (then quite fancy) chargers with dual USB ports, one for your phone and one for the watch.
And they still do under Lenovo. Multiple-day battery, almost stock Android with very useful enhancements.
IIRC, the Moto X was the first phone made by the Google-owned Motorola. The devices were made and assembled in the US.
I was hopeful for the post-acquisition Motorola, but it didn't quite pan out. The IP salvaged from the purchase was always a big part of the deal, so it wasn't really a loss; it just wasn't much of a win either.
Moto X (2013) was the device I intended to describe. You are right on both counts. I think I conflated the second-gen closure of the US-based plant with the acquisition in my memory. And I'm now remembering that the real reason I was so disappointed is that you could operate the entirety of the Moto X (2013) screen one-handed. With the Moto X (2014), you could not.
I have an iPhone mini now, but it's still not small enough to operate 100% one-handed, at least not without grip adjustments. The Moto X wasn't quite a flagship phone, but it came close enough. I can't even find a properly one-handed phone anymore.
Loads of phones could side-load J2ME apps. I never owned a RAZR V3, but I imagine it would have supported J2ME apps, as I ran them on even crappier phones. The first things I'd install on my dumbphones back in 2005 were Google Maps and Opera Mini. I'd grab all kinds of J2ME games off Zedge and other sites and copy them over back in the day.
Moto X was peak phone.
I do think the iPhone is often made more mythical than it was. Sure, it was good, and we could finally use our sausage fingers to navigate it. But I recently heard someone say on the radio that the iPhone was the first phone with a touch screen. Meanwhile, me and the guys (yeah, all guys) were rocking Sony Ericsson P800s and the like in 2003, '04, '05.
I had an HTC Touch when the iPhone came out, and I was most envious that you could do two-finger zooming in Google Maps instead of tapping zoom buttons.
Many things Apple-related are very often about applying existing technologies in a novel place. Many forget that the touch screens of the time were resistive [1], requiring a press on the screen (small, but a press nevertheless) and mostly usable with styluses. As far as I recall, the novelty was applying a capacitive touch screen [2] to navigation: it doesn't require physical force, and that's where fingers shined.
[1] https://en.wikipedia.org/wiki/Resistive_touchscreen
[2] https://en.wikipedia.org/wiki/Touchscreen#Capacitive_touchsc...
Sure. But one can argue that capacitive touchscreens were about to happen anyway (like ARM laptops...). And HTC, with their TouchFLO, were moving toward sausage-friendly UIs.
But I agree, Apple is absolutely good at taking all this tech at the right time and making a compelling product. Steve insisted on glass/capacitive, and that was just the best choice. Also, the UI didn't feel like a layer (that you could easily get out of), as with TouchFLO. I'm also an iPhone user at the moment; I switched from Android about 3 years ago. The whole experience still feels higher quality to me, although I have friends who would argue against that.
Peak BlackBerry was 2011. It's easy to forget that the iPhone takeover took years and many releases.
I kind of find that hard to believe, but maybe there were government/enterprise purchases boosting it or something? As far as I'm concerned, BlackBerry died with the BlackBerry Storm 2 in 2009, which was a huge piece of junk.
I used to go back and forth to Shenzhen all the time trading refurbs, and by 2010 I straight up would not buy anything other than Android devices, because 1) the iPhone was actually more expensive in mainland China due to it being a grey-market item back then, and 2) demand for everything else fell off a cliff.
Vision Pro is not earth shaking or category defining.
It's entering a crowded market that isn't even that big. As the premium option.
Climbing that hill is going to be a very tall order.
Apple's brand will not be a moat, either.
Zuck's initial response to the Vision Pro [1] was the correct one.
[1] https://www.roadtovr.com/apple-vision-pro-zuckerberg-reactio...
In retrospect, that was a very negative post, and I wanted to add that I'd like to see {V,A,X,*}R succeed as a sector. In fact, I'd like to see all of the players do well, including Apple. I really want to see a transportive vision of the future pan out.
I don't think this will be an easy market for anyone. It's a low-attachment, low-critical-app space.
I want to believe, though.
I worked at Nokia, and we certainly did not have that opinion, especially since we already had a couple of touch-based Symbian devices.
If we hadn't been busy with the Symbian vs. Linux internal feuds and the Microsoft deal, things would have turned out much differently.
The iPhone was subsidized by mobile carriers and/or interest free payment plans. I don't see that same path for this VR device, but maybe I am missing something.
The original iPhone wasn’t subsidized… it was bought outright. It was only after Apple proved the potential of the iPhone did the carriers get on board (and even then, it was limited support for quite a while).
Completely untrue. The original iPhone required a 2-year contract with AT&T/Cingular. https://en.wikipedia.org/wiki/IPhone_(1st_generation)#Releas...
The contract was a requirement, but it wasn’t subsidized like you’re thinking. Other phones at the time were cheap and subsidized as part of the contract. The iPhone, by comparison, was freakishly expensive and I don’t think Cingular was subsidizing it. And I don’t remember there being any penalties with cancelling the contract. But I already had an AT&T/Cingular account, so I’m not sure about the contract info.
The contract issue had more to do with how the phone interacted with the network. IIRC, AT&T was an exclusive provider because of the backend requirements (visual voicemail notification maybe?). I assume the contract was in part because they wanted to recoup those expenses.
Here’s a news article from the time that says that AT&T didn’t actually start subsidizing the phone until the 3G arrived.
https://usatoday30.usatoday.com/tech/wireless/phones/2008-07...
Also, the subsidy might have been from Apple, if anyone… Apple got a kickback of $10/month per iPhone user. They might have used that to keep the price lower, but that wasn’t from ATT’s side of the account until the 3G followup.
https://www.wired.com/2008/01/ff-iphone/
Your response seems a bit confused/confusing, but the linked articles explain the situation fairly well. The crucial point is that AT&T/Cingular had a 5 year exclusivity deal from the very beginning. They were "on board", and indeed no other carrier could get on board. The initial terms of the deal were that AT&T gave Apple $10 per month for every iPhone user. Then the deal was changed to have AT&T subsidize $300 per iPhone, thereby lowering the iPhone price. In either case, every iPhone sold required an AT&T contract and was locked to the AT&T network.
My only point is that the initial iPhone was an expensive phone that didn’t have typical carrier subsidies. It was successful in spite of this.
The original parent post claimed that it did and implied that this was the reason why it was so successful. They also implied that the new Vision Pro would need similar subsidies to be successful.
I’m not quite sure the killer feature is there yet for VR headsets. But if the usability is better for the Vision Pro than the Quest, et al., it could still be successful, regardless of the cost.
It did have subsidies, though. And they were “typical”. AT&T was paying for part of the cost so that people could buy it for $499 initially. Most other phones were that price at retail and unlocked.
That's completely untrue. The original iPhone DID NOT require a 2 year contract, you could absolutely buy it on a prepaid plan. Yeah, you had to "fail" the credit check to get offered the prepaid plans, but all you had to do was put "999-99-9999" as your SSN in the activation screens to get them.
That is not true at all. The original iPhone required a 2 year contract and you could only buy a maximum of 2 phones per account.
Pretty sure AT&T had exclusivity there for some time and didn’t get that for free..
It was sold with a subsidy in the UK in a weird way. IIRC you had to buy it from Apple, but O2 subsidised it as they were the only ones with EDGE (so it was assumed they would also sell you a contract). Worked out £50-100, which was mad as O2 couldn't actually make you get a contract... quite a few people I knew bought a bunch of them and gave them away as gifts.
Also it was a totally stand out appealing device accessible to everyone with immediate value to everyday people.
Apple was paid a monthly fee by Cingular per user for the iPhone 1, so it was just subsidized weirdly.
The first gen iPhone had a much better UX than other phones, so missing a few features like copy and paste was palatable.
I do think future generations of AVP will do well. Iterating and applying learnings and customer feedback will make this a good product.
The iPhone by far did not have a better UX. It looked nice, but it had no more functionality than other devices at the time.
In general, UX design is the argument people used to (and still do sometimes) run to "prove" that the device was better when it was clearly not. Fancy icons don't make a good UX; functionality does. You don't say copy and paste is good UX, you say it's a feature.
Functionality does not make good UX. Good UX makes good UX
Not sure how people don't remember what a revelation the capacitive screen was. It was miles better than most Nokia phones that mostly used resistive screens (not saying that Apple invented capacitive screens but they most certainly made it popular) and the navigation with the simple home button and everything else being instant feedback with the buttons on the screen was better than anything else on the market from what I remember. The keyboard especially was incredible with that light tapping sound and instant keystrokes appearing. While it wasn't functionally better than most phones (famously less functional than a Blackberry), it was very much the leader of the pack in details that ACTUALLY contribute to good UX. Just as the iPod and the click wheel was ahead with the instant feedback and usability of rotating the wheel to scroll at high speeds through hundreds of songs
I honestly think that people like you experienced smartphones in the mid 2010s and are just extrapolating back to what they think they were like in the late 2000s.
When iPhone came out, the mobile internet was so shit and screen was very low res, so the benefit of having a full touchscreen was minimal (mostly being able to select things directly, but that was a very small advantage). Mobile web wasn't a thing. You had to have wifi for any real speed. And if you had wifi, you were likely in a building where you could sit down, and there were things like netbooks and mobile pcs that were just better than the iPhone for doing web stuff. Even people with iPhones still mainly used them as iPods and phones before 3g cellular.
The keyboard on Blackberries was in fact better UX because it was easier to use and faster to type on, without any delay in appearance. The on screen keyboards were all shit until swipe typing became defacto standard.
And then you look at the drawbacks: no removable battery, no microSD expansion, shit proprietary cables that would break, no copy/paste, etc. All of them absolutely make the UX horrible.
The iPhone also came with a great data deal for the time, as I recall.
Not really. I had a cheaper unlimited AT&T unlimited data plan than the original iPhone data plan, and that included 3G data!
The trick was to just buy the smartphone outside of the plan. Then you get unlimited data at half the price. I'd pull gigs of data a month on my "dumbphone".
You could say that about literally any product. The most important factor that will determine success is the starting point. Initial conditions matter. I actually believe this is a developer tool from start to finish. It will reach end of life a lot quicker than we expect. The real product is going to be lightweight glasses we wear all day. But when? How many iterations of the AVP to meet the developer needs for seeding this future product that might be 10+ years away?
And before that the iPod. CmdrTaco wrote: "No wireless. Less space than a Nomad. Lame." Couple of years later the Nomad was gone.
If anyone does not understand the reference, "CmdrTaco" was the editor of Slashdot many years ago. Ref: https://en.wikipedia.org/wiki/Rob_Malda
If anyone does not understand the reference, "Slashdot" is a link-sharing site oriented at tech news, that was popular many years ago as HN is today.
https://slashdot.org
Was it not a mailing list before a site?
I'll let the next guy explain what a mailing list was.
Ah, memories from 20 years ago. How time flies.
I wonder what new site, and what new euphemism, will replace the "hug of death" and "the orange site" in 2044?
(Hopefully not robots that go around actually hugging people to death…)
Most of the issues listed by @zmmmmm are the same issues plaguing other VR headsets for a decade now: motion blur, pixelation, distortions, fake looking colors.
Apple didn’t fix any of them.
I never heard this complaint before. Can you explain more?
See the video in the OP at timestamp 5:00 to 5:20. The review from The Verge touched on it as well; essentially, at the end of the day, it's still a bunch of displays showing you camera feeds of the real world. And both displays and cameras have a lot of flaws—low field of view, motion blur, pixelation in low lighting, and a much more limited set of colors compared to our actual eyes.
VR avoids this since they just make up their own designed world instead, while most AR glasses avoid this by having actual transparent glasses and reflecting images off them instead. The Vision Pro is more ambitious and tries to pull off both AR and VR, resulting in these compromises.
I think this is the main thing — plenty of other headsets showed how these aspects are problematic. I very much appreciate just how hard a problem things like hand tracking, distortion, etc. are to solve, but I was hoping (perhaps unreasonably) we’d see a break through in at least one of them. It also feels like Apple design choices got in the way somewhat. Two things that surprised me most are the FoV and limited DCI color space coverage — now I get why they didn’t readily share that spec earlier.
And the touch bar, and the magic mouse, and ping, and mobile me, and hi-fi.
They're just a consumer goods company. Sometimes they make good things people don't appreciate at first. Sometimes they make bad things that people don't appreciate ever.
Their track record is better than the mean, but comparing every criticism of a first-gen Apple product to the iPod/iPhone launches is unserious. Of course some people panned any given new thing on Earth.
And this isn't even in response to someone predicting that AVP would fail, but just that the 1st-gen AVP is an immature product. The 1st-gen iPhone was an immature product! It's delusional (and discrediting to Apple!) to think the iPod, iPhone, OS X, Intel Macs, M1 Macs, etc. were as mature at launch as the later iterations we associate those technologies with now.
Does "AVP" mean Affordable Viable Product?
Apple Vision Pro (assuming you were genuinely asking).
*facepalm* Thanks for correcting me. Yes, I was asking sincerely!
I checked a couple of those old reviews and a couple things that seemed to be a common take with the iPhone that definitely aren't holding now:
- incredible amounts of hype
- loving the design
- loving the touchscreen and input (directly contrasting folks worrying about the eye tracking now)
- a sense (at least from CNET and PCMag) that it's really just an overgrown iPod, so they keep comparing it to an iPod (compared to the Vision, where folks get that it's a new category for Apple and have good comps outside anyway)
There are definitely similarities in terms of complaining about missing features that apple's probably going to add soon anyway (keyboard showing up in portrait and stuff). Lots of complaints about not supporting flash but we know how that went. Also apparently the headphone jack position was annoying.
What I'm not seeing in the current Vision reviews - and maybe it's impossible to see this in real time - is some feature that has the chance to change literally everything that people aren't able to comprehend just yet. These reviews being relatively dismissive of this web browsing on your phone thing is absolutely hilarious in hindsight. The only similar thing in the Vision is - the passthrough eye thing maybe? Nothing else seems particularly baffling.
I'm glad I read some of those reviews. The vibe I'm getting is - the iPhone was doing something fundamentally weird with this whole smartphone thing that reviewers just didn't get, so they kept reviewing it as an iPod with really bad voice calling and a browser and being confused by all the hype. The Vision though? It's a VR/mixed reality headset, we know what those are like, and Apple didn't throw any real curveballs.
An addendum - I'm surprised at how spot on so many of these iPhone 1 reviews are outside of the not getting the browser thing (and even there, folks complaining about how web 2.0 is taking off and there's no JS/Java/Flash support? Good point!). Keyboard should work in portrait mode, voice calling is bad, 3G support is needed, no multimedia messaging. No replaceable battery and not user serviceable enough. Camera is OK but needs dramatic improvement. Typing URLs is very hard. And of course, no games, no 3rd party apps, no app store. After hearing all these memes for however long about how the media just didn't get the iPhone or something - yeah no, they did pretty well
Before the 3g the web browser thing wasn't really that useful. If you have to be on wifi to get reasonable speeds, it's easier to just use your laptop vs a tiny screen with a slow processor.
iPhone 1 was effectively the first mass-market smartphone
This however is coming after a decade of existing AR/VR consumer electronics and still misses the mark
first mass-market smartphone with mass appeal maybe, but it wasn't the first mass-market smartphone by a long shot.
... no?
Windows Mobile, BlackBerry, Palm, Nokia, Symbian, Maemo?
Sales: https://news.ycombinator.com/item?id=39184136
Fools and their money are quickly parted.
Reminds me of Lisa.
the price is quite different though
Can you elaborate on this?
The first iPhone, yeah, it had some detractors, but I don't think the kinds of criticisms the parent poster gave ever really applied to the iPhone. To succeed, the iPhone didn't have to be this utopian product; it just had to be more useful than its main competitor, which was dumbphones. People who complained that it was missing features the Blackberry had were working from an unstated major premise that the iPhone was initially targeted at enterprise users, and I think that everyone who wasn't too busy being a pundit to see how the world works could see that that quite transparently wasn't the case. There was even a time period where I had both an iPhone for personal use and a Blackberry for work.
And I think the criticism about entertainment is spot-on. By contrast, despite being extraordinarily limited compared to even the very next model, the first iPhone was fantastic for entertainment, precisely because it was good for fostering shared experiences. It didn't take long after the device came out before you'd see groups of people clustered around an iPhone, looking at photos together on that big, vibrant, gorgeous screen. That was something that none of its competitors could do. And you better bet that people saw that happening and started wanting to have one of their own so they could have fun, too.
I do think we're still in the "wait and see" phase for this product, but, unlike some of the original iPhone criticisms or cmdrtaco's original dismissal of the iPod, the criticisms this article points out feel really personally relevant to me.
iPhone 1 launched without cut/copy/paste
Or the first Newton.
I wonder, given all this... what the expectations at Apple are from a higher-up/board/executive standpoint
Most of Apple offerings are good:
Watch
iPad
Mac
iPhone
services
Are they really expecting this to just be a hard problem initially that they get better at over time? When is the last time they launched a "so so" product?
The first iPhone was pretty terrible. I couldn't get my Marketing peers to take it seriously (which they came to regret).
The first Watch was awful. I love my Series 8.
Don't get me started on the first Mac...
I definitely do not think that opinion was widespread. If anything I think that the biggest discussed shortcoming of the first iPhone was that it was only available on AT&T.
For example:
It didn't multitask, even though other pocket computers for a Long Time had at least the appearance of multitasking. It didn't have the ability to install any additional software, even though other pocket computers for a Long Time handled third-party software just fine. It didn't even have a copy/paste function, even though [WTF? Srsly, Apple?].
All of these things were eventually corrected by Apple, but it was pretty awful until they were corrected: For quite some time, the iPhone was just a rather fancy touchscreen music player with telephone and SMS programs tacked on.
(More damning: All of these things were corrected very quickly by third parties via jailbreaks. Some of us were having a ball with first-gen IOS devices very early on in the game, but Apple wasn't any help in getting that accomplished.)
Yeah, but the first iPod touch was great. It came out only 3 months after the iPhone.
It seems like the Vision Pro 1 has the M2 so that when the Vision, or Vision Air comes out, it will be at least as powerful as the first Pro. And the Pro 2 can have the M3.
First iPod Touch was even worse: It had all of the limitations of the iPhone, plus it additionally lacked cellular connectivity, messaging, voice, Bluetooth, GPS, speaker, and camera.
The first iPhone didn't have GPS either. No iPods had speakers back then so that wasn't missed, you just used your earbuds. It was great for browsing the web and media consumption, since you either had WiFi or you didn't (no slow Edge network to tease you). Once the App store came out you had Instapaper for offline reading, and Google Voice for messaging/calls.
You're right. The first iPhone did not have GPS. But it did have the connectivity to make other geolocation services useful, at least in a "Where the fuck am I at?" sense. (And the iPod Touch also had wifi-based geolocation services that were spooky-good, if it was online somehow.)
But from various PalmOS devices to whatever Android device is in my pocket right now, carrying wired headphones has never been a thing for me for whatever reason.
So as a Google Voice user since it was still called GrandCentral, having Google Voice available on a Wifi-connected touchscreen pocket music player called an iPod Touch was simply never very useful to me: The OG iPod Touch was lousy as a telephone, since it lacked all of the basic parts (like a microphone or an earpiece) that made telephones useful, and SMS was not yet in its heyday back then either.
I got much better use of the service with my dumb phone with T9 text input and transcription of voicemails to SMS, and my dumb phone worked anywhere instead of just where I could find a Wifi network.
Jailbroken, the iPod was an amazing pocket computer with a brilliant display, thin profile, and exceptional responsiveness to touch input, especially with third-party apps and improvements installed. It was fun having a real *nix userland installed on it, and it sure seemed novel to SSH from it.
By default, though? Almost useless except as a music player -- a task that previous iPods did better.
(I mostly used my OG iPod Touch to take offline notes. It did OK at this, but previously-used PalmOS devices did better in terms of input speed and portability of those notes.
Even relatively high-end aftermarket car stereos at the time were afraid of it: "Oh, why sure I can play music with your iPod!
Except.. is that an iPod Touch?"
"shun shun shun")
It had the only truly usable mobile web browser. Windows Mobile's IE was trash, and the browsers on Palm OS, Blackberry and Symbian were laughable. This was the killer app (especially on Wifi, EDGE was not great).
I guess I (and to be honest, reviewers at large at the time) just have a very different definition of "pretty terrible". Sure, it was easy to see how it wasn't "fully complete", but I think that is true of literally every brand new product.
And to take two of your examples, multitasking and 3rd party software, yeah, other pocket computers had them, and they generally sucked hard (see, Windows Mobile). Even the lack of copy/paste - other phones had them, but there was (and honestly still is) considerable debate over how it should be implemented given multitouch was new.
It was usable on any other network, I had one and it wasn’t AT&T locked
Compared to all the smartphones of the time, it was a slow toy with no apps and a 2G connection that was completely worthless
It looked nice though, and eventually it got an app store (unofficially first), and the overall UX was a clear winner (nobody believed touch screens would ever replace Blackberry-style devices!)
https://appleinsider.com/articles/10/05/10/apple_att_origina...
https://en.wikipedia.org/wiki/History_of_the_iPhone
I definitely did not use AT&T :) (in fact I didn’t even use my iPhone in the US)
So you can't really describe the experience for the several different US carriers which it didn't work on, and thus can't really say it worked on any other network. Verizon was a network it didn't work on, and thus fails the "any other network" standard you set earlier.
Pedantic much?
It worked on any GSM network. Tmobile for instance. Verizon and Sprint were the exceptions in the US, since they weren’t GSM.
A few people had it in Japan connected to docomo as a GSM phone. I suppose there were workarounds for other carriers as well.
I don't know how they did it, but "life finds a way" ?
There were apps and hacks to sim unlock the original iPhone. I was using mine on T-Mobile after using it for a couple of months on prepaid AT&T GoPhone.
No copy/paste & no apps were the main criticisms, given that Windows, Palm, & other "less-powerful" devices from that era had those features.
Bugginess was to me the main criticism.
Albeit not from the computer nerds who knew what they were getting, but the general public who thought they were buying a phone on par with the Motorola, Nokia or Panasonic phones in terms of reliability.
The lack of Flash support was mentioned a lot of times.
We should bear in mind that at the time it launched, many early adopters were on the fence and kept a second phone around (most people around me did until basically the iPhone 4). We look back at it as a fun era, but that's in part because we accepted it was flawed and limited.
When trying to use the iPhone seriously, at some point it wouldn't receive calls anymore (it would just silently crash the daemon receiving them; you'd never know unless told by the caller afterwards), drop active calls when the battery was too low (single digits), call quality could be horrible, stuff would crash from time to time.
Memory would leak like a sieve from everywhere, so rebooting the phone every now and then was good hygiene. It's only after the 3GS that it stabilized; otherwise antennagate would have been just another Tuesday IMO.
It was still a stellar device, but not a fully reliable device in any way shape or form.
My memory of it was that when it first came out, it was worse in all ways compared to the best of what already existed, save one, and that was capacitive multi-touch. That was a big enough improvement in itself that nothing else mattered.
That opinion was so widespread that it was still common for people to dismiss the second iPhone version.
I remember what I was doing when I first saw it. Eating lunch with a work friend who bought it. I remember what I was eating.
It was incredible and obviously a game changer.
In the case of my company, the camera was bad.
Also, it took over a year to be able to write apps for it.
JS Web apps were not “apps.”
The first iPhone was terribly limited compared to what came after, but it was not in any way terrible for the time. The demo Steve Jobs did was so good that a lot of people just refused to believe it was possible, only to be proven wrong when the thing went on sale.
You have to put it in context with what passed for a smart phone back then. No web browser (they sort of existed but were extremely cut down and not really usable), small screens with limited software, very limited connectivity that was very expensive.
A valid criticism was the lack of physical keyboard, which phone-jockies found a deal breaker, but in all other ways a phone with decent apps, a large screen, a real web browser designed around and provided with unlimited internet connectivity was always going to be a hit.
It wasn't even that expensive compared to other phones.
At the time my feature phone had much better MMS, video calling, and J2ME apps. The first iPhone was a joke by comparison looked at through that lens, and detractors generally were looking at the fact it was this hobbled 2G device missing features that were common in other, cheaper phones. Oh, and the camera was absolute garbage, too.
Please don't remind me of J2ME. I felt relieved getting back to developing EJBs with XDoclet after a short stint working on a J2ME project.
You are not wrong but Apple predicted that none of those things mattered and they were proven right.
MMS worked well enough but was/is pretty limited. Video calling existed but postage stamp video at 8 frames per second was not something that anyone wanted to actually use. And don't get me started on J2ME, an ecosystem so vast that Google didn't even bother to include it in Android even though they are both based on Java. Although other phones had better cameras, none took what we would call an acceptable photo today.
What people did use all the time was the browser and only the iPhone had a decent one. That was the only thing that mattered. Heck, the iPhone wasn't even a very good _phone_ and even that didn't matter.
We all forgot how awesome inertial scroll was. Scrolling through emails or text messages was absolutely bonkers - it actually allowed us to use the full informational capacity of the screen.
Exactly this. I vividly remember watching that first public demo and thinking "sure, this is OK but touch screens are really annoying to use". When I saw Jobs flick through a list of contacts (or whatever it was) I was blown away. Finally somebody had made touch screens useful.
I am sure someone will quibble that some other product technically had something similar first, and they are probably right. But the iPhone demo was the first time I had seen anything quite like it.
Yup. This was what they were supposed to do for 'spatial computing' https://www.youtube.com/watch?v=c40cxE-dfPg
Turning on a light bulb, ordinary light in the house, meh, ok, then swipe your hands out over the counter and the recipe pulls up. That's the 'oh wow' moment. Then grab and throw a song to a Homepod connection. You know what I mean...
There's a gap someone can shoot, maybe 3 years tops. Honestly though, the developer ecosystem is just so tight, continuity so powerful, doing this solo requires more than what the whole MFi program offers.
I based my purchase of phones in the early to mid 00s on whether they came with the full Opera Mobile browser.
I could use just about everything online the same as desktop, that didn't rely on flash of course. The screen size was small but surprisingly wasn't a big issue as everything else was a massive improvement.
Oh Nokia 3660, how I loved thee. I actually chose the 3650 and then 3660 because they seemed to be the best smartphones at a standard size. I just couldn't justify qwerty.
The first iPhone was pretty terrible
Before Steve stood on that stage in 2007 and demoed the first iPhone, most people had never even seen 'swipe to scroll'. The first iPhone absolutely kicked the arse of everything else. So what if it was 2G; that was how most phones were back then. The only objection anyone could think of (e.g. Ballmer) was that it was expensive.
Agreed. It had limitations but at the time nobody was "this phone sucks" because what it did have was fucking amazeballs and clearly the first glimpse at the future. We were witnessing a paradigm shift and aware of it while it was happening.
For 5 year-old me, the first Mac was pretty amazing.
Tim Apple reportedly overrode the design team on launching it prematurely relative to their typical standards in order to enter the market and begin iterating before waiting too long. It’s the first new product category made under his leadership and he’s eyeing retirement, as context.
Btw, don’t forget visionOS 2.0 is just 18 weeks away. (Source: WWDC is every June and every platform gets a version bump, as we saw with watchOS launching in April then getting a 2.0 immediately after at WWDC.)
The watch is a new product category under his leadership isn't it?
AirPods, too.
AirPods weren't a new product category, they were an iteration of the Apple EarPods which looked the same, had the same remote features, etc.
I highly disagree about that, especially for the Pros with ANC and spatial audio. EarPods were nice wired earphones. AirPods seem like way more of a leap than just EarPods-but-wireless.
If those features defined new product categories, they wouldn't have replaced existing AirPods with "3rd generation" (non-pro) AirPods that have spatial audio and ANC in the upcoming 4th generation. They're iterative features on an existing product category.
The Watch still began under Steve; to put it another way, it's the last product under his supervision.
Doesn’t look like it.
https://www.wired.com/2015/04/the-apple-watch/
That is incorrect, or rather, misleading - maybe Ive didn't join the project until 2011, sure, but it began in 2010. Ive didn't lead all design work yet in 2011, he was still just a VP.
I've been wondering how much this is part of the context here. He may feel some pressure that he hasn't really launched a new major product category from scratch in all his time as CEO and if this has been running 10 years as a project now, that it would be a blemish on his legacy to not get it out the door before he leaves. Perhaps without him there it would even be binned which would be even more pressure to deliver it.
Contrary to all that he really seems a bit ambivalent about the device himself, having never allowed himself to be seen publicly using it.
Apple Watch, HomePod, AppleTV+, Apple News+, AirPods.
And I would argue the M-series CPU are pretty major as well.
Apple Watch began before he became CEO, it wasn't under his direction. He just continued the existing initiative.
Wikipedia's citation that the Watch began in 2011 after Steve's death is incorrect. It began in 2010 under Steve's direction after the acquisition of Bob Messerschmidt's Rare Light, which brought heart rate monitoring tech to Apple. Messerschmidt was assigned a team by Steve. Don't blindly trust Wikipedia, folks...
As for the others I just meant platform products, none of those have their own OS until homeOS later this year. None of them defined new product categories for Apple - for instance the AirPods were an iteration on EarPods, not a new product category. Apple was already shipping a home speaker product before HomePod, which was an iterative product without its own software platform for developers. M-series is iterative tech, not a new product category. The Motorola ROKR wasn't an Apple product. But sure HomePod is perhaps Tim Apple's one original new product category before Vision if the 2006 Apple Hi-Fi counts as a distinct product category.
Discounting genuinely new products because Apple made them nearly 20 years ago seems a bit ridiculous.
No one would call the iPhone an existing product just because Apple shipped the iTunes collaboration phone with Motorola.
This all feels like so much projection.
As far as I can tell Tim Cook has never pretended to be a product guy. He seems perfectly comfortable being what he is: a ruthless operations guru bent on efficiency. In every profile I’ve ever read he’s not really the one making hard product decisions, and seems content to leave that to others better suited for it.
The decision to ship Vision Pro before the design team considered it ready/good enough was his decision. That's more a product decision than an operations one. But yeah otherwise he is reportedly hands-off and disengaged from internal product demos including of the Vision Pro.
Source?
It's speculation, but probably reasonable speculation. At WWDC, almost every OS gets an announcement of a major version bump. They will announce the next macOS major version, iOS major version, etc. So it is totally reasonable to suspect they will announce the next major version of visionOS.
Edit: But since it is still so new, it might not be until WWDC 2025 that visionOS 2.0 gets announced.
watchOS 1.0 launched in April and watchOS 2.0 was announced that June, the same year
it'll be a beta
The first two iWatches were borderline pointless/bad.
The first two iPhones weren't as innovative as people make them out to be, just more polished than other Symbians with cameras and internet; it really took off with apps in the third iteration.
I think the Vision Pro has lots of opportunities in the next iterations; early users will provide feedback this gen.
Yes they were. Multi-touch in particular was a revelation. Making a big screen with one physical button is a simple idea, but making it work well was the hard part that nobody else had figured out.
Those first iPhones were dog slow, sure, but they absolutely defined how smartphones work ever since.
Do you remember the first phone that allowed you to play music? No. The first one to take photos? No. The first with internet? No.
Yet we're supposed to consider the iPhone a revelation because of multi-touch. I had the iPhone 3G. It was a more polished experience than other phones, but that revelation part was just not there. Apps made it big, later.
I swear people treat iPhones like a cult.
I had phones before the iPhone that took pictures and played music (and a Nokia tablet with wifi internet browsing).
The iPhone was the first one that I actually used to take pictures and play music. It wasn’t the first to have those features (and the edge network was super slow), but that polished experience is what made it work. And made it clear to everyone how we would interact with phones for decades to come.
It was a technological inflection point. That’s why it is looked back on the way it is.
I think if you find a good, objective history of the first iPhone, you'll find that it was groundbreaking in many meaningful ways.
It was a full unix OS, not some mobile-specific trash like the others (Symbian, Windows Mobile, Blackberry). Capacitive touch wasn't an invention, but a premium form of touchscreens (most were resistive).
Having nearly 60fps high quality 3d accelerated animations on a phone also contributed to a heavily premium feeling for the software. It enabled things like Inertial Scrolling, which among other things (like first class typography afforded by using OS X, and a fully standards compliant web browser), none of which were "innovations" in the sense of "we came up with this first", but more exceptionally good engineering in service of a great experience.
In almost every way (apart from the "traditional phone metrics" like 2G ironically), the iPhone was a huuuuuge leap forward from the status quo.
It might be true that anyone else could've done it. But nobody else did. It wasn't just luck that Apple did.
Multitouch was a huge deal because it allowed a usable Web browser to be offered. That was something no other phone came within a light year of providing.
There was a "cult," all right. It consisted of people like Steve Ballmer plugging their ears, shutting their eyes, and denying the obvious.
yep ... i'm generally downplaying Apple's "innovations" but I will say, there was huge scepticism that you could have a phone without a physical keyboard and Apple overnight turned that around. That was a real contribution, IMHO much bigger than anything in Vision OS so far.
don't forget pinch to zoom/expand on actual webpages in an actual web browser.
I don’t think this was my experience at all. The first iPhone was always snappy until they started adding random crap everywhere (e.g. by the release of the iPhone 4 the original was near worthless without really old firmware)
Are current watches good? I genuinely don't know and am interested. I hadn't gotten one because I was an early iPhone adopter (Gen 1), but haven't been willing to be an early adopter since then. But I would like a watch, if they are good now.
I find them exceptional for a somewhat narrow range of activities (I love going on a run or hike navigating by my Apple Watch while streaming whatever music I want in the world to my AirPods - no phone, nothing to carry), and only slightly useful for a broad range of other things (notifications, weather, etc - conveniences).
What I love best with my Apple Watch is that I can respond to Duo/Okta/BankId directly from my watch and don't have to look for my phone or pull it out of my pocket.
I'm sure the later ones are also great, but as of Apple Watch 4, they are quite compelling depending on what your needs are. From a fitness / voice assistant button on the wrist, getting notifications without pulling your phone out standpoint the Series 4 and later are all fantastic.
Yep. They've been great since v7 or so. My S0 watch was... something. Apps took forever to load. Battery life sucked. The UI was still clearly highly experimental. Now even complex apps like OmniFocus launch instantly, and I charge my Ultra every other day. In short, it's a mature product now and works like it's supposed to, without qualifications like "...eventually".
If you want a very small cell phone on your wrist, they are good at that. I’m not sure what they’re good for, but they can be that.
I think Apple was smart to focus on Fitness and Health monitoring because that's about the only thing I've found immensely useful with my watch.
Apart from that, yesterday I just played Pickleball with a guy who used his apple watch to keep score. It was a little odd at times but seemed like a cool use.
To each their own perspective! As for me, as a user at the time of fairly cutting-edge cell phones from other manufacturers, my first hands-on with the first iPhone was one of the most memorable technological experiences of my life. Just using the Maps application with multi-touch was magical. Yes, it was slow, limited, etc., but it felt like an entirely different class of device, and that proved to be true.
Isn’t that what people are saying about the Vision Pro too?
Clunky and sorta useless, but “a new class of device”… eventually
Apple Maps falls in that category. Maps was bad when it came out but after years of effort (and a lot of money), it's pretty good now. That was no small feat given how good Google Maps already was when Apple Maps started.
In my area (Long Island), Apple Maps works great (better than Google Maps). I hear that it falls down, in rural areas, though.
[edit]
Looks like I pissed in someone’s cereal. Didn’t mean to, but different strokes, and all that…
I was just recounting actual personal experience, which, I suppose, isn’t popular, hereabouts.
Just for context, I have been writing Apple programs, since 1986. I am totally committed to their products, and their vision (although I have no opinion, –yet– on their Vision), and have every right to be critical. Those of us that have been on the ride for that long, have seen a very bumpy road.
I just checked my area in Apple Maps out of interest and my own house, which I've lived in for longer than Apple Maps has existed, is listed as belonging to a business I've never heard of. The only actually existing business I can find with that name is at the opposite side of the country.
Not a great first impression. Also while I can view Apple Maps via a third party like DuckDuckGo, you can only report a map issue through an Apple device, so I guess it's staying wrong.
It's possible a previous tenant used that address or was attached to a business with that name. It happens to offices all the time as well. We had like 8 businesses attached to one of our offices, where only 1 ever existed. They were all tied to the owner and original filings used the address temporarily.
Still weird either way.
Apple Maps in New Zealand is really, really good. Travel estimates are nearly perfect.
In smaller countries though, it's total rubbish. Samoa, travel guidance is nearly non-existent. Tahiti, very patchy. In those countries, Google Maps was perfect.
It really is location dependent. I find that Apple Maps has much better driving directions and a better UI that doesn't get in the way. Google Maps on the other hand typically has much more up-to-date information on businesses and restaurants. I am in the habit of using both if I am trying to find an unfamiliar business address.
I generally find the quality of Apple Maps to be quite good here in Tokyo, although unfortunately Google does still seem to have them beat pretty consistently. The recently completed Azabudai Hills complex, for instance, is correctly rendered on GM, while AM still shows the pre-development layout.
However, one bug that is a showstopper in AM is the dreaded “can’t reach the server now”, which I get probably more than half the time I try to use it, and with apparently no solution other than wait.
As a result, I just removed it from my phone.
I gave Apple Maps a second chance after hearing good things about it recently.
But it gave me a route where I needed to turn left right after taking a right-turn ramp, except that the left-turn lanes are separated from the ramp exit by road dividers. So I would need to either go the wrong way and do a 180, or bulldoze the dividers. As a comparison, Google Maps never gave me that route in the past.
This happened in a moderately sized town, so I guess I will stick to Google Maps for now.
In my particular rural area, Apple Maps knows the street I live on exists. Google Maps still needs a call to delivery people to explain to them that their map is lying to them.
I live in the UK in a small village next to a large city. Just had a quick look and there are tons of issues within the 1 mile radius, that don't have issues on Google Maps.
* A pub that doesn't exist - there is one about 15 miles with that name, but there's never been one called that here.
* The local church is in the wrong place
* The opening hours of the local cafe are wrong (11:30-3:30 instead of 10:00-3:00)
* Business names aren't correct
Apple Maps has sent me to the middle of cornfields.
Generally, if you are in a city it is fine, but don't use it if you are rural.
Apple Maps could be improved with classic hard work on the software etc. Apple Vision Pro is against the same problems of physics and usability that other VR and AR headsets are up against. I’m leaning towards they will never be “solved”, because using a monitor and keyboard or tv and controller works really well.
Portable monitors do not work as well. And speaking as a programmer, monitors in general could always be bigger.
There’s a clear strategic reason for that — reduce dependence on Google for more vertical integration.
I don’t see the VisionOS play here and wonder if it’ll be Tim Apple’s first major fail at the helm. There’s no clear set of competitors or established industry for Apple to just polish, like they have with most products. There’s no obvious large TAM for VR headsets, and I’m skeptical optical AR is close. I don’t get it, but I’m sure Apple internally and the board view it as a strong thesis.
Still glad it’s this and not a car.
Looking forward to how it plays out.
Duckduckgo uses Apple Maps and it's completely unusable. Might be a Duckduckgo issue but last I tried a few weeks ago I couldn't even pan in Firefox without randomly snapping back to the pin. When I wanted context I'd work around it by just zooming out and be done with it. Still have to turn to the Devil if I want a decent map experience.
Ridiculous.
Gmaps also helped by enshittifying itself.
Apple Maps still sucks though. Even if it’s better now, recent use has still been ridiculously erroneous.
All of them! The first version of the iPhone, iPad, iPod, watch, AirPods...
They all had similar reviews. "Seems like a tech preview, not really ready for general use, too expensive", etc.
First iPod and iPad were fine.
The first iPod's wheel would break and skip all the time. And so would the hard drive. And the OS was really simple, and the syncing was terrible.
The first iPad was ok only because it truly was "just a bigger iPhone". But they didn't really make features that were unique to the iPad and took advantage of its bigger screen for a few versions afterwards.
The os was simple, it played music, that's it?
It did, but finding your song wasn't easy when you had 1000s on the device.
I would say the main function of an iPod OS is finding what you want to play, not actually playing it.
It was extremely easy if you tagged your mp3s properly.
Way back then it also competed against actual cassette and CD Walkmans. So just playing was a big part of it.
The first iPod had a wheel that physically moved and would get dirt and stuff stuck in it/have issues. The HDD would also freeze up. It was better than alternatives at the time, but it wasn't really until the click wheel it got really refined.
The first iPad was similarly expensive, thick and underpowered. It wasn't until a couple generations in that it was good.
They were both significantly better than the alternatives, and didn't cost 7x as much as this does vs the Meta Quest 3.
Yeah - the first one is always at the edge of what's possible with the hardware capabilities at the time of launch at the quality level Apple demands. Then usually they refine it and end up defining the category.
The idea of 'spatial computing', or a heads-up display that augments your vision and gives you eye tracking input to manipulate UIs in your visual field, seems clearly superior to looking at little glass displays (Meta agrees). If you can project out a future where that hardware becomes better and better, it's a pretty powerful interface paradigm.
Eye tracking as an interface for this can get pretty close to telepathy if done well.
I ordered one - it's the sort of thing you really have to play with yourself imo.
Newton and to a much much lesser extent the first iPod and iPhone. But really the Newton while a product that I love and a super profitable line was the last time Apple created something that had so many rough edges.
Watch was cute but fussy and a little pointless when it first came out, and only gradually became really compelling.
So-so in what way? This product is clearly a toy. Which is to say that it is genuinely new. Maybe like how the Apple II was when it first came out. PCs were quite expensive back then too, if I remember correctly. This really will take time. All the important technological things were toys before they became tools.
The watch first 1-3 generations were clearly "so so".
The biggest thing for me, from the Verge review, was the limited field of view. Apple sold it as filling the user’s entire field of view, and it sounds like that isn’t the case. When I first saw the reality of the Microsoft HoloLens, that was my disappointment there as well… Google Glass as well. I don’t want to feel like I’m wearing an AR device, and so far, that’s what everything has looked like. Limits of the technology, sure, but until that is solved I’ll have a lot of trouble throwing money at it. I still plan to head to an Apple Store at some point to try it out for myself.
As far as viewing with other people, this doesn’t seem like an insurmountable challenge. They have the theaters, they have Personas, they have spatial audio, and other Apple devices have features for watching content together with friends. Put them all together and it seems like if several friends had Vision Pro they could feel like they were sitting in a theater room together while watching a movie. I’m not saying this will be easy, but it seems like all the building blocks are there. The Personas are probably the big weak point, especially looking at someone next to you, but with the focus on the movie, I think that’s probably the least important part.
That was the surprise to me too. Lower FOV than a Quest 3? Inexcusable.
That’s a double-edged sword though. Were they to make it 180°, we would critique it for spreading the precious pixels too thin. “With those 4K panels, your virtual monitor is 1080p at most!”
Apple used to excel at making the right trade off here. Choosing what feels right over numbers.
I remember when the first Android phones with 4-inch screens hit the market, I saw someone comment online that if a 4-inch screen was better Apple would have figured that out during R&D for the iPhone and made the original iPhone 4 inches.
I hope somebody at the time replied with a picture of the original iMac's hockey puck mouse.
Apple can do amazing things, but they do also fairly regularly make terrible things.
I mean, the current Apple Magic Mouse which has its charging port on the underside is interesting as well.
Sure, but wasn't cockroach charging introduced in 2015, vs. c. 2010 for 4-inch Android phones?
What's 'cockroach charging'?
Upside-down, like a dead cockroach: https://www.behance.net/gallery/36272401/Dear-Apple-about-yo...
I still think the original iPhone was the best size. I use the iPhone mini today, and I wouldn't mind it being a bit smaller still.
It really lends itself to the phone being a tool, rather than a device for endless entertainment. It also suggests it is an accessory, with the desktop/laptop being the home base, rather than the phone being a primary device.
It really bothers me that the phone is now at the center of the ecosystem. It seems like the hub should be something that isn't as vulnerable (to theft, loss, or drops) as a phone.
Idgi. What feels right has always been very high numbers.
yeah. But a high number of pixels and a high number of minutes of battery life work against each other here.
There is a fundamental tradeoff between FOV and image quality at any given panel resolution (and there are other tradeoffs between pixel density and cost and the compute power needed).
They were and are good at this. But the decision to ship this in this state before their typical quality bar was Tim Apple’s. He reportedly overrode the design team and demanded it ship now to enter the market and iterate in public. Even though he’s otherwise hands off with the product itself. This was all widely reported but has been missing from recent reporting now that it’s out.
It was a good call. It needs to get to early adopters and establish the existence of an alternative to facebook's metaverse vision.
And they probably have
Steve used to excel at it.
They certainly could do something clever, like having a screen with gradually decreasing pixel density and quickly moving it around so its center is always where you're looking.
Even with that FOV and the highest resolution display they could get, it can still only emulate a 15" display a comfortable 70cm from your eye at a whopping resolution of 940x540.
It's a gimmick priced as 2.5 monitor stands.
How did you get those numbers? They seem quite low, considering that they do 4K mirroring and all.
They don't do 4k mirroring. A 4k monitor has 8 million pixels while the goggles have about 11 million pixels per eye. True 4k is only possible if you use 85% of vertical and 85% of horizontal FoV, like when staring at a 24" monitor from 15cm.
I would expect most people to pick a natural size for their windows, perhaps 24-30" half a meter away?
It's a VR headset, angular resolution is fixed at 3800 px / 90 deg = 42px/deg. For a 27.5" virtual display half a meter away, let chord length = 700mm, radius ~ 600mm, distance to virtual display will be 488mm, and diagonal angle it occupies within view will be 71.4deg. with 42px/deg, resolution is ~3000px diagonal, or from (16x)^2+(9x)^2 = (3000)^2, x = 163.4, therefore resolution for that virtual monitor will be 2608x1470.
Other combinations of distances and virtual sizes can be calculated in the same manner.
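The geometry the parent walks through can be sketched as a short script. All inputs are the parent comment's assumed values (~3800 px across ~90° of horizontal FOV, a 27.5" 16:9 virtual display about half a meter away); this is a back-of-envelope check, not a claim about the actual hardware:

```python
import math

# Assumed angular resolution: ~3800 px spread across ~90 degrees of FOV.
px_per_deg = 3800 / 90                      # ~42 px/deg

diag_mm = 27.5 * 25.4                       # 27.5" virtual display diagonal, in mm
distance_mm = 488                           # assumed distance to the virtual display

# Angle the display's diagonal subtends at the eye.
diag_deg = 2 * math.degrees(math.atan((diag_mm / 2) / distance_mm))

diag_px = px_per_deg * diag_deg             # pixels available across that diagonal
# Solve (16x)^2 + (9x)^2 = diag_px^2 for a 16:9 panel.
x = diag_px / math.hypot(16, 9)
width_px, height_px = round(16 * x), round(9 * x)

print(f"{diag_deg:.1f} deg diagonal -> ~{width_px}x{height_px}")
```

Plugging in other distances and virtual display sizes works the same way; the result lands around 2600x1470, matching the parent's figure.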
How much of that “natural size” has been dictated by common monitor sizes, the limits of desk size/setup, and aesthetics?
My work setup has a 43” monitor set about one arm’s length away. I can just touch it with my fingertip if I stretch slightly. In a virtual space, I’d probably go slightly larger, maybe 45-48”, pushed a little further back.
8/11 = 73%, not 85%. So it should be a lot more comfortable distance than 15cm/5.9 inches.
Nope.
0.85*0.85 => 0.73
Ah, gotcha. But what if you factor in the limited FOV of the display itself? This 11 million pixel calculation doesn't apply to the full FOV of the eye, after all.
Even if they were double the resolution per eye, you aren't going to have "native" 4k reproduction because you are looking at a projection of a screen rather than an actual screen. What is the value of achieving one "accurate" pixel if it is surrounded by eight bilinear-interpolated ones?
But then, you usually don't buy a 4k screen because you can perceive the individual pixels either, unless you are creating content that requires that.
I suspect it aligns easier with the quality concerns of reality and not with virtual components - if I can use a monitor through the screen with no noticeable loss of quality or additional eye strain, there's no reason to think a virtualized monitor won't be at least as good.
I don't think it's an apples to apples comparison (heh), I think at this point 'good enough' is going to be better than anything my eyes can resolve. On a Quest 3 with Zenni lenses, pixel visibilty as a concern is way down on my list. It works REALLY well.
Agreed. Eight years out from the original Oculus Rift, which had a 110° FOV, and we're still somehow staring through the AR/VR equivalent of a submarine periscope.
a whole eight years???
I still have one of the Oculus dev kits at home from their initial campaign. It's more than 10 years old now. I've noticed the biggest improvements to VR over the last decade were in usability, not the screen. Once they figured out low latency by switching the screen to black after every frame, every device upgrade felt like a tiny improvement in resolution at best. Size 12 font text on a 2m screen wasn't really readable yet, but I still don't use VR for text work today (even though I could), so it also doesn't matter. Right now it's still a question of usability rather than the screen.
Not when text is vastly sharper than Quest 3.
It really isn't though, at least not for long, as of mid 2023 there are publicly showcased compact lightweight prototypes with 240° FoV.
they are paying the price for their obsession with super high resolution. I think it's a mistake, from my perception Quest 3 at 25ppd is nearly good enough, and its panels are nearly half the resolution. They should have traded 20% resolution for 10 deg FOV either side. In fact, with appropriate optics the effective resolutions sacrifice could be even less than that.
How is it to read text on the Quest 3? How would you compare it to a high-density display (such as what is on your phone)?
Well, it's no Retina screen. I'd say it's bit like reading text on a 1080p CRT at 96 dpi.
It's perfectly doable, but there's still a hint of fuzziness and we're still a couple generations away from crisp LCD text.
I care about my eyes, so I use a 4k screen at 2x scaling for coding, and will not use the Quest 3 for work, unless that involves playing games and watching videos.
Is there any evidence of pixelated text damaging eyes?
https://www.reviewofoptometry.com/news/article/higher-resolu...
They only need hi-res where the user is looking, don't they?
We can't see what's in our peripheral vision clearly, so they should be able to get away with most of the FoV being blocky af, shouldn't they?
Well there’s FOV and there’s pixel density which are both too low right now and antagonistic to one another feature wise. There’s also display brightness which is an issue and I’m not sure how that fits in to the FOV/density spectrum. And then more pixels means more compute… That prototypes exist doesn’t necessarily mean much in a space that is full of prototypes showing off one particular feature. The very hard thing is to combine all of the desired features in to one consumer ready headset.
Exactly. You know what is the best "goggle" type display I ever saw (and I tried quite a few)? Recent FPV goggles (HDZero) that have 1920x1080 OLED displays at approximately 46 degrees FOV. I'm very glad the manufacturer decided to stick with this low FOV instead of increasing it to 55deg and more like others. The picture is insanely crisp and looks better than looking at a 4K display. Another huge benefit is that the entire picture fits within your "focus cone", so there is no need to gaze around. It is not a VR display, its purpose is different, but it shows us what visual quality is possible.
I'd love if manufacturers, if they can't make 16k displays that fill the entire fov, create variable pixel density displays. Best quality in the center. Deteriorating towards the edges. That would be much cheaper, but then for good illusion one would need eye tracking and motorised optics which would probably be more expensive in the end...
Oh, well, I'm pretty happy with my fpv goggles (for fpv) I just wish there was a way for them to display different picture for each eye. They already have head tracking, I wonder what VR would be like with this huge ppi narrow fov goggles. Would it be more or less immersive?
for those who don't know: FPV = first person view. Used for drone racing and things like that.
motorized displays also sounds HEAVY which is a big issue for a device intended to be used for long periods of time
The end goal here - multiple people who are isolated by the technology they're wearing experiencing a simulacrum of human interaction mediated by that technology - is not only unbelievably depressing but also honestly just really boring. We've already built this. It was called the "metaverse" and it sucked. Everyone left.
The idea of wearing a VR headset so I can "enjoy photos with friends" is absolutely hilarious and so deeply disconnected from what human beings want and need as a species.
Every time I see the spatial photos and spatial videos, I always view it as someone putting on their VR headset to reliving a memory of someone who died. Then they take the headset off and realize they are alone and the loss broke them. I have a lot of trouble not seeing those videos from a very depressing place.
People are totally allowed to watch videos of their children without their children being dead.
Maybe not everyone shares this viewpoint.
The people could also be isolated by being a few time zones away from each other.
In some ways even the reviews seem to be placed by Apple (same as how they provided the exact photos of the device the media printed). All of them seem to present a full view with floating widgets, when there's really no way to photograph that.
Surely the device just has a screenshot function?
Two interesting and unrelated facts:
* A screenshot would be blurry because of the foveated rendering
* The device doesn't even have a power button
Given Apple have complete control with their OS of how the dynamic foveated rendering system works, if they particularly wanted a screenshot feature to be included - reasonably likely for marketing purposes, it seems to me - then surely it wouldn't be too hard to code it such that it renders a second version of the GPU's output to generate the screenshot, when asked to by user, which has even pixel density. (Or perhaps screenshots getting some sort of "bokeh" for the first time, highlighting whatever the user was looking at when taking the screenshot, will turn out to be a desirable feature.)
Not having buttons obviously doesn't prevent there being a screenshot function, just as lack of screenshot button on iPhone hasn't prevented it from taking screenshots, and just as the VP's lack of buttons doesn't prevent users from interacting with it at all...
This can also be achieved with 3rd party apps. On Quest, this is already a reality with Big Screen.
I wonder who would actually do that. The only reason I watch tv is to socialize with my family. I think if we were just sitting on the couch all wearing VR googles it would ruin the entire use case for watching stuff together. Surely, some geeks, once or twice.
The use case is watching something with people who are half a world away I think, not with people who are physically present with you.
I wonder if that "theatre experience" would be a significant improvement for most people. I already do this with friends remotely by jumping on Discord and hitting play on whatever service we're using at the same time. There are more automatically in-sync solutions out there for sure, but I can't see doing this in VR being more than a novelty that wears off after a few sessions.
I've been telling people this for a while: the biggest issue I have with the available VR headsets is the limited FoV. It causes unnecessary neck strain, having to move your head as much as you do to look at things.
lol, literally the only use case I could convince myself to spend this money on (hey, it's cheaper than an XDR).
Even then I was having trouble convincing myself, since all indications were you wouldn't want to wear this more than a few hours at a time, so ultimately you still need the real physical monitors for the other 80% of your workday.
Instead of having multiple virtual screens I would prefer a lot more if I could simply spread individual Mac windows around my virtual space. In mixed reality the whole concept of a virtual monitor makes no sense, it's just an unnecessarily limiting abstraction you could easily do without. The whole room is your "desktop".
It's basically what the SimulaVR guys are aiming at, and I'm surprised Apple didn't go this way with their Mac integration. Especially because the native visionOS apps do seem to behave just that way.
I imagine it's mostly a CPU/battery/bandwidth concern at this point — having one wireless 4.5k/60 (or is it 90?) stream from your Mac is difficult enough; having a potentially unbound number of them (for arbitrary number of Mac windows open) is a different problem altogether.
But there are 3rd party solutions that will let you do just that though: https://github.com/saagarjha/Ensemble
This is it exactly. I’m surprised the HN crowd isn’t immediately picking up the bandwidth issues with streaming unlimited 4k screens from your MacBook.
Noted that this would be a non issue if you could just connect a cable between your Mac and the Vision. You already have the huge battery pack dangling off the side. This is an apple issue, not a technical one.
Not really?
Do the math on how much bandwidth three windows/screens (because with this model each window is basically its own screen) at 4K/100Hz/10-bit color each would take.
You're at the limits of TB4 _very quickly_.
You can compress the image, try to do something smart with foveated rendering (only stream the windows that users are looking at; but that breaks if you want to keep a window with logs in your peripheral vision), use chroma subsampling, etc; but those all are varying trade-offs with relation to image quality.
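The back-of-envelope math above is easy to sketch, assuming full 4K panels, RGB at 10 bits per channel, and no compression (TB4's nominal link rate is 40 Gbit/s):

```python
# One uncompressed 4K stream at 100 Hz, 10 bits per channel, 3 channels (RGB).
width, height, hz = 3840, 2160, 100
bits_per_px = 10 * 3

per_screen_gbps = width * height * hz * bits_per_px / 1e9
three_screens_gbps = 3 * per_screen_gbps

print(f"{per_screen_gbps:.1f} Gbit/s per screen")   # ~25 Gbit/s
print(f"{three_screens_gbps:.1f} Gbit/s for three") # well past TB4's 40 Gbit/s
```

So even a single raw stream eats most of the link, and three blow through it, which is why compression or foveation enters the picture at all.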
Why would foveated rendering break down? It does not stop rendering where you are not looking, it just lowers the resolution.
It'd break if you wanted to do the dumbest thing and just not stream the windows users aren't looking at; but lowering the stream resolution on the fly could work but now involves more complexity (on both sides to communicate when to adjust the resolution) and because it's not handled entirely on-device breaks the illusion of it being invisible.
I can also imagine it having some weird privacy implications; like Mac apps somehow monitoring this and tracking whether you're actually looking at them, etc.
If you're OK with rendering everything at high resolution (and then choosing what quality to send over) then you shouldn't have any privacy issues, assuming that this part is done by the OS.
Foveated rendering won't work here, but not for the reason described in the GP; human eyes move too fast for current latency and frame intervals to keep up.
Foveated rendering doesn’t stop streaming when the user isn’t looking at the window. It just streams it in lower quality.
If you just trivially streamed all windows to the Vision, of course it would at some point hit the limits of current technology. But I would assume a company like Apple has the means to push the state of transmission media (rather than just using a now four-year-old standard) and to think of something smarter than "just stream all windows at maximum quality even if the user isn't looking at them".
I may be wrong but it honestly doesn’t seem realistic to expect seamless, latency free foveated rendering switching at those speeds while coordinating between two devices.
You don't need to send every pixel for every frame uncompressed.
It would be more like VNC, not sending pixels that aren't changing. And you don't really need 100Hz either. You can't read that fast anyway, so the content could refresh lower than that, as long as the actual window is moved at 100Hz to avoid nausea.
I really doubt the Mac display functionality as-is is refreshed at 100Hz with full pixel fidelity without compression. WiFi can't handle those kinds of speeds reliably.
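The "don't resend unchanged pixels" idea is essentially VNC-style damage tracking. A toy sketch, where frames are grids of tile hashes (the grid and values are purely illustrative, not any real protocol):

```python
# Toy sketch of damage tracking: only tiles whose content changed since the
# last frame get re-encoded and sent over the wire.
def dirty_tiles(prev, curr):
    """Return coordinates of tiles that differ between two frames."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, tile in enumerate(row)
            if prev[r][c] != tile]

frame_a = [[1, 2], [3, 4]]
frame_b = [[1, 9], [3, 4]]             # only one tile changed
print(dirty_tiles(frame_a, frame_b))   # [(0, 1)] -- send just that tile
```

For mostly static content like code editors or log windows, this kind of scheme sends almost nothing between keystrokes, which is the intuition behind the comment above.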
Since foveated rendering would only send the resolution required for what the user could perceive then even logs in the peripheral space would be ok since they would be sent in much lower resolution. I think the challenge with some smart foveated rendering would likely be latency.
Another option would be handling rendering on the Vision Pro rather than the MacBook so pixels don't need to be streamed at all.
Exactly. Current systems that work with large channel counts of high res deep colour real-time compositing are also around 6U 19" rack mount units and pull half a kilowatt of power. Not exactly ergonomic for strapping to one's face.
I think Apple sees a device that doesn't have any cables in the near future; they just weren't able to pull it off this time.
It’s the product vision. But progress is reportedly slow because of physical limits and the current state of the technology. Their product vision could take another 10-30 years.
No, it’s a processor and graphics card issue as well. You have to actually render a potentially unlimited number of 4K screens (at least the ones in your direct view) on the Mac Pro. It was never built to do that.
AVP knows exactly which monitor you are looking at. The mac doesn't need to render all monitors at full resolution, just the one being looked at.
That sort of seamless render switching doesn’t sound technologically doable if you include that the render is happening on a different machine. I could be wrong.
Because I don't care about the technical limitations. I care about whether it's a useful gadget for me or not. It doesn't matter why a feature isn't there; it just becomes a "you're holding it wrong" thing.
AVP already has foveated rendering so if MacOS had some awareness of what is being rendered it could potentially render unlimited windows since it would only need to render and stream enough to handle what is being looked at.
The primary problem I would guess here would be latency though so maybe not feasible. The other possibility is if the actual rendering happens on device instead of streaming pixels. It would dramatically decrease bandwidth required.
I think there's a solution somewhere for this.
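The idea of scaling each window's stream quality by its angular distance from the gaze point could be sketched roughly like this. All thresholds and the falloff curve are made-up illustration values, not anything Apple has published:

```python
import math

# Hypothetical sketch: pick a per-window stream resolution scale based on how
# far the window's center sits from the user's gaze point (angles in degrees).
def stream_scale(window_center, gaze, fovea_deg=10, periphery_deg=40):
    dx = window_center[0] - gaze[0]
    dy = window_center[1] - gaze[1]
    angle = math.hypot(dx, dy)
    if angle <= fovea_deg:
        return 1.0            # full resolution where the user is looking
    if angle >= periphery_deg:
        return 0.25           # heavily downscaled in the far periphery
    # linear falloff in between
    t = (angle - fovea_deg) / (periphery_deg - fovea_deg)
    return 1.0 - 0.75 * t

print(stream_scale((0, 0), (0, 0)))    # 1.0  -- looked-at window
print(stream_scale((45, 0), (0, 0)))   # 0.25 -- log window in the periphery
```

A peripheral log window still streams, just at a fraction of the bandwidth; the hard part, as noted above, is doing the quality switch fast enough that a saccade never catches a window at low resolution.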
I don't think Apple wants you to use Mac apps. They want developers to make Vision Pro apps. That's why the Mac integration is the minimum they could get away with.
Good luck with that. Devs made the iPhone a hit and Apple demonized them over it. Can’t imagine too many are eager to make the same mistake twice.
There are enough devs that love Apple and have a burning loyalty toward them, that this is not something I would worry about. Even if it started to become a problem, Apple has enough cash to make a lot of apps, and could subsidize this for an eternity if necessary to make it work. Apps will not be a problem.
Depends on what they want it to be.
For sure there will be some hardcore indies that just love developing for vr.
There will probably be some game studios ready to go.
TikTok will probably be there.
But why would Netflix, Spotify, every productivity tool…go out of their way to work on this thing at a time when most are diving deeper into PWAs?
For spatial computing, they’re going to need to improve the dev situation or accept that Safari is the star of the show.
Do you think Netflix and Spotify are really needed (genuinely wondering)? Netflix maybe because of exclusives, but I would think most people with Vision Pro are also going to have Apple Music. With their TV and Music offerings, it might actually be better not to have Netflix/Spotify etc.
Obviously many people use Spotify over Apple Music and want to use it.
Yep, I’d say if this thing can’t sell beyond the Apple services subscribers, it won’t exist in 5 years.
Have you been paying attention to everything Apple has been doing in, say, the last week?
VisionOS is pretty limited right now and seems like a much larger leap to justify than iPad vs iOS was.
I hope they're not really betting on that. I think it's clear to everyone that OS fatigue hit when they had three OSes; lots of companies stopped bothering after that didn't pan out as worthwhile, and now this would make six OSes...
But what is your actual physical work environment like then? Are you standing in the middle of the room and typing into air? I always imagined productivity minded people would still at least use a physical keyboard which limits mobility.
No you’d be sitting at a desk with a mouse and keyboard, like today
But that isn't 'The whole room is your "desktop".' If you are restricted to having your desk in front of you, you might as well be looking at virtual screen(s) on your desk.
I’d be sitting on my couch with my wireless Glove80 split keyboard at my sides. Or lying down with the keyboard flanking me. No desk. And I’d move the keyboard to my counter to stand when I feel like it.
I was thinking something similar. Split ergonomic keyboard wherever you are. Could possibly do something to stick them to your legs wherever it's comfortable for standing.
I had not seen the Glove80 before, it does look pretty nice. Though I would prefer a bit smaller. I don't really use F keys, and use layers for things like arrow keys.
Glove80 is smaller than the only alternative that suits me, Kinesis Advantage. Agree even smaller would be nice. There are many options if you don’t require the key well shape or if more DIY kits/production quality are up your alley
Of course you could hang the displays freely into space as you can now with the existing Vision Pro apps. You'd hang the productivity ones close to your desk but some of them you could hang in other places and use a virtual keyboard.
I know the implementation leaves a lot to be desired, but just in case you were not aware, the Oculus Rift shipped with this feature and it still exists via Link on Quest AFAIK. You pick the last icon on the UI bar that appears and it lets you select any window to pop out separately.
Here's a screenshot of 4 windows plus the desktop.
https://pasteboard.co/hPKLrKgp5vjy.jpg
This feature has been there for nearly 9 years?
(note: In my view there was passthrough and I could see my room but the screenshot function didn't include that).
It seems limited to desktop + 4 popout windows but I know it used to support more because when I first got the Rift I tried to see if I could get 6 videos running + desktop all at once. It worked.
There are issues though
(1) It's just showing the desktop's image, which means it can't adjust the resolution of the window. Ideally you'd pick a devicePixelRatio and a resolution and it would re-render, but no desktop OS is designed to do that.
(2) It's Microsoft Windows, not an OS that Facebook controls, which means they're at Microsoft's mercy and limited to whatever features Windows supports. There's no easy way to add better VR controls. For example, IIRC, you can't size a window directly. Instead you need to go to the window showing the full desktop and size it there; that will then be reflected in the popout window. If that's not clear: there are two sizes, the size in VR and the resolution of the window. If the window is a tiny 320x240 pixel window on your desktop and you make it 5x3 meters in your virtual space, all Oculus can do is draw those 320x240 pixels. You can go back to your desktop and size the window to 1237x841, and now Oculus can draw those 1237x841 pixels on your 5x3 meter VR pane. But since Oculus doesn't control Windows (or maybe they're lazy), there is no way to change from 320x240 to 1237x841 by directly manipulating the pane in VR. Instead you need to do that in the desktop pane in VR.
(3) IIRC it's got some weird issues with the active window. Since it's really just grabbing textures from the Windows compositor and showing them in different spaces, that's separate from Windows' own concept of which window is the foreground window. So, as you select windows, you can see the front window keep changing in the floating "desktop" window.
These are all really just a function of (2), that Facebook doesn't own the desktop OS.
The same problem would exist with the Mac. Apple, since they control the Mac, could maybe make it work, but it would be a huge change to macOS. They'd effectively need each app to run on its own screen, with each screen at a different resolution and device pixel ratio. Plus they'd need to map all of the other macOS features to be usable in AR.
I think people know about that but aren't interested because the resolution is obviously too low on current headsets.
Yeah that's why I never used the whole desktop either, only if I need to change a setting during VR gaming like the audio output device that doesn't always auto-switch to the Oculus Audio output. I never noticed you could show apps separately like this, cool!
Too low and all the surrounding UI is an impedance mismatch.
Huh cool, thank you! I had no idea this existed. And I actually have had a Rift since the start (I was an original Oculus Kickstarter backer so I got one for free). But to be honest I never really used it for productivity as the Rift's resolution was too low anyway. Even with the Quest 2 I didn't think it was high enough to use comfortably. But now with the Quest 3 it might be worth it.
But I'll give it a try, thanks!
And yeah Apple would need to do some changes to macOS to make this work well, but as you say they control it. If they really view this as the future of computing they should really make it work.
The idea of SimulaVR is that it replaces your window manager so they avoid some of these issues by taking direct control. But that's something that only works on Linux.
For 2, does Microsoft not have accessibility APIs to control window size and layout? On Apple platforms at least if you were doing this "right" you would probably create a new software virtual display and move all windows bridged into VR to that, and move and resize them as appropriate. I believe the actual implementation turns off the Mac screen (through a "curtain" I think, so idk if it's actually moving the windows around below that or what) but like there are many things you can try doing here to make it better.
This is how Microsoft's Remote Desktop Protocol (RDP) works, but Apple never invested in an implementation of something similar, relying on VNC to just send images of the entire screen.
The way RDP works is that the code to actually render a window or desktop gets sent over the network and is ONLY rendered on the remote client.
This is not only more efficient but also allows for fun things like rendering remote apps directly on your local desktop, mixed in with your normal local windows.
Keep an eye on Immersed.com, they have an app that does this with a slew of VR devices, and I expect they will support Vision Pro if they can. They also have the Visor.com that is supposed to release later this year
I think all the wear comes if you’re sitting up wearing it, though. For passive consumption (or thinking between moments of work), you can just lean back or even lay down, which will take the pressure off your head/face.
Wow, I remember when it was announced, the press materials gave the impression that it would be closely integrable with macOS. I was picturing using VS Code from within it. Which I guess I could do if I set up VNC... but I'm not paying thousands for that.
Is an AR productivity tool really so hard? Apple owns the whole stack here. Nintendo can do Mario Kart AR in your living room with an RC car, but I can't get unlimited AR desktops for software development etc?
You can't because it's computationally impossible. There is simply no computing device that can render unlimited high-res desktops at 60Hz per eye.
Not to mention the need to stream all that data to the headset - since you're not going to put a high-end graphics card needed to even attempt this in anything approaching a wearable form factor. Good luck getting multiple 4k or even just HD streams between your laptop and your AR headset over Wi-Fi.
To clarify, I should have said virtual "desktops" instead of monitors. I don't even need all desktops to be active at once, but be available in physical spaces for me to access in a way that is more intuitive and customizable than pressing the F2 button on my Macbook keyboard and then hovering my cursor over the desktops bar. That would totally suffice. Also, somehow that bar can show what's on each virtual desktop, and in real time, so I know that's at least possible.
That bar shows a small lower-res preview of each of the desktops, it is not rendering all the desktops at full 4K res. Essentially the smaller a window appears on the screen, the lower effort is needed to draw it.
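The point about smaller on-screen size meaning less drawing effort is easy to quantify, since fill cost scales with pixel area. The preview dimensions here are an assumed figure for illustration:

```python
# Rough illustration: GPU fill cost scales with on-screen pixel count, so a
# small preview is orders of magnitude cheaper to draw than a full desktop.
def pixel_cost(width, height):
    return width * height

full = pixel_cost(3840, 2160)      # full 4K desktop
thumb = pixel_cost(256, 144)       # Mission Control-style preview (assumed size)
print(f"preview is {full / thumb:.0f}x cheaper to draw")   # 225x
```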
True, but apart from video/games/possibly a browser they absolutely don’t need to refresh at 60hz. I’m surprised they couldn’t somehow work in a “one nice screen, the rest refresh when needed” solution - though maybe that’s just too clunky for Apple.
The rest of the VR industry already has systems that only really put the rendering hardware to use where you're looking. It's called foveated rendering, because essentially your eyes can't even see the detail at the edges of your vision even if it were rendered there. It handles all this very fast and seamlessly.
Not a stretch the same thing could be applied to a virtual desktop environment especially when you have eye tracking.
I'm surprised we still don't have an X-like "render on the drawing device" model. Rather than sending pixels, you send a higher abstraction model from which the UI can be drawn.
You can theoretically serialize draw calls, but that's significantly more complicated than sending over a video stream.
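As a rough illustration of why serialized draw calls are so much cheaper on the wire than pixels. This toy "protocol" is invented for the example and has nothing to do with RDP's or X11's actual wire formats:

```python
import json

# Toy sketch of an X/RDP-style idea: ship drawing commands, not pixels.
# A filled 4K rectangle becomes a few dozen bytes instead of megabytes.
def encode(commands):
    return json.dumps(commands).encode()

def render(wire_bytes, canvas):
    """Replay serialized draw commands on the receiving device's canvas."""
    for cmd in json.loads(wire_bytes):
        if cmd["op"] == "rect":
            canvas.append(("rect", cmd["x"], cmd["y"], cmd["w"], cmd["h"]))
        elif cmd["op"] == "text":
            canvas.append(("text", cmd["x"], cmd["y"], cmd["s"]))

wire = encode([{"op": "rect", "x": 0, "y": 0, "w": 3840, "h": 2160},
               {"op": "text", "x": 10, "y": 10, "s": "hello"}])
canvas = []
render(wire, canvas)
print(len(wire), "bytes on the wire for a full-screen rect plus text")
```

The catch, as the comment notes, is that real UIs are built from far richer drawing primitives (shaders, video, composited layers), so faithfully serializing them is much harder than pointing a video encoder at the framebuffer.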
Then you lose the power of a second computing device if all view rendering happens on the AVP.
I suspect as hardware encode gets better and wireless bandwidth gets greater there’s less and less merit in this.
We still had this (--NSDisplay ?) in the first Developer Previews of MacOSX.
Absolutely. This is how RDP works as well. It's one of the few areas where I think the Windows ecosystem has a better approach.
It does AirPlay from a Mac to the headset. You look at the Mac and then tap the "connect" button that appears in mid-air above it. Apparently it's very seamless and has good latency.
What you don't get is Mac windows intermingled with AVP windows, it's just your Mac screen as one window that you can move about. It sounds good though, and there has been a fair amount of movement in screen sharing over the last few macOS releases which suggests more could be coming here (like multi-screen support).
Yeah, that's not great. I guess that's slightly useful, but what I'm looking for is something more like virtual desktops, with the ability to place each desktop into physical space. Being able to do that with individual windows would be even better, but I'd be fine with the virtual desktops. If Vision Pro can do AirPlay, then I don't know why Apple decided not to do an integration with virtual desktops from macOS. Maybe they will, but not having that at day one is puzzling to me. Without that, it seems we've got yet another AR/VR device that is a glorified tech demo. I really hope that it gets better (and less expensive) than what we are currently seeing.
I'm sure this will be possible in future Vision Pro devices; the main issue right now is probably performance. MacBook Pros can already only push 2-3 monitors (albeit at higher resolution), and the Vision Pro is also running its own apps simultaneously alongside the Mac window (I'm curious whether it's the Mac or the Vision Pro doing the compression and other processing work for streaming to the headset). Apple, being obsessive about frame rates and performance, is likely erring on the side of caution before allowing too much. For example, the recent downgrade from 1080p to 720p when streaming the Vision Pro screen to other devices.
I've been keeping an eye on the Simula One for some time; it does what I think you're looking for[1]. I have to assume outside of the Apple ecosystem this type of thing is already possible with existing VR headsets, but the closest I've seen is Virtual Desktop[2].
I guess for the Apple ecosystem we've gotta wait for some software updates to make it more useful. Not that I can justify paying my own money for the Vision Pro or Simula One.
[1] https://youtu.be/a3uN7d51Cco?t=16 [2] https://www.vrdesktop.net/
Yeah I’d love to use one for PCB design at a cafe, but I don’t even own a Mac. I’m not going to drop $5k for an experience I can already do with my old laptop just a little bit fancier.
I can’t get over how goofy and stupid it looks, and how awkward those gestures are.
There’s a video circulating of someone cooking while wearing it and gingerly pinching a virtual timer and placing it on a pot of boiling pasta.
It looks so stupid that I couldn’t help but laugh out loud.
Maybe younger people might think differently but for me, this stuff is dead on arrival because of simply how uncool and stupid it makes you look when you use it.
I'm not really an Apple fan nor am I going to buy this thing, but this seems like a criticism that goes away as soon as everyone does it.
We all look stupid staring at our black rectangles, with notches at the top, with little headphone stems sticking out of our ears. It looks stupid at first and then you get over it
Yeah, new things will look weird until they're culturally normalized. Two things seem to differentiate AR/VR stuff from previous tech:
1. They are more "worn" than "used"
2. They conceal your face while wearing
So maybe cultural acceptance might take a while...
EDIT: typos
Do we really though? Reading a newspaper or book would be stupid as well
People said this about voice assistants as well, we would get used to talking to them in public after a little while. I've still not seen anyone speaking to Siri or Google in public. The only thing most people use them for is setting timers when cooking ...
I think that's the author of the WSJ article.
I'm so old I remember the N-Gage 1st gen being ridiculed for the "sidetalking" feature.
Now we have millionaires on TV talking to their phones like it's a piece of bread they're about to take a bite of, and nobody bats an eye.
Yeah, saw the WSJ review with that video, but in all fairness, I think only someone who doesn't cook would think that cooking with this thing on your head might be a good idea.
Maybe, but if you had an app that allowed you to define your kitchen + appliances, you could easily make a game/app that tells you what to do: take stuff out of the fridge, cut it up... ping, your virtual pot needs stirring, go back to cutting, ping, turn off the oven... and so on.
Sort of virtual assistant.
That could be useful to people to avoid burning or forgetting stuff.
This has always been known, and this technology has been around for a while. I'm surprised Apple chose to use it for user input.
I was curious about this so I tried surfing the web and seeing how I click on things. I am literally unable to click on buttons or text inputs without actually looking at them. If I try to only use a corner of my eye, I miss the buttons 90% of the time. How do people not look at what they are clicking?
My guess is that people actually do look in order to aim their mouse but by the time they click, their eyes have already moved onto wherever they’re looking next.
Yep, focused on the play button on a YouTube video to get my mouse there and then my eyes immediately went back to the center of the video a fraction before I clicked the mouse button.
Maybe they could "remember" where your eyes were looking at a few milliseconds ago and click on that, but then they wouldn't know the difference between that or maybe you actually did mean to click in the center of the content area.
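That "remember where the eyes were" idea could look something like the sketch below: buffer recent gaze samples and, on click, attribute the click to the most recent sample that landed on a clickable target. The class, names, and the 200 ms window are all hypothetical, and picking that window runs straight into exactly the ambiguity described:

```python
from collections import deque

# Hypothetical sketch: buffer recent gaze samples so a click can be attributed
# to what the user was looking at a moment ago, not where the eye has already
# jumped. The 200 ms window is an illustrative guess, not a shipping value.
class GazeClickResolver:
    def __init__(self, window_ms=200):
        self.window_ms = window_ms
        self.samples = deque()          # (timestamp_ms, target) pairs

    def on_gaze(self, t_ms, target):
        self.samples.append((t_ms, target))
        # Drop samples older than the lookback window.
        while self.samples and t_ms - self.samples[0][0] > self.window_ms:
            self.samples.popleft()

    def resolve_click(self, t_ms, clickable):
        # Walk backwards: the most recently fixated clickable target wins.
        for ts, target in reversed(self.samples):
            if t_ms - ts <= self.window_ms and target in clickable:
                return target
        return None

r = GazeClickResolver()
r.on_gaze(0, "reply_button")
r.on_gaze(120, "text_box")          # eyes drifted back to the content
print(r.resolve_click(150, {"reply_button"}))   # reply_button
```

Make the window too short and it misses the lag between fixation and pinch; make it too long and it "clicks" things the user only glanced at, which is the premature-submit problem raised further down the thread.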
The problem is the Vision OS has no cursor. Cursor position is critical. Your eyes move around a lot more than your cursor moves, but in Vision OS, cursor position and eye position are the same. Very annoying. Maybe they could put a little cursor in the center of the screen that you could move around with another gesture.
That's if you want to stick with the old mouse paradigm. I rather think people are going to get used to looking at what they want to click on for slightly longer very quickly.
yep this is it
One reviewer said they kept hitting "No" and "Cancel" when they meant "Yes", because their natural progression was to look at the buttons in order, pick out the one they want to press, but actually click it after they finish scanning all of them, with their "mouse" paused on the one they want to hit. It's kind of fascinating that the Vision Pro's input method works so well when it does work that it tricks people into doing this, because they really do just expect it to work like magic.
From when I've used them in the past, the issue is the difference between looking in the general direction of something and using your eye like a joystick that jumps around if you happen to glance a little to the left or right.
Just now I moved the mouse to hover over "Reply" while I was still reading the comment. It was close enough to where my eyes were focusing that I could put the pointer over it, but I never looked at it directly. I've noticed this kind of cueing behavior in screen recordings many, many times over the years. I'm sure most people do it without thinking.
It’s my understanding that eye tracking isn’t great as an input method, it should be used more for stuff like rendering or NPC interactions.
It's going to be a great, sneaky way of installing an ad blocker onto wetware - users will train themselves to not even gaze at anything resembling an ad, lest they look at it long enough for the headset to register it as a click.
Apps are not able to register where you’re looking. Only the headset itself does.
Does iOS not rely on hovering/mouse-over for alt text or similar? I'm mostly on Windows, but I wonder if they'll eventually concede a second gesture for "put cursor here".
On visionOS the APIs essentially ask all UI elements for "alt text" and the system displays it when appropriate.
Sure, but the proof of looking is in the clicking. The threat scenario I'm describing here is "looked at the ad long enough for the headset to interpret it as wanting to click on it".
I think that eye tracking perhaps could be used to enhance gesture-based input methods though. It could provide a hint at which object a gesture is directed at in cases where that would be ambiguous.
I have tried eye tracking as primary input method some years ago in another setting, and I very much did not like the experience.
That's how it works on the Vision Pro. I didn't really feel annoyed by it, but make sure the device is calibrated to your eyes, or the results might not be that great.
This has been known for at least 30 years in the eye tracking business, and it even has a name: the Midas Touch problem.
I wanted to see how it is possible but sure enough, I found a paper from 1995 that cited even older research about this.
https://www.cs.tufts.edu/~jacob/papers/barfield.pdf
From the paper, the definition of the Midas Touch problem is:
"At first it is helpful to be able simply to look at what you want and have it occur without further action; soon, though, it becomes like the Midas Touch. Everywhere you look, another command is activated; you cannot look anywhere without issuing a command."
So this doesn't seem to be a problem that Vision Pro has?
Yeah, Apple Vision doesn't have this problem, because eye tracking is used just for pointing; not for clicking on items.
Whether it has this problem or not seems to really depend on how they use “OnEyesOver” events in practice.
But decoupling hand gestures from eye tracking should not be that hard: the external cameras could just follow the hand and put a pointer on the screen.
Seems like you could just implement a simple delay to solve this.
Let's say I want to click on the "reply" button below this text box. If I'm perfectly honest, I DO look at the button for a moment, then I move the mouse pointer over to it. But then right before clicking, my eyes switch back to the content I've created to observe that my click is having the desired effect on it.
I'm not actually looking at the button at the moment I click on it, but I DID look at it just a few milliseconds prior to the click. Why can't the UI just keep track of what I looked at a few milliseconds ago, to figure out that I actually wanted to click on the button, and not in the center of some text box?
One issue could be maybe I thought for a moment about replying but then changed my mind and decided to edit the content some more. But the UI has decided that I meant to click the "reply" button and so now it's been submitted prematurely. Yeah, I can see the problem now. The position of the mouse cursor is meaningful when clicking, and the Vision OS doesn't have a cursor. Cursors are important.
It doesn't seem like a 'problem'. For new tech, the emphasis should always be on pro users first (even if they don't initially adopt it because of the long lead times for those industries). So if you're designing an oil rig with these, a pro user would probably want to be able to independently interact with an element while looking for the next element since that's more time-efficient. Seems like a better term might be the 'Midas Touch Axiom'.
"Want it for entertainment? people want to enjoy photos, videos, movies with other people and it can't include them. Even if they have a Vision Pro, I haven't yet seen any sign of ability for multiple people to do these things together."
This is not accurate; FaceTime has SharePlay. Any app that leverages it can build a synced entertainment experience. Examples out of the box: Apple TV, Freeform, Apple Music.
I wish the SharePlay SDK weren't tied to FaceTime, since it limits you to people whose iCloud email or phone number you have. Big Screen on Quest was a great app that leveraged the idea of multiple users in the same VR session, grouped by interest; Quest just lacked the quality.
https://support.apple.com/guide/iphone/shareplay-watch-liste...
I think OP meant you can't include other people _in the same room_. If I want to watch Netflix (well, Apple TV, since Netflix doesn't have an app yet), my partner is completely out of luck if I do it on an Apple Vision. Unless he too has the money for one and we FaceTime each other to sync up our watching experience.
1. This problem has already been solved on the Meta Quest by a 3rd party app called Big Screen. I would imagine that they or someone else will attempt to do the same for VisionOS. Otherwise, I can see Apple fixing this later.
2. 52% of media viewing in the US is done alone.
1. The problem isn't solved, because nobody has the patience or money to set up and sync multiple headsets when you already have a perfectly good TV.
I know zero people who have eliminated their TVs for a headset. Apple isn't going to change that.
2. And so the Vision Pro isn't even viable for 48% of viewing. That's a huge percentage, and shows that nobody is getting rid of their TVs for this.
That seems like an odd requirement/desire for a head-mounted display?
That's the point. Watching TV with other people is a thing that most folks do. At least in my house, it's nice to walk into the room to see what someone else is watching and join. Unless they become comfortable and universally ubiquitous, watching media isn't something Apple Vision is going to be particularly good for.
I once had a roommate who stopped playing video games a decade ago. We bought a PS5, and he kept trying to find games like 'Gears of War' and 'Lord of the Rings' with a multiplayer split-screen campaign in the same room. Aside from classic 1v1 fighting and sports games, the new games being released did not offer that. They were all built for high-quality solo gaming or optimized online gaming. Expecting a VisionOS experience with people in the same room reminds me of that. I'm not saying there won't be apps for it, but it would be niche.
A note on the sharing part: the share of people living alone is in the double digits, percentage-wise [0], in the markets Apple cares about.
That might not be their primary goal, but the device could appeal to that demographic either way if the other features are appealing enough.
[0] https://thehill.com/policy/healthcare/4085828-a-record-share...
Not on a plane they don’t. Not in a hotel room on a business trip.
IMHO, the Vision Pro is for being somewhere when you’re nowhere; not for being somewhere when you’re already somewhere.
I want to watch stuff or collaborate on projects with my wife on a plane or in a hotel, you just haven’t seen these use cases from Apple yet because the product lacks it in 1.0
I think you're conflating two things.
The hypothetical use-case that the GP is talking about here is AR-facilitated collaboration — people in your "see-through" view of the world being able to interact with the AR objects in your field of view, or vice versa. Being able to AirDrop something to someone's iPhone by dragging a file from a window over to their body that you can see through the lens. You and your SO on the same flight, next to each other and both wearing headsets, able to see the same set of shared/synced AR objects, and therefore watch the same movie. That kind of thing.
While this is an intended use-case for the Vision "platform" as a whole — it's an implicit promise of the whole "spatial computing paradigm" phrasing — I don't think this kind of AR-facilitated collaboration was ever intended to ship with the Vision Pro.
Why? Because, nobody in the Vision Pro's target market — at least as Apple would have it — wants or cares about AR-facilitated collaboration. I'll get to why in a moment, but in short — it's because it's the Vision Pro. And like Apple's other Pro products, it's mostly intended to be something possessed by an individual but owned and managed by their employer.
Now, Apple clearly intends to build VR-facilitated collaboration — but hasn't yet. That's what all the reviews mean when they say "the collaboration" is half-baked in the Vision Pro. The people in Apple's target market for this thing expected VR collaboration, and it's not there. But that's an entirely different thing from AR-facilitated collaboration.
The Vision Pro as a specific product offering, almost certainly came out of Apple employees doing pandemic WFH, and realizing that all current "remoting" solutions sucked. Especially if you're a hardware engineer and trying to get a good look at a 3D real-world prototype of a thing. The people who are actually there in the office could just come in and stare at a real 3D-printed prototype. But the WFH people had to settle for looking at a 3D model on a screen... or maybe setting up a Windows machine, connected to an Oculus Quest, and using special compatible CAD software, in order to be able to do a walkaround. And that CAD software definitely didn't let them just paw at the 3D object with their hands.
The Vision Pro is clearly aimed at the "high-fidelity sensory feedback" half of the remoting problem. It's thus the complement to the companies who build telepresence robots — those robots solve the high-fidelity presence and interaction half of the remoting problem. (And you could really do some clever things by combining them!)
But note what else Apple released during the pandemic: Freeform, a piece of collaboration software. In fact, Apple added collaboration features to many of their first-party software offerings during the pandemic.
Apple never thought they'd extract their best work from their WFH (or now, travelling) employees by having them "show up to the office" with a telepresence robot. They rather expect that the best platform for all their engineers — in-office or otherwise — will be a digital office: one made of collaborative apps. And specifically, "level-of-immersion responsive" collaborative apps — apps that can switch between (or be compiled for) both lower-fidelity 2D screen-based UX, and higher-fidelity 3D spatial UX, on the same codebase.
It's this sort of 3D spatial collaboration-app UI experiences that the Vision Pro lacks in 1.0 but will clearly add later. (Which is why Apple cancelled WFH as soon as they could: their "solution to the remoting problem" isn't baked enough for them to usefully dogfood yet!)
But this is effectively VR-facilitated collaboration — working with other people who could be in arbitrary places, in VR or just interacting through a screen, where you see their VR avatars (personas) instead of seeing them. It's "Metaverse stuff" — but don't let Tim Cook catch you calling it that.
But AR-facilitated collaboration — i.e. having other people who are part of your work, present in the room with you, with some or all people in the room wearing Vision headsets with the see-through turned up, and where people wearing the headsets are interacting both with shared AR objects and with others present in physical space (rather than their VR simulacra in VR space)... this all has zero relevance to the remoting use-case. If you're in the office, then you're not going to be wearing a Vision Pro... because you can collaborate by just getting people in a room, and using various screens (AirPlay on an AppleTV display for shared viewing; your Macbooks for running collaboration tools; etc.)
Now, AR-facilitated collaboration is likely a use-case for the "Vision platform" as a whole. (Otherwise, why bother with the AR OS features?) AR-facilitated collaboration is what a self-employed / freelance creative professional — someone who isn't remoting into some [idea of an] office, but who does have people [e.g. clients] physically present around them who need to interact with them and with their work — would want. AR-facilitated collaboration would therefore be the defining use-case for a later "Vision Studio": a "prosumer" model targeted at such creative professionals — the same sort of people who buy themselves a Mac Studio. It would match the Vision Pro's targeting (which, if you haven't considered it, will almost certainly end up squarely on "businesses buying these for their remoting employees" — even if the early adopters are individual tech nerds.)
Hmm a lot to disagree with there
a hotel room on a business trip is pretty much "nowhere", so absolutely yes.
So the use case is porn for people on business trips?
My launch day Apple Watch was unbelievably bad. Not even good for telling time as the raise-to-wake feature was so flaky. So by the sound of it, the Vision Pro might be starting in a better position than the watch.
My job at the time gave me a brand new Series 2 and it was annoying to interact with... when it was new. Dropped frames, missed touches, etc. There was still a little bit of lingering hope there would be "killer apps" outside notifications and exercise.
But nope, nothing more interesting came, and OS updates ruined performance so badly that I happily returned it to my employer 2 years later and opted not to buy my own until 2021 (for exercise and notifications only).
I had the watch they launched with, which Apple pretends didn't exist as they named the next model the Series 1 and this model never got a name. The watch improved considerably in the next generations, so you can do the math back from your Series 2 to get a sense of how truly bad it was.
As a counterpoint, I had a Series 2 for years and it worked really well. It was slow, so third party apps were not really usable, but the core functionality of the watch was perfectly fine. That is, until an unfortunate incident broke its screen while in the sea.
Watch v1 was just awful. Those laptops with butterfly keyboards and the first gen Air were also completely broken in their own ways. If Apple manages to keep iterating on this device, they might eventually have a winner
Or maybe it’ll be another homepod
genuine curiosity, how often are you clicking something without looking at it at least for a moment?
or, how often do we believe other people are clicking something without looking at it?
im examining this for myself...hard to feel organic while im actively focusing on it, but i at least glance at my mouse pointer target while traversing the pointer towards the target across my screens
A lot.
I also started thinking about it reading the reviews, and the main cases to me are:
- checking something before committing an action: for instance, rereading the product name before pushing the purchase button. The pointer is already on the button, I keep it there while checking the order, so I just need to click.
- focus switch: pushing another window to the forefront doesn't need a super accurate click. I assume most people eyeball it like me and will click on an emptyish part of the window from the corner of their eye. Same for moving the focus away.
- scroll and type like situation: mostly when using a document on the side while taking notes. My eyes and focus will be primarily on one side (with quick glances on the other), while the mouse/trackpad movement will be on another.
I think we'll discover a lot more instances of this.
your first example is p solid, i agree there will be a micro-tedium related to re-focusing on a confirmation button in that type of scenario
assuming that, in the AVP UX, the user could not linger hand-gesture focus on the button then immediately click after verifying whatever info they are reading as they will have to move the eyes up (to read lol) then back down to focus the hand-gesture control back onto the button
Often times I'll look to position the mouse and then look at something else when I actually hit the button.
i generally agree with the sentiment of this post. it does appear to be a beta/dev kit. i will say that the productivity criticism is a BIT unfair. It may be the case that you can only have one macOS display, but you can have many non-macOS apps running right alongside it. You could have your macOS display doing things that only macOS can do, and then run the Vision Pro version of Discord or Teams or Safari or whatever else you would use that has an iPad/Vision version, as floating windows separate from the macOS display.
yes true - a lot depends on the integration in that scenario though. Can I seamlessly copy and paste rich content between them, drag and drop, does the mouse seamlessly move from my Mac desktop to the Safari window next to it, etc.?
At a deeper level it depends a lot on the question of does Apple want this? If they do then all these will be solved over time. But if they actually see MacOS as a legacy integration then they simply aren't going to invest in encouraging people to use it. I'm waiting to see indications on which way they are going to play it.
iOS/visionOS lacks good window management tools for this though and since macOS Apple has not demonstrated they can build them for new platforms. Maybe visionOS will motivate them to actually get it right, but that hasn't been shown off yet.
I think about all the apps I'm running and switching between on my computer now, using shortcuts and toolbars and docks to arrange, hide, and switch between them. Everyone using multiple apps on visionOS just looks chaotic.
I once had a second portrait monitor next to my ultrawide. I had to get rid of it because it was just too tiring to be constantly turning my head so far to look at it. It didn't work out.
I cannot imagine how uncomfortable it would be if each app needed to be in a different physical space that required turning my head to use. Painful.
What do you mean, it can't run multiple monitors? I thought it lets you pop out windows free standing, no concept of monitor at all.
This tool evidently overcomes the display limitation: https://github.com/saagarjha/Ensemble
"Ensemble (formerly MacCast, before the lawyers had something to say about it) bridges windows from your Mac directly into visionOS, letting you move, resize, and interact with them just like you would with any other native app. It's wireless, like Mac Virtual Display, but without the limitations of resolution or working in a flat plane."
It does for apps running on the goggles themselves, but the Mac integration feature just shows the Mac's screen as one window, kinda like Remote Desktop on Windows.
And this is a quite hard limitation, since the Mac has to actually render those windows and then stream them to the goggles over radio. So, without quite a bit of magic, you have a limited amount of pixels the Mac can draw and send.
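To get a feel for that pixel budget, here is a back-of-the-envelope sketch. The resolution, frame rate, and compression ratio below are illustrative assumptions, not Apple's published numbers:

```python
# Rough bandwidth estimate for streaming a virtual Mac display wirelessly.
# All figures are illustrative assumptions, not actual Vision Pro specs.

width, height = 3840, 2160   # assume a 4K virtual display
fps = 60                     # assume 60 frames per second
bytes_per_pixel = 3          # 8-bit RGB, uncompressed

raw_bps = width * height * bytes_per_pixel * fps * 8  # bits per second
print(f"uncompressed: {raw_bps / 1e9:.1f} Gbit/s")    # ~11.9 Gbit/s

# Even a generous 100:1 video-codec compression ratio leaves a steady
# stream of over 100 Mbit/s, which is why pixel count is a hard budget
# item for any over-the-air display link.
print(f"compressed (100:1): {raw_bps / 100 / 1e6:.0f} Mbit/s")
```

The point of the arithmetic is that every extra virtual monitor multiplies this figure, so a one-display cap over radio is unsurprising.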
I don’t know why, but while I feel multiple monitors help my productivity a lot in Windows and Linux, I find myself not caring as much in macOS as long as the screen is big enough. I think it has to do with my habits around how I use the windowing in each. I tend to tessellate and arrange windows in macOS, while I tend to maximize or lock to screen edges in Windows.
Unpopular opinion: multiple monitors are a meme for most uses, and almost everyone is better off having a single screen and using their fingers to move the viewport across virtual desktop spaces.
Not just an unpopular opinion, it’s also not supported by data. There are a decent number of studies that show that multiple monitors increases productivity.
Iirc though, there are diminishing returns fairly quickly beyond dual monitors.
Me personally, the sweet spot is three total screens.
> Want it for entertainment? people want to enjoy photos, videos, movies with other people and it can't include them
This is a disingenuous argument. Your other points are much more valid than this one. You don’t have a VR headset to interact with other people in the same room. If you want to watch a movie with other people around you, there are many other (cheaper) ways to do that (and Apple can sell you a nice AppleTV to do it).
What does disingenuous mean?
And yet you literally have early access reviewers regurgitating talking points about how this will redefine the television-watching and movie-going experience.
This was always going to happen. The human eye has a field of view and dynamic range no display technology can hope to match anytime soon. The future of AR is not reprojecting the outside world on a screen; it is screens which can become transparent.
Dynamic transparency is the path to the future, and it's physically perfectly doable. Any news from the Ray-Ban smart glasses?
For that to happen we would need a transparent display that can block light. Content on an additive display will always look somewhat transparent, with no hope of displaying blacks. The augmentations on the Vision Pro look so much better than on a HoloLens 2; it's like looking at 3D-printed objects.
Expecting the same, this is going to be a MASSIVE flop. VR is not for the masses until the goggles go away.
I think this product will be a slow burn. They are getting developers engaged now and subsequent generations will bring broader appeal while the software will get more refined and apps will expand in availability. I don’t think it will be a flop it will just take a while to get going. And they must absolutely know this given the current pricing.
It's not gonna flop, for the same reason that Apple desktops don't flop. Anyone educated in basic modern technology can easily see that for the price, you can build a custom desktop that blows any Mac out of the water, but people still buy them because of 2 things: styling and ecosystem.
Vision has both of those. People will conveniently ignore all the downsides of it, like they do with current Apple products.
We just got the 'larger iPod' version of Spatial Computing. If your primary interface is a screen, it's still screen computing, not spatial computing. They literally had everything teed up to do some wild proximity things (AirTags, HomePods, etc) - and they gave us a strap-on iPad.
Whatever, it at least gives a startup an opportunity to build something unique - it's just sad to see your old friend start going senile.
“ok zoomer” - your old “friend” next month as you pay for your subscriptions through their payment processing.
Eh... I prefer empty cinemas
No loud babies, no popcorn sounds, no people explaining the plot to people not paying attention, just me and the world of the film. Bliss.
No surprise at all I'd say - as Nilay Patel put it correctly in the Verge review:
Anyone remotely familiar with the state of development in those areas would be aware that “even Apple” can’t cheat Reality (punintentionally).
Those left still raving about and/or hoping for a game changer will be greatly disappointed - or are only in it for the line go up.
The whole concept will be a niche product for many years to come and will stay an isolating experience.
Do you mean a Mac's external monitor visible through the vision pro AR view?
No idea if it does this, but the obvious use case is for people who aren't physically present - but letting them somehow share a physical space. It could potentially be awesome for friends/partners who live far apart.
I wrote a long comment [1] months ago when the Vision was first announced, expressing my skepticism about the use of eye tracking based on my personal experience with the tech. At the end I said, "maybe I'm wrong." Turns out I wasn't.
1. https://news.ycombinator.com/item?id=36220097
Every hit from Apple had lots of initial (VALID) gripes, but the experience was worth it to advance the core product and eventually eliminate or accept those limitations.
iPod was the size of a beefy wallet, but was good enough.
iPhone was glorified plastic and websites looked like crap, no app store. But hey, it worked well enough.
That said... this isn't accessibly priced and what's the hook? Like if this launched the same time as Pokemon Go or WoW were taking off alongside it it'd get the social momentum all the other options had.
Also, is it better than the competition in key, differentiable ways...? AR/VR could very well take off, but it's not this year.
It’s first generation
First Gen is usually awful
This is not awful and maybe even closer to 2nd gen
Everything starts somewhere
Most things are ready for the masses by the 3rd or 4th gen.
To me, 3D home recording playback is a huge use case.
Why would you want presence in an action movie? Cool, in an exciting sort of way.
But presence in recording of your family? That's powerful stuff, for everyone!
That feels like the long-term hook. iCloud to handle obscene storage amounts as a service. iPhone to generate new recordings. Vision to play back recordings.
And the dastardly brilliant part is... the more 3D video you record... the more valuable a Vision is to you.
The reviews haven't mentioned it, but SharePlay [1] is OS-level functionality and the press releases mention using it with movies, music, and games.
[1]: https://developer.apple.com/videos/play/wwdc2023/10087/
Maybe Apple expects every person to buy one. Didn't Facebook recently try something similar?
The eye-tracking thing doesn't surprise me at all, but I am surprised anyone thought this was a holy-grail sort of interface, in particular that Apple didn't rule it out themselves fairly quickly. Eye-tracking data is always a gigantic mess - it's why it's presented as gaze averages rather than direct replays.
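As a toy illustration of why raw gaze data gets averaged before anyone looks at it (entirely made-up numbers, not real eye-tracker output), simulated jittery samples around a fixation point become far more usable after a simple sliding-window mean:

```python
# Simulate noisy eye-tracking samples around a fixation target, then
# smooth them - a sketch of why gaze is reported as averages, not replays.
import random

random.seed(0)
target = (640.0, 360.0)  # where the user is "really" looking

# Raw samples jitter around the target (saccades, tremor, sensor noise).
raw_samples = [
    (target[0] + random.gauss(0, 25), target[1] + random.gauss(0, 25))
    for _ in range(120)  # ~1 second of samples at 120 Hz
]

def moving_average(samples, window=10):
    """Smooth a gaze trace with a simple sliding-window mean."""
    out = []
    for i in range(len(samples) - window + 1):
        chunk = samples[i:i + window]
        out.append((sum(p[0] for p in chunk) / window,
                    sum(p[1] for p in chunk) / window))
    return out

smoothed = moving_average(raw_samples)
# The smoothed trace clusters much more tightly around the target than
# the raw one - but smoothing also adds latency, which is exactly the
# tension a gaze-driven pointer has to live with.
```

Note the trade-off the last comment hints at: a wider window means a steadier pointer but a laggier one, which matches the reviewers' complaints about gaze as a primary input.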
AR passthrough is the big v1.0 feature for this generation of headsets... I'm sure Apple and Meta were developing in tandem without knowing what the competition was doing. It's a really neat addition that brings significant improvements... and I could see Apple developing it as 'this is streets ahead', where Meta was just improving tech they already had.
At the end of the day, this is Apple testing the waters and trying to get a positive cash flow to help offset significant R&D...what they're showing is pretty impressive in a number of ways, even as it's lacking in others.