I can see something like this filling a niche with the elderly population as like an external memory. (Or even just for forgetful adhd folks like myself, having something I can ask "wait, what did my wife just say to me 5 minutes ago?" ;) )
"what did my wife just say to me 5 minutes ago?" is the most brilliant app idea I've seen (in recent memory.)
Black Mirror's "The Entire History of You" (S1E03) has one take on what would happen if we could effortlessly record -- and replay -- everything. As with most Black Mirror episodes, there are some dark but believable ideas.
Incidentally, it was written by Jesse Armstrong, who later created "Succession".
I recommend "The Truth of Fact, the Truth of Feeling" by the brilliant Ted Chiang, a short story featuring a related social situation. People in the story use a recorder with highly advanced search and indexing capabilities so it becomes possible to instantly access a video of anywhere you've been, or of any conversation you've had.
but requires an always on and listening device, which people don't really want because it is always abused by the overlords
Was Kapture audio, from a 2013 Kickstarter (not-so-recent memory), 4 minutes too short?
https://web.archive.org/web/20140208000114/https://kaptureau...
EDIT: had to share the promo video: https://www.youtube.com/watch?v=arQoSSXKaSQ
counterpoint being "bring up what my husband said 10 months ago". the Devil skips away, contract in hand...
The elderly example is actually an extremely good/thought provoking idea. I can imagine my Grandparents getting huge use out of this, including with smart home functionality, if it got to where it needs to get to.
Instantly destroy? Or acquire?
Quite a few bridges were burned to form the company, an acquisition would be a surprising development.
Can you expand on that? How did Humane burn bridges to form the company?
Aren't the humane folks a bunch of ex-Google/ex-Android execs?
Who is the extremely slow talking spokesperson?
Wrong, ex-Apple people
A bit tongue in cheek - I said "instantly destroy" because if the main selling point is an AI voice assistant, then people would just use what's already built into their phone/watch/AirPods instead of paying $600, if Apple were to implement a better LLM for Siri.
I'm skeptical of the usefulness of the hand projection vs a watch. And I think anyone who wants to bring a camera would be far better served by an iphone (or any phone).
Genuinely asking, what problem does this solve?
Simple example: which way do I go at the next intersection?
Why would I speak this question when I can just look at the map on my phone for an extremely quick and concise answer?
Or if I'm driving, GPS is displayed on a giant screen.
You need to take out the phone and start the phone map app. And quite a few people are terrible at reading maps.
Not saying this is for everyone, but there will be users.
I assume this pin will use GPS? It’s not always accurate. Sometimes the GPS is off. With a phone screen, you can better estimate where you’re standing by looking around you and then comparing with what’s on the map.
For example, when I visited Hong Kong, GPS was almost never accurate.
The route is usually shown in great detail on your car's display. You also get a voice prompt just before you need to start thinking about turning. Is this a genuine problem?
I meant as a pedestrian.
Apple Watch and Android Wear solve this.
I don't understand the insistence on using voice as the main interaction and ditching the screen.
There was a Google I/O talk a few years back where they talked about users wanting multi-modal interaction, an example being: they ask for restaurant recommendations by voice, then get a list they can view on their device. Both query and results are presented in their easiest modality, and humans will naturally switch between them.
This thing seems dead on arrival. Who wants to hold their hand up like that? Who wants to look at an uneven "screen"? Can you use it while walking, or while experiencing the movement of a vehicle (car, bus, subway)?
Is this just a big sunk cost fallacy launch?
I agree with you on all points except one. Arguably the uneven "screen" problem can be solved with a depth camera and warping the projection to match the contours of your hand. Since they already support hand gestures on the target hand it's possible they already have the equipment built-in to do this.
Does this also require that you track the user's eye position?
Eyes should stay mostly fixed relative to a pin on your chest, as long as you're looking in the same direction (ie your hand). I think the differences would be small enough, especially since the display seems pretty blurry anyway
Projection warping should be viewer-position independent (within a limited scope), relying only on the position of the projector and the screen. Some home theater projectors already do this to an extent. And there are some famous public performances where video is projected onto a building and warped to match its contours in such a way as to give a convincing 3D effect (example: https://www.youtube.com/watch?v=gJ_5sDvAlNY). You can perform this kind of projection warping on a Raspberry Pi.
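For intuition, the math behind that kind of warp is just a projective transform: each point gets mapped through a 3x3 homography before being drawn, so flat projector pixels land correctly on a tilted surface. A minimal sketch in Python (the non-identity matrix values below are made up purely for illustration):

```python
def apply_homography(H, points):
    """Map 2D points through a 3x3 projective transform.

    H is given row-major as a flat list of 9 numbers; each input
    point (x, y) is treated as homogeneous (x, y, 1) and the result
    is de-homogenized by dividing by the third coordinate.
    """
    out = []
    for x, y in points:
        w = H[6] * x + H[7] * y + H[8]
        out.append(((H[0] * x + H[1] * y + H[2]) / w,
                    (H[3] * x + H[4] * y + H[5]) / w))
    return out

# The identity transform leaves points unchanged.
identity = [1, 0, 0,
            0, 1, 0,
            0, 0, 1]
print(apply_homography(identity, [(2.0, 3.0)]))  # → [(2.0, 3.0)]
```

Real systems (projectors, OpenCV, etc.) estimate H from corresponding points; with a depth camera you could refine beyond a single plane, but the per-point mapping stays this cheap.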
Genuinely asking, what problem does this solve?
People are thinking about the form factor *after* the cell phone. Apple is busy training everyone to use hand gestures with the new Apple Watch and upcoming Apple Vision. Humane is going down the path of projecting on the hand and touch.
Doesn't the projection require one hand to be up and in a flat position, and the other hand interacts with it? Meaning it requires two hands?
Apple's implementations are for 1 hand operation. You can operate the watch's touch screen while holding a steering wheel for example.
What's the difference between the objectively not great screen that is my hand, and the oled watch that doesn't require both my hands for operation?
EDIT: Heck, this requires one hand just to see anything. I can look at my watch without any hands!
From the demo video it looks like buttons can be activated with a pinch gesture similar to what the new Apple Watch has.
"What comes next" is interesting as a problem formulation insofar as it encourages solution-based thinking ("Here's the solution I think is next, for a problem still to be identified - other than that it is what comes next.")
People are thinking about the form factor *after* the cell phone.
That presumes there is one. There's not yet a "form factor after the car" for example. Just refinement of the same basic 4-wheeled template, with a few oddball vehicles for niche uses.
A possible indicator here is the apparent lack of demand for small screen phones. To me it suggests that screen real estate is more valuable than portability for most people.
Yeah, people already have phones. This could just be a phone app.
Part of me just wants to get rid of my phone if I get a device that does the actually useful things: get info about something, check and send messages in a smart way, check the bus.
Most of the other stuff is just idling. I don't expect I would idle in the same way with an actually good assistant that respects me.
But then I'd prefer an open source Wikipedia/Wikimedia like organisation behind it.
Part of me just wants to get rid of my phone if I get a device that does the actually useful things: get info about something, check and send messages in a smart way, check the bus.
This requires an always-on device, or always-in-the-cloud server processing your data and pushing updates to your device.
The former is limited by physics (battery), the latter is limited by how much data you want accessed from the cloud. Neither are solved by open source.
It could be a self-hosted home device for the big power requirements.
The Apple Watch can already handle payments, music, and communication without a phone.
Siri itself is lacking but I expect that to change with an LLM soon.
You can already lock down your phone to prevent distracting apps.
I'm skeptical that people would actually choose to go without a phone in favor of this.
Genuinely asking, what problem does this solve?
It's a combadge.
I repeat: it's a combadge. It solves the self-evident problem of there not being combadges available and in use.
Or, at least, it's *almost* a combadge. A good qualitative jump forward, but with plenty of unwanted features like *subscription* (I guess this could work for a *Ferengi* combadge), screen, wake words, etc. A combadge doesn't need to be an image projector, nor does it need rich tactile controls. But I guess you can improve the product-problem fit by ignoring those features.
It's a poor version of the combadges in Star Trek Discovery when they go to the 32nd century. Those can project holograms in front of them.
Hell, Star Wars had those, and that was A Long Time Ago!
Honestly, if it did the one thing that combadges do, it would have at least one feature about which I'd say, "oh, that's nice".
The guy gave a TED talk a while back going into his motivations. I believe the main one was that he didn't like how phones get between you and the world, and take you out of the moment. This was an attempt to make tech that isn't a distraction in your life but fades into the background. That was his driving principle, I believe.
I got the impression his driving principle was "if we can't replace smartphones then Apple wins"
Tim Cook sleeping like a baby tonight.
This looks like a cool toy that high-level members of an organization will buy, and nobody else.
It can’t compete in the consumer space, because it doesn’t let you waste time on social media. It can’t compete in the corporate world because it doesn’t have a screen — no email, no spreadsheets, no collaborative chat application we’ve all grown used to. And it can’t even be great for photography, since you need another device to view the photos and videos this thing takes.
*If* this thing takes off for its impressive AI capabilities, smartphone makers can pump R&D into their AI, and give us this for free as a software update. But right now, the only people who will use this are folks whose job involves scheduling meetings and firing off quick text messages to colleagues and clients.
This thing is great for old people who can't see the screen. It is like a life alert on steroids that can order pizza. It is also great for kids for obvious reasons.
Not just Apple. Any smart watch or ear buds with Google or Amazon AI. I think ear buds paired to a phone are already the perfect form factor for this kind of thing. My Pixel Buds are already pretty good at this and I absolutely never use it.
The problem this solves is my pager looks outdated
"How can I get people to regret sharing this public transport with me?"
Biggest issue is that people hate talking to computers in public.
Alexa was the closest to achieve significant usage since you can use it within the privacy of your home.
For voice UIs, the unclear boundaries of what you think it can or cannot do are also a huge hurdle. After you get a couple of “sorry, I cannot do that” responses, you stop using it.
Yeah, unless the utility of this device is large enough to override existing cultural norms, there are actually very few venues where it feels "comfortable" to voice-interact with a device.
I went through this exercise with GPT voice. It's an awesome capability, but other than perhaps walking outside, or sitting in my office, there's no other space where it feels "ok" to just spontaneously talk to something.
A grey area is when you perhaps have headphones in / on and it looks like you're in a phone conversation with somebody, then it kinda feels ok, but generally you're not going to take a phone conversation in a public area without distancing yourself from others.
There's a reason most casual communication these days is text rather than voice or video calls.
The weirdness is caused by the incantation all these things have. Once you can just talk to the AI without doing anything, just talk to it, it'll catch on very easily.
"Siri, lights to half."
"Siri, lights to HALF."
"Siri, lights to HAAAAALF."
"Siri, LIGHTS TO FIFTY PERCENT!"
This fortunately is a solved problem. Or will be, once Amazon, Apple and Google get off their asses and plug a better voice recognition model into an LLM.
Silly how OpenAI could blow all voice assistants out of the water *today*, if they just added Android intents as function calls to the ChatGPT app. Yes, the "voice chat mode" is *that* good.
I know i'm getting close to Torment Nexus territory but how do you get an LLM to run code as the response? Given that an LLM basically calculates the most probable text that follows a prompt, how do you then go from that response to a function call that flips a lightswitch? Seems like you'd need some other ML/AI that takes the LLM output and figures out it most likely means a certain call to an API and then executes that call.
With alexa i can program if/then statements, like basically when i say X then do Y. If something like chatgpt requires the same thing then i don't see the advantage.
If something like chatgpt requires the same thing then i don't see the advantage.
So LLMs today can do this a few ways. One, they can write and execute code. You can ask for some complex math (eg calculate the tip for *this* bill), and the LLM can respond with a Python program to execute that math; the wrapping program can then execute it and return the result. You can scale this up a bit - use your creativity on the possibilities (eg SQL queries, one-off UIs, etc).
You can also use an LLM to “craft a call to an API from <api library>”. Today, Alexa basically works by calling an API. You get a weather api, a timer api, etc and make them all conform to the Alexa standard. An LLM can one-up it by using any existing API unchanged, as long as there’s adequate documentation somewhere for the LLM.
An LLM won’t revolutionize Alexa type use cases, but it will give it a way to reach the “long tail” of APIs and data retrieval. LLMs are pretty novel for the “write custom code to solve this unique problem” use case.
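A toy version of that write-then-execute loop might look like the following, with the model's reply hard-coded (in a real system this string would come back from an LLM API call, and executing untrusted model output would need real sandboxing, not just restricted builtins):

```python
import contextlib
import io

# Hypothetical model reply to: "calculate an 18% tip on a $42.50 bill".
# In reality this string comes back from the LLM, not a literal.
llm_reply = "print(round(42.50 * 0.18, 2))"

def run_snippet(code):
    """Execute model-written code and capture what it prints.

    Warning: exec() of untrusted output needs proper sandboxing in
    practice; limiting builtins here is illustration, not security.
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {"__builtins__": {"print": print, "round": round}})
    return buf.getvalue().strip()

print(run_snippet(llm_reply))  # → 7.65
```

The wrapping program never needs to understand the math itself; it only needs to run the snippet and hand the captured result back to the conversation.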
Yup, from where I see it, the only thing(s) holding llms back from generating api calls on the fly in a voice chat scenario is probably latency (and to a lesser degree malformed output)
Yea, the latency is absolutely killing a lot of this. Alexa first-party APIs of course are tuned, and reside in the same datacenter, so it's fast, but a west-coast US LLM trying to control a Philips Hue will discover they're crossing the Atlantic for their calls, which probably would compete with an LLM for how slow it can be.
and to a lesser degree malformed output
What's cool is that this isn't a huge issue. Most LLMs now have "grammar" controls, where the model doesn't select *any* character as the next one; it selects the highest-probability character that conforms to the grammar. This dramatically helps with things like well-formed JSON (or XML or ...) output.
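A character-level toy of that idea: at each step, take the model's ranked candidates but accept only ones the grammar allows. Everything here is invented for illustration - the grammar is a made-up template for `{"level": <int>}`, and the ranking function is a stand-in for a real model's next-token distribution:

```python
import json
import string

# Toy grammar: output must be exactly '{"level": ' + digits + '}'
FIXED = '{"level": '

def allowed_next(prefix):
    """Characters that keep the prefix extendable to a valid string."""
    if len(prefix) < len(FIXED):
        # Still inside the fixed template: only one legal character.
        return {FIXED[len(prefix)]} if FIXED.startswith(prefix) else set()
    body = prefix[len(FIXED):]
    if body.endswith("}"):
        return set()                      # output is complete
    if body == "":
        return set(string.digits)         # need at least one digit
    return set(string.digits) | {"}"}     # more digits, or close

def toy_model_rank(prefix):
    """Stand-in for an LLM's next-character ranking (pure fiction)."""
    body = prefix[len(FIXED):] if prefix.startswith(FIXED) else ""
    return ["}", "5", "0"] if len(body) >= 2 else ["5", "0", "}"]

def constrained_decode(model_rank):
    out = ""
    while True:
        allowed = allowed_next(out)
        if not allowed:
            return out
        for ch in model_rank(out):        # best allowed candidate wins
            if ch in allowed:
                out += ch
                break
        else:
            out += min(allowed)           # grammar forces the continuation

result = constrained_decode(toy_model_rank)
print(result)                  # → {"level": 55}
print(json.loads(result))      # guaranteed-parseable JSON
```

Production implementations (e.g. grammar-constrained sampling in LLM runtimes) do this over token vocabularies with real grammars, but the principle is the same: the model never gets the chance to emit malformed output.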
Disagree. Extra latency of adding LLMs to a voice pipeline is not that much compared to doing voice via cloud in the first place. Improved accuracy and handling of natural language queries would be worth it relative to the barely-working "assistants" that people only ever use to set timers, and they can't even handle that correctly half the time.
At some point you've got to get from language to action, yes - in my case, I use the LLM as a multi-stage classifier, mapping from a set of high-level areas of capability, to more focused mappings to specific systems and capabilities. So the first layer of classification might say something like "this interaction was about <environmental control>" where <environmental control> is one of a finite set of possible systems. The next layer might say something like "this is about <lighting>", and the next layer may now have enough information to interrogate using a specific enough prompt (which may be generated based on a capability definition, so for example "determine any physical location, an action, and any inputs regarding colour or brightness from the following input" - which can be generated from the possible inputs of the capability you think you're addressing).
Of course this isn't foolproof, and there still needs to be work defining the capabilities of systems, etc. (although these are tasks AI can assist with). But it's promising - "teaching" the system how to do new things is relatively simple, and effectively akin to describing capabilities rather than programming directly.
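A sketch of that layered classification, with the LLM calls replaced by a keyword stub so it runs standalone (the domain and capability names, and the cue lists, are invented examples, not anything from a real system):

```python
# Each layer narrows the space before the next, more specific prompt.
# The cue lists stand in for what would really be LLM classification
# prompts over a finite set of labels.
DOMAINS = {
    "environmental control": ["light", "lamp", "thermostat", "warm"],
    "media": ["music", "play", "song"],
}
CAPABILITIES = {
    "environmental control": {
        "lighting": ["light", "lamp", "bright", "dim"],
        "heating": ["thermostat", "warm", "temperature"],
    },
}

def classify(options, text):
    """Toy stand-in for an LLM prompt: 'which of <options> is this about?'"""
    lowered = text.lower()
    for label, cues in options.items():
        if any(cue in lowered for cue in cues):
            return label
    return None

def route(utterance):
    domain = classify(DOMAINS, utterance)
    if domain is None:
        return None
    capability = classify(CAPABILITIES.get(domain, {}), utterance)
    return domain, capability

print(route("dim the living room lights a little"))
# → ('environmental control', 'lighting')
```

The payoff of the layered approach is that each downstream prompt can be generated from a narrow capability definition, so adding a new capability means adding a description, not new parsing code.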
Check out “LLM tool use”
The basic idea is to instruct the LLM to output some kind of signal in text (often a JSON blob) that describes what it should do, then have a normal program use that JSON to execute some function.
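A minimal sketch of that dispatch step, with a hard-coded blob standing in for real model output (the tool name, arguments, and blob format here are all invented for illustration):

```python
import json

def set_lights(room, brightness):
    """Pretend smart-home call; a real one would hit a device API."""
    return f"lights in {room} set to {brightness}%"

# Registry mapping tool names the model may emit to actual functions.
TOOLS = {"set_lights": set_lights}

# Hypothetical JSON blob emitted by the model in response to
# "tone the living room lights down to half".
llm_output = '{"tool": "set_lights", "args": {"room": "living room", "brightness": 50}}'

call = json.loads(llm_output)
result = TOOLS[call["tool"]](**call["args"])
print(result)  # → lights in living room set to 50%
```

The "normal program" side stays dumb on purpose: parse, look up, call. All the natural-language understanding lives on the model side of the JSON boundary.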
With alexa i can program if/then statements, like basically when i say X then do Y. If something like chatgpt requires the same thing then i don't see the advantage.
Yes, I was thinking about even something as if/then, which could be configured in the UI and manifest to GPT-4 as the usual function call stuff.
The advantage here would be twofold:
1. GPT-4 won't need you to talk a weird command language; it's quite good at understanding regular talk and turning it into structured data. It will have no problem understanding things like "oh flip the lights in the living room and run some music, idk, maybe some Beatles", followed by "nah, too bright, tone it down a little", and reliably converting them into data you could feed to your if/else logic.
2. ChatGPT (the app) has a voice recognition model that, unlike Google Assistant, Siri and Alexa, *does not suck*. It's the first model I've experienced that can convert my casual speech into text with 95%+ accuracy even with lots of ambient noise.
Those are the features the ChatGPT app offers *today*. Right now, if they added a basic bidirectional Tasker integration (user-configurable "function calls" emitting structured data for Tasker, and ability for Tasker to add messages into chat), anyone could quickly DIY something 20x better than Google Assistant.
Google's version could have flawless voice recognition backed with AGI. Within a couple years, it will decay and fail randomly with setting timers.
It's staggering to me that Apple has not improved on the UI for "try again" or "keep trying", whether the fault is with Siri itself, or just network conditions. It seems like (relatively) low-hanging fruit, compared to the challenges of improving the engine. (I don't use any other voice assistants, no idea how well they do here.)
For iOS, there's nothing more frustrating than dictating a long note only to have it come back with try again.
Feels like there needs to be more frequent feedback about what Siri is doing in cases like that instead of treating the whole input as a single unit.
If I want to ask ChatGPT about something I will, and the speech-to-text is a lot faster than typing on my phone. There's no voice incantation needed, rather a button press, but people still raise their eyebrows and make me feel self-conscious. I wish I could subvocalize to it like I remember reading about in the book series Artemis Fowl.
it looks like you're in a phone conversation with somebody
Even though everyone's seen AirPods by now, in those rare occasions when I'm on the phone in public, I feel compelled to have my phone out and vaguely talking at it, so it's clear I'm on a phone call and not a crazy person.
I'm curious if we would see similar usage with the pin, where voice commands in public are always performed with the hand up for the projection screen (it will still prompt looks, but hopefully be clear in context, "oh they're doing some tech thing").
Of course at this price point, it's highly dubious that we'll see anywhere near the ubiquitous market penetration of AirPods (which garner understandable complaints about the price point sub-$200, and that's with a clear value prop).
I don't mind the earphones, but often headsets are entirely impractical - most notably in any sort of weather, wind, etc. A phone can also get rained on, but it's a bit easier to keep safe.
The other reason they are mostly impractical: keeping a charge. *Wired* headsets were great in this regard, but then there's the wire, and now, there's the phone (which may not even support the wire?).
The only reason people don't like talking to computers in public is that it's distinguishable in an awkward way from talking to humans in public. That's not going to be an issue for much longer. ChatGPT voice mode is about 99% of the way there. The only remaining issue is the cadence of the conversation -- you can't interrupt ChatGPT naturally, you have to press a button.
The issue is that your private communications are now audible by the people around you. It’s one thing when it’s to another person and you can whisper and share social context, it’s another when it’s at a good volume and contextless.
These don't seem like real issues to me. They are the exact same issues you have when you are talking to humans. And the way we solve that issue with humans is that we only have conversations around other humans that we are comfortable having. We save sensitive conversations for when we are not in public.
Except there’s no way to have a sensitive conversation on this device that isn’t spoken. With my phone I can. That’s part of my point.
You can talk through your earbud.
Again, if there is no way to distinguish a conversation with an AI with a conversation with a human, then there's no barrier.
I don't see how what you are saying holds any water in that scenario.
That solves nothing.
The issues isn’t “communicating with the device” it’s “communicating around other people”.
I have almost negative interest in having to recite the technical specifics of my web search to my phone on the train to work. I have even less interest in having to listen to the person next to me trying to do the same.
Typing already allows sensitive conversations with computers in public, as long as no one is directly peeking at your screen. When I talk to humans in public, I'm not using them as a utility tool to manage information for me, because computers do a better job. These aren't comparable scenarios.
The only reason people don't like talking to computers in public is that ...
It does not seem right to speak of a single reason. There are probably multiple. So, IMHO it would be more productive to come up with a list and put some weights on the options if you want to dissect this matter.
IMHO one very strong factor / important reason (one that you ignore) is the social context, i.e. the reaction of others in the same physical space as you start talking out loud, seemingly unmotivated.
Humans are social animals, and so the reaction of others to the actions you do tend to be very important to a large fraction of the population. What is acceptable in one context simply isn't in another. Also, the exact tolerances tend to differ with the local culture (here "local" is used in the sense "geographically/physically local")
It's not just about not annoying others here. In this case it's also about a thing as imprecise as "perceived self image". Some people (I'd argue, most people) dislike having the perception that others perceive them to be mentally unstable or rude. Most people need some kind of social acceptance for the actions they do.
One significant trait of some mental instabilities (as well as some drug-induced behavioral changes) is that those affected will spontaneously start talking in public. You will probably know of Tourette syndrome, and of alcoholics rambling about, because these cases often imply quite rude and offensive verbiage and/or loud volume, but they are not the only cases.
People in general are well adept at detecting such anomalous behaviour, as it is part of our instincts trained through evolution. Also, the uncomfortable feelings that observing this type of behaviour produces will lead many to react with a "confront or escape" (a.k.a. "fight or flee") response (a stress signal), which is not beneficial to social interaction in general.
TL;DR: If you speak out in public without a very clear and socially valid reason (speaking to an object is not that) you are not only rude to others, but you also cause them stress... and you will have to face the social stigma of being perceived as insane.
(edit: grammar/typos)
TL;DR: If you speak out in public without a very clear and socially valid reason (speaking to an object is not that) you are not only rude to others, but you also cause them stress... and you will have to face the social stigma of being perceived as insane.
Except... this problem is known to be trivially solvable. After all, the very act of putting a flat rectangle to your ear makes talking out loud in public not just perfectly acceptable, but mundane and not worth paying attention to (subject to social norms dictating where it is or isn't OK to be on the phone).
As for talking to yourself signalling insanity... I'd hope that stupid and probably developmentally retarding idea died long ago, with the "talking to yourself out loud in public" subtrope being dead since wireless earphones got ubiquitous some two decades ago.
The modern reality is, hearing someone "talking to themselves" is normal, and 99.9% of times means they're on a call.
perfectly acceptable
The point is, it's not. As a society we have generally established that it is rude to be speaking out loud on the phone in public - especially on the bus or the train, while waiting for one, in the shop, at a movie, or any number of other places. I genuinely think it would be easier instead to list the places where it would be okay (on a busy street, if you step to one side). Even in these places there is some expectation that you show a little shame to be doing it, as though you didn't want to but had to because the call is important.
This is just hand waving. When people can talk to AI in the same way as they speak to people, then there is no barrier.
I've been thinking about this recently. A colleague is participating in a group call and talking to someone I can't hear or see and that's just background noise to me, I can easily tune that out. Another person tends to vocalize his thought process sometimes and it steals my attention in a hard-to-explain unpleasant way every time.
I agree, BUT, i think it's going to get a *lot* better soon. I.e. i loathe Siri because it felt like there was always some incantation i had to remember - like a very terrible CLI. LLMs though, even if we never get intelligence right, i think can help this area significantly.
Combine that with areas like GPT Vision, (GPT?) Whisper, etc .. it'll start feeling a lot more natural here very soon i suspect.
TBH i'm surprised Apple isn't pushing this much harder. They tout Siri so hard but it's just worthless to me. It feels like Apple *could* make an AI Pin like this, but visibly, from the public side, i have zero idea that they're even working in this space. It feels like they purposefully watched the boat sail away.
edit: Sidenote, Pin + Airpods would be a nice way to interface more quietly too.
The Google Assistant has been years ahead of Siri and Alexa for a good while now. I've been able to give it really loose, sloppy commands, even stuttering or backtracking on my sentences, and it does a competent job of figuring out what I want. In my experience Siri is much more dependent on keywords and certain phrasing, and doesn't integrate as deeply into one's life, because Apple doesn't play Google's game of slurping up all your personal data and all the public data on the internet.
These next gen AI voice assistants are still a solid improvement over Google's current offerings, but they'll feel like a *massive* jump into the future for folks that have been stuck in Apple's ecosystem, and that's probably where the biggest opportunity lies.
I've noticed the latest iOS speech recognition model works with whispered speech pretty well. Not a perfect fix, but it's something.
Same with SwiftKey - it can handle whispered speech to some extent.
Still, I would guess Meta glasses or AirPods should be better at handling such a whispered mode, since the microphones are so much closer. It would be interesting if AirPods had some contact mic that could pick up whispered sound inside your mouth.
Maybe the holy grail is to have something inside your mouth so you don't even have to voice anything - the device would figure out what you want to say from mouth and tongue movement. Smart tooth braces, anyone? :)
This is easy to fix IMHO. Pair a small screen in the future for typing or have a cuff link mic for whispering. You will see accessories like these pop up in the near future.
I still get called 'Dick Tracy' at my local shop for the time I paid with my phone.
Talking to my cuff isn't going to make this better.
I see people talking on speaker phone all the time in public.
Believe it or not, here in Ottawa, Canada, I was just reading a post on Reddit where people complain about those who were talking or doing video calls on the streets. I think this will be a matter of culture, and the barrier will get smaller as soon as the devices are "smarter" - not making you repeat yourself many times, or not understanding what you are asking.
Well, I hated talking to Siri in public because about 70% of the time it did what I want and 30% of the time it made me feel like a fool for even trying. That 30% was what killed it for me after giving it a serious go around the time Apple was rolling out shortcuts.
After watching the presentation, I am now curious about Humane’s thing though, but I’m still going to hold off for a bit because I want to see the failure modes first and I also don’t want to rush out and be one of the first to buy the brand new 3Com Audrey.
Any way we could capture subvocalizations?
How well does whispering do with these things? I've found that I can reliably write sentences and set alerts when holding the mic fairly close on my Pixel 6.
I hate talking to them at home too. The only time I use them is when cooking, with my hands wet or dirty. And it’s still bad, even when it works.
If people can speak more naturally, maybe they'll be okay with it. I am constantly encountering people who are laughing or talking to themselves out in public nowadays. Of course, they're probably on phone calls with Airpods in, but it doesn't seem to be awkward in a way it used to in the 'Bluetooth headset' days.
Don't worry, in the future, they won't need to leave the house nor travel
The can and cannot do problem reminds me of writing Applescript. I just want to call a function not figure out where to sprinkle in random a/the/of modifiers!
Agreed. I have the new Meta Ray-Ban glasses, and have been pleasantly surprised with how soft I can speak since the mics are so close to my mouth, but still don't enjoy doing it in public.
For all the reasons that this might not take off, what a thrill that people are trying something new--and it looks really nicely designed too.
I think this is easy to dismiss at first glance, but I genuinely believe they're trying to think about a new mode of interaction. The idea that "the computer will disappear" is probably accurate in the long term. Except for content delivery (reading, photos, movies), most tasks we achieve via computers and phones do not strictly require a screen. It's probably a good thing if computers did a better job of getting out of the way, and stop so loudly disrupting human interactions.
Whether this will be the solution is unclear; the privacy/creepiness angle is still real with an outwards-facing camera. Latency and battery life limitations might be too significant. The cost will be a non-starter for many (it is for me).
But I'm still impressed because there was a vision here. The conversational interface has never worked before for many reasons, but that does not mean it *cannot* work in principle, or that the ideal implementation would not be spellbinding. I'm glad they're trying. Also, the laser display is neat!
most tasks we achieve via computers and phones do not strictly require a screen.
X (doubt). There are unfortunately only 5 senses through which our brains can interact with the outside world, and visual ones are the most information dense and the easiest to utilize. The screen isn't going away anytime soon.
Projectors to me are the same as screens - they've been around just as long too.
Though I do look forward to direct computer-brain interface, like introducing a 6th sense.
The 5 senses thing is long-disproven rubbish. Humans have *hundreds* of senses.
Would love to hear what the rest of those are, please be specific.
Well, for instance, what is commonly referred to as "touch" is actually a whole bundle of senses. There's the actual sensation of pressure, but also texture, temperature, surface finish, the physical position of your various body parts, your sense of balance, etc etc.
Okay, but unless you're suggesting a computer interface based on proprioception, I'm not sure that that's relevant to the topic at hand.
I too would be interested to see an enumerated list of over 100 senses.
Isn't that what all the various VR glove type controllers are?
Can't help but notice that you again didn't answer the question. I will third the request.
Rather pushy demand for well after the East Coast has gone to sleep.
not the parent but I posted an honest attempt at such a list on the sister comment if you're interested :)
Ok, 100 senses could be too many for you to type. Maybe could you list 20 human senses?
I'm no biology expert but had to study some of this for my robotics degree not so long ago.
"Sight" split into rods for brightness sensitivity, and cones, each of which is dedicated to one out of red, green, and blue. Green covers a wider gamut of color than the others because there is a lot of green in nature. These sensors are fully independent of each other for the most part, although there is minor overlap between cones, which is what we call other colors (yellow etc)
"Taste" Again split into different specialised papillae sensors. I don't remember so well, but it's something like foliate for sour sensing, fungiform for salty, and vallate for bitter/poison. There is also sweet, I don't remember the name, and some argue for umami
"Touch" There are an ungodly number of very distinct senses that go into touch. From more abstract ones like pain, heat/cold, moisture (not evenly distributed around the body; for example you have to touch things to your lips to distinguish cold from wet), proprioception for joints (arguably an independent sense for each joint, or at least each "kind" of joint, because the biological mechanism is different for ball joints vs saddle joints etc, as well as specialised proprioception for eyeballs, tongue etc)
Then in actual touch touch there is Ruffini corpuscles sensing skin stretching and slippage of objects past the skin
Merkel discs, which senses pressure applied to the skin and low frequency vibration
Meissner's corpuscles, which sense vibrations in middle range. They are very sensitive and allow very slight sensing of tiny impulses such as picking up an insect's wing
Pacinian corpuscle sense extremely fast vibration which among other things allow the distinction between "rough" and "smooth" surfaces (by mechanical movement causing vibration)
There are also free nerve endings sensing stuff like itching and bruising.
Hair follicles also sense movement and stretching of the hair they are attached to, which provides more touch data. Incidentally this mechanism is also used for balance and hearing via really complicated interactions of tiny hairs in the ear.
"Smell" Smell is fiendishly complex, it actually is more akin to the way antibodies in the body are made in the sense it consists of thousands (and millions) of specialised sensors made to "fit" and attach to individual compounds, so there are almost limitless individual senses of smell
There is also a whole lot of internal sensor data for things like breathing (you know when you are short of breath), digestion you know when you are full, or when you are craving one of a number of things sweet salty etc), bladder control.
This is mostly off the top of my head, and I'm certain I'm misremembering some of the subtleties and missing a whole bunch more senses, both obscure ones and ones immediately recognisable to any owner of a human body
No, but points for a solid attempt. Senses are *input* (to the body), not *output*. Glove controllers are just output via movement, just like keyboards and touchscreens.
True, part of what makes them cool is that your proprioception more or less agrees with the virtual hand that you see in your headset, but that's just window dressing. The computer has no way to control that.
Not sure there are hundreds, but just to add one example beyond “the five”: Proprioception [1]
Projectors really love flat, non-moving surfaces. Will be interesting to see how they've coped with a hand wiggling around or in motion.
I highly doubt this thing is even usable outdoors. You would need pretty insane brightness levels for this to work in the sunlight. Companies have been trying to make projectors with touch input for years, and nobody has gotten close to anything resembling a consumer product. I highly doubt they achieved it here.
I agree, even though I'll reserve judgement until trying it. But it remains that limitations in power are unavoidable, and projecting a laser image onto a hand in daylight is going to use an awful lot of juice, particularly given the projector is so tiny. I've no idea how this can be done so it's functional, never mind "insanely great". Same goes for their claim that the speakers inside this tiny device are worthy of getting sound to the ears while skateboarding outdoors. You can have all the Head Related Transfer Functions in the world, but again, you need speakers and amplifiers on the order of several watts to get the sound up to the ears. My iPhone Pro Max sounds great and loud in a quiet room, but take it onto the street to play music and it's barely audible. Also not sure how the device will know what kind of HRTF to use, given its placement is going to vary so much.
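For anyone curious what an HRTF actually encodes: it captures the direction-dependent filtering of your head and ears. Here's a toy sketch in pure Python using only the two crudest cues, interaural time and level differences — the sample rate, head radius, and "far ear" gain are all illustrative assumptions, and this has nothing to do with whatever Humane actually ships:

```python
import math

FS = 16_000  # sample rate in Hz (illustrative assumption)

def toy_binaural(mono, azimuth_deg, fs=FS, head_radius_m=0.0875):
    """Crude stand-in for an HRTF: apply an interaural time delay (ITD)
    and a level difference (ILD) so a mono signal seems to arrive from
    azimuth_deg (positive = source to the right). Real HRTFs are measured
    per-ear filter sets, far richer than this."""
    theta = math.radians(azimuth_deg)
    # Woodworth's ITD approximation for a spherical head
    itd_s = head_radius_m * (abs(theta) + math.sin(abs(theta))) / 343.0
    d = round(itd_s * fs)      # delay in whole samples
    far_gain = 0.6             # crude head-shadow attenuation (made up)
    if d > 0:
        far = [0.0] * d + [far_gain * s for s in mono[:len(mono) - d]]
    else:
        far = [far_gain * s for s in mono]
    near = list(mono)
    # The ear nearer the source hears the signal first and louder
    return (far, near) if azimuth_deg >= 0 else (near, far)  # (left, right)
```

A real implementation convolves the signal with measured per-ear impulse responses, which is exactly why the "regardless of placement" claim is hard: the right filter set depends on where the device sits relative to your ears.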
First, I’m really excited people are trying new things, but I won’t be buying this just based on the demo.
The conversational interface has never worked before for many reasons, but that does not mean it cannot work in principle… I'm glad they're trying. Also, the laser display is neat!
So I did a lot of work over the years to research voice UI/UX and I’m very skeptical about this, even with the LLM stuff. I think an LLM was missing from the Siri/alexa era to transform it from “audio cli” to “chat interface” but there’s a few reasons besides that it didn’t catch on.
The information density and linearity of chat, voice especially, is a big problem.
When you look at a screen, your eyes can move in 2 dimensions. You can have sidebars, you can have text fields organized in paragraphs and buttons and bars etc. Not so with chatting - when you add linearity (you can only listen to or read one thing at a time, conversation can only present one list at a time) it becomes really slow to navigate any sort of decision or menu trees. Mobile-first design has simplified this of course, but it's not enough. Reading via TTS becomes even slower when trying to find the info you care about. It's found a place for simple controls (smarthome, media, timers, etc) and simple information retrieval (weather, announce doorbell, read last text). Then there's the obvious problem of talking out loud in public, false response recognition etc, which are necessary evils of a voice UI.
I think the best hope for a voice device like this is to (as they've done) focus on simple experiences like "what'd I miss recently" and hope an AI can do a good enough job.
The laser display might help with presenting a full menu at once (media controls being an easy example), but it probably will end up being a pain to use (eg like a worse smartwatch).
Honestly though, my biggest hesitation (which could end up great) is the “pin” design. It’s novel, especially with the projector, but how heavy is it and how will that impact the comfort of my clothes? What about when wearing a jacket or scarf? Will this flop around while walking? Etc.
IMO, with LLMs we won't really *need* information density except for certain classes of people.
Even now - clicking through some insurance company's website hierarchy to find something out is *insanely* painful.
But even for researching things that we should probably care about enough to do it ourselves, correlating different sources of information or working through abstract/ambiguous problems... the vast majority of ordinary people will 100% take the easy way out and let LLMs do most of the thinking for them. Even with free GPT-3, people are unflinchingly having LLMs solve problems they don't want to think about too deeply. What they pay in occasional inaccuracy is more than offset by convenience.
IMO, with LLMs we won't really need information density except for certain classes of people.
Maybe, but I don’t know if that day is here yet. I think “most people” do actually consume information. Like reading an insurance company’s website is pretty rare compared to things like using the Amazon App. Like it’d be hard to consume a list of 5+ push notifications via voice if you had to listen to them 1 by 1 instead of skimming them in a list next to their icons.
Even simple things like scrolling through a list of songs become painful. I have like 10k songs in my (streaming) library. Sometimes I randomly scroll through it to find old music. That sounds impossible on voice. I'd be stuck with "shuffle" mode.
Being able to summarize and search text conversations via voice queries from their demo would be nice, but today that’s a task that you need a screen for.
The demo video shows the man buying a book online via voice after holding it up to the camera. How often is that the online shopping experience? I can’t imagine shopping without a screen 95% of the time.
we won't really need information density
we may not need it but we certainly prefer it. People went completely voluntarily from voice calling to texting, and within texting to ever terser forms, to the point where an entire website was built around a short character limit.
Except for people with disabilities, I have not really seen a single case where that tendency towards compactness is reversed in communication.
There is also a lack of serendipity or explorability with voice: how do you know what's possible? There is a reason a GUI menu is called a menu. It not only gives you access to multiple options but also, at a glance, an overview of what options are there, like a restaurant menu.
*Discoverability* is the term; e.g., "What Can I Say? Effects of Discoverability in VUIs on Task Performance and User Experience" https://dl.acm.org/doi/10.1145/3405755.3406119
Will this flop around while walking?
If a science fiction author was writing it, the need for stiffer fabrics to support chest cameras would synergize with a neo-Victorianism in generation alpha. (Formal button-up shirts and higher necklines for enforced modesty)
Linear conversation is a big problem for anything beyond simple, casual usage. It is the reason that YouTube is a terrible research platform. Is the information you want inside that 3-hour video? Possibly, but with text I can search an article for content or skim sections to determine if it's worth a deeper read.
Let's not forget the value of non-linear input. Good search terms are often constructed rather than spilled forth. Sometimes I enter search terms, read them, and realize they're likely to return unrelated results and need modifying. By the time I realize this while speaking to an AI, it's already spitting out the wrong information.
This leads to a need for altered interfaces that allow these scenarios to be accommodated. This is v1.0. Let's see where it goes.
It'll flop everywhere, not just while walking. Boom boom.
But yeah I've been thinking that too. "Oh, put my coat on - better spend 30 seconds messing around with my pin" [...] "Ahhh back in the office. There goes another thirty seconds moving the pin so it can film me looking at a screen for four hours"
And yeah, I feel like the weight would definitely pull my jumper or t-shirt out of shape, and make things like my collar/neckline look out of whack. Maybe they'll bring out a range of clothes suitable for it, or suggest you wear a coat indoors like the woman in the video is doing.
Whether this will be the solution is unclear; the privacy/creepiness angle is still real with an outwards-facing camera.
I don't think you're wrong, but it's funny that we aren't as concerned about everyone walking around with outwards-facing *phone* cameras.
tbf those are usually in a pocket or facing down, with filming being an explicit and purposeful action
Plenty of people are walking around, or sitting in a public place with their phone cameras facing out to the world.
I think what the parent comment was saying is that when being held in a normal manner, the phone is facing about 45 degrees below the horizon, so it can't see much except people's legs. To film people's faces and such, you'd have to tilt the phone up much higher than you would if you were just writing a text message / email or browsing the web. If you try writing a text on a phone that's angled up to the horizon like that, it's harder to type and harder to read the screen.
It's funny, I see people cover up the webcam on their laptops all the time, but not their phones. They forget that there's a camera on both sides of the phone.
Webcams in laptops are shitty cameras, and for most people, they're useless anyway (even in the post-pandemic era, hardly anyone does conference calls, video or otherwise). Meanwhile, the "selfie camera" is like literally *the main purpose* of the phone for a large chunk of the population.
Or microphones being present absolutely everywhere.
I myself never felt like taping my camera, I feel like if someone pwned my system I would be much more worried about the leaked audio.
Well said.
Yeah, I expect that this will die a horrible death in the market, but it's definitely interesting with its Star Trek vibe. :)
The next generation of devices that incorporate some of these features might be more successful.
I imagine if this company is successful, it will become quite the enterprise.
This doesn't feel like the right product for a lot of reasons. (Wait...do I have to pin it to the outside of my coat when I put that on? What's the battery life outside a coat in winter? Will it catch on my seatbelt?) Lots of practical problems for a lot of people. Still, LOTS of interesting ideas here.
It's probably a good thing if computers did a better job of getting out of the way, and stop so loudly disrupting human interactions.
And that is not this. Talking out loud every few moments with verbal commands to a device is way more annoying than someone looking at and typing on a phone.
That said, I agree with you at a glance it's neat. I think in reality though it's a poor idea given how often people need to give a verbal command.
The talking out loud I agree is problematic. The bluetooth functionality and increasingly good audio pass-through give me hope for a simple earphone in one ear, and eventually... this: https://x.com/ruohanzhang76/status/1720525179028406492
Also bullish on hand gesture control. Maybe most stuff will eventually become jutsu level fancy hand movements lol. What a time to be alive. It is easy to remain grateful in this age of rapid progress.
The idea that "the computer will disappear" is probably accurate in the long term.
Why though? A computer requires attention, which pretty much rules out doing something else while using it, except perhaps when passively listening to a podcast (which doesn't really qualify as computer use). Even though we may see new mediums, the mode of interaction will remain similar to that of a book.
I agree. This looks like a gadget, which means I probably won't rush to buy one, but I'm glad people are trying to push the envelope.
The problem is the voice-based approach: it won't work reliably in loud environments, it won't be usable in a doctor's waiting room, libraries and other quiet environments, and some people simply don't like voice UIs.
If you want the computer to disappear, why not a better smartwatch? Or glasses, this time without the sci-fi gadget look? Both could support the exact same featureset but with a screen.
This strikes me as a less-functional Apple Watch that you wear on your shirt instead of your wrist.
(Yes, Siri is not great today, but that will change very quickly with Apple working hard on their own LLMs.)
Cool project, but not something I imagine most people will want. Like Google Glass.
They even did the cringey stunt Google Glass tried and featured it on the runway during Fashion Week, as if that instantly makes something fashionable:
https://images.fastcompany.net/image/upload/w_1200,c_limit,q...
It actually reminds me a lot of a much older product: https://www.youtube.com/watch?v=vj24kNJEQJs (bonyt noticed this first)
Indeed. It just screams "comm badge", which makes the product idea obvious, *and* makes me surprised they somehow managed to make zero references to Star Trek in the entire godawfully long landing page.
Kinda prefer a tricorder…..
At some point, someone produced an actually working TNG comm badge as a Bluetooth phone accessory; apparently it's still for sale: https://shop.startrek.com/products/star-trek-the-next-genera...
Though from the reviews I've seen (and as with so many Bluetooth devices), it's unusably terrible, and the battery only lasts a few hours.
Yeah, just realized this is an Apple Watch competitor — but one that requires an odd new paradigm of interactivity that seems much worse than that of the Watch. Lifting your wrist up and having a small screen you can look at and talk to seems so intuitive in a way that the Humane widget doesn't.
Think of the simple interaction of wanting to issue a voice command in public. Watch: Bring it close to your mouth, maybe cover both with the other hand to be even less audible to others. Humane: Smoosh your shirt up to your face?
(Also: I live in one of the sunniest places on earth — I simply don't trust that I'll be able to see light projections onto my hand when I'm outside.)
Anyway. All in favor of exploration and new ideas. Very willing to be proven wrong on the form factor. But I also feel like we've kind of solved the wearable computing interface problem — a couple hundred years ago, turns out — and so it's going to take a lot of convincing.
Lav mics aren't half bad for picking up speech, and maybe this can be improved by some beamforming?
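The basic idea behind (delay-and-sum) beamforming is simple enough to sketch in a few lines. This is a toy pure-Python version with a made-up sample rate and mic spacing, not anything to do with the device's actual DSP:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
FS = 16_000              # sample rate in Hz (made up for the sketch)

def delay_samples(mic_spacing_m, angle_deg, fs=FS):
    """Whole-sample delay between two mics for sound arriving at
    angle_deg (0 = broadside, 90 = endfire)."""
    tau = mic_spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND
    return round(tau * fs)

def delay_and_sum(mic_a, mic_b, d):
    """mic_b is assumed to lag mic_a by d samples; re-align and average,
    reinforcing sound from the steered direction while other directions
    partially cancel."""
    out = []
    for i in range(len(mic_a)):
        j = i + d
        b = mic_b[j] if 0 <= j < len(mic_b) else 0.0
        out.append(0.5 * (mic_a[i] + b))
    return out
```

With only two mics a couple of centimetres apart the gain is modest, but it's a real effect, and shipping earbuds and smart speakers do use variants of this.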
He’s talking about how to issue a command that you don’t want the whole world to know about, so you bring up to mouth and whisper.
Watches & phones don't have the optical & audio "visibility" of the Humane AI Pin -- which, incidentally, looks an awful lot like the Axon body-worn cameras for police.
If you really want to Always Be Surveilling, wouldn't a better solution be a tiny cam/mic accessory that pairs with your phone/watch? You could use the same magnetic battery idea, but in a much smaller form factor.
This thing (the Humane AI Pin) is aiming to be a phone *replacement*, which seems like a really steep challenge given its limitations--how could it replace any of the things I use my phone for on the subway to work?
You have to hit it to activate it, so it’s not always surveilling.
It’s a great point that if this modality becomes popular, then it should just be an accessory on top of iPhone or iWatch.
It's kinda funny that the latest version of a "less space than a Nomad" comment now holds up Apple specifically as the product of comparison.
Also more expensive. I pay $10/month for a dedicated watch, and I can still make 3rd party apps for it; I can't do that with Humane as far as I can tell, and I don't really want to put it on my shirt like this.
The only real differentiator is maybe the real time translation, but that's not a frequent use case and I think I can take my phone out for that with Google Translate as needed.
It's too bad - I love new hardware, but this isn't it for me, at least with that price and functionality.
Haha in the demo he asks "when is the next solar eclipse and where is the best place to see it?" - The AI responds correctly that it's on April the 8th 2024, but then clearly hallucinates like crazy and says "the best places to see it are in Exmouth, Australia and East Timor" which is totally incorrect - this eclipse will be visible only in North America, and invisible in Australia and East Timor. Good job he didn't ask it to book flights to Australia on the 7th of April.
You'd think your tech demo would check to see if your AI was hallucinating!
Yep, the eclipse answer was incorrect (1).
You’d think they’d have learned their lesson after Google Bard’s hallucinated demo!
this is an aside but I really don't think you need to use a footnote if your comment is two sentences and a link
I was also gonna link to the Bard article, but was lazy.
I appreciate it and do it myself. If HN supported hyperlinks of our words the way html does we could just do that, but footnotes are the next best thing. Big unreadable links in the middle of what you're saying, even just one sentence, muddy the message.
It also says that amount of almonds has 15g of protein, which would actually be like 50 almonds according to a few different online nutrition sites.
Can't believe they left this stuff in.
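The back-of-the-envelope check, assuming the commonly cited nutrition-database figure of roughly 6 g of protein per 23-almond (28 g) serving — actual database values vary a bit:

```python
# Assumption: ~6 g protein per 23 almonds (a typical nutrition-database figure)
protein_per_almond = 6 / 23                  # grams per almond
almonds_for_15g = 15 / protein_per_almond    # ≈ 57.5 almonds
```

So 15 g of protein is on the order of 55-60 almonds, consistent with the "like 50 almonds" estimate above and nowhere near the handful shown in the demo.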
Gotta at least make it seem good in the commercial, this ended up being the opposite of a sizzle reel
I think people are very eager to believe AI is useful in ways it often isn’t. Kind of like the crypto hype. People were willfully ignorant to how little sense it made in so many contexts. The common thread here is “yeah, but money!”
I’m not an AI detractor. I use it and really like it. I just don’t like it for information like this. Anything where the response needs to be verified yet is very brief makes no sense to bounce off of an AI, in my opinion.
Kinda shows you what their target audience is.
Google received significant backlash for this when demoing Bard:https://www.bbc.com/news/business-64576225
Because of stuff like this I don't get how you can trust it not to hallucinate when it summarizes your inbox.
This thing better not come with a speaker. Public speakerphone users are among the most obnoxious pests.
It even comes with a "Personic Speaker". From the website: "Ai Pin’s speaker system uses a Head Related Transfer Function (HRTF) to create a personally optimized bubble of sound, at a fixed distance, regardless of how soft or loud."
Unless physics has changed, I think headphones are the only way to do this.
Directional speakers exist and are very effective actually. Whether this device is using one, I have no idea.
You can't create a "bubble" of sound unless that's just a marketing term for quiet and facing only the user.
Well I don't know about a bubble of sound, but there are "sound lasers"
go about 2:14 into that video. also outside of this argument, that is really cool either way. thanks for sending!
Haha what do you mean, that's the main feature they are advertising here. I agree that the advent of portable high-output, low-power audio amplifiers is chiefly responsible for the downfall of human society so I, too, hate to see it.
This comment thread will go down in history along with the famous HN Dropbox thread.
This thing is incredible and will eventually crush the iPhone. Solves iPhone addiction while retaining the utility of an iPhone? Solid gold.
along with the famous HN Dropbox thread.
Most people saw the utility and the use cases of Dropbox even when it launched.
What's the utility and use case of this? What problem does it solve?
IMO "Solves iPhone addiction" is more or less a rephrasing of "people will quickly get bored of this".
It's just a smartphone, except you can't run third-party software, can't directly interface with it, and can't connect it to other machines. And instead of holding an N-million pixel, M-million-colour, extremely high-contrast display directly in your hand, you have to indirectly project (meaning extremely LOW contrast) a single-colour display onto your hand from a projector that's shaking around, clipped to your clothes.
The only single hypothetical upside I can see to this tech is that it might lower the two-second delay in looking at my phone caused by putting my hand in my pocket before raising my hand, but you could say that that goes against the goal of solving phone addiction.
Maybe in the long term view - people correctly identified that Dropbox had no differentiator (to quote Steve, "this is a feature, not a product").
Apple watch? Cellular mode allows this, has siri built in, can handle calling/messaging/etc. People don't want to replace their phones though.
The thing is, most people don't actually *want* to solve their phone addiction even if they say they do.
In reality, they want to read news while waiting at a doctor's office, play games while they take the subway, and see Instagram updates from friends throughout the day.
And if you already want a less capable device, it's called an Apple Watch, but it comes with a little screen that is way more useful than laser projection, and will soon surely have a powerful LLM it can access. (And paired with AirPods it does a much better job preserving your audio privacy.)
So it's hard to see how this is going to succeed, when Apple can just copy the good part (LLM) as part of the Watch.
iPhone addiction
This is not a thing. "Screens" aren't 'separating us from one another', or 'distracting us'; that's fuzzy verbalistic nonsense, made up by marketers who want to sell you non-phones, and bloviating op-ed columnists who don't have a clue. It's so ridiculous, that everyone has seen the memes debunking it.[0][1]
True invasiveness is expressed as: "how long does it take me to do this thing I want to do?" In other words, you need a human-computer interface that reduces friction as close to zero as possible. The phone won because it's the best at that. The "pin" is orders of magnitude worse, so it won't catch on.
Surprised to see all the negative sentiment here. Besides the true point that voice-only interaction is a bad idea for public spaces, I think this is a very cool device with lots of potential.
I'm especially excited about the fact that they found a really low-barrier user interface for using CV and AR-type functionality -- like, without having to put on silly glasses, and without having to use a second device with a screen in addition to the pin.
Come on, this is cool! Or would you have designed (and built!) a better device?
Could you use this effectively while moving? Is the device going to jostle about? Can you keep your hand that steady (in relation to the projection)? Can you do this while standing, slouching in a chair, walking, and/or riding in a vehicle? Could you use it discreetly, like in a meeting or waiting room?
Probably yes to some of these questions and no to some others.
But why only focus on some potential shortcomings instead of appreciating the positive aspects?
Btw, there is no tech device out there for which I couldn't come up with a list of critical questions like yours.
What are the positive aspects? I don't see any, this looks terrible to me, a waste of money for both investors and consumers
I'm limiting my points to the physical usage concerns; there are more concerns if we broaden the context, and many other commenters have pointed them out. This is not even the full list of physical concerns. What people wear will have a big impact too.
great, so why troll this thread with your redundant viewpoint?
I think a lot of us are coming down from the high of buying expensive surveillance devices.
It was looking very promising until I read "A subscription is required to use Ai Pin." right at the bottom of the page. Oh well.
Well it does come with a phone line and data coverage through T-Mobile, so like a cell phone really?
Can I use my own el-cheapo SIM with it?
True. It does make sense, however - you can't expect the AI model to reside in the wearable device.
It's seemingly OpenAI API
https://www.theverge.com/2023/11/9/23953901/humane-ai-pin-la...
SmartBrooch is $699 + a $24-a-month subscription commitment.
$700 for a device I can't watch videos on, locked to one provider, with only first-party apps. An immediate flop, there is no doubt about it.
2007 flashbacks.
The iPhone offered new possibilities, what does this offer which we don't have in a better form currently? AI powered voice control? Always on camera?
I don’t disagree with you at all, but your assessment matches the iPhone’s general tech world reception precisely (if you swap out “video” with “Flash”), so much so that I thought you might have tailored it that way.
$700 buys a full PC I can use to code, browse the web, and run millions of desktop apps. It comes with a screen too.
This thing has no business being $700.
Don't forget about the $24/mo subscription :)
New categories of things to wear are really hard to get people to adapt without some instantly-compelling use case. I'm having trouble coming up with a mildly interesting use case for this.
Get it to the point where it can constantly observe my surroundings and make the sort of suggestions a partner might ("if we stop at the hardware store first, we can get those fresh bagels Bob likes from the place that closes early") and maybe there's something to talk about.
Real time translation and direction finding seem like the obvious use cases for an always on device attached to your chest, plus the standard Siri questions without having to grab a phone. Those are more niche than a phone, but big niches
You just might have to deal with reactions to always on cameras and the annoyance of being admonished by LapelClippy on a regular basis
We know how the glasshole situation went :)
Siri question type things I can use my watch for. Real time translation is neat, but it has to be a big and frequent use case to differentiate from just taking my phone out and using Google Translate (which I believe I can also do on my Apple Watch).
I think audio translation when it's quite close to real time will feel more natural without the phone in the hand than with it especially with a well functioning gesture interface, but yeah, there's nothing that stops a phone from doing it.
Also agree this is more a smartwatch competitor than a phone competitor. The fact that smartwatches sell at all is proof there are (much smaller) markets for wearable devices that do stuff that could be done at least as well on a phone if you got it out of your pocket. The argument for separating the powerful internet-connected functionality from the watch and putting it on some other wearable on your chest is that I actually like the wrist-mounted device that tracks my activity and sleep not needing charging every day...
I'd rather not. From "here is an actually interesting suggestion based on your habits and what i know about the people you interact with" to "Shop at walmart today! Don't forget about their crazy deal on flat screen TVs!" Is about half a step, and they're already selling this with a subscription service.
Asking the price of vintage photos of a solar eclipse isn’t a killer use case for you?
I know this is the direct source, but people really need to go read the NY Times piece on this - https://www.nytimes.com/2023/11/09/technology/silicon-valley...
A Buddhist monk named Brother Spirit led them to Humane. Mr. Chaudhri and Ms. Bongiorno had developed concepts for two A.I. products: a women's health device and the pin. Brother Spirit, whom they met through their acupuncturist, recommended that they share the ideas with his friend, Marc Benioff, the founder of Salesforce.
Sitting beneath a palm tree on a cliff above the ocean at Mr. Benioff’s Hawaiian home in 2018, they explained both devices. “This one,” Mr. Benioff said, pointing at the Ai Pin, as dolphins breached the surf below, “is huge.”
“It’s going to be a massive company,” he added.
This product was also named a "best invention of 2023" by TIME magazine before it was even released. Entirely by coincidence, Marc Benioff happens to own TIME magazine.
HBO's Silicon Valley may be over, but the real world Silicon Valley is still going stronger than ever.
Wow, this device looks extremely awkward to use. Imagine having to aim this at your hand.
Maybe it's tracking your hand and aiming the projection? Can't really say much based on the landing page; I doubt most of the interactions are as smooth as they are pictured. But it is refreshing to see a new take on mobile devices.
Peak Silicon Valley is when a Buddhist monk arranges a business meeting that leads to a startup investment.
Peak Silicon Valley is when the Buddhist monk is rewarded by someone acquiring his mantras-as-a-service subscription webapp
DOA.
- too stealable, by people who will not care that a subscription is needed.
- the act of theft will happen violently and close up, not fun.
- it's an easy smallish act of violence, which means the on-ramp to violence is also easy. Not something most people want to invite into their lives.
- "they" (the Committee) will say phones can also be grabbed. But the equation here is different. With a phone there's no hand on your chest, no tearing of clothing, and for a phone thieves know you will try harder to get it back. With this, after the violent taking, the shock value and the relative disposability of the device will stop most from chasing the thief. This will be known subconsciously if not outright, so the "phones are also easy to grab" comparison does not apply.
- the features are already provided by something most everyone has, a smartphone.
- the level of obnoxiousness of the status signaling is off the charts.
- association with AI is not a positive for many people and is stigmatizing (whether the stigma is correct or not).
- built in camera and recording functionality or even the perceived possibility of recording is also stigmatizing and highly antisocial.
- all the voice UX inhibition concerns others have been mentioning.
- [edit, how did I leave this out, but it's just too obvious]: subscription. We. Don't. Want. More. Subscriptions.
On the positive side, the size is nice, it looks good, and reading stuff off your hand is a cool idea, although it will look pretty goofy. But no.
too stealable, by people who will not care that a subscription is needed.
I even doubt there will be much theft of these. People will simply forget these, and stop using them.
So they showed one clipped to a jacket. Don't they take the jacket off? What's the intended use case? That you take it off and re-attach it to various clothing as you dress/undress? It also looks quite heavy, so most T-shirts and other light items of clothing are not really suitable for this.
Yep that’s a good point!
And they really doubled down on that decision by also embedding it (“pin”) in the name, if not the identity, of the product.
They could have coined a word (I’m not claiming this is not cringe) “pindant” as in a dual use pin-or-pendant item, and bought more flexibility, for example. Edit: somebody already coined that word, see the dot com (sfw), lol.
I agree that this could be doa. But if violent theft is that big of a problem where you live, you should try to fix that. Someone could snatch your phone out of your hands too!
It truly is disturbing that some people have this constantly on their minds while (presumably) living in developed countries where a $600 meme-device is a thing, and that's why they wouldn't get it.
The general public is being misled on LLMs/AI, and it's dangerous. These are nondeterministic systems. We CANNOT know what they are going to output.
A product like this makes it very difficult to verify what it is telling you.
As others have pointed out, their own product launch video has several inaccuracies in it.
Isn't this just the Halting problem? No software has the property of being "determinate".
I dunno man. I can guarantee that this program will either not run or output "Hello world" on your screen:
    #include <iostream>

    int main() {
        std::cout << "Hello world";
        return 0;
    }
Seems like you can determine for some programs what they will do. (Termination != determinism.)
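The distinction this sub-thread is circling can be sketched with a toy decoder (purely illustrative; not how any real model or vendor API actually works): greedy decoding is a deterministic function of its input, while sampled decoding depends on random state.

```python
import math
import random

def softmax(logits):
    # numerically stable softmax over a list of scores
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy(logits):
    # deterministic: the same logits always yield the same token index
    return max(range(len(logits)), key=lambda i: logits[i])

def sample(logits, rng):
    # stochastic: the chosen token depends on the rng state
    probs = softmax(logits)
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

Here `greedy(logits)` is as predictable as the C++ program above, while `sample(logits, random.Random())` is not — and sampling is closer to how these assistants are typically run.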
But, but, AI is just a tool!!! No, it's not. If something else is making decisions for you, you're the tool.
I counted 14 almonds in the video. It said there were 15 grams of protein in the almonds. Almonds have about a quarter gram of protein each.
Also, you can't view the total eclipse in either location it stated.
That's a big part of the issue to me as well, it's not going to be reliable. The dragonfruit example is correct, but I can't imagine it being accurate when it's not "single whole objects of average size that are in the USDA nutrition database". Pretty scary if you try to rely on it for something like translation.
The dragonfruit example is wrong.
How much sugar is in this?
A whole dragonfruit contains 7.31 grams of sugar.
100 grams of dragonfruit contains 9.75 grams of sugar.
https://fdc.nal.usda.gov/fdc-app.html#/food-details/2344729/...
A whole dragonfruit weighs closer to 350-600 grams.
https://www.seedsdelmundo.com/blog/average-dragon-fruit-weig...
The link you posted has "1 fruit" as an option listed as 75g. Which does seem light, but it's at least an actual source for the number it used instead of a hallucination like the eclipse thing.
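For what it's worth, the numbers in this sub-thread can be reconciled with some quick arithmetic (the weights used here are the figures cited above, not measurements of mine): the 7.31 g claim implicitly assumes the 75 g "1 fruit" serving, not a full-size fruit.

```python
sugar_claimed_g = 7.31   # what the demo called "a whole dragonfruit"
sugar_per_100g = 9.75    # USDA figure cited above

# weight the 7.31 g claim implicitly assumes
implied_weight_g = sugar_claimed_g / sugar_per_100g * 100   # ~75 g

# sugar for a whole fruit at the 350-600 g range cited above
sugar_at_350g = 350 * sugar_per_100g / 100   # ~34 g
sugar_at_600g = 600 * sugar_per_100g / 100   # ~58.5 g
```

So the answer is internally consistent with the database's 75 g serving entry, just not with what most people would call "a whole dragonfruit".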
Yeah that stood out to me, would need ~60 almonds to have that much protein.
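Spelling out the back-of-the-envelope arithmetic from the two comments above (the quarter-gram-per-almond figure is the rough one cited, not a precise value):

```python
protein_per_almond_g = 0.25   # rough figure cited above
almonds_in_video = 14
claimed_protein_g = 15

actual_protein_g = almonds_in_video * protein_per_almond_g   # 3.5 g
almonds_needed = claimed_protein_g / protein_per_almond_g    # 60 almonds
```

The device's answer is off by a factor of about four: roughly 3.5 g of protein shown, with around 60 almonds needed to justify the claimed 15 g.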
A screen would be useful for showing the details of how it misestimated the almond count, and let you adjust them.
This will absolutely be a commercial failure. I don't harbor any ill will to anybody in the company, and I wish them the best, but a voice interface in public is a complete non-starter. What is the use-case for this device for which a comparably-priced smartphone isn't better...much better? The only real distinguishing feature this thing has over a smartphone is projection, and even that doesn't seem too difficult to tack onto future models.
Once the AI Pin replies with a "sorry, couldn't get that" a few times, people will give up on it and reach for their phones. I could see it finding some success in the accessibility market, but outside of niche applications, I don't think this thing sticks around.
"...a voice interface in public is a complete non-starter."
As I sit in a crowded coffee shop at a shared table, with three people who are on meetings and talking away about sensitive things. Right next to one another! Along the wall there are two people chatting on their phones to family, one of them on speaker.
People don't care, they don't have manners.
That’s actually a great test - can it be used effectively when there’s ambient noise?
Once in a while I catch up with friends in a coffee shop and I usually get some death stares from those remote laptop workers.
A coffee shop is NOT supposed to be quiet. If you want quietness go to a library.
I think what you want is a public library. Or some headphones.
The "laser ink display" looks a bit like the totally bunk display tech of the Cicret Bracelet "product" that VFX videomaker Captain Disillusion did a comprehensive takedown of a couple of years ago: https://www.youtube.com/watch?v=KbgvSi35n6o
While it looks like there are a few videos of apparent actual demos, I haven't seen one yet where the device (and more importantly, the recording camera's settings) are controlled by an impartial reviewer, and I'm extremely sceptical that this is usable in the real world. There's a demo by the founder where one of the inputs is to tilt your palm up, and even in the demo the projection struggles to compete with the indoor lights, never mind the sun: https://youtu.be/CwSeUV3RaIA?t=205
The pitch of this seems to be "no more distracting screens, and no need to download and manage lots of apps and services". Except there is a (very poor) screen, it's your hand. And you're limited to just one service and set of apps, the one that comes with the device.
It's all well and good saying that the AI can do everything you want, but the real world (sadly) has copyright restrictions and content licensing agreements which an out-of-the-box service by a legit company will have to abide by. If the song I want to listen to isn't available on whatever music service this product is partnered with, could I transfer music files from my computer to this device? There's a lot of use cases like this where you very quickly start to want an actual screen, and actual methods of input more precise and domain-specific than conversational voice commands.
If the song I want to listen to isn't available on whatever music service this product is partnered with, could I transfer music files from my computer to this device?
What a weird example. They say they've partnered with Tidal, which would have 999 out of 1000 songs people look for, maybe more.
Even Spotify is missing probably more than 5% of what I'm looking for, which makes me strongly doubt that claim.
If the song I want to listen to isn't available on whatever music service this product is partnered with, could I transfer music files from my computer to this device?
Unfortunately, "nobody" has music files any more. Spotify forever.
(Of course readers here are the exception.)
Things you see in movies aren't meant to be made real. In movies, the pin is good for theatrical effect, because the actor recites his thoughts to the camera. There is no camera in real life; instead there are other people.
I’m curious what movie you’re referring to? Even in Her, Theo just puts his phone in his shirt pocket with the camera exposed so Samantha can see things.
Star Trek (TNG, but the other shows too) had badges they wore on their shirts as communicators. Helped for story telling, because as opposed to a phone, the com badge would play the other person’s audio loud for all to hear.
Correct me if I'm wrong GP, but I'm pretty sure he's referring to Star Trek, by comparing this pin to a combadge.
I've been waiting for this. Seems like it is a self-contained cellular device requiring a subscription, which makes sense. I guess I am curious how I can be in communication with it. Will my contacts be texted from a new phone number? That seems like the biggest hurdle for me, as I'd just like to use my pre-existing cellular service that I already pay for.
I also find it curious that a former Apple exec formed this company. I'd assume Apple itself would want to pursue this internally, as such a device would be yet another killer addition to the iron grip of the Apple ecosystem.
Imran wasn’t an exec, just an IC designer.
He was a lot more than “just an IC designer”
Bethany wasn't an exec either, but she was a project manager reasonably high up on the totem pole.
It's nice to see this product isn't actually vapor. Congrats to them.
I'm shocked at how bad the presentation video is.
Almost the entire beginning of the video is about which colors are available and how the battery snaps, with zero hints about why I would need a cringe projector on me.
I can't believe this was shipped by ex-Apple people. Imagine Steve Jobs introducing the iPhone like this: "We are introducing a revolutionary new device. The first thing you should know about it is that it has a charger and an Apple processor. The second most important thing: here is how the battery works."
Their on-screen chemistry is super odd -- particularly considering they are married and cofounders. It's giving low-rent Apple.
Oh they're married!
I couldn't figure out why he kept touching and adjusting the pin on her chest, a thing I would never do with a coworker. All I knew was that she was CEO and he was Chairman, so I knew it was a joint decision. This makes so much more sense.
The TED reveal he did was really bad but I attributed it to being live. Think it's telling that they internally didn't see that and tell him "Dude, you desperately need to take speaking classes" and that with a pre-recording that was the best take.
A sign no one internally is being honest with them, or feels they can say "It's bad"...
Does it know where your hand is? or do you have to put your hand in a certain spot?
Almost certainly requires putting your hand in a specific spot for the projector
I was wondering the same thing
It has a depth camera and can certainly track your hand as long as it is within the field of view of the camera and projector, whatever that is.
Edit: wow their tech specs are actually detailed. 125 degree FOV.
Another revelation from that FCC filing: OpenAI CEO Sam Altman is Humane’s largest shareholder.
Altman owns “14.93% equity and voting through a number of holding companies none of which individually holds 10% or greater ownership interest in Humane,” the filing states.
https://www.lowpass.cc/p/humane-ai-pin-cellular-mvno-sam-alt...
Huh, interesting. He is known for not having any equity in OpenAI - interesting that he has other AI investments instead.
Sam might be completely wrong about the most important things (I don't think so, but I don't think it's super unlikely either) and I still peg him as mostly genuine and smart, while not being very concerned with and certainly not being very good at being likeable.
He was fairly outspoken about getting (even more) rich off of other investments and believing OpenAI was simply too important to make it a conflict of interest, and mostly considers it a nuisance/distraction. That's fairly arrogant, and, again, might be completely off but I still do believe he means that and I would give it good odds to be the entirely right course of action, if most impact/most quickly is what you are going for.
A lot of that subscription fee is flowing back to OpenAI.
I think it looks quite neat. I dunno, maybe I played too many video games growing up, but the idea of UI that pops up when I need it in response to what’s in front of me seems cool as heck. But designing that UI will be challenging, and just using something like this for texting and other cellphone tasks seems like a real waste.
I have a similar feeling about augmented reality glasses.
Having used the Hololens 2, I can say I definitely want smart glasses, but not the passthrough kind, the realview with projected holograms kind.
It was/is an amazing experience. It's really a hardware miniaturization problem at this point, except that M$ canned the device and team to focus on other things. I really thought this was their opportunity to build a device that would dominate the market.
Unfortunately Microsoft won't bring it to market (and tbh I didn't think they would be the ones to do it anyway) but the idea is there and they've contributed a lot to the tech. Maybe this is going to be like the virtual boy, we're gonna have to wait 20 more years before the tech is viable and someone decides to make the thing.
I didn’t realize they were winding that all down.
It seems like a shame, Apple is entering that market soon with a device that sounds like it’ll be the same price, and… I dunno, I’d expect the real vision advantage to be a pretty strong selling point.
Between things like the Apple watch, and the upcoming glasses-based interfaces, this seems to kinda do nothing well and some of those things, just more poorly. It's beautiful, definitely interesting, but seems pretty dead in the water.
The MIT wearable demo from a few years ago which used a similar concept to project an interface in the real world was incredibly compelling, but mostly because it assumed near flawless real world AI object recognition, along with flawless projection onto said items. They'll need to demonstrate this on this particular device, before this becomes remotely interesting. Yes, it's a "detail", but I think for a lot of this kind of tech, demonstrating just how DEEPLY you can go into the interactions is sort of the whole point if they are thinking of replacing the kinds of devices that we depend on.
People have proved that they're willing to go far to make tech lighter.
Ditching glasses when you can just wear nothing seems like big progress to me.
The only thing that remains is to make it look better imho
There's always physics. You can't make batteries out of nothing.
You can't make batteries out of nothing.
We can always discover new physics, or utilise already understood physics to design something more efficient. Like what if your phone was efficient enough that it could work only on the heat given off from your hands and/or ambient light? Sounds far-fetched but I won't be surprised if this is commonplace within the next 50 years.
Operating Temperatures 5°C to 35°C
LOL. Not made for this planet. Heck, put a jacket over it and your body heat could take it over 35.
I hadn't noticed this part, that is interesting. It was outside of that range (at the high end) for 70+ days this year in Texas, and no doubt come February it will be outside in the other direction too.
That's insane. So in a British winter your expensive, subscription-only phone replacement... just doesn't work?
Star Trek combadge vibes. Neat.
Already have it in my ear, which is honestly better than on my shirt
I was hoping to be the first to notice this :D
I think input is very much solved at this point. We have reached peak efficiency and peak usability with the touch screen and pinch gestures on mobile. Anything else is a regression at this point and will not stick.
I agree. The best voice input will not replace screens. I think the next frontier has to be some sort of neural/retinal display that sends info directly to your brain.
This is cool, it's a nicer abstraction than what the Google Glasses & then Meta Glasses tried to go for...but it's still a pin I have to wear on my shirt which is stopping me from adopting.
At least for me it's very common to have several layers of clothing on most days. It would be a hassle to move this from an outer jacket to a shirt when going from outside to inside.
Lucky break that LLMs became a thing while they were making this
Given their shareholders it might not have been luck as much as they already knew.
Seems like cool tech and I'm excited to see how it does. I guess I'm sort of expecting a flop since this relies on good connection and fast ai over cell signal still seems like a challenge in a lot of places (upload voice file, speech recognition, nlu/llm orchestration, etc) but I do love the idea of a less intrusive 'smart phone' that would let me leave my phone at home more.
On another note, this reminds me a lot of the short story The Perfect Match by Ken Liu. The story isn't groundbreaking but is worth a read; it harps on AI assistants making decisions for people and driving biases based on the corporate agenda and sponsors (not to get too tinfoil-hatty).
Communication seems to be a major selling point of the pin going by the demo, but I'm pretty certain it is impossible for it to work with iMessage, WhatsApp etc. in the way that is shown, so I'm wary about the actual advantages.
100% this, the lack of a clear 3rd party integration path does raise alarm bells in terms of breaking into mainstream as a customer product. Curious to see where they are going with their “we don’t do apps” LLM ecosystem.
In the video, they apparently didn't actually check up on the answer it gave about the next eclipse. The April 8 2024 total eclipse is best seen in North America. Exmouth, Australia was where the April 20 2023 total eclipse occurred. https://science.nasa.gov/eclipses/future-eclipses/eclipse-20...
Not to mention that voice based Q&A has been done by everyone at this point. Should have focused on the differentiators they have with that form factor.
I just watched the Humane video about their AI pin. My wife said it's too boring to watch and my thought was 'wow, this guy doing the presentation is also bored'.
There has been a shortage of ADHD medication because people are taking it as an enhancement drug (tech folks do it a lot) and people who need it aren't getting it. My wife can't get hers, it sucks.
I told her 'I guess everyone in tech is also short on ADHD medication now'. ;-)
Seems like a design choice, but it was odd. Both people didn't smile and seemed bored, as you say. A bit odd; maybe tied to their liking for things like Fashion Week, where it's more of a serious look than a happy one.
I think they missed the opportunity to make this a smartphone. Humans are obsessed with their screens, and taking away the visual appeal of a smartphone is questionable. Although the target audience may be people who are actively trying to use fewer screens, so owning this alongside an iPad could work for some people.
I disagree. Smartphones are very entrenched and a personal fashion statement in some ways. It's near impossible to get even a basis point marketshare in the global smartphone market. A wearable form factor is the right approach, though a pin feels like a weird choice - as opposed to a watch, earbuds, glasses etc.
Trying to play the linked video just crashes the page on Mobile Safari, but I found their presentation on YouTube.
It’s almost as if they held their two most apathetic employees at gunpoint in a laboratory. Safe to say I’ve never seen that level of ennui in any startup’s presentation before.
Aren’t they the founders? It did strike me that barely a minute in, they were already bored with what they had to say about the product, like they’d already watched their lifetime’s share of Apple demos. At least when the Segway was launched we had weeks of building OMFG!!! hype around how it was going to be the world’s most radical new transportation device. Even the presenters on the hu.ma.ne website seem riddled with ennui, as you say, or just unenthused about presenting. It in no way seemed like the launch of a game-changing new device. Plus, referring to their website, has anyone in history had cause to ask an AI assistant “how much is a vintage photograph of an eclipse going for?” Like, seriously?? And using this device as a speaker while roller skating in a city? I couldn’t even hear my iPhone Pro Max at full volume in an environment like that. Unless this device is phenomenally good and responsive, it’s going to flop so so hard in so many ways. People’s tolerance for latency and inaccuracy is low.
To monetize v1 could focus on the single purpose of tracking nutrition. If it somehow could estimate input / output and keep you on task, maybe nudging you to walk. Like a personal trainer on your shoulder all the time.
Well in their video demo they got the nutrition information completely wrong, so clearly the tech isn’t capable of doing that even in a controlled environment.
What I find the most interesting is the battery booster idea: there seems to be a mini battery inside the main computer part (to allow hot swapping of the booster), and the booster attaches magnetically on the opposite side of the garment, delivering the main source of power wirelessly. I would have loved this idea of battery hot swapping back when smartphones had very poor battery life, and it can make the product more durable because it is extremely simple to change the battery.
This is mostly achieved already thanks to the magnetic wireless charging standard found in many new phones and all the Apple ones. Replaceable internal batteries are still largely unavailable, but at least you can bolster the internal one in the same way as this pin.
On Mobile Safari: A problem repeatedly occurred on "https://hu.ma.ne/aipin".
I guess they really do hate smartphones…
Same...
I was thinking about a use case for this device. First thing: water resistance. Is this waterproof? Then I'm thinking about its relation to, let's say, smart glasses you might have. I mean, they both can take a picture with a camera. They can both understand your voice. You can use either as a microphone. So what's the thing that smart glasses are not able to provide?
Current smart glasses (like those AR glasses from Meta) don't provide any screen output. Also, not everyone wants to wear glasses (and many that need them prefer contact lenses).
But I agree with your point that some combination of Apple Watch, AirPods, Meta AR could provide a better experience and probably future AR glasses will have better display technologies.
I wish, though, that pico projectors got more mainstream in devices such as laptops and tablets; there could be many useful applications for indie devs.
I can't see this going anywhere. It won't work as soon as you need to put a jacket on for rain or cold weather.
Clip it to the outside of your jacket?
No thanks, I'll stick with my Cicret Bracelet.
For its beautiful white on black projector display
The most impressive thing about this is that it has managed to get funded...
I mean, has any skater ever thought "I want to listen to a hip hop playlist that mentions skating"? As someone into extreme sports, I highly doubt this has ever been on a skater's mind.
And how many people are walking around thinking they want a picture of a solar eclipse?!
The device and its marketed use cases just seem so ridiculous, it honestly blows my mind that this thing exists.
To be fair the use-cases in Apple's Vision Pro release were equally misguided. Like the dystopic one where the father has this gigantic hockey mask on his face when asking his kid to blow out the birthday candles. Or the girl packing her clothes in a bag while CASUALLY wearing the face torture device so she conveniently can then receive a video call that she just has to take right then in video mode.
In both firefox and chrome I get a big page with a single picture on it. Navigation was non-obvious. Full-screen pages even at UHD. Firefox Readability couldn't gather enough data to provide a summary. When asked to provide a summary, phind.com hung for over a minute and didn't return a reply.
You know what, I just don't think I'm the target audience for whatever this site is trying to sell.
The website is seriously unusable. I was genuinely interested in learning about this just because it’s novel and creative, but every time I scroll the website on my iPhone it flings past several sentences then comes to rest with half-obscured sentences. It was so frustrating compared to a bullet list or slideshow that I just gave up and left.
I don't think this really competes with a phone. If anything it competes with a smartwatch. And as far as wearable tech, a watch blends in much better.
And a smartwatch offers a lot of health features, too! Something many people seem to find interesting and useful.
The watch is an accessory to the phone that adds features plus offers convenience and if you want to, it can temporarily substitute your phone like when you go to the gym. A cellular Apple Watch combined with Air Pods can do a lot. And both the Apple Watch and the Air Pods have use cases in addition to that in other situations. I don't see that here at all. I see a device with a very limited feature set.
Edit: Wording
I'm surprised that the data plan is priced so high. For an Apple Watch, you can add it to your existing cell plan with unlimited data for as low as $10/mo
Does anyone remember the Cicret bracelet scam? This seems an exact copy of that, except now on your shirt.
Creating this is a very bold move, regardless of success or failure. The noise it generates draws attention to potential alternatives in how we interact with our devices.
Talking about alternatives often leads to mere concern and agreement without action. Presenting an actual alternative, however, deserves respect.
Yet, there's a hint of skepticism in my appreciation. Why the 'AI' pin? The constant mention of 'AI' arouses suspicion about the product, recalling a time when 'AI' was not a part of their lexicon.
Nonetheless, I wish them good luck with "AI" pin.
I appreciate the effort but the interface is terrible. Voice is a terrible way to interact and get things done. Same with the pinching and hand based projection. It’s a bit better than voice but still not good. I don’t know what to make of it. It’s a good idea and people may buy it but it won’t stick
Discussion in progress here:https://news.ycombinator.com/item?id=38207656
The future is even more horrifyingly bland and dystopian than imagined by our best cyberpunk writers
Great website! It looks wonderful on my powerful new Mac.
Also, I like how the focus appears to be on how it will benefit the user, instead of focusing on tech specs.
Data privacy and personal security aside, I understand there will be a reciprocal action between lifestyle and technology. We might have to change our lifestyle to make room for and benefit from new technology tools, just as we have for smart phones.
If kids get interested in it, then it will have a chance. If it's cool and fun, then it has a chance. If it actually makes things easier and better, then it could take off.
But if it has that cringe factor like google glass had, then it will never get anywhere.
The possibilities are awesome, but something like this requires a reinforcing feedback loop on top of a network effect to become successful.
I feel more people than not in this thread like this, so let me say my contrarian point of view:
more muggable than a "I heart NYC" t-shirt
As noted here:https://news.ycombinator.com/item?id=38208209
No one wants to talk to a computer in public. How could they miss that after all these dumb voice assistants like Siri and Alexa? No one wants to capture photos without directly checking them afterwards. This is so out of place.
Almost. I was almost sold until they mentioned the monthly subscription. This device looks so futuristic, but monthly subscriptions are (should be) a thing of the past.
Can I have the hand-tracking laser projector and nothing else?
No info on the battery life, or about the possibility of replacing it. For all I know, you are getting a device that's obsolete in two years at best, like AirPods. Also, very gimmicky.
I think the product is a failure, but the website is gorgeous.
Maybe this is simply getting older, but this feels gross. I don't want my technology to feel this opaque. The form factor is a little silly, too. Why not a smartwatch? Tech like this produces glassholes.
Glorified API wrapper with a shiny light for posers.
Anywhere outside silicon valley and this is pay several hundred dollars to get ostracized
This is literally an episode of Big Mouth. I cannot believe this exists.
The software is what will determine if it succeeds or fails, and it's clear from the "demos" that the software is nowhere near production ready (or exists at all)
is there any SDK or API?
I think this is a cool concept. I can't imagine ever using it with the latency demonstrated in their video. If I was in charge I'd be chanting "latency, latency, latency" like Ballmer or something.
I think the Meta Ray-Ban glasses are probably a better concept even without a screen just because of the better placement of the audio and camera. But I'm glad people are trying different things.
Love it
The problem with ditching the screen is that people genuinely love reading. Vast majority of my time in the day is spent reading, either on screen or paper. The product with the most variety is books, by a long shot.
They are making a device to replace the screen, yet the screen is a rich source of information! People love screens! Screens can present information non-linearly and also interactively.
While voice and sound is always linear.
And yes, this thing has a display, but it's low fidelity and cumbersome.
It's a no from me.
I liked the demo actually. But it's scary that they would have access to all personal details.
But then Google already has access to all my data - via Gmail and my phone (android)
Humane clearly appears to be a category creator in the making
I signed up a long time ago to be on the AI Pin info mailing list, so interesting to see it almost shipping.
I salute efforts to develop alternative computing devices, interfaces, and ways to interact.
I have been using my Apple Watch in an ‘only device carried with me’ mode, unless I will be wanting to take pictures or read an eBook when I am out of my house. As someone who has been using computers since about 1964, it is so refreshing to just have minimal connectivity - this helps being more present in the world.
I would love to be able to fast-forward and see how devices like the AI Pin do commercially in the next 5 years. How many people are like me and want to digitally disconnect, except for communicating with family or close friends? I would bet we are in a small minority.
Not having any third party apps is painful. No audio books, or podcasts? I get those on my Apple Watch used with AirPods. Anyway, I wish this company well!
It's amazing: years of development, and at no point did anyone stop and ask, "what problem are we solving here?"
How is this form factor better than a smartwatch with a proper screen?
Not sure about its future success, but voice interfaces will be the new paradigm for sure.
Oof. Looks like a huge UX miss to me at best, and a worse version of Google Glass at worst.
The only way that any "pin" of this type will make it is if it is actually a peripheral that connects to a phone. That's what this thing wants to be. A camera/speaker/microphone/laser box that you wear. It gives you answers or short messages on the palm of your hand. Want to see pictures you recently took with it? Whip out the phone.
You will not separate people from video calling and media consumption, sorry.
But imagine you had this as a peripheral for your phone. You could be on a call with someone, face to face, and then say, "hey, I will take you for a walk". Switch to the pin's camera and keep talking while your friend now sees more or less what you see as you walk around.
The problem with voice interfaces in public is that you look like a tosser while using them - and that's if they actually work. Also, you may need it to communicate privately with you too...
"Hey humane, add a meeting next Tuesday at 2pm'.
"I'm sorry Dave, I'm afraid I can't do that. You have a doctor's appointment about your haemorrhoids"
lol looks awful
The device and the subscription both cost more than my Pixel 8 Pro + Google Fi. Seems a bit steep, especially considering that the P8P can do most or all of these AI tricks and then some, except for clinging to my shirt.
One thing I am quite interested in here is the gesture controls. Google completely fumbled their Project Soli (although I bet they filed a thousand patents for it) but it's the kind of interface I want when driving my car. Buttons are better than touchscreens but gestures could be better than both.
'Member the data broker character in Snow Crash?
"there are no wake words, so it's not always listening [...] it doesn't do anything until you engage with it, and your engagement comes through your voice, touch, gesture, or laser ink display"
I'm guessing they mean a combination, so you need to touch AND do something else. But taken literally the gesture option implies they're also always watching.
"if it's ever physically tampered with, it will require service from Humane to restore operation"
So it's entirely non-repairable?
--
I also love the "you can shop in the real world" example, where the implied scenario is him walking into a physical bookstore (they say "retail") and announcing out loud that he's checking whether the book is cheaper online and buying it there instead.
I remember version 1 of this thing, Sixth Sense
Sorry, try again. The fact that the company is called "Humane" should be sign enough to tear it off your person, smash with hammer and burn.
The projection onto the hand is just straight up neat. I'm not going to buy this device, but do look forward to using that paradigm at some point in the future.
Any word when it'll be released outside the US?
I doubt it will be a huge success but this sparks interesting thoughts - how to put all these things on such a tiny board and power it up with a tiny Li battery. How to build it at home? FEMTO-ITX seems like the smallest motherboard to get started with. Is there a better/smaller alternative?
That subscription is priceeeey, the responses in the demo were off.
However those are things that can change. So I don't worry about that.
The form factor is innovative, and the laser display (which they didn't lean into) is very cool!
Related ongoing thread:
The Humane AI Pin Launches Its Campaign to Replace Phones - https://news.ycombinator.com/item?id=38207656 - Nov 2023 (130 comments)
Photography was not allowed during WIRED’s visit to Humane, and the company didn’t provide WIRED a Pin to try.
Clearly very confident in their product.
finally, a plausible contender for the mark of the beast. impractical, but it could be made mandatory without too much arm twisting.
Does this run an LLM on device?
They clearly got the shape wrong. It’s supposed to be like a Metallic A , with a gold oval behind it.
They need their modern Chiat\Day because right now they are about to squander the best movement in hardware tech in 10 years.
Can't help but think this is dead on arrival, wearable tech doesn't have a good track record. Imagine talking to someone while they're wearing this (sans translation feature), frustrating to say the least. I imagine the adjustment is too steep a curve for people to adopt this comfortably. That said, I hope they keep pushing, I guess.
This feels like the Kinect in that, if it works perfectly, seamlessly, responsively every time, it would be an amazing "the future is here now" gadget. But if it doesn't, it's just a tech demo with no real use.
The pin form-factor is awkward. At least with a watch, you have watch functionality to fall back on, making it immediately useful, and you can discover incremental functionality--health, message, alerts, etc. This is all or nothing (and I think it's going to land closer to nothing).
What does it feel like when (critically, not 'if') it accidentally shines into your eye?
This looks like a technology that will sell the 2nd or 3rd time it gets revived. It's an Apple Newton, and we all want an iPhone.
Genuinely asking: what problem does this solve? And won't Apple instantly destroy them the moment they use an LLM for Siri?
Why wouldn't I use my existing watch/phone/earbuds/pods instead of paying $600 plus a subscription for this?
I don't understand the insistence on using voice as the main interaction and ditching the screen.
At least Google Glass/AR lets me read