This is remarkable and could be life changing for the disabled, elderly, gamers, or profoundly lazy and their caretakers.
It is open source but still costs nearly $25k. Why is it that expensive?
Since when does open source mean cheap?
Labor isn't free. Building custom PCBs and hardware in low quantity isn't cheap. Building, calibrating, and testing robots isn't cheap.
Now in all fairness, open source tends to mean cheaper because it does reduce how much has to be invented in-house, and also (sometimes) because it lets you crowd source free labor. In software, that can lead to stuff getting completely built for free (or close) because the base costs are low and mostly consist of labor that some people might be willing to do for free. In hardware, it's likely that open source still reduces the costs, but... you can make a thousand copies of a library for free; making a thousand copies of a part is never going to be free.
I’ve been getting back into robotics lately and one thing that’s rubbing me the wrong way is that these days, with PCBWay, everybody seems to be making their own boards. Why? Some boards are innovative, but how come everybody needs their own FOC controller? Can we get one project going and focus on that before adding yet another FOC controller, but this time with wireless!
It’s a low volume product that has to support the salaries of the engineers who create and maintain the product.
I’m always surprised by how difficult it is for people to understand this.
Yep. I was recently looking at building an art project that required a gas valve that can freely rotate while under pressure.
If you need one gas line, you can get a swivel for a normal shop hose reel for $15. If you need two gas lines on the same axis, the part is similar but way lower volume, so you have to go to a specialty supplier and the price is $350.
The business that makes hose reel swivels makes lots of high volume parts, has lots of competition, and needs to charge close to cost to sell them. The business that makes specialty gas swivels for industry that offers multiple gas lines in one swivel, lots of different options, and makes them higher quality needs to charge a lot more to keep their business operational.
Where’s the price? Do you have a link to the product page?
Thank you
Here is the page for Hello Robots: https://hello-robot.com/
The software is open source. The hardware is proprietary and protected by patents.
I don't think patents are what is making this specific hardware expensive. Rather it's just a lack of market and supply chain scaling.
By robot standards, $25k is not bad. Most mobile-manipulator robots cost five digits or more, mostly due to the small market, high materials and engineering costs, and the general headaches of robot building.
I'd love to see this be usable as potentially a mower and/or vacuum/mop with different swappable components.
Not to counterpoint, but just for the sake of discussion, I kind of want the opposite, for maybe similar reasons. I want flexible robots that can replace my human labor. I don’t want robots that are obligate specialists.
Laundry, cooking, dishes, sweeping, vacuuming, and other constantly recurring tasks are what I would love to see automated not just a “robot that sweeps” like the market has been trying to sell me.
Ever since I read "The Second Shift", the book about the unpaid extra 40-hour week women work doing domestic tasks, I’ve dreamed of robots replacing that for humanity. It’s a massive cost to people individually and humanity overall, and kind of a silent epidemic.
It’s crazy but freeing up half of humanity from the drudge work of daily chores is one of the most obvious disruptive technology plays. I rarely hear people put the robot revolution in this context, but I very much think we should start doing so.
Here’s a good overview for the uninitiated:
https://www.americanprogress.org/article/unequal-division-la...
I applaud — for real — your ideas and feelings here. I’ve had similar thoughts my whole life, growing up reading golden age science fiction.
But I worry very much that tools like this will be used primarily to increase corporate profits and reduce money spent on humans, rather than remove drudgery from people’s lives and allow them to do things more aligned with their goals and natures.
E.g., if we make a cleaning robot, hotels will replace half their staff — what will these people do for a living? Work in an AI sweatshop, categorizing images of child abuse?
Old-school science fiction often proposed that we’d be entering a new age of art and leisure, as robots and AI take over menial tasks. In fact today I think we’re seeing AI and robots — in part — taking jobs from humans, and in order to provide entertainment and economic leverage to richer humans.
It’s making me reevaluate all that old science fiction, as it seemed to require an invisible 90% of the population basically working for the AIs so that the AIs can curate a great life for a stratospherically-wealthy minority.
I don't think you should reevaluate it in that context. Golden age science fiction assumed what we seem to be now calling AGI and still don't know how to create. What we're now calling artificial intelligence (thanks to OpenAI) is effectively an advanced version of autocomplete with infinite computing power behind it. It's incredibly inefficient, and if we ever build AGI we'll look back at AI like people looking back at the earliest manual typewriters without shift keys or lowercase.
For golden age sci fi theories of human work vs leisure to actually take hold, we need universal basic income, or some other monetary theory that allows us to value other people for being alive rather than solely for being feudal slaves of deranged billionaires.
"Hotel maid" as a job really shouldn't exist when robots can do it better and more consistently (which isn't true yet). At that point, not before, it should be considered beneath human dignity. But we definitely need an answer for what happens to the newly undignified human.
Dignity should be intrinsic, not a result of labor. Of course, labor is necessary today (and in a way will always be necessary from someone), so working is indeed dignified to the extent it helps other people.
I think chores aren't necessarily terrible drudgery. But with a robot as an option, you could do them as a sort of hobby, if and when you want. That seems nice.
I think we will also need to develop the maturity to deal with our free time, but it's probably not the disaster I've seen many claim (that we lose meaning) -- maybe that's their way to cope with an unfair world? Or my way to cope with laziness.
The main thing is how to protect ourselves from rulers when we aren't necessary for labor. It seems like a difficult but solvable problem. Being able to choose how much to work (and play) is the dream!
Old-school science fiction often proposed that we’d be entering a new age of art and leisure, as robots and AI take over menial tasks. In fact today I think we’re seeing AI and robots — in part — taking jobs from humans, and in order to provide entertainment and economic leverage to richer humans.
It was also predicted in the mid 20th century that rising productivity would create a shorter work-week; instead we have figured out how to prevent workers from being compensated for higher productivity.
I agree that generalist robots would be better, but building them is really hard (which we know, because we've been trying to build them for decades now). So I think piecemeal robots are the happy-enough medium that we can build to start automating away work today (while we hopefully keep working on the general case).
Make it micro.
I want mini robots cleaning dust and debris, silently and out of my way. I don’t want macro bots getting in my way.
I agree, micro bots would be best to handle the dirty jobs.
Okay, so we all agree a Matryoshka-doll-like system, similar to SD cards and microSD cards, is appropriate then.
That's what Zorg thought in _The Fifth Element_.
Robots like this will have a small market until they can handle obstacles. The cat toy that the cat left in the middle of the floor, the papers that an open window blew off the table, the toys the kids left scattered about, the pencil that rolled off the desk while you were away, the dirty laundry you left lying on the floor, the ridge between carpet and hardwood floors, doors left open or closed, and more. That means there may be several tasks that intervene before a primary task can be accomplished (move the toys, pick up the papers, pick up the laundry, open the door). Some obstacles will semi-permanently block a wheeled robot, such as cables, things stacked that you don't want moved, furniture, a sleeping pet, stacked unopened packages from the mail, etc.

I believe this means general purpose home robots cannot have wheels; they must have legs, perhaps more than two legs for stability. It may sound weird, but I think the ideal design might be somewhere between a large friendly spider and a dog.

It's odd how robotics has mostly fallen into this idea that the world is two dimensional and flat. They've idealized away the really difficult problems of dealing with mobility in a 3D world. Note that everything this robot does involves only planar horizontal surfaces. Basically it looks like a person had to go through the rooms and clean them up for the robot to function. Roombas have the same problem.
This same problem exists in the hospital setting on anything with wheels.
They solve it there with "cord pushers". Basically cattle guards for stuff on the floor.
Not every problem needs a complex solution.
I think the brilliance of this project is in its simplicity, particularly in the robot design.
These "invented problems" that software engineers (such as myself) find in every project are basically why we can't have nice things. Why did software in the 90s run faster than the same functionality in the 2020s? It's this right here.
I have had good discussions with a colleague about this: where developers lean toward getting roadblocked by every possible engineering problem, they advocate first checking whether there are solutions to the problem that don't require engineering.
In this example, I think they'd suggest communication first then solve the engineering problem later.
Eg: just tell people they need to clear the floor or it can get stuck. People will still want it.
Perhaps the next step is lower touch engineering, ie: beep when it's stuck.
I tend toward engineering stuff, but I have come to realize you can't always afford the engineered solution, and that doesn't have to stop you from delivering stuff.
Coming up with these scenarios is called de-risking and engineers need to do it!
To your point, people own Roombas, which is kinda like this with no arms.
Take a look at these pictures:
https://duckduckgo.com/?q=cluttered+old+persons+home&atb=v31...
I was thinking that people who live in an environment like this are most in need of a robot to help them.
Agreed, they could use the help.
The issue I see is that this is by far the most challenging corner case. It's not the largest market, but it's the most difficult to capture.
Good business sense would dictate that you capture the largest, simplest market first, then go after these more difficult corner cases.
That's an indictment of the exact platform used, not the general concept, though. "All" that has to happen is for the arm(s) to be able to pick up things on the floor in front of the robot. It doesn't seem insurmountable; the demo connecting all those models together to be able to say "move the Takis to the nightstand" and have it execute on that is amazing. It's just a "small" matter of robotics to make the arm articulated enough to reach the floor.
I'm not sure I'd use a robot that can "move x to y", but I'd love a robot that can run after my daughter's bedtime to tidy up her toys. This is an end in itself.
That's very cool. I have almost no experience with robotics, so excuse the silly questions:
- How does it know what objects are? Does it use some sort of realtime object classifier neural net? What limitations are there here?
- Does the robot know when it can't perform a request? I.e. if you ask it to move a large box or very heavy kettlebell?
- How well does it do if the object is hidden or obscured? Does it go looking for it? What if it must move another object to get access to the requested one?
Disclaimer: I'm not one of the authors, but I work in this area.
You basically hit the nail on the head with these questions. This work is super cool, but you named a lot of the limitations with contemporary robot learning systems.
1. It's using an object classifier. It's described here (https://github.com/ok-robot/ok-robot/tree/main/ok-robot-navi...), but if I understand it correctly, they are basically using a ViT model (essentially an image classification model) to do some labeling of images and projecting them onto a voxel grid. Then they are using language embeddings from CLIP to pair the language with the voxel grid. The limitation is that if they want this to run on the robot, they can't use the super huge versions of these models. While they could use a huge model in the cloud, that would introduce a lot of latency.
2. It almost certainly cannot identify invalid requests. There may be requests that are not covered by their language embeddings, in which case the robot would maybe do nothing. But it doesn't appear that this system has any knowledge of physics, other than the hardware limitations of the physical controller.
3. Hidden? Almost certainly wouldn't work. The voxel labeling relies on a module that labels the voxels and without visual info, it can't label them. Also, as far as I can tell, it doesn't appear to have very complex higher-order reasoning about, say, that a fork is in a drawer, which is in a kitchen, which is often in the back of a house. Partially obscured? That would be subject to the limitations of the visual classifier, so it might work. ViT is very good, but it probably depends on how obscured the object is.
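A minimal NumPy sketch of the querying idea described in point 1, under the assumption that each voxel stores a CLIP-like embedding: embed the text query, score every voxel by cosine similarity, and return the best-matching voxel's coordinates. The function name, tiny 4-dim embeddings, and toy data are all hypothetical, not the project's actual code.

```python
import numpy as np

def best_voxel(voxel_embeddings, voxel_coords, text_embedding):
    """Return the voxel coordinate whose stored embedding best
    matches the query embedding (cosine similarity)."""
    v = voxel_embeddings / np.linalg.norm(voxel_embeddings, axis=1, keepdims=True)
    t = text_embedding / np.linalg.norm(text_embedding)
    scores = v @ t                      # cosine similarity per voxel
    return voxel_coords[int(np.argmax(scores))]

# Toy data: 3 voxels with 4-dim "embeddings" (real CLIP uses 512+ dims).
coords = np.array([[0, 0, 0], [1, 2, 0], [3, 1, 1]])
embs = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0]])
query = np.array([0.1, 0.9, 0.1, 0.0])  # most similar to voxel 1
print(best_voxel(embs, coords, query))  # -> [1 2 0]
```

In the real system, the text embedding would come from CLIP's text encoder and the voxel features from projecting image embeddings through the depth camera, but the lookup itself reduces to this similarity search.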
While they could use a huge model on the cloud, that would introduce a lot of latency.
With all the recent work to make generative AI faster (see Groq for LLMs and fal.ai for Stable Diffusion), I wonder if the latency will become low enough to make this a non-issue, or at least good enough.
If AI/ML home systems become significantly common for consumers before the onboard technology is capable, I could see home caching appliances for LLMs.
Like something that sits next to your router (or more likely, routers that come stock with it).
Does a robot that moves things in a home need this? The challenging decisions are (off the top of my head):
1. What am I picking up? This can be AI in the cloud, as it does not need to be real time.
2. How do I pick it up? This can also be AI in the cloud; the robot can take its time picking the object up.
3. After pickup, where do I put the object? Localization while moving probably needs to be done locally, but identifying where to put it down can be done via cloud. Again, no rush.
4. How do I put the object down? Again, the robot can take its time.
You can see in the video that the robot pauses before performing the actions after finding the object in its POV, so real time isn't a hard requirement for a lot of these.
The cool thing is that there are solutions to all of these problems, if the more basic problems can be solved more reliably to prove the underlying technology works.
User fishbotics already answers a lot of these questions downstream, but just confirming it here as an author of the project/paper:
- How does it know what objects are? Does it use some sort of realtime object classifier neural net? What limitations are there here?
We use Lang-SAM (https://github.com/luca-medeiros/lang-segment-anything) to do most of this, with CLIP embeddings (https://openai.com/research/clip) doing most of the heavy lifting of connecting image and text. One of the nice properties of using CLIP-like models is that you don't have to specify the classes you may want to query later, you can just come up with them during runtime.
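The "no fixed class list" property mentioned above can be illustrated with a toy sketch: whatever strings you embed at query time become the candidate "classes", and you just softmax cosine similarities between the image embedding and those text embeddings. The tiny 3-dim vectors below are stand-ins for real CLIP outputs, and the function is hypothetical, not the project's API.

```python
import numpy as np

def rank_queries(image_emb, text_embs):
    """Softmax-normalized cosine similarities between one image
    embedding and any number of runtime-chosen text embeddings."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img
    e = np.exp(sims - sims.max())   # stable softmax over the queries
    return e / e.sum()

# Toy vectors standing in for CLIP embeddings; the "classes" are just
# whatever strings you chose to embed at query time.
image = np.array([0.9, 0.1, 0.0])
queries = np.array([[1.0, 0.0, 0.0],   # e.g. "a bag of Takis"
                    [0.0, 1.0, 0.0]])  # e.g. "a coffee mug"
probs = rank_queries(image, queries)
print(probs.argmax())  # -> 0
```

With a real CLIP model you would replace the toy vectors with the outputs of the image and text encoders; the ranking step is unchanged.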
- Does the robot know when it can't perform a request? I.e. if you ask it to move a large box or very heavy kettlebell?
Nope! As it is right now, the models are very simple and they don't try to do anything fancy. However, that's why we open up our code! So the community can build smarter robots on top of this project that can use even more visual cues about the environment.
- How well does it do if the object is hidden or obscured? Does it go looking for it? What if it must move another object to get access to the requested one?
It fails when the object is hidden or obscured in the initial scan, but once again we think it could be a great starting point for further research :) One of the nice things, however, is that we take full 3D information into consideration, and so even if some object is visible from only some angles, the robot has a chance to find it.
For solving long term tasks like finding things that aren't there, you can turn the annotated scene into a templated description and feed it to a large-enough model trained on interactive fiction.
You are standing in a kitchen. Ahead of you to your right there is a large refrigerator with the handle on the right side. There is a set of cabinets to your left with a plate sitting on the counter above them.
get beer
You don't see any beer here.
<< COT: I know that beer is often found in the fridge. I should try opening the refrigerator
open fridge
Opening the refrigerator reveals 4 cans of beer.
get beer
taken
Obviously we're still several years from this working, but it's very exciting to consider. Interactive Fiction narrative fed by real sensors plus chain-of-thought blocks as internal monologue.
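The "templated description" step suggested above could be as simple as rendering the annotated scene into interactive-fiction prose. A hedged sketch, with a made-up function and a made-up detector output format:

```python
def describe_scene(room, objects):
    """Render detected objects into an interactive-fiction style
    room description. The (name, place) pairs are hypothetical
    detector output, not any real system's format."""
    lines = [f"You are standing in a {room}."]
    for name, place in objects:
        lines.append(f"There is a {name} {place}.")
    return "\n".join(lines)

scene = describe_scene("kitchen", [
    ("large refrigerator", "ahead of you to your right"),
    ("plate", "on the counter to your left"),
])
print(scene)
```

The resulting text could then be fed to a language model as the "room description", with the model's chain-of-thought acting as the internal monologue.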
Great, now we can teach robots to wander around rooms looking for things, saying "keys, keys, keys... where would I put keys?"
Get a Tile. I have one attached to my keys, and saying "hey Alexa, find my keys" has been really nice. We also have one taped to our remote, which turned out to be excellent since our couch constantly eats it. I just wish it lit up, but sound-only is fine.
It would be really cool if the robot could just know where your keys are by attaching some kind of Tile-type thing to them. If it already has a scan of your home, theoretically it could show a photo. But I have no idea if it’s possible to pinpoint an object via RFID.
I can pick up and place objects myself if only I could remember where I put them.
You could take video data and have fuzzy identification of objects moving around, then throw away the video and keep track of the objects: the blue floppy thing (gloves), the metal shiny deforming things (keys). Then you could have a more constructive dialog about the keys. A voice responding: what do the keys look like? Is there a blue square thing on the key ring? The less identifiable the object, the funnier the discussion. What shirt? You have many shirts! Oh, the blue one? You have 4 of those: one in the sink, one behind the bed, one in the laundry basket, one in the closet. Oh, the one with stripes! Why didn't you say so, it's behind the bed, bro.
It could also ask you if they are supposed to be left on the outside of the front door after you close it.
I have exactly one place I put my keys in the house - the handle of a certain door. As soon as I get into the house, I put the keys there. This hasn't failed me yet.
Multimodal LLMs already excel at these sorts of tasks. Try taking a picture of your kitchen and asking ChatGPT where to find the beer.
I use this quite a lot actually. Being lazy, I take photographs of components and boards and ask it how to wire them to my ESP32. It’s able to distinguish the board, chip, etc., as well as the pinouts, from a set of photos and tell me what wires go where and anything of note. It’ll often even suggest helpful libraries for the parts. It’s essentially magic.
I know nothing about robotics, but can someone ELI5 why the robot makes so many extraneous movements? E.g. the video that shows it moving Takis from the desk to the nightstand, it approaches the desk, and then the arm mechanism moves all the way down (an unnecessary maneuver), then rises again before reaching the level needed to pick up the Takis.
A lot of those movements are there to zero out the axes so that each movement starts from a known good position and orients itself against the camera. Usually there's a switch that, for example, senses when the body goes all the way to the bottom, which is the origin for the whole positioning system. Several other movements are for safety since it doesn't have a bunch of cameras and really complex logic for collision avoidance so it resets to a smaller profile between moving around.
Since motor movements are very precise but errors accumulate, re-zeroing is a best practice when starting new movements. Humans instead have a complex hand-eye coordination system that has trained all our lives (and some people are better at it than others).
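The homing routine described above can be sketched as a tiny simulation: drive the axis toward its limit switch one step at a time, and declare that point position zero. This is purely illustrative (the function, callbacks, and numbers are made up, not the Stretch's actual firmware):

```python
def home_axis(read_limit_switch, step_down, steps_max=10_000):
    """Drive an axis toward its limit switch, then declare that
    point position 0. Real firmware would typically also back off
    and re-approach slowly for a more precise trigger point."""
    for _ in range(steps_max):
        if read_limit_switch():
            return 0          # origin established at the switch
        step_down()
    raise RuntimeError("homing failed: switch never triggered")

# Simulate an axis that starts 42 steps above the switch.
pos = {"steps_above_switch": 42}
def switch(): return pos["steps_above_switch"] <= 0
def step(): pos["steps_above_switch"] -= 1

origin = home_axis(switch, step)
print(origin)  # -> 0
```

Every subsequent move is then counted relative to this known-good origin, which is why the robot appears to make "extraneous" trips to the end of its travel.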
A lot of those movements are there to zero out the axes so that each movement starts from a known good position and orients itself against the camera. Usually there's a switch that, for example, senses when the body goes all the way to the bottom, which is the origin for the whole positioning system.
Ideally the "zeroing" should be done once when the robot "wakes up", or only once in a while, and there should be digital encoders on all motors; the position should always be known within a tiny margin of error, not enough to cause a problem for positioning. At least that's how I'd do it; I'm not sure how they built this thing.
It's always a trade-off! You could have more accurate sensors and motors that are more expensive, or you can have cheaper motors with no sensors and higher accumulated errors. Since this is more of a research project than a product, we went for a cheap robot with the slower-but-more-accurate approach.
Encoders are not that expensive and they don't have to be integrated into the motor. I've done this stuff before, it's not so costly and it really improves the entire system.
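To illustrate why encoders help, here is a toy simulation (hypothetical slip rate, not measured from any real motor) where each commanded step occasionally slips: the open-loop estimate still counts the step, so its error grows with travel, while an encoder would simply report the true position.

```python
import random

random.seed(0)  # deterministic toy run

def simulate(steps, slip_prob=0.05):
    """Open-loop step counting vs. ground truth. Each commanded step
    has a small chance of slipping (the motor doesn't actually move);
    the open-loop estimate counts it anyway, an encoder would not."""
    true_pos = 0
    open_loop_estimate = 0
    for _ in range(steps):
        open_loop_estimate += 1          # controller assumes the step happened
        if random.random() > slip_prob:  # most steps actually happen
            true_pos += 1
    return open_loop_estimate, true_pos

est, actual = simulate(10_000)
print(est - actual)  # accumulated open-loop error grows with travel
```

With encoder feedback the controller would track `actual` directly and never need to re-home mid-task; without it, periodic re-zeroing against a switch is how the drift gets discarded.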
I very much want a stabilized platform vehicle that I can send point-to-point with a payload on it.
So, a gyro-stabilized platform like a segway that I can send back and forth from point A to point B on a not-terrible-but-rough (walking path) route.
I have tried to stay abreast of the options in the past and have never seen anything that matches this ... does anyone know if there is anything new that matches this use-case ?
(the use-case is a tray of drinks and hors d'oeuvres that needs to go from one part of a property to another without spilling ... needs to be minimally all-terrain)
You’ve maybe seen these already in restaurants? https://www.pudurobotics.com/product/detail/bellabot
Not sure I’ve seen them take drinks though, but definitely food.
Some of these robots are kind of "jittery" from what I've seen; soup gets delivered without spilling, but it might not work well with drinks (not on "rough" terrain, obviously).
https://www.youtube.com/watch?v=VGzRfvgnS_s
The technology's out there, just need to glue it all together.
This is rad. I would totally buy a 25k robot if I could train it to fold and put away my laundry (serious)
You might have to buy a second arm for this one for folding [1]
In fact, Hello Robot already shared a teleoperated demo of folding shirts! https://www.youtube.com/watch?v=QtG8nJ78x2M&t=180s But yes, a second arm is needed.
I would buy a $100k robot if it could do the laundry and the dishes and the cooking and clean up after the kids. In a heartbeat.
Isn't this the same as Dobb-E?
The projects look related and have an author in common. Both are mentioned on the website for the robot that they used:
No, although it has some of the same people on the team (aka I'm the first author there, and my advisor is advising both projects :) )
The primary difference is that this is zero-shot (meaning the robot needs 0 (zero!) new data in a new home) but has only two skills (pick and drop); whereas Dobb-E can have many skills but will need you to give some demonstrations in a new home.
It's cool but what's the point for a normal person? Useful for warehouses and manufacturing but I don't see myself ever needing such things
Are elderly or disabled people "normal" in your book? Do you see yourself or your loved ones growing old someday?
A large motivation behind this line of home-robot work for me is thinking about the elderly, people with disabilities, or busy parents who simply don't have enough time to do it all. I am personally hopeful that we can teach AI to take the jobs that no one wants rather than the jobs that everyone wants :)
It appears slow, but tests show it completes most tasks more quickly, accurately, and with less complaining, than most members of the gen z cohort.
Weird energy.
It must not have anything better to do with its time
Is the title a reference to OK Computer or is it just something you all came up with?
The title has multiple meanings; some credit definitely should go to OK Computer/Radiohead, but also "OK Google" for controlling a home assistant, open-knowledge (OK) models, etc.
congrats on the awesome work!
Thank you!
A friend is working on a slightly related project. I’m curious how they map out the room in voxels; anyone care to suggest how this is done?
The mapping process can be done with any RGB-D camera; we use an iPhone Pro, but any Apple device with ARKit should work. Once we have a sequence of RGB-D images with associated camera poses, we can just backproject the pixels (and any associated information, like CLIP embeddings) using the depth into voxels.
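The backprojection step mentioned here is the standard pinhole-camera formula: given a pixel (u, v), its depth, and the camera intrinsics (fx, fy, cx, cy), recover the 3D point, then quantize to a voxel index. A generic sketch (the intrinsics, voxel size, and function names are illustrative, not the project's actual code):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel + depth -> 3D camera-frame point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def voxel_index(point, voxel_size=0.05):
    """Quantize a 3D point to integer voxel coordinates (5 cm cells here)."""
    return tuple(int(i) for i in np.floor(point / voxel_size))

# Example: 640x480 camera, principal point (320, 240), 500 px focal
# length, the center pixel observed at 2 m depth.
p = backproject(320, 240, 2.0, 500, 500, 320, 240)
print(p)               # -> [0. 0. 2.]
print(voxel_index(p))  # -> (0, 0, 40)
```

Any per-pixel information (like a CLIP embedding) can ride along with the point and be accumulated into the voxel it lands in; camera poses from ARKit transform these camera-frame points into a shared world frame.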
The failure analysis is super well done, nice work! Curious what qualifies as hardware failure, e.g. there's 5 trials where the "Realsense gave bad depth", and how that's determined.
Thanks! We collect all the data and analyze it post-facto to see what may have caused the failure. For example, on the 5 trials you mentioned, the Realsense gave wrong depth on transparent or semi-transparent objects, and so the pointcloud generated from the robot's head camera was simply wrong.
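One common way such frames get flagged in post-hoc analysis: depth sensors typically report 0 (or NaN) on surfaces they can't measure, so a frame with too many invalid pixels is suspect. A generic sketch with an arbitrary threshold, not the paper's actual criterion:

```python
import numpy as np

def depth_frame_suspect(depth, invalid_frac_threshold=0.3):
    """Flag a depth frame as suspect when too many pixels are invalid
    (zero or NaN), which is typical of transparent or shiny objects.
    The 30% threshold is an arbitrary illustrative choice."""
    invalid = (depth == 0) | np.isnan(depth)
    return invalid.mean() > invalid_frac_threshold

good = np.full((4, 4), 1.5)  # every pixel has a valid depth reading
bad = good.copy()
bad[:2, :] = 0.0             # half the frame returned no depth
print(depth_frame_suspect(good), depth_frame_suspect(bad))  # -> False True
```

This only catches missing depth; the transparent-object case described above is harder, since the sensor can return plausible-looking but wrong values, which is why the analysis has to be done against the recorded data rather than a simple online check.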
Back in the day, my friend would lament not having “closetgrep” to find the needed thingy stored in an overfilled closet.
I bought 40 little cardboard boxes (VATTENTRÅG - they are pretty cheap, and shallow enough so you don't have to go digging too much) from IKEA and started putting what was in each one in text files so I could literally do that (grep for things). I still need to catalog 38 out of the 40 boxes though so I'm reconsidering my strategy.
Take that, SHRDLU!
For a long time, I wanted to use a robot with a gripper to make tea. Is there any 6DOF robot available at a reasonable price (<$1000) to do so?
Why are these general-purpose robots always so slow? Intuitively we expect machines to be able to do tasks faster than humans, but even the 5x-speed video is much slower than a human could do the task.
This looks really cool, but I immediately think of the possibility of it starting a fire, and thinking everything's fine
I've been watching this project for a while now, great progress!
I envision an integration with a mobility aid (eg, a wheelchair) for someone with limited control over their limbs. Imagine a "smart" exoskeleton that can help with otherwise impossible tasks -- it could be a game-changer for so many people.
I like the presentation of this: here's 10 different environments and multiple videos of each.
I forgot where I saw that, but generally, improving things for people with disabilities improves things for everyone, like making sidewalks wheelchair friendly helps parents with a stroller, or people carrying heavy stuff, walking with a cane, young children on bicycles, people who can't see well...
Everything has its limits. Many years ago I was involved in building a series of staircases in a rock climbing area inside a park. There were about a hundred steps in a handful of orientations to get from the parking lot over a rocky hill to the small valleys behind. The project was primarily to prevent trail erosion and falls. These steps weren't going to even have handrails. (Think 2x6 framed boxes filled with dirt and bolted to the rock.) Then someone in government said if we wanted to use donated money inside a park we would have to somehow make the project wheelchair accessible. All stop. Project over. No stairs were built. Access trail remained a mess.
We were going to replicate these stairs from another climbing area in BC. There is no way to make such a thing wheelchair accessible.
https://sonnybou.ca/ssbou2001/skaha01.jpg
In the US? I assume the ADA was the kicker. A lot of folks, even in government, don’t realize the ADA isn’t unthinking. If the activity or environment doesn’t lend itself to accessibility, it’s not required. Cutting a wheelchair ramp into a mountain face is a good example where the ADA wouldn’t apply, because it’s impractical given the environment. Even national parks only make a subset of activities ADA compliant.
No, it wasn't an ADA thing. It was a purely local thing. The local authority had adopted some resolution that no further "development" would happen before they added some sort of accessibility. So we couldn't move forwards even using donated money. We could repair things but not make substantive improvements.
Rock climbing areas tend to be inaccessible or at least very rough terrain. Ironically, a vertical rock surface can be made accessible. There are actually many disabled climbers out there. But with a mixed dirt/rock/scree slope you basically need to install a mile-long ramp.
I guess pointing at the cliff and saying that’s the accessible route doesn’t fly eh? It’s an inclined slope - just very inclined. And yes there are tons of disabled climbers.
We generally understand that disabled people have a right to access the spaces that everyone else does. But climbing/caving is different, different than most any other activity: Access to space is controlled by ability. I have stood on ledges that are impossible to get to without a certain set of skills. If there was a ladder or a staircase, standing on that ledge would mean nothing. We can make a pool or athletic field accessible, but making such a remote ledge half way up a sheer cliff accessible by people without those abilities isn't possible without destroying the nature of that space. So there is always going to be conflict.
I've never understood this argument. Why would somebody else getting to a point through a different easy way cause another to feel like the hard way lost its value?
I like to think that each individual has a limit to their ability to access the physical world around them, which will likely go up and down through their lifetime. Factors which might affect this limit are physical or medical differences between individuals. These factors can be mitigated, such as with a prosthetic, or medication to help with altitude sickness. Humans also have ways to change the physical world to mitigate these limits.

I'm guessing that there is a road which brings you closer to this climbing area? And that most people use vehicles to get closer and leave those somewhere? That infrastructure is in place, but there was a time when it wasn't. Vehicles, great invention aren't they? You see where I'm going with this? Take away that infrastructure or take away the vehicles and the trail erosion problem is solved, because suddenly there is a massive drop in people accessing the area.

I'm not suggesting either way that those steps should be built or not; that is indeed a conflict and no one can say where the line should be drawn. But please don't lose sight of the limits of your own ability, that your limit WILL change, and the mitigating factors that are already in place that enable you to exceed your limit.
Making an on-the-record decision to not provide accessibility is grounds for a lawsuit on that basis. It doesn't matter if they think they'd win that lawsuit, it's a chilling effect, and a big one.
It’s so frustrating that city leaders can’t even try to use common sense. Where I live a parking requirement blocked a restaurant from being built and our city council publicly acknowledged that there isn’t enough space for parking and a building, but “that’s the law” so they blocked it. Lazy idiots.
Or maybe the city doesn't want businesses that are going to bring people into an area without giving them space to park the cars they inevitably bring with them.
Isn’t that the point of the parking requirement? If you don’t have room for enough parking to support the Thing, then you don’t have room to add the Thing to the neighborhood. Seems like the intended outcome.
This is not a limit of making things accessible. This is a bureaucratic/legal/funds limit. Had they told you "for accessibility, we will build an alternative route and handle the cost", would you have said "No, thanks"?
"improving things" and "mandatory requirements that, in some cases, can go against common sense" are not the same things.
‘Designing things like door handles for people with only one arm is a good idea not just because it helps those with only one arm, but also because all of us sometimes have only one arm. If we’re carrying a hot cup of tea, for instance…’
…to (very liberally) paraphrase Rory Sutherland.
I heard this from Anna Martelli Ravenscroft in her presentation "Diversity as a Dependency" [0]
[0] https://www.youtube.com/watch?v=wOpdDxJzNkw
I've heard this called the curb cut effect. (It's a subject right in 99% Invisible's wheelhouse and there is a good episode about it that mostly focuses on the history of literal curb cuts.)
1. https://en.wikipedia.org/wiki/Curb_cut_effect
2. https://99percentinvisible.org/episode/curb-cuts/
While I would agree in general, I once slipped on one of those overly steep carved-out kerbs in SF and broke my elbow... I guess if you hit a bad spot you might need a wheelchair afterwards (ok, but it really did hurt!)
So you have to do it right to keep the potential harm as low as possible, and not forget about safety in the face of rewarding improvements. And watch your step, of course.
Might also be applicable in the context of self-learning household robots and their potential to burn down that house :)
... daleks
Thank you! A large motivation behind this line of home-robot work for me is thinking about the elderly, people with disabilities, or busy parents who simply don't have enough time to do it all. I am personally hopeful that we can teach AI to take the jobs that no one wants rather than the jobs that everyone wants :)