> In July, a Waymo in Tempe, Arizona, braked to avoid hitting a downed branch, leading to a three-car pileup.
> In August, a Waymo at an intersection “began to proceed forward” but then “slowed to a stop” and was hit from behind by an SUV.
> In October, a Waymo vehicle in Chandler, Arizona, was traveling in the left lane when it detected another vehicle approaching from behind at high speed. The Waymo tried to accelerate to avoid a collision but got hit from behind.
It’s worth noting that all three of these incidents involve a Waymo getting hit from behind, which is the other driver’s fault even if the Waymo acted “unexpectedly”. This is very, very good news for Waymo.
Yes, but...there is something else to be said here. One of the things we have evolved to do, without necessarily appreciating it, is to intuit the behavior of other humans through theory of mind. If AVs consistently act "unexpectedly", this injects a lot more uncertainty into the system, especially when interacting with other humans.
"Acting unexpectedly" is one of the aspects that makes dealing with mentally ill people anxiety-producing. I don't think most of us would automatically want to share the roads with a bunch of mentally ill drivers, even if, statistically, they were better than neurotypical drivers. There's something to be said about these scenarios regarding trust being derived from understanding what someone else is likely thinking.
Edit: the other aspect that needs to be said is that tech in society is governed by policy. People don't generally just accept policy based on statistical arguments. If you think that you can expect people to accept policies that allow AVs without addressing the trust issue, it might be a painful ride.
Try replacing "acting unexpectedly" in your thought process (which superficially I agree with) with the words "acting safely."
It remains to be seen if autonomous driving systems are actually safe. But if the other driver does something that is safe, there's then an onus on the first driver to have accounted for that.
I don't disagree, but in order for the first driver to "account" for the actions of the second, they have to have some reasonable ability to predict what that driver will do. That gets us back to the theory-of-mind question.
None of these three cases involved the Waymo car behaving in ways that are not that uncommon among human drivers, and our theory of mind does not make us nearly-infallible predictors of what another driver is going to do. Your objection becomes essentially hypothetical unless these cars are behaving in ways that are both outside of the norms established by the driving public, and dangerous.
That’s true, but it’s also one of the selling points of some AI tasks. As a non-hypothetical example, the DoD hired a company to train a software dogfighting simulator with RL. What surprised the pilots was how many “best practices” it broke and how it essentially behaved like a pilot with a death wish. Possibly good in war, maybe not so good on a public road.
Your comparison is RL dogfighting? I think you need to brush up on recent autonomous driving systems…
Do you not feel they are related? In general, aviation tech tends to outpace automotive tech (I’ve worked in both). Maybe you could enlighten me?
Modern AVs are not driving like they have a death wish by any stretch of the imagination (and these systems are not developed using raw RL). They are driving safer than humans. Any concern that they are not following established best practices is entirely unfounded and strictly grounded in FUD.
I don’t think you can make this strong of a claim with the available data. Likewise, someone can’t make a strong claim in the opposite direction. The best data I’ve seen is from NTSB investigations, and it clearly shows some very dangerous behavior. But it’s just a snapshot of data.
And I think you’ve taken away the wrong point. It was about unexpected behavior, not “driving with a death wish.”
The ability of people to consistently miss/twist the point to fit their own predetermined viewpoint is tiresome.
Indeed - so how did systems with a "death wish" enter into this discussion? Well, it turns out it was by you, about which you said "...maybe not so good on a public road."
In this light, your complaint about others twisting the point seems rather ironic.
It was the way the pilots described the unexpected behavior of the RL model.
If I related a story about ChatGPT where someone said, "It wrote like it was drunk," would you insist that I'm saying ChatGPT actually imbibes alcohol before coming up with a response? I think you might be missing the point of an analogy. That tends to correlate with dichotomous thinking, which is also part of the thread and the difficulty with people understanding the intended point.
The part you missed was that the "maybe not so good on a public road" was about breaking "best practices," not about acting with a death wish. The intent was to underscore, yet again, that unexpected behavior is sometimes beneficial in a wartime environment where you want to keep the other party guessing and off-balance, but not beneficial in a public-safety domain. Again, twisting an argument to make a preconceived point rather than reading it as it was actually written. It's hard to get someone to understand a point when their biases are hell-bent on deliberately not understanding it.
And you chose to quote it, and to make the point that it is not the sort of thing we want on public roads - as if there's the slightest hint in the extensive testing of these cars that the apocalyptic scenario you chose to introduce to this discussion was anything but hypothetical and hyperbolic FUD.
And then you had the effrontery to chide dcow for responding to your claim, as if they were taking the discussion in this absurd direction. You've chosen to repeat yourself here, so I will say again that there is considerable irony in what you are doing - irony in the sense of statements being made that display a lack of self-awareness.
At some point there were numerous reports of cars using driving assist steering into the dividers at highway offramps. I believe there were in fact some real accidents that happened this way. That definitely was unexpected behaviour, and I would argue one could even qualify it as the car acting like it had a death wish, so I don't think the OP's statements can be qualified as hyperbolic FUD.
Now you're going to argue that these were early incarnations of non-AV systems (despite the name), but I think they do illustrate how these systems can behave in unpredictable (and dangerous) ways when they encounter novel situations. That's why I commend Waymo for not following the hype and keeping the environment they operate in very restricted.
Well, yes I am, because, as you realize, it is a valid point! I will also add that I have myself made the point that we have to be careful because complex systems have bizarre failure modes: just because these systems may appear to function 'sensibly' in everyday circumstances, we cannot simply trust them to be 'sensible' outside of the scope of testing.
I am, however, a believer in the relevance of empirical evidence, and I also agree (with some caveats) that human-driver performance is a valid basis for establishing whether, and to what extent, autonomous vehicles may be permitted on public roads. We are discussing the publication of some results of Waymo's extensive testing, and I stand by what I wrote in my first response here: none of these three cases involved the Waymo car behaving in ways that are not that uncommon among human drivers, and our theory of mind does not make us nearly-infallible predictors of what another driver is going to do.
I agree with you that Waymo is taking the right approach here (and, FWIW, I regard Tesla's two-faced stance as unethical.)
I doubt I would have objected to Bumby's invocation of a death wish in response to the past events of which you speak, but in response to Waymo's test results and the points I made about them, I think 'hyperbolic FUD' is justified. At some point, a rational person has to make accommodations when things have changed and arguments based on old data lose their relevance.
This bypasses the entirety of the trust argument originally stated by leveraging the very point (statistics) it cautioned against. (As stated elsewhere: "So all the bleating about statistics may be necessary, but not sufficient, to get wide-scale adoption of AVs on public roadways.")
Regardless of whether you think the decision should be based on statistics alone, my further point is that an n=3 sample size is not adequate to make strong claims. Add to that the fact that Waymo only reports the data they want to[1], and there is reason to be careful about making any claims.
Here's how that scenario plays out when we rely on organizations to self-report safety data in a competitive environment, from my experience. Even though it may start with the best of intentions, cost and schedule pressure builds. Items that would otherwise have been reported as safety incidents get classified with vague terms like "test anomalies" and essentially buried. They will still report safety metrics, but now it's, at best, incomplete and misleading. Until some event that's egregious enough forces the company to be more transparent.
[1] https://spectrum.ieee.org/have-selfdriving-cars-stopped-gett...
It is absurd of you to suppose that Waymo's testing has yielded just three data points on the safety of its cars... Just suppose there had been no incidents at all - then they would clearly have to be banned from the road, as we would have absolutely no data pertaining to their safety!
Again, missed the point. We shouldn't make strong claims about self-curated data when there's an incentive to make that data look safer than it actually was.
We know they aren't fully transparent. We also know that other well-known and well-funded AV developers have very bad practices that are highlighted when they are forced by regulators to be transparent. While that isn't a smoking gun against Waymo, it should give us pause and make one question a naive perspective in favor of a skeptical one.
If your absurd claim that Waymo's trials provide just three relevant data points is not part of "the point", then why did you make it? It does not give us any confidence in the proposition that "the point" has been well thought-out.
Furthermore, "the point" keeps shifting: recently it shifted to raising doubts about the provenance of the data whipped up from a six-year-old article. At this point, I feel that a quote is appropriate: "the ability of people to consistently miss/twist the point to fit their own predetermined viewpoint is tiresome."
As I've said elsewhere, my point was part of a larger context. My point was about how important trust is to adoption of AV tech. That goes well beyond the Waymo cases illustrated. The sample size and quality of the data illustrate the need for a broader context of information, in addition to the need to understand that humans don't build trust simply from statistical arguments.
And in the vein of trying to steel-man your position, I gave the comment to ChatGPT to see if it, too, considered the central point a claim about AV having a "death wish." Here's what it said:
"The statement highlights that some AI tasks, while effective, may deviate from conventional practices. An example is given where the Department of Defense (DoD) employed a company to train a dogfighting simulator using reinforcement learning (RL). Pilots were surprised by the simulator breaking established best practices and behaving recklessly, akin to a pilot with a disregard for safety. The implication is that while such behavior might be advantageous in a military context, it may pose risks or be unsuitable in civilian settings, such as public roads. The statement underscores the need to carefully consider and tailor AI applications to specific contexts and objectives."
So it seemed to recognize that the central point is that "the behavior" in question is "breaking established best practices" and that the "implication is that while such behavior might be advantageous in a military context, it may pose risks or be unsuitable in civilian settings". There's probably some irony in the fact that AI did better at a reading task.
"the point"
You made two points right next to each other, about sample size and bias, and the sample size point was invalid.
When called out on that, you don't get to move the goalposts and say that the second one was "the point" and the other person is "missing the point".
There's a subtle nuance you're missing. I am not saying the data is biased, I'm saying we have good reason to believe the data may be biased. I have been careful not to make any strong claims about Waymo here, because my stance is we probably don't have enough data to make such claims. It's a small but crucial difference. We would need more data (i.e., a larger sample size) to make a strong bias claim.
Given the protracted nature of this thread, I get why it's confusing. I have a couple of sub-threads that make those two points separately, and was simply trying to show how they are related. If one isn't aware of the broader context of the discussion, I understand why it may seem like goalpost moving. But in reality, I was deliberately trying to tie the two related points together because (IMO at least) they are related.
I'll help you out here, since there still appears to be some difficulty.
I've replaced your word "it" to make the point as clear as possible.
Or to put it in different terms, the sometimes-unpredictable behavior resulting from RL may be a feature on the battlefield but a bug on public roadways.
I still stand by that point. And, yes, that means dcow also missed that point. There's nothing hypothetical about bringing up real-world case studies of autonomous behavior based on RL models. We've been through this so many times now that I'm coming to the conclusion you may be arguing in bad faith, or you get so distracted by certain terms that it inhibits reading comprehension.
Ah, so we're playing the "I didn't say what I just said" game now. Here's what you actually wrote:
"That’s true, but also one of the selling points of some AI tasks. As a non hypothetical example, the DoD hired a company to train a software dogfighting simulator with RL. What surprised the pilots was how many “best practices” it broke and how it essentially behaved like a pilot with a death wish. Possibly good in war, maybe not so good on a public road."
Anyone with the slightest familiarity with language will find no credibility in your claim that in this comment, the thing that is being called "possibly good in war" but "not so good on a public road" is anything other than the one explicitly-mentioned aspect of the system that you have specifically chosen to present as an example. Furthermore it was completely reasonable for dcow to respond "modern AVs are not driving like they have a death wish by any stretch of the imagination" after you chose to make the above statement.
With your latest response, you continue to put more weight on an anecdote about this unrelated system than you do on the extensive empirical evidence from testing the actual system that is the subject of this article.
Ignoring the no-true-Scotsman-ism of your post, what's odd is that you are telling someone who actually wrote it what was intended. You've made it completely clear you misunderstood it. I've pointed out exactly where you made the mistake. Yet you can't seem to bring yourself to admit that just maybe your biases made you infer more than what was actually said. The "explicitly mentioned aspect" is the unpredictable behavior. You can tell that, not just from the wording, but from the fact that it has been the consistent throughline of the entire sub-thread. And not to beat a dead horse, my secondary point has consistently been that we should not put too much emphasis on self-curated data when there is a bad incentive to embellish it. Yet here you are.
What I am saying here should be clear to someone such as yourself, who is invoking our theory of mind in his claims: I am making a distinction between what you are now saying you meant all along, and what other people will recognize as having been your intent when you first wrote the passage in question. We cannot present proof, but nevertheless we know, beyond reasonable doubt. Your explanation does not pass the sniff test.
They're telling you what you wrote, not what you intended to write.
They're correct.
No, it was breaking best practices and behaving like it had a death wish. That's something that has some overlap with being unpredictable, but is not at all the same thing. You can have a predictable death wish, even.
I don't think I have much bias here and I agree with them. Also consider that the person that writes something is biased to think the communication was clearer than it actually was.
It's neat to have a correct secondary point, but it won't make your primary point correct, and people don't need to add a disclaimer of "while your secondary point is fine" every time they criticize your primary point.
Are we still talking about a car getting rear ended because it braked? Because you’re meant to leave enough room between you and the car in front to stop safely even if it unexpectedly brakes as hard as possible. Running into the back of a car in front of you (that didn’t just pull out) is always your fault.
I think people are often missing the point. Yes, in a rear-end collision the fault almost always lies with the following driver. But having a framework for assigning liability is not the same as having a safety framework. Consider an AV that is consistently brake-checking those behind it due to nuisance alarms. Now I have a harder time predicting what the car in front of me is going to do. Is that a safer or less-safe scenario? Sure, I can mitigate it by giving more trailing distance, but then we've traded away traffic flow (more congestion) just to maintain the same level of safety.
If you don't maintain a safe following distance for your speed, you are the one creating the dangerous driving environment. Tailgating is worse for both traffic flow and safety.
This, again, gets to missing the point. If a disproportionate amount of cars are nuisance brake checking, it increases the level of uncertainty in driving behavior. I now have to overcompensate on average to maintain the same level of safety.
Tailgating is bad, regardless of whether people brake-check or not. If autonomous vehicles are what it takes to get you to stop tailgating and follow at a safe distance, then that is just an added bonus.
There is no single definition of tailgating other than not being able to stop within a reasonable distance. So it is impossible to declare what constitutes tailgating, especially in the mixed case of human drivers and robot drivers (who have a reputation for nuisance braking).
Why, for example, do you think trainers post "Student Driver" stickers on their cars? It's because it signals the driver may be more unpredictable and people (rightly) tend to give them wide berth. You're essentially advocating that everyone treat everyone else (and every robot) as a student driver. That's fine for a dichotomous safety mindset, but other people would prefer to recognize the tradeoffs with that approach.
Or maybe you're just deliberately bent on misunderstanding my point, I can't read your mind :)
What is this nonsense? The safe following distance is determined by how fast you can stop, not by who is driving the vehicle you are following.
No, it signals they have less experience and are more dangerous drivers. When it comes to driver predictability, student drivers tend to be far more predictable than the adult, overconfident drivers. I've never seen a student driver roaring past me in stopped traffic on a shoulder, or floor the gas to pass me through a light because they didn't want to turn in a turn only lane, or any of the other unpredictable things I see on a regular basis from experienced drivers.
I think that if more drivers treated the people around them as student drivers, our roads would be a lot safer.
I know that if people followed at a safe distance then we would have fewer traffic jams.
Edit: You also seem stuck on the idea that Waymo unsafely unexpectedly brakes more often than human drivers, yet that isn't clear to me from the data we have. Indeed it seems like the opposite is true from the data.
So when you're driving, do you somehow know the braking distance of every car and reaction time of every driver around you? You don't, and since their braking distance is needed to know your own braking requirements, you have to use heuristics. Maybe your heuristic is "assume everyone will cram on the brake, full tilt, at any time." But, that is not a pragmatic solution given our current infrastructure. We don't have the road capacity for everyone to drive that way. So we make tradeoffs. Part of that tradeoff means anticipating what other drivers will do and adjusting accordingly. Naturally, this will trade some safety for other things we value. That is the reality of the world we live in. You seem to be advocating for something else. The OP was that we might struggle to apply such heuristics without a theory of mind to guide us.
We probably just disagree on the student driver vs. overconfident drivers. I feel like I'm pretty good at anticipating aggressive drivers, and I fear them much less than the super-tentative driver that tends to put other people at risk. But unless you have data, we're just talking about subjective opinion here, so it's not really worth delving into further.
Sure. But again, it doesn't really fit with the world we live in. Should we all, in general, drive more defensively? Sure. But I doubt our infrastructure will allow for the 25+ car lengths between vehicles that the NHTSA recommends, so we're stuck making some tradeoffs.
I agree on the data point. I'm not making strong claims about safety. I'm making claims about uncertainty. One thing that is clear (and I've advocated elsewhere) is that we don't have good data (in part, because companies get to share only what they want in many cases), which makes uncertainty greater.
Note: 75 mph is about 120 km/h
They recommend estimating 4.5 seconds of stopping distance. But that’s very conservative and probably not realistic.
That's ridiculous. There's nothing "unrealistic" about 4.5 seconds of stopping distance - how could there be, are you thinking that the highways physically wouldn't fit all the cars spaced at that length?
The actual calculation involves how fast the vehicle you’re following can decelerate, and your reaction time.
You can (usually) follow a large semi a bit closer because its braking distance is longer than yours.
But because of reduced visibility you can end up with “revealed brake checkmate” where the semi swerves into the next lane because a vehicle is stopped in the lane, which you then need to swerve or hit.
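A minimal sketch of that calculation, with assumed (purely illustrative) reaction time and deceleration numbers rather than anything measured:

```python
# Rough sketch of the required following gap, assuming constant deceleration
# for both vehicles. All parameter values below are illustrative assumptions.

def required_gap_m(speed_kmh: float,
                   reaction_time_s: float = 1.5,      # assumed follower reaction time
                   follower_decel: float = 6.0,       # assumed follower braking, m/s^2
                   lead_decel: float = 8.0) -> float: # assumed lead vehicle brakes harder
    """Minimum initial gap (meters) so the follower stops before reaching the lead."""
    v = speed_kmh / 3.6  # km/h -> m/s
    follower_stop = v * reaction_time_s + v**2 / (2 * follower_decel)
    lead_stop = v**2 / (2 * lead_decel)
    return max(follower_stop - lead_stop, 0.0)

for kmh in (50, 80, 120):
    print(f"{kmh} km/h -> gap of roughly {required_gap_m(kmh):.0f} m")
```

With these assumptions the gap grows from roughly 25 m at 50 km/h to over 70 m at 120 km/h; and if the lead vehicle brakes more gently than you can (the semi case above), the required gap shrinks.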
You keep making the same argument over and over even though I've repeatedly explained that it does not match our understanding of how traffic jams form. Traffic jams are caused by braking, especially hard braking. Tailgating increases the need to brake hard if the person in front of you brakes or someone needs to merge. Following closely does not increase throughput; it is simply bad driving with no upside.
Wow, if you read the behavior I described as "aggressive driving" then we have completely different standards. I was describing reckless driving that blatantly violates the rules of the road.
Stop using excuses for your bad behavior and you might even be able to become a better driver.
What behavior of my own have I stated? Or are you inferring unwarranted conclusions to make ad hominem arguments?
Guess what one of the main issues has been in AV...nuisance braking. So much so that they suppress safety-critical actions to avoid unwarranted braking.
I also think you're confusing the proximate causes of congestion for the root causes. Traffic congestion is a load-sharing problem. Tailgating is, in part, a symptom of inadequate capacity. You are advocating a solution that exacerbates it by reducing carrying capacity.
To reiterate (yet again) we aren't in disagreement about whether slowing down, or increasing following distance, will increase safety. It will, but also, that's not the point I was making. I'm just saying that is a superficial understanding of the problem and you aren't accounting for the tradeoffs. Those tradeoffs are the reason your proposal misses its mark.
Why are we still debating something that has been answered by science? Stop assuming your intuitions are correct and look at what the science actually says about how follow distances affect the maximum capacity of a road.
I’ve entirely lost any semblance of a point you might have had initially. You’re doubling down on a weak stance making hyperbolic claims like “our infrastructure can’t handle cars leaving a safe following distance”. What nonsense!
It’s more effective to let your initial point stand and let the discussion run its course. You’re working against yourself now.
You seem very combative and seemingly deliberately missed the connections in other sub-threads, so why don’t you tell me what point you think I’m making and I can tell you how it’s accurate.
But you aren't overcompensating. Instead you are driving safely. If sudden braking is so rare that you feel comfortable riding right behind a car in front of you then when they do suddenly brake (which will happen eventually), you are now in a very dangerous situation.
To add onto the sibling's point, the "safe following distance" has a rule of thumb of "3 seconds behind". At 65 mph (I'm assuming you're in the US), that is approximately 300 feet.
I'm willing to bet that's around 10 times what you were considering as a safe following distance in your head, and probably still 5 times more than what you were picturing for the safe distance behind a brake checking AV.
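To spell out the back-of-the-envelope math (a simple unit conversion, assuming constant speed):

```python
# The "3 seconds behind" rule of thumb expressed as a distance.

def following_distance_ft(speed_mph: float, gap_seconds: float = 3.0) -> float:
    feet_per_second = speed_mph * 5280 / 3600  # 1 mph ~= 1.47 ft/s
    return feet_per_second * gap_seconds

for mph in (25, 45, 65):
    print(f"{mph} mph x 3 s ~= {following_distance_ft(mph):.0f} ft")
# 65 mph works out to about 286 ft, i.e. roughly the 300 feet mentioned above.
```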
The problem with the “safe distance” (my new car has it built into the cruise control) is that it’s more than large enough for a vehicle to merge into, which repeatedly happens.
As more and more cars default to this safety feature, however, it'll start to even out.
If it were doing it consistently then that would be less of an issue. The problem is that complex systems make decisions for complex reasons and are very difficult to predict as a result.
That said, if you routinely tailgate the driver in front on the assumption that nothing will go wrong, then you've chosen to accept the consequences when an otter runs out in front of them (hey, I didn't see that coming either) and they suddenly brake at the limit and you're sitting in their back seat. Or a steering tie rod breaks in a classic car coming the other way, which hits the car in front of you head-on, and now they're stationary.
The question of how much additional caution (in terms of lower speed limits, longer following distances etc.) is optimal in terms of overall QALYs is, I feel, vastly under-considered and under-discussed.
The rules for every vehicle I have ever operated on land, air and water require the overtaking vehicle to maintain separation.
I’m not sure this is the blanket case. Hot air balloons, for example, get right-of-way regardless, on the assumption they have less maneuverability. Weird edge case, I know, but just throwing it out there to underscore the danger of absolute statements.
But who gets the right of way between 2 hot air balloons?
Funny story, I actually "crashed" in a hot air balloon as a kid, when the hill we were landing on had a draft running up over it that caught the balloon after the basket had touched down, and dragged us along the ground sideways for a good quarter mile.
If I recall correctly, the lower one has right of way because visibility from lower to higher is blocked compared to higher to lower. It is easier for the one further up in the air column to spot and react to the lower one. Going up is also likely a less dangerous proposition than going down.
This is half remembered from a Snowmass balloon rally conversation.
Haha well in that case, I’d say the ground had right-of-way :-)
See https://www.law.cornell.edu/cfr/text/14/91.113
Basically if on the same level the one on the other’s right has the right of way.
But a plane in distress has right of way over the balloon.
I have to admit I have not piloted a balloon, but even a sailing vessel overtaking a power boat has to avoid the vessel in front. I would also question the rationality of operating an aircraft you cannot steer :)
This discussion appears to have run off the road with an unexpected turn.
Alternative example:
What is your reaction to this data published by Waymo? Do you agree that many would reasonably conclude that this particular AV system was safe in these cities? I say yes. I write this as someone who is cautiously optimistic about AV. Waymo seems genuinely lower on the hype compared to other AV companies. I hope they continue this path of higher transparency to encourage other AV companies to do the same.

You're right. Also, if AI drivers are safer, there will be knock-on effects of making humans drive more safely as well.
Braking to avoid hitting an obstacle (like that tree branch in the first example) is hardly "acting unexpectedly".
Wat.
Depends on the size of the tree branch.
But also, being hit from behind at low speeds was a pretty common thing in early testing in Mountain View, when they were using the Google name on cars. That there are only a handful of incidents reported in this report means either the software has gotten better at communicating its intent to other drivers, or the driving public is aware that Waymo cars are way more cautious --- if they only make lane changes and unprotected turns by engraved invitation and everyone knows it, that's fine too.
In the early days, it seemed like it might be appropriate to install a 1979 regulation 5 mph rear bumper on these cars, because they'd likely get hit often enough.
As a thought experiment: Let's put you in a driving simulator, and ask you re-drive the same scenario 10,000 times. In each scenario, we randomly change the size of the branch -- both weight and volume. Repeat the same test with 100 other drivers. Repeat the same test with 100 different AV algorithms. I am sure each driver would have their own definition of "branch too large to safely drive over".
What concrete point are you trying to make?
Really? Also, this editorialised phrase "pretty common thing in early testing". Again, to me, so vague as to be meaningless. Common? From what perspective? Early testing? From what perspective?

The implication is that drivers intentionally ram Google vehicles when they have a ghost of a reason to do so.
Nah, the implication is that many human drivers speed, tailgate, don't pay attention, etc. There's a reason rear end collision blame is usually applied to the car behind by default.
On the road, sometimes I feel like I'm the only one following the rules. :/
My recollection is during the early on road testing, Google was required to report on all collisions, regardless of severity (which another poster mentioned isn't the case here), and that in most of those collisions, the Google vehicle was hit from the rear at low speeds when the driver behind expected it to go through; without video or other imaging it's hard to judge the exact circumstances of course (and even with video it can be difficult). There may have been a couple inattentive drivers that ran into them with more speed. And the collision where the Google car tried to change lanes into a VTA bus that their algorithm predicted would move for them, being a professional driver and all.
From driving near the things, they're very cautious and sometimes start to move and don't (I saw one try to make a lane change for about a mile on El Camino before it was able to, turning the blinker on and off the whole time).
You know, before they switched the name to Waymo.
Braking for a tree branch of any size seems preferable to running into highway barriers[1] and fire trucks[2].
[1] https://www.kqed.org/news/11801138/apple-engineer-killed-in-...
[2] https://abc7news.com/tesla-autopilot-crash-driver-assist-cra...
Not for a branch that's 2 inches long and a fraction of an inch in diameter.
Our thinking-fast reflexes are to pretty much avoid any obstacle, whether living or dead. Hopefully our thinking-slow brain has time to do a proper evaluation before doing anything drastic.
There's at least one other alternative: selection bias. Since there is no standard industry definition, companies are allowed to not report many incidents[1].
So it may not be reported unless it meets Waymo's (non-independent) selection criteria. I think most people can at least recognize there is a potential conflict of interest when objective reporting isn't required.
[1] https://spectrum.ieee.org/have-selfdriving-cars-stopped-gett...
But this certainly could be construed as unexpected:
Are you saying you don't understand why unexpected behavior causes anxiety? It's a pretty well documented effect, from rats to humans.
It was:
The "Wat" probably refers to the fact that this seems unrelated. Dealing with mentally ill people is anxiety-inducing because they act unexpectedly... so what? Lots of things are unexpected. They shouldn't all be drawn into the analogy that says "well, that thing, plus a load of other things, can induce anxiety, therefore that thing should be tarnished with the same brush as all of those things."
People drive in unexpected ways all the time. Of all the criticisms to level at them, "the thing you're doing plus a load of other stuff can induce anxiety" probably isn't top of the list.
I am not painting them with the same brush, I'm drawing an analogy to help people understand the context better. In this case, public policy will dictate to what extent AVs are allowed on public roadways. That, in turn, is dictated by trust. I'm pointing out that "trust" may be incompatible with "unpredictability." I'm not sure what throughline you're drawing, but you seem overly hung up on the use of the word "anxiety," and it's causing you to miss the real point.
So to put a finer point on it, people need to acknowledge that public trust is necessary to wide-scale adoption of AV tech. Plenty of psychological research shows how we aren't intuitively wired to understand statistics. So all the bleating about statistics may be necessary, but not sufficient, to get wide-scale adoption of AVs on public roadways.
It wasn't just the word anxiety; that's a mischaracterisation, and another psychology-adjacent misstep with "hung up on". Your point wasn't great, or at least was very poorly articulated, and that's what caused me to "miss the real point". You seem to have clarified it with this comment, so my challenge to your original point seems to have helped.
Apologies if you think it was a mischaracterization. I was trying to figure out why you missed (what I consider) a pretty straightforward connection between theory of mind and predicting behavior. In retrospect, using a term like "mentally ill" is too loaded a term and distracts people from that point because it can be triggering. I still think it's a valid point, though, and plenty of people seemed to follow it just fine.
I think you're understating it.
Mentally ill people act far off the cuff. If I'm walking outside, people can behave unexpectedly (but within the parameters of behavior that doesn't make me anxious).
Imagine: stopping instantly to bend over and tie one's shoes or to look at a storefront. Taking up multiple spaces on the sidewalk. Dropping an item that causes a loud noise. All of these are unexpected movements that require a reaction.
However, if somebody screams about CIA conspiracies or has very erratic mannerisms, that would create more anxiety.
So, apparently their point is that AI behavior on the roads might be more jarring than normal jackass human behavior on the roads.
Sounds like SUV driver was on their phone at the junction, then continued to mess about with it as the cars pulled away.
I imagine the collision speed was quite low since they just left the junction. There's a reason things like following distance and looking in front of you while driving™ exist.
Others may not necessarily agree, but at least anecdotally, a sizeable portion of drivers I see make all kinds of mistakes (the law of averages dictates more of them are so-called "neurotypical" than not, no?).
"Acting Unexpectedly" can often mean following the actual laws and general guidelines for safe and/or defensive driving. I would hazard a guess that sometimes doing the intuitive thing is, in reality, unsafe and/or against the law. If the car does this in 99% of circumstances, and still gets rear-ended, who is really the problem here?
My wife gets triggered every time I drive 'only' the speed limit.
Driving the speed limit with people whizzing past you is more dangerous than following the speed of everyone around you.
1. The other people are causing the dangerous situation, not you. That is not justification for you to do so too.
2. Most of the time it is a false perception. You only notice the people whizzing past you, not the people driving the speed limit along with you because they never pass you (a variant of survivorship bias). This of course depends on where you are, but most people in fact do drive around the speed limit.
Counterpoint: Google Maps thinks you can drive an 86-mile trip from Springfield, MA to Albany, NY in 80 minutes, a route which is patently impossible if you're driving the speed limit; you cannot drive 84 miles on I-91 in 76 minutes, an average just over 66 mph, on a road with no segments posted above 65 mph, without exceeding the speed limit.
https://www.google.com/maps/dir/Springfield,+Massachusetts/A...
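To spell out the arithmetic behind that (using only the figures quoted above):

```python
# Average speed implied by the quoted I-91 leg: 84 miles in 76 minutes.
miles, minutes = 84, 76
avg_mph = miles / (minutes / 60)
print(f"{avg_mph:.1f} mph average")  # ~66.3 mph, above a 65 mph posted limit
```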
But also, this claim just doesn't stand up to scrutiny. Cars take up a lot of space. You can't just drive through them, you have to take affirmative action to change lanes and pass them. You'd find yourself behind people driving the speed limit pretty often, which you will notice.
That is no counterpoint. I am talking about people, not Google Maps. Also, the standard freeway speed limit is 65 MPH AFAIK, so I-91 seems like an exception that Google Maps is not accounting for (and perhaps human drivers as well).
I assume you're implying that you're always driving above the speed limit and you're saying you don't find yourself passing people. There could be many reasons for this. You could be in one of the exceptions that I mention, or you could be driving at times and/or along roads without many people. You could also be selectively remembering things; you are much more likely to remember passing people if the incident is frustrating and less likely if it is not.
You cannot average 66 mph without going over 65 mph, and the states involved don't post speed limits over 65 mph. You cannot get this result by a simple oversight under the assumption that speed limits represent the speed people actually drive.
Consider what inputs might lead Google Maps to such a conclusion.
Another thing is, by driving the speed limit I am reducing the average speed on the road. So any driver whose heuristic for choosing a driving speed involves some kind of averaging will drive slower because of me.
Do you care about justification, or do you care about being safe?
I've heard that. In driving safety class they taught us that the speed limit is safer. I tried to look up a credible source either way and came up blank. Do you have one?
I go by feel. If it feels much safer to speed, I do. Otherwise I'm in the right hand lane, usually a safe stopping distance behind a big rig for aero gains.
I think the speed isn’t where the focus matters. Obviously speed is a factor in severity and a driver who is bombing down the highway is probably more dangerous than your average bear, but safe follow distance is the thing people just overwhelmingly do not practice or care about. If you’re able to react in time if something happens in front of you, that’s what is going to keep you out of (preventable) trouble more than if you’re sticking to 60 instead of 70 on the highway.
The unfortunate thing is people take your propensity to maintain a safe follow distance as an invitation to cut aggressively in front of you, potentially across multiple lanes.
It sucks all around, but I don’t think AVs are the solution, and I definitely don’t trust that any company producing AVs is delivering a product that does what it claims. Waymo doesn’t acknowledge here the part where their vehicles have human operators take control when the car doesn’t know what to do. I assume they’re not including that data. That’s going to skew the result. Even if the car is super safe in most driving conditions, ignoring what is arguably the least safe conditions the car can be in in your data analysis is fucked and intentionally dishonest.
I don’t think you get the aero gains at safe stopping distances, though.
Ah, yes, kill or be killed.
If somebody is whizzing past you and you are going the speed limit then they aren't going 5mph faster than you. I've never been on a road where everybody is consistently speeding by so much that it could be perceived as "whizzing."
Ordinary neurotypical drivers act "unexpectedly" on the road all the time. I know that I would brake if I saw a downed branch. And people do much much much more. Suddenly change lanes on the highway with no signaling? Check. Brake suddenly because you almost missed your turn without looking to see if anybody was following close behind? Check. Drive over non-lane portions of the road because you were late seeing your highway exit? Check. Swerve suddenly because you dropped your phone between the seats while texting? Check.
I've seen people reverse up a highway onramp.
"all the time" means it is not unexpected. Humans do mistakes and omissions that you've subconsciously learned that humans do, so you can adjust to deal with that.
The difficulty with AI is that it behaves like a completely alien being, making errors that are alien and unpredictable to a human driver.
Alien and unpredictable until we have enough experience with them to know their edge cases.
From my experience with LLMs, AIs even when called to be creative are fairly consistent, even if at first they seem to be producing output that’s more creative than human norms.
Because of the number of interfaces in the environment, there are a near-infinite number of edge cases. The main point is that because we evolved a theory of mind with humans, we can more reliably narrow that number of edge cases with human drivers without having to experience them first. Having an AV learn those edge cases bears some risk, and that risk may be carried by unwilling members of society.
Unwilling members of society are already carrying the risk of every nascent driver entering the road for the first time solo and every aging driver who hasn't yet had the accident that loses them their license.
That's the point of the risk part of the comment. Because we've evolved to have a theory-of-mind with other humans, we can at least know or intuit some of that risk. That's different than accepting a black-box risk. The whole point of the trust part of the comment is that, in order to build trust, we need to have risk-informed decisions.
For example, just because someone has their turn signal on and is slowing down doesn't mean they're going to turn.
Each individual behavior is indeed unexpected. When I go through an intersection I don't expect that somebody blows through a red light and t-bones me. But that does happen.
Perhaps a self-driving car's unexpected behavior is on the "too defensive" side while human drivers are on the "suddenly do something wildly dangerous" side, but the former is easier to account for as a driver rather than more difficult.
I think both points are true- people act unexpectedly all the time, so much so that we've come to expect the unexpected, by the sophistication of our theory of mind. I live in a city where erratic driving is commonplace, but that's in relation to the legal norms. New local norms become accepted, which you can anticipate once you adapt. Some of these 'norms' are handy time savers, others are incredibly dangerous and result in frequent accidents, but persist anyway.
How will AI drivers navigate these 'cultural' differences? Will they insist on following the letter of the law (presumably, for liability) or will they adapt to local practices and potentially lower their overall collision rate with humans driving poorly. Interesting near term question.
You’re being selectively reductive.
The unexpected is, by definition, what was not predicted.
The point is that "unexpected" is supposed to be within a norm, and self-driving cars cannot tell what a norm is. The idiom is not about the word "unexpected"; it's more about the relationship between chaos and order.
The swarm doesn't, though. I can rely on most drivers to avoid black swan events.
But unlike an automated vehicle, when a black swan event does occur in a human driver you can't disassemble their thought process, correct it, and apply that correction to all other human drivers unfortunately.
The question is how different is the set of unexpected things humans do from the set of unexpected things waymo/autopilots do/will do.
Also sometimes I can intuit that the driver in front of me is probably going to do something dumb because of other minor things I've observed about their driving style.
I'm not scared of mentally ill drivers. I'm scared of rich 16 year olds. I'm scared of drunk drivers. I'm scared of drivers sitting so low they can't see most of what is happening around them. I'm scared of drivers having seizures while driving (my mom was hit a while ago by a man who lost his license due to seizures, and still refused to stop driving). I'm scared of drivers who drive without a license because "fuck them, I drive when I want to". I'm scared of people mixing up gas and brake pedals (just got hit by one), in cars which can go 0-60 in 2.6 seconds weighing 6000 lbs.
Slight trolling here: That is a lot of scared. Do you also drive wearing a helmet and protective body suit?
If I have to guess, probably the man's livelihood depends upon driving to a job. That is probably why he kept driving.

Oh, absolutely not. He was like 80, retired, and refused to stop driving. Hit my pregnant mom.
Another one, suspended license, got drunk, and decided to play pinball with the car my fiance was in, causing lifelong injuries.
You can't get away from those. America is just a fun place.
I'm sorry to hear about these tragedies in your life.
I disagree. I recommend that you either leave the US for a different highly advanced economy, or find a US city with decent mass transit. Yes, it will require major changes to your life. Your chance of personal injury while riding mass transit is virtually nil -- a rounding error.

Ah, got it, so your solution is to uproot my entire family to optimize for this one issue. Damage is already done. Plus, where would I go? Who would take an old man like myself who won't be an ROI on taxes paid before retirement?
If these tragedies you describe keep happening (as described), and the fault you identified points directly to the problems of American drivers, I find it shocking you suddenly describe it as "this one issue". Either it's as problematic as you described or it's not.
Something can be quite problematic but still cost more to cure than the harms incurred.
And if one believes that, e.g. self driving cars will do a lot to address it in the next decade, they especially might not want to incur the costs of the alternatives.
This specific problem is just as much a design flaw as it is a PEBKAC issue.
You have two very similar pedals that perform polar opposite functions right next to each other, and they are both operated by the same foot.
I'm surprised this isn't a bigger problem.
I had to look up the 'PEBKAC' acronym, but I think you allude to the problem of human factors engineering. It's commonplace in aerospace, where safety-critical, time-sensitive decisions must be made, and humans are in the loop. I would extend this to autonomous driving systems, particularly when you expand the system boundaries beyond the car itself. Humans are part of that human-car-environment system, whether as pedestrians, passengers, or other drivers and we should give them consideration.
an average of 44 per day in the US. So... common, but, relatively speaking, uncommon. And apparently there is software that can help mitigate this.
I don't think this applies to any of the incidents mentioned in the article.
In fact, if you don't have enough room to react to "unexpected" behavior, you are at fault lol.
When a human driver must emergency-brake for a downed branch, it's okay. When an AI does it, it's unexpected and needs to be hyperanalyzed. I swear, the trolley-car problem is absurd; it's poisoned all debate. 99% of crashes are people not being able to stop in time because they don't drive defensively.
There's a good reason for this. It's because the human can be interrogated into what was going through their mind whereas many ML models cannot. That means we can't ascertain if the ML accident is part of a latent issue that may rear its ugly head again (or in a slightly different manner) or just a one-off. That is the original point: a theory-of-mind is important to risk management. That means we will struggle to mitigate the risk if we don't "hyperanalyze" it.
You're missing the context. The AI didn't actually do anything unexpected, unless you expected it to try and drive through a downed branch. The AI behaved exactly as it should. The unexpected part was when the car behind the AI didn't see the branch and, therefore, didn't expect the AI car in front to stop. Unexpected doesn't mean wrong.
Cars can do unexpected things for good reasons, as the AI did in this case.
I'm taking in a larger context. I think just reading the three cited examples is an incorrect approach. For one, Waymo isn't sharing "all" their data; they've already been highlighted for bad practices of only sharing data from the incidents their own team decided were bad decisions. That's not necessarily objective, and it can also lead to perverse incentives to obfuscate. So we don't have a great set of data to work with, because the data-sharing requirements have not been well-defined or standardized. Secondly, if you look at reports of other accidents, you can see where AV developers have heinously poor practices as it relates to safety-critical software. Delaying actions as a mitigation for nuisance braking is a really, really bad idea when you are delaying a potentially safety-critical action. I'm not saying Waymo is bad in this regard, but we know other AV developers are and, when you combine that with the lack of confidence in the data and the previous questionable decisions around transparency, it should raise some questions.
Cory Doctorow - Car Wars https://web.archive.org/web/20170301224942/http://this.deaki...
(Linking to the web.archive version because the graphics are better / more understandable when in the context of some of the text)
Chapter 6 is the most relevant here, but it's all a thought provoking story.
Quite frankly, the vehicle in front of you is allowed to stop at any time for any reason (legally and practically, as the front vehicle has better visibility of road conditions than the follower), and it is always incumbent upon the driver to the rear to leave room.
If humans can't do that, the solution is probably more automation.
Has the nuisance braking problem been completely solved? If not, I don't know that I'd agree that more automation is necessarily the answer. More good automation, maybe, but there's a logical jump there.
The Uber fatality from years back showed that the software used "action suppression" to mitigate nuisance braking. The idea that that would be acceptable on a safety-critical software application should give us pause to consider that more automation is the knee-jerk solution.
"Nuisance braking" is, like jaywalking, a phrase that prioritizes one party's use of the road over another party's. The best policy is still to leave the vehicle in front enough room to brake for any reason. Mostly because "nuisance braking" hasn't been solved in humans either (ever been behind someone who panicked when they realized they were falling asleep behind the wheel? I have.)
I think you are mischaracterizing the problem. Nuisance braking seems to be far more prevalent in AVs than in humans, partly because image classification has more uncertainty.
Now couple that with a general approach of "when in doubt, brake" and the hacky workaround of suppressing braking if it's occurring too often, and you've got a bad nuisance braking problem that's primed for a safety incident.
But again, this is disjointed from reality. I'd agree that if we all left the recommended 4.5 seconds of stopping distance, we'd all be safer. But that's not how our roads were built and that's not how humans drive. You're tilting at windmills here to make a point that doesn't need to be made because it doesn't apply to reality.
Oh, it emphatically is how our roads were built. Most miles in the States were laid down in an era when cars didn't do more than 45. As for how humans drive... I'll direct the audience's attention to the roughly 40,000 road fatalities per year in the United States. Even if they brake-check more than human drivers, an automated vehicle being forced to follow its programming to maintain a healthy follow distance may very well save lives, especially in an ecosystem where they are the dominant vehicle on the road as opposed to human-operated vehicles.
Fair enough. I should have said not how our roads were built in the context of modern cars and population levels. I thought that assumption was a given. So, sure, we could program every car to obey the speed limit and have a 4.5 sec following distance. But do you think those tradeoffs will be palatable to society? The last portion of my comment was aimed at the oft-ignored aspect that public policy will govern the extent of AV adoption. You have to design your product in that environment, not an abstract one that's been sanitized from all those aspects. The best product in the world is still worthless if society says they don't want you selling it.
These two phrases are strange to me in the context of OP:
and: Are you implying that the three specific incidents quoted by the OP are acting unexpectedly? I hope not. Most good, aware drivers would do the same. If you disagree, please provide alternative driving actions that you consider expected. And, would the safety outcomes be better? Unlikely.

No, I'm making a broader claim about the nature of AI in safety-critical domains. I am also saying we should be careful about drawing strong conclusions from a sample size of n=3 from a potentially questionable dataset.
I would imagine that, with more experience, anticipating the (more consistent) actions of a machine would be easier than anticipating the actions of an unknown human in an unknown state.
The entire point of this line of discussion is that an ML-based system with extremely weird and unexpected failure conditions and failure states ISN'T "more consistent" than a human who might follow more closely than physics says they should but otherwise is ACTUALLY predictable, because they have a mind that we have evolved to predict.
ML having completely unpredictable failure modes is basically the entire case against putting it anywhere. What would you call a vision system that misidentifies a stop sign because of a couple of unrelated lines painted on it, other than "unpredictable"?
Humans are nothing if not adaptable. We will adjust our expectations.
True, but that doesn't mean it's always a better result. I've adjusted my expectations that software on my smart tv will be glitchy and have interface errors with a lot of apps, but that doesn't mean that's the only or best way to program a system. We adjust to poor quality all the time, but I prefer not to lower my expectations on a safety-critical system.
I've got some bad news for you.
Is it that 1 in 5 adults in the US live with mental illness[1]?
1. https://www.nimh.nih.gov/health/statistics/mental-illness
Which raises questions for me about how traffic behaves if 25-50% of cars are self-driving. What "feedback loops" might occur? I'd be interested to see large scale tests that demonstrate how self-driving cars deal with each other in high traffic areas.
I don't even get why these count as negatives against Waymo. There's nothing it can do to stop idiot humans driving too closely or just driving into it.
Before jumping to conclusions, are we sure these Waymos hit from behind didn't awkwardly and randomly stop in the middle of a busy intersection (where no sane human driver would)?
I know YouTube videos aren't always representative of reality, but there are some videos of these cars randomly driving extremely slowly in very busy intersections which might be a contributing factor to getting rear-ended, even if it's not Waymo's "fault" from an insurance perspective.
> Before jumping to conclusions...
Nah, I'm jumping straight to this conclusion: if you hit something in front of you that didn't leap out in front of you at the last second, you fucked up. The object that you hit was erratically slowing and speeding up? You should have left more room to allow for the unpredictability. It was raining, can't see, the roads are slick? Leave more room in front of you.
Yes, I way too often don't do that, either. But if there's something in the road ahead of me, and I hit it? Man, there are few scenarios where I can claim that there was nothing I could do. And in the case of a full-sized vehicle in front of me, I don't care how erratically it's driving, don't run into the back of it.
Right, but also imagine how traffic would be if everyone drove with the requisite distance to do that. Can you imagine I-5 traffic if everyone had 10 car lengths between them?
So while your statement isn't wrong, it's also not always pragmatic in the real world.
I'm guessing an accident caused by not leaving enough room to stop safely is going to cause a bit more traffic than the alternative.
So you're saying we should all drive with a 10 car buffer, then?
If not, then you already recognize the probability is less than 100%, so that has to be baked into your statement.
> So you're saying we should all drive with a 10 car buffer, then?
The only comments saying that are...yours. If your argument is so lacking that you need to argue in bad faith, perhaps it is best to not bother at all.
Or perhaps you are not aware of the proper following distance (and therefore, part of the problem about which you complain). Two car lengths (EDIT: seconds, not car lengths; oops) is the general advice.
I'm just trying to understand exactly what they are advocating, because so many people seem to be making a dichotomous safety choice. It's not a simple model and my point is there are tradeoffs.
No. It's speed, roadway, and car dependent. Two car lengths isn't even sufficient at 25mph let alone at 70mph.[1] Which all goes to show how poorly people tend to think about these things and quickly resort to overly simplified mental models.
[1] https://one.nhtsa.gov/nhtsa/Safety1nNum3ers/august2015/S1N_A...
> No. It's speed, roadway, and car dependent.
Moreso that my post has a mistake: two second following distance, not car lengths. Brain fart on my part; apologies for causing you to have to find a URL. But as general rules go, that increases the distance as speed goes up. No, it doesn’t account for everything, but good enough for most circumstances.
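To put rough numbers on the two-second rule (a back-of-the-envelope sketch of my own, not anything from the NHTSA link; the 15 ft car length is an assumed rule-of-thumb figure):

```python
# Rough sketch: how much road a 2-second gap covers at various speeds.
# Assumes constant speed; 15 ft is an assumed typical car length.

MPH_TO_FPS = 5280 / 3600   # feet per second per mph
CAR_LENGTH_FT = 15         # assumed rule-of-thumb car length

def two_second_gap_ft(speed_mph: float, gap_s: float = 2.0) -> float:
    """Distance covered in gap_s seconds at speed_mph."""
    return speed_mph * MPH_TO_FPS * gap_s

for mph in (25, 45, 70):
    gap = two_second_gap_ft(mph)
    print(f"{mph} mph: ~{gap:.0f} ft (~{gap / CAR_LENGTH_FT:.0f} car lengths)")
    # 25 mph -> ~73 ft (~5 car lengths)
    # 45 mph -> ~132 ft (~9 car lengths)
    # 70 mph -> ~205 ft (~14 car lengths)
```

Which is why "two car lengths" sounds reasonable at parking-lot speeds and is wildly short at highway speeds, while "two seconds" scales with speed automatically.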
Of course it's pragmatic. Free-flowing traffic at 50mph beats people zooming up to 70mph, then braking, then zooming again. Free-flowing traffic at 50mph annihilates the traffic caused by accidents.
Traffic is rarely “free flowing” at any speed on these kinds of roads. Often I see “moving roadblocks”: clumps of cars going around or just under the speed limit jockeying around each other, impeding other traffic from moving around them. So-called “defensive drivers” are often unpredictably overly cautious: I’d wager they are at least an indirect cause of accidents quite often, but are severely under-represented in the statistics (if/when they’re represented at all).
Indeed - this is the "70mph is slower than 50mph" point I mentioned.
Not sure if we’re talking about the same thing. These slower drivers aren’t any safer: they’re weaving around each other and impeding faster traffic from passing. They’re arguably more dangerous because of that. And the flow of traffic is constrained to the speed they choose, which is on average slower than it otherwise would be.
"Can you imagine how bad traffic would be if everyone drove safely?" is a hell of a take.
So is "can you imagine how much infrastructure would cost to ensure everyone drove completely safely"
Like most real-world engineering, there is a cost-benefit balance. Could we design an interstate highway system that allows everyone a 10 car buffer? Sure. Would we like how much it costs, the effects on the environment, etc.? Probably not.
As an aside, your comment seems to go against HN guidelines by taking the least charitable interpretation of the comment.
Why?
You might have forgotten the ultimate rule of road safety: Everything is a tradeoff. Safer is only sometimes better. Otherwise the speed limit would be 10mph, because it's quite a safe speed.
I don't know what I-5 traffic is like, and it's kind of weird that you would refer to a local road on a global forum, but I'll assume it's like the M25.
Roads like that are currently operating at bursting point. There are incidents and accidents every single day and constant police presence is required to unblock them. If you alleviate congestion, more people use the road. They just go back to bursting point. In other words, it's utterly insane.
Can you imagine if there were accidents on railways or in the air every day? Imagine the scandal if train operators were found to be unsafely squeezing more trains onto a line than it could handle. Roads are stressful, inefficient and shit. Enforcing a safe stopping distance and pricing journeys accordingly, like trains, is where we want to be.
This is why cars do not scale: by the time traffic slows down, there are 5-6 times more cars in the lane than it can safely handle. So by the time people are asking for "one more lane", they really mean six times as many lanes: a regular 4-lane highway would need 20 more lanes!
Moral: support public transit.
Most traffic is not caused by sheer volume - this is well studied. It is often caused by the inability to maneuver, merge, etc. As a result, your I-5 traffic would likely be much, much better if everyone left 10 car lengths.

You do not have to raise the average speed of travel very much to make up the theoretical loss from the increased spacing.
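A crude way to see that (my own illustrative numbers, not a traffic-engineering model): single-lane throughput is roughly speed divided by front-bumper-to-front-bumper spacing, so sustained free flow at a moderate speed can move more cars per hour than a tightly packed crawl.

```python
# Crude single-lane throughput comparison (illustrative only, not a
# traffic-engineering model). Assumptions: 15 ft vehicles, a gap
# measured in car lengths, constant speed.

CAR_LENGTH_FT = 15
MPH_TO_FPS = 5280 / 3600

def flow_veh_per_hour(speed_mph: float, gap_car_lengths: float) -> float:
    speed_fps = speed_mph * MPH_TO_FPS
    spacing_ft = (gap_car_lengths + 1) * CAR_LENGTH_FT  # gap plus the car itself
    return speed_fps / spacing_ft * 3600

# Stop-and-go crawl: ~10 mph average with ~2 car lengths of gap
print(flow_veh_per_hour(10, 2))    # ~1170 vehicles/hour/lane
# Free flow: 55 mph with 10 car lengths of gap
print(flow_veh_per_hour(55, 10))   # ~1760 vehicles/hour/lane
```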
I remember taking Driver's Ed class many years ago. When we got to the section about fault in relation to rear-end collisions, the instructor said, "if you rear-end someone, it is your fault, full stop." The class then spent five minutes asking hypotheticals, to which the answer was always, "nope, still your fault."
He's wrong. What if the car ahead just got there and is moving far below your speed? And there's the case where the car ahead stops in a fashion a car normally can't--it ran into something massive, or the like. Under standard driving conditions, if the car in front of you is involved in a head-on collision at speed, you're going to hit it. A two-second following distance assumes the car ahead is subject to the same physics you are.
The one exception is if a car pulls into a lane in front of you when you are traveling faster.
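To put assumed numbers on that (70 mph for both cars, 1.5 s reaction time, 0.7 g braking; all illustrative figures of mine, not from the thread):

```python
# Why a 2-second gap assumes the car ahead obeys the same physics you do.
# Assumed, illustrative numbers: both cars at 70 mph, 1.5 s reaction
# time, 0.7 g of braking on dry pavement.

G_FPS2 = 32.2
MPH_TO_FPS = 5280 / 3600

speed = 70 * MPH_TO_FPS               # ~103 ft/s
gap = 2.0 * speed                     # 2-second gap: ~205 ft
reaction_s, decel = 1.5, 0.7 * G_FPS2

my_stop = speed * reaction_s + speed**2 / (2 * decel)   # ~388 ft
lead_brakes_hard = speed**2 / (2 * decel)               # lead travels ~234 ft before stopping
lead_hits_wall = 0.0                                    # stops "in a fashion a car can't"

# Margin = gap + how far the lead car travels before stopping - my stopping distance.
print(f"lead brakes hard: {gap + lead_brakes_hard - my_stop:+.0f} ft")  # ~+51 ft, no contact
print(f"lead stops dead:  {gap + lead_hits_wall - my_stop:+.0f} ft")    # ~-182 ft, collision
```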
His actual statement was, "if you rear-end someone that you are following, it is your fault..."
If someone abruptly pulls in front of you, you weren't following them.
If you cannot see a car ahead of you in time to stop then you are going too fast for conditions.
The only time it can be not your fault is if the slow moving car switches lanes in front of you before there is time to stop.
That is correct. However, it's also the case that in the real world if you drive legally but do things like brake erratically, you will cause accidents.
Not if the vehicles behind you are self driving cars.
If your system is objectively right but also objectively causing accidents with humans... well, you won't fix the humans.
It's not causing accidents in these cases. The humans are.
The causality version of "cause", not the blame version. Accidents that would not have happened without the system.
I have a couple more scenarios for you. Overdriving your headlights is a great way to hit something you don't see in time. The safe speed in average conditions on low beams is around 25 to 30 mph, and on high beams, it is around 45 to 50 mph. If there's any glare on the roadway from security lights and oncoming drivers, your safe speed drops 10 to 20 miles an hour.
Related to this is glare from the sun or artificial sources. I lived in a small city with antique-style globe lamps on Main Street. The veiling glare made pedestrians invisible, and even if you knew about the glare and watched for pedestrians, you would still be surprised when they became visible halfway across the street in front of you.
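The "overdriving your headlights" figures above follow from basic stopping-distance arithmetic. A rough sketch with assumed values (1.5 s perception-reaction time, 0.7 g braking, nominal low/high-beam reach; none of these numbers come from the comment):

```python
# Stopping distance vs. headlight sight distance, back of the envelope.
# Assumed, illustrative figures: 1.5 s perception-reaction time, 0.7 g
# braking, low beams lighting roughly 160 ft and high beams ~350 ft.

G_FPS2 = 32.2
MPH_TO_FPS = 5280 / 3600
LOW_BEAM_FT, HIGH_BEAM_FT = 160, 350   # assumed nominal reach

def stopping_distance_ft(speed_mph: float,
                         reaction_s: float = 1.5,
                         friction_g: float = 0.7) -> float:
    v = speed_mph * MPH_TO_FPS
    return v * reaction_s + v**2 / (2 * friction_g * G_FPS2)

for mph in (30, 50, 70):
    d = stopping_distance_ft(mph)
    print(f"{mph} mph: ~{d:.0f} ft to stop "
          f"(low beams {'OK' if d <= LOW_BEAM_FT else 'outrun'}, "
          f"high beams {'OK' if d <= HIGH_BEAM_FT else 'outrun'})")
# 30 mph -> ~109 ft: within low-beam reach
# 50 mph -> ~229 ft: past low beams, still within high beams
# 70 mph -> ~388 ft: outruns even the high beams
```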
Doesn't matter. If you drive into a slow-moving or stationary object then it's your fault in every sense. If you are driving too closely and are not ready to react to the vehicle in front doing an emergency stop for any reason then it's your fault in every sense.
Most human drivers I observe are taking huge chances every single day. I see them driving at speed around corners they can't fully see around, following far too closely to the cars in front, using their phones, etc. They get lucky, but it's only a matter of time before they have accidents. The three incidents recorded here are simply three humans' luck running out.
I'm nearly certain that I'm alive today because I drove as defensively as something like a Waymo. One day as I was approaching an intersection where I had a green light, I saw a car approaching the same intersection from the cross street at a speed that they couldn't have possibly stopped for their red light.
I instinctively slowed down suddenly and that car, as I predicted, ran the red light at high speed and turned just a few yards in front of me.
If I had been more tired, hasty, or it had been darker out, I might not have seen it or reacted in time. A fully autonomous and defensive driving system wouldn't get tired or hasty, and lidar can see fine at night.
And yes, I might have gotten rear ended, but that's a far better outcome than getting t-boned by a car going 65mph.
Bad drivers are a reality. If Waymo drives in a way that leads to more crashes, even if they're not its fault, it's clear to me that it still deserves some responsibility for not following expected road etiquette.
Bad drivers are a reality because we decide to tolerate them. We could just not tolerate them, like we don't tolerate bad pilots, bad train drivers, bad people etc.
There is widespread support for allowing people with DUIs to drive in this country. Even people who negligently kill someone with their car are still allowed to drive. There is no political support for banning people from driving.
The stats clearly show that Waymo gets into fewer crashes, though. In the remaining incidents, the other driver was considered at fault.
I think that self driving cars will change “road etiquette”, as you call it, for the better.
If Waymo is driving is the way safety engineers are trying to get everyone else to drive, then we should encourage it. There are some things where what everyone else does is wrong. (see the zipper merge)
I agree and I wouldn't hold them against Waymo, but I think when you are developing a self driving car, you should stop analysing it like a crash between two humans with fault and blame, and start looking at it like a system.
If Waymos were having a seriously increased rate of non-fault crashes, that would still be a safety issue, even if every crash was ultimately a human's fault.
Yeah. We had an autonomous bus here that was involved in an accident on its very first day--one that wouldn't have happened with a human driver. The bus just sat there and let a truck back into it.
I also wonder how it fares in a Kobayashi Maru scenario. I once chose to cream a construction cone because the guy in the left turn lane went straight. (Admittedly, I think he didn't realize he was in the left turn lane.) I could see the cone wasn't actually protecting anything; could an autonomous car have made the same call?
These are good examples of the self-reproach Waymo is exhibiting. Contrast with Cruise, which appears to have attempted to suppress information about dragging a pedestrian under one of its cars.
I think it is important to track these. It may be true that these are entirely the other driver's fault, but it is also possible that these accidents were encouraged by unexpected behaviour from the Waymo car. If you quickly exclude things that don't seem like your fault, you will likely exclude too much. So I'd rather include everything (but perhaps flag it as likely not at fault) than risk excluding important data.
I feel like that doesn't paint the whole picture. I'm guessing incidents like [1] don't make it into those stats:
Why are these not counted? Are they really looking at all crashes involving their cars, or only crashes where the autonomous driving software was in control?
Maybe they want to argue the software is safe, but that doesn't change the fact that I'd still be scared of getting into that car.
[1] https://qz.com/1410928/waymos-self-driving-car-crashed-becau...
Having ridden in many a taxi, Uber Lyft, etc. over the years all driven by humans.... I'd be lying to say I'd be more afraid to get in a Waymo car.
I took a London black cab recently for a trip to the hospital, and the driver at one point overtook about 200 metres of stationary traffic, going around several "keep left" bollards, to go through the red light that the rest of the traffic was queuing for. The driver was in his 70s, so I'm not sure if he's just been doing it so long that he doesn't care about the rules any more, or if he was struggling with some kind of age-related brain degeneration.
In his defence, I did get to the hospital well in advance of my appointment. It reminded me of the old Sega game Crazy Taxi.
IIRC, it is almost impossible to give a taxi cab driver a ticket in London.
Nah - easy to give them a ticket, and they'll be severely punished by one (insurance).
However, most ticket issuing in London is by automated cameras, and Taxi drivers know where all the cameras are.
I recently took a Waymo test ride (round trip, two separate segments) in Los Angeles.
It was extremely uneventful - in a good way - while navigating urban traffic, road construction, unprotected left turns, etc., and felt (subjectivity alert!) a lot safer than many of the rideshare drivers I've ridden with over the past 8 years in LA.
I would definitely do it again and would feel safe putting a family member in one.
Why this car, and not all cars? If I fail to assume control of my steering wheel, I will also crash.
The short answer is because this isn't a deterministic "if". The probability matters too.
The thing is in a normal car you're forced to be alert all the time. With autonomous driving, 99%+ of the time you have nothing to do. Humans simply cannot pay as much attention all the time when they're not actively forced to. It's much easier to lose attention (drowsiness, chatting with people, etc.) than if you're physically already driving.
And moreover, regaining control of a car requires some context switching time that isn't there when you're already in control.
If my car is going to disengage at a random point (whether due to my fault or otherwise), I'd rather just be in control the whole time.
But it's not at a random point. If you push the disengage button, it disengages and tells you it's disengaged. I don't understand why you keep mischaracterising what happened.
I'm not mischaracterizing anything. To the driver who dozes off and presses this completely unintentionally, this did happen at a random point. He certainly didn't intend or expect it to happen.
You have to realize, dozing off in a situation that's boring 99%+ of the time is a human thing. If you design your car such that a driver is prone to pressing the disengage button unintentionally when he dozes off, you get to share the blame when that happens. It's not something reckless like DUI where you get to put all the blame on the driver for it.
That's not the only perspective, though. It's possible to doze off while driving a normal car, and that doesn't suddenly become the car's fault, even though, from the driver's perspective, they didn't mean for it to happen.
But again - how do you know it's been designed that way? This still seems like editorialising without any additional info.
I believe these stats only cover the miles where there was no safety driver.
i.e., every accident where a split second before the collision the control system yields control to the safety driver is not accounted for in these stats?
Correct, but they also aren't including self driving miles where there is a safety driver in the driving seat, so it's fair.
Waymo is not Tesla, they're actually building self-driving cars.
This seems like an argument that you should be more worried about getting in a Waymo if there is a safety driver than if there isn't. If so, that would definitely be an interesting conclusion.
Well both would be worrying, maybe one less than the other. Really I'd rather have a safe car where the failure modes are not stupid, so I can stop worrying altogether.
Sure, but the failure modes of the traditional human-controlled car are incredibly stupid, we've just gotten used to it.
This is missing the point. Just because we accommodate certain types of errors by humans, that doesn't mean we should or would tolerate them if the same errors were made by machines. (Or vice versa, for that matter.) The standards we hold machines and humans to can be very different, and that should be expected.
There is no standardized way of collecting safety data. Each company is able to define their own standards on what is an AV-caused accident, the training conditions, etc.
This is the bigger issue.
I do not trust any data in 2023, and 2024 will be worse.
We can create frameworks to mitigate this problem, though. A good first step is better transparency regarding data reporting of AVs.
This is very much a by-product of the current regulations that, AFAIK, mandate that a human driver should be able to take control at all times.
That might make sense but this is obviously a little tricky to implement safely.
As a rider, you can't touch the Waymo steering wheel or pedals, which eliminates the cause of the accident you referenced.
The driver fell asleep and then pressed the gas pedal… and didn’t see or hear tons of warnings and alarms from the car. Very hard to blame that on the car.
It seems like these accidents could have been prevented by humans driving cars with collision avoidance. I'm a big fan of this feature on my relatively late-model Subaru, which tends to come part-and-parcel with adaptive cruise control, which is also quite a positive change in experience driving.
I recently rented an even later-model Malibu that only had collision warning auditory alert. Better than nothing, but I'm surprised cars are still made without automatic braking.
The auto-braking collision avoidance system on my 2023 Mazda CX-5 actually is exactly what caused my first collision in 20 years. I was slowing down to avoid a car that was turning off, the auto-braking decided I wasn't slowing enough (or, I might have just let go of the brake) and it proceeded to slam on the brakes bringing me to a full stop on a busy road, leading to me being rear-ended. At no time was any of that necessary. I've also had the auto-braking engage (on multiple cars) because of random debris in the road, or seemingly no reason at all.
Granted, I'm sure this will improve over time. But for the past 5ish years, all my experiences with auto-braking have been dangerously negative.
The most likely explanation is that you were tailgating.
So what if they were?
A system that takes driving too close and turns it into something more dangerous is not a good thing.
There is no actually no indication that Waymo cars are making tailgating even more dangerous
Huh? Aren't we talking about the anecdote above, where they were following a turning car, then automatic emergency braking kicked in and they got rear-ended? No waymo involved.
I was thinking about the waymo incidents.
The classic caveat of any fully automated system - it works well when everyone has it.
I've never driven a car with auto-braking. I've been yapped at many a time for lane "departures" where there were no lanes (concrete grooves on the highway being the primary culprit), and sometimes for lanes that weren't even real (that "lane" was the shadow of a nearby power wire). I've also seen the adaptive cruise control appear to fail once when two cars simultaneously changed into my lane, one from each side. It still had a moment when it could have acted, so I can't conclusively say it failed. It also fails to recognize cars with too great a speed difference.
In the EU, at least, since May 2022, all new cars do have automatic emergency braking, along with intelligent speed assistance; alcohol interlock installation facilitation; driver drowsiness and attention warning; advanced driver distraction warning; emergency stop signal; reversing detection; and event data recorder (“black box”).
Other features like eCall – a built-in automated emergency call for assistance in a road accident – have been mandatory since March 2018.
Sigh, EU cares about your privacy until it doesn't. These are data collection and monitoring nightmares. Big brother here we come.
While I broadly agree with you, at least eCall contacts (via voice and data) the local State 112 emergency services and only self-activates in the case of a collision.
That's far better than the situation in the US, where private services like Tesla, GM's OnStar, or Ford's "Sync with Emergency Assistance" have no limits on data collection.
Shame. I've never had it work really reliably in any car; it's a feel-good feature, but mostly shit. Even more so when it's not even hooked into the cruise control (many cars provide a shortcut to copy the sign's speed into the cruise control or speed limiter, but far from all of them).
This tech is wonderful! Fun fact about the inclusion of this technology in automobiles sold in the US:
The Obama administration (2015) was able to successfully negotiate with and convince most major car manufacturers to voluntarily agree to start making new cars with automatic emergency braking. Their agreement stipulated that all new cars must have it by 2022 [1]. But this negotiated agreement is why we started to see some new car models include it post 2015.
The tl;dr is the Obama administration basically said, "Look, if y'all don't agree to these proposed minimal standards, we'll get Congress to pass a law that is more strict." So the companies decided to take the agreement then, to de-risk themselves from having to comply with potentially more stringent requirements in the future.
[1]. https://www.nhtsa.gov/press-releases/us-dot-and-iihs-announc...
AEB and friends also demonstrably reduced costs to insurance companies, who passed some savings on to consumers to shape demand. My $28k brand-new car has better insurance rates than my 2004 car because the additional safety features and automation prevent enough incidents that would otherwise total the car.
Or just human drivers leaving enough room to brake in.
That last one is impressive; most humans probably wouldn't pull it off. And just imagine when all the cars on the road are self-driving: probably none of these accidents would've happened.
Not if we have teslas!!!
Why not? Has Tesla had similar incidents?
I've been in a self-driving Tesla vehicle. After hours on the interstate, the person ahead of me slammed on their brakes suddenly. I was caught off guard, not expecting it, and might have crashed by not reacting in time. The Tesla braked. So I have anecdotal experience suggesting the person you're asking isn't well informed about how Teslas respond to this type of situation.

Of course, anecdotal evidence isn't a very high standard. Thankfully, statistics on this sort of thing are tracked. Statistically, Tesla's self-driving features reduce accidents per mile, and have for years now; the reduction has grown as the technology has matured. So the statistical evidence also suggests the person you're asking is uninformed.

What is probably happening is that it makes for good clickbait to drag Elon and Tesla into discussions. Moreover, successful content online often provokes emotion. The resulting preponderance of negativity, especially about every driving accident a Tesla was involved in or caused, has probably tricked them into misunderstanding the reality of Tesla's safety record.
My Subaru from 2018 can do this. It's not rocket science, and most cars nowadays have a collision-detection system. This is not a self-driving capability by any means.
90% of new cars can do this, it's called AEB. It's not a Tesla self driving feature.
While that is the claim, I've never seen an independent analysis of the data. There are reasons to believe Tesla drivers are not average. I don't know which claims are true, which is why I want an independent analysis of the data, so that factors I didn't think of can be controlled for.
Nope. We'll keep having accidents if everything is self driving as long as we keep Tesla in the mix.
Teslas keep ramming into parked vehicles on the side of the road, including emergency vehicles. So when a Waymo car stops because it doesn't know how to proceed safely, a Tesla might just plow into it.
Getting rear-ended is almost always the other driver's fault, but 7 years ago I was involved in a serious accident (minor injuries, both cars totaled) when the driver in the fast lane decided to pull over and pick up a hitchhiker. Crossed over two lanes, hard on the brakes, and I had no chance to even get off the gas.
The responsibility was 100% his because of "an unsafe lane change".
Yup, this is the primary case where the rear vehicle isn't at fault. If you change lanes into a lane that's moving faster and get hit, you were in the wrong even if they hit you from behind.
If it can be proven. 25 years ago there was a scam where someone would suddenly change lanes to get rear-ended like that, then claim "back pain" and sue for a lot of $$$. I don't know how common it was, but when there are no witnesses the courts tend to side with the person who was rear-ended.
And this is why you should have a dashcam in your car.
That makes the Waymo sound like a moving bollard. Fault or not, I will smash into the back of you if you brake-test me.
Typically, you would be considered responsible for damages for not keeping a safe following distance.
Hit from behind is a classic blunder. Almost always, it means the other driver was following too closely or not paying attention.
Since all of these accidents happened in the US: is the driver who hits from behind normally responsible for the accident? (For a moment, let's exclude predatory behavior where the front driver is doing something toxic, like intentionally pumping the brakes on a high-speed road to induce a hit-from-behind accident.)
I've watched Waymo cars, multiple times, sitting as the lead vehicle in the left lane at a signalized intersection, come to a complete stop while the light is green, wait until other cars pass in frustration, and then cross four lanes to make the next RIGHT turn.

Traffic disruption by Waymos is UNDOCUMENTED and a real thing.
It “is” the fault of the driver who hits from behind. It's their responsibility to maintain the safe distance required for braking in such situations.
How so? If human drivers did unexpected stuff like brake for no reason, we'd have a lot more accidents.
I think this just highlights how much better humans are at cooperating on the road compared to automated systems.
All this focus on Waymo supposedly acting "unexpectedly", but I don't see that word in the original article, and the statistics here imply the opposite -- Waymo gets into fewer accidents overall!
Also, only the 2nd item is even consistent with Waymo behaving unexpectedly (we're not given enough info to know why it stopped). In the first item, the "unexpected" thing is the branch, not the behavior (stopping), and in the third Waymo's behavior didn't contribute to the accident at all -- instead it nearly avoided it despite the other car's bad driving.