Air Canada is responsible for chatbot's mistake: B.C. tribunal

ooboe
82 replies
2d

"In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions."

"This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote.""

From https://www.cbc.ca/news/canada/british-columbia/air-canada-c...

jasonjayr
40 replies
2d

IANAL, but it's astounding they took that as their defense, rather than pointing to a line (I hope?) in their ToS that says "This agreement is the complete terms of service, and cannot be amended or changed by any agent or representative of the company except by ... (some very specific process the bot can't follow)". I've seen this mentioned in several ToSs; I expect it to be standard boilerplate at this point ...

Drakim
30 replies
2d

That does make sense, but on the flipside, let's say that they start advertising discounts on TV, but when people try to pay the reduced rate they say "according to our ToS that TV ad was not authorized to lower the price".

Obviously that wouldn't fly. So why would it fly when the AI chatbot advertises discounts?

AnthonyMouse
29 replies
1d23h

You'd normally expect a TV ad to be authorized to make offers.

You wouldn't normally expect an AI chatbot to be authorized to make offers. Its purpose is to try to answer common questions and it has been widely covered in popular media that they hallucinate etc.

Zak
17 replies
1d23h

I disagree. I expect any credible offer a company makes in an advertisement, on its website, using a chatbot, or through a customer service agent to be authorized by the company. Surely a corporation with lots of resources knows better than to program a chatbot to make fake offers; they'd get sued.

And they did get sued. Next time maybe they'll make sure software they connect to their website is more reliable.

AnthonyMouse
16 replies
1d21h

Surely a corporation with lots of resources knows better than to program a chatbot to make fake offers; they'd get sued.

They didn't program it to do that, it's a characteristic of the technology that it makes mistakes. Which is fine as the public learns not to blindly trust its answers. It seems silly to assume that people won't be able to figure that out. People are capable of learning how new things work.

This is like the people who set the cruise control in their car when it first came out and then climbed into the back of the car to take a nap. That's not how it works and the technology isn't in a state where anybody knows how to do better.

Zak
12 replies
1d21h

I agree with your cruise control analogy in a sense, but I think it's Air Canada that's misusing the technology, not the customer. If they try to replace customer service agents with chatbots that lie, they need to be prepared to pay for the results. I'm glad they're not allowed to use such unreliable, experimental technologies in their airplanes (737 Max notwithstanding).

There's absolutely a technology available to make a chatbot that won't tell lies: connect a simple text classifier to a human-curated knowledge base.
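
A minimal sketch of that approach in Python (the FAQ entries, threshold, and fallback text are all invented for illustration):

```python
# Sketch: match a customer question against a human-curated FAQ and only
# ever return pre-approved answers. All entries here are hypothetical.

FAQ = {
    "bereavement fare refund request": "Hypothetical policy text: bereavement fares must be requested before travel.",
    "checked baggage fee": "Hypothetical policy text: the first checked bag costs $30 on domestic economy fares.",
}

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def answer(question: str, threshold: float = 0.2) -> str:
    q = tokens(question)
    # Score each FAQ key by Jaccard similarity to the question.
    best = max(FAQ, key=lambda k: len(q & tokens(k)) / len(q | tokens(k)))
    score = len(q & tokens(best)) / len(q | tokens(best))
    if score < threshold:
        # The classifier can't answer: hand off instead of guessing.
        return "I'm not sure -- let me connect you to an agent."
    return FAQ[best]

print(answer("Can I get a refund on a bereavement fare after my flight?"))
```

The bot can only emit curated text or escalate, so it never invents policy; the tradeoff, as noted below, is that it can only answer what's in the knowledge base.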

AnthonyMouse
11 replies
1d20h

If they try to replace customer service agents with chatbots that lie, they need to be prepared to pay for the results.

The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up.

I'm glad they're not allowed to use such unreliable, experimental technologies in their airplanes (737 Max notwithstanding).

If you use unreliable technology in an airplane, it falls out of the sky and everybody dies. If you use it in a chatbot, the customer can e.g. go to the company's website to apply for the discount it said exists and discover that it isn't there, and then be mildly frustrated in the way that customers commonly are when a company's technology is imperfect. It's not the same thing.

There's absolutely a technology available to make a chatbot that won't tell lies: connect a simple text classifier to a human-curated knowledge base.

But then it can only answer questions in the knowledge base, and customers might prefer an answer that is right 75% of the time and can be verified either way in five minutes to waiting on hold to talk to a human being because the less capable chatbot couldn't answer their question and the more capable one was effectively banned by the government's liability rules.

Zak
6 replies
1d19h

The result would be a de facto ban on AI chatbots

No, the result would be a de facto ban on using them as a replacement for customer service agents. I support that for the time being since AI chatbots can't actually do that job yet because we don't know how to keep them from lying.

They could put a disclaimer on it of course. To be sufficiently truthful, the disclaimer would need to be front and center and say something like "The chat bot lies sometimes. It is not authorized to make any commitments on behalf of the company no matter what it says. Always double-check anything it tells you."

AnthonyMouse
3 replies
1d13h

No, the result would be a de facto ban on using them as a replacement for customer service agents.

But what does that even mean? If Ford trains a chatbot to answer questions about cars purely for entertainment purposes, or to get people excited about cars, a customer could still use it for "customer service" just by asking it questions about their car, which it might very well be able to answer. But it would also be capable of making up warranty terms etc., so you've just banned that thing and anything like it.

I support that for the time being since AI chatbots can't actually do that job yet because we don't know how to keep them from lying.

It's pretty unlikely we could ever keep them from lying. We can't even get humans to do that. The best you could do is keep them on a script, which is the exact thing that makes people hate existing human customer service reps who can't help them because it isn't in the script.

To be sufficiently truthful, the disclaimer would need to be front and center and say something like "The chat bot lies sometimes. It is not authorized to make any commitments on behalf of the company no matter what it says. Always double-check anything it tells you."

Which is exactly what's about to start happening, if that actually works. But that's as pointless as cookie banners and "this product is known to the State of California to cause cancer".

Zak
2 replies
1d

It's all in how it's presented, and it should not be up to the customer or end-user to understand how technology running on the company's server, which might be changed at any time, might behave unreliably.

I expect something that's presented as customer service not to lie to me about the rebate policy. As long as what it says is plausible, I expect the company to be prepared to cover the cost of any mistakes, especially if the airline only discovers the mistake after I've paid them and taken a flight. Compensating customers for certain types of errors is a normal cost of doing business for airlines, and the $800 CAD this incident cost the airline is not an exorbitant amount. The safety valve here is that judges and juries test whether a reasonable person would believe a stated offer or policy; I can't trick a chatbot into offering me a billion dollars for nothing and get a court to hold a company to it.

If Ford presents a chatbot as entertainment and makes it really clear at the start of a session that it doesn't guarantee the factual accuracy of responses, there's no problem. If they present it as informational and don't make a statement like that, or hide it in fine print, then it says something like "the 2024 Mustang Ecoboost has more horsepower than the Chevrolet Corvette and burns less gas than the Toyota Prius", they should be on the hook for false advertising to the customer and unfair competition against Chevrolet and Toyota.

Similarly, if Bing or Google presents a chatbot as an alternative to their search engine for finding information on the internet, and it says "Zak's photography website is full of CSAM", I'm going to sue them for libel.

AnthonyMouse
1 replies
23h4m

The safety valve here is that judges and juries do test against whether a reasonable person would believe a stated offer or policy; I can't trick a chatbot into offering me a billion dollars for nothing and get a court to hold a company to it.

Sure, but a billion people could each trick it into offering them $100, which would bankrupt the airline.

they should be on the hook for false advertising to the customer and unfair competition against Chevrolet and Toyota.

But all you're really doing is requiring everyone to put a banner on everything that says "for entertainment purposes only". Because if something like that gets them out of the liability then that's what everybody is going to do. And if it doesn't then you're effectively banning the technology, because "have it not make stuff up" isn't a thing they know how to do.

Zak
0 replies
18h19m

Courts probably aren't going to enforce any promise of money for nothing or responses prompted by obvious trickery, but they might enforce promises of discounts, and are very likely to enforce promises of rebates as the court in this case did.

If that means companies can't use chatbots to replace customer service agents yet, so be it.

ado__dev
1 replies
1d18h

And if I saw that disclaimer, I wouldn't use the tool. What's the point if you can't trust what it says? Just let me talk to a human who can solve my issue.

AnthonyMouse
0 replies
1d13h

What's the point if you can't trust what it says? Just let me talk to a human who can solve my issue.

That's the point of it -- you don't have to wait on hold for a human to get your answer, and you could plausibly both receive it and validate it yourself sooner than you could get through to a human.

l33t7332273
1 replies
1d17h

The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up

I think that banning lying to customers is fine.

AnthonyMouse
0 replies
1d12h

ChatGPT is presumably capable of making something up about ChatGPT pricing. It should be banned?

gregmac
1 replies
1d18h

The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up.

If, instead of a chatbot, this was about incompetent support reps that lied constantly, would you make the same argument? "We can't hire dirt-cheap low-quality labor because as company representatives we have to do what they say we'll do. It's so unfair"

AnthonyMouse
0 replies
1d13h

It isn't supposed to be a company representative, it's supposed to be a chatbot.

If Microsoft puts ChatGPT on Twitter so people could try it, and everybody knows that it's ChatGPT, and then it started offering companies free Windows licenses, why should they have to honor that? It's obvious why it might do that but the purpose of letting people use it wasn't so it could authorize anything.

If the company holds a conference where it allows third party conference speakers to give talks, which everybody knows are third parties and not company employees, should the guest speakers be able to speak for the company? Why would that accomplish anything other than the elimination of guest speakers?

l33t7332273
2 replies
1d17h

They didn't program it to do that, it's a characteristic of the technology that it makes mistakes

It sounds like you meant to say that they didn’t _intentionally_ program it to do that. They didn’t find the system under a rock and unleash it on the world; they made it.

AnthonyMouse
1 replies
1d12h

Most of these companies didn't make it; they took an existing one and fed it some additional information about their company.

rideontime
0 replies
22h10m

So what?

Scarblac
5 replies
1d22h

Its purpose is to try to answer common questions

Yes. And therefore people should be able to assume that the answers are correct.

Some people have heard of ChatGPT, and some of those have heard that it hallucinates, sure. But that's still not that many people. And they don't know that a question-answering chatbot like this is the same technology!

AnthonyMouse
4 replies
1d21h

And therefore people should be able to assume that the answers are correct.

Why is that a necessary requirement? Something can be useful without it being perfect.

ado__dev
3 replies
1d17h

If I am trying to interact with a company and they tell me to use their chatbot, I expect that chatbot to provide me with accurate answers 100% of the time (or to say it can't help me in the event that I ask a question that it's not meant to solve, and connect me to a representative who can).

If I have to double-triple check elsewhere to make sure that the chatbot is correct, then what's the point of using the chatbot in the first place? If you can't trust it 99% of the time, or if the company says "use this, but nothing it says should be taken as fact", then why would I waste my time?

silverquiet
1 replies
1d17h

This is why I’m a bit vexed by all the hype around LLMs. It reminds me of talking to a friend’s mother who was suffering from dementia - she could have a perfectly lucid conversation with you and then segue into stories that were obviously fictions that existed only within her head. She was a nice lady, but not someone who you would hire to represent your company; she was considered disabled.

A while back another commenter called them a “demented Clippy”, which about sums them up for me.

ado__dev
0 replies
1d17h

Yeah totally. LLMs have a lot of awesome use cases. But as chatbots, they need a lot of guardrails, and even then, I'm highly skeptical if they improve the experience over a simple searchable FAQs or docs.

AnthonyMouse
0 replies
1d13h

If I have to double-triple check elsewhere to make sure that the chatbot is correct, then what's the point of using the chatbot in the first place?

Because you can ask it a question in natural language and it will give you an answer you can type into a search engine to see if it's real. Before you didn't know the name of the thing you were looking for, now you do.

If you can't trust it 99% of the time, or if the company says "use this, but nothing it says should be taken as fact", then why would I waste my time?

The point is that the rate at which it makes stuff up isn't 99%. For common questions, better than half of the answers have some basis in reality.

ado__dev
1 replies
1d18h

I would expect an official tool that the company provides for customers to interact with to be authorized to give me information (including discounts and offers), and I would expect that information to be accurate and true.

AnthonyMouse
0 replies
1d13h

What would it take to disabuse you of that notion now that your expectations have been observed to be in conflict with reality?

What you're describing isn't what you expect, it's what you wish were the case even though you know it isn't.

seba_dos1
0 replies
1d19h

Of course language models can't be trusted, but it's not the customer's problem to think about the chatbot's purpose, how it's implemented, or whether it hallucinates.

mc32
0 replies
1d17h

If it was approved by the company, yes. But you wouldn't want Braniff Airlines to put out an ad for Southwest Airlines advertising rock-bottom lifetime tickets and have those be valid...

kube-system
0 replies
1d23h

You wouldn't normally expect an AI chatbot to be authorized to make offers.

I think only software engineers would think this. I don't think it is obvious to a layperson who probably has maybe never even used one before.

dataflow
3 replies
2d

How do those clauses actually work? If a rep does something nice for you (like give you something for free), could the airline say it never agreed to that in writing or whatever and demand it back? How are you supposed to know if a rep has authority to enter into an agreement with you over random matters?

But, to your question, my guess is that would basically be telling people not to trust their chatbot, which they don't want to do.

paxys
2 replies
2d

It's more to shield them from cases like a rep gifting you free flights for life.

dataflow
1 replies
2d

I realize the intention but I'm wondering how it works legally given what the terms actually say.

paxys
0 replies
1d23h

What you are or aren't entitled to is written down in the terms of service. Support agents can only help interpret the terms for you. They may be authorized to go beyond that to some degree, but the company will also have the right to reverse the decision made by the agent.

sdwr
0 replies
1d23h

Those ToS statements overreach their capabilities a lot of the time. They're ammunition against the customer, but don't always hold up in the legal system.

kreek
0 replies
2d

Beyond the chatbot's error and the legal approach they took, this bad PR could have been avoided by any manager in the chain doing the right thing: overriding the policy, giving him the bereavement fare, and then fixing the bot and updating the policy.

easyThrowaway
0 replies
2d

I guess the original issue pointed out by the judge would still stand: how am I supposed to know which terms are to be assumed true and valid? Why would I assume a ToS hidden somewhere (Is it still valid? Does it apply to my case? Is it relevant and binding in my jurisdiction?) is to be considered more trustworthy than an Air Canada agent?

JCM9
0 replies
2d

Courts often rule that you can’t use ToS to overcome common sense. ToS are not a get out of jail free card if your company just does stupid things.

AlexandrB
0 replies
2d

How is that enforceable? In many cases this is carte blanche for company representatives to lie to you. No one is going to read the ToS and cross reference it with what they're being told in real time. Moreover, if a customer was familiar with the ToS they would not be asking questions like this of a chatbot. The entire idea of having a clause like this while also running a "help" chatbot that can contradict it seems like bad faith dealing.

onlyrealcuzzo
39 replies
2d

How is this different from me getting one of my friends to work at Air Canada and promise me a billion dollars to cancel my flight?

Will Air Canada be liable for my friend going against company policy?

thsksbd
15 replies
2d

That's fraud because you're in cahoots with your friend.

If a random AC employee gave you a free flight, on the other hand, you'd be entitled to it.

Anyway, the chat bot has no agency except that given to it by AC; unlike a human employee, therefore, its actions are 100% AC actions.

I don't see how this is controversial? Why do people think that laws no longer apply when fancy high-tech pixie dust is sprinkled?

baggy_trough
5 replies
2d

A random AC employee drinks too much and says "You are entitled to free flights for the rest of your life." Is Air Canada liable?

wredue
4 replies
2d

Since when are contracts enforceable when one party is drunk?

baggy_trough
3 replies
2d

A random AC employee who is having a bad day and hates his employer says "You are entitled to free flights for the rest of your life." Is Air Canada liable?

kube-system
0 replies
2d

Valid contracts usually require consideration

acdha
0 replies
1d23h

No, because no reasonable person would think that they had the authority to authorize that. Remember, the legal system is not computer code - judges look at things like intent and plausibility.

DiggyJohnson
0 replies
1d23h

No, because that's not "reasonable". My dad jokes that he's made a career off of determining what is "reasonable" and what isn't, and he's a contract attorney.

If you were standing at the customer service desk, and instead they said: "sorry about the delay, your next two flights are free", then all of a sudden this is "reasonable".

Night_Thastus
4 replies
2d

If a random AC employee gave you a free flight, on the other hand, you'd be entitled to it.

The company would be entirely within their rights to say 'this employee was wrong, that is not our policy, goodbye!'. This happens all the time with more minor incidents.

easyThrowaway
2 replies
1d23h

No idea about the US, but this very same case was tested in France and parts of Germany in the late 90s, when some pay-TV providers (Sky or Canal+, can't remember) tried to cancel multiple subscriptions that some of their agents had offered at extremely aggressive pricing. Courts concluded that the signed agreements superseded the official pricing and they had to offer the service for the entire length of the original subscription.

Night_Thastus
1 replies
1d23h

The difference is that that was a signed agreement.

This chatbot merely said something was possible; no legally binding agreement occurred.

lolc
0 replies
1d19h

Where I live, "meeting of the minds" is necessary for a contract. Written or not. In this case, that meeting didn't happen. Due to the bullshit generator employed by Air Canada.

So there was no contract but a consumed flight. The court has to retroactively figure out a reasonable contract in such cases. That Air Canada couldn't just apply the reduced rate once they learned of their wrong communication marks them as incredibly petty.

Zak
0 replies
2d

That's far less likely to be true if the customer buys something based on the employee's erroneous statement. I suspect in an otherwise-identical case with a human customer service agent, the same judge would have ruled Air Canada must honor the offer.

Zak
1 replies
2d

And if a random AC employee said[0] they'd give you a billion dollars, you wouldn't be entitled to it because any judge or jury hearing the case would say a reasonable person wouldn't believe that. Unlike computers, which do exactly what they're told[1], the legal system applies sanity checks and social context.

[0] perhaps because they're disgruntled and trying to hurt their employer

[1] generative models are not an exception; they're a way of telling computers to generate text that sometimes contains falsehoods

autoexec
0 replies
2d

And if a random AC employee said[0] they'd give you a billion dollars, you wouldn't be entitled to it because any judge or jury hearing the case would say a reasonable person wouldn't believe that.

I'm sure that if the bot had said that the airline would raise your dead relative from the grave and make you king of the sky or something equally unbelievable the courts wouldn't have insisted Air Canada cast a crown and learn necromancy.

abraae
0 replies
2d

If a random AC employee gave you a free flight, on the other hand, you'd be entitled to it.

And if it was a billion dollars?

DonHopkins
0 replies
2d

Because their source of income depends on sprinkling fancy high-tech pixie dust!

vundercind
5 replies
2d

The computer only does what they told it to.

What they told it to do was to behave very unpredictably. They shouldn’t have done that.

philipswood
4 replies
1d23h

Not these ones...

These ones do what they "learned" from a lot of input data, using a process that is us mimicking how we think brains could maybe function (kinda/sorta, with a few unbiological "improvements").

vundercind
3 replies
1d23h

Yes, these ones. Somebody told the computer to do all those things you just wrote.

philipswood
2 replies
1d22h

Maybe this makes the point better:

Say your webserver isn't scaling to more than 500 concurrent users. When you add more load, connections start dropping.

Is it because someone programmed a max_number_of_concurrent_users variable and a throttleExtraAboveThresholdRequests() function?

No.

Yes, humans built the entire stack of the system. Yes, every part of it was "programmed", but no, this behaviour wasn't programmed intentionally; it is an emergent property arising from system constraints.

Maybe the database connection pool is maxed out and the connections are saturating. Maybe some database configuration setting is too small or the server has too few file handles - whatever.

Whatever the root cause (even though that cause was incidentally implemented by a human, if you trace the causal chain back far enough), this behaviour is an almost incidental, unintended side effect of it.
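
To make the webserver analogy concrete, here is a toy sketch (pool size, timeout, and load are invented numbers) where requests get dropped even though nothing in the code says "maximum 500 users":

```python
# Toy model of an emergent limit: a fixed connection pool plus a client
# timeout caps throughput, though no code states "max N concurrent users".
import asyncio

async def handle_request(i: int, pool: asyncio.Semaphore) -> str:
    try:
        # Each request waits at most 1s for a pooled connection, then gives up.
        await asyncio.wait_for(pool.acquire(), timeout=1.0)
    except asyncio.TimeoutError:
        return f"request {i}: dropped"
    try:
        await asyncio.sleep(0.5)  # simulated query time
        return f"request {i}: ok"
    finally:
        pool.release()

async def main() -> None:
    pool = asyncio.Semaphore(20)  # hypothetical DB connection pool size
    results = await asyncio.gather(*(handle_request(i, pool) for i in range(200)))
    print(sum("dropped" in r for r in results), "of 200 requests dropped")

asyncio.run(main())
```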

A machine learning system is like that, but more so.

An LLM, say, is "parsing" language in some sense, but ascribing what it is doing to human design is pretty indirect.

In a way, you typing words at me has been "programmed" into you by every language interaction mankind has had with you.

I guess you could see it that way, but I don't think it's a particularly useful point of view.

In the same way, an LLM has been indirectly "programmed" via its model architecture, training algorithm, and training data, but we are nowhere near understanding the process well enough to consider this "programming" it yet.

vundercind
1 replies
1d22h

This is different from a bug or hitting an unknown limitation—the selling point of this was “it makes shit up” and they went “yeah, cool, let’s have it speak for us”.

Its behavior incorporates randomness and is unpredictable and hard to keep within bounds on purpose and they decided to tell a computer to follow that unpredictable instruction set and place it in a position of speaking for the company, without a human in between. They shouldn’t have done that if they didn’t want to end up in this sort of position.

philipswood
0 replies
1d7h

We agree that this is an engineering failure - you can't deploy an LLM like this without guardrails.

This is also a management failure in badly evaluating and managing the risks of a new technology.

We disagree in that I don't think its behaviour being hard to predict is on purpose: we have a new technology that shows great promise as a tool to work with language inputs and outputs. People are trying to use LLMs as general-purpose language processing machines - in this case as chat agents.

I'm reacting to your comment specifically because I think you are evaluating LLMs using a mental model derived from normal software failures and LLMs or ML models in general are different enough to make that model ineffective.

I almost fully agree with your last comment, but the

they decided to tell a computer to follow that unpredictable instruction set

reflects what I think is now an unfruitful model.

Before deploying a model like this you need safeguards in place to contain the unpredictability. Steps like the following would have been options (a rough sketch of the output-checking idea follows the list):

* Fine-tuning the model to be more robust over their expected input domain,

* Using some RAG scheme to ground the outputs over some set of ground truths,

* Using more models to evaluate the output for deviations,

* Business processes to deal with evaluations and exceptions, etc.
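
As a rough sketch of the output-checking idea (the policy snippets and overlap heuristic here are invented; a production guardrail would use embeddings or an NLI model rather than word overlap):

```python
# Sketch: hold back any draft chatbot reply whose sentences aren't
# supported by a human-approved policy snippet. Snippets are invented.

APPROVED_SNIPPETS = [
    "bereavement fares must be requested before travel begins",
    "refund requests are processed within 30 days",
]

def supported(sentence: str, min_overlap: float = 0.6) -> bool:
    words = set(sentence.lower().split())
    for snippet in APPROVED_SNIPPETS:
        overlap = len(words & set(snippet.split())) / max(len(words), 1)
        if overlap >= min_overlap:
            return True
    return False

def guard(draft_reply: str) -> str:
    sentences = [s.strip() for s in draft_reply.split(".") if s.strip()]
    if all(supported(s) for s in sentences):
        return draft_reply
    return "Escalating to a human agent."  # possible deviation detected

# The made-up "90 days after travel" claim fails the overlap check.
print(guard("Bereavement fares can be requested within 90 days after travel."))
```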

BadHumans
3 replies
2d

Chatbots aren't people and people are actually separate legal entities responsible for their own actions.

kube-system
2 replies
2d

People working for companies are sometimes separate legal entities responsible for their own actions, and sometimes they act on behalf of the company they work for and the company is responsible for their actions. It depends.

BadHumans
1 replies
1d23h

A chatbot (a computer) cannot be responsible for its own actions, so the only half of the coin you have left is "the company is responsible for their actions."

kube-system
0 replies
1d23h

Computers are inanimate objects and are not recognized as legal entities.

The legal system recognizes that people, or groups of people, are subject to legal authority. This is a story about a piece of software Air Canada implemented which resulted in them posting erroneous information on their website.

carlosjobim
2 replies
2d

Law and justice are not like a computer program that you can exploit and control without limits by being a hacker.

If the chatbot told them that they'd get a billion dollars, the courts would not hold Air Canada responsible for it, just as if a programmer put a decimal in the wrong place and prices became obviously wrong. In this case, the chatbot gave a policy within reason, and the court awarded the passenger what the bot had promised, which is a completely correct judgement.

Delumine
1 replies
2d

This argument seems overly dramatic and distorted. Yes, in an outrageous situation like a billion-dollar mishap, most people would know something isn't right. But for a policy that appears legitimate, especially when it's replacing a human customer service rep, it's not that obvious. In these cases, Air Canada should definitely be held accountable.

carlosjobim
0 replies
1d23h

Yes, that's exactly what I'm saying as well. Especially since they had already taken the customer's money.

weego
0 replies
2d

Likely because the claim was considered to be within the reasonable expectations of real policy.

vkou
0 replies
2d

How is this different from me getting one of my friends to work at Air Canada and promise me a billion dollars to cancel my flight?

There is a common misconception about law that software engineers have. Code is not law. Law is not code. Just because something that looks like a function exists, you can't just plug in any inputs and expect it to have a consistent outcome.

The difference between these two cases is that even if a chat bot promised that, the judge would throw it out, because it's not reasonable. Also, the firm would have a great case against at least the CS rep for this collusion.

If your friend the CS agent promised you a bereavement refund (as the chatbot did), even though it went against company policy, you'd have good odds of winning that case. Because the judge would find it reasonable of you to believe, after speaking to a CS rep, that such a policy would actually get honored. (And the worst that would happen to the CS rep would be termination.)

spamizbad
0 replies
2d

Your example is significantly different.

The chatbot instructed the passenger to pay full price for a ticket but stated they could get a refund later. That refund policy was a hallucination. The victim here just walked away with a discounted ticket, as promised, not a billion dollars.

oliwary
0 replies
2d

No, it is more similar to Air Canada hiring a monkey to push buttons to handle customer complaints. In that case, the company knows (or should know) that the given information may be wrong, but accepts the risk.

nrmitchi
0 replies
2d

What you are describing is 1) fraud, 2) conspiracy, and 3) not a policy that a reasonable person would take at face value.

It is very different than if an employee were to, in writing, make a statement that a reasonable person would find reasonable.

hiddencost
0 replies
2d

Weird straw man...

So replacing all their customer support staff with AI that misleads customers is OK? That's pants-on-head insane, so why spend time trying to justify it?

eirikbakke
0 replies
2d

The legal concept is called "Apparent authority". The test is whether "a reasonable third party would understand that an agent had authority to act".

("Chatbot says you can submit a form within 90 days to get a retroactive bereavement discount" sounds perfectly reasonable, so the doctrine applies.)

https://en.wikipedia.org/wiki/Apparent_authority

dataflow
0 replies
2d

How is this different from me getting one of my friends to work at Air Canada

One major difference is the AI wasn't your friend, another is that you didn't get it hired at Air Canada, another is that the promise wasn't $1B, etc...

TheCoelacanth
0 replies
2d

No, if you conspire with your friend to get them to tell you an incorrect policy, then you have no reasonable expectation that what they tell you is the real policy. If you are promised a billion dollars even without a pre-existing relationship with the agent, you have no reasonable expectation that what they are promising is the real policy because it's an unbelievably large amount.

If you are promised something reasonable by an agent of the company who you are not conspiring with, then the company is bound to follow through on the promise because you do have a reasonable expectation that what they are telling you is the real policy.

BiteCode_dev
0 replies
2d

Your friend is not trained by Air Canada. The bot is Air Canada property.

If they decide it is reliable enough to be put in front of the customer, they must accept all the consequences: the benefits, like getting to hire fewer people, and the cons, like having to make it work correctly.

Otherwise, woopsy, we made our AI handle our accounting and it cheated, sorry IRS. That won't fly.

skywhopper
0 replies
1d23h

The claim is so outrageous that I wish there were a way (I assume there probably isn't) for the company or the lawyers to have been sanctioned outside what the plaintiff was asking for.

ado__dev
40 replies
1d17h

If I am trying to interact with a company and they direct me to their chatbot, I expect that chatbot to provide me with accurate answers 100% of the time (or to say it can't help me in the event that I ask a question that it's not meant to solve, and connect me to a representative who can).

If I have to double-triple check elsewhere to make sure that the chatbot is correct, or if anything the chatbot tells me is non-binding, then what's the point of using the chatbot in the first place? If you can't trust it 99% of the time, or if the company says "use this, but nothing it says should be taken as fact", then why would I waste my time?

If a company is going to provide a tool, they should take responsibility for that tool.

steveBK123
27 replies
1d17h

Yes, I think people underestimate the number of imagined LLM use cases that require accurate responses, to the point that hallucinations will cost money in fines & lawsuits.

This is a new frontier in short sighted customer service staffing (non-staffing in this case). The people who are on the frontline communicating with customers can convert unhappy customers to repeat customers, or into ex-customers. There's a few brands I won't buy from again after having to jump through too many hoops to get (bad) warranty service.

wongarsu
12 replies
1d16h

It's not like human call center staff has never given anyone wrong information, or cost companies money in fines and lawsuits.

The bar LLMs have to clear to beat the average front-line support operation isn't that high, as your own experience shows. And compared to a large force of badly paid humans with high turnover, LLMs are pretty consistent and easy to train to an adequate level.

They won't beat great customer support agents, but most companies don't have many of those.

itsoktocry
3 replies
1d16h

It's not like human call center staff has never given anyone wrong information, or cost companies money in fines and lawsuit

A human will be more likely to say "I don't know" or pass you along, rather than outright lie.

KiranRao0
2 replies
1d16h

I find it common for human customer support people to give inaccurate information. I don't know about "outright lying", but I've had people tell me things that are factually incorrect.

atoav
1 replies
1d15h

Depends. Saying "thing X should not fail" is factually incorrect, when you called because thing X failed.

However I would not expect an airline customer support to make up a completely fictional flight that has never existed. Maybe they could confuse flights or read a number wrong, but making one up?

erhaetherth
0 replies
1d15h

Humans won't fabricate too much, but when confronted with yes/no questions where they have a 50-50 shot of being right and any blowback will likely land on someone else... they'll answer whatever gets you out of their hair.

Case in point, I asked my bank if they had any FX conversion fees or markup. Guy said no. I asked if there was any markup on the spread. Said no. Guess what? They absolutely mark up that spread. Their exchange rates are terrible. Just because there isn't a line-item with a fee listed doesn't mean there isn't a hidden fee in there. He's either incompetent or a liar.

mynameisvlad
1 replies
1d16h

Sure, but there is such a thing as “the human element”. Humans aren’t perfect, and that is the expectation. That is not the same case with computers.

And especially for something where it’s just pulling data from an internal system. There is absolutely no reason to invent made up information and saying “well humans do it all the time” is just an excuse.

steveBK123
0 replies
1d14h

Yes, further, expectations wise..

On the phone with a customer service rep, I might understand a little wishy-washy answer, a slip of the tongue, or a slightly inaccurate statement. I've never really had a rep lie to me; usually it's just "I don't know"s and then escalation as needed.

There is something about the written word from a company that makes it feel more like "binding statement".

SOLAR_FIELDS
1 replies
1d15h

To me that’s totally fine. I don’t even particularly care whether the LLM is better or not. The only thing that really matters is that if you are going to use that LLM, when it inevitably messes up you don’t get to claim that it wasn’t your fault because computers are hard and AI is an emerging field. IDGAF. Pay up. The fact that you dabble in emerging technologies doesn’t give you any excuse to provide lesser services.

steveBK123
0 replies
1d14h

Right whether you employ a person or some software which lies, the liability should be the same.

steveBK123
0 replies
1d14h

It's still way too easy to send LLMs into a complete tangent of rambling incoherently, opening yourself up to the LLM making written statements to customers you really don't want.

I recently asked some LLMs "How many gallons in a mile?" and got some very verbose answers, which turned into feats of short-story writing when I refined it to "How many gallons of milk in a mile?"

behringer
0 replies
1d13h

I think you're underestimating the quality of customer support. People are going to be out there testing every flaw in the support system, staffed or unstaffed. LLMs have no hope.

Wowfunhappy
0 replies
17h20m

It's not like human call center staff has never given anyone wrong information, or cost companies money in fines and lawsuits.

If a company representative told me in writing (perhaps via email support) that I could claim a refund retroactively, and that turned out to not be their policy, I would still expect the company to honor what I was told in the email.

Phone calls are difficult more because there is no record of what was said. But if I had a recording of the phone call... I'm not actually sure what I would expect to happen. It's just so socially unusual to record phone calls.

AdamJacobMuller
0 replies
1d16h

The bar isn't even that high.

They only need to increase the lawsuit/settlement amount by less than the amount the company saves by automation.

TechSquidTV
5 replies
1d16h

With RAG it's entirely possible to eliminate essentially 100% of hallucinations, given you are OK with responding "I don't know" once in a while. These situations are likely coming from a poorly implemented chatbot, or they decided that "I don't know" was not acceptable; really, that should be a cue to send you to a real human.
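
The rough shape of that (a retrieval gate with an explicit "I don't know" path; the documents, threshold, and call_llm stand-in are assumptions for illustration, not any vendor's API):

```python
# Sketch: retrieval-gated answering. If nothing in the curated corpus is
# close enough to the question, escalate instead of generating an answer.
import re

POLICY_DOCS = [
    "Bereavement fares must be requested before travel begins.",  # invented
    "The first checked bag costs $30 on domestic economy fares.",
]

def similarity(a: str, b: str) -> float:
    # Toy lexical overlap; a real system would use embedding similarity.
    wa = set(re.findall(r"[a-z0-9$]+", a.lower()))
    wb = set(re.findall(r"[a-z0-9$]+", b.lower()))
    return len(wa & wb) / len(wa | wb)

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever model API the chatbot uses.
    return f"[model reply constrained to the prompt below]\n{prompt}"

def rag_answer(question: str, threshold: float = 0.15) -> str:
    best = max(POLICY_DOCS, key=lambda d: similarity(question, d))
    if similarity(question, best) < threshold:
        # Retrieval gate failed: don't let the model guess.
        return "I don't know -- connecting you to a human agent."
    return call_llm(
        f"Answer ONLY from this policy text, or say you cannot:\n{best}\n\nQ: {question}"
    )

print(rag_answer("When do bereavement fares need to be requested?"))  # grounded
print(rag_answer("Do you allow emotional support peacocks?"))         # -> I don't know
```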

jerpint
2 replies
1d16h

There are no guarantees with RAG either, and RAG only works when the answer to the question is already printed out somewhere explicitly in the text, otherwise it’s definitely prone to hallucinate

alexxys
1 replies
1d16h

Yeah, RAG can't provide such guarantees. Moreover, even if the correct answer is printed somewhere, LLM+RAG may still produce a wrong answer. Example from MS Copilot with GPT-4: https://sl.bing.net/ct6wwRjzkPc It claims that the OnePlus 6 has a 6.4-inch display, but all linked pages actually claim that it's 6.28. The display resolution and aspect ratio are also wrong in the response.

steveBK123
0 replies
1d14h

It's funny it seems to have a lot of trouble extracting tabular data, which arguably is one of the things I hear people trying to do with it..

sebastiennight
0 replies
1d16h

This claim seems wildly inaccurate, as even with GPT-4 in a single conversation thread with previous human-written answers included, a repeat of a similar question just resulted - in my testing today - in a completely hallucinated answer.

I think your claim might be based on anecdotal testing (I used to have that same feeling after my first implementation of RAG). Once you get a few thousand users running RAG-based conversations, you quickly see that it's "good enough to be useful", but far from being as dreamy as promised.

fivre
0 replies
1d15h

do the people managing the chatbot know that though?

this shit gets sold as a way to replace employees with, essentially, just the middle manager that was over them, who is now responsible for managing the chatbot instead of managing people

while managers are often actually not great at people management, it's at least a somewhat intuitive skill for many. interacting with and directing other humans is something that many people are able to gain experience with outside of work, since it's a necessary life skill unless you're a hermit. furthermore, as a hedge against managerial ineptitude, humans are adaptable creatures that can recognize their manager's shortcomings and determine when and how to work around them to actually get the job done

understanding the intricacies of training a machine learning system is a highly specialized and technical skill that nobody is going to pick up base knowledge for in the regular course of life. the skill floor for the average person tasked with it will be much lower than that of people management, and they will probably fuck up, a lot

the onus is ostensibly on AI system vendors to make their systems idiot-proof, but how many vendors actually do so past the point of "looks good enough to close the sale in a demo"? designing such a system is _incredibly_ hard, and the unfortunate reality is that if you try, you'll lose sales to snake oil salesmen who are content to push hokum trash with a fancy coat of paint.

these systems can work as a force multiplier in the hands of the capable, but work as an incompetence magnifier in the hands of the incapable, and there are plenty of dunning-krugerites lusting to magnify their incompetence

citizenpaul
4 replies
1d16h

that hallucinations will cost money in fines & lawsuits.

Sure. They are now out about $600. They probably already laid off 500+ customer service jobs, costing conservatively $30k a year each, not including management, training, health benefits, etc. I don't think it will make a difference to the ivory-tower C-levels. We will just all get used to a once-again-lower-quality help/product. Another great "enshittification" wave of the future with "AI".

It also assumes that the customer service people don't make mistakes at a similar level anyway.

Another "new normal" How come anything that is "new normal" is never good?

eru
3 replies
1d15h

Another "new normal" How come anything that is "new normal" is never good?

If it allows them to reduce costs (and there's enough competition to force them to pass that on as reduced prices), I'm fairly happy with a new normal.

See also how air travel in general used to be a lot more glamorous, but also a lot more expensive.

lamontcg
1 replies
1d10h

and there's enough competition to force them to pass that on as reduced prices

i found the bug.

eru
0 replies
1d5h

Cynicism aside, air travel is one of the industries with pretty healthy levels of competition. (At least in Europe and South East Asia. I haven't spent much time in North America, so can't judge the market there.)

People love to hate e.g. RyanAir, but their effect on prices is felt throughout the industry, even if you never take a single RyanAir flight.

citizenpaul
0 replies
15h41m

Yeah, they pass those cost savings right on to record corporate profits, as they have for the last 20 years...

jowea
2 replies
1d17h

The fines and lawsuits may be way cheaper than human staff.

jorisboris
1 replies
1d16h

Especially once we have AI lawyers ;)

photonthug
0 replies
1d15h

This initially sounded pretty good until I thought it through. Democratizing access to counsel and forcing troll lawyers to deal with trolling bots seems good, but it will shape up like other spam arms races while legal systems gear up to deal with the DDoS attacks. Good for spammers and the most entrenched players, bad for the public at large.

Already we can’t manage to prosecute ex-presidents in a timely manner before the next election cycle. If delays seem absurd now, what will it be like when anything and everything remotely legal takes 10+ years and already sky-high costs triple?

m_0x
3 replies
1d17h

If you can't trust it 99% of the time

A chatbot should be either 100% or 0%. Companies should not replace humans with faulty technology.

smegger001
0 replies
1d15h

I would say meet or beat human customer support agent accuracy; 100% is in many cases not achievable for machine or human.

canadiantim
0 replies
1d17h

but humans aren't 100% either... seems ridiculous to demand 100% from any implementation

ado__dev
0 replies
1d17h

Agree there. I put 99% as even human reps sometimes get it wrong, but in my experience whenever a human agent has made a mistake and relayed wrong info, the company would take appropriate steps to meet me at least half way.

dataflow
2 replies
1d16h

then what's the point of using the chatbot in the first place?

The point is quite literally to make you give up trying to contact customer service and just pay them money, while getting their legal obligations as close to a heads-I-win, tails-you-lose situation as possible. That's not the mysterious part. The mysterious part is, why did they even let this drag into court for such a small sum?!

potatolicious
1 replies
1d15h

"The mysterious part is, why did they even let this drag into court for such a small sum?!"

Because most people wouldn't bother taking it to court.

If they rolled over and paid up every time their chatbot made a mistake, that gets expensive, and teaches customers that they can easily be compensated if the chatbot screws up.

If they fight it tooth and nail and drag it all the way to court, it teaches customers that pursuing minor mistakes is personally painful and probably not worth it.

Scorched-earth defense tactics can be effective at deterring anyone from seeking redress.

It's the same fundamental reason why customer support is so hard to reach for many companies - if you make it painful enough maybe the customer will just not bother. A valuable tactic if your company imagines customers as annoying fleshy cash dispensers that talk too much. Having flown many times with Air Canada I can confirm that they do seem to perceive their passengers as annoying cash dispensers.

erhaetherth
0 replies
1d15h

Well... they lost, and now it made the news. Are they going to keep the chatbot? Is the judge going to be so lenient next time, now that there's precedent of wrongdoing?

JoshTriplett
2 replies
1d16h

If I am trying to interact with a company and they direct me to their chatbot, I expect that chatbot to provide me with accurate answers 100% of the time (or to say it can't help me in the event that I ask a question that it's not meant to solve, and connect me to a representative who can).

If I'm trying to interact with a company and they direct me to a chatbot, I expect to get useful help 0% of the time, because if help was available via a mechanism on their site I would already have found it. I expect a chatbot to stall me as long as possible before either conceding that I need a human's assistance or telling me some further hoop to jump through to reach a real human.

eru
1 replies
1d15h

Honestly, that's pretty similar to dealing with front-line level 1 human support.

JoshTriplett
0 replies
1d9h

I have a slightly higher expectation that first-line tech support can solve my problem if the problem is "you really should have had a self-service way to do this on your website but I'm sure you have a tool for this".

And if that isn't the case, I've mostly found that contrary to stereotype, many first-line tech support people are not such rote script-followers that they can't deal with skipping most of the script when the problem is obviously on their end and going to need real human intervention.

mattlondon
0 replies
1d16h

This is true of all LLMs: you cannot trust a single thing they say. Everything needs to be checked - from airline fee information to code.

I expect we'll see this sort of thing a lot more in the future, and probably a bit of a subsequent reversal of all of the sackings of humans once the issues (... and legal liability!) becomes clearer to people.

gwbas1c
0 replies
1d13h

My internet went down and I could only get a chatbot on the website or a hang-up on the support line.

After the "estimated fix by" ETA came and went, I reported my ISP to the FCC. That resulted in a quick follow-up from a real human.

guardiangod
33 replies
3d14h

I just want to give a shoutout to BC's Civil Resolution Tribunal. They take their job seriously and make it as easy as possible for plaintiffs to submit a complaint.

I once had the misfortune of getting a batch of defective enterprise-grade SSDs from an S company. That S company requires all RMAs to go through the sales channel you bought the SSDs from, but the sales company we used was out of business.

S refused all attempts to RMA, stonewalling us by saying that we needed to return the drives through the bankrupt company. When we explained that the company was bankrupt, S just ignored us. When we created a new RMA request, S's rep said we already had an open case and that we needed to return the drives, blah blah blah.

After 5 months, in a fit of rage I typed up a 2,000-word complaint, gathered all the email/phone-call/photo evidence, and submitted a complaint to the CRT ($75 fee). I wasn't expecting much, but within 3 weeks I got a call from a corporate lawyer in S company's Toronto office, who asked me about the situation, apologized profusely, and asked if I would drop the case if they RMA'd all affected SSDs.

That day was great, to say the least.

Aside:

The CRT posts all their cases (that reached arbitration) here- https://decisions.civilresolutionbc.ca/crt/en/nav.do

Reading the cases is quite an entertaining time-passer.

ornornor
12 replies
3d12h

S is such a disaster that I have established a personal policy of staying away from S and never buying their things (TVs, fridges, phones, SSDs, printers... anything they make). This policy has drastically improved my sanity. They produce expensive junk and have no regard for quality or security/privacy. I’d still buy things that have S components in them, but not whole S devices.

choilive
11 replies
3d12h

Yes - my house was full of Samsung appliances: stove, fridge, microwave, washer, dryer.

All garbage. They are all falling apart now or became uneconomical to repair within 5 years. Every single appliance repair business I called flat out wouldn't touch the fridge, for example. Apparently they don't provide service information or parts to third parties (at least for the fridge).

I have moved on to a different brand, but am waiting to see if it's any more reliable..

PakG1
8 replies
3d11h

When we were purchasing a clothes washer and dryer, Samsung had a special promotion. The sales rep at the store told us that the Samsung machines got the most complaints and she would recommend the LG machines. But we wanted that promotion, it was oh so nice. We bought a 5-year warranty just in case.

Sure enough, it's year 3 and the washer has stopped working. Repair guy came and decided he needs to order new parts to fix it. It's been a week or so without doing any laundry. Glad we purchased the extra warranty, but maybe we should have gone with the LG like the sales lady recommended.

pjc50
6 replies
3d2h

The long-term brand you want is Miele. They're not cheap but my parents' dishwasher is approaching 30 years old.

eszed
5 replies
2d10h

Without knowing anything in particular about Miele, all this anecdote suggests is that they were great thirty years ago. They could well have enshittified since then.

I'm at the point where I don't trust any brands at all anymore. The next time I need to make a major appliance purchase I'll buy a subscription to Consumer Reports and blindly follow their recommendation - I still trust them.

pintxo
1 replies
2d9h

Apparently Miele has started to have quality issues. But they still might be a good bet, if only for the fact that they are (probably?) the last family run business in the market.

eszed
0 replies
1d19h

I did not know that! Thanks. Indeed, "family run", depending on where they are in the internal-to-the-family management-transition cycle, is more encouraging to me than "publicly held". ("Private equity" is always and everywhere a huge red flag.)

It's depressing to me that we have to think about those things. I mean, "buyer beware" has always been the case, but it seems like we have to be more wary (or more wary of more factors) than we did a decade or two ago. Or maybe I'm just getting older. I dunno.

danielscrubs
1 replies
2d9h

But isn't that the crux of the matter? You buy what Consumer Reports says, and the reviewers have no way of knowing if it will break down in 3 years. No one rates their gadget after three years, so we have a massive blind spot where the best source is still word of mouth.

My parents bought a Miele washing machine; rock solid even after pushing ten years.

eszed
0 replies
1d19h

Yeah, totally. That's where branding used to be a valuable signal, under the assumption that a company wouldn't deliberately choose to destroy its long-term value. I don't believe that anymore, so I'll place what remains of my trust in reviewers I know are independent (God help us all if it turns out CR is taking kickbacks or something) and who I figure know more about, say, washing machines than I do.

mytailorisrich
0 replies
1d10h

Miele now has cheaper models so you may be right to be cautious.

Personally I have had issues with Bosch and don't trust them anymore.

The result is that now I either care about a specific look, some specific features, etc., and pay a bit more for them, or I just go for the cheapest.

debian3
0 replies
3d4h

Get a SpeedQueen next time. There’s still quality out there; you need to stop listening to sales and do research.

kube-system
1 replies
3d11h

I also have had the same experience with my Samsung appliances. I paid top dollar for a nice looking set of laundry machines only to find out that they have garbage components inside of them that are bad by design.

I had a squeaking dryer fixed under warranty, only to have the same issue recur multiple times, because the rollers are just junk. It needs a new set like clockwork every year and a half or so; I have the replacement procedure memorized now.

The washer seems to be allergic to water and soap. I keep the unit in a dry location, leveled and raised off the floor, yet, the body of the unit is rusting out, the chrome finish on the door is peeling, and when I clean it, the cycle labels wipe right off the front panel. The pump has also failed due to rust on the motor.

Absolute trash. I probably would have been better off with a $400 top loader.

londons_explore
0 replies
3d11h

yet, the body of the unit is rusting out

Happened to my ~2016 Samsung. Turns out a hose clamp wasn't properly installed and water was dripping onto the steel floorpan and rusting everything nearby.

Fixed it and all rusting halted.

teraflop
7 replies
3d13h

Hmm, browsing through some of those cases, I'm starting to notice a pattern of Air Canada not taking these tribunal proceedings entirely seriously.

From this case:

I find that if Air Canada wanted to raise a contractual defence, it needed to provide the relevant portions of the contract. It did not, so it has not proven a contractual defence. [...]

In its boilerplate Dispute Response, Air Canada denies “each and every” one of Mr. Moffatt’s allegations generally. However, it did not provide any evidence to the contrary.

From https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5254...

Despite having the opportunity to provide documentary evidence, Air Canada did not do so.

From https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5249... and https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5188...

Having reviewed the evidence, I am satisfied, on the balance of probabilities, that [Air Canada] received the Dispute Notice and did not respond to it by the deadline set out in the CRT's rules.

From https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5230...

Based on the proof of notice form submitted by the applicant, I am satisfied that [Air Canada] received the Dispute Notice and did not respond to it by the deadline set out in the CRT's rules.

(I also found a fun one that hinges on an Air Canada employee's apparent inability to do basic arithmetic: https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5225...)

qingcharles
3 replies
3d12h

I see a lot of stuff like that in civil litigation in David v. Goliath situations.

Usually the large entity puts in little effort and relies on the fact that its (much more expensive) lawyers generally have more sway with the court (judge), are more persuasive even when their arguments are nonsense, and can just drag cases on for years until the smaller party is burned out.

wlonkly
2 replies
2d14h

One nice touch about BC's CRT is that neither side is allowed to be represented by lawyers without specific approval by the tribunal.

subarctic
1 replies
1d15h

That's interesting. What does that mean in practice? Do they send people who have a lot of legal training and experience but aren't bar members?

wlonkly
0 replies
1d13h

Honestly not sure, but with only $800 at risk I can't imagine them even sending a paralegal.

wlonkly
0 replies
2d14h

In the case at hand, it sounds like they decided they wouldn't spend over $800 to avoid losing another $800. Which is reasonable, I suppose.

(But another constraint of the CRT is that you can't bring representation -- so while it was an Air Canada employee involved, it wasn't their legal team.)

pjc50
0 replies
3d2h

Actually doing all of that would be quite expensive, so they don't, and instead rely on people giving up or not knowing how to assert their rights.

Orangeair
0 replies
3d12h

That last one was great

In his statement, Mr. Mackoff described distinct conversations he had with each employee, provided the supervisor’s name, and submitted the diagram he drew while trying to explain to the employees how to count the 10 calendar days. As Mr. Mackoff’s witness statement includes so much detail, and as Air Canada has produced no contrary statement, I accept that Air Canada refused to transport both Mr. and Mrs. Mackoff on February 15, 2022 and so breached its contract with them.
anamexis
7 replies
3d14h

What is an S company?

calamari4065
2 replies
3d14h

Samsung

muro
1 replies
3d13h

Could be Seagate and SanDisk too ...

denysvitali
0 replies
3d11h

I expected this to be SanDisk. I bought one USB drive from them, and after a few writes it locked itself from writing further.

Apparently I'm not alone and the only fix is to throw the thing away.

bakkerthehacker
2 replies
3d14h

Samsung

I've also had to deal with their lack of Canadian RMA for 2 SSDs. Had to go back and forth with them and trying to convince Amazon and the Amazon seller to replace the defective drives.

Not buying any more Samsung memory products due to their essentially non-existent warranty in Canada.

somerandomqaguy
0 replies
3d11h

Buy through Memory Express if you've got a store nearby; you just need to keep the receipt for the warranty period. They have a price-matching policy for items with the same SKU as well.

MemEx is an authorized dealer so they'll take it back and send it back to Samsung for warranty work if you buy from them.

rpy
0 replies
3d12h

Took about 5 months of endless back and forth to get them to replace a defective SSD in Australia too, despite very clear consumer law guarantees here obligating them to help. Absolutely hopeless company to deal with.

ProllyInfamous
0 replies
3d14h

Seems like an abbreviation for a USD $400B+ company that manufactures SSDs, with the commenter not naming it specifically (presumably, smartly, to avoid a SLAPP lawsuit / libel claim).

refurb
0 replies
3d14h

Even as a smaller-government advocate, I've been a big proponent of things like the CRT.

One of the core functions of the government is enforcement of contracts. While there are the courts, they are out of reach for most people either due to skill level or financial constraints.

Having a simple, low cost, easily accessible way to resolve contract issues puts every member of society on a more even footing when it comes to economic interactions. If we're going to build our society based on capitalism and the ability for parties to enter into contracts for things like employment, buying/selling, housing, etc, having an efficient means to resolve disputes seems like a no-brainer.

hamandcheese
0 replies
3d10h

Can we please name the company rather than make people guess?

Am4TIfIsER0ppos
0 replies
3d2h

This isn't Harry Potter and they aren't Voldemort! Are all the replies correct in assuming this is Samsung? You can say their name.

20kleagues
0 replies
3d14h

BC CRT is great when the happy pathway happens. They send legal letters to all parties so they are able to arbitrate, and that letter might be enough to get things resolved.

I had the misfortune of trying to use them against a company that had just stopped responding. In the end, even though I did get the default judgment in my favour, actually enforcing the judgment still required me to go through the normal courts (which in my case was not worth the cost). But the process of dealing with the CRT was nothing short of delightful.

ApolloFortyNine
14 replies
3d15h

Yo, what was Air Canada thinking here... 1 week after the flight, and he even provided the death certificate?

How'd anyone let this go to 'court' (I'm not Canadian; it's a tribunal, idk what that is) for $600? And I'm guessing it's Canadian dollars, so it's more like $400 US. What kind of point were they trying to prove here?

I legitimately think you could talk Amazon support into giving you that over a broken product.

jannyfer
9 replies
3d14h

And I’m guessing it’s Canadian so it’s more like $400 US.

FYI, to a Canadian, $600 CAD feels like what $600 USD feels like to an American. Canadian wages aren’t 30% higher in numerical value than US wages.

fumeux_fume
8 replies
3d13h

Granted, I’m an American and I’ve had a couple glasses of wine tonight, but I’ve read this comment like 8 times and it still makes no sense to me.

D13Fd
7 replies
3d13h

The currency is worth about 30% less than a U.S. dollar, but the cost of living is also significantly lower, so to a person living in either country $600 feels like about the same amount of money.

johnwalkr
5 replies
3d13h

The cost of living is often not lower in Canada. Average housing cost is now significantly higher in Canada, and for those Canadians with an easy path to move to the US (like SW developers, engineers, or doctors), the numbers I've seen recently are 2-3x the salary in the US, 0.5-0.7x the housing cost, and slightly lower other cost of living.

seattle_spring
4 replies
3d12h

Where in the US are comparable locations half as expensive? I agree that Canadian salaries trend much lower.

johnwalkr
3 replies
3d8h

"Comparable locations" is very subjective, but Vancouver vs Denver is one I've heard of that seemed convincing to me. Then again, I've never been to Denver.

__turbobrew__
2 replies
2d12h

Those cities are not remotely comparable. Vancouver and Seattle are a much better comparison. From a cursory search, an average home in Vancouver is CAD $1.1 million and an average home in Seattle is USD $800k. So once you take the exchange rate into account, they are actually fairly similar.

johnwalkr
1 replies
2d9h

You're right about the comparison between Seattle and Vancouver, but "housing is half the price" is a common comment from Canadians who have made the move (both online and people I know personally). It may be an exaggeration, but the US has 10x the population and more medium-large cities to choose from, so I think it's true that at least some people can find a significantly cheaper place that is personally comparable and has a job for them. If you're a software engineer who lives in Vancouver (1x salary, 1x housing prices) and loves the mountains and wants to move to another mountain-adjacent place, Denver (2x salary, 0.5x housing prices) is indeed pretty enticing compared to Seattle (2.5x salary, 1x housing prices) or Calgary (1x salary, 0.4x housing prices).[1]

[1] Guess but not complete wild-guess multipliers
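
Putting those guessed multipliers into numbers (same footnote-level confidence as above; Vancouver is the 1x baseline, and the figures are illustrative, not data):

  # Rough (salary multiple, housing-price multiple) vs Vancouver; guesses only.
  cities = {
      "Vancouver": (1.0, 1.0),
      "Denver": (2.0, 0.5),
      "Seattle": (2.5, 1.0),
      "Calgary": (1.0, 0.4),
  }
  for city, (salary, housing) in cities.items():
      # Salary-to-housing ratio: higher means your pay stretches further.
      print(f"{city}: {salary / housing:.1f}x vs Vancouver")
  # Vancouver 1.0x, Denver 4.0x, Seattle 2.5x, Calgary 2.5x

On that crude measure Denver comes out far ahead, which is the "pretty enticing" intuition.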

__turbobrew__
0 replies
1d22h

I agree. There are lots of places in Canada with 0.3x house prices, but once you filter for 1x salary and places that are nice to live in, you are left with an empty set.

The ideal situation is you get a remote job in Canada for 1x salary and live in a nice place with 0.3x houses. That is my current setup.

anonymousab
0 replies
3d13h

but the cost of living is also significantly lower,

You can look at random things like groceries, homes or car insurance to see that this isn't really true. 600 USD (or even 600 CAD) goes a heck of a lot further in most of the USA.

dghlsakjg
1 replies
3d15h

The civil resolution tribunal is kind of a hybrid small claims court and arbitrator. It was originally started about a decade ago with limited scope over condo/HOA disputes and small claims, but has been expanded.

They have legal authority, but you can always appeal to the provincial courts (which almost never works out; I think they agree with CRT decisions in 95% of cases).

Air Canada baffles me. Their front-line employees are powerless and frequently hostile. But I have never submitted a complaint to corporate without being given at least $200 CAD worth of flight credit. Most recently I was yelled at and hung up on by a customer service agent; I got a coupon for 20% off any itinerary with up to 4 passengers. I'm not even a member of their rewards program!

Scoundreller
0 replies
3d8h

Same here. I came across something broken on a plane that wasn't serious in any way and that I wasn't even mad about. But I don't bother going through "feedback" or "comment" forms because I figure they never act on those.

So I submit my “complaint”.

A few months later, I get a response clearly showing that they didn’t read what I wrote, and almost certainly didn’t put in any plan to fix it; but they gave me a $400 credit.

I wasn’t even angry in my “complaint”. Maybe I need to be nicer in my actual complaints in general in the future.

Thought it was a scam when I got the response but I’ll take it.

pwarner
0 replies
3d15h

Yeah the real story seems to be garbage customer service: by the AI and humans at Air Canada. Perhaps the bot is implemented perfectly to match the humans it replaced.

cperciva
0 replies
3d15h

it's a tribunal idk what that is

The Civil Resolution Tribunal is better known as "online small claims court". It's something BC introduced a few years ago to streamline the process.

brentm
13 replies
1d18h

I just can't believe it took a lawsuit to get an airline to fork over a few hundred bucks on something that was obviously their fault.

SuperNinKenDo
4 replies
1d17h

My guess is that they felt confident they could set a precedent.

Of course the argument is absurd, but this is exactly where companies like this would love to go. Virtually free support staff that incur the company absolutely no liability whatsoever.

ryandrake
3 replies
1d17h

Maybe companies are so used to their multimillion-dollar legal teams winning against broke Average Joes that their legal departments are starting to just assume it will always happen and that all court precedents are going to be favorable. They've been sniffing their own farts for so long they think it's fresh air.

astrange
2 replies
1d16h

If you sue a company in small claims, having an expensive legal team doesn't help them that much.

SuperNinKenDo
1 replies
1d11h

Could you elaborate?

astrange
0 replies
19h46m

Mostly all the expensive procedures aren't allowed, there isn't a jury so it moves faster, and in some places lawyers aren't allowed in court even to represent a business.

ceejayoz
3 replies
1d18h

Yeah; this should've been well within some second-tier customer service manager's discretionary refund budget. Instead they've got a precedent-setting ruling that makes the chatbot a huge liability.

tomrod
0 replies
1d17h

chatbot a huge liability

This is an appropriate outcome, in my view. I'm as pro-AI as they come. But I also recognize that without a clear standard of service delivery, and in an industry inundated with M&A instead of competition, a chatbot isn't a helpful thing but a labor cost reduction initiative.

jlarocco
0 replies
1d17h

Instead they've got a precedent-setting ruling that makes the chatbot a huge liability.

Good.

beambot
0 replies
1d17h

Recall when the Chevy dealership's chatbot agreed to sell a Chevy Tahoe for $1...

https://www.businessinsider.com/car-dealership-chevrolet-cha...

ssnistfajen
0 replies
1d16h

At least the outcome is favourable to the customer who was misled, and since Canada is a Common Law country, this ruling will establish a precedent for future corporate negligence with their chatbots in Canada.

delfinom
0 replies
1d17h

They wanted to set a precedent they could use when they replace customer service with shit spewing bots.

briga
0 replies
1d17h

The problem with Canadian airlines (and Canadian companies in general) is that if you don't like the service, there aren't really any meaningful alternatives. The two largest carriers have a near-total stranglehold on the market (75%+ of flights). They have practically zero incentive to improve because they are propped up by protectionist Canadian policies that prevent foreign companies from coming in with any meaningful competition. Their main competitor, WestJet, is similarly bad. Story of the Canadian economy, really.

akira2501
0 replies
1d17h

It's a signal that the company has internally configured itself in such a way as to insulate the administrative section from the operational section. They've likely done this in an effort to avoid the types of information that would force them to spend money.

It's literally just corporate ignorance as a business strategy. What's sad is, by and large, it works in their favor.

In any case, it's an equally good signal that you don't want to fly with them.

janalsncm
10 replies
3d15h

Good. Company outsources customer “service” to a completely unreliable piece of software, which goes to show how much they care about their customers. And then they argue in court that customers shouldn’t even trust their own software! So essentially they have increased profits by cutting customer service jobs, replaced humans with a stochastic parrot, and now don’t want to be responsible for the drawbacks of this decision. They want all of the upside and none of the downside.

apapapa
6 replies
3d13h

In their defense, I've never seen an airline company caring about its customers... otherwise they wouldn't be late 25% of the time.

CaptainZapp
5 replies
3d13h

Not necessarily so.

A couple weeks ago Oman Air cancelled the return leg of a long distance flight due to a change in schedule.

They offered to reroute me with the Qataris via Doha.

I preferred to cancel the (in principle non-refundable) flight and make different arrangements.

The money, ~$2,500, was credited to my card 4 days later, including the fees paid for preferred seats.

It's a shame that they stopped service to my city, because it's a great airline which always provided stellar service.

rowyourboat
2 replies
3d12h

Those are your basic rights: You entered into a contract with the airline, and the airline failed to deliver. Of course you get your money back if the alternative solution is not satisfactory - whether or not the ticket was refundable doesn't even enter into it, as it was the airline that failed to deliver in the first place. That's not stellar service, that's just fulfilling their legal obligations.

throwaway2037
0 replies
3d11h

I think the point being made is that it was refunded very quickly and without hassle. Other, less scrupulous/ethical airlines would try more tactics to redirect you to a worse flight or delay the refund (if it comes at all -- "oh, it was lost in our system").

csomar
0 replies
3d7h

I am not sure which airline represents your average experience, but in my experience almost all airlines will fight you "to the death" not to give you anything back. In the times they do (especially European ones), it's because there is a law that requires them to.

What happened to the OP is, therefore, unusual.

j7ake
0 replies
3d11h

My experience with Emirates was similar. Extremely generous refund policies and fast action.

csomar
0 replies
3d7h

Oman Air is trying to offer good service in a tight, extremely cut-throat market (the Middle-East) where its competitors are giants (Qatar and Emirates) or cash drunk (Saudi Airlines).

I flew with them last year. Check-in was a bit of a hassle, but the plane, the service, and the layover were great. The price was very competitive (cheapest), and yet the plane was empty. I expected them to either fold or downsize.

EZ-E
2 replies
3d12h

Most likely they will just append to the chatbot's answers: "Please make sure to double-check our policy at $URL - only our policy applies."

spiffytech
0 replies
3d5h

Fortunately, the tribunal rejected that tactic in this instance.

The chatbot included a link to the detailed rules, which contradicted what the chatbot told the customer.

"While a chatbot has an interactive component, it is still just a part of Air Canada’s website ... It should be obvious to Air Canada that it is responsible for all the information on its website ... There is no reason why Mr. Moffatt should know that one section of Air Canada’s webpage is accurate, and another is not."

CogitoCogito
0 replies
3d8h

I really hope that wouldn’t get them out of it. In that case, Air Canada would still be misrepresenting their policies. Misrepresenting Air Canada policies to induce a purchase of a ticket may not legally be fraud, but it certainly feels like it should be. It’s also hard for me to see how that argument would square with this reasoning from the article:

"While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website," he wrote.

Arrath
10 replies
1d18h

Quite a sensible judgement, it seems.

Pretty outrageous for the airline to try to claim the chatbot was its own legal entity.

dmurray
4 replies
1d17h

The guy who bought the Chevy Tahoe for $1 [0] isn't going to get it, though. And that's also perfectly sensible.

But at some point we're going to see more cases in the grey area in between. What's the important difference?

In both cases I think the result would be the same if the chatbot had been a human. GM doesn't have to honour every promise a sales rep makes, even if that rep is nominally entitled to enter into contracts for the company - otherwise someone might agree to sell their whole stock to an accomplice for $1. The same applies to Air Canada: my buddy there can't "advise" me that I get free flights for life and have that honoured.

So where is the line? Is it about good faith by the customer, or about what a reasonable person might think the company would offer them?

[0] https://twitter.com/ChrisJBakke/status/1736533308849443121

tomrod
1 replies
1d17h

GM doesn't have to honour every promise a sales rep makes, even if that rep is nominally entitled to enter into contracts for the company - otherwise someone might agree to sell their whole stock to an accomplice for $1

The law doesn't protect a company in the world you laid out; internal compliance and controls do. A sales rep in a company with bad controls may well do exactly what you laid out.

dmurray
0 replies
1d8h

This is nonsense. Can you imagine the chaos of a world where what you said is even close to true?

I'm a cashier at Walmart. One day I sell to you for $100 - not just everything in the store, but the building, the local distribution centre and even the corporate headquarters.

And in your view - Walmart has no legal recourse against me or against you? They should just peacefully vacate the buildings and hand over the keys? Their remedy to this is to discipline me according to their internal controls - maybe put me on a PIP and remind me that the company handbook forbids these deals?

No, this just isn't a deal that can happen. It's not about reasonableness, consideration or unconscionability - the deal is void even if the buyer agreed to pay $100 million. It's not about good faith on the buyer's side - it's void even if you thought I was the VP of Real Estate. I can't sell Walmart's property at any price, even though I'm otherwise empowered to do business on behalf of the company.

ryandrake
0 replies
1d17h

Somehow, I bet if the sales rep or a chatbot convinced the guy to pay $200K for a Chevy Tahoe, the full force of GM's legal team would ensure the customer was held to that. But, when it goes the other way, and the sales rep or chatbot is convinced to sell it for $1, suddenly it's not "sensible."

overstay8930
0 replies
1d17h

The line was defined a long time ago: AI chatbots are considered regular customer service agents because no law has said otherwise, and they represent themselves as such, and we already know that reasonable things said by customer service agents can be taken as fact.

Aachen
2 replies
1d16h

The article doesn't say (imo an omission), but I assume the argument wasn't that level of stupid

More likely, they blamed a third party vendor that developed, configured, or hosts the bot

Which sounds like a similar situation to when your taxi breaks down due to a mechanic's shoddy work: it's not the passenger's fault that your mechanic sucked; you were contracted to get them from A to B and may be on the hook if you stated you'd get them there on time. Here, it's not the user's fault that the chatbot was shoddy and stated something that the airline now doesn't want to fulfil. If AirCan wants to blame their vendor, they can go right ahead, but this person has a right to the reduced flight price independently of whether AirCan gets the money back from their vendor.

But explaining all that, instead of saying "haha, they claimed the chatbot is an independent entity!", probably gets shared less (it's yesterday's top comment, after all) and thus produces fewer conversions from website readers into subscribers.

erhaetherth
1 replies
1d15h

If they dragged the customer into court, they already screwed up.

They should have paid the customer immediately and then taken it up with their vendor. If they want to take the vendor to court, they can do that separately.

Aachen
0 replies
1d7h

Yeah, we all agree on that. I'm just saying the article appears to exaggerate it further for comedic effect

itsmartapuntocm
0 replies
1d18h

They want to have their cake and eat it too: use AI to get rid of paying for labor, but not assume any of the risks that go along with it.

arcticbull
0 replies
1d18h

Surprised they didn't try and claim it was an independent contractor.

usaar333
8 replies
3d15h

I find it hard to understand the calculus on Air Canada's side of fighting this. Not a lot of money and really bad press.

dmix
2 replies
3d15h

Because it probably threatens a whole new customer support system they spent 100x more developing and migrating to than they spent on lawyers for this case.

ApolloFortyNine
1 replies
3d15h

Out-of-court settlements have no effect on precedent. We wouldn't even be here if the manager this got escalated to at some point had just done their job and made an exception.

dmix
0 replies
3d13h

So it wouldn’t make other lawsuits easier in civil law? I don’t know the law well here

plorg
0 replies
3d15h

If they can be held liable for their shitty software replacing a human then it might not actually be cheaper to replace the human with shitty software.

kwar13
0 replies
3d14h

Because they're mostly a monopoly and have been doing whatever they want for decades by now. They've gone bankrupt time and time again and been bailed out by tax money.

dghlsakjg
0 replies
3d15h

You overestimate AC.

They are buried in compensation claims right now due to them claiming that crew shortages in 2021 and 2022 were out of their control, and the government regulators disagreeing with them.

My guess is that no one with any power bothered to look at this until it was too late to settle, and they thought it was worth the cost of fees to see if they could get out of paying.

HWR_14
0 replies
3d12h

Not a lot of money times many people is a lot of money.

EZ-E
0 replies
3d13h

Realistically, all complaints from the flyer went to first-level customer service agents, who are only told to enforce the policy as-is. This probably did not get escalated.

renewiltord
7 replies
3d15h

Where's the chatbot accessible from? I can't find it. I assume it was some old-school KB-query chatbot, not an LLM? The date says Nov 2022, and LLMs hadn't become quite as popular at that time yet. They obviously have to be responsible. What a nonsensical claim.

xyzzy_plugh
6 replies
3d15h

What difference does it make? If it happened today with an LLM the outcome should be the same.

petesergeant
5 replies
3d15h

What difference does it make?

In liability, none, but it'd at least be more understandable if it was an LLM, rather than something that should have been hard-coded with the right answers.

xyzzy_plugh
4 replies
3d14h

I'm not sure I follow. I wrote an "AI" chatbot in high school and it certainly didn't reproduce hard-coded "right answers".

LLMs don't somehow invalidate the work of their predecessors. Chat bots aren't new.

I'm not really sure why you brought up LLMs at all. Are chat bots synonymous with LLMs now? I sure hope not because then this sort of scenario only gets worse.

petesergeant
1 replies
3d13h

I'm not really sure why you brought up LLMs at all

I didn't

I wrote an "AI" chatbot in highschool and it certainly didn't reproduce hard-coded "right answers"

Sounds like it would have been a poor choice for a customer-service bot then?

xyzzy_plugh
0 replies
3d12h

My apologies, I misread.

And it would certainly have been a poor choice for customer service, but I have definitely used chat bots that are far worse than that one was.

darylteo
1 replies
3d12h

Are chat bots synonymous with LLMs now? I sure hope not because then this sort of scenario only gets worse.

You're right. LLMs are now the de facto standard implementation for support chatbots. Almost every chatbot platform offers an AI chatbot product in some form.

- they're also frequently shown to be prone to hallucinations
- and also shown to be tricked
- and can be gamed into breaking its prompt cage

https://twitter.com/ChrisJBakke/status/1736533308849443121

This case therefore sets a precedent for these scenarios, with or without a disclaimer that you should confirm this information with the dealership. If you assume the liability for the accuracy of "ye old bot" responses, then it raises the possibility that you assume the liability for the accuracy of "ye new bot" responses.

My opinion is that once the AI wild west phase has ended, and the legal reckoning is upon it, everyone will learn that using AI does not absolve one of liability. This would essentially kill the dream of full self-driving automation, among other things.

xyzzy_plugh
0 replies
3d3h

I don't think that's true. I do think it is a bit of a slippery slope.

Replicable, intelligent but fallible and disposable minds have incredible potential to positively impact our society. But somewhere there is an ethical and moral boundary to be crossed.

It's the journey, not the destination.

plantain
7 replies
3d15h

Air Canada has always been like this.

They were notorious amongst the stranded-abroad community during COVID for selling tickets on flights they weren't operating and had no intention of operating, then refusing to refund except with credits that also expired before the flights they intended to operate.

Scammers from top to bottom.

throwaway2037
2 replies
3d11h

Canada is a good democracy. Ask your parliament to split Air Canada into multiple carriers, or to reduce any state-provided advantages, to allow better competitors to emerge.

Georgelemental
1 replies
2d22h

Canada is a good democracy

Didn't they freeze hundreds of people's bank accounts, with no due process, for peaceful political demonstrations? https://www.bbc.com/news/world-us-canada-60383385

No better than Nigeria: https://www.hrw.org/news/2021/02/11/nigeria-finally-unfreeze...

zihotki
0 replies
2d6h

They're good, not perfect.

MichaelZuo
2 replies
3d14h

Were they ever practically compensated in the end?

cperciva
1 replies
3d14h

Yes, the government gave airlines a bailout conditional on them issuing refunds.

CogitoCogito
0 replies
3d8h

I don't know anything about the details of that bailout, but I feel the only reasonable and fair approach would be to (1) require that customers be refunded unconditionally and (2) require the airlines to pay statutory damages. Then _if_ Air Canada couldn't do that without a bailout, the government should have bailed them out while taking equity for the value of the bailout.

FireBeyond
0 replies
2d22h

Sounds like Qantas, who sold thousands of tickets on flights they knew were never going to operate.

ndjshe3838
7 replies
1d17h

I hate the whole attitude the industry has towards AI models, it just seems so sloppy compared to software of the past

It’s like “yeah we don’t really know what it’s going to do and it might screw up but whatever just launch it”

Really surprised me that so many companies are willing to jump into LLMs without any guarantee that the output won’t be something completely insane

metalliqaz
1 replies
1d17h

They are salivating at the idea of being able to get humans off their payroll.

The future is mass unemployment with all resources being split between a handful of trillionaires who control the AIs.

qayxc
0 replies
1d17h

That wouldn't make sense though - once a sizable portion of the customer base is too poor to buy the services, there's no source of income anymore and the whole system collapses.

kevin_b_er
1 replies
1d17h

The purpose of AI, and the entire drive of investment in it, is to eliminate labor costs. There's no other purpose. It is not to advance science or make a better world; it is to make money. That money comes at the expense of the many and the gain of a few. And there will not be new jobs to replace the ones lost. There's no more "learn to code!". They're there to replace all jobs possible, make a more unequal society, and nothing more.

You'd best understand this today.

hatthew
0 replies
1d16h

Can you define what you mean by "AI" here? I strongly disagree with your sentiment, but perhaps you and I have different ideas of what counts as AI.

bhaney
1 replies
1d17h

Too much money on the line to bother with due diligence

lesuorac
0 replies
1d17h

This is the part I never got.

How does using a computer suddenly wash away any responsibility? If Air Canada's desk agents were all a separate company and they told the guy the wrong information, isn't Air Canada still on the hook for training its sub-contractors?

bmitc
0 replies
1d17h

It's just proof companies don't care. The quicker they can turn their customers into little autonomous compute nodes, the better from their perspective.

I have also noticed an increase in automated call systems just flat out hanging up on me. As in: "We're experiencing higher than normal call volumes and cannot accept your call. Please call at a different time. Goodbye. <click>" How am I supposed to get help in such cases?

We've allowed companies to scale beyond their means, and they're getting away with more and more.

UPS destroyed a suitcase of ours and basically told us to go f ourselves. We could have sued in small claims court, but that's what they're betting on, that most people just give up.

And the chatbots are just terrible. And these days, the human representatives available have even less information than what the chatbots are provided with.

ilaksh
7 replies
3d15h

November 2022... it might have been a pretty weak model, or just out-of-date info.

This to me is a cautionary tale against deploying cheap small LLMs instead of using larger models. I think 7b models are very tempting to many business people who may value profits over just about anything else.

potatolicious
2 replies
3d15h

... Larger models aren't immune from hallucinations, nor is the rate of hallucination negligible enough to ignore with larger models.

This is a fundamental issue with the underlying technology, one that many companies in this space refuse to reckon with.

A lot of "this can be automated with AI!" startups are relying on the basic assumption that hallucinations can be tolerated - cases like this really narrow the field of use cases where that is true.

quirkot
0 replies
3d15h

Just wait till an aggressive lawyer finds out about this. There are gonna be a lot of companies taken to the cleaners for hallucinations.

ilaksh
0 replies
3d15h

Hallucination or inaccuracy is dramatically worse with 7b models versus the largest ones.

roncesvalles
1 replies
2d

I just don't understand why companies don't use something like Algolia on their KB articles with some careful manual keyword curation.

Imagine all the person had to do was type "bereavement" in a search bar and it instantly matched with the bereavement policy. What more does a person need?
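
As a minimal sketch of that kind of curated lookup (the keyword-to-article map is entirely made up, just to show the shape of it):

  # Hypothetical hand-curated keyword -> KB article map; not any airline's real data.
  KB_ARTICLES = {
      "bereavement": "https://example.com/kb/bereavement-travel",
      "baggage": "https://example.com/kb/baggage-allowance",
      "refund": "https://example.com/kb/refund-policy",
  }

  def search(query: str) -> list[str]:
      """Return every KB article whose curated keyword appears in the query."""
      q = query.lower()
      return [url for keyword, url in KB_ARTICLES.items() if keyword in q]

  print(search("How do I claim a bereavement fare?"))
  # ['https://example.com/kb/bereavement-travel']

No hallucination is possible here: the worst case is zero results, at which point you hand off to a human.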

ado__dev
0 replies
1d17h

Exactly this. I feel like LLMs are great for so many things and have a crap ton of real-world use cases. But for simple FAQs, there are so many better and cheaper solutions out there.

cscurmudgeon
1 replies
3d15h

There is no evidence they used a small LLM. There is no evidence that LLM size is the issue. And it's environmentally bad to use a large model when a smaller one would work just as well.

MBCook
0 replies
3d14h

There’s no evidence it was even an LLM at all.

kazinator
6 replies
2d

My takeaway from this is that Air Canada would rather fight a customer in court than suffer a few-hundred-dollar refund as the cost of figuring out they have a flaw in the consistency of their system. (And in the face of an extreme unlikelihood of winning against prevailing common sense, too.)

bena
2 replies
1d23h

Because if they win this, they get to keep the good and ignore the bad.

They get to automate a large chunk of their customer service and when the chatbot does something really stupid they can tap on the sign that says "We're not legally bound by our chatbot"

kazinator
1 replies
1d23h

But anything output by your chatbot is no different from something you put onto a billboard, or into a flyer or any other communication. The idea that a discount offered by a chatbot is not valid in the same way as one written on a coupon is nonsensical.

Maybe they were thinking, if we spend a bit more money on lawyers, we can try this crapshoot.

bena
0 replies
1d19h

Yes, that's what the ruling essentially said.

However, that's not what Air Canada wanted. And I'm not saying they should have won either. Just that, that's what they wanted.

Because if they can ignore the output of the chatbot when they want to, they can gut their customer service department.

denlekke
1 replies
1d23h

I imagine that paying out would set a precedent or provide future discoverable evidence that they want to avoid. Fighting (and winning) might have allowed them to continue their chatbot system without needing to change anything.

lawlessone
0 replies
1d21h

I think this has backfired enormously then. Wouldn't this case set a precedent?

bo1024
0 replies
1d23h

I wonder if they really thought this would set a precedent that benefited them.

nine_zeros
5 replies
3d16h

So a lawsuit vector exists against AI-enabled lack of care for customers.

Would be interesting to see if a cottage industry can open up around prompting inaccurate information out of company AI bots and reaping damages via lawsuits.

nkrisc
2 replies
3d15h

You'd have to do so without appearing to intend to do so, and in either case it would be fraudulent.

HWR_14
1 replies
3d12h

I don't see how it's fraud or would need to have a disguised intent.

postalrat
0 replies
2d5h

If your intent is to get a wrong answer and you succeed then nice job. Do you really deserve a reward?

nrmitchi
0 replies
3d16h

That seems like a stretch based on this specific case, given that the only award was explicit damages (i.e., a refund of funds already spent). No one ended up "ahead" here.

bee_rider
0 replies
3d15h

This would be good, actually, right? Better to have the chatbot misfire on somebody not actually depending on it for correct info.

userbinator
4 replies
3d14h

Companies should be responsible for the information they give customers, regardless of how they do it.

"Give me a real human" is usually what I say when it seems like I'm talking to a bot. Unfortunately, there have been times when I later discovered that the "bot" was actually a real human that was just acting like a bot!

While AI may seem to be improving, I always keep in mind the possibility that the opposite is also happening; and if you don't want your job replaced by a bot, perhaps you should not be acting like one.

colechristensen
2 replies
3d1h

if you don't want your job replaced by a bot, perhaps you should not be acting like one.

Call center folks, especially the first couple of layers of them, are on scripts. They have decision trees and what to say written down. Bots can be much more dynamic than them. It's a pretty terrible job unless you're fairly uniquely predisposed to liking that kind of work.

rsynnott
1 replies
2d23h

This case is an interesting example where the LLM is _worse_ than that; it's not on rails in the same way a front-line call-centre worker would be, so can make stuff up with impunity. This would never have happened with a conventional scripted hotline; at worst it could only have happened if the caller was escalated to someone with the authority to make stuff up, but for a simple question like this they wouldn't have been.

reaperman
0 replies
2d1h

It could be, though. The LLM could just choose from options one through ten of a list of prescribed responses, and an interposer could validate that the output of the LLM truly is an integer and then send the chosen message to the customer.
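
A minimal sketch of what that interposer could look like (the canned responses and fallback text here are hypothetical, just to illustrate the pattern):

  # Hypothetical human-vetted responses; the model may only pick one by number.
  CANNED_RESPONSES = [
      "Bereavement fares must be requested before travel; see our policy page.",
      "Refund requests can be submitted through our refunds page.",
      # ... up to option 10
  ]

  def interpose(llm_output: str) -> str:
      """Forward a vetted response only if the model returned a valid option number."""
      try:
          choice = int(llm_output.strip())
      except ValueError:
          return "Let me connect you with a human agent."
      if 1 <= choice <= len(CANNED_RESPONSES):
          return CANNED_RESPONSES[choice - 1]
      return "Let me connect you with a human agent."

  print(interpose(" 1 "))                     # vetted answer
  print(interpose("You get a full refund!"))  # never reaches the customer

The model then can't say anything a human didn't write first; the worst failure mode is an unnecessary handoff.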

blackboxlogic
0 replies
3d5h

That's why I start with "are you a human or a robot", then "what's your favorite color"

Sakos
4 replies
3d15h

Air Canada, for its part, argued that it could not be held liable for information provided by the bot.

"In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website," Rivers wrote.

This could be an article on The Onion. Unfortunately, I suspect this won't be the last time companies try to weasel their way out of the consequences of how they use AI in relation to their customers (i.e., us)

throwaway888abc
2 replies
3d15h

Indeed interesting, even in the alternative reality where their argument is valid: they still "hired" the chatbot to provide service on their behalf, so they are liable for it. Not a lawyer, just a spark of common sense?

bee_rider
1 replies
3d15h

I don’t think hiring is a good analogy here.

A human at least can bear responsibility. A chatbot cannot. That responsibility has to be absorbed by something. A company should be much more responsible for the actions of programs they run than for people they hire.

em-bee
0 replies
3d14h

If a human makes that mistake, the company is still responsible for it. The company can, however, then turn against the human they hired and sue them. If that court determines that said human does bear responsibility, the company will get something back. That has no bearing on what the customer gets from the company, however. So for the customer it is irrelevant how the company's mistake was made.

nerdponx
0 replies
3d14h

I love a good spicy legal opinion like this.

danepowell
3 replies
3d16h

I wonder whether the bot hallucinated the wrong information or whether the policy changed and the bot simply wasn't updated / retrained. The latter seems more likely but less interesting, akin to information on a boring HTML page getting overlooked during a site update.

dghlsakjg
2 replies
3d15h

The incident was in 2021, so I don’t think it was an LLM.

WirelessGigabit
1 replies
3d14h

No, but it would be nice if the same laws applied to LLMs. Too often they're now deployed as a quick fix for a chatbot.

Before, the bot either quoted me a solution or escalated to support.

Now it makes up a non-working solution.

MBCook
0 replies
3d14h

Honestly would it make any difference if the information was just on an FAQ page and it contradicted what the actual ticket contract said?

I'm with you. They should be held to the information they give out. Short of an employee purposely and maliciously giving out bad information, it seems like not making stuff up should be a basic requirement for them to operate.

tqi
2 replies
1d15h

I don't see why AI is particularly germane to this case - if a human support rep from a third-party contractor had made the same statement, I assume AC would have tried the same nonsense?

erhaetherth
1 replies
1d15h

They're not liable for the things that come out of their employees' mouths?

Maybe my flight tickets aren't valid either because I foolishly purchased them on aircanada.com and their website is known to have bugs? Or the ticket lady punched the wrong thing into her computer? Or I should have known the plane was going to be overbooked?

tqi
0 replies
1d13h

My read of the article was that Air Canada argued the chatbot was a third-party company, so the third party was liable for the error. Which I assume is the same argument they would have made if chat support had been outsourced to some outside firm. Of course, either way it is an idiotic argument.

lokar
2 replies
3d16h

The penalty for arguing they are not responsible for the “magic chatbot” telling lies to customers should be much more severe.

julianlam
1 replies
3d15h

It would've been if it were a lawsuit, but this was a tribunal hearing and recompense is limited to damages, that is, the amount the claimant was out of pocket.

I'd love to stick it to Air Canada too, but Canada is (hopefully) less litigious than the US.

dannyw
0 replies
3d10h

There should be a little deterrent for "remarkable" cases like this. Say, double damages, capped at $300.

frabjoused
2 replies
1d15h

The header "Company claimed its chatbot ‘was responsible for its own actions’ when giving wrong information about bereavement fare" is such a great example of an article giving misleading information to up sensationalism. Later on it's explained "Air Canada argued that despite the error, the chatbot was a “separate legal entity” and thus was responsible for its actions."

That's a completely different argument and much less alarming.

ggm
0 replies
1d15h

That's a completely different argument and much less alarming.

Actually, it's just as alarming. The entity behind the chatbot might be a separate legal entity, but that doesn't absolve the airline, who outsourced a function bound to their terms and conditions.

If they literally tried to absolve blame by assigning personhood and liability to the bot, that's insanely bad.

erhaetherth
0 replies
1d15h

Those 2 sentences are identical? Unless they meant the company that made the chatbot was a separate legal entity. If the chatbot itself is a "separate legal entity" then that's basically saying the same thing.

CamelCaseName
2 replies
1d17h

The worst case for these airlines is always just a refund for whatever money they took in the first place.

Someone could make a killing making a fake airline, taking payments for flights, and then cancelling every single flight and only refunding the people who fight back hard enough.

joecool1029
1 replies
1d16h

Someone could make a killing making a fake airline, taking payments for flights, and then cancelling every single flight and only refunding the people who fight back hard enough.

They did; it's called Air Canada. I was scammed out of Japan tickets by them during COVID; they have still not refunded me, and they expired the credit for the tickets. I have tried every customer support avenue to no avail. It's a scam airline.

alright2565
0 replies
21h52m

Apologies for the off-topic reply here, but they've just fixed the IPv6 issues with gtlib by removing the AAAA records.

yieldcrv
1 replies
3d14h

Ouch, I can't believe they really tried those arguments.

The chatbot as a separate legal entity? Do they mean like a contractor's service, or did they mean a distinct AI creature?

dlivingston
0 replies
1d17h

Either way, if their argument were upheld, it would be insanely interesting to see the second-order effects of that.

Like, could you spawn up a local LLM, have it take out some loans and transfer you the funds, and then "kill it" (^C), so the loan liability dies with the LLM?

ooboe
1 replies
2d

Air Canada is going to get a lot of publicity over an $812 judgement in a provincial small claims court.

apapapa
0 replies
1d22h

It just did

MintPaw
1 replies
1d16h

What actually happened? It said they could apply within a 90-day refund window, but why didn't it work?

"But when he applied for a refund, Air Canada said bereavement rates did not apply to completed travel and pointed to the bereavement section of the company’s website."

Does this mean they actually took the flight with no issues, but then requested a refund afterwards because it was for a funeral?

Aachen
0 replies
1d16h

I also found it very confusingly written, and as a non-native speaker, I initially took bereavement to refer to a flight delay (imagine my confusion about the claim "you can't apply for this refund on completed travel").

Then looked up the word and my initial association with something negative was correct: it's about death. The only explanation that I think fits all the article's statements is this:

0. AirCan offers a discounted rate if you fly because of someone's death, probably because of the country's size (it's a weird concept to me as a Dutch person!)

1. Person asked the bot how to get this discounted rate, what papers AirCan needed to see

2. Bot said (not shown in article, the only relevant-looking link goes to some stupid news category page): you get your discount afterwards, not before. This seems to be phrased in the article as "refund", but I take it to mean "partial refund to the amount of the discount"

3. Person flies, then applies

4. AirCan now says: you can't apply for this anymore after the travel was completed

JCM9
1 replies
2d

With execs pounding their fist on the table saying “Go get me some of that GenAI stuff!” expect to see a lot more really poorly implemented “AI assistants” and other half-baked AI projects causing blunders for businesses. Eventually the dust will settle and folks will find more attenuated ways to get value out of this tech without creating a big mess. In the meantime get some popcorn ready for the AI-fueled comedy of errors that’s about to play out across many companies.

DonHopkins
0 replies
2d

It's those same execs whom GenAI could replace and do a better job than.

xyst
0 replies
1d17h

Wow, going through all of that for $650. I guess it's better to send a message.

The cost of Air Canada's lawyers well exceeds the judgement here. Why would they bother fighting this?

xutopia
0 replies
1d16h

I'm happy we have this precedent now in Canadian law. I'm wondering how such a thing would play out in the US.

solarpunk
0 replies
1d18h

RAG Agents are gonna cost the companies that run them a lot of money, in more ways than one

solardev
0 replies
2d

What the heck is a bereavement fare and how do you get one? Do most airlines offer that?

Edit: Whoa, apparently several do. In my 30 years of flying, I never knew this! https://travel.usnews.com/features/bereavement-flights#alask...

If an immediate family member passes away, some airlines will give you a discount.

ruined
0 replies
1d18h

rsynnott
0 replies
2d23h

Well, it's hard to see who else could be.

renewiltord
0 replies
1d16h

Interesting that this is just a straightforward error. It's not an LLM or similar, so whatever KB it was drawing from is just broken.

perihelions
0 replies
2d

That airline policy is the kind of micro-optimization that no individual human would make, but that the distributed decision-making of large organizations effects all the time. There's profit to be found in targeting people in distress—like someone who just had a person close to them die—and abusing their distraction to confuse them into paying more money than they need to. Allowing a grace period to claim this "bereavement discount" is a human(e) policy. Making the client claim the discount immediately, or else lose it, is monstrous.

Today, the robot showed more humanity than Air Canada's human leadership. That was an accident; the future could be the opposite of that. You could program machines to be "better" employees than humans, more aligned with organizational goals like "maximize profit at absolutely any cost", or "win wars at absolutely any cost", or "win elections at absolutely any cost". We humans aren't completely aligned with our teams; we have moral scruples that limit us—we can't achieve the 100% "absolutely any cost" part. I think that might suddenly change, and we might find ourselves drowning in an unexpectedly in-human world.

(This was inspired by a writer's observation, I forget whose, about the evils of war being easier when morality is distributed. The commander who orders the atrocity doesn't do it; the soldier who commits the atrocity has no agency in his actions. Both feel reduced culpability, and can go farther in effecting their goals than an individual acting alone.)

nequo
0 replies
1d15h

I don't believe that the traveler mentioned in TFA did this, but using a screenshot as evidence will not be viable in the longer run.

Editing the HTML code of a webpage that is open in the browser is a key step in one of the popular IT support scams that are covered by YouTubers.

But how else to present evidence of chatbot misinformation, I’m not sure.

mrweasel
0 replies
2d

The article doesn't state what kind of chat bot this is. Is it an LLM, or a pre-LLM bot programmed to act on certain keywords?

If it's the latter, I'd assume that Air Canada would be able to go in and check why the bot gave a wrong answer -- most likely outdated policy information, or a misreading by whoever entered the answer for that prompt.

However, if the bot is based on an LLM, then what's the point? It's apparently worse than an old-school bot, in that it cannot be trusted to give correct answers; it's just better at understanding queries.

There was a quote in the article "I'm an Old Fart and AI Makes Me Sad":

  “If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second,” says AI scientist Sam Bowman. “And we just have no idea what any of it means.”
If that's true, then you cannot use these systems for anything where you may need to hold someone responsible for the output.

morkalork
0 replies
1d17h

What are the chances that companies take the easy way out and slap a EULA on the chatbots like "by engaging in conversation with our automated agent you agree to <link to 100 page doc absolving them from anything the bot ever says or does they don't like>"?

labrador
0 replies
1d17h

If I ran their large multi-page website, I could search over it for every occurrence of "bereavement fare" to find out exactly what we are telling people and check it for accuracy. I can't do that with an LLM, afaik. I can't ask it "tell me everything you can say about 'bereavement fare'". LLMs are not inspectable like that. LLMs are tuned and their output filtered, but can this catch all errors?
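
The website half of that audit is a few lines of scripting. A rough sketch, assuming a local mirror of the site (the path is made up):

  from pathlib import Path

  # Hypothetical local mirror of the site; a real audit would crawl the live pages.
  site_root = Path("site_mirror")

  for page in site_root.rglob("*.html"):
      text = page.read_text(errors="ignore")
      if "bereavement fare" in text.lower():
          # Every hit is a concrete page a human can check for accuracy.
          print(page)

There's no equivalent way to enumerate everything an LLM might say about "bereavement fare"; you can only sample its outputs.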

kazinator
0 replies
3d15h

If a vendor communicates multiple different prices for the same thing in different places, such as different areas of their website, or their website versus an e-mail flyer, or any pieces of paper from the vendor, they must give you the lowest price among all of them, and not make excuses like saying you should have checked the other communications and understood that the price was one of the higher ones. This is just common sense.

iainctduncan
0 replies
1d16h

I am Canadian. Air Canada has the worst customer service and treatment by far. This is hardly surprising; they try to weasel out of any responsibility all the time.

This is the same company that just got slapped for making a disabled dude crawl out of the airplane when Air Canada's special chair didn't show up. I shit you not.

htrp
0 replies
3d15h

who was the vendor that supplied the chatbot?

greenavocado
0 replies
1d16h

What happened to the GMC dealer whose GPT chatbot was promising customers free trucks while claiming the contract was legally binding?

gorbypark
0 replies
3d2h

Oh, wow! There was a twitter thread that made the rounds a while back about a guy who jokingly got a car dealership's chatbot to agree to sell him a car for basically free...Living in BC and reading this article has given me a great idea...!

charles_f
0 replies
1d16h

I don't use chatbots. I don't like their non-determinism, and in the little I've tried them, they never had the answer to what I was looking for. I much prefer browsing the website.

bobterryo
0 replies
1d17h

Air Canada - We're not happy until you're not happy.

astrange
0 replies
1d15h

I notice almost all the comments here assume this is an AI chatbot, but the original incident happened in November 2022, which is before ChatGPT was released, so… were there any LM based chatbots then?

apapapa
0 replies
1d22h

I bet it would have gone the other way here in the USA.

angarg12
0 replies
1d17h

We are just getting started with the age of LLM mishaps, and it's just going to get more ridiculous.

I was talking with an ML engineer who told me they'd had a lot of success fine-tuning an LLM on their internal docs. A chatbot could resolve about 70-80% of questions without the need for human intervention.

However, their next big idea was to fine-tune the LLM on the company's financial data so that the finance department could get the information they need without custom queries or tech skills. We are just a few steps away from LLMs feeding hallucinations to decision makers, and from those decision makers acting on bogus data.

TheOtherHobbes
0 replies
1d16h

It's a start, but there should be some kind of general Algorithmic Integrity law.

Hallucinating chatbots, automated YT copyright strikes, Insta accusations of bothood because you clicked too fast, Amazon nuking author accounts because it gets confused, self-driving cars that don't self-drive - and so on. They're all the same problem.

At best these are large corporations automating processes with unacceptably poor reliability. At worst, they're hiding deceptive and manipulative practices behind algorithms so they can claim they're not responsible for damage caused.

TheArcane
0 replies
1d18h

companies with AI chatbots probably:

1. Replace customer service agents with shitty LLM

2. Distance yourself with shitty service by shitty LLM

3. Profit.

OnionBlender
0 replies
1d16h

I wonder if companies will just show a disclaimer first to avoid responsibility.

"This chatbot is provided for entertainment purposes only. If you trust anything it says, you only have yourself to blame. Reading this message waives your right to sue us."

Mistletoe
0 replies
2d

DeepYogurt
0 replies
1d18h

Good

2-718-281-828
0 replies
1d17h

That this is how it should be is obvious to everybody except the management. What I'm wondering is: how do you prove such misconduct as a customer?