ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees
That is a tiny fraction of a rounding error for a company that size. And it doesn’t come anywhere near being just compensation for the stress and loss of time it likely caused.
There should be some kind of general punitive “you tried to screw over a customer or the general public” fee defined as a fraction of the company’s revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.
It’s a tiny amount, but it sets an important precedent. Not only Air Canada, but every company in Canada is now going to have to follow that precedent. It means that if a chatbot in Canada says something, the presumption is that the chatbot is speaking for the company.
It would have been a disaster to have any other ruling. It would have meant that the chatbot was now an accountability sink. No matter what the chatbot said, it would have been the chatbot’s fault. With this ruling, it’s the other way around. People can assume that the chatbot speaks for the company (the same way they would with a human rep) and sue the company for damages if they’re misled by the chatbot. That’s excellent for users, and also excellent to slow down chatbot adoption, because the company is now on the hook for its hallucinations, not the end-user.
Definitely agree; there should have been some punitive damages for making them go through that while they were mourning.