A Robot May Not Harm Humanity, But a Poorly Trained “Rogue” Chatbot May Give You Legal or Business Headaches

Here are the “Three Laws of Robotics,” as stated by legendary science fiction writer Isaac Asimov in his story collection “I, Robot”:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


But Asimov didn’t say anything about a robot buying drugs on the darknet and shipping them to its owners in Zurich (as reported by VentureBeat.com). Or about one posing as a lawyer in London and New York and helping people overturn $4 million in parking fines (as reported by Newsweek®).


While chatbots are the new thing in artificial intelligence, or AI, the concept isn’t that new. We’ve been chatting with computers by phone and online for some time. But the application and advanced functionality of these chatbots are taking off, with many business-to-consumer uses, such as customer service, interactive FAQs, weather forecasts, driving directions and taking your pizza order. These are basic applications, but just as early text-based websites with static images gave way to the intelligent, cookie-driven, multimedia experiences they are today, chatbots will get better and better. They are already manifesting as the humanoid robots of science fiction, both virtually online and physically in the real world.


Toys for Bots


Development of chatbot functionality is taking off—big time. VentureBeat.com quoted David Marcus, Facebook® VP of messaging products, as saying that more than 11,000 bots have been added to Facebook Messenger and that tens of thousands of developers are working on them.


Matt Schlicht, founder of Chatbots Magazine, wrote in a recent article, “Not only do I believe that bots will dethrone websites and mobile apps, I actually believe that bots may replace websites and mobile apps altogether.”


His reasons? First, he says, every business is going to have a bot. He noted that use of messaging apps, like Facebook Messenger, which has 900 million users, has exploded. As that grows, businesses will need a way to engage quickly with large volumes of people. Taking the call center approach will be too costly, he said, so the alternative is to have people chat with bots. Once bots improve and can understand and answer more and more questions, people will actually prefer them, he predicts. People will enjoy the speedy answers they get and, since they will be using already familiar text messaging platforms, they won’t have to get reoriented to different websites and app designs.


“In the future,” Schlicht wrote, “maybe 5 or 10 years from now, bots will be able to understand you completely. Not like Siri®, or anything else you’ve ever used, I mean they will absolutely completely understand what you are saying. Not like a person, but infinitely better.”


Potential Legal Issues


London-based attorney Emily Dorotheou, a media, communications and technology associate at Olswang, a global law firm specializing in technology, media and telecommunications, listed some of the potential legal issues in an article for ADTEKR, which she also posted on LinkedIn®. Many of the legal risks are the same as those presented by websites. For example, chatbots will need to be able to tell whether a user is required to accept terms and conditions. “This is especially important if chatbots are being used to facilitate online transactions or provide any type of advice,” she wrote. The same is true for disclaimers, particularly for highly regulated industries, like financial services, and professional services like medicine and law.
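
To make the point concrete, here is one way a developer might enforce such a gate in the conversation flow itself. It is a minimal sketch; every name in it is invented for illustration rather than drawn from any real product, and it simply refuses to complete a transaction until the user has accepted the current version of the terms.

# A minimal sketch of a terms-and-conditions gate for a chatbot.
# All names here are hypothetical, invented for illustration.

class TermsGate:
    """Tracks which users have accepted the current version of the terms."""

    def __init__(self, current_version):
        self.current_version = current_version
        self._accepted = {}  # user_id -> version of the terms accepted

    def has_accepted(self, user_id):
        return self._accepted.get(user_id) == self.current_version

    def record_acceptance(self, user_id):
        self._accepted[user_id] = self.current_version


def handle_transaction(gate, user_id, request):
    # Refuse to facilitate the transaction until the terms are accepted.
    if not gate.has_accepted(user_id):
        return ("Before I can help with that, please review and accept "
                "our terms and conditions. Reply ACCEPT to continue.")
    return "Processing your request: " + request


gate = TermsGate(current_version="2016-06")
print(handle_transaction(gate, "user-42", "order a pizza"))  # asks for terms
gate.record_acceptance("user-42")
print(handle_transaction(gate, "user-42", "order a pizza"))  # proceeds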


“If chatbots are advising on user health issues, financial decisions or legal issues,” Dorotheou wrote, “it will be very important that the advice is informed, correct and highlights all associated risks. In the event that they are unable to answer, a clear disclaimer and potential human intervention trigger will need to be considered. Privacy policies also will need to be made clear.”
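
One simple way to implement the human intervention trigger Dorotheou describes is a confidence threshold: if the bot is not confident enough in an answer, it declines, attaches a disclaimer and hands the question to a person. The sketch below assumes a hypothetical confidence score and ticketing hook; the cutoff and names are illustrative only.

# A minimal sketch of a confidence-based "human intervention trigger."
# The threshold, names and ticketing hook are all assumed for illustration.

DISCLAIMER = ("This is general information, not professional advice. "
              "Please consult a qualified advisor about your situation.")

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; tuned per deployment


def answer_or_escalate(question, answer, confidence, escalate):
    """Answer with a disclaimer attached, or hand off to a human."""
    if confidence < CONFIDENCE_THRESHOLD:
        escalate(question)  # e.g., open a ticket for a human agent
        return ("I'm not certain enough to answer that reliably, so "
                "I've passed your question to a human colleague.")
    return answer + "\n\n" + DISCLAIMER


def open_ticket(question):
    print("[ticket] human review requested for: " + question)


print(answer_or_escalate("Can I deduct this?", "Probably.", 0.4, open_ticket))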


Chatbots Gone Wild


What is different with chatbots versus websites, though, is the potential for them to “go rogue.”


“Companies should be cautious about potential detrimental, abusive and incorrect responses that a chatbot may give and bear in mind the effect a chatbot can have on a company’s image and profile,” Dorotheou wrote.


“For example, there have been a number of recent chatbot errors that have caused embarrassment to companies and brands, given inappropriate answers or recommended competitor products,” she said, “all of which erode the potential benefit that chatbots can bring to companies.”


What You Can Do


“In order to minimize the effect of rogue chatbots, companies should quickly react to any complaints made by the public about their dealings with the chatbot; time moves very quickly online and companies should be quick to react to avoid a social media PR storm,” Dorotheou warned.


“If companies have enough resources, it’s recommended that they review a sample of chatbot conversations at random times throughout the year to ensure that the interactions are consistent with their brand ethos,” she wrote.
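
That kind of spot check is easy to automate. The sketch below assumes a hypothetical log kept as a list of conversation records and draws a uniform random sample for human reviewers.

# A minimal sketch of randomly sampling chatbot conversations for review.
# The log format (a list of conversation records) is assumed for illustration.

import random


def sample_conversations(log, sample_size, seed=None):
    """Draw a uniform random sample of conversations for manual review."""
    rng = random.Random(seed)
    return rng.sample(log, min(sample_size, len(log)))


conversation_log = [
    {"id": i, "transcript": "...conversation %d..." % i} for i in range(1000)
]

for convo in sample_conversations(conversation_log, sample_size=5, seed=7):
    # A human reviewer would read each transcript and flag responses
    # that are off-brand, abusive or incorrect.
    print(convo["id"], convo["transcript"])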


“If a chatbot does go rogue, companies and developers need to quickly determine whether the chatbot can be corrected behind the scenes or whether the chatbot needs to be removed from the platform to be amended,” wrote Dorotheou. “Companies should therefore incorporate the risks of using chatbots into their risk and crisis management planning.”
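
In practice, that argues for building a remote kill switch into the bot from day one, so a rogue chatbot can be taken offline instantly while it is amended. The sketch below is hypothetical, with all names invented for illustration.

# A minimal sketch of a remote "kill switch" for a misbehaving chatbot.
# All names are invented for illustration.

class BotController:
    def __init__(self):
        self.enabled = True  # flipped off by operations staff in a crisis

    def respond(self, message):
        if not self.enabled:
            # Fail safe: apologize and route to humans while fixes land.
            return ("Our assistant is temporarily offline. "
                    "A human agent will follow up with you shortly.")
        return generate_reply(message)


def generate_reply(message):
    return "(bot reply to: " + message + ")"  # stand-in for the real model


controller = BotController()
print(controller.respond("Hi!"))        # normal operation
controller.enabled = False              # the bot goes rogue: disable it
print(controller.respond("Hi again!"))  # safe fallback while it is amended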


This article was edited for LexisNexis by Tom Hagy, managing director of HB Litigation Conferences and former publisher of Mealey’s® Litigation Reports.