Financial institutions are increasingly turning to artificial intelligence (AI) solutions such as ChatGPT and AI-driven customer service assistants such as Bank of America's Erica to give customers faster answers to their questions and help them make appropriate investment decisions. However, the Consumer Financial Protection Bureau (CFPB) has raised numerous concerns about the accuracy of these tools and the potential for incorrect advice that could leave consumers unprotected.
In its 2023 report on AI-based customer service chatbots, the CFPB found that chatbot software may not be adequately equipped to handle questions involving consumer protection laws, whether because of algorithmic bias or glitches in programming. For instance, a chatbot may give incorrect advice when a customer asks about disputing a transaction or about their rights with regard to a debt. It may also fail to recognize nuances in language, such as slang terms or regional dialects, and misunderstand customers or give inadequate answers. This is a particular problem for older individuals and those whose primary language is not English, who can end up in a "doom loop": unable to communicate effectively with the chatbot, and unable to reach a human agent through it.
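One straightforward way to break that loop is to count consecutive misunderstood turns and hand off to a human once a threshold is reached, rather than repeating the same fallback prompt forever. The sketch below illustrates the idea; it is a minimal assumption-laden example, not any vendor's actual implementation, and `classify_intent` here is just a keyword stub standing in for a real natural-language-understanding model.

```python
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.75  # below this, treat the turn as a miss
MAX_FAILED_TURNS = 2         # escalate after this many misses in a row


def classify_intent(message: str) -> Tuple[str, float]:
    """Stand-in for a real NLU model: returns (intent, confidence)."""
    known = {"dispute": "dispute_transaction", "balance": "check_balance"}
    for keyword, intent in known.items():
        if keyword in message.lower():
            return intent, 0.9
    return "unknown", 0.1  # slang or non-English input often lands here


def handle_message(session: dict, message: str) -> str:
    """Route one customer turn, escalating to a human after repeated misses."""
    intent, confidence = classify_intent(message)

    if confidence < CONFIDENCE_THRESHOLD:
        session["failed_turns"] = session.get("failed_turns", 0) + 1
        if session["failed_turns"] >= MAX_FAILED_TURNS:
            # Break the loop: stop re-prompting and route to a person.
            return "Connecting you with a live agent now."
        return "Sorry, I didn't catch that. Could you rephrase?"

    session["failed_turns"] = 0  # a successful turn resets the counter
    return f"Handling intent: {intent}"


# Two misunderstood messages in a row trigger the human handoff.
session: dict = {}
print(handle_message(session, "que onda con mi cuenta"))  # miss 1
print(handle_message(session, "no entiendo este bot"))    # miss 2 -> handoff
```

The key design choice is that the counter resets only on a successful turn, so a customer who is consistently misunderstood always reaches a person within a bounded number of exchanges.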
The report points out that even major technology companies such as Microsoft and Google have struggled to keep their AI-driven customer service bots accurate and unbiased when providing information. In addition, the CFPB itself has received complaints from customers who say chatbots gave them wrong information because of biased or faulty programming.
Given these serious problems, financial institutions should exercise extra caution before deploying AI solutions for customer service until these issues are resolved. Banks should ensure accuracy and limit bias as much as possible, so that customers can trust that their rights are protected even when they are dealing with automated systems rather than humans.
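One way a bank might operationalize that caution is to restrict the bot to compliance-reviewed answers on rights-sensitive topics and defer everything else to a human instead of improvising. The sketch below is purely illustrative: `VETTED_ANSWERS`, its contents, and the intent names are assumptions introduced for this example, not part of any regulator's guidance or a real product's API.

```python
# Guardrail sketch: answer rights-sensitive topics only from a
# compliance-reviewed answer set; defer anything unmatched to a human
# rather than generating legal or financial advice on the fly.

VETTED_ANSWERS: dict[str, str] = {
    "dispute_transaction": (
        "You have the right to dispute this charge. I can open a dispute "
        "for you, or connect you with a specialist."
    ),
    "debt_rights": (
        "Questions about your rights on a debt are best handled by a "
        "trained agent. Connecting you now."
    ),
}


def answer(intent: str) -> str:
    """Return a vetted answer, or defer to a human for unreviewed topics."""
    return VETTED_ANSWERS.get(
        intent,
        "I don't want to guess on that. Let me connect you with an agent.",
    )


print(answer("dispute_transaction"))  # vetted, pre-reviewed response
print(answer("overdraft_law"))        # unreviewed topic -> human handoff
```

The point of the pattern is that the failure mode becomes a handoff rather than a wrong answer: the bot can never state something about a customer's rights that a compliance team has not already reviewed.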