At this very moment we're at an inflection point in the customer service industry. ChatGPT and its Google, Meta, and Amazon.com-backed competitors are the technical breakthrough we've been waiting for. At long last, we have the potential to break free of our contact center and BPO legacy and drive true cost efficiency through automation. In fact, I'm on record as saying that, by the end of the decade, 90 percent of customer service inquiries will be automated.
However, this will only happen if we proceed very thoughtfully over the coming months. If we move recklessly, we're in danger of suffering self-inflicted damage that will set our industry back for years. Today I want to dive deep into what those reckless outcomes could look like, and how we should be vetting these new technologies to ensure those outcomes are extremely unlikely.
The first major risk of implementing generative artificial intelligence in customer-facing situations relates to brand and public perception. Generative AI technologies, by definition, create answers to questions on their own. Because of the overwhelming amount of data they're pulling from, sometimes those answers can be incorrect, nonsensical or downright diabolical. If your ChatGPT-powered chatbot doesn't have the right safeguards in place, it could potentially say something offensive or threatening to one of your customers. (Imagine if your bot spouted hate speech or encouraged one of your customers to leave their partner.) These are consumer-facing, highly public tools, making them particularly prone to creating PR nightmares for your company.
The second major risk pertains to security — both digital and physical. There have already been cases where employees accidentally shared sensitive company data with ChatGPT, exposing their companies to major privacy and security threats. Using uninhibited large language model (LLM) chatbots in customer-facing situations also opens up a very real risk that sensitive customer data will be exposed. ChatGPT can threaten physical safety as well, including by encouraging self-harm.
Choosing the Right Technology Partner
Customer service leaders find themselves in a challenging position when it comes to implementing generative AI within their organizations. As I mentioned earlier, these are truly breakthrough technologies. Implementing them has become an imperative and even a top-down mandate. Chatbot technology vendors, understanding the pressure their buyers are under, are rushing to release LLM-enabled chatbots, often without first considering the requisite safety protocols.
Make no mistake: Over the next 12 months, we will hear of major PR and security disasters because a brand rushed to leverage ChatGPT capabilities without the proper safeguards. Chances are, it will be because the customer service organization partnered with a vendor that chose speed and hype over safety.
As a customer service leader and buyer of these technologies, there are two things you must watch out for in your discussions with chatbot vendors. First, be wary of any vendor that attempts to “black box” its technology — that is, eagerly displaying the positive results while withholding information about how it actually works. Can they properly describe the safeguards they've put in place to prevent hallucinations and off-brand conversations? Can they show you how they direct the LLMs to the right sets of data? Can they walk you through their security processes with a fine-tooth comb? Ultimately, it's incumbent on the buyer to leave no stone unturned when vetting the inner workings of the technologies themselves.
At the same time, you must vet the company. With technology executives and their investors sensing a new gold rush, we're going to see new entrants into the “AI-enabled chatbot” space seemingly every day. But regardless of technological innovation, there's one consistent truth about customer service: expertise matters. So as a buyer you need to look at the company's leadership and their backgrounds. You need to understand how much expertise the company has both within customer service and within your specific industry. You should look at the organizational investments they've made. Do they have a substantive in-house AI team to act as your partner and counselor? How much effort are they willing to put forth to help you develop the right safeguards and security protocols for your company and your customers? The upfront time spent understanding who you're potentially partnering with will make or break the success of your AI chatbot implementation.
Speaking of make or break, the decision of which LLM chatbot to partner with this year may be the most significant technology purchase decision of a customer service leader's career. Choosing the right partner will set you up for success arguably through the remainder of the decade — and if you're able to build your organization around these technologies and understand how they're changing consumer behavior, perhaps even longer. Alternatively, choosing the wrong partner could be disastrous. While your CEO isn't responsible for the technology purchase decision, he or she will be responsible for any major PR or security crisis that results from a poor or reckless vendor choice. Choose wisely.
Daniel Rodriguez is CMO at Simplr, the disruptor to the outdated contact center BPO model. The company offers a fully managed service that connects a chatbot, human agents, and an AI-powered platform to deliver better and more cost-efficient CX than legacy BPOs.