Welcome to my first blog post of all time, right here for your consumption. With all the noise around Generative AI and Large Language Models (LLMs), I appreciate the attention on my small corner of the realm of intelligent customer experiences and automation. If you’re like me, the noise has become too much, but at the same time, do you want to be left behind on the adoption curve and lose market share to your competitors?
Do you see what I did there? That’s called Fear Of Missing Out (FOMO), and people use it to get you to spend money for peace of mind. There’s a lot of that going around. I’m not here to say it’s all hype, because it isn’t. There is tremendous value and opportunity, particularly in rapidly generating new intellectual property. I am here, however, to tell you that if you haven’t already begun the journey of understanding and reacting to your customers through your data, that journey will likely unlock far more immediate and long-term value than implementing the latest and greatest shiny things. This is a brief tale about trust and adoption.
Just over a year ago, ChatGPT captured a significant portion of the world’s mindshare and acquired too much trust far too easily, in my opinion. I spoke with many people who felt the bot “understood” their question, or a given domain of knowledge, because it provided coherent responses when prompted. How many of us can say that we’ve built bots that people trusted by default to the same extent? Yes, I can hear the crickets as I type this.
I’ve spent the past few years understanding customers and designing, implementing, and optimizing voice and messaging interactions and other automations to provide meaningful impact. I’ve done my fair share of experimenting to improve use-case adoption rates and reduce agent escalations, but I was puzzled by the instant trust so many people extended to ChatGPT. Why did that happen, even though it can be completely wrong?
My experience jumping into automating interactions and bots was interesting, but I’ll save that story for another day. The relevant piece for this post is that I did not know what I needed to know to succeed, so I studied quite a few books outside my usual technical reading, including books on conversation analysis and interactional sociolinguistics. Through one book in particular, “Conversational Repair and Human Understanding” [1], I learned about intersubjectivity and about an interesting experiment that examined how people behave when one party pretends there is no shared sense of meaning and common sense between them.
In one documented case of this experiment shared in the book, the tester, a spouse, acted as if no shared understanding existed between them and the subject, their partner, seeking to disambiguate any response that could be unclear, regardless of common-sense meanings and norms. After a series of statements from the unaware subject, each followed by ridiculous repair prompts from the tester to establish clear meaning, the experiment ended with the subject telling the other person to drop dead! Have you ever felt that way speaking to a chatbot? Yeah, maybe not to that level, but I would bet most of us have had moments where we felt the system was either “dumb” or “should have known more about my account or service.”
ChatGPT offers the façade of intersubjectivity across a wide range of contexts. You can talk to it about penguins or space exploration. It might not be telling you correct facts, but it mimics the expected conversational patterns and uses the right words. You have to already know the facts to spot the problems in its statements; if you don’t understand how LLMs work, it’s easy to take what it says at face value.
Still, it’s a worse experience when a bot speaks with absolute confidence after completely misunderstanding you. That’s the problem we’ve been contending with for a long time with NLU-based bots. You see, NLUs have their own way of “hallucinating”: misclassifying your intent. That misclassified intent might even carry a high confidence score and trigger your carefully designed copy in response to something your customer didn’t ask about. Trust is gone. “Silly” bot. Let me speak to a person now, please.
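To make that failure mode concrete, here’s a minimal, hypothetical sketch in Python. The intent labels, scores, responses, and confidence threshold are all illustrative assumptions, not any particular NLU vendor’s API; the point is simply that the top intent can be confidently wrong, and the scripted copy fires anyway.

```python
# Hypothetical example of an NLU result that is confident and wrong.
# Intent labels, scores, responses, and the threshold are illustrative.

RESPONSES = {
    "cancel_service": "I can help you cancel. Are you sure you want to leave?",
    "report_outage": "Let's troubleshoot your connection.",
}

def route(intent_scores: dict[str, float], threshold: float = 0.8) -> str:
    """Return scripted copy for the top intent if the model is 'confident'."""
    top_intent = max(intent_scores, key=intent_scores.get)
    if intent_scores[top_intent] >= threshold:
        return RESPONSES[top_intent]  # fires even when the intent is wrong
    return "Sorry, I didn't catch that. Could you rephrase?"

# The customer asked about a billing credit, phrased in a way the model
# never saw; it lands on "cancel_service" with high confidence anyway.
print(route({"cancel_service": 0.91, "report_outage": 0.06}))
```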
Back in 2020, I put my new-found knowledge of “how annoyed people get when you don’t know things they think you should know” to use by aggregating as much information as I could about product quality, configuration, and account states to predict reasons for calling. And why not? If we knew a customer was suspended, did we really need to ask why they were calling? If we could see, from known and trusted KPIs, that we had delivered poor internet quality, did we need to ask what was going on when they contacted technical support? I’m sure you can also easily see how this leads to contact prevention.
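As a minimal sketch of what such suggestion rules could look like, consider the following. The account fields, KPI names, thresholds, and contact drivers below are hypothetical placeholders, not the actual rules I built; the shape of the idea is what matters.

```python
# A sketch of reason-for-contact suggestion rules. All fields, KPI names,
# thresholds, and driver labels here are hypothetical examples.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountState:
    suspended: bool
    packet_loss_pct: float      # a trusted network-quality KPI
    open_outage_ticket: bool

def suggest_contact_driver(state: AccountState) -> Optional[str]:
    """Return the most likely reason for contact, or None to ask openly."""
    if state.suspended:
        return "restore_suspended_service"
    if state.open_outage_ticket:
        return "outage_status"
    if state.packet_loss_pct > 5.0:  # illustrative threshold
        return "poor_internet_quality"
    return None                      # no strong signal: ask the customer

state = AccountState(suspended=False, packet_loss_pct=9.2,
                     open_outage_ticket=False)
driver = suggest_contact_driver(state)
if driver:
    print(f"Are you contacting us about: {driver.replace('_', ' ')}?")
```

The design choice worth noting is that the bot leads with what it already knows and asks the customer to confirm, rather than opening with a blank “How can I help you?”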
While I cannot share the specific statistics from my former employer’s bot, I have previously shared publicly, with approval, that these reason-for-contact suggestion rules cut the rate of people asking to speak to an agent by 50%. It wasn’t just a decrease in escalations for that turn, but also an increase in adoption of the self-service use cases that follow once someone confirms the suggested contact driver was correct. What happened there? I believe showing more history and understanding of the customer’s journey upfront established more trust and adoption.
The even better outcome behind the scenes is that by understanding your customers’ activities and events across your products and services, you’ll likely find more, and sometimes even effortless, ways to improve product and process quality without the heavy lifting of building a bot or some sophisticated automation.
Ultimately, when it comes to trust, it doesn’t matter whether you’re rolling out a bot based on NLU or LLM technology. Your business and your automations must be accurately informed and guided by your customer journeys to improve the experience and avoid the erosion of trust and adoption over time. What good is automation that people can’t trust? I have much more to say on NLUs versus LLMs for bot building, but more on that later.
So, where do we begin this journey? Well, I hope this new blog, which I’m now kicking off, will help you give people more time back in their lives by sharpening your ability to craft and optimize intelligent, interactive customer experiences. Come along for the read if you like!
References
[1] M. Hayashi, G. Raymond, and J. Sidnell, Eds., Conversational Repair and Human Understanding. Cambridge: Cambridge University Press, 2013.


