Step 1: Invite the support team to share their top linguistic gripes
What do people complain about when they finally reach a live agent? Often, they bemoan how long it took to get there—something support teams hear about endlessly.
“We’re very used to hearing people’s frustrations and it seems to be the same thing over and over,” says Nicole. “I find the number one issue is the user misunderstanding the bot because it used business jargon. The user wasn’t familiar with the language a company uses internally.”
When users misunderstand, they get routed into the wrong flow. It takes time and many taps before they reach the right one, and by then their perception of the bot has already soured, even though the flow itself may be perfectly fine; users simply didn’t understand enough to guide themselves. Enough of these experiences across the industry is why people reflexively seek a live agent: they’ve come to distrust bots entirely.
Step one for addressing these fixable misdirections is breaking down the silo between conversation design and support. Create a forum, perhaps a meeting or even just a Slack or Teams channel, where support teams can report common linguistic issues. Because the fix is usually just an edit to the text, these are quick wins with noticeable results, and they lay a foundation for deeper collaboration.
“This is a serious opportunity for knowledge transfer,” says Simon. “Make contact. Reach out. Introduce yourself and say, ‘Here’s what we do, and how it relates to what you do.’ Set a meeting to compare notes, customer pain points and team pain points, and get to know each other.”
Here’s Nicole and Simon’s framework:
Initial knowledge transfer
- Compile customer pain points from each team
- Identify common jargon/intents that cause confusion in customers' mental models
- Review representative transcripts and customer calls (a rough prioritization sketch follows this list)
- Create an initial audit based on extended customer journeys
Ongoing
- Regular cadence to review transcripts and customer calls
- Solution discussion and design review for new projects
- Heartbeat on what agents are currently seeing/hearing
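If the team wants a starting point for that transcript review, one lightweight option is to count how often support-flagged terms appear in conversations that ended in an escalation to a live agent, then review the worst offenders first. This is purely an illustrative sketch, not anything Nicole or Simon describe; the file format, field names, and term list below are assumptions you would adapt to your own transcript export.

```python
import json
from collections import Counter

# Candidate internal terms the support team has flagged (hypothetical list).
JARGON_TERMS = ["pending funds", "held funds", "resolution center", "limitation"]

def jargon_escalation_counts(transcript_path: str) -> Counter:
    """Count how often each flagged term appears in transcripts that escalated.

    Assumes a JSON-lines file where each line has a "text" field (the full
    conversation) and an "escalated" boolean; adjust to your own export format.
    """
    counts = Counter()
    with open(transcript_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if not record.get("escalated"):
                continue
            text = record.get("text", "").lower()
            for term in JARGON_TERMS:
                if term in text:
                    counts[term] += 1
    return counts

if __name__ == "__main__":
    for term, n in jargon_escalation_counts("transcripts.jsonl").most_common():
        print(f"{term}: {n} escalated conversations")
```

The point isn’t the script itself; it’s giving the working group a shared, concrete list to react to in those review sessions.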
Step 2: Show how support is really a portal into unfiltered user feedback
Agents sometimes catch the worst of the chatbot overflow. But by the same token, they are also a lightning rod for extremely candid feedback that would never come through in a structured survey delivered after the fact.
“Nothing’s as immediate and accurate as an ‘Oh my god, please help,’” says Nicole. It’s very different from what’s reported in surveys, where users simply pick a rating from one to five.
If the CXD team recognizes that these support conversations are a clearer window into what users are feeling in the moment, that initial sharing can evolve into much more. “Consider getting on a regular cadence where you review transcripts together,” says Nicole. “At PayPal, we have a single teammate champion who collects all that feedback and relays it. But you could just as easily make it a rotating responsibility.”
Whoever that person is, CXD should pull them into planning conversations. This effectively draws all that support wisdom into the bot’s design.
“There’s a huge benefit to having Nicole, who we can pull into discussions, so it’s not just a retroactive feedback loop. It’s pulling customer support experience into the design process,” says Simon. “This helps us preempt problems and keep the heartbeat of the customer at the center of the experience. And when customer support figures out a really good way to resolve a problem, so does the bot.”
Step 3: Pick a targeted challenge to address as a working group
Together, conversation designers and customer support agents can tackle a big, common issue like jargon and eliminate it wherever it shows up in customer interactions. That allows them to re-envision how the company communicates and make it friendlier to interact with.
“Jargon is a huge problem for every company of a certain size,” says Nicole. “Take a PayPal seller. Say they’re contacting us to access money they received in a sale and are asked to choose between options like ‘pending funds’ and ‘held funds.’ These sound similar, but they are completely different options, with different reasons, time frames, and steps. Mind you, the designer entered these choices correctly. They followed their instructions and that’s what they’re really called. But it can cause mix-ups, and if those mix-ups degrade the user experience, that’s something we can tweak together.”
And as both Nicole and Simon point out, agent feedback highlights shortcomings in the design that wouldn't otherwise be apparent until after release.
“The reality is that as conversation designers, in user testing we do a lot of controlling for a variable and testing the change, but it doesn’t always capture the emotional charge and the time constraints real users are under,” says Simon. This collaboration gives designers access to all those real, unfiltered, emotional insights.
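To make that kind of jargon hunt concrete, a working group could maintain a shared glossary that maps flagged internal terms to the plainer wording agents actually use with customers, then check new bot copy against it before release. The sketch below is only a hedged illustration, not PayPal’s tooling; the glossary entries, directory name, and file format are all hypothetical.

```python
import pathlib

# Shared glossary: flagged internal term -> plainer phrasing suggested by support (hypothetical).
GLOSSARY = {
    "held funds": "money we're holding while we review the sale",
    "pending funds": "money that's still on its way to your balance",
}

def audit_copy(copy_dir: str) -> list[tuple[str, str, str]]:
    """Flag every occurrence of glossary jargon in the bot copy files under copy_dir."""
    findings = []
    for path in pathlib.Path(copy_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        for term, plain in GLOSSARY.items():
            if term in text:
                findings.append((path.name, term, plain))
    return findings

if __name__ == "__main__":
    for filename, term, plain in audit_copy("bot_copy"):
        print(f"{filename}: replace '{term}' with something like '{plain}'")
```

Even a crude check like this gives designers and agents a shared artifact to react to, which is most of the value.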
Curious about what support can teach you? Watch Nicole and Simon’s full talk on how they’ve structured their work together.
Header image by Hossein Nasr