1. The shift from natural language understanding (NLU) to natural language generation (NLG) design
In the old world, we focused on the input: which utterances build out the structure of your NLU model. In the AI future, design attention will concentrate on controlling the generated output.
A lot of your models will be built on "few-shot learning" with LLMs. Instead of creating intents with anywhere from 20 to 100 different utterances, you'd create only a few training examples that outline the scope of the intent, then review the LLM's output and submit feedback to the model on its response. Over time, the AI's accuracy will improve.
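To make that concrete, here's a minimal sketch of what a few-shot intent might look like. Everything in it is illustrative: the `call_llm` helper is a hypothetical stand-in for whichever provider SDK you use, and the banking intents are invented.

```python
# A minimal sketch of few-shot intent scoping with an LLM.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your LLM provider's completion API."""
    raise NotImplementedError("Wire this up to your provider of choice.")

FEW_SHOT_PROMPT = """Classify the user's message as one intent: \
check_balance, transfer_funds, or report_fraud.

Message: "How much is in my checking account?" -> check_balance
Message: "Move $50 to my savings, please." -> transfer_funds
Message: "There's a charge here I never made." -> report_fraud

Message: "{message}" ->"""

def classify_intent(message: str) -> str:
    # A handful of examples stands in for the 20-100 utterances
    # a traditional NLU intent would need.
    return call_llm(FEW_SHOT_PROMPT.format(message=message)).strip()
```

When the model misclassifies, you add or adjust an example rather than collecting dozens more utterances. That's the feedback loop in miniature.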
"In the AI future, we’ll see the concentration of design towards controlling the generated output."
How does your process change when the challenge is no longer whether the model understands your inputs, but whether it can generate the right outputs? That shift has already begun. Across industries, we're seeing everyone dip their toes in the water, integrating LLMs into business processes to interpret and, increasingly, generate data. As confidence in the outputs grows, the shift will accelerate.
In this world, NLU will fade into the background (but it won't disappear). You'll be asked to shift your attention away from narrow inputs and toward crafting flexible prompts that can generate the right outputs for a wide array of user inputs. This is often called prompt engineering: the practice of designing and optimizing prompts to achieve specific outcomes and flows in AI-human conversations. But more on that later.
2. The increasing importance of persona design
Let’s face it: LLMs lack personality. They’re trained on vast corpora, billions and sometimes trillions of tokens spanning most of the English language, but their outputs can leave something to be desired. Sure, I’ve asked Claude to summarize a podcast in the style of The New York Times columnist Kevin Roose. But unless the LLM your model relies on has been trained on your brand, it’ll struggle to nail your voice and assistant persona.
"In the future of AI-generated conversations, your assistant's personality will set it apart from a horde of charisma-less counterparts."
I won’t wax poetic on the importance of designing the personality traits, tone, and style of your conversation assistants (plenty of folks have done a better job at that), but in the future of AI-generated conversations, your assistant's personality will set it apart from a horde of charisma-less counterparts.
And as it becomes easier and easier to design assistants, you’ll want to prioritize making yours stand out. It’s the human touch that will make your conversations come alive.
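To sketch the idea: persona often lives in a system prompt that travels with every request. Everything below is invented for illustration, the brand, the assistant's name, and the voice rules alike.

```python
# A minimal sketch: encoding a persona as a system prompt.
# The brand and voice details here are hypothetical.

PERSONA_SYSTEM_PROMPT = """You are Sol, the assistant for Acme Travel.
Voice and tone:
- Warm, direct, lightly playful; never sarcastic.
- Short sentences. Plain words over jargon.
- Acknowledge frustration before offering a fix.
Never: invent policy details, promise refunds, or use emoji."""

messages = [
    {"role": "system", "content": PERSONA_SYSTEM_PROMPT},
    {"role": "user", "content": "My flight got cancelled. Now what?"},
]
# Pass `messages` to your chat-completion API of choice; the persona
# rides along with every turn of the conversation.
```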
3. The management of knowledge (and bias)
Conversation designers and developers alike will increasingly curate and maintain knowledge bases—those large swaths of data that feed into the model. It’s not unlike a call center that records conversations. A manager listens to calls and gives employees feedback. You'll be asked to do the same thing with AI. You’ll input data into the knowledge base—guides, marketing materials, FAQs, product specs—and the outputs will improve as you determine what worked and what didn’t.
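One common pattern that curation feeds into is retrieval-grounded answering: fetch the most relevant curated document, then instruct the model to answer from it. This is a rough sketch under that assumption, with hypothetical `embed` and `call_llm` helpers standing in for real provider APIs.

```python
# A minimal sketch of grounding answers in a curated knowledge base.

from typing import List

def embed(text: str) -> List[float]:
    """Hypothetical: return an embedding vector for the text."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Hypothetical: return the LLM's completion for the prompt."""
    raise NotImplementedError

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

# The knowledge base you curate: guides, FAQs, product specs.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Premium plans include 24/7 phone support.",
]

def answer(question: str) -> str:
    # Retrieve the most relevant curated document...
    q = embed(question)
    best = max(DOCS, key=lambda d: cosine(q, embed(d)))
    # ...and instruct the model to stay grounded in it.
    return call_llm(f"Answer using only this source:\n{best}\n\nQuestion: {question}")
```

The quality of `DOCS` is the quality of the answers, which is exactly why curation becomes a design responsibility.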
Obviously, issues of privacy, bias, and oversight come to mind. Anything you put through a public LLM service like ChatGPT can be used to further train the model. That means if you feed sensitive emails, internal memos, or consumer data into an LLM, the model might use that data to answer other users’ prompts. Designers should be rightly reluctant to put any customer data through an LLM right now; the protections just aren't robust enough (though that is changing over at OpenAI).
On bias and oversight, there's no future in which you can just “set it and forget it.” CxD teams will be expected to handle misinformation and mitigate the risks of LLMs generating false or misleading information. So, if a recruitment bot is filtering through resumes, your human managers need to understand why certain people have been accepted or rejected, and be able to shift those parameters at any time. And as AI models become more robust and interpretable, they can help us understand how they arrive at specific outputs. Just as with any new piece of technology, humans will need to adapt and learn to manage it.
4. The art of prompt chaining
Most importantly, you’ll need to develop skills like prompt chaining, which involves connecting multiple prompts in a sequence, guiding the conversation and LLM-generated responses toward desired outcomes. You’ll need to follow a chain of prompts to work out how you got to an end result. And you’ll need to design processes to create repeatable outcomes.
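Here's a minimal sketch of what that looks like in practice. The `call_llm` helper and the complaint-handling steps are hypothetical, not any particular platform's API.

```python
# A minimal sketch of a three-step prompt chain: each step's output
# feeds the next prompt.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your provider's completion API."""
    raise NotImplementedError

def handle_complaint(complaint: str) -> str:
    # Step 1: distill the customer's core issue.
    issue = call_llm(f"Summarize the customer's core issue in one sentence:\n{complaint}")
    # Step 2: feed step 1's output into the next prompt.
    plan = call_llm(f"Propose one concrete resolution for this issue:\n{issue}")
    # Step 3: both earlier outputs shape the final response.
    return call_llm(f"Write a short, friendly reply.\nIssue: {issue}\nResolution: {plan}")
```

Because each step is explicit, you can inspect the intermediate outputs and see where a chain went wrong, which is precisely the repeatability the job demands.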
It’s no easy task. Everyone playing around with AI right now is wrestling with prompt chaining, in large part because your inputs lead to outputs with little context from the AI on how it arrived at those conclusions.
I suggest getting started by practicing reverse prompt chaining: I tell ChatGPT or Claude the task I'm trying to do, and I get the AI to write the prompt for me. As I receive outputs, I ask the AI to keep suggesting rewrites of the prompt until I achieve my desired results. Once you get the outcome you're hoping for, you understand the sequence of words that produces it.
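That loop is easy to sketch. Everything below is illustrative: `call_llm` is a hypothetical wrapper, and you play the critic at each turn.

```python
# A minimal sketch of the reverse-prompt-chaining loop described above.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your provider's completion API."""
    raise NotImplementedError

task = "Summarize a 30-minute podcast episode in three bullet points."

# Step 1: tell the AI the task and have it write the prompt for you.
prompt = call_llm(f"Write a prompt that would make an LLM do this task well:\n{task}")

while True:
    output = call_llm(prompt)
    print(output)
    feedback = input("What's still wrong? (leave blank if satisfied) ")
    if not feedback:
        break  # You now have a prompt that reliably produces what you want.
    # Step 2: ask the AI to rewrite its own prompt based on your critique.
    prompt = call_llm(
        f"Rewrite this prompt so the output fixes the issue described.\n"
        f"Prompt: {prompt}\nIssue: {feedback}"
    )
```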
The good news is that conversation designers have an advantage here, especially if you come from a copywriting background. You already have a good grasp of language. At its core, prompt chaining is about finding the clearest way to instruct an AI to do something, then trying over and over again until it starts doing the right thing. My advice? Learn by doing.
The phenomenal future is here. Let’s protect it.
Everyone’s excitement (and fear) about the future of AI is completely valid. There are plenty of concerns: job security, societal value, upskilling workers. Each of these is a worthwhile discussion we’ll be having in the coming months and years.
Ultimately, my hope is that we gain regulatory guardrails around this technology: rules intentionally designed to protect consumers and companies without halting innovation. We didn’t have that with social media, and we know that once the cat’s out of the bag, it’s out. The future of AI is interpretability, privacy, and oversight, plus a lot of really thoughtful developers and designers pushing the conversation forward.