The teams we’re working with, both large and small, face some form of these three challenges as they grow their practice:
- Siloed tools: the data sets that power your assistant live in disconnected tools
- Manual workflows: manually handing data off at each stage of the workflow requires a significant time investment
- Lack of insight: usage analytics lack context, so it’s hard to act on them
This is why we’re focused on creating a connected workflow. Here’s what we mean by a connected workflow:
- Having synchronization between the tools you’re using to design, test, and prototype, and the tools you’re using to launch your systems into production
- Reducing the number of tools your end-to-end workflow depends on
- Reducing the manual effort of bringing designs to production, so you can reinvest your time into expanding feature sets and homing in on the KPIs that your assistants are built to address
Ultimately, we want to provide a single source of truth for your end-to-end assistant experience.
We recognize that different conversational AI teams are at different levels of maturity in their practice, but over the last quarter we’ve built features that will help you take the next step toward a fully integrated workflow for your CAI team, no matter where you are today.
Collaborate faster with LLMs
Without a doubt, the introduction of large language models (LLMs) is the biggest thing that’s happened to the conversation design space over the last year. It’s top of mind for every conversation designer we’re working with, which means it’s top of mind for us, too.
We’re focusing on two main areas with LLMs:
Speeding up your design workflow
Tasks like utterance generation, entity generation, and re-prompts are necessary parts of conversation design, but they take time away from the core work of building out your happy path, error prompts, and the flows that actually become your system. To speed up the design process by 2x and get to user testing faster, we’ve already rolled out a series of new features. More are coming, but check out what we’ve launched so far:
Enabling experimentation with LLMs directly in production
Our Generate Step feature allows you to send a prompt directly to GPT-3 and populate the result in your flow. It’s up to you whether to keep it fully dynamic or use it purely as a way to generate content. Our CEO Braden walks through this exciting feature with his Sassy Starbucks assistant:
There’s a ton of opportunity here, and we love seeing our community using Voiceflow as a sandbox to experiment with this technology.
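Under the hood, the request a Generate Step makes is conceptually just a prompt sent to a GPT-3 completion endpoint. As a rough, illustrative sketch (calling OpenAI’s Python client directly rather than anything Voiceflow-specific, with a made-up prompt and function name):

```python
# Illustrative only: a plain GPT-3 completion call of the kind a Generate Step
# conceptually issues. This is not Voiceflow's implementation.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # assumption: you supply your own key

def generate_step(prompt: str) -> str:
    """Send a prompt to GPT-3 and return the generated text for a flow step."""
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 family model
        prompt=prompt,
        max_tokens=120,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# Example: a playful line for a "Sassy Starbucks"-style assistant.
print(generate_step("Respond sarcastically to a customer ordering a plain black coffee."))
```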
Achieve a single source of truth for design
Last year we introduced more ways to design projects that scale, with hierarchy features like Domains, Topics, and Components, and just last week we released Sub-Topics. These features are particularly useful if you’ve broken a single assistant out into multiple project files and applied Domains, Topics, or any other type of hierarchy.
This lets you start consolidating everything into a unified design. Consolidation brings real benefits: a unified component system, a shared block template library, and a consistent NLU that you can manage in a single spot and apply across your entire experience.
When you chunk your assistant out across separate files, you lose the high-level view of all the data and content that lives inside it. There’s also a lot of manual maintenance needed to keep things consistent across those files, especially if you’re applying shared components in multiple places. And when everything is siloed, you can’t take advantage of advanced navigation features like Actions. Curious where to start when it comes to unified design? Chris walks through a sample Grocery Assistant:
We built these advanced organization features so you can design experiences that scale. But, of course, the partner piece to this is workspace management: managing your workflow as well as your teams.
Knowing that many of you work cross-functionally with diverse blends of stakeholders, we have some upcoming enhancements that’ll help you coordinate workflow access and navigation:
- Assistant Overview
- Domain Statuses
- Topic Statuses
- Assistant-Level Permissions
- Organization Settings
All of these are coming soon. We can’t wait for you to try them out; watch the sneak peek below to get an early tease!
Integrating Voiceflow into your existing CAI workflow
We’ve heard it time and time again: “I’m building my design in Voiceflow, all my data is in there… but how do I do more with it? How do I integrate it with the existing technology stack I’m using?”
We hear you. Everyone has gone through the exercise of signing up for Voiceflow and having to start with a blank canvas. With everything we’re building over the coming months, our goal is to get you started in Voiceflow with the assistant you’ve already built in your existing tool stack (e.g., Dialogflow CX, Rasa, IBM Watson). We want you to get immediate value out of Voiceflow and start iterating rather than rebuilding.
Data imports
Introducing data imports: go from a blank canvas to a source of truth for your existing assistant in a matter of moments. Right now, you can already pull in your NLU data, but we want to go one step further and let you pull in your response and flow content as well, so it all lives within a specific assistant on the Voiceflow canvas. This is going to give your team a lot more workflow efficiency, and you can say goodbye to the redundancy of copying and pasting.
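To make the idea concrete, here is a minimal sketch of the consolidation step an NLU import performs, assuming a simple hypothetical export of intent/utterance pairs in a CSV; the file layout and field names are illustrative, not Voiceflow’s actual import schema:

```python
# Hypothetical sketch: group (intent, utterance) pairs from a CSV export of an
# existing platform into one consolidated NLU payload. Field names are made up.
import csv
import json
from collections import defaultdict

def load_nlu(csv_path: str) -> dict:
    """Build a {intent: [utterances]} payload from rows of 'intent,utterance'."""
    grouped = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'intent' and 'utterance' columns
            grouped[row["intent"]].append(row["utterance"])
    return {"intents": [{"name": name, "utterances": utts} for name, utts in grouped.items()]}

# One consolidated payload to hand to an importer instead of copying and pasting.
print(json.dumps(load_nlu("existing_assistant_nlu.csv"), indent=2))
```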
Data exports
Let’s say you now have your assistant inside Voiceflow, you’re making updates and iterations to it, and now you want to get it back into your production platform. What we’ve seen a number of customers do today is build a connector that takes all of the assistant data you’re building in Voiceflow and converts it into a format compatible with their third-party tooling. Over the coming months we’re introducing data exports: more tools, templates, and guides to make this process easy and fast.
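For a sense of what one of those connectors involves, here is a minimal sketch that maps assistant data from a generic intermediate shape into a made-up target schema; both structures are hypothetical stand-ins rather than any real platform’s format:

```python
# Hypothetical connector sketch: re-serialize assistant data (intents, utterances,
# responses) into the shape a third-party platform might expect. Both schemas
# below are illustrative, not a real export or import format.
import json

def to_third_party(assistant: dict) -> dict:
    """Map a generic assistant structure into a made-up target platform schema."""
    return {
        "agent": assistant["name"],
        "intents": [
            {
                "displayName": intent["name"],
                "trainingPhrases": intent["utterances"],
                "fulfillmentText": assistant["responses"].get(intent["name"], ""),
            }
            for intent in assistant["intents"]
        ],
    }

assistant = {
    "name": "grocery-assistant",
    "intents": [{"name": "add_to_cart", "utterances": ["add milk", "put eggs in my cart"]}],
    "responses": {"add_to_cart": "Got it, adding that to your cart."},
}
print(json.dumps(to_third_party(assistant), indent=2))
```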
Ultimately, we’re building something flexible, because we understand that every team designs their production assistant differently. With these new import and export features, Voiceflow can become a more integrated component of your conversational AI workflow. Watch this demo to see how it all comes together:
If you have any interest in importing an existing system into Voiceflow, let us know which platforms you use so we can prioritize them. The same goes for exports: tell us which tools you rely on so we can prioritize those platforms and get them into your hands as soon as possible.
Where we’re headed
There’s a lot to look forward to over the next quarter as we continue to focus on enabling scale for conversational AI teams. Here’s what we’re working on right now:
- Speeding up the design and prototyping experience
- Creating more opportunities to leverage LLMs as part of the design and prototyping experience
- More educational content and functionality to help teams get to the prototyping stage much faster
- The ability to represent your assistant at full scale, with tools to manage all of its content types in one place under Voiceflow’s roof
- The ability to connect this data in and out of the NLUs and the dialogue managers you’re using today
This is only a handful of the many, many exciting things our product team covered. Want to dive in deeper? Watch the recording of Voiceflow Newsroom below.