The Comprehensive Guide to Generative AI for Contact Centers

We've assembled this in-depth guide through multiple interviews with AI scientists, contact center leaders, business leaders, and a distinguished professor. We hope you find it helpful as we all navigate this new era of artificial intelligence--and we're here to be a trusted partner on your journey.

Table of Contents

  1. The Generative AI Manifesto from Observe.AI's CEO and Co-Founder, Swapnil Jain
  2. What is GPT and Generative AI?
  3. Introduction to Large Language Models--and What to Look For
  4. Why Contact Center-Specific LLMs Matter
  5. A Fireside Chat with a Distinguished Professor
  6. Generative AI Perspectives from Contact Center Leaders
  7. Q&A with Observe.AI CEO and Chief Scientist on the Road Ahead
  8. Announcing Observe.AI's Proprietary Contact Center LLM
  9. How to Leverage Generative AI for Your Agents

The Generative AI Manifesto: Experiment Fast, Early, and Responsibly

When we look back on 2023, it will be remembered as the year Generative AI changed the world.

Already we are seeing the impact of Generative AI across industries and professions—the contact center included. As with mobile and cloud computing before it, those that do not adapt will fall behind. With any paradigm shift, there are two critical choices to make:

What will you do?

The answer to the first question should be straightforward by now: We must embrace it or risk becoming obsolete. Yes, there are a lot of unknowns. Yes, there are real and relevant reservations about the technology. But we also have the rare opportunity to be the pioneers in this revolution. This is our strong recommendation to you: Experiment early and experiment fast, but experiment responsibly.


Generative AI provides a nearly zero barrier to entry for most applications. And most implementations can be reversed. For any Generative AI application you’re considering, if it checks those two boxes (low barrier to entry and easily reversible) and the value is there, then it’s well worth a test. The downside is minimal, and the upside is limitless. Experiment. Learn. Iterate. But this brings me to the second question.

Who will you trust?

Every pioneer has leaned on the expertise of a guide: the Sherpas of Mount Everest, Sacagawea to Lewis and Clark. Who you partner with often determines your success or failure. Right now, the answer to this question is more confusing than ever. Type “Generative AI for Contact Centers” into Google and you’ll get over 20 million results. So let us provide some answers:

1. Should you use ChatGPT for your contact center?

Short answer: No. Why? While ChatGPT is impressive and produces comprehensive and cohesive answers, it is prone to serious inaccuracies and hallucinations, making it too risky to use at the enterprise level.

2. How can you experiment responsibly?

Generative AI solutions must give you the ability to calibrate and fine-tune the system. By nature, generic out-of-the-box models are trained on broad data sets and will not understand the nuances of contact center conversations. A black box solution without the means to calibrate it can be risky when things go awry. No machine gets it right the first time, every time. Allowing humans to refine the machine is essential.
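The human-in-the-loop calibration idea can be sketched as a tiny feedback loop: a model scores calls, human reviewers label a sample, and a decision threshold is tuned until the machine agrees with the reviewers. The scores and labels below are invented illustration data, not any vendor's actual method.

```python
# Toy calibration loop: pick the sentiment threshold that best matches
# human reviewers. All scores and labels are made-up illustration data.

scored_calls = [  # (model_score, human_label) pairs from a review batch
    (0.92, "negative"), (0.81, "negative"), (0.74, "negative"),
    (0.55, "neutral"), (0.40, "neutral"), (0.15, "neutral"),
]

def label(score: float, threshold: float) -> str:
    """Machine label: scores at or above the threshold are 'negative'."""
    return "negative" if score >= threshold else "neutral"

def calibrate(batch, candidates=(0.3, 0.5, 0.7, 0.9)) -> float:
    """Choose the candidate threshold with the most human agreement."""
    def agreement(t):
        return sum(label(score, t) == truth for score, truth in batch)
    return max(candidates, key=agreement)

print(calibrate(scored_calls))  # 0.7: every call above it was human-labeled "negative"
```

The point of the sketch is the loop itself: human corrections feed back into the machine's settings, which is exactly the kind of control a black box solution cannot offer.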

3. What should you look for in a Generative AI partner?

First, are they well-versed enough in contact center dynamics to understand the intricacies of customer interactions, agent workflows, and the overall operational ecosystem? Someone with this level of expertise will be an accelerator to your experimentation with Generative AI.

Second, do they have a team of AI and machine learning experts with a strong track record? The low barrier to developing Generative AI solutions means simpler applications developed by teams unfamiliar with the contact center space will soon be a dime a dozen. Firms with an AI pedigree and funding will outlast and out-innovate those who may have made a splash by being first to market.

Third, understand the underlying technology. Most solutions are using out-of-the-box language models trained on generic data sets. On the surface, these may appear worthy of experimentation, but they are limited in capability by nature. The truth is, contact centers need an LLM that can adapt to the nuances of their data for better comprehension of interactions and more accurate identification of the key actions and events you need to make better business decisions. More on this later.

What is GPT and Generative AI?

ChatGPT is dominating today’s headlines. But ChatGPT is just the most public-facing manifestation of Generative AI from its parent company, OpenAI. Underneath the easy-to-use functionality and chatbot interface is a set of complex technologies that have major implications for contact centers and the world at large.

What is Generative AI?

Generative AI is a subset of deep learning that can generate new content, such as text, images, audio, and even code. It does this by using algorithms to learn from data and then generate new examples that are similar to the training data. Deep learning has been around for a long time, but Generative AI research picked up speed in 2014 with the introduction of Generative Adversarial Networks.

Generative AI builds on Large Language Model (LLM) related technology that has accelerated in the last few years. The recent advancements in Generative AI and LLM technology have opened the floodgates in terms of the applications that can be created across many different verticals, including the contact center industry. LLMs are the underlying technology that powers Generative AI. We’ll dive into those next.

Introduction to Large Language Models--and What to Look For

Some of you may remember Mad Libs: You’d get a template for a story, with blanks for specific nouns, verbs, adverbs, and other parts of speech.

For example: The _______ (noun) provided excellent customer service to the _______ (noun) who was _______ (verb ending in -ing) its _______ (noun).

This might yield a sentence like: The antelope provided excellent customer service to the Pyramids of Giza who was juggling its nephew.

The first iterations of AI were a bit like Mad Libs. Relying on a small pool of available data, computers would take user prompts and fill in the blanks as best they could. They’d do this using language models: technology that predicts what will come next in a sequence of words. A good example of this is when you’re typing into Google and the program suggests words that you might want to use to finish your sentence or question.
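That fill-in-the-next-word behavior can be illustrated with a toy bigram model: count which word follows each word in a corpus, then predict the most frequent follower. The miniature "corpus" below is made up for illustration; real language models use the same principle at a vastly larger scale.

```python
from collections import Counter, defaultdict

# Train a toy bigram model: for each word, count the words that follow it.
corpus = (
    "thank you for calling how can i help you today "
    "i can help you with that billing question "
    "can i help you with anything else today"
).split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("help"))  # "you": it follows "help" in every example
```

This is essentially what autocomplete does; LLMs replace the word-pair counts with billions of learned parameters and far longer context.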

Large language models (LLMs) are trained to understand and generate human language. LLMs have a much larger vocabulary and a greater capacity for understanding complex language structures, nuances, and context than earlier language models. Many LLMs have been developed over the years. Apart from architectural innovation, the two main driving forces have been larger data sets and greater model complexity (i.e., more parameters). For example, BERT was trained on roughly 3.3 billion words, while GPT-3 was trained on hundreds of billions of tokens filtered from some 45 terabytes of raw text, orders of magnitude more data. However, larger is not always better. We’ll explain why next.

Why Contact Center-Specific LLMs Matter

Broad use case GPT applications like ChatGPT are impressive. Type in a prompt and you’ll get a cohesive and relevant answer. However, broader LLMs are not equipped to handle the specificity, detail, and precision needed for contact centers at the enterprise level. While Generative AI holds massive promise, there are several challenges to using generic LLMs that dampen their effectiveness in contact centers. They include a fundamental lack of specificity and control, an inability to discern right from wrong responses, and ineptitude with spoken human conversation and real-world environments.

Consequently, generic models like GPT are prone to serious inaccuracies and confabulations – otherwise known as “hallucinations” in AI – making them too risky to use in business settings. The beauty of LLMs is that they can be tailored for specific industries and purposes. By feeding them customized data, we can create domain specific LLMs that are expertly trained for certain use cases. This offers users far more accurate predictions than a general LLM would provide, as well as unprecedented levels of trust, control, feedback, and accuracy.


Initial benchmarks demonstrate that our proprietary Contact Center LLM is 35% more accurate than GPT-3.5 at automatically summarizing conversations and 33% more accurate at sentiment analysis. These numbers are expected to improve with continuous training. “By leveraging a domain-specific LLM, we’re able to drive deeper trend analysis, more accurate call summarization, and in-context question answering while ensuring degrees of control, calibration, and privacy that are simply not possible with generic models,” said Vache Moroyan, SVP of Product at Observe.AI.

The bottom line: If you care about accuracy and flexibility, domain-specific LLMs matter.

But don’t take our word for it. Continue reading to hear from industry leaders about their experience and use cases with Generative AI and GPT.

Generative AI Fireside Chat with a Distinguished Professor

There are few actual experts who can provide not only the historical context of this moment, but also a forward-looking perspective. That’s why we brought in Kirthi Kalyanam, L.J. Skaggs Distinguished Professor and Executive Director of the Retail Management Institute at Santa Clara University, to speak during our GPT Innovation Day. Here are the highlights from the wide-ranging conversation:


1. Can you put Generative AI and this moment in context for us?

Professor Kirthi: If you look at this 30-year history, you see the same phenomenon play out again and again. A new technology comes around and some company takes it to solve a customer experience problem that has not been solved before. And it's sometimes just amazing how obvious that is. We’re having one of those moments now.

2. What are some examples of this?

Professor Kirthi: Chewy is in the pet business. A lot of pet owners are not just pet owners, they're pet parents. Chewy doesn't outsource its calls. Somebody picks up the phone, it's one of their employees, and it's their own call center. They created this huge space in that industry, which others have not been able to. That's how they build the connection.

3. What’s your takeaway from Chewy’s approach to customer service?

Professor Kirthi: The bottom line is that there are many, many industries where the customer experience is an important white space that is not properly addressed by the existing incumbents or the existing technology. And there are huge returns to that payoff. Chewy is selling the same products that Petco and PetSmart are selling. There is nothing unique about the products they carry. It's purely a better customer experience.

4. So how does Generative AI fit into this story for you?

Professor Kirthi: I see Generative AI as a way for all of these companies to truly start creating differentiation in that high involvement purchase. The reason we haven't done more of this is that high involvement servicing is always using humans, and there's a huge cost to that. It affects your cost and your P&L, and it hasn't been something you can do very well. So I certainly expect to see more blending of technology with humans.

5. How do you think this will change consumer expectations?

Professor Kirthi: I think people are going to value authenticity more than anything else. If generating content is as easy as it has become with ChatGPT, consumers are pretty quickly going to see that a lot of this content is not real. And so then the question is, where can you create authenticity, where you can create emotion, where you can create trust. These parameters are going to become very critical.

6. How do you think ChatGPT will negatively affect customer experience?

Professor Kirthi: The first negative effect of ChatGPT is going to be an erosion of trust in content. A dramatic erosion of trust in any kind of content, whether it's written content or images.

7. How should contact centers think about Generative AI and GPT?

Professor Kirthi: Businesses are going to have to double down on figuring out how this will affect their business. That's the first checkbox. If you don't have that very cleanly lined up, this new wave of technology is going to cause a lot of confusion. The second question I would ask is, “What's my strategy to have an emotional connection and continuous emotional conversation with the customer?” because Generative AI is going to begin commoditizing experiences. Technology commoditizes, but it also differentiates. It all depends on whether you know how to use it and which way you use it.

8. What is your recommendation for contact center leaders interested in Generative AI?

Professor Kirthi: Thank goodness OpenAI released ChatGPT in the public domain. Enterprises don't really have to sell this idea to their board anymore. The board is asking them “What does this mean for us?” So I think this is an opportunity, not a problem. The second thing is that this technology is highly trialable. You don't have to make a major investment to try it out. You already have platforms available and it's easy to build an entry point to start looking at a high-value case study immediately. Find those investments that are easy to do and completely reversible. That way, you’re moving along without the risk.

4 Generative AI Perspectives from Contact Center Leaders

We love working with our customers every day to help them improve contact center performance, so we hosted an intimate discussion with Observe.AI customers at our offices about GPT and Generative AI and their applications in the contact center world. With backgrounds across multiple industries and a variety of roles, from Ops to CX to digital transformation, the discussion was as diverse as it was insightful.

Here are some key takeaways from the day:


1. GPT cannot be ignored

Though the group was diverse, they all had one thing in common: an eagerness to learn about this game-changing technology. Attendees took the time to travel from around the country because the potential opportunity Generative AI brings to contact centers is palpable. But there was also an acknowledgement that there is much to be learned about the field and how it will impact day-to-day business operations.

2. ChatGPT is helpful, but not perfect

During our interactive sessions, attendees explored the potential of ChatGPT from OpenAI. By using simple prompts we provided, such as “Act as an agent conversing with a customer calling to complain about a billing error. Generate a script for the agent,” they could experience Generative AI firsthand. However, the sentiment from the room was clear: While the generated content was impressive, it wasn’t perfect.

The best agents are adept at creating personalized experiences for customers in order to build trust, rapport, and brand affinity. Canned responses built from generic language models aren’t useful enough for the complex conversations many agents face every day.

3. Calibration and context are critical

If Generative AI built on models using generic information won’t cut it, then what will? It was clear attendees wanted more calibration and specificity. For Generative AI to be truly useful, it must be built on models specific to the industry, business, and customer—and it must have the ability to calibrate responses to improve accuracy and compliance.

4. The opportunities are endless

During breakouts, the groups discussed the potential use cases for Generative AI. Often the discussion came back to how Generative AI can help improve efficiency and productivity within the contact center for agents, supervisors, and trainers, before, during, and after interactions. But the discussion also included use cases for teams outside of the contact center, like product and marketing.

In fact, according to our most recent report, customer conversation insights are critical to making strategic decisions across the business. Our survey of 300+ respondents showed 99% of contact center leaders leverage customer conversations for insights—and use those insights across multiple teams, including marketing, product, and supply chain, and to report at the executive level.

5 Questions with Observe.AI CEO and Chief Scientist

We held an interactive Q&A session with our CEO, Swapnil Jain, and Chief Scientist, Jithendra Vepa. Below are their answers to the top five questions from contact center leaders.

1. What should contact center leaders know about the existing LLMs out there?

Swapnil Jain, CEO: The existing language models are trained on entire data sets, the entire Internet corpus. They’re really good at general-purpose use cases. Very good at language capabilities. They’re fluent and coherent in nature. The problem is you don’t have control over general-purpose language models. You can’t give them feedback or control the outcome. This makes generic LLMs a poor fit for enterprises. You need domain-specific models.

Jithendra Vepa, Chief Scientist: Hallucinations are the main limitation at this point. If these generic LLMs generate incorrect facts, they will really impact the trust of users. For enterprise use cases, trust and accuracy are obviously really important. You need to be able to customize the model to meet customer requirements. This is the current challenge with black box models.

2. What is the benefit of domain-specific models for contact centers?

Vepa: Domain-specific LLMs are trained by feeding the model domain-specific data. For contact centers, for example, this would be contact center data or conversations. Bloomberg just announced its GPT model, BloombergGPT, trained specifically on financial data.

Jain: Think about a Tesla. If you only train Tesla’s autopilot system in the suburbs but then throw it into the middle of New York City, it’s not going to perform the same. When we talk about domain-specific training, we limit the domain to a particular area to make it more effective.

With a generic application, you get a response and you live with it. It’s a very black box response. If you cannot give feedback to something, if you cannot control it, how do you trust it? If you deploy something like this in your enterprise, imagine the damage it can cause. You need a system that you can improve upon. A system that you can give feedback to, and once you have that, you can trust it. And that’s why domain-specific, smaller LLMs are the solution.

3. How should contact center leaders be communicating about Generative AI?

Jain: Generative AI is the talk of every boardroom. Every CEO is expecting their CIO or CTO to come up with an AI strategy, an LLM strategy, a GPT strategy, because of its relevance to the contact center. Every VP of operations, contact center, or customer service is expected to put together this strategy.

My recommendation is to embrace this. This is the new world. This is here to stay. The beauty of this is that it’s not a binary shift. You can try it out and get your hands dirty with this technology and you can always reverse it if it doesn’t work out. You want to be the company whose legacy is that you accepted and adopted it. Either you will do it and go to your CEO and say you are doing it, or your CEO will come to you and ask you for it.

4. How can contact center leaders get started with Generative AI?

Jain: Pick a simple use case, like note summarization, NLP-driven insights, or a use case built on a basic knowledge base. All of these solutions are available within Observe.AI today. Reach out to us and we’ll demo some of those capabilities with you. Pick one; don’t pick more than one, even though we would love for you to use all of our product capabilities. Start small, prove it, get your hands dirty, and expand.

5. How is Observe.AI applying Generative AI across the customer interaction lifecycle?

Jain: We recently announced capabilities in all three of these buckets: pre-interaction, during interaction, and post-interaction.

It will really challenge our thinking, because we fundamentally believe all parts of the conversation are going to change. In everything we do, we emphasize the concept of calibration we’ve talked about previously. When we launch a new product or feature, we have a full framework that allows the customer to calibrate the machine and make sure the outputs are in line with what they expect. You can give feedback to the machine, and the machine learns and improves, which builds trust. We don’t just let AI go out there and start doing things on its own.

Announcing Observe.AI's Proprietary Contact Center LLM

Every day your agents are interfacing with customers on the front lines of your business. Every interaction is an opportunity to drive revenue, build brand loyalty, and provide customer support. By now, there are dozens (if not hundreds) of solutions claiming to leverage Generative AI to improve, accelerate, or automate every one of these touch points. The question you need to ask is: Can you trust them to do the job you need them to do?

If your provider’s solutions are built on generic LLMs, you may want to reconsider for all of the reasons we've mentioned above. Our mission has always been to help contact centers and their leaders drive better performance across the entire operation with the most robust and useful technology and expertise. This is exactly why we have been developing our industry-first contact center large language model.

Our groundbreaking proprietary 30-billion-parameter LLM is customized for contact centers and trained specifically for contact center use cases, including automatic summarization, generating coaching notes, helping agents query knowledge, and extracting insights. Today, we are officially launching it to power our suite of Generative AI solutions: Knowledge AI, Auto Summary, and Auto Coaching. Read more about them here.

To evaluate performance objectively, we conducted a comparative analysis between our proprietary model and GPT-3.5, and the results were dramatic. Not only that, but we give you more control over the model, with a feedback loop to improve and fine-tune it to the needs of your business. This may sound like yet another claim made by yet another vendor, and I don’t blame you if you’re skeptical. So I invite you to see a demo with one of our experts and test it out for yourself.

This is a new era of contact center artificial intelligence and we are here to partner with you every step of the way.

How to Leverage Generative AI for Your Agents

Introducing the new Generative AI Suite by Observe.AI

Observe.AI’s new Generative AI Suite empowers agents throughout the entire customer interaction process, improving performance and productivity every step of the way.

Knowledge AI: answer customer questions faster and better than before

Today, when a customer asks a question that isn’t easy to answer, your agents put them on a “brief hold” to search knowledge base (KB) articles or ask a supervisor. Industry reports suggest that 46% of customers are put on hold for an average of 55 seconds at a time. The same research suggests that customers who are put on hold report 13% lower CSAT and 16% lower first call resolution (FCR).

With Knowledge AI, Observe.AI has eliminated the need for agents to scour your knowledge bases. Agents can simply type a question and get ready-to-use answers. Once you have connected Knowledge AI to your KBs, or even manually uploaded documents, Knowledge AI analyzes the information sources to deliver the best response to customer questions instantly. Agents also get citations in the form of links to the original documents if they want more detail around the answer. This form of just-in-time knowledge discovery reduces customer hold and wait time, resulting in better CSAT as well as higher FCR. As a result, your overall average handle time (AHT) also improves.
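For intuition, the core retrieval idea can be sketched as matching a question against KB articles and returning the best-scoring one. Knowledge AI itself uses LLM-based understanding; the sketch below uses naive word overlap, and the article ids and texts are hypothetical.

```python
# Minimal knowledge-base lookup sketch: score each (hypothetical) article
# by how many words it shares with the agent's question.

kb_articles = {
    "billing-errors": "how to review a billing error and issue a refund",
    "password-reset": "steps to reset a customer account password",
    "shipping-status": "how to check shipping status and tracking numbers",
}

def best_article(question: str) -> str:
    """Return the id of the article sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(item):
        _, text = item
        return len(q_words & set(text.split()))
    return max(kb_articles.items(), key=overlap)[0]

print(best_article("customer wants to reset their password"))  # password-reset
```

A production system would add ranking, citations back to the source documents, and answer generation on top of retrieval, but the lookup step is the same shape.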

Auto Summary: automatically capture the essence of customer interactions

In early 2023, we launched Automated Actions for Real Time Agent Assist as a unique way to automate parts of note-taking and reduce after-call work.

With our new Generative AI-powered Auto Summary, contact centers can now completely eliminate the need for agents to capture notes. Generative AI can create summaries in multiple formats:

- Structured: Select the kind of structure you want your call summary in, e.g. reason for call, customer issue, solutions provided, follow-up or next steps, etc.

- Unstructured: Ask for a free-flowing description of what the essence of the entire conversation was.

- Entities: Identify key entities mentioned on the call, like names, phone numbers, dollar amounts, and so on.

These summaries can be generated as soon as the call ends, or in batches in our post-interaction AI solution for use in QA or coaching. Additionally, Generative AI-based summaries can be combined with the existing capabilities of moment-based note capture, as well as manually added notes, in Real Time Agent Assist.
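As a rough illustration of the Entities format, even a few regular expressions can pull phone numbers and dollar amounts out of a transcript. The production feature relies on the LLM; the sample transcript and patterns below are invented for illustration.

```python
import re

# Sketch of the "Entities" summary format: extract phone numbers and
# dollar amounts from a (made-up) call transcript with regexes.

transcript = (
    "Customer at 415-555-0199 disputed a $42.50 charge and asked for "
    "a callback about the $10 late fee."
)

def extract_entities(text: str) -> dict:
    return {
        "phone_numbers": re.findall(r"\b\d{3}-\d{3}-\d{4}\b", text),
        "dollar_amounts": re.findall(r"\$\d+(?:\.\d{2})?", text),
    }

print(extract_entities(transcript))
# {'phone_numbers': ['415-555-0199'], 'dollar_amounts': ['$42.50', '$10']}
```

Names and other open-ended entities are where pattern matching breaks down and a language model earns its keep.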

Auto Summary has the potential to completely eliminate after-call work and make your contact center more productive. At the same time, you can save time on training and coaching agents on improving the quality and consistency of their note-taking. Machines can now summarize better and faster than humans. So if agents save time with features like Knowledge AI and Auto Summary, can they also use Generative AI to make better use of the saved time?

Read on to see how.

Auto Coaching: provide agents with immediate, in-the-moment coaching

QA and manager-driven coaching are critical methods for improving agent performance. But is there an opportunity to complement these workflows with agent self-coaching? With Auto Coaching, agents will have the opportunity to self-coach and learn what went well and what didn't in the interaction that just ended. Auto-created feedback from Generative AI is served up to agents so they can make quick adjustments on their own to improve performance, without having to wait for QA coaching or supervisor feedback. This method cuts down time to improvement in agent performance and impacts a wide range of contact center metrics like CSAT and FCR.

We understand that Generative AI is the hot topic for just about every business right now—and contact centers are no exception. In fact, contact centers are probably at the frontlines of this revolution. That's exactly why we’re leading the charge and doing it right:

- A proprietary contact center LLM purpose-built for contact center needs

- Built-in calibration to keep humans in-the-loop

- Proven expertise in Generative AI and contact center operations

Interested in learning more about Generative AI for Contact Centers? Talk to our team today!

August 2, 2023