5 Security Questions Contact Centers Should Ask Before Using GPT and Generative AI

Go through this security checklist when considering a vendor for Generative AI

With the launch of OpenAI's ChatGPT, the rush to identify use cases for Generative AI was exciting, but it also opened the door to a range of potential data privacy concerns. Contact centers handle a variety of sensitive customer data, including credit card numbers, personal information, and (for highly regulated industries) a host of health or financial details.

Tools like ChatGPT have made it incredibly easy for an average user to leverage Generative AI for text generation, getting quick answers, and task completion. Simply input content into the chat and, like magic, the platform makes your request happen.

Experimenting with these easy-to-use tools is tempting, and absolutely worthwhile for familiarizing ourselves with the power of Generative AI. As our CEO mentioned in his Generative AI Manifesto, contact center leaders should be experimenting fast and responsibly.

However—and we cannot stress this enough—it’s absolutely critical to keep data security top of mind as you’re doing so. 

Top Concerns About Generative AI and Security

There is a lot of upside to Generative AI and GPT-based tools, but being careless with sensitive customer information can open a can of worms for you and your contact center.

Violations and Fines

As with any other contact center compliance risk, sharing sensitive customer information with a Generative AI platform can lead to regulatory violations and fines. Be careful what you share and how you share it.

Machine Learning

By nature, Generative AI models learn from the content they consume. If you share customer data with the machine, it may take that information and use it to learn. Unless the model lets you opt out of training or redact your inputs, you may no longer have control over that sensitive data, or over whether the machine surfaces it elsewhere.

Yes, these scenarios sound scary, but don't worry. We're here to help.

5 Key Security Questions to Ask Before Using a Generative AI or GPT Solution

We’ve assembled the top questions you should ask before using a Generative AI or GPT solution for your contact center. 

#1 Do they use an in-house, proprietary LLM or an external LLM?

Vendors may be using third-party large language models (LLMs) as the underlying Generative AI technology powering their solution.

Why does this matter? 

Some third-party providers do not take responsibility for data leaks or guarantee security. Understanding whether the solution you're evaluating uses a third-party application or an in-house, proprietary LLM is the first place to start, because vendors will likely have more control over an in-house LLM, and by association, so will you.

Either way, you'll want to ask a host of follow-up questions. We've assembled an extensive list of them in this Generative AI Security Cheat Sheet.

#2 Does it have enterprise-grade security baked into the product?

Companies serious about security will have a set of enterprise security certifications to show for it. This can include global security and data privacy standards and regulations, like ISO 27001, PCI DSS, GDPR, CCPA, and SOC 2 Type 2.

It may also include industry-specific frameworks, like HIPAA or HITRUST.

This is where it's important to understand whether the vendor is using their own proprietary technology or a third party's.

If the vendor is using their own technology and they have the proper security certifications, you should be covered. If they’re using a third-party platform, you will need to ensure both the vendor and the third-party platform are covered.

#3 Does it give you control over private data redaction?

Generative AI requires giving the machine access to data in order to execute tasks. That data might include customer conversations, knowledge base content, or live interactions. 

Because of this, it's important for contact centers to be able to properly redact any sensitive information before feeding it to the machine. However, redaction poses a challenge: If you don't redact enough, you'll risk a violation; if you redact too much, the machine won't be able to properly do its job. And if you have to spot-check every instance manually, will it be worth it?

Most solutions will allow some level of redaction, but there are two questions to ask: Does the solution support automated, selective redaction, and how much control do you have over it?
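To make this concrete, below is a minimal sketch of what automated, selective redaction can look like. Everything in it is illustrative: the regex patterns and category names are hypothetical, and production systems typically rely on trained PII detectors rather than hand-written patterns. The point is the control knob: you choose which categories get scrubbed before any text reaches the LLM.

```python
import re

# Hypothetical patterns for a few common PII categories. Real redaction
# engines typically use trained NER/PII models; regexes here just keep
# the sketch self-contained.
PII_PATTERNS = {
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str, categories: set[str]) -> str:
    """Replace only the selected PII categories with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        if label in categories:
            text = pattern.sub(f"[{label}]", text)
    return text

raw = "My card is 4111 1111 1111 1111 and my email is jane@example.com."
# Selective: scrub card numbers and SSNs, keep emails if the task needs them.
print(redact(raw, {"CARD_NUMBER", "SSN"}))
# -> My card is [CARD_NUMBER] and my email is jane@example.com.
```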

#4 Does it enforce strong authentication and access control across its data?

Because of the nature and volume of sensitive data involved, it's important that the Generative AI solution strictly restricts who has access to that data.

Ask about role-based access control (RBAC) for both vendor employees and users.

This may mean vendor employees can't access customer data without adhering to a specific process, and that access should be audited regularly so any leak can be traced.

On the contact center side, this may mean agents can only view their own calls and supervisors can only view their team's calls.

For Generative AI, this would mean the model only surfaces content related to the specific user and never generates information that violates the RBAC policy.
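As a rough illustration, here is a minimal sketch of an RBAC check applied before retrieval. The role model and field names are hypothetical; the key design choice is that the corpus is filtered by the requesting user's permissions before the LLM ever sees it, so a generated answer cannot draw on a call the user couldn't open directly.

```python
from dataclasses import dataclass

# Hypothetical role and data model, for illustration only; a real platform
# would back this with its identity provider and audit every access.
@dataclass(frozen=True)
class User:
    user_id: str
    role: str       # "agent" or "supervisor"
    team_id: str

@dataclass(frozen=True)
class CallRecord:
    call_id: str
    agent_id: str
    team_id: str
    transcript: str

def can_view(user: User, call: CallRecord) -> bool:
    """Agents see only their own calls; supervisors see their team's."""
    if user.role == "agent":
        return call.agent_id == user.user_id
    if user.role == "supervisor":
        return call.team_id == user.team_id
    return False  # deny by default

def retrieve_for_llm(user: User, calls: list[CallRecord]) -> list[str]:
    """Filter the corpus *before* the LLM sees it, so generated answers
    can never leak calls outside the requesting user's permissions."""
    return [c.transcript for c in calls if can_view(user, c)]
```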

#5 Is it accurate and useful for contact center use cases?

This last question is less about security, but it is equally important. 

When looking for a solution for your contact center, keep in mind that not all Generative AI solutions are created equal.

Those using generic or third-party LLMs were likely trained on general-purpose data sets, not contact center conversations or data.

Because of this, it’s important to ensure that (in addition to all of the security questions above) the solution can perform the requested tasks with accuracy in a contact center context.

When we benchmarked our Contact Center LLM against the one powering ChatGPT, we found it dramatically more accurate when identifying key actions and events within the call, such as call reason, resolution steps, and sequence of events. 
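If you want to run a spot-check of your own, a small harness goes a long way. The sketch below assumes you have a handful of QA-labeled, redacted transcripts for one task (call-reason classification); ask_llm is a placeholder for whichever solution you're evaluating, with a toy rule in its place so the example runs end to end.

```python
# Sketch of a spot-check on one contact center task: call-reason labeling.
# `ask_llm` is a stand-in for the solution under test; the keyword rule
# below exists only so this example is runnable as-is.
def ask_llm(transcript: str) -> str:
    return "billing_dispute" if "charge" in transcript else "cancellation"

# A small labeled sample: (redacted transcript, call reason from your QA team).
labeled_sample = [
    ("Hi, I was double charged on my last bill.", "billing_dispute"),
    ("I'd like to cancel my subscription today.", "cancellation"),
    ("There's an extra charge I don't recognize.", "billing_dispute"),
]

def accuracy(sample: list[tuple[str, str]]) -> float:
    """Fraction of calls where the prediction matches the QA label."""
    hits = sum(ask_llm(text) == label for text, label in sample)
    return hits / len(sample)

print(f"call-reason accuracy: {accuracy(labeled_sample):.0%}")
```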

Interested in learning more about LLMs and Generative AI?

Interested in learning more about how our Contact Center LLM can drive better performance and efficiency in your contact center with enterprise-grade security? Schedule a meeting with one of our experts.

Krishna Gogineni
Principal ML Engineer
August 17, 2023
