With 2 out of 3 organizations now regularly using generative AI in their business and 1 in 5 Dev Ops professionals leveraging it across the software development lifecycle, AI adoption is deeply woven into enterprise solutions.
With no signs of slowing down, this rapid adoption is raising serious concern among CISOs—and rightfully so.
Whether you’re building your own AI product, using an AI tool to enhance your service, or partnering with a third-party vendor that relies on AI, security can be an enigma. Can you trust the tools you’re deploying? Do you know what models your engineers are using? Are you confident in how the AI will behave? Have you instilled architectural controls that prevent it from being manipulated by a threat actor? Will the application remain secure even if prompt injection is successful?
In most organizations, the answer to all these questions is very concerning—and embarrassing—“No.”
The harsh reality is that AI introduces unprecedented risks that most organizations aren’t prepared to handle, as multiple notable incidents have made clear, including:
- Samsung’s accidental data leak through ChatGPT;
- The $1 Chevy Tahoe bought through chatbot manipulation;
- Slack’s private channel data exfiltration vulnerability; and
- Google’s $100bn Bard mishap.
New Challenges, Same Fundamentals
The skyrocketing deployment of AI is exposing a critical need for a security-first approach in AI DevOps. While it might feel like uncharted territory, the reality is the way we approach security hasn’t changed. The fundamentals are the same; it’s just a different application.
Here’s a quick look at some of the risks:
- Statistical variance. We can count on traditional applications to behave the same way every single time (barring any bugs or other exotic situations). But AI is a statistical model by definition, so it introduces statistical variance to our applications. That means we can have a general idea of how it will behave but can never be certain because state of the art models are constantly evolving. Not to mention, consistency can be a concern. Models can respond many different ways to a prompt, but they have to pick one, and it may not always be the same way, especially if the question changes even slightly. And that’s not even considering the risk of someone intentionally manipulating it to misbehave.
- Software supply chain considerations. A growing number of third-party business solutions rely on AI, and users may not even realize it. If these systems are being used to handle sensitive, personal, protected information, that’s a significant risk to your organization. Do you know what systems are in use across your company? How well do you trust those third parties to handle your data responsibly? What if their systems get compromised? The chain of liability in the event of a breach can get extremely messy for both parties.
- Data permanence. Not only do organizations need to be aware of how the models and/or vendors they choose use their data, but also the fact that, after training, data is permanently baked into the model. With a traditional application, if a mistake happens, you can contact the vendor and ask them to remove that log. But with AI, your data is training the model, and it becomes ingrained. Removing it is virtually impossible.
- Lack of standards. While there are no industry-wide compliance standards in place, there are emerging frameworks like ISO/IEC 42001, NIST AI Standards and the EU AI Act, for example. But until these are solidified, that leaves organizations to parse out their own model and limits the “hive mind” or crowdsourcing benefits of industry standardization. Instead, organizations should lean on proven security best practices and architecture guidelines for designing secure AI applications.
- Copyright implications. Currently, the U.S. Copyright Office has ruled that all output of AI systems is public domain unless you can prove that a human was instrumental in its generation. That means for creative organizations, or companies that use AI to generate assets like a logo or content, this content is likely not protected.
How to Build a Secure AI Integration Framework
If this feels overwhelming, you’re not alone. Most CISOs are in the same boat—keenly aware of the risks, but without many resources or consultancies available to help, they feel completely in the dark. Worse yet, in many cases, they may feel like the horse has already escaped the barn.
Let NCC Group serve as a beacon to shed light on the best practices for secure AI integration. As a front runner in AI application and integration security, we can help illuminate a path forward for your organization, even if you’re already off to the races.
Here are our 8 best practices for implementing a security-first approach to AI:
1)Start with a vision. Establish clear, value-driven AI integration goals around how you intend to “do AI” in your organization. What are you trying to achieve? Think about how you’ll protect data, review and select tools, and what architectures you’ll use. Just having an awareness of what you’re doing, why and a thorough understanding of the risks will stop you from putting AI in place just for the sake of AI.
2)Draft a security reference architecture. Create a document that outlines the policy and standard protocols for AI implementation in your organization. It should set rules around security boundaries and controls and integration patterns and best practices. There are some great models that already exist, which you can customize for your own unique situation. NCC Group can help to provide that foundation and customization guidance.
3)Determine data management and governance. Set parameters for model selection criteria and track changes over time to ensure data provenance. Understand how your data is being used by models you rely on—or conversely, if you’re building the model, set policies around how you’ll use your customers’ data. Create a process for handling sensitive data, including extraction if necessary.
4)Establish model output and behavior monitoring strategies. Since AI models are unpredictable (for reasons noted above), you’ll want to create a process for validating that your model is staying true to its expected scope and guidelines. You can do this manually by prompting it with a set of benchmarks to evaluate and classify the output by hand. Another option is to design a flag or complaint button into the UX that allows users to manually report noncompliant output. Some organizations set their model to capture and log outputs on a regular interval (every 30 prompts, for example) and manually review them for quality. Or, you could randomly sample the outputs it provides to verify they’re in line with your expectations.
5)Build in trust by design. How you structure AI systems can make or break the security posture. Start from a position of trust with some basic principles, including data-code separation that segregates trusted and untrusted models to prevent their interaction. This can prevent exposing your entire application assets to a compromised model or data set.
6)Conduct threat modeling. Make sure your DevOps team understands how AI changes threat models and that every project goes through a threat modeling exercise to ensure you’re managing risks the right way before you put it into production. Understanding how the model might behave can help you prepare for the unexpected.
7)Perform dynamic testing. Bring in AI red teamers to put your model and applications through some paces with a wide scope of analysis. Because bias is a well-known concern, too often organizations focus on testing their model for bias but ignore the potential for misuse of its capabilities. To be frank, you should be more concerned about your AI model enabling a bad actor access to delete your user accounts or manipulate your model than about it insulting someone.
8)Train and validate. AI can be uncharted territory for even the most experienced engineers. You’ll want to bring in AI application security experts to train your team on the risks and strategies for protecting their builds from a holistic perspective. Once you’ve established foundational knowledge, validate protocol compliance with checklists and integration review processes.
Remember the Fundamentals
While AI is exciting, and it’s tempting to jump on the bandwagon to incorporate this emerging technology into your applications, products and software stack, it’s essential that CISOs and their organizations balance innovation with security. Engineers, developers and IT security managers need to proactively think about security from both data trust and access control points of view, rather than blindly throwing these components into their environments.
While AI is changing in real-time and organizations may need to constantly remodel their architectures to secure their environments, the good news is that none of this is new from a security perspective. It changes our approach, but the fundamentals are the same.
Wherever you are in your AI implementation journey, NCC Group has the expertise, tools and resources to help you move forward with caution, confidence and efficiency. Our team has been on the forefront of AI integration and development for years, and we’ve helped dozens of organizations build safe frameworks to support their innovation and aspirations.
Contact us today to learn how our AI security practice can help your organization lead the AI revolution with confidence.
Learn more about NCC Group's AI security solutions
Our research-driven experts are ready to help with even your most complex challenges.