8 minute read

DR Technology Logo

News, news analysis, and commentary on the latest trends in cybersecurity technology.

The Challenges of AI Security Begin With Defining It

Security for AI is the Next Big Thing! Too bad no one knows what any of that really means.

Picture of Michael Bargury

Michael Bargury, CTO & Co-Founder, Zenity

March 5, 2024

4 Min Read

A fully automated police robot stands at a charging station in Terminal T4 at Singapore Changi International Airport

Source: Agencja Fotograficzna Caro via Alamy Stock Photo

LinkedinFacebookTwitterRedditEmail

As artificial intelligence (AI) continues to grab everyone’s attention, security for AI has become a popular topic in the marketplace of ideas. Security for AI is capturing the media cycle, AI security startups are coming out of stealth left and right, and incumbents are scrambling to release AI-relevant security features. It is clear security teams are concerned about AI.

But what does “AI security” mean, exactly?

Frankly, we don’t really know what security for AI means yet because we still don’t know what AI development means. “Security for X” typically arrives after X has matured — think cloud, network, Web apps — but AI remains a moving target.

Still, there are a few distinct problem categories emerging as a part of AI security. These line up with the concerns of different roles within an organization, so it is unclear whether they easily merge, though of course they do have some overlap.

These problems are:

  1. Visibility

  2. Data leak prevention

  3. AI model control

  4. Building secure AI applications

Let’s tackle them one at a time.

1. Visibility

Security always starts with visibility, and securing AI applications is no different. Chances are many teams in your organization are using and building AI applications right now. Some might have the knowledge, resources, and security savviness to do it right, but others probably don’t. Each team could be using a different technology to build their applications and applying different standards to ensure they work correctly. To standardize practices, some organizations create specialized teams to inventory and review all AI applications. While that is not an easy task in the enterprise, visibility is important enough to begin this process.

2. Data Leak Prevention

When ChatGPT was first launched, many enterprises went down the same route of desperately trying to block it. Every week new headlines emerged about companies losing their intellectual property to AI because an employee copy-pasted highly confidential data to the chat so they could ask for a summary or a funny poem about it. This was really all anybody could talk about for a few weeks.

Since you cannot control ChatGPT or any of the other AIs that appear on the consumer market, this has become a sprawling challenge. Enterprises issue acceptable use policies with approved enterprise AI services, but those are not easy to enforce. This problem got so much attention that OpenAI, which caused the scare in the first place, changed its policies to allow users to opt out of being included in the training set and for organizations to pay to opt out on behalf of all their users.

This issue — users pasting the wrong information into an app it does not belong to — seems similar to what data loss prevention (DLP) and cloud access security broker (CASB) solutions were created to solve. Whether enterprises can use these tools created for conventional data to protect data within AI remains to be discovered.

3. AI Model Control

Think about SQL injection, which boosted the application security testing industry. It arises when data is translated as instructions, resulting in allowing people who manipulate application data (i.e., users) to manipulate application instruction (i.e., its behavior). With years of severe issues wreaking havoc on Web applications, application development frameworks have risen to the challenge and now safely handle user input. If you’re using a modern framework and going through its paved road, SQL injection is for all practical purposes a solved problem.

One of the weird things about AI from an engineer’s perspective is that it mixes instructions and data. You tell the AI what you want it to do with text, and then you let your users add some more text into essentially the same input. As you would expect, this results in users being able to change the instructions. Using clever prompts lets you do that even if the application builder really tried to prevent it, a problem we all know today as prompt injection.

For AI application developers, trying to control these uncontrollable models is a real challenge. This is a security concern, but it is also a predictability and usability concern.

4. Building Secure AI Applications

Once you allow AI to act on the user’s behalf and chain those actions one after the other, you’ve reached uncharted territory. Can you really tell whether the AI is doing things it should be doing to meet its goal? If you could think of and list everything the AI might need to do, then you arguably wouldn’t need AI in the first place.

Importantly, this problem is about how AI interacts with the world, and so it is as much about the world as it is about the AI. Most Copilot apps are proud to inherit existing security controls by impersonating users, but are user security controls really all that strict? Can we really count on user-assigned and managed permissions to protect sensitive data from a curious AI?

A Finishing Thought

Trying to say anything about where AI or by extension AI security will end up is trying to predict the future. As the Danish proverb says, it’s difficult to make predictions, especially about the future. As AI development and usage continue to evolve, the security landscape is bound to evolve with them.

LinkedinFacebookTwitterRedditEmail

About the Author

Michael Bargury

Michael Bargury

CTO & Co-Founder, Zenity

Michael Bargury is an industry expert in cybersecurity focused on cloud security, SaaS security, and AppSec. Michael is the CTO and co-founder of Zenity.io, a startup that enables security governance for low-code/no-code enterprise applications without disrupting business. Prior to Zenity, Michael was a senior architect at Microsoft Cloud Security CTO Office, where he founded and headed security product efforts for IoT, APIs, IaC, Dynamics, and confidential computing. Michael holds 15 patents in the field of cybersecurity and a BSc in Mathematics and Computer Science from Tel Aviv University. Michael is leading the OWASP community effort on low-code/no-code security.

See more from Michael Bargury

Keep up with the latest cybersecurity threats, newly discovered vulnerabilities, data breach information, and emerging trends. Delivered daily or weekly right to your email inbox.

Subscribe

More Insights

Webinars

More Webinars

Events

More Events

You May Also Like


Latest Articles in DR Technology

4 Min Read

3 Min Read

1 Min Read

4 Min Read

Read More DR Technology

Cookies Button

About Cookies On This Site

We and our partners use cookies to enhance your website experience, learn how our site is used, offer personalised features, measure the effectiveness of our services, and tailor content and ads to your interests while you navigate on the web or interact with us across devices. By clicking “Continue” or continuing to browse our site you are agreeing to our and our partners use of cookies. For more information see Privacy Policy

CONTINUE

Company Logo

When you visit any website, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized web experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and change our default settings. However, blocking some types of cookies may impact your experience of the site and the services we are able to offer.

More information

Allow All

Strictly Necessary Cookies

Always Active

These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms.    You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information.

Performance Cookies

Always Active

These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site.    All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance.

Functional Cookies

Always Active

These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages.    If you do not allow these cookies then some or all of these services may not function properly.

Targeting Cookies

Always Active

These cookies may be set through our site by our advertising partners. They may be used by those companies to build a profile of your interests and show you relevant adverts on other sites.    They do not store directly personal information, but are based on uniquely identifying your browser and internet device. If you do not allow these cookies, you will experience less targeted advertising.

Back Button

Search Icon

Filter Icon

Clear

checkbox labellabel

ApplyCancel

ConsentLeg.Interest

checkbox labellabel

checkbox labellabel

checkbox labellabel

Confirm My Choices

Powered by Onetrust

Updated: