The New Cyberattack Surface: Artificial Intelligence


The following article about adversarial AI was submitted by Malek Ben Salem, PhD, Cybersecurity Senior Manager at Accenture Labs. Accenture generously supports our Cybersecurity Innovation Forum series, an evening speaker event with cybersecurity experts and tech innovators held four times a year, sponsored by Volgenau’s CARE center and the School of Business at George Mason University.

Author: Malek Ben Salem, PhD, Cybersecurity Senior Manager, Accenture Labs

Know your threat

Adversarial AI causes machine learning models to misinterpret inputs into the system and behave in a way that’s favorable to the attacker.

To produce the unexpected behavior, attackers create “adversarial examples” that often resemble normal inputs, but instead are meticulously optimized to break the model’s performance.

Attackers typically create these adversarial examples through an iterative process that repeatedly makes minute changes to the model’s inputs.

Eventually these small changes accumulate, and the model makes inaccurate predictions on inputs that still appear normal to a human observer.
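As a hedged illustration of this iterative process, consider a toy linear classifier (the weights, input, and step size below are all hypothetical). Each iteration nudges every feature by a tiny amount against the model’s weights until the prediction flips:

```python
import numpy as np

# Hypothetical linear classifier: predicts class 1 when w @ x > 0
w = np.array([2.0, -3.0, 1.0])

def predict(x):
    return int(w @ x > 0)

# A clean input the model classifies as class 1
x = np.array([1.0, 0.1, 0.5])

# Repeatedly make minute changes against the model's weights
# until the prediction flips -- an FGSM-style iterative attack
eps = 0.05
x_adv = x.copy()
for _ in range(100):
    if predict(x_adv) != predict(x):
        break
    x_adv -= eps * np.sign(w)
```

No single step changes any feature by more than 0.05, yet after a handful of iterations the accumulated perturbation flips the classifier’s output even though the adversarial input still closely resembles the original.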

What makes adversarial AI such a potent threat? In large part, it’s because if an adversary can identify a behavior in a model that its developers are unaware of, they can exploit that behavior. There’s also the risk of “poisoning attacks,” where the training data is manipulated so that the machine learning model itself learns attacker-chosen behavior.

Secure your AI models – time to get started

While AI attack surfaces are only just emerging, business leaders’ security strategies should account for adversarial AI, with an emphasis on engineering resilient modeling structures and strengthening critical models against attempts to introduce adversarial examples. Your most immediate steps include:

Step 1 – Conduct an inventory to determine which business processes leverage AI, and where systems operate as black boxes.

Step 2 – Gather information on the exposure and criticality of each AI model discovered in Step 1 by asking several critical questions, including:
• Does it support business-critical operations?
• How opaque/complex is the decision-making for this process?

Step 3 – Using the information acquired in Step 2, prioritize highly critical and highly exposed models, and create a plan for strengthening those that support critical processes and are at high risk of attack.
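A minimal sketch of Steps 2 and 3, using a hypothetical inventory and a simple criticality-times-exposure risk score (the model names and ratings below are invented for illustration):

```python
# Hypothetical inventory from Step 1: (model name, criticality 1-5, exposure 1-5)
inventory = [
    ("fraud-detection", 5, 4),
    ("photo-tagger", 2, 5),
    ("demand-forecast", 4, 2),
]

def risk_score(criticality, exposure):
    """Illustrative score: models that are both critical AND exposed rank highest."""
    return criticality * exposure

# Step 3: plan to strengthen the highest-risk models first
prioritized = sorted(inventory, key=lambda m: risk_score(m[1], m[2]), reverse=True)
```

Any scoring scheme would do; the point is that hardening effort goes first to models that are both business-critical and exposed to attacker queries.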

Read the full Accenture Labs report for more about protecting your AI attack surface.

Create robust, secure AI

Business leaders need to combine multiple approaches to ensure robust, secure AI. Our research reveals four essential techniques:

Rate limitation
Rate-limiting how quickly an individual can submit inputs to a system increases the effort an attack requires. That added cost is a deterrent to adversarial attackers, who typically need many queries to craft an adversarial example.
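A minimal sketch of the idea, assuming a simple fixed-window counter keyed by caller (the class and limits are hypothetical, not a production design):

```python
from collections import defaultdict

class RateLimiter:
    """Allows each caller at most max_queries requests per window.

    A real deployment would also reset counts as time windows expire;
    this sketch keeps a single window for simplicity.
    """

    def __init__(self, max_queries):
        self.max_queries = max_queries
        self.counts = defaultdict(int)

    def allow(self, caller):
        if self.counts[caller] >= self.max_queries:
            return False  # caller must wait for the next window
        self.counts[caller] += 1
        return True
```

Because iterative attacks require a large volume of queries against the model, even a generous per-caller cap sharply raises the attacker’s cost without affecting legitimate users.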

Input validation
By scrutinizing what’s being put into your AI models, and modifying those inputs before they reach the model, it’s possible to “break” an adversary’s ability to fool it.
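One hedged sketch of such a modification is to clip and coarsely quantize inputs: since adversarial perturbations are often very small, rounding inputs to a coarse grid can erase them (the ranges and quantization levels below are illustrative assumptions):

```python
import numpy as np

def sanitize(x, lo=0.0, hi=1.0, levels=16):
    """Clip out-of-range values, then quantize to a coarse grid so that
    perturbations smaller than one quantization step are rounded away."""
    x = np.clip(x, lo, hi)
    step = (hi - lo) / (levels - 1)
    return np.round((x - lo) / step) * step + lo
```

A perturbation smaller than half a quantization step maps to the same sanitized value as the clean input, so the model never sees the adversary’s carefully optimized changes.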

Robust model structuring
How you structure your machine learning models can provide some natural resistance to adversarial examples.
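One hedged example of such structuring is an ensemble: a majority vote over several differently-weighted models, so a perturbation tuned against one member need not fool the group (the weights below are invented for illustration):

```python
import numpy as np

# Three hypothetical linear scorers with deliberately different weights
members = [np.array([1.0, -1.0]),
           np.array([1.0, 1.0]),
           np.array([0.0, 1.0])]

def member_predict(w, x):
    return int(w @ x > 0)

def ensemble_predict(x):
    # Majority vote: at least two of three members must agree on class 1
    votes = sum(member_predict(w, x) for w in members)
    return int(votes >= 2)
```

An adversarial example that flips one member’s decision leaves the majority vote unchanged, so the attacker must defeat most of the ensemble at once.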

Adversarial training
If enough adversarial examples are included in the data during the training phase, a machine learning algorithm will learn to classify them correctly.
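A hedged sketch of this technique for a toy logistic-regression model: at each epoch, FGSM-style adversarial examples are generated against the current weights and added to the training batch with their correct labels (the dataset, perturbation size, and epoch count are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the true class depends only on the sign of the first feature
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
eps, lr = 0.2, 0.5
for epoch in range(50):
    # FGSM: push each input in the direction that increases its own loss
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # Train on clean AND adversarial copies, keeping the original labels
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    grad_w = X_all.T @ (sigmoid(X_all @ w) - y_all) / len(y_all)
    w -= lr * grad_w
```

Because the model sees perturbed inputs paired with their true labels throughout training, it learns a decision boundary that tolerates the perturbations instead of being fooled by them.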