
Neural Networks: The Security Bombshell

Written by Jeff Adkin | May 26, 2017 5:57:35 PM

A cyber war is raging between governments, corporations, and renegade cyber groups. Artificial Intelligence (AI), in the form of Neural Networks, is being deployed in this fight to a degree that most of us are not aware of. A properly trained Neural Network attacks at blistering speed, and the attackers have the ability to adapt.

The only way to counteract these attacks is to have customized Neural Networks designed to defend you.

Before we discuss how to defend ourselves, we need to understand how Neural Networks are being implemented in attacks. To make this a little easier, we are going to split the Neural Networks into two teams and look at each in turn. So let’s set them up:

  • Red Team will be our attacking Neural Networks that have a single goal of penetrating our environment.
  • Blue Team will be our Neural Networks designed to defend our current environment from internal/external attacks.

Red Team

While most attacking Neural Networks are simplistic, we are going to focus on the worst-case Red Team possible. Neural Networks need to be trained, so we have to make sure this one is trained in a way that lets it attack our environment effectively and adapt quickly. To unpack what Neural Network training involves, let’s look at the abstract strategy board game “Go”.

“Go” is a complex game: its 19x19 board allows roughly 2.082 × 10^170 legal positions, the largest state space of any popular board game. That means there are more legal positions in Go than there are characters in every Wikipedia article combined. The game has been a nightmare for programmers to beat because no machine is fast enough to brute-force that many possibilities and still make an accurate move within our lifetime.

However, last year DeepMind’s AlphaGo learned to play Go using a Neural Network that recognized the patterns of the game. After playing millions of games, AlphaGo was able to play just like a human, including the ability to adapt. It proceeded to beat masters of the game, something no previous program had come close to doing.

This is important because our Red Team only needs a Genetic Algorithm to learn how to hack. A genetic algorithm, in short, is an optimization technique built on the concept of natural selection: it maintains a population of candidate solutions, evaluates the fitness of each, selects the best performers, and then mutates and recombines them into the next generation. This lets it learn to complete a task more effectively, but not more intelligently. What it does give us is material for a Neural Network to learn from and simulate outcomes against. No different than a video game or a game of Go.
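To make that loop concrete, here is a minimal sketch of a generic genetic algorithm in Python. The bit-string encoding and the count-the-ones fitness function are placeholder assumptions; in the attack scenario the fitness score would instead measure how far a candidate action sequence penetrates a simulated environment.

```python
import random

POP_SIZE, GENOME_LEN, GENERATIONS, MUTATION_RATE = 50, 20, 100, 0.02

def fitness(genome):
    # Placeholder objective: count of 1-bits. Swap in a real scoring
    # function (e.g. depth of penetration in a simulator) for the
    # scenario described above.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability (the "mutation" step).
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover: splice two parents into one child.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Start from a random population of candidate solutions.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Reproduction: refill the population with mutated offspring.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```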

A Neural Network does not strictly need a Genetic Algorithm: it can also learn by watching hacks take place and then mimicking them rather effectively. This approach has a much longer learning curve, though, because it depends on real-world scenarios, which unfold far more slowly and may not be as accurate.
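As a loose illustration of that watch-and-mimic approach, a small network can be trained to predict the attacker’s next action from recorded snapshots of the target’s state. This is only a sketch with synthetic data; the features, action labels, and network size are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for observed intrusions: each row is a snapshot of
# the target's state during a recorded hack; the label is the action the
# attacker took next (0=scan, 1=exploit, 2=escalate, 3=exfiltrate).
states = rng.normal(size=(1000, 8))
actions = (states[:, 0] > 0).astype(int) * 2 + (states[:, 1] > 0).astype(int)

# Behaviour cloning: the network learns to reproduce the observed policy.
mimic = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
mimic.fit(states, actions)

# Given a fresh state, it now proposes the move the attacker would make.
print(mimic.predict(rng.normal(size=(1, 8))))
```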

Let’s say the Neural Network learned how to hack from a Genetic Algorithm and was able to test itself against a virtual environment millions of times. Given the current speed of GPGPUs and TPUs, it could take only a single day for it to gain more experience than an expert hacker.

Think about that for just a minute. It can practice a million times in a day, or in mere hours if you have $129,000 for a P100 rig. Even if a Neural Network took three tries to learn what a human could learn in one, it can still iterate at a rate we could never match. Now what happens if all that power is focused on your network?
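The arithmetic is easy to check. Both throughput numbers below are assumptions for illustration, not benchmarks:

```python
# Back-of-the-envelope: simulated attack episodes per day.
episodes_per_second = 12            # assumed throughput of one simulator
accelerators = 1                    # scale this up for a multi-GPU rig
seconds_per_day = 24 * 60 * 60

episodes_per_day = episodes_per_second * accelerators * seconds_per_day
print(f"{episodes_per_day:,} episodes/day")  # 1,036,800 with these numbers
```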

So here is our Red Team. It knows an immense number of patterns for penetrating environments, adapts quickly, and has rehearsed millions of times. The Red Team is ready to make our lives completely miserable, and locking our doors or bolting our windows will not stop this threat.

Blue Team

Now we have the team stepping up to our defense, all dressed in blue. Our goal here is to create a system that will protect our assets and ensure we can keep working safe from threats. The question now is: how can the Blue Team become strong enough to defend against something as dangerous and focused as the Red Team?

Now that we know how the Red Team is formed, we can create a Blue Team using the same concepts, but we need to use everything at our disposal to make it even better than the Red Team. To do this we look to our main strength: knowledge of the environment. We know everything that exists: communication streams, data sets, usage, adoption, current OS versions, and patch levels. We have the one thing the Red Team does not: we know our environment, and that information is available to us. We are not storming the castle in the dark, working blind on best guesses; we know every nook and cranny of our battlefield.

When two Neural Networks fight, the deciding factor is not always the algorithms used; it is speed, power, and, most importantly, the knowledge base. Speed and power we can match fairly well, but domain knowledge is where we take a huge leap over our competitor. The Red Team cannot besiege us until our resources dry up; it has no choice but to attack directly or through stealth.

Armed with all of this knowledge, our Blue Team is created in nearly the same way as the Red Team. We build a Genetic Algorithm for defense based on the client’s environment and use it to teach the Blue Team’s Neural Network how our systems function and how we currently defend them. The Blue Team can now defend the castle walls just as well as the Red Team can attack them. But what if we wanted to make sure the Blue Team wins?

Guarding the walls is one thing, but what if the Blue Team learned our entire environment through one Genetic Algorithm? Would it not then be able to function as an immune system that roams through our environment? It could stop acting as a perimeter defender and start acting as an automated immune system: using its knowledge of our environment, along with the necessary speed and power, to find and quarantine any outside penetration, no matter how good that penetration was.
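Here is a minimal sketch of that immune-system idea, with scikit-learn’s IsolationForest standing in for a full Neural Network and synthetic numbers standing in for real telemetry. The point it illustrates is the domain-knowledge advantage: the defender trains only on what its own healthy environment looks like.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline telemetry from our own, known-good environment (synthetic here):
# e.g. bytes/sec, connection counts, process spawns per host.
normal_activity = rng.normal(0.0, 1.0, size=(2000, 6))

# Train only on what "healthy" looks like: the domain-knowledge advantage.
immune_system = IsolationForest(contamination=0.01, random_state=0)
immune_system.fit(normal_activity)

# Roam over fresh observations and quarantine anything foreign.
fresh = np.vstack([rng.normal(0.0, 1.0, size=(5, 6)),   # normal hosts
                   rng.normal(6.0, 1.0, size=(1, 6))])  # intruder-like outlier
for host_id, verdict in enumerate(immune_system.predict(fresh)):
    if verdict == -1:  # -1 means "anomaly" in scikit-learn's convention
        print(f"quarantine host {host_id}")  # hook real isolation logic here
```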

Conclusion

Consider the Blue Team and the Red Team playing Tic-Tac-Toe. On equal footing, each is capable of forcing at least a draw every time. For the Blue Team a draw is enough to say the attack was blocked, while for the Red Team a draw is equivalent to a loss. So Blue accomplishes its goal with either a tie or a win: two of the game’s three possible outcomes count in Blue’s favor. And if we give the Blue Team extra knowledge of the game and an extra turn, we push Blue’s success rate to 100%.
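That claim, that perfect play from both sides always ends in a draw, is small enough to verify by brute force. A minimal minimax search over the full game tree, with Red as X moving first and Blue as O:

```python
# Exhaustive minimax over Tic-Tac-Toe: +1 if X (Red) can force a win,
# -1 if O (Blue) can force a win, 0 if best play from both sides draws.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
         (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(board):          # board full, no winner: a draw
        return 0
    scores = []
    for i in range(9):
        if not board[i]:
            board[i] = player
            scores.append(minimax(board, 'O' if player == 'X' else 'X'))
            board[i] = None
    # X maximizes the outcome, O minimizes it.
    return max(scores) if player == 'X' else min(scores)

value = minimax([None] * 9, 'X')
print({1: "Red wins", 0: "draw", -1: "Blue wins"}[value])  # prints "draw"
```

Since Blue counts both the draw and the win outcomes as success, Blue’s job is structurally easier than Red’s, even before any extra advantage is added.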

We have come a long way since Song Lu proposed, back in 2008, how to create an automatic Red Team based on decision trees [1]. With the advances in AI we have now, teams like Endgame work on creating Red Teams for penetration tests, as proposed in their 2016 paper “Deep Adversarial Architectures for Detecting (and Generating!) Maliciousness” [2]. Attackers now have the ability to shift from path-based learning to associative (pattern) learning models.

The hard part here is building a discrete Neural Network customized to learn your environment: not one like yours, but yours exactly. I shouldn’t say it’s hard to build, as it is actually fairly easy, but there are growing pains as it learns more and more, until it becomes an expert that has done the job millions upon millions of times. It is the same journey it takes to onboard a new employee and bring them up to speed.

The fun part is seeing how long it takes the Blue Team to figure out that the best defense could be a good offense, as it learns more and more from the Red Teams of the world.

At Online we are excited to be part of the AI community, working toward these capabilities through internal research and development as well as with incredible partners. If you would like to learn more about AI, Neural Networks, Genetic Algorithms, or our Risk, Security and Privacy practice, please leave a comment below or visit our website.

 

[1] Lu, Song. “Auto Red Team: A Network Attack Automation Framework Based on Decision Tree” (2008). Paper 10879.

[2] Anderson, Woodbridge, Filar. “Deep Adversarial Architectures for Detecting (and Generating!) Maliciousness” (2016).