
The Cybersecurity Challenge for Deep Learning Systems

by Sanjeev Kapoor 06 Apr 2020

Nowadays, many enterprises put the adoption of Artificial Intelligence (AI) at the very top of their strategic agendas. There is a surge of interest in AI's capabilities, as it can automate activities and enable intelligent, data-driven business processes. Deep learning systems and technologies represent a prominent segment of AI and have proven their ability to deliver exceptional intelligence. As a prominent example, Google DeepMind's renowned AlphaGo platform, which beat a world-champion player at the game of Go, is largely based on deep learning. Emerging applications such as autonomous driving and cognitive unmanned aerial vehicles also comprise many deep learning components.

Deep learning differs from traditional machine learning in some key ways. It can work with less structured data and exhibits much better performance when trained on very large datasets. That is probably the number one factor driving the popularity of deep learning and deep neural networks: their performance reaches a plateau much later than that of traditional machine learning models, i.e., they can leverage huge datasets to keep improving. In an era where new data are produced at an exponential pace, this is a very compelling advantage. Moreover, the mechanics of deep learning are very interesting, as it uses algorithms and mathematics that to some extent mimic the human thought process. With the exponential growth of storage and computational capabilities, such systems are gradually coming to operate more like the human brain than any other artificial system.

Nevertheless, the growth of deep learning does not only bring exciting opportunities for intelligent applications; it also introduces new challenges. Some of these challenges lie in the area of cybersecurity, as it is possible for adversaries to attack deep neural networks. Such attacks can take place both at training time and at execution time. Two of the most prominent types of attacks against deep learning systems are evasion and poisoning attacks. Poisoning attacks occur while the neural network is being trained, whereas evasion attacks concern its execution phase. Both can have a catastrophic effect on the applications where deep neural networks are deployed.


Understanding and Confronting Evasion Attacks

Evasion is the most common type of attack against deep learning systems in production. Evasion attacks feed a deep neural network adversarial inputs that the system cannot identify correctly. For example, deep learning is used by autonomous cars to identify the driving context: they comprise deep neural networks that recognize traffic lights and road signs, such as “Stop” and speed-limit signs. In this context, an evasion attack may involve the creation of a modified Stop sign that the deep neural network no longer classifies as such. To launch evasion attacks, attackers have to craft such modified inputs, commonly referred to as “adversarial examples”. In many cases, humans can correctly identify and classify such examples; there are, however, cases where humans can be fooled by adversarial patterns as well.
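To make the mechanics concrete, the sketch below crafts an adversarial example with the well-known Fast Gradient Sign Method (FGSM). It is a minimal illustration assuming a differentiable PyTorch image classifier; the model, input tensor and perturbation budget epsilon are placeholders, not a specific attack used in practice.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, label, epsilon=0.03):
    """Craft an adversarial input with the Fast Gradient Sign Method (FGSM).

    model   -- any differentiable PyTorch classifier (placeholder)
    x       -- batched input tensor, e.g. images normalized to [0, 1]
    label   -- tensor of true class indices for x
    epsilon -- maximum per-pixel perturbation (illustrative value)
    """
    x_adv = x.clone().detach().requires_grad_(True)

    # Loss of the model with respect to the *true* labels.
    loss = F.cross_entropy(model(x_adv), label)

    # Gradient of the loss with respect to the input pixels.
    loss.backward()

    # Nudge every pixel in the direction that increases the loss,
    # then clip back to a valid image range.
    perturbation = epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv + perturbation, 0.0, 1.0).detach()
```

To a human observer the perturbed image is virtually indistinguishable from the original, yet it can flip the classifier's prediction, which is exactly the road-sign scenario described above.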

The concept of evasion attacks is not new: the research literature contains hundreds of evasion attack examples and many relevant cyber-protection solutions. However, during the last couple of years there has been a rapid increase in research on describing and confronting evasion attacks. The proliferation of such attacks is mainly due to the ever-increasing number of deep learning techniques and their deployments in real-life use cases.

There are various techniques for confronting evasion attacks and for building relevant cyber-defense systems. Two of the most popular techniques include:

  • Employment of formal methods for testing and verification: Formal methods rely on rigorous mathematical representations of the operation of deep neural networks in order to test them exhaustively against many different inputs. This exhaustive testing helps ensure that the neural network operates appropriately at all times. Any issues spotted during formal testing can be taken into account in the risk assessment of the deep learning system, and corresponding mitigation actions can be defined.
  • Adversarial learning: To ensure that the neural network remains robust when presented with adversarial examples, machine learning experts try to anticipate possible types of adversarial inputs. They then use the identified examples to train the neural network, so that it becomes able to classify adversarial examples correctly as well (a minimal training-loop sketch follows this list).
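As a rough illustration of adversarial learning, the sketch below mixes FGSM-perturbed copies of each batch into an otherwise ordinary PyTorch training loop, reusing the fgsm_adversarial_example helper from the earlier sketch. The model, data loader, optimizer and 50/50 loss weighting are assumptions made for illustration only.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of adversarial training on both clean and perturbed inputs."""
    model.train()
    for x, y in loader:
        # Craft adversarial versions of the current batch (FGSM, as above).
        x_adv = fgsm_adversarial_example(model, x, y, epsilon)

        optimizer.zero_grad()
        # Average the loss over clean and adversarial inputs so the network
        # learns to classify both correctly.
        loss = 0.5 * F.cross_entropy(model(x), y) \
             + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```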

 

Understanding and Confronting Poisoning Attacks

In several cases, malicious parties attack neural networks and other machine learning systems while they are being trained. Such attacks can be very effective given that many machine learning systems are (re)trained very frequently. For instance, a significant number of machine learning systems are retrained whenever new data become available, which is very common in applications like retail, finance and social media marketing.

Poisoning is one of the most common attacks during the training phase. It refers to the contamination of the data used to train a classifier, with the aim of compromising its ability to operate correctly. A historically well-known type of poisoning attack targeted spam e-mail classifiers, which were fed with wrong classification data in order to render their operation unreliable. The contamination of a neural network can take place in different ways, including:

  • Label modification: altering the labels of the training dataset during the supervised learning process. Labels can be altered for any data points within the dataset (a toy sketch of this case follows the list).
  • Data injection: augmenting the training dataset with additional data points that implement the contamination. In practice, this entails injecting adversarial examples into the training dataset, which alters the learning and operation of the neural network.
  • Data modification: modifying the training data before they are used to train the network. The difference from the previous cases is that the adversary replaces existing data with contaminated data before training even begins.
  • Logic contamination: completely disrupting the logic of the neural network by contaminating the algorithm itself. This is increasingly becoming possible as companies reuse machine learning models available on the internet, which makes it easier for adversaries to tamper with them.
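As a toy illustration of the label-modification case above, the snippet below flips a small fraction of labels in a binary training set. The 5% poisoning rate and the binary labels are purely illustrative assumptions.

```python
import numpy as np

def flip_labels(labels, poison_rate=0.05, seed=0):
    """Simulate a label-modification poisoning attack on binary (0/1) labels.

    labels      -- 1-D NumPy array of 0/1 class labels
    poison_rate -- fraction of labels the adversary flips (illustrative)
    """
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    n_flip = int(poison_rate * len(labels))

    # Pick a small random subset of training points and invert their labels.
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned, idx
```

A contamination of this kind is hard to notice by inspecting the dataset, which is precisely what makes poisoning attacks so insidious.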

There are also solutions for defending deep learning systems against poisoning attacks. The key is to detect that the deep learning system has been contaminated. To this end, techniques that explain the operation of a deep neural network can be employed. By explaining the operation of a network, it is possible to detect whether it has been trained to behave abnormally (e.g., based on unusual or strange rules). Accordingly, adversarial training data can be identified and removed.
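Explanation-based defenses are an active research area. As one simple, hedged illustration of the underlying idea, the sketch below flags training examples whose loss under the trained model is unusually high; this is only a crude proxy for spotting data that do not fit the rules the network has learned, and the z-score threshold is an assumed value.

```python
import torch
import torch.nn.functional as F

def flag_suspicious_examples(model, dataset, z_threshold=3.0):
    """Flag training points whose loss is an outlier under the trained model.

    A high per-example loss is only a rough indicator of contamination, but it
    illustrates the idea of inspecting model behaviour to find poisoned data.
    """
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in dataset:
            target = torch.as_tensor(y).reshape(1)
            logits = model(x.unsqueeze(0))
            losses.append(F.cross_entropy(logits, target).item())

    losses = torch.tensor(losses)
    # Standardize the per-example losses and flag points far above the mean.
    z_scores = (losses - losses.mean()) / losses.std()
    return [i for i, z in enumerate(z_scores) if z > z_threshold]
```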

 

Overall, AI systems come with many exciting application opportunities, yet they also introduce new challenges. Cybersecurity threats such as poisoning and evasion attacks are among them. Fortunately, there are tools and techniques for confronting these attacks. The challenge for AI system deployers and security experts is therefore to plan for addressing them through the deployment of relevant cybersecurity measures.
