Whether in autonomous vehicles or smart chatbots, cyber attackers can trick artificially intelligent software models. “Hackers can manipulate neural networks and lead them astray,” says Nurullah Demir, an expert in cybersecurity and AI. Here is how algorithms can be sabotaged and how they can be protected.
By Nils Klute, Specialist IT Editor and IoT Project Manager at eco – Association of the Internet Industry
The future belongs to artificial intelligence (AI). Processes can be optimized, supported, and automated. According to a 2019 study by the eco Association, AI could unlock a total potential of around 488 billion euros in German industry by 2025. This represents great opportunities, but also risks: More than eight out of ten Germans (85 percent) are concerned about the safety of products, applications, and services based on AI, according to a recent German-language survey by the German Association of Technical Inspection Agencies (VdTÜV).
Deceive AI algorithms and gradually retrain them
A 2017 study by the University of Michigan, for example, shows why these concerns are justified. Researchers succeeded in optically deceiving image recognition algorithms such as those used by autonomous vehicles. In so-called adversarial attacks, attackers retrain AI systems in a targeted manner. The result: Instead of a stop sign, the AI identifies, for example, a speed limit sign. “Hackers can manipulate neural networks and lead them astray,” says Nurullah Demir, an expert in cybersecurity and AI at the Institute for Internet Security, if(is).
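Such deceptions typically rest on small, targeted changes to the input data. The following minimal sketch illustrates the idea with the widely known Fast Gradient Sign Method in PyTorch; it assumes a differentiable image classifier and is meant purely as an illustration, not as the exact technique used in the Michigan study.

```python
import torch
import torch.nn as nn

def fgsm_example(model, image, label, epsilon=0.03):
    """Craft a white-box adversarial example with the Fast Gradient Sign Method.

    Assumes `model` is a differentiable image classifier (e.g. for traffic
    signs), `image` is a tensor of shape (1, C, H, W) with values in [0, 1],
    and `label` is the true class index as a tensor of shape (1,).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Fed back into the classifier, the perturbed image can receive a different label even though it looks practically unchanged to a human observer.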
How the attackers do it: “The attacks can be divided into white-box and black-box attacks,” says Demir. In a white-box attack, the hackers have access to all the data processed by an AI and know the neural network, its structure, and its weak points. Demir: “Many AI software models are open source. Anyone can see them.” Attackers can exploit this knowledge. Take the traffic sign recognition example above: The researchers manipulate the input image data, observe the effects, and retrain the AI in small steps. “What looks to the driver like a stop sign with harmless stickers on it is misinterpreted by the misguided algorithm as a green traffic light or a pedestrian,” says Demir. However: “Even if attackers have no access to AI algorithms, attacks are still possible.” In so-called black-box attacks, for example, the hackers evaluate only the outputs of an AI system (optimized brute-force method). To do this, they run hundreds of thousands of queries to find a generic example that can be used to manipulate the algorithm, as the IBM Research team has also shown.
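For the black-box case, no gradients or model internals are needed. The deliberately naive sketch below only assumes access to a hypothetical `query_model` function that returns the predicted class, as an attacker querying a deployed service would see it, and tries random perturbations until the prediction changes.

```python
import numpy as np

def black_box_search(query_model, image, true_label, budget=100_000, epsilon=0.05):
    """Illustrative black-box attack: probe the model's output, never its internals.

    `query_model(x)` is an assumed helper that returns the predicted class for
    an input array; `image` holds pixel values in [0, 1]. Random perturbations
    within an epsilon ball are tried until one flips the prediction or the
    query budget is exhausted.
    """
    rng = np.random.default_rng(seed=0)
    for _ in range(budget):
        noise = rng.uniform(-epsilon, epsilon, size=image.shape)
        candidate = np.clip(image + noise, 0.0, 1.0)
        if query_model(candidate) != true_label:
            return candidate  # adversarial example found
    return None  # nothing found within the query budget
```

Real black-box attacks are far more query-efficient, but the principle is the same: the attacker learns only from the answers the system gives.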
Motives and goals of AI attackers
Attacks like these require a high level of technical effort and expertise. Potential attackers therefore look for particularly lucrative targets. “But the goals the criminals are pursuing, like the attacks themselves, cannot be generalized,” says Demir. Not only can autonomous vehicles be sabotaged, but also voice assistants, for example: Experiments show that an AI can mistake music for speech and then execute commands.
How to protect AI systems: “There is no such thing as 100 percent security,” says Demir. “The attackers thus take advantage of weaknesses that AI systems have by nature, so to speak.” For example, neural networks are not transparent; no one understands exactly what is going on inside them. As a result, algorithms are not able to justify their decisions. In addition, the models are continuously developed further in a self-learning, self-organizing, and self-optimizing manner; unlike conventional computer programs, there are no final software versions.
Protect and secure AI, validate and check data sources
“One of the best protection options therefore lies in the actual data that an AI processes,” says Demir. “Users should check sources and data to make sure nobody has tampered with them.” For example, when companies share data, users should carefully validate the source. In this way, companies ensure that their AI only processes data that it is supposed to process. “In order to increase the resilience of the algorithms, users can integrate potential attacks into their own data set and train on them as well,” says Demir. An algorithm can be hardened by feeding it known hostile inputs (so-called adversarial examples). Differential privacy is also a good way to secure the exchange of information: With this mathematical method, noise can be deliberately added to data without the data losing its statistical significance.
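Differential privacy can be illustrated with the classic Laplace mechanism. The sketch below is a minimal example under assumptions chosen purely for illustration (numeric sensor readings with an assumed value range); it adds calibrated noise to a statistic before the data leaves the company.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon):
    """Add Laplace noise so a shared statistic stays useful but private.

    `sensitivity` is how much a single record can change the statistic;
    `epsilon` is the privacy budget (smaller values mean stronger privacy).
    Both are assumptions the data owner has to determine for their own data.
    """
    scale = sensitivity / epsilon
    return value + np.random.laplace(loc=0.0, scale=scale)

# Example: share the average machine temperature with a partner company.
readings = np.array([71.2, 69.8, 73.5, 70.1])
true_mean = readings.mean()
# Sensitivity of a bounded mean is (range of possible values) / n;
# here we assume readings always lie between 60 and 80 degrees.
noisy_mean = laplace_mechanism(true_mean, sensitivity=(80 - 60) / len(readings), epsilon=0.5)
```

The noisy value can be exchanged with partners; individual readings stay obscured while the aggregate statistics remain meaningful.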
One thing is certain: “Attacks on AI are a young and current topic in research,” says Demir. “Attacks are difficult to observe and possible harm is hard to predict.” For users, it is not just a matter of knowing the danger: “You need to get on board and secure your systems.”
The if(is) was founded in 2005 at the Westfälische Hochschule in Gelsenkirchen by Prof. Norbert Pohlmann to create innovations in the field of application-oriented Internet security research. Since the start of Service-Meister, the institute has supported the work in the AI project as a consortium partner.
Did you like this article? Then subscribe to our newsletter to receive regular updates on similar topics and on the Service-Meister project, and discuss this and other exciting topics with us in our LinkedIn Group.