
The Data Daily

Exploring the Security Vulnerabilities of Neural Networks

Neural networks have remarkable disruptive potential, but this makes their vulnerabilities more concerning. As helpful as these machine learning models can be, they could cause considerable damage if hackers infiltrate them. It’s important to recognize these risks to develop and implement neural networks safely.

As neural networks appear in more areas of work and life, they’ll be a more valuable target for cybercriminals. If data scientists don’t account for that, these networks may cause more harm than good for the businesses that use them. With that in mind, here’s a closer look at neural network vulnerabilities.

The most prevalent type of attack against machine learning models is evasion. In these scenarios, cybercriminals make small modifications to an input to fool a neural network into producing an incorrect output.

Attacks against image classification algorithms are some of the most straightforward examples. Cybercriminals can change a few pixels to add subtle noise to an image. While human eyes may be unable to tell the modified image from the original, the altered pixels can shift a deep learning model’s prediction. These small changes can obscure key identifying features, lowering the model’s accuracy.
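To make this concrete, here is a minimal sketch of a gradient-based evasion attack in PyTorch (a fast-gradient-sign-style perturbation). It assumes a trained image classifier `model`, a batched input tensor `image`, and its correct `label`; those names are placeholders for this example, not anything from a specific system mentioned above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged toward a wrong prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss;
    # the change is nearly invisible to humans but can flip the model's output.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Even with a small perturbation budget like `epsilon=0.01`, an undefended classifier will often mislabel the returned image.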

Some of the most common real-world evasion attacks alter a message’s wording to slip past spam filters. In other use cases, however, the damage could be far greater: if attackers fool an autonomous driving algorithm, a car could fail to recognize pedestrians or other vehicles, leading to collisions.

Neural networks can also be vulnerable during the training phase. Data poisoning attacks insert false, misleading or otherwise unfavorable information into a model’s training datasets. As a result, the attackers hinder the model’s accuracy or cause it to perform undesirable actions.

Google’s VirusTotal anti-malware service was an unfortunate victim of data poisoning in 2015, when attackers poisoned it into labeling benign data as viruses. In other instances, users have trained neural networks to use inappropriate language or to classify people according to harmful stereotypes.

Since neural networks require such vast training datasets, it’s easy for poisoned data to slip in unnoticed. Depending on the model’s architecture, it may not take much malicious data to poison a neural network, either. These models can home in on factors that seem insignificant to humans, so even small changes can have considerable consequences.
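As a simplified illustration of how little effort poisoning can take, the sketch below flips the labels on a small fraction of a training set. The `dataset` of `(features, label)` pairs, the class count, and the flip fraction are assumptions made for this example.

```python
import random

def poison_labels(dataset, flip_fraction=0.03, num_classes=10, seed=0):
    """Return a copy of the dataset with a small fraction of labels flipped."""
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if rng.random() < flip_fraction:
            # Replace the true label with a different, randomly chosen class.
            wrong = rng.choice([c for c in range(num_classes) if c != label])
            poisoned.append((features, wrong))
        else:
            poisoned.append((features, label))
    return poisoned
```

Flipping even a few percent of labels this way can measurably degrade the accuracy of a model trained on the result, and the corrupted examples are hard to spot by inspection.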

Another alarming vulnerability in neural networks is the possibility of exposing sensitive data. If a model is trained on personally identifiable information (PII), cybercriminals could exploit it to reveal those details.

A 2020 report from Google found that attackers could extract PII from the popular GPT-2 language model. By feeding it specific words and phrases, an attacker can prompt the model to fill in the blanks with PII it memorized during training. Model stealing, where attackers probe a neural network to learn how it works and reconstruct it, can also reveal the information a model was trained on.
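The prompting technique described above can be sketched with the publicly available GPT-2 model in Hugging Face Transformers: the attacker supplies a leading phrase and samples many completions, hoping memorized training text reappears verbatim. The prompt below is purely hypothetical.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Contact John Doe at"  # hypothetical leading phrase
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample several continuations; memorized sequences tend to reappear verbatim.
outputs = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,
    top_k=40,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```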

Some neural networks are vulnerable to physical attacks, too. Researchers in 2017 discovered they could fool self-driving cars by placing stickers on stop signs. The stickers prevented the driving algorithms from recognizing the signs, which would cause a vehicle to run through them.

As neural networks become more prominent in the real world, securing these vulnerabilities becomes more important. Thankfully, data scientists can take several steps to ensure their models remain safe.

First, data scientists must secure their data storage solutions to prevent poisoning. Private clouds are typically more secure than public clouds, but in either case, teams should enable at-rest and in-transit encryption and multi-factor authentication. It’s also important to restrict access as much as possible: the only people who should have access to this data are those who need it for their work.
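As one example of the at-rest encryption step, here is a minimal sketch using the Python `cryptography` package’s Fernet scheme. The file names are hypothetical, and in practice the key would be held in a secrets manager with tightly restricted access rather than alongside the data.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store this in a secrets manager
cipher = Fernet(key)

# Encrypt the raw dataset before it sits in shared storage.
with open("training_data.csv", "rb") as f:       # hypothetical dataset file
    ciphertext = cipher.encrypt(f.read())

with open("training_data.csv.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt only when the data is actually needed for training.
plaintext = cipher.decrypt(ciphertext)
```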

Data scientists should only download datasets from trusted, reliable sources. Given the threat of data extraction, it’s also important to avoid using any potentially sensitive information in training. Teams can use off-the-shelf solutions that scrub datasets of PII before feeding training data to neural networks to help prevent leakage.
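The sketch below shows where such a scrubbing step fits in the pipeline, using deliberately simple regular expressions as a stand-in for a dedicated de-identification tool; the patterns and the sample record are illustrative only and would miss many real-world PII formats.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub_pii(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

records = ["Reach me at jane@example.com or 555-867-5309."]  # example input
clean_records = [scrub_pii(r) for r in records]
print(clean_records)  # ['Reach me at [EMAIL] or [PHONE].']
```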

Google and similar companies have developed training frameworks to help strengthen models against evasion attacks. Using these tools can help teams ensure that their neural networks have built-in protection against adversarial attacks.
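The core idea behind these frameworks is adversarial training: adversarial examples are generated on the fly and folded into every training step. The PyTorch sketch below illustrates that idea in hand-rolled form rather than any particular framework’s API; `model`, `optimizer`, and the batch tensors are assumed to already exist.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # Craft adversarial copies of this batch with a single gradient-sign step.
    images_adv = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images_adv), labels).backward()
    images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

    # Train on the clean and adversarial inputs together so the model
    # learns to classify both correctly.
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(images_adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Calling this function once per batch inside an ordinary training loop is enough to give the model some built-in resistance to the kind of evasion attack described earlier.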

If neural networks aren’t secure, their potential for damage could rival their potential for good. Consequently, cybersecurity should be a central consideration in any machine learning development process.

Education is the first step. When data scientists understand the vulnerabilities they face and know how to minimize them, they can make safer, more effective neural networks.
