Mitigating the risks of artificial intelligence compromise

The number of cyberattacks directed at artificial intelligence (AI) continues to increase, and hackers are no longer simply planting malicious code – their techniques have become increasingly complex, allowing them to tamper with systems to compromise and “weaponize” AI against the organizations leveraging it for their operations.

Take commercial farms, for example: hackers are now able to cause considerable damage to operations and livestock, whether by tampering with datasets or by shutting down the sprayers, harvesters, and drones responsible for the well-being of crops and produce. For SMEs and other organizations, this can lead to significant financial and reputational damage should sensitive data be stolen or a system that authenticates and validates users be corrupted – and there is no quick fix once a breach has taken place.

To successfully mitigate these threats, security professionals must not only look at the surface-level aspects of an AI system but also dive deeper into its data sets and review how best these can be secured. By looking at the four interacting elements of machine learning (ML), one can evaluate how each can be affected by a cyberattack and put roadblocks in place to protect it.

There are four typical elements to consider when it comes to ML.

The first is data sets: the data provided to a device or machine so it can function, review, and make decisions based on the information received. Data in this instance is not simply code – it can be any processed fact, value, image, sound, or piece of text that is then interpreted and analyzed by the AI. It is therefore vital that the data provided to the machine during this process is made up of meaningful, accurate information.

The next element to consider is algorithms: the mathematical or logical procedures that process the data so it can be fed into a model. To secure a system, any algorithm deployed must be tailored to the specific problem being solved and aligned with the model and the nature of the data provided.

This leads into the next key element: models, i.e., the computational representations of real-world processes. These are trained by IT professionals to make predictions that mirror real life, and the data incorporated into a model is expected to increase the accuracy of the process going forward. To stop this process from being corrupted, it is essential that the model is provided with trusted data, to avoid any deviation in the ML model's predictions.

Last but not least, training allows ML models to identify the patterns that enable them to make decisions. The training applied to a model must come from a trusted source to ensure that any supervised, unsupervised, or reinforcement learning does not corrupt the model and make it deviate from accurate feature extraction and predictions.

The fundamental actions required from any security approach are to protect against, detect, attest, and recover from any modifications to code, whether malicious or otherwise. The best way to secure AI against compromise is to apply a “trusted computing” model that covers all four AI elements.

Starting with the data set aspect of a system, a component such as a Trusted Platform Module (TPM) is able to sign and verify that any data provided to the machine comes from a reliable source.
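
As a rough sketch of that verification step, the example below (Python) checks a detached signature over a dataset file before the data is handed to an ML pipeline. The file names are hypothetical, the key is assumed to be Ed25519, and the TPM's actual role – protecting and attesting the verification key – is assumed rather than shown.

```python
# Minimal sketch: verify a detached Ed25519 signature over a dataset file
# before it reaches the ML pipeline. In a real deployment the signing key
# would be protected and attested by a TPM; here the public key is simply
# loaded from disk for illustration.
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key


def load_verified_dataset(data_path: str, sig_path: str, pubkey_path: str) -> bytes:
    """Return the dataset bytes only if their signature checks out."""
    public_key = load_pem_public_key(Path(pubkey_path).read_bytes())  # Ed25519 assumed
    data = Path(data_path).read_bytes()
    signature = Path(sig_path).read_bytes()
    try:
        public_key.verify(signature, data)  # raises InvalidSignature on mismatch
    except InvalidSignature:
        raise RuntimeError(f"Rejecting {data_path}: signature does not match")
    return data


# Hypothetical usage: only signed, untampered data reaches training or inference.
# crop_data = load_verified_dataset("yield_data.csv", "yield_data.csv.sig", "supplier_pub.pem")
```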

A TPM can also safeguard the algorithms used within an AI system: it provides hardened storage for platform or software keys, which can then be used to protect and attest the algorithms.
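
As a loose illustration of how such keys support attestation, the sketch below measures an algorithm artifact and compares its digest against a reference manifest; in practice the manifest would be signed or sealed with TPM-held keys, and every file name here is invented for the example.

```python
# Minimal sketch: "measure" an algorithm artifact (e.g. a training script or
# compiled library) and refuse to run it unless the digest matches a trusted
# manifest. The manifest itself would, in practice, be signed or sealed with
# TPM-protected keys; this example only shows the measurement and comparison.
import hashlib
import json
from pathlib import Path


def measure(path: str) -> str:
    """SHA-256 digest of an artifact, analogous to a platform measurement."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def attest_artifact(path: str, manifest_path: str = "trusted_manifest.json") -> None:
    """Raise if the artifact's measurement is missing from, or differs from, the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"filename": "digest", ...}
    expected = manifest.get(Path(path).name)
    if expected is None or expected != measure(path):
        raise RuntimeError(f"{path} failed attestation; refusing to execute")


# Hypothetical usage before loading the algorithm:
# attest_artifact("train_yield_model.py")
```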

Furthermore, deviations in the model caused by bad or inaccurate data can be prevented by applying trusted computing principles focused on cyber resiliency, network security, sensor attestation, and identity.

Businesses can also ensure the training given to machine learning models is secure by making sure the entities providing it also adhere to trusted computing standards.

To ensure sensors and other connected devices maintain their integrity and provide accurate data, professionals should look to leverage Root of Trust hardware, such as the Device Identifier Composition Engine (DICE).

With DICE, each boot layer within a system receives a unique key, derived from the preceding layer's key combined with a measurement of the current layer. Should the system be exploited by a hacker, the exposed layer's key and measurement will differ from those of an untampered system, mitigating the potential risk by keeping data secured and preventing its disclosure. Root of Trust hardware can even re-key a device should a flaw or tampering be unearthed in its firmware, which allows users to uncover vulnerabilities when carrying out system updates.
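
The layered key derivation described above can be modeled in a few lines. The sketch below illustrates only the chaining principle – real DICE engines implement it in hardware – and the Unique Device Secret, the layer contents, and the use of HMAC-SHA-256 as the one-way derivation function are assumptions made for the example.

```python
# Minimal sketch of DICE-style layered key derivation: each boot layer's
# secret (Compound Device Identifier, CDI) is derived from the previous
# layer's CDI and a measurement (hash) of the current layer's code.
import hashlib
import hmac


def next_cdi(previous_cdi: bytes, layer_code: bytes) -> bytes:
    """Derive the next layer's CDI from the previous CDI and the current layer's measurement."""
    measurement = hashlib.sha256(layer_code).digest()
    return hmac.new(previous_cdi, measurement, hashlib.sha256).digest()


# Unique Device Secret (UDS): a per-device value fused into hardware (illustrative).
uds = b"\x00" * 32

healthy = [b"bootloader v1", b"firmware v1", b"application v1"]
tampered = [b"bootloader v1", b"firmware v1 (tampered)", b"application v1"]

cdi = uds
for layer in healthy:
    cdi = next_cdi(cdi, layer)
print("healthy chain key: ", cdi.hex()[:16])

# Tampering with any layer changes every key derived after it, so the device
# can no longer present the identity it held before - and, conversely, booting
# patched firmware automatically re-keys the device.
cdi = uds
for layer in tampered:
    cdi = next_cdi(cdi, layer)
print("tampered chain key:", cdi.hex()[:16])
```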

A proactive approach is now required from businesses and organizations to mitigate any tampering with AI. Investment in up-to-date technologies, alongside education on how to identify threats and establish a defense, is essential to ensure severe reputational or financial damage does not occur.