The Data Daily

Building Secure ML Starts with Understanding the Landscape: Oracle’s Perspective

With so many businesses integrating machine learning (ML) into their tech stacks, it’s no wonder we’re seeing an increase in both the volume and the variety of cybersecurity threats. If engineers want to deploy machine learning models throughout the enterprise, they must acknowledge these risks and move from a reactive to a proactive security posture.

In their 2022 ODSC East keynote, “Is Your ML Secure? Cybersecurity and Threats in the ML World,” Oracle’s Hari Bhaskar and JR Gauthier discuss the state of ML security in the context of current events. They also offer strategies developers can use to keep their machine learning platforms secure. Let’s explore the state of cybersecurity from an ML platform perspective.

ML is now one of the most popular subsets of artificial intelligence, so organizations can no longer afford to skim over security concerns. Companies must understand the threats they face when undertaking these machine learning initiatives. Where network security was the primary cybersecurity focus in the 90s – when we began living more of our lives online – AI security is going to be the most significant focus of the coming decade.

Unfortunately, the race toward bigger and better AI often leaves cybersecurity overlooked. From 2010 to now, the number of AI security papers has increased many times over, so we know that mainstream research is picking up on the risks. And despite the excitement over AI use cases, the field has experienced some high-profile failures – not always the result of a cyberattack. In one autopilot experiment, a sticker strategically placed on a sign was all it took to trigger catastrophic failure.

Oracle approaches cybersecurity from a comprehensive perspective. According to the organization, trustworthy AI rests on three pillars that companies must address in order to build safely and securely.

All three pillars must be present to develop trustworthy AI that keeps the general public’s support and protects the organization from persistent cybersecurity threats.

By far the most common type of AI attack involves some kind of manipulation. Threat actors can covertly alter a model’s inputs or otherwise change the machine’s expected behavior. According to Adversa, manipulation makes up over 80% of all attacks on AI.
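The keynote itself doesn’t walk through code, but the core idea behind evasion-style manipulation – like the sticker on the sign above – can be sketched in a few lines. The toy weights, input, and epsilon below are our own illustration, not anything from the talk:

```python
import numpy as np

# Toy linear classifier: w.x + b > 0 -> class 1, else class 0.
# Weights and input are illustrative, not from the keynote.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.2, 0.0, 0.3])     # a benign input, classified as 1
assert predict(x) == 1

# Evasion attack (FGSM-style): nudge each feature a small step in the
# direction that pushes the score toward the opposite class. For a
# linear model, the gradient of the score with respect to x is just w.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # 1 0 -- tiny change, flipped label
print(np.max(np.abs(x_adv - x)))    # perturbation is bounded by eps
```

Because no feature moves by more than eps, such attacks can be imperceptible to a human while completely changing the model’s output.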

Other types of attack, notably infection and exfiltration, also contribute to insecure AI programs and risks to the public. In the US, regulation is taking a multifaceted, two-part approach: governments are working with researchers to develop policies grounded in the research areas of security-first AI development.
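As a concrete illustration of the infection category, the sketch below – our own toy example using scikit-learn, not anything demonstrated in the keynote – shows how flipping a modest fraction of training labels quietly degrades the model that gets deployed. The size of the accuracy drop varies with the data and the model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# "Infection": the attacker flips 10% of the training labels
# (label-flipping poisoning) before training runs.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=len(y_tr) // 10, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```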

In Europe, regulation of AI systems is treated as an imperative. These are key topics in policy and economic decisions as trading partners grapple with an evolving understanding of what constitutes responsible AI. Europe also takes an iterative approach: regulations require companies to return to the beginning of the process if anything significant changes during development.

The unknown unknowns are a pressing concern for industries deploying AI. The keynote’s speakers note that security is still not a widespread concern in the field because many organizations are still working on the act of deploying models to production at all.

Oracle identifies four pillars of best practice for building and deploying a security-first AI structure.

Machine learning pipelines are vulnerable at nearly every step, so these four pillars must be applied pervasively throughout building, training, and deployment. Developers must become familiar with the most common forms of attack that can derail machine learning and AI, and Oracle recommends taking a proactive rather than a reactive role in security management. One small example of such a proactive control is sketched below.
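The keynote doesn’t prescribe specific controls, but one simple proactive measure a pipeline might adopt is verifying a model artifact’s checksum before loading it, so a tampered file fails closed. Everything here – the pinned digest placeholder, the function names – is a hypothetical sketch of ours:

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest, recorded at training time and stored
# somewhere the serving environment cannot overwrite.
EXPECTED_SHA256 = "d2c5..."  # placeholder, not a real digest

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large artifacts fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_model_safely(path: Path):
    digest = sha256_of(path)
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"model artifact {path} failed integrity check")
    # ...deserialize the model only after the check passes...
    return path.read_bytes()
```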

No matter what stage of digital transformation organizations are in, regulations are coming down the pipeline that could make machine learning safer for everyday citizens but more difficult for businesses to deploy. Organizations can shift to a proactive approach by acknowledging that the likelihood of an attack is high.

Oracle has taken several approaches to common AI and machine learning threats, such as data poisoning and model stealing. Practitioners can look to the examples in this keynote to learn more about what is available to them in pursuit of security. Following the four pillars puts organizations in a better position to operate trustworthy AI.
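On the model-stealing side, a serving endpoint can make extraction attacks more expensive with cheap controls such as per-client query budgets and coarsened confidence scores. The wrapper below is a hypothetical illustration of ours (assuming any scikit-learn-style model with a predict_proba method), not Oracle’s implementation:

```python
import numpy as np

class HardenedEndpoint:
    """Hypothetical wrapper illustrating two cheap model-stealing
    mitigations: per-client rate limiting and coarsened confidences."""

    def __init__(self, model, max_queries=1000, decimals=1):
        self.model = model
        self.max_queries = max_queries
        self.decimals = decimals
        self.counts = {}

    def predict_proba(self, client_id, X):
        # Enforce a per-client query budget before answering.
        self.counts[client_id] = self.counts.get(client_id, 0) + len(X)
        if self.counts[client_id] > self.max_queries:
            raise PermissionError("query budget exceeded")
        probs = self.model.predict_proba(X)
        # Rounded confidences leak less signal to an extraction attacker.
        return np.round(probs, self.decimals)
```

Rounding probabilities trades a little client utility for far less of the fine-grained signal an attacker needs to reconstruct the model.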

Luckily, open-source tools and techniques can help even smaller organizations manage their security. And as companies gain more control over their ML landscape and deployments, they’ll better manage the security concerns inherent in deployment. Companies need to differentiate themselves, and ML can be an innovative tool for bringing operations into the new industrial revolution… It’s just a matter of understanding the threat landscape and knowing what to do about it.
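One such open-source option – our suggestion, as the keynote doesn’t single out a specific library – is the Adversarial Robustness Toolbox (ART), which can stress-test a trained model against standard attacks. This sketch uses ART’s 1.x interface, which may change between releases:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# pip install adversarial-robustness-toolbox
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the fitted model so ART can compute gradients against it,
# then measure how far accuracy falls under a standard FGSM attack.
classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X)

print("accuracy on clean inputs:      ", model.score(X, y))
print("accuracy on adversarial inputs:", model.score(X_adv, y))
```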
