Open-Source Assessment is Critical for the Future of Responsible AI

Artificial intelligence’s technological sophistication and industry reach continue to evolve rapidly, prompting a movement to ensure that these systems are used responsibly. The broadest goal of Responsible AI (RAI) is to recognize the expansive consequences of AI systems and to adapt the development and management of these systems accordingly. While much attention has been paid to the most important characteristics of responsible AI (e.g., fairness, transparency, accountability), less has been devoted to the principles-to-practices gap. How do you actually bring RAI ideas into development?

Fortunately, although AI systems are complex, they differ from the systems they seek to replace in a crucial way: they are more amenable to assessment. By “assessment” we mean the quantitative analysis of system behavior, via analysis of datasets and models. We see comprehensive assessment as a critical foundation for RAI development. Proper assessment creates a better understanding of AI capabilities, serves as a safeguard when moving systems into production, and provides important signals to inform future research and development. Assessment is the bridge from higher-order RAI principles to the development of AI systems.
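To make this concrete, here is a minimal sketch of what such a quantitative assessment can look like in practice. It is only an illustration: the synthetic data, the “group” attribute, and the choice of metric are assumptions for demonstration, not a prescribed method.

```python
# A minimal sketch of "assessment": quantitative analysis of a trained model's
# behavior on held-out data, both overall and sliced by a (hypothetical) group.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set; "group" is an illustrative
# attribute used only to slice the evaluation.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = np.random.default_rng(0).choice(["A", "B"], size=len(y))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
preds = model.predict(X_te)

# Overall behavior...
print("overall accuracy:", accuracy_score(y_te, preds))

# ...and behavior disaggregated by group, the kind of view that
# fairness-oriented assessment relies on.
for g in ["A", "B"]:
    mask = g_te == g
    print(f"accuracy for group {g}:", accuracy_score(y_te[mask], preds[mask]))
```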

Unfortunately, current practices don’t capture the totality of an AI system’s capabilities and effects, leaving the users and deployers of AI systems partially blind. In some instances, ML practitioners face scientific obstacles: aligning AI systems with our objectives and values is an active area of scientific inquiry with no easy answers.

However, in other cases, the AI community has successfully discovered a range of approaches to support assessing AI systems for fairness, security vulnerabilities, privacy risks, and more. In these situations, the obstacles are not fundamentally scientific, but cultural and technological — it is difficult to know which approaches are appropriate, and how to apply them easily.

So how can we remove these barriers to adoption? Like any change to business practices, the adoption of RAI hinges on both the need for it and the ease with which it can be incorporated into current workflows. The need is certainly increasing, and RAI is rapidly becoming a business imperative.

Governments in the U.S. and Europe are developing a number of AI regulations, which will require organizations that rely on AI to build or responsibly leverage AI governance capabilities. Beyond regulation, most organizations now recognize that bringing oversight and accountability to AI — and doing so at scale — is essential for the burgeoning field.

As the need is recognized, many organizations are developing or deploying a mishmash of tools to assess their ML models for fairness, security, and the host of other aspects that constitute “responsible AI”. However, this fractured and siloed approach results in AI assessments that lack transparency, standardization, and verification. When built in-house, they are often manual and unscalable, and when proprietary, they can be expensive and even more opaque. The result of such a closed ecosystem approach is that it becomes difficult for the community at large to trust the efficacy and accuracy of AI assessments or any governance founded on them.

To ensure there is foundational evidence that AI assessments are rock solid, they must be transparent and auditable. We believe that open-sourcing these assessments is an obvious step toward creating a trusted ecosystem that supports RAI and AI governance more generally. Within an open ecosystem, practices are developed as a community, benefiting from diverse perspectives in academia and industry. Consumers of these assessments have full transparency into how they work, and oversight bodies can align standards with tooling without fear of appearing biased toward a particular corporate interest. Most importantly, open-sourcing assessments lowers barriers to adoption.

One issue with the open-source ecosystem is ease of use. It is often difficult to know which RAI tools exist, let alone how to integrate them into current workflows. This is why we have added our own offering to the open-source ecosystem — Credo AI Lens, an assessment framework that provides a single, standardized point of entry to the broader RAI ecosystem. With Lens, developers have easy access to a curated ecosystem of assessments developed or honed by vendors and the broader open-source community.

We are bringing the best open-source assessment frameworks and techniques into Credo AI Lens, including Microsoft’s Fairlearn toolkit for bias detection and mitigation, the SHAP library for model explainability, and the Adversarial Robustness Toolbox for security and privacy assessment. In addition to creating a single point of entry for existing open-source tools, we are also building new open-source assessment functionality within Lens, including dataset assessments that help data scientists detect fairness issues in their underlying training data, as well as evaluations of language generation models.
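As an illustration of the kind of assessment these tools enable, the sketch below applies Fairlearn’s MetricFrame to disaggregate standard metrics across a sensitive feature. The synthetic data and group labels are assumptions for demonstration, and this shows Fairlearn’s own API rather than the Lens interface.

```python
# A hedged sketch of one such open-source assessment: Fairlearn's metric
# tooling applied to a model's predictions, sliced by a sensitive feature.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

# Synthetic stand-ins for real data and a real sensitive attribute.
X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
sensitive = np.random.default_rng(1).choice(["group_a", "group_b"], size=len(y))

model = LogisticRegression(max_iter=1000).fit(X, y)
y_pred = model.predict(X)

# Disaggregate standard metrics across the sensitive feature.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(frame.by_group)      # per-group metric values
print(frame.difference())  # largest between-group gap per metric

# A single summary statistic often used as a fairness check.
print(demographic_parity_difference(y, y_pred, sensitive_features=sensitive))
```

The role of a framework like Lens is to standardize how results like these are produced and aggregated across the many tools that make up the RAI ecosystem.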

Fundamentally, Lens is about making open-source assessment for Responsible AI easier to access. We hope this framework, and similar offerings, are more frequently used and incorporated into development ecosystems, making RAI practices ubiquitous. We encourage the open-source community to explore Credo AI Lens, suggest additional features, and provide any feedback that will help us build the best framework possible.
