The Data Daily

The importance of AI governance and 5 key principles for its guidance

The advent of artificial intelligence and machine learning has introduced a new set of challenges to the world. Algorithms appear to have a bias problem rooted in the data they are trained on, and big tech isn't doing enough to fix it. This has led to an increased need for general governance policies that protect people and the planet while also ensuring that cultural differences are taken into consideration. So, why is it important to talk about AI governance, and what can be done so AI stays on track with social responsibility as technology advances and changes over time? This article will dive into the answers, establish five key dimensions that organizations need to address to make sure AI governance is safely and fairly established, and examine its limitations.

AI is a major factor in the future of our society, but who decides?

AI is one of the most captivating segments of technology. It not only makes our everyday lives easier and better, but it also drives social transformation within important sectors such as healthcare, finance, and justice. Hence, it is a great starting point for creating change. History shows that governance applied at the right time, in the right manner, can help accelerate progress and ensure positive social outcomes, so the moment to talk about AI governance is now.

By definition, AI governance should close the gap that exists between accountability and ethics in technological advancement. It ensures that boundaries are set within technology so that it does not incidentally cause harm or further aggravate inequalities while it operates.

As seen in our previous research articles, the cost of inaction on AI bias has led to injustices against whole groups of people, racial profiling, and other alarming events. The COVID-19 pandemic saw a worldwide rush of AI adoption in healthcare, intended to let hospitals diagnose and treat patients faster. Limitations in cooperation and incorrect assumptions about the data, such as the unrealistic expectations that encouraged the use of AI systems before they were ready in the midst of the pandemic, have made it clear to many researchers that the way AI tools are built needs to change. But how can we make this change faster? And who decides?

It seems there should be a universal legal framework for ensuring that AI is well researched and developed, which opens up a whole different subset under AI governance's umbrella: AI regulations. We will cover this in an upcoming series on AI regulations. For now, let's get back to AI governance.

There are several examples worldwide of regulatory frameworks for AI governance.

For instance, Singapore's government approach to AI governance can serve as a global reference point. Its Model AI Governance Framework provides detailed guidance to private-sector organizations on addressing key ethical and governance issues when deploying AI solutions. The guidelines advise that AI should be explainable, transparent, and fair, with the main aim of promoting public understanding of and trust in the technologies.

Another case is the EU's General Data Protection Regulation (GDPR), a privacy and security law that applies to organizations anywhere in the world as long as they process data related to people in Europe. On that note, the EU seems to have the strongest approach to AI governance: this year, a draft of new EU regulations on AI misuse was leaked.

Along the same lines, China's Ministry of Science and Technology also released AI governance principles for developing responsible AI, such as fairness and justice, security, and respect for privacy. It also formed a new-generation AI governance expert committee to research policy recommendations within the field and identify areas for international cooperation.

For any AI governing system to work, there needs to be some universal framework that everyone follows. Analyzing the examples above, the basic principles must revolve around Accountability, Fairness, Transparency, Safety, and Robustness. This framework should serve as a guideline for organizations to take action and make their AI responsible and trustworthy.

Accountability means holding organizations, experts, and data scientists responsible for their decisions and actions when designing, developing, operating, or deploying AI systems. For instance, it means having safeguards against discrimination based on people's skin color or gender when creating data sets and training algorithms.

Key aspects that fall under the category of accountability:

- High-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI projects

- The conformance of software products and processes to applicable regulations, standards, guidelines, plans, specifications, and procedures

- Monitoring whether the AI system serves its purpose and goals (see the sketch after this list)

- Involving key stakeholders in the design, building, and maintenance of your AI systems
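
One simple way to operationalize the monitoring point above is to watch whether the distribution of a model's live scores drifts away from the data it was validated on. Below is a minimal, illustrative Python sketch of one common heuristic, the population stability index (PSI); the data, thresholds, and function name are hypothetical assumptions, not part of any framework cited here.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a common drift heuristic.

    PSI < 0.1 is usually read as stable, 0.1-0.25 as moderate drift,
    and > 0.25 as a signal the model may no longer serve its purpose.
    """
    # Bin both samples on the same edges, derived from the reference data.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Small floor avoids division by zero and log of zero in empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical usage: scores logged at validation time vs. in production.
reference_scores = np.random.beta(2, 5, size=10_000)
production_scores = np.random.beta(2, 3, size=10_000)  # distribution has shifted
print(f"PSI: {population_stability_index(reference_scores, production_scores):.3f}")
```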

Whether it concerns how data is collected about people or how it is used by machines, there need to be transparent practices and decisions about both. Transparency also puts forward the importance of reviewing solutions when AI systems suggest or make wrong decisions, and of being able to trace the key factors leading to any specific decision and diagnose the error.
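
To make "tracing the key factors" concrete, here is a minimal sketch assuming a simple linear model and made-up loan-style features; for linear models, coefficient times feature value (plus a shared intercept) decomposes the score exactly, which is one of the most basic ways to make a single decision traceable. More complex models would need dedicated attribution tools.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-approval model; feature names are made up for illustration.
feature_names = ["income", "debt_ratio", "age", "num_late_payments"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(x):
    """Per-feature contribution to the log-odds of a single decision.

    For a linear model, coefficient * feature value (plus the shared
    intercept) decomposes the score exactly, so the factors behind this
    specific decision can be traced and ranked.
    """
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
        print(f"{name:>18}: {c:+.3f}")

explain_decision(X[0])
```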

There are plenty of discussions about the definition of what's fair. However, a universal decision needs to be established about common values and common moral norms, and these need to be translated into AI systems as well. The fairness principle also ensures that any potential bias present in historic data or introduced by humans is considered and mitigated, and that there is diversity in the development team.
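
As one small illustration of checking for such bias, the sketch below computes a demographic parity gap, the difference in positive-outcome rates across groups, on hypothetical data. The column names and the 0.1 review threshold are assumptions for illustration, and the right fairness metric is always context-dependent.

```python
import pandas as pd

# Hypothetical predictions with a protected attribute; column names are assumed.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Demographic parity: compare positive-outcome (approval) rates across groups.
rates = df.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity gap: {gap:.2f}")

# A common (but context-dependent) heuristic flags gaps above ~0.1 for review.
if gap > 0.1:
    print("Potential disparate impact: route for human review and mitigation.")
```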

As EU’s GDPR requires by law, AI systems must include data protection principles, data security, consent and respect for people’s privacy rights. Other two main aspects of this principle are:

-AI systems need to be tested to minimize effects unrelated to their main objectives, especially those that are irreversible or difficult to reverse

-Is there human oversight to your AI system that can interrupt and override its actions when necessary?
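
What human oversight can look like in code is sketched below: a hypothetical confidence-gated dispatcher that abstains and escalates to a person instead of acting autonomously. The names, threshold, and labels are illustrative assumptions, not a prescribed mechanism.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float

def decide_with_oversight(decision: Decision, threshold: float = 0.9) -> str:
    """Route low-confidence decisions to a human instead of letting the
    system act autonomously; the human can interrupt or override."""
    if decision.confidence < threshold:
        # The system abstains and hands control to a person.
        return "escalate_to_human"
    return decision.label

print(decide_with_oversight(Decision("approve", 0.97)))  # -> approve
print(decide_with_oversight(Decision("deny", 0.62)))     # -> escalate_to_human
```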

AI systems are automated, and the design and quality of their algorithms are crucial to their functioning. Failing to ensure this will result in unwanted consequences such as car crashes or mistreatment in healthcare practice. Robustness also means that AI systems behave reliably when tested against different scenarios, such as edge cases, extreme scenarios, or "reward hacking".
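
As a minimal illustration of robustness testing, the sketch below probes a toy scoring function (entirely hypothetical, standing in for a deployed model) with edge cases and extreme inputs, and flags anything that escapes the expected output range instead of failing silently.

```python
import math

def risk_score(income: float, debts: float) -> float:
    """Toy scoring function standing in for a deployed model."""
    if income <= 0:
        raise ValueError("income must be positive")
    return min(1.0, debts / income)

# Edge cases and extreme scenarios the system should survive without
# silent failures: zeros, negatives, tiny values, and infinities.
edge_inputs = [(0.0, 100.0), (-5.0, 10.0), (1e-9, 1e9), (float("inf"), 1.0)]

for income, debts in edge_inputs:
    try:
        score = risk_score(income, debts)
        # The output contract: a finite score in [0, 1].
        assert 0.0 <= score <= 1.0 and math.isfinite(score), (income, debts, score)
        print(f"ok: income={income}, debts={debts} -> {score}")
    except (ValueError, AssertionError) as exc:
        print(f"flagged for review: income={income}, debts={debts} ({exc})")
```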

AI governance is still on the rise, and there is a lot of work to be done for it to serve the right purpose. So far, decision-makers and those who hold power have proposed only guidance on how to deal with AI governance, rather than a universal approach supported by laws or regulations. This is the bare minimum, as it leaves the AI community with frameworks that are theoretical and hard to put into practice. Another limitation is that the calls for a global solution to AI and tech injustices clash with regulatory structures that exist at the national level, again posing a 'riddle' about how we would like to evolve as a humanity. This leads to the other obstacle AI governance faces: the need for multi-sectoral coalitions and joint cooperation, with bold actions and collective efforts, to transform the purpose that AI is serving.

It's time to start thinking about AI governance. The future is going to look very different from today, and we need to be ready for that change. Better yet, responsibility and justice need to be built into the decisions that will shape what our future looks like. This can be done with the right AI governance in place.

Check out KOSA AI's compliance checker, which operationalizes AI governance by involving key stakeholders with concrete action points throughout the ML lifecycle, using the responsible AI framework with 5 main principles: accountability, fairness, transparency, safety, and robustness.
