The 11th of October marks Ada Lovelace Day, a special moment in the annual tech calendar. It’s an international day of recognition that celebrates women in STEM, named after the woman widely recognised as the world’s first computer programmer. So immense was Ada Lovelace’s contribution in a short life — she died of illness in 1852 at age 36 — that her notes provided inspiration for Alan Turing’s work on the first modern computers in the 1940s.
Ada Lovelace Day provides an opportunity to reflect. Our minds have travelled back not as far as the 1800s but to May this year, when we hosted a panel at the Girls in Tech Australia Conference. We explored the important topic of bias in tech, and specifically in artificial intelligence, as an issue faced by all underrepresented groups, not just women.
Today of all days, it seems appropriate to ask: what would someone like Ada think and do about modern-day AI, whose origins can be traced back to her pencil-and-paper computer programmes?
As we can’t presume to know Ada’s view, here are some of the combined thoughts from our May session with some brilliant women in STEM — Arwen Griffioen, Head of AI/ML, Linktree; Fernanda Sassiotto, Head of Engineering, Trading & Market Data at Iress; and Lisa Green, Group Owner, Customer Intelligence at Telstra.
The panel agreed there is huge potential for AI to solve real-world problems. Among the points touched on were AI’s potential to help us address climate change, reduce greenhouse gas emissions and energy consumption, and improve water quality.
Medicine and biology also hold particularly promising AI developments. DeepMind’s AlphaFold model has provided a breakthrough capability to predict the structure of proteins. Closer to home in Australia, annalise.ai’s radiology neural network is helping radiologists and radiographers read X-rays in real time, improving patient outcomes.
One of the most eye-opening moments of the panel came when we shared OpenAI’s DALL-E 2, an AI system that can turn textual descriptions into images. Just write down what you want to see, and the AI draws it for you in vivid, high-resolution detail.
The joy comes from receiving a hi-res picture of a cat dressed as Napoleon holding a piece of cheese; the moment of pause comes when a prompt like ‘CEO’ returns a group of white males, or when prompts like ‘flight attendant’ or ‘Asian females’ return narrowly stereotyped results. This is the result of biases embedded in the roughly 650 million internet images and corresponding captions that DALL-E 2 was trained on. OpenAI identified these biases in its model and has worked to mitigate their prevalence.
Our panel discussion considered how this kind of bias can and does show up in many AI systems: what if such bias influences decisions on home loans, reinforces a social media user’s ‘filter bubble’, or surfaces the wrong information through accidental or intentional misdirection?
These questions are becoming increasingly pressing as the field of AI advances at a rapid pace. Meta AI recently extended this generative AI technique to video with its Make-A-Video system. The AI developments in 2022 alone have been staggering, and we can only imagine what progress is in store over the next few years.
The panel observed that the biggest risk is often not in the algorithm, but rather in the data feeding it.
This means the key for organisations is to have the right people and processes in place to identify biased data before it can have a detrimental impact. The Australian government has worked with business to develop the AI Ethics Principles, which can be a foundation for organisations to use AI in a values-based way.
While there is no single ‘silver bullet’ technique to comprehensively address bias in data, there are steps that practitioners can take. It is critical to remove sensitive attributes (for example, age and gender) from datasets where they should not inform a model’s output, with consideration given to the real-world context of the data and how the model will be used. Data practitioners should ask whether the dataset accurately reflects the reality they want to model, and if it does, whether that past reality is fair or not. And finally, human feedback is a key mechanism to validate and ‘red-team’ model outputs.
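To make this a little more concrete, here is a minimal sketch in Python (using pandas) of what such a first pass over a tabular dataset might look like. The dataset, file name and column names are hypothetical and purely illustrative, not a prescription for any particular system.

```python
import pandas as pd

# Hypothetical loan-applications dataset; file and column names are illustrative only.
df = pd.read_csv("loan_applications.csv")

SENSITIVE = ["age", "gender"]   # attributes that should not inform the model's output
TARGET = "loan_approved"

# 1. Check how well each group is represented in the data.
for col in SENSITIVE:
    print(f"\nRepresentation by {col}:")
    print(df[col].value_counts(normalize=True))

    # 2. Check whether historical outcomes differ across groups, a sign that
    #    the 'past reality' captured in the data may itself be unfair.
    print(f"Approval rate by {col}:")
    print(df.groupby(col)[TARGET].mean())

# 3. Drop sensitive attributes before training, remembering that correlated
#    'proxy' features (for example, postcode) can still leak the same information.
features = df.drop(columns=SENSITIVE + [TARGET])
labels = df[TARGET]
```

Simply dropping sensitive columns is rarely enough on its own; proxy features can still encode the same information, which is why the representation and outcome checks above, and human review of the results, matter just as much.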
There is an aspiration in the AI alignment community that advanced AI systems will in turn help us align model outputs to human intentions and reduce bias in data. But for now, practitioners need to remain personally vigilant and proactive in scrutinising their data and algorithms.
Some organisations already have systems and processes in place to spot and mitigate biases in AI, but doing this at scale needs to be enabled by the right technology. The emerging practice of MLOps is enabling the best organisations to move quickly while maintaining customer trust.
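As a rough illustration of the kind of automation MLOps makes possible, the sketch below shows a hypothetical ‘fairness gate’ that a pipeline could run before promoting a model to production. The metric, threshold and function names are assumptions made for the example, not a reference to any specific tool.

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

def fairness_gate(predictions, groups, max_gap=0.05):
    """Raise an error (failing the pipeline step) if the parity gap is too large."""
    gap = demographic_parity_gap(np.asarray(predictions), np.asarray(groups))
    if gap > max_gap:
        raise ValueError(f"Fairness gate failed: parity gap {gap:.2f} exceeds {max_gap}")
    return gap

# Toy example: group A gets positive predictions 75% of the time, group B 25%.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
grps = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(fairness_gate(preds, grps, max_gap=0.6))   # prints 0.5; would fail with max_gap=0.05
```

In practice a gate like this would sit alongside broader monitoring and human review rather than replace them.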
As addressed above, a critical challenge in any AI system is that it can reflect the biases and unfairness of the culture that produced it. People should be conscientious about the companies they choose to work for, to ensure they have compatible ethical views on the use and application of AI. We discussed how team members should have conversations with colleagues about the data they are creating, or even speak with executives about the organisation’s approach to user privacy. They can also reach out to legal teams or to security managers, who tend to be more cognisant of these issues.
Organisations can also address technical bias by introducing diversity into their teams and employee base. These should be multi-disciplinary teams with broad knowledge of ethical and responsible AI and of cognitive bias, and that diversity should be reflected in an organisation’s research teams as well.
We should not lose sight of the fact that the power of algorithms can make them incredibly beneficial, and that AI should be approached with cautious optimism.
The panel agreed getting the right guardrails in place is a complex conundrum: how can we use technology itself to create those guardrails so that we can all get out of one another’s way and make this stuff work in a way that enables massive change?
We agreed on two clear actions that are not even specific to technology: everyone should speak up when they see bias or inequality, and organisations should strive to implement better values-based processes and guardrails.
With this done, we can worry less about things getting worse, and take joy in what’s possible. Including some excellent cat pictures.
How will you and your team(s) reduce bias in the technology you are responsible for?
Let us know on LinkedIn!