Your Data Team Should Be Your Company's Conscience

In the executive offices of today’s companies, there’s a lot of talk about making data-driven decisions. If you’re looking to maximize revenue, data can provide an objective, black-and-white recommendation: release one product over another, set a price point, invest in a specific type of advertising, etc. These decisions have clear, measurable results. But today's companies are part of an ecosystem that stretches beyond dollars and cents. Companies -- especially technology companies -- are increasingly being judged on the values they promote, the ethics of the decisions they make and the social outcomes they are creating.

As the use of machine learning and AI increases and we entrust those algorithms to make real-time business decisions, it’s becoming more and more important that we invest in training the people behind those systems to make ethical decisions. If we’re going to trust data teams (and the technology they create) to be the brain of an organization, we need to make sure that they’re also acting as the conscience. As these teams build data models to make faster and smarter decisions, the morality of those decisions is emerging as a new frontier.

It’s one thing for these machine learning models to use information about our individual behavior to show us a certain type of banner ad, but we’re now in an era where machine learning is used to make much more serious choices. For example, algorithms have been used dubiously to determine criminal sentencing and set health insurance prices, situations where an imperfect understanding of a data model could have serious consequences. In a very short period of time, AI has advanced from its experimental infancy to having jurisdiction over potentially life-changing events. Before we grant these models even more autonomy, it’s time to reexamine how we build them.

These particular examples have a common thread running through them: machine learning models built from discriminatory data result in decisions that perpetuate discrimination. The ability to use these models to make decisions faster and more efficiently may boost the bottom line, but these models can also magnify a bias if it isn’t proactively removed. It's important to understand who owns these biased decisions: It’s not the data model, it’s the individual people who created it. AI systems aren't making decisions in a vacuum. They're making the decisions a team of people trained them to make.
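One concrete way a data team can start exercising that ownership is to audit a model's outputs for disparate treatment before deployment. The sketch below is a minimal, hypothetical illustration, not a complete fairness audit: the group labels, decisions, and threshold for concern are all invented for the example. It computes a demographic parity gap, the difference in favorable-decision rates between two groups, which is one common first-pass signal that a model may be reproducing a bias present in its training data.

```python
# Minimal sketch: flag a possible bias by comparing positive-decision
# rates across groups (a demographic parity gap). All data is invented.

def demographic_parity_gap(decisions, groups, group_a, group_b):
    """Difference in favorable-decision rates between two groups.

    decisions: list of 0/1 model outputs (1 = favorable decision)
    groups:    list of group labels, aligned with decisions
    """
    def positive_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical model outputs for applicants from two groups.
decisions = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(decisions, groups, "a", "b")
print(f"Demographic parity gap: {gap:+.2f}")  # here: +0.60

# A gap near zero is not proof of fairness, and a large gap is not
# proof of discrimination -- but a gap this size is a signal the team
# owns and must investigate before the model makes real decisions.
```

A check like this belongs to the people who built the model, not to the model itself: deciding which groups to compare, what gap is tolerable, and what to do about it are exactly the ethical judgments the paragraph above describes.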

The right way to respond to this new opportunity is not to shy away from incorporating these tactics into your data analysis out of fear of committing moral mistakes. If data teams put care and effort into data governance and ethical strategy, then their data models will increase the number of good decisions they make. Companies only run into problems when they cut corners in laying out a moral strategic foundation.

As we have seen in the past and are seeing now with some of the high-profile legal battles around privacy and information usage, ignorance is not an acceptable excuse. Companies that are allowing AI to make decisions need to be held responsible for those decisions.

It’s important to remember that every employee of a company is also a member of something bigger -- a society that is just scratching the surface of new technical capabilities. The individuals on data teams have dual responsibilities: one to their companies to build models that make good business decisions and another to the broader society to be conscious of avoiding the harm that these models can create.
