The Data Daily

Optimizing Your AI/ML Efforts with Localization

There’s an old saying that applies well to artificial intelligence and the data that powers it: “Garbage in, garbage out.” Gartner found that only 47% of ML/AI models go from prototype to production. These models are complex, with many elements affecting their success.

For instance, if you create models to expand your market share, they need to be flexible enough to adapt to the many external factors shaping those markets. All this to say: when it comes to AI/ML models, one size does not fit all. So, rather than using a blanket approach, more and more companies are starting to experiment with the concept of localized models.

When you use AI/ML models to drive your business, you’ll often see a lot of value quickly from the first few versions of a model. If we think of the journey to success with AI as a zero-to-100 scale, you can go from 0 to 60 rather quickly just by making a few tweaks to your algorithms or models. But making it all the way to 100 – realizing even more value – is often the most difficult part of the journey.

Imagine that you manage a retail chain and use an AI model to predict how many employees you need for a store to operate. In most situations, you’ll start with a base model (also known as a foundation model). You’ll see some successes with that model right out of the gate, and it can quickly take you to a certain level in your AI journey.

But from that point on, it grows much harder to realize additional value and success. Fully realizing the model’s value requires out-of-the-box thinking and a new approach. This is where the concept of localization can fit in.

Skilled professionals train AI and ML models with one set of data, but that data set isn’t always – and perhaps isn’t ever – universally applicable.

For one thing, many ML/AI models are trained on U.S.-based data. AI localization aims to create data sets that train models for the many other markets in the world. A U.S.-based company’s AI models might work for how things are done in the U.S., for example, but fall short for markets abroad.

But localization is not only for worldwide or large-scale purposes. It can also be used on a micro level. There may be different needs and approaches for a company’s West Coast locations compared to those on the East Coast. Maybe Californians are more likely to go clothing shopping on weekends, whereas New Yorkers are more likely to go on a Wednesday.

Perhaps you’re using a model to determine staffing needs at each store – but staffing needs also change with geographic location, and that has to be factored in. Otherwise, your models won’t be useful. You can’t address the differences in behavior, traffic, or other local factors unless you have separate models for each location.

It’s also possible to drill down further using localization. In a scenario like the one above, you might find that rather than using the same AI model for all your U.S. stores, you maintain a model for each state or each city – or even a model per location, as in the sketch below.
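To make that concrete, here is a minimal sketch of what per-location models could look like for the staffing example, written in Python with pandas and scikit-learn. The column names (store_id, day_of_week, foot_traffic, staff_needed) are hypothetical placeholders, not a prescribed schema.

```python
# Minimal sketch: train one staffing model per store location instead of a
# single global model. Column names (store_id, day_of_week, foot_traffic,
# staff_needed) are hypothetical placeholders; numeric features are assumed.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor


def train_localized_models(df: pd.DataFrame) -> dict:
    """Fit an independent regressor for each store in the data."""
    feature_cols = ["day_of_week", "foot_traffic"]
    models = {}
    for store_id, store_df in df.groupby("store_id"):
        model = GradientBoostingRegressor()
        model.fit(store_df[feature_cols], store_df["staff_needed"])
        models[store_id] = model
    return models


def predict_staffing(models: dict, store_id, features: pd.DataFrame) -> float:
    """Predict staffing with the model trained for that specific location."""
    return float(models[store_id].predict(features)[0])
```

In practice you might blend this with a shared base model – for example, by fine-tuning it per region or adding location features – but the core idea is the same: each location gets parameters fitted to its own data.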

Businesses can gain a clearer understanding of their demographics and the unique needs and desires of different locations by experimenting with localized models. It’s all too common for a company that’s getting started with AI to fall into the thinking that a model is “one and done.” That’s an incorrect notion. Foundational to succeeding with AI is recognizing that it requires continuous iteration – trying, measuring, and adjusting until you find the optimal solution.

Localization requires a technology commitment – one that might keep organizations from even considering localized models on top of everything they’re already trying to tackle. But if AI is truly seen as a tool, a method for moving the needle on your business, then these are challenges you must take on. If you don’t, your models won’t be successful.

Having said that, it’s often a significant challenge to keep track of all these separate models. It requires a lot of experimentation: you need to be able to try new things regularly and keep making tweaks, such as testing different approaches for weekdays versus weekends. This challenge isn’t insurmountable; there are tools available to help you automate the management of all these different models.

Organizing and managing multiple models at scale is usually the problem – not building them. But you don’t have to go it alone, and this shouldn’t prevent you from experimenting with localized models. There are solutions that can assist with the management side, so don’t let that become a sticking point.
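As a purely illustrative sketch (not any particular vendor’s API), here is the kind of bookkeeping such a tool automates when you maintain one model per location: tracking versions, metrics, and artifacts so you always know which model is live where.

```python
# Illustrative only: the bookkeeping a model registry or MLOps platform
# automates when you maintain one model per location.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ModelRecord:
    location: str        # store, city, or region identifier
    version: int
    trained_at: datetime
    metrics: dict        # evaluation metrics for this version
    artifact_path: str   # where the serialized model is stored


class LocalizedModelRegistry:
    def __init__(self) -> None:
        self._records: dict[str, list[ModelRecord]] = {}

    def register(self, location: str, metrics: dict, artifact_path: str) -> ModelRecord:
        """Record a new version of the model for one location."""
        versions = self._records.setdefault(location, [])
        record = ModelRecord(
            location=location,
            version=len(versions) + 1,
            trained_at=datetime.now(timezone.utc),
            metrics=metrics,
            artifact_path=artifact_path,
        )
        versions.append(record)
        return record

    def latest(self, location: str) -> ModelRecord:
        """Return the most recent version registered for a location."""
        return self._records[location][-1]
```

A real MLOps platform adds deployment, monitoring, and rollback on top of this, but even simple version tracking like this keeps experimentation with dozens of localized models from becoming unmanageable.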

AI and ML models take too much time and too many resources to build for you to feed them garbage data. It’s critical to the success of your models to understand that data isn’t one size fits all. Nor is it “one location fits all.” Companies can derive more accurate results by localizing their AI/ML models. There are solutions available now to help create and manage such models, so now is the time to try localization and see if it moves the needle for your organization.

About the author: Harish Doddi is the CEO of Datatron, an enterprise AI platform. Doddi started his career at Oracle, where he specialized in systems and databases. He then worked on open source technologies at Twitter, managed the Snapchat Stories product from its start, and led the pricing team at Lyft. Doddi completed his undergraduate degree in computer science at the International Institute of Information Technology (IIIT-Hyderabad) and later earned a master’s in computer science from Stanford University.
