
The Data Daily

Designing for Analytics (Brian T. O'Neill)


Vinay Seth Mohta is Managing Director at Manifold, an artificial intelligence engineering services firm with offices in Boston and Silicon Valley. Vinay has helped develop Manifold’s Lean AI process to build useful and accurate machine learning apps for a wide variety of customers.

During today’s episode, Vinay and I discuss common misconceptions about machine learning. Here are a few highlights from our conversation:

“We want to try and get them to dial back a little bit on the enthusiasm and the pixie dust aspect of AI and really, start thinking about it, more like a tool, or set of tools, or set of ideas that enable them with some new capabilities.”

“We have a process we called Lean AI and what we’ve incorporated into that is this idea of a feedback loop between a business understanding, a data understanding, then doing some engineering – so this is the data engineering, and then doing some modeling and then putting something in front of users.”

“Usually, team members who have domain knowledge [also] have pretty good intuition of what the data should show. And that is a good way to normalize everybody’s expectations.”

“You can really bring in some of the intuition that [clients] already have around their data and bring that into the conversation and that becomes an almost shared decision about what to do [with the data].”

Brian: We’ve got Vinay Seth Mohta on the show today. I’m excited to have you here. Vinay’s maybe a little outside the normal parameters of who we planned to have as a guest on Designing for Analytics, but not entirely. He has an engineering background, but he’s done a lot of work in the product management space as an executive. Correct me if I’m wrong. You’ve been at MathWorks, you worked on search at Endeca Technologies, and you were at Kayak, which is actually one of my favorite sites for booking travel. I’m sure everybody listening has probably touched Kayak at some point, and you were a product manager there, correct?

Brian: Okay, and I know you did some healthcare. You were a CTO at Kyruus, and now, you are a Managing Director of Data Platforms at manifold.ai, which is a services company that works on data science, machine learning projects, and artificial intelligence. Is that correct?

Brian: Tell us a bit about what Manifold’s doing and what you’re doing there.

Vinay: Sure thing. Manifold, as an organization, is an AI consulting company, as you mentioned. More importantly, we unpack AI into [...] really focusing on data engineering, data platforms, getting your data ready, and then also building machine learning models and getting all of that put together into either an internal-facing or an external-facing product. So, I’m looking forward to talking a lot more about that.

As a company, we largely work with Global 500 organizations, but also a spectrum of organizations that sometimes gets down to fairly early-stage startups, where they’re looking for very specialized help in a particular area like computer vision, for example. We are largely a team of experienced product folks and engineering folks who’ve worked at both large organizations like Google and Fullcom as well as venture-backed startups like some of the companies you mentioned in my background.

Brian: What kinds of projects are people coming to you guys with? Obviously, the whole AI machine learning thing is a pretty active space right now. Everyone’s trying to jump on to that and you got to invest in this. What kinds of projects are you guys doing?

Vinay: That’s a great question in terms of the different places and the different motivations people have when they come to us. I try to demystify AI right from the first conversation. Particularly when we’re talking to executives, which we often do, we want to try and get them to dial back a little bit on the enthusiasm and the pixie dust aspect of AI, and really start thinking about it more like a tool, or set of tools, or set of ideas, that enables them with some new capabilities and that can also be placed on what I, at least, see as a more traditional product development spectrum.

That’s really what I like to use to frame where customers are when they come to us. By the product development spectrum, I mean there is a starting point of what are the right questions to ask, what are the right types of business strategy questions to think about, and what go-to-market-type questions might be relevant to consider.

Some customers that we've talked to are starting all the way back there. There are folks who’ve answered that question for themselves, and now they’re actually starting to think more actively about, what are the product-related areas I want to invest in based on my overall business strategy, what are some of the technology approaches I can take. Machine learning is not always the right answer for a particular business problem. And then there’s really getting into more of the actual design and architecture pieces, and then the hands-on-keyboard work of actually building and then deploying data engineering, related data pipelines, or machine learning models, for example.

We’ve really seen clients come to us at all different phases. The parts we generally like to focus on start from the product strategy, technology strategy-type conversations, going all the way to building and delivering software and machine learning models that are going to get deployed into production. So, that’s really our zone of focus.

Brian: If I could take it back for one second, you said pixie dust and I thought that was funny. But I also get what you’re saying there. Do you think, as consultants and service providers working in the space—I work on the design side, you’re working a lot on the engineering side and the data science side—are we propagating the wrong thing when we say “artificial intelligence” and, in the analytics space, the term “big data”?

Stephen Few just wrote a book, I think last year, called Big Data, Big Dupe. I tend to agree with it. There’s a lot of marketing hype surrounding the term. No one can really even define what makes it big versus regular. Do you think we have to stop using that term? Does it matter what we call it? I feel kind of silly every time I say “AI” because it has such a loaded meaning to people who maybe don’t know as much about it. What do you think about that?

Vinay: I generally agree with the spirit of your question, which is, it’s just good to use words all of us understand that map to things we can touch when we type at our keyboards and things like that. So, it’s very helpful to talk about software engineering as opposed to AI, for example, or a machine learning model.

I’ve also come to terms with the fact that there is a massive marketing wave that is much larger than what you or I choose to do, and I think that creates the context that someone is coming into a conversation with us. When they enter the conversation, they already have some of that context. So, what is more important for us to focus on, as opposed to the specific choice of words, is really taking where people are starting from in that known context and then walking them into a world where we feel we can have a much more real conversation, with the types of things that are grounded in the actual work that we do. A lot of people are uncomfortable with terms they don’t understand, but they believe they’re supposed to keep using them and that they should understand them, et cetera. The other thing that’s nice about taking in a marketing term is that you can almost use it as an educational opportunity when you unpack it.

People start to feel more comfortable that, “Oh, okay. These things can be mapped into things I understand,” and then they’re able to use them much more effectively. At least, in our conversations with them, we have a shared vocabulary. I often bucket those conversations under recognizing that this is a marketing term. “Let’s talk about what you mean by AI and let me unpack what I mean and make sure we have a shared vocabulary.” I think there are some nice ways to undo the marketing hype in more intimate settings. But at a larger scale, I have found that anytime I try to fight the marketing, the five-year macro-trend marketing term, people mostly say, “Oh, you don’t do anything related to that; you do this other thing.” And it’s like, “What? No, no, no. That’s not what I meant.” I think we have to pick our battles.

The other thing, which I always have mixed feelings about, is that it does seem like—and I’ve seen this with several of the major technology trends over the last two to three decades—it does motivate organizations that traditionally wouldn’t look at technology as an enabling component of their business strategy. It does force them to at least take a look and revisit ideas that may have been scary before. But now they feel like, “Oh, well, let’s at least take a look, because it seems everybody else is getting some value from it.” It does at least stir up things inside organizations where you get some creativity going and people are willing to at least step out of their day-to-day and take a look. I’m definitely not a hype person in general, but it does seem to serve at least some positive purpose in that sense.

Brian: I kind of see it—we’ve joked about this in the past offline—like there’s a new hammer at Home Depot and everyone’s racing out to go buy this tool, but not everyone knows what it does. It’s just, “I’ve got to have one like everyone else. It does everything.” On that thought, of ten clients that come in, what role is your typical client in? And of those ten, how many of them have either unrealistic expectations, like, “Hey, we want to do this grand project with AI and machine learning to do X,” versus, “Hey, we want to really optimize this one part of our supply chain,” or, “We want to do…” something very specific that’s been thought of in terms of either a product or service offering or an internal analytics thing where they want to actually apply an optimization or something like that? How many fall into the “educated” versus maybe “less educated” bucket, in terms of what they’re asking for from you?

Vinay: I would probably say on the order of 20% to 30% of folks are in that bucket of, “I have a very targeted need. I know exactly what I want to get out of this data pipeline. I have this other data pipeline I’d like you to work with to put the whole thing together,” or, “I need a specialized machine learning model that will help me segment some of my customers in a more fine-grained way for this very particular use case,” things like that. Those tend to be organizations that already have a software engineering capability. There’s some data from other business problems already, and they either need more help than they have in-house or they need some kind of specialized help. So maybe they have largely done more structured-data, marketing-related use cases, and now they want to do more natural-language-related work, or something in a different area.

They generally have a fairly good feel for the landscape and they know how our work would plug into their work. Then roughly 50% of what we get is more where we get people who are VPs of Technology or VPs of Product. They understand operations in a pretty meaningful way. Or a line-of-business leader who has a meaningful business case in mind, so they already have one or more business problems in mind that they think will be compelling. They want to know, is this a good fit for machine learning or not? What would be required to actually get to even trying out machine learning?

I would put those folks in the bucket of having thought through some of the business strategy questions, going back to that spectrum idea of starting from business strategy all the way to shipping something to production. I would say they are more in the product and technology strategy bucket, where they want to figure out, “I don’t know what I have in the rest of my organization, but I know we have some software, we have some data from running a website for the last four years, or whatever else, or some other kind of operational system. I’d like to figure out if we could use machine learning in some way to do something predictive, for example to improve how a call center handles inbound calls and prioritizes some of the tasks.”

There are cases where people have much more thought-through use cases in mind, but they don’t have the expertise on: What is the data pipeline? What data do I actually need for machine learning? Have I actually ever built and deployed a model before? They usually have not done that. There are a lot of folks in that bucket. And then the third bucket is the remainder, which is really people starting more on the business strategy side, where they’re saying, “Oh, we’d really like to have an open-ended conversation. Our CEO has a five or ten-year vision around transforming our core business and how we service our customers.” I’ve talked to folks in much more traditionally industrial businesses like paper processing, for example, or staffing, or instrument manufacturing, or other types of manufacturing.

In those kinds of areas, there is really this historical model of hardware or some other service that gets provided, as opposed to Software as a Service. I think everybody is interested in some kind of move to a subscription model and also in understanding the relevance of these technologies. But they are not at the stage where they’ve identified a particular business case or use case.

Brian: If I’m a product manager or someone that’s in charge of bringing ROI to data within my company, say I’m not a technology company, should I be looking to make an investment in a place where maybe it’s more of a traditional analytics thing or maybe I have humans doing eyeball analysis, making decisions about insights from the data, and then saying, “Okay, what we’d like to do is actually see if we can automate this existing process. So, it’s like A, B, C, D, E, F. We want to swap out stage D with a machine learning solution to free those people to do other work”? Or is more like, “We have this data we’re sitting on. Hey, we could train it and do something with it. We’re not doing anything with it right now.” Is there a strategy or some thinking around one of those maybe being a more successful project to take on, any thoughts?

Vinay: I think that’s a great way to pose the question, because one of the things I would think about, as with any new effort in an organization, is that you want to be successful as the person who’s bringing in some new technology or new approach, whether it’s process or people or technology. Really having a lower-risk, smaller bite at the apple, in some sense, to get your first success on the board, and then starting to build on that nucleus, would definitely be the way I would think about getting it going.

There may be different situations where, as a leader of a large organization, you really have a directive to be more transformative, and that can be a different type of conversation. But if I think about somebody who’s in a product role at—let’s call it, just for the sake of brevity—a non-tech organization, I think starting with a smaller project helps. You can get people used to the idea that you could do more with data, that it’s not that scary, that it’s like another tool, like buying another piece of software and doing some training around it. Then it gives you a success that you can build on, and people around you start to have some familiarity with it, so you get less resistance the next time you go and do something. I think the overall change management challenge should frame the choice of project as much as anything else.

One of the other frameworks I would use: Ben Evans from Andreessen Horowitz recently wrote a really nice blog post about how people can organize their thinking around applications of machine learning. The core of the framework is that there are three buckets in which you can think of the problems and the potential applicability of machine learning. The first one actually falls very much into exactly the example you gave, where I might have an analyst working with existing data, et cetera. That’s the ‘known data, known questions’ bucket. So, you have a set of data already available. You have a set of questions your analysts ask every day. Maybe they’re eyeballing it. Maybe they’re running a simple linear regression or something.

What’s nice about applying machine learning in that case is it’s literally like, “Oh, you have a mallet. Here, I have a stainless steel hammer. Let’s see what happens if I apply my stainless steel hammer.” It’s relatively easy to get set up to do it. Your organization knows roughly what’s already involved with that data, the semantics of the data. It’s clean enough that you could probably start working with it. It gives you a relatively easy pathway into trying out machine learning. Just saying, “Oh, we got a 50-basis-point lift just by applying this new tool, without really changing anything else.” That’s one bucket.

For the other two buckets, I definitely encourage folks to read the article; we can put it in the show notes or something. The other two buckets are ‘unknown data, new questions,’ and then the last one is ‘new data, new questions.’ Just to give you a placeholder for what the last bucket is, those are opportunities where you might be able to apply computer vision or put new sensors in a particular environment. So, gathering entirely novel data streams and unlocking new questions. There’s a handful of organizing ideas like this. We generally suggest a few different articles, and I am definitely happy to offer those for the show notes as well, if [you’re looking for 00:17:27] different ways to organize their thinking around approaching machine learning problems.

Brian: Great. Yes, I’ll definitely put those links into the show notes. Thanks for sharing those. Also, a follow-up to that. Once you’re into a project, what are some of the challenges for projects that have a user interface or some kind of user experience that’s directly accessed? Are there challenges that you see your clients having with getting the design right? Are there challenges with getting the model and the data science part right, or getting it into production? I heard a lot about this at the Strata conference I was at in London: you can do all this magic stuff with your data scientists and the PhDs, but if they don’t know how to either help the engineers or themselves get that code into a production environment, it’s just sitting in a closet somewhere and it’s never going to really return value. Can you talk about some of the design and the engineering challenges that you might be seeing?

Vinay: I’m assuming most people listening to the podcast are familiar with traditional product development processes, design iteration, and so forth. What I’ll offer here is the difference when you start thinking about data and machine learning. We have a process we call Lean AI and what we’ve incorporated into that is this idea of a feedback loop between a business understanding, a data understanding, then doing some engineering—this is the data engineering—then doing some modeling, and then putting something in front of users.

The major part here is that you may have a particular idea about what the ideal user experience might be. But then, as we start to get into the data and start trying different modeling techniques, we might surface additional opportunities: there may be something compelling the user could do in their workflow using what the model has surfaced. Or it may be that the original experience as envisioned is going to have to change, because there is not enough predictive power in the data, or a data source that you thought you’d be able to get your hands on is just not going to be available, or things like that.

So, there is an additional component to the [iteration 00:19:46] loop that you have to rely on, which is just what is in the data, how much of it can I get access to, and then some of the more traditional software engineering constraints. If it’s going to take six months to get that particular piece of data cleaned up enough that we can actually use it, is there something lighter weight we could at least get started with, something to put in front of users first, and then continue to refine and iterate over time? That’s probably the big difference between traditional product development that just involves software engineering and apps versus working with data and machine learning. There’s a little bit of just this science of what is possible inside of the data, given the signal inside [00:20:27] the data.

The engineering part is definitely, as you said, something that has been talked about less historically, and it sounds like, based on some of the things you’ve heard at Strata, that is starting to change. What I’ve seen is that a lot of the tutorials, a lot of the content out there, has historically been focused on, “Get your first model going,” or, “Take this particular data set and try building a model, or tweaking this or that.” In that sense, there are also a lot of tools available for doing data science and data science exploration.

It’s great, exactly like you said, Brian, that somebody’s built a model that’s interesting. But if we haven’t built the rest of the product around it, and if we haven’t actually gotten that model to production, then, as I like to say, if at the end of the day somebody’s not pushing a button differently because of your model, or pulling a lever differently because of your model, it really doesn’t matter that you built it in the first place. That actually goes back to requiring engineering and product development expertise, as opposed to data science expertise, which feels a little bit more like the traditional science disciplines where you’re doing experimentation.

Brian: Do you get to the point where you’re midway through a project and it’s like, “We’re not sure if we can do this,” or, “The predictive power is not there”? I imagine you probably try to prevent getting into a situation where that happens. Is there client training that has to go on if they’re coming to you too early? Like, “We’re ready to build this thing. We want to build a model to do X,” and you’re like, “Whoa.” Do you tell them, “Come back to us in two months or when you guys have figured this out”? How do you make sure that doesn’t happen, so they don’t spend all this money on hiring data scientists internally to work with you, or on their own, or just on you, and not get an ROI? How do you educate on that?

Vinay: That’s again what we have incorporated into this Lean AI process where we’ve taken the spirit of Agile and some of the ideas around Lean startup, for example. There’s actually an old framework from the late 90s called CRISP-DM—it’s from the data mining community—and really, the idea in all of these things is tackling your big risks early and surfacing them. We take a similar approach where anybody can do this. But it’s getting an understanding of what is the business problem you want to go after and what is the data you have available. We call it a business understanding phase and a data understanding phase.

During that phase of data understanding, it’s really doing a data audit. Particularly, this is an issue in large organizations. People think they have access to certain data, but it may be that somebody in a different organization owns the data and they’re not going to give it to you. You sort of have the human problems that we’ve always had. Then there are other parts which are, “Is there predictive power in the data? Is the data clean?”

Generally, the first thing we do is just apply a suite of tools that will characterize the data, profile the data, and help us get an understanding of what we think is there. Usually, we work with client team members who have domain knowledge. They generally have pretty good intuition about what the data should show, and that is oftentimes a good way to normalize everybody’s expectations.
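As a rough illustration of that kind of data audit, here is a minimal profiling sketch in Python with pandas. It is not Manifold’s actual tooling; the checks and the file name are assumptions made for the example, and the idea is simply to produce a per-column summary you can review with domain experts.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize each column: type, missingness, cardinality, and basic numeric stats."""
    summary = pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_pct": (df.isna().mean() * 100).round(1),  # share of null values
        "n_unique": df.nunique(),                           # cardinality
    })
    numeric = df.select_dtypes("number")
    summary["min"] = numeric.min()
    summary["mean"] = numeric.mean()
    summary["max"] = numeric.max()
    return summary

# Hypothetical usage: load an extract, then review the profile with domain experts
# and ask whether it matches their intuition of what the data should show.
# df = pd.read_csv("sensor_extract.csv")
# print(profile(df))
```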

As an example, we were working with an industrial client last year. In addition to sensor data coming off their devices, they also had field notes that people had entered when they were servicing some of the equipment. As we were working with their experts during the data understanding phase, the experts actually said, “You know what? I wouldn’t trust the field notes. People sometimes put them in and sometimes they don’t. The quality varies a lot across who put those notes in and what they put in there. So, let’s just not use that data source.” You can really bring in some of the intuition people already have around their data and bring that into the conversation, and that becomes an almost shared decision about what we think we can get out of this data, what’s in the data, and do you agree that this data is actually saying what you think it should say? Those kinds of things.

I would say tackling big risks early is one of the major themes of what we do. The other part really comes from, again, the engineering approach that a lot of us have taken historically from our past experiences. [It's probably 00:25:48] the best analogy I can draw from their product management days: this idea of doing mockups, doing paper mocks and those kinds of things, before you get to higher-fidelity mocks. There’s a similar idea in machine learning where we say, “Okay, get some basic data through your data pipeline. It doesn’t have to be perfect.” Then we build this thing called the baseline model, which is, “Yes, there are 45 different techniques you can use to build a machine learning model. Let’s take one of the simplest ones. Something like a random forest, where we know that’s not the best-performing model for every use case, but it’s really easy to build. It’s really easy to understand, at least out of your first version, what the model is doing.”

You can get some baseline of performance pretty quickly, which is, does it perform at 60% or does it perform at 80%? From there, you can start to have a discussion about how much more investment we want to make. Do we need to get more data in here, clean the existing data and transform it in different ways, explore different modeling techniques? Those kinds of things. I draw the analogy to some of the product development processes we would follow if we were just doing a software engineering project: let’s get something built end-to-end, then add more functionality over time, and take it from there.
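To make the baseline-model idea concrete, here is a minimal sketch in Python with scikit-learn. The file name, target column, and use of accuracy as the metric are illustrative assumptions, and a random forest appears only because it is the simple first technique mentioned above, not a recommendation for any specific problem.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical cleaned extract with a binary label column named "churned".
df = pd.read_csv("customer_extract.csv")
y = df["churned"]
X = df.drop(columns=["churned"]).select_dtypes("number")  # keep it simple: numeric features only

# Hold out a test set so the baseline number is honest.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# A deliberately simple first model: easy to build, easy to reason about.
baseline = RandomForestClassifier(n_estimators=100, random_state=42)
baseline.fit(X_train, y_train)

# The resulting number ("does it perform at 60% or at 80%?") anchors the discussion
# about whether more data, more cleaning, or fancier modeling is worth the investment.
print(f"Baseline accuracy: {accuracy_score(y_test, baseline.predict(X_test)):.2f}")
```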

Brian: Regarding the projects you work on, are your clients, most of the time, the actual end users of the service or the direct beneficiaries? Or are they typically building something internally that will be used by other employees or vendors or their customers? How close to you is the person who’s going to benefit from or use the service that you’re building?

Vinay: I’m definitely not aware of all of our projects, but the projects I am aware of and the ones I’m working on right now all have enterprise users. None of them are applications that are going to go out to consumer end users. But nonetheless, the enterprise users are folks who are not technology people and not particularly specialized in data or anything like that. They are more folks who are executing on processes as part of a broader workflow. For example, it might be a health coach at a particular company, or it might be a call center employee, or it might be the maintenance and repair center at an industrials company. It’s more internal users, or if it’s external users, it’s still enterprise users who are using a larger product.

Brian: Do you ever get direct access to those users when you’re working with your clients, or is your client typically the interface to them? How involved do you get with some of these users, like a call center rep or something like that?

Vinay: It actually depends on the type of expertise our client has. If they have a product owner and a product manager who’s fairly confident about their ability to interface with the end user, then instead of being part of the user feedback sessions as some of these models go in front of users, we might just have a few conversations at the beginning of a project to understand the context in which particular operational data was gathered, or the workflow that might surround the model we’re building or the data pipeline we’re building.

But again, if they have a strong product function already, we would probably be more isolated from that. If, on the other hand, there isn’t much of a product function that is familiar with software engineering and product development (in some of these non-tech organizations, the product managers may be much more hardware-oriented, or they may not even have a product role, depending on the type of operation), then we would be much closer to the end users and understanding the use cases.

We also want to partner with whoever is doing the product design and some of the other UX components as well. I would imagine there would generally be another partner of some sort. We’re interested in talking to the end users, but we’re definitely not the experts on product design and so forth. We’d expect somebody else to play that role, either somebody like you, where the client has partnered with another organization or individual, or a capability they have internally.

Brian: One place we obviously think about lots of data is the traditional analytics space, whether that’s internal analytics at companies or information products like SaaS offerings. Do you see the capabilities of data science and machine learning that have really been enabled in the last few years changing that?

Primarily, what I understand is that there’s more data availability and more compute power availability. It’s not so much that the science is new; a lot of the science, I hear, is quite old. The formulas and algorithms have been around; it just hasn’t been as feasible to implement them. Now that it is, do you see that traditional analytics deployments will, over time, start to leverage more and more predictive capabilities or prescriptive analytics, with less report generation and less eyeball analysis?

Say, in the next five years, will 20% of traditional analytics capabilities be replaced by more prescriptive and predictive capabilities because of this? Or is it really just going to take a lot longer than that? I imagine some of it’s just at the mercy of the data you have available. You can’t solve every problem with this, but do you see an evolution happening in that direction? Is that making sense?

Vinay: Yeah, absolutely. You’ve hit upon a really important idea. I’ll start my answer, though, by taking a slightly different view, which is what is going to stay constant, and then we can talk about what is going to change. The part I have found most exciting about business intelligence, analytics, reporting, pick your category name, is when you can get it embedded into a workflow. The folks who are actually on the front lines, running through a workflow or going through a customer interaction or whatever, actually have access to that data and are able to drive decision-making as part of their process.

What we’ve seen in the last 20 years or so is this continued increase in the notion of a data-driven organization: that people should have more access to data when they’re in these workflows and making decisions. Everything from things you’ve probably heard about, like insurance companies or telco companies, where call center folks are able to offer you something if you’re about to churn, for example. An offer pops up on their screen and they’re able to give it to you. That’s a nice example where somebody is actually using the data in decision-making as part of their production workflow. We’re just generally seeing more of that. So, no matter what, whether it’s prescriptive or descriptive or whatever else, I broadly see continued adoption of analytics and data in more workflows across a whole range of software products.

I’m generally excited about that. I wish it would take less time, but at least we’re continuing to make progress. I think what you hit upon is what’s going to change. I firmly believe we’re seeing this in name today, but we’ll see it more in actual practice: the nature of the work itself will change. There are a lot of people who have the business analyst role in organizations today, supporting different functions. Largely, I think of them as people who have a fairly deep understanding of the business. They generally live in Excel. They’re complete masters of Excel. They can build what-if models, they can do scenario-solving, they can do VLOOKUPs, and do all of those kinds of things in Excel. I think they’re going to get a whole additional set of tools.

I tell people this and I’m going to go on the record here and suggest that I’m almost imagining Excel 2020 is going to have a button that you can hit and say, “Here’s my data. Go try out 50 different models or 500 different models.” Excel will go off, ship your data to Azure, run a whole bunch of different models, and come back and tell you, “Here are the three that seem to fit your data best.”

Really, the skill you need at the end of the day, which is the skill you need today, is understanding the statistics of the data, having some intuition around the business and what’s going on around you, and then being able to swap in these other statistical methods that we group under machine learning, once those tools are mature enough for broader use and deployment. Because of that, I think yes, in the five-year timeframe, we’ll see the leading edge of more prescriptive analytics entering product workflows. I’d be curious about your opinion on this, but I feel like we’re past the early adopter phase and more now in the mainstream phase of descriptive analytics entering some of the different products.

Brian: Yes, maybe that’s fed Microsoft a little tip for how to improve their Office suite a couple of years from now. This has been really informative. Thanks for coming on. Do you have any single message or advice you’d give to data product managers or analytics leaders in businesses, in terms of how they can design and/or deploy better data products in their organization or for their customers, if they are a SaaS or information provider? Any general tips you’ve seen or something you can offer them?

Vinay: Maybe a handful of things, just to run through, with different levels of applicability. One of them is that having a good business case, the way we talked about earlier, and taking on something small is definitely very helpful for building some success. Also, maybe squelch some of the visionary enthusiasm that people might have; in general, though, try to feed some of the vision component while you’re getting a concrete success on the board, to keep people excited about the potential and the future. That’s one bucket.

If you have a vision in mind, one of the things your technology teams and your machine learning teams can do, and something we definitely ask for when we do our engagements, is that while you’re solving a specific business case and a specific problem, you can do the work in a way that lays the foundation for longer-term leverage. So if we build the data pipeline and we know you have a specific two-year vision, we can actually start to lay some of the pieces as part of that project and make investments toward that vision. While you should execute on smaller opportunities, you should also dream big. I think that’s one general thought.

Another thing I’ve been starting to form an opinion around is that, to execute successfully on a product and on the data and machine learning components of a product, you have to have a ‘what’ in mind, like, “What is this product going to do with the data?” You need to have a product direction, product sense, product vision, whatever you want to call it, to know what’s going to happen in the context of that product.

Longer term, when you start to think about the context for these kinds of capabilities, you need to think about organizational vision. For this product, it may be that you did it with a couple of folks from another team who sat down the hall, just to get something out the door. But then you really want an idea of the 18-month timeframe: Do you want to build a software engineering organization? Do you want to build a data engineering capability? Do you want to have a data science team? Do you want to work with the finance team to maybe get a couple of business analysts over to a new team? Really starting to contextualize your product vision with your organizational vision is important for the longer-term picture, and so is having clarity around that even as you take on the shorter-term opportunities. Those are probably a couple of things that hopefully people find helpful.

Brian: Yes, I definitely did. I was actually going to follow up on this, but it may be an unnecessary question. One of the services I’m often asked to come in and provide for clients is helping them envision a new product or something they’re working on. It’s what I call getting from the “nothing to something” phase, where there’s a Word document of requirements or capabilities or features, what have you, and getting to that first visual something. It sounds like you still think that step is helpful: even if you don’t bite off the whole thing from an engineering standpoint, having an idea of the goalposts, of where a service that incorporates some machine learning or AI technology might go, while deploying a small increment of utility into the organization. Would you agree with that?

Vinay: Yes, absolutely. Even for the folks building the models or building the data pipeline to get the data cleaned up and usable, whether it’s for analytics or for the models, it’s really helpful to have that broader context, as opposed to having a very narrow window into, “Oh, I need these three fields to be cleaned up and available.” If you can’t provide that broader context, I feel you end up with a lot of disjointed pieces as opposed to something that feels good when you’re done. I would definitely agree with that.

Brian: Well, Vinay, thank you so much for coming on. This has been super educational for me and I’m sure for people listening as well. Where can people learn more about what you’re doing? I’ll definitely put the Ben Evans link and your Lean AI process that you talked about. So, send me those links. But where can people learn more about what you do?

Vinay: Our website manifold.ai is definitely the best place to start. We have a few things about the type of work we do and some case studies as well as some background of our team. That would be helpful. In terms of my own time, I actually don’t spend that much time on social media. LinkedIn is probably the easiest place to find me. Generally, I post things there occasionally and definitely participate in some conversations there. It would be great to chat with folks there.

Brian: All right, great. Well, thanks again and I hope to talk to you soon.

Vinay: Thank you, Brian. I really appreciate it. It was a great conversation.
