What We Still Need to Learn about AI in Marketing — and Beyond

September 18, 2021


A conversation with HBS professor Eva Ascarza on using AI to improve marketing decisions.

Eva Ascarza, professor at Harvard Business School, studies customer analytics and finds that many companies investing in artificial intelligence fail to improve their marketing decisions. Why is AI falling flat when it comes to this key lever for profit? She says the main reasons are that organizations neglect to ask the right questions, to weigh the value of being right against the cost of being wrong, and to leverage AI’s improving abilities to change how they make decisions overall. With London Business School’s Bruce G.S. Hardie and Michael Ross, Ascarza wrote the HBR article “Why You Aren’t Getting More from Your Marketing AI.”

Transcript

CURT NICKISCH: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Curt Nickisch.

A growing number of companies are turning to artificial intelligence to solve some of their most vexing problems. The promise of AI is that it can go through vast amounts of data and help people make better decisions. And one area where companies often search for profitable use cases for the technology is in marketing.

It’s harder than it looks. Data scientists at one consumer goods company recently used AI to improve the accuracy of its sales forecasting system. While they did get the system working better overall, it actually got worse at forecasting high-margin products. And so the new, improved system actually lost money.
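The anecdote turns on a simple arithmetic point: an error metric averaged across products can improve while the dollar impact gets worse, because misses on high-margin products cost more. Here is a minimal sketch with invented product names, margins, and error figures (none of them from the company described above) showing how weighting forecast errors by margin can flip the verdict on a “better” model.

```python
# Hypothetical illustration (made-up numbers): a forecast upgrade that lowers
# average error overall can still cost money if errors grow on high-margin items.
products = [
    # (name, unit_margin_dollars, old_abs_error_units, new_abs_error_units)
    ("budget_soap",   0.50, 1200, 400),
    ("budget_lotion", 0.40, 1000, 300),
    ("premium_serum", 9.00,  150, 320),
]

def mean_abs_error(errors):
    return sum(errors) / len(errors)

def margin_weighted_cost(rows, error_index):
    # Rough dollar cost of being wrong: units mis-forecast times unit margin.
    return sum(row[1] * row[error_index] for row in rows)

old_errors = [p[2] for p in products]
new_errors = [p[3] for p in products]

print("Mean absolute error (units): old=%.0f  new=%.0f"
      % (mean_abs_error(old_errors), mean_abs_error(new_errors)))
print("Margin-weighted cost ($):    old=%.0f  new=%.0f"
      % (margin_weighted_cost(products, 2), margin_weighted_cost(products, 3)))
# The new model wins on average error (roughly 783 -> 340 units) but loses on
# margin-weighted cost ($2,350 -> $3,200), mirroring the anecdote above.
```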

Today’s guest says that many leaders lean too heavily on AI in marketing without first thinking through how to interact with it. They might be asking the wrong questions or forcing the technology into incompatible systems.

Eva Ascarza is an associate professor at Harvard Business School. And she’s the coauthor of the HBR article “Why You Aren’t Getting More from Your Marketing AI.” Eva, thanks for joining us.

EVA ASCARZA: Thanks for having me.

CURT NICKISCH: Let’s start a little broad here because I know that artificial intelligence or AI is a term that everybody hears thrown around a lot. We may have our own understanding of what AI is based off of the industry that we’re in or how we use it. How do you think about AI today and what is its most common use in marketing applications?

EVA ASCARZA: What my coauthors and I mean by AI, and what we have seen most in marketing, is generally the use of data collected by different tracking devices, different systems, and transactions.

So it’s usually individual-level data about customers. And AI really refers to the whole system that collects the data, gets insights from the data, and then feeds those insights into decisions. The simplest use of that could be just: I have collected data about my customers, I look at my transactional data, and I make, say, a forecast of how much they will be consuming in the following periods. That also could be defined as AI. Now, it’s not the most sophisticated kind, but it’s AI because it uses data to get insights from it.

And that’s going to map into some marketing decision. Now, other uses of AI can be much more sophisticated. It could be a system like, for example, pricing at Uber. They use AI to price. Why? Because they need to collect real-time data on where the drivers are and where the users are when they request a ride. They do this kind of matching between the driver and the rider, and they set a price that is actually optimal for the objective of the company. So all of that is one whole system; the collection, analysis, and decision are made in one. So a lot of automation uses AI, but also sometimes simple decisions that are based on statistics and machine learning, which are the methods used to understand those data.

CURT NICKISCH: Yeah, that makes a lot of sense: if you’re trying to forecast sales, knowing that it’s pretty tricky, takes a lot of time, and is inaccurate, here’s a chance to use AI to do something you’re already doing, but hopefully do it in a better way.

EVA ASCARZA: I mean, a slightly historical way to look at it: people talked about data-driven marketing, right? And many years ago, that just meant you were going to use your data to come up with an output, for example a forecast number, and then use it for your decision. Then people were talking about data mining, because all of a sudden you had more data, so it was more about mining the data.

And then I think we went from there to people using the term machine learning, when in fact machine learning refers to the tools people use to understand data and come up with predictions. And now AI encapsulates more of it, potentially the whole integration. There are many firms that are just using some prediction models and calling it AI, but I think people are using AI as the newest word, partly because it encapsulates more and more things.

CURT NICKISCH: Yeah. Where do people have the most problems?

EVA ASCARZA: So, we have mostly worked with companies in the marketing space, right? Companies use AI for many different tasks and many different goals. The problem we have observed most in marketing is what we call misalignment. In order to capture the value of an AI system or an AI prediction, whatever behavior or number you are predicting has to be very well aligned with the decision the company will eventually make.

What we have seen, and the most common story, is that companies adopt certain AI systems or build data science teams, and these teams take all the data and start predicting things they can predict well and things that excite them. And they predict a lot of customer behaviors.

So for example, with customers, you predict what they will do on Expedia. You predict whether they will click on something, whether they will like something. And the team generating these predictions is very excited about that, because this is what they know how to do and what they enjoy doing.

But at the same time, the marketing team isn’t truly interested in how many customers are going to like this. They really want to know what product to feature first, or what price to set, or who is going to receive a discount. So there’s a bit of a misalignment between what the predictions give you, which is information you did not have, and the decision actually being made.

So I’m going to give you an example. There are many, many firms trying to reduce customer churn. You don’t want your customers to go to a competitor, so you think about strategies to keep them, generally proactive strategies. You see many, many firms spending lots of money and effort, with their teams building very sophisticated and very accurate models, to tell you which customers are most likely to leave in the following period, the next month, the next year, whatever. Now, that is only part of the story, because that prediction could be useful, but for the prediction to be amazingly useful, it would have to tell me: which of my customers would be persuaded by my offer?

So the comparison I always make is: let’s say you want somebody to vote for you. This is an election, and you want people to vote for you. The AI could tell you who’s going to vote for you and who’s going to vote for the other person, but that’s not useful to you. What would be useful is if the AI could tell you who the persuadable voter is, the person who, by hearing something from you, will be more likely than before to vote for you. That’s why we call it misalignment: the prediction gives you a behavior, but the decision is about changing that behavior, not predicting it like a fortune teller.

So I think it’s the difference between being a doctor who prescribes and cures and being a fortune teller who just tells you what’s going to happen.
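The distinction Ascarza is drawing, predicting who will leave versus predicting who can be persuaded to stay, is what the analytics literature calls uplift modeling. Below is a minimal sketch of one common approach, a two-model (“T-learner”) estimate, assuming you have data from a past retention campaign in which the offer was sent to a random subset of customers; the file name, column names, and model choice are illustrative assumptions, not details from any firm mentioned in the interview.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Assumed historical data from a randomized retention test.
# Illustrative columns: customer features, a treatment flag, observed churn.
df = pd.read_csv("retention_campaign.csv")  # hypothetical file
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
treated = df["got_offer"] == 1

# Two-model ("T-learner") uplift estimate: churn risk with vs. without the offer.
m_control = GradientBoostingClassifier().fit(X[~treated], df.loc[~treated, "churned"])
m_treated = GradientBoostingClassifier().fit(X[treated], df.loc[treated, "churned"])

p_churn_no_offer = m_control.predict_proba(X)[:, 1]
p_churn_offer    = m_treated.predict_proba(X)[:, 1]
uplift = p_churn_no_offer - p_churn_offer  # expected drop in churn if we intervene

# Misaligned policy: contact the customers most likely to leave.
by_risk   = np.argsort(-p_churn_no_offer)
# Aligned policy: contact the customers most likely to be persuaded.
by_uplift = np.argsort(-uplift)
```

The two rankings typically differ: some high-risk customers cannot be saved by any offer, and some low-risk ones would stay anyway, which is exactly the misalignment described above.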

CURT NICKISCH: Yeah. It’s interesting because you’re describing a scenario that almost sounds like it doesn’t have anything to do with AI at all. Right. It has to do with decision-making, thinking through the problem –

EVA ASCARZA: Absolutely. This is very general; the problem per se has nothing to do with AI. However, AI has amplified this problem, and let me tell you why. When we adopt sophisticated technologies for making predictions, it’s very easy to lose track of the business objective. For example, suppose you don’t use any AI to make these predictions and the marketer makes decisions based on her gut feeling. Let’s say I’m the marketer, and I’m going to decide who I’m going to send this campaign to. If I do this just by gut feeling, because I have experience from the past or whatnot, I’m going to be taking into account this persuadable-customers story that I just told you, because that’s kind of common sense, and that is the way to think about the action.

But say I want to adopt AI to be much more precise in my predictions, to leverage these immense amounts of data; that’s why people adopt AI. By adopting this AI, what I am doing is separating the decision maker from the prediction maker, and the prediction maker could now be a different team. In most cases, that different team is the one that interacts with the AI. And this is how misalignment happens, because now people end up doing what they know how to do. We call it the streetlight effect: the data science team will gravitate toward predicting and analyzing the data the way they know how to do it.

But they haven’t thought through what would actually change people’s behavior, while the marketer, on the other hand, is sitting in a different room. And the miscommunication between these two teams makes the misalignment stronger. So it is not that AI is causing the problem, not at all, but it’s kind of an enabler of it, so to speak.

CURT NICKISCH: Wow. It’s always slightly alarming, but also refreshing to see how so much just comes back to organizational structure and management. Right.

EVA ASCARZA: Exactly. The thing is, there was a lot of very interesting, amazing work on decision making under uncertainty 20, 30 years ago, and much of that has been forgotten. At least from talking to teams, I believe they haven’t thought about decision making under uncertainty. And that’s exactly the way to leverage AI. There is an unknown, there are things you don’t know, and you’re going to use the AI to give you predictions about them.

These predictions will have uncertainty. So you have to understand: what is the outcome of those uncertainties? What is the cost to the company? And you have to integrate that into the whole decision framework.
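One concrete way to read “integrate that into the whole decision framework” is as an expected-value rule: intervene only when the predicted chance of success, times the value of being right, outweighs the cost of acting. A minimal sketch, with all dollar figures and probabilities invented for illustration rather than taken from the interview:

```python
# Decision under uncertainty: send a retention offer only if its expected
# payoff beats doing nothing. All numbers below are illustrative assumptions.
OFFER_COST = 20.0        # cost of the discount / outreach per customer
CUSTOMER_VALUE = 600.0   # future margin kept if the customer stays

def expected_gain(p_churn, p_saved_if_offered):
    """Expected profit of sending the offer versus not sending it.

    p_churn: predicted probability the customer leaves if we do nothing.
    p_saved_if_offered: probability the offer retains an otherwise departing
    customer (the "persuadability" the interview argues you need to estimate).
    """
    return p_churn * p_saved_if_offered * CUSTOMER_VALUE - OFFER_COST

def should_contact(p_churn, p_saved_if_offered):
    return expected_gain(p_churn, p_saved_if_offered) > 0

# A customer who is very likely to leave but cannot be persuaded is not worth
# contacting; a moderately at-risk but persuadable one is.
print(should_contact(p_churn=0.9, p_saved_if_offered=0.02))  # False (-9.2 expected)
print(should_contact(p_churn=0.4, p_saved_if_offered=0.30))  # True (+52.0 expected)
```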

CURT NICKISCH: You’ve worked with and studied a huge range of companies that are implementing AI in their marketing efforts. What other key pitfalls do they seem to run into?

EVA ASCARZA: So, one thing that has happened in marketing in the last few years is that there’s more data, and therefore there are more predictions and more precision. Precision has been great in the sense that we can get more granular with customers.

Now, another pitfall that we have seen in the market is that some decision makers, some marketers in this case, have not adjusted their decision pace, the frequency at which they make decisions, to the level of granularity that AI can give. We call this the aggregation issue: many times the AI can give you very, very granular predictions, not only what price you should set for this week, but what price you should set for this hour of this day.

I’m talking, for example, about a large chain of hotels, and in their case the issue is pricing. The pitfall there was that in the past they were making these decisions at the weekly level; this is when people were having their meetings. Now you have systems and data that give you these predictions at a much more granular level, but they have not kept up with that pace.

CURT NICKISCH: So you’ve given some examples of how companies are not really leveraging AI as much as they could in the marketing capacity. And a lot of it has to do with decision-making frameworks and also the communication between the marketing and data science teams. What are some things that you think companies can do to change that?

EVA ASCARZA: Yeah, we have seen this in many cases. The framework we developed is a way to enable this communication; it’s a way to help marketers get closer to what the data science team is doing and to help the data science team understand what their predictions are going to be used for. It was very surprising for us at first to see that many, many times the data science team didn’t even know how their predictions were being used.

Therefore there is no way you can actually leverage AI that way. So this framework is very simple; it’s really about forcing the teams to ask the right questions. It’s a three-step framework, and the way it should be implemented is by having meetings or workshops with both teams together. The first thing we do is put them together and ask the question: what is the problem we are currently trying to solve? And that answer has to be relatively precise, because people tend to start very vague, like “increase profitability.”

No, you have to be precise, and it has to be in plain English, with everybody understanding what the problem is. The second step in this framework is: okay, given what the problem is, and given that we all know what we are doing, what is the waste and what are the missed opportunities in what we are doing? And it’s always fascinating how they start thinking, “Oh yeah, if I knew this, then I could do that, and I’m not doing this.” The teams start realizing that there are many things their prediction is not giving them that could be very useful to actually solve the problem.

And that is the moment in which they start this conversation. The first steps are really very much about getting them to communicate with each other and get on the same page.

CURT NICKISCH: So first you get people together and you define the problem and get everybody to understand what you’re really trying to solve. Then you also step back and analyze what’s currently being done wrong about the process of using AI to answer questions and where the breakdown is happening. Then what do you do?

EVA ASCARZA: Then comes the moment of the actual analysis. You go to the data and start evaluating, from the data, exactly what the magnitude of the mistakes or the missed opportunities is.

So the goal is as follows. First, you want to create a map between what you predict and what you decide. It’s as simple as asking the question: okay, now that all of you know what the problem is, what would you ideally want to know that would fully eliminate any waste or missed opportunities identified by the team? And there, what you are doing is taking AI away for a moment, because what AI does is look at the data and give you the best prediction possible.

Right? So in this case, we’ll say, okay, let’s ignore AI for a second. What exactly would be the ideal information you would like to have? And that’s very easy for them to do, because they have agreed on the steps. Steps one and two are putting them in the right place.

Then you look at the ideal world, and now you bring AI in and say, okay, the ideal world does not exist, right? This is what we do have; these are the possibilities we have, and this is how we deviate from that ideal world. Then you go to the data and compute the cost of doing one thing wrong and the cost of doing the other thing wrong. This is when the team starts actually measuring all the waste or missed opportunities in their current decision approach, by understanding that the AI is not perfect and how these deviations are costing the company money or opportunities.

So this last step has three pieces, right? The first: what would be ideal? This mapping between prediction and decision. The second: okay, now you deviate from that ideal world, and you measure it; you look at the data and you quantify it. And finally, the last one is where you apply this decision framework under uncertainty that you mentioned before, and you ask: what would happen if I expand my decision space? Can I fix this by making this decision more frequently, or by adding another intervention? So it’s changing the decision space from the marketer’s point of view. And at the same time you do the opposite.

You say, okay, what would happen if I start making my predictions more and more granular? Could that fix the problems we identified? And doing this exercise (it’s not something you sit down and do very easily in one day, because step three requires some work) is when the team realizes exactly where all the value they haven’t captured from AI sits. And then it’s very easy to build an action plan from there.
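Step three, quantifying the cost of deviating from the ideal, can often start as a back-of-the-envelope comparison between the current decision cadence and the cadence the predictions would support, as in the hotel pricing example from earlier in the conversation. The sketch below uses invented candidate prices and demand predictions (nothing here comes from the actual hotel chain) to measure the revenue gap between setting one price for the whole period and repricing each day.

```python
# Illustrative quantification of the "waste" from deciding weekly when the
# predictions are daily. Prices and predicted demand below are invented.
CANDIDATE_PRICES = [120, 150, 180, 220]

# Hypothetical output of a granular demand model: rooms sold at each price, per day.
daily_demand = {
    "Mon": {120: 90,  150: 70, 180: 45, 220: 20},
    "Tue": {120: 85,  150: 65, 180: 40, 220: 18},
    "Fri": {120: 100, 150: 95, 180: 85, 220: 60},
    "Sat": {120: 100, 150: 98, 180: 90, 220: 75},
}

def revenue(price, rooms_sold):
    return price * rooms_sold

# Current cadence: one price for the whole period, chosen to maximize total revenue.
best_single_price = max(
    CANDIDATE_PRICES,
    key=lambda p: sum(revenue(p, daily_demand[d][p]) for d in daily_demand),
)
single_price_revenue = sum(
    revenue(best_single_price, daily_demand[d][best_single_price]) for d in daily_demand
)

# Expanded decision space: a (possibly different) price each day.
daily_repricing_revenue = sum(
    max(revenue(p, daily_demand[d][p]) for p in CANDIDATE_PRICES) for d in daily_demand
)

print(f"Single price {best_single_price} for all days: revenue {single_price_revenue}")
print(f"Repricing each day: revenue {daily_repricing_revenue}")
print(f"Measured waste from the coarser cadence: {daily_repricing_revenue - single_price_revenue}")
```

In this toy example the gap (3,600 on revenue of about 49,200) is the kind of number that makes it easy to build the action plan Ascarza describes.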

CURT NICKISCH: Yeah. It’s kind of fascinating, right? You’re not coming out and saying you need better AI, or better tech wizards to implement these systems. You’re really talking about team-by-team management application of good practices.

EVA ASCARZA: Absolutely. I mean, of course there’s always room for improvement in the algorithms we use or the type of data we collect, and marketers will continue getting value from observing more things, being able to predict new stuff, and being more accurate. There’s no question about that. However, I believe AI has come very, very far, and decision scientists have done amazing work in this space. The problem we summarize in this article is human. I’m not a psychologist, but I think you could summarize it in three things. First of all, humans tend to do what they feel comfortable doing. This is what we call the streetlight effect: you do what you know, right? So you predict what you know how to predict, and you act the way you know how to act.

Second, humans are reluctant to change. All of us are. So you were making decisions at some level, and now you have new assistance, but you might not be at the right level anymore. And third, humans are sometimes not very good at admitting what we don’t know. If you have marketers who don’t understand what AI can do for them, they won’t speak up in these meetings. And if you have data scientists who don’t know where the true value really is, they won’t speak up in these meetings. These three things together enable this; it’s value left on the table, so to speak, because there is more value that could be created that is not being captured. So it’s really human, to be honest. I’m sure the AI will have issues too, but those are not the ones we have identified.

CURT NICKISCH: Yeah. If you are one of those humans, right, in this situation, say you’re a member of a marketing team or a data science team, or maybe you’re the person trying to connect the two, what is a good mindset to approach all this with?

EVA ASCARZA: Oh, the mindset is always iteration, and not aiming for perfection at first. This process, this framework that I just described, is really about iterating. We’re not going to get it right at first, but we’re going to improve what we are currently doing, and the next time we will improve it even further, and even further. So it’s really about having our mindset set not on the biggest prize and not on success at first, but on continuous improvement through iteration.

CURT NICKISCH: Does that speak for doing small experiments at first, or just working with AI in smaller, more manageable, simpler problems, just to get going?

EVA ASCARZA: So, to be honest, I think scale here is not the problem. Data science teams are used to working with large-scale data sets, so it’s not about reducing the problem for them. I think it is good to go in smaller steps in the communication between the two. So when we have these meetings with companies, it’s not about taking a smaller data set and understanding what’s happening. It’s more about taking one problem at a time and seeing how AI can help with it.

CURT NICKISCH: Eva, thanks so much for coming on the show to talk about this.

EVA ASCARZA: That was great. Thank you very much for having me.

CURT NICKISCH: That’s Eva Ascarza, an associate professor at Harvard Business School and coauthor of the article “Why You Aren’t Getting More from Your Marketing AI.” It’s in the July/August 2021 issue of Harvard Business Review and at HBR.org.

This episode was produced by Mary Dooe. We get technical help from Rob Eckhardt. Adam Buchholz is our audio product manager. Thanks for listening to the HBR IdeaCast. I’m Curt Nickisch.

This article first appeared on www.hbr.org
