What kind of manager would put their full faith in an AI system? What kind would brush aside AI in favor of their own conclusions? When it comes to high-level strategic decisions, many executives still go with their gut rather than the machine. Is this a good thing?
AI is starting to play a key part in many areas: customer personalization, sales recommendations, financial portfolio recommendations, aircraft collision avoidance, semi-autonomous vehicles, and medical screening. Such applications require on-the-spot decisions, often involving low-level functions that flow from system to system. It’s notable that the main business cases being promoted thus far are relatively tactical solutions. “Rote automation is not the big opportunity here. It’s better strategic thinking, innovation, decision making,” notes Dion Hinchliffe, analyst with Constellation Research.
Higher-level, more strategic decisions that shape the direction of a business represent the last great frontier for AI in enterprises. And, to date, there is no shortage of skepticism among decision makers when it comes to strategic AI.
When faced with identical AI outputs, many businesspeople still reach their own, divergent decisions, a recent study concludes. The “human filter makes all the difference in organizations’ AI-based decisions,” according to Philip Meissner and Christoph Keding, both with ESCP, whose survey of 140 executives was published in MIT Sloan Management Review.
The researchers presented participants with a purportedly AI-generated recommendation of a new technology that would enable them to pursue potential new business opportunities, and asked them how much they trusted the AI recommendation. Many, it turns out, didn’t put full faith in the output, and still went with their own choices. On the other hand, some executives were only too willing to rely on AI. The researchers divided the respondents into three types of decision-makers — “skeptics,” “interactors,” and “delegators.” Skeptics “seem reluctant to lose autonomy in the process,” while delegators “who typically postpone decisions are happy to delegate decision-making responsibility to AI.”
The skeptics in the group “do not follow the AI-based recommendations, preferring to control the process themselves,” Meissner and Keding state. “These managers do not want to make strategic decisions based on the analysis performed by what they perceive as a black box that they do not fully understand. Skeptics are very analytical themselves and need to comprehend details before making a commitment in the decision process. When using AI, skeptics can fall prey to a false illusion of control, which leads them to be overly confident in their own judgment and underestimate AI’s.”
At the other end of the spectrum, delegators “largely transfer their decision-making authority to AI in order to reduce their perceived individual risk. For these executives, AI use significantly increases the speed of the strategic decision-making process and can break a potential gridlock. However, delegators may also misuse AI to avoid personal responsibility; they might rely on its recommendations as a personal insurance policy in case something goes wrong. This risk shift from the decision maker to the machine could induce unjustified risk taking for the company.”
“These different decision-making archetypes show that the quality of the AI recommendation itself is only half of the equation in assessing the quality of AI-based decision-making in organizations,” Meissner and Keding state.