My mother’s house runs on oil heat. A couple of years back, she needed a new oil tank for her property, and her provider gave her a quote on installing a new one. To make sure she was getting the best price, I went online and did searches on oil heating tanks. Lo and behold, over the next several weeks, I was inundated with online offers for oil heating storage tanks. It’s as if I was restocking my supply of oil storage tanks on a weekly basis.
That’s the story of, that’s the glory of, artificial intelligence. Sometimes, it defies common sense. Worse yet, it may dampen the spontaneous interactions and ideas that flow freely across enterprises.
Successful companies thrive on spontaneity, serendipity and spirit, not mechanized, robotic decision-making. AI, on the other hand, relies on data inputs and machine-generated outputs to guide decision-making at all levels. While operating as a data-driven enterprise may open up new vistas, it could also dampen the human insight that advances things to new levels.
This paradox was raised by Sylvain Duranton, senior partner at Boston Consulting Group, who warned in a TED Talk that AI has been going against the grain of innovation. “For the last 10 years, many companies have been trying to become less bureaucratic, to have fewer central rules and procedures, more autonomy for their local teams to be more agile. And now they are pushing artificial intelligence, AI, unaware that cool technology might make them more bureaucratic than ever,” he says.
Duranton sees this paradox first-hand. “I’m leading a team of 800 AI specialists; we have deployed over 100 customized AI solutions for large companies around the world,” he relates. “I see too many corporate executives behaving like bureaucrats from the past. They want to take costly, old-fashioned humans out of the loop and rely only upon AI to take decisions.”
AI inherently operates just like bureaucracies, he adds. “The essence of bureaucracy is to favor rules and procedures over human judgment. And if human judgment is not kept in the loop, AI will bring a terrifying form of new bureaucracy — I call it ‘algocracy,’ where AI will take more and more critical decisions by the rules outside of any human control.”
The results of bureaucratic algocracy could be devastating, affecting everything from university admissions to aircraft performance to supply chains when a crisis hits. That’s why humans need to provide input into AI decisions.
It should be added that it takes humans to design forward-thinking processes and companies; tools such as AI are only that: tools that help make things happen. As with many technology innovations, it is often assumed that dropping AI into a moribund, calcified organization will magically produce insights and profitability.
AI should serve as “augmented” intelligence to support human decision-making — not the other way around. In a survey of 305 executives conducted by Forbes Insights in 2018, only 16 percent indicated they had full trust in low-level decisions (e.g., flagging errors, sending notifications, accepting payments, managing system performance) delivered by AI, and only 6 percent had full trust in mid-level decisions (e.g., helping customers with problems, serving as an intelligent agent to employees). Yet, in a separate survey conducted around the same time, only 37 percent had processes in place to augment or override results if their AI system produced questionable or unsatisfactory results. (I helped design and author both surveys as part of my work with Forbes Insights.)
People need to be part of AI decision-making processes, Duranton urged in his talk, calling this formula “Human plus AI.” The formula: invest 10 percent of the effort in coding algorithms, and 20 percent in building technology around the algorithms: collecting data, building user interfaces, and integrating with legacy systems. “But 70 percent, the bulk of the effort, is about weaving together AI with people and processes to maximize real outcome.”
It takes collaboration for that 70 percent human element, Duranton says. “The first step is to make sure that algos are coded by data scientists and domain experts together. Seventy percent weaving AI with teams and processes also means building powerful interfaces for humans and AI to solve the most difficult problems together.”
This formula has been applied in his field work. For example, the team worked with doctors prescribing a new medication that carried a heart-attack risk; the algorithm determined that about 40 percent of patients could safely take their first dose at home. “Would you be comfortable staying at home for your first dose if the algo said so?” he asks. “Doctors were not… There started our 70 percent.” After four months, Duranton’s team and the doctors built a model that blended AI with human judgment, which “resulted in far less stress for half of the patients and better quality of life.”
The 70 percent human element, he adds, “also means humans have to decide what is right or wrong, and define rules for what AI can do or not, like setting caps on prices to prevent pricing engines from charging outrageously high prices to uneducated customers who would accept them. Only humans can define those boundaries — there is no way AI can find them in past data.”
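Duranton’s price-cap example can be sketched as a thin guardrail layer wrapped around a pricing model. This is a minimal, hypothetical illustration: the names (`PriceGuardrail`, `suggest_price`) and the stand-in pricing model are invented for the sketch, not taken from any real system — the point is only that the floor and cap are set by humans, outside the model.

```python
# Hypothetical sketch of a human-defined guardrail around an AI pricing engine.
# The "AI" here is a stand-in function; real engines would be far more complex,
# but the guardrail sits outside the model either way.

from dataclasses import dataclass


@dataclass
class PriceGuardrail:
    """Human-set boundaries the pricing engine may not cross."""
    floor: float  # never sell below this (e.g., cost)
    cap: float    # never exploit a customer's willingness to pay

    def apply(self, suggested: float) -> tuple[float, bool]:
        """Clamp the model's suggestion; report whether it was overridden."""
        clamped = min(max(suggested, self.floor), self.cap)
        return clamped, clamped != suggested


def suggest_price(demand_signal: float) -> float:
    # Stand-in for an opaque pricing model that scales price with demand.
    return 80.0 + 120.0 * demand_signal


guardrail = PriceGuardrail(floor=90.0, cap=150.0)

for demand in (0.1, 0.5, 0.9):
    raw = suggest_price(demand)
    final, overridden = guardrail.apply(raw)
    print(f"demand={demand:.1f} model={raw:.2f} "
          f"charged={final:.2f} overridden={overridden}")
```

Note that the guardrail is deliberately dumb: it encodes a boundary a domain expert chose, which, as Duranton says, no amount of past data would surface on its own.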