
Artificial intelligence’s data problem meets AI’s people problem


It takes a well-designed information architecture — IA — to ensure good AI. The challenge is getting both people and data on the same page when it comes to AI work. And there’s much work to be done on both fronts.

[Image: National Gallery of Art, Washington, DC. Photo: Joe McKendrick]

That’s the word from Seth Dobrin, global chief AI officer at IBM. “Data is the food for AI, yet few organizations sit down at the table to design an AI strategy with a full accounting of where all their data resides and how organized it is,” he says. “IT professionals are drawing from at least 20 data sources to inform their AI, and some have to draw from hundreds, so this is a big data infrastructure issue.”

Data is required for AI, he continues, pointing to the need for an information architecture approach. “There is no AI without IA. Today’s data landscape is hybrid and multicloud; the answer cannot be, and is not, to centralize all data. The answer is that AI is enabled by a data fabric that ensures privacy, compliance, and security at scale.”

Questions that need to be addressed include what data a solution is using, and whether it needs to collect all of that data to function. In addition, there needs to be an examination of how the data is being stored, and for how long. “These are questions that require a large number of perspectives within a single organization to answer. Enterprise design thinking for data and AI offers an approach to help set clear intent and plans that connect the business strategy to the AI strategy to the execution.”
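
Those questions lend themselves to a structured data inventory that the different perspectives can review together. Here is a minimal sketch in Python of what one such record might look like; the field names (storage, retention_days, used_by_model) and the review thresholds are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DataSourceRecord:
    """One entry in a hypothetical AI data inventory."""
    name: str             # e.g. "crm_contacts"
    storage: str          # where the data lives, e.g. "on-prem warehouse", "cloud object store"
    retention_days: int   # how long the data is kept
    contains_pii: bool    # does the source hold personal data?
    used_by_model: bool   # is the AI solution actually consuming it?

def flag_review_candidates(inventory: list[DataSourceRecord]) -> list[str]:
    """Flag sources that are collected but unused, or that hold PII with long retention."""
    flags = []
    for rec in inventory:
        if not rec.used_by_model:
            flags.append(f"{rec.name}: collected but not used by the model")
        if rec.contains_pii and rec.retention_days > 365:
            flags.append(f"{rec.name}: PII retained for {rec.retention_days} days")
    return flags

if __name__ == "__main__":
    inventory = [
        DataSourceRecord("crm_contacts", "cloud object store", 730, True, True),
        DataSourceRecord("web_clickstream", "on-prem warehouse", 90, False, False),
    ]
    for issue in flag_review_candidates(inventory):
        print(issue)
```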

Collaboration is key to such efforts, as AI is a human-centered endeavor. “We’ve found that when AI deployments come solely from the business side, or the IT side, or the data management side, crucial insights are inevitably lost,” Dobrin says. “When AI solutions are cobbled together and rushed into production, it becomes very difficult for business leaders and consumers alike to trust them.”

When it comes to ensuring ethical and unbiased AI, there’s still much work to be done, Dobrin cautions. “Companies still have a way to go as far as ensuring their AI and its results are carefully audited, maintained and improved.” At the same time, he notes, “businesses are now much more aware of the importance of having trustworthy AI.” The barriers to achieving this include “lack of skills, inflexible governance tools, biased data and more. It’s clear that while there are tools and frameworks in the market to help build trustworthy AI, there is still work to be done helping businesses develop a comprehensive approach to AI governance that brings together tools, solutions, practices, and the right people to govern AI responsibly across its lifecycle.”
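
One concrete piece of that auditing work is checking model outcomes for bias across groups. Below is a minimal sketch, assuming binary favorable/unfavorable decisions and a single group label per record; the 0.8 threshold in the comment reflects the common four-fifths rule of thumb, not anything specific to IBM’s tooling.

```python
def disparate_impact(decisions: list[int], groups: list[str],
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group.
    decisions: 1 = favorable outcome, 0 = unfavorable.
    """
    def positive_rate(group: str) -> float:
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    ref_rate = positive_rate(reference)
    return positive_rate(protected) / ref_rate if ref_rate else 0.0

# Toy example: loan approvals for two applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")  # a value below 0.8 would warrant review
```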

This is made more complicated by the fact that in “highly mature organizations, the AI technology landscape is very heterogeneous and the AI teams are distributed across various lines of business,” he adds. “The AI landscape will only become more heterogeneous. The need for automated AI governance on top of a heterogeneous AI landscape, providing transparency, explainability, fairness, robustness and privacy on top of existing tooling, is only heightened; the trick is to tie them all together.”
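
Tying heterogeneous tooling together generally means normalizing each tool’s output into a uniform governance record per model. The sketch below illustrates that idea; the ModelGovernanceRecord structure and its pass/fail checks are invented for illustration and do not reflect any particular product’s schema.

```python
from dataclasses import dataclass, field

# The five pillars Dobrin mentions, tracked as named checks per model.
PILLARS = ("transparency", "explainability", "fairness", "robustness", "privacy")

@dataclass
class ModelGovernanceRecord:
    """Hypothetical uniform record aggregated from heterogeneous AI tooling."""
    model_name: str
    owner: str                                              # line of business responsible
    checks: dict[str, bool] = field(default_factory=dict)   # pillar -> passed?

    def gaps(self) -> list[str]:
        """Pillars that are missing or failing for this model."""
        return [p for p in PILLARS if not self.checks.get(p, False)]

# Records for models built on different stacks, normalized into one view.
registry = [
    ModelGovernanceRecord("churn_model", "marketing",
                          {"transparency": True, "fairness": True, "privacy": True}),
    ModelGovernanceRecord("credit_scoring", "risk",
                          {p: True for p in PILLARS}),
]

for rec in registry:
    missing = rec.gaps()
    status = "OK" if not missing else f"needs attention: {', '.join(missing)}"
    print(f"{rec.model_name} ({rec.owner}): {status}")
```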

The lesson many enterprises have learned is that AI implementation “is not the field of dreams,” he continues. “It is not a build-it-and-they-will-come effort. Successful AI implementation is human centered, tied to business strategy, has clear value-based success metrics and is trusted. When companies hit all four of these, AI inevitably gets adopted, and then adds value.”


