Achieving AI and ML Nirvana – What Will It Take to Get There?
By Kristina LeBlanc, Staff Writer
According to market research firm Trifactica, the global artificial intelligence software market is expected to grow dramatically in the coming years, with revenues increasing from approximately $9.5 billion at the end of 2018 to a projected $118.6 billion by 2025.
“Over the next several years, both AI and machine learning (ML) software will be essential for businesses to stay competitive, enabling targeted customer interactions in both B2B and B2C settings while bolstering operational efficiencies,” says Windy Garrett, Vice President, Cloud Partners at Atos North America.
“The pace at which businesses need to make changes is accelerating,” adds Jennifer Hamel, Research Manager for IDC’s Worldwide Services Team. “Businesses must make sense of information faster, and the more quickly they can harness AI to help with this, the more competitive they will be.”
One major factor contributing to this explosive market growth is the increase in research and development around AI and ML technologies. Microsoft alone has applied for more than 4,000 patents since 1999. Beyond R&D in data science itself, the automation of platforms that enable users without deep data backgrounds to build robust algorithms with relative ease will pave the way for a prolific future with AI. In fact, the pace of development is so rapid that it is getting more difficult for companies to keep up with the key disciplines that provide the necessary underpinnings of AI and ML use cases, namely data quality, data integration, and governance.
Unless significant progress is made in these three areas, organizations implementing AI and ML will have a difficult time achieving their ultimate vision.
According to Garrett, this means “Fully automated, accurate and smart businesses that are uniformly compliant across the enterprise, occur in real-time and require minimal human intervention.”
AI and ML initiatives, and the critical business and customer-facing decisions they support, are only as strong as the quality of the data feeding the algorithm. High-quality data—generally considered to be data that is consistent, trustworthy, accurate and complete—is essential in an era of automated decision making.
“Data quality is a key reason why many organizations don’t yet have the highest levels of confidence in decisions made by machines,” says Ali Zaidi, Research Director for IDC’s Worldwide Project-Based Services Research. “Exceptional data quality is the basic building block of AI initiatives. If you put poor data in, you will get poor intelligence out.”
While there are a variety of tools available on the market, poor data quality continues to run rampant in many enterprises. “Dirty data” can result from a variety of causes, including manual data entry errors, optical character recognition (OCR) mistakes, data transformation errors, duplicate data and more. The costs of poor data quality can be severe in terms of reputation, and there are countless examples of data quality failures, like the one where a retailer made the embarrassing mistake of sending a free personal care product designed for 18-year-old men to its broader customer base. According to an IDC White Paper, sponsored by Seagate, “Data Age 2025: The Digitization of the World from Edge to Core,” global data volumes will increase tenfold between 2016 and 2025(1). As the volume of data increases so, too, will the volume of bad data—unless something is done about it.
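Several of the dirty-data categories above—duplicates and missing or empty fields—can be caught with simple automated checks before records ever reach a model. The following is a minimal sketch in Python; the record layout, the `email` key, and the required-field list are all illustrative assumptions, not taken from any particular product.

```python
from collections import Counter

# Illustrative schema: fields every record is assumed to need.
REQUIRED_FIELDS = ("email", "name", "age")

def profile_records(records):
    """Return simple data-quality counts: duplicate emails and incomplete rows."""
    emails = [r.get("email") for r in records if r.get("email")]
    # Each email appearing n times contributes n - 1 duplicates.
    duplicates = sum(n - 1 for n in Counter(emails).values() if n > 1)
    incomplete = sum(
        1 for r in records
        if any(r.get(field) in (None, "") for field in REQUIRED_FIELDS)
    )
    return {"total": len(records), "duplicates": duplicates, "incomplete": incomplete}

# Example: one duplicated email and one row missing 'age'.
rows = [
    {"email": "a@x.com", "name": "Ann", "age": 34},
    {"email": "a@x.com", "name": "Ann", "age": 34},    # duplicate
    {"email": "b@x.com", "name": "Bob", "age": None},  # incomplete
]
print(profile_records(rows))  # → {'total': 3, 'duplicates': 1, 'incomplete': 1}
```

Checks like these are cheap to run continuously, which matters when data volumes grow faster than any manual review could.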
To render data as agile as possible, organizations must be ready to use it in real-time, meaning they must have full confidence and trust in the data without falling into “paralysis by analysis,” according to Garrett. In fact, human intervention should only be required when the potential business impact of taking the wrong action or making a mistake is relatively minor.
“The goal is that the default for any major decision should be to the AI system as opposed to human instinct,” Garrett says. “As an industry, we are not quite there yet.”
AI algorithms work best when they are based on the richest, most comprehensive data. Consider popular mapping applications: the more high-quality, accurate data they contain (street names, traffic light locations, buildings and landmarks), the easier and more intuitive it becomes to reach a destination.
In a customer relationship management (CRM) scenario, for example, a customer sends a message through a company’s website support widget, where an AI tool enabled with text processing deciphers the tone of the message. If it is classified as negative, a support staff member can be notified and assigned. The CRM system is also accessed to review the customer’s history, which provides context to support customer interaction.
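That workflow can be sketched in a few lines of Python. The keyword-based tone check and the in-memory CRM store below are stand-ins for a real text-processing model and CRM database; every function name, field, and customer ID here is hypothetical.

```python
# Toy stand-in for an NLP sentiment model.
NEGATIVE_WORDS = {"angry", "broken", "refund", "cancel", "terrible"}

# Toy stand-in for a CRM database, keyed by customer ID.
CRM_HISTORY = {
    "c-1001": ["2023-04: billing complaint", "2023-06: upgraded to premium"],
}

def classify_tone(message):
    """Label a message negative if it contains any flagged word."""
    words = set(message.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

def handle_support_message(customer_id, message):
    """Escalate negative messages to a human, with CRM context attached."""
    tone = classify_tone(message)
    if tone != "negative":
        return {"escalate": False, "tone": tone}
    return {
        "escalate": True,
        "tone": tone,
        # Customer history gives the support agent context.
        "history": CRM_HISTORY.get(customer_id, []),
    }

ticket = handle_support_message("c-1001", "my order arrived broken")
print(ticket["escalate"])  # → True
```

In a production system the tone classifier would be a trained model and the history lookup a CRM query, but the routing logic—classify, escalate, enrich with context—follows the same shape.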
AI algorithms cannot work well when enterprise information is siloed. However, recent surveys show that integration hurdles are prevalent, with more than half of all data scientists devoting most of their time to integration-related tasks. The strongest AI applications depend on rich data lakes that pull and integrate data from disparate sources—the cloud, back-end legacy systems, ERP systems and databases among them. This can be a lot harder than one might imagine given that all data must be cleaned, accessible and compatible.
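At a small scale, that integration step amounts to keying records from each source on a shared identifier and reconciling their differing field names and formats. The following is a hedged sketch assuming two sources—an ERP export and web analytics—that both identify customers by ID but disagree on naming; all field names are invented.

```python
def merge_sources(erp_rows, web_rows):
    """Join ERP and web-analytics records on customer ID,
    normalizing the ERP source's differing field names."""
    merged = {}
    for row in erp_rows:
        merged[row["cust_no"]] = {
            "customer_id": row["cust_no"],
            # Normalize stray whitespace and casing from the legacy system.
            "name": row["full_name"].strip().title(),
        }
    for row in web_rows:
        rec = merged.setdefault(
            row["customer_id"], {"customer_id": row["customer_id"]}
        )
        rec["last_visit"] = row["last_visit"]
    return list(merged.values())

erp = [{"cust_no": "42", "full_name": "  JANE DOE "}]
web = [{"customer_id": "42", "last_visit": "2024-01-15"}]
print(merge_sources(erp, web))
# → [{'customer_id': '42', 'name': 'Jane Doe', 'last_visit': '2024-01-15'}]
```

Real data lakes handle this at vastly larger scale with dedicated pipelines, but the hard parts are the same: agreeing on identifiers, reconciling schemas, and cleaning values on the way in.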
By evangelizing the concept of AI, organizations can help rally employees around the importance of creating, formatting, labeling and positioning data across departmental boundaries, according to Garrett. AI stands for “all-in” in this regard, and if everyone is on board, the outcomes become much more powerful.
One of the great things about AI is that once a proof of concept has been established in one department, the AI algorithm can be used elsewhere by making modifications to meet the unique needs of other departments. What is often harder to achieve, and tends to not be so uniform, is enterprise-wide AI governance—that is, ensuring that AI is only used in certain ways that align with the ethical principles and values of the organization.
It can be difficult to enforce a uniform approach to governance across all departments given that different enterprises have articulated different sets of AI principles, according to Garrett. Microsoft, for instance, includes a call for transparency where AI systems must be understandable. The company also demands accountability to ensure it’s possible to track the algorithm to explain certain outcomes, for example, why a person was denied a credit card or why a request for medical treatment was rejected.
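One common way to support that kind of accountability is to record, alongside every automated decision, the inputs and the rule or model version that produced it, so the outcome can be reconstructed later. The following is a minimal sketch; the credit rule, thresholds, and field names are invented for illustration and do not reflect any real policy.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice, an append-only audit store

def decide_credit(applicant):
    """Toy rule-based credit decision that leaves an auditable trail."""
    approved = (
        applicant["income"] >= 30000 and applicant["missed_payments"] == 0
    )
    reason = (
        "income and payment history within policy"
        if approved
        else "income below threshold or missed payments on record"
    )
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": "rules-v1",   # which logic produced the outcome
        "inputs": dict(applicant),     # snapshot of what was evaluated
        "approved": approved,
        "reason": reason,
    })
    return approved

decide_credit({"income": 25000, "missed_payments": 1})
print(AUDIT_LOG[-1]["reason"])
# → income below threshold or missed payments on record
```

With a trail like this, explaining why a credit card was denied becomes a lookup rather than a reconstruction—precisely the traceability the accountability principle calls for.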
AI and ML have the potential to touch people’s lives in very personal ways, making it imperative to enforce enterprise-wide rules ensuring AI systems are designed, developed and deployed in a manner that maintains accuracy, fairness, ethics, data privacy and security. Effective AI governance is something that many organizations struggle with; however, as AI adoption increases, effective governance will be critical to ensuring the perils of AI do not outweigh the promise.
“With AI and ML technology developing at such a rapid pace, it can be hard for businesses to keep up in ancillary areas, namely data quality, data integration, and governance,” Garrett says. “As business decisions are increasingly machine-driven and the role of human intuition diminishes, it will be critical to address these foundations.”
It takes a clear strategy and statement of purpose to effectively implement sustainable AI practices. Once the vision is clear, culture needs to be addressed to create a matrixed organization of data owners across relevant lines of business and operational groups. Once this is established, awareness campaigns can motivate broader use of AI that will exponentially improve data quality, accessibility and governance. Just like the algorithm, the business begins to train itself. With that, AI becomes embedded in the operational culture of the company.
(1) IDC White Paper, sponsored by Seagate, “Data Age 2025: The Digitization of the World from Edge to Core,” Doc #US44413318, November 2018