
Key Elements for Operationalizing AI in the Enterprise

In recent months, the field of Artificial Intelligence (AI) has experienced astronomical growth, giving rise not only to new capabilities but also to entirely new ways of developing products and approaching problems. As innovative as these advancements are, staying up to date on their potential applications can be challenging, especially for technology leaders who are already juggling a multitude of mission-critical responsibilities, ranging from cloud migrations to securing the entire business.

In a recent interview with McKinsey & Company, Google Cloud CEO Thomas Kurian put it best when he observed that customers will often express what they want before what they actually need, shaped by their beliefs about what is currently possible, the vast number of tasks they are already managing, and the accelerating pace of innovation in the technology space. This, coupled with the prediction that 85% of AI projects will deliver erroneous outcomes, raises the stakes for technology partners to help organizations apply these capabilities to the right use cases and create value.

Often the difference between success and failure in an AI initiative has less to do with developing a particular algorithm and more to do with having the right operational, infrastructure, and cultural elements in place to execute. With that in mind, this article explores three key factors that are critical when applying and operationalizing AI in the enterprise.

Identifying and pursuing the right use case

While identifying the right use case for an AI initiative seems obvious, this first step ultimately shapes the implementation strategy, resource allocation, risk and privacy assessments, and timeline. Amidst the excitement surrounding the advancements in Machine Learning (ML), it is imperative not to rush this foundational phase and to carefully evaluate all potential use cases at the start of the planning process. The following framework outlines four key questions to assess the potential of enterprise AI use cases:

  1. Is the problem well-defined and applicable to AI/ML solutions? It is critical to ensure that the use case being addressed is both clearly articulated and suited to the capabilities of AI/ML. On this subject, Obsidian Strategies Founder and CEO Michelle Lee insightfully notes that “every company has a machine learning opportunity; however, not every problem is solvable by machine learning.”
  2. Is there sufficient data to address the use case? Data is the lifeblood of AI/ML models. In many cases, most of the time spent on an AI project happens in the trenches of the data consolidation, cleaning, labeling, and normalization phases (a brief data-preparation sketch follows this list). Addressing this question early is critical: if data is limited, teams may need to select models or technologies that require less training data, or deprioritize the use case until more data is gathered.
  3. What is the quantifiable business impact and value? Successful AI use cases are typically tied to a particular business problem with defined success criteria that can be measured in ROI, risk mitigation, improved customer experiences, and so on. When selecting the right use case, outlining the potential business value and determining how success will be quantified keeps the initiative focused and helps gain broader organizational support.
  4. What is the risk profile of the use case? Different use cases carry different levels of risk based on the sensitivity of the application and its data privacy requirements. For instance, an application used to determine a user’s credit score will require a different risk approach than a product discovery model. This doesn’t mean that higher-risk use cases should be disregarded, but rather that they should be approached with the right risk and change management framework.

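To make the data question above concrete, the sketch below illustrates the kind of consolidation, cleaning, and normalization work that typically dominates early project phases. It is a minimal illustration using pandas; the file names, column names (customer_id, purchase_amount, churned), and sufficiency check are entirely hypothetical and would be shaped by the actual use case and data sources.

```python
import pandas as pd

# Consolidate: combine records from two hypothetical source systems.
crm = pd.read_csv("crm_export.csv")          # assumed file
billing = pd.read_csv("billing_export.csv")  # assumed file
df = pd.concat([crm, billing], ignore_index=True)

# Clean: drop duplicate customers and rows missing the target label.
df = df.drop_duplicates(subset="customer_id")
df = df.dropna(subset=["churned"])  # 'churned' is the hypothetical label

# Normalize: scale a numeric feature to zero mean and unit variance.
df["purchase_amount"] = (
    df["purchase_amount"] - df["purchase_amount"].mean()
) / df["purchase_amount"].std()

# A quick sufficiency check before committing to model development.
print(f"{len(df)} labeled rows available; "
      f"{df['churned'].mean():.1%} positive class")
```
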
Discussing these questions with stakeholders across different domains early in the planning process helps teams avoid pursuing a solution in search of a problem, and instead prioritize use cases that align with their business goals and deliver real value. A recent example of this approach is Morgan Stanley’s use of OpenAI’s GPT-4 model to organize its extensive knowledge base, enabling its wealth management personnel to quickly access relevant information stored across hundreds of thousands of pages of market research and investment strategy content. As outlined by Jeff McMillan – Head of Analytics, Data & Innovation at Morgan Stanley – the power of large language models (LLMs) has enabled the firm to harness its intellectual capital and create an AI agent that has “the knowledge of the most knowledgeable person in wealth management – instantly,” a truly transformative capability. The Morgan Stanley case stands out because it exemplifies a nearly century-old organization becoming an early adopter in applying LLMs to value-driven use cases within the hyper-competitive field of wealth management.
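
The source material does not detail Morgan Stanley’s internal architecture, but the general pattern of pairing an LLM with a searchable knowledge base is worth sketching. Below is a minimal embedding-based retrieval step using OpenAI’s Python client; the document snippets are placeholders and the model name is illustrative.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A stand-in for a firm's knowledge base of research documents.
documents = [
    "2023 outlook: rate-sensitive sectors and fixed-income strategy...",
    "Guidance on rebalancing retirement portfolios in volatile markets...",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def retrieve(question, k=1):
    """Rank documents by cosine similarity to the question."""
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# Retrieved passages would then be passed to the LLM as grounding context.
context = retrieve("How should clients think about rebalancing?")
```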

Selecting the right technology

Selecting the right technology is a crucial factor that varies across use cases and teams. Advancements in ML and high-performance cloud infrastructure have provided technology leaders with a vast array of options, and in some respects the sheer number of choices and the pace at which new solutions enter the market can complicate the selection process. The field of AI is particularly intriguing when it comes to technology evaluation: until recently, there were significant barriers to adoption due to the massive computational resources needed to develop AI at scale, but these proverbial technology moats are being bridged by advancements in cloud infrastructure and new ML-driven solutions. Depending on the use case, available resources, and team, there are four important technology layers that can be leveraged when developing and applying AI in the enterprise.

  1. Infrastructure Layer: Many organizations and teams will only require access to the raw compute resources needed for developing and scaling AI capabilities. This is particularly relevant with R&D and engineering teams aiming to create proprietary IP and have access to large amounts of training data. Advancements in high-performance cloud computing have expanded the possibilities in this area through AI-specific infrastructures powered by Graphics Processing Units (GPUs) and even Tensor Processing Units (TPUs).
  2. Platform Layer: Situated above the infrastructure layer is the platform layer offered by major cloud providers. Recent advancements are providing customers with comprehensive platforms to manage the entire ML lifecycle, from data preparation and model training to deployment and ongoing monitoring. This layer offers more pre-built functionality, coupled with the flexibility to develop algorithms for specific use cases. The Databricks Platform is a strong example of this versatility, seamlessly integrating data unification, engineering, governance, and machine learning solutions into a cohesive end-to-end ecosystem.
  3. Foundational ML Models: Amongst the most exciting technology areas are the pre-built ML solutions rapidly entering the market. These tools enable developers to tailor and deploy pre-built models for their specific use cases (a minimal sketch follows this list). A prime example is the democratization of Natural Language Processing (NLP) through LLMs offered by start-ups like Anthropic, Cohere, and OpenAI. The Morgan Stanley story exemplifies how a large enterprise can apply LLMs to the right use case and quickly bring differentiating AI capabilities to market.
  4. AI-Infused Business Applications: As new capabilities emerge through the previous three layers, we are witnessing an increasing number of business applications incorporating AI-powered agents to assist users and execute processes. In recent months, multiple examples have been released across various applications, including Microsoft’s Copilot, Google Workspace, and Salesforce’s Einstein GPT. By integrating AI capabilities at the application level, technology providers can deliver immense value to customers, enabling them to modernize applications without overhauling the business process layer. This is particularly relevant for mission-critical applications like ERP, where reinventing the core system can take years and require tens to hundreds of millions of dollars in investment. AI-embedded agents hold the promise of analyzing larger datasets (structured and unstructured), surfacing insights for users, and dynamically executing actions based on context rather than static business rules.
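
To ground the foundational-model layer (item 3 above), the sketch below calls a hosted LLM through Anthropic’s Python SDK. The task, prompt, and model name are illustrative rather than prescriptive; Cohere and OpenAI offer comparable APIs.

```python
from anthropic import Anthropic

client = Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# A pre-built foundational model applied to a specific enterprise task:
# summarizing an internal research note (placeholder text).
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model choice
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "Summarize the key risks in this research note: ...",
    }],
)
print(response.content[0].text)
```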

Developing a scalable risk framework

In a recent New York Times op-ed, Thomas Friedman characterized the rapid advancements in AI as a “Promethean moment” after witnessing the capabilities of OpenAI’s GPT-4. The metaphor captures both the immense potential for good that AI offers and the inherent risks that come with such technology. The increasing pace of development, combined with organizations’ growing appetite to become early adopters, amplifies the importance of defining the right risk guardrails early in the assessment process. Incorporating compliance and legal stakeholders in the evaluation cycle and creating a scalable risk framework will enable organizations to harness the value AI provides while minimizing potential legal, reputational, and cultural consequences. In an article by McKinsey & Company, Senior Partners Kevin Buehler and Alex Singla and Partner Liz Grennan present a comprehensive framework for addressing the wide range of AI risks that should be contemplated when prioritizing initiatives. Their approach recommends developing a catalog of specific AI risks that can be assessed by a multidisciplinary “tech trust team” tasked with identifying and mitigating various aspects of exposure. The outlined methodology describes a 6-by-6 framework for mapping AI risks against different business contexts, which is captured in Figure 1 below:

Figure 1. Risk framework from McKinsey & Company’s “Getting to know—and manage—your biggest AI risks”


A powerful aspect of this framework is that it can be executed as a repeatable process, helping different teams quickly evaluate their risk levels without sacrificing speed of execution. The structure is also flexible, incorporating a broad range of categories that each carry different consequences. For instance, in the Privacy example the article breaks down the legal, monetary, and consumer-trust implications that can arise from the misuse of data when building ML models. This is particularly relevant as policymakers around the world rush to update AI and data privacy regulations to keep pace with recent exponential advances. Setting aside any regulatory debate, the reality is that organizations must remain adaptable in their ongoing assessment of AI and data policies in the industries and jurisdictions in which they operate.
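
The McKinsey article does not prescribe an implementation, but the idea of a repeatable, team-friendly risk evaluation can be captured in a simple scoring structure. The sketch below is a hypothetical illustration only: the category and context lists stand in for the framework’s actual 6-by-6 axes, and severity scores are assumed to run from 1 (low) to 5 (high).

```python
from dataclasses import dataclass, field

# Illustrative axes; the McKinsey framework defines its own 6-by-6 matrix.
RISK_CATEGORIES = ["privacy", "security", "fairness", "transparency",
                   "performance", "third-party"]
BUSINESS_CONTEXTS = ["legal", "reputational", "financial",
                     "operational", "customer trust", "regulatory"]

@dataclass
class UseCaseRiskProfile:
    name: str
    # scores[(category, context)] -> severity from 1 (low) to 5 (high)
    scores: dict = field(default_factory=dict)

    def score(self, category, context, severity):
        assert category in RISK_CATEGORIES and context in BUSINESS_CONTEXTS
        self.scores[(category, context)] = severity

    def hotspots(self, threshold=4):
        """Cells a 'tech trust team' should review before approval."""
        return [cell for cell, s in self.scores.items() if s >= threshold]

profile = UseCaseRiskProfile("mortgage approval model")
profile.score("fairness", "regulatory", 5)
profile.score("privacy", "legal", 4)
print(profile.hotspots())  # [('fairness', 'regulatory'), ('privacy', 'legal')]
```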

Another important example of an AI risk featured in the framework is Transparency and Explainability. From a transparency perspective, limited visibility into how a model was trained and tested can create challenges, both internally and externally, around understanding what data was used and how the model combats potential biases. Explainability – being able to understand how a model arrived at a particular output – is also an important risk consideration, and its significance varies with the use case at hand. For instance, if a model is trained to analyze and approve mortgage applications, a high degree of explainability is critical because the decision affects people’s ability to get loans and purchase a home. The level of explainability required is an important early development consideration, as it will influence model selection; certain AI approaches are far more explainable than others. There are several other important factors and insights outlined in the McKinsey article, which is a highly recommended read. Having the right risk strategy and structure embedded early in the DNA of the AI development process is important from an ethical and compliance perspective. Furthermore, a risk strategy that is clearly understood will be an ally in the change management process when it comes to building internal support and trust around an initiative.
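
As a concrete illustration of the explainability spectrum, the sketch below trains an intrinsically interpretable model (logistic regression) on synthetic data and reads off per-feature coefficients; extracting comparable insight from a deep neural network would require post-hoc tooling such as SHAP or LIME. The feature names and data are entirely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical mortgage-application features: income, debt ratio, credit score.
X = rng.normal(size=(500, 3))
# Synthetic approval labels driven mostly by the first two features.
y = (1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# An intrinsically explainable model: each coefficient states how a feature
# pushes the approval odds, which can be communicated to regulators and
# applicants in plain language.
for name, coef in zip(["income", "debt_ratio", "credit_score"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```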

As AI and ML capabilities continue to grow and barriers to adoption are lowered, it becomes increasingly vital for organizations to establish and operationalize robust frameworks for prioritization, assessment, and risk mitigation. The rapid pace of innovation and expanding range of options call for a strategic and measured approach to ensure the successful execution of AI initiatives. By focusing on identifying the right use cases, selecting appropriate technology, and developing a scalable risk framework, organizations can harness the transformative potential of AI to deliver tangible value to employees and customers alike. As the AI landscape evolves, organizations that embrace a culture of adaptability, collaboration, and risk-awareness will be better positioned to leverage AI-driven opportunities and thrive in an increasingly competitive market.

Rab Bruce-Lockhart

Chief Revenue Officer

22nd January 2025
