April 10, 2023

Why So Many AI Initiatives Fail

You’re looking on your iPhone for a particular picture of your friend, taken a couple of years ago. There are thousands of images to search through, but the Apple Photos app zeroes in on the right person, and, presto, within seconds you find the picture you’re looking for.

There’s a lot at work behind the scenes to make this happen: facial recognition, image analysis and automatic tagging come together to save effort by making inferences about what’s needed or wanted, and then acting on those inferences in real time.

Companies like Apple, Google, FedEx, Uber and Netflix have spent years building systems and architectures that make user experiences easier, more personal and more intuitive. In some cases, artificial intelligence enables key decisions to be made nearly instantaneously, or predictions to occur in real time, empowering a business to improve outcomes in the moment.

This isn’t lost on the broader universe of enterprises: According to a 2022 Deloitte survey, 94% of business leaders say that AI is critical to success. 

So why is it that, for most organizations, building successful AI applications is a huge challenge? It boils down to three big hurdles: the wrong data, in the wrong infrastructure, at the wrong time.

Hurdles to AI Success

According to McKinsey, 56% of companies have adopted AI, but, as Accenture notes in a report, only 12% succeed in achieving superior growth and business transformation with AI.

Many stumbling blocks stand in the way of successfully building AI into real-time applications, but most are related to one central element: data.

Many traditional ML/AI systems, and the outcomes they produce, rely on data warehouses and batch processing. The result: a complex array of technologies, data movements and transformations is required to “bring” this historical data to machine learning systems.

The data that are fed into an ML model are called features (measurable properties that can be used for analysis), which are generally based on the data stored in an application database or written to log files. They often require transformations, such as scaling values or computations based on prior records (for example, a moving average at the time a record was generated). 
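
To make this concrete, here’s a minimal sketch in Python (using pandas) of the kinds of transformations described above; the records, column names and window size are purely illustrative, not taken from any particular system.

```python
# Hypothetical raw records: one row per transaction, as they might appear
# in an application database or a log file.
import pandas as pd

raw = pd.DataFrame({
    "user_id": [42, 42, 42, 42],
    "ts": pd.to_datetime(["2023-04-01", "2023-04-03", "2023-04-07", "2023-04-10"]),
    "amount": [12.0, 80.0, 25.0, 300.0],
}).sort_values("ts")

# Feature 1: the amount, scaled to zero mean and unit variance.
raw["amount_scaled"] = (raw["amount"] - raw["amount"].mean()) / raw["amount"].std()

# Feature 2: a moving average of the user's previous three transactions,
# computed as of each record. shift(1) restricts the feature to data that
# existed before the record was generated.
raw["amount_ma3"] = (
    raw.groupby("user_id")["amount"]
       .transform(lambda s: s.shift(1).rolling(window=3, min_periods=1).mean())
)

print(raw[["ts", "amount", "amount_scaled", "amount_ma3"]])
```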

This generally slows the flow of data from input to decision to output, resulting in missed opportunities that can lead to customer churn, or in recognized cybersecurity threat patterns going undetected and unmitigated. The challenges can be summed up as having the wrong datasets, supported by misaligned infrastructure that moves too slowly.

The Wrong Data …

Because of the sheer volume of data (and the related costs), data has to be aggregated for ease of transport and availability. Simply put, data that’s aggregated or excessively transformed prevents organizations from easily identifying the right actions in real time and lessens the odds of achieving a preferred outcome, whether that’s a suggested product, an updated package delivery route or an adjusted setting on a machine in a factory. It also slows an organization’s ability to answer new questions, predict outcomes or adapt to a rapidly evolving context.

Data scientists are forced to use coarse-grained datasets that drive vague predictions, which in turn don’t deliver the expected business impact, especially in discrete contexts like a customer session. They also might not be told when applications are reconfigured or data sources evolve, so essential events never make it into features. That missing data results in poorly informed model selection and less accurate predictions; worse, models built on erroneous data can drive wrong decisions.

Finally, aggregation is geared toward producing the existing features. Engineering new features (processing the data needed to choose and train models) requires going back to the raw data for different aggregations. This additional processing significantly slows the work of data scientists, extending the experimentation process.
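
As a hypothetical illustration, compare a pre-aggregated daily table with the raw, time-stamped events it was built from: a new feature can often only be computed from the latter. The schema and numbers below are invented for the example.

```python
import pandas as pd

# Raw, time-stamped events: the most granular form of the data.
events = pd.DataFrame({
    "user_id": [7, 7, 7, 9],
    "ts": pd.to_datetime([
        "2023-04-10 09:05", "2023-04-10 09:40",
        "2023-04-10 21:15", "2023-04-10 10:02",
    ]),
    "amount": [20.0, 15.0, 99.0, 5.0],
})

# Existing feature: daily spend per user. This is the kind of coarse
# aggregate that typically lands in the warehouse, and it discards the
# individual timestamps.
daily_spend = events.groupby(["user_id", events["ts"].dt.date])["amount"].sum()
print(daily_spend)

# New feature: evening spend (events after 18:00). The daily table above
# can't answer this question; only the raw events can, so the aggregation
# has to be redone from scratch.
evening_spend = events[events["ts"].dt.hour >= 18].groupby("user_id")["amount"].sum()
print(evening_spend)
```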

… in the Wrong Infrastructure …

The second challenge relates to the current ML infrastructures that power AI initiatives and their inability to process datasets at scale. The quality of the models, and of their outcomes, increases with the volume of event data ingested. Organizations often need to process massive volumes of events, volumes that legacy infrastructures simply can’t handle.

The sequence of training models and then serving them for inference becomes complex, especially because it requires data movement between each stage. Attempting to handle the scale required for high-quality predictions pushes traditional architectures to their breaking point. It’s also painfully slow, unreliable and costly. All of this threatens the value and impact of applications that are increasingly mission critical.

… at the Wrong Time

Another stumbling block arises from processing data too late to make any significant impact. Current architectures require data processing through multiple systems to serve a model, and this introduces latency that affects AI initiatives in various ways:

  • The model’s output can’t alter the course of a developing situation. For example, it proposes an offer to a customer only after the likelihood of conversion has declined, and the customer might already have purchased something else.
  • The time it takes to serve models and get an outcome doesn’t match the expectations of a digital experience or automated process. Sometimes, days might pass before the data is ready for processing. In highly competitive markets, data this old is at best irrelevant, and, at worst, dangerous (consider a ride-sharing app applying surge pricing during a crisis or disaster).
  • Data scientists don’t have access to the latest data. This can affect the outcome of models and might require data scientists to spend valuable time seeking additional data points or sources. 

Many current ML infrastructures can’t serve applications because they’re too expensive, too complex and too slow. And regulatory changes could eventually require organizations to provide more detailed explanations of how models were trained and why they arrived at a particular decision. This level of visibility is impossible with current architectures because of the processing, aggregation and variety of tools involved.

The problem with many infrastructures lies in the journey that data must take to the AI-driven application. The answer to the problem, simply put, is to do the opposite. 

Bringing AI to Data

Leaders like the companies mentioned at the start of this article succeed by capturing massive amounts of real-time data from customers, devices, sensors or partners as it moves through their applications. This data, in turn, is used to train and serve their models. These companies act on this data in the moment, serving millions of customers in real time.

Another critical piece of leaders’ success is the fact that they collect all the data at the most granular level — as time-stamped events. This means they don’t have just a lot of data; they can also understand what happened and when it happened, over time. 

Leading enterprises like Netflix, FedEx and Uber “bring AI to where data is” so they can deliver the inferences where the application lives. In other words, they embed their ML models in their applications, aggregate events in real time through streaming services and expose this data to ML models. And they have a database (in the case of the three leaders mentioned above, it's the high-throughput, open source NoSQL database Apache Cassandra®) that can store massive volumes of event data.
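
As a rough sketch of what this pattern can look like, the example below uses the open source DataStax Python driver for Cassandra to read a user’s newest events directly from the event store, build features in-process and hand them to a model loaded alongside the application. The keyspace, table schema and feature logic are assumptions for illustration, not a description of how the companies above implement it.

```python
# Assumes a Cassandra table such as:
#   CREATE TABLE shop.events (
#     user_id int, ts timestamp, amount double,
#     PRIMARY KEY (user_id, ts)
#   ) WITH CLUSTERING ORDER BY (ts DESC);
# and an already-trained model loaded in the application process.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])      # the event store
session = cluster.connect("shop")     # hypothetical keyspace

def recent_event_features(user_id: int) -> list:
    """Read the newest events for a user and turn them into model inputs."""
    rows = session.execute(
        "SELECT amount FROM events WHERE user_id = %s LIMIT 20",
        (user_id,),
    )
    amounts = [row.amount for row in rows]
    if not amounts:
        return [0.0, 0.0]
    # Two toy features: the latest amount and the average over the window.
    return [amounts[0], sum(amounts) / len(amounts)]

# Because the model is embedded in the application, inference happens
# in-process, right where the request is handled, with no extra hop to a
# separate batch system.
features = recent_event_features(user_id=42)
# prediction = model.predict([features])  # hypothetical, pre-loaded model
```

The point of the sketch is the shape of the flow: the event data stays in the database until the application asks for exactly what the model needs, at the moment of the request.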

With the right unified data platform, ML initiatives have the right infrastructure and the right data. Data engineers and data scientists can “break out of their silos” and align their processes of feature engineering, model experimentation, training and inference to power predictions. While these processes still require many tools, they all work on the same data foundation. 

Fueled by massive amounts of event data that serve both models and applications, the most successful AI-powered applications differentiate and lead by constantly improving the experiences they provide to end users. Their ability to serve millions of customers, and to get smarter as they do, enables them to define the markets they’re in.

Learn how DataStax enables real-time AI
