The next evolution of AI is already here—and hiding in plain sight


Generative AI’s remarkable achievements are fueling a growing misconception among executives that all earlier artificial intelligence technologies will soon be obsolete. The resulting over-emphasis on generative AI is counterproductive, leading companies to compartmentalize AI talent and resources. That ultimately limits AI’s potential, because generative AI alone cannot solve every type of problem.

That’s why the next phase in the evolution of AI use won’t hinge on a new technological breakthrough. Instead, it will emerge as executives adopt a cohesive, strategic approach to AI—what we call a One-AI approach—that pairs the latest generative AI with other forms of AI to achieve far more than each can do on its own.

At the heart of all modern AI technologies lies the same fundamental ability to recognize and learn from sophisticated patterns in data. The practical differences among types of AI arise from the end-use applications they are best suited for. Large language models, for example, apply that pattern recognition to predict the most likely next word. These and other forms of generative AI are mostly applied to content creation and creative problem solving. Predictive AI, by contrast, which has been around for more than a decade, draws on historical data to forecast future events, anticipate behaviors, and offer recommendations that support more informed decision-making.

The different uses of each form of AI complement one another, which is why Mastercard’s chief innovation officer, Ken Moore, has rightly argued that “combining the two facets of AI [i.e., generative and predictive] can produce superior results.” Yet companies’ tendency to segregate AI resources makes it hard to achieve those results, as it slows the process of AI adoption and threatens investment in predictive AI in particular, despite its demonstrated ROI.

This segmented approach is also at odds with the way in which AI will most likely be used in the future, in the form of end-to-end systems capable of executing a wide variety of tasks. Already we’re seeing savvy companies preparing for this: Mature AI companies are twice as likely as their less experienced counterparts to be using a One-AI approach to scale applications, according to a December 2023 BCG survey of C-suite executives. Companies that fail to build the architecture now to support a One-AI approach will be playing catch-up once this integrated approach becomes mainstream.

The biotech company Insilico Medicine exemplifies the power of the One-AI approach, which it has used to dramatically speed up drug development while reducing costs. Typically, it takes a new drug between three and six years and $430 million just to get to the trial stage of development. Insilico developed the world’s first AI-designed drug in just 18 months, for only $2.6 million. How did pairing generative and predictive AI help make that happen?

In the drug discovery process, AI is deployed both to identify the target molecule and to design drugs that interact with it. Insilico used predictive AI to locate molecular compounds in the body that play a role in the progression of a rare lung disease. A natural language processing engine then cross-referenced information about these targets with existing diseases, patents, and research literature to identify gaps. At that point, a generative AI model developed drug-like molecules from scratch that could potentially curb the disease. From those results, predictive AI algorithms selected the most promising molecules to advance to the clinical trial stage. The absence of either predictive or generative AI would have restricted the range of possibilities available to Insilico.
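
To make that hand-off concrete, here is a minimal Python sketch of a pipeline with the same shape. Every function name and data value below is a hypothetical stand-in rather than Insilico’s actual tooling; the only point it illustrates is that each stage’s output becomes the next stage’s input.

def find_targets(patient_data: list[dict]) -> list[str]:
    """Predictive step: flag molecular targets implicated in disease progression (stub)."""
    return ["target-A", "target-B"]  # placeholder output

def find_gaps(targets: list[str], literature: list[str]) -> list[str]:
    """NLP step: keep targets not already well covered by patents or papers (stub)."""
    return [t for t in targets if not any(t in doc for doc in literature)]

def generate_molecules(target: str, n: int = 100) -> list[str]:
    """Generative step: propose novel drug-like molecules for a target (stub)."""
    return [f"{target}-molecule-{i}" for i in range(n)]

def shortlist(molecules: list[str]) -> list[str]:
    """Predictive step: score candidates and keep the most promising ones (stub)."""
    return molecules[:5]  # placeholder scoring

def discovery_pipeline(patient_data: list[dict], literature: list[str]) -> list[str]:
    """Sequential hand-off: each model's output feeds the next model's input."""
    candidates: list[str] = []
    for target in find_gaps(find_targets(patient_data), literature):
        candidates.extend(shortlist(generate_molecules(target)))
    return candidates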

For companies looking to follow suit and adopt a One-AI approach of their own, it’s important to understand the different ways AI technologies can interact with each other. Businesses will have to decide which of these modes of AI interaction will best solve the problem they’re looking to address. These are the three most prevalent ones:

Sequential mode

AI models can sequentially feed one another, such that one model’s output becomes another model’s input. This sequential mode, as illustrated by Insilico Medicine, also underpins Spotify’s AI DJ, which curates personalized playlists reflecting listener preferences predicted from historical data. The DJ tool then feeds the curated playlist into a generative model from OpenAI to produce accompanying commentary that provides fun facts, tidbits, and anecdotes about each song. Finally, an AI-powered voice delivers this commentary to create a more human-like user experience.
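
A sketch of the same pattern in a media setting, again with hypothetical functions standing in for the recommendation, language, and speech models (none of this is Spotify’s or OpenAI’s actual API), might look like this:

def recommend_tracks(listening_history: list[str]) -> list[str]:
    """Predictive step: pick tracks from historical listening data (stub)."""
    return sorted(set(listening_history))[:10]  # placeholder ranking

def write_commentary(track: str) -> str:
    """Generative step: draft a short spoken intro for a track (stub)."""
    return f"Up next: {track}. Here's a story behind this one..."

def synthesize_voice(text: str) -> bytes:
    """Speech step: turn the commentary into audio (stub)."""
    return text.encode("utf-8")  # placeholder for a text-to-speech call

def run_dj_session(listening_history: list[str]) -> list[bytes]:
    """Sequential mode: playlist -> commentary -> voice, output to input."""
    playlist = recommend_tracks(listening_history)
    return [synthesize_voice(write_commentary(track)) for track in playlist]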

Feedback-loop mode

AI models can also interact in such a way that they iteratively communicate with each other, resulting in a continuous cycle of reinforcement and mutual enhancement. FedEx recently introduced a robot that illustrates the power of these feedback loops. The robot is tasked with loading delivery trucks, an automated process that is highly complex because of the variability in the packages’ size and weight.

The One-AI robot first uses a model to create a stacking plan for the packages, similar to the game Tetris. Another AI model is needed to identify packages, evaluate whether they fit, and ultimately instruct the robot to grab and load each item. Completing this task, however, requires continuous feedback to update the stacking plan based on how well the packages actually fit together in the truck. The use of this sort of interdependent feedback loop is likely to increase in the future with the introduction of autonomous agents.
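
A rough sketch of that loop, with hypothetical components rather than FedEx’s actual system, shows how the outcome of each placement flows back into the next planning pass:

import random

def propose_plan(packages: list[str]) -> list[str]:
    """Planning model: order the remaining packages for loading (stub heuristic)."""
    return sorted(packages)

def attempt_placement(package: str) -> bool:
    """Perception model: report whether the package actually fit as planned (stub)."""
    return random.random() > 0.2  # placeholder for real sensor feedback

def load_truck(packages: list[str], max_replans: int = 5) -> list[str]:
    """Feedback-loop mode: real-world outcomes feed back into the planner."""
    loaded: list[str] = []
    remaining = list(packages)
    for _ in range(max_replans):
        if not remaining:
            break
        plan = propose_plan(remaining)        # model 1: propose a stacking plan
        misfits: list[str] = []
        for package in plan:
            if attempt_placement(package):    # model 2: perceive, grab, load
                loaded.append(package)
            else:
                misfits.append(package)
        remaining = misfits                   # feedback: replan around what didn't fit
    return loaded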

Standalone mode

For certain business problems, there may be limited value in making models interact directly. But even when models are used as standalone components of an integrated solution, the organization must still maintain a One-AI perspective on the problem as a whole.

For example, a fashion brand can use a single AI model to identify the latest fashion trends by analyzing content from fashion blogs and social media posts, while turning to another model to predict seasonal demand based on past sales. Used together, these models would provide valuable insights informing what the fashion brand should produce—even if they don’t directly communicate with one another. Over time, we expect this mode of AI deployment to become the exception rather than the norm.
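
A minimal sketch, with hypothetical models, illustrates the pattern: the two models never exchange data, and only a downstream planning step reads both of their outputs.

def detect_trends(social_posts: list[str]) -> set[str]:
    """Model A: surface trending style keywords from text content (stub)."""
    return {word for post in social_posts
            for word in post.lower().split() if word.endswith("core")}

def forecast_demand(past_sales: dict[str, int]) -> dict[str, int]:
    """Model B: project next season's demand from past sales (stub)."""
    return {item: int(units * 1.1) for item, units in past_sales.items()}

def plan_production(social_posts: list[str], past_sales: dict[str, int]) -> dict[str, int]:
    """Standalone mode: the models never talk to each other; the decision layer reads both."""
    trends = detect_trends(social_posts)
    demand = forecast_demand(past_sales)
    # double the planned volume for items that also show up as trending
    return {item: units * 2 if item in trends else units for item, units in demand.items()}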

The right mode will depend on the nature of the business problem. But it also hinges on the state of the technology at a given point in time. As both problems and technologies evolve, the modes of AI interaction may need to change as well. For example, the technical features of generative algorithms—their stochasticity and auto-regressive character, to be precise—make it difficult to anticipate where problems will arise and how severe they might be. This may, in some cases, create a preference for sequential and standalone modes that are more predictable. But future innovation in how the algorithms function may allow for more closely integrated feedback loops that are less prone to errors. Either way, a One-AI strategy is the best way to ensure an organization can adapt while simultaneously extracting the most value from all sorts of AI technologies.

Emerging One-AI best practices

Companies can take steps today to organize around a unified strategy for all forms of AI, putting themselves in a position to better capitalize on the technological breakthroughs that will continue to define our age of permanent AI revolution.

Unify AI teams: Many companies are tempted to set up generative AI units that are siloed from other AI teams. By unifying AI teams and resources, companies can share information and adapt quickly to inevitable technology or marketplace changes. On top of making strategic sense, the consolidation of AI teams may become virtually necessary due to the scarcity of skilled AI professionals and AI resources. Even technology giants are pivoting to this One-AI setup. After initially siloing its generative AI team, Meta quickly changed course, consolidating its resources into a single team.

Employ model-agnostic problem-solving: First and foremost, companies should consider a problem in its entirety before selecting the suitable One-AI mode. Understanding the full picture also promotes creativity in choosing One-AI modes, and in updating them over time as both the technology and the problem evolve. For instance, the American bank Capital One prides itself on encouraging its data scientists to gather and assess business problem statements before technical requirements. This mindset helps the company avoid creating an AI solution in search of a problem.

Watch data integrity when AI systems interact: Robust governance is needed to keep up with the proliferation of AI technologies and the evolving rules that govern them. Especially in the sequential and feedback-loop modes, One-AI gives rise to new risks to data quality and integrity. Singapore’s DBS Bank, for instance, created its own responsible data use framework, called PURE, and uses it to regularly re-evaluate its more than 600 existing AI applications, as well as any future projects, for compliance with the framework’s principles.

Manage One-AI risks: Companies need to pay close attention to the inherent unpredictability of generative AI, particularly when it is used for decision-making. Consequently, companies must double down on processes to detect and address potentially unexpected or harmful behaviors of their One-AI solutions. Maintaining AI incident databases will help companies monitor how their One-AI solutions are evolving. Companies should also keep humans in the loop, allowing them to provide feedback that improves AI system performance and ensures alignment with desired values.

***

When it comes to what’s next in AI, the future is already here: It is the strategic and flexible integration of a wide range of AI technologies through a One-AI approach. The latent power of using AI in its full range of capabilities is clear; companies just need to take a big picture view and organize accordingly to realize that potential.


François Candelon is a managing director and senior partner of Boston Consulting Group and the global director of the BCG Henderson Institute (BHI).

Leonid Zhukov is the director of the BCG Global A.I. Institute and is based in BCG’s New York office.

Namrata Rajagopal is a consultant at BCG and an Ambassador at the BCG Henderson Institute.

David Zuluaga Martínez is a partner at BCG and an Ambassador at the BCG Henderson Institute.

Some of the companies featured in this column are past or current clients of BCG.

