How the need for speed is changing multi-model database requirements
In working with dozens of leading enterprises to help them implement a wide range of multi-model use cases, I’ve noticed a significant shift in why and how organizations use multi-model databases.
What historically drove enterprise adoption of multi-model databases was the flexibility they provided in data modeling. A multi-model database stores multiple data types in their native form, using a single back end, with unified data governance, management, and access.
But that flexibility isn’t flexible enough anymore. As the pace of change in customer demands and business requirements has accelerated, so have the ways developers need to access data to build applications. They can’t wait for architects to spend weeks or months on elaborate data modeling; that kind of detailed planning can be futile when market demands change virtually overnight.
So now there’s a new primary requirement for multi-model database systems. The focus has evolved from integrating multiple data models within the same platform to giving developers low-friction access to data, typically through APIs, in a way that adapts seamlessly to application needs.
Data model flexibility
Data modeling entails defining the data elements and requirements necessary to support business processes, along with the relationships between the various data structures. Databases that supported a single data model, such as a relational database or a document-oriented database, were sufficient for distinct, well-defined use cases such as order processing. In that kind of scenario, modeling data ahead of time for a particular anticipated need worked well.
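To make that concrete, here’s a minimal sketch of upfront modeling for an order-processing use case, using SQLite as a stand-in for a single-model store. The schema, table names, and data are purely illustrative:

```python
# A minimal sketch of upfront data modeling for a single, well-defined
# use case (order processing). Schema and names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    status      TEXT NOT NULL,        -- e.g. 'placed', 'shipped'
    total_cents INTEGER NOT NULL
);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders VALUES (100, 1, 'placed', 4999)")

# The anticipated query was designed into the schema from day one.
for row in conn.execute("""
    SELECT c.name, o.order_id, o.status
    FROM orders o JOIN customers c USING (customer_id)
    WHERE o.status = 'placed'
"""):
    print(row)
```

The schema works precisely because the queries were known before the first row was written; the trouble starts when they aren’t.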
But as enterprises needed to serve multiple use cases with different requirements on the same data, the need arose for new flexibility in how application data could be modeled. Consider an aircraft engine manufacturer that collects a massive amount of IoT data from airliner engine systems. The company recognized that while all of the data it collects is relational, engine fault diagnostics requires traversing that data as a graph to capture the causal relationships between engine components. In another example, a cruise line operator stores its onboard customer event data as JSON but builds its recommendation engine using relational analytics on that same data.
For organizations like these, multi-model provides the critical flexibility to store and access data in the way that is most conducive to solving a user’s known problems. In other words, architects or tech leads design their data models upfront based on defined or anticipated use case needs, and the single back end avoids the complexity and cost of ETL and data synchronization between different models.
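To make the engine-diagnostics example concrete, here’s a toy sketch of traversing relational rows as a graph. The component names and causal edges are invented for illustration, and a real multi-model engine would run this traversal natively rather than in application code:

```python
from collections import deque

# Toy causal edges between engine components, as they might sit in a
# relational table (symptom, upstream_cause). All names are invented.
causes = [
    ("low_oil_pressure", "oil_pump_wear"),
    ("low_oil_pressure", "oil_leak"),
    ("bearing_overheat", "low_oil_pressure"),
    ("vibration_alert",  "bearing_overheat"),
]

# Build an adjacency map: symptom -> possible upstream causes.
graph = {}
for symptom, cause in causes:
    graph.setdefault(symptom, []).append(cause)

def root_causes(symptom):
    """Breadth-first traversal from an observed fault back to root causes."""
    seen, queue, roots = {symptom}, deque([symptom]), []
    while queue:
        node = queue.popleft()
        parents = graph.get(node, [])
        if not parents:
            roots.append(node)  # nothing upstream: a candidate root cause
        for p in parents:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return roots

print(root_causes("vibration_alert"))  # ['oil_pump_wear', 'oil_leak']
```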
But how can you solve for unknown problems?
A key driver behind digital transformation is the need to quickly adapt to changing customer demands and market conditions. In this reality, detailed planning and elaborate data modeling provide greatly diminished returns on investment because it’s extremely difficult to anticipate rapidly shifting future needs.
To respond to quick shifts in requirements, developers need easy, fast access to multiple data models without waiting for data to be remodeled or reloaded. That means multi-model databases must adapt quickly to application requirements, making data available for querying in the way developers want, without requiring upfront data model optimization.
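As one illustration of what “no remodeling” can look like, the sketch below stores raw JSON events once and serves both a document-style lookup and a relational aggregate from the same rows. It uses SQLite’s JSON functions as a stand-in for a real multi-model engine (assuming a build with JSON1 support), and the event fields are invented:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (body TEXT)")  # raw JSON, no upfront schema

# Invented customer-event documents, stored exactly as they arrive.
events = [
    {"customer": "c1", "type": "dining", "spend": 42.0},
    {"customer": "c1", "type": "spa",    "spend": 120.0},
    {"customer": "c2", "type": "dining", "spend": 18.5},
]
conn.executemany("INSERT INTO events VALUES (?)",
                 [(json.dumps(e),) for e in events])

# Document-style access: fetch one customer's raw events back as JSON.
docs = [json.loads(b) for (b,) in conn.execute(
    "SELECT body FROM events WHERE json_extract(body, '$.customer') = 'c1'")]

# Relational-style access over the same rows: aggregate spend per category.
totals = list(conn.execute("""
    SELECT json_extract(body, '$.type') AS category,
           SUM(json_extract(body, '$.spend'))
    FROM events GROUP BY category
"""))
print(docs, totals)
```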
This evolution is an important one, but it’s still a work in progress. Providing flexible and fast data access to developers without requiring upfront data model optimization is a difficult technical challenge because the database needs to dynamically adjust its data representation to the query workload.
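To give a feel for what “adjusting data representation to the query workload” means, here’s a deliberately simplified toy: an in-memory store that starts with full scans and builds a hash index on a field once it sees repeated filters on it. The threshold and names are invented:

```python
# Toy store that adapts its representation to the query workload: after
# a field has been filtered on a few times, it builds a hash index for
# that field. Threshold and names are invented for illustration.
from collections import defaultdict

class AdaptiveStore:
    INDEX_THRESHOLD = 3  # queries on a field before indexing it

    def __init__(self, rows):
        self.rows = rows
        self.query_counts = defaultdict(int)
        self.indexes = {}  # field -> {value: [rows]}

    def find(self, field, value):
        self.query_counts[field] += 1
        if field in self.indexes:
            return self.indexes[field].get(value, [])
        if self.query_counts[field] >= self.INDEX_THRESHOLD:
            index = defaultdict(list)
            for row in self.rows:
                index[row.get(field)].append(row)
            self.indexes[field] = index  # the representation changes here
            return index.get(value, [])
        return [r for r in self.rows if r.get(field) == value]  # full scan

store = AdaptiveStore([{"sku": i, "color": "red" if i % 2 else "blue"}
                       for i in range(10)])
for _ in range(4):                       # repeated filtering on 'color'
    hits = store.find("color", "red")
print(len(hits), "color" in store.indexes)  # 5 True
```

A production system faces the hard parts this toy skips, such as deciding when an index pays for its memory and keeping it consistent under writes, which is why this remains a difficult challenge.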
One step in simplifying data access for developers is the ability to provide flexible access via standard APIs or embedded SDKs. This is a very significant shift in emphasis for multi-model databases: away from giving architects data modeling flexibility and toward empowering developers with simplified data access that adapts to application requirements and accelerates time to market.
At DataStax, some of our biggest retail customers are retooling their e-commerce platforms to microservice architectures, and they know that a multi-model database will give their developers a flexible, fast means of data access. They recognize that planning out all of their microservices’ data modeling needs ahead of time is a futile exercise. They need a multi-model database that adapts with their microservices and gives developers agility in how they interact with data.
We’re working on ways to provide enterprises and users with this kind of flexibility in data access. Stargate, an open source data gateway, is showing early promise in building toward a vision of how multi-model databases can empower and enable developers. It’s an adaptive API layer that exposes JSON, CQL, GraphQL, and SQL APIs, which give developers the choice of document, tabular, graph, or relational data models.
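As a rough sketch of what this looks like from a developer’s seat, the snippet below talks to a Stargate deployment through two of those APIs over plain HTTP. The ports and paths follow Stargate’s published docs at the time of writing, and the keyspace and collection names (“store”, “orders”) are placeholders; verify everything against your own deployment:

```python
import requests

HOST = "localhost"

# Obtain an auth token (Stargate's auth service, default port 8081).
token = requests.post(
    f"http://{HOST}:8081/v1/auth",
    json={"username": "cassandra", "password": "cassandra"},
).json()["authToken"]
headers = {"X-Cassandra-Token": token}

# Document API (default port 8082): store schemaless JSON, no table design.
doc_url = f"http://{HOST}:8082/v2/namespaces/store/collections/orders"
doc_id = requests.post(
    doc_url, headers=headers,
    json={"customer": "c1", "status": "placed"},
).json()["documentId"]
print(requests.get(f"{doc_url}/{doc_id}", headers=headers).json())

# GraphQL API (default port 8080): the same backend exposed as typed
# GraphQL, one endpoint per keyspace. The query shape depends on the
# CQL schema you have defined in that keyspace.
gql = requests.post(
    f"http://{HOST}:8080/graphql/store",
    headers=headers,
    json={"query": "{ orders { values { customer status } } }"},
)
print(gql.status_code)
```

The point of the sketch is that nothing in it required a data architect to sign off on a schema first; each API meets the developer at the model they already want to work in.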
The industry still has some work to do to truly remove the need for architects to do detailed planning and elaborate data modeling, but we’ve made significant progress toward this important new reality.