
Making Sense of Logical Data Management

Christopher Gardner
O'Reilly Media

I work full-time for the University of Michigan, helping leaders leverage data for decision making, and on top of that I write books about data-centric technologies. One of my favorite parts of the latter is the variety of topics I encounter from day to day. My latest title for O'Reilly, The Rise of Logical Data Management, was an eye-opener for me. I'd never heard of "logical data management," even though it's been around for several years, yet it makes some extraordinary promises, such as the ability to manage data without first moving it into a consolidated repository, and that changes everything. Now, with the demands of AI and other modern use cases, logical data management is on the rise, so it's "new" to many. Here, I'd like to introduce you to it and explain how it works.

The Traditional Approach: What's Missing?

Normally, data needs to be in one physical place before it can be queried, reported on, visualized, or leveraged in any meaningful way for decision-making or analytics. Years ago, the role of this one physical place was played by the on-premises data warehouse, which has more recently been upstaged by cloud data warehouses, data lakes, and data lakehouses. Each of these architectures has its own strengths and weaknesses, but they all rely on data being replicated from multiple source systems into one core repository before it can be leveraged for analysis.

Mostly, this replication is accomplished through batch-oriented extract, transform, and load (ETL) processes, though faster methods exist as well. The problem is that one doesn't always want to replicate data, even when it can be done in real time. A few examples: some privacy regulations may limit the number of times personal information can be copied, or may restrict personal information from being copied across borders. During merger and acquisition (M&A) activity, one department may wish to combine certain data from both companies, but that may not be possible if the acquiring company hasn't yet determined the structure of the newly combined organization. I could go on and on.

Because some data will always remain siloed or distributed to some extent, organizations are realizing that traditional data management approaches, though increasingly powerful, struggle to support AI and other demanding use cases, such as true self-service access to data for business users. This is because such use cases require not just live data, but live data that is trusted, semantically standardized, and well governed. What is missing is a way to provide that without requiring replication, which always carries a cost in disk space, security, and backup provisions. This is exactly what logical data management provides.

How Logical Data Management Works

Logical data management leverages data virtualization to create a virtual model of all applicable data sources, made available to the organization as an enterprise-wide logical layer. The underlying sources can include not only traditional on-premises databases but also cloud systems, software-as-a-service (SaaS) applications, data warehouses, and data lakehouses. To query any of them, users don't need to know where the data is actually stored or how to access it; they simply query the logical layer, which retrieves the necessary data at the moment of the query. In this sense, the logical layer abstracts users from the complexities of accessing the individual data sources.
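To make the idea concrete, here is a minimal sketch of a logical layer in Python. The source systems, table names, and fetcher functions are all hypothetical stand-ins for real connectors; the point is simply that data is pulled live at query time and joined across sources without ever being persisted in a central repository.

```python
import sqlite3

# Hypothetical sources: in a real deployment these would be connectors
# to on-premises databases, SaaS APIs, warehouses, and lakehouses.
def fetch_crm_customers():
    # Simulates a live pull from a CRM system at query time
    return [("C1", "Acme Corp"), ("C2", "Globex")]

def fetch_billing_invoices():
    # Simulates a live pull from a billing system at query time
    return [("C1", 1200.0), ("C2", 340.0)]

class LogicalLayer:
    """Maps logical table names to source fetchers. Data is pulled
    only at the moment of the query and never replicated to disk."""

    def __init__(self):
        self.sources = {}

    def register(self, table, columns, fetcher):
        self.sources[table] = (columns, fetcher)

    def query(self, sql):
        # Materialize each registered source into a throwaway
        # in-memory database for the duration of this one query.
        con = sqlite3.connect(":memory:")
        for table, (columns, fetcher) in self.sources.items():
            con.execute(f"CREATE TABLE {table} ({', '.join(columns)})")
            placeholders = ", ".join("?" for _ in columns)
            con.executemany(
                f"INSERT INTO {table} VALUES ({placeholders})",
                fetcher(),  # live pull at the moment of the query
            )
        result = con.execute(sql).fetchall()
        con.close()
        return result

layer = LogicalLayer()
layer.register("customers", ["id", "name"], fetch_crm_customers)
layer.register("invoices", ["customer_id", "amount"], fetch_billing_invoices)

# The user queries the logical layer and never touches the sources directly.
print(layer.query(
    "SELECT c.name, i.amount FROM customers c "
    "JOIN invoices i ON c.id = i.customer_id ORDER BY c.name"
))
```

A production engine would push query fragments down to the sources rather than pulling whole tables, but the abstraction the user sees is the same: one layer, one query, live data.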

One of the most profound benefits of this architecture is that the enterprise-wide logical layer can also host a powerful semantic layer. Semantics can then be standardized across the entire organization in one place, immediately and effectively, rather than at each of the different data sources individually.
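The semantic-standardization idea can be sketched in a few lines. The field names and source names below are invented for illustration; the technique is simply a single canonical vocabulary, defined once at the layer, with per-source mappings applied as records pass through.

```python
# One canonical vocabulary, defined once at the logical layer.
# Every source name and field name here is a hypothetical example.
SEMANTIC_MODEL = {
    "customer_id": {"crm": "cust_no", "billing": "acct_id"},
    "revenue":     {"crm": "lifetime_value", "billing": "total_billed"},
}

def to_canonical(source, record):
    """Rename a source record's fields to the canonical vocabulary."""
    out = {}
    for canonical, per_source in SEMANTIC_MODEL.items():
        src_field = per_source.get(source)
        if src_field in record:
            out[canonical] = record[src_field]
    return out

# Two sources that name the same concepts differently both come out
# speaking the organization's standard vocabulary.
print(to_canonical("crm", {"cust_no": "C1", "lifetime_value": 1200.0}))
print(to_canonical("billing", {"acct_id": "C1", "total_billed": 340.0}))
```

Because the mapping lives in the layer, adding or renaming a source means updating one model, not re-teaching every consumer what each source's columns mean.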

In the same way, logical data management enables organizations to implement data governance and security controls at the logical layer, effectively controlling all of the underlying data sources from a single interface.
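A minimal sketch of governance at the layer might look like the following. The roles, policies, and fields are assumptions invented for this example; the pattern is row-level filtering and column masking enforced once, in the layer, for every underlying source.

```python
# Governance enforced once, at the logical layer, rather than
# configured separately in every source. Roles, policies, and
# column names are illustrative assumptions.
POLICIES = {
    "analyst": {
        "masked_columns": {"ssn"},                      # column masking
        "row_filter": lambda r: r["region"] == "US",    # row-level security
    },
    "admin": {
        "masked_columns": set(),
        "row_filter": lambda r: True,
    },
}

def enforce(role, rows):
    """Apply the role's policy to rows coming back from any source."""
    policy = POLICIES[role]
    out = []
    for row in rows:
        if not policy["row_filter"](row):
            continue  # drop rows the role may not see
        out.append({
            k: ("***" if k in policy["masked_columns"] else v)
            for k, v in row.items()
        })
    return out

rows = [
    {"name": "Ada", "ssn": "111-22-3333", "region": "US"},
    {"name": "Bo",  "ssn": "444-55-6666", "region": "EU"},
]
print(enforce("analyst", rows))  # EU row dropped, ssn masked
```

Because every query passes through the layer, a policy change takes effect everywhere at once instead of requiring per-source reconfiguration.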

A Flexible Solution

Unlike data lakehouses, cloud data warehouses, and other powerful data platforms, logical data management is not a monolithic platform that must replace existing infrastructure. Instead, it is relatively "light": it can be implemented on top of any existing data estate to add tremendous value in terms of support for AI and other modern use cases. It's like a layer of intelligence that makes the best use of existing systems to enable flexible new capabilities.

Christopher Gardner is a Trainer and Author at O'Reilly Media, and the Campus Tableau Application Administrator at the University of Michigan.

