
Data integration has become a growing challenge for many organizations. Data silos, legacy applications, digital transformation, mobile enablement, real-time data needs, and cloud and SaaS application integration are just a few of the obstacles they face. The sheer volume of data being created, along with the involvement of multiple data owners and stakeholders, adds another layer of complexity. Models such as data fabrics and data meshes have emerged to address these challenges. In this article, we explore both concepts and the role of data virtualization in simplifying and streamlining data integration.

  • One of the earliest definitions of data fabric came from Forrester analyst Noel Yuhanna, whose work focused primarily on big data scenarios. Data fabrics have gained significant momentum in recent years and remain a prominent area of focus; Gartner even listed the data fabric as one of its top 10 data and analytics technology trends for 2021. In essence, a data fabric is a unified architecture, together with a management framework, that enables disparate data to be easily accessed and shared.
    Organizations use a range of tools to deliver data fabric capabilities, including ETL/data warehousing, master data management (MDM), data virtualization, data catalogs, and governance and security solutions.
  • The term data mesh, coined by Zhamak Dehghani at ThoughtWorks, describes a data platform that decentralizes the ownership of data among domain data owners. Each domain manages its own domain-specific data, including modeling and aggregation, enabling data democratization and self-service within the organization. Unlike the monolithic approach of a data fabric, a data mesh takes a distributed approach in which each domain controls its own data pipelines.
    Without broader coordination, however, this federated approach can lead to fragmentation, duplication, and inconsistency. A key element of a data mesh is therefore interoperability between domains, and this is where data virtualization plays a vital role: it provides a universal interoperability layer that establishes shared data standards, rules, syntax, and governance across the entire mesh, as sketched below.
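
To make that interoperability idea concrete, here is a minimal sketch in Python. It is illustrative only: the `DataProduct` contract, the `CustomerRecord` schema, and the `SalesDomain` class are invented for the example and not taken from any specific product. The point is that each domain keeps its own internal pipelines while exposing data through a mesh-wide standard interface.

```python
from dataclasses import dataclass, field
from typing import Dict, Iterator, Protocol


@dataclass(frozen=True)
class CustomerRecord:
    """Mesh-wide record schema all domains agree to publish (hypothetical)."""
    customer_id: str
    domain: str                      # owning domain, e.g. "sales"
    attributes: Dict[str, object] = field(default_factory=dict)


class DataProduct(Protocol):
    """The interoperability contract: every domain-owned data product
    must expose its data in the shared schema."""
    def records(self) -> Iterator[CustomerRecord]: ...


class SalesDomain:
    """One domain; internally it may query its own warehouse, lake, or API,
    but consumers only ever see the mesh-wide schema."""
    def records(self) -> Iterator[CustomerRecord]:
        yield CustomerRecord("c-1", "sales", {"lifetime_value": 1200})


def consume(product: DataProduct) -> None:
    # Consumers are written once against the contract and work
    # unchanged against any compliant domain.
    for rec in product.records():
        print(rec.domain, rec.customer_id, rec.attributes)


consume(SalesDomain())
```

Because consumers depend only on the shared contract, each domain can evolve its internal storage and pipelines independently without breaking the rest of the mesh.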

Data virtualization facilitates the implementation of a data mesh by placing a virtual layer between disparate data sources and domain-specific data consumers. Unlike traditional ETL/data warehousing models, data virtualization eliminates the need to move and copy data. Instead, semantic models are defined in the virtual layer, allowing consumers to access the data they need in real time or near real time. This approach significantly reduces the costs associated with data movement while supporting a modern, high-performance data architecture.
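
As a rough illustration of that pattern, the sketch below shows a toy virtual layer in Python. The connector lambdas and the `customer_360` view are invented for the example; a real data virtualization platform would delegate to live connectors and push query processing down to the sources. What it demonstrates is the core idea: semantic views are definitions, and the underlying sources are queried at request time rather than copied.

```python
from typing import Callable, Dict, List

Row = Dict[str, object]


class VirtualLayer:
    """Toy virtual layer: sources are queried on demand, never copied."""

    def __init__(self) -> None:
        self._sources: Dict[str, Callable[[], List[Row]]] = {}
        self._views: Dict[str, Callable[["VirtualLayer"], List[Row]]] = {}

    def register_source(self, name: str, fetch: Callable[[], List[Row]]) -> None:
        # `fetch` stands in for a live connector (JDBC, REST, files, ...).
        self._sources[name] = fetch

    def source(self, name: str) -> List[Row]:
        return self._sources[name]()  # pulled at query time, not stored

    def define_view(self, name: str,
                    build: Callable[["VirtualLayer"], List[Row]]) -> None:
        # A semantic model: a derivation over sources, evaluated lazily.
        self._views[name] = build

    def query(self, view: str) -> List[Row]:
        return self._views[view](self)


layer = VirtualLayer()
layer.register_source("crm", lambda: [{"id": 1, "name": "Acme"}])
layer.register_source("billing", lambda: [{"id": 1, "balance": 250.0}])

# A semantic view joining two disparate sources at request time.
layer.define_view(
    "customer_360",
    lambda l: [
        {**c, **b}
        for c in l.source("crm")
        for b in l.source("billing")
        if c["id"] == b["id"]
    ],
)

print(layer.query("customer_360"))  # [{'id': 1, 'name': 'Acme', 'balance': 250.0}]
```

Nothing is materialized until `query` runs, which is why changes in the sources are visible immediately and no separate copy of the data has to be maintained.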
Gartner's Hype Cycle for Data Management places data virtualization on the "Plateau of Productivity," indicating a mature technology with low risk and a proven return on investment. A data virtualization layer that abstracts the underlying data gives organizations the interoperability, governance, and security a data mesh architecture requires. It also enables domain-based ownership and agile business intelligence through federated data models, performance optimization, and self-service search and discovery.

In conclusion, data fabrics and data meshes are two distinct approaches to the complexities of data integration. A data fabric is technology-focused, while a data mesh is oriented more toward organization and process. Both provide architectures for accessing disparate data sources, and it is essential to understand their differences. Data virtualization plays a critical role in enabling a data mesh by providing a virtual layer that abstracts the data and supports interoperability, governance, and self-service. As organizations continue to navigate the challenges of data integration, leveraging data virtualization within a data mesh architecture can drive efficiency, agility, and data-driven decision-making.
