Modernization Hub

DDM - Distributed Data Management

Enhanced Definition

Distributed Data Management (DDM) is an IBM architecture that enables applications on one system to access and manage data located on another, remote system in a transparent manner. It defines a standardized set of protocols and commands for distributed file and database access, allowing different operating systems and platforms to share data seamlessly within an enterprise computing environment. In the z/OS context, DDM lets z/OS applications transparently access data (such as files or database objects) residing on remote DDM-enabled systems like IBM i (AS/400) or distributed servers, and likewise allows those remote systems to access data on z/OS. Its primary purpose is to provide location transparency for data access across heterogeneous platforms.

Key Characteristics

    • Location Transparency: Applications do not need to know the physical location of the data; DDM handles the routing and access to remote resources.
    • Client-Server Model: DDM defines client (source) and server (target) roles, where the client requests data and the server provides it.
    • Data Type Independence: It supports various types of data, including files (sequential, VSAM), relational database tables, and even objects, though its most prominent use is often with relational data.
    • Protocol Agnostic: While DDM itself is an architecture, its implementations typically rely on underlying communication protocols such as SNA/APPC (Advanced Program-to-Program Communication) or TCP/IP for network transport.
    • Standardized Access: DDM provides a common set of commands and protocols, allowing different systems to interpret and respond to data access requests consistently.
    • IBM Integration: It is a foundational component for distributed data access in various IBM products, notably DB2 Distributed Data Facility (DDF) and sometimes CICS.
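The client-server roles and location transparency described above can be sketched in a toy simulation. This is a hedged, illustrative model only: the class names and the OPEN/GET/CLOSE command strings are simplifying assumptions, not real DDM codepoints, and the direct method call stands in for the network transport.

```python
# Hypothetical, simplified simulation of the DDM client (source) /
# server (target) model. Command names are illustrative, not real
# DDM codepoints.

class DDMServer:
    """Target system: owns the data and answers DDM requests."""
    def __init__(self, files):
        self.files = files            # file name -> list of records
        self.open_files = {}          # file name -> cursor position

    def handle(self, command, *args):
        if command == "OPEN":
            name = args[0]
            if name not in self.files:
                return ("ERROR", f"file not found: {name}")
            self.open_files[name] = 0
            return ("OK", None)
        if command == "GET":
            name = args[0]
            pos = self.open_files[name]
            if pos >= len(self.files[name]):
                return ("EOF", None)
            self.open_files[name] = pos + 1
            return ("OK", self.files[name][pos])
        if command == "CLOSE":
            self.open_files.pop(args[0], None)
            return ("OK", None)
        return ("ERROR", f"unknown command: {command}")

class DDMClient:
    """Source system: the application calls read_all() with no
    knowledge of where the file physically lives (location
    transparency)."""
    def __init__(self, server):
        self.server = server          # stands in for the network link

    def read_all(self, name):
        status, _ = self.server.handle("OPEN", name)
        if status != "OK":
            raise IOError(f"remote open failed: {name}")
        records = []
        while True:
            status, record = self.server.handle("GET", name)
            if status == "EOF":
                break
            records.append(record)
        self.server.handle("CLOSE", name)
        return records

server = DDMServer({"PAYROLL.MASTER": ["rec1", "rec2", "rec3"]})
client = DDMClient(server)
print(client.read_all("PAYROLL.MASTER"))   # ['rec1', 'rec2', 'rec3']
```

The application code never names a host or network address; only the client object knows how to reach the target system, which is the essence of location transparency.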

Use Cases

    • Cross-Platform File Access: A COBOL batch application running on z/OS needs to read or write a file residing on an IBM i (AS/400) system without physically transferring the file.
    • Distributed Database Queries: A DB2 for z/OS application executes SQL queries against a DB2 database instance running on a remote distributed platform (e.g., Linux, Unix, Windows) or vice-versa.
    • Application Integration: Integrating data from multiple heterogeneous systems into a central z/OS application for consolidated reporting or processing.
    • Data Replication and Synchronization: Facilitating the movement and synchronization of data between mainframe and distributed systems for business intelligence or disaster recovery purposes.
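For the distributed-query use case, a client on a distributed platform typically reaches DB2 for z/OS through DDF using DRDA over TCP/IP, addressed with an ordinary Type 4 JDBC URL. The sketch below only builds that URL; the host and database names are illustrative placeholders, and real installations often run DDF on a site-specific port rather than the IANA-registered DRDA port 446.

```python
# Hedged sketch: the connection endpoint a distributed client would
# use to reach DB2 for z/OS through DDF via DRDA over TCP/IP.
# Host and database names below are illustrative placeholders.

def drda_jdbc_url(host: str, database: str, port: int = 446) -> str:
    """Return a Type 4 JDBC URL for a DRDA connection to DB2.
    Port 446 is the IANA-registered DRDA port; many installations
    configure a different DDF listener port."""
    return f"jdbc:db2://{host}:{port}/{database}"

print(drda_jdbc_url("zos.example.com", "DSNLOCAT"))
# jdbc:db2://zos.example.com:446/DSNLOCAT
```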

Related Concepts

DDM is a foundational architecture that underpins several other key mainframe concepts. It relies on SNA/APPC or TCP/IP for the actual network communication between systems. DRDA (Distributed Relational Database Architecture) is an industry standard built on DDM principles that defines the protocols for distributed relational database access; it is used extensively by DB2 DDF (Distributed Data Facility) on z/OS to enable remote SQL connectivity. While CICS can leverage DDM, inter-region communication in CICS more commonly uses its own IPIC (IP interconnectivity) or MRO (multiregion operation) facilities.

Best Practices

    • Secure DDM Connections: Implement robust security measures using RACF or equivalent security managers for DDM access, ensuring proper authentication and authorization for remote data requests.
    • Optimize Network Configuration: Tune network parameters, buffer sizes, and communication protocols (e.g., APPC LU 6.2 settings or TCP/IP stack parameters) to minimize latency and maximize throughput for distributed data operations.
    • Monitor Performance: Regularly monitor DDM activity, network traffic, and resource utilization on both client and server systems to identify bottlenecks and ensure efficient data access.
    • Error Handling and Recovery: Design applications with comprehensive error handling for DDM failures, including connection drops, data access errors, and transaction rollback mechanisms.
    • Consistent Data Definitions: Maintain consistent data definitions, character sets, and naming conventions across all participating DDM systems to avoid data integrity issues and simplify integration.
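The error-handling practice above can be sketched as a retry wrapper with exponential backoff for transient connection drops. This is a hypothetical illustration: `ConnectionDropped` and `fetch_remote` are stand-ins invented for the example, not a real DDM API.

```python
# Hedged sketch of retrying a remote request after a transient
# DDM transport failure. ConnectionDropped and fetch_remote are
# illustrative stand-ins, not a real DDM API.
import time

class ConnectionDropped(Exception):
    """Stand-in for a transient network/transport failure."""

def with_retries(operation, max_attempts=3, base_delay=0.1):
    """Run operation(); on a dropped connection, back off and retry."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionDropped:
            if attempt == max_attempts:
                raise                 # surface the error to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated remote fetch that fails once, then succeeds.
attempts = {"n": 0}
def fetch_remote():
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise ConnectionDropped("link reset by peer")
    return ["row1", "row2"]

print(with_retries(fetch_remote))   # ['row1', 'row2']
```

In production code the retry loop would also need to honor the transaction boundary noted above, rolling back any partial unit of work before reissuing the request.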
