VDM Architecture Principles
- Modular organization along behavior boundaries
- Transitioning from analysis to design is the most vulnerable phase of the lifecycle. Under time pressure, requirements that are inadequately or overly detailed are often translated directly into disjoint design artifacts. VDM provides a framework of core and optional semantic constructs and behaviors. These modular elements map readily to requirements and are supported by pre-elaborated implementations we call design patterns. We accept two premises:
- Requirements will evolve and change
- Assembling and adapting well-understood design patterns reduces defects and avoids the "afterthought design" syndrome.
- Separation of concerns
- Approaching dimensional design through disjoint structural elements such as dimensions, facts, and aggregates is missing the forest for the trees. In VDM we organize design elements along "aspects" of system functionality, e.g. data access or data provisioning, and along functional boundaries defined by the core entities or abstract behaviors. Each element can be elaborated, maintained, and understood independently, with minimal coupling, and all are built so they can be synthesized into complex but coherent systems, as the sketch below illustrates.
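As a minimal sketch of this separation, the data-provisioning aspect can own a base table while the data-access aspect exposes a view, so each side evolves independently. The table and column names here (cust_dim_base, cust_dim) are hypothetical illustrations, not artifacts defined by VDM:

```sql
-- Provisioning aspect: the base table, written only by maintenance
-- processes; it keeps attribute history as effective-date ranges.
CREATE TABLE cust_dim_base (
    cust_id    INTEGER       NOT NULL,
    cust_name  VARCHAR(100)  NOT NULL,
    eff_date   DATE          NOT NULL,
    end_date   DATE          NOT NULL,
    PRIMARY KEY (cust_id, eff_date)
);

-- Access aspect: consumers query only the current image through a view,
-- so the provisioning structures can change with minimal coupling.
CREATE VIEW cust_dim AS
SELECT cust_id, cust_name
FROM   cust_dim_base
WHERE  CURRENT_DATE BETWEEN eff_date AND end_date;
```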
- Proportionality principle
- Simply put, this principle states that in a well-designed system, small changes in requirements should correspond to small changes in design artifacts, and larger requirement changes to proportionally larger design changes. This is a tall order in large database systems, where the nature of SQL, data sharing, performance, and availability pose competing pressures. Some of these issues are addressed by automating DDL and script generation, as discussed under "Automation" below.
- Operability
- A complex system must satisfy well-thought-out operational use cases, for both regular and exceptional situations. A system of measures must be provided to recognize internal-consistency issues and to assess ongoing health with respect to performance and quality. Finally, risk-mitigating methods must be provided for handling unexpected and complex corrections and changes.
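One concrete form such a measure can take is a recurring consistency query. The sketch below reuses the hypothetical cust_dim_base table from the earlier example, plus an assumed sales_fact table; a non-zero count signals a referential problem to investigate:

```sql
-- Internal-consistency measure: count fact rows that reference a customer
-- key with no matching dimension row in any effective-date range.
SELECT COUNT(*) AS orphan_fact_rows
FROM   sales_fact f
WHERE  NOT EXISTS (
         SELECT 1
         FROM   cust_dim_base d
         WHERE  d.cust_id = f.cust_id
       );
```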
- VLDB, performance and availability
- VDM has evolved from large data environments with performance and availability in mind. This makes some of its implementations more involved than they would need to be for smaller systems.
- Resilience and volatility isolation
- To address both performance and availability in VLDB, we have promoted the idea of isolating volatile data and minimizing change and redundancy. Maintenance processes are designed to be idempotent, failure-tolerant, and restartable, and often able to support retroactive corrections.
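A common way to achieve this is to have each load step replace, rather than append, the data slice it owns, so a rerun after a failure yields the same result. The sketch below assumes hypothetical sales_fact and sales_stage tables and a :load_date parameter supplied by the scheduler:

```sql
-- Idempotent daily reload: clear the target day's slice first, so the step
-- can be restarted after any failure without duplicating rows. Re-running
-- it for a past date also gives a natural path for retroactive corrections.
DELETE FROM sales_fact
WHERE  sale_date = :load_date;

INSERT INTO sales_fact (sale_date, cust_id, amount)
SELECT sale_date, cust_id, amount
FROM   sales_stage
WHERE  sale_date = :load_date;
```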
- Parallel Shared-Nothing Database focus
- VDM was born in a DB2/MVS environment in the early 1990s, and it has since been applied successfully in Sybase and Oracle environments. Most of its use, however, has been in VLDB applications on parallel shared-nothing Unix-based systems such as DB2 or Teradata, with parallel-dataflow ETL tools such as Ab Initio and Torrent/Ascential/DataStage.
- Automation
- To improve the quality of code, particularly in repetitive situations, we adopt metadata-driven code generation and template SQL/DDL, using a system of structured comments and variables, as sketched below.
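The flavor of such a template is sketched here. The @-style structured-comment markers and the ${...} variable syntax are illustrative assumptions, since the article does not show VDM's actual conventions; a generator would expand the variables from a metadata repository to emit one consistent DDL script per table:

```sql
-- @template  : fact_table_ddl        (structured comments carry the
-- @vars      : TABLE_NAME, PART_COL   metadata that drives the generator)
-- @generated : do not edit by hand
CREATE TABLE ${TABLE_NAME} (
    ${PART_COL}  DATE           NOT NULL,
    cust_id      INTEGER        NOT NULL,
    amount       DECIMAL(15,2)  NOT NULL
);
```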