Anchor Modeling

Ever heard of this? Well, if you’re reading this site then you probably have.

Anchor Modeling is an entity-centric, normalized data modeling technique built to handle change over time in both structure and content. It uses four primitives:

  • Anchor (entity)
  • Attribute (property)
  • Tie (relationship)
  • Knot (shared domain/state)
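As a minimal sketch of how those four primitives land in a relational database, here is an illustrative set of tables using Python's built-in sqlite3. All table and column names are hypothetical, loosely following Anchor Modeling's mnemonic naming style (e.g. `PE` for Performance); they are not taken from any particular generated model.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Anchor: nothing but a surrogate identity for the entity.
cur.execute("CREATE TABLE PE_Performance (PE_ID INTEGER PRIMARY KEY)")

# Knot: a small shared domain of states/values, referenced by ID.
cur.execute("""CREATE TABLE PST_PerformanceStatus (
    PST_ID INTEGER PRIMARY KEY,
    PST_Status TEXT NOT NULL)""")

# Attribute: one property of the anchor in its own table,
# historized via a ChangedAt column in the primary key.
cur.execute("""CREATE TABLE PE_NAM_Performance_Name (
    PE_ID INTEGER REFERENCES PE_Performance,
    PE_NAM_Name TEXT NOT NULL,
    PE_NAM_ChangedAt TEXT NOT NULL,
    PRIMARY KEY (PE_ID, PE_NAM_ChangedAt))""")

# Tie: a relationship between two anchors (a Performance at a Stage).
cur.execute("CREATE TABLE ST_Stage (ST_ID INTEGER PRIMARY KEY)")
cur.execute("""CREATE TABLE PE_at_ST_on (
    PE_ID INTEGER REFERENCES PE_Performance,
    ST_ID INTEGER REFERENCES ST_Stage,
    PRIMARY KEY (PE_ID, ST_ID))""")
```

The key point is the one-table-per-thing granularity: each attribute and each relationship is its own table, which is what makes the model both highly normalized and easy to extend.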

Why might you not have heard of it? Well, it’s something of a niche approach to data modeling that originated in Sweden in the early 2000s. Here is the academic paper that describes its usage:

[Figure: first page of the academic paper 'Anchor Modeling - Agile Information Modeling in Evolving Data Environments' by L. Rönnbäck, O. Regardt, M. Bergholtz, P. Johannesson, and P. Wohed, which discusses the complexities of maintaining and evolving data warehouses.]

Example Anchor Model

[Diagram: an example Anchor Model featuring entities, attributes, ties, and a shared domain, with elements such as Stage, Program, Actor, Performance, and Event.]

Unlike star schemas (dimensional) or classic 3NF, Anchor Modeling (like its popular modern relative, Data Vault 2.0) is designed for continuous evolution and multi-source integration. You add new anchors, attributes, and ties non-destructively; older schemas remain subsets of newer ones, so teams can ship iteratively without downtime.
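That "older schemas remain subsets of newer ones" property can be demonstrated concretely. The sketch below (hypothetical table names, using sqlite3) evolves a model from v1 to v2 purely by adding a new attribute table, with no ALTER statements and no rewrite of existing data:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Version 1 of the model: an anchor and one historized attribute.
con.execute("CREATE TABLE AC_Actor (AC_ID INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE AC_NAM_Actor_Name (
    AC_ID INTEGER REFERENCES AC_Actor,
    AC_NAM_Name TEXT NOT NULL,
    AC_NAM_ChangedAt TEXT NOT NULL,
    PRIMARY KEY (AC_ID, AC_NAM_ChangedAt))""")
v1 = {r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}

# Version 2 adds a 'gender' attribute as a brand-new table.
# Existing tables, and every query written against them, are untouched.
con.execute("""CREATE TABLE AC_GEN_Actor_Gender (
    AC_ID INTEGER REFERENCES AC_Actor,
    AC_GEN_Gender TEXT NOT NULL)""")
v2 = {r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}

assert v1 <= v2  # the old schema is a strict subset of the new one
```

Because every change is additive, deployments can happen without downtime or destructive migrations.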

The above example can then be used to generate SQL or JSON, as shown:

JSON
{
   "schema": {
      "format": "0.99.6.3-2",
      "date": "2025-09-17",
      "time": "20:52:19",
      ...
         "positingRange": "timestamp",
      ...
      },
      "knot": {
         "PAT": {
            "id": "PAT",
            "mnemonic": "PAT",
            "descriptor": "ParentalType",
            "identity": "smallint",
            "dataRange": "varchar(42)",
            "metadata": {
               "capsule": "dbo",
               "generator": "false"
      ...
      "anchor": {
         "PE": {
            "id": "PE",
            "mnemonic": "PE",
            "descriptor": "Performance",
            "identity": "int",
            "metadata": {
               "capsule": "dbo",
               "generator": "true"
            },
            "attribute": {
 

Anchor modeling is not particularly popular in mainstream data warehousing and modeling practice. Here’s the current landscape:

Reality:

  • It’s a niche approach within data vault and temporal modeling communities
  • Most organizations use dimensional modeling (Kimball) or normalized approaches (Inmon) instead
  • You’ll find it discussed more in academic circles and specialized data modeling forums than in widespread enterprise adoption

Why it hasn’t gained traction:

  • Complexity: The highly normalized structure with separate tables for attributes, ties, and anchors can be difficult to understand and maintain
  • Query performance: Requires many joins to reconstruct even simple business entities, which can impact performance
  • Tooling: Limited native support in mainstream BI and ETL tools compared to star schemas
  • Learning curve: Most data professionals are trained in dimensional or relational modeling, not anchor modeling
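The query-performance point is easy to see in miniature. In the hedged sketch below (hypothetical table names, sqlite3 again), reconstructing even a two-column view of a single entity requires one join per attribute table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE PE_Performance (PE_ID INTEGER PRIMARY KEY);
CREATE TABLE PE_NAM_Performance_Name (PE_ID INTEGER, PE_NAM_Name TEXT);
CREATE TABLE PE_DAT_Performance_Date (PE_ID INTEGER, PE_DAT_Date TEXT);
INSERT INTO PE_Performance VALUES (1);
INSERT INTO PE_NAM_Performance_Name VALUES (1, 'Hamlet');
INSERT INTO PE_DAT_Performance_Date VALUES (1, '2025-09-17');
""")

# One join per attribute just to see a plain two-column "row";
# a real entity with a dozen attributes needs a dozen joins.
row = con.execute("""
    SELECT n.PE_NAM_Name, d.PE_DAT_Date
    FROM PE_Performance p
    JOIN PE_NAM_Performance_Name n ON n.PE_ID = p.PE_ID
    JOIN PE_DAT_Performance_Date d ON d.PE_ID = p.PE_ID
    WHERE p.PE_ID = 1
""").fetchone()
# row is ('Hamlet', '2025-09-17')
```

In practice, Anchor Modeling implementations generate views (the "latest view" per anchor) to hide this join fan-out from consumers, but the joins still happen at query time.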

Where you might see it:

  • Organizations with extreme requirements for temporal tracking and auditability
  • Data vault implementations (which share some similar principles)
  • Highly regulated industries where tracking every historical change is critical

If you’re considering anchor modeling for a project, I’d recommend carefully weighing whether its specific benefits (extreme flexibility for schema changes, comprehensive historisation, etc.) outweigh the operational complexity for your use case. For most applications, dimensional modeling or data vault architectures tend to be more practical choices.
