Testing the Agentic BIM Thesis



Written by Campbell

My last post on BIM 3.0 ended with a set of open questions: whether existing platforms can adapt fast enough, whether new tooling needs to build from a different foundation, and what happens to the incumbents as the production model shifts. I didn't answer them - partly because nobody can, and partly because the ground is still moving.

In parallel, Martyn Day published his AEC Magazine cover story - "The Agentic Future of BIM" - and it sharpened those questions considerably.


What Martyn's Article Adds

The BIM 3.0 piece was mostly about the production layer - what AI does to documentation, authorship, and the workflow assumptions that current platforms are built on. Martyn's article goes a level below that: the substrate.

His argument: BIM was designed to produce drawings. The geometry is stored as fixed objects that humans edit piece by piece. That was fine when the output was a drawing set. It's not fine when the output needs to be a live computational environment that AI agents can read, reason over, and write back to. Agentic AI needs a runtime, not a file.

Moving Revit to a browser doesn't fix this. Building a cloud-native version of the same underlying model doesn't fix it either. It's an architectural problem, and it's why the BIM 2.0 challengers - however well-funded and well-built - may have been solving for the wrong thing.

The propositions Martyn outlines are specific: geometry derived from rules rather than stored as fixed shapes, spatial intent declared as constraints rather than drawn manually, the model accessible as live state rather than exchanged as a file, multiple specialist agents working simultaneously rather than disciplines handing off sequentially. Several of these are already demonstrable in early form, mostly outside the mainstream BIM ecosystem.


Let's Build

I wanted to test whether the propositions actually hold up. So I built four prototypes - one for each - over a day, using open-source tooling, primarily ThatOpen, the browser-native IFC toolkit formerly known as IFC.js.

Worth noting: I can code, but I didn't write or review a single line for this. I used AI throughout - described what I wanted, iterated on the output, redirected when something didn't work. That's part of the point. Two years ago none of this was a realistic day project. The open-source BIM ecosystem wasn't mature enough, and the tooling to connect an IFC model to an LLM didn't exist in a usable form. Both are now there, and that changes what's possible to explore quickly.


The Four Prototypes

Prototype 1 - Geometry as Code

A floor plate defined by rules rather than drawn. Site boundary, setback, grid spacing, and target area are declared as inputs. The geometry re-derives automatically when any of them change. No one redraws anything.
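A minimal sketch of the idea, in plain TypeScript rather than the actual prototype code. All names here (`FloorPlateSpec`, `deriveFloorPlate`) are illustrative, and the site is assumed rectangular for brevity:

```typescript
interface FloorPlateSpec {
  siteWidth: number;   // site boundary, metres (rectangular site assumed)
  siteDepth: number;
  setback: number;     // uniform setback from the boundary, metres
  gridSpacing: number; // structural grid, metres
}

interface FloorPlate {
  width: number;
  depth: number;
  area: number;
  gridLines: { x: number[]; y: number[] }; // grid positions within the plate
}

// Geometry as a pure function of declared inputs: change an input,
// re-run, and the plate re-derives. No one redraws anything.
function deriveFloorPlate(spec: FloorPlateSpec): FloorPlate {
  const width = spec.siteWidth - 2 * spec.setback;
  const depth = spec.siteDepth - 2 * spec.setback;
  if (width <= 0 || depth <= 0) throw new Error("setback consumes the site");

  // Grid lines laid out at the declared spacing across each extent.
  const steps = (extent: number): number[] => {
    const lines: number[] = [];
    for (let p = 0; p <= extent; p += spec.gridSpacing) lines.push(p);
    return lines;
  };

  return {
    width,
    depth,
    area: width * depth,
    gridLines: { x: steps(width), y: steps(depth) },
  };
}
```

Tightening the setback or widening the grid is a one-line input change; everything downstream re-derives.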

Prototype 2 - Declarative Rule Engine

Spatial rules embedded in the model as first-class properties: minimum corridor widths, room adjacency requirements, egress distances. Violations surface in real time rather than being caught in a separate compliance check weeks later.
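The shape of that idea can be sketched in a few lines: rules travel with the model as data, and are re-evaluated after every edit rather than in a separate downstream check. Types and rule thresholds below are illustrative, not the prototype's actual schema:

```typescript
interface Space {
  id: string;
  kind: "corridor" | "room";
  width: number;        // metres
  adjacentTo: string[]; // ids of neighbouring spaces
}

interface Rule {
  id: string;
  description: string;
  check: (space: Space, model: Space[]) => boolean; // true = passes
}

// Rules are first-class data on the model, not an external checker.
const rules: Rule[] = [
  {
    id: "min-corridor-width",
    description: "Corridors must be at least 1.2 m wide",
    check: (s) => s.kind !== "corridor" || s.width >= 1.2,
  },
  {
    id: "room-adjacency",
    description: "Every room must touch a corridor",
    check: (s, model) =>
      s.kind !== "room" ||
      s.adjacentTo.some((id) => model.find((m) => m.id === id)?.kind === "corridor"),
  },
];

// Run after every edit, so violations surface immediately.
function violations(model: Space[]): string[] {
  return model.flatMap((space) =>
    rules.filter((r) => !r.check(space, model)).map((r) => `${space.id}: ${r.description}`)
  );
}
```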

Prototype 3 - Model as State

The building model represented as structured data accessible via API - read it, modify it, query its history - rather than exchanged as a file. No file sent back and forth. Changes logged with timestamp and source.
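A toy version of the state store makes the contrast with file exchange concrete. This is a sketch under my own naming, not the prototype's API; the point is that writes go through one door, so provenance is never lost:

```typescript
interface Change {
  timestamp: string; // ISO 8601
  source: string;    // which agent or user made the change
  path: string;      // what changed, e.g. "level1/room3/width"
  value: unknown;
}

class ModelState {
  private data = new Map<string, unknown>();
  private log: Change[] = [];

  // Every write is logged with timestamp and source.
  set(path: string, value: unknown, source: string): void {
    this.data.set(path, value);
    this.log.push({ timestamp: new Date().toISOString(), source, path, value });
  }

  get(path: string): unknown {
    return this.data.get(path);
  }

  // History is queryable directly, not reconstructed from file diffs.
  history(path?: string): Change[] {
    return path ? this.log.filter((c) => c.path === path) : [...this.log];
  }
}
```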

Prototype 4 - Multi-Agent Loop

Four agents running simultaneously on the same model: one making layout changes based on a design brief, one resolving clashes, one checking compliance against the rule set from Prototype 2, one tracking cost. No handoffs. No waiting.
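The loop itself can be sketched with placeholder agent logic. In the prototype each role sat behind an LLM; here each is a deterministic stub, which is enough to show the structure: every agent reads and writes the same live state, with no sequential handoff between disciplines:

```typescript
interface SharedModel {
  rooms: number;       // current layout, crudely summarised
  targetRooms: number; // the design brief
  clashes: number;
  violations: number;
  cost: number;
}

type Agent = { name: string; act: (m: SharedModel) => void };

// Placeholder logic standing in for LLM-backed specialists.
const agents: Agent[] = [
  // Layout agent: works toward the brief, sometimes introducing a clash.
  { name: "layout", act: (m) => { if (m.rooms < m.targetRooms) { m.rooms++; m.clashes++; } } },
  // Clash agent: resolves one clash per pass.
  { name: "clash", act: (m) => { if (m.clashes > 0) m.clashes--; } },
  // Compliance agent: re-checks the whole model every pass.
  { name: "compliance", act: (m) => { m.violations = m.clashes; } },
  // Cost agent: re-prices after every change (rate is invented).
  { name: "cost", act: (m) => { m.cost = m.rooms * 10_000; } },
];

function runLoop(model: SharedModel, rounds: number): SharedModel {
  for (let i = 0; i < rounds; i++) {
    for (const agent of agents) agent.act(model); // all see the same live state
  }
  return model;
}
```

The interesting behaviour, even in a toy, is that no agent waits for a file: the clash and compliance agents react within the same pass as the layout change that triggered them.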


What Comes Next

My next post will cover what each prototype actually showed - what worked, what didn't, and the questions that building them raised. Not code tutorials. Just an account of where the propositions hold and where they get complicated.

On the LinkedIn version of this post, Rob Asher left a comment worth following up on: that the future of BIM might be considerably less BIM - that the right abstraction for an agentic environment might not be IFC objects at all, but something closer to a spatial database. He also correctly called the prototypes toys. Which is exactly why I could vibe-code them in a day. That's the point.

More soon.
