The software industry has been here before. Sort of.
Pricing software has never been straightforward. But AI has introduced a new version of a very old problem: how do you charge for something when the cost of delivering it is variable, real, and directly tied to how much the customer uses it?
To understand why this is hard, it helps to understand how we got here.
The Perpetual Era: You Pay Once, We Upgrade You
For most of software's commercial history, you bought a licence. You owned it. It sat on your machine, ran without any ongoing cost to the vendor, and you decided whether to pay for the next version — which is why annual maintenance and upgrade agreements existed. The vendor's problem was convincing you to keep upgrading rather than staying on version 2014 forever.
The economics were simple: high upfront cost, near-zero marginal cost per user once the software shipped. Hosting costs were yours, not theirs. Gross margins in this model could reach 80–90%.
The SaaS Era: Per Seat Becomes the Default
Then software moved to the cloud, and with it came the subscription model. The vendor now hosted everything, maintained the infrastructure, pushed updates automatically.
The pitch to customers was lower upfront cost, always-current software, pay-as-you-scale.
The pitch to investors was predictable recurring revenue and the famous SaaS gross margin profile — still high, because hosting and infrastructure costs, while real, were small relative to the subscription price.
Per seat pricing dominated because it was simple to understand, easy to budget, and aligned reasonably well with how software was actually used — more users meant more value, and more revenue for the vendor.
It also evolved. Usage-based models emerged where seat counts didn't capture value well (Snowflake charging for compute, Stripe charging a percentage of transaction volume). Tiered models developed to capture different segments. Enterprise agreements added volume discounts and custom bundles. But the core assumption held: the marginal cost of an additional user was low enough that the pricing model didn't need to directly reflect it.
The AI Era: Marginal Costs Are Back
Here's where it gets interesting.
AI has reintroduced something the SaaS model largely abstracted away: meaningful marginal cost. Every AI-generated response has a real cost. Every call to a large language model goes through an API, and that API charges by the token, the query, or the call. When your product's core feature is "ask it a question and it answers", your cost base scales directly with usage in a way that traditional SaaS hosting costs simply don't.
The most instructive responses have come not from AI-native startups but from established SaaS companies that have bolted AI onto existing products and had to decide, explicitly, what to charge for it.
Help Scout charges $0.75 per AI resolution (a support query handled entirely by AI, without human intervention, however many turns it takes). Intercom's Fin charges $0.99 on the same principle: you only pay when the AI successfully resolves the issue. Both pricing models are explicit acknowledgments that AI support has a real marginal cost that can't be absorbed into a flat monthly fee at scale. The per-resolution model passes that cost through in a way that's tied to the outcome delivered.
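To make the per-resolution unit economics concrete, here's a rough sketch using the $0.75 figure above. The token counts and per-token model rates are illustrative assumptions, not vendor numbers:

```python
# Hypothetical unit economics for per-resolution AI support pricing.
# The $0.75 price mirrors the Help Scout figure above; token volumes
# and model rates are invented for illustration.

PRICE_PER_RESOLUTION = 0.75     # what the customer pays per AI-resolved query

INPUT_TOKENS = 12_000           # assumed: context + conversation history across turns
OUTPUT_TOKENS = 2_000           # assumed: generated replies
INPUT_RATE = 3.00 / 1_000_000   # assumed $ per input token
OUTPUT_RATE = 15.00 / 1_000_000 # assumed $ per output token

def resolution_margin(input_tokens: int, output_tokens: int) -> tuple[float, float]:
    """Return (model cost, gross margin fraction) for one resolution."""
    cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    margin = (PRICE_PER_RESOLUTION - cost) / PRICE_PER_RESOLUTION
    return cost, margin

cost, margin = resolution_margin(INPUT_TOKENS, OUTPUT_TOKENS)
print(f"model cost per resolution: ${cost:.3f}")
print(f"gross margin per resolution: {margin:.0%}")
```

Under these assumed numbers the margin per resolution is healthy; the point of the model is that it stays healthy however much total usage grows, because price and cost move together.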
Others have taken the opposite approach, folding AI into existing tiers and treating it as a feature rather than a usage line. This is the simpler commercial decision in the short term. It's also the one that creates a cost problem later, as usage scales and the AI bill grows independently of subscription revenue.
The foundation model providers themselves (OpenAI, Anthropic, Google) have landed on a subscription hybrid: a base subscription tier combined with usage-based overage for anything above a threshold. It's an acknowledgment that AI has two cost components: access and consumption. Most enterprise buyers understand this model; it maps to how cloud infrastructure has always been priced.
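The hybrid is simple to express. A minimal sketch, with the tier fee, included quota, and overage rate all invented for illustration:

```python
# Base-plus-overage billing, the hybrid described above.
# All figures are hypothetical, not any vendor's actual pricing.

def monthly_bill(base_fee: float, included_units: int,
                 overage_rate: float, units_used: int) -> float:
    """Base subscription plus metered overage above the included quota."""
    overage_units = max(0, units_used - included_units)
    return base_fee + overage_units * overage_rate

# e.g. $30/month with 1,000 included AI queries, $0.02 per extra query
print(monthly_bill(30.0, 1_000, 0.02, 800))    # under quota: just the base fee
print(monthly_bill(30.0, 1_000, 0.02, 4_500))  # 3,500 overage queries billed on top
```

The design choice is where the threshold sits: high enough that light users never see an overage line, low enough that heavy users' bills track the vendor's inference costs.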
The Uncomfortable Math
This matters beyond pricing philosophy because some of the fastest-growing AI companies are discovering a painful version of this cost problem right now.
Cursor, Replit, Bolt and others have generated extraordinary revenue growth headlines: $100M ARR in five months, growth rates measured in thousands of percent. And some of them are spending more on AI model costs than they're generating in revenue. One widely circulated description characterised the economics as "absolutely abysmal." Replit's gross margins swung between 36% and negative 14% during 2025, driven by the cost of LLM access for its coding agents. Bolt was operating at roughly 40% gross margins as of May 2025: viable, but a long way from the 70–80% that SaaS investors expect.
The "$100M ARR with a $120M model provider bill" meme exists for a reason.
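The meme's arithmetic is worth writing down, because it's what gross margin looks like when the model bill exceeds revenue:

```python
# The gross margin implied by the "$100M ARR, $120M model bill" meme.
# The figures come from the meme itself, not any specific company.

def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

arr = 100_000_000          # $100M annual recurring revenue
model_bill = 120_000_000   # $120M paid to the model provider

print(f"{gross_margin(arr, model_bill):.0%}")  # every dollar of revenue costs $1.20 to serve
```

A negative 20% gross margin means growth makes the hole deeper: each new customer adds more cost than revenue until pricing or model costs change.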
This isn't a failure of these companies. It's the honest economics of a model where the product is the AI inference, and the pricing hasn't yet caught up to the cost structure. Some of this will resolve as model costs fall (they have been falling). Some of it will require pricing discipline that growth-at-all-costs thinking doesn't always produce.
What This Means for AEC Software
Most AEC software companies are adding AI features to existing products right now. The instinct is to include AI as part of the existing tier — it's a feature, not a separate thing, and customers don't want complexity.
That instinct is understandable. It will also cost you, at scale.
A few questions worth working through:
Which AI features have real per-query costs and which don't? A smart default in a form is different from a full model interrogation. Knowing which is which matters before you bundle them.
Is your AI feature a workflow accelerator or a query engine? Workflow accelerators (autocomplete, smart suggestions, automated classification) have manageable marginal costs. Query engines (ask the model a question, get a substantive answer) do not. They require different pricing logic.
Are you building a pricing model now or retrofitting it later? Retrofitting pricing into a product that users are already treating as free is painful. The AI-first companies are learning this expensively. You don't have to.
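One way to pressure-test the accelerator-versus-query-engine distinction from the questions above is to put rough numbers on it. Every call volume and model rate below is an assumption for illustration:

```python
# Comparing per-user daily inference cost for the two feature classes.
# Volumes and $-per-million-token rates are illustrative assumptions.

def daily_cost_per_user(calls_per_day: int, avg_tokens: int,
                        rate_per_mtok: float) -> float:
    """Inference spend per active user per day."""
    return calls_per_day * avg_tokens * rate_per_mtok / 1_000_000

# Workflow accelerator: many tiny calls to a small, cheap model
accel = daily_cost_per_user(calls_per_day=200, avg_tokens=300, rate_per_mtok=0.15)

# Query engine: fewer calls, large context, frontier-model pricing
query = daily_cost_per_user(calls_per_day=20, avg_tokens=20_000, rate_per_mtok=5.00)

print(f"accelerator: ${accel:.4f} per user per day")
print(f"query engine: ${query:.2f} per user per day")
```

Under these assumptions the two differ by more than two orders of magnitude per user per day, which is why they can't share the same pricing logic.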
And then there's the question that nobody has cleanly answered yet: how do agentic workflows get priced?
An AI agent that autonomously coordinates a clash detection run, generates an RFI response, and updates the project register isn't a seat. It isn't a query. It might run for twenty minutes and consume more compute than a hundred standard interactions. Does it get charged per task? Per minute of runtime? Per outcome? Does the "seat" concept even make sense when the agent works overnight without a human in the loop?
These aren't hypothetical questions — they're the pricing decisions that AEC software companies building agentic features will face within the next twelve to eighteen months. The per-seat model that underwrote a decade of SaaS growth wasn't designed for this. What replaces it is still being figured out.
The AEC industry runs on fixed fees and predictable costs — surprise overage invoices are not culturally acceptable, and no client wants to explain an unexpected AI bill on a project account. Guardrails and usage limits are the obvious response, and they work. But agentic workflows introduce a specific wrinkle: an agent that hits a cost ceiling halfway through coordinating a clash detection run and simply stops isn't a cost-saving measure — it's an incomplete deliverable. The guardrail that protects the budget can break the output. How limits get set, communicated, and handled mid-task is a design and commercial problem that nobody has cleanly resolved yet.
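The wrinkle is easy to see in a toy agent loop with a hard cost ceiling. Every step name and cost here is hypothetical, but the failure mode is the one described above: the guardrail fires mid-workflow and leaves a partial deliverable:

```python
# A toy agent runner with a hard spend ceiling. Steps and costs are
# hypothetical; the point is where the ceiling interrupts the workflow.

class CostCeilingReached(Exception):
    pass

def run_agent(steps, ceiling: float):
    """Execute steps in order, refusing any step that would exceed the ceiling.

    Returns (completed step names, total spend). Raising mid-workflow is
    exactly the incomplete-deliverable problem: later steps never run.
    """
    spend, done = 0.0, []
    for name, cost in steps:
        if spend + cost > ceiling:
            raise CostCeilingReached(
                f"stopped before '{name}': would exceed ${ceiling:.2f} "
                f"(completed so far: {done})")
        spend += cost
        done.append(name)
    return done, spend

workflow = [("clash detection run", 4.00),
            ("draft RFI response", 1.50),
            ("update project register", 0.25)]

try:
    run_agent(workflow, ceiling=5.00)
except CostCeilingReached as e:
    print(e)  # the clash run completes, but the RFI and register update never happen
```

The commercial design question is what happens at that exception: pause and ask the customer, roll back, or finish the task and eat the overage. Each answer is a different pricing model.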
The transition from perpetual licensing to SaaS took a decade and required the industry to fundamentally rebuild its pricing logic. The AI transition is faster and the economics are sharper. The companies that understand their cost structure clearly — and price accordingly — are the ones that end up with a business, not just growth.
This is exactly the kind of commercial strategy question I work through with AEC software founders. If you're navigating the pricing side of adding AI to your product, let's talk.