AI agents are already starting to change how work gets done inside organisations. But AI isn’t magic: like any transformation project, you need structure. Now that we’ve gathered some experience doing this in the real world, it’s time to boil it down to a catchy slogan.
That’s the three Cs of production AI: Context, Constraints, Control.
Context is the information an agent needs to do useful work. When humans make decisions, they draw on a vast pool of knowledge: the hastily-written ticket description, a paragraph buried in the company handbook, a conversation with the client in the pub. Agents somehow need to be given all this context too, but (so far) they lack the ability to structure and retrieve their own memories (and you’re far less likely to chat with one over a pint). Providing them with just the right information to make the right decision right now is context engineering, and that’ll be the big story of 2026.
Constraints are what the agent can do and (importantly) what it can’t. Agents need tools to interact with other systems, and just like human tools, they need to be well-designed, predictable and safe.
Control is what makes agentic systems manageable. Humans still need to be accountable, so they still need to be in the loop – whether that’s feedback and evaluation early in the project, or signoff and auditing of each action the agent takes.
These three principles apply in two directions. When we build AI systems for clients, they’re our design checklist. When we use AI to accelerate our engineering capabilities, they’re what keeps our codebase clean. The same guardrails work both ways.
When we build agentic systems for clients, we’re building the deterministic layer: the system of record and the control surfaces that the agent interacts with. In many ways, this is just like building a “traditional” web app designed for humans: it has to work the same way every time, and it has to be correct, exactly like the apps we’ve been building for years. AI doesn’t replace the software; it replaces (or accelerates) the human using the software to achieve complex goals.
The three Cs are our design checklist.
An agent processing a customer support ticket doesn’t need access to your entire customer database. It needs the case ID, the last few interactions, the customer’s support tier, and the relevant product documentation. That’s it.
In practice, this means domain modelling and explicit boundaries. We build a function that assembles exactly the context the agent needs, not a raw database connection.
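As a sketch of what that context-assembly function might look like (the names `TicketContext` and `build_ticket_context`, and the data shapes, are illustrative assumptions, not a real API):

```python
from dataclasses import dataclass

@dataclass
class TicketContext:
    """Exactly the fields the agent needs -- nothing more."""
    case_id: str
    support_tier: str
    recent_interactions: list
    docs: list

def build_ticket_context(case_id, db, docs_index, max_interactions=5):
    """Assemble a bounded context for one support ticket.

    `db` is a plain mapping standing in for the system of record; the
    agent never touches it directly, only the result of this function.
    """
    ticket = db["tickets"][case_id]
    customer = db["customers"][ticket["customer_id"]]
    return TicketContext(
        case_id=case_id,
        support_tier=customer["tier"],
        recent_interactions=ticket["interactions"][-max_interactions:],
        docs=docs_index.get(ticket["product"], []),
    )
```

The point of the design is the boundary: the agent’s view of the world is whatever this function returns, and nothing else.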
An agent with unrestricted access could read data it shouldn’t, trigger workflows it doesn’t understand, or make changes that violate business rules. You wouldn’t give a new member of staff admin access on day one.
We design narrow tool surfaces. The agent gets access to a limited number of very specific actions, depending on the task. Each action has validation, permissions checks, and rate limits built in.
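A minimal sketch of a narrow tool surface, assuming a simple in-process registry (the `tool` decorator, the permission strings and the refund limit are all illustrative):

```python
# Illustrative registry: every action the agent may take is registered
# explicitly; nothing else is callable.
TOOLS = {}

def tool(name, permission):
    """Register an action with a built-in permission check."""
    def register(fn):
        def guarded(agent_perms, **kwargs):
            if permission not in agent_perms:
                raise PermissionError(f"{name} requires {permission!r}")
            return fn(**kwargs)
        TOOLS[name] = guarded
        return guarded
    return register

@tool("issue_refund", permission="billing:write")
def issue_refund(case_id, amount_pence):
    # Validation lives inside the tool, not in the agent's prompt.
    if not 0 < amount_pence <= 10_000:
        raise ValueError("refund outside allowed range")
    return {"case_id": case_id, "refunded_pence": amount_pence}
```

Because the checks live in the tool itself, they hold no matter what the model decides to ask for.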
An agent runs overnight and processes 500 invoices. In the morning you discover it misunderstood a business rule. Without an audit trail, without rollback, you have no way to see what it actually did or fix it.
We build audit trails first, automation second. Every agent action is logged with full context: who triggered it, what it did, when it ran, and the reasoning behind the decision. High-risk actions require human approval before execution.
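In miniature, the audit-first approach might look like this (the `execute` function, the `HIGH_RISK` set and the log shape are illustrative assumptions; in production the log would be an append-only table, not a list):

```python
from datetime import datetime, timezone

AUDIT_LOG = []                  # stand-in for an append-only audit table
HIGH_RISK = {"issue_refund"}    # illustrative set of gated actions

def execute(action, args, triggered_by, reasoning, approved=False):
    """Log first; hold high-risk actions until a human approves them."""
    gated = action in HIGH_RISK and not approved
    entry = {
        "action": action,
        "args": args,
        "who": triggered_by,
        "when": datetime.now(timezone.utc).isoformat(),
        "why": reasoning,
        "status": "pending_approval" if gated else "executed",
    }
    AUDIT_LOG.append(entry)
    return entry
```

With this shape, the overnight-invoices scenario above becomes answerable: the log says exactly what ran, when, and why, and the gated actions never ran at all.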
This is what we already do.
Domain modelling. Business logic. Permissions layers. Audit trails. Explicit validation. The same guardrails that make a payroll system safe make an agentic workflow safe.
We’re not learning a new discipline. We’re applying the same one we’ve used for 15 years.
It’s impossible to ignore the rise of AI coding tools, but left to their own devices, models tend to fall into the same traps as a junior developer learning on the fly from Stack Overflow. The risk isn’t that they write bad code, it’s that they write code that looks plausible, but introduces hidden complexity or strays from your carefully constructed architecture.
To get better results from coding agents, you just need to apply the three Cs.
Having a set of clear, well-documented guardrails (like our Django RAPID architecture) and clearly defined engineering processes helps with this, because it enforces the three Cs on everything, including code an LLM generates.
Modern coding agents are extremely good at gathering codebase context by exploring and reading files, and they tend to find and follow the patterns that already exist in the code. But a set of documented, unambiguous guidelines helps the agent make better decisions when it steps outside those lines, especially on a brand new codebase.
As well as codebase context, they need human context. This is where our expertise in product management helps: a good, well written ticket with all the correct background information, acceptance criteria and desired user experience helps the agent write code that actually solves the right problem.
Documentation and clear context put the agent into a space where it prefers to write code that fits our patterns. But making sure the code is actually correct needs validation. The agent explores the space of possibilities; something needs to “push back” and constrain it to do things properly.
This is where our existing well-established engineering practices work in our favour: strict linting, automatic code formatting, automated test coverage and continuous integration all provide feedback to the agent about whether its changes are correct. Crucially, the agent is able to continue working for longer until all of the tests pass. We get better results with less human intervention.
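The feedback loop above can be sketched with the agent and the check suite abstracted as callables (`run_until_green`, `agent_step` and `run_checks` are illustrative names, with the real lint/format/test/CI pipeline behind `run_checks`):

```python
def run_until_green(agent_step, run_checks, max_attempts=5):
    """Drive the agent in a loop: run the checks, feed failures back, repeat.

    `run_checks` returns (ok, feedback); `agent_step` is the agent
    revising the code in response to that feedback. Returns the attempt
    number on which the checks first passed.
    """
    for attempt in range(1, max_attempts + 1):
        ok, feedback = run_checks()
        if ok:
            return attempt
        agent_step(feedback)
    raise RuntimeError(f"checks still failing after {max_attempts} attempts")
```

The design choice that matters is that the pass/fail signal comes from deterministic tooling, not from the model’s own judgement of its work.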
Even as AI agents write more and more of the actual code, we still need to take responsibility and ownership to ensure our clients continue to trust us to deliver valuable features. Every change made by a coding model is co-authored by an engineer, and goes through exactly the same process as a change made by a human.
Expert engineers steer the agents in real time to keep them on the rails, and every line of code is checked before entering our version control system. Code is reviewed by both humans and an AI review bot (having adversarial models critique each other’s work is a great pattern for improving results, by the way). And every piece of work goes through our rigorous internal testing and QA process before getting anywhere near production.
Vibe coding is a fun distraction and great for throwaway apps. But AI hasn’t taken away the need for rigorous software engineering practices. If anything, it’s made them even more crucial.
The same principles apply whether you’re building an AI system or using AI to build a system.
In both cases, the hard part isn’t the AI. It’s the guardrails.
Guardrails are what turn clever demos into production systems. They’re what make agentic workflows trustworthy. And they’re what protect your codebase from drift, whether the drift comes from a junior developer, a rushed deadline, or an overconfident LLM.
We’ve been building systems with strong guardrails for 15 years. That’s why we’re good at building AI systems for clients. And it’s why AI makes us faster without making us sloppier.
Most businesses know they need to adopt AI somehow, but don’t know where to start. Others have tried one or two AI-supported workflows with disastrous (or, perhaps worse, disappointing) results. The three Cs give you a way to structure your AI adoption project: what does the agent know, what can it do, and how do we steer it? Solve those three and you’re way ahead of how most companies are thinking in this space.
We can help at all levels of this transformation. We can provide expert business advice to support you in understanding which changes will be most impactful before you write a single line of code. We can design and build secure systems that work for both humans and AI agents. We can accelerate the build timeline with agents that produce code an engineer would’ve written. And we can train your team to do the same.
The world is changing fast, but experience and knowledge about how to build valuable products is timeless.