One Degree Off

ai · workflow · architecture

An error of one degree is barely perceptible. Walk a hundred feet at one degree off course and you'll be about two feet from where you intended. Close enough that you'd never notice. Walk ten miles and you're off by over 900 feet. You're not just slightly wrong. You're in a different place entirely.

The error doesn't change. One degree is one degree at step one and at step ten thousand. What changes is the cost of correction. After a hundred feet, you turn slightly and you're fine. After ten miles, you're looking at a long walk back.
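
The arithmetic behind those numbers is a one-line trig calculation. A quick sketch (the specific distances are just the examples above):

```python
import math

def off_course(distance: float, error_degrees: float = 1.0) -> float:
    """Lateral drift after traveling `distance` with a constant heading error."""
    return distance * math.sin(math.radians(error_degrees))

FEET_PER_MILE = 5280

print(round(off_course(100), 1))               # ~1.7 ft after a hundred feet
print(round(off_course(10 * FEET_PER_MILE)))   # ~921 ft after ten miles
```

The error term is linear in distance: nothing about the angle changes, only how far you've carried it.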

I think about this constantly when I'm working with AI coding tools. And it matters more now than it did a year ago, because agentic coding has dramatically increased the speed of implementation. Going the wrong direction faster is not a minor version of going the wrong direction slowly. It's actively worse. You cover more ground, build more structure on top of the flawed foundation, and the cost of correction grows with every confident step in the wrong direction. Velocity without alignment is just expensive wandering.

The loop

Here's a pattern I've seen repeatedly when I let an AI agent run too long without steering. I give it a complex task, something with multiple moving parts, like designing a role-based permission system with layered security rules. The agent starts building. Its first decision is reasonable. Its second decision builds on the first. By the fifth or sixth decision, it's confidently building in a direction that's slightly wrong, and each new layer of implementation is compounding that initial misalignment.

Then it hits a problem. Something doesn't fit. Instead of questioning the direction, it solves the immediate problem, which creates a new problem, which it solves, which creates another problem. I've started calling this the solve-break-solve loop. The agent spirals into exploding complexity that never actually resolves the original issue. It's not failing in a way that's obvious. It's failing in a way that looks like progress.

If you've ever watched this happen, you know the feeling. You come back after letting it work for a while, look at the diff, and realize it's built an elaborate structure that's fundamentally pointed the wrong direction. The code works in isolation. Each individual decision is defensible. But the aggregate is wrong, and the cost of correcting it is now much higher than the cost of having steered earlier.

Steer before the maze

My workflow now is built around one principle: steer early, before there's code to defend.

I avoid writing any code for the first couple of iterations. Instead, I work in plan mode. I have the AI generate markdown plans. Not code, not pseudocode, actual written descriptions of the approach, the data structures, the flow of logic. I want to see the thinking before I see the implementation.

Then I have agents critique the plan from different angles. What are the edge cases? What happens under concurrent access? Where does this design create coupling that will hurt later? I collect these critiques into a table, a structured view of the risks and tradeoffs rather than a wall of prose I have to parse.

I use these critiques to refine the plan, and I focus heavily on generating Mermaid diagrams to visualize the connections between components and the flow of data through the system. I want to see the architecture spatially, boxes and arrows rather than paragraphs. When something doesn't connect, it's immediately visible in a diagram in a way that it's not in a written description.

Only after the plan has been critiqued, refined, and diagrammed do I start writing code. And when I do, the agent has a clear map to follow. The one-degree errors get caught when they're still cheap to fix, before they've been baked into an implementation that the agent will now try to defend.

The permissions example

The clearest example of this in my own work was designing the role and permission system for T3 Books. This is a multi-tenant SaaS where different user roles (admins, bookkeepers, decision-makers, owners) have complex, overlapping read and write permissions. The security rules live at the intersection of data structure and functional requirements. Get the data model wrong and the security rules become contorted. Get the security rules wrong and the data model needs restructuring. Everything is coupled.

Without the planning workflow, what I got was a permission system that worked for the first two roles, then started creating special cases for the third, then needed exceptions for the fourth. The security rules became a thicket of conditionals, and each fix introduced a new edge case. The AI was wandering. It had finished the ten-mile hike, arrived in the wrong place, and was now bushwhacking through the woods trying to find the destination after the fact. Every correction was local and reactive, never stepping back to ask whether the path itself was wrong.

With the planning workflow, the same problem went differently. Markdown plan first. Critique from the angles of data access patterns, concurrent editing scenarios, and the principle of least privilege. Then a Mermaid diagram. Here's a simplified version of what the actual planning artifact looked like:

flowchart TB
    subgraph "User Roles"
        OW[Owner]
        AD[Admin]
        BK[Bookkeeper]
        DM[Decision Maker]
    end

    subgraph "Data Access"
        JE[(Journal Entries)]
        AC[(Accounts & Funds)]
        RP[(Reports)]
        US[(User Management)]
        SUB[(Subscription)]
    end

    OW -->|read/write| JE
    OW -->|read/write| AC
    OW -->|read/write| RP
    OW -->|read/write| US
    OW -->|read/write| SUB

    AD -->|read/write| JE
    AD -->|read/write| AC
    AD -->|read/write| RP
    AD -->|read/write| US
    AD -.->|no access| SUB

    BK -->|read/write| JE
    BK -->|read| AC
    BK -->|read| RP
    BK -.->|no access| US
    BK -.->|no access| SUB

    DM -.->|no access| JE
    DM -->|read| AC
    DM -->|read| RP
    DM -.->|no access| US
    DM -.->|no access| SUB

Laid out like this, I could see the symmetries and exceptions at a glance. The Decision Maker role has no write access to anything. It's read-only by design, not by accident. The Bookkeeper can write transactions but can't touch the organizational structure. These aren't things you discover in the middle of writing security rules. You decide them in a diagram, argue about them in a critique, and then implement them with confidence.
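
The same matrix can be captured directly as data, which makes those symmetries checkable rather than just visible. A minimal sketch (role and resource names follow the diagram; the `can` helper is my own illustration, not T3 Books code, where the real enforcement lives in security rules):

```python
# Permission matrix transcribed from the diagram: role -> resource -> access level.
# Absent entries mean no access.
PERMISSIONS = {
    "owner":          {"journal_entries": "rw", "accounts": "rw", "reports": "rw",
                       "users": "rw", "subscription": "rw"},
    "admin":          {"journal_entries": "rw", "accounts": "rw", "reports": "rw",
                       "users": "rw"},
    "bookkeeper":     {"journal_entries": "rw", "accounts": "r", "reports": "r"},
    "decision_maker": {"accounts": "r", "reports": "r"},
}

def can(role: str, action: str, resource: str) -> bool:
    """action is 'read' or 'write'."""
    level = PERMISSIONS.get(role, {}).get(resource, "")
    return {"read": "r", "write": "w"}[action] in level

# The diagram's invariant, stated as an assertion: decision makers never write.
ALL_RESOURCES = ["journal_entries", "accounts", "reports", "users", "subscription"]
assert not any(can("decision_maker", "write", r) for r in ALL_RESOURCES)
```

A table like this is the kind of artifact an agent can implement against without inventing special cases, because every role-resource pair already has an answer.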

I built similar diagrams for the data sync pipeline that connects Firestore to BigQuery for reporting, the system I wrote about in a previous post. Here's the kind of flow diagram that made the architecture visible before I wrote any sync code:

flowchart LR
    subgraph "Normal Path"
        FS1[Firestore Write] --> AT[Activity Tracker]
        AT --> DW[Direct Writer]
        DW --> BQ1[BigQuery]
        DW -->|on error| SE[Sync Errors]
    end

    subgraph "Repair Path · 2 AM"
        SE --> ER[Error Repair]
        ER --> BQ1
    end

    subgraph "Reconciliation · 4 AM"
        FS2[(Firestore)] --> RC[Reconciliation]
        BQ2[(BigQuery)] --> RC
        RC -->|discrepancy| DS[Auto-Repair]
        DS --> BQ2
    end

Three paths, clearly separated. The normal path handles the happy case. The repair path handles known failures. The reconciliation path catches everything else, the silent failures that the other two paths miss. I could see this architecture before I wrote a line of code, and I could critique it: what happens if the repair job and the reconciliation job run at the same time? What if a transaction is edited between detection and repair? These questions are cheap to answer in a diagram and expensive to answer in production.
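
The reconciliation path, at its core, is a set comparison between the two stores. A hypothetical sketch, with Firestore documents and BigQuery rows faked as dicts keyed by ID (the real pipeline's names and repair mechanics aren't shown in this post):

```python
def reconcile(firestore_docs: dict, bigquery_rows: dict) -> dict:
    """Compare the source of truth against the replica and classify discrepancies."""
    missing = [k for k in firestore_docs if k not in bigquery_rows]   # silent sync failures
    stale = [k for k in firestore_docs
             if k in bigquery_rows and bigquery_rows[k] != firestore_docs[k]]  # edited after sync
    orphaned = [k for k in bigquery_rows if k not in firestore_docs]  # deleted upstream
    return {"missing": missing, "stale": stale, "orphaned": orphaned}

# Each bucket maps to an auto-repair action: re-insert, overwrite, or delete.
report = reconcile(
    {"tx1": {"amount": 100}, "tx2": {"amount": 250}},
    {"tx1": {"amount": 100}, "tx3": {"amount": 75}},
)
# tx2 never reached BigQuery; tx3 was deleted from Firestore but lingers in BigQuery.
```

Seeing the paths as a diagram first made questions like the concurrent-run one askable before any of this logic existed.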

The big corrections happened early, when they were cheap. Adjusting the data model before a single security rule existed. The small corrections happened later, when they were small. By the time I was writing actual code, the hard decisions had already been made.

The difference wasn't that the AI was smarter. It was that I front-loaded the steering. The most aggressive corrections came earliest, when the cost of a course change was a revised diagram instead of a rewritten system.

What this has to do with organic chemistry

I teach organic chemistry at a community college, and I see the exact same pattern in my students. When they encounter a synthesis problem (figure out how to turn molecule A into molecule B) the ones who struggle are the ones who immediately start applying reactions. They grab the first technique that looks applicable and start building a pathway. When it dead-ends, they backtrack and try another reaction. They end up deep in the maze, staring at a dead end, with no sense of how they got there or what went wrong.

The students who succeed are the ones who stop before entering the maze. They look at the starting material and the target and ask: what's actually different between these two structures? Where do I need to build complexity, and where do I need to simplify? What's the big picture of what's happening here? They draw it out. They see the whole path before they take the first step.

I've been teaching this way for years, trying to get students to diagram before they calculate, to ask the framing question before they apply the technique. I didn't realize until recently that I'd developed the exact same instinct for working with AI tools, for the same reasons. The failure mode is identical: premature commitment to a direction that compounds errors with every subsequent step.

The principle

The underlying idea is simple enough to fit in a sentence: the cost of correction increases with distance from the point of error.

In navigation, the error is angular and the distance is physical. In AI-assisted development, the error is architectural and the distance is measured in lines of code built on top of a flawed assumption. In organic chemistry, the error is strategic and the distance is measured in reaction steps built on a flawed retrosynthetic analysis.

In all three cases, the intervention is the same: invest in visibility before you invest in motion. See the whole path before you take the first step. The time you spend planning feels like you're not making progress. It's the most productive work you'll do.

This doesn't mean plan everything to death. It means plan enough that your first step is pointed in the right direction. One degree matters.