# Glenn Eggleton — Full Blog Archive

> Concatenated markdown source of every published blog post on glenneggleton.com. Provided for AI agents and language models per the llms.txt convention.

Generated: 2026-05-07T08:19:26.571Z
Source: https://glenneggleton.com/llms-full.txt

---

# The Human Is the Next Bottleneck (And Scrum Won't Survive)

URL: https://glenneggleton.com/blog/the-human-is-the-next-bottleneck
Date: 2026-05-06
Tags: ai, leadership, engineering-culture, hot-take

> AI didn't remove the bottleneck. It moved it onto the human running the agents. Scrum dies. Build-then-cooldown cycles take its place.

*The next twelve months of AI throughput are not decided by the model. They are decided by whether the operator burns out.*

The bottleneck used to be the model. It isn't anymore.

For the last two years, every conversation about AI throughput has been about the system: bigger context windows, smarter agents, better tools, faster inference. We optimized the loop on the AI side because that is where the obvious slowness lived. Now the loop is fast. What is slow is the human at the other end.

That's the post. AI didn't remove the bottleneck. It moved it onto the human running the agents, and most teams haven't noticed yet because their process (Scrum, continuous sprints, "always be shipping") is built on the assumption that the bottleneck still lives in the build phase. It doesn't. The bottleneck now lives in the operator's working memory, and Scrum is actively making it worse.

This is a 2026 problem, not a 2028 problem. I'm seeing it in my own work and in every team I'm engaged with right now.

## The wall is at five

I hit the agent-management wall at five. Specifically: five agents working on five completely separate things.

Not five agents doing related work. Those compose. You can hold the shared context in your head and switch between them cheaply. Five agents on five unrelated streams is a different shape entirely. By the third one, you're paying real cost on every context switch. By the fourth, you're rubber-stamping. By the fifth, you've stopped reading the diffs and you're approving things you don't fully understand.

I don't think five is special. The number is somewhere in that range for most operators, bounded by working memory rather than skill. The exact value depends on how related the streams are, how mature your patterns are, and how much sleep you've had. But there is a number, you will find it, and it will be lower than you expect.

This isn't the same as "I can't manage 50 reports." A direct report owns their context. They synthesize the day's work and bring you the decisions that need a human. An agent doesn't synthesize. It generates output and asks you to approve every meaningful step. Five agents running in parallel are not five reports. They're five very fast, very confident interns who all need a reply right now.

This is a wetware ceiling, not a skill ceiling. The teams treating it as a skill ceiling are about to be very surprised by how it scales.

## N agents, N² decisions

Here's the part nobody priced in.

When you add a sixth agent to a five-agent setup, you don't get one more stream of work. You get one more stream *plus* the cost of everything that sixth stream now has to be reconciled against. Did the auth pattern that agent 6 just chose match the one agent 2 is using? Does agent 6's data model conflict with agent 4's migration? Is agent 6's PR going to step on the file that agent 1 has open?

Decision load doesn't scale linearly with agent count. It scales closer to the square of it, because the decisions aren't just "approve this PR." They are "approve this PR *given* the state of every other agent's in-flight work." Every agent you add multiplies the cross-checks the operator has to run.

```
+-------------------+-----+-----+-----+-----+-----+-----+
| Agents in flight  |  1  |  2  |  3  |  4  |  5  |  6  |
+-------------------+-----+-----+-----+-----+-----+-----+
| Streams to follow |  1  |  2  |  3  |  4  |  5  |  6  |
| Cross-checks      |  0  |  1  |  3  |  6  | 10  | 15  |
| Total decisions   |  1  |  3  |  6  | 10  | 15  | 21  |
+-------------------+-----+-----+-----+-----+-----+-----+

decision load grows as N + N(N-1)/2
the 6th agent costs +6 decisions, not +1
```
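If you want to watch that curve get away from you, here is the same arithmetic as the table, as a minimal sketch you can run (nothing here beyond the formula above):

```python
# Decision load for N parallel agents, per the table above:
# one approval stream per agent, plus one cross-check per pair of in-flight agents.
def decision_load(n: int) -> int:
    streams = n                       # "approve this PR" decisions
    cross_checks = n * (n - 1) // 2   # "given every other agent's work" checks
    return streams + cross_checks

for n in range(1, 9):
    marginal = decision_load(n) - decision_load(n - 1)
    print(f"{n} agents -> {decision_load(n):2d} decisions (+{marginal} for the newest agent)")
```

The marginal column is the whole point: the agent you just added is always the most expensive one you are running.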
The build phase compressed. The decision phase didn't. And the decision phase is the part the human owns.

I wrote about this from a different angle in [the post on chunking](/blog/spec-driven-development-the-chunking-problem): the work shrunk and the *deciding* didn't. Same shape, one layer up. There the bottleneck was figuring out where to cut a feature. Here it's figuring out which of five streams is about to drift, which decision needs your attention, which one can wait. Same loss of leverage. Same place the day disappears.

## Burnout is not an edge case

I tried speed-running agents for a stretch. Maximum parallelism, maximum throughput, treat it as a sport. The output was real. The cost was also real. I burned out.

The story I want to head off here is "you just need to manage your energy better" or "you need better tooling." Both of those are partially true and entirely beside the point. The cost wasn't a discipline failure. It was the predictable output of running the human at agent-pace continuously, with no built-in cooldown, while the process I was operating inside (the same process most teams operate inside) said *keep going, you're shipping*.

Scrum was designed for human-paced output. The sprint cadence assumes the bottleneck is the build, the build is paced by humans, and the way to ship more is to keep the build phase running. Stand-up, sprint, retro, repeat. There is no first-class concept of cooldown because for twenty years there didn't need to be one. The cooldown lived inside the build itself: the slow days, the hour you stared at a problem before typing, the PR review where your brain quietly defragged.

AI removed those slow moments and didn't replace them with anything. The build is fast now. The defrag time is gone. The cadence everyone is running has nothing to put in its place.

This is the part that connects to [the AI adoption argument](/blog/your-ai-adoption-isnt-stalled): the slowness didn't disappear, it moved. Decisions moved upstream into product and design. Cooldown moved nowhere. It got cut, because the process never had a name for it.

## "Just stack more AI on top"

The objection I hear every time is some version of: solve it with more AI. Meta-agents. Supervisor agents. An agent that watches the other agents and only escalates the decisions that need a human.

I'm not against this. I think it helps. I also think it doesn't solve the problem.

Every delegation chain terminates at a human. The human signs off on direction, scope, risk, novelty. You can push the bottleneck up the stack, and you should where you can, but you can't eliminate it.

Here's the part that doesn't get said: the higher you push it, the more expensive each remaining decision becomes. The decisions a meta-agent escalates are the ones with the biggest blast radius. Get one wrong and you don't lose a PR.
You lose a week.

You haven't reduced the load. You've concentrated it. The same operator now makes fewer decisions, but each one matters more. The fatigue curve doesn't flatten. It sharpens.

This is the team-throughput point made [one layer down](/blog/the-10x-dev-is-dead): individual heroics don't scale. Stacking AI on top of a fatigued operator is the same trap with a faster engine. You're not making the human faster. You're just expecting more from them.

## Build cycles need cooldowns

The fix is process, not software. Build cycles need cooldown periods after them. Not optional, not "if there's time." First-class. Scheduled. Funded.

Cooldowns are where the operator integrates what just happened: which patterns held up, which agents drifted, which decisions you'd make differently next time. They are where the *process itself* gets iterated on. Without them you ship faster and faster while the quality of your decisions silently degrades, because the decision-making muscle is never given a chance to recover or recalibrate.

```
SCRUM (human-paced era):

[==========][==========][==========][==========][==========]...
   sprint      sprint      sprint      sprint      sprint

no first-class cooldown. builds run back to back.

EXTREME DEVELOPMENT W/ AI (agent-paced era):

[==========][~~~~~][==========][~~~~~][==========][~~~~~]
   build      cool    build      cool    build      cool

cooldown is not a vacation. it is a working pass over the process.
```

The cooldown isn't a vacation. It's a working pass over the process: what worked, what drifted, where the chunks were sized wrong, where the patterns broke down, what conventions to update. It's the retro that Scrum nominally has but that everyone treats as overhead because the build cycle is already saturating the calendar. In the agent-paced era there's no excuse. The build is fast, you have the time, and skipping the cooldown is exactly what produces the burnout downstream.

This is what Extreme Development w/ AI looks like in practice. Not a brand. A re-grounding of XP's original instincts (pair programming, refactor as a first-class activity, sustainable pace) for a world where the pair is partly an agent. Sustainable pace is no longer a vibes claim. It's the difference between an operator who's still making good decisions in month six and one who's stopped reading the diffs.

The pair-programming part transfers directly. Pair-programming with an agent is what most of the good AI workflows already converge on, whether they call it that or not. Sustainable pace transfers with one update: cooldown becomes its own ritual, separate from the sprint, because the sprint's natural pauses are gone.

Refactor-as-first-class transfers with the biggest update. In the agent-paced era, refactor isn't only about [code](/blog/clean-architecture-is-for-llms-now). It's about the *process*. The cooldown is when you refactor how you work: which agents to keep using, which patterns to retire, which chunks to make smaller, which decisions to push upstream. The codebase isn't the only system that drifts. Your process does too, and it drifts faster now because the iterations are faster.

## What dies, and what doesn't

Scrum dies for AI-native teams. Or it survives in name only, the way "agile" survived as a vibe long after the original idea was unrecognizable. The sprint cadence assumes a human-paced build, and that assumption is gone.

What doesn't die: the things underneath Scrum that were always doing the load-bearing work. Small batches. Frequent integration. Working software over documentation.
Direct feedback loops between the people building and the people deciding. Those were XP's contributions before Scrum borrowed them, and they get sharper, not weaker, when the build is fast.

The teams that keep running Scrum on AI-paced work will look productive for a quarter or two and then start losing their best operators to burnout. The teams that build cooldowns into the cadence, that treat process refactor as first-class, that pair with agents instead of supervising them, will look slower for a quarter and then be the only ones still shipping good decisions a year in.

The next process fight isn't Scrum vs. Kanban or agile vs. waterfall. It's build-pace vs. cooldown-pace. Pick the wrong one and the cost is the operator at the helm. That's the bottleneck nobody priced in. It's where the next twelve months of work get decided.

If you're an engineering leader trying to figure out what your cadence should look like in the AI-native era (not what tools to buy, but what rhythm your team should run on), [book a call](https://calendly.com/geggleto/30min). I'd rather you figure this out before the burnout shows up in your Slack.

---

# Spec-Driven Development Works. The Hard Part Is Where You Cut It.

URL: https://glenneggleton.com/blog/spec-driven-development-the-chunking-problem
Date: 2026-05-04
Tags: ai, leadership, engineering-culture, war-story, clean-code

> Two weeks of SDD on a real client team: 40% more tests, 80% of a feature shipped from PRD. The numbers needed a prepared codebase and the right chunk size.

Two weeks ago I helped a client engineering team switch to Spec-Driven Development. The numbers from the first week: test coverage up 40%, roughly 80% of a new feature shipped from a single PRD with minimal engineer intervention.

That isn't the story. The story is what showed up in week two: a bottleneck nobody on the team had a name for, sitting exactly where they'd just freed up the most capacity. The bottleneck is figuring out where to cut the work. The bottleneck is chunking.

And one thing the receipts won't tell you on their own: most of why this worked happened before week one.

## Before

The team had the workflow most product engineering teams have. A ticket lands. An engineer picks it up. They build, they push, somebody reviews, it ships. Product and design renegotiate inside the build window — sometimes politely in PR comments, sometimes via a Slack DM that starts with "quick question". The engineers were absorbing the ambiguity. They always do.

That absorption was the slow part of the cycle, but because it lived inside engineering's slice of the process, the rest of the org filed it under "that's just how long building takes".

Standard story. I see this exact shape in most engagements.

## What we'd already done

What I haven't said yet is that this team had spent the previous quarter making the codebase ready for this. Not as part of an SDD initiative — they didn't know SDD was coming. As regular cleanup work that any engineering team prioritizes when they have the runway: a refactor of three duplicated email service implementations into one, a consolidation of the auth flow down to a single canonical version, the removal of a layer of legacy middleware that nobody could explain anymore.

That was on top of a skill library they'd been building all year. Named, reusable patterns the team referred to by name in PRs and in standups. When a PRD said "validate this with the auth pattern," there was exactly one auth pattern. The AI knew which one.
So did the engineer reviewing the PR. There was no ambiguity to absorb.

The seams that let an AI hold a chunk in working memory are the same seams that let a human do it. SDD doesn't create those seams. It rewards them. [The architectural prep work isn't ornamental](/blog/clean-architecture-is-for-llms-now) — it's the precondition that decides whether your week-one numbers look like ours.

The honest version of this story is that the team wasn't fast at SDD because they switched workflows. They were fast because the code was ready *and* they switched workflows. Either alone underperforms.

## What we changed

The change wasn't dramatic in shape. We moved to PRDs first — a real PRD, not a Notion doc with three bullet points and a Loom — and routed the implementation through an AI coding workflow. Engineers stopped writing first-draft implementation. They reviewed PRDs, they reviewed AI-generated PRs, they integrated, they fought the ambiguous bits.

This change was cheap because the code was already in shape for it. If you're starting from a tangled codebase, the workflow change alone won't reproduce these numbers; read the previous section as the bill that came due first.

If you've read [why I think AI adoption isn't actually stalled](/blog/your-ai-adoption-isnt-stalled), this is the team-level version of that argument. The org-level claim is that your AI pilot stalled because your decision pipeline didn't change. The team-level claim is what happens when you do change it: the decisions move out of the code, the build phase shrinks, and a different problem appears in their place.

I'm staying tool-agnostic on what we used. The interesting thing isn't which AI agent. It's what changed about the loop.

## The first week: receipts

Two numbers came out of the first week.

**Test coverage went up 40%.** Not because anyone made a coverage initiative. The PRDs specified behavior at a level of detail that made tests writeable as a side effect of implementation. The AI agent wrote the tests because the spec told it what the function should do. Engineers stopped deferring tests because there was nothing to defer — the test was part of the chunk, not a separate ticket.

**Roughly 80% of a new feature shipped from a single PRD with minimal engineer intervention.** Engineers were involved in the boundaries, the integration points, and a handful of decisions that hadn't been pre-resolved in the spec. They weren't writing the lines. They were directing.

I'd seen these kinds of numbers in demos and thought "sure, on a toy". Watching them hold up against real product work changed my read.

## The second week: where the workflow stalled

Then we hit week two.

The team kept the cadence going on small, well-shaped work. PRDs that targeted a single endpoint, a single migration, a single component family — those flew. Engineers were starting to push back on the spec-writing time as feeling slower than just opening the editor.

Then a feature came up that was bigger. A real feature. Cross-cutting, touching three subsystems, with a handful of policy decisions baked in. The team wrote one PRD for the whole thing.

The AI started strong, but as the implementation moved across subsystems it began contradicting earlier decisions. It re-litigated function signatures it had defined three steps back. It introduced patterns that didn't match what existed elsewhere in the codebase. The engineer reviewing it spent the day correcting drift instead of integrating.

The team's response was to split the PRD.
They re-spec'd the feature as four separate chunks. That worked. Each chunk shipped clean. But the team had spent two days arguing over where to draw the lines. Somebody said it out loud in standup: "we shipped this in a day but spent two arguing over where to split the spec".

That was the moment. The work compressed but the *deciding* didn't.

## The two failure modes

What we found, repeatedly, was a narrow band where SDD works and two failure modes outside it.

**Chunk too big**: the AI loses context partway through. It stops being able to hold the whole problem in working memory. Earlier decisions get re-decided, patterns drift, the integration cost on review eats whatever speed you gained.

**Chunk too small**: the spec-writing overhead exceeds the implementation cost. By the time an engineer has written a spec precise enough for the AI to act on, they could have just opened the file and made the change. The team spent more time documenting intent than building.

The interesting thing is that the working band — the chunk size where SDD pays — is narrower than I expected and shifts depending on the problem. Cross-cutting changes have a smaller working band than localized ones. Greenfield code has a bigger band than touching legacy. Code with strong existing patterns has a bigger band than code without.

This is the same insight that shows up [one layer down, in the architecture itself](/blog/clean-architecture-is-for-llms-now): explicit boundaries are what AI needs to function. Chunking is the same problem at the workflow layer. A chunk is a boundary. Where you cut it determines whether the AI can reason about the whole problem.

There's a third failure mode underneath both of these, and it doesn't care about chunk size. If the codebase doesn't have stable patterns, named conventions, or clean seams, no chunk works: you either drift inside a big chunk because there's no anchor for the AI to reason from, or you burn cycles re-explaining context inside small ones. Chunking only matters once the code is ready to be chunked.

You might read this as "AI is doing 80% of the work, so we need fewer engineers, or we can hire less-senior ones to handle the easy 20%". That isn't the claim. The 20% didn't get smaller. It moved upstream, and the stakes on each call went up. The senior engineer who used to spend 80% of their week implementing now spends that 80% deciding where to cut. They're not less needed. They're applied to a problem with a much bigger blast radius — get the chunk wrong and you lose a day of AI drift; get it right and a feature ships from a single spec.

## What this actually means

Two weeks isn't long. Here's what we know and what we don't.

What we know: SDD ships real product work. The 40% / 80% numbers happened on a real engagement, on real code, in week one. They held up in week two on the parts of the work where the chunks were sized right.

What we don't know: a method for sizing chunks correctly on a feature you've never built before. We have hunches. Localized changes have a working chunk size somewhere around "one PRD per file family". Cross-cutting features want to be split before you write the PRD, not after. Greenfield code rewards bigger chunks than legacy. None of these are a method. They're patterns we keep noticing.

There's another misreading to head off, the one a CTO might leave with on a Monday after seeing the 40% / 80% numbers: "great, we'll switch to PRD-first next sprint." That isn't the claim.
You're 80% of the way to those numbers if your code is already in shape; if it isn't, the workflow change will surface every weakness in your codebase under load, faster than you can patch it. The team that figured out SDD wasn't lucky in their tools. They had spent the previous quarter putting the codebase in a state where SDD could even run.

The lesson isn't "do SDD". The lesson is that SDD works the way good tooling always works. It eliminates the easy version of a problem and reveals the hard version underneath. The hard version of "build a feature" used to be "write the implementation". It's now "decide where the boundaries are".

That decision used to live inside engineering's slice of the cycle, hidden inside the implementation work. Now it lives in front of the cycle, in the open, requiring an answer before any code gets written. This is what your senior engineers should be doing. It's also what most teams have spent the last five years not training engineers to do.

## Where this leaves us

We're still working on a chunking heuristic worth shipping as a method. We also don't have a clean way to tell a team their codebase isn't ready yet, beyond the obvious smells: no clear module boundaries, no shared patterns, lots of legacy crud the team has stopped seeing. When we have either, I'll write it.

For now: SDD works on the chunks you size right, in code that's ready to be chunked, and the team that figures out both faster will outpace the team that just writes more PRDs.

If you're a CTO or VP of Engineering thinking about running SDD on a real team and wondering how to think about chunking from day one, [book a call](https://calendly.com/geggleto/30min). I'd rather you hit week two without spending two days arguing over a spec.

---

# Your AI Adoption Isn't Stalled. Your Decisions Are.

URL: https://glenneggleton.com/blog/your-ai-adoption-isnt-stalled
Date: 2026-04-13
Tags: ai, leadership, engineering-culture, hot-take

> Your AI pilot didn't fail. Your org never made the product decisions AI needs. Spec-Driven Development is what actually unlocks AI adoption.

Your team bought Cursor. Your engineers are happy. Your cycle time hasn't moved. Everyone's quietly wondering what went wrong.

Nothing went wrong with the tool. You just discovered that typing was never the bottleneck. Ambiguity was — and AI doesn't absorb ambiguity the way engineers did for the last twenty years. Spec-Driven Development is the forcing function that moves the decisions upstream, where they were supposed to live the whole time.

That's the post.

## The comfortable lie

The story every founder has been sold in the last eighteen months is simple: buy seats, train the team, watch output 10x. When it doesn't happen — and at the org level, it almost never does — the blame lands on engineers (not leaning in) or tools (not good enough yet).

Both are wrong. The handoff *into* engineering is broken, and it has been broken for a long time. AI didn't cause that failure mode. It just made it impossible to hide.

## Engineering has always been downstream of bad requirements

Here is the thing nobody likes to say out loud: for most of the last twenty years, engineering has quietly absorbed vague requirements as part of the job. Product hands over a ticket with half the decisions deferred. Design iterates during the build. The engineer "figures it out," which is a euphemism for making the decisions product and design didn't get around to.

That absorption was invisible work, and it was expensive.
It just didn't look expensive, because engineering owned it on the ledger.

This was survivable when the build cycle was three weeks. The engineer had time to churn, push back, renegotiate, build, throw it away, and build again. The slowness lived inside the engineering phase, so that's where everyone pointed when the roadmap slipped.

AI collapsed the build cycle from weeks to hours. I watched [the shift happen in my own workflow](/blog/how-cursor-changed-how-i-build). At the org level, that collapse doesn't give you a faster org. It gives you an org where the ambiguity has nowhere to hide.

## What this actually looks like

Here's the pattern I see in almost every engagement right now.

An engineering leader tells me: "Our timelines explode every single feature. Product and design keep renegotiating in flight. The engineers absorb it as iteration. We call it agile."

Then they roll out AI. Suddenly a feature that used to take three weeks can be built in an afternoon. Great news, except the team still has three weeks of product-and-design churn queued up for that feature. The churn has nowhere to go. It can't compress into the three-hour build window. It spills out onto Slack, onto tickets, onto PR review threads, onto the engineering manager's calendar.

**AIs don't do vague. They require precision.** A feature done in three hours cannot survive three product-and-design rewrites. There is no build cycle left to absorb them.

This is the same insight that shows up [one layer down, in the codebase itself](/blog/clean-architecture-is-for-llms-now): AI works when the boundaries are explicit. The org-level version of explicit boundaries is a spec.

The slowness didn't disappear. It moved upstream, into the open, where the whole org can finally see it.

You might read this as "the author wants us to go back to waterfall." That isn't the claim. Spec-Driven Development is decision-forcing, not phase-gating. Engineers still build iteratively. What changes is that the *product decisions* get made once, upfront, instead of being re-litigated in every PR review. The iteration happens inside the spec, not against it.

## "But our PMs won't write specs"

This is the first objection every time I raise it. And I take it seriously, because historically it's been true. Most PMs don't write specs. Most designers don't write specs. Most companies don't ship specs. That's the history.

It's also a 2022 objection. Writing a structured spec used to take days, which is where the "PMs won't" reputation came from. In 2026, a PM can produce a passable spec in under an hour from a voice memo, using the same AI tools you bought for engineering. The tooling objection has melted. What's left is a choice, not a constraint.

The real question isn't "will our PMs write specs." It's this: does your org want to make each product decision once, upfront, or re-make it every time an engineer pings the PM in Slack about edge case 47? Those are the two options. Pick one.

## What changes on Monday

Stop measuring AI adoption by seat counts. Stop measuring it by satisfaction surveys. Measure it by **time from decision to ship**, and notice how much of that time happens before engineering writes a single line.
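One way to make that metric concrete, as a minimal sketch. The event names and timestamps here are hypothetical, not pulled from any tool; the only point is that the clock starts when the decision is locked, not when the first line of code is written.

```python
from datetime import datetime

# Hypothetical timeline for one feature. Field names are illustrative only:
# "decision_locked" = last time product/design changed the requirement,
# "first_commit"    = engineering picks it up, "shipped" = live in production.
feature = {
    "decision_locked": datetime(2026, 3, 2, 10, 0),
    "first_commit":    datetime(2026, 3, 18, 9, 0),
    "shipped":         datetime(2026, 3, 19, 16, 0),
}

decision_to_ship = feature["shipped"] - feature["decision_locked"]
upstream = feature["first_commit"] - feature["decision_locked"]

print(f"decision to ship:         {decision_to_ship}")
print(f"spent before engineering: {upstream / decision_to_ship:.0%}")
```

If the second number accounts for most of the first, the seats you bought for engineering were never the constraint.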
Throughput is an org property, not a team property — [a point I've made from a different angle](/blog/the-10x-dev-is-dead), and it applies with more force now, not less. The AI seat on the engineer's desk is one-seventh of the pipeline. The other six-sevenths live upstream: product, design, legal, exec alignment, the decisions nobody wanted to make because they were inconvenient.

You don't need a six-month SDD rollout. You need the next feature to ship with a spec your engineers can hand to an AI agent without a single follow-up question. Do that once. Notice what it tells you about your org. Do it again.

That's what AI adoption actually looks like. Not better tools. Better decisions, made earlier, by the people who were supposed to make them.

## The diagnosis

If that reads like a diagnosis of your current rollout, it probably is. Most AI adoption programs I see are organized around the wrong layer — the IDE, not the decision pipeline. If you want help building the forcing function across product, design, and engineering at the same time, not just handing your engineers better autocomplete, [book a call](https://calendly.com/geggleto/30min).

---

# Boats on Fire, Oars Bent — A War Story from the Web3 Frontlines

URL: https://glenneggleton.com/blog/boats-on-fire-web3-war-story
Date: 2025-05-20
Tags: web3, war-story, architecture

> What happens when a Web3 project moves fast, breaks things, and discovers that 'move fast' means something very different when smart contracts are immutable.

There's a special kind of chaos that happens when a Web3 team tries to "move fast and break things." The problem? Smart contracts don't break gracefully. They break permanently. And they take real money with them.

## The Setup

The project was ambitious: a DeFi protocol with novel yield mechanics, launching on a tight timeline. The team was talented but came from Web2, where you can hotfix production at 2am and nobody loses their life savings.

## What Went Wrong

Everything you'd expect:

1. **No formal verification.** "We'll audit it later" is the Web3 equivalent of "we'll write tests later." Later never comes, or it comes after the exploit.
2. **Upgradeable contracts used as a crutch.** Instead of getting the design right, the team leaned on proxy patterns to "fix it in production." This introduced more attack surface, not less.
3. **No circuit breakers.** When the first anomaly hit, there was no way to pause the protocol. The only option was to watch.

## The Lesson

In Web3, architecture isn't optional. It's not something you add after product-market fit. The immutability of smart contracts means your architecture *is* your product. Get it wrong, and there's no deploy button that saves you.

The boats were on fire. The oars were bent. But we rebuilt — this time with proper foundations.

---

# The 10x Dev is Dead. Long Live the 100x Engineer.

URL: https://glenneggleton.com/blog/the-10x-dev-is-dead
Date: 2025-05-19
Tags: leadership, hot-take, engineering-culture

> Individual output doesn't scale. The engineers who matter most are the ones who make everyone around them better.

The mythology of the 10x developer has done more damage to engineering culture than any bad framework choice. It optimizes for individual heroics when what actually ships products is team throughput.

## The Myth

The 10x dev writes 10x the code. They're the smartest person in the room. They work late, they know everything, and the project would collapse without them.

This person is a liability.

## The 100x Engineer

The 100x engineer doesn't write 10x the code. They make 10 other engineers 10x more effective.

How?
- **They write clear interfaces** that others can build against without asking questions
- **They review code** in a way that teaches, not gatekeeps
- **They document decisions**, not just code
- **They build systems** that are easy to operate, not just clever to write
- **They remove blockers** instead of hoarding context

## The Math

One engineer producing 10x output: **10x total.**

One engineer making 10 engineers 10x better: **100x total.**

The 10x dev is a bottleneck wearing a cape. The 100x engineer is a force multiplier.

Build your team around multipliers.

---

# How Cursor Changed the Way I Build Software

URL: https://glenneggleton.com/blog/how-cursor-changed-how-i-build
Date: 2025-05-15
Tags: AI, tooling, developer-experience

> AI-assisted development isn't about replacing engineers. It's about changing what 'senior' means when your tools can scaffold faster than you can type.

I've been writing code professionally for over a decade. Cursor changed my workflow more in three months than any tool in the previous ten years.

## What Actually Changed

It's not that Cursor writes code for me. It's that it shifted where I spend my time:

- **Less boilerplate, more architecture.** The mechanical parts of coding — setting up routes, writing data mappers, scaffolding tests — get handled. I spend more time on the hard problems.
- **Faster iteration loops.** I can try three approaches in the time it used to take to try one. That means better solutions, not just faster ones.
- **Code review is different now.** I'm reviewing AI-generated code alongside human code. The skill is knowing what to accept, what to reject, and what to rewrite.

## What Didn't Change

- You still need to understand your system deeply
- You still need to know when the AI is wrong (it often is)
- Architecture decisions still require human judgment
- The hard parts of software — naming, boundaries, tradeoffs — are still hard

## The New Senior Engineer

The senior engineer of 2025 isn't the person who types the fastest. It's the person who can direct AI tools effectively, catch their mistakes, and maintain the architectural integrity of the system while moving 3x faster.

That's not a lower bar. It's a different bar.

---

# Clean Architecture Is for LLMs Now

URL: https://glenneggleton.com/blog/clean-architecture-is-for-llms-now
Date: 2025-05-14
Tags: architecture, AI, clean-code

> The patterns we built to manage human complexity turn out to be exactly what AI agents need. Clean architecture isn't legacy thinking — it's the future.

The irony is thick. The same "over-engineered" patterns that got mocked in standup — hexagonal architecture, dependency inversion, explicit boundaries — are exactly what LLM-powered systems need to function reliably.

## Why Structure Matters More Than Ever

When an AI agent is generating code, calling tools, or orchestrating workflows, it needs clear contracts. It needs to know where things live. It needs boundaries it can reason about.

Sound familiar? That's literally what clean architecture gives you.

## The Patterns That Translate

- **Ports and adapters** become tool interfaces for agents (see the sketch after this list)
- **Command/query separation** maps directly to agent action vs. observation
- **Bounded contexts** prevent LLMs from hallucinating across domain boundaries
- **Explicit dependency injection** makes systems testable *and* agent-navigable
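To make the first item on that list concrete: a minimal sketch, in Python, of a port written for human maintainers doubling as a tool surface an agent can read and call. The names here (`InvoicePort`, `StripeInvoiceAdapter`, `describe_tool`) are hypothetical, not from any specific framework.

```python
from typing import Protocol

class InvoicePort(Protocol):
    """Port: the only contract the rest of the system, or an agent, ever sees."""
    def issue_invoice(self, customer_id: str, amount_cents: int) -> str:
        """Issue an invoice for a customer and return the invoice id."""
        ...

class StripeInvoiceAdapter:
    """Adapter: one concrete implementation, kept at the edge of the system."""
    def issue_invoice(self, customer_id: str, amount_cents: int) -> str:
        # a real adapter would call the payment provider here
        return f"inv_{customer_id}_{amount_cents}"

def describe_tool(port_cls: type) -> dict:
    """Expose the same port as an agent tool description (hypothetical shape)."""
    return {
        "name": "issue_invoice",
        "description": port_cls.issue_invoice.__doc__,
        "parameters": {"customer_id": "str", "amount_cents": "int"},
    }

def bill_customer(invoices: InvoicePort, customer_id: str) -> str:
    # depends on the port, never the adapter: testable *and* agent-navigable
    return invoices.issue_invoice(customer_id, 4900)

print(bill_customer(StripeInvoiceAdapter(), "cus_123"))
print(describe_tool(InvoicePort))
```

The plumbing isn't the point. The point is that the boundary you drew for human reviewers is the same boundary an agent can discover, call, and be tested against.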
## The Punchline

If your codebase is a tangled mess of god objects and implicit dependencies, good luck getting an AI to work with it. The teams that invested in clean architecture aren't behind — they're ahead.

The future of software isn't "AI writes all the code." It's "AI collaborates with well-structured systems." And that means the boring architectural work you did five years ago is about to pay dividends.

---