6

From Writing Code to Directing Agents

What actually changes when AI takes your (old) job


From writing code to directing agents

One way to look at how work has traditionally been organized in software development is to divide it into what involves code and what doesn't: the inner and outer circles. It's a useful metaphor in several ways: as developers and engineers we've been, for better and worse, quite obsessed with technology and code. The people and organizational aspects, or even the 'what' we're actually supposed to be doing, have (sadly) been a secondary concern. The way I see it, many issues with software projects have always come down to the disconnect between the two circles, i.e. they are not technical problems but human interaction problems. As engineers, we need to swallow the bitter pill and come to terms with the fact that our beloved inner circle, the code, is no longer the "real work" or the "real beef". The outer circle is.

The tasks in these circles are of course not clear-cut; the boundaries are blurry and the connection points numerous. Roughly speaking, you could think of it like this:

[Figure: The shift from inner to outer circle — how AI-assisted development inverts the center of gravity of developer work. In traditional development, roughly 70% of developer time goes to the inner circle (coding, testing, debugging, refactoring, building, designing) and ~30% to the outer circle (requirements, specifications, architecture, review and process, stakeholders, governance, planning). In AI-assisted development the ratio inverts: the inner circle is largely automated (~30%), and ~70% of developer time moves to the outer circle.]
The inner circle
Code, test cases, pipelines, debugging, refactoring.

Developers, testers, oddball architect, tech lead, cloud specialist, SRE.
The outer circle
Requirements, specifications, architecture design, aligning with stakeholders, reviews, workshops, UI/UX design, processes.

Architects, tech leads, product owners, project managers, delivery leads, designers.

An alternative way to think about this: the outer circle was what you did in meetings, or what the tech lead handled, or what happened before and after the "real work". And the inner circle began when those guys left and you got back to coding.

Building this game, I never wrote a line of game logic. I wrote specifications, reviewed output, and made architectural decisions about module boundaries. I was already working in the "outer circle" this chapter describes. I just didn't have a name for it yet.

AI-assisted development adds a new abstraction level to our work, and getting serious about it means actually making that leap. Even though AI may well handle most of the typing, the inner circle doesn't disappear; it shrinks, and more of your effort moves to the outer circle: deciding what to build, specifying it precisely enough for an agent to build it correctly, reviewing whether the result meets the standard, and governing the process that connects these activities.
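To make that outer-circle loop concrete, here's a minimal Python sketch of one specify–generate–review cycle. Everything in it is illustrative: `generate_with_agent` is a stand-in for a real agent call, and the reviewer callback stands in for a human reading the plan and the diff.

```python
from dataclasses import dataclass

@dataclass
class Change:
    spec: str        # what we asked for
    diff: str        # what the agent produced
    approved: bool = False

def generate_with_agent(spec: str) -> str:
    # Placeholder: in practice this would invoke your coding agent.
    return f"<diff implementing: {spec}>"

def run_gate(spec: str, reviewer_approves) -> Change:
    """Produce a change from a spec, and accept it only if the reviewer says so."""
    change = Change(spec=spec, diff=generate_with_agent(spec))
    change.approved = bool(reviewer_approves(change))
    return change

# A toy reviewer that only approves changes whose spec mentions validation.
change = run_gate("Add input validation to the signup form",
                  reviewer_approves=lambda c: "validation" in c.spec)
```

The point of the sketch is the shape of the work: the spec and the approval decision are yours; only the diff in the middle is the agent's.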

Naturally, this is a "shift left" in a much deeper sense than the DevOps community originally meant. Originally we shifted testing left, with the idea that writing tests earlier catches defects sooner (this of course still holds). Now the new shift left moves the developer left: from production to direction, from writing to specifying, from building to guiding.

The outer circle was never the part most developers trained for, or chose this career for, or built their identity around. It was the overhead. Now it's the job.

Everything that follows in this chapter — what the daily work looks like, why senior engineers struggle, where junior developers go, how complacency creeps in, why the identity shift is hard — is a consequence of this inversion.

What actually changes day-to-day

Now that we're convinced this shift is inevitable, it's a good time to say what it really means. I'll throw in examples by role, and my assessment of what the future holds for us.

Expect cynical humor ahead. If you don't like it, skip to the next section.

Architects

| | Before | After |
| --- | --- | --- |
| Requirements | Analyze and specify the core building blocks, their interactions and the NFRs | Describe your architecture in a way that's accessible to AI. Same as before, but in a new way. |
| Alignment | Constantly ensure it yourself, as none of the above is connected to code unless you make it so | Refine these descriptions and test the alignment yourself. Develop automated ways to check. |
| Distance | Rarely code (I've been fortunate enough to almost always be hands-on big time) | Should check code, documents and their indexing, and agents and their tooling frequently |
| Toolset | Draw diagrams and author documents and ADRs (Architecture Decision Records) | Generate and refine drawings with AI, when needed |
| Where time is spent | Meetings | Meetings, and throwing insults at the computer when things don't work |

Designers

| | Before | After |
| --- | --- | --- |
| Analysis | Carefully and iteratively refine the user experience, accessibility and workflows for coders to implement | Quickly develop a working prototype by ingesting a design system |
| Distance | Depends, but you usually never see the code, and you're not always really in the same pace or loop as the developers and testers | Need to be closer to development to ensure that e.g. the prototype you make is a viable starting point |
| Toolset | Various design tools (such as the one that begins with the letter F) and other visual tools | Automated UI-focused AI platforms such as Lovable |
| Where time is spent | Meetings arguing with developers who didn't follow your designs | Iteratively modifying the solution made by Lovable and hurling obscenities at the computer |

Developers

| | Before | After |
| --- | --- | --- |
| Analysis | Skip the specs, we'll figure this out as we go | Yeah, I need to create a proper plan and a todo list before I can code |
| Distance | Well, you're as close as it gets, and the further away the architects and others are the better | The architect now makes commits to the codebase. Still undecided whether that's a good thing. |
| Toolset | IDEs, CLIs and git. Never touch JIRA unless somebody forces you to. | IDEs, CLIs and git, with an AI wrapper on top of them |
| Where time is spent | Meetings that don't really concern you, and debugging funny issues. 10% real coding. | Debugging funny issues and babysitting AI agents going rogue |

Testers

| | Before | After |
| --- | --- | --- |
| Analysis | Skip the specs, we'll figure this out as we go | Yeah, I need to create a proper plan and a todo list before I can test |
| Distance | Waiting for developers to finish their work, literally just behind their backs | I actually need to code or review the automated tests myself |
| Toolset | Test frameworks, CI pipelines (in theory) and assorted manual testing tools (in reality) | spec.ts files and the agents that run them. Playwright trace files. |
| Where time is spent | Waiting for something to test, or for a fix to test again | Looking at the trace files and trying to figure out what the heck happened and why the test failed |

Cloud engineers, SREs

| | Before | After |
| --- | --- | --- |
| Analysis | The same as ever, regardless of what is being built | We still need the Kubernetes cluster |
| Distance | The further away the better, as long as you get the requirements right | You actually need to have your code in the same repository as the developers |
| Toolset | Terraform, Kubernetes, CI pipelines, monitoring tools | Agents writing Terraform and Kubernetes manifests, and monitoring tools focused more on the output of the agents and the quality of the code they produce |
| Where time is spent | Meetings arguing about how to do the infrastructure, and firefighting when things break. Trying to figure out where and why the Terraform state file went out of sync with the actual infrastructure | Trying to figure out why the agent wrote a manifest that doesn't work, and then fixing it yourself |

Project managers

| | Before | After |
| --- | --- | --- |
| Analysis | A detailed 100-page document nobody, including you, ever reads, and a Gantt chart that is outdated the moment you create it | A short PowerPoint and bold promises: "AI will do this in 1/10 of the time" |
| Distance | Weekly meeting with the team, and then back to your office for the actual work of project management | Monthly meeting with the team |
| Toolset | Should've used: JIRA, DevOps, dashboards. Really used: Excel and e-mails | Should've used: dashboards that track the progress of the agents and the quality of their output. Really used: Excel and e-mails |
| Where time is spent | Meetings, and living in despair as the JIRA tickets never get updated anymore | Meetings, and living in despair as the JIRA tickets never get updated |

What's left for junior developers?

In traditional software engineering, junior developers learn by doing progressively harder work. They start with simple bug fixes, graduate to small features, and over several years build the pattern library and system intuition that makes a senior developer effective. The work is sometimes tedious, but the tedium is where learning happens.

AI-assisted development threatens this pipeline directly. If agents handle the pattern work that juniors learned on, how do they build judgment? If they never write a service adapter from scratch, how do they learn to recognize when an agent has written one incorrectly? If they never debug their own logic errors, how do they develop the diagnostic instincts that make review effective?

My point here is that in order to validate a plan or an outcome, which is the supposed new role, you need to have an idea of what the result should look like; you need to know what the correct answer is, and you cannot know that without experience. Where this fails, and it will in places, we will have a problem. Let a handful of inexperienced juniors go AWOL with agents for a week, and they will almost certainly generate a month's worth of technical debt, dozens of hard-to-find subtle bugs, and a codebase nobody understands. The senior engineers will be left to clean up the mess, and the juniors will have learned nothing except how to make one. In fact, this kind of role ('openings for Senior Slop Fixers') seems to have already emerged.

The Stanford statistic (employment among developers aged 22–25 fell nearly 20% between 2022 and 2025) suggests this isn't a hypothetical concern. The pipeline of future senior engineers is already being disrupted, and the consequences will take five to ten years to become visible.

My take on the study is that if we fail to keep juniors engaged and learning, the consequences will be severe. Failing to address this, we will discover a generation gap: a cohort of experienced engineers approaching retirement and a cohort of AI-dependent developers who never built foundational skills. All in all, I think people should really learn to code, write SQL, debug and experiment before being turned loose with AI tools unsupervised. Otherwise they'll just be rubber-stamping AI slop and learning nothing.

I'm not making an argument against AI-assisted development here. It is an argument for deliberate investment in junior developer training programs that use governed AI development as a teaching tool, while insisting on knowing the old way as well. In such a program, juniors would learn specification writing, review technique, and system design explicitly, rather than absorbing them implicitly through years of code production.

The automation complacency trap

Aviation researchers identified "automation complacency" decades ago: when humans monitor automated systems, their vigilance degrades over time. Humans are reliably poor at sustained attention to systems that almost always work correctly.

This applies directly to governed AI development. A plan approval gate is only as good as the human's willingness to read the plan critically. A review gate is only as good as the human's attention to code they didn't write. If the AI's output is good 90% of the time, the human reviewer will learn, perhaps unconsciously, over weeks, to expect correctness and skim rather than scrutinize. The 10% of the time the output is subtly wrong is exactly when the human's attention is most likely to have lapsed.
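Some back-of-the-envelope arithmetic makes the stakes concrete. The numbers below are purely illustrative assumptions (a 10% defect rate, an 80% catch rate for a vigilant reviewer versus 30% for a complacent one), not measurements:

```python
def escaped_defects(changes: int, defect_rate: float, catch_rate: float) -> float:
    """Expected number of defective changes that slip past review."""
    return changes * defect_rate * (1 - catch_rate)

# Over 1000 changes with a 10% defect rate:
vigilant = escaped_defects(1000, 0.10, 0.80)    # ~20 escaped defects
complacent = escaped_defects(1000, 0.10, 0.30)  # ~70 escaped defects
```

The defect rate didn't change; only reviewer attention did, and the escaped-defect count more than tripled.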

And the output looks and feels good, down to wording like 'Your code is now ready for production.' It's entirely understandable to skip the hard part of verifying the work and just approve, when a widely used LLM has literally already told you we're done.

The literature suggests several mitigations that translate to software governance. Rotating review responsibility prevents any single person from becoming complacent with a particular agent's output style. Deliberately varying the level of AI autonomy, for example running agents with constraints that require more human input, can keep reviewers engaged. Surprise audits of gate approvals, where a second reviewer evaluates whether the first caught the issues that were present, keep reviewers honest too. This may feel like overkill, but it is something you'll need to address to avoid the trap of trusting the automation too much.
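The first of these mitigations, rotating review responsibility, is easy to mechanize. A minimal round-robin sketch in Python — the policy and the names are illustrative, not a prescription:

```python
from itertools import cycle

def rotate_reviewers(approvals: list[str], reviewers: list[str]) -> list[tuple[str, str]]:
    """Assign each pending gate approval a reviewer in round-robin order,
    so nobody settles into rubber-stamping one agent's output style."""
    pool = cycle(reviewers)
    return [(approval, next(pool)) for approval in approvals]

# Three pending approvals, two reviewers: the pairings keep shifting.
schedule = rotate_reviewers(["plan-17", "diff-42", "plan-18"], ["Maija", "Ville"])
```

A real scheme would also avoid pairing a reviewer with their own work, but even this naive rotation breaks the habituation loop.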

None of these are magic bullets. Automation complacency is a deep feature of human cognition, not a process deficiency. Acknowledging the facts on the ground, rather than assuming that "mandatory human checkpoints" will automatically produce careful human review, is the difference between governance and governance theatre.

The new role nobody planned for

This discussion about roles and competences reminds me of the advent of cloud computing. Not that long ago, developers (often with somewhat reluctant help from sysadmins) managed their own infrastructure. Provisioning servers, configuring networks, and maintaining deployments were part of their jobs. It was not uncommon for the same person who wrote the software to also install the server, set up the databases, and handle whatever else was required. Perhaps the sysadmins found their new home as SREs or Cloud Engineers who struggle to keep their Kubernetes clusters alive, but for the most part, the cloud abstracted away the infrastructure and made it a side responsibility.

This worked to some extent when the commercial cloud offering was simple, but it grew complex enough that it couldn't remain a side responsibility. You've heard it all: dozens of certifications, consoles, IaC formats, well-architected pillars, the N Rs of cloud transformation, and so forth. The "cloud engineer" emerged not because anyone planned the role, but because the work demanded it.

AI-assisted development requires something similar. Whether we call this new job description the AI Agent Engineer, or the AI Governor, or the Agent Orchestrator does not make much difference. The point is that someone needs to own the responsibility of maintaining the agentic development environment, and that responsibility is not optional. This role includes but is not limited to:

- Maintain the custom instructions that guide every agent interaction.
- Design and curate the agent configurations: which models for which tasks, what constraints, what context.
- Keep the documentation hierarchy, and all of the above, current and well-indexed so agents get useful context rather than noise.
- Keep up with tooling and model developments.
- Actively look up new features and capabilities that could be useful for the project, and experiment with them to understand how they work and when they are useful.
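Part of keeping documentation fresh enough for agents to consume can be watched mechanically. A minimal sketch, assuming Markdown docs and a purely modification-time-based staleness heuristic (both illustrative choices, not a standard):

```python
import time
from pathlib import Path

def stale_docs(root: str, max_age_days: int = 90) -> list[Path]:
    """List Markdown files under `root` untouched for `max_age_days` days —
    candidates for the agent-context review backlog."""
    cutoff = time.time() - max_age_days * 86400
    return sorted(p for p in Path(root).rglob("*.md")
                  if p.stat().st_mtime < cutoff)
```

Run something like this in CI and fail noisily: stale instructions silently degrade every agent interaction that consumes them.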

And this is iterative and continuous. I think it's unreasonable to expect this to be everybody's job or a shared responsibility. It requires a level of attention and care that is hard to maintain when it's not owned by anyone. Whether this is a full-time job remains to be seen and is obviously dependent on the project.

In small teams, this easily falls on whoever cares most. Usually this falls to a senior developer who has developed intuitions about what makes agents effective, and perhaps was an early adopter of AI in the first place. But as teams grow, the "whoever cares most" approach breaks down the same way "whoever manages the servers" did. The work requires dedicated attention, systematic maintenance, and a skill set that combines understanding of the development process, agent capabilities, and the project's evolving needs.

Organizations that recognize this early will create the role deliberately. Some won't, and will discover it through slowly degrading output quality, stale documentation that agents can no longer usefully consume, and instructions that reflect how the project worked six months ago. The cost of not having someone own this responsibility is invisible until it isn't.

The identity shift might be the hardest part

Many software developers chose this career because they enjoy building things. Myself included. Also quite a few of us began programming long before entering the job market. The satisfaction of solving a puzzle, of seeing your code work, of creating something from nothing can be lost forever when delegating that part to AI.

And this shift is a matter of identity. Many take pride (often well deserved) in their craftsmanship: good code, clean architecture, or beautiful design. It's hard to sustain that pride if it wasn't really you doing it. You know the scene: you meet a fellow developer and the first question is "what do you code in?", not "what do you govern in?".

Working with an AI agent is more like managing a team than writing code. Or sometimes like working as a kindergarten teacher. You set direction, review output, make judgment calls, and accept responsibility for results you didn't directly produce. Some developers find this deeply satisfying: the power you can unleash is intoxicating, and the problems are harder. You can build tools and make refactoring decisions you would have dismissed outright before as too laborious or risky. For others it feels like a demotion: instead of engineer or builder, you become some kind of "orchestrator" or "curator" or "governor." The work is more about people and process than about code, and that can be a hard shift to make.

Anecdotally, the long-timers seem to be the ones who get on with it more naturally. Perhaps they've already typed enough code for a lifetime, and they're generally more comfortable with character-based UIs and the back-and-forth of a terminal session. The CLI-first workflow of modern AI coding tools feels familiar to someone who grew up with vi and grep, less so to someone whose entire career has been in graphical IDEs with drag-and-drop scaffolding.

Sunk cost fallacy and the willingness to start over

Especially early in a project's lifespan, a lot of the work is basically throwaway. It's been like that forever, but the things you ought to have thrown away will haunt you for the rest of your life. How it usually turns out is that the next sprint brings new features, not rewrites of the old ones or the long-awaited pause to think things through properly.

So what usually happens is this:

"There's never time to do it right but there's always time to do it over." "Temporary solutions are always permanent" "There will never be proper time to refactor" "The release train needs to keep moving" "Technical debt is forever"

Often it is hard to admit that we need to start over. You know, the sunk cost fallacy and the reluctance to admit we went wrong are both very much human nature.

I'd argue that this time it's different: regenerating major parts is a viable strategy, especially if you have good specs, know what you're doing, and can manage it. At the very least, it remains possible much later in the project than before; the attempt that produced 40,000 lines of throwaway slop might not be the disaster it would've been in the old days. Refine the architecture and your designs, and let the agents take another shot at it. You didn't write it yourself, did you, so what's the harm in trying again?

A word of caution though: this applies best to bounded components and early-stage work. Once the system is in production, integrated with other services, and storing real data, the window for wholesale regeneration closes fast. Chapter 11 explores that "point of no return" in detail. But within bounded scope, the willingness to throw away and redo is a genuine advantage of this way of working.

So adjust your mental model to better admit defeat: treat it as a learning opportunity, regroup, and do it again! Better yet, if you have lots of premium tokens to burn, you can even experiment with different approaches and see which one works best.

Organizations that ignore this dimension will lose good people. Not because governed AI development is bad, but because they failed to recognize that they were fundamentally changing the job, and that not everyone will want the new version of it. I encourage honest conversations about what the roles will be, at both team and individual level, and offering options for people who prefer a different kind of work.