Why Organizations Resist (and How They Adapt)
The Learning Curve
Met someone claiming to have years of experience agentizing their software development? Take it with a grain of salt. Frankly, the early days weren't that great, and it is only very recently (as of Feb 2026) that we have had the capabilities and tooling to seriously automate the software development lifecycle beyond the "code generation" phase, or even that far.
In the previous two chapters, we looked at how individual roles are shifting and what new skills engineers need. This chapter is about the hurdles we have when trying to "AIdustrialize" our software development process.
First of all, there are the developers. Many on my radar, myself included, are quite picky about the tools they use, and prefer tuning their own workflows and tools rather than going with the company standard. Many are also rather active in finding new tools and quick to adopt (and ditch) them. Organizations, as it turns out, have an immune system of their own.
Resistance to AI-assisted development lives at multiple levels. Some of it is deeply personal, as when a developer's professional identity is tied to writing code by hand with their favorite tools. Another barrier is competence and training: people or teams simply don't know where to start. The friction might also be technical: the tools genuinely aren't mature enough for their context, sometimes leading to bad first impressions (which never go away, do they?). Finally, some (or most) of the resistance is organizational and political. You know: red tape, budgets, rules, MORDACs et al.
Who is pulling the brakes and why?
I don't want to make fun of sceptics here; I'm one myself. Their caution is more than justified, especially as the 'all code is written by AI' hype gathers steam.
Not all resistance is irrational, luddite-style 'it was better before' posturing. Assuming it is would be underestimating your probably bright colleagues.
So enough with my disclaimers, let's name the suspects and their motives.
The burned team
These are the teams that went through Agile transformations, SAFe rollouts, or DevOps revolutions. Many of them have the scars to prove it. They've seen "the next big thing" before, whatever it was: from monorepos to BDD and TDD, each promised to solve most or all of their problems. The pitch always sounded great; the reality was months of disrupted productivity, abandoned tooling, and a process that turned out to be more about compliance theater than actual improvement.
Much of the above is not even tongue-in-cheek, but something perhaps very typical of our industry: whatever emerges as the Next Big Thing (usually just one thing at a time) appears on the Innovation Lead's PowerPoint strategies the next day.
So now, when you (or I) show up with AI-assisted development, they hear the same pitch.
The key with burned teams is honesty. Don't oversell. Start with a problem they actually have, show a small win, and let them set the pace.
The inertial team
I.e. the people who believe 'if it ain't broke, don't fix it'.
These are the teams that are productive and comfortable. They ship reliably, their processes work, and they have no burning pain that AI promises to solve.
They might, and often do, have basic 'code completion'-style AI tools in use, but they prefer to keep it at that level.
This is the hardest resistance to overcome because it's entirely rational from their current viewpoint. If our mission is to change their job descriptions from coders to agent controllers, that is.
How do you convince these people that we aren't going to be able to push this djinn back into the bottle, and that the competitive field is shifting under them? That coasting might work for a year, perhaps two?
I guess patience and showing results are the best bet. Have that 'AI Whisperer' set up some nice, not-too-ambitious helpers for their biggest time eaters (EVERYBODY has them). Seeing is believing.
The mandated team
Yes, there are teams that do something only because the CTO or a similar PHB tells them to. Sounds good, right?
In my experience, these are the teams that will answer "Yes" to the corporate questionnaire's "Have you adopted AI tools in your software development process?" but in reality have just filled in some licence order form and never installed the damned thing.
The key with mandated teams is to convert the mandate into agency. Instead of "you must use AI tools," try "here's a budget and some time to figure out how AI tools could help your work." And if that doesn't work, send in the same AI Whisperer the inertial team just laughed out with 'come back next year when you have something working'.
Weren't self-governing teams once the basis of productivity?
The resistance patterns
Now that I've named the suspects, let's look at the detailed patterns of resistance that they (and others) might exhibit.
The red tape
First, and often the biggest obstacle, is the red tape: the outright refusal, or artificial limitation, of AI tools for generating software. This reminds me of the early days of the web, when putting anything online was considered a major risk. News of breaches and security snafus, even recently, hasn't exactly paved the way for AI either. In the end, cutting the red tape means getting somebody higher up to sign off.
There are several reasons for having governance around any IT tools, internet-connected or not. Many of them, such as those concerning privacy, security, or IPR, remain entirely valid in the AI era. With AI tools, however, these concerns are often used to justify red-taping the tools entirely, based on a basic misunderstanding of what AI tools actually do in the context of software engineering.
Code is not Data
The first of these is the confusion between code and data. Producing source code is not the same as running that code. Believe it or not, this is not a rare misconception. Running AI as part of your runtime might be scary, and sometimes genuinely risky, but that's not what we're doing with AI-assisted development.
Microsoft and Amazon Already Have Your Source Code
Another aspect is the sensitivity of the code itself. Indeed, much of the software that runs the world is not open source to this day. Either the area is so niche that nobody would be interested anyway, or it contains genuine inventions or trade secrets. Or it would reveal the annoying algorithms that sell you your airplane ticket twice (you know, the cheapest ticket that doesn't include the right to sit, sleep, drink, or carry anything, including clothes).
This brings me to my second equation, directly deducible from first principles:
If your fear is really about trade secrets, you shouldn't be running your code in the cloud either, right? Or hosting it in Azure DevOps or GitHub, no matter what your privacy settings are. And there's always the option to run your own models in a private cloud, or even on-prem, if you really want to. The point is that the code is not going anywhere, and the makers of AI tools are not stealing it, at least no more than they already could.
I'm not arguing that AI tools should skip audit and due diligence before rollout. But right now, it sometimes takes literally years to even start the audit or DD.
We don't know how to use it, and the training budget isn't there
Adopting technology as disruptive (there, I used the word!) as AI is naturally approached with caution. The field is still immature; the tooling scene in particular is a mess, and many have probably already had bad experiences with the early tools.
You might have heard some of the following excuses:

- "Let's wait until things settle down."
- "We need a new AI guy willing to walk the walk to get the thing rolling" (and then the rest of the team will follow).
- "No licence budget."
- "No training budget."
- "We need to get IT and Cyber involved."
- "We need to focus on the current roadmap."
- "They will learn by doing."

Usually the obstacle is not the team.
This kind of resistance or friction easily becomes a self-fulfilling prophecy.
Generally, the best way to build competence is to learn by doing. Take the first step, or the next one. Get those licences out, perhaps for a small project, or isolated into containers, whatever works. The rest of the world isn't going to wait for you to catch up in 2027-2030.
The skeptic inertia
Often the randomness and the bad early experiences, combined with a real fear of losing one's job, give rise to the skeptic movement. You know the refrains: AI slop, who is going to maintain this, technical debt. I've thrown these concerns around in this book as well, but they are often used as a blanket excuse to avoid even trying.
I'm not saying these concerns aren't valid. They are. But they are also often based on a lack of understanding of what modern tools can do, and how to use them effectively.
Perhaps the best way to overcome this class of skepticism is to show results. Start with a small pilot project, measure the impact, and share the success stories. Do it honestly and openly.
If you are concerned about your job security as a developer: there's a good chance the total amount of work remains the same; it's just going to be redistributed. With proper use of AI tools, you can do WAY more, including things you would never have attempted before. Rethink your position a bit. Somebody will still need to verify the code, debug it, and guide the AI when it's time to refactor, and it's not going to be a former UX designer doing that via AI (to be clear, AI isn't going to turn engineers like me into competent UX designers either).
Your expertise is not just about frameworks, programming languages and libraries.
Synchronizing the speed of change
Supposedly we can now deliver things all the way to production, and at a much higher speed than we used to. This might not feel like a problem at first glance, and sometimes it's actually great.
But then we enter the SAFe world (pun intended) of multiple teams, systems, dependencies, and a large backlog that just cannot be fed to end users piecemeal. You cannot install at will. It might not matter that you delivered in a week instead of a month if you end up just waiting for somebody else. So, in the end, the lead time from idea to end user doesn't really change much.
This brings me to Equation 3: in a complex system, speeding up one part has little effect on lead time if that part wasn't the real bottleneck in the first place.
A practical issue related to this speed also sits between developers and stakeholders. To get good results from AI, you need to write and read an order of magnitude more documentation. Chances are people are busy already and can't just bend to this insane pace and volume.
There's a more immediate price, too. You can speed things up dramatically, but unless special care is taken to keep work separate, you'll drown in merge conflicts. Multiple agents (or agent sessions) touching overlapping files at high speed is a recipe for integration pain. The architecture chapter (Chapter 11) argues for separation of concerns partly for this reason: modules with clear boundaries can be worked on in parallel without stepping on each other.
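One cheap mitigation, assuming you use git, is to give each agent session its own worktree, so parallel sessions never share a working copy. Here's a minimal sketch; the repository, directory, and branch names are all made up:

```shell
# Minimal sketch: one git worktree per agent session, so parallel
# sessions never edit the same checkout. All names here are invented.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -qm "initial commit"

# One worktree + branch per agent session:
git worktree add ../agent-a -b agent/refactor-auth
git worktree add ../agent-b -b agent/add-billing

git worktree list   # three checkouts, each on its own branch
```

Merging back to the main branch then happens deliberately, one branch at a time, instead of agents racing over the same files.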
The bureaucracy issue
Going full-on specification-driven, with all the hoopla of agents, reviews, traceability and the rest, is a lot of work, and certainly something people might not be ready for.
Let's not add any more complexity to the process if we don't have to. This is supposed to be easier, right?
The thing to consider, and I don't have a final answer yet, is to define something that is good enough for the project and the team. Yes, we need to see we're heading the right way, and yes, you're going to need to write specs and not just code.
My solution so far is to keep much of the 'good' of the old world in place: kanban boards, some kind of ordering of the backlog, perhaps a higher-level roadmap. Use that as the source of truth, and try to ingest as much as possible from the inner loop, i.e. what happens between 'Definition of Ready' and 'Definition of Done', near the code and, consequently, the AI agents. Find or build an easy-to-use tool to help you manage that.
The adoption ladder
The easiest way to alienate your developers and screw up the opening ceremony of your Agentic Software Factory is to go from 0 to 100 (kph) overnight. Take one step at a time. For example, start from the end of the process by automating reviews. Then launch a todo-helper or a planner. And so on.
| Level | What You Add | What You Get |
|---|---|---|
| 1. Quality Gates | Manual review before merging AI-generated code | Catches the worst problems; costs almost nothing |
| 2. Plan Artifacts | A brief plan before the AI writes code: what to build, acceptance criteria | Makes AI dramatically more effective; takes ten minutes |
| 3. Traceability | Link plan to code to review; trace back when things break | Patterns emerge in what goes wrong and why |
| 4. Quality Signals | Automated checks: tests, linting, no regressions | Data on how well the AI-assisted process actually works |
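Level 2 is lighter than it sounds: a plan artifact can be a ten-line markdown file. Here's a hedged sketch of what one might look like; the feature and every detail in it are invented, so adapt the template to your own project:

```markdown
<!-- Invented example plan artifact; adjust sections to taste -->
# Plan: add rate limiting to /login

## What to build
- Sliding-window limit: 5 attempts per 15 minutes, per IP

## Acceptance criteria
- 6th attempt inside the window returns HTTP 429
- Existing login tests still pass

## Out of scope
- Per-account lockout, CAPTCHA
```

Feed this to the agent before it writes a line of code, and reuse it as the checklist in the Level 1 review.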
From resistance to governed response
Perhaps all new technology inevitably brings up its own generation of luddites and sceptics.
They adapt, over time.
The patterns I've described in this chapter (however synthetic they might feel) are not products of my imagination. I've witnessed many of them first hand, or via community lore, tales, and LinkedIn posts. Generally speaking, if you're trying to change people's minds, it's always useful to first think about why they think or act the way they do. Or, if you're on a mission like I am with this book, to help them get the best out of this wonderful yet quirky technology we now have.
To conclude, here's my take on how to soften the resistance, convince the skeptics, and get the ball rolling, before I feed you the corporate governed lifecycle that awaits in the next part of this book.
- Don't wait for the perfect tool or the perfect process. Start with something small.
- Show results. Measure the impact and share the success stories.
- Educate and train your team. The more they understand the tools and their potential, the more likely they are to embrace them.
- Address concerns and limitations openly and honestly. Don't dismiss them.
- Be patient. Change takes time, and there will be bumps along the way.