
Software Engineering's Third Golden Age

Historical context and where this is heading


Grady Booch, known as one of the founders of UML and a lifelong voice in software engineering, frames the current moment as the "third golden age of software engineering."

According to Booch, the first golden age was about algorithms (1940s to 1970s), the second about object-oriented abstractions (1970s to the 2000s), and the third is about systems. It started not with the recent AI boom but with the rise of abstraction from individual components to whole libraries, platforms, and packages. AI fits into this, though: it helps create even more complex systems with less effort than before.

Three Golden Ages of Software Engineering

1940s–1970s      Algorithms      How do we compute?
1970s–2000s      Abstractions    How do we organize?
2000s–present    Systems & AI    How do we govern?

While I could think of alternative abstractions and viewpoints to augment Booch's framing, I find it compelling and useful. It provides historical context for the current moment, and it helps us understand the broader forces at play. The rise of systems and AI is not just a technological shift but a fundamental change in how we think about software engineering itself. It may well turn out to be more fundamental than object-oriented programming ever was.

Existential crises - remember Y2K?

Booch recalls that when compilers and higher-level languages emerged, developers feared obsolescence then, too. Well, our profession evolved. All in all, computer science and programming form an astonishingly young field: "The term 'digital' was not coined until the late 40s, the term 'software' was not done until the 50s." Some of the existential dread about AI is playing out in an industry that's barely 70 years old, of which the first 40 were rather small-scale.

There have been so many 'crises' and 'disruptions' already in my 30 years in the industry: the Internet, mobile, DevOps, Agile, Y2K, Blockchain, IoT, low code/no code, SaaS, offshoring, and now AI. Each one was heavily hyped, and yet here we are, still building software, still needing engineers.

At an event long ago, a presenter (needless to say, rather 'high up') was confident that instead of writing code, we would start building 'components' for a platform. Yes, the platform was the one he was most familiar with; let's say it was a big name. There wasn't anything that could not be done with it. So his conclusion was that custom code and proprietary systems were dead (at least for us), and we should just start selling those 'reusable components' to customers waiting in the lobby with deep pockets. Yeah, I might be cutting corners here, but that was pretty much the gist of it, and I'm quite sure many of you have heard similar claims in our industry.

As you might have already guessed, none of those components ever saw daylight. The presenter moved on a couple of years after his 'components are the future' speech, and we continued coding as if nothing had happened. And here I am, after all these years, making a living creating proprietary software, as many of the colleagues who listened to those same talks still do.

The way I see it, AI coding tools represent another rise in abstraction, not the end of engineering. Think of it as the move from assembly to C or Pascal, and onwards to object-oriented programming. AI assistants are "akin to what was happening with compilers in those days."

Pattern work vs. frontier work

Every problem in software sits somewhere on a spectrum. At one end we have known, generic patterns: standard CRUD operations, standard (well...) integrations, and well-understood algorithms. At the other end we have the larger, frontier problems: novel architectures, domain-specific logic, and the decisions about what to build and why.

Modern LLMs have been trained on the entirety of the public Internet: every Stack Overflow answer, every GitHub repository, every tutorial and blog post. That's an enormous library of patterns. It's why AI agents can be remarkably effective at the pattern end of the spectrum, and why they struggle at the frontier end. They can recombine what's been done before; they cannot reason about what hasn't.

This is precisely where the step size principle from Chapter 5 meets reality. Governed decomposition works because it breaks frontier-scale problems into pattern-sized tasks, moving work from the part of the spectrum where AI struggles to the part where it excels.

Pattern Work vs. Frontier Work

Pattern work (AI excels): boilerplate & CRUD, standard integrations, test generation, refactoring.
Frontier work (humans essential): architecture trade-offs, ambiguous requirements, novel problem domains.
At the boundary sits governed delivery: AI does the work, humans decide at the gates.

But there's a concern worth noting. As AI-generated code floods the training pipeline, future models may increasingly train on their own output. Research published in Nature suggests this recursive loop degrades model quality over time — a phenomenon the authors call model collapse.

If the pattern library itself starts degrading, the boundary between pattern work and frontier work shifts, and the judgment we've discussed throughout this book becomes even more critical.

Deep foundations matter more

Our profession is moving at an incomprehensible pace towards automation. The people who will thrive in this environment are the ones with deep foundations, the ones who understand why systems work, not just how to use the tools built on top of them.

This is one part I'm genuinely worried about. When AI handles the routine implementation, there's less incentive to learn the basics: programming from first principles, SQL, computer architecture, software design. But those are exactly the skills you need to tell whether an AI's output is correct, and to fix it when it's not. If we lose the ability to understand the systems we build, we can't govern them either. How do you validate something you don't understand?

Another concern is that we will become deeply dependent on this new technology: model availability, training data quality, the geopolitical and economic forces behind these systems. That's not a reason to reject AI, but it is a reason to keep our own skills sharp. If that doesn't seem necessary for the sake of engineering craftsmanship, then consider it for the sake of our own agency and autonomy.

I came into this field knowing how computers work from the silicon up. Semiconductors, networking, operating systems. I'd still argue every engineer should start with C and a plain text editor, and try to make something work from scratch. That experience builds the mental model you'll need when the AI gives you something that looks right but isn't.

What's next

After this book? I'll first do something not involving computers at all.

Professionally, I believe there's a lot to be learned about enhancing the reliability of this new technology, and about how to govern it. I remain skeptical about unopinionated generic tools and one-size-fits-all models. For some years to come you'll still need to build your own factories, at least from the foundation upwards. I've found that strangely interesting and fun.

The roguelike was my proof of concept. A solo hobby project with no deadline and no client, where the governed flow still made the result better than winging it would have. Not every lesson from it scales to enterprise delivery, but the core ones do: plan before you build, verify before you move on, and stay in the decisions that matter.

Closing

This book is a collection of experiences, lessons, and opinions on how to build software with AI agents. But if there's a single thread running through all of it, it's this: the technology works when you govern it, and it fails in predictable ways when you don't.

The compound probability problem hasn't gone away. Each step in an AI pipeline still compounds uncertainty, and no model improvement has changed that fundamental math. What has changed is that we now know how to manage it: smaller steps, clear specifications, enforced gates, and humans at the checkpoints that matter. None of this is revolutionary. It's engineering discipline applied to a new kind of tool.
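The compounding is easy to make concrete. The sketch below is illustrative only: the 95% per-step figure and the independence assumption are my own simplifications, not numbers from any measurement.

```python
# Illustrative sketch of the compound probability problem: if each step in
# an AI pipeline succeeds independently with probability p, the chance that
# an n-step run is correct end to end is p**n. Both p and the independence
# assumption are hypothetical here.

def end_to_end_success(p_step: float, n_steps: int) -> float:
    """Probability that all n_steps succeed, assuming independence."""
    return p_step ** n_steps

for n in (1, 5, 10, 20):
    print(f"{n:2d} steps at 95% each -> {end_to_end_success(0.95, n):.0%} end-to-end")
```

Even a per-step success rate that sounds excellent decays quickly: twenty 95% steps leave well under half of the runs correct end to end, which is why smaller steps with verification gates between them change the picture so much.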

The people I've seen succeed with AI-assisted development share a few traits. They invest more time in planning than feels comfortable. They resist the temptation to let the AI run unsupervised just because it can. They treat specifications as the product and code as the derivative. And they stay curious about what's not working, not just what is.

I do agree with Grady Booch that we are in a golden age. This is, hands down, the most exciting time to be a software engineer during my career, which started in 1997. I don't believe we can put the genie back in the bottle. But we, as a profession, must learn to govern these systems that will produce most of the new program code going forward. We need to shift our focus to the what and the why. In a way, that might just be a good thing.

AI-assisted development is here to stay. The question was never whether to adopt it, but how to make it reliable. Govern the process, stay involved in the decisions that matter, and build on foundations you actually understand.