This week, I was at a conference in Ghent, Belgium, where I had the pleasure of speaking to an audience of software engineers about some of the themes I’ve explored in the book Human Software.
The title of my Ignite talk was “How We Treat Each Other At Work”, and while I wasn’t directly talking about AI, I felt I had to react to some presentations I’d heard earlier in the conference. Some software entrepreneurs had been boasting that GenAI-enabled engineers were delivering 50,000 lines of code a day to their codebases and that they could iterate on entire applications quickly.
My response to hearing these claims was one of immediate distaste and revulsion. But why, and what should I do about it? Are software engineers in general afraid to say no to GenAI?
The Throwaway App
To borrow a metaphor from the Industrial Revolution, if GenAI is the Spinning Jenny, then AgenticAI is Blake’s dark satanic mills.
GenAI is the precursor – the defining moment of change. AgenticAI is the harnessing of GenAI into a cacophony of action.
While GenAI can, by itself, replace a person’s skills in some areas, it does so in a piecemeal, slipshod fashion. Ask it to do something, and it might get it right, or at least to your satisfaction, but it’s a one-shot process, or one cranked manually by the user sending in repeated prompts.
AgenticAI, on the other hand, is a whole factory of GenAI models and processes all running simultaneously to produce software at scale. This makes it possible to create large, complex pieces of software very quickly.
This is why some entrepreneurs and soothsayers get very excited about the idea of software systems becoming very simple to write, maintain and roll out. Software becomes a throwaway service, and all of those expensive, knowledgeable software engineers can be replaced.
You Ain’t Going To Need It
There are many sayings in software engineering circles. Most of them are used to remind us how to practice our craft optimally. One of these acronyms is YAGNI, which stands for “You Ain’t Going To Need It”. The corollary of this is that most of the time, we don’t need more code; we need less.
Boasting about how many lines of code an AgenticAI-enabled engineer can create is anathema to a good software engineer. So don’t show me how many lines of code you added; show me how many lines you removed to make your design more elegant.
And yet, some engineers still nod and applaud while the machines burn through their tokens. Why? Is this simply ignorance of good engineering principles or something else? Are engineers afraid of saying no to AI?
Revolution or Revulsion?
Personally, I find that these productivity claims and the “more is better” approach to software engineering go against everything I know about the business of writing software at any scale. Lines of code generated do not equal productivity. Productivity is not the goal.
But I want to drill down into the reasons for my revulsion to understand specifically what makes me upset. Perhaps it is because the idea of the throwaway app is just another step on the road to the devaluation of yet another skill? Is it because the use of AgenticAI, fleets of LLMs doing and redoing parts of the work until it meets a given specification, is wasteful rather than thoughtful? Is it the environmental impact of all these wasted computing cycles in data centres that are taking over our communities and stealing their water and energy?
Yes. It’s certainly all of these things. So why aren’t more software engineers concerned enough about this to stand up and say something?
Division of Labour
Among my long-term friends and peers, we seem deeply divided. Some are excited about the opportunity of AI, while others are horrified. Some worry about their future, and others are blasé. Some of my contacts and connections on LinkedIn are fierce advocates, some are opposed, and many sit on the fence. Is this a fear that speaking up will mean they can no longer work? Are they scared of their own shadow? Is it something else?
I believe people are afraid because AI threatens their jobs. People are afraid because their bosses are insisting they use the AI tools available to them to write more and more code. Engineers are afraid because the prevailing narrative is that AI will eventually come for their jobs. And what are any of us doing to resist these changes?
As Engineers, Should We Pick a Side?
Now, it may surprise you to learn that I am not fundamentally anti-GenAI. It can be used for good. I use ChatGPT daily in my work. For me, it helps with my research: it’s a summariser, a sparring partner, and a generator of ideas or angles I hadn’t considered. I don’t use it blindly. I use it cautiously, I check the references, and I never generate text or large amounts of code, whether with ChatGPT, GitHub Copilot or Claude Code.
I use GenAI as a tool.
As an engineer, I’m not fundamentally anti-AI in the same way that I’m not anti-automation. I understand the mechanisms that underlie how GenAI works, and I can see there is no magic there. These systems are number-crunching machines that deliver predictions based on prompts. They guess what we want to see and do it well for most use cases, but they are unreliable and not yet truly scalable.
While for a certain class of problems, GenAI can save time and energy, AgenticAI is another level. What we’re seeing through the rollout of these factory hyperscalers and data centres is a sledgehammer approach to large-scale software engineering problems.
The Silent Exploitation of Resources and Communities
The environmental and social impacts of GenAI, LLMs, and hyperscale computing are not discussed enough in tech circles. Any tech conference will discuss and marvel at the impact of technology; few approach its negative side. Social media is the same. Led by corporations pushing their products, industry leaders are in the thrall of hyperscaler culture. Data centres are appearing rapidly around the globe, and local resources, including energy and water, are diverted to power and cool these behemoths.
As an engineer, I want to build things that improve our lives and advance the human race. I build things that are more efficient, less wasteful and require less maintenance. I want to build labour-saving, automatic systems that aid humanity.
AgenticAI is pure waste in the service of bad engineering practice. We end up with worse systems, more code, and more fuel and water consumed. The environmental cost is too high, and worse, the results often run directly counter to good engineering.
Surely this crosses a boundary of what it means to be an engineer. We should ask ourselves where we draw the line. What techniques are acceptable to achieve our aims?
Losing Credibility
By not asking ourselves these questions as engineers, we risk losing whatever credibility we might have gained. I see people making fools of themselves for the sake of a few clicks.
There is also a very real risk that, by outsourcing our skills to proprietary machines owned by private individuals, we lose control. If we outsource our thinking and our actions to third parties, we lose leverage over our own destinies. For many “white collar” or professional jobs, AI is becoming a valuable tool, but what is the cost to our own careers? How do we ensure it doesn’t devalue the impact of our own knowledge and influence? Can we?
I believe that engineers who do not question the use of AgenticAI are not only putting themselves out of a job but also betraying the very concepts they once upheld.
Data Centre Opposition
I am deeply opposed to the exploitation of humans and our planet’s resources simply to achieve better technology. Better technology is not a goal in itself, especially if it comes at such a high cost. Data centres’ power requirements and water usage for cooling are among the things I find abhorrent about the current race towards greater AI rollout and use.
Additionally, the impact on local communities cannot be overstated. In the Netherlands, a village is being moved to make space for more energy infrastructure, undoubtedly for data centre support. Globally, there were over 1,000 hyperscale data centres (owned by Google, Meta, AWS, etc.) in existence as of early 2025, with over 500 additional hyperscale facilities in planning or under construction at that time. (Source: cargoson)
There are many websites dedicated to tracking and opposing the expansion of data centres – like this one.
Our Choice
In my view, this is far from a golden age of AgenticAI – it is a terrible, wasteful age where we burn through tokens and destroy communities to build data centres and divert water and power to the machines. I became an engineer (and later a software engineer) to solve problems for humanity, not create them. This mindless factory approach to software engineering must be tempered with the parsimony and attention to detail of real engineering.
I would like to see more debates that also account for the human and planetary impacts of GenAI and hyperscale factory computing, and I urge all software engineers to think carefully about the impact of their experiments in AgenticAI. Think of the cost of your actions.
From talking to people, it has become clear to me that GenAI is still one of the most divisive topics in engineering, and yet its moral implications are often ignored. It’s time to take a stand and push for not just better engineering but safe engineering, for ourselves and our planet.
