AI Coding Agents confirm that Agile and XP still stand
Clean code, XP, and some Agile practices have curiously made a big comeback now that we use AI coding agents like Claude Code, Augment, or Cursor.
“Comeback” might be a strong word here - after all, hopefully, most teams have been following these practices all along. Nevertheless, it’s now clear that the practices that kept us - human software engineers - sane and productive all these years are even more relevant with AI coding agents. Ultimately, they benefit both humans and AI.
AI × XP in legacy and greenfield projects
Teams that have been writing clean code, designing clear architectures with good abstractions and separation of concerns, relying on unit tests, and keeping the documentation up to date now find themselves in an advantageous position when it comes to using Claude Code and similar tools. Granted, documentation and unit tests can now be easily generated with AI, and the code can be refactored, but good groundwork and the proper testing or documentation of those pesky corner cases still provide a significant advantage. Anyone who survived the “rewrite this whole thing” type of project knows what I mean.
In other, probably smaller side projects, where we thought that unit tests were overkill, or where we kept the READMEs generated by create-react-app (we all have a couple of those, right?), we are now updating the documentation and catching up on test coverage, making these projects better for both human developers and AI agents.
Greenfield projects illustrate this even better. My typical Claude Code workflow in a greenfield project?
- Write down an idea,
- tell Opus, “You are a product manager…” and generate docs/features.md,
- similarly, generate and review an architecture in docs/architecture.md,
- break down the work into stories or tasks (“You are the Technical Team Leader…”) in docs/planning.md,
- finally, use Claude Code to “Implement task FEAT-001 and don’t forget to run tests and linters when done”,
- rinse and repeat,

where splitting work into the smallest possible deliverables is key to quality results, and validating the results automatically with tests is key for the AI to complete the task with full autonomy.
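As an illustration, the “rinse and repeat” step can even be scripted. Below is a minimal, hypothetical sketch of such a loop; it assumes the claude CLI’s non-interactive -p (print) mode, a pytest-based test suite, and task IDs in the FEAT-00x style used above.

```python
import subprocess

# Hypothetical task IDs, assumed to match the entries in docs/planning.md.
TASKS = ["FEAT-001", "FEAT-002", "FEAT-003"]

for task in TASKS:
    prompt = (
        f"Implement task {task} from docs/planning.md "
        "and don't forget to run tests and linters when done."
    )
    # "claude -p" runs Claude Code non-interactively with a single prompt.
    subprocess.run(["claude", "-p", prompt], check=True)

    # Validate the result automatically: the agent can only work with full
    # autonomy when the outcome is verified by the project's own tests.
    subprocess.run(["pytest", "-q"], check=True)
```

The script itself matters less than the gate at the end of each iteration: a failing test suite stops the loop before the next task begins.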
Human engineers and AI: both like unit tests when they pass
I am really glad to see that time-honored, simple, and pragmatic practices like writing clean code and having unit tests, most of which stem from XP, are, in a way, being reaffirmed by AI coding agents.
This is somewhat unsurprising - after all, LLMs naturally replicate our own reasoning, and AI coding agents are created by trained software engineers.
Yet, it is interesting to note that these specific practices are the ones that stand out and, crucially, make life easier for both software engineers and AI.
–
Happy coding!