Don't Let Vibe Code Become Legacy Code

Nick Telsan, Senior Developer

Article Categories: #Code, #Front-end Engineering, #Back-end Engineering, #Tooling


Generated code can easily slip into a realm of unreadable, unmaintainable, and frustrating code. Keep your repo free of legacy code with these four simple tricks!

This might be shocking coming from a developer, but I like to write code. I usually spend a good chunk of my weekend tooling around with my home server setup or building side projects that will never see the light of day. My time is limited, though, so if I really want a little app for my own personal use, I'll let Cursor or Claude Code (or whatever else is new and shiny that week) write the code for me. This works really well... until I come back the next weekend and realize I have no idea where to start changing code to introduce new features. All of that code has now become legacy code.

What is legacy code and why do we care? Can't we just let the AI continue to work on the code?

Legacy code is code that you cannot comfortably change, either because you don't understand why it was written the way it was or because it's such a mess that it's easier to code around it than to fix it.

On a little solo project, this is probably not a big deal — just keep letting the AI build more and more legacy code. Sure, you might never understand it, but you can always just put a TODO in the repo saying you'll rewrite it in Rust anyways.

When you start working with a team, the code that you're generating costs someone else time and maybe a little sanity. Your colleague who was told to change something in that vibe-coded feature has to spend a potentially non-trivial amount of time trying to figure out where they could even start to change something, and then they have to hope that they don't break something with their change.

Legacy code tends to "breed," generating more legacy code. There is nothing more permanent than a temporary fix. It's easy to layer on more and more workarounds, and each of those workarounds adds more tech debt and frustrates your colleagues. At some point, you'll hit a critical mass where the only reasonable way to add a new feature is a full rewrite, which costs you time that could have been spent on new features, opens the door to regressions, and burns goodwill with your client.

You might be thinking that this is just an issue if you don't write tests, but this isn't just about test coverage. It's about unclear abstractions, inconsistent patterns, and logic that made sense to the AI but not to humans. Tests tell you if something breaks, but they don't tell you where to make your change or why the code does what it does.

Let's take a look at how to go from "vibe coding" to engineering with AI-powered tools.

Own It #

One of my favorite ways to use AI tools — and the reason I haven't called any of this "agentic" yet — is to just ask it questions and write the code myself. For simple or repetitive things, I'll let the agent handle the coding, but we learn best by doing. Just the act of typing the code helps it stick in your head and gives you the chance to adjust the output to be more human-friendly. This isn't necessarily very efficient, if only because I'm a slow and error-prone typist, but it does seem to help me learn how to use a new package or pattern.

If that's not appealing to you, own something. Something that you write. Test-driven development is a great pattern for this. Write the tests yourself and don't let the AI change them except for small details like parameter names. You might be a level abstracted from the code, but you'll still have a good grasp on how it should work. Another good option is to manually write integration points. This could be defining the database or endpoint schemas, or, if you're using something like TypeScript, defining the inputs and outputs of your functions, as in the sketch below.
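Here's a minimal sketch of what that can look like, assuming a TypeScript project with Vitest; `applyDiscount`, `DiscountInput`, and the behavior being tested are made up for illustration. The type and the tests are the parts you write by hand; the function body is the part you'd hand to the agent.

```typescript
// discount.test.ts — a hypothetical example. You own the shape of the
// function and the tests that pin down its behavior.
import { describe, it, expect } from "vitest";

export interface DiscountInput {
  subtotal: number;
  percentOff: number;
}

// Hand-written signature; the body is what the agent fills in, constrained
// by the type above and the tests below.
export function applyDiscount({ subtotal, percentOff }: DiscountInput): number {
  const discounted = subtotal * (1 - percentOff / 100);
  return Math.max(0, discounted);
}

describe("applyDiscount", () => {
  it("applies a percentage discount to the subtotal", () => {
    expect(applyDiscount({ subtotal: 100, percentOff: 25 })).toBe(75);
  });

  it("never returns a negative total", () => {
    expect(applyDiscount({ subtotal: 10, percentOff: 150 })).toBe(0);
  });
});
```

Even if the agent writes every line of the implementation, you've already decided what "correct" means, and that decision is the part that sticks with you.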

Regardless of how you choose to approach this, the point is for you to really think about the problem you're solving and get your hands a little dirty. Just a little bit of manual work will help the ideas stick around and give you better insight into what is good code and what is not.

Review Your Own PRs #

Whenever you finish a feature and put up your PR, the first person to review that code should be you. If you wrote all the code yourself, you might have some writer's blindness, but hopefully you can catch silly things so your teammates can focus on the meaningful code. If the code is generated, you should scrutinize it. Your name is on these commits. If something isn't right with the implementation or if someone has a question about the feature, they're going to come to you first. Make sure it's code that you understand and that you're comfortable having your name on. Saying "oh, I used Codex for this" isn't going to cut it when there's a bug in production.

Write Commits for Your Future Self #

Your commits are not just there to make your contributions chart on GitHub look good. They are the roadmap that tells you how you got from point A to point B all the way to point 1c17df6. Some of the best tools you have for understanding a code base are things like git diff and git blame. Looking at the code as it currently is doesn't give you enough information to understand the "why" of the code. By looking at the changes over time (especially with good commit messages), however, you can usually piece together the "why."

Whether or not you're using an AI tool, a good rule of thumb is "one logical change per commit." If you're using an AI tool, you might adapt this to "one (complete) prompt per commit." If you tell the AI to add a new function, once it's done and the functionality is right (which could take multiple prompts), write your commit. It's better to have too many commits than not enough (a rule I don't follow often enough). You can always come back through and adjust or edit commits as you need.
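As a rough sketch of that workflow on the command line (the commit message and file name here are made up for illustration):

```sh
# Stage only the hunks that belong to this one logical change.
git add -p

# Record the change and the intent behind it (message is illustrative).
git commit -m "Add applyDiscount helper with a floor of zero"

# Later, the history becomes the roadmap:
git log --oneline          # how you got from point A to point 1c17df6
git blame src/discount.ts  # which commit (and why) last touched each line

# Too many commits? Tidy them up before the PR goes out for review.
git rebase -i origin/main
```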

Centralize Your Tooling #

Everyone on our team uses different tools — Windsurf, Cursor, Codex, Claude Code, and probably more. Each of those manages some form of rules differently. However, most tools do understand AGENTS.md. You can think of AGENTS.md as the README.md for your AI tooling. It's not as powerful as tooling-specific rules, but it is a good place to drop repo-wide context. This can be things like basic scripts and commands, style guides for code and commits, and general context about the project. If you're working in a monorepo, you can even nest them so that each package has its own special context.
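For example, a repo-level AGENTS.md might look something like this; the commands and rules here are invented for illustration, so swap in whatever actually applies to your project:

```markdown
# AGENTS.md

## Commands
- `npm run dev`: start the local dev server
- `npm test`: run the unit tests
- `npm run lint`: check linting and formatting

## Code and commit style
- TypeScript strict mode; avoid `any` without a comment explaining why.
- One logical change per commit; put the "why" in the commit body.

## Project context
- This repo is the customer-facing storefront.
- The admin app lives in `apps/admin` and has its own nested AGENTS.md
  with package-specific context.
```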

Getting everyone on the same tool is also great, but we're in a period of exploration and invention right now. There's a lot of value in trying a bit of everything, and using AGENTS.md still gives you a reasonable starting place for context and rules.

Wrap Up #

The genie is out of the bottle. Agentic coding and AI tooling are here to stay. These tools open all sorts of doors for us. Designers prototype their ideas without having to learn all of React, developers focus on the really tricky architectural problems, and everyone can get their weekend projects done (so that they can start another one).

Owning your code, reviewing your own PRs, committing frequently, and sharing knowledge with your team: none of this is exactly groundbreaking. That's because these principles are just best practices. Turns out that the practices our industry has been developing for decades just kind of work. We just need to update a few flows for the modern day.

None of us can really predict what software development is going to look like in five years or a decade, but I know I'm still going to be writing code. As these tools evolve, I'm not going to shy away from them. As long as we keep building on these principles, no matter what these tools become, we'll still be able to understand the code and get our hands dirty when we need to.

Nick Telsan

Nick is a Senior Developer. He has a passion for building things and is never one to shy away from learning new things.
