AI Coded So Fast It Exposed All My Mistakes. So I Made It Fix Those Too.
How one broken AI coding sprint forced me to rethink software engineering and build something to fix it.
The real coding happens before a single line of code is written. I was trying to vibe code a GUI/CLI importer program and was slowly losing the vision of the product with each pass. I wasn’t sure what the GUI should look like anymore. I wasted time and Codex usage migrating the Golang GUI framework three times: first from Fyne to Wails+JS, then from Wails+JS to Wails+TS, then from Wails+TS to Wails+Vite+Svelte. At some point, old code and docs were what prevented Codex from doing a truly complete overhaul, leaving the docs stale and creating a real risk of mismatched docs and code. It was honestly impressive the codebase even survived up to that point.
I already felt I wasn’t being concrete enough with my spec, but I had no process for narrowing it down and no process to stop myself from moving on when it wasn’t concrete enough. Soon I realized that if the spec wasn’t clear and tight, the AI was going to make those decisions for me.
And it did. I looked at the JSON spec and thought: wait, that field isn’t supposed to be there. I thought I was being clear enough. How could Codex hallucinate that and not tell me it was uncertain, despite me always appending “if you have any clarifying questions, stop and ask”?
I was stressed about what happens if the importer spec changes. What if the SQLite schema changes? What if other programs depending on the CLI args suddenly find the argument shape is different? v0 prototyping is easy because you can change anything at a whim. But I knew that the moment I published v1, the fate of the program would be sealed and everything would get much harder instantly.
So I kept pushing: making sure v1 was as polished as possible, covering every edge case I could think of, making sure the only updates it would ever need were tiny bug patches. I kept asking Codex “is there anything confusing or wrong with the code” and watching it helplessly sift through the project, doing its best to find what was wrong across the 1,000+ lines of docs and the 10,000+ lines of code. It wasn’t until my weekly Codex limit ran out that I was forced to stop and reflect.
I didn’t feel like I had solid ground anymore, and Codex had only amplified that feeling. I panic-researched and found these quotes almost relieving:
The hardest single part of building a software system is deciding precisely what to build. No other part of the conceptual work is as difficult as establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later. – Fred Brooks
If you don’t understand the problem, you can’t possibly come up with a good solution. – Douglas Hubbard
Afterwards, I sat with what happened and those quotes for a day. I began to conceptualize what it really means to write software while wearing both a product manager’s hat and a software engineer’s hat: which questions to ask first, and how to narrow the product scope well. Really asking the right tough questions from the very beginning. Through this process, I felt my anxieties slowly starting to be answered.
I pasted what I wrote, along with my sprint notes, into ChatGPT. I didn’t know where I was going. Somehow, through that conversation, I landed on a product lifecycle diagram, which turned into a state machine that an AI could run. An AI-run state machine that can keep track of product lifecycle state so you don’t have to. I now call this PLSM: Product Lifecycle State Machine.
I stared at the ChatGPT screen as the realization clicked: you could use an AI to run high-level state machines, which meant you could run the state machine on the product lifecycle diagram itself, with the product lifecycle state written to a stable text file. That meant I didn’t have to keep track of the entire product lifecycle state in my head. I no longer needed the constant anxiety of trying to fit the codebase in my head, or of making sure every little thing that is “expected”, like the database or the schema, was properly migrated, only to realize that the CLI argument shape is also a kind of “expectation” that CLI users rely on.
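To make that concrete, here is a hypothetical sketch of what such a stable state file could look like. Every field name and value below is my own illustration built from this article’s examples; it is not PLSM’s actual format, which is defined in the repo.

```yaml
# Hypothetical product lifecycle state file (illustration only;
# PLSM's real format lives in the GitHub repo).
product: importer                # the GUI/CLI importer from this article
lifecycle_state: v1-published    # e.g. v0-prototype, v1-published, maintenance
promises:                        # user-facing "expectations"; changing any one is a breaking change
  - cli-argument-shape
  - sqlite-schema
  - importer-json-spec
in_progress:
  - finish docs cleanup after the Wails+Vite+Svelte migration
safe_update_kinds:
  - tiny-bug-patches
```

The point isn’t the exact fields; it’s that the AI reads and updates a file like this on every pass, so the “expectations” live in text instead of in your head.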
I felt like I had finally cracked something I had silently suffered through for my entire software engineering career. Imagine someone finds an open source project that does exactly what they need, but the last commit was three years ago and the author is gone. Normally, they’d have to reverse engineer the author’s intentions from the code itself, hoping nothing breaks when they touch it. With PLSM, they open the entrypoint file, point the AI to it, and within minutes know exactly what the project is, what it promised its users, where it left off, and what a safe update looks like. That’s what I built, and I hope it helps you too.
(The GitHub repo explains in more depth what PLSM is, how it works, and why it matters.)
MIT open source @ github.com/binary-person/plsm
Acknowledgements: David Malawey on YouTube, for his inspiration on what open design truly means. Only through his hints was I able to reach the PLSM conclusion.

