DEV Community

Dave Mackey


The Dangers of Vibe Coding Part 1: Premature Optimization

Stronger Hook in the Intro:

I wrote this article, but I asked ChatGPT for recommendations on how to improve it. It suggested I needed a stronger hook for the intro and I enjoyed its suggestion:

"I set out to build a simple browser extension. Two hours later, I was knee-deep in cyclomatic complexity metrics. Welcome to vibe coding with AI." -- ChatGPT.

Your Regularly Scheduled Intro...

Funny, intelligent, and not at all the way I talk...anyways...

One issue I've run into repeatedly with vibe coding is premature optimization. It happened again with ForgetfulMe, a simple browser extension I'm currently vibe coding.

No, I'm not talking about the AI doing premature optimization on its own (though that happens frequently). Instead, I'm talking about the AI tempting me to focus on premature optimization.

"TL;DR: AI coding assistants are great at pointing out problems—but they also make it dangerously easy to procrastinate via premature optimization. In early-stage projects, chasing perfection can stall real progress." -- ChatGPT.

The Temptations of AI Feedback

One of my practices is to regularly ask the AI to create a markdown document listing significant ways the code can be improved, bugs that need fixing, and best practices that aren't being followed. This works well. A little too well.

For example, when I recently asked it to analyze the codebase for dead code it did so and included some additional tidbits such as:

A. Overly Complex Functions

Several functions exceed the complexity limit of 10:

High Complexity Functions

  • handleMessage in background.js (complexity: 19)
  • handleSignup in auth-ui.js (complexity: 11)
  • getBookmarks in supabase-service.js (complexity: 16)
  • categorizeError in error-handler.js (complexity: 38)
  • getUserMessage in error-handler.js (complexity: 21)
  • createButton in ui-components.js (complexity: 12)
  • createFormField in ui-components.js (complexity: 11)
  • createListItem in ui-components.js (complexity: 12)
  • toSupabaseFormat in bookmark-transformer.js (complexity: 15)

Impact: High - Reduces maintainability and testability
Recommendation: Break down into smaller, focused functions
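"Break down into smaller, focused functions" is the classic fix for a branchy message handler like the one flagged above. As a minimal sketch (hypothetical handlers, not ForgetfulMe's actual code), a `handleMessage` built from a long `switch` can be replaced with a dispatch table of small, single-purpose functions, so adding a message type no longer adds a branch:

```javascript
// Hypothetical sketch: a switch-heavy handleMessage (complexity ~19)
// rewritten as a lookup table. Each handler stays tiny, and the only
// remaining branch is the "unknown type" guard.
const handlers = new Map([
  ["saveBookmark", (payload) => ({ ok: true, saved: payload.url })],
  ["getBookmarks", () => ({ ok: true, bookmarks: [] })],
  ["clearAll", () => ({ ok: true, cleared: true })],
]);

function handleMessage(message) {
  const handler = handlers.get(message.type);
  if (!handler) {
    return { ok: false, error: `Unknown message type: ${message.type}` };
  }
  return handler(message.payload);
}

// Known types route to their handler; unknown types hit one error path.
console.log(handleMessage({ type: "saveBookmark", payload: { url: "https://example.com" } }));
console.log(handleMessage({ type: "nope" }));
```

Using a `Map` (rather than a plain object) also avoids accidentally resolving prototype properties like `"toString"` as handlers.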

Well, none of that is good. It also reported the files exceeding 300 lines, the functions with too many parameters, and the list goes on.
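Incidentally, the thresholds the AI cited map directly onto ESLint's built-in rules, so if you want the same report deterministically (without asking the model each time), a config sketch like this covers it. The complexity and file-length limits are the ones mentioned above; the parameter cap of 4 is my own placeholder, pick whatever suits your project:

```javascript
// eslint.config.js (sketch) - ESLint's built-in rules cover these checks.
export default [
  {
    rules: {
      complexity: ["warn", 10],   // flag cyclomatic complexity over 10
      "max-lines": ["warn", 300], // flag files over 300 lines
      "max-params": ["warn", 4],  // flag functions with too many parameters
    },
  },
];
```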

At other times it's recommended implementing dependency inversion, updating integration tests - you get the idea.

It has an abundance of ways that I can and should improve the code base (now if only AI wrote a clean codebase to start with!).

Why It's a Trap

The danger is that I get distracted from what I was actually trying to accomplish (building X feature or fixing Y bug) and spend precious time on refactoring.

Yes, these things need to be done, but, especially for early prototyping, building something is more important than building something perfect.

Even as I say that, it sits poorly with me. I like architecting things the right way; I don't like sloppy code. I'd rather take a little longer to implement things well than save some time writing messy code.

But...

Conclusion

If AI is to be used productively, we have to balance developing features against perfecting code. For side projects and early prototypes, too much optimization is a bad thing. Sure, the AI can do the refactoring for us, but (at least currently) this isn't a fast process, and the hours we spend on refactoring are hours not spent building out basic functionality.

What do you think?

"Have you caught yourself vibe coding into a refactoring rabbit hole? How do you balance AI feedback with actual progress?" -- ChatGPT.

What Else Did ChatGPT Do?

  • I followed its suggestion to break up a long paragraph.
  • I also implemented section headings as it suggested, but generally used my own wording.

Top comments (11)

david duymelinck • Edited

If you are a non-technical person, I think vibe coding a prototype is a great way to show developers your idea of the application.
A developer should not let AI do prototyping.

I find that when I'm typing I often get other ideas about the solution. Then I stop and explore that idea to assess whether it is the better solution or not. If I let AI do the typing, I remove those moments from my process.

A prototype is like a fixer-upper. The core is good enough to go into production with minor changes, and outside the core the quality matters less because most of the time it will be rebuilt. And then there are the parts that are in between.
Specifying different levels of code cleanliness is hard to do in a prompt. I have never even attempted it because I have no clue how to do that.

I know I didn't directly respond to the question. But maybe it can help you find an answer.

Dave Mackey

I think there is a lot of wisdom in what you state here. I'm still working on the balance myself. I'm more willing to vibe code on things that aren't important but that I'd like to have now.

david duymelinck

I can agree with your view. When I wrote "A developer should not let AI do prototyping," I was thinking about long-term projects.

If you have an idea and you want to quickly see it working, AI can be a good tool, with the consequence that you may have to rewrite the whole thing if the idea turns out to be something you want to continue developing.
I think at some point in the process you have to decide if the idea is a throwaway or a keeper. And if it is a keeper, you should invest in code quality.

Schuster Braun

Totally agree with this. AI is an engine for "good ideas" on next steps. I do think, though, that it can identify done states: if you give it a goal (I'm not saying it does the best job), it can judge when it's done. But yeah, I think it'll be harder to triage open-ended good ideas. So maybe have a clear Definition of Done and focus on getting to that point, then do cleanup in another step with its own clear Definition of Done.

Dave Mackey

I agree that giving a definition of done is helpful. The problem I've run into is that the AI creates a list of tasks (e.g. 5 code refactors to improve code quality) but falters on maybe 2 of them and requires a lot of handholding. It gets there eventually, but the handholding takes me away from continuing the prototype.

Schuster Braun

I'm working on this as well. Is it right-sizing the ask? Asks that are too big get bugs; too small feels like handholding. Part of the answer, I know, is task decomposition: you ask it to break big asks down into small ones, which automates some of the handholding away. But I too am trying to keep it out of the black hole of fixing incorrect assumptions.

Prema Ananda

Totally agree! This problem is very familiar.
When AI starts diving too deep into details and suggesting tons of "improvements", I actively discourage it. Sometimes I want to scold it for such eagerness, but I hold back 😅
I just undo everything it created and clearly explain: "I need to solve specific task X, not rewrite the entire project. Let's focus only on this."
The key is to immediately cut off AI's attempts to "improve everything around" and keep focus on the current goal.

Dave Mackey

Haha, I give in to the scolding sometimes!

Mama Dev

Have you tried different models? Noticed any differences? I've found Claude's models to be quite practical.

Dave Mackey

I have used a few different models. I also find that Claude's models seem to do quite well. I'm less impressed with OpenAI, and I need to experiment with Gemini more; it feels so foreign.

Prema Ananda

Gemini 2.5 Pro and Claude 4 Sonnet are approximately at the same level.