David Turnbull for Lingo.dev

AI coding anti-patterns: 6 things to avoid for better AI coding

Introduction

Since joining Lingo.dev, startup life has pushed me to eke out as much as I can from AI coding assistants. The more I do this, though, the more counter-productive patterns I've noticed, patterns I'm sure other people are suffering from too.

This article covers some of the most significant anti-patterns that I want to proclaim from my soapbox.

1. Assuming understanding

If your AI assistant regularly goes off the rails, your prompts might not be as clear as you think. There's probably a lot of room for interpretation, leading to outcomes that are technically correct but not what you want.

The simplest option is to ask the AI to explain its understanding of your prompt back to you in its own words.

For more reliability, discuss the problem and your intent with the AI, prompting it to ask questions until the problem space is well understood. You'll likely realize your own understanding has some significant gaps.
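
One way to do this is to open the conversation with an explicit instruction. The wording below is just an illustration, not a magic formula:

```text
Before writing any code, restate the task in your own words,
list any assumptions you're making, and ask me up to five
clarifying questions. Don't start implementing until I've
confirmed your understanding.
```

The exact phrasing matters less than the structure: restate, surface assumptions, then ask.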

2. Persisting with dead-end conversations

When an AI assistant goes down the wrong path, it can be difficult to get it back on the right path.

If you feel like you're not making progress in a certain conversation, don't fall into the sunk cost fallacy. Instead, start over and try again.

LLMs are slot machines, and simply pulling the lever again can be the most effective option. (This is one of those "If it is stupid but it works, it isn't stupid" ideas.)

3. Wasting tokens on codebase exploration

AI coding tools are useful for exploring a codebase, but they also stumble down irrelevant rabbit holes. This means all sorts of useless information can end up in the context window, degrading the performance of the model.

If you know what the relevant parts of the codebase are, or if you know it'll only take you a couple of minutes to figure it out, refer to those files directly.

Alternatively:

  1. Use AI to explore the codebase.
  2. Ask the AI to return the list of file paths relevant to a certain task.
  3. Start a new conversation and reference those files directly.

That's the best of both worlds with only a couple of extra keystrokes.
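
As a rough sketch of that hand-off (the prompts and file paths here are invented for illustration):

```text
# Conversation 1: exploration only
Which files are involved in rendering the checkout page?
Reply with a plain list of file paths. Don't make any changes.

# Conversation 2: fresh context window
Using src/checkout/CartSummary.tsx and src/lib/pricing.ts,
add a per-line-item discount to the cart summary.
```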

4. Using too many MCP servers

Model Context Protocol (MCP) servers can be useful — I'm particularly fond of Context7 and Playwright MCP — but each server exposes tools with descriptions and those descriptions eat into the context window.

The more MCP servers that are enabled by default, the worse your starting point for every conversation. It's an immediate handicap.

Here's what I recommend:

  • Only enable MCP servers when they're relevant to the task you're working on.
  • Get comfortable with toggling MCP servers on and off in your coding assistant.
  • Consider if an MCP server is even necessary. For example, AI coding tools are excellent at using the GitHub CLI, so setting up an MCP server to interact with GitHub may not be the best trade-off (see the sketch after this list).
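
As a concrete example, here's roughly how this looks if you're using Claude Code (the server name and issue number are placeholders; other assistants have their own equivalents):

```sh
# See which MCP servers are currently configured
claude mcp list

# Remove one you rarely need; re-add it when a task calls for it
claude mcp remove playwright

# For GitHub, the assistant can often just drive the gh CLI,
# no MCP server required:
gh pr list --state open
gh issue view 42
```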

5. Bloated memory files

Most AI coding assistants have some concept of "memory" or "rules". These are instructions that are automatically injected into the context window based on what part of the code is being explored or modified.

These files can be a huge timesaver, but it's also easy for them to become bloated over time with an ever-expanding list of:

  • Preferences that aren't relevant to every request
  • Rules that are better followed without the use of AI

Here's what I recommend:

  • Start by putting preferences in text files that you have to explicitly reference. If you find yourself referencing them regularly, "promote" them to memory files (a pared-down example follows this list).
  • Don't waste the context window on rules that can be handled deterministically through linting and formatting tools.
  • Review and prune memory files on a regular basis. Ensure that every rule fights for its right to exist.
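
For a sense of scale, a well-pruned memory file can be surprisingly short. The contents below are invented for illustration, whatever the file is called in your tool (CLAUDE.md, .cursorrules, and so on):

```text
# Project rules
- Package manager: pnpm, never npm.
- All user-facing strings go through the i18n helper, never hardcoded.
- Run `pnpm test` before calling a task done.
- Formatting and import order are enforced by ESLint/Prettier,
  so they are deliberately not listed here.
```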

6. Having loyalty to any assistant or model

On Twitter, I regularly see die-hard fans of Claude Code or Codex or Cursor or whatever else. This kind of fandom only benefits the makers of the tools, not the people using the tools.

Instead:

  • Be willing to jump between tools and models
  • Regularly reevaluate (possibly outdated) assumptions about the "best" tools
  • Don't buy into the hype that tool makers are selling

I don't think it's necessary to constantly seek greener grass, but at least don't become static when the ground is shifting so quickly.

Top comments (2)

Max Prilutskiy (Lingo.dev)

Assuming understanding

This is very important. We oftentimes assume models already understand the "fundamentals", but that's a big mistake: the training data is usually outdated when you're working with cutting-edge tech.

Thomas Hansen

To (seriously) reduce token count, you have to change programming language. Just sayin' ...

I'll probably get half of DEV disagreeing with me here, but you simply cannot build serious systems using AI with traditional languages; you'll need declarative languages.

You can read more here ...