ElixirLS MCP Server

With this week’s release, ElixirLS bundles an experimental MCP server (see the CHANGELOG).

Highlights

  • Added a call hierarchy provider implementing LSP textDocument/prepareCallHierarchy, callHierarchy/incomingCalls, and callHierarchy/outgoingCalls.
  • ElixirLS now bundles a number of experimental LLM-oriented tools, exposed as custom commands and a built-in MCP server. The tools focus on a model-friendly text interface rather than the typical IDE-oriented LSP API methods. Refer to README.md for how to connect to the MCP server (see the configuration sketch after this list). The tools include:
    • find_definition - Find and retrieve the source code of symbols.
    • get_environment - Retrieve the environment at a location, with aliases, imports, requires, and more.
    • get_docs - Aggregate and return comprehensive documentation.
    • get_type_info - Extract typespecs and contracts.
    • find_implementations - Find all implementations of behaviours and protocols.
    • get_module_dependencies - Analyze module dependency relationships.
  • Unofficial support for Elixir 1.19
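
For those wondering how to point an agent at it: below is a minimal sketch of registering the server with Claude Code via a project-level .mcp.json. The server name and launch command here are hypothetical placeholders; the actual command and transport are documented in the ElixirLS README.md.

```json
{
  "mcpServers": {
    "elixir-ls": {
      "command": "/path/to/elixir-ls/mcp-launcher.sh",
      "args": []
    }
  }
}
```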

Thanks to @lukaszsamson for this new feature. I’m curious to hear user reports on:

  • configuring it for use with Claude Code, OpenCode, etc.
  • compare/contrast with Tidewave
  • integration with UsageRules or other coding aids
  • pro-tips, benefits, drawbacks, opportunities, concerns, etc.
8 Likes

Interesting. Looks like it’s enabled by default, so for those of us who want to turn it off, setting elixirLS.mcpEnabled = false will take care of it.
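
A sketch of where that goes, assuming this is the VS Code extension setting (user or workspace settings.json):

```json
{
  // Disable the bundled MCP server
  "elixirLS.mcpEnabled": false
}
```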

I’m curious how many here are now using these agentic tools. Do we have a rough percentage?

1 Like

Shouldn’t this be an opt-in feature?

Personally, I just started playing with MCP very recently. I’m working with Ollama and Open Web UI. I tried getting Tidewave to work… I got it to respond to my requests, but I couldn’t actually get anything useful out of it. It wouldn’t even eval my “hello world” function. :frowning:

2 Likes

Could always start a poll!

1 Like

The newest patch release makes MCP opt-in, not opt-out:
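
A minimal opt-in sketch, again assuming the VS Code setting quoted earlier (the key now presumably defaults to false):

```json
{
  // Explicitly enable the MCP server after the opt-in change
  "elixirLS.mcpEnabled": true
}
```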

2 Likes

I recently went down this road, too. My gut said agents could benefit from language server tools just as we humans do. Every time I asked the LLM whether “semantic tools” offered a benefit over its built-in “text tools”, the agent (okay, it was Claude) responded enthusiastically that “yes, they are much more powerful than simple text-based tools!”

But in actual usage I rarely saw Claude using them effectively. Often it would try the LS tools, fail to get a good result, then fall back to its built-ins.

So I finally asked Claude to perform some refactor tasks with and without the LS tools, and to give an honest report on the comparative advantages, and it told me the semantic tools were a cool idea but were “optimized for human cognitive limitations and offer no benefit in an agentic AI context.” :pleading_face:

I have since abandoned this approach and have seen no dip in the quality of Claude’s output. I would love to be convinced that the problem was my implementation, not the underlying concept. But so far my conclusion is that this is not a productive direction for leveraging the strengths of AI in a dev environment.

[edit: Clarification that I was trying to use my own MCP tools during this exploration, not the ElixirLS MCP tools.]

1 Like

Thanks. My experience is similar. When I am using the claude CLI, it does not use the tools all that much, and repeated nudging does not help much. In fact, with a plain claude CLI setup there is essentially zero difference between having any MCP and a completely raw setup.

However, I have realized things change dramatically once we change the agent. Two examples that I can readily show:

  1. Zed Editor. When I am using the Zed editor with the same Claude Sonnet 4 model, MCP calling and overall output quality improve dramatically, everything else being the same. So what Zed gives the LLM as input is quite different from what the claude CLI gives it, and that is making a lot of difference.
  2. OpenCode. When I am using the opencode CLI with the same Claude Sonnet 4 model, say for fixing failed tests, OpenCode approaches the problem very differently from the claude CLI. OpenCode fixes the code optimally, in the fastest time, with fewer tokens. Nothing is certain with the claude CLI.

Again, both these tools (Zed, OpenCode) are using LSP or some semantic tool as part of the prompt, and it is showing its impact. How it can be enhanced is the question; that is where everyone is still in exploratory mode.
I can say with reasonable conviction that graph databases, vector embeddings, etc. are not that useful, at least for code generation.

3 Likes

Yes! We all, including the folks developing coding agents, are figuring this out at the same time!

Interesting that you got improved results with Zed/OpenCode; they must have cleverer system prompts? I will try some side-by-side comparisons vs. Claude Code…

[edit: Sorry for somewhat hijacking this ElixirLS thread; I will try your MCP tools too!]