Compiling Custom Model Fails to Load Into Web-LLM #633

@Justin-Cignal

Description

I have followed the instructions here to add a new model and load it locally: https://llm.mlc.ai/docs/deploy/webllm.html. I was able to successfully convert the weights, generate a config, and compile the model. However, I get this error when trying to initialize the model:

Error: Cannot find global function mlc.grammar.BNFGrammarGetGrammarOfJSON
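For reference, this is roughly how I am initializing the model in web-llm (a minimal sketch; the model_id and the localhost URLs are placeholders standing in for my local compilation output, not the exact values from my project):

```typescript
import { CreateMLCEngine, AppConfig } from "@mlc-ai/web-llm";

// Placeholder URLs: the converted weights and the compiled wasm are served
// from my local mlc_llm convert_weight / gen_config / compile output.
const appConfig: AppConfig = {
  model_list: [
    {
      model: "http://localhost:8000/SmolLM2-135M-Instruct-q4f32_1-MLC",
      model_id: "SmolLM2-135M-Instruct-q4f32_1-MLC",
      model_lib: "http://localhost:8000/SmolLM2-135M-Instruct-q4f32_1-webgpu.wasm",
    },
  ],
};

// The "Cannot find global function mlc.grammar.BNFGrammarGetGrammarOfJSON"
// error is thrown during this initialization step.
const engine = await CreateMLCEngine("SmolLM2-135M-Instruct-q4f32_1-MLC", {
  appConfig,
});
```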

I have also run the compilation on this model from Hugging Face, https://huggingface.co/mlc-ai/SmolLM2-135M-Instruct-q4f32_1-MLC, and got the same issue, so I know the problem is with the compilation to the .wasm file. When I use the prebuilt wasm file from GitHub (SmolLM-135M-Instruct-q4f32_1-ctx2k_cs1k-webgpu.wasm), the model loads fine.
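In other words, the only change that makes loading succeed is pointing model_lib at the prebuilt library instead of my compiled one. A quick way to see which wasm URL web-llm ships with is to look it up in its prebuilt config (sketch, assuming the model_id matches an entry in that list):

```typescript
import { prebuiltAppConfig } from "@mlc-ai/web-llm";

// Look up the prebuilt record whose wasm loads correctly, and log the
// model_lib URL it points at in the binary-mlc-llm-libs GitHub repo.
const record = prebuiltAppConfig.model_list.find(
  (m) => m.model_id === "SmolLM2-135M-Instruct-q4f32_1-MLC",
);
console.log(record?.model_lib);
```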

I also tested this on gemma-2b-it-q4f32_1-MLC and hit the same issue. Compilation was tested with mlc-llm v0.18 and v0.17.2.

Compilation was tested on the following systems:

  • Ubuntu 22.04.5 LTS, NVIDIA GeForce RTX 3060
  • Ubuntu 22.04 LTS, NVIDIA GeForce RTX 4090
  • Xubuntu 22.04.5 LTS, NVIDIA GeForce RTX 3060

Loading with web-llm was tested on the following browsers:

  • Chrome Canary V133.0.6852.0, Windows
  • Google Chrome Stable V130.0.6723.91, Ubuntu
