
Conversation


@P4o1o P4o1o commented May 24, 2025

Added two flags to the scan command:

  • --url <llm_host_url> / -u <llm_host_url> lets you specify the URL of the LLM backend (if not specified, the program uses the OpenAI URL: https://api.openai.com)
  • --model <model_name> / -m <model_name> lets you choose the model you prefer (if not specified, the program uses gpt-4.1-nano)

Example:

```shell
# example with Ollama on localhost, using the default Ollama port
vibesafe scan --url http://127.0.0.1:11434 --model gemma3:27b-it-q8_0
# or
vibesafe scan -u http://127.0.0.1:11434 -m gemma3:27b-it-q8_0
# or, if you prefer DeepSeek
vibesafe scan -u https://api.deepseek.com -m deepseek-chat
```
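The defaulting behavior described above can be sketched as a small shell wrapper. Note that LLM_URL and LLM_MODEL are hypothetical environment variables used only for illustration, not part of vibesafe; only the two documented defaults come from this PR.

```shell
# Hypothetical wrapper: fall back to the PR's documented defaults
# (https://api.openai.com and gpt-4.1-nano) when no override is given.
URL="${LLM_URL:-https://api.openai.com}"
MODEL="${LLM_MODEL:-gpt-4.1-nano}"
echo "vibesafe scan --url $URL --model $MODEL"
```

With neither variable set, this prints the same command vibesafe effectively runs when you pass no flags at all.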
@slowcoder360
Owner

Thank you for doing this, I really appreciate it!


@slowcoder360 slowcoder360 left a comment


Thank you so much for your work! This is a great addition to the project, you are a legend.

@slowcoder360 slowcoder360 merged commit c832135 into slowcoder360:master May 30, 2025
1 check failed
