[WIP] feat: ai plugin refactor #1657
base: develop
Conversation
> **Important** — Review skipped: draft detected. Please check the settings in the CodeRabbit UI.

**Walkthrough**

This change set overhauls the robot plugin: it introduces a new Home.vue entry component, adds an OpenAI-compatible client/provider with dynamic LLM configuration, implements a streaming chat + MCP tools composable, updates the Builder mode to Agent, refactors components and icons, adjusts MCP types and usage, bumps robot dependencies, and tweaks minor layout/CSS and auth header handling.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant UI as Home.vue / RobotChat.vue
    participant Chat as useChat (composable)
    participant Client as AIClient
    participant Prov as OpenAICompatibleProvider
    participant MCP as MCP Server
    User->>UI: Type prompt / select prompt
    UI->>Chat: send(message, attachments?)
    Chat->>Client: chatStream(request with messages/tools)
    Client->>Prov: chatStream(payload, beforeRequest)
    Note over Client,Prov: beforeRequest injects MCP tools, model, apiKey, baseURL
    Prov-->>Client: SSE stream (delta, tool_calls, reasoning)
    Client-->>Chat: onReceiveData(delta)
    Chat->>UI: Update bubbles (reasoning/markdown)
    alt Tool calls present
        Chat->>MCP: Execute tool(name, args)
        MCP-->>Chat: Tool result
        Chat->>Client: Continue stream with tool result
    end
    Client-->>Chat: onFinish
    Chat->>UI: Finalize message/state
```

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant UI as Home.vue / SettingsPopover
    participant Cfg as updateLLMConfig
    participant Client as AIClient
    participant Prov as OpenAICompatibleProvider
    User->>UI: Change model/apiKey/baseURL
    UI->>Cfg: updateLLMConfig(newConfig)
    Cfg->>Client: update in-memory config
    Cfg->>Prov: updateConfig(newConfig)
    Note over Client,Prov: Subsequent requests use updated provider settings
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
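The delta-accumulation step in the streaming loop above could look roughly like this. This is a sketch with hypothetical names (`accumulateDeltas` and the interfaces are not the plugin's actual API), assuming OpenAI-style SSE deltas in which fragments of one tool call share the same `index`:

```typescript
// Hypothetical sketch of delta accumulation in a streaming chat composable:
// text content is appended chunk by chunk, and tool_call fragments are
// merged by index until the stream finishes.

interface ToolCallDelta {
  index: number
  id?: string
  function?: { name?: string; arguments?: string }
}

interface StreamDelta {
  content?: string
  tool_calls?: ToolCallDelta[]
}

interface AccumulatedMessage {
  content: string
  toolCalls: { id: string; name: string; arguments: string }[]
}

export function accumulateDeltas(deltas: StreamDelta[]): AccumulatedMessage {
  const msg: AccumulatedMessage = { content: '', toolCalls: [] }
  for (const delta of deltas) {
    if (delta.content) msg.content += delta.content
    for (const tc of delta.tool_calls ?? []) {
      // Fragments of one tool call arrive under the same index.
      let slot = msg.toolCalls[tc.index]
      if (!slot) {
        slot = { id: '', name: '', arguments: '' }
        msg.toolCalls[tc.index] = slot
      }
      if (tc.id) slot.id = tc.id
      if (tc.function?.name) slot.name += tc.function.name
      if (tc.function?.arguments) slot.arguments += tc.function.arguments
    }
  }
  return msg
}
```

Once the stream finishes, the accumulated `arguments` string can be `JSON.parse`d and dispatched to the MCP tool executor, matching the "Tool calls present" branch in the diagram.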
**Pre-merge checks and finishing touches**

❌ Failed checks (1 warning, 1 inconclusive)

✅ Passed checks (1 passed)
The code structure could be enhanced to improve the plugin's extensibility and maintainability, for example:
Benefit: centralizing the static configuration of LLM providers and their models makes it easy to read and maintain, and also makes it straightforward to add or hide models from one place later. Example of an enhanced structure:

```js
export default {
  name: 'DeepSeek',
  apiBase: 'https://api.deepseek.com/v1',
  models: [
    {
      id: 'deepseek-chat',
      name: 'deepseek-chat',
      contextWindow: 65536, // context window size
      maxTokens: 8192,
      defaultMaxTokens: 8000,
      inputPrice: 0.0006, // input token price
      outputPrice: 0.002, // output token price
      isDefault: true,
      description: `60 tokens/second, Enhanced capabilities, API compatibility intact`,
      capabilities: { // model capabilities
        tools: {
          enabled: true,
        },
      },
    },
  ],
}
```
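A small helper over such a centralized config could then resolve the default model. A minimal sketch assuming the structure above (the types and `defaultModel` name are hypothetical, not existing plugin code):

```typescript
// Hypothetical types mirroring the proposed provider/model config shape.
interface ModelConfig {
  id: string
  name: string
  contextWindow: number
  maxTokens: number
  isDefault?: boolean
}

interface ProviderConfig {
  name: string
  apiBase: string
  models: ModelConfig[]
}

// Pick the provider's default model, falling back to the first entry.
export function defaultModel(provider: ProviderConfig): ModelConfig | undefined {
  return provider.models.find((m) => m.isDefault) ?? provider.models[0]
}
```

With all provider configs collected in one directory, adding or hiding a model becomes a one-line change in the config rather than an edit scattered across components.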
 
Benefit: different LLM providers, and even different models, can have subtle differences in tool_call format and request parameters. If all the common handling is encapsulated in a single provider, any future customization can be exposed through configuration, letting downstream users pass in their own provider to handle those tool_call and parameter differences. Summary: better extensibility + high cohesion.
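One way to sketch that provider abstraction (the interface and class names here are hypothetical, assuming an OpenAI-compatible wire format as the default):

```typescript
// Hypothetical provider contract: each provider normalizes its own
// tool_call format and request parameters behind a common interface,
// so downstream users can plug in a custom provider for other vendors.

interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'tool'
  content: string
}

interface NormalizedToolCall {
  id: string
  name: string
  arguments: Record<string, unknown>
}

interface LLMProvider {
  name: string
  // Translate a generic request into the provider's wire format.
  buildPayload(messages: ChatMessage[], model: string): Record<string, unknown>
  // Normalize a provider-specific tool call into a common shape.
  parseToolCall(raw: any): NormalizedToolCall
}

// Default OpenAI-compatible implementation; a custom provider only
// overrides the parts whose wire format differs.
class OpenAICompatible implements LLMProvider {
  name = 'openai-compatible'

  buildPayload(messages: ChatMessage[], model: string) {
    return { model, messages, stream: true }
  }

  parseToolCall(raw: any): NormalizedToolCall {
    return {
      id: raw.id,
      name: raw.function.name,
      arguments: JSON.parse(raw.function.arguments),
    }
  }
}
```

The client would then depend only on `LLMProvider`, keeping vendor-specific quirks cohesive inside each implementation.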
PR
PR Checklist
Please check if your PR fulfills the following requirements:
PR Type
What kind of change does this PR introduce?
Background and solution
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Does this PR introduce a breaking change?
Other information
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Chores