This document provides a high-level introduction to Prompt Optimizer, covering its purpose, architecture, deployment options, and key design decisions. For detailed information about specific subsystems, refer to the following pages:
Prompt Optimizer is a multi-platform AI prompt engineering tool that helps users write better prompts for large language models (LLMs). The application takes a user's initial prompt and optimizes it using various templates and AI models, supporting iterative refinement through version-tracked prompt chains.
The system operates as a pure client-side application with no backend server—all data flows directly between the user's browser/desktop and AI service providers (OpenAI, Gemini, DeepSeek, etc.). This architecture ensures data privacy while eliminating the need for infrastructure maintenance.
Current Version: 2.1.0 (as of package.json)
Primary Use Cases:
Sources: README.md 1-50, README_EN.md 1-50, package.json 1-10
| Capability | Description | Key Components |
|---|---|---|
| Prompt Optimization | Transform basic prompts into structured, effective prompts using AI | PromptService, TemplateManager, LLMService |
| Multi-Model Support | Connect to OpenAI, Gemini, DeepSeek, Zhipu, SiliconFlow, custom APIs | TextAdapterRegistry, provider adapters |
| Image Generation | Text-to-image (T2I) and image-to-image (I2I) capabilities | ImageService, ImageModelManager, ImageAdapterRegistry |
| History Tracking | Version-controlled prompt chains for iterative refinement | HistoryManager, PromptRecordChain |
| Advanced Testing | Context variables, multi-turn conversations, function calling | ContextRepo, advanced mode UI |
| Favorites Management | Save and categorize prompts with tags and hierarchical categories | FavoriteManager |
| Data Portability | Import/export all configurations and data | DataManager |
| Internationalization | Multi-language UI support (English, Simplified Chinese, Traditional Chinese) | Vue I18n system |
Sources: README.md 43-72, README_EN.md 43-71
Monorepo Structure: Five deployment targets share common @prompt-optimizer/core and @prompt-optimizer/ui packages
Three-Layer Architecture Pattern:
Key Architectural Principles:
Service Injection Pattern: Core services implement interfaces (e.g., ILLMService, IModelManager) and are injected into UI composables, enabling platform-specific implementations (direct import in web, IPC proxies in Electron)
Adapter Pattern: AI provider integration uses adapters (OpenAIAdapter, GeminiAdapter) registered in TextAdapterRegistry, allowing runtime provider selection without tight coupling
Storage Abstraction: Storage operations use IStorageProvider interface with implementations for browser (BrowserStorageProvider using IndexedDB) and file system (FileStorageProvider for Electron)
Pure Client-Side: No backend server—all AI API calls originate from the client, with API keys stored locally
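The storage abstraction principle can be sketched as follows. This is a minimal illustration: the interface and class names mirror those mentioned in this document, but the method signatures are assumptions, and the in-memory class stands in for the real IndexedDB-backed implementation.

```typescript
// Minimal sketch of the storage abstraction described above.
// Method signatures are assumptions for illustration; the real
// IStorageProvider interface lives in @prompt-optimizer/core.
interface IStorageProvider {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
}

// In-memory stand-in for BrowserStorageProvider (the real one uses IndexedDB
// via Dexie); FileStorageProvider would implement the same interface for Electron.
class MemoryStorageProvider implements IStorageProvider {
  private store = new Map<string, string>();
  async getItem(key: string) { return this.store.get(key) ?? null; }
  async setItem(key: string, value: string) { this.store.set(key, value); }
}

// Services depend only on the interface, so each platform decides the implementation.
async function saveModelKey(storage: IStorageProvider, key: string): Promise<void> {
  await storage.setItem("model-key", key);
}
```

Because callers see only `IStorageProvider`, swapping the browser implementation for the Electron file-system one requires no changes to service code.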
Sources: package.json 1-96, README.md 23-55, high-level architecture diagrams
Prompt Optimizer deploys to five distinct environments, each optimized for different use cases:
| Target | Entry Point | Build Command | Use Case |
|---|---|---|---|
| Web Application | packages/web/src/main.ts | pnpm build:web | Online access, easiest to try |
| Browser Extension | packages/extension/src/background.ts | pnpm build:ext | Persistent sidebar in browser |
| Desktop Application | packages/desktop/main.js | pnpm build:desktop | No CORS limitations, auto-updates |
| Docker Container | Dockerfile, docker-compose.yml | docker build | Self-hosted with authentication |
| MCP Server | packages/mcp-server/src/index.ts | pnpm mcp:build | Claude Desktop integration |
Deployment Architecture:
Platform-Specific Features:
- Web: BrowserStorageProvider with IndexedDB, subject to CORS limitations
- Desktop: FileStorageProvider with file system access, no CORS restrictions; IPC bridge implemented via packages/desktop/preload.js
- Docker: /mcp route proxied to the MCP server on port 3000, configurable via docker/nginx.conf
- MCP Server: exposes optimize-user-prompt, optimize-system-prompt, and iterate-prompt via the Model Context Protocol

Sources: package.json 11-43, README.md 73-246, vercel.json 1-32, docker-compose.yml 1-44, packages/desktop/package.json 1-98, packages/extension/public/manifest.json 1-34
| Technology | Purpose | Key Files |
|---|---|---|
| Vue 3 (3.5.13+) | UI framework with Composition API | All .vue files |
| Naive UI (2.42.0+) | Component library (forms, tables, modals) | packages/ui/src/components/ |
| Vite (6.0+) | Build tool and dev server | vite.config.ts files |
| TypeScript (5.8.2+) | Type safety | All .ts files |
| Vue I18n (10.0.6+) | Internationalization | packages/ui/src/locales/ |
| Markdown-it (14.1.0) | Markdown rendering | packages/ui/src/components/OutputDisplay.vue |
| Highlight.js (11.11.1) | Code syntax highlighting | Used with Markdown renderer |
| Technology | Purpose | Key Files |
|---|---|---|
| OpenAI SDK (4.83.0+) | OpenAI API client | packages/core/src/llm/adapters/openai.ts |
| Google GenAI (1.0.0+) | Gemini API client | packages/core/src/llm/adapters/gemini.ts |
| Anthropic SDK (0.65.0+) | Claude API client (MCP only) | packages/mcp-server/ |
| Dexie (4.0.11) | IndexedDB wrapper | packages/core/src/storage/ |
| Mustache (4.2.0) | Template engine | packages/core/src/template/processor.ts |
| Zod (3.22.4+) | Schema validation | Throughout core services |
| UUID (11.0.5) | Unique ID generation | Record chain IDs, model keys |
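Mustache's role in template processing can be illustrated with a tiny stand-in. The real processor at packages/core/src/template/processor.ts uses the mustache package (Mustache.render); this dependency-free sketch reimplements only simple `{{variable}}` substitution to show the idea.

```typescript
// Dependency-free stand-in for Mustache-style {{variable}} substitution,
// illustrating how optimization templates interpolate user input.
// The real code uses the mustache package's Mustache.render().
function renderTemplate(template: string, view: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_match, name: string) =>
    view[name] ?? ""
  );
}

// Hypothetical template text, for illustration only.
const template = "Optimize the following prompt: {{originalPrompt}}";
const rendered = renderTemplate(template, { originalPrompt: "write a poem" });
// rendered === "Optimize the following prompt: write a poem"
```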
| Platform | Technology | Purpose |
|---|---|---|
| Electron | electron (37.1.0), electron-builder (24.0.0) | Desktop application packaging |
| Electron | electron-updater (6.3.9) | Auto-update mechanism |
| Electron | undici (6.19.8) | HTTP client with proxy support |
| Docker | Nginx (alpine) | Reverse proxy and static file serving |
| Docker | Node.js (20-alpine) | MCP server runtime |
| MCP | @modelcontextprotocol/sdk (1.16.0) | MCP protocol implementation |
Sources: package.json 63-84, packages/core/package.json 32-42, packages/ui/package.json 31-39, packages/desktop/package.json 23-29
The project follows a pnpm monorepo structure with workspace packages:
Package Dependency Rules:
- @prompt-optimizer/core has zero dependencies on other workspace packages (foundational layer)
- @prompt-optimizer/ui depends only on core (presentation layer)
- Applications (web, extension, desktop, mcp-server) depend on core and optionally ui
- Dependency direction: core → ui → applications (enforced by package.json 12-19)

Key Directories:
| Path | Purpose | Key Files |
|---|---|---|
| packages/core/src/ | Core business logic | llm/, model/, template/, history/, favorite/ |
| packages/ui/src/ | UI components | components/, composables/, locales/ |
| packages/web/src/ | Web application | main.ts, App.vue |
| packages/extension/ | Extension files | manifest.json, background.ts |
| packages/desktop/ | Electron files | main.js, preload.js |
| packages/mcp-server/src/ | MCP implementation | index.ts, tools/ |
| docker/ | Docker configuration | Dockerfile, nginx.conf, generate-auth.sh |
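A pnpm monorepo of this shape is typically declared in a pnpm-workspace.yaml at the repository root. The snippet below is a representative sketch of such a file, not the project's actual contents:

```yaml
# Representative pnpm-workspace.yaml for the layout above (actual file may differ)
packages:
  - 'packages/*'
```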
Sources: package.json 1-96, directory structure from file listings
Decision: All AI API calls originate from the client with no intermediary backend server.
Rationale:
Tradeoffs:
Implementation: Direct provider SDK usage in packages/core/src/llm/adapters/
Decision: Core services implement TypeScript interfaces, injected into UI layer via composables.
Rationale:
Implementation:
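The injection pattern can be sketched in plain TypeScript. Names below are illustrative (IModelManager appears in this document, but the method shape is an assumption): the web build wires in a direct in-process implementation, while the Electron build would wire in an IPC proxy satisfying the same interface.

```typescript
// Sketch of the service injection pattern: UI code depends on an interface,
// and each platform supplies its own implementation. Method shapes are
// illustrative; the real interfaces live in @prompt-optimizer/core.
interface IModelManager {
  getModel(key: string): { name: string } | undefined;
}

// Web build: direct in-process implementation.
const directManager: IModelManager = {
  getModel: (key) => (key === "gpt-4o" ? { name: "GPT-4o" } : undefined),
};

// Electron build: a proxy that would forward each call over an IPC channel.
function createIpcProxy(
  send: (channel: string, arg: string) => unknown
): IModelManager {
  return {
    getModel: (key) => send("model:get", key) as { name: string } | undefined,
  };
}

// A composable receives whichever implementation the platform injects.
function useModelName(manager: IModelManager, key: string): string {
  return manager.getModel(key)?.name ?? "unknown";
}
```

Because `useModelName` sees only the interface, the same UI code runs unmodified in both environments.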
Decision: Each AI provider has an adapter implementing a common interface, registered in a central registry.
Rationale:
Implementation:
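A minimal sketch of this registry pattern follows. The interface members and method names are assumptions modeled on the pattern described here; the real common interface is TextAdapter in @prompt-optimizer/core.

```typescript
// Sketch of the adapter registry pattern: each provider adapter implements
// a common interface and registers under a provider id, enabling runtime
// provider selection without coupling callers to concrete classes.
interface TextAdapterSketch {
  providerId: string;
  complete(prompt: string): Promise<string>;
}

class TextAdapterRegistrySketch {
  private adapters = new Map<string, TextAdapterSketch>();

  register(adapter: TextAdapterSketch): void {
    this.adapters.set(adapter.providerId, adapter);
  }

  get(providerId: string): TextAdapterSketch {
    const adapter = this.adapters.get(providerId);
    if (!adapter) throw new Error(`No adapter registered for ${providerId}`);
    return adapter;
  }
}

// Runtime selection: callers look up by id, never by concrete class.
const registry = new TextAdapterRegistrySketch();
registry.register({ providerId: "openai", complete: async (p) => `openai:${p}` });
registry.register({ providerId: "gemini", complete: async (p) => `gemini:${p}` });
```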
- Common adapter interface (TextAdapter) implemented by each provider adapter

Decision: Each TextModelConfig embeds full provider and model metadata, not just IDs.
Rationale:
Tradeoffs:
Implementation: packages/core/src/model/types.ts (TextModelConfig structure)
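The self-contained shape can be sketched as below. Field names here are illustrative, not the exact TextModelConfig structure from packages/core/src/model/types.ts; the point is that provider and model metadata travel with the config rather than being looked up by ID.

```typescript
// Sketch of a self-contained model configuration as described above:
// provider and model metadata are embedded rather than referenced by ID,
// so a config remains usable even if a registry entry later changes.
// Field names are illustrative, not the exact TextModelConfig shape.
interface ProviderMeta {
  id: string;
  name: string;
  baseURL: string;
}

interface ModelMeta {
  id: string;
  contextWindow: number;
}

interface TextModelConfigSketch {
  key: string;            // user-visible config key
  provider: ProviderMeta; // embedded, not just a provider ID
  model: ModelMeta;       // embedded, not just a model ID
  apiKey: string;         // stored locally, never sent to a backend
}

const config: TextModelConfigSketch = {
  key: "my-openai",
  provider: { id: "openai", name: "OpenAI", baseURL: "https://api.openai.com/v1" },
  model: { id: "gpt-4o-mini", contextWindow: 128000 },
  apiKey: "sk-...",
};
```

The tradeoff is duplication: if a provider's metadata changes, every embedded copy must be migrated, which is the cost accepted for configs that stand alone.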
Decision: All LLM interactions use streaming APIs with callback-based token delivery.
Rationale:
Implementation:
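Callback-based streaming delivery can be sketched as follows. The handler shape is an assumption modeled on common streaming APIs, and the simulated token source stands in for a provider SDK stream; the real entry point is the sendMessageStream method mentioned in this document.

```typescript
// Sketch of callback-based streaming token delivery as described above.
// The StreamHandlers shape is an assumption for illustration.
interface StreamHandlers {
  onToken: (token: string) => void;     // fired per token as it arrives
  onComplete: (full: string) => void;   // fired once with the full text
  onError: (err: Error) => void;        // fired if the stream fails
}

// Simulated streaming source standing in for a provider SDK stream.
async function sendMessageStreamSketch(
  tokens: string[],
  handlers: StreamHandlers
): Promise<void> {
  let full = "";
  try {
    for (const token of tokens) {
      full += token;
      handlers.onToken(token); // UI appends each token immediately
    }
    handlers.onComplete(full);
  } catch (err) {
    handlers.onError(err as Error);
  }
}
```

The UI layer wires `onToken` to incremental rendering, so users see output as it is generated rather than waiting for the full completion.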
- LLMService (sendMessageStream method)
- PromptService (optimizePromptStream)

Decision: Single repository with pnpm workspaces, sharing core and ui packages across platforms.
Rationale:
Tradeoffs:
Implementation:
- Shared packages referenced via the workspace:* protocol in package.json files

Sources: README.md 1-435, README_EN.md 1-437, architecture analysis from diagrams, packages/core/src/, packages/ui/src/
To begin exploring the codebase:
For immediate hands-on experience, visit the live demo at https://prompt.always200.com or install the desktop application from GitHub Releases.
Sources: README.md 73-310, README_EN.md 73-313