Overview

This document provides a high-level introduction to Prompt Optimizer, covering its purpose, architecture, deployment options, and key design decisions. For detailed information about specific subsystems, see the dedicated pages listed under Getting Started at the end of this overview.


What is Prompt Optimizer

Prompt Optimizer is a multi-platform AI prompt engineering tool that helps users write better prompts for large language models (LLMs). The application takes a user's initial prompt and optimizes it using various templates and AI models, supporting iterative refinement through version-tracked prompt chains.

The system operates as a pure client-side application with no backend server—all data flows directly between the user's browser/desktop and AI service providers (OpenAI, Gemini, DeepSeek, etc.). This architecture ensures data privacy while eliminating the need for infrastructure maintenance.

Current Version: 2.1.0 (as of package.json:3)

Primary Use Cases:

  • Optimizing prompts for better AI responses
  • Role-playing character development with structured prompts
  • Knowledge extraction with format consistency
  • Creative writing with precise requirement specification
  • Function calling and tool integration testing

Sources: README.md:1-50, README_EN.md:1-50, package.json:1-10


Core Capabilities

| Capability | Description | Key Components |
|---|---|---|
| Prompt Optimization | Transform basic prompts into structured, effective prompts using AI | PromptService, TemplateManager, LLMService |
| Multi-Model Support | Connect to OpenAI, Gemini, DeepSeek, Zhipu, SiliconFlow, custom APIs | TextAdapterRegistry, provider adapters |
| Image Generation | Text-to-image (T2I) and image-to-image (I2I) capabilities | ImageService, ImageModelManager, ImageAdapterRegistry |
| History Tracking | Version-controlled prompt chains for iterative refinement | HistoryManager, PromptRecordChain |
| Advanced Testing | Context variables, multi-turn conversations, function calling | ContextRepo, advanced mode UI |
| Favorites Management | Save and categorize prompts with tags and hierarchical categories | FavoriteManager |
| Data Portability | Import/export all configurations and data | DataManager |
| Internationalization | Multi-language UI support (English, Simplified Chinese, Traditional Chinese) | Vue I18n system |

Sources: README.md:43-72, README_EN.md:43-71


System Architecture at a Glance

Monorepo Structure: five deployment targets share the common @prompt-optimizer/core and @prompt-optimizer/ui packages.

Three-Layer Architecture Pattern: UI components sit on top of injected core services, which in turn sit on storage and provider abstractions.

Key Architectural Principles:

  1. Service Injection Pattern: Core services implement interfaces (e.g., ILLMService, IModelManager) and are injected into UI composables, enabling platform-specific implementations (direct import in web, IPC proxies in Electron)

  2. Adapter Pattern: AI provider integration uses adapters (OpenAIAdapter, GeminiAdapter) registered in TextAdapterRegistry, allowing runtime provider selection without tight coupling

  3. Storage Abstraction: Storage operations use IStorageProvider interface with implementations for browser (BrowserStorageProvider using IndexedDB) and file system (FileStorageProvider for Electron)

  4. Pure Client-Side: No backend server—all AI API calls originate from the client, with API keys stored locally
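The storage abstraction in principle 3 can be sketched as follows. The names IStorageProvider and ModelManager appear in the docs above, but the method signatures and the in-memory stand-in are illustrative assumptions, not the actual core API:

```typescript
// Assumed shape of the storage interface; the real one lives in
// @prompt-optimizer/core and may differ.
interface IStorageProvider {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
}

// Stand-in implementation; BrowserStorageProvider would wrap IndexedDB
// and FileStorageProvider the Electron file system behind this interface.
class MemoryStorageProvider implements IStorageProvider {
  private store = new Map<string, string>();
  async getItem(key: string): Promise<string | null> {
    return this.store.get(key) ?? null;
  }
  async setItem(key: string, value: string): Promise<void> {
    this.store.set(key, value);
  }
}

// A service receives the provider via constructor injection, so each
// platform can supply its own implementation without changing the service.
class ModelManager {
  constructor(private storage: IStorageProvider) {}
  async saveModelKey(id: string, key: string): Promise<void> {
    await this.storage.setItem(`model:${id}`, key);
  }
  async loadModelKey(id: string): Promise<string | null> {
    return this.storage.getItem(`model:${id}`);
  }
}
```

Swapping MemoryStorageProvider for an IndexedDB- or file-backed implementation changes nothing in ModelManager, which is the point of the abstraction.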

Sources: package.json:1-96, README.md:23-55, high-level architecture diagrams


Deployment Targets

Prompt Optimizer deploys to five distinct environments, each optimized for different use cases:

| Target | Entry Point | Build Command | Use Case |
|---|---|---|---|
| Web Application | packages/web/src/main.ts | pnpm build:web | Online access, easiest to try |
| Browser Extension | packages/extension/src/background.ts | pnpm build:ext | Persistent sidebar in browser |
| Desktop Application | packages/desktop/main.js | pnpm build:desktop | No CORS limitations, auto-updates |
| Docker Container | Dockerfile, docker-compose.yml | docker build | Self-hosted with authentication |
| MCP Server | packages/mcp-server/src/index.ts | pnpm mcp:build | Claude Desktop integration |

Deployment Architecture:

Platform-Specific Features:

  • Web/Extension: Use BrowserStorageProvider with IndexedDB, subject to CORS limitations
  • Desktop: Use FileStorageProvider with file system access, no CORS restrictions, implements IPC bridge via packages/desktop/preload.js
  • Docker: Runs Nginx reverse proxy routing /mcp to MCP server on port 3000, configurable via docker/nginx.conf
  • MCP: Exposes three tools: optimize-user-prompt, optimize-system-prompt, iterate-prompt via Model Context Protocol

Sources: package.json:11-43, README.md:73-246, vercel.json:1-32, docker-compose.yml:1-44, packages/desktop/package.json:1-98, packages/extension/public/manifest.json:1-34


Technology Stack

Frontend Stack

| Technology | Purpose | Key Files |
|---|---|---|
| Vue 3 (3.5.13+) | UI framework with Composition API | All .vue files |
| Naive UI (2.42.0+) | Component library (forms, tables, modals) | packages/ui/src/components/ |
| Vite (6.0+) | Build tool and dev server | vite.config.ts files |
| TypeScript (5.8.2+) | Type safety | All .ts files |
| Vue I18n (10.0.6+) | Internationalization | packages/ui/src/locales/ |
| Markdown-it (14.1.0) | Markdown rendering | packages/ui/src/components/OutputDisplay.vue |
| Highlight.js (11.11.1) | Code syntax highlighting | Used with Markdown renderer |

Backend/Core Stack

| Technology | Purpose | Key Files |
|---|---|---|
| OpenAI SDK (4.83.0+) | OpenAI API client | packages/core/src/llm/adapters/openai.ts |
| Google GenAI (1.0.0+) | Gemini API client | packages/core/src/llm/adapters/gemini.ts |
| Anthropic SDK (0.65.0+) | Claude API client (MCP only) | packages/mcp-server/ |
| Dexie (4.0.11) | IndexedDB wrapper | packages/core/src/storage/ |
| Mustache (4.2.0) | Template engine | packages/core/src/template/processor.ts |
| Zod (3.22.4+) | Schema validation | Throughout core services |
| UUID (11.0.5) | Unique ID generation | Record chain IDs, model keys |

Platform-Specific Stack

| Platform | Technology | Purpose |
|---|---|---|
| Electron | electron (37.1.0), electron-builder (24.0.0) | Desktop application packaging |
| Electron | electron-updater (6.3.9) | Auto-update mechanism |
| Electron | undici (6.19.8) | HTTP client with proxy support |
| Docker | Nginx (alpine) | Reverse proxy and static file serving |
| Docker | Node.js (20-alpine) | MCP server runtime |
| MCP | @modelcontextprotocol/sdk (1.16.0) | MCP protocol implementation |

Sources: package.json:63-84, packages/core/package.json:32-42, packages/ui/package.json:31-39, packages/desktop/package.json:23-29


Project Structure

The project follows a pnpm monorepo structure with workspace packages:

Package Dependency Rules:

  1. @prompt-optimizer/core has zero dependencies on other workspace packages (foundational layer)
  2. @prompt-optimizer/ui depends only on core (presentation layer)
  3. Application packages (web, extension, desktop, mcp-server) depend on core and optionally ui
  4. Build order: core → ui → applications (enforced by package.json:12-19)

Key Directories:

| Path | Purpose | Key Files |
|---|---|---|
| packages/core/src/ | Core business logic | llm/, model/, template/, history/, favorite/ |
| packages/ui/src/ | UI components | components/, composables/, locales/ |
| packages/web/src/ | Web application | main.ts, App.vue |
| packages/extension/ | Extension files | manifest.json, background.ts |
| packages/desktop/ | Electron files | main.js, preload.js |
| packages/mcp-server/src/ | MCP implementation | index.ts, tools/ |
| docker/ | Docker configuration | Dockerfile, nginx.conf, generate-auth.sh |

Sources: package.json:1-96, directory structure from file listings


Key Design Decisions

1. Pure Client-Side Architecture

Decision: All AI API calls originate from the client with no intermediary backend server.

Rationale:

  • Eliminates infrastructure costs and maintenance
  • Ensures data privacy (no data passes through third-party servers)
  • Simplifies deployment (static site hosting)

Tradeoffs:

  • Subject to browser CORS limitations (mitigated by desktop app)
  • API keys stored client-side (acceptable for personal use)
  • Cannot implement server-side API key rotation

Implementation: Direct provider SDK usage in packages/core/src/llm/adapters/

2. Service Injection via Interfaces

Decision: Core services implement TypeScript interfaces, injected into UI layer via composables.

Rationale:

  • Enables platform-specific implementations (e.g., Electron IPC proxies)
  • Facilitates testing with mock implementations
  • Decouples UI from storage/network implementation details

Implementation:
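A minimal sketch of the injection pattern. ILLMService is named above; useLLM, DirectLLMService, IpcLLMService, and the sendMessage signature are illustrative assumptions, not the codebase's actual API:

```typescript
// Assumed minimal service interface for illustration.
interface ILLMService {
  sendMessage(model: string, prompt: string): Promise<string>;
}

// Web build: direct in-process implementation (would call a provider SDK).
class DirectLLMService implements ILLMService {
  async sendMessage(model: string, prompt: string): Promise<string> {
    return `[${model}] echo: ${prompt}`; // stand-in for a real API call
  }
}

// Electron build: same interface, but forwarding over IPC in the real app
// (e.g. via ipcRenderer.invoke behind the preload bridge).
class IpcLLMService implements ILLMService {
  async sendMessage(model: string, prompt: string): Promise<string> {
    return `[ipc:${model}] echo: ${prompt}`;
  }
}

// The UI-layer composable depends only on the interface, never on a
// concrete implementation, so both platforms reuse it unchanged.
function useLLM(service: ILLMService) {
  return {
    optimize: (prompt: string) => service.sendMessage("default", prompt),
  };
}
```

In the real app the concrete service would be created once at startup and provided to composables; tests can pass a mock that implements the same interface.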

3. Adapter Pattern for AI Providers

Decision: Each AI provider has an adapter implementing a common interface, registered in a central registry.

Rationale:

  • Allows adding providers without modifying core logic
  • Enables runtime provider selection based on model configuration
  • Encapsulates provider-specific API differences

Implementation:
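A hedged sketch of the registry. TextAdapterRegistry, OpenAIAdapter, and GeminiAdapter are named above, but the interface shape and method names here are assumptions for illustration:

```typescript
// Assumed common adapter interface; the real one wraps each provider's SDK.
interface ITextAdapter {
  readonly provider: string;
  complete(prompt: string): Promise<string>;
}

// Minimal stand-ins for the real adapters, which would call the
// OpenAI and Google GenAI SDKs respectively.
class OpenAIAdapter implements ITextAdapter {
  readonly provider = "openai";
  async complete(prompt: string): Promise<string> {
    return `openai:${prompt}`;
  }
}

class GeminiAdapter implements ITextAdapter {
  readonly provider = "gemini";
  async complete(prompt: string): Promise<string> {
    return `gemini:${prompt}`;
  }
}

// Central registry: core logic looks adapters up by provider name at
// runtime, so new providers register without modifying existing code.
class TextAdapterRegistry {
  private adapters = new Map<string, ITextAdapter>();
  register(adapter: ITextAdapter): void {
    this.adapters.set(adapter.provider, adapter);
  }
  get(provider: string): ITextAdapter {
    const adapter = this.adapters.get(provider);
    if (!adapter) throw new Error(`Unknown provider: ${provider}`);
    return adapter;
  }
}
```

Adding a new provider then amounts to implementing the interface and calling `register`, with no changes to the services that consume the registry.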

4. Self-Contained Model Configurations

Decision: Each TextModelConfig embeds full provider and model metadata, not just IDs.

Rationale:

  • Enables offline capability (no need to fetch model lists)
  • Preserves historical accuracy (config remembers model's capabilities even if provider changes)
  • Supports portability (configs can be exported/imported without external dependencies)

Tradeoffs:

  • Larger storage footprint
  • Potential metadata staleness (requires manual updates)

Implementation: packages/core/src/model/types.ts (TextModelConfig structure)
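A sketch of what a self-contained config might look like. TextModelConfig is the real type name; the individual fields below are assumptions inferred from the rationale, not the actual structure in packages/core/src/model/types.ts:

```typescript
// Illustrative shape only: the point is that provider and model metadata
// are embedded in full, not referenced by ID.
interface TextModelConfig {
  id: string;
  provider: {
    name: string;     // full provider metadata travels with the config
    baseUrl: string;  // so exports/imports need no external lookups
  };
  model: {
    name: string;
    contextWindow: number;      // capabilities are snapshotted, so history
    supportsStreaming: boolean; // stays accurate if the provider changes
  };
  apiKey?: string;
}

const config: TextModelConfig = {
  id: "my-gpt",
  provider: { name: "openai", baseUrl: "https://api.openai.com/v1" },
  model: { name: "gpt-4o", contextWindow: 128000, supportsStreaming: true },
};
```

Because everything needed to call the model is inside the object, the config works offline and survives export/import intact, at the cost of the staleness noted above.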

5. Streaming-First API Design

Decision: All LLM interactions use streaming APIs with callback-based token delivery.

Rationale:

  • Provides real-time feedback to users (perceived performance)
  • Supports reasoning models (separate reasoning token stream)
  • Aligns with modern LLM provider SDKs

Implementation:
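The callback-based delivery can be sketched as follows; the handler names and the simulated token source are illustrative assumptions, not the actual core API:

```typescript
// Assumed handler bundle passed alongside each streaming request.
interface StreamHandlers {
  onToken: (token: string) => void;
  onReasoningToken?: (token: string) => void; // separate stream for reasoning models
  onComplete: (full: string) => void;
  onError: (err: Error) => void;
}

// A provider adapter would pull tokens from an SDK stream; here the
// token source is simulated so the control flow is visible.
async function streamCompletion(
  prompt: string,
  handlers: StreamHandlers
): Promise<void> {
  try {
    const tokens = ["Optimized", " ", "prompt", " for: ", prompt];
    let full = "";
    for (const token of tokens) {
      full += token;
      handlers.onToken(token); // deliver each token as it arrives
    }
    handlers.onComplete(full); // final assembled text
  } catch (err) {
    handlers.onError(err as Error);
  }
}
```

The UI appends each `onToken` chunk immediately, which is what gives the perceived-performance benefit described above.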

6. Monorepo with Shared Packages

Decision: Single repository with pnpm workspaces, sharing core and ui packages across platforms.

Rationale:

  • Maximizes code reuse (estimated 80%+ shared code)
  • Ensures consistency across platforms
  • Simplifies dependency management
  • Enables atomic cross-package changes

Tradeoffs:

  • More complex build orchestration
  • Requires careful dependency management
  • Larger repository size

Implementation:
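A minimal sketch of the workspace wiring, assuming a standard pnpm setup (the exact file contents are illustrative):

```yaml
# pnpm-workspace.yaml — declares every directory under packages/ as a workspace member
packages:
  - 'packages/*'
```

With this in place, packages/ui can depend on core via `"@prompt-optimizer/core": "workspace:*"` in its package.json, and the root build scripts run the packages in the core → ui → applications order noted earlier.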

Sources: README.md:1-435, README_EN.md:1-437, architecture analysis from diagrams, packages/core/src/, packages/ui/src/


Getting Started

To begin exploring the codebase:

  1. Understand the architecture: Read Architecture Overview for service dependencies and design patterns
  2. Explore core services: Start with Core Services to understand business logic
  3. Learn the UI layer: Review User Interface for component structure
  4. Choose a platform: See Platform Applications for platform-specific implementations
  5. Set up development: Follow Development Workflow for local setup

For immediate hands-on experience, visit the live demo at https://prompt.always200.com or install the desktop application from GitHub Releases.

Sources: README.md:73-310, README_EN.md:73-313