Gianna
How to Build a Prompt-Friendly UI with React & TypeScript

Designing composable, maintainable, and developer-oriented interfaces for LLM apps

Introduction: Rethinking Interfaces for Prompt-Driven Workflows

Large language models (LLMs) introduce a fundamentally different interaction model. Instead of submitting static form data to deterministic APIs, users compose flexible, evolving instructions (prompts) to drive behavior. The UI is no longer just a form; it becomes a live environment for crafting, executing, and refining these prompts.

This shift introduces new engineering challenges:

  • Unstructured, contextual input: prompts resemble natural language, code, or hybrids
  • Probabilistic, variable outputs: repeated prompts yield different results
  • Exploratory iteration patterns: success depends on trying, comparing, adjusting

To support this, frontend engineers must rethink how they model state, compose components, and structure control flow. This article presents a practical architecture for building a scalable, maintainable, prompt-centric UI in React + TypeScript, with a focus on modular composition, layered state separation, clean component boundaries, and the UI treated as an execution surface.


System Breakdown: A Prompt-Centric Architecture

Consider the system as a composition of modular parts. A well-structured LLM interface supports experimentation, parameter tuning, and iterative refinement. To enable this, the UI should be decomposed into clear and focused modules:

```
[PromptTemplateForm] ↘
                      [PromptCompiler] → [LLM API Caller] → [ResponseRenderer]
[RawPromptEditor]    ↗        ↘                   ↘
                      [PromptHistoryManager]  [StreamingHandler]
```

Each module serves a distinct function:

  • PromptTemplateForm: Structured input controls that compile into a reusable prompt template.
  • RawPromptEditor: Freeform editor for power users and debugging.
  • PromptCompiler: Central utility that assembles runtime-ready prompt strings.
  • LLM API Caller: Handles API request lifecycle, including streaming and error management.
  • PromptHistoryManager: Tracks session history and supports prompt reuse.
  • ResponseRenderer: Displays model output with proper formatting and UX affordances.
  • StreamingHandler: Manages low-latency output rendering.

This modular breakdown enables separation of concerns, testability, and progressive enhancement.
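
As a concrete example, the PromptCompiler in this breakdown can start as a single pure function. The `{{variable}}` template syntax and the `compilePrompt` name are assumptions for illustration, not a prescribed API:

```typescript
// Minimal prompt compiler sketch. The {{variable}} syntax and the
// compilePrompt name are illustrative assumptions, not a fixed API.
export function compilePrompt(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => {
    const value = variables[key];
    if (value === undefined) {
      // Fail loudly so broken templates surface during development
      throw new Error(`Missing prompt variable: ${key}`);
    }
    return value;
  });
}

// compilePrompt("Write a {{tone}} article about {{topic}}.",
//               { tone: "friendly", topic: "React hooks" })
// → "Write a friendly article about React hooks."
```

Because it is pure, the compiler can be unit tested without rendering anything.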

To improve long-term maintainability and scalability, isolate presentational components (UI rendering only) from container components (stateful logic). For example:

```tsx
<PromptFormContainer>
  <PromptTemplateForm />
  <PromptPreview />
</PromptFormContainer>

<HistoryPanelContainer>
  <HistoryList />
  <OutputComparison />
</HistoryPanelContainer>
```

This allows business logic (e.g. retry, compile, fork) to live in container layers while UI layers remain reusable and declarative.
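
For example, the presentational half of the form can be nothing more than props in, JSX out. A sketch, with prop and field names assumed for illustration:

```tsx
// Purely presentational: no fetching, no store access, just props.
// Prop and field names are illustrative assumptions.
interface PromptTemplateFormProps {
  topic: string;
  tone: string;
  onTopicChange: (value: string) => void;
  onToneChange: (value: string) => void;
  onSubmit: () => void;
}

export function PromptTemplateForm({
  topic,
  tone,
  onTopicChange,
  onToneChange,
  onSubmit,
}: PromptTemplateFormProps) {
  return (
    <form
      onSubmit={(event) => {
        event.preventDefault();
        onSubmit();
      }}
    >
      <input
        value={topic}
        onChange={(event) => onTopicChange(event.target.value)}
        placeholder="Topic"
      />
      <select value={tone} onChange={(event) => onToneChange(event.target.value)}>
        <option value="friendly">Friendly</option>
        <option value="formal">Formal</option>
      </select>
      <button type="submit">Generate</button>
    </form>
  );
}
```

Because the component holds no state of its own, it can be rendered in tests or Storybook with plain props.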


Core Components & Hooks

This section details not just the technical roles of each component, but also how they collaborate within a real-world LLM interface, and why their design matters for clarity, testability, and iterative development. Below is a complete collaboration walkthrough:

⛓ Example: A User Types a New Prompt Using the UI

```
[PromptFormContainer] -- owns --> (topic, tone)
         |
         v
[PromptTemplateForm] -- controlled input --> [TextInput, Select]
         |
         v
[PromptFormContainer] -- compile --> [PromptCompiler]
         |
         v
         -- call --> [LLMApiCaller] -- fetch --> /api/llm
         |
         v
  stream response via ReadableStream
         |
         v
[PromptFormContainer] -- updates --> (responseText)
         |
         v
[ResponseRenderer] -- render --> formatted LLM output
         |
         v
[PromptFormContainer] -- save --> [usePromptHistory().add()]
         |
         v
[HistoryPanelContainer] -- render --> [HistoryList]
```
  1. PromptFormContainer owns the local state of form fields (e.g. topic, tone) and passes them to:
  2. PromptTemplateForm which renders <TextInput> and <Select> components. These are purely presentational.
  3. When the user submits, the container uses PromptCompiler to transform the filled-in variables into a complete string prompt.
  4. This prompt is passed to LLMApiCaller, which triggers a fetch request to your /api/llm endpoint. It also optionally sets up a streaming reader.
  5. As tokens arrive, PromptFormContainer pipes them into local state (e.g. responseText), which gets passed to:
  6. ResponseRenderer, a presentational component that renders the output using <Markdown />.
  7. When the generation completes, the container calls usePromptHistory().add() to store the prompt/output pair.
  8. The output now appears inside HistoryList, rendered by another container: HistoryPanelContainer.
  9. If the user clicks "retry" on a previous generation, retry(id) is called, and the whole flow replays from step 3 (see the condensed container sketch below).
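
Steps 3 through 8 condense into a container sketch like the one below. The imports refer to sketches elsewhere in this article (the hooks are fleshed out in later sections); paths and signatures are assumptions:

```tsx
import { useState } from "react";
// Assumed modules from the sketches in this article; paths are illustrative.
import { compilePrompt } from "./promptCompiler";
import { usePromptExecution, usePromptHistory } from "./hooks";
import { PromptTemplateForm } from "./PromptTemplateForm";
import { ResponseRenderer } from "./ResponseRenderer";

export function PromptFormContainer() {
  const [topic, setTopic] = useState("");
  const [tone, setTone] = useState("friendly");
  const { responseText, execute } = usePromptExecution(); // streams tokens into responseText
  const { add } = usePromptHistory();

  async function handleSubmit() {
    // Step 3: compile the filled-in variables into a runtime prompt string
    const prompt = compilePrompt(
      "Write a {{tone}} article about {{topic}}.",
      { topic, tone }
    );
    // Steps 4–6: call the endpoint; tokens stream into responseText as they arrive
    const output = await execute(prompt);
    // Step 7: persist the prompt/output pair for the history panel
    add({ prompt, output });
  }

  return (
    <>
      <PromptTemplateForm
        topic={topic}
        tone={tone}
        onTopicChange={setTopic}
        onToneChange={setTone}
        onSubmit={handleSubmit}
      />
      <ResponseRenderer text={responseText} />
    </>
  );
}
```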

This flow illustrates how each module contributes just enough logic:

  • Presentational components do no data fetching or state control
  • Hooks encapsulate shared logic (prompt execution, history management)
  • Containers coordinate data + orchestration

This layered separation enables the following:

  • Unit testing of stateless UI components in isolation
  • Reuse of prompt-related logic across templates and editors
  • Swappable output renderer implementations (streamed, animated, minimal)
  • Optional expansion toward multi-agent flows, auto-reply, or collaborative editing

Each layer does one job:

  • PromptTemplateForm → render dynamic prompt inputs
  • PromptCompiler → compile from schema
  • LLMApiCaller → abstract API transport
  • ResponseRenderer → format output
  • usePromptHistory() → model iteration

Understanding these interactions helps scale small prototypes into reliable AI interfaces — with full visibility and traceability.
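
To make the history layer concrete, here is a minimal usePromptHistory sketch. The PromptSession fields and the return shape are assumptions; in a real app this state would usually live in shared context or a store, as the next section discusses:

```typescript
import { useCallback, useState } from "react";

// Session shape mirrors the AppState.history entries in the next section;
// the exact fields are illustrative assumptions.
export interface PromptSession {
  id: string;
  prompt: string;
  output: string;
  createdAt: number;
}

export function usePromptHistory() {
  const [history, setHistory] = useState<PromptSession[]>([]);

  const add = useCallback((entry: { prompt: string; output: string }) => {
    setHistory((prev) => [
      ...prev,
      { id: crypto.randomUUID(), createdAt: Date.now(), ...entry },
    ]);
  }, []);

  const get = useCallback(
    (id: string) => history.find((session) => session.id === id),
    [history]
  );

  return { history, add, get };
}
```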


Application Architecture Layer

A well-architected prompt interface benefits greatly from a clear separation between global application state and UI rendering logic. This is especially important in LLM apps where the same data may affect multiple components at different layers.

🔧 State Management Strategy

Use a combination of React context and custom hooks, or a small store library like Zustand, to model the top-level state. A simple example:

```typescript
interface AppState {
  currentPrompt: string;
  variables: Record<string, string>;
  output: string;
  history: PromptSession[];
  selectedHistoryId?: string;
  isStreaming: boolean;
}
```
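
If you choose Zustand, the same shape maps directly onto a store. A minimal sketch, assuming the AppState interface above plus simple setter actions:

```typescript
import { create } from "zustand";
// AppState and PromptSession as declared above (assumed in scope or imported).

interface PromptStore extends AppState {
  setVariables: (variables: Record<string, string>) => void;
  setOutput: (output: string) => void;
  setStreaming: (isStreaming: boolean) => void;
}

export const usePromptStore = create<PromptStore>((set) => ({
  currentPrompt: "",
  variables: {},
  output: "",
  history: [],
  selectedHistoryId: undefined,
  isStreaming: false,
  setVariables: (variables) => set({ variables }),
  setOutput: (output) => set({ output }),
  setStreaming: (isStreaming) => set({ isStreaming }),
}));
```

Components then subscribe via selectors, e.g. `const output = usePromptStore((s) => s.output)`, so only the views that depend on a given slice re-render.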

Encapsulate updates in domain-specific hooks:

```typescript
usePromptExecution()  // handles LLMApiCaller + StreamingHandler
usePromptHistory()    // stores and retrieves past sessions
usePromptVariables()  // manages input field state
```
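
As one possible shape, usePromptExecution can own the entire request/stream lifecycle, including cancellation. This sketch assumes a POST /api/llm endpoint that streams plain-text tokens in the response body:

```typescript
import { useCallback, useRef, useState } from "react";

// Assumes a POST /api/llm endpoint that streams plain-text tokens.
export function usePromptExecution() {
  const [responseText, setResponseText] = useState("");
  const [isStreaming, setIsStreaming] = useState(false);
  const abortRef = useRef<AbortController | null>(null);

  const execute = useCallback(async (prompt: string): Promise<string> => {
    abortRef.current?.abort(); // cancel any in-flight request
    const controller = new AbortController();
    abortRef.current = controller;

    setResponseText("");
    setIsStreaming(true);
    try {
      const res = await fetch("/api/llm", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
        signal: controller.signal,
      });
      if (!res.ok || !res.body) {
        throw new Error(`LLM request failed: ${res.status}`);
      }
      // Incrementally decode the byte stream and push partial output into state
      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      let output = "";
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        output += decoder.decode(value, { stream: true });
        setResponseText(output);
      }
      return output;
    } finally {
      setIsStreaming(false);
    }
  }, []);

  const cancel = useCallback(() => abortRef.current?.abort(), []);

  return { responseText, isStreaming, execute, cancel };
}
```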

🧱 Component Role Alignment

  • PromptFormContainer reads from usePromptVariables, compiles prompt, calls execution hook
  • PromptTemplateForm renders inputs, receives value+onChange as props
  • PromptPreview reflects current compiledPrompt
  • ResponseRenderer reflects latest output
  • HistoryPanelContainer subscribes to usePromptHistory and dispatches retry / fork (see the sketch after this list)
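
A sketch of that container wiring, reusing the hooks above (the HistoryList props are assumed):

```tsx
// Assumed modules from earlier sketches; paths are illustrative.
import { usePromptExecution, usePromptHistory } from "./hooks";
import { HistoryList } from "./HistoryList"; // presentational, props-only

export function HistoryPanelContainer() {
  const { history, get } = usePromptHistory();
  const { execute } = usePromptExecution();

  // Replay a past prompt; the flow re-enters at the execution step
  async function retry(id: string) {
    const session = get(id);
    if (session) {
      await execute(session.prompt);
    }
  }

  return <HistoryList sessions={history} onRetry={retry} />;
}
```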

This separation ensures:

  • Consistent updates across containers
  • No prop-drilling or tight coupling between siblings
  • Hooks are testable and traceable in isolation (see the test sketch below)
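
As a quick illustration of that testability, a hook like usePromptHistory can be exercised with renderHook, with no DOM tree or API calls involved. Vitest is an assumed runner here; any Jest-compatible setup works the same way:

```typescript
import { renderHook, act } from "@testing-library/react";
import { describe, expect, it } from "vitest";
import { usePromptHistory } from "./hooks"; // path assumed, as above

describe("usePromptHistory", () => {
  it("stores prompt/output pairs", () => {
    const { result } = renderHook(() => usePromptHistory());

    act(() => {
      result.current.add({ prompt: "p", output: "o" });
    });

    expect(result.current.history).toHaveLength(1);
    expect(result.current.history[0].prompt).toBe("p");
  });
});
```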

🎯 Why It Matters

A single LLM interaction affects:

  • Input state (form variables)
  • Execution flow (stream/cancel)
  • Output view (response + error)
  • History trace (storage + versioning)

Without a clear data layer, coordinating these becomes error-prone and hard to reason about. With scoped state hooks and container/presenter separation, each module handles only the logic relevant to its role.

This enables confident refactoring, feature growth (e.g., adding preset templates or multi-agent threads), and team scalability.


Closing Thoughts

A well-designed prompt interface should function as a complete execution environment. It requires more than a form: it needs state models, clear separation of concerns, and feedback-aware architecture.

From controlled inputs to prompt compilation, API orchestration, streaming output, and session history, each layer must be deliberately modeled to remain inspectable and extendable.

Each component should reflect its intent:

  • Inputs should be inspectable.
  • Outputs should be traceable.
  • History should be restorable.
  • Interactions should be reversible.

This design mindset ensures engineers can build with clarity, test with confidence, and evolve features without losing control. Prompt interfaces deserve the same rigor as any other developer tool.
