This is the third post in a series documenting how we built a browser-based English learning app with a cost-friendly conversation system.
Overview: From Static Prompts to Dynamic Character Generation
After transcribing user speech, our system needs to generate contextually appropriate responses that feel natural and authentic. I started with hardcoded prompts for our MVP, which worked well initially. But since we planned to add more scenarios and wanted greater personality variation, I upgraded it to a database-driven system that could scale.
The complete character generation workflow involves:
- Scenario Analysis → Intelligent mapping from scenario titles to character types
- Character Selection → Pull the right character profile from the database
- Dynamic Prompt Building → Build prompts that include personality details
- Context Adaptation → Adjust responses based on timing (e.g., busy vs. relaxed moments)
- GPT-4o Integration → Generate replies that fit the chosen character
Total processing time: ~1-2s (varies by network conditions)
Technical Stack:
- Database: Supabase (PostgreSQL)
- Prompt Engineering: Dynamic multi-layer construction
- AI Model: GPT-4o with character-specific prompts
- Language: JavaScript with Next.js API routes
- Error Handling: JSON parsing safety and database fallbacks
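On the error-handling point, the guiding idea is that a bad row should never break a conversation. As a small illustration, the JSON parsing that appears throughout the character service could be factored into a helper like the one below; the actual code later in this post inlines the try/catch instead, so treat this as a sketch of the idea rather than our exact implementation.

```javascript
// Sketch of the JSON-parsing safety idea used throughout the character service.
// The real code inlines try/catch; this helper is just illustrative.
function safeJsonParse(raw, fallback) {
  try {
    return JSON.parse(raw);
  } catch (error) {
    console.error('JSON parse error, using fallback:', error);
    return fallback;
  }
}

// Example: phrases are stored as a JSON string in the canadian_expressions table
// const phrases = safeJsonParse(expr.phrases, []);
```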
From Prototype to Scale
The Starting Point
I began with a straightforward approach for our MVP:
```javascript
// Simple but effective for initial scenarios
if (scenarioTitle.includes('tim hortons')) {
  const prompt = "You are a friendly Tim Hortons employee. Be helpful and use Canadian expressions...";
} else if (scenarioTitle.includes('restaurant')) {
  const prompt = "You are a restaurant server. Be professional and friendly...";
}
```
This worked well for getting started, but as our content expanded, we needed more flexibility and personality variation.
The Database-Driven Solution: Core Implementation Details
Intelligent Scenario Mapping
We built a mapping system that automatically determines which character to use:
```javascript
const mapScenarioToCharacterKey = (scenarioTitle) => {
  const titleLower = scenarioTitle.toLowerCase();

  // Tim Hortons scenarios
  if (titleLower.includes('tim hortons') || titleLower.includes('coffee shop')) {
    return 'tim_hortons';
  }

  // Restaurant scenarios with sub-categories
  if (titleLower.includes('restaurant') || titleLower.includes('dining')) {
    if (titleLower.includes('fast') || titleLower.includes('mcdonald')) {
      return 'restaurant_fast';
    } else if (titleLower.includes('fine') || titleLower.includes('upscale')) {
      return 'restaurant_fine';
    } else {
      return 'restaurant_casual';
    }
  }

  return 'directions'; // Default fallback
};
```
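A few example calls based on the mapping above (the scenario titles other than the Tim Hortons one are made up for illustration):

```javascript
mapScenarioToCharacterKey("Ordering at Tim Hortons");        // "tim_hortons"
mapScenarioToCharacterKey("Fine Dining Reservation");        // "restaurant_fine"
mapScenarioToCharacterKey("Asking for Directions Downtown"); // "directions" (default fallback)
```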
Database Schema Design
Our character system uses four tables that work together:
```mermaid
erDiagram
    character_profiles {
        int id PK
        string scenario_key
        string character_name
        text base_prompt_template
        string correction_style
    }
    personality_traits {
        int id PK
        int character_id FK
        string trait_type
        string trait_value
    }
    canadian_expressions {
        int id PK
        int character_id FK
        string expression_category
        json phrases
    }
    response_patterns {
        int id PK
        int character_id FK
        string time_context
        string situation_type
        json response_templates
        int avg_sentence_count
        boolean includes_questions
    }
    character_profiles ||--o{ personality_traits : "defines"
    character_profiles ||--o{ canadian_expressions : "uses"
    character_profiles ||--o{ response_patterns : "follows"
```
Table Relationships:
- One character profile links to many personality traits
- Each character has its own Canadian expressions, organized by category
- Response patterns vary based on context (busy vs slow)
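Because every character detail is a row in these tables, adding a new character is a data change rather than a code change. Here's a minimal sketch of what seeding one might look like with the Supabase JS client; the "library" character and all of its values below are made up for illustration, and the shared client import is assumed.

```javascript
import { supabase } from './supabaseClient'; // assumed shared client setup

// Hypothetical example: seed a brand-new character without touching application code
async function seedLibraryClerk() {
  const { data: character, error } = await supabase
    .from('character_profiles')
    .insert({
      scenario_key: 'library',
      character_name: 'Public Library Clerk',
      base_prompt_template: 'You are a helpful clerk at a Canadian public library...',
      correction_style: 'gentle_rephrase' // illustrative value
    })
    .select()
    .single();
  if (error) throw error;

  // Related rows simply reference the new character's id
  await supabase.from('personality_traits').insert([
    { character_id: character.id, trait_type: 'formality_level', trait_value: 'polite' },
    { character_id: character.id, trait_type: 'energy_level', trait_value: 'calm' }
  ]);

  await supabase.from('canadian_expressions').insert({
    character_id: character.id,
    expression_category: 'greetings',
    phrases: JSON.stringify(['Hi there!', 'How can I help you today?'])
  });
}
```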
Character Generation Service
Here's our main character generation function (lib/characterService.js):
```javascript
export async function generateCharacterPrompt(scenarioKey) {
  try {
    // Get the basic character information
    const { data: character, error: characterError } = await supabase
      .from('character_profiles')
      .select('*')
      .eq('scenario_key', scenarioKey)
      .single();

    if (characterError) {
      console.error('Character not found:', characterError);
      return null;
    }

    // Get personality traits
    const { data: traits, error: traitsError } = await supabase
      .from('personality_traits')
      .select('*')
      .eq('character_id', character.id);

    if (traitsError) {
      console.error('Error fetching traits:', traitsError);
      return null;
    }

    // Get Canadian expressions
    const { data: expressions, error: expressionsError } = await supabase
      .from('canadian_expressions')
      .select('*')
      .eq('character_id', character.id);

    if (expressionsError) {
      console.error('Error fetching expressions:', expressionsError);
      return null;
    }

    // Get response patterns based on random time context
    const timeContext = getRandomTimeContext();
    const { data: patterns, error: patternsError } = await supabase
      .from('response_patterns')
      .select('*')
      .eq('character_id', character.id)
      .eq('time_context', timeContext);

    if (patternsError) {
      console.error('Error fetching patterns:', patternsError);
      return null;
    }

    // Convert arrays to objects for easier access
    const personalityMap = {};
    traits.forEach(trait => {
      personalityMap[trait.trait_type] = trait.trait_value;
    });

    const expressionsMap = {};
    expressions.forEach(expr => {
      try {
        expressionsMap[expr.expression_category] = JSON.parse(expr.phrases);
      } catch (error) {
        console.error('JSON parse error for expression:', expr.expression_category);
        expressionsMap[expr.expression_category] = [];
      }
    });

    const patternsMap = {};
    patterns.forEach(pattern => {
      try {
        patternsMap[pattern.situation_type] = {
          templates: JSON.parse(pattern.response_templates),
          avg_sentence_count: pattern.avg_sentence_count,
          includes_questions: pattern.includes_questions
        };
      } catch (error) {
        console.error('JSON parse error for pattern:', pattern.situation_type);
        patternsMap[pattern.situation_type] = {
          templates: [],
          avg_sentence_count: 2,
          includes_questions: false
        };
      }
    });

    // Build the complete system prompt
    const systemPrompt = buildSystemPrompt(character, personalityMap, expressionsMap, patternsMap, timeContext);

    return {
      role: character.character_name,
      systemPrompt,
      timeContext,
      characterData: {
        personality: personalityMap,
        expressions: expressionsMap,
        patterns: patternsMap,
        correctionStyle: character.correction_style
      }
    };
  } catch (error) {
    console.error('Error generating character prompt:', error);
    return null;
  }
}
```
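One note: the function above calls getRandomTimeContext(), which isn't shown in this post. A minimal sketch, assuming the only two time_context values are "busy" and "slow", might look like this:

```javascript
// Hypothetical helper: the real implementation isn't shown in this post.
// Assumes response_patterns.time_context only ever holds 'busy' or 'slow'.
function getRandomTimeContext() {
  return Math.random() < 0.5 ? 'busy' : 'slow';
}
```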
Dynamic Prompt Construction
Our prompt-building function combines all character data:
```javascript
function buildSystemPrompt(character, personality, expressions, patterns, timeContext) {
  const contextDescription = timeContext === 'busy'
    ? 'It is currently a busy time, so you respond more quickly and concisely.'
    : 'It is currently a slower time, so you can be more conversational and detailed.';

  // Extract expression examples for the prompt
  const greetingExamples = expressions.greetings ? expressions.greetings.join(', ') : '';
  const transitionExamples = expressions.transitions ? expressions.transitions.join(', ') : '';
  const confirmationExamples = expressions.confirmations ? expressions.confirmations.join(', ') : '';

  return `${character.base_prompt_template}

CURRENT CONTEXT: ${contextDescription}

PERSONALITY TRAITS:
- Formality: ${personality.formality_level || 'casual'}
- Energy: ${personality.energy_level || 'medium'}
- Chattiness: ${personality.chattiness || 'moderate'}
- Service Pace: ${personality.service_pace || 'normal'}

CANADIAN EXPRESSIONS TO USE:
- Greetings: ${greetingExamples}
- Transitions: ${transitionExamples}
- Confirmations: ${confirmationExamples}

RESPONSE GUIDELINES:
- Keep responses ${timeContext === 'busy' ? '1-2 sentences' : '2-4 sentences'} maximum
- Use natural Canadian expressions from the examples above
- Stay completely in character as a real ${character.character_name}
- If you need to correct English mistakes, use the "${character.correction_style}" style
- For corrections, respond naturally first, then gently clarify. Example: "Oh, you mean a large coffee? Sure thing!"
- NEVER comment on someone's English skills or act like a language tutor
- Just have a natural conversation as if you're really in this workplace

CORRECTION EXAMPLES:
- If they say "I want big coffee" → "Oh, you mean a large coffee? Coming right up!"
- If they say "Where is bathroom?" → "Oh, the washroom? It's just down the hall there."
- If they say something unclear → "Sorry, what was that?" or "Can you repeat that?"

Remember: You are a real ${character.character_name} during ${timeContext === 'busy' ? 'busy' : 'slow'} time. Act natural and authentic!`;
}
```
A Real Example: Tim Hortons Employee
User Scenario: "Ordering at Tim Hortons"
Step 1: Scenario Mapping
mapScenarioToCharacterKey("Ordering at Tim Hortons") // Returns: "tim_hortons"
Step 2: Database Query Results
```javascript
character = {
  id: 1,
  character_name: "Tim Hortons Employee",
  base_prompt_template: "You are a friendly Tim Hortons employee working the counter..."
}

personalityMap = {
  formality_level: "casual",
  energy_level: "high",
  chattiness: "moderate",
  service_pace: "fast"
}

expressionsMap = {
  greetings: ["Hey there!", "How's it going?", "Morning!"],
  confirmations: ["You bet!", "For sure!", "Absolutely!"],
  transitions: ["Alrighty", "Perfect", "Awesome"]
}

// Random timeContext = "busy"
patternsMap = {
  ordering: {
    templates: ["What can I get started for you?", "Next!"],
    avg_sentence_count: 1,
    includes_questions: true
  }
}
```
Step 3: Generated Prompt
```javascript
const systemPrompt = `You are a friendly Tim Hortons employee working the counter...

CURRENT CONTEXT: It is currently a busy time, so you respond more quickly and concisely.

PERSONALITY TRAITS:
- Formality: casual
- Energy: high
- Chattiness: moderate
- Service Pace: fast

CANADIAN EXPRESSIONS TO USE:
- Greetings: Hey there!, How's it going?, Morning!
- Confirmations: You bet!, For sure!, Absolutely!
- Transitions: Alrighty, Perfect, Awesome

RESPONSE GUIDELINES:
- Keep responses 1-2 sentences maximum
- Use natural Canadian expressions from the examples above
- Stay completely in character as a real Tim Hortons Employee`;
```
Step 4: GPT-4o Response
"Hey there! What can I get started for you today?"
Notice how the response incorporates:
- Casual greeting from the expressions database
- Concise format due to "busy" context
- High energy personality trait
- Natural Canadian friendliness
If the same scenario ran with timeContext = "slow", we might get:
"Morning! How's your day going so far? What can I get you today - maybe a nice double-double to start?"
Same character, different context, completely different response style.
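For completeness, here's roughly how the pieces fit together in a Next.js API route. The series doesn't show this route, so treat it as a sketch: the route path, request shape, import paths, and OpenAI client usage are assumptions; only generateCharacterPrompt and mapScenarioToCharacterKey come from the code above.

```javascript
// pages/api/respond.js — hypothetical route name; the real route isn't shown in this series.
import OpenAI from 'openai';
import { generateCharacterPrompt, mapScenarioToCharacterKey } from '../../lib/characterService';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export default async function handler(req, res) {
  const { scenarioTitle, transcript } = req.body; // assumed request shape

  // Map the scenario to a character and build its system prompt
  const characterKey = mapScenarioToCharacterKey(scenarioTitle);
  const character = await generateCharacterPrompt(characterKey);

  if (!character) {
    // Database fallback: keep the conversation going with a generic reply
    return res.status(200).json({ reply: "Sorry, what was that? Could you say it again?" });
  }

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: character.systemPrompt },
      { role: 'user', content: transcript }
    ]
  });

  res.status(200).json({ reply: completion.choices[0].message.content });
}
```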
Adding or changing characters is now easy because everything lives in the database; there's no need to touch the code. Each character has its own personality and can react differently depending on the situation, and even if something goes wrong, the conversation keeps going without breaking.
What's Next
In our next post, we'll explore how these generated text responses are converted back to speech using totally free browser-based text-to-speech capabilities, completing the full audio conversation loop.