Hey dev friends! Following up on Making OpenGraph Work, let's dive into a real performance optimization journey: how I brought gleam.so's OG image generation down from 2.5s to under 500ms.
Initial State: The Problem
When I first launched gleam.so, performance wasn't great:
Initial metrics:

- Average generation time: 2.5s
- P95 generation time: 4.2s
- Memory usage: ~250MB per image
- Cache hit rate: 35%
- Failed generations: 8%
Users were noticing:
"Preview takes too long to load"
"Sometimes images don't generate at all"
"System feels sluggish"
Measurement Setup
First, I set up proper monitoring:
```typescript
interface PerformanceMetrics {
  generation: {
    duration: number;   // Total time
    steps: {            // Step-by-step timing
      template: number;
      render: number;
      optimize: number;
      store: number;
    };
    memory: number;     // Memory usage
    success: boolean;   // Success/failure
  };
  cache: {
    hit: boolean;       // Cache hit/miss
    duration: number;   // Cache operation time
  };
}

// Monitoring implementation
const monitor = new PerformanceMonitor({
  metrics: ['generation', 'cache', 'memory'],
  interval: '1m',
  retention: '30d'
});
```
The Optimization Journey
1. Template Preprocessing
Before:
```typescript
// Parsing templates on every request
const renderTemplate = async (template, data) => {
  const parsed = await parseTemplate(template);
  return renderImage(parsed, data);
};
```
After:
```typescript
// Precompiled templates
const templateCache = new Map<string, CompiledTemplate>();

const renderTemplate = async (templateId, data) => {
  if (!templateCache.has(templateId)) {
    templateCache.set(
      templateId,
      await compileTemplate(templates[templateId])
    );
  }
  return renderImage(templateCache.get(templateId), data);
};

// Result:
// - 300ms saved per generation
// - 40% less memory usage
```
2. Multi-Layer Caching
```typescript
class OGImageCache {
  constructor() {
    this.memory = new QuickLRU({ maxSize: 100 });
    this.redis = new Redis(process.env.REDIS_URL);
    this.cdn = new CloudflareKV('og-images');
  }

  async get(key: string): Promise<Buffer | null> {
    // 1. Check memory cache
    const memoryCache = this.memory.get(key);
    if (memoryCache) return memoryCache;

    // 2. Check Redis
    const redisCache = await this.redis.get(key);
    if (redisCache) {
      this.memory.set(key, redisCache);
      return redisCache;
    }

    // 3. Check CDN
    const cdnCache = await this.cdn.get(key);
    if (cdnCache) {
      await this.warmCache(key, cdnCache);
      return cdnCache;
    }

    return null;
  }
}

// Result:
// - Cache hit rate: 35% → 85%
// - Average response time: 2.5s → 800ms
```
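The `warmCache` helper and the write path aren't shown above. Here's a minimal sketch of how they could look, assuming the same `memory`, `redis`, and `cdn` clients, an ioredis-style `set` with TTL, and a `put` method on the CloudflareKV wrapper (the TTL value and method names are illustrative, not the exact gleam.so code):

```typescript
// Minimal sketch: write to every layer, and backfill the faster
// layers when an image is only found in the CDN.
class OGImageCacheWrites {
  // ...same constructor and get() as above...

  async set(key: string, image: Buffer): Promise<void> {
    this.memory.set(key, image);
    await this.redis.set(key, image, 'EX', 60 * 60 * 24); // 24h TTL, illustrative
    await this.cdn.put(key, image); // assumes the KV wrapper exposes put()
  }

  // Backfill memory and Redis on a CDN hit so the next request is fast
  private async warmCache(key: string, image: Buffer): Promise<void> {
    this.memory.set(key, image);
    await this.redis.set(key, image, 'EX', 60 * 60 * 24);
  }
}
```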
3. Resource Optimization
```typescript
// Before: Loading fonts per request
const loadFonts = async () => {
  return Promise.all(
    fonts.map(font => fetch(font.url).then(res => res.arrayBuffer()))
  );
};

// After: Preloaded fonts
const FONTS = {
  inter: fs.readFileSync('./fonts/Inter.ttf'),
  roboto: fs.readFileSync('./fonts/Roboto.ttf')
};

// Result:
// - Font loading: 400ms → 0ms
// - Memory usage: -30%
```
4. Parallel Processing
```typescript
// Before: Sequential processing
const generateOG = async (template, data) => {
  const image = await render(template, data);
  const optimized = await optimize(image);
  const stored = await store(optimized);
  return stored;
};

// After: Parallel processing
const generateOG = async (template, data) => {
  const [image, resources] = await Promise.all([
    render(template, data),
    loadResources(template)
  ]);

  const [optimized, stored] = await Promise.all([
    optimize(image),
    prepareStorage()
  ]);

  return finalize(optimized, stored);
};

// Result:
// - 30% faster generation
// - Better resource utilization
```
Current Performance
After these optimizations:
Current metrics:

- Average generation time: 450ms (-82%)
- P95 generation time: 850ms (-80%)
- Memory usage: 90MB (-64%)
- Cache hit rate: 85% (up from 35%)
- Failed generations: 0.5% (down from 8%)
Key Learnings
1. Measurement is Critical
   - Set up monitoring first
   - Track detailed metrics
   - Make data-driven decisions
2. Cache Strategically
   - Multiple cache layers
   - Smart invalidation
   - Warm cache for popular items
3. Resource Management
   - Preload where possible
   - Optimize memory usage
   - Parallel processing
4. Error Handling
   - Graceful degradation
   - Detailed error tracking
   - Automatic recovery (see the retry sketch below)
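To make "automatic recovery" concrete, here's a minimal retry-with-backoff wrapper; the helper name, retry count, and delay values are illustrative rather than the exact code running in gleam.so:

```typescript
// Minimal sketch: retry a flaky step a few times with exponential
// backoff before letting the error propagate to the fallback path.
const withRetry = async <T>(
  fn: () => Promise<T>,
  retries = 2,
  delayMs = 200
): Promise<T> => {
  try {
    return await fn();
  } catch (error) {
    if (retries === 0) throw error;
    await new Promise(resolve => setTimeout(resolve, delayMs));
    return withRetry(fn, retries - 1, delayMs * 2);
  }
};

// Usage (hypothetical): retry the render step before giving up
// const image = await withRetry(() => render(template, data));
```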
Implementation Tips
- Start with Monitoring
```typescript
// Simple but effective monitoring
const track = metrics.track('og_generation', {
  duration: endTime - startTime,
  memory: process.memoryUsage().heapUsed,
  success: !error,
  cached: !!cacheHit
});
```
- Cache Wisely
```typescript
// Generate deterministic cache keys
const getCacheKey = (template, data) => {
  return crypto
    .createHash('sha256')
    .update(`${template.id}-${JSON.stringify(data)}`)
    .digest('hex');
};
```
- Handle Errors Gracefully
```typescript
// Always provide a fallback
const generateWithFallback = async (template, data) => {
  try {
    return await generateOG(template, data);
  } catch (error) {
    metrics.trackError(error);
    return generateFallback(template, data);
  }
};
```
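`generateFallback` is referenced above but not shown. A minimal sketch could just serve a pre-rendered static image per template; the `./fallbacks/` directory and the helper itself are hypothetical, not the actual gleam.so implementation:

```typescript
import fs from 'node:fs/promises';
import path from 'node:path';

// Hypothetical fallback: return a static, pre-rendered image so the
// page still gets a valid OG image even when live generation fails.
const generateFallback = async (template, data) => {
  const fallbackPath = path.join('./fallbacks', `${template.id}.png`);
  try {
    return await fs.readFile(fallbackPath);
  } catch {
    // Last resort: a single generic branded image
    return fs.readFile('./fallbacks/default.png');
  }
};
```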
Try It Yourself!
I've implemented all these optimizations in gleam.so, and for Black Friday, you can try the optimized system at 75% off! Generate blazing-fast OG images without worrying about performance.
Share Your Experience
- What performance challenges have you faced with OG images?
- Which optimization techniques worked for you?
- Any tips to share with the community?
Let's discuss in the comments!
*This is part of the "Making OpenGraph Work" series.*