Hey there! 👋 After exploring OpenGraph fundamentals in our Making OpenGraph Work series, let's dive into building a complete, production-ready OG image system. I'll share what I learned while building this for gleam.so.
System Overview 🗺️
First, let's look at what we're building:
```typescript
interface OGSystem {
  generator: {
    render: (template: Template, data: InputData) => Promise<Buffer>;
    optimize: (image: Buffer) => Promise<Buffer>;
  };
  cache: {
    get: (key: string) => Promise<Buffer | null>;
    set: (key: string, image: Buffer) => Promise<void>;
  };
  storage: {
    upload: (key: string, image: Buffer) => Promise<string>;
    getUrl: (key: string) => string;
  };
}
```
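To see how the pieces fit together, here's a rough per-request flow. This is only a sketch: the `getOGImage` wrapper is mine, and the `key` parameter is assumed to be a stable hash of the template and data.

```typescript
// Sketch of the per-request flow across generator, cache, and storage.
async function getOGImage(
  system: OGSystem,
  key: string, // assumed: a stable hash of template + data
  template: Template,
  data: InputData
): Promise<string> {
  // Serve a previously rendered image when one exists
  const cached = await system.cache.get(key);
  if (cached) {
    return system.storage.getUrl(key);
  }

  // Otherwise render, optimize, then persist for next time
  const raw = await system.generator.render(template, data);
  const optimized = await system.generator.optimize(raw);
  await system.cache.set(key, optimized);
  return system.storage.upload(key, optimized);
}
```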
Key Requirements:
- Fast generation (<500ms)
- Consistent rendering
- Error resilience
- Cache optimization
- Cost-effective scaling
Technical Implementation 🛠️
1. Core Generation Service
```typescript
// services/og-generator.ts
import { ImageResponse } from '@vercel/og';
import sharp from 'sharp';

export class OGGenerator {
  async render(template: Template, data: InputData): Promise<Buffer> {
    try {
      // 1. Prepare template
      const element = await this.prepareTemplate(template, data);

      // 2. Generate image
      const imageResponse = new ImageResponse(element, {
        width: template.width,
        height: template.height,
        // Performance optimizations
        emoji: 'twemoji',
        fonts: await this.loadFonts(),
      });

      // 3. Get buffer (ImageResponse is a Response, so read its body)
      return Buffer.from(await imageResponse.arrayBuffer());
    } catch (error) {
      console.error('OG Generation failed:', error);
      return this.generateFallback(template, data);
    }
  }

  async optimize(buffer: Buffer): Promise<Buffer> {
    return sharp(buffer)
      .jpeg({
        quality: 85,
        progressive: true,
        force: false,
      })
      .png({
        compressionLevel: 9,
        palette: true,
      })
      .toBuffer();
  }
}
```
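The `prepareTemplate`, `loadFonts`, and `generateFallback` helpers are elided above. Roughly, they might look like this. Treat it as a sketch: the font URL, the `template.render(data)` call, and the fallback layout are placeholders, not gleam.so's actual code.

```typescript
// Hypothetical helper sketches for OGGenerator
import { ImageResponse } from '@vercel/og';

// @vercel/og expects fonts as ArrayBuffers
async function loadFonts() {
  const inter = await fetch('https://fonts.example.com/Inter-Regular.ttf') // placeholder URL
    .then((res) => res.arrayBuffer());
  return [{ name: 'Inter', data: inter, weight: 400 as const, style: 'normal' as const }];
}

// Merge user-supplied data into the template's element tree
async function prepareTemplate(template: Template, data: InputData) {
  return template.render(data); // assumes templates expose a render(data) method
}

// A plain, known-good layout so a broken template never ships a blank card
async function generateFallback(template: Template, data: InputData): Promise<Buffer> {
  const fallback = new ImageResponse(
    { type: 'div', props: { style: { fontSize: 48 }, children: 'gleam.so' } },
    { width: template.width, height: template.height }
  );
  return Buffer.from(await fallback.arrayBuffer());
}
```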
2. Caching Layer
```typescript
// services/og-cache.ts
import { Redis } from 'ioredis';

export class OGCache {
  private ttl: number = 7 * 24 * 60 * 60; // 1 week

  constructor(private redis: Redis) {}

  async get(key: string): Promise<Buffer | null> {
    try {
      const cached = await this.redis.get(key);
      return cached ? Buffer.from(cached, 'base64') : null;
    } catch (error) {
      console.error('Cache fetch failed:', error);
      return null;
    }
  }

  async set(key: string, image: Buffer): Promise<void> {
    try {
      await this.redis.set(key, image.toString('base64'), 'EX', this.ttl);
    } catch (error) {
      console.error('Cache set failed:', error);
    }
  }
}
```
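Wiring the cache up once at module scope keeps connection counts down and gives the API route below its `cache` instance. A sketch, assuming a `REDIS_URL` environment variable:

```typescript
// services/cache-instance.ts — shared OGCache singleton (sketch)
import { Redis } from 'ioredis';
import { OGCache } from './og-cache';

// Reuse a single connection across requests instead of opening one per call
const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');
export const cache = new OGCache(redis);
```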
3. API Implementation
```typescript
// pages/api/og/[key].tsx
export const config = {
  runtime: 'edge',
};

export default async function handler(req: Request) {
  try {
    // 1. Parse request
    const { searchParams } = new URL(req.url);
    const template = searchParams.get('template');
    const data = JSON.parse(searchParams.get('data') || '{}');

    // 2. Generate cache key
    const cacheKey = generateCacheKey(template, data);

    // 3. Check cache
    const cached = await cache.get(cacheKey);
    if (cached) {
      return new Response(cached, {
        headers: getImageHeaders(cached),
      });
    }

    // 4. Generate new image
    const generator = new OGGenerator();
    const image = await generator.render(template, data);
    const optimized = await generator.optimize(image);

    // 5. Cache result
    await cache.set(cacheKey, optimized);

    return new Response(optimized, {
      headers: getImageHeaders(optimized),
    });
  } catch (error) {
    console.error('OG API failed:', error);
    return new Response(await generateFallback(), {
      status: 500,
    });
  }
}
```
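The route leans on two helpers that aren't shown: `generateCacheKey` and `getImageHeaders`. Here's one possible shape for them; a sketch, not the exact gleam.so implementation.

```typescript
// utils/og-helpers.ts — hypothetical helpers for the API route above

// Derive a stable key: Redis keys are binary-safe, so the serialized payload
// itself can be the key; sorting entries keeps logically equal requests together.
export function generateCacheKey(template: string | null, data: Record<string, unknown>): string {
  return `og:${template}:${JSON.stringify(Object.entries(data).sort())}`;
}

export function getImageHeaders(image: Buffer | ArrayBuffer): HeadersInit {
  return {
    'Content-Type': 'image/png',
    'Content-Length': String(image instanceof ArrayBuffer ? image.byteLength : image.length),
    // Let the CDN keep the image for a week and refresh stale copies in the background
    'Cache-Control': 'public, max-age=604800, stale-while-revalidate=86400',
  };
}
```

For very large payloads you'd probably hash the serialized data instead of embedding it in the key, but the idea is the same: the key must change whenever the template or its data changes.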
Performance Considerations ⚡
1. Caching Strategy
```typescript
interface CacheStrategy {
  layers: {
    edge: EdgeCache;       // Vercel Edge Cache
    application: Redis;    // Redis Cache
    cdn: CloudflareKV;     // CDN Cache
  };
  policies: {
    ttl: number;           // Cache duration
    stale: boolean;        // Serve stale content
    revalidate: boolean;   // Background refresh
  };
}
```
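At the edge and CDN layers, the `stale` and `revalidate` policies translate into a `stale-while-revalidate` Cache-Control header. A sketch based on the interface above:

```typescript
// Translate cache policies into a Cache-Control header value.
function toCacheControl(policies: CacheStrategy['policies']): string {
  const directives = ['public', `max-age=${policies.ttl}`];
  if (policies.stale && policies.revalidate) {
    // Serve the stale image immediately and refresh it in the background
    directives.push(`stale-while-revalidate=${policies.ttl}`);
  }
  return directives.join(', ');
}

// Example: toCacheControl({ ttl: 604800, stale: true, revalidate: true })
// → "public, max-age=604800, stale-while-revalidate=604800"
```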
2. Resource Optimization
```typescript
// services/resource-optimizer.ts
export class ResourceOptimizer {
  // Preload common fonts
  private fontLoader = new FontLoader([
    { name: 'Inter', weight: 400, style: 'normal' },
    { name: 'Inter', weight: 700, style: 'normal' },
  ]);

  // Optimize images
  private imageOptimizer = new ImageOptimizer({
    jpeg: { quality: 85 },
    png: { compressionLevel: 9 },
    webp: { quality: 85 },
  });

  // Memory management
  private memoryManager = new MemoryManager({
    maxSize: '1GB',
    cleanupInterval: '5m',
  });
}
```
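`FontLoader`, `ImageOptimizer`, and `MemoryManager` are thin wrappers here. As an example, a minimal `FontLoader` only needs to fetch each font once and hand `@vercel/og` the ArrayBuffers on every later render. A sketch; the asset URL pattern is a placeholder:

```typescript
// A minimal FontLoader sketch: fetch each font once, reuse the ArrayBuffer afterwards.
type FontSpec = { name: string; weight: 400 | 700; style: 'normal' | 'italic' };

class FontLoader {
  private cache = new Map<string, Promise<ArrayBuffer>>();

  constructor(private fonts: FontSpec[]) {}

  async load() {
    return Promise.all(
      this.fonts.map(async (font) => {
        const key = `${font.name}-${font.weight}-${font.style}`;
        if (!this.cache.has(key)) {
          const url = `https://assets.example.com/fonts/${font.name}-${font.weight}.ttf`; // placeholder
          // Cache the promise, not the result, so concurrent loads are deduped
          this.cache.set(key, fetch(url).then((res) => res.arrayBuffer()));
        }
        return { ...font, data: await this.cache.get(key)! };
      })
    );
  }
}
```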
Monitoring & Debugging 📊
1. Performance Monitoring
```typescript
// monitoring/performance.ts
export class OGMonitor {
  async trackGeneration(key: string, timing: Timing) {
    await this.metrics.record({
      name: 'og_generation',
      value: timing.duration,
      tags: {
        template: key,
        cache: timing.cached ? 'hit' : 'miss',
        error: timing.error ? 'true' : 'false',
      },
    });
  }
}
```
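In practice the monitor wraps each render call. Something like the sketch below; the `renderWithMetrics` wrapper is mine, and the `Timing` shape `{ duration, cached, error }` is inferred from the usage above.

```typescript
// Sketch: wrap a render call and record its timing.
// Cache hits would be recorded on the cache-hit path in the API route instead.
async function renderWithMetrics(
  monitor: OGMonitor,
  generator: OGGenerator,
  templateName: string,
  template: Template,
  data: InputData
): Promise<Buffer> {
  const start = Date.now();
  try {
    const image = await generator.render(template, data);
    await monitor.trackGeneration(templateName, {
      duration: Date.now() - start,
      cached: false,
      error: false,
    });
    return image;
  } catch (err) {
    await monitor.trackGeneration(templateName, {
      duration: Date.now() - start,
      cached: false,
      error: true,
    });
    throw err;
  }
}
```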
2. Error Tracking
```typescript
// monitoring/errors.ts
export class ErrorTracker {
  async captureError(error: Error, context: Context) {
    // 1. Log error
    console.error('OG Error:', {
      error,
      context,
      stack: error.stack,
    });

    // 2. Track in monitoring
    await this.monitor.trackError({
      type: 'og_generation_error',
      error,
      context,
    });

    // 3. Alert if critical
    if (this.isCritical(error)) {
      await this.alertTeam(error);
    }
  }
}
```
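The `isCritical` check is what keeps alert noise down. One simple heuristic, as a sketch; the patterns are examples, not gleam.so's actual rules:

```typescript
// Sketch: only page someone for infrastructure-wide failures.
// Template-level bugs already degrade gracefully to the fallback image.
function isCritical(error: Error): boolean {
  const criticalPatterns = [/ECONNREFUSED/, /out of memory/i, /quota exceeded/i];
  return criticalPatterns.some((pattern) => pattern.test(error.message));
}
```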
Future Improvements 🚀
- Advanced Caching:
  - Predictive pre-generation
  - Smart cache invalidation
  - Regional edge caching
- Performance:
  - WebAssembly optimization
  - Worker thread pooling
  - Resource preloading
- Features:
  - A/B testing support
  - Analytics integration
  - Custom font handling
Try It Yourself! 🎯
I've implemented all these principles in gleam.so. And for Black Friday:
🔥 Current deal: 75% OFF all paid plans
Perfect timing to grab your favorite designs 🎨
Want to see the system in action? Drop a comment with your use case, and I'll help you implement it!
This post is part of the "Making OpenGraph Work" series; check out the earlier posts for more OG image insights!