One rainy weekend I decided to refresh my microservices skills by building a small eCommerce platform from scratch. I wanted a playground that was close enough to real work to show the classic problems—clear boundaries, steady APIs, reliable deployments—without growing into a long project. This article is my field journal from that sprint: what I built, why I made certain choices, and how the code in this repo supports every decision.
Architecture at a Glance
Saturday morning started with a blank page and four simple boxes. I knew the weekend would stay calm only if every box owned one clear job and followed the same rules. The result is a Node.js monorepo with four deployable workspaces that live together but stay independent:
- User Service handles registration, login, and profile lookups so the rest of the stack never has to guess who is calling.
- Product Service manages the catalog and keeps price data clean.
- Order Service turns carts into history by connecting users and products.
- API Gateway sits on the edge and hides the backend layout from clients.
Each service gets its own Postgres database and REST API. To avoid copying the same setup again and again, every service depends on `@mini/shared` for logging, HTTP helpers, error classes, and configuration tools. From there the workflow stays simple on purpose: `npm run compose:up` brings the stack online with this Compose file driving the topology:
```yaml
# docker-compose.yml
services:
  user-service:
    command: npm run dev --workspace services/user
    ports:
      - "3001:3001"
    depends_on: [user-db]
  product-service:
    command: npm run dev --workspace services/product
    ports:
      - "3002:3002"
    depends_on: [product-db]
  order-service:
    command: npm run dev --workspace services/order
    ports:
      - "3003:3003"
    depends_on: [order-db, user-service, product-service]
  api-gateway:
    command: npm run dev --workspace gateway
    ports:
      - "8080:8080"
    depends_on: [user-service, product-service, order-service]
volumes:
  user-db-data:
  product-db-data:
  order-db-data:
```
The manifests in `k8s/` reproduce the same shape inside a Kubernetes cluster when I want to push things a little harder.
Shared Platform Capabilities
By midday I noticed the same pattern, service after service. Each one wanted identical Express plumbing, the same error classes, and the same `.env` routine. Rather than repeat myself, I moved those cross-cutting pieces into `@mini/shared` so the rest of the weekend could focus on business rules instead of setup.
The shared HTTP helper keeps every edge consistent by centralising the Express setup, wiring in JSON parsing, health checks, and error handling so every service exposes the same behaviour:
```js
// shared/src/http.js
function createApp({ serviceName, logger, routes }) {
  if (!serviceName) throw new Error('serviceName is required');

  const app = express();
  app.disable('x-powered-by');
  app.use(express.json());

  app.get('/healthz', (_req, res) => {
    res.json({ service: serviceName, status: 'ok', uptime: process.uptime() });
  });

  if (typeof routes === 'function') {
    routes(app);
  }

  app.use((_req, _res, next) => next(new NotFoundError()));

  app.use((err, req, res, _next) => {
    const error = err instanceof AppError ? err : new AppError('Internal Server Error');
    logger?.error?.('request failed', { code: error.code, status: error.status, id: req.id });
    res.status(error.status).json({ error: { code: error.code, message: error.message } });
  });

  return app;
}
```
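To show how a service consumes this helper, here is a minimal sketch of an entry point built on the shared pieces. The export shape of `@mini/shared`, the example route, and the port are my assumptions for illustration, not code lifted from the repo:

```js
// Hypothetical sketch of a service entry point; the @mini/shared export
// shape, the route, and the port are assumptions for illustration.
const { createApp, createLogger } = require('@mini/shared');

const logger = createLogger('user-service');

const app = createApp({
  serviceName: 'user-service',
  logger,
  routes: (app) => {
    // Each service registers its own routers here; handlers that throw
    // AppError subclasses fall through to the shared error middleware.
    app.get('/ping', (_req, res) => res.json({ pong: true }));
  },
});

app.listen(3001, () => logger.info('listening', { port: 3001 }));
```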
Error classes stay in one place, so every service can throw meaningful responses and map domain problems to HTTP status codes without duplicating boilerplate:
```js
// shared/src/errors.js
class ValidationError extends AppError {
  constructor(message = 'Validation failed', details) {
    super(message, { status: 400, code: 'validation_error', details });
  }
}

class UnauthorizedError extends AppError {
  constructor(message = 'Unauthorized') {
    super(message, { status: 401, code: 'unauthorized' });
  }
}
```
Configuration loading is just as centralised, which means each service validates its environment variables before it starts and applies optional parsers or defaults in one predictable location:
```js
// shared/src/env.js
function getConfig(schema) {
  return Object.entries(schema).reduce((acc, [key, options]) => {
    let value = process.env[key];
    const required = !!options?.required;
    const fallback = options?.default;
    const parser = options?.parser;

    if ((value === undefined || value === '') && fallback !== undefined) {
      value = typeof fallback === 'function' ? fallback() : fallback;
    }
    if ((value === undefined || value === '') && required) {
      throw new Error(`Missing required environment variable ${key}`);
    }

    acc[key] = typeof parser === 'function' && value !== undefined ? parser(value) : value;
    return acc;
  }, {});
}
```
Lastly, the shared logger stamps every log line with the service name, which makes cross-service debugging feel like reading a conversation instead of a jumble of anonymous messages:
```js
// shared/src/logger.js
function createLogger(serviceName) {
  const prefix = serviceName ? `[${serviceName}]` : '[app]';
  const base = { info: console.log, error: console.error, warn: console.warn };
  return {
    info: (msg, meta) => base.info(prefix, msg, meta || ''),
    warn: (msg, meta) => base.warn(prefix, msg, meta || ''),
    error: (msg, meta) => base.error(prefix, msg, meta || ''),
  };
}
```
After that refactor each service file felt lighter. The interesting code stayed in front, and new features no longer meant reworking the foundations.
Service Deep Dive
User Service: Reestablishing Identity Basics
The first feature I added was identity. Past projects taught me that most bugs look like security bugs when the caller is unknown, so `registerUser` hashes the password, saves it, and issues a JWT in one short flow:
```js
// services/user/src/service.js
async function registerUser({ username, password }) {
  if (!username || !password) {
    throw new ValidationError('username and password are required');
  }
  const existing = await findByUsername(username);
  if (existing) {
    throw new ValidationError('username already taken');
  }
  const passwordHash = await hashPassword(password);
  const user = await createUser({ username, passwordHash });
  const token = issueToken({ sub: user.id, username: user.username, role: user.role });
  return { user, token };
}
```
Startup logic seeds an admin account from environment variables because I have locked myself out of dashboards before; the database initializer keeps that safety net in place by creating the table and populating the admin row the moment the service boots:
```js
// services/user/src/db.js
async function initDb(customPool = getPool()) {
  await customPool.query(`
    CREATE TABLE IF NOT EXISTS users (
      id TEXT PRIMARY KEY,
      username TEXT UNIQUE NOT NULL,
      password_hash TEXT NOT NULL,
      role TEXT NOT NULL DEFAULT 'user'
    );
  `);

  const { rows } = await customPool.query(
    'SELECT id FROM users WHERE username = $1 LIMIT 1',
    [config.ADMIN_USERNAME],
  );

  if (rows.length === 0 && config.ADMIN_PASSWORD) {
    const passwordHash = await hashPassword(config.ADMIN_PASSWORD);
    await customPool.query(
      'INSERT INTO users (id, username, password_hash, role) VALUES ($1, $2, $3, $4)',
      [crypto.randomUUID(), config.ADMIN_USERNAME, passwordHash, 'admin'],
    );
  }
}
```
Authentication sits in a small middleware that checks Bearer tokens and attaches the decoded data to the request. The cryptography helpers stay in their own module so the rest of the code can trust `req.user` without drama, and so future changes to signing logic happen in one place:
```js
// services/user/src/auth-middleware.js
function authRequired(req, _res, next) {
  const header = req.headers.authorization || '';
  const [, token] = header.split(' ');
  if (!token) {
    return next(new UnauthorizedError('Missing bearer token'));
  }
  try {
    const payload = verifyToken(token);
    req.user = { id: payload.sub, username: payload.username, role: payload.role };
    return next();
  } catch (error) {
    return next(new UnauthorizedError('Invalid token'));
  }
}
```
```js
// services/user/src/security.js
function issueToken(payload) {
  return jwt.sign(payload, config.JWT_SECRET, { expiresIn: '1h' });
}

function verifyToken(token) {
  return jwt.verify(token, config.JWT_SECRET);
}
```
Product Service: Guarding the Catalog
With identity stable, I moved to the catalog. Public routes need to be friendly but safe, so they validate pagination settings before running a query to avoid accidental full-table scans or wasteful database calls:
```js
// services/product/src/service.js
async function fetchProducts(query = {}) {
  if (query.limit !== undefined && isNaN(Number(query.limit))) {
    throw new ValidationError('limit must be numeric');
  }
  if (query.offset !== undefined && isNaN(Number(query.offset))) {
    throw new ValidationError('offset must be numeric');
  }
  return listProducts({ limit: query.limit, offset: query.offset });
}
```
Admin routes are stricter: the price parser stops invalid or negative numbers before they reach the database, and the admin middleware keeps write actions behind a trusted role so change control stays tight:
```js
// services/product/src/service.js
function parsePrice(price) {
  if (price === undefined) return undefined;
  const value = Number(price);
  if (Number.isNaN(value) || value < 0) {
    throw new ValidationError('price must be a non-negative number');
  }
  return Math.round(value * 100) / 100;
}

async function createProductRecord({ name, description, price }) {
  if (!name || !description) {
    throw new ValidationError('name and description are required');
  }
  const parsedPrice = parsePrice(price);
  if (parsedPrice === undefined) {
    throw new ValidationError('price is required');
  }
  return createProduct({
    id: randomUUID(),
    name,
    description,
    price: parsedPrice,
  });
}
```
```js
// services/product/src/admin-middleware.js
function adminOnly(req, _res, next) {
  if (!req.user) {
    return next(new UnauthorizedError('Auth required'));
  }
  if (req.user.role !== 'admin') {
    return next(new UnauthorizedError('Admin access required'));
  }
  next();
}
```
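To make the guard order concrete, here is a hypothetical wiring of these middlewares into the product routes. The paths, the router file, and the assumption that the product service keeps its own `authRequired` middleware are mine, not lifted from the repo:

```js
// Hypothetical route wiring for the product service; paths and file layout
// are assumptions for illustration.
const express = require('express');
const router = express.Router();

// Public catalog reads: pagination is validated inside fetchProducts.
router.get('/products', async (req, res, next) => {
  try {
    res.json({ products: await fetchProducts(req.query) });
  } catch (err) {
    next(err);
  }
});

// Writes pass through both guards: a valid JWT first, then the admin role check.
router.post('/products', authRequired, adminOnly, async (req, res, next) => {
  try {
    res.status(201).json({ product: await createProductRecord(req.body) });
  } catch (err) {
    next(err);
  }
});
```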
Each product receives a UUID at creation time and is stored in Postgres. That small step keeps tracking clear and makes later integrations easier if this prototype grows into something larger, since every product ID stays unique across environments and migrations.
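The product table itself isn't shown above, but a plausible initializer would mirror the user service pattern; the column names here are inferred from `createProductRecord` and are assumptions rather than the repo's actual schema:

```js
// Hypothetical sketch of the product table initializer, mirroring the
// user-service pattern shown earlier; the real services/product/src/db.js
// may differ.
async function initDb(pool = getPool()) {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS products (
      id TEXT PRIMARY KEY,          -- UUID assigned at creation time
      name TEXT NOT NULL,
      description TEXT NOT NULL,
      price NUMERIC(10, 2) NOT NULL -- already rounded to cents by parsePrice
    );
  `);
}
```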
Order Service: Cross-Service Collaboration
Orders were the most satisfying part because they make the services work together and force the boundaries to prove themselves. The handlers check that both `userId` and `productId` exist, validate pagination options on list requests, and call the product service to confirm the item is still available before recording anything:
```js
// services/order/src/service.js
async function recordOrder({ userId, productId }) {
  if (!userId || !productId) {
    throw new ValidationError('userId and productId are required');
  }
  const product = await fetchProduct(productId);
  if (!product) {
    throw new ValidationError('product not found');
  }
  return createOrder({ id: randomUUID(), userId, productId });
}
```
That remote call lives in a small client that normalizes URLs, treats 404s as “not found,” and wraps other errors in a validation message so downstream consumers receive clean, human-readable results:
```js
// services/order/src/clients/product-client.js
async function fetchProduct(productId) {
  const base = config.PRODUCT_SERVICE_URL.endsWith('/')
    ? config.PRODUCT_SERVICE_URL.slice(0, -1)
    : config.PRODUCT_SERVICE_URL;
  const res = await fetch(`${base}/products/${productId}`);
  if (res.status === 404) {
    return null;
  }
  if (!res.ok) {
    throw new ValidationError('product lookup failed');
  }
  const body = await res.json();
  return body.product;
}
```
The repository stays lean by saving only foreign keys. If the catalog changes later, the order history still reads well, and the service can rebuild richer views by fetching user and product details when needed, which keeps the storage footprint small and the coupling loose:
```js
// services/order/src/repository.js
async function createOrder({ id, userId, productId }, pool = getPool()) {
  await pool.query(
    'INSERT INTO orders (id, user_id, product_id) VALUES ($1, $2, $3)',
    [id, userId, productId],
  );
  return mapOrder({ id, user_id: userId, product_id: productId, created_at: new Date() });
}
```
API Gateway and Service-to-Service Communication
From the start I wanted one door for clients. The gateway connects everything, and the `proxyTo` helper does the heavy lifting: it takes an incoming request, rebuilds the destination URL, strips the original Host header, and relays the downstream status and body back to the client:
```js
// gateway/src/index.js
function proxyTo(baseUrl) {
  const normalizedBase = baseUrl.endsWith('/') ? baseUrl.slice(0, -1) : baseUrl;
  return async (req, res, next) => {
    try {
      const targetUrl = new URL(req.originalUrl, `${normalizedBase}/`).toString();
      const headers = { ...req.headers };
      delete headers.host;

      const init = {
        method: req.method,
        headers,
      };
      if (req.method !== 'GET' && req.method !== 'HEAD') {
        init.body = req.body ? JSON.stringify(req.body) : undefined;
        init.headers['content-type'] = 'application/json';
      }

      const response = await fetch(targetUrl, init);
      const text = await response.text();
      res.status(response.status);
      try {
        const parsed = JSON.parse(text || '{}');
        res.json(parsed);
      } catch (_err) {
        res.send(text);
      }
    } catch (error) {
      next(error);
    }
  };
}
```
The routes mount each downstream service under a clean prefix, which keeps the public API steady even if I move services around inside the cluster and makes documentation easier for anyone consuming the gateway:
```js
// gateway/src/index.js
router.use('/users', proxyTo(serviceConfig.userServiceUrl));
router.use('/products', proxyTo(serviceConfig.productServiceUrl));
router.use('/orders', proxyTo(serviceConfig.orderServiceUrl));
```
Inside the system, the order service calls the product service through the same HTTP endpoints. The approach is intentionally simple because it matches what many teams already run. Right now those calls trust the network and do not add extra authentication, so improving that handshake is near the top of my hardening list. When I explore rate limiting or service discovery, the gateway will be the natural place to add them.
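As a sketch of the hardening step I have in mind, the order service could forward the caller's bearer token so the product service can apply the same JWT checks to internal traffic. This is an assumption about a future change, not how the repo behaves today:

```js
// Hypothetical hardening sketch: forward the caller's bearer token on
// internal calls. Not how the repo works today; fetchProduct currently
// sends no auth. Assumes the same config and ValidationError imports as
// the existing product client.
async function fetchProductAs(productId, bearerToken) {
  const base = config.PRODUCT_SERVICE_URL.replace(/\/$/, '');
  const res = await fetch(`${base}/products/${productId}`, {
    headers: bearerToken ? { authorization: `Bearer ${bearerToken}` } : {},
  });
  if (res.status === 404) return null;
  if (!res.ok) throw new ValidationError('product lookup failed');
  const body = await res.json();
  return body.product;
}
```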
Configuration, Security, and Secrets Management
One personal rule for the project was simple: avoid “works on my machine” bugs. Every service reads configuration through `env.getConfig`, which applies defaults, checks required values, and handles small type conversions before the app even starts:
```js
// services/product/src/config.js
env.loadEnv({ files: [path.join(__dirname, '..', '.env')] });

const config = env.getConfig({
  PORT: { default: 3002, parser: Number },
  DATABASE_URL: { default: 'postgres://product_service:password@localhost:5434/product_db' },
  JWT_SECRET: { default: 'devsecret', required: true },
});
```
When the stack runs in Kubernetes, the JWT secret comes from a cluster secret instead of shipping inside the image, which means new secrets can be rotated without rebuilding containers:
```yaml
# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: jwt-secret
  namespace: mini-ecommerce
stringData:
  value: devsecret
```
The user service issues tokens with that secret, the other services verify them locally, and role checks—like the admin filter in the product service—use the decoded payload to make decisions.
Local Development Workflow
Weekend hacking works only if the feedback loop stays short, so Docker Compose became the main control room:
- Install dependencies once with `npm install` so every workspace shares the same `node_modules` tree.
- Run `npm run compose:up` to launch the three services, the gateway, and their Postgres companions (using the Compose file shown above) and let Docker wire the local network for you.
- Send every request through `http://localhost:8080` so the gateway path stays well traveled and the API surface mirrors production traffic (a short end-to-end sketch follows this list).
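To make that last point concrete, here is a hypothetical smoke test that walks the whole flow through the gateway with Node's built-in fetch. The sub-path `/users/register`, the response shapes, and whether the order route requires auth are all assumptions; adjust them to whatever the services actually expose:

```js
// smoke-test.js — illustrative walkthrough against the gateway.
// Sub-paths and response shapes are assumptions, not taken from the repo.
const GATEWAY = 'http://localhost:8080';

async function main() {
  // Register a user and grab the JWT the user service issues.
  const registerRes = await fetch(`${GATEWAY}/users/register`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ username: 'demo', password: 'demo-password' }),
  });
  const { token, user } = await registerRes.json();

  // Browse the catalog through the same front door.
  const productsRes = await fetch(`${GATEWAY}/products?limit=5`);
  const { products } = await productsRes.json();

  // Place an order for the first product, if one exists.
  if (products?.length) {
    const orderRes = await fetch(`${GATEWAY}/orders`, {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ userId: user.id, productId: products[0].id }),
    });
    console.log(await orderRes.json());
  }
}

main().catch(console.error);
```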
Right now the services run as plain `node` processes, so I still restart them by hand when code changes. Hot reloaders are on the to-do list, but even without them the shared package keeps logs and errors consistent. Docker volumes remember the seeded catalog and test users between runs, so I can experiment, restart, and keep moving without rebuilding the database every time.
Deploying to Kubernetes
By Sunday afternoon curiosity won. I wanted to watch the system run inside a cluster, so the manifests in `k8s/` mirror the Compose layout almost line for line.
The user service deployment is representative of the pattern:
```yaml
# k8s/user-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: user-service
          image: mini-ecommerce-user:latest
          env:
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: value
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3001
```
The gateway pairs a deployment with an ingress so there is one public entry point:
```yaml
# k8s/gateway.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
spec:
  template:
    spec:
      containers:
        - name: api-gateway
          image: mini-ecommerce-gateway:latest
          env:
            - name: USER_SERVICE_URL
              value: http://user-service
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-gateway
                port:
                  number: 80
```
Dedicated Postgres deployments keep data siloed per service, honoring the “database per service” mantra without any shared state leaks.
With images tagged (think `mini-ecommerce-user:latest`), a `kubectl apply -f k8s/` sets up the same architecture I run locally. Rolling updates and restarts behave the way I expect, which makes this repo a comfortable sandbox for practicing cluster operations. Secrets ship with `kubectl apply -f k8s/secret.yaml`, and the workload manifests read them as environment variables; config maps follow the same pattern for plain settings.
Observability, Testing, and Next Experiments
I kept observability light but friendly. The logger shown earlier prefixes every line with a service name, so one `tail -f` gives a clear picture of who is talking. Tests live next to the code inside each service’s `__tests__` folder; they mix unit checks with small integration cases so I can change a function and still trust the boundaries, and they double as documentation because they show how the modules are meant to collaborate.
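To give a flavor of what lives in those folders, here is an illustrative unit check for `registerUser`, written against Node's built-in test runner as an assumption; the real tests (and the test framework) in the repo may look different:

```js
// services/user/__tests__/register.test.js — illustrative only; the actual
// tests and runner in the repo may differ.
const test = require('node:test');
const assert = require('node:assert');

test('registerUser rejects missing credentials', async () => {
  // Assumes registerUser is exported from ../src/service.js.
  const { registerUser } = require('../src/service');
  await assert.rejects(
    () => registerUser({ username: 'demo' }), // no password supplied
    (err) => err.code === 'validation_error' && err.status === 400,
  );
});
```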
There is still plenty to explore. A message broker for order events, circuit breakers inside the product client, and rate limiting at the gateway are already on the list. The current setup leaves room for those ideas without tearing up the base.
What I Relearned
- Clear domain boundaries keep ownership simple and give every rule a home.
- A small shared toolkit (`@mini/shared`) stops the team, future me included, from rebuilding the same helpers.
- The API gateway protects client URLs while backend services evolve in private.
- Matching the local Compose setup inside Kubernetes lowers the stress when promoting changes.
The weekend build reminded me that microservices are less about counting repositories and more about choosing clear boundaries. Steady ownership, honest contracts, and repeatable operations beat shiny patterns every time. Now that this mini eCommerce system lives in the toolbox, I can reopen the code and the lessons whenever I need a quick refresher.