Joseph Mukorivo

Posted on • Originally published at Medium

Can One Person Really Build a Complex System from Scratch?

Introduction

When I started building Complexus, I wasn't trying to ship just another side project. I wanted to create a real system, one that could change how modern teams plan, track, and align their work, replacing tools like Monday, Notion, and Jira, but fully centered on OKRs.

I have always built software with teams, so I wanted to find out whether one person could tackle something this complex. I built Complexus by myself. Complexus is a production-ready AI project management platform with OKR tracking, real-time collaboration, AI-powered automation, background processing, engagement email workflows, multi-tenant authentication, analytics, GitHub integration, and more. It consists of a frontend monorepo containing multiple Next.js applications and a separate backend service written in Go.

This post breaks down how I built it all. I’ll go deep into architecture, tooling decisions, background processing, AI tools, and the small details that made the system feel fast and cohesive. If you’re thinking about building a complex platform by yourself or leading the architecture of one, I hope this gives you a clear picture of what it actually takes.

The Frontend

Monorepo Setup with Turborepo

I chose Turborepo to house every part of the Complexus frontend in a single repository. This makes it trivial to share code and configuration across the various web applications. The top level structure looks like this:

```
/complexus.tech
  apps/
    landing/                # Next.js marketing site
    docs/                   # MDX-driven documentation
    projects/               # Core project management app
  packages/
    ui/                     # Shared components built with Tailwind and Radix
    tailwind-config/        # Design tokens and Tailwind configuration
    lib/                    # Shared helpers (cn, date formatters, Zod schemas)
    eslint-config-custom/   # Consistent linting rules
  turbo.json                # Pipeline definitions and cache settings
  tsconfig.json             # Base TypeScript configuration
  package.json              # Workspace dependencies and scripts
```

Each package declares its own TypeScript project reference so that changes to packages/ui or packages/lib immediately type check across all apps. Turborepo pipelines in turbo.json ensure only affected projects rebuild. Remote caching in CI restores build artifacts from previous runs, cutting pipeline times by over fifty percent. Locally, I run turbo run dev and get all three apps running in parallel with hot reload.
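
For reference, a minimal sketch of what the pipeline config in turbo.json can look like. The task names and output globs here are illustrative, not copied from the repo, and in Turborepo v2 the top-level key is tasks rather than pipeline:

```json
// turbo.json (illustrative sketch; the actual task list and globs may differ)
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    },
    "lint": {}
  }
}
```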

Shell and Layouts with Next.js

The core application under apps/projects is a Next.js App Router project. I rely on nested layouts to compose global UI elements and context providers. The root layout is minimal, setting up the HTML shell, fonts, and essential providers like SessionProvider for auth and a custom Providers component that wraps the app in QueryClientProvider, PostHogProvider, and ThemeProvider.

```tsx
// apps/projects/src/app/layout.tsx
import type { Metadata } from "next";
import { Instrument_Sans as InstrumentSans } from "next/font/google";
import { type ReactNode } from "react";
import "../styles/global.css";
import { SessionProvider } from "next-auth/react";
import { auth } from "@/auth";
import { Providers } from "./providers";
import { Toaster } from "./toaster";

// Font instance used on the <html> element below
const font = InstrumentSans({ subsets: ["latin"] });
// ...

export default async function RootLayout({
  children,
}: {
  children: ReactNode;
}) {
  const session = await auth();

  return (
    <html className={font.className} lang="en" suppressHydrationWarning>
      <body>
        <SessionProvider session={session}>
          <Providers>
            {children}
            <Toaster />
          </Providers>
        </SessionProvider>
        {/* ... */}
      </body>
    </html>
  );
}
```

A second layout at src/app/(workspace)/layout.tsx is a server component that fetches critical workspace data. A third, nested layout at src/app/(workspace)/(app)/layout.tsx wraps everything in the main ApplicationLayout, which contains the resizable sidebar and the main content area. This layered approach keeps data fetching and UI rendering cleanly separated.

Data Fetching and Optimistic Updates with React Query

React Query is the backbone of all data operations. I define query keys in a shared file and co-locate data fetching hooks with their respective features under src/modules. For mutations, I implement optimistic updates to give users instant feedback. When a user performs an action, like creating a new sprint, the UI updates instantly while the network request happens in the background.

```ts
// apps/projects/src/modules/sprints/hooks/create-sprint-mutation.ts
import { useMutation, useQueryClient } from "@tanstack/react-query";
import { sprintKeys } from "@/constants/keys";
import { useTracking } from "@/hooks";
import { createSprint } from "../actions/create-sprint";
import type { NewSprint } from "../types";

export const useCreateSprint = () => {
  const queryClient = useQueryClient();
  const { track } = useTracking();

  return useMutation({
    mutationFn: async (sprint: NewSprint) => createSprint(sprint),
    onSuccess: (newSprint) => {
      queryClient.invalidateQueries({ queryKey: sprintKeys.all });
      track("sprint_created", {
        sprintId: newSprint.id,
        sprintName: newSprint.name,
      });
    },
    onError: (error) => {
      // Handle error, e.g., show a toast notification
    },
  });
};
```
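The hook above simply invalidates the sprint list on success. For the optimistic path mentioned earlier, the pattern adds an onMutate step that patches the cache immediately and rolls back on failure. Here is a minimal sketch, assuming the same sprintKeys factory and a Sprint type; the temporary-ID trick is illustrative, not the exact production hook:

```ts
// Minimal optimistic-update sketch (illustrative, not the exact production code)
import { useMutation, useQueryClient } from "@tanstack/react-query";
import { sprintKeys } from "@/constants/keys";
import { createSprint } from "../actions/create-sprint";
import type { NewSprint, Sprint } from "../types";

export const useCreateSprintOptimistic = () => {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (sprint: NewSprint) => createSprint(sprint),
    onMutate: async (sprint) => {
      // Cancel in-flight fetches so they don't overwrite the optimistic entry
      await queryClient.cancelQueries({ queryKey: sprintKeys.all });

      // Snapshot the current cache so we can roll back on failure
      const previous = queryClient.getQueryData<Sprint[]>(sprintKeys.all);

      // Apply the change immediately so the UI updates before the request resolves
      queryClient.setQueryData<Sprint[]>(sprintKeys.all, (old = []) => [
        ...old,
        { ...sprint, id: `temp-${Date.now()}` } as Sprint,
      ]);

      return { previous };
    },
    onError: (_err, _sprint, context) => {
      // Roll back to the snapshot if the request fails
      if (context?.previous) {
        queryClient.setQueryData(sprintKeys.all, context.previous);
      }
    },
    onSettled: () => {
      // Re-sync with the server either way
      queryClient.invalidateQueries({ queryKey: sprintKeys.all });
    },
  });
};
```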

On some of these operations I also track user actions with the custom useTracking hook. This data helps me understand how users interact with the platform and lets me tailor their experience.

Real-Time Synchronization with Server-Sent Events

To keep multiple users in sync, a dedicated ServerSentEvents component, rendered in the workspace layout, establishes a connection to a secure, workspace-specific endpoint on the Go backend.

```tsx
// apps/projects/src/app/server-sent-events.tsx
"use client";

import { useQueryClient } from "@tanstack/react-query";
import { useSession } from "next-auth/react";
// ... other imports

export const ServerSentEvents = () => {
  const { data: session } = useSession();
  const { workspace } = useCurrentWorkspace();
  const queryClient = useQueryClient();

  // ... handleNotification and handleWorkspaceUpdate callbacks

  useEffect(() => {
    // The endpoint URL is constructed dynamically using the workspace ID and session token
    const eventSource = new EventSource(/* ... secure endpoint ... */);

    eventSource.onmessage = (event) => {
      const data = JSON.parse(`${event.data}`);
      if (data.type === "story.workspace_update") {
        handleWorkspaceUpdate(data);
      } else {
        handleNotification(data);
      }
    };

    // ... error handling

    return () => eventSource.close();
  }, [session?.token, workspace /* ... */]);

  return null;
};
```

When an event arrives from the server, the client intelligently updates the React Query cache. For a story update, it patches the changed fields in both the main story list and the detailed story view using queryClient.setQueryData, ensuring all visible components update instantly without refetching entire lists.
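
Concretely, the patch can look like this sketch inside the component above. The storyKeys factory, the Story type, and the payload shape are my illustrative assumptions, not the exact production code:

```ts
// Illustrative cache patch for a story update event; `storyKeys`, `Story`,
// and the payload shape are assumptions.
const handleWorkspaceUpdate = (data: {
  payload: { id: string } & Partial<Story>;
}) => {
  const { id, ...changes } = data.payload;

  // Patch the story inside the cached list without refetching the whole list
  queryClient.setQueryData<Story[]>(storyKeys.list(workspace.id), (stories) =>
    stories?.map((story) =>
      story.id === id ? { ...story, ...changes } : story,
    ),
  );

  // Patch the detailed story view if it is currently cached
  queryClient.setQueryData<Story>(storyKeys.detail(id), (story) =>
    story ? { ...story, ...changes } : story,
  );
};
```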

Authentication Across Subdomains with NextAuth.js

NextAuth.js handles authentication. I use magic links and social auth with secure tokens instead of storing passwords. A key architectural decision was to enable a single login session to work across all workspace subdomains, like acme.complexus.app and corp.complexus.app. This is achieved by scoping the authentication cookie to a shared parent domain, .complexus.app. When a user signs in, NextAuth.js issues a session cookie with this domain setting. The browser then automatically sends the cookie with requests to any subdomain, creating a seamless authentication experience.
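
The relevant slice of the configuration looks roughly like this sketch, using NextAuth's cookies customization. The cookie name follows NextAuth's defaults, and the exact production options may differ:

```ts
// Hedged sketch of cross-subdomain cookie scoping (auth.ts); the exact
// production config may differ.
import NextAuth from "next-auth";

export const { handlers, auth, signIn, signOut } = NextAuth({
  // ... providers, adapter, callbacks ...
  cookies: {
    sessionToken: {
      name: "__Secure-authjs.session-token",
      options: {
        domain: ".complexus.app", // shared parent domain => sent to every workspace subdomain
        httpOnly: true,
        sameSite: "lax",
        secure: true,
        path: "/",
      },
    },
  },
});
```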

AI Assistant Maya with the Vercel AI SDK

Maya, the AI assistant, is powered by the AI SDK and mostly OpenAI models. In the projects app, a chat component posts messages to an endpoint like /api/maya. For this, I also collect telemetry with PostHog to understand how users interact with Maya.

```ts
// apps/projects/src/app/api/maya/route.ts
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";
import { withTracing } from "@posthog/ai";
import { navigation, theme /* ... many other tools */ } from "@/lib/ai/tools";
// ...

export async function POST(req: NextRequest) {
  const { messages /* ... other context */ } = await req.json();
  const session = await auth();
  const phClient = posthogServer();

  const openaiClient = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const model = withTracing(openaiClient("gpt-4o-mini"), phClient, {
    posthogDistinctId: session?.user?.email ?? undefined,
    // ...
  });

  const result = streamText({
    model,
    messages,
    tools: {
      navigation,
      theme,
      // ... all other tools
    },
    system: systemPrompt + userContext,
    async onFinish({ response }) {
      await saveChat({
        id,
        messages: appendResponseMessages({
          messages,
          responseMessages: response.messages,
        }),
      });
    },
  });

  return result.toDataStreamResponse();
}
```

The key to Maya's effectiveness is the comprehensive set of tools defined in apps/projects/src/lib/ai/tools. These tools allow the AI to interact with the application's state, performing actions like creating stories, navigating the user, changing the theme, searching for items, and much more. This makes Maya a powerful, context-aware assistant rather than just a simple chatbot.
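
For illustration, here is roughly what one of those tools looks like with the AI SDK's tool helper. The return payload and how the client reacts to it are my assumptions, not the exact production code:

```ts
// Hedged sketch of a Maya tool; the return payload and the client-side
// handling of it are illustrative assumptions.
import { tool } from "ai";
import { z } from "zod";

export const navigation = tool({
  description: "Navigate the user to a page inside the current workspace",
  parameters: z.object({
    path: z.string().describe("In-app route to open, e.g. /sprints or /okrs"),
  }),
  // The chat component watches tool results and performs the navigation
  execute: async ({ path }) => ({ action: "navigate", path }),
});
```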

Frontend Analytics with PostHog

PostHog is integrated across the frontend to capture events, run A/B tests, and allow users to opt into beta features. A custom useTracking hook makes it easy to send events from any component.

```ts
// apps/projects/src/hooks/tracking.ts
import { usePostHog } from "posthog-js/react";

export const useTracking = () => {
  const posthog = usePostHog();

  const track = (eventName: string, properties?: Record<string, any>) => {
    posthog.capture(eventName, properties);
  };

  return { track };
};
```

This hook is used throughout the application to track key user interactions, such as creating a story or completing a walkthrough.

The Backend

Backend Architecture in Go

The backend is a standalone Go service built with a clean, layered architecture. It relies heavily on the standard library, with sqlx for database access and asynq for background jobs. Code is organized by domain under the internal directory in a separate repository.

```
/projects-api
  cmd/
    api/main.go       # API server entry point
    worker/main.go    # Asynq worker entry point
  internal/
    core/             # Core business logic and types
    handlers/         # HTTP endpoint handlers
    repo/             # Database queries and repository pattern
    sse/              # Server-Sent Events hub
    web/mid/          # Middleware (auth, logging, etc.)
  pkg/                # Shared packages (database, cache, etc.)
```

The service layer in internal/core contains the business logic. Updating a story, for example, involves fetching the current state, applying the updates, recording the changes for the audit log, and publishing an event that notifies other systems and pushes SSE updates to connected users in real time.

```go
// internal/core/stories/stories.go
func (s *Service) Update(ctx context.Context, storyID, workspaceID uuid.UUID, updates map[string]any) error {
	ctx, span := web.AddSpan(ctx, "business.core.stories.Update")
	defer span.End()

	actorID, _ := mid.GetUserID(ctx)

	story, err := s.repo.Get(ctx, storyID, workspaceID)
	if err != nil {
		// ... error handling
		return err
	}

	if err := s.repo.Update(ctx, storyID, workspaceID, updates); err != nil {
		span.RecordError(err)
		return err
	}

	// Record activities for the update
	var activities []CoreActivity
	for field, value := range updates {
		// ... create activity objects ...
	}
	if _, err := s.repo.RecordActivities(ctx, activities); err != nil {
		span.RecordError(err)
		// Log but don't fail the operation
	}

	// Publish event for real-time sync and notifications
	payload := events.StoryUpdatedPayload{ /* ... */ }
	event := events.Event{
		Type:    events.StoryUpdated,
		Payload: payload,
		ActorID: actorID,
	}
	if err := s.publisher.Publish(context.Background(), event); err != nil {
		s.log.Error(ctx, "failed to publish story updated event", "error", err)
	}

	return nil
}
```

The repository packages under internal/repo contain raw SQL queries, which allows for precise tuning and indexing. I use sqlx to map query results to Go structs. OpenTelemetry instruments handlers and service calls, sending traces to Jaeger for end-to-end observability. Here is an example of adding a trace event within a handler:

```go
// internal/handlers/storiesgrp/stories.go
func (h *Handlers) List(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
	ctx, span := web.AddSpan(ctx, "business.core.stories.List")
	defer span.End()

	// ...

	if err := h.cache.Get(ctx, cacheKey, &cachedStories); err == nil {
		// Cache hit
		span.AddEvent("cache hit", trace.WithAttributes(
			attribute.String("cache_key", cacheKey),
		))
		web.Respond(ctx, w, toAppStories(cachedStories), http.StatusOK)
		return nil
	}

	// ...
}
```
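On the repository side, the sqlx usage looks roughly like this sketch. The table columns and struct fields are illustrative assumptions about the schema, not the production code:

```go
// Illustrative sqlx repository method; the query and struct are assumptions
// about the schema.
type dbStory struct {
	ID          uuid.UUID `db:"id"`
	WorkspaceID uuid.UUID `db:"workspace_id"`
	Title       string    `db:"title"`
	Status      string    `db:"status"`
}

func (r *Repo) Get(ctx context.Context, storyID, workspaceID uuid.UUID) (dbStory, error) {
	const q = `
	SELECT id, workspace_id, title, status
	FROM stories
	WHERE id = $1 AND workspace_id = $2 AND deleted_at IS NULL`

	var story dbStory
	// sqlx's GetContext maps the single-row result onto the struct via db tags
	if err := r.db.GetContext(ctx, &story, q, storyID, workspaceID); err != nil {
		return dbStory{}, err
	}
	return story, nil
}
```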

Background Jobs with Asynq

Asynq runs scheduled and ad hoc tasks. I define jobs in internal/taskhandlers and register them in the worker's main entry point.

```go
// cmd/worker/main.go
func main() {
	// ... setup ...

	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: redisAddr, Password: redisPassword},
		asynq.Config{ /* ... */ },
	)

	mux := asynq.NewServeMux()
	mux.Handle(tasks.TypeOnboardingEmail, taskhandlers.NewOnboardingEmailHandler(log, brevoService))
	mux.Handle(tasks.TypeCleanupDeletedStories, taskhandlers.NewCleanupDeletedStoriesHandler(log, db))
	// ... other handlers

	if err := srv.Run(mux); err != nil {
		log.Fatal("could not run server: ", err)
	}
}
```

A daily cleanup job deletes soft-deleted stories older than 30 days. Archiving jobs flag inactive boards. Notification jobs send transactional emails for unread notifications. Retry policies and dead-letter queues prevent silent failures.
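
The scheduling side can be sketched with asynq's Scheduler. The cron spec here is illustrative, not the production schedule:

```go
// Hedged sketch of registering the daily cleanup with asynq's scheduler;
// the cron spec and nil payload are illustrative.
scheduler := asynq.NewScheduler(
	asynq.RedisClientOpt{Addr: redisAddr, Password: redisPassword},
	nil,
)

// Run the cleanup every day at 03:00
task := asynq.NewTask(tasks.TypeCleanupDeletedStories, nil)
if _, err := scheduler.Register("0 3 * * *", task); err != nil {
	log.Fatal("could not register cleanup task: ", err)
}

if err := scheduler.Run(); err != nil {
	log.Fatal("could not run scheduler: ", err)
}
```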

Emails and Onboarding with Brevo

Brevo handles both transactional and marketing emails via a single integration defined in pkg/brevo/emails.go.

```go
// pkg/brevo/emails.go
import (
	"context"

	brevo "github.com/getbrevo/brevo-go/lib"
)

func (s *Service) SendTransactionalEmail(ctx context.Context, subject, toName, toEmail, htmlContent string) error {
	_, _, err := s.client.TransactionalEmailsApi.SendTransacEmail(ctx, brevo.SendSmtpEmail{
		Sender:      &brevo.SendSmtpEmailSender{Name: "Complexus", Email: "no-reply@complexus.app"},
		To:          []brevo.SendSmtpEmailTo{{Name: toName, Email: toEmail}},
		Subject:     subject,
		HtmlContent: htmlContent,
		TemplateId:  4,
	})
	return err
}
```

On signup, an Asynq task subscribes the user to a Brevo list. A five-step onboarding automation then sends emails introducing features over two weeks.
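
The enqueue call on signup might look like this sketch; the payload shape and the s.queue (*asynq.Client) field are illustrative assumptions:

```go
// Hedged sketch of enqueueing the onboarding task after signup; the payload
// shape and the s.queue client field are assumptions.
payload, err := json.Marshal(map[string]string{
	"email": user.Email,
	"name":  user.Name,
})
if err != nil {
	return err
}

task := asynq.NewTask(tasks.TypeOnboardingEmail, payload)
// Retry a few times in case Brevo is briefly unavailable
if _, err := s.queue.Enqueue(task, asynq.MaxRetry(5)); err != nil {
	return err
}
```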

CI/CD and Infrastructure

The frontend apps deploy on Vercel, with each apps/* folder detected as its own project, generating preview URLs for pull requests. The backend service builds a Docker image in a GitHub Actions workflow and deploys to Azure Container Apps.

```yaml
# .github/workflows/deploy-backend.yml
name: Build and Deploy API

on:
  push:
    branches: [main]
    paths: ["projects-api/**"]

jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: "Login to Azure"
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # ... docker build and push commands ...

      - name: "Deploy to Azure Container Apps"
        uses: azure/container-apps-deploy-action@v1
        with:
          appName: [your_app_here]
          resourceGroup: [your_resource_group]
          imageToDeploy: [container_name].azurecr.io/projects-api:${{ github.sha }}
```

GitHub Integration

Complexus includes a GitHub App that listens for issue events. When it is installed on a repository, GitHub sends webhooks to a dedicated endpoint in the Go backend. The handler creates or updates stories based on issue actions, linking development work directly to project planning. Daily Asynq jobs prune old webhook event logs to keep the system clean.
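
The endpoint can be sketched like this. The X-GitHub-Event header follows GitHub's webhook conventions, but the story-mapping helpers (CreateFromIssue, SyncFromIssue) are illustrative assumptions, not the production code:

```go
// Hedged sketch of the GitHub webhook endpoint; the story-mapping helpers
// are illustrative assumptions.
func (h *Handlers) GitHubWebhook(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
	// GitHub identifies the event type via this header
	event := r.Header.Get("X-GitHub-Event")
	if event != "issues" {
		w.WriteHeader(http.StatusNoContent)
		return nil
	}

	var payload struct {
		Action string `json:"action"`
		Issue  struct {
			Number int    `json:"number"`
			Title  string `json:"title"`
			Body   string `json:"body"`
		} `json:"issue"`
	}
	if err := json.NewDecoder(r.Body).Decode(&payload); err != nil {
		return err
	}

	switch payload.Action {
	case "opened":
		// Create a story linked to the new issue
		return h.stories.CreateFromIssue(ctx, payload.Issue.Number, payload.Issue.Title, payload.Issue.Body)
	case "edited", "closed":
		// Keep the linked story in sync with the issue
		return h.stories.SyncFromIssue(ctx, payload.Issue.Number, payload.Action)
	}

	w.WriteHeader(http.StatusNoContent)
	return nil
}
```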

Closing Thoughts

Solo building a platform of this scale demands modular architecture, shared tooling, clear boundaries, and disciplined iteration. Complexus works because each layer integrates seamlessly. The frontend feels instant thanks to optimistic updates and SSE. The backend stays reliable through clean architecture patterns and raw SQL. Background jobs automate maintenance. AI tools extend functionality without bloating the core code. Authentication and feature flags keep everything secure and testable. Deployment pipelines ensure smooth releases.

Of course, this post doesn't cover everything. I've left out details on the comprehensive testing strategy, which includes end-to-end tests with Playwright and a suite of unit and integration tests for both the frontend and backend. The goal was to provide a high-level map of the system.

If you are considering a solo build, invest heavily in your foundational architecture, automate what you can, and keep everything in one coherent workspace. I invite you to try the platform for yourself at www.complexus.app. Share your experiences or ask questions in the comments below. I look forward to hearing about your own complex one-person builds.
