
Overview


The docker-selenium project provides Docker images and deployment configurations for running Selenium Grid, a platform for distributed browser automation testing. The project enables running browser tests in containerized environments, from local development setups to production-scale Kubernetes clusters.

The project is maintained by SeleniumHQ and distributed under the Apache License 2.0. Images are published to Docker Hub and Helm charts are available at Artifact Hub.

Project Scope

The repository provides:

  • Docker Images: Base images, Grid components (Hub, Router, Distributor, Sessions, SessionQueue, EventBus), browser nodes (Chrome, Firefox, Edge, Chromium), and specialized images (video recording, dynamic provisioning)
  • Deployment Templates: Docker Compose files and Kubernetes Helm charts for various deployment scenarios
  • Build System: Multi-architecture build support (AMD64/ARM64) with version compatibility management
  • Testing Infrastructure: Automated testing framework for Docker and Kubernetes deployments

For configuration details, see page 5. For Kubernetes deployment, see page 3. For advanced features like video recording and autoscaling, see page 6.

Sources: README.md15-30 Makefile1-40

Deployment Patterns

Docker Selenium supports four primary deployment patterns, each suited for different use cases:

Pattern 1: Standalone Mode

Deployment Diagram: Standalone Mode

Standalone mode combines the Selenium Hub and browser Node in a single container. This is the simplest deployment option, suitable for:

  • Local development and debugging
  • CI/CD pipeline test execution
  • Small-scale testing workloads

Quick Start:
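
A minimal sketch of a standalone launch (the version tag is illustrative; pin to a current release):

```shell
# Start a standalone Chrome container: 4444 serves the Grid UI and
# WebDriver endpoint, 7900 serves the noVNC viewer for watching sessions
docker run -d -p 4444:4444 -p 7900:7900 --shm-size=2g \
  selenium/standalone-chrome:4.36.0
```

Tests then point their RemoteWebDriver at `http://localhost:4444`.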

Sources: README.md116-136 README.md408-430

Pattern 2: Hub and Nodes

Deployment Diagram: Hub-Node Architecture

Hub-Node mode separates the Grid orchestration (Hub) from browser execution (Nodes). The Hub manages session routing and load balancing across multiple Node containers. This pattern supports:

  • Multi-browser testing (Chrome, Firefox, Edge in parallel)
  • Horizontal scaling of specific browsers
  • Resource isolation between browser types

Quick Start:
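
A minimal Hub-Node sketch (version tags illustrative), wiring a node to the Hub's event bus via `SE_*` variables:

```shell
# Shared network so nodes can resolve the Hub by container name
docker network create grid

# Hub: event bus on 4442/4443, Grid UI and WebDriver endpoint on 4444
docker run -d --net grid --name selenium-hub \
  -p 4442-4443:4442-4443 -p 4444:4444 selenium/hub:4.36.0

# Chrome node: registers with the Hub through the event bus
docker run -d --net grid --shm-size=2g \
  -e SE_EVENT_BUS_HOST=selenium-hub \
  -e SE_EVENT_BUS_PUBLISH_PORT=4442 \
  -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 \
  selenium/node-chrome:4.36.0
```

Additional nodes (Firefox, Edge) are started the same way with their respective images.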

Sources: README.md434-503 Hub/Dockerfile1-32 NodeBase/Dockerfile41-52

Pattern 3: Distributed Grid

Deployment Diagram: Distributed Grid Components

Distributed Grid decomposes the Hub into separate microservices:

  • Router: Entry point, routes requests to appropriate components
  • Distributor: Manages session distribution and node registration
  • Sessions: Maintains active session mappings
  • SessionQueue: Queues session requests when nodes are at capacity
  • EventBus: Message broker for inter-component communication

This pattern enables:

  • Independent scaling of Grid components
  • High availability through component redundancy
  • Production-grade deployments with observability
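
The components above can be brought up together from the repository's full-grid Compose file (named in the comparison table below):

```shell
# From the repository root: starts Router, Distributor, Sessions,
# SessionQueue, EventBus, and browser nodes as separate containers
docker compose -f docker-compose-v3-full-grid.yml up -d
```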

Sources: README.md623-637 Makefile151-168

Pattern 4: Dynamic Grid

Deployment Diagram: Dynamic Grid Provisioning

Dynamic Grid provisions browser containers on-demand for each session request. The selenium/node-docker or selenium/standalone-docker images communicate with the Docker daemon to create and destroy ephemeral browser containers. This pattern provides:

  • Optimal resource utilization (containers only exist during active sessions)
  • Session isolation (each session runs in a fresh container)
  • Dynamic browser version selection
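
A sketch of starting a Dynamic Grid from the repository's Compose file; the file wires up the Docker socket access and the `config.toml` that maps requested capabilities to browser images:

```shell
# node-docker/standalone-docker talk to the Docker daemon to create
# and destroy ephemeral browser containers per session
docker compose -f docker-compose-v3-dynamic-grid.yml up -d
```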

Sources: README.md822-829 Makefile225-229

Deployment Pattern Comparison

| Aspect | Standalone | Hub-Node | Distributed Grid | Dynamic Grid |
|---|---|---|---|---|
| Container Count | 1 | 2+ (1 Hub + N Nodes) | 5+ (Router, Distributor, Sessions, Queue, EventBus + Nodes) | 1-2 + ephemeral |
| Use Case | Development, CI/CD | Small-to-medium scale | Production, high availability | Variable workloads |
| Scalability | None (single container) | Manual horizontal scaling | Component-level scaling | Automatic on-demand |
| Complexity | Low | Medium | High | Medium |
| Resource Efficiency | Fixed | Fixed | Fixed | Dynamic |
| Compose File | N/A | docker-compose-v3.yml | docker-compose-v3-full-grid.yml | docker-compose-v3-dynamic-grid.yml |

Sources: README.md406-629

Docker Image Ecosystem

Image Hierarchy Diagram

The image hierarchy follows a layered architecture:

Foundation Layer

  • selenium/base: Provides Java 21, Selenium server JAR, OpenTelemetry instrumentation, and Supervisor process manager. All other images extend from this base.

Infrastructure Layer

  • Grid Components: selenium/hub, selenium/router, selenium/distributor, selenium/sessions, selenium/session-queue, selenium/event-bus extend base directly and implement specific Grid microservices.

Browser Execution Layer

  • selenium/video: FFmpeg-based image for session recording, built with x264 and xcb support plus PulseAudio for audio capture.
  • selenium/node-base: Adds graphical display infrastructure (Xvfb virtual framebuffer, VNC server, noVNC web interface, Fluxbox window manager) on top of the video image.
  • Browser Nodes: selenium/node-chrome, selenium/node-firefox, selenium/node-edge, selenium/node-chromium extend node-base with browser binaries and WebDrivers.

Standalone Layer

  • Standalone Images: Combine Hub functionality with browser nodes in single containers for all-in-one deployments.

For detailed image documentation, see page 2.

Sources: Makefile41-59 Makefile144-308 NodeBase/Dockerfile1-182 NodeChrome/Dockerfile1-59 NodeFirefox/Dockerfile1-95 Hub/Dockerfile1-32

Multi-Architecture Support

The project supports both AMD64 and ARM64 architectures, though browser availability varies:

Browser Architecture Support Matrix

| Browser | AMD64 | ARM64 | Notes |
|---|---|---|---|
| Chrome | Yes | No | Google does not build Chrome for Linux/ARM |
| Edge | Yes | No | Microsoft does not build Edge for Linux/ARM |
| Firefox | Yes | Yes | Available from v136+ via APT stable channel |
| Chromium | Yes | Yes | Open source, multi-platform support |

The build system automatically detects the host architecture and builds appropriate images. Multi-architecture builds can be performed using:
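
As a sketch, assuming the `PLATFORMS` variable from the Makefile referenced below (the exact variable name and targets should be checked against the Makefile):

```shell
# Illustrative cross-platform build; requires Docker Buildx
PLATFORMS="linux/amd64,linux/arm64" make build
```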

Note: Running AMD64 images under emulation on ARM64 platforms is not recommended due to performance and stability issues.

Sources: README.md144-191 Makefile27-28 Makefile210-229

Image Tagging and Versioning

Images are tagged with multiple formats to support different use cases:

| Tag Format | Example | Use Case |
|---|---|---|
| Full version | selenium/hub:4.36.0-20251001 | Pin to specific Selenium and build date |
| Major.Minor.Patch | selenium/hub:4.36.0 | Pin to Selenium version |
| Major.Minor | selenium/hub:4.36 | Auto-update patch versions |
| Major | selenium/hub:4 | Auto-update minor and patch versions |
| latest | selenium/hub:latest | Always use newest stable release |
| nightly | selenium/hub:nightly | Nightly builds from Selenium main branch |
| dev/beta | selenium/standalone-chrome:dev | Pre-release browser channels |

For production deployments, always use full version tags to ensure reproducibility.

Sources: README.md20-21 README.md132-136 README.md232-284

Deployment Methods

The project provides three primary deployment methods:

Docker Compose

Docker Compose files simplify multi-container orchestration for local development and small-scale deployments. Key configurations:

  • docker-compose-v3.yml: Basic Hub-Node grid
  • docker-compose-v3-full-grid.yml: Distributed Grid with all components
  • docker-compose-v3-video.yml: Grid with video recording
  • docker-compose-v3-dynamic-grid.yml: Dynamic Grid with on-demand provisioning
  • docker-compose-v3-tracing.yml: Grid with distributed tracing (Jaeger)

See page 4 for Docker Compose deployment details.

Kubernetes Helm Charts

The charts/selenium-grid Helm chart enables production Kubernetes deployments with:

  • KEDA-based autoscaling
  • Ingress and TLS configuration
  • Distributed tracing (Jaeger) and monitoring (Prometheus)
  • Video recording and upload
  • ConfigMap/Secret management
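
A minimal install sketch (release name and namespace are illustrative; chart values control the features listed above):

```shell
# Add the chart repository and install the selenium-grid chart
helm repo add docker-selenium https://www.selenium.dev/docker-selenium
helm repo update
helm install selenium-grid docker-selenium/selenium-grid \
  --namespace selenium --create-namespace
```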

See page 3 for Kubernetes deployment details.

Manual Docker

Individual containers can be run manually using docker run commands, suitable for simple use cases or custom orchestration.

Sources: README.md607-629 README.md69

System Requirements

| Component | Minimum Version | Notes |
|---|---|---|
| Docker Engine | 26.1.4 | Required for all deployments |
| Docker Compose | v2.34.0 | For compose-based deployments |
| Docker Buildx | v0.25.0 | For building multi-arch images |
| Kubernetes | v1.26.15 | For Helm chart deployments |

Resource Recommendations:

  • Standalone containers: --shm-size=2g minimum
  • Video recording: Additional 1 CPU per video container
  • Multi-browser grids: 2GB+ shared memory per browser node

Sources: README.md110-114 README.md132-133

Key Features

Video Recording and Upload

Sessions can be recorded using the selenium/video:ffmpeg-8.0-latest image with automatic cloud upload via rclone:
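
A sketch of pairing a video container with a browser container (container names and the host video path are illustrative):

```shell
docker network create grid

# Browser container whose display will be recorded
docker run -d --net grid --name chrome -p 4444:4444 --shm-size=2g \
  selenium/standalone-chrome:4.36.0

# Video container: DISPLAY_CONTAINER_NAME tells it which container's
# display to capture; recordings land in the mounted /videos directory
docker run -d --net grid --name video \
  -v /tmp/videos:/videos \
  -e DISPLAY_CONTAINER_NAME=chrome \
  selenium/video:ffmpeg-8.0-latest
```

Upload to cloud storage is handled by rclone, configured through the upload-related environment variables documented in the README.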

Sources: README.md640-782

Dynamic Grid (Docker-in-Docker)

The dynamic grid feature enables on-demand browser container provisioning using selenium/node-docker and selenium/standalone-docker images:

  • Containers are created per session request
  • Supports both hub-node and standalone configurations
  • Enables resource isolation and scaling flexibility

Multi-Architecture Support

Browser availability varies by architecture:

| Browser | AMD64 | ARM64 |
|---|---|---|
| Chrome | Yes | No |
| Edge | Yes | No |
| Firefox | Yes | Yes |
| Chromium | Yes | Yes |

Sources: README.md144-191 README.md368-387

Configuration Management

The system uses environment variables with the SE_* prefix for configuration across all deployment modes. Key configuration areas include:

  • Grid Topology: SE_EVENT_BUS_HOST, SE_NODE_HOST
  • Session Limits: SE_NODE_MAX_SESSIONS, SE_NODE_SESSION_TIMEOUT
  • Video Recording: SE_RECORD_VIDEO, SE_VIDEO_FILE_NAME
  • Security: SE_ENABLE_TLS, SE_ROUTER_USERNAME
  • Monitoring: SE_ENABLE_TRACING, SE_OTEL_SERVICE_NAME
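
For example, session limits can be tuned at container start by passing the `SE_*` variables listed above (version tag and values illustrative):

```shell
# Allow 4 concurrent sessions and time idle sessions out after 300s
docker run -d -p 4444:4444 --shm-size=2g \
  -e SE_NODE_MAX_SESSIONS=4 \
  -e SE_NODE_SESSION_TIMEOUT=300 \
  selenium/standalone-chrome:4.36.0
```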

Sources: README.md388-405 NodeBase/Dockerfile34-79

This overview establishes the foundational understanding of Docker Selenium's architecture and capabilities. Subsequent sections provide detailed implementation guidance for specific deployment scenarios and advanced features.