Server Installation
You can run Nebula Commander in several ways:
| Method | Best for |
|---|---|
| Docker (recommended) | Quick start and production: pre-built images, docker compose |
| NixOS | NixOS hosts: systemd module with options for port, database, certs, JWT |
| Reverse Proxy | Nginx, Traefik, or Caddy in front with TLS and HSTS |
Choose one and follow the linked page. For local development (venv, uvicorn, frontend dev server), see Development: Setup. After installation, see Server Configuration to set the database, JWT, and optional OIDC.
1 - Docker
Run Nebula Commander with Docker using pre-built images or a local build. You can download all required files without cloning the repository. For HTTPS and HSTS at the edge (Nginx, Traefik, or Caddy in front of the frontend container), see Reverse Proxy.
Downloading files
You need: docker-compose.yml, docker-compose-keycloak.yml, .env.example, and the env.d.example/ directory (backend and Keycloak config). Either use the commands below or the provided script.
Using curl
Run these in an empty directory (or where you want the Docker setup):
BASE_URL="https://raw.githubusercontent.com/NixRTR/nebula-commander/main/docker"
curl -sSL -o docker-compose.yml "${BASE_URL}/docker-compose.yml"
curl -sSL -o docker-compose-keycloak.yml "${BASE_URL}/docker-compose-keycloak.yml"
curl -sSL -o .env.example "${BASE_URL}/.env.example"
mkdir -p env.d.example/keycloak
curl -sSL -o env.d.example/backend "${BASE_URL}/env.d.example/backend"
curl -sSL -o env.d.example/keycloak/keycloak "${BASE_URL}/env.d.example/keycloak/keycloak"
curl -sSL -o env.d.example/keycloak/postgresql "${BASE_URL}/env.d.example/keycloak/postgresql"
Then create the Docker network (required by the compose files):
docker network create nebula-commander
Copy the example env into place and edit as needed:
cp .env.example .env
cp -r env.d.example env.d
# Edit env.d/backend (JWT secret, OIDC, etc.)
Using the download script
The repository provides a script that downloads the same files and checks for prerequisites. It does not install anything; if Docker or Docker Compose is missing, it prints what you need and exits.
curl -sSL https://raw.githubusercontent.com/NixRTR/nebula-commander/main/docker/download.sh | bash
Then run the steps the script prints: copy .env.example to .env, copy env.d.example to env.d, edit env.d/backend, create the network, and start with docker compose up -d.
Example file contents
Below are the default file contents for reference. After downloading, copy to .env and env.d/ and customize.
docker-compose.yml
name: nebulacdr
include:
- path: ./docker-compose-keycloak.yml
services:
backend:
build:
context: ..
dockerfile: docker/backend/Dockerfile
image: ghcr.io/nixrtr/nebula-commander-backend:latest
container_name: nebula-commander-backend
restart: unless-stopped
ports:
- "${BACKEND_PORT:-8081}:8081"
volumes:
# Persistent data storage
- nebula-commander-data:/var/lib/nebula-commander
# Optional: mount JWT secret file
- ${JWT_SECRET_FILE:-/dev/null}:/run/secrets/jwt-secret:ro
env_file:
# Backend configuration (database, JWT, OIDC, CORS, debug)
- env.d/backend
environment:
- NEBULA_COMMANDER_SERVER_HOST=0.0.0.0
- NEBULA_COMMANDER_SERVER_PORT=8081
healthcheck:
test: ["CMD", "python3", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8081/api/health')"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
networks:
- nebula-commander
frontend:
build:
context: ..
dockerfile: docker/frontend/Dockerfile
image: ghcr.io/nixrtr/nebula-commander-frontend:latest
container_name: nebula-commander-frontend
restart: unless-stopped
ports:
- "${FRONTEND_PORT:-80}:80"
depends_on:
backend:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 5s
networks:
- nebula-commander
networks:
nebula-commander:
external: true
volumes:
nebula-commander-data:
driver: local
docker-compose-keycloak.yml
# Keycloak OIDC Authentication Stack
# To use: docker compose -f docker-compose.yml -f docker-compose-keycloak.yml up -d
services:
keycloak_db:
image: postgres:16-alpine
container_name: keycloak-db
restart: unless-stopped
env_file:
- env.d/keycloak/postgresql
volumes:
- keycloak-db-data:/var/lib/postgresql/data
networks:
- nebula-commander
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-keycloak}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
keycloak:
build:
context: ..
dockerfile: docker/keycloak/Dockerfile
image: ghcr.io/nixrtr/nebula-commander-keycloak:latest
container_name: keycloak
restart: unless-stopped
env_file:
- env.d/keycloak/keycloak
- env.d/backend
ports:
- "${KEYCLOAK_PORT:-8080}:8080"
- "${KEYCLOAK_ADMIN_PORT:-9000}:9000"
depends_on:
keycloak_db:
condition: service_healthy
networks:
- nebula-commander
healthcheck:
test: ["CMD-SHELL", "exec 3<>/dev/tcp/127.0.0.1/8080 && echo -e 'GET /health/ready HTTP/1.1\\r\\nhost: 127.0.0.1\\r\\nConnection: close\\r\\n\\r\\n' >&3 && cat <&3 | grep -q '200 OK'"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
volumes:
keycloak-db-data:
driver: local
networks:
nebula-commander:
external: true
.env.example
# Nebula Commander Docker Infrastructure Configuration
# Copy to .env and customize. Backend settings go in env.d/backend.
# Port for the frontend (Nginx)
FRONTEND_PORT=80
# Port for the backend (FastAPI)
# BACKEND_PORT=8081
# Port for Keycloak (when using docker-compose-keycloak.yml)
# KEYCLOAK_PORT=8080
# Path to JWT secret file on host (optional)
# JWT_SECRET_FILE=/path/to/jwt-secret.txt
env.d.example/backend
Full example. See Configuration: Environment for all options.
# =============================================================================
# Nebula Commander Backend Configuration
# =============================================================================
# All variables use the NEBULA_COMMANDER_ prefix.
# Database
NEBULA_COMMANDER_DATABASE_URL=sqlite+aiosqlite:////var/lib/nebula-commander/db.sqlite
NEBULA_COMMANDER_CERT_STORE_PATH=/var/lib/nebula-commander/certs
# JWT (generate with: openssl rand -base64 32)
NEBULA_COMMANDER_JWT_SECRET_KEY=CHANGE_ME_GENERATE_RANDOM_32_CHARS_MIN
NEBULA_COMMANDER_JWT_ALGORITHM=HS256
NEBULA_COMMANDER_JWT_EXPIRATION_MINUTES=1440
# Public URL (FQDN or host:port)
NEBULA_COMMANDER_PUBLIC_URL=https://nebula.example.com
# OIDC (optional)
NEBULA_COMMANDER_OIDC_ISSUER_URL=http://keycloak:8080/realms/nebula-commander
NEBULA_COMMANDER_OIDC_PUBLIC_ISSUER_URL=https://auth.example.com/realms/nebula-commander
NEBULA_COMMANDER_OIDC_CLIENT_ID=nebula-commander
NEBULA_COMMANDER_OIDC_CLIENT_SECRET=YOUR_KEYCLOAK_CLIENT_SECRET_HERE
NEBULA_COMMANDER_OIDC_SCOPES=openid profile email
# CORS (include your public URL)
NEBULA_COMMANDER_CORS_ORIGINS=https://nebula.example.com
NEBULA_COMMANDER_SESSION_HTTPS_ONLY=false
NEBULA_COMMANDER_ALLOWED_REDIRECT_HOSTS=
# SMTP (optional)
NEBULA_COMMANDER_SMTP_ENABLED=false
NEBULA_COMMANDER_SMTP_HOST=smtp.gmail.com
NEBULA_COMMANDER_SMTP_PORT=587
NEBULA_COMMANDER_SMTP_USE_TLS=true
NEBULA_COMMANDER_SMTP_USERNAME=your-email@gmail.com
NEBULA_COMMANDER_SMTP_PASSWORD=your-app-password
NEBULA_COMMANDER_SMTP_FROM_EMAIL=noreply@example.com
NEBULA_COMMANDER_SMTP_FROM_NAME=Nebula Commander
# Debug (disable in production)
NEBULA_COMMANDER_DEBUG=true
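The JWT secret must be replaced before first start. A minimal sketch of generating one with openssl, as the comment in the file above suggests:

```shell
# Generate a random 32-byte secret, base64-encoded (44 characters),
# suitable for NEBULA_COMMANDER_JWT_SECRET_KEY
secret=$(openssl rand -base64 32)
echo "${secret}"
```

Paste the resulting value into env.d/backend in place of the CHANGE_ME placeholder.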
env.d.example/keycloak/keycloak
# Keycloak Configuration
KC_DB=postgres
KC_DB_URL_HOST=keycloak_db
KC_DB_URL_PORT=5432
KC_DB_URL_DATABASE=keycloak
KC_DB_USERNAME=keycloak
KC_DB_PASSWORD=keycloak_db_password
KC_BOOTSTRAP_ADMIN_USERNAME=admin
KC_BOOTSTRAP_ADMIN_PASSWORD=admin
KC_HOSTNAME=localhost
KC_HOSTNAME_STRICT=false
KC_HTTP_ENABLED=true
KC_HEALTH_ENABLED=true
KC_METRICS_ENABLED=true
KC_LOG_LEVEL=info
env.d.example/keycloak/postgresql
# PostgreSQL Configuration for Keycloak
POSTGRES_DB=keycloak
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=keycloak_db_password
Quick start (pre-built images)
After you have the files and env.d/backend configured:
docker network create nebula-commander # if not already created
docker compose pull
docker compose up -d
docker compose logs -f
The application is available at http://localhost (or the port set in .env).
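To verify the stack is up, you can probe the same health endpoints the compose healthchecks above use (adjust ports if you changed them in .env):

```shell
# Frontend health (Nginx), same endpoint as the compose healthcheck
curl -fsS http://localhost/health

# Backend health (FastAPI), reachable directly only if BACKEND_PORT is published
curl -fsS http://localhost:8081/api/health
```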
Building locally
If you prefer to build images instead of pulling:
docker compose build
docker compose up -d
See Development: Manual builds for build-args and multi-arch builds.
Images
| Image | Registry | Base | Port | Platforms |
|---|---|---|---|---|
| Backend | ghcr.io/nixrtr/nebula-commander-backend:latest | Python 3.13-slim | 8081 | linux/amd64, linux/arm64 |
| Frontend | ghcr.io/nixrtr/nebula-commander-frontend:latest | nginx:alpine | 80 | linux/amd64, linux/arm64 |
| Keycloak | ghcr.io/nixrtr/nebula-commander-keycloak:latest | Keycloak | 8080 | linux/amd64, linux/arm64 |
Architecture
- Frontend (Nginx) – Serves the React SPA and proxies /api/* to the backend. Port 80.
- Backend (FastAPI) – REST API, certificate management, SQLite. Port 8081.
- Persistent volume – SQLite database and Nebula certificates under /var/lib/nebula-commander.
Frontend (Nginx) :80 --> Backend (FastAPI) :8081 --> Volume (db + certs)
Configuration
Use two places:
- Infrastructure – .env: ports, optional JWT secret file path.
- Backend – env.d/backend: all NEBULA_COMMANDER_* variables (database, JWT, OIDC, CORS, SMTP, debug).
See Configuration for full option lists.
With Keycloak (OIDC)
To use Keycloak for login:
docker compose -f docker-compose.yml -f docker-compose-keycloak.yml up -d
Configure OIDC in env.d/backend and optionally use the zero-touch Keycloak setup. Details: Configuration: OIDC.
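To confirm Keycloak is serving the realm, you can fetch its OIDC discovery document (this assumes the realm is named nebula-commander, as in the example issuer URL in env.d/backend, and that KEYCLOAK_PORT is the default 8080):

```shell
# Standard OIDC discovery endpoint; a JSON document indicates the realm is up
curl -fsS http://localhost:8080/realms/nebula-commander/.well-known/openid-configuration
```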
Without Keycloak
Run only the backend and frontend. The backend exposes /api/auth/dev-token when OIDC is not configured (suitable for development only).
2 - NixOS
Run Nebula Commander as a NixOS service by adding the module and enabling the service. You can add the module from a local path (clone) or, when available, from a flake.
Adding the module (path-based, no flake)
Use this when you have the nebula-commander repository on disk (for example under /etc/nixos or a path you manage).
1. Get the repository
Clone or copy the nebula-commander repo so that the path contains both nix/ and backend/:
git clone https://github.com/NixRTR/nebula-commander.git /etc/nixos/nebula-commander
# Or use a path of your choice; the module expects ../../backend relative to nix/module.nix
2. Import the module in your NixOS configuration
In configuration.nix (or a NixOS module you include), add the import and enable the service:
{
imports = [
/etc/nixos/nebula-commander/nix/module.nix
];
services.nebula-commander.enable = true;
}
If you use a different path, use that path in imports, for example ./nebula-commander/nix/module.nix if the repo is in the same directory as your configuration.nix.
3. Optional: set options
Override any of the options (see the table below). The default package builds the backend from the same repo: it copies backend/ from the path relative to nix/module.nix (../../backend), so your clone must have that layout.
services.nebula-commander = {
enable = true;
backendPort = 8081;
databasePath = "/var/lib/nebula-commander/db.sqlite";
certStorePath = "/var/lib/nebula-commander/certs";
jwtSecretFile = null; # or e.g. /run/secrets/nebula-commander-jwt
debug = false;
};
Then rebuild: nixos-rebuild switch (or your usual method).
Adding via a flake
If the nebula-commander repository provides a flake.nix that exposes a NixOS module, you can add it as a flake input and use it in your NixOS configuration.
1. Add the flake input
In your system flake (e.g. flake.nix in /etc/nixos or your config directory):
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
nebula-commander.url = "github:NixRTR/nebula-commander";
# If the repo has a flake, you might use a specific ref:
# nebula-commander.url = "github:NixRTR/nebula-commander/main";
};
outputs = { self, nixpkgs, nebula-commander, ... }: {
nixosConfigurations.yourHost = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
./configuration.nix
nebula-commander.nixosModules.default # exact name depends on what the flake exports
];
};
};
}
The exact attribute (e.g. nebula-commander.nixosModules.default) depends on what the flake exports; check its outputs. If the repository does not yet have a flake.nix, use the path-based import above.
2. Enable the service and optional package
In your configuration.nix (or in the flake’s module list):
services.nebula-commander.enable = true;
# If the flake provides a package, point the service at it:
# services.nebula-commander.package = nebula-commander.packages.${pkgs.system}.default;
Then rebuild: nixos-rebuild switch --flake .#yourHost (or your usual flake command).
Options
All options live under services.nebula-commander:
| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | — | Enable the Nebula Commander service |
| package | package | backend source from repo | Nebula Commander package (backend source). With path-based import, built from ../../backend relative to the module file. |
| port | port | 8080 | Port for the HTTP API when using nginx |
| backendPort | port | 8081 | Port for the FastAPI backend (internal) |
| databasePath | string | /var/lib/nebula-commander/db.sqlite | SQLite database file path |
| certStorePath | string | /var/lib/nebula-commander/certs | Directory for CA and host certificates |
| jwtSecretFile | null or path | null | Path to JWT secret file (e.g. managed by sops-nix). If null, a oneshot service generates /var/lib/nebula-commander/jwt-secret on first boot. |
| debug | bool | false | Enable debug mode |
The module creates a nebula-commander system user and group, tmpfiles for data directories, and (when jwtSecretFile is null) a oneshot service that generates a JWT secret. The main service runs uvicorn with the backend and passes environment variables (database URL, cert store path, port, JWT secret file, debug).
For OIDC and other backend settings not exposed as NixOS options, extend the service environment in your config or use a config file; the backend reads NEBULA_COMMANDER_* from the environment.
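For example, one way to pass extra backend variables is to extend the unit's environment in your NixOS config. This is a sketch: it assumes the module's systemd unit is named nebula-commander, and the variable names are from the example env.d/backend above.

```nix
{
  systemd.services.nebula-commander.environment = {
    NEBULA_COMMANDER_OIDC_ISSUER_URL = "https://auth.example.com/realms/nebula-commander";
    NEBULA_COMMANDER_OIDC_CLIENT_ID = "nebula-commander";
  };
}
```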
3 - Reverse Proxy
Nebula Commander is typically run behind a reverse proxy such as Nginx, Traefik, or Caddy.
The frontend and backend containers listen on plain HTTP inside Docker; your reverse proxy:
- Terminates TLS on port 443 with a certificate
- Proxies requests to the frontend container (and through it to the backend)
- Adds security headers like Strict-Transport-Security (HSTS)
This page shows recommended HSTS settings and example reverse proxy configurations.
HSTS and HTTPS
For best security, configure HTTPS and HSTS on your reverse proxy, not inside the containers.
That lets you choose the right TLS policy for your environment (LAN-only, VPN-only, or Internet-facing).
Recommended baseline HSTS header:
Strict-Transport-Security: max-age=31536000
For public Internet deployments on a DNS-backed domain, you can include subdomains:
Strict-Transport-Security: max-age=31536000; includeSubDomains
Avoid preload and avoid HSTS on bare IPs or non-public hostnames:
- HSTS preload lists require a public, DNS-backed domain
- HSTS on an IP or "internal-only" hostname can cause problems if it is repurposed later
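Once the proxy is serving HTTPS, you can confirm the header is actually emitted (replace the hostname with yours):

```shell
# Inspect response headers for the HSTS policy
curl -sI https://nebula.example.com | grep -i '^strict-transport-security'
```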
The examples below assume you already have TLS certificates (for example from Let’s Encrypt)
and that the Nebula Commander frontend container is reachable as frontend:80 on a Docker network.
Nginx example
Minimal Nginx configuration to put nebula.example.com behind HTTPS with HSTS:
server {
listen 443 ssl http2;
server_name nebula.example.com;
ssl_certificate /path/to/fullchain.pem;
ssl_certificate_key /path/to/privkey.pem;
# HSTS: adjust policy for your environment
add_header Strict-Transport-Security "max-age=31536000" always;
location / {
proxy_pass http://frontend:80;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Change:
- nebula.example.com to your actual hostname
- Certificate paths to where your certs are stored
- frontend:80 if you renamed the frontend service or are not using Docker networking
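After editing, validate and reload Nginx (these commands assume Nginx runs on the host; for a containerized Nginx, prefix them with docker exec into the proxy container):

```shell
nginx -t           # syntax-check the configuration
nginx -s reload    # apply without dropping connections
```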
Traefik (file or dynamic configuration)
Traefik can apply HSTS via a headers middleware and route HTTPS traffic to the frontend service.
Example dynamic configuration (YAML) using the websecure entrypoint:
http:
middlewares:
hsts-headers:
headers:
stsSeconds: 31536000
stsIncludeSubdomains: true
routers:
nebula:
rule: "Host(`nebula.example.com`)"
entryPoints: ["websecure"]
service: nebula-frontend
middlewares:
- hsts-headers
tls:
certResolver: letsencrypt
services:
nebula-frontend:
loadBalancer:
servers:
- url: "http://frontend:80"
Adapt:
- nebula.example.com to your hostname
- websecure and letsencrypt to your Traefik entrypoint and certresolver names
- frontend:80 to match your frontend container name and port
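For Traefik to pick up the dynamic configuration above, its static configuration needs a file provider pointing at it. The path below is an assumption; use wherever you save the dynamic file:

```yaml
# traefik.yml (static configuration)
providers:
  file:
    filename: /etc/traefik/dynamic.yml
    watch: true
```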
Traefik with Docker labels
If you use the Nebula Commander Docker compose stack and Traefik’s Docker provider,
you can attach labels directly to the frontend service. Example:
frontend:
image: ghcr.io/nixrtr/nebula-commander-frontend:latest
container_name: nebula-commander-frontend
restart: unless-stopped
networks:
- nebula-commander
- traefik
labels:
- "traefik.enable=true"
# Router: HTTPS entrypoint and host rule
- "traefik.http.routers.nebula.rule=Host(`nebula.example.com`)"
- "traefik.http.routers.nebula.entrypoints=websecure"
- "traefik.http.routers.nebula.tls.certresolver=letsencrypt"
# Service: forward to container port 80
- "traefik.http.services.nebula-frontend.loadbalancer.server.port=80"
# HSTS middleware
- "traefik.http.middlewares.nebula-hsts.headers.stsSeconds=31536000"
- "traefik.http.middlewares.nebula-hsts.headers.stsIncludeSubdomains=true"
# Attach middleware to router
- "traefik.http.routers.nebula.middlewares=nebula-hsts"
You must also ensure Traefik is on the same Docker network as the frontend service
(for example a shared traefik network).
Rename router (nebula), service (nebula-frontend), middleware (nebula-hsts),
entrypoint (websecure), and certresolver (letsencrypt) to fit your existing Traefik setup.
Caddy example
Caddy can automatically obtain certificates and proxy to the frontend container:
nebula.example.com {
reverse_proxy frontend:80
header {
Strict-Transport-Security "max-age=31536000"
}
}
Change nebula.example.com and frontend:80 as needed. Caddy will handle TLS certificates
for you when properly configured with DNS and ports 80/443 exposed.
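Caddy can check the file before you apply it (the path assumes the default system Caddyfile location; adjust as needed):

```shell
caddy validate --config /etc/caddy/Caddyfile   # syntax and semantic check
caddy reload --config /etc/caddy/Caddyfile     # apply without downtime
```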