# Self-Hosted

Komand is designed to be self-hosted on your own infrastructure using Docker. This gives you full control over your data, deployment environment, and network configuration.

Prerequisites:

  • Docker and Docker Compose v2+
  • 4 GB RAM minimum (8 GB recommended for production)
  • PostgreSQL 17 (included in Docker Compose, or bring your own)
Clone the repository:

```sh
git clone https://github.com/komand-ai/komand-server.git
cd komand-server
```

Create a .env file in the docker/ directory:

```ini
# Database
POSTGRES_DB=komand
POSTGRES_USER=komand
POSTGRES_PASSWORD=your-secure-password

# Environment
DOTNET_ENVIRONMENT=Production
ASPNETCORE_ENVIRONMENT=Production

# Connection string (must match POSTGRES_* values)
ORLEANS_CONNECTION_STRING=Host=postgres;Database=komand;Username=komand;Password=your-secure-password

# Logging
SEQ_URL=http://seq:80

# CORS (your domain)
Cors__AllowedOrigins=https://komand.your-domain.com

# JWT (MUST change in production — minimum 32 characters)
Jwt__SecretKey=replace-this-with-a-secure-random-string-min-32-chars
```
Start the stack:

```sh
docker compose -f docker/docker-compose.yml up -d
```
Verify that everything is running:

```sh
# Check all services are healthy
docker compose -f docker/docker-compose.yml ps

# Hit the health endpoint
curl http://localhost:5000/health
```

The dashboard is available at http://localhost:5000.
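For scripted deployments, you can poll the health endpoint until the gateway comes up rather than checking by hand. A minimal sketch, assuming the endpoint above; the `wait_healthy` helper name and retry policy are mine, not part of Komand:

```shell
#!/bin/sh
# wait_healthy URL ATTEMPTS — poll a health endpoint until it returns HTTP 200.
wait_healthy() {
  url=$1
  attempts=$2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # curl prints only the status code; connection failures yield "000"
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || echo 000)
    if [ "$code" = "200" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

wait_healthy http://localhost:5000/health 3 || echo "gateway not healthy yet"
```

A deploy script can call `wait_healthy` after `docker compose up -d` and abort (or roll back) if it returns non-zero.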

The Docker Compose stack runs four services with enforced resource limits:

| Service  | Image                       | Ports                    | Memory | CPU | Health Check     |
|----------|-----------------------------|--------------------------|--------|-----|------------------|
| postgres | postgres:17-alpine          | 5432                     | 512 MB | 1.0 | pg_isready       |
| seq      | datalust/seq:latest         | 5341 (UI), 5342 (ingest) | 256 MB | 0.5 | —                |
| silo     | Custom (Dockerfile.silo)    | 11111, 30000             | 1 GB   | 2.0 | TCP socket 11111 |
| gateway  | Custom (Dockerfile.gateway) | 5000                     | 512 MB | 1.0 | HTTP /health     |

Services start in dependency order with health check gates:

```
postgres (healthy) → silo (healthy) → gateway
seq (parallel)
```

The silo waits for PostgreSQL to be ready before starting. The gateway waits for the silo to be healthy before starting. Seq starts independently.
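In Compose terms, this ordering is expressed with health-gated `depends_on` entries along these lines (a sketch; the repository's compose file is the source of truth):

```yaml
services:
  silo:
    depends_on:
      postgres:
        condition: service_healthy
  gateway:
    depends_on:
      silo:
        condition: service_healthy
```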

| Volume        | Purpose                     |
|---------------|-----------------------------|
| postgres_data | PostgreSQL data persistence |
| seq_data      | Structured log data         |

On first startup, PostgreSQL runs the init script docker/init-db/01-orleans-schema.sql which creates:

Orleans tables:

  • OrleansMembershipTable — silo cluster membership
  • OrleansMembershipVersionTable — cluster version tracking
  • OrleansStorage — grain state persistence (indexed by grain ID hash)
  • OrleansRemindersTable — durable reminders for CronGrain
  • OrleansQuery — stored procedures for Orleans operations

Komand tables:

  • komand_audit_log — audit trail with indexes on timestamp, agent_id, actor_id

Orleans manages its own schema — you don’t need to run migrations manually.

Both application containers (silo and gateway) run with security hardening:

  • Non-root user — runs as komand user inside the container
  • Resource limits — CPU and memory caps prevent resource exhaustion
  • Multi-stage builds — production images contain only the .NET runtime, not the SDK
  • Health checks — unhealthy containers are flagged for restart
  • No host network — containers use an isolated Docker network

The gateway Dockerfile uses a three-stage build:

Stage 1: Node.js 22 → Build React frontend (npm ci + npm run build)
Stage 2: .NET SDK 10.0-preview → Build Gateway + copy frontend dist to wwwroot/
Stage 3: ASP.NET Core 10.0-preview runtime → Production image with curl for health checks

The silo Dockerfile uses a two-stage build:

Stage 1: .NET SDK 10.0-preview → Build Silo
Stage 2: ASP.NET Core 10.0-preview runtime → Production image
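The silo build could look roughly like this. This is a sketch under assumptions: the project path `src/Komand.Silo`, image tags, and the `komand` user setup are inferred from the stages and hardening notes above, not copied from the actual Dockerfile:

```dockerfile
# Stage 1: build with the SDK (paths and tags are illustrative)
FROM mcr.microsoft.com/dotnet/sdk:10.0-preview AS build
WORKDIR /src
COPY . .
RUN dotnet publish src/Komand.Silo/Komand.Silo.csproj -c Release -o /app/publish

# Stage 2: runtime-only image, running as a non-root user
FROM mcr.microsoft.com/dotnet/aspnet:10.0-preview AS runtime
WORKDIR /app
COPY --from=build /app/publish .
RUN adduser --disabled-password --gecos "" komand && chown -R komand:komand /app
USER komand
EXPOSE 11111 30000
ENTRYPOINT ["dotnet", "Komand.Silo.dll"]
```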

Komand runs HTTP internally. Use a reverse proxy for TLS termination:

nginx:

```nginx
server {
    listen 443 ssl http2;
    server_name komand.your-domain.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # SignalR WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Caddy (automatic TLS):

```
komand.your-domain.com {
    reverse_proxy localhost:5000
}
```

The default development key must be replaced in production. Generate a secure key:

```sh
openssl rand -base64 48
```

Set it as Jwt__SecretKey in your .env file. The key must be at least 32 characters.
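If you want to sanity-check a candidate key before deploying, a quick shell check works (the 32-character minimum comes from the requirement above):

```shell
#!/bin/sh
# Generate a candidate key and verify it meets the 32-character minimum.
key=$(openssl rand -base64 48)   # 48 random bytes → 64 base64 characters
if [ "${#key}" -ge 32 ]; then
  echo "key length OK (${#key} chars)"
else
  echo "key too short" >&2
  exit 1
fi
```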

Only expose the reverse proxy port (443). All other ports should be internal-only:

| Port      | Access                               |
|-----------|--------------------------------------|
| 443       | Public (via reverse proxy)           |
| 5000      | Internal only (Gateway)              |
| 11111     | Internal only (Orleans silo-to-silo) |
| 30000     | Internal only (Orleans client-to-silo) |
| 5432      | Internal only (PostgreSQL)           |
| 5341/5342 | Internal only (Seq)                  |
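One way to keep the gateway port off public interfaces is to bind its published port to loopback, for example via a compose override (a sketch; the file name and the stack's current port mapping are assumptions):

```yaml
# docker/docker-compose.override.yml
services:
  gateway:
    ports:
      - "127.0.0.1:5000:5000"
```

The reverse proxy on the same host can still reach `localhost:5000`, but the port is no longer reachable from other machines.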

In production, CORS defaults to no allowed origins if not configured. You must explicitly set:

```ini
Cors__AllowedOrigins=https://komand.your-domain.com
```

Adjust the Docker Compose resource limits based on your workload:

| Service    | Small (< 10 agents) | Medium (10–100 agents) | Large (100+ agents) |
|------------|---------------------|------------------------|---------------------|
| Silo       | 1 GB / 2 CPU        | 2 GB / 4 CPU           | 4 GB / 8 CPU        |
| Gateway    | 512 MB / 1 CPU      | 1 GB / 2 CPU           | 2 GB / 4 CPU        |
| PostgreSQL | 512 MB / 1 CPU      | 1 GB / 2 CPU           | 2 GB / 4 CPU        |
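To apply, say, the medium profile, the limits can be overridden in a compose file along these lines (a sketch using the Compose `deploy.resources` schema and the service names from the stack above):

```yaml
services:
  silo:
    deploy:
      resources:
        limits:
          memory: 2g
          cpus: "4.0"
  gateway:
    deploy:
      resources:
        limits:
          memory: 1g
          cpus: "2.0"
```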

Back up the database regularly:

```sh
# One-time backup
docker exec komand-postgres pg_dump -U komand komand > backup-$(date +%Y%m%d).sql

# Restore from backup
docker exec -i komand-postgres psql -U komand komand < backup-20260223.sql
```

Add a cron job for daily backups:

```sh
0 2 * * * docker exec komand-postgres pg_dump -U komand komand | gzip > /backups/komand-$(date +\%Y\%m\%d).sql.gz
```
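A backup you cannot restore is not a backup, so it is worth checking that each archive is intact and actually looks like a dump. A small sketch (the `check_backup` helper is mine, not part of Komand):

```shell
#!/bin/sh
# check_backup FILE — verify a gzipped pg_dump is readable and non-trivial.
check_backup() {
  # 1. gzip integrity
  gzip -t "$1" || return 1
  # 2. a plain-format pg_dump should contain CREATE or COPY statements
  gzip -cd "$1" | grep -qE '^(CREATE|COPY) ' || return 1
  echo "backup OK: $1"
}
```

Run it against the newest archive, e.g. `check_backup "$(ls -t /backups/komand-*.sql.gz | head -n 1)"`, as a follow-up cron job or part of the backup script itself.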
| Data                                   | Location                 | Method        |
|----------------------------------------|--------------------------|---------------|
| Grain state (agents, sessions, skills) | PostgreSQL               | pg_dump       |
| Audit logs                             | PostgreSQL               | pg_dump       |
| Structured logs                        | Seq volume               | Volume backup |
| Configuration                          | .env + appsettings.json  | File backup   |

To use an existing PostgreSQL instance instead of the containerised one:

  1. Remove the postgres service from docker-compose.yml
  2. Remove the postgres_data volume
  3. Update ORLEANS_CONNECTION_STRING to point to your database
  4. Import the schema: psql -f docker/init-db/01-orleans-schema.sql
  5. Ensure the database user has full schema permissions
Example connection string:

```ini
ORLEANS_CONNECTION_STRING=Host=your-db.example.com;Port=5432;Database=komand;Username=komand;Password=your-password;SslMode=Require
```

The /health endpoint checks:

  • Orleans silo — pings the SkillRegistryGrain with a 5-second timeout
  • PostgreSQL — connection health check (if connection string is configured)

Returns HTTP 200 (healthy) or HTTP 503 (unhealthy).

Seq provides a web UI at port 5341 for searching and analysing structured logs. All log entries include:

  • Correlation ID (X-Request-Id)
  • Application name (Komand.Silo or Komand.Gateway)
  • Grain activation/deactivation events
  • API request durations
Suggested monitoring thresholds:

| Metric                 | Alert Threshold    | Source          |
|------------------------|--------------------|-----------------|
| /health status         | Non-200 for > 30s  | Health endpoint |
| Container restarts     | > 3 in 10 minutes  | Docker          |
| PostgreSQL connections | > 80% pool         | PostgreSQL      |
| Memory usage           | > 85% of limit     | Docker stats    |
| Grain call timeouts    | > 5% error rate    | Seq logs        |
To upgrade, pull the latest code and rebuild:

```sh
cd komand-server
git pull
docker compose -f docker/docker-compose.yml build
docker compose -f docker/docker-compose.yml up -d
```

Orleans handles grain state migration across versions automatically. No manual migration steps are required.

For zero-downtime deployments, use rolling updates with a multi-silo configuration. Orleans supports adding and removing silos from a running cluster without service interruption.