
Overview

Opinix Trade uses Docker to containerize its infrastructure services and the application itself. The project includes:
  • docker-compose.yml - Infrastructure services (PostgreSQL, Redis)
  • Dockerfile - Application containerization
Docker provides consistent development environments and simplifies deployment.

Infrastructure Services

Docker Compose Configuration

The docker-compose.yml file defines two essential services:
version: "3.8"

services:
  timescaledb:
    image: timescale/timescaledb:latest-pg12
    container_name: timescaledb
    ports:
      - 5432:5432
    restart: always
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: repo
    volumes:
      - timescale-data:/var/lib/postgresql/data
    
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    restart: always
    volumes:
      - cache:/data

volumes:
  timescale-data:
  cache:
    driver: local

Services Breakdown

TimescaleDB

Image: timescale/timescaledb:latest-pg12

TimescaleDB is a PostgreSQL extension optimized for time-series data, perfect for tracking order book history and price movements.
Configuration:
  • Port: 5432 (mapped to host)
  • User: dev
  • Password: dev
  • Database: repo
  • Volume: timescale-data (persistent storage)
  • Restart: Always
Why TimescaleDB?
  • Time-series optimization for historical data
  • Built on PostgreSQL (full SQL support)
  • Efficient for storing order history
  • Better compression for time-series data
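As an illustration of the time-series fit, tables can be converted into hypertables for chunked, compressed storage. The table and column names below are hypothetical, not taken from the project's actual schema:

```shell
# Sketch: create a hypothetical hypertable for order history
# (table/column names are illustrative, not the project's real schema)
docker exec -i timescaledb psql -U dev -d repo <<'SQL'
CREATE TABLE IF NOT EXISTS order_history (
  time      TIMESTAMPTZ NOT NULL,
  market_id TEXT        NOT NULL,
  price     NUMERIC     NOT NULL
);
SELECT create_hypertable('order_history', 'time', if_not_exists => TRUE);
SQL
```

`create_hypertable` partitions the table into time-based chunks, which is what makes later compression and time-range queries efficient.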
Redis

Image: redis:latest

Redis serves multiple purposes in Opinix Trade:
  • Order queue (BullMQ)
  • Pub/sub for WebSocket messages
  • Caching layer
Configuration:
  • Port: 6379 (mapped to host)
  • Volume: cache (persistent storage)
  • Restart: Always
Use Cases:
  • Queue management for asynchronous order processing
  • Real-time pub/sub for WebSocket updates
  • Session storage
  • Rate limiting
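As a quick illustration of the pub/sub role, a message can be published straight from the CLI; the channel name here is made up for the example:

```shell
# Publish a test message on a made-up channel; any subscriber would receive it.
# redis-cli prints the number of subscribers that got the message (0 if none).
docker-compose exec redis redis-cli PUBLISH orderbook:test '{"price": 42}'
```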

Starting Infrastructure

1. Start services

Launch all infrastructure services:
docker-compose up -d
The -d flag runs containers in detached mode (background).

2. Verify services are running

Check container status:
docker-compose ps
Expected output:
NAME         COMMAND                  SERVICE       STATUS       PORTS
timescaledb  "docker-entrypoint.s…"   timescaledb   Up 2 mins    0.0.0.0:5432->5432/tcp
redis        "docker-entrypoint.s…"   redis         Up 2 mins    0.0.0.0:6379->6379/tcp

3. Test connections

Connect to the database from inside the container:
docker exec -it timescaledb psql -U dev -d repo
Or using psql locally:
psql -h localhost -U dev -d repo
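Redis can be checked the same way:

```shell
# PING should answer PONG if Redis is up
docker-compose exec redis redis-cli ping
```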

Application Dockerfile

The main Dockerfile containerizes the client and server portions of Opinix Trade:
FROM node:20-alpine

ARG DATABASE_URL

WORKDIR /usr/src/app

COPY packages ./packages
COPY apps/client ./apps/client
COPY apps/server ./apps/server
COPY package.json .
COPY package-lock.json .
COPY turbo.json .

RUN npm install

RUN cd packages/db && DATABASE_URL=$DATABASE_URL npx prisma generate

WORKDIR /usr/src/app/

EXPOSE 3000 3001

RUN npm run build

CMD ["npm", "run", "dev"]

Dockerfile Breakdown

FROM node:20-alpine
  • Uses Node.js 20 on Alpine Linux
  • Alpine provides a minimal image (~5MB base)
  • Reduces container size and attack surface
ARG DATABASE_URL
Accepts DATABASE_URL as a build argument for Prisma client generation.
COPY packages ./packages
COPY apps/client ./apps/client
COPY apps/server ./apps/server
COPY package.json .
COPY package-lock.json .
COPY turbo.json .
Copies only necessary files for the monorepo:
  • Core packages (db, order-queue, types, etc.)
  • Client and server apps
  • Root configuration files
Services (engine, wss) are not included in this Dockerfile. Consider separate containers for production.
RUN npm install
RUN cd packages/db && DATABASE_URL=$DATABASE_URL npx prisma generate
RUN npm run build
Build steps:
  1. Install all dependencies
  2. Generate Prisma client with provided DATABASE_URL
  3. Build all packages and apps using Turborepo
EXPOSE 3000 3001
  • Port 3000: Next.js client
  • Port 3001: Express server

Building the Docker Image

Build the Docker image for development:
docker build \
  --build-arg DATABASE_URL="postgresql://dev:dev@host.docker.internal:5432/repo?schema=public" \
  -t opinix-trade:dev \
  .
Use host.docker.internal to reach services on the host from inside the container. It is built in on Docker Desktop; on Linux, pass --add-host=host.docker.internal:host-gateway when running the container.

Running the Container

docker run -p 3000:3000 -p 3001:3001 opinix-trade:dev
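In practice the container also needs its runtime configuration. A sketch passing DATABASE_URL and REDIS_URI (the variable names used by the full-stack compose file in this project):

```shell
docker run \
  -e DATABASE_URL="postgresql://dev:dev@host.docker.internal:5432/repo?schema=public" \
  -e REDIS_URI="redis://host.docker.internal:6379" \
  -p 3000:3000 -p 3001:3001 \
  opinix-trade:dev
```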

Docker Compose for Full Stack

Create a complete docker-compose setup including the application:
docker-compose.full.yml
version: "3.8"

services:
  timescaledb:
    image: timescale/timescaledb:latest-pg12
    container_name: timescaledb
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: repo
    volumes:
      - timescale-data:/var/lib/postgresql/data
    
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    volumes:
      - cache:/data

  app:
    build:
      context: .
      args:
        - DATABASE_URL=postgresql://dev:dev@timescaledb:5432/repo?schema=public
    ports:
      - 3000:3000
      - 3001:3001
    depends_on:
      - timescaledb
      - redis
    environment:
      - DATABASE_URL=postgresql://dev:dev@timescaledb:5432/repo?schema=public
      - REDIS_URI=redis://redis:6379
    env_file:
      - .env

volumes:
  timescale-data:
  cache:
Start the full stack:
docker-compose -f docker-compose.full.yml up

Production Considerations

Multi-stage Builds

Use a multi-stage Dockerfile to separate build and runtime, so the final image ships without build tooling or dev dependencies (a sketch; actual output paths depend on the Turborepo build configuration):
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine AS production
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["npm", "run", "start"]
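Pair this with a .dockerignore so local artifacts never enter the build context; the entries below are typical for a Node monorepo and should be adjusted for this repo:

```
node_modules
**/node_modules
.git
.env
*.log
```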

Separate Services

Create separate containers for each service:
  • Client container
  • Server container
  • Engine container
  • WebSocket container
Benefits:
  • Independent scaling
  • Easier updates
  • Better resource allocation
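A sketch of what that compose file could look like. The per-service Dockerfile paths are hypothetical; the repo currently has a single root Dockerfile:

```yaml
services:
  server:
    build:
      context: .
      dockerfile: apps/server/Dockerfile   # hypothetical path
    ports:
      - 3001:3001
  engine:
    build:
      context: .
      dockerfile: apps/engine/Dockerfile   # hypothetical path
  wss:
    build:
      context: .
      dockerfile: apps/wss/Dockerfile      # hypothetical path
```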

Health Checks

Add health checks to containers:
HEALTHCHECK --interval=30s --timeout=3s \
  CMD node healthcheck.js || exit 1
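In docker-compose, health checks can also gate startup ordering. A sketch for the database service using pg_isready, with the app waiting for a healthy database:

```yaml
services:
  timescaledb:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev -d repo"]
      interval: 10s
      timeout: 3s
      retries: 5
  app:
    depends_on:
      timescaledb:
        condition: service_healthy
```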

Security

Production security practices:
  • Run as non-root user
  • Scan for vulnerabilities
  • Use specific image tags
  • Minimal base images
  • No secrets in images
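For example, the official node images already ship an unprivileged user that the Dockerfile can switch to:

```dockerfile
FROM node:20-alpine
# node:alpine images include a non-root "node" user
USER node
WORKDIR /home/node/app
```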

Docker Commands Reference

# Start services
docker-compose up -d

# Stop services
docker-compose down

# Restart a service
docker-compose restart timescaledb

# View logs
docker-compose logs -f

# View logs for specific service
docker-compose logs -f redis

# Execute command in container
docker-compose exec timescaledb bash

# List volumes
docker volume ls

# Remove all volumes (WARNING: deletes data)
docker-compose down -v

# Backup database
docker exec timescaledb pg_dump -U dev repo > backup.sql

# Restore database
docker exec -i timescaledb psql -U dev repo < backup.sql

# Clear Redis cache
docker-compose exec redis redis-cli FLUSHALL

# Check container status
docker-compose ps

# View resource usage
docker stats

# Inspect container
docker inspect timescaledb

# View container IP
docker inspect timescaledb | grep IPAddress

# Access PostgreSQL shell
docker-compose exec timescaledb psql -U dev -d repo

# Access Redis CLI
docker-compose exec redis redis-cli

Troubleshooting

Port Already in Use

Error: Bind for 0.0.0.0:5432 failed: port is already allocated
Solutions:
  1. Stop local PostgreSQL: sudo service postgresql stop
  2. Change port mapping in docker-compose.yml:
    ports:
      - 5433:5432
    
  3. Find and kill process using port: lsof -ti:5432 | xargs kill -9

Container Fails to Start

Solutions:
  1. Check logs: docker-compose logs timescaledb
  2. Remove and recreate: docker-compose down && docker-compose up -d
  3. Remove volumes: docker-compose down -v (WARNING: deletes data)
  4. Check disk space: df -h

Database Connection Issues

Error: Can't reach database server
Solutions:
  1. Ensure container is running: docker-compose ps
  2. Wait for initialization (first start takes time)
  3. Use correct hostname:
    • From host: localhost:5432
    • From container: timescaledb:5432
  4. Check DATABASE_URL format
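To make the hostname point concrete, the URL only differs in its host segment (credentials taken from the compose file above):

```shell
# From the host machine:
export DATABASE_URL="postgresql://dev:dev@localhost:5432/repo?schema=public"
# From another container on the compose network, use the service name as host:
export DATABASE_URL="postgresql://dev:dev@timescaledb:5432/repo?schema=public"
```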

Build Failures

Solutions:
  1. Clear Docker cache: docker builder prune
  2. Build without cache: docker build --no-cache .
  3. Check available disk space
  4. Ensure DATABASE_URL is provided: --build-arg DATABASE_URL=...

Next Steps