Port Daddy in a 15-Service Monorepo

Scan your entire monorepo, start the full stack in dependency order, and finally stop juggling 15 terminal tabs.

14 min read · Tutorial 4 of 5 · Advanced

You work at PaymentCo. Fifteen services, counting the Postgres, Redis, NATS, and Elasticsearch instances underneath them. And every developer on the team has a different way of starting it all -- a shell script here, a Docker Compose file there, and the new hire is still waiting for someone to tell them which port the API runs on.

The Monorepo Port Nightmare

Here is what monorepo development looks like without orchestration:

Monday:
  "API is on 3000, frontend on 3001, worker on 3002"

Tuesday:
  "Wait, Dave changed the API to 4000 in his branch"
  "The worker is failing because it's hardcoded to call localhost:3000"

Wednesday:
  "Docker Compose is fighting with my local postgres"
  "Who left a zombie redis-server on port 6379?"

Thursday:
  "I just need the frontend. Why am I starting all 15 services?"
  "Elasticsearch is eating 4GB of RAM and I'm not even using search"

Friday:
  "I give up. I'm just going to work on the mobile app."

The core problems are always the same:

  1. Ports are hardcoded in configs, scripts, and application code
  2. There is no single source of truth for which service runs where
  3. Nothing manages dependencies, so services start before the things they need
  4. Zombie processes squat on ports between sessions
  5. Startup is all-or-nothing -- you can't run just the slice you're working on

Port Daddy solves every single one of these. Let's walk through it.

Scanning Your Monorepo

First, let Port Daddy understand what you're working with. The pd scan command recursively walks your directory tree and detects every service it finds:

$ cd ~/code/paymentco
$ pd scan

Scanning /Users/you/code/paymentco...
Found 15 services in 3.2s

  services/api           Next.js (App Router)     needs: [postgres, redis]
  services/admin         Next.js                  needs: [api]
  services/dashboard     Vite + React             needs: [api]
  services/worker        Node.js (custom)         needs: [postgres, redis, nats]
  services/scheduler     Node.js (custom)         needs: [postgres, redis]
  services/webhooks      Express                  needs: [postgres, nats]
  services/search-sync   Node.js (custom)         needs: [postgres, elasticsearch]
  services/email         Fastify                  needs: [redis, nats]
  services/auth          Express                  needs: [postgres, redis]
  services/billing       NestJS                   needs: [postgres, redis, nats]
  services/notifications Hono                     needs: [redis, nats]
  infra/postgres         PostgreSQL 16            standalone
  infra/redis            Redis 7                  standalone
  infra/nats             NATS                     standalone
  infra/elasticsearch    Elasticsearch 8          standalone

Wrote .portdaddyrc (15 services, 22 dependencies)

Port Daddy detects 60+ frameworks -- Next.js, Express, Fastify, NestJS, Hono, Django, Rails, Spring Boot, Go, Rust, and many more. It reads package.json, Cargo.toml, go.mod, requirements.txt, and other manifest files to determine the framework and its default start command.
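Detection like this usually keys off manifest contents. Here is a minimal TypeScript sketch of the idea, assuming detection is driven by dependency names in package.json -- the marker list and function are hypothetical, not Port Daddy's source:

```typescript
// Hypothetical sketch of manifest-based framework detection.
// The real detector covers 60+ frameworks and other manifest formats.
type PackageJson = { dependencies?: Record<string, string> };

// Order matters: more specific markers are checked before generic ones
// (a NestJS app often pulls in express transitively, for example).
const FRAMEWORK_MARKERS: [dep: string, framework: string][] = [
  ["next", "Next.js"],
  ["@nestjs/core", "NestJS"],
  ["fastify", "Fastify"],
  ["hono", "Hono"],
  ["express", "Express"],
];

function detectFramework(pkg: PackageJson): string {
  const deps = pkg.dependencies ?? {};
  for (const [dep, framework] of FRAMEWORK_MARKERS) {
    if (dep in deps) return framework;
  }
  return "Node.js (custom)"; // fallback when no marker matches
}
```

The same idea extends to Cargo.toml, go.mod, and requirements.txt: parse the manifest, match known dependency names, fall back to a generic runtime label.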

The generated .portdaddyrc captures your entire architecture:

{
  "project": "paymentco",
  "services": {
    "postgres": {
      "cmd": "pg_ctl -D /usr/local/var/postgresql@16 start",
      "health": "pg_isready -h localhost -p ${PORT}",
      "shutdownCmd": "pg_ctl -D /usr/local/var/postgresql@16 stop"
    },
    "redis": {
      "cmd": "redis-server --port ${PORT}",
      "health": "redis-cli -p ${PORT} ping"
    },
    "nats": {
      "cmd": "nats-server -p ${PORT}",
      "health": "curl -sf http://localhost:${PORT}/healthz"
    },
    "elasticsearch": {
      "cmd": "elasticsearch -E http.port=${PORT}",
      "health": "curl -sf http://localhost:${PORT}/_cluster/health",
      "healthTimeout": 30
    },
    "api": {
      "cmd": "npm run dev -- --port ${PORT}",
      "cwd": "services/api",
      "needs": ["postgres", "redis"],
      "healthPath": "/health"
    },
    "dashboard": {
      "cmd": "npm run dev -- --port ${PORT}",
      "cwd": "services/dashboard",
      "needs": ["api"],
      "healthPath": "/"
    }
  }
}

Notice there are zero hardcoded ports anywhere. Every ${PORT} reference is filled in dynamically by Port Daddy at start time. Commit this file to your repo -- every developer gets the same config.

Starting the Whole Stack

Now the moment of truth. One command to start everything:

$ pd up

[paymentco] Starting 15 services...

  [postgres]        Starting on port 5532...
  [redis]           Starting on port 6479...
  [nats]            Starting on port 4322...
  [elasticsearch]   Starting on port 9300...

  [postgres]        Health check passed (0.3s)
  [redis]           Health check passed (0.1s)
  [nats]            Health check passed (0.2s)
  [elasticsearch]   Health check passed (8.4s)

  [auth]            Starting on port 3100...
  [api]             Starting on port 3101...
  [worker]          Starting on port 3102...
  [scheduler]       Starting on port 3103...
  [webhooks]        Starting on port 3104...
  [billing]         Starting on port 3105...
  [search-sync]     Starting on port 3106...
  [email]           Starting on port 3107...
  [notifications]   Starting on port 3108...

  [auth]            Health check passed (1.2s)
  [api]             Health check passed (2.1s)
  [worker]          Health check passed (0.8s)
  [billing]         Health check passed (1.9s)

  [admin]           Starting on port 3109...
  [dashboard]       Starting on port 3110...

  [admin]           Health check passed (1.4s)
  [dashboard]       Health check passed (1.1s)

All 15 services healthy. Total startup: 14.3s

What just happened:

  1. Dependency resolution -- Port Daddy built a directed acyclic graph (DAG) from the needs fields
  2. Parallel startup -- Independent services (postgres, redis, nats, elasticsearch) started simultaneously
  3. Health gate -- Services that depend on infra waited until health checks passed
  4. Second wave -- API-level services started after infra was healthy
  5. Third wave -- Frontend services started after the API was healthy
  6. Port injection -- Every service received its deterministic port via ${PORT}

Compare this to your old start-everything.sh script that was 200 lines of sleep 5 calls and hardcoded ports.

Intelligent Dependency Management

The needs field is the heart of the orchestrator. Port Daddy resolves the full dependency graph before starting anything:

{
  "api": {
    "needs": ["postgres", "redis"]
  },
  "dashboard": {
    "needs": ["api"]
  },
  "billing": {
    "needs": ["postgres", "redis", "nats"]
  }
}

From this, Port Daddy computes the start order:

Wave 1: postgres, redis, nats, elasticsearch  (no dependencies)
Wave 2: api, auth, worker, scheduler, webhooks, billing, search-sync, email, notifications
Wave 3: admin, dashboard  (depend on api)

Within each wave, services start in parallel. Between waves, Port Daddy waits for every health check to pass before moving on.
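The wave computation described above can be sketched in a few lines: repeatedly peel off the services whose dependencies have all started. This is an illustrative TypeScript implementation of the layering idea, not Port Daddy's actual source:

```typescript
type Graph = Record<string, string[]>; // service -> needs

// Group services into start waves: each wave contains every service whose
// dependencies are all satisfied by earlier waves. If a pass makes no
// progress, the graph must contain a cycle.
function computeWaves(graph: Graph): string[][] {
  const waves: string[][] = [];
  const started = new Set<string>();
  let remaining = Object.keys(graph);
  while (remaining.length > 0) {
    const wave = remaining.filter((s) =>
      graph[s].every((dep) => started.has(dep))
    );
    if (wave.length === 0) throw new Error("circular dependency");
    for (const s of wave) started.add(s);
    remaining = remaining.filter((s) => !started.has(s));
    waves.push(wave);
  }
  return waves;
}
```

Feeding it the example graph yields exactly the three waves shown above: infra first, API-level services second, frontends third.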

If you create a circular dependency, Port Daddy catches it immediately:

$ pd up
Error: Circular dependency detected: api -> billing -> api
Fix the "needs" chain in .portdaddyrc before starting.

Starting Individual Services

You don't always need all 15 services. If you're working on the dashboard, you only need the dashboard plus whatever it depends on:

$ pd up --service dashboard

[paymentco] Resolving dependencies for: dashboard
  dashboard -> api -> postgres, redis

Starting 4 services...

  [postgres]   Starting on port 5532...  Health check passed (0.3s)
  [redis]      Starting on port 6479...  Health check passed (0.1s)
  [api]        Starting on port 3101...  Health check passed (2.1s)
  [dashboard]  Starting on port 3110...  Health check passed (1.1s)

4 of 15 services started. Skipped: nats, elasticsearch, worker, scheduler,
  webhooks, billing, search-sync, email, notifications, auth, admin

Port Daddy walks the dependency tree backward from your target service and starts exactly what's needed -- nothing more. You saved yourself from launching elasticsearch (4GB of RAM) and NATS (not needed for dashboard work) while still getting a working API with its database.
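The backward walk is a straightforward transitive closure over the needs graph. An illustrative sketch:

```typescript
type Graph = Record<string, string[]>; // service -> needs

// Collect the target services plus everything they transitively need.
// Using a Set deduplicates shared dependencies automatically.
function closure(graph: Graph, targets: string[]): Set<string> {
  const needed = new Set<string>();
  const stack = [...targets];
  while (stack.length > 0) {
    const s = stack.pop()!;
    if (needed.has(s)) continue;
    needed.add(s);
    stack.push(...(graph[s] ?? []));
  }
  return needed;
}
```

On the example graph, `closure(graph, ["dashboard"])` yields exactly the four services started above; everything else stays skipped.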

You can start multiple specific services too:

# Start just the billing and webhooks stacks
$ pd up --service billing --service webhooks

[paymentco] Resolving dependencies for: billing, webhooks
  billing  -> postgres, redis, nats
  webhooks -> postgres, nats

Starting 5 services (deduplicated)...

Custom Health Checks

Different services need different health checks. Port Daddy supports three types:

HTTP Health Checks (default)

For web services, specify a path that returns 200:

{
  "api": {
    "healthPath": "/health"
  }
}

Port Daddy polls http://localhost:{port}/health until it gets a 200 status code.

Command Health Checks

For databases and infrastructure, use a shell command:

{
  "postgres": {
    "health": "pg_isready -h localhost -p ${PORT}"
  },
  "redis": {
    "health": "redis-cli -p ${PORT} ping"
  },
  "elasticsearch": {
    "health": "curl -sf http://localhost:${PORT}/_cluster/health",
    "healthTimeout": 30
  },
  "nats": {
    "health": "curl -sf http://localhost:${PORT}/healthz"
  }
}

The command is re-executed every second until it returns exit code 0 or the timeout is reached.
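That retry loop is simple to picture. A hedged TypeScript sketch of the behavior, running the command through `sh -c` with `spawnSync` (the function name and defaults are assumptions, not Port Daddy's internals):

```typescript
import { spawnSync } from "node:child_process";

// Re-run a shell command until it exits 0 or the timeout elapses.
// Returns true if the service became healthy in time.
function waitHealthy(cmd: string, timeoutSec = 10, intervalMs = 1000): boolean {
  const deadline = Date.now() + timeoutSec * 1000;
  while (Date.now() < deadline) {
    const result = spawnSync("sh", ["-c", cmd]);
    if (result.status === 0) return true;
    // Simple synchronous sleep until the next poll interval.
    Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, intervalMs);
  }
  return false;
}
```

The same shape works for HTTP checks by swapping the shell command for an HTTP request that must return 200.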

TCP Health Checks

For services that just need a port to be open:

{
  "custom-tcp-service": {
    "healthMode": "tcp"
  }
}

Health Timeout

Slow services like Elasticsearch can take 30 seconds or more to start. Set a longer timeout:

{
  "elasticsearch": {
    "health": "curl -sf http://localhost:${PORT}/_cluster/health",
    "healthTimeout": 60
  }
}

If a health check exceeds its timeout, Port Daddy reports the failure and continues starting other services that don't depend on it.

Environment Variables and Service Discovery

When Port Daddy starts a service, it injects environment variables for every other running service. Your application code never needs to hardcode a port number:

# Environment injected into the "api" service:
PORT=3101                              # This service's own port
PORT_postgres=5532                     # Postgres port
PORT_redis=6479                        # Redis port
PORT_nats=4322                         # NATS port
PORT_elasticsearch=9300                # Elasticsearch port
PORTDADDY_PROJECT=paymentco            # Project name
PORTDADDY_SERVICE=api                  # This service's name

In your application code, reference these instead of hardcoded values:

// services/api/src/config.ts
export const config = {
  port: process.env.PORT,
  database: {
    host: 'localhost',
    port: parseInt(process.env.PORT_postgres || '5432'),
  },
  redis: {
    host: 'localhost',
    port: parseInt(process.env.PORT_redis || '6379'),
  },
  nats: {
    url: `nats://localhost:${process.env.PORT_nats || '4222'}`,
  },
};

The fallback values (5432, 6379, 4222) are the standard default ports, so your code still works when running outside of Port Daddy. Inside the orchestrator, the PORT_* variables always point to the correct dynamically assigned port.

You can also define custom environment variables per service:

{
  "api": {
    "env": {
      "NODE_ENV": "development",
      "LOG_LEVEL": "debug",
      "DATABASE_URL": "postgresql://localhost:${PORT_postgres}/paymentco_dev"
    }
  }
}
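Conceptually, injection plus expansion is one small step: build the PORT_* map, then substitute ${...} references inside the custom values. A sketch under that assumption (buildEnv is a hypothetical name; the real injector may differ):

```typescript
// Build the environment for one service: its own PORT, the PORT_* map for
// every other service, the PORTDADDY_* metadata, and custom values with
// ${...} references expanded against what was just built.
function buildEnv(
  service: string,
  project: string,
  ports: Record<string, number>,
  custom: Record<string, string> = {}
): Record<string, string> {
  const env: Record<string, string> = {
    PORT: String(ports[service]),
    PORTDADDY_PROJECT: project,
    PORTDADDY_SERVICE: service,
  };
  for (const [name, port] of Object.entries(ports)) {
    if (name !== service) env[`PORT_${name}`] = String(port);
  }
  for (const [key, value] of Object.entries(custom)) {
    // Expand ${PORT}, ${PORT_postgres}, etc. (word characters only here).
    env[key] = value.replace(/\$\{(\w+)\}/g, (_, ref) => env[ref] ?? "");
  }
  return env;
}
```

With the api service and its DATABASE_URL template from above, this produces a fully resolved connection string pointing at the dynamically assigned Postgres port.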

Logs with Color and Prefixes

With 15 services writing to stdout, you need to know which line came from which service. Port Daddy prefixes every line with a color-coded service name:

[postgres]        2026-03-01 10:00:01 LOG: database system is ready
[redis]           10:00:01 Ready to accept connections on port 6479
[nats]            [INF] Starting nats-server
[api]             > next dev --port 3101
[api]             ready - started server on 0.0.0.0:3101
[worker]          Connected to NATS on port 4322
[dashboard]       VITE v6.1.0 ready in 340ms
[billing]         [Nest] LOG NestApplication successfully started
[search-sync]     Connected to Elasticsearch on port 9300
[email]           Server listening on port 3107

Each service gets a consistent color. Infrastructure services are one hue, backend services another, and frontend services a third. The colors are deterministic -- postgres is always the same color on your machine.
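Deterministic coloring is typically just a stable hash into a fixed palette. A minimal sketch of the idea (illustrative only; Port Daddy's palette and hash are its own):

```typescript
// ANSI foreground color codes: a fixed palette to hash names into.
const PALETTE = [31, 32, 33, 34, 35, 36, 91, 92, 93, 94, 95, 96];

// Stable string hash -> palette index. Same name, same color, every run.
function colorFor(service: string): number {
  let hash = 0;
  for (const ch of service) hash = (hash * 31 + ch.codePointAt(0)!) >>> 0;
  return PALETTE[hash % PALETTE.length];
}

// Wrap a log line with the service's color-coded prefix.
function prefixed(service: string, line: string): string {
  return `\x1b[${colorFor(service)}m[${service}]\x1b[0m ${line}`;
}
```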

Logs are also grep-able. Because each line is prefixed, you can filter in real time:

# Follow only the API logs
$ pd up 2>&1 | grep "\[api\]"

# Follow API and billing together
$ pd up 2>&1 | grep -E "\[(api|billing)\]"

# Filter for errors across all services
$ pd up 2>&1 | grep -i "error"

Stopping the Stack

When you're done, one command shuts everything down:

$ pd down

[paymentco] Stopping 15 services...

  [dashboard]       Stopped (SIGTERM)
  [admin]           Stopped (SIGTERM)
  [notifications]   Stopped (SIGTERM)
  [email]           Stopped (SIGTERM)
  [search-sync]     Stopped (SIGTERM)
  [billing]         Stopped (SIGTERM)
  [webhooks]        Stopped (SIGTERM)
  [scheduler]       Stopped (SIGTERM)
  [worker]          Stopped (SIGTERM)
  [api]             Stopped (SIGTERM)
  [auth]            Stopped (SIGTERM)
  [elasticsearch]   Stopped (SIGTERM)
  [nats]            Stopped (SIGTERM)
  [redis]           Stopped (SIGTERM)
  [postgres]        Stopped (pg_ctl stop)

All 15 services stopped. Ports released.

Notice the order: reverse dependency order. Frontend services stop first, then backend services, then infrastructure. This prevents errors from services trying to reach dependencies that have already been killed.

Services that define a shutdownCmd (like postgres) use their graceful shutdown command instead of raw SIGTERM. This ensures data integrity.
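Put together, the shutdown logic is the start order mirrored: reverse the waves, then prefer shutdownCmd over SIGTERM per service. An illustrative sketch (the real tool tracks PIDs and waits for each process to exit):

```typescript
type StopSpec = { name: string; shutdownCmd?: string };

// Produce a stop plan from the start waves: last wave stops first, and a
// configured shutdownCmd takes precedence over a raw SIGTERM.
function shutdownPlan(startWaves: StopSpec[][]): string[] {
  const plan: string[] = [];
  for (const wave of [...startWaves].reverse()) {
    for (const svc of wave) {
      plan.push(
        svc.shutdownCmd
          ? `[${svc.name}] run: ${svc.shutdownCmd}`
          : `[${svc.name}] signal: SIGTERM`
      );
    }
  }
  return plan;
}
```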

You can also stop individual services:

# Stop just the dashboard (leaves its dependencies running)
$ pd down --service dashboard

# Stop elasticsearch and everything that depends on it
$ pd down --service elasticsearch --cascade
Stopping: search-sync, elasticsearch (cascade)

Real Monorepo Example

Here's a complete .portdaddyrc for a 15-service payment processing company. This is a real-world config, not a simplified tutorial example:

{
  "project": "paymentco",
  "services": {
    "postgres": {
      "cmd": "pg_ctl -D /usr/local/var/postgresql@16 -l /tmp/pg.log start",
      "health": "pg_isready -h localhost -p ${PORT}",
      "shutdownCmd": "pg_ctl -D /usr/local/var/postgresql@16 stop -m fast",
      "healthTimeout": 10
    },
    "redis": {
      "cmd": "redis-server --port ${PORT} --daemonize no",
      "health": "redis-cli -p ${PORT} ping"
    },
    "nats": {
      "cmd": "nats-server -p ${PORT} -m 8222",
      "health": "curl -sf http://localhost:8222/healthz"
    },
    "elasticsearch": {
      "cmd": "elasticsearch -E http.port=${PORT} -E transport.port=9400",
      "health": "curl -sf http://localhost:${PORT}/_cluster/health",
      "healthTimeout": 60,
      "env": { "ES_JAVA_OPTS": "-Xms512m -Xmx512m" }
    },
    "auth": {
      "cmd": "npm run dev",
      "cwd": "services/auth",
      "needs": ["postgres", "redis"],
      "healthPath": "/health",
      "env": {
        "JWT_SECRET": "dev-secret-do-not-use-in-prod",
        "DATABASE_URL": "postgresql://localhost:${PORT_postgres}/paymentco_auth"
      }
    },
    "api": {
      "cmd": "npm run dev",
      "cwd": "services/api",
      "needs": ["postgres", "redis", "auth"],
      "healthPath": "/api/health",
      "env": {
        "DATABASE_URL": "postgresql://localhost:${PORT_postgres}/paymentco_dev",
        "REDIS_URL": "redis://localhost:${PORT_redis}",
        "AUTH_SERVICE_URL": "http://localhost:${PORT_auth}"
      }
    },
    "worker": {
      "cmd": "npm run worker",
      "cwd": "services/worker",
      "needs": ["postgres", "redis", "nats"],
      "noPort": true,
      "health": "curl -sf http://localhost:${PORT}/worker/health"
    },
    "billing": {
      "cmd": "npm run dev",
      "cwd": "services/billing",
      "needs": ["postgres", "redis", "nats"],
      "healthPath": "/health",
      "env": {
        "STRIPE_KEY": "sk_test_placeholder",
        "NATS_URL": "nats://localhost:${PORT_nats}"
      }
    },
    "webhooks": {
      "cmd": "npm run dev",
      "cwd": "services/webhooks",
      "needs": ["postgres", "nats"],
      "healthPath": "/health"
    },
    "scheduler": {
      "cmd": "npm run dev",
      "cwd": "services/scheduler",
      "needs": ["postgres", "redis"],
      "healthPath": "/health"
    },
    "search-sync": {
      "cmd": "npm run dev",
      "cwd": "services/search-sync",
      "needs": ["postgres", "elasticsearch"],
      "healthPath": "/health"
    },
    "email": {
      "cmd": "npm run dev",
      "cwd": "services/email",
      "needs": ["redis", "nats"],
      "healthPath": "/health"
    },
    "notifications": {
      "cmd": "npm run dev",
      "cwd": "services/notifications",
      "needs": ["redis", "nats"],
      "healthPath": "/health"
    },
    "admin": {
      "cmd": "npm run dev",
      "cwd": "services/admin",
      "needs": ["api"],
      "healthPath": "/"
    },
    "dashboard": {
      "cmd": "npm run dev",
      "cwd": "services/dashboard",
      "needs": ["api"],
      "healthPath": "/"
    }
  }
}

Fifteen services. Zero hardcoded ports. Full dependency graph. Any developer can clone the repo, run pd up, and have the entire stack running in under 20 seconds.

Branch-Specific Configs

Sometimes a feature branch changes the service topology. Maybe you're adding a new fraud-detection service or temporarily splitting the API into two. Use branch-specific overrides:

$ pd up --branch feature/fraud-detection

[paymentco] Loading .portdaddyrc
[paymentco] Applying branch override: .portdaddyrc.feature-fraud-detection
[paymentco] Added service: fraud-detection (needs: [api, redis, nats])
Starting 16 services...

The branch override file (.portdaddyrc.feature-fraud-detection) merges with the base config:

{
  "services": {
    "fraud-detection": {
      "cmd": "npm run dev",
      "cwd": "services/fraud-detection",
      "needs": ["api", "redis", "nats"],
      "healthPath": "/health"
    },
    "api": {
      "env": {
        "FRAUD_SERVICE_URL": "http://localhost:${PORT_fraud-detection}"
      }
    }
  }
}

The override adds the new service and injects its URL into the API's environment. When you merge the branch, the override file goes away and the base config is unchanged.
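The merge semantics implied here -- plain objects merge recursively, values from the override win -- can be sketched as a small recursive function (illustrative; Port Daddy's exact merge rules may differ):

```typescript
type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

// Recursively merge an override config into a base config. Two plain
// objects merge key by key; for anything else the override value wins.
function mergeConfig(base: Json, override: Json): Json {
  const isObj = (v: Json): v is { [k: string]: Json } =>
    typeof v === "object" && v !== null && !Array.isArray(v);
  if (isObj(base) && isObj(override)) {
    const out: { [k: string]: Json } = { ...base };
    for (const [key, value] of Object.entries(override)) {
      out[key] = key in out ? mergeConfig(out[key], value) : value;
    }
    return out;
  }
  return override; // scalars, arrays, mismatched shapes: override wins
}
```

Applied to the example above, the base api service keeps its cmd and existing env while gaining FRAUD_SERVICE_URL, and the fraud-detection service is added alongside it.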

Health Monitoring

Once the stack is running, Port Daddy continuously monitors service health. Check the status at any time:

$ pd status

paymentco (15 services)

  Service          Port   Status    Uptime     Last Check
  -------          ----   ------    ------     ----------
  postgres         5532   healthy   2h 14m     2s ago
  redis            6479   healthy   2h 14m     1s ago
  nats             4322   healthy   2h 14m     3s ago
  elasticsearch    9300   healthy   2h 13m     5s ago
  auth             3100   healthy   2h 13m     2s ago
  api              3101   healthy   2h 13m     1s ago
  worker           3102   healthy   2h 13m     4s ago
  scheduler        3103   healthy   2h 12m     2s ago
  webhooks         3104   healthy   2h 12m     3s ago
  billing          3105   healthy   2h 12m     2s ago
  search-sync      3106   healthy   2h 11m     6s ago
  email            3107   healthy   2h 11m     1s ago
  notifications    3108   healthy   2h 11m     2s ago
  admin            3109   healthy   2h 10m     3s ago
  dashboard        3110   healthy   2h 10m     1s ago

If a service crashes, Port Daddy detects it within seconds:

$ pd status

  api              3101   DOWN      --         0s ago
  dashboard        3110   unhealthy 2h 15m     1s ago  (upstream: api)

The dashboard is marked unhealthy because its upstream dependency (the API) is down. You can see exactly where the problem originates.

Check the health of a single service from the CLI or via HTTP:

# CLI
$ pd health paymentco:api

# HTTP
$ curl http://localhost:9876/services/health/paymentco:api
{"id":"paymentco:api","port":3101,"status":"healthy","latency":"12ms"}

Sharing Your Config

The .portdaddyrc file is designed to be committed to your repository. Every developer gets the same orchestration config:

# Commit the config
$ git add .portdaddyrc
$ git commit -m "Add Port Daddy orchestration config"

# New developer experience
$ git clone [email protected]:paymentco/monorepo.git
$ cd monorepo
$ pd up
# Everything starts. No README. No setup guide. No Slack questions.

Because ports are deterministically assigned from the service identity, every developer on the team gets the same port for the same service. The API is always on the same port, whether you're on a MacBook in Brooklyn or a Linux workstation in Berlin.
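One way to get identity-based determinism is to hash project:service into a fixed range. This is a sketch of the idea only -- Port Daddy's actual assignment scheme is not shown in this tutorial, and the base/span values here are made up:

```typescript
// Hash "project:service" into a port range. Same identity, same port,
// regardless of machine or OS. (Hypothetical base and span values.)
function portFor(project: string, service: string, base = 3100, span = 500): number {
  const id = `${project}:${service}`;
  let hash = 0;
  for (const ch of id) hash = (hash * 31 + ch.codePointAt(0)!) >>> 0;
  return base + (hash % span);
}
```

Because the port is a pure function of the identity string, no coordination between machines is needed to agree on it.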

Tips for maintaining the config in a team:

  1. Commit .portdaddyrc and review changes to it like any other code -- it describes your architecture
  2. Never put literal port numbers in the config; let ${PORT} injection do the work
  3. Give every new service a health check so its dependents can gate on it
  4. Use branch override files for experimental topology, and delete them when the branch merges

What's Next

You've learned how to orchestrate a full monorepo. The key insights:

  1. pd scan turns a directory tree into a dependency-aware config you can commit
  2. The needs field defines a DAG, and services start in parallel waves along it
  3. Health checks gate each wave, replacing sleep-based startup scripts
  4. ${PORT} and PORT_* variables remove hardcoded ports from code and config
  5. pd up --service starts only the subgraph you actually need

Continue with:

  1. Debugging -- When services crash, health checks fail, or ports go missing
  2. Multi-Agent Orchestration -- Add AI agents coordinating on top of your running stack
  3. Tunnel Magic -- Share your running monorepo with external testers via public URLs

The hardest part of running a monorepo was never the code. It was getting the infrastructure to cooperate. That problem is now solved.