System Deep Dive

Last updated: 2026-03-08

Dispatch is a monorepo with four main parts:

  1. api/: a Rust Axum service that owns authentication, workspace/inbox provisioning, inbound mail ingestion, outbound mail sending, billing, and ops status.
  2. cli/: a Rust CLI that treats the API as the control plane for creating inboxes, reading mail, extracting OTPs, and storing subject-keyed “memory” via self-email.
  3. web/: a Next.js app that sits on top of the same API and gives operators a setup flow, console, mail desk, and admin ops screen.
  4. infra/terraform/: AWS infrastructure for production: VPC, ALB, ECS Fargate services, RDS Postgres, SES inbound plumbing, S3, SNS, SSM, IAM, CloudWatch, and alarms.

At runtime, the system has two backend processes:

  • dispatch-api: handles HTTP traffic.
  • dispatch-worker: processes inbound SES mail jobs from the SQS queue.

The important architectural idea is that Dispatch treats an inbox as the primary operational unit. Almost every surface revolves around:

  • creating a workspace
  • creating or selecting inboxes in that workspace
  • minting account or inbox API keys
  • reading inbound mail
  • extracting artifacts like OTPs and links
  • sending outbound mail from an inbox address

The repository layout reflects that split:

```
.
├── api/             # Axum API + worker + migrations + tests
├── cli/             # Rust CLI for operators/agents
├── web/             # Next.js web console
├── infra/terraform/ # AWS production infra
├── scripts/         # bootstrap + smoke-test helpers
├── Makefile         # local dev/test workflows
└── README.md        # top-level ops/deploy guidance
```

At a high level, the pieces connect like this:

```mermaid
flowchart LR
Web["Next.js web app"] --> API["dispatch-api (Axum)"]
CLI["dispatch CLI"] --> API
API --> DB["Postgres"]
API --> SES["AWS SES v2 outbound"]
Stripe["Stripe webhooks"] --> API
SNS["AWS SNS"] --> API
S3["SES raw email in S3"] --> SQS["Inbound SQS queue"]
SQS --> Worker["dispatch-worker"]
Worker --> DB
Worker --> S3
```

The API is intentionally structured in a layered, domain-oriented way:

  • api/src/main.rs boots config, database, app state, and Axum router.
  • api/src/common/bootstrap.rs wires service implementations into AppState.
  • api/src/app.rs defines the router and middleware stack.
  • api/src/domains/* contains the domain modules.

The main shared container is AppState, which carries:

  • PgPool
  • parsed Config
  • route limiters
  • domain services for auth, user, device, file, and dispatch

There are three binaries in api/:

  • dispatch-api: HTTP server.
  • dispatch-worker: background worker runtime for inbound SES jobs.
  • dispatch-migrate: one-shot migration binary.

That split matters operationally:

  • migrations do not run automatically on API or worker startup
  • production deploy runs migrations as a separate one-shot ECS task
  • API and worker run as separate ECS services

The router in api/src/app.rs merges several groups:

  • /auth/*: public auth routes
  • /user, /device, /file: legacy JWT-protected routes
  • /v1/...: Dispatch routes

Dispatch routes use three auth modes:

  1. Public routes: signup, webhooks, public metrics
  2. JWT routes: operator actions like /v1/agent/claim
  3. API key routes: workspace or inbox operations

Protected Dispatch middleware order is:

API key auth -> workspace context -> policy guard -> handler
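
That ordering can be sketched as straightforward function composition. Everything below (key values, field names, the `list_inboxes` action) is illustrative, not the actual Axum layer stack:

```python
# Hypothetical sketch of the protected-route middleware order:
# API key auth -> workspace context -> policy guard -> handler.

def api_key_auth(request):
    # Authenticate the key and tag the request with its kind.
    key = request.get("api_key")
    if key not in {"acct_123", "inbox_456"}:
        raise PermissionError("invalid API key")
    request["key_kind"] = "account" if key.startswith("acct") else "inbox"
    return request

def workspace_context(request):
    # Resolve the workspace the key belongs to.
    request["workspace_id"] = "ws_1"
    return request

def policy_guard(request):
    # Inbox keys may not perform workspace-level actions.
    if request["action"] == "list_inboxes" and request["key_kind"] == "inbox":
        raise PermissionError("workspace-level action requires an account key")
    return request

def handle(request, handler):
    for layer in (api_key_auth, workspace_context, policy_guard):
        request = layer(request)
    return handler(request)

result = handle(
    {"api_key": "acct_123", "action": "list_inboxes"},
    lambda req: f"ok:{req['workspace_id']}",
)
print(result)  # → ok:ws_1
```

The key property is that the handler only ever sees requests that already carry an authenticated key kind and a resolved workspace.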

There are two API key kinds:

  • account: workspace-scoped, tied to a user id
  • inbox: inbox-scoped, tied to a specific inbox id

This is how the service separates:

  • workspace-level actions like listing inboxes or creating additional inboxes
  • inbox-level actions like reading messages or sending mail from one inbox

api/src/common/config.rs is the central runtime contract.

It covers:

  • database and server settings
  • asset storage paths
  • inbox domain and app base URL
  • outbound provider selection
  • SES/SNS/S3 inbound settings
  • Stripe settings
  • x402 EVM/Solana verification settings
  • rate limiting settings
  • private beta flags/admin allowlist
  • inbound worker tuning
  • quotas and message retention

A notable design choice: in production, several values become mandatory, including:

  • DISPATCH_APP_BASE_URL
  • DISPATCH_AWS_SNS_TOPIC_ARN
  • DISPATCH_SES_INBOUND_S3_BUCKET
  • DISPATCH_ALLOWED_AWS_ACCOUNT_ID
  • STRIPE_WEBHOOK_SECRET
  • chain RPC + recipient settings for x402

The dispatch domain is the core of the product.

It owns:

  • workspace and agent provisioning
  • account and inbox API keys
  • inbox creation/listing
  • message listing and artifact extraction
  • OTP retrieval
  • outbound email sending
  • inbound webhook ingestion
  • SES/SNS inbound queueing and worker processing
  • Stripe/x402 billing
  • worker heartbeats and ops status
  • recipient suppression handling

This is the domain that turns Dispatch from a CRUD app into an inbox control plane.

The auth domain handles operator authentication, not inbox auth.

It owns:

  • /auth/signup
  • /auth/login
  • email verification codes
  • beta interest signups
  • managed beta invite codes
  • admin-only invite code management

The output of this layer is a JWT for the operator account. That JWT is then used to claim/link agents.

These are older, more traditional CRUD-style domains that still exist alongside Dispatch.

  • user: manage users, including profile picture uploads
  • device: CRUD + batch updates for user devices
  • file: protected file serving and deletion

These appear to be legacy/demo-style modules retained while the product pivots around inbox workflows.

The schema is spread across migrations, but the practical model looks like this:

```mermaid
flowchart TD
Workspace["workspaces"] --> Agent["agents"]
Workspace --> Inbox["inboxes"]
Agent --> ApiKey["api_keys"]
Agent --> ClaimToken["agent_claim_tokens"]
Agent --> Claim["user_agent_claims"]
Inbox --> Message["messages"]
Message --> Artifact["artifacts"]
Message --> Security["message_security"]
Workspace --> Quota["workspace_quotas"]
Workspace --> Outbound["outbound_messages"]
Workspace --> Entitlement["billing_entitlements"]
Workspace --> Verification["billing_transaction_verifications"]
WorkerHB["dispatch_worker_heartbeats"]
```

workspaces

  • top-level tenancy boundary
  • stores status, plan, and policy json

agents

  • an inbox runtime/operator-facing unit inside a workspace
  • used for provisioning and ownership/claiming

api_keys

  • hashed keys
  • dual-mode via key_kind
  • account keys bind to user_id
  • inbox keys bind to inbox_id

inboxes

  • unique email addresses per workspace
  • primary handle for sending and receiving

messages

  • inbound messages only
  • deduped on provider message id

message_security

  • verdicts and selected SES/authentication metadata
  • also stores folder classification (inbox or spam)

artifacts

  • machine-extracted values from messages
  • currently otp and link

workspace_quotas and workspace_quota_recipients

  • daily send, unique recipient, and inbound counters

user_agent_claims

  • maps an operator user to an agent
  • one agent can only be claimed once

agent_claim_tokens

  • short-lived hashed claim token for inbox linking
  • token is bound to an agent and inbox

billing_events

  • webhook idempotency and event capture

billing_entitlements

  • canonical plan grants per workspace
  • active entitlement updates workspace plan

billing_transaction_verifications

  • x402 verification ledger for EVM/Solana tx hashes
  • uniqueness on (chain, tx_hash) prevents reuse

Inbound buffering is handled by SQS (dispatch-prod-inbound-jobs + DLQ), not Postgres.

dispatch_worker_heartbeats

  • worker liveness tracking
  • used by /v1/ops/status

outbound_messages

  • records sends from inboxes and latest delivery state

ses_outbound_events

  • stores SES delivery/bounce/complaint events

dispatch_recipient_suppressions

  • blocks future sends to bounced/complaining recipients

users

  • operator accounts
  • includes email_verified_at

user_auth

  • password hashes

user_email_verification_codes

  • short-lived hashed email verification codes

beta_invite_codes, beta_invite_code_redemptions, beta_interest_signups

  • private beta gating and waitlist

1. New operator signup and email verification

```mermaid
sequenceDiagram
participant Browser
participant API
participant DB
participant Email
Browser->>API: POST /auth/signup
API->>DB: create user + password hash
API->>DB: issue email verification code
API->>Email: send verification email
API-->>Browser: JWT + email_verified=false
Browser->>API: POST /auth/verify-email
API->>DB: validate code, mark user verified
API->>Email: send onboarding email
API-->>Browser: JWT + email_verified=true
```

Important details:

  • signup can be gated by either a single env invite code or managed invite codes in DB
  • verification emails are best-effort; user creation still succeeds if mail send fails
  • login does not block unverified users, but the response tells the UI whether email is verified
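
The "short-lived hashed verification code" pattern above can be sketched like this. The TTL, code length, and field names are assumptions, not the real schema:

```python
import hashlib
import secrets
import time

# Sketch: issue a 6-digit code, persist only its hash plus an expiry,
# and verify by re-hashing the submitted value.

def issue_code(ttl_seconds=900):
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit code sent by email
    record = {
        "code_hash": hashlib.sha256(code.encode()).hexdigest(),
        "expires_at": time.time() + ttl_seconds,  # short-lived
    }
    return code, record  # only `record` would be persisted

def verify_code(submitted, record):
    if time.time() > record["expires_at"]:
        return False
    return hashlib.sha256(submitted.encode()).hexdigest() == record["code_hash"]

code, record = issue_code()
print(verify_code(code, record))  # → True
```

Storing only the hash means a database leak does not expose usable verification codes.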

2. Bootstrap a new workspace and inbox runtime

POST /v1/agent/signup is the fastest way to bootstrap Dispatch.

The service does all of this transactionally:

  • creates a workspace
  • creates an agent
  • allocates a unique default inbox address under DISPATCH_INBOX_DOMAIN
  • mints an account API key
  • mints a default inbox API key

The response becomes the control-plane seed for both CLI and web.

3. Claim/link an existing inbox runtime to an operator account

This is how the system bridges API key-based inbox runtimes back to JWT-authenticated operator accounts.

There are two claim modes:

  • standard claim with claim token or existing API key
  • x402 claim with wallet signature proof attached to the claim token flow

Token claim flow:

  1. inbox or workspace API key calls /v1/agent/claim-token
  2. service creates a short-lived hashed claim token tied to an inbox
  3. operator signs in on the web app
  4. operator calls /v1/agent/claim with claim_email + claim_token
  5. service validates token, consumes it, records user_agent_claims, and mints fresh account/inbox keys for the operator
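
A minimal sketch of the token half of that flow, assuming a hashed, single-use, inbox-bound token (the storage shape and TTL here are illustrative):

```python
import hashlib
import secrets
import time

TOKENS = {}  # stand-in for the agent_claim_tokens table

def mint_claim_token(agent_id, inbox_id, ttl=600):
    # Persist only the hash; hand the raw token to the operator out of band.
    raw = secrets.token_urlsafe(24)
    TOKENS[hashlib.sha256(raw.encode()).hexdigest()] = {
        "agent_id": agent_id,
        "inbox_id": inbox_id,
        "expires_at": time.time() + ttl,
        "consumed": False,
    }
    return raw

def claim(raw_token, user_id):
    row = TOKENS.get(hashlib.sha256(raw_token.encode()).hexdigest())
    if row is None or row["consumed"] or time.time() > row["expires_at"]:
        raise ValueError("invalid or expired claim token")
    row["consumed"] = True  # single use
    return {"user_id": user_id, "agent_id": row["agent_id"], "inbox_id": row["inbox_id"]}

token = mint_claim_token("agent_1", "inbox_1")
print(claim(token, "user_1"))  # records the user_agent_claims mapping
```

Consuming the token on first use is what makes the link between operator and agent unforgeable after the fact.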

API key claim flow:

  • operator already has a raw inbox/account API key
  • operator submits it through /v1/agent/claim
  • service authenticates the API key and transfers logical ownership to the operator account

x402 claim flow adds wallet signature verification:

  • expected message text is deterministic
  • signature is checked locally for EVM personal-sign or Solana ed25519

POST /v1/inboxes creates additional inboxes for a workspace.

Behavior:

  • first inbox is always created during workspace bootstrap
  • additional inbox creation may be blocked when DISPATCH_ENFORCE_MULTI_INBOX_UPGRADE=true
  • if blocked, the workspace needs an active multi_inbox entitlement or workspace plan

This is the key product monetization gate.

To read mail, the caller must use an inbox API key scoped to that inbox.

Endpoints:

  • GET /v1/inboxes/{id}/messages
  • GET /v1/inboxes/{id}/artifacts
  • GET /v1/inboxes/{id}/otp

Behavior:

  • identity middleware rejects access to inboxes outside the key scope
  • pagination uses a base64 cursor over (received_at, message_id)
  • reads are filtered by retention window
  • folder filter supports inbox, spam, or all
  • OTP endpoint only returns otp artifacts from non-spam inbox messages
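
The opaque cursor can be pictured as a base64-wrapped (received_at, message_id) pair. The actual wire encoding is not documented here; JSON-in-base64 is an assumption for the sketch:

```python
import base64
import json

# Encode/decode an opaque pagination cursor over (received_at, message_id).

def encode_cursor(received_at, message_id):
    payload = json.dumps([received_at, message_id]).encode()
    return base64.urlsafe_b64encode(payload).decode()

def decode_cursor(cursor):
    received_at, message_id = json.loads(base64.urlsafe_b64decode(cursor))
    return received_at, message_id

cur = encode_cursor("2026-03-08T12:00:00Z", "msg_42")
print(decode_cursor(cur))  # → ('2026-03-08T12:00:00Z', 'msg_42')
```

Keying the cursor on both timestamp and id keeps pagination stable when multiple messages share a received_at.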

The outbound send path (POST /v1/send) looks like this:

```mermaid
sequenceDiagram
participant CLIorWeb
participant API
participant DB
participant SES
participant SNS
CLIorWeb->>API: POST /v1/send (inbox API key)
API->>DB: lock quota row, validate inbox ownership
API->>DB: check recipient suppression
API->>SES: send email
API->>DB: store outbound_messages row
API->>DB: increment quota counters
SNS->>API: SES outbound event notification
API->>DB: update outbound status / suppress recipient if needed
```

Important guardrails:

  • sender address is the inbox address itself
  • at least one of text/html body must exist
  • daily send and unique recipient quotas are enforced
  • sends to suppressed recipients are blocked
  • permanent bounces and complaints create/refresh suppressions
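
The pre-send checks above can be sketched as one gate function. Limits and field names are assumptions for illustration:

```python
# Illustrative pre-send guardrails: body required, suppression checked,
# daily send and unique recipient quotas enforced.

def check_send(inbox, to, text, html, quota, suppressed):
    if not (text or html):
        raise ValueError("at least one of text/html body must exist")
    if to in suppressed:
        raise ValueError(f"recipient {to} is suppressed")
    if quota["sends_today"] >= quota["daily_send_limit"]:
        raise ValueError("daily send quota exhausted")
    if to not in quota["recipients_today"] and \
            len(quota["recipients_today"]) >= quota["unique_recipient_limit"]:
        raise ValueError("unique recipient quota exhausted")
    quota["sends_today"] += 1
    quota["recipients_today"].add(to)
    return {"from": inbox, "to": to}  # sender is always the inbox address

quota = {"sends_today": 0, "daily_send_limit": 2,
         "recipients_today": set(), "unique_recipient_limit": 1}
print(check_send("agent@inbox.example", "a@example.com", "hi", None, quota, set()))
```

In the real service this happens under a locked quota row so concurrent sends cannot race past the limits.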

Local/dev can use a simplified normalized injection endpoint:

  • POST /v1/webhooks/inbound
  • guarded by x-dispatch-inbound-token

This path inserts directly into messages, message_security, artifacts, and quota counters.

This is what the web console inbound injector and smoke test use.

Production uses SES -> S3 -> SQS -> worker process.

```mermaid
sequenceDiagram
participant SES
participant S3
participant SQS
participant DB
participant Worker
SES->>S3: store raw MIME email
S3->>SQS: publish object-created notification
Worker->>SQS: receive message (long poll)
Worker->>S3: fetch raw email object
Worker->>Worker: parse MIME into normalized payload
Worker->>DB: insert message + security + artifacts
Worker->>SQS: delete message on success / retry via visibility timeout
```

Key production protections:

  • SNS signatures are verified using fetched signing certs
  • cert URLs are validated and cached
  • SNS topic ARN and AWS account id are checked against config
  • S3 bucket name in SES notification must match configured bucket
  • queue dedupe uses provider message id
  • worker retries use exponential backoff up to configured max attempts
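
The retry policy amounts to an exponential backoff schedule capped at a maximum attempt count. The base and cap values below are assumptions, not the configured production values:

```python
# Sketch of exponential backoff delays for the inbound worker:
# delay doubles each attempt, capped, up to a max attempt count.

def backoff_schedule(max_attempts, base_seconds=5, cap_seconds=300):
    return [min(base_seconds * 2 ** attempt, cap_seconds)
            for attempt in range(max_attempts)]

print(backoff_schedule(6))  # → [5, 10, 20, 40, 80, 160]
```

After the final attempt, the message lands in the DLQ rather than retrying forever.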

Artifact extraction is intentionally lightweight and regex-based on the server.

Current artifact types:

  • otp
  • link

Server extraction happens during inbound ingestion:

  • OTP primary regex looks for nearby words like otp, code, verification
  • fallback extracts generic 6-digit codes
  • links are all matched http/https URLs
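
In the spirit of those rules, a regex-based extractor might look like the following. The actual server patterns are not reproduced here; these are assumptions:

```python
import re

# Primary: a digit run near a keyword like otp/code/verification.
OTP_NEAR = re.compile(r"\b(?:otp|code|verification)\b\D{0,20}(\d{4,8})", re.I)
# Fallback: any standalone 6-digit run.
OTP_FALLBACK = re.compile(r"\b(\d{6})\b")
# Links: all http/https URLs.
LINK = re.compile(r"https?://\S+")

def extract_artifacts(text):
    otp = OTP_NEAR.search(text) or OTP_FALLBACK.search(text)
    return {
        "otp": otp.group(1) if otp else None,
        "links": LINK.findall(text),
    }

print(extract_artifacts(
    "Your verification code is 482913. See https://example.com/verify"
))
```

The keyword-proximity pattern reduces false positives from order numbers and phone numbers, while the fallback still catches terse "123456" mails.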

The CLI has a second extraction layer for mail extract:

  • first tries stored artifacts
  • can fall back to scanning recent messages locally
  • can optionally use OpenAI for model-assisted auth code extraction

That split is useful:

  • server gives cheap normalized artifacts for common cases
  • CLI can be more opportunistic without making the API heavier

Stripe is webhook-driven for entitlement grants.

Checkout flow:

  1. client calls POST /v1/billing/checkout/session
  2. API creates a Stripe Checkout session directly against Stripe
  3. metadata includes workspace_id, product, and optionally user_id
  4. Stripe redirects user back to success/cancel URLs
  5. Stripe sends webhook to /v1/webhooks/stripe
  6. API verifies Stripe-Signature, validates metadata and amount thresholds
  7. API writes billing_events and grants/updates billing_entitlements

Notable constraint:

  • manual confirm_payment explicitly rejects Stripe, so Stripe is webhook-only

x402 is the on-chain payment path.

Quote flow:

  • POST /v1/billing/x402/quote
  • API returns recipient, amount, RPC URL, and token contract/mint information

Confirm flow:

  • POST /v1/billing/payments/confirm
  • API verifies the transaction against chain RPCs
  • checks chain, recipient, amount, optional payer address, and embedded payment reference
  • persists transaction verification
  • grants entitlement and updates workspace plan

Supported rails:

  • EVM
  • Solana

Current autopay assumption:

  • quote endpoint expects USDC-style token transfers for both chains
  • config requires asset/mint addresses for autopay path

The CLI is not a second backend. It is a thin local state layer over the Dispatch API.

The CLI stores config in the user config directory as dispatch/config.toml.

It remembers:

  • API base URL
  • output format
  • workspace key
  • workspace id
  • selected inbox id
  • cached inbox API keys by inbox id

The main command families are:

  • auth: store/clear workspace or inbox API keys
  • inbox: create, list, select, claim/link
  • mail: list/watch/extract/code/send
  • memory: subject-keyed memory via self-email
  • billing: confirm x402 payments or list entitlements
  • guide: prints an agent-oriented markdown guide

The CLI tends to:

  • prefer workspace keys for workspace-wide actions
  • lazily mint inbox keys when needed for message actions
  • persist those inbox keys for later use

That means a typical user flow is:

  1. authenticate once with a workspace key
  2. choose/select an inbox
  3. let the CLI transparently fetch the inbox-scoped key when needed
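
The lazy-minting behavior can be sketched as a small key cache. The API call is faked here, and the names are assumptions rather than the CLI's actual internals:

```python
# Sketch: use the workspace key to mint an inbox-scoped key on first
# use, then serve it from the cache (persisted to config.toml in the
# real CLI) on subsequent calls.

class KeyCache:
    def __init__(self, workspace_key):
        self.workspace_key = workspace_key
        self.inbox_keys = {}

    def inbox_key(self, inbox_id, issue_key):
        # issue_key stands in for the API's key-issue endpoint.
        if inbox_id not in self.inbox_keys:
            self.inbox_keys[inbox_id] = issue_key(self.workspace_key, inbox_id)
        return self.inbox_keys[inbox_id]

calls = []
def fake_issue(ws_key, inbox_id):
    calls.append(inbox_id)
    return f"inboxkey-{inbox_id}"

cache = KeyCache("ws-key")
print(cache.inbox_key("in_1", fake_issue))  # mints once
print(cache.inbox_key("in_1", fake_issue))  # served from cache
print(len(calls))  # → 1
```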

dispatch inbox create supports a zero-state paid flow.

If the workspace lacks multi-inbox entitlement and payment flags/private keys are provided, the CLI can:

  • request x402 quote from the API
  • sign a USDC transfer locally
  • submit the transaction over RPC
  • confirm it through /v1/billing/payments/confirm
  • retry inbox creation

That makes the CLI both a control client and a wallet-aware upgrade path.

The memory command is a pragmatic hack built on the inbox model:

  • memory write sends an email to the same inbox
  • email subject becomes the memory key
  • body becomes the memory value
  • memory read scans recent inbound mail for exact subject match and returns the latest value

This is clever because it reuses the mail pipeline instead of inventing a separate KV store.
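
The read/write semantics reduce to a few lines over the message list. This is a behavioral sketch, not the CLI's implementation:

```python
# memory write = self-email (subject is the key, body is the value);
# memory read = latest inbound message with an exact subject match.

def memory_write(messages, key, value, received_at):
    # In the real CLI this sends an email to the inbox's own address.
    messages.append({"subject": key, "body": value, "received_at": received_at})

def memory_read(messages, key):
    hits = [m for m in messages if m["subject"] == key]
    return max(hits, key=lambda m: m["received_at"])["body"] if hits else None

msgs = []
memory_write(msgs, "deploy-target", "staging", 1)
memory_write(msgs, "deploy-target", "prod", 2)
print(memory_read(msgs, "deploy-target"))  # → prod
print(memory_read(msgs, "missing"))        # → None
```

Because writes are ordinary mail, memory inherits retention, dedupe, and scoping from the inbox for free.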

The web app is a client-only Next.js app using the App Router.

Primary pages:

  • /: landing page with public metrics and install flow
  • /signup: operator signup/signin/email verification and inbox linking
  • /console: workspace console for inbox management, message fetch, OTP pull, send, inbound injection, and upgrade checkout
  • /mail: mail-desk UI for queue navigation, OTP pull, search, and replies
  • /ops: admin-only queue health and beta invite code management

The web app stores a local session in localStorage under dispatch-console-session-v3.

The session tracks:

  • operator JWT auth token
  • operator email
  • many claimed agents
  • active agent id
  • workspace/account key for active agent
  • selected inbox id
  • cached inbox API keys
  • API base URL

The app is designed for one human operator to manage multiple linked agents, but one active agent at a time.

web/lib/dispatch-api.ts is the typed fetch layer.

It mirrors the backend routes almost 1:1:

  • auth endpoints
  • agent signup/claim
  • inbox create/list/key issue
  • message list / OTP fetch / send
  • inbound injection
  • checkout session creation
  • ops status
  • beta invite code management

The web app is therefore very thin. Most business rules remain on the backend.

/signup

  • handles operator auth and email verification
  • can create a brand-new workspace via /v1/agent/signup
  • can claim existing agent/inbox via claim token or API key
  • supports x402 wallet-signature claim mode

/console

  • workspace-level inbox management using account API key
  • inbox-level operations like list messages, pull OTP, send mail
  • dev inbound injection using the shared inbound token
  • Stripe checkout initiation for upgrade path

/mail

  • Gmail-style inbox reader over listMessages
  • search is local in-browser over the fetched message set
  • reply uses sendEmail
  • OTP pull uses getLatestOtp
  • can issue missing inbox keys on demand using account key

/ops

  • only exposed to beta admins
  • polls /v1/ops/status every 15s
  • manages invite codes over /auth/beta-codes

Terraform creates a deliberately lean AWS footprint:

  • VPC with public subnets for ALB/ECS and isolated DB subnets for RDS
  • Application Load Balancer for public API ingress
  • ECS Fargate cluster with separate API and worker services
  • ECR repository for the shared app image
  • RDS Postgres
  • S3 bucket for raw SES inbound email
  • SNS topic for SES inbound and outbound event delivery
  • SSM Parameter Store for runtime secrets/config
  • IAM execution and task roles
  • CloudWatch log groups and alarms
  • optional Route53 + ACM for custom API domain

The stack intentionally avoids a NAT gateway by default.

Implications:

  • ECS tasks run in public subnets with public IPs
  • RDS remains private in DB subnets
  • ALB is public
  • security groups tightly scope traffic

Security group pattern:

  • internet -> ALB on 80/443
  • ALB -> ECS on API port 8080
  • ECS -> RDS on 5432
  • ECS egress to the internet for AWS APIs, Stripe, RPC endpoints, etc.

Terraform creates placeholder task definitions using busybox sleep infinity.

GitHub Actions later replaces them with real task definitions that:

  • run dispatch-api in the API service
  • run /app/dispatch-worker in the worker service
  • inject runtime env vars and SSM-backed secrets

This split keeps Terraform responsible for static infra, while deployments handle app image/task revision rollout.

Terraform pre-creates SSM parameters under:

/dispatch/prod/*

Examples:

  • DATABASE_URL
  • JWT_SECRET_KEY
  • DISPATCH_INBOUND_TOKEN
  • DISPATCH_INBOX_DOMAIN
  • DISPATCH_APP_BASE_URL
  • STRIPE_SECRET_KEY
  • STRIPE_WEBHOOK_SECRET
  • chain RPC and recipient settings

Terraform deliberately ignores later value changes so operators can update secrets out-of-band without drift fights.

The recommended production path is:

SES receipt rule -> S3 raw mail bucket -> S3 event notification -> SQS -> worker -> DB

Terraform provisions:

  • S3 bucket with versioning and SSE-S3
  • bucket policy allowing SES writes from the same AWS account
  • SQS queue + DLQ + queue policy for S3 writes

The app task role gets S3 read permissions so workers can fetch raw mail objects.

HTTP health endpoints:

  • /health: liveness
  • /ready: readiness, includes DB query

CloudWatch alarms include:

  • ALB target 5xxs
  • ALB unhealthy hosts
  • API running task count low
  • worker running task count low
  • RDS high CPU
  • RDS low free storage

App-side ops visibility includes:

  • worker heartbeats in DB
  • queue depth and staleness in /v1/ops/status

Execution role:

  • standard ECS task execution permissions
  • SSM + KMS decrypt access for runtime parameters

Task role:

  • read inbound S3 bucket
  • send outbound email via SES

GitHub Actions handles three delivery tracks:

ci.yml

  • Rust fmt, clippy, check, test
  • boots Postgres service for tests

terraform.yml

  • init/fmt/validate/plan
  • optional manual apply

deploy-api.yml

  • builds and pushes Docker image to ECR
  • registers real ECS task definitions for API and worker
  • runs migrations as a one-shot ECS task
  • rolls both ECS services and waits for stability

release-cli.yml

  • builds CLI binaries for Linux, macOS, and Windows
  • publishes GitHub release assets
  • optionally publishes installer and archives to S3 downloads bucket

make dev runs:

  • Postgres via Docker Compose
  • migrations
  • API process
  • worker process
  • Next.js web app

This means local development mirrors production shape surprisingly well:

  • separate API and worker
  • same S3 -> SQS inbound queue path as production
  • same web/client contract
  • optional mock email provider instead of SES

Dispatch uses two distinct auth systems:

  • operator JWTs for human account actions
  • API keys for machine/runtime inbox operations

This is a good separation because inbox actions can be delegated without exposing full operator credentials.

API keys and claim tokens are never stored raw in DB.

They are:

  • minted as uuid.secret
  • hashed before persistence
  • verified by comparing incoming secret to stored hash
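
A minimal sketch of that shape, assuming SHA-256 for the stored hash (the service's actual hash function is not stated here):

```python
import hashlib
import secrets
import uuid

# Mint a key as uuid.secret; persist only the id and the secret's hash.

def mint_key():
    key_id, secret = str(uuid.uuid4()), secrets.token_urlsafe(24)
    raw = f"{key_id}.{secret}"  # handed to the caller exactly once
    stored = {"id": key_id,
              "secret_hash": hashlib.sha256(secret.encode()).hexdigest()}
    return raw, stored

def verify_key(raw, row):
    key_id, _, secret = raw.partition(".")
    return key_id == row["id"] and \
        hashlib.sha256(secret.encode()).hexdigest() == row["secret_hash"]

raw, row = mint_key()
print(verify_key(raw, row))  # → True
```

The uuid prefix lets the server look up the row directly, so verification is a single hash comparison rather than a scan.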

The backend validates:

  • SNS signatures via signing certificate
  • topic ARN and AWS account id
  • Stripe HMAC signature with timestamp tolerance

Route limiters exist for:

  • signup
  • webhooks
  • billing

They support:

  • local in-memory limiter
  • optional shared Redis-backed limiter

The backend enforces:

  • send quotas
  • inbound quotas
  • unique recipient quotas
  • multi-inbox upgrade requirement
  • recipient suppression after bounces/complaints
  • message retention window on reads

The repo has meaningful integration coverage around the critical paths.

API tests cover:

  • auth routes
  • private beta rules
  • device/file/user routes
  • core Dispatch flows
  • production-style SNS + S3 ingestion behavior
  • billing and webhook behaviors

CLI tests cover:

  • e2e command behavior
  • billing integration
  • memory integration

This is important because many of the hardest behaviors are integration-heavy rather than purely functional:

  • queueing
  • webhooks
  • payment verification
  • API key scoping
  • inbox lifecycle

What is legacy vs current center of gravity


Current center of gravity:

  • api/src/domains/dispatch/*
  • api/src/domains/auth/*
  • cli/*
  • web/app/signup, web/app/console, web/app/mail, web/app/ops
  • infra/terraform/*

Older supporting/legacy modules:

  • user
  • device
  • file

Those modules are still wired into the router and service container, but the product narrative and deployment complexity are now clearly driven by Dispatch inbox workflows.

The easiest way to understand Dispatch is this:

  • auth creates and verifies the human operator
  • dispatch provisions or links the inbox runtime that operator will control
  • API keys are the durable machine credentials for workspace and inbox actions
  • inbound mail is the main data source
  • artifact extraction turns raw inbound mail into operational values like OTPs
  • outbound mail and SES feedback complete the loop
  • billing gates access to higher-capacity workspace behavior like multiple inboxes
  • the CLI and web app are just two clients over the same control plane
  • Terraform + GitHub Actions define and ship the AWS runtime around that control plane

Practical “how it flows together” summary


If you follow one real-world happy path, the whole system looks like this:

  1. An operator signs up on the web and verifies email.
  2. They create a workspace/inbox or claim an existing one.
  3. The API provisions workspace, agent, inbox, and API keys in Postgres.
  4. The web app or CLI stores those credentials locally.
  5. Inbound mail arrives through SES/S3/SNS in production or direct webhook in development.
  6. The worker normalizes and stores messages plus extracted artifacts.
  7. CLI and web both read those messages through inbox-scoped API keys.
  8. Operators pull OTPs, search mail, and send outbound messages from the same inbox identity.
  9. SES feedback updates outbound status and suppresses bad recipients.
  10. Billing webhooks or x402 confirmations change workspace entitlements and unlock more inbox capacity.
  11. Terraform and GitHub Actions keep the AWS runtime, secrets, migrations, and deploys in sync.

That is the core Dispatch system.