Three-tier credential lifecycle

Status: Accepted

Credentials are classified into three tiers based on what the provider supports. Each tier has a different injection and protection strategy.

Tier 1 — Ephemeral. Credentials are generated on demand with a TTL and expire automatically, so no rotation is needed.

| Provider | Mechanism |
| --- | --- |
| AWS | STS assume-role (1–12 hr session token) |
| Google Cloud | OAuth2 access token (auto-refresh) |
| GitHub | App installation token (1 hr TTL) |
| Neon (DB) | Dynamic Postgres role via secret provider |
| LLM providers | Virtual keys with TTL via gateway |
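The common pattern across Tier 1 providers is mint-on-demand with expiry checking. A minimal sketch of that pattern (the `EphemeralCredential` type and `get_token` helper are hypothetical, standing in for STS / OAuth2 / App-token minting):

```python
import time
from dataclasses import dataclass, field

@dataclass
class EphemeralCredential:
    """Tier 1 sketch: a short-lived token that expires on its own."""
    token: str
    ttl_seconds: int
    issued_at: float = field(default_factory=time.monotonic)

    def expired(self) -> bool:
        return time.monotonic() - self.issued_at >= self.ttl_seconds

def get_token(cache: dict, mint) -> str:
    """Return a cached token, minting a fresh one if missing or expired."""
    cred = cache.get("cred")
    if cred is None or cred.expired():
        cred = mint()
        cache["cred"] = cred
    return cred.token

# "mint" here returns a fake 1-hour token; in practice it would call
# the provider (e.g. STS AssumeRole, OAuth2 token endpoint).
cache = {}
tok = get_token(cache, lambda: EphemeralCredential("tok-1", ttl_seconds=3600))
print(tok)  # → tok-1
```

Because the token self-destructs at TTL, leak impact is bounded by the session window rather than by a rotation schedule.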

Tier 2 — Proxied. The real key is hidden behind the gateway; the agent gets a virtual key and never sees the underlying API key.

| Provider | What the gateway holds |
| --- | --- |
| OpenAI, Anthropic | API keys — agent uses gateway virtual key |
| MCP servers | Real credentials — agent uses gateway tool group |
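The Tier 2 substitution can be sketched as a header rewrite at the gateway: the agent presents a virtual key, and only the gateway's own table maps it to the real one. The key names and values below are hypothetical:

```python
# Held only by the gateway — never shipped to the agent or sandbox.
REAL_KEYS = {"vk-agent-42": "sk-real-redacted"}

def forward(headers: dict) -> dict:
    """Swap the agent's virtual key for the real provider key."""
    virtual = headers.get("Authorization", "").removeprefix("Bearer ")
    real = REAL_KEYS.get(virtual)
    if real is None:
        raise PermissionError("unknown or revoked virtual key")
    return {**headers, "Authorization": f"Bearer {real}"}

out = forward({"Authorization": "Bearer vk-agent-42"})
print(out["Authorization"])  # → Bearer sk-real-redacted
```

Revoking the agent is then a one-line table deletion at the gateway; the real key never needs to change.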

Tier 3 — Vault-managed. A long-lived key stored in the secret provider and injected at runtime via the credential proxy; the sandbox prevents exfiltration.

| Provider | Rotation | Risk if leaked |
| --- | --- | --- |
| Slack | Manual (reinstall app) | Can post as bot |
| WHAPI/Auau | Manual (dashboard) | Can message customers — HIGH |
| Linear | OAuth refresh (24 hr tokens; cron updates vault) | Can manage issues |
| Neon (API key) | API (create new, delete old) | Can manage databases |
| Temporal | API (new SA key, delete old) | Can trigger workflows |
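For the API-rotatable rows (Neon, Temporal), "create new, delete old" implies a specific ordering: mint the replacement and write it to the vault before revoking the old key, so there is never a window with no valid credential. A sketch with a stand-in provider (the `FakeProvider` class and vault dict are hypothetical):

```python
class FakeProvider:
    """Stand-in for a provider that supports API-driven key rotation."""
    def __init__(self):
        self.keys = ["key-old"]
    def create_key(self) -> str:
        self.keys.append("key-new")
        return "key-new"
    def delete_key(self, key: str) -> None:
        self.keys.remove(key)

def rotate(provider, vault: dict, name: str) -> None:
    """Create-before-delete rotation: the vault always holds a live key."""
    old = vault.get(name)
    vault[name] = provider.create_key()  # next spawn picks this up from the vault
    if old:
        provider.delete_key(old)         # invalidate the old key last

vault = {"NEON_API_KEY": "key-old"}
p = FakeProvider()
rotate(p, vault, "NEON_API_KEY")
print(vault["NEON_API_KEY"], p.keys)  # → key-new ['key-new']
```

The manual rows (Slack, WHAPI) cannot follow this flow, which is why their leak risk is called out explicitly in the table.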

Tier 3 secrets are ONLY injected in sandbox mode. When a worker runs on local compute (no [compute] section), arpi infers required credentials from the template’s capabilities via registry metadata and warns if they are missing from the shell. It does not fetch or inject secrets.

Without kernel isolation, secret injection is security theater — any process on the host can read env vars, any network call can exfiltrate them.

| Compute | Tier 3 behavior |
| --- | --- |
| local (no sandbox) | infer required credentials from capabilities, warn if missing |
| sandbox | fetch from secret provider, inject via credential proxy |

Credential inference. Templates do not list credentials directly. The registry knows what each MCP and skill requires. arpi resolves the full credential set from registry metadata and the worker’s identity. Only [overrides].credentials in a template can declare explicit credentials — for bespoke keys not tied to registered capabilities. See template-schema.md.
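The resolution described above can be sketched as a union over registry metadata plus any explicit template overrides. The registry entries, capability names, and env-var names here are hypothetical:

```python
# Hypothetical registry metadata: which credentials each capability requires.
REGISTRY = {
    "mcp/slack": ["SLACK_BOT_TOKEN"],
    "skill/db-migrate": ["NEON_API_KEY"],
}

def resolve_credentials(capabilities, overrides=()):
    """Full credential set = registry-declared creds ∪ [overrides].credentials."""
    required = set(overrides)  # bespoke keys not tied to registered capabilities
    for cap in capabilities:
        required.update(REGISTRY.get(cap, []))
    return sorted(required)

creds = resolve_credentials(["mcp/slack", "skill/db-migrate"],
                            overrides=["BESPOKE_KEY"])
print(creds)  # → ['BESPOKE_KEY', 'NEON_API_KEY', 'SLACK_BOT_TOKEN']
```

Keeping the mapping in the registry rather than in templates means a capability's credential needs are declared once and every template using it inherits them.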

In sandbox mode, secrets are injected via the credential proxy (part of our OpenSandbox contributions), not as plain container env vars:

  1. arpi spawn authenticates to the IAM provider using the human’s token.
  2. The credential proxy fetches secrets and makes them available inside the sandbox.
  3. The proxy can revoke access mid-session and logs every access.
  4. Egress filtering ensures secrets cannot leave the sandbox except through the gateway.
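The proxy's serve/audit/revoke behavior from steps 2–3 can be sketched as follows (this is an illustrative model, not the OpenSandbox API; class and field names are hypothetical):

```python
import time

class CredentialProxy:
    """Sketch: serves fetched secrets, audits every access, supports mid-session revocation."""
    def __init__(self, secrets: dict):
        self._secrets = dict(secrets)  # already fetched from the secret provider
        self._revoked = False
        self.audit_log = []

    def get(self, name: str, uid: int) -> str:
        # Every access is logged, including attempts made after revocation.
        self.audit_log.append({"secret": name, "uid": uid, "ts": time.time()})
        if self._revoked:
            raise PermissionError("access revoked mid-session")
        return self._secrets[name]

    def revoke(self) -> None:
        self._revoked = True

proxy = CredentialProxy({"SLACK_BOT_TOKEN": "xoxb-redacted"})
proxy.get("SLACK_BOT_TOKEN", uid=1000)  # served and logged
proxy.revoke()                          # subsequent gets now raise
```

The important property is that the audit log is written before the access check, so even denied attempts leave a trace.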

Providers have different capabilities. A one-size-fits-all approach either under-protects (vault-only for everything) or is impossible (ephemeral for providers that don’t support it). The three tiers map to what each provider actually supports.

The sandbox rule exists because injecting secrets without isolation gives a false sense of security. Only a sandbox (egress filtering + syscall isolation) makes injection meaningful.

When running on local compute (no sandbox), arpi spawn infers required credentials from the template’s capabilities via registry metadata and checks each one exists in the developer’s shell $ENV. Missing variables produce warnings. arpi does NOT fetch from the IAM provider, does NOT inject anything, and does NOT fail — it only warns. The developer is responsible for having the right env vars set.
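The warn-only check amounts to a set difference against the shell environment. A minimal sketch (the function name and env-var names are hypothetical; the `pop` line just makes the demo deterministic):

```python
import os
import sys

def check_local_env(required):
    """Bare/local mode: warn about missing env vars; never fetch, inject, or fail."""
    missing = [name for name in required if name not in os.environ]
    for name in missing:
        print(f"warning: {name} not set in shell environment", file=sys.stderr)
    return missing  # informational only — spawn proceeds regardless

# Deterministic demo: one var present, one guaranteed absent.
os.environ["SLACK_BOT_TOKEN"] = "xoxb-local-dev"
os.environ.pop("NEON_API_KEY", None)
missing = check_local_env(["SLACK_BOT_TOKEN", "NEON_API_KEY"])
print(missing)  # → ['NEON_API_KEY']
```

Note the contrast with sandbox mode: here nothing is fetched and nothing is injected, so a missing variable is the developer's problem to fix, not arpi's.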

  • Bare mode does not fetch or inject any secrets — only checks shell env
  • Sandbox mode fetches secrets from provider and injects via credential proxy
  • Secrets are never written to disk in any mode
  • Agent inside sandbox cannot exfiltrate Tier 3 secrets via network egress
  • Rotating a secret in the vault takes effect on next arpi spawn without manual intervention
  • Credential proxy logs every secret access with uid, euid, and timestamp