Fleet Management

One tool,
every machine

Manage access to your entire infrastructure with roles, tags, and one-command invites. Same hop simplicity, from 5 machines to 5,000.


Three concepts, that's it

Fleet management in hop comes down to orchestrators, tags, and roles.

Orchestrator

Any hop host can be an orchestrator. It keeps track of your fleet members and role definitions, and distributes access.

Tags

Label your hosts with tags like developer, production, or web. Tags describe what a machine is.

Roles

Define what access people get. A role maps to host tags — "developers access developer + staging hosts." Each person gets their own account.
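
In configuration terms, that mapping is just data. The full roles.json appears later on this page; the developer role from it boils down to:

{ "name": "developer", "host_tags": ["developer", "staging"] }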

How it works

Set up fleet management in four steps.

Install & connect to the orchestrator

Install hop as a daemon on any machine. It prints a creator invite on first startup. Redeem it from your laptop with hop connect — this saves the orchestrator as a known host (e.g. orch-1).

Define roles

Create roles that map to host tags. Use hop admin <host> to manage the orchestrator remotely, e.g. hop admin orch-1 role create developer

Register hosts

Create a fleet invite and use it when installing hop on each machine. Tags are assigned at registration time.

Invite your team

Invite people by role. They automatically get access to every matching host — with their own user account on each.

Terminal — Setting up a fleet
# 1. Install hop daemon (orchestrator)
$ curl -fsSL https://hop.keik.ai/install-daemon.sh | bash
Creator invite: eyJ0eX... (valid for 1 hour)

# 2. Redeem creator invite from your laptop
# This saves the orchestrator as a known host — here it's "orch-1"
$ hop connect eyJ0eX...
Saved as known host: orch-1

# 3. Now use "hop admin orch-1" to manage it remotely
$ hop admin orch-1 role create developer --tags developer,staging
$ hop admin orch-1 role create ops --tags '*' --sudo

# 4. Create a fleet invite for registering hosts
$ hop admin orch-1 fleet-invite --tags developer,web
Fleet invite: abCdEf... (100 uses, 24h expiry)

# 5. Register a host (on the host machine)
$ curl -fsSL https://hop.keik.ai/install-daemon.sh | bash -s -- --register abCdEf...

# 6. Invite a developer
$ hop admin orch-1 invite --role developer --name alice
Invite: xYz123... (covers all developer+staging hosts)

Role-based access

Define who can access what. Each role maps to host tags and access settings.

Sensible defaults, full control

Roles default to individual accounts with no sudo. Override as needed — give ops sudo, create shared service accounts for CI, or add users to specific Unix groups.

  • Individual accounts — each person gets their own Unix user
  • No sudo by default — principle of least privilege
  • No passwords — access only via hop's key-based auth
  • Auto-created — user accounts created on first connection
  • Git-committable — roles live in roles.json, version-control your access policy
Role        Host Tags             Sudo   User Mode    Sandbox
developer   developer, staging    No     Individual   none
ops         * (all hosts)         Yes    Individual   none
security    production, staging   No     Individual   audit
ci          build                 No     Shared       deploy
roles.json
{ "roles": [ { "name": "developer", "host_tags": ["developer", "staging"], "user_mode": "individual", "sudo": false }, { "name": "security", "host_tags": ["production", "staging"], "sandbox": { "read_only": true, "no_network": true } }, { "name": "ops", "host_tags": ["*"], "user_mode": "individual", "sudo": true, "groups": ["docker"] } ] }

Sandbox Policies

Restrict what peers can do at the OS level. Policies are enforced via macOS Seatbelt and Linux Landlock.

Three layers of enforcement

Sandbox policies flow through three layers: role definition (orchestrator sets the baseline), invite creation (host can further restrict), and client connection (client can self-restrict). The result is always the strictest combination.

  • Presets — monitor (read-only, no network, scoped paths), audit (read-only, no network), deploy (scoped write, dangerous commands blocked)
  • Custom policies — combine --read-only, --no-network, --scope, --allow-command
  • Role-based — assign sandbox policies per role in roles.json
  • Client self-restriction — peers can request stricter sandbox at connect time
  • Merge logic — restrictions only tighten, never loosen (see the worked example after the terminal below)
Terminal — Sandbox examples
# Create a monitor-only invite
$ hop invite --preset monitor

# Create a custom sandboxed invite
$ hop invite --read-only --no-network --scope /var/log

# Connect with self-imposed restrictions
$ hop web-1 --preset audit

# Execute a command in read-only mode
$ hop web-1 --read-only -- cat /etc/hosts
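
To make the merge concrete, here is one hypothetical layering for a security engineer reviewing a production host. Each layer can only add restrictions, so the effective policy is the strictest combination of all three:

Layer 1, role baseline (orchestrator):  read-only, no network   (the audit preset)
Layer 2, invite (host):                 adds scope /var/log
Layer 3, client self-restriction:       none
Effective policy:                       read-only, no network, scoped to /var/log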

Using the fleet

Once set up, your team interacts with the fleet using familiar commands.

Terminal — Connecting
# Alice redeems her invite
$ hop connect xYz123...
Added 12 hosts (role: developer)

# List available hosts
$ hop fleet list developer
web-1      online  developer,web
web-2      online  developer,web
staging-1  online  staging

# Connect to a specific host
$ hop web-1
web-1 $ _

# Quick command on a single host
$ hop web-1 -- uptime
14:32:01 up 42 days
Terminal — Fleet operations
# Run a command across all developer hosts
$ hop fleet exec developer -- uname -a
[web-1]     Linux 6.1 x86_64
[web-2]     Linux 6.1 x86_64
[staging-1] Linux 6.1 x86_64

# Check fleet status
$ hop fleet status
Fleet: orch-1
3 hosts registered
3 online, 0 offline

# Admin: see all peers
$ hop admin orch-1 peers
alice  developer  online
bob    ops        online

Scales with you

No infrastructure to manage. The same tool works at every size.

Small Team (5-50)

Single orchestrator, JSON config files, zero infrastructure beyond hop itself. Set up in minutes.

Growing (50-500)

Same architecture. Role definitions keep access organized. Heartbeats keep fleet status current.

Large (500+)

Shard orchestrators by region or team. The P2P foundation means no central bottleneck for data transfer.
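
As a sketch, sharding reuses the commands above unchanged; assuming hypothetical orchestrators orch-eu and orch-us, each region is just another known host:

Terminal — Sharding by region (hypothetical)
# Hypothetical: one orchestrator per region, each saved as its own known host
$ hop connect eyJldS...
Saved as known host: orch-eu
$ hop connect eyJ1cy...
Saved as known host: orch-us

# The same admin commands work against each shard
$ hop admin orch-eu fleet-invite --tags developer,web
$ hop admin orch-us fleet-invite --tags developer,web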

Ready to get started?

Fleet features are built into hop. Install the daemon, redeem the creator invite, and you're managing infrastructure.

Install hop