Manage access to your entire infrastructure with roles, tags, and one-command invites. Same hop simplicity, from 5 machines to 5,000.
Fleet management in hop comes down to orchestrators, tags, and roles.
Any hop host can be an orchestrator. It keeps track of your fleet members and role definitions, and distributes access.
Label your hosts with tags like developer, production, or web. Tags describe what a machine is.
Define what access people get. A role maps to host tags — "developers access developer + staging hosts." Each person gets their own account.
Set up fleet management in four steps.
Install hop as a daemon on any machine. It prints a creator invite on first startup. Redeem it from your laptop with hop connect — this saves the orchestrator as a known host (e.g. orch-1).
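For instance, once the daemon prints its creator invite (a minimal sketch; the exact argument form is an assumption):

    $ hop connect <creator-invite>    # saves the orchestrator as a known host, e.g. orch-1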
Create roles that map to host tags. Use hop admin <host> to manage the orchestrator remotely, e.g. hop admin orch-1 role create developer
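To sketch the tag mapping described earlier ("developers access developer + staging hosts"): the --tags flag here is hypothetical, and the real mapping may instead live in roles.json:

    $ hop admin orch-1 role create developer --tags developer,staging    # --tags is illustrative, not documented syntax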
Create a fleet invite and use it when installing hop on each machine. Tags are assigned at registration time.
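A hedged sketch of this step; the invite subcommand and its flags are assumptions rather than documented syntax:

    # ask the orchestrator for a fleet invite that assigns tags at registration (hypothetical command)
    $ hop admin orch-1 invite create --fleet --tags web,production
    # supply the printed invite while installing the hop daemon on the new machine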
Invite people by role. They automatically get access to every matching host — with their own user account on each.
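Again as a hypothetical sketch (the subcommand and --role flag are assumptions):

    $ hop admin orch-1 invite create --role developer
    # the teammate redeems it with hop connect and gets an account on every matching host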
Define who can access what. Each role maps to host tags and access settings.
Roles default to individual accounts with no sudo. Override as needed — give ops sudo, create shared service accounts for CI, or add users to specific Unix groups.
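As a rough sketch of what entries in roles.json (introduced below) might contain; the schema and field names are assumptions that illustrate the tag mapping and overrides described above:

    {
      "roles": {
        "developer": { "tags": ["developer", "staging"], "account": "individual", "sudo": false },
        "ops":       { "tags": ["production", "web"],    "account": "individual", "sudo": true, "groups": ["docker"] },
        "ci":        { "tags": ["staging"],              "account": "shared", "username": "ci-deploy" }
      }
    }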
Roles live in roles.json, so you can version-control your access policy.
Restrict what peers can do at the OS level. Policies are enforced via macOS Seatbelt and Linux Landlock.
Sandbox policies flow through three layers: role definition (orchestrator sets the baseline), invite creation (host can further restrict), and client connection (client can self-restrict). The result is always the strictest combination.
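To illustrate the layering with the flags listed below (only the flag names come from this page; where they attach is an assumption):

    # layer 1: the orchestrator bakes a read-only, path-scoped baseline into the role (hypothetical placement)
    $ hop admin orch-1 role create monitor --read-only --scope /var/log
    # layer 2: the host can restrict further when it creates an invite
    # layer 3: the client self-restricts at connection time
    $ hop connect orch-1 --no-network
    # the effective policy is the strictest combination of all three layers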
Example policies: monitor (read-only, no network, scoped paths), audit (read-only, no network), and deploy (scoped write, dangerous commands blocked). Tune them with --read-only, --no-network, --scope, and --allow-command, and keep them alongside your roles in roles.json.
Once set up, your team interacts with the fleet using familiar commands.
No infrastructure to manage. The same tool works at every size.
Single orchestrator, JSON config files, zero infrastructure beyond hop itself. Set up in minutes.
Same architecture. Role definitions keep access organized. Heartbeats keep fleet status current.
Shard orchestrators by region or team. The P2P foundation means no central bottleneck for data transfer.
Fleet features are built into hop. Install the daemon, redeem the creator invite, and you're managing infrastructure.