Quick Start
This guide provides copy-and-paste instructions to try out the Admiralty open source cluster agent with or without Admiralty Cloud. We use kind (Kubernetes in Docker) to create Kubernetes clusters, but feel free to use something else; just note that the instructions won't be copy-and-paste-able as is.
Example Use Case
We're going to model a centralized cluster topology made of a management cluster (named `cd`) where applications are deployed, and two workload clusters (named `us` and `eu`) where containers actually run. We'll deploy a batch job utilizing both workload clusters, and another targeting a specific region. If you're interested in other topologies or other kinds of applications (e.g., micro-services), this guide is still helpful to get familiar with Admiralty in general. When you're done, you may want to continue with the "Multi-Region AWS Fargate on EKS" tutorial.
Prerequisites
We recommend using a separate kubeconfig for this exercise, so you can simply delete it when you're done:
```bash
export KUBECONFIG=kubeconfig-admiralty-getting-started
```

Create three clusters (a management cluster named `cd` and two workload clusters named `us` and `eu`):

```bash
for CLUSTER_NAME in cd us eu
do
  kind create cluster --name $CLUSTER_NAME
done
```

Label the workload cluster nodes as if they were in different regions (we'll use these labels as node selectors):

```bash
for CLUSTER_NAME in us eu
do
  kubectl --context kind-$CLUSTER_NAME label nodes --all topology.kubernetes.io/region=$CLUSTER_NAME
done
```

tip
Most cloud distributions of Kubernetes pre-label nodes with the names of their cloud regions.
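As a quick sanity check (not part of the original steps), kubectl's standard `-L` flag prints a label as an extra column, so you can confirm the region labels took effect:

```bash
kubectl --context kind-us get nodes -L topology.kubernetes.io/region
kubectl --context kind-eu get nodes -L topology.kubernetes.io/region
```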
(optional speed-up) Pull images on your machine and load them into the kind clusters. Otherwise, each kind cluster would pull images, which could take three times as long.
```bash
images=(
  # cert-manager dependency
  quay.io/jetstack/cert-manager-controller:v0.16.1
  quay.io/jetstack/cert-manager-webhook:v0.16.1
  quay.io/jetstack/cert-manager-cainjector:v0.16.1
  # admiralty open source
  quay.io/admiralty/multicluster-scheduler-agent:0.13.2
  quay.io/admiralty/multicluster-scheduler-scheduler:0.13.2
  quay.io/admiralty/multicluster-scheduler-remove-finalizers:0.13.2
  quay.io/admiralty/multicluster-scheduler-restarter:0.13.2
  # admiralty cloud/enterprise
  quay.io/admiralty/admiralty-cloud-controller-manager:0.13.2
  quay.io/admiralty/kube-mtls-proxy:0.10.0
  quay.io/admiralty/kube-oidc-proxy:v0.3.0 # jetstack's image rebuilt for multiple architectures
)
for image in "${images[@]}"
do
  docker pull $image
  for CLUSTER_NAME in cd us eu
  do
    kind load docker-image $image --name $CLUSTER_NAME
  done
done
```

Install cert-manager in each cluster:
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
for CLUSTER_NAME in cd us eu
do
  kubectl --context kind-$CLUSTER_NAME create namespace cert-manager
  kubectl --context kind-$CLUSTER_NAME apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.16.1/cert-manager.crds.yaml
  helm install cert-manager jetstack/cert-manager \
    --kube-context kind-$CLUSTER_NAME \
    --namespace cert-manager \
    --version v0.16.1 \
    --wait --debug
  # --wait to ensure release is ready before next steps
  # --debug to show progress, for lack of a better way,
  # as this may take a few minutes
done
```

note
Admiralty Open Source uses cert-manager to generate a server certificate for its mutating pod admission webhook. In addition, Admiralty Cloud and Admiralty Enterprise use cert-manager to generate server certificates for Kubernetes API authenticating proxies (mTLS for clusters, OIDC for users), and client certificates for cluster identities (talking to the mTLS proxies of other clusters).
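Before moving on, it can't hurt to confirm that cert-manager is running in all three clusters. This is plain kubectl, nothing Admiralty-specific:

```bash
for CLUSTER_NAME in cd us eu
do
  kubectl --context kind-$CLUSTER_NAME get pods -n cert-manager
done
```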
Installation
Admiralty Cloud, its command line interface (CLI), and additional cluster-agent components complement the open-source cluster agent in useful ways. The CLI makes it easy to register clusters; Kubernetes custom resource definitions (CRDs) make it easy to connect them (with automatic certificate rotations), so you don't have to craft (and re-craft) cross-cluster kubeconfigs and think about routing and certificates.
Admiralty Cloud works with private clusters too. In this context, a private cluster is a cluster whose Kubernetes API isn't routable from another cluster. Cluster-to-cluster communications to private clusters transit through HTTPS/WebSocket/HTTPS tunnels exposed on the Admiralty Cloud API.
Privacy Notice
We don't want to see your data. Admiralty Cloud cannot decrypt cluster-to-cluster communications, because private keys never leave the clusters. All that clusters ever share with Admiralty Cloud is their CA certificates (public keys), to hand to other clusters. Admiralty Cloud acts as a public key directory: "Keybase for Kubernetes clusters," if you like.
If you decide to use the open-source cluster agent only, no problem. There's no CLI or cluster registration, but configuring cross-cluster authentication takes more care and doesn't extend to private clusters; in production, you would also have to rotate tokens manually. A rough sketch of what that involves follows.
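The sketch below shows the general shape of the manual route, not the exact steps from the Admiralty Open Source docs: read a service account token from a workload cluster and wrap it in a kubeconfig secret that a Target can reference. The `default` service account, the token-secret layout (auto-created only on Kubernetes before 1.24), and the secret name `us` are illustrative assumptions.

```bash
CTX=kind-us

# Read the service account's token and cluster CA (illustrative; the
# Admiralty Open Source docs define the exact service account to use).
SA_SECRET=$(kubectl --context $CTX get sa default -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl --context $CTX get secret $SA_SECRET -o jsonpath='{.data.token}' | base64 -d)
CA=$(kubectl --context $CTX get secret $SA_SECRET -o jsonpath='{.data.ca\.crt}')

# Caveat: with kind, this address is only routable from the host; a real
# setup needs a server URL reachable from the management cluster.
SERVER=$(kubectl --context $CTX config view --minify -o jsonpath='{.clusters[0].cluster.server}')

# In the management cluster, store a kubeconfig secret for a Target to reference.
kubectl --context kind-cd create secret generic us --from-literal=config="
apiVersion: v1
kind: Config
clusters:
- name: us
  cluster: {server: $SERVER, certificate-authority-data: $CA}
users:
- name: cd
  user: {token: $TOKEN}
contexts:
- name: us
  context: {cluster: us, user: cd}
current-context: us
"
```

This is exactly the crafting (and re-crafting, when tokens rotate) that the Admiralty Cloud CRDs automate.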
- Cloud/Enterprise
- Open Source
Download the Admiralty CLI:
- Linux/amd64
- Mac
- Windows
- Linux/arm64
- Linux/ppc64le
- Linux/s390x
```bash
curl -Lo admiralty "https://artifacts.admiralty.io/admiralty-v0.13.2-linux-amd64"
chmod +x admiralty
sudo mv admiralty /usr/local/bin
```

Log in (sign up) to Admiralty Cloud:
```bash
admiralty configure
```

note

The `admiralty configure` command takes you through an OIDC log-in/sign-up flow, and eventually saves an Admiralty Cloud API kubeconfig (used to register clusters) and user tokens under `~/.admiralty`. Don't forget to run `admiralty logout` to delete the tokens if needed when you're done.

Install Admiralty in each cluster:
```bash
helm repo add admiralty https://charts.admiralty.io
helm repo update
for CLUSTER_NAME in cd us eu
do
  kubectl --context kind-$CLUSTER_NAME create namespace admiralty
  helm install admiralty admiralty/admiralty \
    --kube-context kind-$CLUSTER_NAME \
    --namespace admiralty \
    --version 0.13.2 \
    --set accountName=$(admiralty get-account-name) \
    --set clusterName=$CLUSTER_NAME \
    --wait --debug
  # --wait to ensure release is ready before next steps
  # --debug to show progress, for lack of a better way,
  # as this may take a few minutes
done
```

Register each cluster:
```bash
for CLUSTER_NAME in cd us eu
do
  admiralty register-cluster --context kind-$CLUSTER_NAME
done
```
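To sanity-check the installation, list the Admiralty pods in each cluster (plain kubectl; the exact pod names depend on the chart version):

```bash
for CLUSTER_NAME in cd us eu
do
  kubectl --context kind-$CLUSTER_NAME get pods -n admiralty
done
```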
Configuration
Cross-Cluster Authentication
- Cloud/Enterprise
- Open Source
In the management cluster, create a Kubeconfig for each workload cluster:
```bash
for CLUSTER_NAME in us eu
do
  cat <<EOF | kubectl --context kind-cd apply -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Kubeconfig
metadata:
  name: $CLUSTER_NAME
spec:
  secretName: $CLUSTER_NAME
  cluster:
    admiraltyReference:
      clusterName: $CLUSTER_NAME
EOF
done
```

In each workload cluster, create a TrustedIdentityProvider for the management cluster:
```bash
for CLUSTER_NAME in us eu
do
  cat <<EOF | kubectl --context kind-$CLUSTER_NAME apply -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: TrustedIdentityProvider
metadata:
  name: cd
spec:
  prefix: "spiffe://cd/"
  admiraltyReference:
    clusterName: cd
EOF
done
```
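The Kubeconfig objects above ask the controller to materialize secrets named `us` and `eu` (per `secretName`). You can check for them with plain kubectl, assuming they land in the same namespace where you created the Kubeconfig objects (`default` here); they may take a moment to appear:

```bash
kubectl --context kind-cd get secret us eu
```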
Multi-Cluster Scheduling
In the management cluster, create a Target for each workload cluster:
```bash
for CLUSTER_NAME in us eu
do
  cat <<EOF | kubectl --context kind-cd apply -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
  name: $CLUSTER_NAME
spec:
  kubeconfigSecret:
    name: $CLUSTER_NAME
EOF
done
```

In the workload clusters, create a Source for the management cluster:
- Cloud/Enterprise
- Open Source
```bash
for CLUSTER_NAME in us eu
do
  cat <<EOF | kubectl --context kind-$CLUSTER_NAME apply -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Source
metadata:
  name: cd
spec:
  userName: spiffe://cd/ns/default/id/default
EOF
done
```
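Targets and Sources are custom resources like any other, so you can list what you just created (the output columns depend on the CRD version):

```bash
kubectl --context kind-cd get targets
kubectl --context kind-us get sources
kubectl --context kind-eu get sources
```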
Demo
Check that virtual nodes have been created in the management cluster to represent workload clusters:
```bash
kubectl --context kind-cd get nodes --watch
# --watch until virtual nodes are created,
# this may take a few minutes, then control-C
```

Label the `default` namespace in the management cluster to enable multi-cluster scheduling at the namespace level:

```bash
kubectl --context kind-cd label ns default multicluster-scheduler=enabled
```

Create Kubernetes Jobs in the management cluster, utilizing all workload clusters (multi-cluster scheduling is enabled at the pod level with the `multicluster.admiralty.io/elect` annotation):

```bash
for i in $(seq 1 10)
do
  cat <<EOF | kubectl --context kind-cd apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: global-$i
spec:
  template:
    metadata:
      annotations:
        multicluster.admiralty.io/elect: ""
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo Processing item $i && sleep 5"]
        resources:
          requests:
            cpu: 100m
      restartPolicy: Never
EOF
done
```

Check that proxy pods for this job have been created in the management cluster, "running" on virtual nodes, and delegate pods have been created in the workload clusters, actually running their containers on real nodes:
```bash
while true
do
  clear
  for CLUSTER_NAME in cd us eu
  do
    kubectl --context kind-$CLUSTER_NAME get pods -o wide
  done
  sleep 2
done
# control-C when all pods have Completed
```

Create Kubernetes Jobs in the management cluster, targeting a specific region with a node selector:
```bash
for i in $(seq 1 10)
do
  cat <<EOF | kubectl --context kind-cd apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: eu-$i
spec:
  template:
    metadata:
      annotations:
        multicluster.admiralty.io/elect: ""
    spec:
      nodeSelector:
        topology.kubernetes.io/region: eu
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo Processing item $i && sleep 5"]
        resources:
          requests:
            cpu: 100m
      restartPolicy: Never
EOF
done
```

Check that proxy pods for this job have been created in the management cluster, and delegate pods have been created in the `eu` cluster only:

```bash
while true
do
  clear
  for CLUSTER_NAME in cd us eu
  do
    kubectl --context kind-$CLUSTER_NAME get pods -o wide
  done
  sleep 2
done
# control-C when all pods have Completed
```

You may observe transient pending candidate pods in the `us` cluster.
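When you're done, you can tear everything down: the kind clusters, the throwaway kubeconfig from the prerequisites, and (if you used Admiralty Cloud) your local tokens:

```bash
for CLUSTER_NAME in cd us eu
do
  kind delete cluster --name $CLUSTER_NAME
done
rm kubeconfig-admiralty-getting-started
admiralty logout  # only if you used Admiralty Cloud
```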