Multi-Region AWS Fargate on EKS
This tutorial walks you through exposing a global (multi-region) hello-world service using AWS Fargate on EKS, ALB ingress controllers, the Admiralty open source multi-cluster scheduler, and Admiralty Cloud, with copy-paste instructions. For some context, read the companion article in The New Stack.
Prerequisites
You'll need the following command line tools: eksctl, kubectl, Helm (v3), and curl. You'll also download the Admiralty CLI as part of the installation steps below.
Installation
We're going to:
- provision a management cluster, and four Fargate-enabled workload clusters across two regions with eksctl;
- install cert-manager (a dependency of Admiralty) and Admiralty in all clusters with Helm and register them with Admiralty Cloud;
- install the ALB ingress controller in the workload clusters;
- connect the management cluster to the workload clusters (i.e., configure authentication and scheduling with CRDs).
Clusters
The script below creates a management cluster and two workload clusters in the us-west-2 (Oregon) region, and two additional workload clusters in the eu-west-3 (Paris) region. The `--fargate` option installs a default Fargate profile for the "default" and "kube-system" namespaces in the workload clusters. We do provision a managed node group with the `--managed` option in all clusters, because the cert-manager and Admiralty webhooks cannot run on Fargate.
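Here's a sketch of that script. The cluster names (`management`, `workload-<region>-<index>`) are our convention, and later snippets assume your kubectl contexts are named after the clusters (eksctl generates longer context names that you may want to rename):

```sh
CLUSTERS=(management)        # all cluster names; CLUSTERS[0] is the management cluster
CLUSTER_REGIONS=(us-west-2)  # region of each cluster, same index
PIDS=()

eksctl create cluster --name management --region us-west-2 --managed &
PIDS+=($!)

for REGION in us-west-2 eu-west-3; do
  for I in 1 2; do
    CLUSTER="workload-$REGION-$I"
    CLUSTERS+=("$CLUSTER")
    CLUSTER_REGIONS+=("$REGION")
    eksctl create cluster --name "$CLUSTER" --region "$REGION" --fargate --managed &
    PIDS+=($!)
  done
done

wait "${PIDS[@]}"

WORKLOAD_CLUSTERS=("${CLUSTERS[@]:1}")  # all but the management cluster

# Later snippets refer to clusters by name with kubectl --context; if your
# contexts use eksctl's generated names, rename them, e.g.:
# kubectl config rename-context "<eksctl-context-name>" "$CLUSTER"
```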
tip
We're creating the five clusters asynchronously by sending the `eksctl create cluster` commands to the background with `&`, getting their process IDs right after with `$!`, storing them in an array, and `wait`ing on all of them at the end. Otherwise, provisioning five clusters sequentially could take half an hour. Who needs a workflow engine anyway? We're also building a list of cluster names from region names and indices, to loop through in subsequent steps.
Coffee Break ☕
Even though we're creating the clusters in parallel, creating an EKS cluster still takes a few minutes. It's a good time to read ahead—or make some coffee.
Cert-Manager
Admiralty uses cert-manager to manage various certificates (for mutating webhooks, authenticating proxies, and cluster identities). Let's install it with Helm in all clusters:
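A minimal sketch, assuming the standard jetstack chart repository and the context names from the cluster-creation script:

```sh
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager (with its CRDs) in all five clusters in parallel.
for CLUSTER in "${CLUSTERS[@]}"; do
  helm install cert-manager jetstack/cert-manager \
    --kube-context "$CLUSTER" \
    --namespace cert-manager --create-namespace \
    --set installCRDs=true &
done
wait
```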
By default, Helm doesn't wait for releases to be ready, so the commands above returned quickly. Before moving on, make sure cert-manager's webhook in each cluster is available; otherwise, Helm could fail to install Admiralty in the next step:
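One way to do that, assuming cert-manager's usual webhook deployment name:

```sh
# Block until the webhook deployment reports Available in each cluster.
for CLUSTER in "${CLUSTERS[@]}"; do
  kubectl --context "$CLUSTER" --namespace cert-manager wait \
    deployment cert-manager-webhook \
    --for condition=Available --timeout=5m
done
```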
Admiralty
Download the Admiralty CLI if not already installed. We'll use it to register clusters with Admiralty Cloud.
Log in to (or sign up for) Admiralty Cloud:
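```sh
admiralty configure
```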
note
The `admiralty configure` command takes you through an OIDC log-in/sign-up flow, and eventually saves an Admiralty Cloud API kubeconfig (used to register clusters) and user tokens under `~/.admiralty`.
Install Admiralty in each cluster with Helm:
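A sketch, assuming Admiralty's public chart repository and the `multicluster-scheduler` chart name; check the Admiralty docs for the values your version expects:

```sh
helm repo add admiralty https://charts.admiralty.io
helm repo update

# Install the Admiralty agent in all five clusters in parallel.
for CLUSTER in "${CLUSTERS[@]}"; do
  helm install admiralty admiralty/multicluster-scheduler \
    --kube-context "$CLUSTER" \
    --namespace admiralty --create-namespace &
done
wait
```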
Register each cluster with Admiralty Cloud:
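A sketch; treat the exact subcommand and flags as assumptions for your CLI version (see `admiralty --help`):

```sh
for CLUSTER in "${CLUSTERS[@]}"; do
  # Registers the cluster under its context name with Admiralty Cloud;
  # subcommand/flags may differ by CLI version.
  admiralty register-cluster --context "$CLUSTER"
done
```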
ALB Ingress Controller
AWS Fargate works best with the ALB ingress controller. Install it in each workload cluster. We've condensed the official instructions into one copy-paste-ready script:
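Here's a condensed sketch in that spirit, using the AWS Load Balancer Controller chart that superseded the standalone ALB ingress controller; `$POLICY_ARN` is a placeholder for the controller's IAM policy ARN from the official install guide:

```sh
helm repo add eks https://aws.github.io/eks-charts
helm repo update

for I in "${!CLUSTERS[@]}"; do
  [ "$I" -eq 0 ] && continue  # skip the management cluster
  CLUSTER="${CLUSTERS[$I]}"
  REGION="${CLUSTER_REGIONS[$I]}"

  # Allow the cluster's service accounts to assume IAM roles.
  eksctl utils associate-iam-oidc-provider \
    --cluster "$CLUSTER" --region "$REGION" --approve

  # Create a service account bound to the controller's IAM policy.
  eksctl create iamserviceaccount \
    --cluster "$CLUSTER" --region "$REGION" \
    --namespace kube-system --name aws-load-balancer-controller \
    --attach-policy-arn "$POLICY_ARN" --approve

  helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    --kube-context "$CLUSTER" --namespace kube-system \
    --set clusterName="$CLUSTER" \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller
done
```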
Configuration
Kubeconfigs are matched with TrustedIdentityProviders across clusters via Admiralty Cloud to authenticate controllers in the management cluster calling the Kubernetes APIs of the workload clusters. Targets are matched with Sources to configure scheduling from the management cluster to the workload clusters.
In the management cluster, create a Kubeconfig and a Target for each workload cluster:
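A sketch of those objects. The Target API (`spec.kubeconfigSecret`) matches Admiralty's open-source CRDs; the Kubeconfig kind belongs to the Admiralty Cloud integration, and its fields below are illustrative assumptions, so check the CRDs installed in your cluster:

```sh
for CLUSTER in "${WORKLOAD_CLUSTERS[@]}"; do
  cat <<EOF | kubectl --context management apply -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Kubeconfig  # Admiralty Cloud CRD; field names here are illustrative
metadata:
  name: $CLUSTER
spec:
  secretName: $CLUSTER  # secret where the fetched kubeconfig is stored
  cluster:
    name: $CLUSTER  # the name registered with Admiralty Cloud
---
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
  name: $CLUSTER
spec:
  kubeconfigSecret:
    name: $CLUSTER
EOF
done
```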
In each workload cluster, create a TrustedIdentityProvider and a Source for the management cluster:
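Again a sketch; the Source kind matches Admiralty's open-source CRDs, while the TrustedIdentityProvider fields are illustrative assumptions for the Admiralty Cloud integration:

```sh
for CLUSTER in "${WORKLOAD_CLUSTERS[@]}"; do
  cat <<EOF | kubectl --context "$CLUSTER" apply -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: TrustedIdentityProvider  # Admiralty Cloud CRD; fields are illustrative
metadata:
  name: management
spec:
  prefix: "spiffe://management/"  # identities asserted by the management cluster
---
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Source
metadata:
  name: management
spec:
  userName: management  # or serviceAccountName, depending on your setup
EOF
done
```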
Check that virtual nodes have been created in the management cluster to represent workload clusters:
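For example (virtual node names are generated by Admiralty, so yours will differ):

```sh
kubectl --context management get nodes
# If your version labels virtual nodes with the virtual-kubelet provider label:
kubectl --context management get nodes -l virtual-kubelet.io/provider=admiralty
```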
Label the "default" namespace in the management cluster to enable multi-cluster scheduling at the namespace level:
Demo
Phew! We could still simplify the installation procedure, but then again, you've just provisioned five clusters in two regions, ready to serve global traffic. Now you're about to see how simple Admiralty is to use.
We're going to create two regional Deployments, a Service and an Ingress in the management cluster. They'll propagate to the workload clusters and eventually run as Fargate containers connected to ALBs. Admiralty Cloud will configure DNS with smart routing policies to guarantee high availability (HA) and performance.
Hello World
Create two regional Deployments with 4 replicas each.
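Here's a sketch of those Deployments. The image, names, and port are our assumptions; the annotations and the standard region node selector are the important parts, explained in the notes below:

```sh
for REGION in us-west-2 eu-west-3; do
  cat <<EOF | kubectl --context management apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-$REGION
spec:
  replicas: 4
  selector:
    matchLabels:
      app: hello
      region: $REGION
  template:
    metadata:
      labels:
        app: hello
        region: $REGION
      annotations:
        multicluster.admiralty.io/elect: ""
        multicluster.admiralty.io/no-reservation: ""
        multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling: ""
    spec:
      nodeSelector:
        topology.kubernetes.io/region: $REGION
      containers:
      - name: hello
        image: hashicorp/http-echo  # our choice; any HTTP server works
        args: ["-listen=:8080", "-text=Hello from \$(NODE_NAME)"]
        env:
        - name: NODE_NAME  # expose the (Fargate) node name to the response
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - containerPort: 8080
EOF
done
```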
- We recommend regional Deployments because, in production, you'll probably want to control scale based on regional throughput. To do that, you would associate a HorizontalPodAutoscaler with each regional Deployment.
- The two Deployments share an `app` label, which will be used as a Service selector (they also have distinct `region` labels).
- The `multicluster.admiralty.io/elect` template annotation enables multi-cluster scheduling at the pod level. The two other template annotations are required to work with AWS Fargate. Read the note below if you're interested in the algorithm.
- The standard node selector is used to scope each Deployment to a region.
- Each pod running this hello-world application will respond to HTTP requests by printing the name of the Fargate node (micro-VM) it runs on.
note
- Fargate uses its own scheduler in the workload clusters, so we can't use Admiralty's candidate scheduler. Admiralty's proxy scheduler usually sends a candidate pod in each target cluster as a way to filter virtual nodes, whether the candidate schedulers reserve actual nodes for the candidate pods or not (cf. algorithm). Instead, with the `multicluster.admiralty.io/no-reservation` annotation, we send a single candidate pod per scheduling cycle to the target cluster that scores the highest, and "filter" over multiple cycles until one is scheduled.
- The AWS Fargate scheduler also doesn't consider node selectors, so, with the `multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling` annotation, we tell Admiralty to use our node selector to filter virtual nodes, instead of using it to filter actual nodes in target clusters. Virtual nodes aggregate labels with unique values from target clusters; it's not the full picture, but we can work with it.
Fargate takes about a minute to start pods. Run the following command a few times over a minute to see the multi-cluster scheduling algorithm in action. Eventually, you'll see that proxy pods have been created in the management cluster, "running" on virtual nodes, and delegate pods have been created in the workload clusters, running their containers in real Fargate micro-VMs:
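For example, assuming the context names from earlier:

```sh
# Proxy pods on virtual nodes in the management cluster:
kubectl --context management get pods -o wide

# Delegate pods on Fargate micro-VMs in the workload clusters:
for CLUSTER in "${WORKLOAD_CLUSTERS[@]}"; do
  kubectl --context "$CLUSTER" get pods -o wide
done
```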
Then, create a Service and Ingress targeting the two Deployments in the management cluster.
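A sketch, assuming the port from the Deployments above; replace `YOUR_ACCOUNT` with your Admiralty Cloud account name (see the tip below). Note `alb.ingress.kubernetes.io/target-type: ip`, which ALBs require to target Fargate pods directly:

```sh
cat <<EOF | kubectl --context management apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello  # matches both regional Deployments
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip  # required for Fargate pods
spec:
  rules:
  - host: hello.YOUR_ACCOUNT.admiralty.cloud  # your Admiralty Cloud subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
EOF
```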
tip
We use a host name in your Admiralty Cloud account's subdomain, under `admiralty.cloud`. You could configure a CNAME record for each application in your domain, or configure a custom domain in Admiralty Enterprise and an NS record in your domain once and for all.
Services and Ingresses follow the delegate pods in the workload clusters, and ALB ingress controllers will soon provision load balancers for the Ingresses (the Ingress in the management cluster doesn't have a load balancer because there's no ingress controller in that cluster):
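For example:

```sh
for CLUSTER in "${WORKLOAD_CLUSTERS[@]}"; do
  kubectl --context "$CLUSTER" get ingress hello  # ADDRESS shows the ALB host name
done
```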
Testing Performance
Call your application. Depending on where you are in the world, you'll see a response from a node in Oregon or Paris:
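Assuming the host name above (and plain HTTP, since we didn't configure TLS on the ALBs):

```sh
curl "http://hello.YOUR_ACCOUNT.admiralty.cloud"
```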
Call from multiple locations and check the latency, e.g., by visiting https://tools.keycdn.com/performance:
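For example, a portable loop you can leave running (replace the host name as before):

```sh
# Sample the response and total request time once per second; later, this
# also lets you watch failover during the HA test below.
while true; do
  curl -s -w ' (%{time_total}s)\n' "http://hello.YOUR_ACCOUNT.admiralty.cloud"
  sleep 1
done
```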
Testing HA
Let's simulate a regional outage (or a failed rollout) by scaling one of the regional Deployments down to 0 replicas, ideally the one closest to you, so you can monitor the failover with curl:
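Assuming the Deployment names from the sketch above:

```sh
# Pick whichever regional Deployment is closest to you, e.g.:
kubectl --context management scale deployment hello-eu-west-3 --replicas=0
```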
Admiralty Cloud uses 1s TTLs, but global DNS record propagation can take up to a minute. In practice, failover usually takes about 10s. That's definitely better than average, but if you need the best, you should consider integrations with anycast load balancers available with Admiralty Enterprise.
For a blue/green cluster upgrade, you would drain a virtual node while the corresponding cluster is being upgraded:
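A sketch; substitute the virtual node's name from `kubectl get nodes`:

```sh
# Evict delegate workloads from the cluster being upgraded.
kubectl --context management drain "$VIRTUAL_NODE_NAME" --ignore-daemonsets

# When the upgrade is done, make the virtual node schedulable again:
kubectl --context management uncordon "$VIRTUAL_NODE_NAME"
```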
Cleanup
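Delete the five clusters when you're done, e.g., in parallel as we created them (a sketch, reusing the arrays from the cluster-creation script):

```sh
PIDS=()
for I in "${!CLUSTERS[@]}"; do
  eksctl delete cluster --name "${CLUSTERS[$I]}" --region "${CLUSTER_REGIONS[$I]}" &
  PIDS+=($!)
done
wait "${PIDS[@]}"
```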
Summary
In this tutorial, we've seen how to integrate Admiralty with AWS Fargate on EKS to deploy a service in multiple regions for high availability and performance. Don't hesitate to contact us if you have any questions about this tutorial or if you're interested in custom domains and/or other load balancing integrations available with Admiralty Enterprise.