Replicated Deployments

In this tutorial you will learn how to deploy an application, and use Liqo to replicate it on multiple clusters.

Offloading Multiple Pods Overview

In this example you will configure a scenario composed of a single entry-point cluster, used to deploy the applications (called the origin cluster), and two destination clusters. The deployed application will be replicated on all destination clusters, so that each destination cluster runs exactly one identical copy of the application.

Provision the playground

First, make sure that the requirements for Liqo are satisfied.

Then, let’s open a terminal on your machine and launch the following script, which creates the three above-mentioned clusters with KinD and installs Liqo on all of them.

git clone https://github.com/liqotech/liqo.git
cd liqo
git checkout v0.7.2
cd examples/replicated-deployments
./setup.sh

Export the kubeconfig environment variables to use them in the rest of the tutorial:

export KUBECONFIG=liqo_kubeconf_europe-cloud
export KUBECONFIG_EUROPE_ROME_EDGE=liqo_kubeconf_europe-rome-edge
export KUBECONFIG_EUROPE_MILAN_EDGE=liqo_kubeconf_europe-milan-edge

Note

We suggest exporting the kubeconfig of the origin cluster as default (i.e., KUBECONFIG), since you will mainly interact with it.

Now you should have three clusters with Liqo running. The setup script named them europe-cloud, europe-rome-edge and europe-milan-edge, and respectively configured the following cluster labels:

  • europe-cloud: topology.liqo.io/type=origin

  • europe-rome-edge: topology.liqo.io/type=destination

  • europe-milan-edge: topology.liqo.io/type=destination

You can check that the clusters are correctly labeled through:

liqoctl status
liqoctl status --kubeconfig "$KUBECONFIG_EUROPE_ROME_EDGE"
liqoctl status --kubeconfig "$KUBECONFIG_EUROPE_MILAN_EDGE"

Peer the clusters

Now, you can establish new Liqo peerings from the origin cluster to the destination clusters, e.g., using the out-of-band peering approach.

To implement the desired scenario, let’s first retrieve the peer command from the destination clusters:

PEER_EUROPE_ROME_EDGE=$(liqoctl generate peer-command --only-command --kubeconfig "$KUBECONFIG_EUROPE_ROME_EDGE")
PEER_EUROPE_MILAN_EDGE=$(liqoctl generate peer-command --only-command --kubeconfig "$KUBECONFIG_EUROPE_MILAN_EDGE")

Then, establish the peerings from the origin cluster:

echo "$PEER_EUROPE_ROME_EDGE" | bash
echo "$PEER_EUROPE_MILAN_EDGE" | bash

When the above commands return successfully, you can check the peering status by running:

kubectl get foreignclusters

The output should look like the following, indicating that an outgoing peering is currently active towards both the europe-rome-edge and the europe-milan-edge clusters, and that the cross-cluster network tunnels have been established:

NAME                TYPE        OUTGOING PEERING   INCOMING PEERING   NETWORKING    AUTHENTICATION   AGE
europe-rome-edge    OutOfBand   Established        None               Established   Established      41s
europe-milan-edge   OutOfBand   Established        None               Established   Established      7s

Additionally, you should have two new virtual nodes in the origin cluster, characterized by the labels set at install time:

kubectl get node --selector=liqo.io/type=virtual-node --show-labels
NAME                     STATUS   ROLES   AGE   VERSION   LABELS
liqo-europe-milan-edge   Ready    agent   27s   v1.25.0   liqo.io/remote-cluster-id=9636366f-2718-464e-b1df-3eca5a71aaf6,liqo.io/type=virtual-node,topology.liqo.io/type=destination
liqo-europe-rome-edge    Ready    agent   34s   v1.25.0   liqo.io/remote-cluster-id=7a0f5f75-e98e-4927-b65f-d0274ca03d9c,liqo.io/type=virtual-node,topology.liqo.io/type=destination

Note

Some of the default labels were omitted for the sake of clarity.

Tune namespace offloading

Now, let’s pretend you want to deploy an application that needs to be scheduled on all destination clusters, but not on the origin one. First, create a new namespace:

kubectl create namespace liqo-demo

Then, enable Liqo offloading for that namespace:

liqoctl offload namespace liqo-demo \
  --namespace-mapping-strategy EnforceSameName \
  --pod-offloading-strategy Remote \
  --selector 'topology.liqo.io/type=destination'

The above command configures Liqo with the following behaviour (see the dedicated usage page for additional information concerning namespace offloading configurations):

  • the liqo-demo namespace, and the resources it contains, are offloaded only to the clusters with the topology.liqo.io/type=destination label;

  • the pods living in the liqo-demo namespace are scheduled only on virtual nodes (i.e., never on the origin cluster).
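
To double-check the resulting configuration, you can inspect the NamespaceOffloading resource that liqoctl created in the liqo-demo namespace (a quick sketch, assuming the default resource name offloading used by liqoctl):

kubectl get namespaceoffloading offloading -n liqo-demo -o yaml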

Selectors

This example uses selectors, although they are not strictly necessary here, since all peered clusters are labeled as destinations. Selectors become necessary when you want to target a subset of the peered clusters. More information is available in the offloading with policies example.
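
For instance, to restrict offloading to the europe-rome-edge cluster only, you could match its liqo.io/remote-cluster-id label shown in the virtual node output above (a sketch; the cluster ID differs in every environment):

liqoctl offload namespace liqo-demo \
  --selector 'liqo.io/remote-cluster-id=7a0f5f75-e98e-4927-b65f-d0274ca03d9c'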

You can now query for the liqo-demo namespace in either the europe-rome-edge or the europe-milan-edge cluster, to verify that the remote namespace has been correctly created by Liqo:

kubectl get namespaces liqo-demo --kubeconfig="$KUBECONFIG_EUROPE_ROME_EDGE"
kubectl get namespaces liqo-demo --kubeconfig="$KUBECONFIG_EUROPE_MILAN_EDGE"

If everything is correct, both commands should return an output similar to the following:

NAME        STATUS   AGE
liqo-demo   Active   70s

Deploy applications

Now it is time to deploy the application.

In order to create a replica of the application in each destination cluster, you need to enforce the following conditions:

  • The deployment resource must produce at least one pod for each destination cluster.

  • Each destination cluster must schedule at most one pod on its nodes.

To obtain this result you can leverage the following Kubernetes features (see the sketch after this list):

  • Set the number of replicas in the deployment equal to the number of destination clusters.

  • Set topologySpreadConstraints inside the deployment’s pod template, with maxSkew equal to 1.
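
A minimal sketch of such a deployment follows, assuming the two destination clusters of this example and using the kubernetes.io/hostname topology key to spread pods across the (virtual) nodes; the container image is a placeholder, and the actual ./manifests/deploy.yaml may differ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: liqo-demo-app
spec:
  replicas: 2                      # one replica per destination cluster
  selector:
    matchLabels:
      app: liqo-demo-app
  template:
    metadata:
      labels:
        app: liqo-demo-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1               # at most one pod of difference between nodes
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: liqo-demo-app
      containers:
        - name: nginx              # placeholder container; the repository manifest may differ
          image: nginx:1.23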

The file ./manifests/deploy.yaml contains an example of a deployment which satisfies these conditions. Let’s deploy it:

kubectl apply -f ./manifests/deploy.yaml -n liqo-demo

More replicas

If the deployment specifies a number of replicas higher than the number of virtual nodes, the pods will be scheduled respecting the maxSkew value, which guarantees that the difference between the maximum and the minimum number of pods scheduled on a single node is at most 1.
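
For instance, with three replicas and the two virtual nodes of this example, one cluster would run two pods and the other one. You can try this by scaling the deployment (a hypothetical follow-up step, assuming the deployment in ./manifests/deploy.yaml is named liqo-demo-app, as the pod names below suggest):

kubectl scale deployment liqo-demo-app -n liqo-demo --replicas=3
kubectl get pod -n liqo-demo -o wide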

We can check the pod status and verify that each destination cluster has scheduled exactly one pod on its nodes, i.e., one pod on the europe-rome-edge cluster and the other on europe-milan-edge, both correctly running:

kubectl get pod -n liqo-demo -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE                     NOMINATED NODE   READINESS GATES
liqo-demo-app-777fb9fc8-bbt4d   1/1     Running   0          7m28s   10.113.0.65   liqo-europe-rome-edge    <none>           <none>
liqo-demo-app-777fb9fc8-wrjph   1/1     Running   0          7m28s   10.109.0.62   liqo-europe-milan-edge   <none>           <none>
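
You can also verify this directly from each destination cluster:

kubectl get pod -n liqo-demo --kubeconfig="$KUBECONFIG_EUROPE_ROME_EDGE"
kubectl get pod -n liqo-demo --kubeconfig="$KUBECONFIG_EUROPE_MILAN_EDGE"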

Tear down the playground

Our example is finished; now we can remove all the created resources and tear down the playground.

Unoffload namespaces

Before starting the uninstallation process, make sure that all namespaces are unoffloaded:

liqoctl unoffload namespace liqo-demo

Revoke peerings

Similarly, make sure that all the peerings are revoked:

liqoctl unpeer out-of-band europe-rome-edge
liqoctl unpeer out-of-band europe-milan-edge

At the end of the process, the virtual nodes are removed from the local cluster.
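
You can verify that no virtual nodes remain; once the unpeering has completed, the following command should return no resources:

kubectl get node --selector=liqo.io/type=virtual-node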

Uninstall Liqo

Now you can uninstall Liqo from your clusters:

liqoctl uninstall
liqoctl uninstall --kubeconfig="$KUBECONFIG_EUROPE_ROME_EDGE"
liqoctl uninstall --kubeconfig="$KUBECONFIG_EUROPE_MILAN_EDGE"

Purge

By default the Liqo CRDs will remain in the cluster, but they can be removed with the --purge flag.

Destroy clusters

To tear down the KinD clusters, you can issue:

kind delete cluster --name europe-cloud
kind delete cluster --name europe-rome-edge
kind delete cluster --name europe-milan-edge