Offloading with Policies
This tutorial guides you through the core Liqo features. You will learn how to tune namespace offloading and how to specify the target clusters through the cluster selector concept.
More specifically, you will configure a scenario composed of a single entry point cluster leveraged for the deployment of the applications (i.e., the Venice cluster, located in north Italy) and two worker clusters characterized by different geographical regions (i.e., the Florence and Naples clusters, respectively located in center and south Italy). Then, you will offload a given namespace (and the applications contained therein) to a subset of the worker clusters (i.e., only to the Naples cluster), while allowing pods to be also scheduled on the local cluster (i.e., the Venice one).
Provision the playground
First, check that you are compliant with the requirements.
Then, open a terminal on your machine and launch the following script, which creates the three above-mentioned clusters with KinD and installs Liqo on all of them. Each cluster is composed of a single combined control-plane + worker node.
git clone https://github.com/liqotech/liqo.git
cd liqo
git checkout v0.8.1
cd examples/offloading-with-policies
./setup.sh
Export the kubeconfigs environment variables to use them in the rest of the tutorial:
export KUBECONFIG="$PWD/liqo_kubeconf_venice"
export KUBECONFIG_FLORENCE="$PWD/liqo_kubeconf_florence"
export KUBECONFIG_NAPLES="$PWD/liqo_kubeconf_naples"
We suggest exporting the kubeconfig of the first cluster as default (i.e., KUBECONFIG), since it will be the entry point of the virtual cluster and you will mainly interact with it.
At this point, you should have three clusters with Liqo installed on them. The setup script named them venice, florence and naples, and configured the following cluster labels, respectively:
- venice: topology.liqo.io/region=north
- florence: topology.liqo.io/region=center
- naples: topology.liqo.io/region=south
You can check that the clusters are correctly labeled through:
liqoctl status
liqoctl --kubeconfig $KUBECONFIG_FLORENCE status
liqoctl --kubeconfig $KUBECONFIG_NAPLES status
These labels will be propagated to the virtual nodes corresponding to each cluster. In this way, you can easily identify the clusters through their characterizing labels, and define the appropriate scheduling policies.
Peer the clusters
Once Liqo is installed in your clusters, you can establish new peerings. In this example, since the API Servers are mutually reachable, you will use the out-of-band peering approach.
To implement the desired scenario, let’s first retrieve the peer command from the Florence and Naples clusters:
PEER_FLORENCE=$(liqoctl generate peer-command --only-command --kubeconfig $KUBECONFIG_FLORENCE)
PEER_NAPLES=$(liqoctl generate peer-command --only-command --kubeconfig $KUBECONFIG_NAPLES)
Then, establish the peerings from the Venice cluster:
echo "$PEER_FLORENCE" | bash
echo "$PEER_NAPLES" | bash
When the above commands return successfully, you can check the peering status by running:
kubectl get foreignclusters
The output should look like the following, indicating that an outgoing peering is currently active towards both the Florence and the Naples clusters, and that the cross-cluster network tunnels have been established:
NAME       TYPE        OUTGOING PEERING   INCOMING PEERING   NETWORKING    AUTHENTICATION   AGE
florence   OutOfBand   Established        None               Established   Established      111s
naples     OutOfBand   Established        None               Established   Established      98s
Additionally, you should have two new virtual nodes in the Venice cluster, characterized by the labels provided at install time:
kubectl get node --selector=liqo.io/type=virtual-node --show-labels
NAME            STATUS   ROLES   AGE   VERSION   LABELS
liqo-florence   Ready    agent   19s   v1.25.0   liqo.io/remote-cluster-id=5f3b5abd-cccb-4f75-931b-d6b1ca95fa7d,liqo.io/type=virtual-node,topology.liqo.io/region=center
liqo-naples     Ready    agent   14s   v1.25.0   liqo.io/remote-cluster-id=edc8c24a-4c11-48b8-8b0e-2a95cf7464af,liqo.io/type=virtual-node,topology.liqo.io/region=south
Some of the default labels were omitted for the sake of clarity.
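Since these are regular node labels, they can be referenced by the standard Kubernetes scheduling constructs (node selectors and affinities). As a purely illustrative sketch, a pod created in a namespace that has been offloaded (as done in the next section) could be pinned to the southern region as follows; the pod name and image are arbitrary and not part of this tutorial:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-south        # arbitrary example name
spec:
  containers:
  - name: nginx
    image: nginx           # arbitrary example image
  # Standard node selector matching the label exposed by the liqo-naples virtual node
  nodeSelector:
    topology.liqo.io/region: south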
Tune namespace offloading
Now, let’s suppose you want to deploy an application that needs to be scheduled in the north and in the south region, but not in the center one. This constraint needs to be enforced at the infrastructural level: the dev team does not need to be aware of the required affinities and/or node selectors, nor should it be able to bypass them.
First, you should create a new namespace in the Venice cluster, which will host the application:
kubectl create namespace liqo-demo
Then, enable Liqo offloading for that namespace:
liqoctl offload namespace liqo-demo \
  --namespace-mapping-strategy EnforceSameName \
  --pod-offloading-strategy LocalAndRemote \
  --selector 'topology.liqo.io/region=south'
The above command configures the following aspects (see the dedicated usage page for additional information concerning namespace offloading configurations):
- The liqo-demo namespace is replicated with the same name in the other clusters.
- The liqo-demo namespace, and the contained resources, are offloaded only to the clusters with the topology.liqo.io/region=south label.
- The pods living in the liqo-demo namespace are free to be scheduled onto both physical and virtual nodes.
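For reference, the same configuration can also be expressed declaratively, by creating the NamespaceOffloading resource yourself instead of running liqoctl. The following is a sketch of what that resource would look like, assuming the offloading.liqo.io/v1alpha1 API used by Liqo (compare with the status output shown below):

apiVersion: offloading.liqo.io/v1alpha1
kind: NamespaceOffloading
metadata:
  name: offloading          # the resource is expected to use this fixed name
  namespace: liqo-demo
spec:
  namespaceMappingStrategy: EnforceSameName
  podOffloadingStrategy: LocalAndRemote
  clusterSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: topology.liqo.io/region
        operator: In
        values:
        - south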
The NamespaceOffloading resource created by liqoctl in the liqo-demo namespace exposes the status of the offloading process, including a global OffloadingPhase, which is expected to be Ready, and a list of RemoteNamespaceConditions, one for each remote cluster.
In this case:
- the Florence cluster has not been selected to offload the liqo-demo namespace, since it does not match the cluster selector;
- the Naples cluster has been selected to offload the liqo-demo namespace, and the namespace has been correctly created.
kubectl get namespaceoffloadings offloading -n liqo-demo -o yaml
...
status:
  observedGeneration: 1
  offloadingPhase: Ready
  remoteNamespaceName: liqo-demo
  remoteNamespacesConditions:
    florence-7ab115:
    - lastTransitionTime: "2023-01-30T09:50:05Z"
      message: The remote cluster has not been selected through the ClusterSelector field
      reason: ClusterNotSelected
      status: "False"
      type: OffloadingRequired
    naples-5eada1:
    - lastTransitionTime: "2023-01-30T09:50:05Z"
      message: The remote cluster has been selected through the ClusterSelector field
      reason: ClusterSelected
      status: "True"
      type: OffloadingRequired
    - lastTransitionTime: "2023-01-30T09:50:05Z"
      message: Namespace correctly offloaded to the remote cluster
      reason: NamespaceCreated
      status: "True"
      type: Ready
Indeed, if you query for the namespaces in the Naples cluster, you should see the following output, confirming that the remote namespace has been correctly created by Liqo:
kubectl get namespaces liqo-demo --kubeconfig="$KUBECONFIG_NAPLES"
NAME        STATUS   AGE
liqo-demo   Active   70s
Instead, the same command executed in the Florence cluster should return an error, as the namespace has not been replicated:
kubectl get namespaces liqo-demo --kubeconfig="$KUBECONFIG_FLORENCE"
Error from server (NotFound): namespaces "liqo-demo" not found
All constraints specified during namespace offloading are automatically enforced by Liqo, and merged with other pod-level specifications.
To verify this, you can now create two deployments in the liqo-demo namespace, characterized by additional NodeAffinity constraints. More precisely, one (app-south) is forced to be scheduled onto the virtual node representing the Naples cluster, while the other (app-center) is forced onto the virtual node representing the Florence cluster (which is incompatible with the namespace-level constraints).
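The actual manifests are provided in ./manifests/deploy.yaml within the example directory. As an illustration only, the app-south deployment presumably resembles the following sketch; the labels, image and exact structure are assumptions, not the verbatim content of that file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-south
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-south
  template:
    metadata:
      labels:
        app: app-south
    spec:
      containers:
      - name: app
        image: nginx          # placeholder image
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # Pin the pod to the virtual node exposing the "south" region label
              - key: topology.liqo.io/region
                operator: In
                values:
                - south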
kubectl apply -f ./manifests/deploy.yaml -n liqo-demo
Checking the pod status, you can verify that one pod has been scheduled onto the Naples cluster and is correctly running, while the other remains Pending due to conflicting requirements (i.e., no node is available to satisfy all of its constraints).
kubectl get pod -n liqo-demo -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
app-center-58d8ff79c9-xf6pz   0/1     Pending   0          27s   <none>        <none>        <none>           <none>
app-south-545766885-zn4nx     1/1     Running   0          27s   10.204.0.13   liqo-naples   <none>           <none>
You can remove the conflicting node affinity from the app-center deployment, and check that the generated pod gets scheduled onto either the Venice (i.e., locally) or the Naples cluster, as constrained by the namespace offloading configuration.
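The stanza to drop is the nodeAffinity term selecting the center region; assuming a structure analogous to the sketch above, it presumably looks like the following (again, not the verbatim content of deploy.yaml):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        # Conflicting requirement: the namespace is offloaded only to the "south" region
        - key: topology.liqo.io/region
          operator: In
          values:
          - center

Once this section is removed (for instance, by editing the deployment with kubectl edit deployment app-center -n liqo-demo), the pod is only subject to the namespace-level cluster selector and can be scheduled on the Venice or Naples nodes.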
Tear down the playground
Our example is finished; now we can remove all the created resources and tear down the playground.
Before starting the uninstallation process, make sure that all namespaces are unoffloaded:
liqoctl unoffload namespace liqo-demo
Every pod that was offloaded to a remote cluster is going to be rescheduled onto the local cluster.
Similarly, make sure that all the peerings are revoked:
liqoctl unpeer out-of-band florence
liqoctl unpeer out-of-band naples
At the end of the process, the virtual nodes are removed from the local cluster.
Now you can uninstall Liqo from your clusters with liqoctl:
liqoctl uninstall
liqoctl uninstall --kubeconfig="$KUBECONFIG_FLORENCE"
liqoctl uninstall --kubeconfig="$KUBECONFIG_NAPLES"
By default, the Liqo CRDs will remain in the cluster, but they can be removed with the --purge flag:
liqoctl uninstall --purge
liqoctl uninstall --kubeconfig="$KUBECONFIG_FLORENCE" --purge
liqoctl uninstall --kubeconfig="$KUBECONFIG_NAPLES" --purge
To tear down the KinD clusters, you can issue:
kind delete cluster --name venice
kind delete cluster --name florence
kind delete cluster --name naples