# Skupper proof of concept: 2 clusters & gateways, resiliency walkthrough
This walkthrough shows how Skupper can be used to provide service resiliency across 2 clusters. Each cluster is running a Gateway with an HttpRoute in front of an application Service. By leveraging Skupper, the application Service can be exposed (using the Skupper CLI) from either cluster. If the Service is unavailable on the local cluster, requests will be routed to another cluster that has exposed that Service. This can be very useful in situations where directing traffic to a specific Gateway via other means (like DNS) may take some time to take effect.
## Prerequisites

- Local environment has been set up with a hub and spoke cluster, as per the Multicluster Gateways Walkthrough.
- The example multi-cluster Gateway has been deployed to both clusters.
- The example echo HttpRoute, Service and Deployment have been deployed to both clusters in the `default` namespace, and the `MGC_SUB_DOMAIN` env var has been set in your terminal.
- The Skupper CLI has been installed.
Continuing on from the previous walkthrough, in the first terminal, T1, set up
Skupper on the hub & spoke clusters using the following command:
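Based on the `make skupper-setup` target referenced later in this walkthrough, the setup step is likely:

```shell
# Install and link Skupper on the hub & spoke clusters
make skupper-setup
```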
In T1, expose the Service in the `default` namespace:
Do the same in the workload cluster.
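A minimal sketch of the expose step, assuming the example Deployment is named `echo`, listens on port 8080, and that the kind-style kubectl context names below match your environment (all assumptions — adjust to your setup):

```shell
# Expose the echo Deployment on the Skupper network from the hub cluster.
# Context names are assumptions; check yours with `kubectl config get-contexts`.
kubectl config use-context kind-mgc-control-plane
skupper expose deployment/echo --port 8080 --namespace default

# Repeat on the workload (spoke) cluster:
kubectl config use-context kind-mgc-workload-1
skupper expose deployment/echo --port 8080 --namespace default
```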
Verify the application route can be hit, taking note of the pod name in the response:
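One way to check, assuming the HttpRoute's hostname is held in the `MGC_SUB_DOMAIN` env var and the echo app includes the serving pod's name in its response (both assumptions from this walkthrough's setup):

```shell
# -k skips TLS verification, as the example Gateway uses a self-signed cert
curl -k "https://$MGC_SUB_DOMAIN"
```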
Locate the pod that is currently serving requests; it will be in either the hub or
the spoke cluster. The goal is to scale down that deployment to 0 replicas.
Check in both clusters:
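For example, assuming the `echo` Deployment lives in the `default` namespace and the kind-style context names below (assumptions):

```shell
# List the echo pods in each cluster to see where the serving pod is running
kubectl --context kind-mgc-control-plane -n default get pods
kubectl --context kind-mgc-workload-1 -n default get pods
```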
Run this command to scale down the deployment in the right cluster:
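A sketch of the scale-down, assuming the serving pod was found in the hub cluster (swap in the spoke cluster's context if it was found there; context and Deployment names are assumptions):

```shell
kubectl --context kind-mgc-control-plane -n default scale deployment/echo --replicas=0
```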
Verify the application route can still be hit, and the pod name matches the one that has not been scaled down.
You can also force resolve the DNS result to alternate between the 2 Gateway clusters to verify requests get routed across the Skupper network.
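One way to force resolution is curl's `--resolve` flag, which pins the hostname to a specific Gateway's address regardless of DNS. The IP placeholders below must be replaced with each Gateway's actual external address:

```shell
# Pin the request to the hub cluster's Gateway
curl -k --resolve "$MGC_SUB_DOMAIN:443:<hub-gateway-ip>" "https://$MGC_SUB_DOMAIN"

# Pin the request to the spoke cluster's Gateway
curl -k --resolve "$MGC_SUB_DOMAIN:443:<spoke-gateway-ip>" "https://$MGC_SUB_DOMAIN"
```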
If you get an error response `no healthy upstream` from curl, there may be a
problem with the Skupper network or link. Check back on the output from earlier
commands for any indication of problems setting up the network or link. The
Skupper router & service controller logs can be checked in the `default`
namespace in both clusters.
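A sketch of checking those logs, assuming Skupper was initialised in the `default` namespace with its standard Deployment names, and the kind-style context names used earlier (assumptions):

```shell
kubectl --context kind-mgc-control-plane -n default logs deploy/skupper-router
kubectl --context kind-mgc-control-plane -n default logs deploy/skupper-service-controller
# Repeat with the workload cluster's context.
```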
You may see an error like the below when running the `make skupper-setup` command: