Secure, protect, and connect APIs with Kuadrant

Overview

This guide walks you through using Kuadrant to secure, protect, and connect an API exposed by a Gateway using Kubernetes Gateway API. You can use this walkthrough for a Gateway on a single cluster or a Gateway distributed across multiple clusters with a shared listener hostname. This guide shows how specific user personas can each use Kuadrant to achieve their goals.

Prerequisites

This guide expects that you have successfully installed Kuadrant on at least one cluster:

  • You have completed the steps in Install Kuadrant on an OpenShift cluster for one or more clusters.
  • For multicluster scenarios, you have installed Kuadrant on at least two different clusters, and have a shared accessible Redis store.
  • kubectl command line tool is installed.
  • (Optional) User workload monitoring is configured to remote write to a central storage system such as Thanos (also covered in the installation steps).

What Kuadrant can do for you in a multicluster environment

You can leverage Kuadrant's capabilities in single or multiple clusters. The following features are designed to work across multiple clusters as well as in a single-cluster environment.

  • Multicluster ingress: Kuadrant provides multicluster ingress connectivity using DNS to bring traffic to your Gateways using a strategy defined in a DNSPolicy.
  • Global rate limiting: Kuadrant can enable global rate limiting use cases when configured to use a shared Redis store for counters based on limits defined by a RateLimitPolicy.
  • Global auth: You can configure Kuadrant's AuthPolicy to leverage external auth providers to ensure different clusters exposing the same API are authenticating and authorizing in the same way.
  • Integration with federated metrics stores: Kuadrant has example dashboards and metrics for visualizing your Gateways and observing traffic hitting those Gateways across multiple clusters.

Platform engineer user

This guide walks you through deploying a Gateway that provides secure communication and is protected and ready for use by development teams to deploy an API. It then walks through using this Gateway in clusters in different geographic regions, leveraging Kuadrant to bring specific traffic to your geo-located Gateways to reduce latency and distribute load, while still being protected and secured with global rate limiting and auth.

As an optional extra, this guide highlights how these Gateways can be observed and monitored once the user workload monitoring observability stack is deployed.

Application developer user

This guide walks through how you can use the Kuadrant OAS extensions and CLI to generate an HTTPRoute for your API and add specific auth and rate limiting requirements to your API.

Platform engineer workflow

You must perform the following steps in each cluster individually unless specifically excluded.

Environment variables

For convenience, this guide uses the following environment variables:

export zid=change-this-to-your-zone-id
export rootDomain=example.com
export gatewayNS=api-gateway
export gatewayName=external
export devNS=toystore
export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxxx
export AWS_REGION=us-east-1
export clusterIssuerName=lets-encrypt
export EMAIL=foo@example.com

Deployment management tooling

While this document uses kubectl, working with multiple clusters is complex, and it is best to use a tool such as Argo CD to manage the deployment of resources to multiple clusters.

Set up a managed DNS zone

The ManagedZone resource declares a DNS zone and the credentials needed to access it, which Kuadrant uses to set up DNS configuration.

Create the ManagedZone resource

Apply the following ManagedZone resource and AWS credentials to each cluster or, if you are adding an additional cluster, add them to the new cluster. First, create the namespace:

kubectl create ns ${gatewayNS}

Ensure the zone credential is created:

kubectl -n ${gatewayNS} create secret generic aws-credentials \
  --type=kuadrant.io/aws \
  --from-literal=AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  --from-literal=AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY

Then create a ManagedZone:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1alpha1
kind: ManagedZone
metadata:
  name: managedzone
  namespace: ${gatewayNS}
spec:
  id: ${zid}
  domainName: ${rootDomain}
  description: "Kuadrant managed zone"
  dnsProviderSecretRef:
    name: aws-credentials
EOF

Wait for the ManagedZone to be ready in your cluster(s):

kubectl wait managedzone/managedzone --for=condition=ready=true -n ${gatewayNS}

Add a TLS issuer

To secure communication to the Gateways, you will define a TLS issuer for TLS certificates. This example uses Let's Encrypt, but you can use any issuer supported by cert-manager.

The following example uses Let's Encrypt staging; apply this resource to all clusters.

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ${clusterIssuerName}
spec:
  acme:
    email: ${EMAIL} 
    privateKeySecretRef:
      name: le-secret
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
      - dns01:
          route53:
            hostedZoneID: ${zid}
            region: ${AWS_REGION}
            accessKeyIDSecretRef:
              key: AWS_ACCESS_KEY_ID
              name: aws-credentials
            secretAccessKeySecretRef:
              key: AWS_SECRET_ACCESS_KEY
              name: aws-credentials
EOF

Then wait for the ClusterIssuer to become ready:

kubectl wait clusterissuer/${clusterIssuerName} --for=condition=ready=true

Set up a Gateway

For Kuadrant to balance traffic using DNS across two or more clusters, you must define a Gateway with a shared host. You will define this by using an HTTPS listener with a wildcard hostname based on the root domain. As mentioned earlier, these resources must be applied to all clusters.

NOTE: For now, the Gateway is set to accept an HTTPRoute from the same namespace only. This allows you to restrict who can use the Gateway until it is ready for general use.

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ${gatewayName}
  namespace: ${gatewayNS}
  labels:
    kuadrant.io/gateway: "true"
spec:
  gatewayClassName: istio
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    hostname: "*.${rootDomain}"
    name: api
    port: 443
    protocol: HTTPS
    tls:
      certificateRefs:
      - group: ""
        kind: Secret
        name: api-${gatewayName}-tls
      mode: Terminate
EOF

Check the status of your Gateway:

kubectl get gateway ${gatewayName} -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}'
kubectl get gateway ${gatewayName} -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Programmed")].message}'

Your Gateway should be accepted and programmed (that is, valid and assigned an external address).

However, if you check your listener status, you will see that it is not yet programmed or ready to accept traffic due to bad TLS configuration (the certificate secret referenced by the listener does not exist yet).

kubectl get gateway ${gatewayName} -n ${gatewayNS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type=="Programmed")].message}'

Kuadrant can help with this via TLSPolicy.

Secure and protect the Gateway with TLS, rate limiting, and auth policies

While your Gateway is now deployed, it has no exposed endpoints and its listener is not yet programmed. Next, you can set up a TLSPolicy that leverages your ClusterIssuer to set up your listener certificates.

You will also define an AuthPolicy that will set up a default 403 response for any unprotected endpoints, as well as a RateLimitPolicy that will set up a default artificially low global limit to further protect any endpoints exposed by this Gateway.

Set up a default, deny-all AuthPolicy for your Gateway as follows:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1beta2
kind: AuthPolicy
metadata:
  name: ${gatewayName}-auth
  namespace: ${gatewayNS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${gatewayName}
  defaults:
    rules:
      authorization:
        "deny":
          opa:
            rego: "allow = false"
EOF

Check that your policy was accepted by the controller:

kubectl get authpolicy ${gatewayName}-auth -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}'

Next, create a TLSPolicy that targets your Gateway and references the ClusterIssuer you created earlier:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1alpha1
kind: TLSPolicy
metadata:
  name: ${gatewayName}-tls
  namespace: ${gatewayNS}
spec:
  targetRef:
    name: ${gatewayName}
    group: gateway.networking.k8s.io
    kind: Gateway
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: ${clusterIssuerName}
EOF

Check that your policy was accepted by the controller:

kubectl get tlspolicy ${gatewayName}-tls -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}'

Next, create a RateLimitPolicy with an artificially low default limit to protect any endpoints exposed by this Gateway:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1beta2
kind: RateLimitPolicy
metadata:
  name: ${gatewayName}-rlp
  namespace: ${gatewayNS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${gatewayName}
  defaults:
    limits:
      "low-limit":
        rates:
        - limit: 2
          duration: 10
          unit: second
EOF

To check your rate limits have been accepted, enter the following command:

kubectl get ratelimitpolicy ${gatewayName}-rlp -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}'

Next, create a DNSPolicy to bring traffic to the Gateway's listener hosts:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1alpha1
kind: DNSPolicy
metadata:
  name: ${gatewayName}-dnspolicy
  namespace: ${gatewayNS}
spec:
  routingStrategy: loadbalanced
  loadBalancing:
    geo: 
      defaultGeo: US 
    weighted:
      defaultWeight: 120 
  targetRef:
    name: ${gatewayName}
    group: gateway.networking.k8s.io
    kind: Gateway
EOF

NOTE: The DNSPolicy will leverage the ManagedZone that you defined earlier based on the listener hosts defined in the Gateway.

Check that your DNSPolicy has been accepted:

kubectl get dnspolicy ${gatewayName}-dnspolicy -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}'

Finally, deploy a simple test HTTPRoute so that the Gateway policies can be enforced and you can verify connectivity:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: test
  namespace: ${gatewayNS}
spec:
  parentRefs:
  - name: ${gatewayName}
    namespace: ${gatewayNS}
  hostnames:
  - "test.${rootDomain}"
  rules:
  - backendRefs:
    - name: toystore
      port: 80
EOF

Check your Gateway policies are enforced:

kubectl get dnspolicy ${gatewayName}-dnspolicy -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Enforced")].message}'
kubectl get authpolicy ${gatewayName}-auth -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Enforced")].message}'
kubectl get ratelimitpolicy ${gatewayName}-rlp -n ${gatewayNS} -o=jsonpath='{.status.conditions[?(@.type=="Enforced")].message}'

Check your listener is ready:

kubectl get gateway ${gatewayName} -n ${gatewayNS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type=="Programmed")].message}'

Test connectivity and deny-all auth

You can use curl to hit your endpoint. You should see a 403 response. Because this example uses Let's Encrypt staging, you can pass the -k flag:

curl -k -w "%{http_code}" https://$(kubectl get httproute test -n ${gatewayNS} -o=jsonpath='{.spec.hostnames[0]}')
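
If the test hostname does not resolve straight away, give the DNSPolicy-managed records a minute or two to propagate. Assuming dig is available locally, you can check the record directly:

dig test.${rootDomain} +short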

Extending this Gateway to multiple clusters and configuring geo-based routing

To distribute this Gateway across multiple clusters, repeat this setup process for each cluster. By default, this will implement a round-robin DNS strategy to distribute traffic evenly across the different clusters. Setting up your Gateways to serve clients based on their geographic location is straightforward with your current configuration.

Assuming you have deployed Gateway instances across multiple clusters as per this guide, the next step involves updating the DNS controller with the geographic regions of the visible Gateways.

For instance, if you have one cluster in North America and another in the EU, you can direct traffic to these Gateways based on their location by applying the appropriate labels:

For your North American cluster:

kubectl label --overwrite gateway ${gatewayName} kuadrant.io/lb-attribute-geo-code=US -n ${gatewayNS}
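
For your EU cluster, run the corresponding command against that cluster (this assumes EU is the geo code you want to assign to that Gateway):

kubectl label --overwrite gateway ${gatewayName} kuadrant.io/lb-attribute-geo-code=EU -n ${gatewayNS}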

Application developer workflow

This section of the walkthrough focuses on using an OpenAPI Specification (OAS) to define an API. You will use Kuadrant OAS extensions to specify the routing, authentication, and rate-limiting requirements. Next, you will use the kuadrantctl tool to generate an AuthPolicy, an HTTPRoute, and a RateLimitPolicy, which you will then apply to your cluster to enforce the settings defined in your OAS.

NOTE: While this section uses the kuadrantctl tool, this is not essential. You can also create and apply an AuthPolicy, RateLimitPolicy, and HTTPRoute by using the oc or kubectl commands.

To begin, you will deploy a new version of the toystore app to a developer namespace.
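
If the ${devNS} namespace does not already exist, create it first (this guide assumes a fresh developer namespace):

kubectl create ns ${devNS}

Then deploy the app: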

kubectl apply -f https://raw.githubusercontent.com/Kuadrant/Kuadrant-operator/main/examples/toystore/toystore.yaml -n ${devNS}

Prerequisites

  • Install kuadrantctl. You can find a compatible binary and download it from the kuadrantctl releases page.
  • Ability to distribute resources generated by kuadrantctl to multiple clusters, as though you are a platform engineer.

Set up HTTPRoute and backend

Copy at least one of the example OAS documents (for example, the API key sample referenced below as examples/oas-apikey.yaml) to a local location.

Set up some new environment variables:

export oasPath=examples/oas-apikey.yaml
# Ensure you still have these environment variables setup from the start of this guide:
export rootDomain=example.com
export gatewayNS=api-gateway

Use OAS to define our HTTPRoute rules

You can generate Kuadrant and Gateway API resources directly from OAS documents by using an x-kuadrant extension.

NOTE: For a more in-depth look at the OAS extension, see the kuadrantctl documentation.
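
For illustration only, a root-level extension in the OAS document might look like the snippet below. The field names here are assumptions for the purpose of this sketch; the authoritative schema is in the kuadrantctl documentation.

# Illustrative sketch of a root-level x-kuadrant route extension (field names are assumptions).
x-kuadrant:
  route:
    name: toystore
    namespace: ${devNS}
    hostnames:
      - api.${rootDomain}
    parentRefs:
      - name: ${gatewayName}
        namespace: ${gatewayNS}
        kind: Gateway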

Use kuadrantctl to generate your HTTPRoute.

NOTE: The sample OAS has some placeholders for namespaces and domains. You will inject valid values into these placeholders based on your previous environment variables.

Generate the resource from your OAS (envsubst will replace the placeholders):

cat $oasPath | envsubst | kuadrantctl generate gatewayapi httproute --oas -
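
Once you are happy with the generated route, apply it to your cluster, reusing the same generate-and-apply pattern shown for the policies later in this guide:

cat $oasPath | envsubst | kuadrantctl generate gatewayapi httproute --oas - | kubectl apply -f -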

You can then check the created HTTPRoute:

kubectl get httproute toystore -n ${devNS} -o=yaml

We should see that this route is affected by the AuthPolicy and RateLimitPolicy defined as defaults on the gateway in the gateway namespace.

    - lastTransitionTime: "2024-04-26T13:37:43Z"
      message: Object affected by AuthPolicy demo/external
      observedGeneration: 2
      reason: Accepted
      status: "True"
      type: kuadrant.io/AuthPolicyAffected
    - lastTransitionTime: "2024-04-26T14:07:28Z"
      message: Object affected by RateLimitPolicy demo/external
      observedGeneration: 1
      reason: Accepted
      status: "True"
      type: kuadrant.io/RateLimitPolicyAffected

Test connectivity and deny-all auth

We'll use curl to hit an endpoint in the toystore app. As we are using Let's Encrypt staging in this example, we pass the -k flag:

curl -s -k -o /dev/null -w "%{http_code}" "https://$(kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.spec.hostnames[0]}')/v1/toys"

We are getting a 403 because of the existing default, deny-all AuthPolicy applied at the Gateway. Let's override this for our HTTPRoute.

Choose one of the following options:

API key auth flow

Set up an example API key in your cluster(s):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: toystore-api-key
  namespace: ${devNS}
  labels:
    authorino.kuadrant.io/managed-by: authorino
    kuadrant.io/apikeys-by: api_key
stringData:
  api_key: secret
type: Opaque
EOF

Next, generate an AuthPolicy that uses secrets in our cluster as API keys:

cat $oasPath | envsubst | kuadrantctl generate kuadrant authpolicy --oas -

From this, you can see an AuthPolicy generated from your OAS that looks for API keys in secrets labeled api_key and expects the key to be sent in the api_key header. You can now apply it to the cluster:

cat $oasPath | envsubst | kuadrantctl generate kuadrant authpolicy --oas -  | kubectl apply -f -

We should get a 200 from the GET, as it has no auth requirement:

curl -s -k -o /dev/null -w "%{http_code}" "https://$(kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.spec.hostnames[0]}')/v1/toys"

We should get a 401 for a POST request, as it requires an API key and we have not yet provided one:

curl -XPOST -s -k -o /dev/null -w "%{http_code}" "https://$(kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.spec.hostnames[0]}')/v1/toys"

Finally, if we add our API key header, with a valid key, we should get a 200 response:

curl -XPOST -H 'api_key: secret' -s -k -o /dev/null -w "%{http_code}" "https://$(kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.spec.hostnames[0]}')/v1/toys"

OpenID Connect auth flow (skip if interested in API key only)

This section of the walkthrough uses the kuadrantctl tool to create an AuthPolicy that integrates with an OpenID provider and a RateLimitPolicy that leverages JWT values for per-user rate limiting. It is important to note that OpenID requires an external provider. Therefore, you should adapt the following example to suit your specific needs and provider.

The platform engineer workflow established some default policies for authentication and rate limiting at your Gateway. The new developer-defined policies, which you will create, are intended to target your HTTPRoute and will supersede the existing policies for requests to your API endpoints, similar to your previous API key example.

The example OAS uses Kuadrant-based extensions. These extensions enable you to define routing and service protection requirements. For more details, see OpenAPI Kuadrant extensions.

Prerequisites

  • You have installed and configured an OpenID Connect provider, such as https://www.keycloak.org/.
  • You have a realm, client, and users set up. This example assumes a realm in a Keycloak instance called toystore.
  • Copy the OAS from sample OAS for rate-limiting and OIDC to a local location.

Set up an OpenID AuthPolicy

export openIDHost=some.keycloak.com
export oasPath=examples/oas-oidc.yaml

NOTE: The sample OAS has some placeholders for namespaces and domains. You will inject valid values into these placeholders based on your previous environment variables.

Let's use our OAS and kuadrantctl to generate an AuthPolicy to replace the default on the Gateway.

cat $oasPath | envsubst | kuadrantctl generate kuadrant authpolicy --oas -

If we're happy with the generated resource, let's apply it to the cluster:

cat $oasPath | envsubst | kuadrantctl generate kuadrant authpolicy --oas - | kubectl apply -f -

We should see in the status of the AuthPolicy that it has been accepted and enforced:

kubectl get authpolicy -n ${devNS} toystore -o=jsonpath='{.status.conditions}'

On our HTTPRoute, we should also see it now affected by this AuthPolicy in the toystore namespace:

kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.status.parents[0].conditions[?(@.type=="kuadrant.io/AuthPolicyAffected")].message}'

Let's now test our AuthPolicy:

export ACCESS_TOKEN=$(curl -k -H "Content-Type: application/x-www-form-urlencoded" \
        -d 'grant_type=password' \
        -d 'client_id=toystore' \
        -d 'scope=openid' \
        -d 'username=bob' \
        -d 'password=p' "https://${openIDHost}/auth/realms/toystore/protocol/openid-connect/token" | jq -r '.access_token')
curl -k -XPOST --write-out '%{http_code}\n' --silent --output /dev/null "https://$(kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.spec.hostnames[0]}')/v1/toys"

You should see a 401 response code. Make a request with a valid bearer token:

curl -k -XPOST --write-out '%{http_code}\n' --silent --output /dev/null -H "Authorization: Bearer $ACCESS_TOKEN" "https://$(kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.spec.hostnames[0]}')/v1/toys"

You should see a 200 response code.

Set up rate limiting

Lastly, you can generate your RateLimitPolicy to add your rate limits, based on your OAS file. Rate limiting is simplified for this walkthrough and is based on either the bearer token or the API key value. There are more advanced examples in the How-to guides on the Kuadrant documentation site, for example: Authenticated rate limiting with JWTs and Kubernetes RBAC.

You can continue to use this sample OAS document, which includes both authentication and a rate limit:

export oasPath=examples/oas-oidc.yaml
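
Generate the RateLimitPolicy from your OAS and, if you are happy with the output, apply it to the cluster. This assumes the kuadrantctl ratelimitpolicy subcommand, mirroring the authpolicy steps above:

cat $oasPath | envsubst | kuadrantctl generate kuadrant ratelimitpolicy --oas -

cat $oasPath | envsubst | kuadrantctl generate kuadrant ratelimitpolicy --oas - | kubectl apply -f -
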
Again, you should see the rate limit policy accepted and enforced:

kubectl get ratelimitpolicy -n ${devNS} toystore -o=jsonpath='{.status.conditions}'

On your HTTPRoute, you should now see that it is affected by the RateLimitPolicy in the same namespace:

kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.status.parents[0].conditions[?(@.type=="kuadrant.io/RateLimitPolicyAffected")].message}'

Let's now test the rate limiting.

NOTE: You may need to wait a minute for the new rate limits to be applied. With the following requests, you should see some 429 responses.

API Key Auth:

for i in {1..3}
do
printf "request $i "
curl -XPOST -H 'api_key:secret' -s -k -o /dev/null -w "%{http_code}"  "https://$(kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.spec.hostnames[0]}')/v1/toys"
printf "\n -- \n"
done 

And with OpenID Connect Auth:

export ACCESS_TOKEN=$(curl -k -H "Content-Type: application/x-www-form-urlencoded" \
        -d 'grant_type=password' \
        -d 'client_id=toystore' \
        -d 'scope=openid' \
        -d 'username=bob' \
        -d 'password=p' "https://${openIDHost}/auth/realms/toystore/protocol/openid-connect/token" | jq -r '.access_token')
for i in {1..3}
do
curl -k -XPOST --write-out '%{http_code}\n' --silent --output /dev/null -H "Authorization: Bearer $ACCESS_TOKEN" "https://$(kubectl get httproute toystore -n ${devNS} -o=jsonpath='{.spec.hostnames[0]}')/v1/toys"
done

Conclusion

You've completed the secure, protect, and connect walkthrough. To learn more about Kuadrant, visit https://docs.kuadrant.io