Understanding Kubernetes Networking and Ingress

In this lab you will dive into three core components:

  • Kubernetes Service Discovery
  • Kubernetes Services Extended
  • Ingress and Ingress Controllers

To begin, create a dedicated directory for this lab and switch into it:

cd ~

mkdir networking-ingress && cd networking-ingress

Kubernetes Service Discovery #

Kubernetes Services play an important role in Service Discovery. In the Understanding Kubernetes Deployments lab, you created a Service of type ClusterIP to sit in front of your application. This gave you a single, internal entry point to your application and also created an A record in kube-dns with the following format: <service-name>.<namespace-name>.svc.cluster.local.

Let’s validate this by deploying a “debug” Pod and using the dig command. Run the following command to deploy your Pod:

kubectl run --rm -it toolbox --image=jacobmammoliti/toolbox -- sh

Once you have a shell, run the following DiG command:

Note: DiG is a common command-line tool for querying DNS name servers.

dig random-facts-app-service.random-facts-app-deployment.svc.cluster.local

In the answer section, you will see an A record with an IP address. If you now exit the Pod and run kubectl get service in the random-facts-app-deployment namespace, you will see that the ClusterIP matches the IP address returned by the DiG command.
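For example, after exiting the Pod:

kubectl get service --namespace random-facts-app-deployment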

Kubernetes also has the concept of Headless Services, where kube-dns returns the set of Pod IPs selected by the Service rather than a single Service IP.

Using your Service manifest from the previous lab as a starting point, create another one that defines a Headless service in the random-facts-app-deployment namespace.

Hint: Use kubectl explain to help you figure out how this can be done.
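For example, to browse the fields available on a Service spec and their documentation:

kubectl explain service.spec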

Once done, apply it to your GKE cluster. Create another “debug” Pod and use the dig command again, but this time against your new headless service, to see how the response has changed.
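For example, assuming you named your headless Service random-facts-app-service-headless (as shown in the output later in this lab):

dig random-facts-app-service-headless.random-facts-app-deployment.svc.cluster.local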

In the answer section, you will now see multiple A records.

Kubernetes Services Extended #

Let’s expose a service outside of the cluster with a Service of type LoadBalancer.

Using your previous Service manifests as a starting point, create a new Service manifest and change the type to LoadBalancer, then apply your new service to the cluster. A sketch and the expected result are shown below.
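As a sketch, a manifest along these lines should work; the selector is a placeholder and should be replaced with the one your existing Services use:

apiVersion: v1
kind: Service
metadata:
  name: random-facts-app-service-external
  namespace: random-facts-app-deployment
spec:
  type: LoadBalancer
  selector:
    app: random-facts-app # placeholder; reuse the selector from your existing Service
  ports:
  - port: 5000
    targetPort: 5000 # assumes your app listens on 5000, as in the previous lab

Once applied, your services should look as shown below: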

Note: The external IP may show <pending> while GCP provisions the load balancer; give it a few moments and an external IP will show up.

NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
random-facts-app-service            ClusterIP      172.30.162.150   <none>          5000/TCP         4h8m
random-facts-app-service-external   LoadBalancer   172.30.158.159   104.197.2.154   5000:32318/TCP   11m
random-facts-app-service-headless   ClusterIP      None             <none>          5000/TCP         66m

Open a new tab in your browser and navigate to http://<YOUR_EXTERNAL_IP>:5000.
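Alternatively, you can test from the command line:

curl http://<YOUR_EXTERNAL_IP>:5000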

Ingress Controllers #

In the previous section, we discussed one method for making a service available outside of your cluster. However, relying on the LoadBalancer type to expose each service can quickly become expensive and inefficient: each service requires its own public IP address, which you pay for and which can lead to more complex DNS management. Additionally, the LoadBalancer type does not support Layer 7 routing.

Ingress Controllers use Ingress objects to determine which Service traffic should be sent to, based on the hostname or path of the incoming request.

In this lab, you will deploy the NGINX Ingress Controller. To deploy it to your cluster, run the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.1/deploy/static/provider/cloud/deploy.yaml

As part of your lab setup, a static IP address named wildcard was reserved for you for use by the NGINX Ingress Controller. You can get that IP by running the following gcloud command:

gcloud compute addresses list
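If multiple addresses are listed, you can narrow the output down by name:

gcloud compute addresses list --filter="name=wildcard"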

Run the following patch command to tell the Ingress Controller to use this IP:

kubectl patch svc ingress-nginx-controller \
--namespace ingress-nginx \
--patch '{"spec":{"loadBalancerIP":"<YOUR_EXTERNAL_IP>"}}'

You can validate your Ingress Controller is up and running with the following command:

kubectl get pods,services --namespace ingress-nginx

Validate that the IP address you got from the gcloud command matches the “EXTERNAL-IP” of the ingress-nginx-controller service. If it does not match, please inform an instructor.

Path Based Routing #

One way to direct traffic to upstream services is through the use of paths. Below is a sample Ingress manifest that can be used as a starting point to define the correct configuration for routing traffic to your application via the NGINX Ingress Controller. Make sure to update the Ingress object to point to your random-facts-app-service service.

Note: The following manifest purposefully has mistakes and is meant to be treated as a starting point.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: random-facts-app-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8080

Apply the Ingress manifest to your cluster. If your Ingress object was configured correctly, you will be able to access your application at the IP address you noted above.
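For example, assuming you saved the manifest as ingress.yaml:

kubectl apply -f ingress.yaml

curl http://<YOUR_EXTERNAL_IP>/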

Host Based Routing #

Another way to direct traffic to upstream services is by hostname or, more specifically, the “Host” header. As part of your lab setup, DNS has been configured for you and a wildcard A record has been added to point to your NGINX Ingress Controller’s external IP. The DNS format is as follows: *.<YOUR_STUDENT_ID>.<RANDOM_ID>.workshops.acceleratorlabs.ca, where:

  • <YOUR_STUDENT_ID> is the first six letters of your GCP project ID (before the hyphen)
  • <RANDOM_ID> is the 5-digit number in your GCP project ID (following the hyphen)

You can use app as the subdomain.

For example, given the GCP project ID jonsmi-12345:

  • <YOUR_STUDENT_ID>: jonsmi
  • <RANDOM_ID>: 12345

Your app’s address will be: app.jonsmi.12345.workshops.acceleratorlabs.ca.

Update your Ingress object to now specify the host as an additional rule.
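To validate that your Ingress object is defined properly, you can list Ingress objects across all namespaces (matching the output shown here):

kubectl get ingress --all-namespaces

The output should look like the following: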

NAMESPACE                     NAME                       CLASS   HOSTS                                                                         ADDRESS          PORTS   AGE
random-facts-app-deployment   random-facts-app-ingress   nginx   app.<YOUR_STUDENT_ID>.<RANDOM_ID>.workshops.acceleratorlabs.ca                35.239.236.127   80      29s

Adding TLS with cert-manager #

So far, you have interacted with your application over HTTP. Let’s look at how we can now secure our connections and connect over HTTPS.

To begin, we will install cert-manager. cert-manager is a certificate management tool that can integrate with many certificate issuers including Let’s Encrypt, HashiCorp Vault, and Venafi. You can install cert-manager with the following command:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml

Once deployed, we can create a ClusterIssuer. A ClusterIssuer represents a certificate authority (CA) that is able to generate signed certificates. The first ClusterIssuer we will deploy is the Let’s Encrypt Staging CA, which allows us to verify that we can properly retrieve a signed certificate without hitting API rate limits before requesting one from their production environment.

Note: cert-manager also has an Issuer type which allows you to define a certificate authority at the namespace level.

Use the following YAML to create your ClusterIssuer:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: <YOUR_EMAIL> # This is used by Let's Encrypt to notify you on certificate expiry
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
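Save the manifest to a file and apply it (the filename here is an assumption):

kubectl apply -f cluster-issuer.yaml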

Once created, validate that your ClusterIssuer was created successfully and shows Ready.
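One way to check:

kubectl get clusterissuer

The output will be similar to the following: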

NAME                  READY   AGE
letsencrypt-staging   True    40s

The next step is to update your Ingress object so that it serves your application over HTTPS. This is done via the spec.tls stanza. Update your current Ingress object and add the following snippet:

  tls:
  - hosts:
      - app.<YOUR_STUDENT_ID>.<RANDOM_ID>.workshops.acceleratorlabs.ca
    secretName: random-facts-app-tls

The last piece is to let cert-manager know that you want to request a certificate for this Ingress object. This is done by adding an annotation to the Ingress resource. From there, cert-manager will facilitate creating the Certificate. Using the cert-manager documentation, determine the single annotation you need to add to have cert-manager create your certificate.

Once you update your Ingress object, validate that cert-manager successfully created your certificate:

kubectl get certificate --namespace random-facts-app-deployment

Your output will look similar to the following:

NAME                   READY   SECRET                 AGE
random-facts-app-tls   True    random-facts-app-tls   1m

If you look at your Ingress object again, you will now see 443 listed under the PORTS column.

If you visit your application at your domain name, you will still see a privacy error. This is because we are using the Let’s Encrypt staging CA, which is not trusted by browsers.

Create a new ClusterIssuer with the name: letsencrypt-production that uses the Let’s Encrypt production server (https://acme-v02.api.letsencrypt.org/directory) and update your Ingress object to request a certificate from there.
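As a sketch, the production ClusterIssuer will look almost identical to the staging one, with the name and server swapped (the privateKeySecretRef name below is an assumption that mirrors the staging pattern):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <YOUR_EMAIL> # This is used by Let's Encrypt to notify you on certificate expiry
    privateKeySecretRef:
      name: letsencrypt-production # assumed name, mirroring the staging ClusterIssuer
    solvers:
    - http01:
        ingress:
          class: nginx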

Once completed, you will be able to access your application through your web browser via HTTPS. Validate your Ingress with an instructor before moving on to the next labs.