Scaling and exposing

Horizontal scaling

Orchestration is often used to spread the load over multiple nodes.

In this exercise, we will launch multiple Web servers.

To make it easier to tell the two servers apart, we will write each Pod's hostname into its home page. Since we are using stock images, we achieve this with an init container.

You can copy-and-paste the lines below.

http2.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-http
  labels:
    k8s-app: test-http
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: test-http
  template:
    metadata: 
      labels:
        k8s-app: test-http
    spec:
      initContainers:
      - name: myinit
        image: busybox
        command: ["sh", "-c", "echo '<html><body><h1>I am ' `hostname` '</h1></body></html>' > /usr/local/apache2/htdocs/index.html"]
        volumeMounts:
        - name: dataroot
          mountPath: /usr/local/apache2/htdocs
      containers:
      - name: mypod
        image: httpd:alpine
        resources:
           limits:
             memory: 200Mi
             cpu: 1
           requests:
             memory: 50Mi
             cpu: 50m
        volumeMounts:
        - name: dataroot
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: dataroot
        emptyDir: {}

BTW: Feel free to change the number of replicas (within reason) and the text shown on each server's home page, if so desired.

Note that the httpd container image already defines the command to run, which in this case is the web server itself. If you run some other container image that does not define a command, you have to specify one in the command field (instead of the sleep infinity used in previous examples), so that the container does the right thing when it is (re)started.

Launch the deployment:

kubectl create -f http2.yaml
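If you later want to change the replica count without editing the file, you can scale the deployment in place; a quick sketch (the replica count of 3 here is just an example):

```shell
# Scale the existing deployment to 3 replicas
kubectl scale deployment test-http --replicas=3

# Wait until all replicas are up and ready
kubectl rollout status deployment/test-http
```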

Also launch pod1 from the basic hands-on exercise.

Check the pods you have, alongside the IPs they were assigned to:

kubectl get pods -o wide
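If you only need the IPs, you can also filter by the deployment's label and extract them with a jsonpath template; a sketch:

```shell
# Print name and IP of each Pod belonging to the test-http deployment
kubectl get pods -l k8s-app=test-http \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
```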

Log into pod1:

kubectl exec -it test-pod -- /bin/sh

Now try to pull the home pages from the two Web servers, using the IPs you obtained above:

curl http://IPofPod

You should get a different answer from the two.

Load balancing

Having to manually switch between the two Pods is obviously tedious. What we really want is to have a single logical address that will automatically load-balance between them.

You can copy-and-paste the lines below.

svc2.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: test-svc
  name: test-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    k8s-app: test-http
  type: ClusterIP

Let’s now start the service:

kubectl create -f svc2.yaml

Look up your service, and write down the ClusterIP it reports:

kubectl get services
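To grab just the ClusterIP (for example, for use in a script), you can again use a jsonpath template; a sketch:

```shell
# Print only the ClusterIP of the test-svc service
kubectl get service test-svc -o jsonpath='{.spec.clusterIP}'
```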

Log into pod1:

kubectl exec -it test-pod -- /bin/sh

Now try to pull the home page from the service IP:

curl http://*IPofService*

Try it a few times… Which Web server is serving you?
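Instead of repeating the command by hand, you can loop over it from inside pod1 and watch the answers alternate between the two Pods (substitute your actual service IP for the IPofService placeholder):

```shell
# Hit the service several times; the responding Pod should vary
for i in 1 2 3 4 5 6; do curl -s http://IPofService; done
```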

Note that you can also use the service's cluster-local DNS name for this (from pod1):

curl http://test-svc.<namespace>.svc.cluster.local

Exposing public services

Sometimes you have the opposite problem: instead of spreading load over internal nodes, you want to expose a service to the public Internet.

The Web services above only serve traffic on the cluster's private network. If you try curl from your laptop, you will never reach those Pods!

What we need is to set up an Ingress instance for our service.

ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: haproxy
  name: test-ingress
spec:
  rules:
  - host: test-service.nrp-nautilus.io
    http:
      paths:
      - backend:
          service:
            name: test-svc
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - test-service.nrp-nautilus.io

Launch the new ingress:

kubectl create -f ingress.yaml

You should now be able to fetch the Web pages from your browser by opening https://test-service.nrp-nautilus.io. Note that SSL termination is already provided for you. More information is available in the Ingress section.
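You can also verify this from the command line on your laptop (outside the cluster), which confirms both the public DNS name and the SSL termination:

```shell
# Fetch the page over HTTPS from outside the cluster
curl https://test-service.nrp-nautilus.io
```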

You can now delete the deployment, service, and ingress:

kubectl delete -f http2.yaml
kubectl delete -f svc2.yaml
kubectl delete -f ingress.yaml

The end

Please make sure you did not leave any running pods, deployments, ingresses or services behind.