In the first post we created two subdomain certificates, and in the second post we created two Docker images. Each image offers a simple self-hosted service that runs on the Kestrel server and is additionally configured for SSL.
Now we want to set up a Kubernetes cluster, configure an Ingress service and enable the SSL passthrough option. The Ingress then passes the requests directly to the services, and the client receives the certificates from the Pods. This scenario is usually not very useful, because normally the Ingress should be used for SSL offloading.
Create the Kubernetes Cluster (ACS)
First an Azure Container Service is created with Kubernetes as the orchestrator, then the credentials are retrieved and inserted into the kubectl configuration:
az acs create --orchestrator-type=kubernetes --agent-vm-size=Standard_D1_v2 --agent-count=1 --resource-group ACS-SSL-Ingress --name=acs-2019-02-05 --dns-prefix=acs-2019-02-05dns --generate-ssh-keys --admin-username=acsroot
az acs kubernetes get-credentials --resource-group=ACS-SSL-Ingress --name=acs-2019-02-05
(ACS is no longer being developed and AKS should be used instead. My original example was based on Windows worker nodes, which is why ACS is used here.)
Grant access to the Azure Container Registry
I had already created an Azure Container Registry (CR2019). We now authorize the Kubernetes cluster with read permissions on the CR. This allows the ACS to pull the images directly without having to store the registry credentials as Kubernetes secrets. Use az acs list to get the Service Principal:
az acs list -g ACS-SSL-Ingress
And set read permissions for this Service Principal on our Azure Container Registry (CR2019):
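The original post showed this step in the Azure Portal. A possible CLI equivalent is sketched below; it assumes the Service Principal ID can be read from servicePrincipalProfile.clientId in the az acs list output and that the built-in Reader role is sufficient for pulling images:

# Extract the Service Principal of the cluster
# (assumption: servicePrincipalProfile.clientId holds the ID we need)
$sp = az acs list -g ACS-SSL-Ingress --query "[0].servicePrincipalProfile.clientId" -o tsv
# Assign read permissions on the registry to that Service Principal
az role assignment create --assignee $sp --role Reader --scope $(az acr show --name cr2019 --query id -o tsv)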
Upload the local Docker Images to ACR
To upload the images from the previous post to the ACR, the images must be tagged accordingly (lines 1 and 2). Then, after logging in with the personal Azure user in line 3 (the user is already authenticated in the shell), the images can be pushed (lines 4 and 5):
docker tag sslservice1 cr2019.azurecr.io/sslservice1
docker tag sslservice2 cr2019.azurecr.io/sslservice2
az acr login --name cr2019
docker push cr2019.azurecr.io/sslservice1
docker push cr2019.azurecr.io/sslservice2
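If you want to verify the upload, the repositories in the registry can be listed (a quick check, assuming the az CLI session is still logged in):

# List all repositories in the registry - sslservice1 and sslservice2 should appear
az acr repository list --name cr2019 -o table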
Install Helm/Tiller and the Ingress Pods/Services
Helm (the client component) and Tiller (the server-side component) are used as a package manager for Kubernetes. With them you can easily install and manage complex environments and solutions. We use Helm/Tiller to install and configure the Ingress service. The first step is to check the status of the cluster:
PS C:\> kubectl get pods --all-namespaces --server-print=false
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   heapster-342135353-71xvg                        2/2     Running   0          28h
kube-system   kube-addon-manager-k8s-master-75a98bfb-0        1/1     Running   0          28h
kube-system   kube-apiserver-k8s-master-75a98bfb-0            1/1     Running   0          28h
kube-system   kube-controller-manager-k8s-master-75a98bfb-0   1/1     Running   0          28h
kube-system   kube-dns-v20-3003781527-2flz8                   3/3     Running   0          28h
kube-system   kube-dns-v20-3003781527-96j22                   3/3     Running   0          28h
kube-system   kube-proxy-dpghv                                1/1     Running   0          28h
kube-system   kube-proxy-gjhlg                                1/1     Running   0          28h
kube-system   kube-scheduler-k8s-master-75a98bfb-0            1/1     Running   0          28h
kube-system   kubernetes-dashboard-924040265-52cx5            1/1     Running   0          28h
kube-system   tiller-deploy-272121893-vfqs4                   1/1     Running   0          25h
You can check the installation instructions on the Helm GitHub page https://github.com/helm/helm#install. But we can see that Tiller is already available (the tiller-deploy pod above). Therefore, an upgrade command from helm (line 1) is enough. Otherwise the installation (line 2) should be performed:
helm init --upgrade
helm init
Now we set up the Ingress environment. In a pure Linux worker node cluster the nodeSelector properties are not necessary; they are only needed in a mixed cluster to install the Ingress components on Linux machines. The argument --set controller.extraArgs.enable-ssl-passthrough="" is very important: it enables the SSL passthrough option that we will use later.
PS C:\> helm install stable/nginx-ingress --namespace kube-system --set controller.replicaCount=1 --set rbac.create=false --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux --set controller.extraArgs.enable-ssl-passthrough=""
NAME:   roiling-yak
LAST DEPLOYED: Wed Feb  6 16:19:51 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/PodDisruptionBudget
NAME                                        MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
roiling-yak-nginx-ingress-controller        1               N/A               0                     2s
roiling-yak-nginx-ingress-default-backend   1               N/A               0                     2s

==> v1/ConfigMap
NAME                                   DATA   AGE
roiling-yak-nginx-ingress-controller   1      2s

==> v1/ServiceAccount
NAME                        SECRETS   AGE
roiling-yak-nginx-ingress   1         2s

==> v1/Service
NAME                                        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
roiling-yak-nginx-ingress-controller        LoadBalancer   10.0.254.247   <pending>     80:30045/TCP,443:31418/TCP   2s
roiling-yak-nginx-ingress-default-backend   ClusterIP      10.0.48.27     <none>        80/TCP                       1s

==> v1beta1/Deployment
NAME                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
roiling-yak-nginx-ingress-controller        1         1         1            0           1s
roiling-yak-nginx-ingress-default-backend   1         1         1            0           1s

==> v1/Pod(related)
NAME                                                         READY   STATUS              RESTARTS   AGE
roiling-yak-nginx-ingress-controller-4094551179-7c0zh        0/1     ContainerCreating   0          1s
roiling-yak-nginx-ingress-default-backend-3517156865-g44vf   0/1     ContainerCreating   0          1s

NOTES:
The nginx-ingress controller has been installed.
...
After a few minutes, our Ingress environment is deployed. It contains two Pods (one for the default backend and one for the Ingress controller itself) and one Service for each Pod:
PS C:\> kubectl get pods --all-namespaces --server-print=false
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   heapster-342135353-71xvg                                     2/2     Running   0          28h
kube-system   kube-addon-manager-k8s-master-75a98bfb-0                     1/1     Running   0          28h
kube-system   kube-apiserver-k8s-master-75a98bfb-0                         1/1     Running   0          28h
kube-system   kube-controller-manager-k8s-master-75a98bfb-0                1/1     Running   0          28h
kube-system   kube-dns-v20-3003781527-2flz8                                3/3     Running   0          28h
kube-system   kube-dns-v20-3003781527-96j22                                3/3     Running   0          28h
kube-system   kube-proxy-dpghv                                             1/1     Running   0          28h
kube-system   kube-proxy-gjhlg                                             1/1     Running   0          28h
kube-system   kube-scheduler-k8s-master-75a98bfb-0                         1/1     Running   0          28h
kube-system   kubernetes-dashboard-924040265-52cx5                         1/1     Running   0          28h
kube-system   roiling-yak-nginx-ingress-controller-4094551179-7c0zh        1/1     Running   0          4m59s
kube-system   roiling-yak-nginx-ingress-default-backend-3517156865-g44vf   1/1     Running   0          4m59s
kube-system   tiller-deploy-272121893-vfqs4                                1/1     Running   0          26h

PS C:\> kubectl get services --all-namespaces --server-print=false
NAMESPACE     NAME                                        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default       kubernetes                                  ClusterIP      10.0.0.1       <none>        443/TCP                      28h
kube-system   heapster                                    ClusterIP      10.0.243.35    <none>        80/TCP                       28h
kube-system   kube-dns                                    ClusterIP      10.0.0.10      <none>        53/UDP,53/TCP                28h
kube-system   kubernetes-dashboard                        NodePort       10.0.129.204   <none>        80:30159/TCP                 28h
kube-system   roiling-yak-nginx-ingress-controller        LoadBalancer   10.0.254.247   13.80.115.0   80:30045/TCP,443:31418/TCP   5m18s
kube-system   roiling-yak-nginx-ingress-default-backend   ClusterIP      10.0.48.27     <none>        80/TCP                       5m17s
kube-system   tiller-deploy                               ClusterIP      10.0.20.254    <none>        44134/TCP                    28h
Deploy everything
Now we can deploy our containers. The YAML files for the Pods/Services are straightforward; I added them to the source control on GitHub, and you can check the status with kubectl. Note that there is no public IP for the Services, because the traffic should go through the Ingress.
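The following is only a minimal sketch of what a file like depl_sslservice1.yaml could contain (the actual files are in the GitHub repository); the label names, replica count and the apps/v1 API version are assumptions:

# Hypothetical sketch of depl_sslservice1.yaml - the real file is on GitHub
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sslservice1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sslservice1
  template:
    metadata:
      labels:
        app: sslservice1
    spec:
      containers:
      - name: sslservice1
        # Image from the ACR that the cluster was authorized to pull from
        image: cr2019.azurecr.io/sslservice1
        ports:
        - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: sslservice1-service
spec:
  # NodePort without a public IP - external traffic only arrives via the Ingress
  type: NodePort
  selector:
    app: sslservice1
  ports:
  - port: 443
    targetPort: 443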
PS C:\_git\SampleAPIsSSL> kubectl create -f .\service1\depl_sslservice1.yaml
service/sslservice1-service created
deployment.apps/sslservice1-deployment created
PS C:\_git\SampleAPIsSSL> kubectl create -f .\service2\depl_sslservice2.yaml
service/sslservice2-service created
deployment.apps/sslservice2-deployment created
PS C:\_git\SampleAPIsSSL> kubectl get pods --server-print=false
NAME                                      READY   STATUS    RESTARTS   AGE
sslservice1-deployment-2071119030-xqrqc   1/1     Running   0          4m
sslservice2-deployment-1077568253-jq36x   1/1     Running   0          2m53s
PS C:\_git\SampleAPIsSSL> kubectl get services --server-print=false
NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes            ClusterIP   10.0.0.1       <none>        443/TCP         29h
sslservice1-service   NodePort    10.0.220.78    <none>        443:31778/TCP   4m9s
sslservice2-service   NodePort    10.0.183.211   <none>        443:31200/TCP   3m2s
Now let's take a look at the Ingress configuration (the file is also on GitHub):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress-host-ssl-routing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: service1.thomas-zuehlke.de
    http:
      paths:
      - path: /
        backend:
          serviceName: sslservice1-service
          servicePort: 443
  - host: service2.thomas-zuehlke.de
    http:
      paths:
      - path: /
        backend:
          serviceName: sslservice2-service
          servicePort: 443
Line 7 defines that the SSL passthrough option should be used. Line 8 additionally provides a redirect to port 443 if a request arrives via port 80. In lines 11 and 18, requests are routed based on the host name in the request. These subdomains still have to be created and pointed to the IP of the Ingress.
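Assuming the Ingress definition above is saved in a file named ingress.yaml (the file name is hypothetical), it can be deployed like the other resources:

# Create the Ingress resource from the definition above
kubectl create -f .\ingress.yaml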
If we now call the Ingress directly via its IP, we see the Kubernetes certificate, and the routing does not work because of the missing host part in our request. We are redirected to the default backend:
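This can also be reproduced from the command line (a sketch; 13.80.115.0 is the external IP of the Ingress controller service from above, and --insecure skips validation of the self-signed fake Ingress certificate):

# Without a matching host name the request ends up at the default backend
curl --insecure https://13.80.115.0/
# The nginx-ingress default backend typically answers with: default backend - 404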
We now have to configure our subdomains with new A records that point to the Ingress IP:
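If the zone is hosted in Azure DNS (an assumption; any DNS provider works, and <rg> stands for the resource group of the zone), the A records could be created like this:

# Point both subdomains to the external IP of the Ingress controller
az network dns record-set a add-record --resource-group <rg> --zone-name thomas-zuehlke.de --record-set-name service1 --ipv4-address 13.80.115.0
az network dns record-set a add-record --resource-group <rg> --zone-name thomas-zuehlke.de --record-set-name service2 --ipv4-address 13.80.115.0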
The calls are now routed correctly and the Pods deliver their certificates:
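To verify which certificate is actually served, the TLS handshake can be inspected (a sketch using openssl, which must be available in the shell; the printed subject should show the subdomain certificate from the first post, not the default Ingress certificate):

# Show the certificate presented for service1
# (terminate with Ctrl+C after the certificate details are printed)
openssl s_client -connect service1.thomas-zuehlke.de:443 -servername service1.thomas-zuehlke.de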
Comments
February 27, 2020 at 6:07 pm
Hi Thomas, thanks for the great article. I wanted to ask you: what is the reason that SSL passthrough is not very useful, as you wrote in the article? Why did you say that SSL offloading should be done at the Ingress?
March 4, 2020 at 8:43 pm
The idea behind the Ingress is that it is the only external access point to the internal services in the cluster. If the services communicate with each other, this communication can usually be trusted and does not have to be encrypted. Therefore, the certificates and the encryption effort in the services can be avoided if the communication is only encrypted up to the Ingress.
May 28, 2020 at 5:00 pm
Hi Thomas,
Great article. I am unable to configure SSL offloading with AKS following your article.
While reading articles I came across the --enable-ssl-passthrough flag that needs to be passed in the controller args for the entire thing to work. Where do I add that flag?
June 14, 2020 at 10:24 am
I pass the argument during the Ingress installation in the helm command:
helm install stable/nginx-ingress --namespace kube-system --set controller.replicaCount=1 --set rbac.create=false --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux --set controller.extraArgs.enable-ssl-passthrough=""