Have you ever deployed Kubernetes inside WSL, only to discover that you don’t know how to access services behind its load balancer, such as Ingress? Have you ever tried to solve this with a NodePort, only to realize that WSL’s IP changes on every restart, making it extra difficult to keep your /etc/hosts file up to date? Well, I was there, and I think I have a solution 🙂
Why is it a problem?
When any application on WSL binds to 0.0.0.0 on a port, that port automatically becomes available to the Windows host and its applications on localhost. For example, when you start an nginx server on WSL on port 80, you can open your favorite browser on Windows, type http://localhost:80, and it just works! So why doesn’t it work with ports exposed by Kubernetes?
Kubernetes does not bind to ports directly; instead it uses iptables rules to intercept and route incoming traffic. While this is more efficient, it does not play well with WSL: it forces us to access services either by node IP or by load balancer IP (after deploying, for example, MetalLB with an overlapping address range). This would be acceptable if these IPs and subnet ranges didn’t change on every Windows restart, which makes them hard to use in practice. How can we solve this, then?
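You can see the moving target for yourself by comparing the WSL address with the node’s InternalIP before and after a Windows restart (a quick check, assuming a single-node cluster such as microk8s running inside WSL):

# IP of the WSL network interface – changes on every Windows restart
hostname -I

# InternalIP reported by the Kubernetes node – same address, same problem
kubectl get nodes -o wide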
Solution
- Set up a simple port forwarder that uses bind instead of iptables, so WSL discovers the bound ports and makes them available on localhost on Windows.
- Use the Kubernetes API to discover services of type LoadBalancer or NodePort and generate the xinetd configuration from their configured ports.
- Create a Kubernetes CronJob that will run the script periodically to keep our environment updated.
Configuration is as simple as running a single helm deployment. Later in this post we will explain how it works, but if you are in a hurry, here you go:
helm upgrade --install devopsifyme-xinetd xinetd-port-forwarder --repo https://piotr-rojek.github.io/devopsifyme-charts/
Before we start
Important: run all the commands as Administrator / root. You can become root with the sudo -s command.
Most of you probably already have WSL installed, but in case you don’t, you can quickly get it directly from the Microsoft Store for Windows 10/11 (or from the command line, as shown right after this list):
- Windows Subsystem for Linux
- Ubuntu 22.04
- k8s – How to run microk8s on your WSL Ubuntu 22.04? – DevOpsify Me.
- Windows Terminal – optional, but highly recommended
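If you prefer the command line over the Store, recent Windows 10/11 builds let you install WSL and a distribution in one step. A minimal sketch, run from an elevated PowerShell or cmd prompt – the distribution name is whatever wsl --list --online reports, here assumed to be Ubuntu-22.04:

wsl --install -d Ubuntu-22.04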
Port Forwarding
The goal of this section is to set up a port forwarder on WSL that listens for incoming traffic on specified ports and forwards it to the iptables routes where k8s expects to receive it.
What is xinetd?
It is highly likely that you have never heard of, nor used, xinetd before, so let’s analyze the configuration file first to understand what it does for us and what we will be automating. As a demonstration, the sample configuration below exposes the Traefik ingress controller on port 443 and forwards traffic to node port 30443.
service srv-traefik-lb-443
{
    disable = no
    type = UNLISTED
    socket_type = stream
    protocol = tcp
    wait = no
    redirect = 127.0.0.1 30443
    bind = 0.0.0.0
    port = 443
    user = nobody
}
The most important options that we will be parametrizing are:
service srv-traefik-lb-443 | A unique name in the configuration file |
protocol = tcp | Note that UDP is not supported by xinetd for port forwarding |
redirect = 127.0.0.1 30443 | Forward to the service’s node port on localhost, inside the WSL instance where xinetd and k8s are running. Nothing here involves Windows; it all happens inside our WSL instance. |
bind = 0.0.0.0 | Listen on all interfaces; it has to be like this for WSL to apply its magic |
port = 443 | Which port to listen on. This is the port you will use in your web browser on the Windows host to access the service (https://localhost:443) |
How to use xinetd?
In this section we want to connect to our Kubernetes service from a web browser using the localhost host name. We will perform all configuration manually, which may be enough for many of you: a common scenario is to expose only one port for the ingress controller and hide all other services behind it, so the port forwarding configuration can be done once and doesn’t have to be dynamic.
First, deploy an nginx server inside our Kubernetes cluster; it will be our target. Notice that we are specifying the node port explicitly (even though the service is still deployed as a LoadBalancer). This allows us to match it in the xinetd configuration file; otherwise it would have been assigned dynamically. After running the following commands you should be able to see the service with the correct ports.
helm upgrade --install devopsify nginx --repo https://charts.bitnami.com/bitnami --set service.nodePorts.http=32080
kubectl get service devopsify-nginx
Now that we have something to connect to, let’s configure xinetd by running the commands below. Use the sample configuration file, which forwards port 8080 to localhost:32080 (where the service is listening). Afterwards restart the service and check that xinetd is listening on 0.0.0.0:8080 as defined.
# install xinetd
apt-get install xinetd

# configure
nano /etc/xinetd.d/devopsify

# restart
systemctl restart xinetd
systemctl status xinetd
netstat -nlt
service sample-service
{
    disable = no
    type = UNLISTED
    socket_type = stream
    protocol = tcp
    wait = no
    redirect = 127.0.0.1 32080
    bind = 0.0.0.0
    port = 8080
    user = nobody
}
Done! You should now be able to access your nginx instance by visiting http://localhost:8080!
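If you prefer to check from the terminal first, a quick curl against the forwarded port should return the nginx welcome page (assuming the chart defaults were left unchanged):

# works both from WSL and from a Windows shell – xinetd listens on 0.0.0.0:8080
curl -I http://localhost:8080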
Dynamic xinetd configuration
Discover services using kubectl
At this point we have a static configuration file, but what if we want it to be generated dynamically? We have to query the Kubernetes API to discover all services with node ports assigned (service type LoadBalancer or NodePort) and simply generate the same text configuration file 😉
You can get all the services running within your cluster using the kubectl command below. Take a look at the attached response, where you will find the familiar port 32080 that we defined earlier. The ports section is exactly what the script will translate into xinetd configuration.
kubectl get service --all-namespaces -o json
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "creationTimestamp": "2023-01-12T21:33:57Z",
        "labels": {
          "component": "apiserver",
          "provider": "kubernetes"
        },
        "name": "kubernetes",
        "namespace": "default",
        "resourceVersion": "73",
        "uid": "849975b4-1d20-4d44-8039-3455bdf40a22"
      },
      "spec": {
        "clusterIP": "10.152.183.1",
        "clusterIPs": [ "10.152.183.1" ],
        "internalTrafficPolicy": "Cluster",
        "ipFamilies": [ "IPv4" ],
        "ipFamilyPolicy": "SingleStack",
        "ports": [
          {
            "name": "https",
            "port": 443,
            "protocol": "TCP",
            "targetPort": 16443
          }
        ],
        "sessionAffinity": "None",
        "type": "ClusterIP"
      },
      "status": {
        "loadBalancer": {}
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "annotations": {
          "meta.helm.sh/release-name": "devopsify",
          "meta.helm.sh/release-namespace": "default"
        },
        "creationTimestamp": "2023-01-15T20:58:02Z",
        "labels": {
          "app.kubernetes.io/instance": "devopsify",
          "app.kubernetes.io/managed-by": "Helm",
          "app.kubernetes.io/name": "nginx",
          "helm.sh/chart": "nginx-13.2.21"
        },
        "name": "devopsify-nginx",
        "namespace": "default",
        "resourceVersion": "439918",
        "uid": "f98da3a4-6d64-45e5-aec3-4066b604acf9"
      },
      "spec": {
        "allocateLoadBalancerNodePorts": true,
        "clusterIP": "10.152.183.97",
        "clusterIPs": [ "10.152.183.97" ],
        "externalTrafficPolicy": "Cluster",
        "internalTrafficPolicy": "Cluster",
        "ipFamilies": [ "IPv4" ],
        "ipFamilyPolicy": "SingleStack",
        "ports": [
          {
            "name": "http",
            "nodePort": 32080,
            "port": 80,
            "protocol": "TCP",
            "targetPort": "http"
          }
        ],
        "selector": {
          "app.kubernetes.io/instance": "devopsify",
          "app.kubernetes.io/name": "nginx"
        },
        "sessionAffinity": "None",
        "type": "LoadBalancer"
      },
      "status": {
        "loadBalancer": {}
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": ""
  }
}
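If you just want to eyeball the ports that matter, a jq filter over the same output will do (a rough sketch, assuming jq is installed; the actual parsing is done in PowerShell in the script below):

# namespace/name, type and node ports of every LoadBalancer or NodePort service
kubectl get service --all-namespaces -o json \
  | jq -r '.items[]
      | select(.spec.type == "LoadBalancer" or .spec.type == "NodePort")
      | "\(.metadata.namespace)/\(.metadata.name) \(.spec.type) \([.spec.ports[].nodePort // empty])"'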
Generate xinetd configuration
In brief, we will find all port definitions by iterating through the response, and then add them to xinetd’s configuration by applying the following logic, depending on the service type:
- LoadBalancer port property – should be accessible on the same port from Windows,
- LoadBalancer nodePort – should be exposed on a different port to avoid collisions with the k8s iptables listeners, so we subtract 10000 (32080 becomes 22080 on Windows),
- NodePort – as above.
Notice that by this definition our nginx service is accessible on both port 80 and port 22080! Both ports redirect to the same node port 32080, and therefore to the same nginx service. However, port 80 comes from the LoadBalancer rule (with a fallback to the node port if no External-IP is available), while port 22080 comes from the NodePort rule directly.
Finally, run the script below to generate the configuration file based on the services deployed in Kubernetes. Afterwards compare the generated file with the one created earlier – it should look very similar. Lastly, update the configuration file, restart the services and verify the results. At this point you should be able to access the nginx service on both http://localhost:80 and http://localhost:22080.
# script.ps1 file from the second tab
./script.ps1 -ConfigFilePath "xinetd.config.txt" -ForwardLoadBalancer

# configure
nano /etc/xinetd.d/devopsify-auto

# restart
systemctl restart xinetd
systemctl status xinetd
netstat -nlt
[CmdletBinding()]
param(
    $ConfigFilePath = '/etc/xinetd.d/k8s',
    [Switch]$ForwardNodePort = $false,
    [Switch]$ForwardLoadBalancer = $false
)

function Get-K8sServices {
    $services = kubectl get service --all-namespaces -o json | ConvertFrom-Json
    return $services
}

function Handle-Service($srv) {
    foreach($port in $srv.spec.ports) {
        Handle-Port -srv $srv -port $port
    }
}

function Handle-Port($srv, $port) {
    # we support only TCP ports
    if($port.protocol -ne 'TCP') {
        Write-Warning "$($srv.metadata.name):$($port.port) not supported protocol $($port.protocol)"
        return
    }

    # forward LoadBalancer port
    if($ForwardLoadBalancer -and $srv.spec.type -eq "LoadBalancer") {
        Write-Host "$($srv.metadata.name):$($port.port) registered port $($port.port) forwarding to 127.0.0.1"

        #if load balancer IP is not assigned, use node port
        $address = $srv.status.loadBalancer.ingress.ip ?? "127.0.0.1"

        @{
            name = "lb-$($port.port)"
            port = $port.port
            targetPort = $address -eq "127.0.0.1" ? $port.nodePort : $port.port
            service = $srv.metadata.name
            address = $address
        }
    }

    # forward NodePort port
    if($ForwardNodePort -and $null -ne $port.nodePort -and ($srv.spec.type -eq "LoadBalancer" -or $srv.spec.type -eq "NodePort")) {
        Write-Host ($port.nodePort.GetType())
        Write-Host "$($srv.metadata.name):$($port.port) registered port $($port.nodePort) forwarding to 127.0.0.1"

        @{
            name = "nodeport-$($port.nodePort)"
            port = $port.nodePort - 10000
            targetPort = $port.nodePort
            service = $srv.metadata.name
            address = "127.0.0.1"
        }
    }
}

function Get-ConfigContent($forwardings) {
    foreach($forwading in $forwardings) {
        @"
service srv-$($forwading.service)-$($forwading.name)
{
    disable = no
    type = UNLISTED
    socket_type = stream
    protocol = tcp
    wait = no
    redirect = $($forwading.address) $($forwading.targetPort)
    bind = 0.0.0.0
    port = $($forwading.port)
    user = nobody
}

"@
    }
}

function Start-Main() {
    Write-Host "Discovering services..."
    $services = Get-K8sServices
    Write-Verbose ($services | ConvertTo-Json -Depth 99)
    $forwardings = $services.items | % { Handle-Service -srv $_ }

    Write-Host "Generating configuration..."
    $configContent = Get-ConfigContent -forwardings $forwardings
    $configContent ??= "# $(Get-Date): No services to expose found, check configuration if this is unexpected"
    $configContent = $configContent -join ''

    Write-Host "Created following xinetd configuration..."
    Write-Host $configContent

    Write-Host "Saving changes to $ConfigFilePath..."
    $configContent -replace "`r","" | Set-Content $ConfigFilePath -NoNewLine
}

Start-Main
Automated xinetd management
Now that we are able to generate the configuration file from the Kubernetes API and we know which commands to run to restart the service, it is time to automate the whole solution and package the script as a Kubernetes CronJob.
The script is scheduled to run every 60 seconds; it updates the xinetd configuration files on the WSL host and restarts the xinetd service to apply the changes. Because of this it requires privileged access, which is not recommended for production environments.
Discovering services using Kubernetes API
Not surprisingly, the job has to be authenticated and authorized to use the Kubernetes API. Therefore we need to:
- First, create a Service Account that the CronJob is assigned to – take a look at the ServiceAccount resource.
- Then, grant the view cluster role to the Service Account – take a look at the ClusterRoleBinding resource.
- Finally, authenticate using the service account access token and invoke the REST endpoint.
# equivalent of
# kubectl get service --all-namespaces -o json
$accessToken = (Get-Content '/var/run/secrets/kubernetes.io/serviceaccount/token')
$apiurl = "https://kubernetes.default.svc/api/v1/services?limit=500"
$k8sheader = @{authorization="Bearer $accessToken"}
$services = Invoke-RestMethod -Method GET -Uri $apiurl -Headers $k8sheader -SkipCertificateCheck
Since the job runs with a service account, the token can be read from the file /var/run/secrets/kubernetes.io/serviceaccount/token. With this token we can access the Kubernetes API at https://kubernetes.default.svc, which requires CoreDNS to be deployed on our cluster. For simplicity we are not going to set up certificate trust, which is why we pass -SkipCertificateCheck in the script above.
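For the curious, the same call can be reproduced with plain curl from inside any pod that uses the service account (a minimal sketch; -k mirrors the -SkipCertificateCheck shortcut above):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer ${TOKEN}" "https://kubernetes.default.svc/api/v1/services?limit=500"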
Deploying using kubectl
First, let’s deploy the following resources using kubectl; afterwards we will turn it into a helm chart.
- CronJob – to run and define schedule for the job,
- ConfigMap – to load the script and configuration,
- ServiceAccount – to assign identity to the job,
- ClusterRoleBinding – to give access to the API
kubectl replace -f ./serviceaccount.yaml
kubectl replace -f ./clusterrolebinding.yaml
kubectl replace -f ./configmap.yaml
kubectl replace -f ./cronjob.yaml
# Source: xinetd-port-forwarder/templates/cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: release-name-xinetd-port-forwarder
spec:
  schedule: 1/1 * * * *
  selector:
    app.kubernetes.io/name: xinetd-port-forwarder
    app.kubernetes.io/instance: release-name
  jobTemplate:
    spec:
      template:
        metadata:
        spec:
          hostPID: true
          serviceAccountName: release-name-xinetd-port-forwarder
          securityContext: {}
          containers:
            - name: xinetd-port-forwarder
              image: "mcr.microsoft.com/powershell:lts-alpine-3.14"
              imagePullPolicy: IfNotPresent
              command:
                - pwsh
                - -File
                - /etc/config/script.ps1
              securityContext:
                allowPrivilegeEscalation: true
                capabilities:
                  add:
                    - SYS_ADMIN
                privileged: true
              resources: {}
              volumeMounts:
                - name: config
                  mountPath: /etc/config
                - name: xinetd-config
                  mountPropagation: "Bidirectional"
                  mountPath: "/etc/xinetd.d"
          volumes:
            - name: config
              configMap:
                name: release-name-xinetd-port-forwarder
            - name: xinetd-config
              hostPath:
                path: /etc/xinetd.d
                type: DirectoryOrCreate
          restartPolicy: OnFailure
# Source: xinetd-port-forwarder/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-xinetd-port-forwarder
# Source: xinetd-port-forwarder/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "release-name-xinetd-port-forwarder-view"
subjects:
  - kind: ServiceAccount
    name: release-name-xinetd-port-forwarder
    namespace: default
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: ""
# Source: xinetd-port-forwarder/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-xinetd-port-forwarder
data:
  script.ps1: |-
    [CmdletBinding()]
    param(
        $ApiServerUrl = 'https://kubernetes.default.svc',
        $AccessToken = (Get-Content '/var/run/secrets/kubernetes.io/serviceaccount/token'),
        $ConfigFilePath = '/etc/xinetd.d/k8s',
        [Switch]$ForwardNodePort = $true,
        [Switch]$ForwardLoadBalancer = $true,
        [Switch]$RunCommandsOnHost = $true
    )

    function Get-K8sServices {
        $apiurl = "$($ApiServerUrl)/api/v1/services?limit=500"
        $k8sheader = @{authorization="Bearer $($AccessToken)"}
        $services = Invoke-RestMethod -Method GET -Uri $apiurl -Headers $k8sheader -SkipCertificateCheck
        return $services
    }

    function Handle-Service($srv) {
        foreach($port in $srv.spec.ports) {
            Handle-Port -srv $srv -port $port
        }
    }

    function Handle-Port($srv, $port) {
        # we support only TCP ports
        if($port.protocol -ne 'TCP') {
            Write-Warning "$($srv.metadata.name):$($port.port) not supported protocol $($port.protocol)"
            return
        }

        # forward LoadBalancer port
        if($ForwardLoadBalancer -and $srv.spec.type -eq "LoadBalancer") {
            Write-Host "$($srv.metadata.name):$($port.port) registered port $($port.port) forwarding to 127.0.0.1"

            #if load balancer IP is not assigned, use node port
            $address = $srv.status.loadBalancer.ingress.ip ?? "127.0.0.1"

            @{
                name = "lb-$($port.port)"
                port = $port.port
                targetPort = $address -eq "127.0.0.1" ? $port.nodePort : $port.port
                service = $srv.metadata.name
                address = $address
            }
        }

        # forward NodePort port
        if($ForwardNodePort -and $null -ne $port.nodePort -and ($srv.spec.type -eq "LoadBalancer" -or $srv.spec.type -eq "NodePort")) {
            Write-Host ($port.nodePort.GetType())
            Write-Host "$($srv.metadata.name):$($port.port) registered port $($port.nodePort) forwarding to 127.0.0.1"

            @{
                name = "nodeport-$($port.nodePort)"
                port = $port.nodePort - 10000
                targetPort = $port.nodePort
                service = $srv.metadata.name
                address = "127.0.0.1"
            }
        }
    }

    function Get-ConfigContent($forwardings) {
        foreach($forwading in $forwardings) {
            @"
    service srv-$($forwading.service)-$($forwading.name)
    {
        disable = no
        type = UNLISTED
        socket_type = stream
        protocol = tcp
        wait = no
        redirect = $($forwading.address) $($forwading.targetPort)
        bind = 0.0.0.0
        port = $($forwading.port)
        user = nobody
    }

    "@
        }
    }

    function Start-Main() {
        Write-Host "Discovering services..."
        $services = Get-K8sServices
        Write-Verbose ($services | ConvertTo-Json -Depth 99)
        $forwardings = $services.items | % { Handle-Service -srv $_ }

        Write-Host "Generating configuration..."
        $configContent = Get-ConfigContent -forwardings $forwardings
        $configContent ??= "# $(Get-Date): No services to expose found, check configuration if this is unexpected"
        $configContent = $configContent -join ''

        Write-Host "Created following xinetd configuration..."
        Write-Host $configContent

        Write-Host "Saving changes to $ConfigFilePath..."
        $configContent -replace "`r","" | Set-Content $ConfigFilePath -NoNewLine

        if($RunCommandsOnHost) {
            Write-Host "Checking if xinetd is installed on host"
            nsenter --target 1 --mount --uts --ipc --net sh -c "! command -v xinetd && apt-get install xinetd --yes"

            Write-Host "Restarting xinetd on the host..."
            nsenter --target 1 --mount --uts --ipc --net sh -c "systemctl restart xinetd && systemctl status xinetd"
        }
    }

    Start-Main
Deploying using helm chart
Feel free to view the complete helm chart here: devopsifyme-charts (github.com). It deploys the same resources that we just looked at above, but it is easier to use – all you need to do is deploy it with the following command:
helm upgrade --install devopsifyme-xinetd xinetd-port-forwarder --repo https://piotr-rojek.github.io/devopsifyme-charts/
Container accessing host’s xinetd
It is important to note that the solution presented above is not recommended for production, because it uses a privileged container with host PID access. In brief, this is required to run commands on the host node from a container. Since the presented solution only makes sense in a development environment like WSL, this is acceptable.
Hence in the CronJob resource we have defined hostPID: true, allowing us to run nsenter --target 1 --mount --uts --ipc --net COMMAND, essentially running commands as ANY process or container executing on the host (!). In our case we attach to systemd (root) to install and restart xinetd.
Furthermore, we are mounting the host’s /etc/xinetd.d directory so we can write updated configuration files directly from the container. Because we are using a bidirectional mount, it is important to make sure that the host’s root filesystem mount is ‘shared’ by running mount --make-shared / on every system boot. You can add this to the startup.sh script we talked about in How to run microk8s on your WSL Ubuntu 22.04? – DevOpsify Me.
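In practice that is one extra line in the boot script. A minimal sketch of such a startup.sh – the file name and how you launch it on WSL boot (for example via the [boot] command in /etc/wsl.conf) are assumptions carried over from the post linked above:

#!/bin/bash
# startup.sh – run on every WSL boot
# make the root mount shared so the Bidirectional mountPropagation in the CronJob works
mount --make-shared /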
apiVersion: batch/v1
kind: CronJob
metadata:
  name: release-name-xinetd-port-forwarder
spec:
  ...
  jobTemplate:
    spec:
      template:
        metadata:
        spec:
          ...
          containers:
            - name: xinetd-port-forwarder
              ...
              volumeMounts:
                ...
                - name: xinetd-config
                  mountPropagation: "Bidirectional"
                  mountPath: "/etc/xinetd.d"
          volumes:
            ...
            - name: xinetd-config
              hostPath:
                path: /etc/xinetd.d
                type: DirectoryOrCreate
Installing xinetd automatically
Because we want to make things as simple as possible, our k8s cronjob will install xinetd on the node if it is missing. We achieve this by running the following command as part of the scheduled job:
nsenter --target 1 --mount --uts --ipc --net sh -c "! command -v xinetd && apt-get install xinetd --yes"
command -v xinetd checks whether the command is present; we negate the result with ! because we only want to execute apt-get install xinetd --yes when it is not present. The rest of the line is just nsenter attaching to the systemd PID and running sh on the node with the commands just mentioned.
What can we improve?
Probably a lot! 😀 You may be wondering about one of these:
- Am I always required to access my services by port?
- Can I have custom domains for my ingress, or do I have to always use localhost?
- Can I have a similar solution for plain Docker?
Future us will definitely have a look!