Minikube and Bridged Networking
This week I was running some experiments with Kubernetes. Typically I use minikube locally, but, having just cleaned off an old Mac, I wanted to try hosting the cluster there instead. This was a bit of a challenge since minikube is focused on local development — the Kubernetes cluster it creates is not accessible outside the machine where it is running. Eventually, after some trial and error, I was able to automate the setup so that any device on my local network could access the development cluster and its services.
To start, I need to make sure minikube and VirtualBox are installed.
brew install minikube
brew install --cask virtualbox
With nothing else running on the old Mac, I want Kubernetes to have all the available resources. I use the sysctl tool to check the available CPUs and memory on the system and configure minikube to use them.
minikube config set cpus "$( sysctl hw.ncpu | awk '{ print $2 }' )"
minikube config set disk-size 65536
minikube config set memory "$(( ( $( sysctl hw.memsize | awk '{ print $2 }' ) - 1073741824 ) / ( 1024 * 1024 ) ))"
minikube config set vm-driver virtualbox
minikube start
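The memory expression above reserves 1 GiB for macOS and gives the rest to minikube, converted to MiB. On a hypothetical 16 GiB machine, the arithmetic works out like this:

```shell
# sysctl hw.memsize reports total RAM in bytes; assume a 16 GiB machine here.
memsize=17179869184
# Subtract 1 GiB (1073741824 bytes) for the host, then convert bytes to MiB.
echo $(( ( memsize - 1073741824 ) / ( 1024 * 1024 ) ))   # prints 15360
```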
Once minikube finishes provisioning the virtual machine, I reconfigure it with an additional bridged network. Using the VBoxManage CLI, I create a new NIC on the next empty interface, bridge it to the Mac's default network interface, and, optionally, use a specific MAC address to support DHCP reservations.
VBoxManage controlvm minikube poweroff
opts=""
# Number of the first unused NIC slot (e.g. nic3="none" yields 3)
nic=$( VBoxManage showvminfo minikube --machinereadable | grep ^nic | grep '"none"' | head -n1 | cut -d= -f1 | cut -c4- )
# The Mac's default internet-facing interface (e.g. en0)
int=$( route get google.com | grep interface: | awk '{ print $2 }' )
if [ -n "${MINIKUBE_NIC_BRIDGED_MAC:-}" ]; then
    # Optional fixed MAC address, so the router can offer a DHCP reservation
    opts="--macaddress$nic $MINIKUBE_NIC_BRIDGED_MAC"
fi
VBoxManage modifyvm minikube --nic$nic bridged --bridgeadapter$nic $int $opts
minikube start
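The NIC-slot lookup in that script can be sanity-checked without VirtualBox by running a sample line of showvminfo --machinereadable output through the same pipeline (the slot number here is made up):

```shell
# A free slot appears as e.g. nic3="none" in the machine-readable VM info.
line='nic3="none"'
# Same extraction as above: keep the key before "=", drop the "nic" prefix.
echo "$line" | cut -d= -f1 | cut -c4-   # prints 3
```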
After the VM is restarted, I can use minikube ssh to verify it has received an IP on my local network (e.g. 192.0.2.113).
minikube ssh -- ip addr show eth2 | grep inet | awk '{ print $2 }' | cut -d/ -f1
Now that it has an IP on my local network, I want to actually use it. However, the certificates that Kubernetes has are only signed for the host-only IP address (i.e. 192.168.99.106). Rather than trying to regenerate those certificates, I decided to use one of the internal domains from the certificate — kubernetes.default.svc.cluster.local. After I update my network's DNS server (or /etc/hosts file), I update $KUBECONFIG to use the hostname, too.
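For the /etc/hosts route, a single entry mapping the certificate's internal names to the VM's bridged IP (the example address from earlier) is enough:

```
# /etc/hosts on any client machine
192.0.2.113   kubernetes.default.svc kubernetes.default.svc.cluster.local
```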
desired="https://kubernetes.default.svc:8443"
kubectl config set-cluster minikube --server="$desired"
With everything working locally from the old Mac, it's time to get a $KUBECONFIG for accessing it from my laptop.
kubectl config view --flatten
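Assuming the laptop can reach the old Mac over SSH, moving the flattened config over might look like this (the hostname and paths are hypothetical):

```shell
# On the old Mac: write a self-contained kubeconfig with certificates embedded inline.
kubectl config view --flatten > /tmp/minikube.kubeconfig

# From the laptop: pull it down and point kubectl at it.
scp old-mac.local:/tmp/minikube.kubeconfig ~/.kube/minikube.kubeconfig
export KUBECONFIG=~/.kube/minikube.kubeconfig
```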
Now, with a copy of $KUBECONFIG on my laptop, I'm able to deploy something to my remote minikube Kubernetes cluster: for example, the Dashboard UI. After all the resources are ready, I can use kubectl proxy to see it up and running.
$ kubectl cluster-info
Kubernetes master is running at https://kubernetes.default.svc:8443
KubeDNS is running at https://kubernetes.default.svc:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
$ kubectl -n kubernetes-dashboard get service/kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.107.154.9   <none>        443/TCP   32s
$ kubectl proxy &
$ open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
While proxy works for some cases, it is often better to access services directly. Traditionally you could use minikube tunnel, but that command only works from the same machine where the minikube cluster is running. Instead, I ended up configuring a static route for the (large) minikube cluster IP range on my router (or locally with route).
sudo route add -net 10.96.0.0/12 $( get_bridged_ip )
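get_bridged_ip above is shorthand for the earlier lookup of the VM's bridged address; a minimal sketch of it as a shell function (assuming the bridged NIC still appears as eth2 inside the VM):

```shell
# Returns the bridged IP of the minikube VM (e.g. 192.0.2.113).
# Assumes VirtualBox NIC 3 maps to eth2 in the guest, as in the earlier step.
get_bridged_ip() {
    minikube ssh -- ip addr show eth2 | grep inet | awk '{ print $2 }' | cut -d/ -f1
}
```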
Once routed, I can directly access the dashboard by its Cluster IP.
$ sudo tcptraceroute 10.107.154.9 443
Tracing the path to 10.107.154.9 on TCP port 443 (https), 30 hops max
1 kubernetes.default.svc (192.0.2.113) 2.732 ms 1.108 ms 0.983 ms
2 10.107.154.9 1.222 ms 1.075 ms 0.994 ms
3 10.107.154.9 [open] 1.026 ms 1.514 ms 1.334 ms
$ open https://10.107.154.9/#/login
Success — my minikube-managed Kubernetes cluster is now available for use by the rest of my local network!