Public master endpoint, private nodes, no restriction on the client IP addresses allowed to reach the master:
gcloud container clusters create bartek-private-gke-cluster \
  --num-nodes 1 --machine-type n1-standard-2 --region=us-central1 \
  --enable-private-nodes --enable-ip-alias --master-ipv4-cidr 172.16.0.0/28 \
  --no-enable-master-authorized-networks

NAME                        LOCATION       MASTER_VERSION   MASTER_IP       MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
bartek-private-gke-cluster  us-central1-c  1.21.6-gke.1503  35.xxx.xxx.xxx  n1-standard-2  1.21.6-gke.1503  2          RUNNING
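With --no-enable-master-authorized-networks the public endpoint accepts kubectl from any IP, so access can be verified from any workstation with gcloud installed; a minimal sketch:

# Fetch kubeconfig credentials for the cluster (public endpoint, no IP restriction)
gcloud container clusters get-credentials bartek-private-gke-cluster --region us-central1

# Confirm the API server is reachable and list the (private) nodes
kubectl get nodes -o wide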
Private master, private nodes, and only the internal subnet 10.128.0.0/20 authorized to run kubectl against the master:
gcloud container clusters create bartek-private-gke-cluster \
  --num-nodes 2 --machine-type n1-standard-2 --zone=us-central1-c \
  --enable-private-nodes --enable-ip-alias --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks --enable-private-endpoint \
  --master-authorized-networks 10.128.0.0/20

NAME                        LOCATION       MASTER_VERSION   MASTER_IP   MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
bartek-private-gke-cluster  us-central1-c  1.21.6-gke.1503  172.16.0.2  n1-standard-2  1.21.6-gke.1503  2          RUNNING
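The resulting configuration can be read back from the cluster object; a sketch using gcloud's yaml projection (field names come from the GKE API):

gcloud container clusters describe bartek-private-gke-cluster --zone us-central1-c \
  --format="yaml(privateClusterConfig, masterAuthorizedNetworksConfig)"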
deleted
then clients from other networks will get:
kubectl version
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout
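If another internal network later needs kubectl access, the authorized list can be updated in place instead of recreating the cluster; a sketch, where 192.168.10.0/24 is just a placeholder CIDR (note that --master-authorized-networks replaces the whole list, so existing entries have to be repeated):

gcloud container clusters update bartek-private-gke-cluster --zone us-central1-c \
  --enable-master-authorized-networks \
  --master-authorized-networks 10.128.0.0/20,192.168.10.0/24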
or with an external master IP, but authorized only for my single VM:
dig +short myip.opendns.com @resolver1.opendns.com
xxx.xxx.xxx.xxx

gcloud container clusters create bartek-private-gke-cluster \
  --num-nodes 1 --machine-type n1-standard-2 --zone=us-central1-c \
  --enable-private-nodes --enable-ip-alias --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks --master-authorized-networks xxx.xxx.xxx.xxx/32
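The same can be scripted so the VM's egress IP does not have to be pasted by hand; a sketch assuming dig prints a single IPv4 address:

# Capture this machine's public egress IP and authorize only it
MY_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)

gcloud container clusters create bartek-private-gke-cluster \
  --num-nodes 1 --machine-type n1-standard-2 --zone=us-central1-c \
  --enable-private-nodes --enable-ip-alias --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks --master-authorized-networks "${MY_IP}/32"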
Simplest possible pod:
apiVersion: v1
kind: Pod
metadata:
  name: bartek-spring-boot-pod
spec:
  containers:
    - name: bartek-spring-boot-container
      image: gcr.io/project-123/bartek/spring-boot:latest
me@ubuntu-vm:~$ kubectl apply -f bartek-spring-boot.yml
pod/bartek-spring-boot-pod created
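Before digging through the console, the node the pod landed on and the IP it got from the pod secondary range can be checked directly:

kubectl get pod bartek-spring-boot-pod -o wide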
Cluster basics
  Name: bartek-private-gke-cluster
  Location type: Regional
  Region: us-central1
  Default node zones: us-central1-a, us-central1-c, us-central1-f
  Control plane address range: 172.16.0.0/28
  Control plane global access: Enabled
  Network: default
  Subnet: default
  Cluster pod address range (default): 10.28.0.0/14
  Service address range: 10.32.0.0/20

VPC networking: subnet "default"
  VPC Network: default
  Region: us-central1
  IP address range: 10.128.0.0/20
  Secondary IP ranges:
    gke-bartek-private-gke-cluster-pods-08a9d91d      10.28.0.0/14
    gke-bartek-private-gke-cluster-services-08a9d91d  10.32.0.0/20
  Gateway: 10.128.0.1
  Private Google Access: On

Node details: node pool default-pool
  Node: gke-bartek-private-gke-c-default-pool-5b5f6af5-mkvp
  Zone: us-central1-f
  Pod CIDR: 10.28.2.0/24   <-- internal IP aliases for pods
  Pods: (system pods) plus bartek-spring-boot-pod  Running  0 restarts  namespace default

Compute Engine: VM instances
  gke-bartek-private-gke-c-default-pool-5b5f6af5-mkvp
    Zone: us-central1-f
    In use by: gke-bartek-private-gke-c-default-pool-5b5f6af5-grp
    Internal IP: 10.128.0.61 (nic0)   <-- node IP, belongs to the default subnet
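The same subnet layout (primary range, the GKE-created secondary ranges, Private Google Access) can be read back without the console; a sketch using gcloud's field projection:

gcloud compute networks subnets describe default --region us-central1 \
  --format="yaml(ipCidrRange, secondaryIpRanges, privateIpGoogleAccess)"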
Log in to the pod:
kubectl exec -it pods/bartek-spring-boot-pod -c bartek-spring-boot-container \
  --context gke_project-123_us-central1_bartek-private-gke-cluster \
  --namespace default -- sh
/ # hostname
bartek-spring-boot-pod
/ # ls
bin  etc  tmp  spring-boot.jar  ...
/ # /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr AE:7F:24:77:B9:3C
          inet addr:10.28.2.4  Bcast:10.28.2.255  Mask:255.255.255.0   <-- pod IP from the secondary IP range
/ # wget http://localhost:8080 -O -
Connecting to localhost:8080 (127.0.0.1:8080)
Greetings from Spring Boot! (8080) , request.getLocalAddr()=127.0.0.1 , request.getLocalName()=localhost ,
request.getMethod()=GET , request.getRemoteAddr()=127.0.0.1 , request.getRemoteHost()=127.0.0.1 ,
request.getServerName()=localhost , request.getServletPath()=/ , response.getStatus()=200
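Before any Service exists, the pod can also be reached from the admin VM through the API server with a port-forward; a sketch (uses the same kubeconfig context as above):

# Forward local port 8080 to port 8080 on the pod, tunnelled via the API server
kubectl port-forward pod/bartek-spring-boot-pod 8080:8080
# ...and in another terminal:
curl http://localhost:8080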
SSH to the node from the Google Cloud console:
bartek@gke-bartek-private-gke-c-default-pool-5b5f6af5-mkvp ~ $ ifconfig
eth0:         flags=4163  mtu 1460   inet 10.128.0.61    netmask 255.255.255.255  broadcast 0.0.0.0
veth11afdcb8: flags=4163  mtu 1460   inet 10.28.2.1      netmask 255.255.255.255  broadcast 0.0.0.0
docker0:      flags=4099  mtu 1500   inet 169.254.123.1  netmask 255.255.255.0    broadcast 169.254.123.255
lo:           flags=73    mtu 65536  inet 127.0.0.1      netmask 255.0.0.0
bartek@gke-bartek-private-gke-c-default-pool-5b5f6af5-mkvp ~ $ route
Kernel IP routing table
Destination     Gateway      Genmask          Flags  Metric  Ref  Use  Iface
default         10.128.0.1   0.0.0.0          UG     1024    0    0    eth0
10.28.2.4       0.0.0.0      255.255.255.255  UH     0       0    0    veth11afdcb8
10.128.0.1      0.0.0.0      255.255.255.255  UH     1024    0    0    eth0
169.254.123.0   0.0.0.0      255.255.255.0    U      0       0    0    docker0
bartek@gke-bartek-private-gke-c-default-pool-5b5f6af5-mkvp ~ $ curl 10.28.2.4:8080
Greetings from Spring Boot! (8080) , request.getLocalAddr()=10.28.2.4 , request.getLocalName()=bartek-spring-boot-pod ,
request.getRemoteAddr()=10.28.2.1 , request.getRemoteHost()=10.28.2.1 , request.getServerName()=10.28.2.4 , response.getStatus()=200
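Because the cluster is VPC-native (--enable-ip-alias), the per-node pod CIDR 10.28.2.0/24 should also be visible as an alias IP range on the node VM's network interface; a sketch of reading it back from Compute Engine:

gcloud compute instances describe gke-bartek-private-gke-c-default-pool-5b5f6af5-mkvp \
  --zone us-central1-f --format="yaml(networkInterfaces)"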
Adding an internal load balancer:
1) Let's add a custom label to the pod so that the load balancer can send traffic to pods carrying that label:
apiVersion: v1
kind: Pod
metadata:
  name: bartek-spring-boot-pod
  labels:
    bartek-label: bartek-app
spec:
  containers:
    - name: bartek-spring-boot-container
      image: gcr.io/project-123/bartek/spring-boot:latest
bartek@ubuntu-vm:~$ kubectl apply -f bartek-spring-boot.yml
pod/bartek-spring-boot-pod configured
Workloads: bartek-spring-boot-pod details
  Cluster: bartek-private-gke-cluster
  Namespace: default
  Labels: bartek-label: bartek-app
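The selector the Service will use is just a label query, so it can be tested up front:

kubectl get pods -l bartek-label=bartek-app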
2) Let’s create a load balancer service:
apiVersion: v1
kind: Service
metadata:
  name: bartek-internal-loadbalancer-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    bartek-label: bartek-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
bartek@ubuntu-vm:~$ kubectl apply -f bartek-internal-loadbalancer-service.yaml
service/bartek-internal-loadbalancer-service created
bartek@ubuntu-vm:~$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
bartek-spring-boot-pod   1/1     Running   0          112m

bartek@ubuntu-vm:~$ kubectl get services
NAME                                   TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
bartek-internal-loadbalancer-service   LoadBalancer   10.32.9.226   10.128.0.63   80:30048/TCP   2m3s
kubernetes                             ClusterIP      10.32.0.1     <none>        443/TCP        115m

bartek@ubuntu-vm:~$ kubectl get deployments
No resources found in default namespace.
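It is worth confirming that the Service actually picked up the pod as a backend (the endpoint should be the pod IP 10.28.2.4):

kubectl get endpoints bartek-internal-loadbalancer-service
kubectl describe service bartek-internal-loadbalancer-service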
Service details
  Cluster: bartek-private-gke-cluster
  Namespace: default
  Labels: No labels set
  Type: LoadBalancer
  External endpoints: 10.128.0.63:80
  Load Balancer
    Cluster IP: 10.32.9.226
    Load balancer IP: 10.128.0.63
  Serving pods
    Name                     Status    Endpoints   Restarts   Created on
    bartek-spring-boot-pod   Running   10.28.2.4   0          Apr 12, 2022, 10:50:08 AM
  Ports
    Port   Node Port   Target Port   Protocol
    80     30048       8080          TCP
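Under the hood GKE provisions an internal load balancer in the VPC for this Service; the forwarding rule it creates (the resource name itself is generated by GKE) can be listed from Compute Engine:

gcloud compute forwarding-rules list --regions=us-central1 --filter="loadBalancingScheme=INTERNAL"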
Run a new GCE VM in the default subnet (IP 10.128.0.64) and curl the load balancer:
bartek@vm ~ $ curl 10.128.0.63:80
Greetings from Spring Boot! (8080) , request.getHeaderNames()=host:10.128.0.63,user-agent:curl/7.78.0,accept:*/* ,
request.getLocalAddr()=10.28.2.4 , request.getLocalName()=bartek-spring-boot-pod , request.getRemoteAddr()=10.28.2.1 ,
request.getRemoteHost()=10.28.2.1 , request.getServerName()=10.128.0.63 , request.getServletPath()=/ , response.getStatus()=200
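When done, the resources can be cleaned up in reverse order (deleting the Service first lets GKE tear down the internal load balancer before the cluster goes away); a sketch:

kubectl delete -f bartek-internal-loadbalancer-service.yaml
kubectl delete -f bartek-spring-boot.yml
gcloud container clusters delete bartek-private-gke-cluster --zone us-central1-c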