Kubernetes with Calico using kind

Recently, after analysing the requirements of an application I manage, I realised I need a way to secure communication within my cluster – so that, in a nutshell, it is not an open wilderness.

While looking at several alternatives, one was especially appealing, particularly after watching the following video…

And yes, it is Project Calico.

So I decided to do some more testing with it and spin it up in a locally running cluster. To have some more fun, this time there are more nodes 🙂

The difference in the config below is that we disable the default CNI.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  podSubnet: "10.240.0.0/16"
  disableDefaultCNI: true
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true,zone=cookie,region=oo-space-1"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 88
    protocol: TCP
  - containerPort: 30443
    hostPort: 444
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=alpha,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=alpha,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=beta,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=beta,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-centra
l-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-central-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-central-1"

Once the cluster was up and running, I used kapp to deploy Calico by issuing the following command:

kapp deploy -a calico -f <(curl https://docs.projectcalico.org/v3.17/manifests/calico.yaml)

Shortly after the nodes applied the configuration change, Calico was running on all nodes.
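
To confirm, a quick check – assuming the defaults from the manifest above (the calico-node DaemonSet lives in kube-system and carries the k8s-app=calico-node label):

# calico-node should be Running on every node, and the nodes should report Ready
❯ kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
❯ kubectl get nodes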

That gets you going right away! But in order to really understand the power you now have, I can highly recommend looking at example NetworkPolicies, such as the minimal one sketched below.
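
For illustration, here is a minimal default-deny ingress policy – the demo namespace is just an example name; once applied, pods in that namespace only accept traffic that another policy explicitly allows:

# deny all ingress traffic to pods in the (example) demo namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress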

Once you have done that, there is also a great tool called sonobuoy for validating not only NetworkPolicies but your Kubernetes cluster configuration in general:

sonobuoy run --e2e-focus "NetworkPolicy" --e2e-skip ""
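
Once the run completes, a rough sketch of pulling the results (the tarball name is whatever sonobuoy retrieve prints):

# check progress of the run
❯ sonobuoy status

# download the results tarball and summarise it
❯ sonobuoy retrieve .
❯ sonobuoy results <tarball>.tar.gz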

Happy securing your k8s cluster!

AWS – EKS and “insufficient pods”

So the day has come when I noticed that one of my pods was not running, and I received the above-mentioned message: "insufficient pods".

What I then realised was that I had run out of the maximum number of pods I can run :O which in AWS EKS is tied to the ENI limits of the instance type.

To get the maximum number of pods you can run, execute the following:

❯ kubectl get node -o yaml | grep pods
      pods: "17" => status.allocatable – how many pods can be scheduled on this node
      pods: "17" => status.capacity – the node's total pod capacity

The details of the number of pods per instance type can be found at https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
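
The limit follows a simple formula based on the instance's ENIs. A worked example, assuming a t3.medium (3 ENIs with 6 IPv4 addresses each):

# max pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# t3.medium: 3 * (6 - 1) + 2 = 17  => matches the "17" reported above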

In Kubernetes v1.19 we will get GA of EvenPodsSpread (pod topology spread constraints), which will definitely help in managing how pods are distributed – see the sketch below.
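
A minimal sketch of what that looks like in a pod spec – the app=my-app label and the nginx image are just placeholders:

# spread pods carrying the app=my-app label evenly across zones
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app
  containers:
  - name: my-app
    image: nginx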

In my troubleshooting I also found the following commands helpful.

# find number of pods running per node 
❯ kubectl get pod --all-namespaces -o json | jq -r '.items[] |select( .kind=="Pod")| "\(.status.hostIP),\(.metadata.name)"'| awk -F, '{a[$1]++;}END{for (i in a)print i, a[i];}'

# find pods running on a specific node
❯ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=ip-10-10-1-55.eu-central-1.compute.internal

# find pods with a specific status (or not)
❯ kubectl get pods --all-namespaces -o wide --field-selector status.phase!=Running