
Kubernetes with Calico using kind

Recently, after analysing the requirements for an application I manage, I realised I needed a way to secure communication within my cluster – so that, in a nutshell, it is not an open wilderness.

While looking at several alternatives, one was very appealing, especially after watching a video about it…

And yes, it is Project Calico.

So I decided to do some more testing with it and spin it up in a locally running cluster. To have some more fun, this time there are more nodes 🙂

The difference in the config below is that we disable the default CNI, so that Calico can take over as the CNI plugin.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  podSubnet: "10.240.0.0/16"
  disableDefaultCNI: true
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true,zone=cookie,region=oo-space-1"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 88
    protocol: TCP
  - containerPort: 30443
    hostPort: 444
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=alpha,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=alpha,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=beta,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=beta,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-centra
l-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-central-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-central-1"

Once the cluster was up and running, I used kapp to deploy Calico by issuing the following command:

kapp deploy -a calico -f <(curl https://docs.projectcalico.org/v3.17/manifests/calico.yaml)

Shortly after the nodes applied the configuration change, Calico was running on all of them.
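
A quick way to verify that (the k8s-app=calico-node label comes from the standard calico.yaml manifest):

# the Calico node agents should be Running on every node
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide

# and the nodes should now report Ready
kubectl get nodes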

That gets you going right away! But in order to really understand the power you now have, I can highly recommend looking at some example NetworkPolicies.
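
As a taste, here is a minimal sketch of a default-deny ingress policy – applied to a namespace, it blocks all incoming pod traffic until other policies explicitly allow it (the namespace name is just an example):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}      # empty selector = every pod in the namespace
  policyTypes:
  - Ingress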

Once you have done that, there is also a great tool called sonobuoy, which validates not only NetworkPolicies but your Kubernetes cluster configuration in general:

sonobuoy run --e2e-focus "NetworkPolicy" --e2e-skip ""
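
Once the run finishes, you can check on it and pull the results like this:

# check progress of the test run
sonobuoy status

# download the results archive and summarise it
results=$(sonobuoy retrieve)
sonobuoy results $results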

Happy securing your k8s cluster!


AWS – EKS and "insufficient pods"

So the day has come when I noticed that one of my pods was not running and I received the above-mentioned message "insufficient pods".

What I then realised was that I had run out of the maximum number of pods I can run :O which in AWS EKS is tied to the ENI (Elastic Network Interface) limits of the instance type.

To get the maximum number of pods you can run, execute the following:

❯ kubectl get node -o yaml | grep pods
      pods: "17" => status.allocatable – how many pods can be scheduled on the node
      pods: "17" => status.capacity – the total pod capacity of the node

The details of the number of pods per instance type can be found at https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
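
As far as I know, the values in that file follow the formula max pods = ENIs * (IPv4 addresses per ENI - 1) + 2. For example, an m5.large supports 3 ENIs with 10 IPv4 addresses each:

# max pods = ENIs * (IPs per ENI - 1) + 2
❯ echo $(( 3 * (10 - 1) + 2 ))
29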

In Kubernetes v1.19 we will get GA of EvenPodsSpread (topology spread constraints), which will definitely help in managing how pods are distributed.
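
A minimal sketch of what that looks like (the pod and label names are just examples) – it asks the scheduler to keep app=demo pods evenly spread across zones:

apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                # tolerate at most 1 pod of imbalance
    topologyKey: topology.kubernetes.io/zone  # spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: demo
    image: nginx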

Also, in my troubleshooting I found it helpful to use some of the scripts below.

# find the number of pods running per node
❯ kubectl get pod --all-namespaces -o json | jq -r '.items[] |select( .kind=="Pod")| "\(.status.hostIP),\(.metadata.name)"'| awk -F, '{a[$1]++;}END{for (i in a)print i, a[i];}'

# find pods running on a specific node
❯ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=ip-10-10-1-55.eu-central-1.compute.internal

# find pods with a specific status (or not)
❯ kubectl get pods --all-namespaces -o wide --field-selector status.phase!=Running

Kubernetes context per terminal session

If you are, like me, working with multiple Kubernetes clusters, it becomes really unhandy to work with several of them at the same time.

For a while I have been using ktx to manage that – however, at the moment of writing, it sets the context globally, for every terminal session at once.

So as a small workaround I have these 2 functions in my `.zshrc` file:

k8ctx-switch() {
  # create a temp file for our config (mktemp needs the XXXXXX template)
  TEMP_CONFIG="$(mktemp "${TMPDIR:-/tmp}/kubectx.$1.XXXXXX")"
  # extract only the requested context into the temp file
  kubectl config view --minify --flatten --context="$1" > "$TEMP_CONFIG"
  # prepend it so this terminal session uses that context
  export KUBECONFIG="${TEMP_CONFIG}:${KUBECONFIG}"
  cat "$TEMP_CONFIG"
}

k8ctx-list() {
  # always read the contexts from the main kubeconfig
  KUBECONFIG="${HOME}/.kube/config" kubectl config get-contexts
}

This is an MVP 🙂 to get multiple tabs opened, each with a different k8s cluster to manage. Pretty handy 🙂
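
Usage then looks like this (the context name is just an example):

❯ k8ctx-list                  # list the contexts available in ~/.kube/config
❯ k8ctx-switch prod-cluster   # pin this terminal session to "prod-cluster"
❯ kubectl get nodes           # now talks to prod-cluster only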


Kubernetes kind with Traefik

If you have been working with Kubernetes, you already know that sometimes you just need a quick way to spin up a cluster.

Well, if that is so, you must have come across kind. It allows you to spin up a k8s cluster locally.

kind is a tool for running local Kubernetes clusters using Docker container "nodes".

With just one line you can get and spin up your cluster 🙂
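
That one line being simply:

kind create cluster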


Once that's up and running, I was interested in having some more fun with Traefik. However, in the kind documentation only a subset of ingress controllers was mentioned as supported…

So this felt like a good Sunday challenge 🙂 since I really had some plans to explore Traefik and its potential more.

I started off by finding the details of creating a new cluster with exposed, mapped ports:

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 88
    protocol: TCP
  - containerPort: 30443
    hostPort: 444
    protocol: TCP
- role: worker
- role: worker
- role: worker
EOF

After the above, kind will create 4 "nodes", of which one (in my case the control-plane node) will have the port mappings and the specific node label (the node label you can change to your needs).

Once your cluster is ready, you can apply the bundle from the code below. It does not have a host defined for the ingress route, nor the middleware for the dashboard, but those are things you can easily add with the CRDs.

# Traefik bundle
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressrouteudps.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteUDP
    plural: ingressrouteudps
    singular: ingressrouteudp
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsstores.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSStore
    plural: tlsstores
    singular: tlsstore
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
  scope: Namespaced
---
apiVersion: v1
kind: Namespace
metadata:
  name:  traefik
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
    verbs:
      - get
      - list
      - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name:  traefik
  namespace: traefik
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik
subjects:
  - kind: ServiceAccount
    name: traefik
    namespace: traefik
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: traefik
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
  template:
    metadata:
      labels:
        app.kubernetes.io/name: traefik
    spec:
      containers:
        - args:
            - --global.checknewversion
            - --global.sendanonymoususage
            - --entryPoints.traefik.address=:9000/tcp
            - --entryPoints.web.address=:9080/tcp
            - --entryPoints.websecure.address=:9443/tcp
            - --api.dashboard=true
            - --ping=true
            - --providers.kubernetescrd
            - --providers.kubernetesingress
            - --log.level=DEBUG
          image: traefik:2.2.11
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          name: traefik
          ports:
          - containerPort: 9000
            name: traefik
            protocol: TCP
          - containerPort: 9080
            name: web
            protocol: TCP
          - containerPort: 9443
            name: websecure
            protocol: TCP
          readinessProbe:
            failureThreshold: 1
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          resources: {}
          securityContext:
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 65532
            runAsNonRoot: true
            runAsUser: 65532
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      tolerations:
        - key: "node-role.kubernetes.io/master"
          effect: "NoSchedule"
          operator: "Exists"
      nodeSelector:
        ingress-ready: "true"
      restartPolicy: Always
      securityContext:
        fsGroup: 65532
      serviceAccountName: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: traefik
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: traefik
  ports:
    - name: web
      port: 9080
      nodePort: 30080
    - name: websecure
      port: 9443
      nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
spec:
  type: ClusterIP
  ports:
  - port: 9000
    name: traefik
  selector:
     app.kubernetes.io/name: traefik
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
spec:
  routes:
  - match: (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
    kind: Rule
    services:
    - name: api@internal
      kind: TraefikService
    # middlewares:
    #   - name: auth
# ---
# apiVersion: traefik.containo.us/v1alpha1
# kind: Middleware
# metadata:
#   name: auth
# spec:
#   basicAuth:
#     secret: secretName # Kubernetes secret named "secretName"
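
Assuming you saved the bundle above as traefik-bundle.yaml (the filename is up to you), applying it is just:

kubectl apply -f traefik-bundle.yaml

# wait for the Traefik pod to come up
kubectl -n traefik get pods -w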

Once the installation is completed you can access the dashboard. Of course, that dashboard is exposed via an IngressRoute, so the fun can begin 🙂
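
Given the extraPortMappings in the cluster config above (host port 88 -> NodePort 30080 -> the web entrypoint), the dashboard should be reachable from your machine at:

# the trailing slash matters for the dashboard route
curl http://localhost:88/dashboard/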

Hope this will enable you to quickly discover Traefik/Kubernetes/kind 🙂

Happy coding!