Placement Constraints in a K8s Cluster

Yogesh Kumar
Apr 5, 2021


Introduction

Placement constraints for Pod/Job scheduling in a K8s cluster can be achieved by assigning labels to nodes directly via kubectl. The created label can then be referenced as a node selector in a Kubernetes Pod or Deployment spec.

The presence of a node selector restricts scheduling of the resource to the desired host/hardware.
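For example, the same mechanism applies to higher-level resources such as Deployments. A minimal sketch (illustrative only, not part of the exercise below; the label key/value follows the examples used later) places nodeSelector under the pod template spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      nodeSelector:
        testLabel: label1   # must match a label already attached to at least one node

All replicas of this Deployment would only be scheduled on nodes carrying testLabel=label1.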

Demonstration

The reference for assigning labels to a Kubernetes node is here:

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#step-one-attach-label-to-the-node

Command:

kubectl label nodes <node-name> <label-key>=<label-value>

Example:

kubectl label nodes <node-host-name> testLabel=label1
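A label can also be changed or removed later with the same kubectl label command (not part of the original reference, shown here for completeness):

kubectl label nodes <node-host-name> testLabel=label2 --overwrite   # change the value of an existing label
kubectl label nodes <node-host-name> testLabel-                     # the trailing dash removes the label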

The reference for adding a node selector to a Kubernetes pod configuration is here:

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration

Example pod configuration:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    testLabel: label1

The following exercise was done on a k8s cluster to demonstrate node labelling as a placement constraint.

1. Creating a test label testLabel=wlbl on the worker node

C:\Users\Yogesh\Downloads>kubectl.exe label nodes test-cp-machine-0.test-org.local testLabel=wlbl --kubeconfig k8s21_kubeconfig
node/test-cp-machine-0.test-org.local labeled

2. Verifying labels on the existing k8s nodes/hosts

>kubectl.exe get nodes --kubeconfig k8s21_kubeconfig --show-labels
NAME STATUS ROLES AGE VERSION LABELS
test-cp-machine-0.test-org.local Ready worker 130m v1.18.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu=true,org.com/compute=true,org.com/dataplatform=false,org.com/exclusivecluster=none,org.com/usenode=true,inference_node=false,kubernetes.io/arch=amd64,kubernetes.io/hostname=test-cp-machine-0.test-org.local,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,testLabel=wlbl
kf-4.test-org.local Ready master 133m v1.18.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kf-4.test-org.local,kubernetes.io/os=linux,node-role.kubernetes.io/master=
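An equivalent, shorter check (not run in the original exercise) is to filter nodes with a label selector; only nodes carrying testLabel=wlbl would be listed:

>kubectl.exe get nodes -l testLabel=wlbl --kubeconfig k8s21_kubeconfig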

3. Creating Kubernetes pods with different node selector values

Case 3.1: Pod configuration with a node selector that does not match the worker node label

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    testLabel: wlb2

>kubectl create -f pod.yaml --kubeconfig k8s21_kubeconfig
pod/nginx created

>kubectl.exe get pods --kubeconfig k8s21_kubeconfig
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 101s

>kubectl.exe describe pod nginx --kubeconfig k8s21_kubeconfig
Name: nginx
Namespace: k8s21
...
Events:
Type Reason Age From Message
---- ------ --- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 2 node(s) didn't match node selector.
Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 2 node(s) didn't match node selector.


Conclusion: Since no k8s host matches the node selector, the pod stays in Pending state, waiting to be scheduled.
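A pod's nodeSelector cannot be changed after creation, so there are two ways out of this state: delete and recreate the pod with a matching selector, or relabel a node so that it matches. The latter would look like the command below (not executed in this exercise, since the next case relies on the original wlbl value); once a node carries the expected label, the scheduler places the pending pod on its next attempt:

>kubectl label nodes test-cp-machine-0.test-org.local testLabel=wlb2 --overwrite --kubeconfig k8s21_kubeconfig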

Case 3.2: new_pod.yaml with a node selector matching the worker node label:

apiVersion: v1
kind: Pod
metadata:
  name: nginx3
  labels:
    env: test
spec:
  containers:
  - name: nginx3
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    testLabel: wlbl

>kubectl create -f new_pod.yaml --kubeconfig k8s21_kubeconfig
pod/nginx3 created

>kubectl describe pod nginx3 --kubeconfig k8s21_kubeconfig
Name: nginx3
Namespace: k8s21
...
Events:
Type Reason Age From Message
---- ------ --- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned k8s21/nginx3 to test-cp-machine-0.test-org.local
Warning Failed 2m19s kubelet, test-cp-machine-0.test-org.local Failed to pull image "nginx": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning Failed 2m19s kubelet, test-cp-machine-0.test-org.local Error: ErrImagePull
Normal BackOff 2m19s kubelet, test-cp-machine-0.test-org.local Back-off pulling image "nginx"
Warning Failed 2m19s kubelet, test-cp-machine-0.test-org.local Error: ImagePullBackOff
Normal Pulling 2m7s (x2 over 2m24s) kubelet, test-cp-machine-0.test-org.local Pulling image "nginx"
Normal Pulled 112s kubelet, test-cp-machine-0.test-org.local Successfully pulled image "nginx"
Normal Created 112s kubelet, test-cp-machine-0.test-org.local Created container nginx3
Normal Started 112s kubelet, test-cp-machine-0.test-org.local Started container nginx3

>kubectl.exe get pods --kubeconfig k8s21_kubeconfig
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 18m
nginx3 1/1 Running 0 2m53s
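To confirm which host nginx3 actually landed on, the wide output format adds NODE and IP columns (output not captured in the original run):

>kubectl.exe get pods -o wide --kubeconfig k8s21_kubeconfig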

Conclusion:

Since one k8s host matches the node selector, the pod is scheduled to it and reaches Running state. The initial ErrImagePull/ImagePullBackOff events were caused by the Docker Hub pull rate limit and are unrelated to scheduling; the image pull succeeded on retry.
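The same Kubernetes documentation page also describes node affinity, a more expressive alternative to nodeSelector that supports operators such as In and NotIn. A minimal sketch (illustrative only, not created in this exercise) that matches the same testLabel=wlbl label:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity   # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: testLabel
            operator: In
            values:
            - wlbl
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

With the required node affinity rule above, the pod schedules only on nodes whose testLabel value is wlbl, just like the nodeSelector case.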
