You Select Me - Deploy Applications on desired OpenShift Nodes

:heavy_exclamation_mark: This post is older than a year. Consider some information might not be accurate anymore. :heavy_exclamation_mark:

Used: oc v3.11.16, kubernetes v1.11.0+d4cacc0

This article describes how to deploy pods (applications) on a desired node or nodes. OpenShift relies on Kubernetes (K8S) for scheduling, so we also cover the relevant K8S basics.

  • We have one OpenShift node with a persistent-storage problem.
  • Storage provisioning works, but K8S cannot mount the storage from that particular node.
  • Our cluster consists of three master nodes and six compute nodes.
  • One of the six compute nodes is faulty.
  • Each deployment therefore has a 1/6 (16.667%) chance of landing on the faulty node.
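
The 1/6 figure assumes the scheduler spreads deployments uniformly over the six compute nodes. A quick sketch of the arithmetic (the uniform-placement assumption is ours):

```python
from fractions import Fraction

# Chance that one deployment lands on the faulty node,
# assuming uniform placement across six compute nodes.
p_faulty = Fraction(1, 6)
print(f"single deployment: {float(p_faulty):.3%}")  # 16.667%

# Chance that at least one of n deployments hits the faulty node.
n = 3
p_any = 1 - (1 - p_faulty) ** n
print(f"at least one of {n}: {float(p_any):.3%}")  # 42.130%
```

With only a handful of deployments, the odds of hitting the faulty node at least once climb quickly, which is why this bug surfaced so often.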

We have narrowed the problem down to one specific node. Now we are going to troubleshoot that node, but we must ensure that the test deployment lands only on that node.

Node Management

To list the nodes in the OpenShift cluster, use this command:

oc get nodes

Our third node in data centre 2 is the problem.

NAME          STATUS                     ROLES          AGE       VERSION
dc-1-node-1   Ready                      compute        114d      v1.11.0+d4cacc0
dc-1-node-2   Ready                      compute        114d      v1.11.0+d4cacc0
dc-1-node-3   Ready                      compute        114d      v1.11.0+d4cacc0
dc-2-node-1   Ready                      compute        114d      v1.11.0+d4cacc0
dc-2-node-2   Ready                      compute        114d      v1.11.0+d4cacc0
dc-2-node-3   Ready,SchedulingDisabled   compute        114d      v1.11.0+d4cacc0
master-1      Ready                      infra,master   114d      v1.11.0+d4cacc0
master-2      Ready                      infra,master   114d      v1.11.0+d4cacc0
master-3      Ready                      infra,master   114d      v1.11.0+d4cacc0

After we have identified the culprit, we can prevent the deployment of new pods to that node with:

oc adm manage-node dc-2-node-3 --schedulable=false

To run the troubleshooting deployment on it, we make the node schedulable again with this command:

oc adm manage-node dc-2-node-3 --schedulable
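
Cordoning a node only removes it from consideration for new pods; pods already running there keep running. A toy model of that scheduling filter (node data is illustrative, not the real scheduler code):

```python
# Node spec excerpts: `unschedulable` mirrors the effect of
# `oc adm manage-node <node> --schedulable=false`.
nodes = {
    "dc-2-node-2": {"unschedulable": False},
    "dc-2-node-3": {"unschedulable": True},  # cordoned
}

def schedulable_nodes(nodes: dict) -> list:
    """Names of nodes the scheduler still considers for new pods."""
    return [name for name, spec in nodes.items() if not spec["unschedulable"]]

print(schedulable_nodes(nodes))  # ['dc-2-node-2']
```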

Label Node for Troubleshooting

We label the node for the troubleshooting deployment with the key type and the value problem. Any other suitable key and value would work as well.

oc label nodes dc-2-node-3 type=problem

node/dc-2-node-3 labeled

Labels are a K8S concept. Labelling an object in OpenShift or Kubernetes is an excellent method to organise, group, or select API objects.
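
Equality-based selection follows one simple rule: an object matches a selector when its labels contain every requested key/value pair. A minimal sketch of that containment rule, which is the same rule the nodeSelector uses later:

```python
def matches(labels: dict, selector: dict) -> bool:
    """True if `labels` contains every key=value pair of `selector`."""
    return all(labels.get(key) == value for key, value in selector.items())

node_labels = {"beta.kubernetes.io/arch": "amd64", "type": "problem"}
print(matches(node_labels, {"type": "problem"}))  # True
print(matches(node_labels, {"type": "infra"}))    # False
```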

Verify label is applied

After the labelling, we double-check by looking at the node description.

oc describe node dc-2-node-3 | head

Name:               dc-2-node-3
Roles:              compute
Labels:             beta.kubernetes.io/arch=amd64


We create a new test project.

oc new-project friendly-broccoli

We use a PostgreSQL deployment with persistent storage to troubleshoot and test the storage capability of each node. Any other deployment of an application with persistent storage would also do.

  • We create a new template for PostgreSQL.
  • As a basis, we use the template from Origin (the OpenShift open-source community project).
  • We download the template and extend it with a nodeSelector key pointing at our previously applied label.
  • The nodeSelector belongs in the pod template's spec of the DeploymentConfig definition.
  "spec": {
    "nodeSelector": {
      "type": "problem"

Create the new template:

oc create -f postgresql-persistent-template.json

We can check the deployment parameters of the template with their default values:

oc process --parameters -n friendly-broccoli postgresql-persistent
NAME                    DESCRIPTION                                                                  VALUE
MEMORY_LIMIT            Maximum amount of memory the container can use.                              512Mi
NAMESPACE               The OpenShift Namespace where the ImageStream resides.                       openshift
DATABASE_SERVICE_NAME   The name of the OpenShift Service exposed for the database.                  postgresql
POSTGRESQL_USER         Username for PostgreSQL user that will be used for accessing the database.   user[A-Z0-9]{3}
POSTGRESQL_PASSWORD     Password for the PostgreSQL connection user.                                 [a-zA-Z0-9]{16}
POSTGRESQL_DATABASE     Name of the PostgreSQL database accessed.                                    sampledb
VOLUME_CAPACITY         Volume space available for data, e.g. 512Mi, 2Gi.                            1Gi
POSTGRESQL_VERSION      Version of PostgreSQL image to be used (10 or latest).                       10
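
The bracketed values in the table are not literal defaults but generator expressions: OpenShift expands, for example, user[A-Z0-9]{3} into user followed by three random characters from that class. A rough, simplified re-implementation of that expansion (the real generator supports more syntax than this sketch):

```python
import re
import secrets
import string

# Supported character ranges for this simplified sketch.
CLASSES = {
    "A-Z": string.ascii_uppercase,
    "a-z": string.ascii_lowercase,
    "0-9": string.digits,
}

def expand(expression: str) -> str:
    """Expand a simplified [character-ranges]{n} generator expression."""
    def replace(match):
        ranges = re.findall(r"[A-Za-z0-9]-[A-Za-z0-9]", match.group(1))
        alphabet = "".join(CLASSES[r] for r in ranges)
        return "".join(secrets.choice(alphabet) for _ in range(int(match.group(2))))
    return re.sub(r"\[([^\]]+)\]\{(\d+)\}", replace, expression)

print(expand("user[A-Z0-9]{3}"))   # e.g. "user7KQ" (random suffix)
print(expand("[a-zA-Z0-9]{16}"))   # a random 16-character password
```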

Test Deployment

We create a new deployment for testing:

oc new-app postgresql-persistent -p POSTGRESQL_USER=friend -p POSTGRESQL_PASSWORD=sweet

The OpenShift cluster logs:

--> Deploying template "friendly-broccoli/postgresql-persistent" to project friendly-broccoli

     PostgreSQL database service, with persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/.

     NOTE: Scaling to more than one replica is not supported. You must have persistent volumes available in your cluster to use this template.

     The following service(s) have been created in your project: postgresql.

            Username: friend
            Password: sweet
       Database Name: sampledb
      Connection URL: postgresql://postgresql:5432/

     For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/.

     * With parameters:
        * Memory Limit=512Mi
        * Namespace=openshift
        * Database Service Name=postgresql
        * PostgreSQL Connection Username=friend
        * PostgreSQL Connection Password=sweet
        * PostgreSQL Database Name=sampledb
        * Volume Capacity=1Gi
        * Version of PostgreSQL Image=10

--> Creating resources ...
    secret "postgresql" created
    service "postgresql" created
    persistentvolumeclaim "postgresql" created
    deploymentconfig.apps.openshift.io "postgresql" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/postgresql'
    Run 'oc status' to view your app.

We now can check on which node the pod runs:

oc get pods -o wide

As intended, the pod is on the labelled node. OpenShift tries to bring up PostgreSQL there, without success.

NAME                  READY     STATUS              RESTARTS   AGE       IP            NODE          NOMINATED NODE
postgresql-1-4kt8t    0/1       ContainerCreating   0          10s       <none>        dc-2-node-3   <none>
postgresql-1-deploy   1/1       Running             0          12s   dc-2-node-3   <none>
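
To avoid eyeballing the wide output, the pod-to-node mapping can also be scraped programmatically; a sketch against a sample of the output above:

```python
# Sample `oc get pods -o wide` output (taken from the run above).
SAMPLE = """\
NAME                  READY   STATUS              RESTARTS   AGE   IP       NODE          NOMINATED NODE
postgresql-1-4kt8t    0/1     ContainerCreating   0          10s   <none>   dc-2-node-3   <none>
"""

def pod_nodes(output: str) -> dict:
    """Map pod name -> node from `oc get pods -o wide` output."""
    lines = output.strip().splitlines()
    node_col = lines[0].split().index("NODE")
    return {row.split()[0]: row.split()[node_col] for row in lines[1:]}

print(pod_nodes(SAMPLE))  # {'postgresql-1-4kt8t': 'dc-2-node-3'}
```

In practice, asking the API server directly (for example with oc get pods -o json and a JSON parser) is more robust than scraping the table layout.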

We expected the above outcome, and now we can troubleshoot the events:

oc get events

The events reveal our problem:

2m          2m           1         postgresql-1-4kt8t.15b9495f8da0acd4    Pod
 Normal    SuccessfulAttachVolume   attachdetach-controller                        
 AttachVolume.Attach succeeded for volume "pvc-dcf1bb72-bab6-11e9-ac47-005056ab11cb"
45s         45s          1         postgresql-1-4kt8t.15b9497c12ede9b2    Pod
 Warning   FailedMount              kubelet, dc-2-node-3   
 Unable to mount volumes for pod "postgresql-1-friendly-broccoli(ded32df0-bab6-11e9-a7eb-005056abfe40)":
 timeout expired waiting for volumes to attach or mount for pod "friendly-broccoli"/"postgresql-1-4kt8t".
 list of unmounted volumes=[postgresql-data]. list of unattached volumes=[postgresql-data default-token-c26w7]


Summary

In OpenShift, it is useful to have pods land on specific sets of nodes for troubleshooting and isolation purposes.

  • At runtime, we used labels and selectors to deploy the PostgreSQL example on a specific node.
  • We had no downtime: there was no need to restart a node, which is one major strength of K8S.
  • This post demonstrates how to use labels for troubleshooting.
  • Another scenario is to label infrastructure or database nodes in production for the distinctive deployment of PostgreSQL.

If you are wondering if we could solve the persistent storage problem, look into this previous blog post.
