[SOLVED]: Openshift Run Container as root with runAsUser In securityContext

  • An OpenShift project assigns a uid to the containers running under it. This is the default behavior of OpenShift: each project is assigned a range of uids, and new pods get their uid from that range.
  • In some cases it is necessary to run a container with a static user id, rather than with the uid assigned by the OpenShift project.
  • Some processes in a microservice depend on a specific uid and will not work as expected with the uid assigned by the OpenShift project.

In this article, we will see how to run a pod with a custom uid that is not in the range given by the OpenShift project. Usually the user is created at the image level with a fixed uid. Since we are using the dummy httpd image available in the image repo, we will run the pod with uid 0, which is the root user.

Setup Details

Existing Default Behavior

Before we jump into the solution, let's check the default behavior of containers in OpenShift.

  • Create a project called test
PS C:\Users\****> oc new-project test
Now using project "test" on server "https://api.crc.testing:6443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname
  • Check the uid range assigned to the project test. This can be done by describing the namespace.
PS C:\Users\****> oc describe ns test
Name:         test
Labels:       kubernetes.io/metadata.name=test
              pod-security.kubernetes.io/audit=restricted
              pod-security.kubernetes.io/audit-version=v1.24
              pod-security.kubernetes.io/warn=restricted
              pod-security.kubernetes.io/warn-version=v1.24
Annotations:  openshift.io/sa.scc.mcs: s0:c27,c4
              openshift.io/sa.scc.supplemental-groups: 1000710000/10000
              openshift.io/sa.scc.uid-range: 1000710000/10000
Status:       Active
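The pod checked in the next step (named example) is not created by the project itself; below is a minimal sketch of how such a pod could be created. The image path is assumed to be the same httpd image used later in the deployment, and the command simply keeps the container running. With no securityContext set, the pod falls back to the restricted defaults and picks up a uid from the project range, which is exactly what the next check demonstrates.

apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: test
spec:
  containers:
  - name: example
    image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest'
    command: ["/bin/bash","-c","while true; do sleep 1000; done"]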
  • Check the uid from inside a pod running in the new test project (the example pod sketched above). It can be seen that the uid is from the range allocated to the project, and that we get permission denied when accessing some files.
PS C:\Users\****> oc exec -it example -n test bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4$ id
uid=1000710000(1000710000) gid=0(root) groups=0(root),1000710000
bash-4.4$ cat /etc/crypttab
cat: /etc/crypttab: Permission denied

Run the Container as root or with a static uid in OpenShift

In order to run the container as root or with a static uid, we have to create a service account and add a role binding to it. Specifically, we will grant the SecurityContextConstraints (SCC) "anyuid" to the service account.

Create Service Account

Below is the declarative way of creating a service account (abhi-sa) in the project test.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: abhi-sa
#automountServiceAccountToken: false
PS C:\Users\****> oc apply -f .\sa.yaml -n test
serviceaccount/abhi-sa created
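Optionally, verify that the service account exists before moving on. This is a standard oc lookup; the exact output columns depend on the cluster version.

PS C:\Users\****> oc get serviceaccount abhi-sa -n test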

Create Role Binding

We need to add the SCC anyuid to the service account (abhi-sa). This can be done either the imperative way or the declarative way.

Imperative Configuration

Below is the syntax of the imperative command, followed by an example run.

oc adm policy add-scc-to-user anyuid -z <service-ac-name> -n <project-name>
PS C:\Users\****> oc adm policy add-scc-to-user anyuid -z abhi-sa -n test
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "test"
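As the output above indicates, the imperative command binds the service account to the ClusterRole system:openshift:scc:anyuid in the test namespace. If the grant ever needs to be reverted, the matching removal subcommand can be used; this is shown only as a reference and is not part of the walkthrough.

oc adm policy remove-scc-from-user anyuid -z abhi-sa -n test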

Declarative Configuration

In the declarative configuration, we create a manifest file for a RoleBinding that binds our service account (abhi-sa) to the ClusterRole system:openshift:scc:anyuid.
The code snippet below shows the content of the manifest file.

apiVersion: rbac.authorization.k8s.io/v1
# This RoleBinding grants the service account abhi-sa the ClusterRole
# system:openshift:scc:anyuid within the test namespace.
kind: RoleBinding
metadata:
  name: abhi-rl
  namespace: test
subjects:
# You can specify more than one "subject"
- kind: ServiceAccount
  name: abhi-sa
  namespace: test # namespace where the service account abhi-sa was created
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: ClusterRole # this must be Role or ClusterRole
  name: system:openshift:scc:anyuid # this must match the name of the ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
PS C:\Users\****> oc apply -f .\rolebinding.yaml -n test
rolebinding.rbac.authorization.k8s.io/abhi-rl created
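The binding created above can be checked with a standard lookup; the exact output depends on the cluster.

PS C:\Users\****> oc describe rolebinding abhi-rl -n test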

Create Deployment with service account and runAsUser

We will use the declarative mode to create a deployment that uses the created service account and a static uid via runAsUser. The manifest file below shows this.

apiVersion: apps/v1
kind: Deployment # K8s workload object; it could also be a StatefulSet, etc.
# metadata is for unique identification; the allowed fields are name, UID, and namespace
metadata:
  name: metadata-name-abhi-deployment
  namespace: test
  labels:
    UID: "32679" # This needs to be in quotes, otherwise the error below is thrown:
    # error: unable to decode "simple_pod_with_deployment.yaml": json: cannot unmarshal number into Go struct field ObjectMeta.metadata.labels of type string
    abhi: test
spec:
  selector:
    matchLabels:
      app: matchLabel-abhi #replicaset will manage pod with matching label
  replicas: 2
  template:
    metadata:
      labels:
        app: matchLabel-abhi #This is the identification used by replicaset
    spec:
      containers:
      - name: simple
        image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest'
        command: ["/bin/bash","-c","while true; do sleep 1000; done"]
        securityContext:
          runAsUser: 0 # static uid 0, i.e. the root user
      serviceAccountName: "abhi-sa"
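The deployment manifest is applied the same way as the earlier ones; deployment.yaml below is just an assumed file name for the manifest above, and oc should report the deployment as created.

PS C:\Users\****> oc apply -f .\deployment.yaml -n test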

Verify Static User & Uid

Once the service account, role binding, and deployment are deployed, we can verify the uid in the pods spawned by the deployment.

PS C:\Users\****> oc exec -it metadata-name-abhi-deployment-5f478768bd-lcgmz -n test bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4# id
uid=0(root) gid=0(root) groups=0(root)
bash-4.4# cat /etc/crypttab
test

It can be seen that the uid of the pod is 0, since we set runAsUser to 0 in the manifest file. We are also now able to access the file that gave permission denied when the pod was running with the uid assigned by the project.
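Note that the anyuid SCC allows any fixed uid, not only root. As a hypothetical variant (1001 is just an illustrative value, not taken from this walkthrough), the container securityContext could pin a non-root static uid instead:

        securityContext:
          runAsUser: 1001 # example static non-root uid; permitted once the service account can use the anyuid SCC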
