This version of the documentation is archived and no longer supported. To learn how to upgrade your version of MongoDB Kubernetes Operator, refer to the upgrade documentation.

Troubleshooting the Kubernetes Operator

Get Status of MongoDB Resource

To find the status of a MongoDB Resource (replica set, sharded cluster, or standalone), invoke this command:

kubectl get mdb <resourcename> -n <namespace> -o yaml -w

The command's response describes the status of the resource using the following key-value pairs:

Key                        Value
message                    Error message explaining why the resource is in a failed state.
phase                      Current state of the resource:
                           • Pending: the resource is transitioning between two states.
                           • Running: the resource has completed reconciliation successfully.
                           • Failed: the resource had failures.
lastTransition             Timestamp in ISO 8601 date and time format in UTC when the last reconciliation happened.
link                       Deployment URL in Ops Manager.
Resource-specific fields   For descriptions of these fields, see MongoDB Kubernetes Object Specification.


To see the status of a replica set named my-replica-set in the developer namespace, run:

kubectl get mdb my-replica-set -n developer -o yaml -w

If my-replica-set is running, you should see:

    lastTransition: "2019-01-30T10:51:40Z"
    members: 1
    phase: Running
    version: 4.0.0

If my-replica-set is not running, you should see:

  lastTransition: 2019-02-01T13:00:24Z
  members: 1
  message: 'Failed to create/update replica set in Ops Manager: Status: 400 (Bad Request),
    Detail: Something went wrong validating your Automation Config. Sorry!'
  phase: Failed
  version: 4.0.0
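If you only need the phase rather than the full document, you can filter the YAML output. A minimal sketch, in which the sample file status.yaml stands in for real kubectl output:

```shell
# Extract just the `phase` field from a saved copy of the resource status.
# status.yaml stands in for the output of:
#   kubectl get mdb <resourcename> -n <namespace> -o yaml
cat > status.yaml <<'EOF'
lastTransition: "2019-01-30T10:51:40Z"
members: 1
phase: Running
version: 4.0.0
EOF
awk '$1 == "phase:" {print $2}' status.yaml
```

On a live cluster, `kubectl get mdb <resourcename> -n <namespace> -o jsonpath='{.status.phase}'` should return the same field directly, without saving a file.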

Review the Logs

Review Logs from the Kubernetes Operator

To review the Kubernetes Operator logs, invoke this command:

kubectl logs -f deployment/mongodb-enterprise-operator -n <metadata.namespace>

You can also check the Ops Manager logs to see whether any issues were reported to Ops Manager.
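When reviewing a long Operator log, filtering for error-level lines is often faster than reading it end to end. A minimal sketch, in which the sample file operator.log (and its log lines) stands in for real Operator output:

```shell
# Filter a saved copy of the Operator log for error- and warning-level lines.
# operator.log stands in for the output of:
#   kubectl logs deployment/mongodb-enterprise-operator -n <metadata.namespace>
cat > operator.log <<'EOF'
INFO  Reconciling MongoDB resource my-replica-set
ERROR Failed to create/update replica set in Ops Manager: Status: 400 (Bad Request)
INFO  Retrying reconciliation
EOF
grep -E '^(ERROR|WARN)' operator.log
```

On a live cluster, you can pipe `kubectl logs` output straight into the same `grep` filter.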

Find a Specific Pod

To find which pods are available, invoke this command first:

kubectl get pods -n <metadata.namespace>

See also

Kubernetes documentation on kubectl get.

Review Logs from Specific Pod

If you want to narrow your review to a specific pod, you can invoke this command:

kubectl logs <podName> -n <metadata.namespace>


If your replica set is named myrs, invoke the pod log command as:

kubectl logs myrs-0 -n <metadata.namespace>

This returns the Automation Agent Log for this replica set.

View All MongoDB Kubernetes resource Specifications

To view all MongoDB Kubernetes resource specifications in the provided namespace:

kubectl get mdb -n <namespace>


To read details about the dublin standalone resource, invoke this command:

kubectl get mdb dublin -n <namespace> -o yaml

This returns the following response:

kind: MongoDB
metadata:
  annotations: |
  clusterName: ""
  creationTimestamp: 2018-09-12T17:15:32Z
  generation: 1
  name: dublin
  namespace: mongodb
  resourceVersion: "337269"
  selfLink: /apis/
  uid: 7442095b-b6af-11e8-87df-0800271b001d
spec:
  credentials: my-credentials
  type: Standalone
  persistent: false
  podSpec:
    memory: 1G
  project: my-om-config
  version: 4.0.0-ent

Restore StatefulSet that Failed to Deploy

A StatefulSet pod may hang with a status of Pending if it encounters an error during deployment.

Pending pods do not automatically terminate, even if you make and apply configuration changes to resolve the error.

To return the StatefulSet to a healthy state, apply the configuration changes to the MongoDB resource in the Pending state, then delete those pods.


A host system has a number of running pods:

kubectl get pods

NAME               READY   STATUS    RESTARTS   AGE
my-replica-set-0   1/1     Running   2          2h
my-replica-set-1   1/1     Running   2          2h
my-replica-set-2   0/1     Pending   0          2h

my-replica-set-2 is stuck in the Pending state. To gather more data on the error, run the following:

kubectl describe pod my-replica-set-2

<describe output omitted>

Warning FailedScheduling 15s (x3691 over 3h) default-scheduler 0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient memory.

The output indicates an error in memory allocation.

Updating the memory allocations in the MongoDB resource is insufficient, as the pod does not terminate automatically after applying configuration updates.

To remedy this issue, update the configuration, apply the configuration, then delete the hung pod:

vi <my-replica-set>.yaml

kubectl apply -f <my-replica-set>.yaml

kubectl delete pod my-replica-set-2

Once the hung pod is deleted, the other pods restart with your new configuration as part of a rolling upgrade of the StatefulSet.
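If several pods are stuck, you can identify them from the pod listing before deleting each one. A minimal sketch, in which the sample file pods.txt stands in for real `kubectl get pods` output:

```shell
# Find pods stuck in Pending from a saved pod listing, so each can be
# deleted and recreated by the StatefulSet controller with the updated
# configuration. pods.txt stands in for the output of `kubectl get pods`.
cat > pods.txt <<'EOF'
my-replica-set-0   1/1   Running   2   2h
my-replica-set-1   1/1   Running   2   2h
my-replica-set-2   0/1   Pending   0   2h
EOF
awk '$3 == "Pending" {print $1}' pods.txt
```

On a live cluster, `kubectl get pods --field-selector=status.phase=Pending` lists Pending pods directly; each name can then be passed to `kubectl delete pod`.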


To learn more about this issue, see Kubernetes Issue 67250.

Remove a MongoDB Kubernetes resource

To remove any instance that Kubernetes deployed, you must use Kubernetes.


You can only use the Kubernetes Operator to remove Kubernetes-deployed instances. If you use Ops Manager to remove the instance, Ops Manager throws an error.


To remove a single MongoDB instance you created using Kubernetes:

kubectl delete mdb <name> -n <metadata.namespace>

To remove all MongoDB instances you created using Kubernetes:

kubectl delete mdb --all -n <metadata.namespace>

Remove the Kubernetes Operator

To remove the Kubernetes Operator:

  1. Remove all Kubernetes resources:

    kubectl delete mdb --all -n <metadata.namespace>
  2. Remove the Kubernetes Operator:

    kubectl delete deployment mongodb-enterprise-operator -n <metadata.namespace>

Remove the namespace

To remove the namespace:

  1. Remove all Kubernetes resources:

    kubectl delete mdb --all -n <metadata.namespace>
  2. Remove the namespace:

    kubectl delete namespace <metadata.namespace>

Remove the CustomResourceDefinitions

To remove the CustomResourceDefinitions:

  1. Remove all Kubernetes resources:

    kubectl delete mdb --all -n <metadata.namespace>
  2. Remove the CustomResourceDefinitions:

    kubectl delete crd --all