This version of the documentation is archived and no longer supported. To learn how to upgrade your version of MongoDB Kubernetes Operator, refer to the upgrade documentation.

Upgrade from Operator Version 0.9 and Earlier

Warning

Version 0.10 of the MongoDB Enterprise Kubernetes Operator introduced breaking changes and requires additional preparation before upgrading. The following procedure covers upgrades from Kubernetes Operator versions 0.9 and earlier. If you are already running version 0.10 or later, see Upgrade the Operator for upgrade instructions.

Version 0.10 of the Kubernetes Operator consolidated the MongoDbStandalone, MongoDbShardedCluster, and MongoDbReplicaSet CustomResourceDefinitions into a single CustomResourceDefinition called MongoDB.

Important

The following upgrade procedure allows you to keep data stored in persistent volumes from previous deployments that the Kubernetes Operator managed. If you do not wish to retain data from previous deployments and plan on deploying new resources, skip to the Upgrade section.

Prerequisites

  1. Verify you have the .yaml configuration file for each MongoDB resource you have deployed.

    Standalone Resources

    If you have standalone resources but do not have the .yaml configuration file for them, run the following command to generate the configuration file:

    kubectl get mst <standalone-name> -n <namespace> -o yaml > <standalone-conf-name>.yaml
    
    Replica Set Resources

    If you have replica set resources but do not have the .yaml configuration file for them, run the following command to generate the configuration file:

    kubectl get mrs <replicaset-name> -n <namespace> -o yaml > <replicaset-conf-name>.yaml
    
    Sharded Cluster Resources

    If you have sharded cluster resources but do not have the .yaml configuration file for them, run the following command to generate the configuration file:

    kubectl get msc <shardedcluster-name> -n <namespace> -o yaml > <shardedcluster-conf-name>.yaml
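    If you have many resources, the three export commands above can be combined into one loop. The following is a hedged sketch, assuming the v0.9 short names mst, mrs, and msc are still installed and kubectl targets the correct cluster; the namespace name is a placeholder:

```shell
# Export every legacy v0.9 resource in a namespace to its own .yaml
# file. mst/mrs/msc are the v0.9 short names; "mongodb" is a
# placeholder namespace -- adjust it to yours.
NAMESPACE=mongodb
for kind in mst mrs msc; do
  for ref in $(kubectl get "$kind" -n "$NAMESPACE" -o name 2>/dev/null); do
    name=${ref##*/}    # "mongodbstandalones.mongodb.com/foo" -> "foo"
    kubectl get "$kind" "$name" -n "$NAMESPACE" -o yaml > "${name}.yaml"
    echo "exported ${name}.yaml"
  done
done
```

    The loop simply produces nothing if a resource type has no instances.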
    
  2. Edit each .yaml configuration file to match the new CustomResourceDefinition:

    • Change the kind to MongoDB

    • Add the spec.type field and set it to Standalone, ReplicaSet, or ShardedCluster depending on your resource.

      Note

      The Kubernetes Operator does not support changing the type of an existing configuration, even though it accepts a valid configuration for a different type.

      For example, if your MongoDB resource is a standalone, you cannot set the value of spec.type to ReplicaSet and set spec.members. If you do, the Kubernetes Operator throws an error and requires you to revert to the previously working configuration.
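      The two edits in this step can be scripted for simple cases. The following is a hedged sketch, not an official migration tool: it assumes the exported file has kind: MongoDbStandalone on a line of its own and no existing spec.type, and the file created below is only a stand-in for your real exported configuration:

```shell
# Convert an exported v0.9 standalone file to the consolidated CRD:
# change the kind to MongoDB and insert spec.type under spec.
# The heredoc stands in for your real exported configuration file.
cat > my-standalone.yaml <<'EOF'
apiVersion: mongodb.com/v1
kind: MongoDbStandalone
metadata:
  name: my-standalone
spec:
  version: 4.2.1
EOF

awk '
  $0 == "kind: MongoDbStandalone" { print "kind: MongoDB"; next }
  { print }
  $0 == "spec:" { print "  type: Standalone" }
' my-standalone.yaml > my-standalone-v1.yaml

cat my-standalone-v1.yaml
```

      Review the converted file by hand before applying it; indentation-sensitive edits to YAML are easy to get wrong with text tools.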

    After you edit each .yaml file, it should look like one of the following examples:

    ---
    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: <my-standalone>
      namespace: <metadata.namespace>       # Should match
                                            # metadata.namespace in
                                            # your configmap file.
    spec:
      version: 4.2.1
      opsManager:                           # Alias of cloudManager
        configMapRef:
          name: <configMap.metadata.name>   # Should match metadata.name
                                            # in your configmap file.
      credentials: <mycredentials>
      type: Standalone
      persistent: true
    ...
    
    ---
    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: <my-secure-replica-set>
      namespace: <configMap.metadata.namespace>
                 # Must match metadata.namespace in ConfigMap file
    spec:
      members: 3
      version: 4.2.1
      opsManager:
        configMapRef:
          name: <configMap.metadata.name>
                # Must match metadata.name in ConfigMap file
      credentials: <mycredentials>
      type: ReplicaSet
      persistent: true
    ...
    
    ---
    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: <my-secure-sharded-cluster>
      namespace: <configMap.metadata.namespace>
                 # Must match metadata.namespace in ConfigMap file
    spec:
      shardCount: 2
      mongodsPerShardCount: 3
      mongosCount: 2
      configServerCount: 3
      version: 4.2.1
      opsManager:
        configMapRef:
          name: <configMap.metadata.name>
                # Must match metadata.name in ConfigMap file
      credentials: <mycredentials>
      type: ShardedCluster
      persistent: true
    ...
    

    Warning

If you change the metadata.name field, you will lose your resource’s data.

Upgrade the Kubernetes Operator

To upgrade to the latest version of the Kubernetes Operator from version 0.9 or earlier, follow the procedure that matches your environment:

  1. Change to the directory in which you cloned the Kubernetes Operator repository, then upgrade the CustomResourceDefinitions for MongoDB deployments using the following kubectl command:

    kubectl apply -f crds.yaml
    
  2. If you use OpenShift as your Kubernetes orchestrator, you need to allow OpenShift to manage the Security Context for the Kubernetes Operator.

    Change the MANAGED_SECURITY_CONTEXT value as described in the next step.

  3. You can edit the Operator YAML file to further customize your Operator before upgrading it.

    1. Open your mongodb-enterprise.yaml in your preferred text editor.

    2. You may need to add one or more of the following options:

      OPERATOR_ENV

      Label for the Operator’s deployment environment. The env value affects default timeouts and the format and level of logging.

      If OPERATOR_ENV is   Log Level is set to   Log Format is set to
      dev                  debug                 text
      prod                 info                  json

      Accepted values are: dev, prod.

      Default value is: prod.

      You can set the following pair of values:

      spec.template.spec.containers.name.env.name: OPERATOR_ENV
      spec.template.spec.containers.name.env.value: prod
      

      Example

      spec:
        template:
          spec:
            serviceAccountName: mongodb-enterprise-operator
            containers:
            - name: mongodb-enterprise-operator
              image: <operatorVersionUrl>
              imagePullPolicy: <policyChoice>
              env:
              - name: OPERATOR_ENV
                value: prod
      
      WATCH_NAMESPACE

      Namespace that the Operator watches for MongoDB Kubernetes resource changes. If this namespace differs from the default, ensure that the Operator’s ServiceAccount can access that different namespace.

      * means all namespaces and requires that the ClusterRole be assigned to the mongodb-enterprise-operator ServiceAccount, which is the ServiceAccount used to run the Kubernetes Operator.

      Default value is: <metadata.namespace>.

      One Namespace or All Namespaces

      WATCH_NAMESPACE can watch either a single namespace or all namespaces. If you need to watch more than one namespace, set its value to * (all).

      You can set the following pair of values:

      spec.template.spec.containers.name.env.name: WATCH_NAMESPACE
      spec.template.spec.containers.name.env.value: "<testNamespace>"
      

      Example

      spec:
        template:
          spec:
            serviceAccountName: mongodb-enterprise-operator
            containers:
            - name: mongodb-enterprise-operator
              image: <operatorVersionUrl>
              imagePullPolicy: <policyChoice>
              env:
              - name: WATCH_NAMESPACE
                value: "<testNamespace>"
      
      MANAGED_SECURITY_CONTEXT

      If you use OpenShift as your Kubernetes orchestrator, set this to 'true' to allow OpenShift to manage the Security Context for the Kubernetes Operator.

      Accepted values are: 'true', 'false'.

      Default value is: 'false'.

      You can set the following pair of values:

      spec.template.spec.containers.name.env.name: MANAGED_SECURITY_CONTEXT
      spec.template.spec.containers.name.env.value: 'true'
      

      Example

      spec:
        template:
          spec:
            serviceAccountName: mongodb-enterprise-operator
            containers:
            - name: mongodb-enterprise-operator
              image: <operatorVersionUrl>
              imagePullPolicy: <policyChoice>
              env:
              - name: MANAGED_SECURITY_CONTEXT
                value: 'true'
      
      OPS_MANAGER_IMAGE_REPOSITORY

      URL of the repository from which the image for an Ops Manager resource is downloaded.

      Default value is: quay.io/mongodb/mongodb-enterprise-ops-manager

      You can set the following pair of values:

      spec.template.spec.containers.name.env.name: OPS_MANAGER_IMAGE_REPOSITORY
      spec.template.spec.containers.name.env.value: quay.io/mongodb/mongodb-enterprise-ops-manager
      

      Example

      spec:
        template:
          spec:
            serviceAccountName: mongodb-enterprise-operator
            containers:
            - name: mongodb-enterprise-operator
              image: <operatorVersionUrl>
              imagePullPolicy: <policyChoice>
              env:
              - name: OPS_MANAGER_IMAGE_REPOSITORY
                value: quay.io/mongodb/mongodb-enterprise-ops-manager
              - name: OPS_MANAGER_IMAGE_PULL_POLICY
                value: Always
      
      OPS_MANAGER_IMAGE_PULL_POLICY

      Pull policy for the image deployed to an Ops Manager resource.

      Accepted values are: Always, IfNotPresent, Never

      Default value is: Always

      You can set the following pair of values:

      spec.template.spec.containers.name.env.name: OPS_MANAGER_IMAGE_PULL_POLICY
      spec.template.spec.containers.name.env.value: <policy>
      

      Example

      spec:
        template:
          spec:
            serviceAccountName: mongodb-enterprise-operator
            containers:
            - name: mongodb-enterprise-operator
              image: <operatorVersionUrl>
              imagePullPolicy: <policyChoice>
              env:
              - name: OPS_MANAGER_IMAGE_REPOSITORY
                value: quay.io/mongodb/mongodb-enterprise-ops-manager
              - name: OPS_MANAGER_IMAGE_PULL_POLICY
                value: Always
      

      Note

      Values shown enclosed in single or double quotes require those quotes; include them when setting these values.

  4. Upgrade the Kubernetes Operator using the following kubectl command:

    kubectl apply -f mongodb-enterprise.yaml
    

To troubleshoot your Kubernetes Operator, see Review Logs from the Kubernetes Operator.

Important

If you need to remove the Kubernetes Operator or the namespace, you first must remove MongoDB resources.

  1. Upgrade to the latest version of the Kubernetes Operator using the following helm and kubectl commands:

    helm template helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    

    You can customize your Chart before installing it by using the --set option. For this Chart, you may need to add one or more of the following options:

    namespace

    To use a different namespace, you need to specify that namespace.

    Default value is: mongodb.

    Example

    helm template \
      --set namespace=<testNamespace> \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    operator.env

    Label for the Operator’s deployment environment. The env value affects default timeouts and the format and level of logging.

    If operator.env is   Log Level is set to   Log Format is set to
    dev                  debug                 text
    prod                 info                  json

    Accepted values are: dev, prod.

    Default value is: prod.

    Example

    helm template \
      --set operator.env=dev \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    operator.watchNamespace

    Namespace that the Operator watches for MongoDB Kubernetes resource changes. If this namespace differs from the default, ensure that the Operator’s ServiceAccount can access that different namespace.

    * means all namespaces and requires that the ClusterRole be assigned to the mongodb-enterprise-operator ServiceAccount, which is the ServiceAccount used to run the Kubernetes Operator.

    Default value is: <metadata.namespace>.

    Example

    helm template \
      --set operator.watchNamespace=<testNamespace> \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    managedSecurityContext

    If you use OpenShift as your Kubernetes orchestrator, set this to true to allow OpenShift to manage the Security Context for the Kubernetes Operator.

    Accepted values are: true, false.

    Default value is: false.

    Example

    helm template \
      --set managedSecurityContext=true \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    operator.podWaitSeconds

    Time in seconds that the Operator waits for StatefulSets to start when MongoDB Kubernetes resources are being created or updated before retrying.

    Default values depend upon operator.env:

    If operator.env is   operator.podWaitSeconds is set to
    dev                  3
    prod                 5

    Example

    helm template \
      --set operator.env=dev \
      --set operator.podWaitSeconds=10 \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    operator.podSetWaitRetries

    Maximum number of retries that the Operator attempts when waiting for StatefulSets to start after MongoDB Kubernetes resources are created or updated.

    Default values depend upon operator.env:

    If operator.env is   operator.podSetWaitRetries is set to
    dev                  60
    prod                 180

    Example

    helm template \
      --set operator.env=dev \
      --set operator.podWaitSeconds=10 \
      --set operator.podSetWaitRetries=20 \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
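    Taken together, operator.podWaitSeconds and operator.podSetWaitRetries bound how long the Operator waits for a StatefulSet overall. Assuming the Operator simply waits podWaitSeconds between each of podSetWaitRetries attempts (an inference from the descriptions above, not documented behavior), the defaults work out to:

```shell
# Rough upper bound on total wait time, assuming the Operator sleeps
# podWaitSeconds between each of podSetWaitRetries attempts.
for env in dev prod; do
  case "$env" in
    dev)  seconds=3; retries=60  ;;
    prod) seconds=5; retries=180 ;;
  esac
  total=$((seconds * retries))
  echo "$env: up to ~${total}s (~$((total / 60)) min)"
done
```

    That is roughly 3 minutes under the dev defaults and 15 minutes under the prod defaults.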
    


To upgrade the Kubernetes Operator on a host not connected to the Internet, you have two options for downloading the Kubernetes Operator files. After you download them:

  1. Upgrade to the latest version of the Kubernetes Operator with modified pull policy values using the following helm and kubectl commands:

    helm template --set registry.pullPolicy=IfNotPresent \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    

    You can further customize your Chart before installing it by using the --set option. For this Chart, you may need to add one or more of the following options:

    namespace

    To use a different namespace, you need to specify that namespace.

    Default value is: mongodb.

    Example

    helm template \
      --set registry.pullPolicy=IfNotPresent \
      --set namespace=<testNamespace> \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    operator.env

    Label for the Operator’s deployment environment. The env value affects default timeouts and the format and level of logging.

    If operator.env is   Log Level is set to   Log Format is set to
    dev                  debug                 text
    prod                 info                  json

    Accepted values are: dev, prod.

    Default value is: prod.

    Example

    helm template \
      --set registry.pullPolicy=IfNotPresent \
      --set operator.env=dev \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    operator.watchNamespace

    Namespace that the Operator watches for MongoDB Kubernetes resource changes. If this namespace differs from the default, ensure that the Operator’s ServiceAccount can access that different namespace.

    * means all namespaces and requires that the ClusterRole be assigned to the mongodb-enterprise-operator ServiceAccount, which is the ServiceAccount used to run the Kubernetes Operator.

    Default value is: <metadata.namespace>.

    Example

    helm template \
      --set registry.pullPolicy=IfNotPresent \
      --set operator.watchNamespace=<testNamespace> \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    managedSecurityContext

    If you use OpenShift as your Kubernetes orchestrator, set this to true to allow OpenShift to manage the Security Context for the Kubernetes Operator.

    Accepted values are: true, false.

    Default value is: false.

    Example

    helm template \
      --set registry.pullPolicy=IfNotPresent \
      --set managedSecurityContext=true \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    operator.podWaitSeconds

    Time in seconds that the Operator waits for StatefulSets to start when MongoDB Kubernetes resources are being created or updated before retrying.

    Default values depend upon operator.env:

    If operator.env is   operator.podWaitSeconds is set to
    dev                  3
    prod                 5

    Example

    helm template \
      --set registry.pullPolicy=IfNotPresent \
      --set operator.env=dev \
      --set operator.podWaitSeconds=10 \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    
    operator.podSetWaitRetries

    Maximum number of retries that the Operator attempts when waiting for StatefulSets to start after MongoDB Kubernetes resources are created or updated.

    Default values depend upon operator.env:

    If operator.env is   operator.podSetWaitRetries is set to
    dev                  60
    prod                 180

    Example

    helm template \
      --set registry.pullPolicy=IfNotPresent \
      --set operator.env=dev \
      --set operator.podWaitSeconds=10 \
      --set operator.podSetWaitRetries=20 \
      helm_chart > operator.yaml
    kubectl apply -f operator.yaml
    



Recreate MongoDB Resources and Delete the Version 0.9 CRDs

  1. After you upgrade the Kubernetes Operator, verify you have four CRDs by running the following command:

    kubectl get crds
    

    The following output contains the new mongodb.mongodb.com CRD and the version 0.9 CRDs:

    NAME                                 CREATED AT
    mongodb.mongodb.com                  2019-03-27T19:30:09Z
    mongodbreplicasets.mongodb.com       2018-12-07T18:25:42Z
    mongodbshardedclusters.mongodb.com   2018-12-07T18:25:42Z
    mongodbstandalones.mongodb.com       2018-12-07T18:25:42Z
    
  2. Remove the old resources from Kubernetes.

    Important

    Removing MongoDB resources will remove the database server pods and drop any client connections to the database. Connections are reestablished when the new MongoDB resources are created in Kubernetes.

    Run each of the following commands to remove all MongoDB resources:

    kubectl delete mst --all
    
    kubectl delete mrs --all
    
    kubectl delete msc --all
    

    Note

    MongoDB resources that have persistent: true set in their .yaml configuration file do not lose data, because the data is stored in persistent volumes. The previous commands delete only the pods running MongoDB, not the persistent volumes that contain the data. Persistent volume claims that reference those persistent volumes remain in place and are reused by the new MongoDB resources.
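    To confirm that the deletions took effect before recreating anything, you can count what remains of each legacy type. A hedged sketch, assuming the v0.9 short names are still installed at this point:

```shell
# Count legacy v0.9 resources of each type still present anywhere in
# the cluster; all three counts should be 0 after the deletions.
for kind in mst mrs msc; do
  count=$(kubectl get "$kind" --all-namespaces --no-headers 2>/dev/null | wc -l)
  echo "$kind remaining: $count"
done
```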

  3. Create the MongoDB resources again.

    Use the .yaml resource configuration file to recreate each resource:

    kubectl apply -f <resource-conf>.yaml
    

    Note

    If the old resources had persistent: true set and their metadata.name values haven’t changed, the new MongoDB pods reuse the data from the old pods.

    Run the following command to check the status of each resource and verify that the phase reaches the Running status:

    kubectl get mdb <resource-name> -n <namespace> -o yaml -w
    

    For an example of this command’s output, see Get Status of a Deployed Resource.
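    Instead of watching the output by hand, you can poll until the phase reaches Running. A hedged sketch: the resource and namespace names are placeholders, and the loop gives up if the kubectl query fails:

```shell
# Poll a MongoDB resource's status.phase until it reports Running.
# Gives up after 30 attempts, or immediately if kubectl is missing
# or cannot reach the cluster.
RESOURCE=my-standalone
NAMESPACE=mongodb
is_running() { [ "$1" = "Running" ]; }

for attempt in $(seq 1 30); do
  phase=$(kubectl get mdb "$RESOURCE" -n "$NAMESPACE" \
            -o jsonpath='{.status.phase}' 2>/dev/null) || break
  if is_running "$phase"; then
    echo "resource is Running"
    break
  fi
  sleep 10
done
```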

  4. Delete the old CRDs.

    Once all of the resources are up and running, delete the v0.9 CRDs, because the Kubernetes Operator no longer watches them:

    kubectl delete crd mongodbreplicasets.mongodb.com
    
    kubectl delete crd mongodbshardedclusters.mongodb.com
    
    kubectl delete crd mongodbstandalones.mongodb.com
    

    Run the following command to verify the old CRDs were removed:

    kubectl get crds
    

    The output of the command above should look similar to the following:

    NAME                  CREATED AT
    mongodb.mongodb.com   2019-03-27T19:30:09Z
    

Once the version 0.9 CustomResourceDefinitions are deleted, the MongoDB Enterprise Kubernetes Operator upgrade is complete.

Troubleshooting

To troubleshoot your Kubernetes Operator, see Review Logs from the Kubernetes Operator.

Important

If you need to remove the Kubernetes Operator or the namespace, you first must remove MongoDB resources.