Release Notes for MongoDB Enterprise Kubernetes Operator

MongoDB Enterprise Kubernetes Operator 1.4.4

MongoDB Resource Changes

Supports changes in the Cloud Manager API.

Ops Manager Resource Changes (Beta Release)

  • Properly terminates resources with a termination hook.
  • Implements stricter validations.

Bug Fixes

  • MongoDB resources:
    • Fixes an issue when working with Ops Manager with custom HTTPS certificates.

MongoDB Enterprise Kubernetes Operator 1.4.3

Released 2020-02-24

Kubernetes Operator Changes

Adds a webhook to validate a Kubernetes Operator configuration.

MongoDB Resource Changes

  • Adds support for sidecars for MongoDB Kubernetes resource pods using the spec.podSpec.podTemplate setting.
  • Allows users to change the PodSecurityContext to allow privileged sidecar containers.
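A minimal sketch of a sidecar declared through the new spec.podSpec.podTemplate setting. The sidecar name, image, and command are hypothetical, and required fields such as the project and credentials references are omitted for brevity:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  type: ReplicaSet
  members: 3
  version: 4.2.2-ent
  podSpec:
    podTemplate:
      spec:
        containers:
          - name: log-sidecar          # hypothetical sidecar container
            image: busybox
            command: ["sh", "-c", "tail -F /data/logs/mongodb.log"]
```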

Ops Manager Resource Changes (Beta Release)

  • Adds the spec.podSpec configuration settings for Ops Manager, the Backup Daemon, and the Application Database. See Ops Manager Resource Specification.
  • Ops Manager image for version 4.2.8 is available.

Bug Fixes

  • MongoDB resources:
    • Fixes potential race conditions when deleting MongoDB Kubernetes resources.
  • Ops Manager resources:
    • Supports the spec.clusterDomain setting for Ops Manager and Application Database resources.
    • No longer starts monitoring and backup processes for the Application Database.

See the sample YAML files for new feature usage examples.

MongoDB Enterprise Kubernetes Operator 1.4.2

Released 2020-01-24

MongoDB Resource Changes

  • Runs MongoDB database Kubernetes pods under a dedicated Kubernetes service account: mongodb-enterprise-database-pods.
  • Adds the spec.podSpec.podTemplate setting, which allows you to apply templates to Kubernetes pods that the Kubernetes Operator generates for each database StatefulSet.
  • Renames the spec.clusterName setting to spec.clusterDomain.
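For example, a resource that previously set spec.clusterName would now read (the value shown is the common Kubernetes default, used here as a hypothetical example):

```yaml
spec:
  clusterDomain: cluster.local   # formerly spec.clusterName
```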

Ops Manager Resource Changes (Beta Release)

  • Adds offline mode support for the Application Database. Bundles MongoDB Enterprise version 4.2.2 with the Application Database image. Internet access is not required to install the application database if spec.applicationDatabase.version is set to 4.2.2-ent or omitted.
  • Renames the spec.clusterName setting to spec.clusterDomain.
  • Ops Manager images for versions 4.2.6 and 4.2.7 are available.

Bug Fixes

  • MongoDB resources:
    • Fixes the order of sharded cluster component creation.
    • Allows TLS to be enabled on Amazon EKS.
  • Ops Manager resources:
    • Enables the Kubernetes Operator to use the spec.clusterDomain setting.

See the sample YAML files for new feature usage examples.

MongoDB Enterprise Kubernetes Operator 1.4.1

Released 2019-12-13

MongoDB Enterprise Kubernetes Operator 1.4.0

Released 2019-12-09

MongoDB Resource Changes

  • Adds split horizon DNS support for MongoDB replica sets, which allows clients to connect to a replica set from outside of the Kubernetes cluster.
  • Supports requests for Kubernetes Operator-generated certificates for additional certificate domains, which makes them valid for the specified subdomains.
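As a sketch, assuming the spec.connectivity.replicaSetHorizons and spec.security.tls.additionalCertificateDomains field names (the hostnames and domain are hypothetical), the two features might be configured as:

```yaml
spec:
  connectivity:
    replicaSetHorizons:               # one entry per replica set member
      - external: web1.example.com:27017
      - external: web2.example.com:27017
      - external: web3.example.com:27017
  security:
    tls:
      additionalCertificateDomains:
        - example.com                 # generated certificates also cover this domain
```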

Ops Manager Resource Changes (Beta Release)

  • Promotes the MongoDBOpsManager resource to Beta. Ops Manager version 4.2.4 is available.
  • Supports Backup and restore in Kubernetes Operator-deployed Ops Manager instances. This is a semi-automated process that deploys everything you need to enable backups in Ops Manager. You can enable Backup by setting the spec.backup.enabled setting in the Ops Manager custom resource. You can configure the Head Database, Oplog Store, and S3 Snapshot Store by using the MongoDBOpsManager resource specification.
  • Supports access to Ops Manager from outside the Kubernetes cluster through the spec.externalConnectivity setting.
  • Enables SCRAM-SHA-1 authentication on Ops Manager’s Application Database by default.
  • Adds support for OpenShift (Red Hat UBI Images).

For more information on how to enable new features, see the sample YAML files in the samples directory.

Bug Fixes

  • Improves overall stability of X.509 user management.

MongoDB Enterprise Kubernetes Operator 1.3.1

Released 2019-11-08


This release introduces significant changes that may not be compatible with previous deployments or resource configurations. Read Migrate to One Resource per Project (Required for Version 1.3.0) before upgrading the Kubernetes Operator.

MongoDB Resource Changes

  • Requires one MongoDB resource per Ops Manager project. If you have more than one MongoDB resource in a project, all resources will change to a Pending status and the Kubernetes Operator won’t perform any changes on them. The existing MongoDB databases will still be accessible. You must migrate to one resource per project.
  • Supports SCRAM-SHA authentication mode. See the MongoDB Enterprise Kubernetes Operator GitHub repository for examples.
  • Requires that the project (ConfigMap) and credentials (secret) referenced from a MongoDB resource be in the same namespace.
  • Adds OpenShift installation files (YAML file and Helm chart configuration).

Ops Manager Resource Changes (Alpha Release)

MongoDB Enterprise Kubernetes Operator 1.3.0

Released 2019-10-25


This release introduces significant changes that may not be compatible with previous deployments or resource configurations. Read Migrate to One Resource per Project (Required for Version 1.3.0) before installing or upgrading the Kubernetes Operator.

Specification Schema Changes

Ops Manager Resource Changes (Alpha Release)

This release introduces significant changes to the Ops Manager resource’s architecture. The Ops Manager application database is now managed by the Kubernetes Operator, not by Ops Manager.

Bug Fixes

  • Stops unnecessary recreation of NodePorts.
  • Fixes logging so it’s always in JSON format.
  • Sets USER in the Kubernetes Operator Docker image.

MongoDB Enterprise Kubernetes Operator 1.2.4

Released 2019-10-02

  • Increases stability of Sharded Cluster deployments.
  • Improves internal testing infrastructure.

MongoDB Enterprise Kubernetes Operator 1.2.3

Released 2019-09-13

  • Update: The MongoDB Enterprise Kubernetes Operator will remove support for multiple clusters per project in a future release. If a project contains more than one cluster, a warning will be added to the status of the MongoDB Resources. Additionally, any new cluster being added to a non-empty project will result in a Failed state, and won’t be processed.
  • Fix: The overall stability of the operator has been improved. The operator is now more conservative in resource updates both on Kubernetes and Cloud Manager or Ops Manager.

MongoDB Enterprise Kubernetes Operator 1.2.2

Released 2019-08-30

  • Security Fix: Clusters configured by Kubernetes Operator versions 1.0 through 1.2.1 used an insufficiently strong keyfile for internal cluster authentication between mongod processes. This only affects clusters which are using X.509 for user authentication, but are not using X.509 for internal cluster authentication. Users are advised to upgrade to version 1.2.2, which will replace all managed keyfiles.
  • Security Fix: Clusters configured by Kubernetes Operator versions 1.0 through 1.2.1 used an insufficiently strong password to authenticate the MongoDB Agent. This only affects clusters which have been manually configured to enable SCRAM-SHA-1, which is not a supported configuration. Users are advised to upgrade to version 1.2.2, which will reset these passwords.

MongoDB Enterprise Kubernetes Operator 1.2.1

Released 2019-08-23

  • Fix: The Kubernetes Operator no longer recreates CSRs when X.509 authentication is enabled and the approved CSRs have been deleted.
  • Fix: If the OPERATOR_ENV environment variable is set to a value the Kubernetes Operator does not recognize, the pod no longer enters a CrashLoopBackOff; a default value of prod is used instead.
  • The Kubernetes Operator now supports more than 100 agents in a given project.

MongoDB Enterprise Kubernetes Operator 1.2

Released 2019-08-13

GA Release

  • Adds a readiness probe to the MongoDB pods to improve the reliability of rolling upgrades.

Alpha Release

The MongoDBOpsManager resource is an alpha release. It is not ready for production use.

MongoDB Enterprise Kubernetes Operator 1.1

Released 2019-07-19

  • Fix: Corrects the sample YAML files, in particular the attribute related to featureCompatibilityVersion.
  • Fix: TLS can be disabled in a deployment.
  • Improvement: Adds a script in the support directory that gathers information about your MongoDB resources in Kubernetes.
  • Improvement: In a TLS environment, the Kubernetes Operator can use a custom Certificate Authority. All the certificates must be passed as Kubernetes Secret objects.

MongoDB Enterprise Kubernetes Operator 1.0

Released 2019-06-18

  • Supports Kubernetes v1.11 or later.
  • Provisions any kind of MongoDB deployment in the Kubernetes cluster of your organization.
  • Configures TLS on the MongoDB deployments and encrypts all traffic. Hosts and clients can verify each other’s identities.
  • Manages MongoDB users.
  • Supports X.509 authentication to your MongoDB databases.

See also

To learn how to install and configure the Operator, see Install and Configure the Kubernetes Operator.

Questions about the Kubernetes Operator GA release

If you have any questions regarding this release, use the #enterprise-kubernetes Slack channel.

MongoDB Enterprise Kubernetes Operator 0.12

Released 2019-06-07

  • Rolling upgrades of MongoDB resources ensure that rs.stepDown() is called for the primary member. Requires MongoDB version 4.0.8 or later, or 4.1.10 or later.
  • During a MongoDB major version upgrade, the featureCompatibilityVersion field can be set.
  • Fixed a bug where replica sets with more than seven members could not be created.
  • X.509 authentication can be enabled at the project level. Requires Cloud Manager, Ops Manager version 4.0.11 or later, or Ops Manager version 4.1.7 or later.
  • Internal cluster authentication based on X.509 can be enabled at the deployment level.
  • MongoDB users with X.509 authentication can be created, using the new MongoDBUser custom resource.
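A sketch of the new MongoDBUser custom resource for an X.509 user. The distinguished name, referenced MongoDB resource, and role are hypothetical:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: my-x509-user
spec:
  username: CN=my-user,OU=engineering,O=example   # DN from the client certificate
  db: $external                                   # X.509 users authenticate against $external
  mongodbResourceRef:
    name: my-replica-set
  roles:
    - db: admin
      name: readWriteAnyDatabase
```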

MongoDB Enterprise Kubernetes Operator 0.11

Released 2019-04-29

  • NodePort service creation can be disabled.
  • TLS can be enabled for internal authentication between the members of replica sets and sharded clusters. The Kubernetes Operator creates the TLS certificates automatically. Refer to the sample .yaml files in the GitHub repository for examples.
  • Wildcard (asterisk) permissions have been replaced with an explicit list of verbs in roles.yaml.
  • Printing mdb objects with kubectl now provides more information about the MongoDB object: type, state, and MongoDB server version.

MongoDB Enterprise Kubernetes Operator 0.10

Released 2019-04-02

  • The Kubernetes Operator and database images are now based on ubuntu:16.04.

  • The Kubernetes Operator now uses a single CustomResourceDefinition named MongoDB instead of the MongoDbReplicaSet, MongoDbShardedCluster, and MongoDbStandalone CRDs.


    Follow the upgrade procedure to transfer existing MongoDbReplicaSet, MongoDbShardedCluster, and MongoDbStandalone resources to the new format.

  • For a list of the packages installed and any security vulnerabilities detected in our build process, see:

MongoDB Enterprise Kubernetes Operator 0.9

Released 2019-03-19

  • The Operator and database images are now based on debian:stretch-slim, the current slim Docker image for Debian 9.

MongoDB Enterprise Kubernetes Operator 0.8

Released 2019-02-26

  • Performs Ops Manager clean-up on deletion of a MongoDB resource without the use of finalizers.
  • Bug fix: Race conditions when communicating with Ops Manager.
  • Bug fix: ImagePullSecrets being incorrectly initialized in OpenShift.
  • Bug fix: Unintended fetching of closed projects.
  • Bug fix: Creation of duplicate organizations.
  • Bug fix: Reconciliation could fail for the MongoDB resource if some other resources in Ops Manager were in error state.

MongoDB Enterprise Kubernetes Operator 0.7

Released 2019-02-01

  • Improved detailed status field for MongoDB resources.
  • The Kubernetes Operator watches changes to configuration parameters in a project ConfigMap and the credentials Secret, and then performs a rolling upgrade of the relevant Kubernetes resources.
  • Added JSON structured logging for Automation Agent pods.
  • Supports DNS SRV records for MongoDB access.
  • Bug fix: Avoids unnecessary reconciliation.
  • Bug fix: Improved Ops Manager/Cloud Manager state management for deleted resources.

MongoDB Enterprise Kubernetes Operator 0.6

Released 2018-12-17

  • Refactored code to use the controller-runtime library, fixing issues where the Operator could leave resources in an inconsistent state. This also introduces a proper reconciliation process.
  • Added new status field for all MongoDB Kubernetes resources.
  • Can configure Operator to watch any single namespace or all namespaces in a cluster (requires cluster role).
  • Improved database logging by adding a new configuration property, logLevel, which is set to INFO by default. Automation Agent and MongoDB logs are merged into a single log stream.
  • Added a new Operator timeout configuration that defines how long to wait for database pods to start while updating MongoDB Kubernetes resources.
  • Fix: Fixed failure detection for mongos.

MongoDB Enterprise Kubernetes Operator 0.5

Released 2018-11-14

  • Image for database no longer includes the binary for the Automation Agent. The container downloads the Automation Agent binary from Ops Manager when it starts.
  • Fix: Communication with Ops Manager failed if a project with the same name existed in a different organization.

MongoDB Enterprise Kubernetes Operator 0.4

Released 2018-10-04

  • If backup was enabled in Ops Manager for a replica set or sharded cluster that the Kubernetes Operator created, the Kubernetes Operator now disables backup before removing the resource.

  • Improved persistence support:

    • The data, journal, and log directories are mounted to three mount points in one or three volumes, depending on the podSpec.persistence setting:

      Setting                        Directories are mounted to
      podSpec.persistence.single     One volume
      podSpec.persistence.multiple   Three volumes

      Prior to this release, only the data directory was mounted to persistent storage.

    • A new parameter, labelSelector, allows you to specify the selector for volumes that Kubernetes Operator should consider mounting.

    • If StorageClass is not specified in the persistence configuration, then the cluster’s default StorageClass is used. With most public cloud providers, this results in dynamic volume provisioning.
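A sketch of the multiple-volume layout described above. The sizes and the selector are hypothetical, and the exact schema is assumed:

```yaml
spec:
  podSpec:
    persistence:
      multiple:
        data:
          storage: 10Gi
        journal:
          storage: 1Gi
        logs:
          storage: 500M
          labelSelector:
            matchLabels:
              app: my-db   # hypothetical volume label
```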

MongoDB Enterprise Kubernetes Operator 0.3

Released 2018-08-07

  • The Operator no longer creates the CustomResourceDefinition objects. You must create them manually. Download and apply the new YAML file (crd.yaml) to create and configure these objects.

  • ClusterRoles are no longer required. How the Operator watches resources has changed: before this release, the Operator watched for resources in any namespace. With 0.3, the Operator watches for resources in the namespace in which it was created. To support multiple namespaces, install multiple Operators. This allows isolation of MongoDB deployments.

  • Permissions changes were made to how PersistentVolumes are mounted.

  • Added configuration to the Operator to not create SecurityContexts for pods. This solves an issue with OpenShift, which does not allow this setting when SecurityContextConstraints are used.

    If you are using Helm, set managedSecurityContext to true. This tells the Operator not to create a SecurityContext for pods, satisfying the OpenShift requirement.

  • The combination of projectName and orgId replaces projectId for configuring the connection to Ops Manager. The project is created if it doesn’t exist.
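For example, the data section of a project ConfigMap that previously carried projectId would now carry (values hypothetical):

```yaml
data:
  projectName: myProject      # replaces projectId
  orgId: <your-org-id>
```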

MongoDB Enterprise Kubernetes Operator 0.2

Released 2018-08-03

  • Calculates the WiredTiger memory cache size.

MongoDB Enterprise Kubernetes Operator 0.1

Released 2018-06-27

Initial Release

  • Can deploy standalone instances, replica sets, and sharded clusters using Kubernetes configuration files.