
Release Notes for MongoDB Enterprise Kubernetes Operator

MongoDB Enterprise Kubernetes Operator 1.3.1

Released 2019-11-08

Important

This release introduces significant changes that may not be compatible with previous deployments or resource configurations. Read Migrate to One Resource per Project (Required for Version 1.3.0) before upgrading the Kubernetes Operator.

MongoDB Resource Changes

  • Requires one MongoDB resource per Ops Manager project. If you have more than one MongoDB resource in a project, all resources will change to a Pending status and the Kubernetes Operator won’t perform any changes on them. The existing MongoDB databases will still be accessible. You must migrate to one resource per project.
  • Supports SCRAM-SHA authentication mode. See the MongoDB Enterprise Kubernetes Operator GitHub repository for examples, or the sketch after this list.
  • Requires that the project (ConfigMap) and credentials (secret) referenced from a MongoDB resource be in the same namespace.
  • Adds OpenShift installation files (YAML file and Helm chart configuration).
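
A minimal sketch of a MongoDB resource that uses SCRAM-SHA and keeps its project ConfigMap and credentials secret in the same namespace. The field names follow the sample files in the GitHub repository for this release and may differ in other versions; names such as my-replica-set, my-project, and my-credentials are placeholders.

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: my-replica-set
      namespace: mongodb            # the project ConfigMap and credentials secret must be here too
    spec:
      type: ReplicaSet
      members: 3
      version: 4.2.1
      project: my-project           # ConfigMap describing the Ops Manager project
      credentials: my-credentials   # secret holding the Ops Manager API credentials
      security:
        authentication:
          enabled: true
          modes: ["SCRAM"]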

Ops Manager Resource Changes (Alpha Release)

MongoDB Enterprise Kubernetes Operator 1.3.0

Released 2019-10-25

Important

This release introduces significant changes that may not be compatible with previous deployments or resource configurations. Read Migrate to One Resource per Project (Required for Version 1.3.0) before installing or upgrading the Kubernetes Operator.

Specification Schema Changes

Ops Manager Resource Changes (Alpha Release)

This release introduces significant changes to the Ops Manager resource's architecture. The Ops Manager application database is now managed by the Kubernetes Operator, not by Ops Manager.
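
As a hedged illustration only (this resource is alpha and its schema may change), an Ops Manager resource now carries an applicationDatabase section that the Kubernetes Operator manages directly; the names below are placeholders.

    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: ops-manager
      namespace: mongodb
    spec:
      replicas: 1
      version: 4.2.0
      adminCredentials: ops-manager-admin-secret   # secret with the first Ops Manager admin user
      applicationDatabase:
        members: 3                                  # application database replica set managed by the Operator
        version: 4.2.0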

Bug Fixes

  • Stops unnecessary recreation of NodePorts.
  • Fixes logging so it’s always in JSON format.
  • Sets USER in the Kubernetes Operator Docker image.

MongoDB Enterprise Kubernetes Operator 1.2.4

Released 2019-10-02

  • Increases stability of Sharded Cluster deployments.
  • Improves internal testing infrastructure.

MongoDB Enterprise Kubernetes Operator 1.2.3

Released 2019-09-13

  • Update: The MongoDB Enterprise Kubernetes Operator will remove support for multiple clusters per project in a future release. If a project contains more than one cluster, a warning is added to the status of the MongoDB resources. Additionally, any new cluster added to a non-empty project enters a Failed state and isn't processed.
  • Fix: Improved the overall stability of the Operator. The Operator is now more conservative about resource updates, both in Kubernetes and in Cloud Manager or Ops Manager.

MongoDB Enterprise Kubernetes Operator 1.2.2

Released 2019-08-30

  • Security Fix: Clusters configured by Kubernetes Operator versions 1.0 through 1.2.1 used an insufficiently strong keyfile for internal cluster authentication between mongod processes. This only affects clusters which are using X.509 for user authentication, but are not using X.509 for internal cluster authentication. Users are advised to upgrade to version 1.2.2, which will replace all managed keyfiles.
  • Security Fix: Clusters configured by Kubernetes Operator versions 1.0 through 1.2.1 used an insufficiently strong password to authenticate the MongoDB Agent. This only affects clusters which have been manually configured to enable SCRAM-SHA-1, which is not a supported configuration. Users are advised to upgrade to version 1.2.2, which will reset these passwords.

MongoDB Enterprise Kubernetes Operator 1.2.1

Released 2019-08-23

  • Fix: The Kubernetes Operator no longer recreates CSRs when X.509 authentication is enabled and the approved CSRs have been deleted.
  • Fix: If the OPERATOR_ENV environment variable is set to a value the Kubernetes Operator doesn't recognize, the pod no longer enters a CrashLoopBackOff; the default value of prod is used instead (see the sketch after this list).
  • The Kubernetes Operator now supports more than 100 agents in a given project.
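
For illustration, OPERATOR_ENV is an environment variable on the Operator's own Deployment; the container name below is a placeholder, and the exact manifest depends on how you installed the Operator.

    # Excerpt from the Operator Deployment: an unrecognized OPERATOR_ENV value
    # no longer crashes the pod and now falls back to prod.
    spec:
      template:
        spec:
          containers:
            - name: mongodb-enterprise-operator
              env:
                - name: OPERATOR_ENV
                  value: prod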

MongoDB Enterprise Kubernetes Operator 1.2

Released 2019-08-13

GA Release

  • Adds a readiness probe to the MongoDB Pods to improve the reliability of rolling upgrades.
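
The Operator configures this probe on the MongoDB Pods automatically; the sketch below is only a generic Kubernetes readiness probe to illustrate the mechanism, and the command path and timings are hypothetical.

    # Generic readiness probe on a container spec (illustrative values only).
    readinessProbe:
      exec:
        command: ["/opt/scripts/readinessprobe"]   # hypothetical script path
      initialDelaySeconds: 5
      periodSeconds: 30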

Alpha Release

This feature is an alpha release. It is not ready for production use.

MongoDB Enterprise Kubernetes Operator 1.1

Released 2019-07-19

  • Fix: Corrected the sample YAML files, in particular the attribute related to featureCompatibilityVersion.
  • Fix: TLS can be disabled in a deployment.
  • Improvement: Added a script in the support directory that gathers information about your MongoDB resources in Kubernetes.
  • Improvement: In a TLS environment, the Kubernetes Operator can use a custom Certificate Authority. All the certificates must be passed as Kubernetes Secret objects.
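
A minimal sketch of passing a custom Certificate Authority to the Operator as a Kubernetes Secret; the Secret name and key are placeholders, so check the TLS documentation for the names your version expects.

    apiVersion: v1
    kind: Secret
    metadata:
      name: custom-ca               # placeholder name
      namespace: mongodb
    type: Opaque
    data:
      ca.crt: <base64-encoded CA certificate>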

MongoDB Enterprise Kubernetes Operator 1.0

Released 2019-06-18

  • Supports Kubernetes v1.11 or later.
  • Provisions any kind of MongoDB deployment in your organization's Kubernetes cluster.
  • Configures TLS on the MongoDB deployments and encrypts all traffic. Hosts and clients can verify each other's identities.
  • Manages MongoDB users.
  • Supports X.509 authentication to your MongoDB databases.
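
For example, enabling TLS on a deployment is a matter of a few lines in the resource specification; this excerpt is a sketch based on the 1.x sample files, and field names may differ between versions.

    # Excerpt of a MongoDB resource spec with TLS enabled.
    spec:
      security:
        tls:
          enabled: true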

See also

To learn how to install and configure the Operator, see Install and Configure the Kubernetes Operator.

Questions about the Kubernetes Operator GA release

If you have any questions regarding this release, use the #enterprise-kubernetes Slack channel.

MongoDB Enterprise Kubernetes Operator 0.12

Released 2019-06-07

  • Rolling upgrades of MongoDB resources ensure that rs.stepDown() is called for the primary member. Requires MongoDB 4.0.8 or later in the 4.0 series, or 4.1.10 or later in the 4.1 series.
  • During a MongoDB major version upgrade, the featureCompatibilityVersion field can be set.
  • Fixed a bug where replica sets with more than seven members could not be created.
  • X.509 authentication can be enabled at the project level. Requires Cloud Manager, Ops Manager 4.0.11 or later in the 4.0 series, or Ops Manager 4.1.7 or later in the 4.1 series.
  • Internal cluster authentication based on X.509 can be enabled at the deployment level.
  • MongoDB users with X.509 authentication can be created using the new MongoDBUser custom resource.
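
A minimal sketch of the new MongoDBUser resource for an X.509 user; the distinguished name, resource reference, and role are placeholders, and field names follow the sample files for this release.

    apiVersion: mongodb.com/v1
    kind: MongoDBUser
    metadata:
      name: my-x509-user
      namespace: mongodb
    spec:
      username: "CN=my-user,OU=engineering,O=example"   # subject of the client certificate
      db: "$external"                                   # X.509 users authenticate against $external
      mongodbResourceRef:
        name: my-replica-set                            # MongoDB resource this user belongs to
      roles:
        - db: admin
          name: readWriteAnyDatabase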

MongoDB Enterprise Kubernetes Operator 0.11

Released 2019-04-29

  • NodePort service creation can be disabled.
  • TLS can be enabled for internal authentication between the members of replica sets and sharded clusters. The TLS certificates are created automatically by the Kubernetes Operator. Refer to the sample .yaml files in the GitHub repository for examples.
  • Wildcard (asterisk) permissions in roles.yaml have been replaced with an explicit list of verbs.
  • Printing mdb objects with kubectl now shows more information about the MongoDB object: type, state, and MongoDB server version.

MongoDB Enterprise Kubernetes Operator 0.10

Released 2019-04-02

  • The Kubernetes Operator and database images are now based on ubuntu:16.04.

  • The Kubernetes Operator now uses a single CustomResourceDefinition named MongoDB instead of the MongoDbReplicaSet, MongoDbShardedCluster, and MongoDbStandalone CRDs. See the sketch after this list.

    Important

    Follow the upgrade procedure to transfer existing MongoDbReplicaSet, MongoDbShardedCluster, and MongoDbStandalone resources to the new format.

  • For a list of the packages installed and any security vulnerabilities detected in our build process, see the linked reports.
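
As a sketch of the consolidated CRD, the deployment type is now selected with a type field on a single MongoDB resource; the names below are placeholders, and the full migration steps are in the upgrade procedure.

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: my-standalone
      namespace: mongodb
    spec:
      type: Standalone              # or ReplicaSet, or ShardedCluster
      version: 4.0.0
      project: my-project
      credentials: my-credentials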

MongoDB Enterprise Kubernetes Operator 0.9

Released 2019-03-19

  • The Operator and database images are now based on debian:stretch-slim, the current slim Docker image for Debian 9.

MongoDB Enterprise Kubernetes Operator 0.8

Released 2019-02-26

  • Performs Ops Manager cleanup on deletion of a MongoDB resource without the use of finalizers.
  • Bug fix: Race conditions when communicating with Ops Manager.
  • Bug fix: ImagePullSecrets being incorrectly initialized in OpenShift.
  • Bug fix: Unintended fetching of closed projects.
  • Bug fix: Creation of duplicate organizations.
  • Bug fix: Reconciliation could fail for the MongoDB resource if some other resources in Ops Manager were in error state.

MongoDB Enterprise Kubernetes Operator 0.7

Released 2019-02-01

  • Improved detailed status field for MongoDB resources.
  • The Kubernetes Operator watches for changes to configuration parameters in a project ConfigMap and the credentials secret, and then performs a rolling upgrade of the relevant Kubernetes resources.
  • Added JSON structured logging for Automation Agent pods.
  • Supports DNS SRV records for MongoDB access.
  • Bug fix: Avoids unnecessary reconciliation.
  • Bug fix: Improved Ops Manager/Cloud Manager state management for deleted resources.

MongoDB Enterprise Kubernetes Operator 0.6

Released 2018-12-17

  • Refactored code to use the controller-runtime library to fix issues where the Operator could leave resources in an inconsistent state. This also introduced a proper reconciliation process.
  • Added new status field for all MongoDB Kubernetes resources.
  • The Operator can be configured to watch a single namespace or all namespaces in a cluster (the latter requires a cluster role).
  • Improved database logging by adding a new configuration property, logLevel, which defaults to INFO (see the sketch after this list). Automation Agent and MongoDB logs are merged into a single log stream.
  • Added a new Operator timeout configuration that defines how long to wait for database pods to start when updating MongoDB Kubernetes resources.
  • Fix: Corrected failure detection for mongos.
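
A minimal sketch of the new logLevel property on a MongoDB Kubernetes resource; INFO is the default, and the excerpt assumes the field name used in the sample files.

    # Excerpt: raise the database log verbosity from the default INFO.
    spec:
      logLevel: DEBUG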

MongoDB Enterprise Kubernetes Operator 0.5

Released 2018-11-14

  • Image for database no longer includes the binary for the Automation Agent. The container downloads the Automation Agent binary from Ops Manager when it starts.
  • Fix: Communication with Ops Manager failed if a project with the same name existed in a different organization.

MongoDB Enterprise Kubernetes Operator 0.4

Released 2018-10-04

  • If backup was enabled in Ops Manager for a replica set or sharded cluster that the Kubernetes Operator created, the Kubernetes Operator now disables the backup before removing the resource.

  • Improved persistence support:

    • The data, journal, and log directories are mounted to three mount points in either one or three volumes, depending on the podSpec.persistence setting.

      Setting                         Mounts directories to
      podSpec.persistence.single      One volume
      podSpec.persistence.multiple    Three volumes

      Prior to this release, only the data directory was mounted to persistent storage.

    • A new parameter, labelSelector, allows you to specify the selector for volumes that the Kubernetes Operator should consider mounting.

    • If a StorageClass is not specified in the persistence configuration, the cluster's default StorageClass is used. With most public cloud providers, this results in dynamic volume provisioning.
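
A hedged sketch of the persistence settings described above: multiple gives data, journal, and logs their own volumes, while single mounts all three directories on one volume. Sizes, labels, and the labelSelector placement are illustrative only.

    spec:
      podSpec:
        persistence:
          multiple:
            data:
              storage: 10Gi
            journal:
              storage: 1Gi
            logs:
              storage: 2Gi
              labelSelector:          # only volumes matching this selector are considered
                matchLabels:
                  app: my-app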

MongoDB Enterprise Kubernetes Operator 0.3

Released 2018-08-07

  • The Operator no longer creates the CustomResourceDefinition objects; you must create them manually. Download and apply the new YAML file (crd.yaml) to create and configure these objects.

  • ClusterRoles are no longer required. How the Operator watches resources has changed: until the last release, the Operator watched for any resource in any namespace; with 0.3, it watches for resources in the same namespace in which it was created. To support multiple namespaces, multiple Operators can be installed, which allows isolation of MongoDB deployments.

  • Permission changes were made to how PersistentVolumes are mounted.

  • Added a configuration option that tells the Operator not to create SecurityContexts for pods. This solves an issue with OpenShift, which does not allow this setting when SecurityContextConstraints are used.

    If you are using Helm, set managedSecurityContext to true. This tells the Operator not to create a SecurityContext for pods, satisfying the OpenShift requirement.

  • The combination of projectName and orgId replaces projectId for configuring the connection to Ops Manager. The project is created if it doesn't exist.
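
A minimal sketch of the project ConfigMap using the new keys; the values are placeholders, and baseUrl points at your Ops Manager or Cloud Manager instance.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-project
      namespace: mongodb
    data:
      projectName: my-project                      # created in Ops Manager if it doesn't exist
      orgId: <your-organization-id>
      baseUrl: https://ops-manager.example.com:8080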

MongoDB Enterprise Kubernetes Operator 0.2

Released 2018-08-03

  • Calculates the WiredTiger memory cache size.
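
For context, MongoDB's default WiredTiger cache size is the larger of 256 MB and 50% of (memory - 1 GB); assuming the Operator applies the same rule to the container's memory limit, a Pod limited to 4 GB would get roughly:

    cacheSizeGB = max(0.5 x (4 GB - 1 GB), 256 MB) = 1.5 GB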

MongoDB Enterprise Kubernetes Operator 0.1

Released 2018-06-27

Initial Release

  • Can deploy standalone instances, replica sets, and sharded clusters using Kubernetes configuration files.