MongoDB Enterprise Kubernetes Operator Production Notes

This page details system configuration recommendations for the MongoDB Enterprise Kubernetes Operator when running in production.

  • All sizing and performance recommendations for common MongoDB deployments through the Kubernetes Operator in this section are subject to change. Do not treat these recommendations as guarantees or limitations of any kind.
  • These recommendations reflect performance testing findings and represent our suggestions for production deployments. We ran the tests on a cluster comprising seven AWS EC2 instances of type t2.2xlarge and a master node of type t2.medium.
  • The recommendations in this section do not take into account individual characteristics of any deployment. Numerous factors might make your deployment’s characteristics differ from the assumptions made to create these recommendations. Contact MongoDB Support for further assistance with sizing.

Ensure Proper Persistence Configuration

The Kubernetes deployments orchestrated by the Kubernetes Operator are stateful. The Kubernetes container uses Persistent Volumes to maintain the cluster state between restarts.

To satisfy the statefulness requirement, the Kubernetes Operator performs the following actions:

  • Creates Persistent Volumes for your MongoDB deployment.
  • Mounts storage devices to one or more directories called mount points.
  • Creates one persistent volume for each MongoDB mount point.
  • Sets the default path in each Kubernetes container to /data.

To meet your MongoDB cluster’s storage needs, make the following changes in your configuration for each replica set deployed with the Kubernetes Operator:

  • Verify that persistent volumes are enabled in spec.persistent. This setting defaults to true.
  • Specify a sufficient amount of storage for the Kubernetes Operator to allocate for each of the volumes. The volumes store the data and the logs.

The following abbreviated example shows recommended persistent storage sizes.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-cluster
spec:
  ...
  persistent: true
  shardPodSpec:
    ...
    persistence:
      multiple:
        data:
          storage: "20Gi"
        logs:
          storage: "4Gi"
          storageClass: standard

For a full example of persistent volumes configuration, see replica-set-persistent-volumes.yaml in the Persistent Volumes Samples directory. This directory also contains sample persistent volumes configurations for sharded clusters and standalone deployments.

Name Your MongoDB Service with its Purpose

Set the spec.service parameter to a value that identifies this deployment’s purpose, as illustrated in the following example.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: "4.4.0-ent"
  service: drilling-pumps-geosensors
  featureCompatibilityVersion: "4.0"

See also

spec.service

Specify CPU and Memory Resource Requirements

In Kubernetes, each Pod includes parameters that allow you to specify CPU resources and memory resources for each container in the Pod.

To indicate resource bounds, Kubernetes uses the requests and limits parameters, where:

  • request indicates a lower bound of a resource.
  • limit indicates an upper bound of a resource.

The following sections illustrate how to set CPU and memory utilization bounds for the Kubernetes Operator Pod and for the MongoDB Pods.

For the Pods hosting Ops Manager, use the default resource limits configurations.
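As a point of reference, an Ops Manager resource that keeps these defaults simply omits any resource overrides, as in the following sketch. The resource name, credentials secret, and versions are placeholder values, not taken from this page:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager                            # placeholder name
spec:
  replicas: 1
  version: "4.4.0"                             # placeholder Ops Manager version
  adminCredentials: ops-manager-admin-secret   # placeholder secret name
  applicationDatabase:
    members: 3
  # No resources overrides here: the Pods hosting Ops Manager
  # keep the default resource limits configurations.
```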

Set CPU and Memory Utilization Bounds for the Kubernetes Operator Pod

When you deploy replica sets with the Kubernetes Operator, CPU usage for the Pod that hosts the Kubernetes Operator is initially high during the reconciliation process. It drops back down once the deployment completes.

For production deployments, to satisfy deploying up to 50 MongoDB replica sets or sharded clusters in parallel with the Kubernetes Operator, set the CPU and memory resources and limits for the Kubernetes Operator Pod as follows:

  • spec.template.spec.containers.resources.requests.cpu to 500m
  • spec.template.spec.containers.resources.limits.cpu to 1100m
  • spec.template.spec.containers.resources.requests.memory to 200Mi
  • spec.template.spec.containers.resources.limits.memory to 1Gi

If you don’t include the unit of measurement for CPUs, Kubernetes interprets the value as the number of cores. If you append m, as in 500m, Kubernetes interprets it as millicores. To learn more, see Meaning of CPU.
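For example, the following fragment expresses equivalent quantities in both notations; the comments note what each notation means:

```yaml
resources:
  requests:
    cpu: 500m        # 500 millicores, equivalent to cpu: "0.5" (half a core)
    memory: 200Mi    # 200 mebibytes; "200M" would instead mean 200 megabytes
```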

The following abbreviated example shows the configuration with recommended CPU and memory bounds for the Kubernetes Operator Pod in your deployment of 50 replica sets or sharded clusters. If you are deploying fewer than 50 MongoDB clusters, you may use lower numbers in the configuration file for the Kubernetes Operator Pod.

Note

Monitoring tools report the size of the node rather than the actual size of the container.

Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator
  namespace: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: mongodb-enterprise-operator
      app.kubernetes.io/instance: mongodb-enterprise-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: mongodb-enterprise-operator
        app.kubernetes.io/instance: mongodb-enterprise-operator
    spec:
      serviceAccountName: mongodb-enterprise-operator
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      containers:
      - name: mongodb-enterprise-operator
        image: quay.io/mongodb/mongodb-enterprise-operator:1.9.2
        imagePullPolicy: Always
        args:
        - "-watch-resource=mongodb"
        - "-watch-resource=opsmanagers"
        - "-watch-resource=mongodbusers"
        command:
        - "/usr/local/bin/mongodb-enterprise-operator"
        resources:
          limits:
            cpu: 1100m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 200Mi

For a full example of CPU and memory utilization resources and limits for the Kubernetes Operator Pod that satisfy parallel deployment of up to 50 MongoDB replica sets, see the mongodb-enterprise.yaml file.

Set CPU and Memory Utilization Bounds for MongoDB Pods

The values for Pods hosting replica sets or sharded clusters map to the requests field for CPU and memory for the created Pod. These values are consistent with considerations stated for MongoDB hosts.

Each MongoDB Pod uses its allocated memory for processing, for the WiredTiger cache, and for storing packages during the deployments.

For production deployments, set the CPU and memory resources and limits for the MongoDB Pod as follows:

  • spec.podSpec.podTemplate.spec.containers.resources.requests.cpu to 0.25
  • spec.podSpec.podTemplate.spec.containers.resources.limits.cpu to 0.25
  • spec.podSpec.podTemplate.spec.containers.resources.requests.memory to 512M
  • spec.podSpec.podTemplate.spec.containers.resources.limits.memory to 512M

If you don’t include the unit of measurement for CPUs, Kubernetes interprets the value as the number of cores. If you append m, as in 500m, Kubernetes interprets it as millicores. To learn more, see Meaning of CPU.

The following abbreviated example shows the configuration with recommended CPU and memory bounds for each Pod hosting a MongoDB replica set member in your deployment.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.0.0-ent
  service: my-service
  ...

  persistent: true
  podSpec:
    podTemplate:
      spec:
        containers:
        - name: mongodb-enterprise-database
          resources:
            limits:
              cpu: "0.25"
              memory: 512M

For a full example of CPU and memory utilization resources and limits for Pods hosting MongoDB replica set members, see the replica-set-podspec.yaml file in the MongoDB Podspec Samples directory.

This directory also contains sample CPU and memory limits configurations for Pods used for sharded cluster and standalone MongoDB deployments.

Use Multiple Availability Zones

To ensure high availability, configure the Kubernetes Operator and StatefulSets to distribute all members of one replica set across different nodes.

The following abbreviated example shows affinity and multiple availability zones configuration.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.2.1-ent
  service: my-service
  ...
    podAntiAffinityTopologyKey: nodeId
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone

    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2

In this example, the Kubernetes Operator schedules the Pods onto nodes that carry the label kubernetes.io/e2e-az-name in the e2e-az1 or e2e-az2 availability zones. Change nodeAffinity to schedule the deployment of Pods to the desired availability zones.

See the full example of multiple availability zones configuration in replica-set-affinity.yaml in the Affinity Samples directory.

This directory also contains sample affinity and multiple zones configurations for sharded clusters and standalone MongoDB deployments.

Co-locate mongos Pods with Your Applications

You can run the lightweight mongos instance on the same node as the applications that use MongoDB. The Kubernetes Operator supports the standard Kubernetes node affinity and anti-affinity features. Using these features, you can force mongos to run on the same node as your application.

The following abbreviated example shows affinity and multiple availability zones configuration.

The podAffinity key determines whether to run an application on the same node, in the same availability zone, or in the same data center as another application.

To specify Pod affinity:

  1. Add a label and value in the spec.podSpec.podTemplate.metadata.labels YAML collection to tag the deployment. See spec.podSpec.podTemplate.metadata, and the Kubernetes PodSpec v1 core API.
  2. Specify which label the mongos uses in the spec.mongosPodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector YAML collection. The matchExpressions collection defines the label that the Kubernetes Operator uses to identify the Pod for hosting the mongos.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.2.1-ent
  service: my-service

  ...
    podAntiAffinityTopologyKey: nodeId
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone

    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2

See the full example of multiple availability zones and node affinity configuration in replica-set-affinity.yaml in the Affinity Samples directory.

This directory also contains sample affinity and multiple zones configurations for sharded clusters and standalone MongoDB deployments.

Use Labels to Differentiate Between Deployments

Use the Pod affinity Kubernetes feature to:

  • Separate different MongoDB resources, such as test, staging, and production environments.
  • Place Pods on some specific nodes to take advantage of features such as SSD support.
mongosPodSpec:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: security
          operator: In
          values:
          - S1
      topologyKey: failure-domain.beta.kubernetes.io/zone

See also

Pod affinity

Verify Permissions

Objects in the Kubernetes Operator configuration use the following default permissions.

Configmaps

Require the following permissions:

  • get, list, watch. The Kubernetes Operator reads the organization and project data from the specified configmap.
  • create, update. The Kubernetes Operator creates and updates configmap objects for configuring the Application Database instances.
  • delete. The Kubernetes Operator needs the delete configmap permission to support its older versions. This permission will be removed when those older versions reach their End of Life date.
Secrets

Require the following permissions:

  • get, list, watch. The Kubernetes Operator reads secret objects to retrieve sensitive data, such as TLS or X.509 access information. For example, it reads the credentials from a secret object to connect to the Ops Manager.
  • create, update. The Kubernetes Operator creates secret objects holding TLS or X.509 access information.
  • delete. The Kubernetes Operator deletes secret objects (containing passwords) related to the Application Database.
Services

Require the following permissions:

  • get, list, watch. The Kubernetes Operator reads and watches MongoDB services. For example, to communicate with the Ops Manager service, the Kubernetes Operator needs get, list and watch permissions to use the Ops Manager service’s URL.
  • create, update. To communicate with services, the Kubernetes Operator creates and updates service objects corresponding to Ops Manager and MongoDB custom resources.
StatefulSets

Require the following permissions:

  • get, list, watch. The Kubernetes Operator reacts to the changes in the StatefulSets it creates for the MongoDB custom resources. It also reads the fields of the StatefulSets it manages.
  • create, update. The Kubernetes Operator creates and updates StatefulSets corresponding to the MongoDB custom resources.
  • delete. The Kubernetes Operator needs permissions to delete the StatefulSets when you delete the MongoDB custom resource.
Pods

Require the following permissions:

  • get, list, watch. The Kubernetes Operator queries the Application Database Pods to get information about their state.
Namespaces

Require the following permissions:

  • list, watch. When you run the Kubernetes Operator in the cluster-wide mode, it needs list and watch permissions to all namespaces for the MongoDB custom resources.
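The default permissions above can be summarized as a Kubernetes Role. The following sketch is illustrative only, assuming a namespace named mongodb; it is not the exact manifest shipped with the Kubernetes Operator:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongodb-enterprise-operator   # illustrative name
  namespace: mongodb
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "create", "update"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
# In cluster-wide mode, the list and watch permissions on namespaces
# must instead be granted through a ClusterRole.
```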

Enable HTTPS

The Kubernetes Operator supports configuring Ops Manager to run over HTTPS.

Enable HTTPS before deploying your Ops Manager resources to avoid a situation where the Kubernetes Operator reports your resources’ status as Failed.

Enable TLS

The Kubernetes Operator supports TLS encryption. Use TLS with your MongoDB deployment to encrypt your data over the network.

The configuration in the following example enables TLS for the replica set. When TLS is enabled, all traffic between members of the replica set and clients is encrypted using TLS certificates.

The Kubernetes Operator generates TLS certificates using the Kubernetes Certificate Authority. To learn more, see Managing TLS in Kubernetes.

The default TLS mode is requireTLS. You can customize it using the spec.additionalMongodConfig.net.ssl.mode configuration parameter, as shown in the following abbreviated example.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-tls-enabled-rs
spec:
  type: ReplicaSet
  members: 3
  version: 4.0.4-ent

  opsManager:
    configMapRef:
      name: my-project
  credentials: my-credentials

  security:
    tls:
      enabled: true

  ...
  additionalMongodConfig:
    net:
      ssl:
        mode: "preferSSL"

See the full TLS configuration example in replica-set.yaml in the TLS samples directory. This directory also contains sample TLS configurations for sharded clusters and standalone deployments.

Enable Authentication

The Kubernetes Operator supports X.509, LDAP, and SCRAM user authentication.

Note

For LDAP configuration, see the spec.security.authentication.ldap.automationLdapGroupDN setting.

You must create an additional custom resource for your MongoDB users and the MongoDB Agent instances. The Kubernetes Operator generates and distributes the certificates.

See the full X.509 certificates configuration examples in the x509 Authentication directory in the Authentication samples directory. This directory also contains sample LDAP and SCRAM configurations.

Example Deployment CRD

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-tls-enabled-rs
spec:
  type: ReplicaSet
  members: 3
  version: "4.0.4-ent"
  project: my-project
  credentials: my-credentials
  security:
    tls:
      enabled: true
    authentication:
      enabled: true
      modes: ["X509"]
      internalCluster: "X509"

Example User CRD

apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: user-with-roles
spec:
  username: "CN=mms-user-1,OU=cloud,O=MongoDB,L=New York,ST=New York,C=US"
  db: "$external"
  project: my-project
  roles:
    - db: "admin"
      name: "clusterAdmin"