This version of the documentation is archived and no longer supported. To learn how to upgrade your version of MongoDB Kubernetes Operator, refer to the upgrade documentation.

Deploy a Sharded Cluster

Note

At any place on this page that says Ops Manager, you can substitute Cloud Manager.

Important

  • You can use the Kubernetes Operator to deploy MongoDB resources with Cloud Manager and with Ops Manager version 5.0.x or later.
  • You can use the Atlas Operator to deploy MongoDB resources to Atlas.

Warning

Kubernetes Operator doesn’t support arbiter nodes.

Sharded clusters provide horizontal scaling for large data sets and enable high throughput operations by distributing the data set across a group of servers.

To learn more about sharding, see Sharding Introduction in the MongoDB manual.

Use this procedure to deploy a new sharded cluster that Ops Manager manages. Later, you can use Ops Manager to add shards and perform other maintenance operations on the cluster.

Considerations

Do Not Deploy Monitoring Agents Inside and Outside Kubernetes

Due to Kubernetes network translation, a monitoring agent outside Kubernetes cannot monitor MongoDB instances inside Kubernetes. For this reason, Kubernetes and non-Kubernetes deployments in the same project are not supported. Use separate projects for each.

Choose Whether to Encrypt Connections

When you deploy your sharded cluster via the Kubernetes Operator, you must choose whether to encrypt connections using TLS certificates.

The procedure for Non-Encrypted Connections:

  • Doesn’t encrypt connections between cluster shards.
  • Doesn’t encrypt connections between client applications and MongoDB deployments.
  • Has fewer setup requirements than a deployment with TLS-encrypted connections.

The procedure for TLS-Encrypted Connections:

  • Establishes TLS-encrypted connections between cluster shards.
  • Establishes TLS-encrypted connections between client applications and MongoDB deployments.
  • Requires valid certificates for TLS encryption.

Note

You can’t secure a Standalone Instance of MongoDB in a Kubernetes cluster.

To set up TLS encryption for a replica set, see Deploy a Replica Set.

Select the appropriate tab based on whether you want to encrypt your sharded cluster connections with TLS.

Prerequisites

To deploy a sharded cluster using a MongoDB Kubernetes resource object, you must complete the Kubernetes Operator prerequisites, including creating a Kubernetes secret that contains your Ops Manager API credentials.

This Kubernetes secret, along with other secrets that the Kubernetes Operator creates, can later be migrated to a different secret storage tool to avoid storing secrets in Kubernetes.

For TLS-encrypted connections, you must also:

  • Generate one TLS certificate for each of the following components:

    • Each shard in your sharded cluster. Ensure that you add SANs for each Kubernetes pod that hosts a shard member to the certificate.

    • Your config servers. Ensure that you add SANs for each Kubernetes pod that hosts your config servers to the certificate.

    • Your mongos instances. Ensure that you add SANs for each Kubernetes pod that hosts a mongos to the certificate.

      In your TLS certificates, the SAN for each pod must use the following format (a worked example follows this list):

      <pod-name>.<metadata.name>-svc.<namespace>.svc.cluster.local
      
    • Your project’s MongoDB Agent. For the MongoDB Agent certificate, ensure that you meet the following requirements:
      • The Common Name in the TLS certificate is not empty.
      • The combined Organization and Organizational Unit in each TLS certificate differs from the Organization and Organizational Unit in the TLS certificate for your replica set members.
  • You must possess the CA certificate and the key that you used to sign your TLS certificates.
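For a concrete picture of the SAN requirement, the following sketch builds a certificate signing request for shard 0 of a hypothetical resource named my-sharded-cluster in the mongodb namespace, with three members per shard. The pod names are assumptions for illustration; confirm the actual pod names in your cluster (for example, with kubectl get pods) before generating certificates.

# Write an OpenSSL config whose SANs follow the format
# <pod-name>.<metadata.name>-svc.<namespace>.svc.cluster.local
cat > shard-0-san.cnf <<'EOF'
[ req ]
prompt             = no
distinguished_name = req_dn
req_extensions     = v3_req
[ req_dn ]
CN = my-sharded-cluster-0
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = my-sharded-cluster-0-0.my-sharded-cluster-svc.mongodb.svc.cluster.local
DNS.2 = my-sharded-cluster-0-1.my-sharded-cluster-svc.mongodb.svc.cluster.local
DNS.3 = my-sharded-cluster-0-2.my-sharded-cluster-svc.mongodb.svc.cluster.local
EOF

# Generate a private key and CSR, then sign the CSR with your CA to produce
# the certificate file you store in a Kubernetes secret later on this page.
openssl req -new -nodes -newkey rsa:4096 \
  -keyout shard-0-tls.key -out shard-0-tls.csr -config shard-0-san.cnf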

Important

For fresh Kubernetes Operator installations starting with version 1.13, the Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources.

Previous Kubernetes Operator versions required you to concatenate your TLS certificates and private keys into a PEM file and store this file in an Opaque secret.

To maintain backwards compatibility, the Kubernetes Operator continues to support storing PEM files in Opaque secrets. Support of this feature might be removed in a future release.

We recommend that you upgrade to Kubernetes Operator version 1.15.1 or later.

If you have a broken Application Database after upgrading to Kubernetes Operator version 1.14.0 or 1.15.0, see Ops Manager in Failed State.


Deploy a Sharded Cluster

1

Configure kubectl to default to your namespace.

If you have not done so already, run the following command to execute all kubectl commands in the namespace you created:

kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
2

Copy this example sharded cluster resource.

This is a YAML file that you can modify. Change the settings to match your desired sharded cluster configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-sharded-cluster>
spec:
  shardCount: 2
  mongodsPerShardCount: 3
  mongosCount: 2
  configServerCount: 3
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
            # Must match metadata.name in ConfigMap file
  credentials: <mycredentials>
  type: ShardedCluster
  persistent: true
...
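The resource above refers to two objects from the Kubernetes Operator prerequisites: the Ops Manager connection ConfigMap (spec.opsManager.configMapRef.name) and the API credentials secret (spec.credentials). As an optional sanity check before you apply the resource, you can confirm that both exist in the target namespace; the placeholder names below are the same placeholders used in the example and must match your own objects.

# Optional check: the ConfigMap and secret must exist in the same namespace
# as the sharded cluster resource.
kubectl get configmap <configMap.metadata.name> -n <metadata.namespace>
kubectl get secret <mycredentials> -n <metadata.namespace>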
3

Paste the copied example to create a new sharded cluster resource.

Open your preferred text editor and paste the object specification into a new text file.

4

Configure the settings shown in the preceding step as follows.

metadata.name
  Type: string
  Description: Label for this Kubernetes sharded cluster object. Resource names must be 44 characters or less.
  Example: myproject

spec.shardCount
  Type: integer
  Description: Number of shards to deploy.
  Example: 2

spec.mongodsPerShardCount
  Type: integer
  Description: Number of shard members per shard.
  Example: 3

spec.mongosCount
  Type: integer
  Description: Number of shard routers to deploy.
  Example: 2

spec.configServerCount
  Type: integer
  Description: Number of members of the config server replica set.
  Example: 3

spec.version
  Type: string
  Description: Version of MongoDB that this sharded cluster should run. The format should be X.Y.Z for the Community edition and X.Y.Z-ent for the Enterprise edition.
  Important: Ensure that you choose a compatible MongoDB Server version. Compatible versions differ depending on the base image that the MongoDB database resource uses. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual. For best results, use the latest available enterprise MongoDB version that is compatible with your Ops Manager version.

spec.opsManager.configMapRef.name
  Type: string
  Description: Name of the ConfigMap with the Ops Manager connection configuration. The spec.cloudManager.configMapRef.name setting is an alias for this setting and can be used in its place.
  Note: The ConfigMap must exist in the same namespace as the resource you want to create. The Kubernetes Operator tracks any changes to the ConfigMap and reconciles the state of the MongoDB Kubernetes resource.
  Example: <myproject>

spec.credentials
  Type: string
  Description: Name of the secret you created as Ops Manager API authentication credentials for the Kubernetes Operator to communicate with Ops Manager.
  Note: The Kubernetes secret object that holds the credentials must exist in the same namespace as the resource you want to create. The Kubernetes Operator tracks any changes to the secret and reconciles the state of the MongoDB Kubernetes resource.
  Example: <mycredentials>

spec.type
  Type: string
  Description: Type of MongoDB Kubernetes resource to create.
  Example: ShardedCluster

spec.persistent
  Type: string
  Description: Optional. Flag indicating whether this MongoDB Kubernetes resource should use Persistent Volumes for storage. Persistent Volumes are not deleted when the MongoDB Kubernetes resource is stopped or restarted. If this value is true, the associated storage settings default to 16Gi. To change your Persistent Volume Claims configuration, adjust the applicable persistence collections to meet your deployment requirements. (An optional verification command follows this table.)
  Warning: Your containers must have permissions to write to your Persistent Volume. The Kubernetes Operator sets fsGroup = 2000 in securityContext. This makes Kubernetes try to fix write permissions for the Persistent Volume. If redeploying the resource does not fix issues with your Persistent Volumes, contact MongoDB Support.
  Note: If you do not use Persistent Volumes, the Disk Usage and Disk IOPS charts cannot be displayed in either the Processes tab on the Deployment page or in the Metrics page when reviewing the data for this deployment.
  Example: true
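If you set spec.persistent to true, the Kubernetes Operator requests Persistent Volume Claims for the cluster's pods. As an optional check after the deployment starts, you can list them; this is a sketch and assumes the namespace placeholder used earlier on this page.

# Optional check: list the Persistent Volume Claims created for the deployment.
kubectl get pvc -n <metadata.namespace>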
5

Add any additional accepted settings for a sharded cluster deployment.

You can also add any of the following optional settings to the object specification file for a sharded cluster deployment:

Warning

You must set spec.clusterDomain if your Kubernetes cluster uses a domain other than the default, cluster.local. If you neither use the default domain nor set spec.clusterDomain, the Kubernetes Operator might not function as expected.

For config server

For shard routers

For shard members

6

Save this file with a .yaml file extension.

7

Start your sharded cluster deployment.

Invoke the following Kubernetes command to create your sharded cluster:

kubectl apply -f <sharded-cluster-conf>.yaml

Check the log after running this command. If the creation was successful, you should see a message similar to the following:

2018-06-26T10:30:30.346Z INFO operator/shardedclusterkube.go:52 Created! {"sharded cluster": "my-sharded-cluster"}
8

Track the status of your sharded cluster deployment.

To check the status of your MongoDB Kubernetes resource, invoke the following command:

kubectl get mdb <resource-name> -o yaml -w

The -w flag means “watch”. With the “watch” flag set, the output refreshes immediately when the configuration changes until the status phase achieves the Running state.
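If you prefer a one-line check instead of watching the full resource output, the following sketch prints only the status phase. It assumes the resource reports its phase in status.phase, which you can confirm in the -o yaml output above.

# Print only the deployment phase (for example, Running, Pending, or Failed).
kubectl get mdb <resource-name> -o jsonpath='{.status.phase}'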

See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.

1

Configure kubectl to default to your namespace.

If you have not done so already, run the following command to execute all kubectl commands in the namespace you created:

kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
2

Create the secrets for your shards’ TLS certificates.

Run these kubectl commands to create new secrets that store the sharded cluster shards’ certificates:

kubectl -n mongodb create secret tls <metadata.name>-0-cert \
  --cert=<shard-0-tls-cert> \
  --key=<shard-0-tls-key>

kubectl -n mongodb create secret tls <metadata.name>-1-cert \
  --cert=<shard-1-tls-cert> \
  --key=<shard-1-tls-key>

If you’re using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.
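The example resource in this procedure uses two shards, so the commands above create two secrets. If you deploy more shards, create one secret per shard index. The following loop is a sketch with assumed certificate and key file names (shard-N-tls.crt and shard-N-tls.key); substitute your own resource name and files.

# Sketch: create one TLS secret per shard for a hypothetical three-shard
# deployment. Replace <metadata.name> and the file names with your values.
for i in 0 1 2; do
  kubectl -n mongodb create secret tls "<metadata.name>-${i}-cert" \
    --cert="shard-${i}-tls.crt" \
    --key="shard-${i}-tls.key"
done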

3

Create the secret for your config servers’ TLS certificate.

Run this kubectl command to create a new secret that stores the sharded cluster config servers’ certificate:

kubectl -n mongodb create secret tls <metadata.name>-config-cert \
  --cert=<config-tls-cert> \
  --key=<config-tls-key>

If you’re using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.

4

Create the secret for your mongos servers’ TLS certificate.

Run this kubectl command to create a new secret that stores the sharded cluster mongos certificate:

kubectl -n mongodb create secret tls <metadata.name>-mongos-cert \
  --cert=<mongos-tls-cert> \
  --key=<mongos-tls-key>

If you’re using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.

5

Create the secret for your agent’s TLS certificate.

Run this kubectl command to create a new secret that stores the agent’s TLS certificate:

kubectl create secret tls <metadata.name>-agent-certs \
  --cert=<agent-tls-cert> \
  --key=<agent-tls-key>

If you’re using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.
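Before you continue, you can optionally confirm that the shard, config server, mongos, and agent certificate secrets all exist. This is a simple sanity check and assumes you created the secrets in the mongodb namespace as shown in the previous steps.

# Optional check: the TLS secrets created above should appear in this list.
kubectl -n mongodb get secrets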

6
7

Copy this example sharded cluster resource.

This is a YAML file that you can modify. Change the settings to match your desired sharded cluster configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-sharded-cluster>
spec:
  shardCount: 2
  mongodsPerShardCount: 3
  mongosCount: 2
  configServerCount: 3
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
            # Must match metadata.name in ConfigMap file
  credentials: <mycredentials>
  type: ShardedCluster
  persistent: true
  security:
    tls:
      enabled: true
      ca: <custom-ca>
      secretRef:
        prefix: <prefix>
...
8

Paste the copied example to create a new sharded cluster resource.

Open your preferred text editor and paste the object specification into a new text file.

9

Configure the settings shown in the preceding step as follows.

metadata.name
  Type: string
  Description: Label for this Kubernetes sharded cluster object. Resource names must be 44 characters or less.
  Example: myproject

spec.shardCount
  Type: integer
  Description: Number of shards to deploy.
  Example: 2

spec.mongodsPerShardCount
  Type: integer
  Description: Number of shard members per shard.
  Example: 3

spec.mongosCount
  Type: integer
  Description: Number of shard routers to deploy.
  Example: 2

spec.configServerCount
  Type: integer
  Description: Number of members of the config server replica set.
  Example: 3

spec.version
  Type: string
  Description: Version of MongoDB that this sharded cluster should run. The format should be X.Y.Z for the Community edition and X.Y.Z-ent for the Enterprise edition.
  Important: Ensure that you choose a compatible MongoDB Server version. Compatible versions differ depending on the base image that the MongoDB database resource uses. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual. For best results, use the latest available enterprise MongoDB version that is compatible with your Ops Manager version.

spec.opsManager.configMapRef.name
  Type: string
  Description: Name of the ConfigMap with the Ops Manager connection configuration. The spec.cloudManager.configMapRef.name setting is an alias for this setting and can be used in its place.
  Note: The ConfigMap must exist in the same namespace as the resource you want to create. The Kubernetes Operator tracks any changes to the ConfigMap and reconciles the state of the MongoDB Kubernetes resource.
  Example: <myproject>

spec.credentials
  Type: string
  Description: Name of the secret you created as Ops Manager API authentication credentials for the Kubernetes Operator to communicate with Ops Manager.
  Note: The Kubernetes secret object that holds the credentials must exist in the same namespace as the resource you want to create. The Kubernetes Operator tracks any changes to the secret and reconciles the state of the MongoDB Kubernetes resource.
  Example: <mycredentials>

spec.type
  Type: string
  Description: Type of MongoDB Kubernetes resource to create.
  Example: ShardedCluster

spec.persistent
  Type: string
  Description: Optional. Flag indicating whether this MongoDB Kubernetes resource should use Persistent Volumes for storage. Persistent Volumes are not deleted when the MongoDB Kubernetes resource is stopped or restarted. If this value is true, the associated storage settings default to 16Gi. To change your Persistent Volume Claims configuration, adjust the applicable persistence collections to meet your deployment requirements.
  Warning: Your containers must have permissions to write to your Persistent Volume. The Kubernetes Operator sets fsGroup = 2000 in securityContext. This makes Kubernetes try to fix write permissions for the Persistent Volume. If redeploying the resource does not fix issues with your Persistent Volumes, contact MongoDB Support.
  Note: If you do not use Persistent Volumes, the Disk Usage and Disk IOPS charts cannot be displayed in either the Processes tab on the Deployment page or in the Metrics page when reviewing the data for this deployment.
  Example: true
10

Configure the TLS settings for your sharded cluster resource using a Custom Certificate Authority.

To enable TLS in your deployment, configure the following settings in your Kubernetes object:

spec.security.tls.enabled
  Type: boolean
  Necessity: Required
  Description: If this value is true, TLS is enabled on the MongoDB deployment. By default, the Kubernetes Operator requires hosts to use and accept TLS-encrypted connections.
  Example: true

spec.security.tls.ca
  Type: string
  Necessity: Required
  Description: Name of the ConfigMap that stores the custom CA that you used to sign your deployment’s TLS certificates. (A sketch for creating this ConfigMap follows this table.)
  Example: <custom-ca>

spec.security.tls.certsSecretPrefix
  Type: string
  Necessity: Optional
  Description: If applicable, add the <prefix> of the secret name that contains your MongoDB deployment’s TLS certificates. For example, if you call your deployment my-deployment and you set the prefix to mdb, you must name the TLS secret for client TLS communications mdb-my-deployment-cert. Also, you must name the TLS secret for internal cluster authentication (if enabled) mdb-my-deployment-clusterfile.
  Example: devDb
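The spec.security.tls.ca setting names a ConfigMap that holds the CA certificate you used to sign your TLS certificates. If you have not already created it, the following command is a minimal sketch; the ca-pem key name and the file path are assumptions, so confirm the key name that your Kubernetes Operator version expects in its TLS configuration documentation.

# Sketch: store the CA certificate in a ConfigMap referenced by spec.security.tls.ca.
kubectl -n mongodb create configmap <custom-ca> --from-file=ca-pem=<path-to-your-ca.pem>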
11

Add any additional accepted settings for a sharded cluster deployment.

You can also add any of the following optional settings to the object specification file for a sharded cluster deployment:

Warning

You must set spec.clusterDomain if your Kubernetes cluster uses a domain other than the default, cluster.local. If you neither use the default domain nor set spec.clusterDomain, the Kubernetes Operator might not function as expected.

For config server

For shard routers

For shard members

12

Save this file with a .yaml file extension.

13

Start your sharded cluster deployment.

Invoke the following Kubernetes command to create your sharded cluster:

kubectl apply -f <sharded-cluster-conf>.yaml

Check the log after running this command. If the creation was successful, you should see a message similar to the following:

2018-06-26T10:30:30.346Z INFO operator/shardedclusterkube.go:52 Created! {"sharded cluster": "my-sharded-cluster"}
14

Track the status of your sharded cluster deployment.

To check the status of your MongoDB Kubernetes resource, invoke the following command:

kubectl get mdb <resource-name> -o yaml -w

The -w flag means “watch”. With the “watch” flag set, the output refreshes immediately when the configuration changes until the status phase achieves the Running state.

See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.

After you encrypt your database resource with TLS, you can secure the following:

Renew TLS Certificates for a Sharded Cluster

Renew your TLS certificates periodically using the following procedure:

1

Configure kubectl to default to your namespace.

If you have not done so already, run the following command to execute all kubectl commands in the namespace you created:

kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
2

Renew the secrets for your shards’ TLS certificates.

Run these kubectl commands to renew the existing secrets that store the sharded cluster shards’ certificates:

kubectl -n mongodb create secret tls <metadata.name>-0-cert \
  --cert=<shard-0-tls-cert> \
  --key=<shard-0-tls-key> \
  --dry-run=client \
  -o yaml |
kubectl apply -f -

kubectl -n mongodb create secret tls <metadata.name>-1-cert \
  --cert=<shard-1-tls-cert> \
  --key=<shard-1-tls-key> \
  --dry-run=client \
  -o yaml |
kubectl apply -f -
3

Renew the secret for your config server’s TLS certificates.

Run this kubectl command to renew an existing secret that stores the sharded cluster config server’s certificates:

kubectl -n mongodb create secret tls <metadata.name>-config-cert \
  --cert=<config-tls-cert> \
  --key=<config-tls-key> \
  --dry-run=client \
  -o yaml |
kubectl apply -f -
4

Renew the secret for your mongos server’s TLS certificates.

Run this kubectl command to renew an existing secret that stores the sharded cluster mongos certificates:

kubectl -n mongodb create secret tls <metadata.name>-mongos-cert \
  --cert=<mongos-tls-cert> \
  --key=<mongos-tls-key> \
  --dry-run=client \
  -o yaml |
kubectl apply -f -
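After you apply the renewed secrets, you can optionally confirm that the new certificates are in place by checking the expiry date of an updated secret. This sketch assumes the secrets use the kubernetes.io/tls format, which stores the certificate under the tls.crt key.

# Print the expiry date of the renewed mongos certificate.
kubectl -n mongodb get secret <metadata.name>-mongos-cert \
  -o jsonpath='{.data.tls\.crt}' | base64 --decode | openssl x509 -noout -enddate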