Plan your MongoDB Enterprise Kubernetes Operator Installation

Use the MongoDB Enterprise Kubernetes Operator to deploy:

  • Ops Manager resources
  • MongoDB standalone, replica set, and sharded cluster resources

Cloud Manager and Ops Manager 4.0.11 Support MongoDB Resources

You can use the Kubernetes Operator to deploy MongoDB resources with Ops Manager version 4.0.11 or later and Cloud Manager. At any place in this guide that says Ops Manager, you can substitute Cloud Manager.

To deploy MongoDB resources with the Kubernetes Operator, you need an Ops Manager instance. You can deploy this instance to Kubernetes using the Operator, or outside Kubernetes using traditional installation methods. The Operator uses Ops Manager API methods to deploy and then manage MongoDB resources.

Considerations

Kubernetes Compatibility

MongoDB Enterprise Kubernetes Operator is compatible with Kubernetes v1.13 or later.

Docker Container Details

MongoDB builds the container images from the latest builds of the following operating systems:

If you get your Kubernetes Operator from…   …the container uses
quay.io or GitHub                           Ubuntu 16.04
OpenShift                                   Red Hat Enterprise Linux 7

MongoDB, Inc. updates all packages on these images before releasing them every three weeks.

Validation Webhook

The Kubernetes Operator uses a webhook to prevent users from applying invalid resource definitions. The webhook rejects these requests immediately and the Kubernetes Operator doesn’t create or update the resource.

The ClusterRole and ClusterRoleBinding for the webhook are included in the default configuration files that you apply during installation. To create the role and binding, you must have cluster-admin privileges.
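If you are unsure whether your account holds those privileges, you can ask the API server directly before applying the configuration files; a quick sketch, assuming kubectl is configured for the target cluster:

```shell
# Sketch: ask the API server whether the current context may create
# cluster-level roles (needed for the webhook's ClusterRole and binding).
kubectl auth can-i create clusterroles 2>/dev/null \
  || echo "cannot create clusterroles with the current context (or kubectl is unavailable)"
```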

If you apply an invalid resource definition, the webhook returns a message that describes the error to the shell:

Error from server (shardPodSpec field is not configurable for
application databases as it is for sharded clusters and appdbs are
replica sets): error when creating "my-ops-manager.yaml":
admission webhook "ompolicy.mongodb.com" denied the request:
shardPodSpec field is not configurable for application databases as
it is for sharded clusters and appdbs are replica sets

The validation webhook is not required to create or update resources. If you omit the validation webhook, remove its role and binding from the default configuration, or have insufficient privileges to run it, the Kubernetes Operator performs the same validations when it reconciles each resource. The Kubernetes Operator marks resources as Failed if validation encounters a critical error. For non-critical errors, the Kubernetes Operator issues warnings.
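To see how that reconciliation-time validation went for a given resource, you can read its status fields; a sketch, where my-replica-set and the mongodb namespace are example names (the mdb short name and status fields assume the Operator's MongoDB custom resource definition):

```shell
# Sketch: read the Operator's reconciliation status for a resource.
# "my-replica-set" and the "mongodb" namespace are example names; the
# status fields are set by the Operator after each reconciliation.
kubectl get mdb my-replica-set -n mongodb \
  -o jsonpath='{.status.phase}: {.status.message}{"\n"}' 2>/dev/null \
  || echo "kubectl or cluster unavailable"
```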

Kubernetes Operator Deployment Scopes

You can deploy the Kubernetes Operator with different scopes based on where you want to deploy Ops Manager and MongoDB Kubernetes resources:

Operator in Same Namespace as Resources

You scope the Kubernetes Operator to a namespace. The Kubernetes Operator watches Ops Manager and MongoDB Kubernetes resources in that same namespace.

This is the default scope when you install the Kubernetes Operator using the installation instructions.
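With this default scope, installation typically comes down to applying the configuration files from the cloned repository; a sketch, where the file names assume the mongodb/mongodb-enterprise-kubernetes repository layout:

```shell
# Sketch: default same-namespace installation from the cloned repository.
# File names assume the mongodb/mongodb-enterprise-kubernetes repository layout.
kubectl apply -f crds.yaml 2>/dev/null \
  || echo "skipped crds.yaml: kubectl or cluster unavailable"
kubectl apply -f mongodb-enterprise.yaml 2>/dev/null \
  || echo "skipped mongodb-enterprise.yaml: kubectl or cluster unavailable"
```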

Operator in Different Namespace Than Resources

You scope the Kubernetes Operator to a namespace. The Kubernetes Operator watches Ops Manager and MongoDB Kubernetes resources in the namespace you specify.

You must use Helm to install the Kubernetes Operator with this scope. Follow the relevant Helm installation instructions, but use the following command to set the namespace for the Kubernetes Operator to watch:

helm template --set operator.watchNamespace=<namespace> \
helm_chart | kubectl apply -f -

Setting the namespace ensures that:

  • The namespace you want the Kubernetes Operator to watch has the correct roles and role bindings.
  • The Kubernetes Operator can watch and create resources in the namespace.
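You can confirm both points afterward by listing the roles and role bindings in the watched namespace; a sketch, where other-namespace is an example watched-namespace name:

```shell
# Sketch: list the roles and role bindings the Operator needs in the
# watched namespace. "other-namespace" is an example name.
ns=other-namespace
kubectl get roles,rolebindings -n "${ns}" 2>/dev/null \
  || echo "nothing listed for ${ns}: kubectl or cluster unavailable"
```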

Cluster-Wide Scope

You scope the Kubernetes Operator to a cluster. The Kubernetes Operator watches Ops Manager and MongoDB Kubernetes resources in all namespaces in the Kubernetes cluster.

Important

You can deploy only one Operator with a cluster-wide scope per Kubernetes cluster.

You must use Helm to install the Kubernetes Operator with this scope. Follow the relevant Helm installation instructions, but make the following adjustments:

  1. Use the following command to set the Kubernetes Operator to watch all namespaces:

    helm template --set operator.watchNamespace=* \
    helm_chart | kubectl apply -f -
    
  2. Create the required service accounts for each namespace where you want to deploy Ops Manager and MongoDB Kubernetes resources:

    helm template --set namespace=<namespace> \
    helm_chart -x templates/database-roles.yaml | kubectl apply -f -
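If several namespaces need these service accounts, the second step can be repeated in a loop; a sketch, where team-a and team-b are example namespace names:

```shell
# Sketch: create the database service accounts and roles in every namespace
# that will hold Ops Manager or MongoDB resources. "team-a" and "team-b"
# are example namespace names; substitute your own list.
for ns in team-a team-b; do
  helm template --set namespace="${ns}" \
    helm_chart -x templates/database-roles.yaml 2>/dev/null \
    | kubectl apply -f - 2>/dev/null \
    || echo "skipped ${ns}: helm or kubectl unavailable"
done
```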
    

Prerequisites

To install the MongoDB Kubernetes Operator, you must:

  1. Have a Kubernetes solution available to use.

    If you need a Kubernetes solution, see the Kubernetes documentation on picking the right solution.

  2. Have a running Ops Manager.

    Important

    Your Ops Manager installation must run an active NTP service. If the Ops Manager host’s clock falls out of sync, that host can’t communicate with the Kubernetes Operator.

    To learn how to check your NTP service for your Ops Manager host, see the documentation for Ubuntu or RHEL.
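    One quick way to check, assuming a systemd-based host (other systems can query ntpd or chronyd directly):

```shell
# Sketch: check that the Ops Manager host's clock is kept in sync.
# Assumes a systemd-based host; other systems can query ntpd or chronyd.
timedatectl show --property=NTPSynchronized 2>/dev/null \
  || echo "timedatectl unavailable; check your NTP daemon (ntpd or chronyd) directly"
```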

  3. Clone the MongoDB Enterprise Kubernetes Operator repository.

    git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
    

    Note

    You can use Helm to install the Kubernetes Operator. To learn how to install Helm, see its documentation on GitHub.

  4. Create a namespace for your Kubernetes deployment. By default, the Kubernetes Operator uses the mongodb namespace. To simplify your installation, consider creating a namespace named mongodb using the following kubectl command:

    kubectl create namespace mongodb
    

    If you do not want to use the mongodb namespace, you can name your namespace anything you like:

    kubectl create namespace <namespaceName>
    
  5. (Required for OpenShift Installs) Create a secret that contains credentials authorized to pull images from the registry.connect.redhat.com repository:

    1. If you have not already, obtain a Red Hat subscription.

    2. Create a Registry Service Account.

    3. Click on your Registry Service Account, then click the Docker Configuration tab.

    4. Download the <account-name>-auth.json file and open it in a text editor.

    5. Copy the registry.redhat.io object, and paste another instance of this object into the file. Remember to add a comma after the first object. Rename the second object registry.connect.redhat.com, then save the file:

      {
        "auths": {
          "registry.redhat.io": {
            "auth": "<encoded-string>"
          },
          "registry.connect.redhat.com": {
            "auth": "<encoded-string>"
          }
        }
      }
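      Before creating the secret from this file, it can be worth confirming that the edited file still parses as JSON (a stray comma or unbalanced brace from copy-pasting is easy to miss); a sketch, where merged-auth.json stands in for your edited <account-name>-auth.json:

```shell
# Sketch: validate the merged Docker auth file before building the secret.
# merged-auth.json stands in for your edited <account-name>-auth.json.
cat > merged-auth.json <<'EOF'
{
  "auths": {
    "registry.redhat.io": { "auth": "<encoded-string>" },
    "registry.connect.redhat.com": { "auth": "<encoded-string>" }
  }
}
EOF

# Catches unbalanced braces or missing commas left over from editing.
python3 -m json.tool merged-auth.json > /dev/null \
  && echo "merged-auth.json parses as valid JSON"
```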
      
    6. Create an openshift-pull-secret.yaml file with the contents of the modified <account-name>-auth.json file as stringData named .dockerconfigjson:

      apiVersion: v1
      kind: Secret
      metadata:
        name: openshift-pull-secret
      stringData:
        .dockerconfigjson: |
            {
              "auths": {
                "registry.redhat.io": {
                  "auth": "<encoded-string>"
                },
                "registry.connect.redhat.com": {
                  "auth": "<encoded-string>"
                }
              }
            }
      type: kubernetes.io/dockerconfigjson
      

      The value you provide in the metadata.name field is the secret name. Provide this value when asked for the <openshift-pull-secret>.

    7. Create a secret from the openshift-pull-secret.yaml file:

      oc apply -f openshift-pull-secret.yaml -n <namespace>