
Multi-Cluster Quick Start

Important

Use the beta release of the multi-cluster deployments only in development environments.

Overview

With multi-cluster deployments, you can use the MongoDB Enterprise Kubernetes Operator to manage MongoDB deployments that span more than one Kubernetes cluster.

This tutorial demonstrates how you can use the Kubernetes Operator to deploy a MongoDB replica set across three Kubernetes member clusters, using GKE (Google Kubernetes Engine) and Istio service mesh.

The beta release of multi-cluster deployments offers different levels of availability, depending on the needs of your enterprise application. You can use this tutorial to deploy:

  • Single Region, Multi AZ. One or more Kubernetes clusters where each cluster has nodes deployed in different zones in the same region. Such deployments protect MongoDB instances backing your enterprise applications against failures and offer increased availability, disaster recovery, and data distribution within one cloud region.

  • Multi Region. One or more Kubernetes clusters where you:

    • Deploy each cluster in a different region, and
    • Within each region, deploy cluster nodes in different availability zones.

    Such deployments allow you to add MongoDB instances in global clusters that span multiple geographic regions for increased availability and global distribution of data.

Central Cluster and Member Clusters

Istio manages the discovery of MongoDB nodes deployed in different Kubernetes member clusters. Each multi-cluster deployment that uses Istio comprises one Kubernetes central cluster and one or more member clusters.

  • Central cluster in Kubernetes contains:
    • MongoDB Enterprise Kubernetes Operator
    • Ops Manager, if you deploy it with the Kubernetes Operator
    • The MongoDBMulti CustomResource spec for the MongoDB replica set.
  • Member clusters in Kubernetes host the MongoDB replica sets.

You can host your application anywhere inside the Istio service mesh: on Kubernetes clusters other than the ones that you deploy with the Kubernetes Operator, or on the member clusters that you deploy as part of this tutorial.

To learn more, see the Multi-Cluster Deployment Architecture.

Services and Tools

This tutorial relies on the following services, tools, and their documentation:

  • Kubernetes clusters. This tutorial uses GKE (Google Kubernetes Engine) to provision multiple Kubernetes clusters. Each Kubernetes member cluster hosts a MongoDB replica set deployment and represents a data center that serves your application.
  • Istio service mesh. This tutorial uses Istio to facilitate DNS resolution for MongoDB replica sets deployed in different Kubernetes clusters.
  • MongoDB Enterprise Kubernetes Operator repository with the configuration files that the Kubernetes Operator needs for this deployment.
  • MongoDB Helm Charts for Kubernetes with charts for multi-cluster deployments.
  • The Install Multicluster documentation from Istio.
  • install_istio_separate_network script that is based on Istio documentation and provides an example installation that uses the multi-primary mode on different networks.
  • multi-cluster kubeconfig creator tool that performs the following actions:
    • Creates a single mongodb namespace in the central cluster and each member cluster.
    • Creates Service Accounts, Roles, and RoleBindings in the central cluster and each member cluster.
    • Puts Service Account token secrets from each member cluster into a single kubeconfig file and saves the file in the central cluster. This enables authorized access from the Kubernetes Operator installed in the central cluster to the member clusters.

Prerequisites

This tutorial requires that you:

  1. Set the environment variables with cluster names and the available GKE zones where you deploy the cluster, as in this example:

    export MDB_GKE_PROJECT={GKE project name}
    
    export MDB_CENTRAL_CLUSTER="mdb-central"
    export MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"
    
    export MDB_CLUSTER_1="mdb-1"
    export MDB_CLUSTER_1_ZONE="us-west1-b"
    
    export MDB_CLUSTER_2="mdb-2"
    export MDB_CLUSTER_2_ZONE="us-east1-b"
    
    export MDB_CLUSTER_3="mdb-3"
    export MDB_CLUSTER_3_ZONE="us-central1-a"
    
    export MDB_CENTRAL_CLUSTER_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CENTRAL_CLUSTER_ZONE}_${MDB_CENTRAL_CLUSTER}"
    
    export MDB_CLUSTER_1_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_1_ZONE}_${MDB_CLUSTER_1}"
    export MDB_CLUSTER_2_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_2_ZONE}_${MDB_CLUSTER_2}"
    export MDB_CLUSTER_3_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_3_ZONE}_${MDB_CLUSTER_3}"
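
    # Optional sanity check (an addition to the steps above): confirm that the
    # full cluster names expanded as expected.
    echo "${MDB_CENTRAL_CLUSTER_FULL_NAME}"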
    
  2. Set up GKE (Google Kubernetes Engine) clusters:

    1. Set up your Google Cloud account and the gcloud tool, using the Google Kubernetes Engine Quickstart.

    2. Create one central cluster and one or more member clusters, specifying the zones, the number of nodes, and the instance types, as in these examples:

      gcloud container clusters create $MDB_CENTRAL_CLUSTER \
        --zone=$MDB_CENTRAL_CLUSTER_ZONE \
        --num-nodes=5 \
        --machine-type "e2-standard-2"
      
      gcloud container clusters create $MDB_CLUSTER_1 \
        --zone=$MDB_CLUSTER_1_ZONE \
        --num-nodes=5 \
        --machine-type "e2-standard-2"
      
      gcloud container clusters create $MDB_CLUSTER_2 \
        --zone=$MDB_CLUSTER_2_ZONE \
        --num-nodes=5 \
        --machine-type "e2-standard-2"
      
      gcloud container clusters create $MDB_CLUSTER_3 \
        --zone=$MDB_CLUSTER_3_ZONE \
        --num-nodes=5 \
        --machine-type "e2-standard-2"
      
  3. Obtain user authentication credentials for the central and member clusters and save them. You will later use these credentials to run kubectl commands on these Kubernetes clusters. Run the following commands:

    gcloud container clusters get-credentials $MDB_CENTRAL_CLUSTER \
      --zone=$MDB_CENTRAL_CLUSTER_ZONE
    
    gcloud container clusters get-credentials $MDB_CLUSTER_1 \
      --zone=$MDB_CLUSTER_1_ZONE
    
    gcloud container clusters get-credentials $MDB_CLUSTER_2 \
      --zone=$MDB_CLUSTER_2_ZONE
    
    gcloud container clusters get-credentials $MDB_CLUSTER_3 \
      --zone=$MDB_CLUSTER_3_ZONE
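
    # Optional: confirm that kubectl now has a context for each of the four clusters.
    kubectl config get-contexts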
    
  4. Install Istio in a multi-primary mode on different networks, using the install_istio_separate_network script. To learn more, see the Install Multicluster documentation from Istio.
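
    In multi-primary mode, each member cluster runs its own Istio control plane. As an optional check, assuming the script installs Istio into the default istio-system namespace, verify that the Istio Pods are running on each member cluster:

    kubectl get pods -n istio-system \
      --context=$MDB_CLUSTER_1_FULL_NAME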

  5. Install Go v1.16 or later.

  6. Install Helm.
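
    Optionally, confirm that both tools are available on your PATH:

    go version
    helm version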

Procedure

1

Clone the MongoDB Enterprise Kubernetes Operator repository.

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
2

Run the multi-cluster kubeconfig creator tool.

By default, the Kubernetes Operator uses the mongodb namespace. To simplify your installation, the tool creates a single mongodb namespace in the central cluster and in each of the three member clusters.

  1. Change to the directory to which you cloned the Kubernetes Operator repository, and then to the directory that has the multi-cluster kubeconfig creator tool.

  2. Run the multi-cluster kubeconfig creator tool:

    cd tools/multicluster
    go run main.go \
      -central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
      -member-clusters="${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
      -member-cluster-namespace="mongodb" \
      -central-cluster-namespace="mongodb"
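
When the tool completes, you can optionally confirm its effects; for example, the service accounts that it creates should now exist in the mongodb namespace of each member cluster:

kubectl get serviceaccounts \
  --context=$MDB_CLUSTER_1_FULL_NAME \
  --namespace mongodb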
    
3

Set the Istio injection webhook in each member cluster.

Run the following commands, specifying the context for each of the member clusters in the deployment. These commands add the istio-injection=enabled label to the mongodb namespace on each member cluster. This label configures Istio's automatic sidecar injection webhook, which adds a sidecar to any Pods that you create in this namespace. To learn more, see Automatic sidecar injection in the Istio documentation.

kubectl label \
  --context=$MDB_CLUSTER_1_FULL_NAME \
  namespace mongodb \
  istio-injection=enabled
kubectl label \
  --context=$MDB_CLUSTER_2_FULL_NAME \
  namespace mongodb \
  istio-injection=enabled
kubectl label \
  --context=$MDB_CLUSTER_3_FULL_NAME \
  namespace mongodb \
  istio-injection=enabled
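
To confirm that the label was applied, you can show the labels on the mongodb namespace in any member cluster:

kubectl get namespace mongodb \
  --context=$MDB_CLUSTER_1_FULL_NAME \
  --show-labels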
4

Configure kubectl to use the central cluster’s namespace.

If you have not done so already, run the following commands so that all kubectl commands run against the central cluster in the mongodb namespace. In the following steps, you will install the Kubernetes Operator into this namespace.

kubectl config use-context $MDB_CENTRAL_CLUSTER_FULL_NAME
kubectl config set-context $(kubectl config current-context) \
  --namespace=mongodb
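
To confirm the namespace that kubectl now targets, you can run:

kubectl config view --minify | grep namespace: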
5

Add the MongoDB Helm Charts for Kubernetes repository to Helm.

helm repo add mongodb https://mongodb.github.io/helm-charts
6

Install the MongoDB Enterprise Kubernetes Operator in the central cluster.

Use the Helm charts for the Kubernetes Operator and multi-cluster deployments to install the Kubernetes Operator that manages your multi-cluster deployment:

helm upgrade \
  --install \
  mongodb-enterprise-operator-multi-cluster \
  mongodb/enterprise-operator \
  --namespace mongodb \
  --set namespace=mongodb \
  --version <mongodb-kubernetes-operator-version> \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set "multiCluster.clusters=$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME"
7

Deploy the MongoDB resource.

  1. On the central cluster, create a secret so that the Kubernetes Operator can create and update objects in your Ops Manager project. To learn more, see Create Credentials for the Kubernetes Operator.

  2. On the central cluster, create a ConfigMap to link the Kubernetes Operator to your Ops Manager project. To learn more, see Create One Project using a ConfigMap.
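
    As a minimal sketch, assuming an Ops Manager user with a Public API Key and placeholder values for the Ops Manager URL, project name, and organization ID, you might create both objects as follows; the exact field names depend on your Ops Manager version, so follow the linked pages for the authoritative steps. The names my-credentials and my-project match the values referenced in the MongoDBMulti resource below.

    kubectl create secret generic my-credentials \
      --from-literal="user=<ops-manager-username>" \
      --from-literal="publicApiKey=<ops-manager-api-key>" \
      --namespace mongodb

    kubectl create configmap my-project \
      --from-literal="baseUrl=<ops-manager-url>" \
      --from-literal="projectName=<project-name>" \
      --from-literal="orgId=<organization-id>" \
      --namespace mongodb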

  3. On the central cluster, configure the required service accounts for each member cluster:

    helm template --show-only \
      templates/database-roles.yaml \
      mongodb/enterprise-operator \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_1_FULL_NAME \
      --namespace mongodb
    
    helm template --show-only \
      templates/database-roles.yaml \
      mongodb/enterprise-operator \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_2_FULL_NAME \
      --namespace mongodb
    
    helm template --show-only \
      templates/database-roles.yaml \
      mongodb/enterprise-operator \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_3_FULL_NAME \
      --namespace mongodb
    
  4. Set spec.credentials and spec.opsManager.configMapRef.name and deploy the MongoDB resource. In the following code sample, duplicateServiceObjects is set to true to enable DNS proxying in Istio.

    Note

    To enable cross-cluster DNS resolution by the Istio service mesh, this tutorial creates service objects with a single ClusterIP address for each Kubernetes Pod.

    kubectl apply -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBMulti
    metadata:
      name: multi-replica-set
    spec:
      version: 4.4.0-ent
      type: ReplicaSet
      persistent: false
      duplicateServiceObjects: true
      credentials: my-credentials
      opsManager:
        configMapRef:
          name: my-project
      clusterSpecList:
        clusterSpecs:
        - clusterName: ${MDB_CLUSTER_1_FULL_NAME}
          members: 3
        - clusterName: ${MDB_CLUSTER_2_FULL_NAME}
          members: 2
        - clusterName: ${MDB_CLUSTER_3_FULL_NAME}
          members: 3
    EOF
    
8

Verify that the MongoDB resources are running.

  1. For member clusters, run the following commands to verify that the MongoDB Pods are in the running state:

    kubectl get pods \
     --context=$MDB_CLUSTER_1_FULL_NAME \
     --namespace mongodb
    
    kubectl get pods \
     --context=$MDB_CLUSTER_2_FULL_NAME \
     --namespace mongodb
    
    kubectl get pods \
     --context=$MDB_CLUSTER_3_FULL_NAME \
     --namespace mongodb
    
  2. In the central cluster, run the following command to verify that the MongoDBMulti CustomResource is in the running state:

    kubectl --context=$MDB_CENTRAL_CLUSTER_FULL_NAME \
      --namespace mongodb \
      get mdbm multi-replica-set -o yaml -w
    

Troubleshooting Multi-Cluster Deployments

To troubleshoot your multi-cluster deployments, use the procedures in this section.

Recovering from Cluster Failure

This procedure uses the same cluster names as in the Prerequisites. If the cluster MDB_CLUSTER_1 that holds MongoDB nodes goes down and you provision a new cluster named MDB_CLUSTER_4 to replace it, run the multi-cluster kubeconfig creator tool with the updated list of member clusters, and then edit the MongoDBMulti CustomResource spec on the central cluster.

To reconfigure the multi-cluster deployment after a cluster failure, replace the failed cluster with the newly provisioned cluster as follows:

  1. Run the multi-cluster kubeconfig creator tool with the new cluster MDB_CLUSTER_4 specified in the -member-clusters flag. This enables the Kubernetes Operator to communicate with the new cluster to schedule MongoDB nodes on it. In the following example, -member-clusters contains ${MDB_CLUSTER_4_FULL_NAME}.

    go run tools/multicluster/main.go \
      -central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
      -member-clusters="${MDB_CLUSTER_4_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
      -member-cluster-namespace="mongodb" \
      -central-cluster-namespace="mongodb"
    
  2. On the central cluster, locate and edit the MongoDBMulti CustomResource spec to add the new cluster name to the clusterSpecList and remove the failed cluster from this list. The resulting list of cluster names should be similar to the following:

    clusterSpecList:
      clusterSpecs:
      - clusterName: ${MDB_CLUSTER_4_FULL_NAME}
        members: 3
      - clusterName: ${MDB_CLUSTER_2_FULL_NAME}
        members: 2
      - clusterName: ${MDB_CLUSTER_3_FULL_NAME}
        members: 3
    
  3. Restart the Kubernetes Operator Pod. After the restart, the Kubernetes Operator should reconcile the MongoDB deployment on the new MDB_CLUSTER_4 cluster that replaces the failed MDB_CLUSTER_1. To learn more about resource reconciliation, see Multi-Cluster Deployment Architecture.
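
    For example, assuming the Deployment name mongodb-enterprise-operator-multi-cluster that this tutorial sets during the Helm installation, you can trigger the restart with:

    kubectl rollout restart deployment \
      mongodb-enterprise-operator-multi-cluster \
      --context=$MDB_CENTRAL_CLUSTER_FULL_NAME \
      --namespace mongodb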