Multi-Cluster Quick Start

Important

Use the beta release of the Multi-Cluster Deployments only in development environments.

Overview

Using Multi-Cluster Deployments, you can deploy MongoDB Enterprise Kubernetes Operator to manage MongoDB deployments that span more than one Kubernetes cluster.

This tutorial demonstrates how you can use the Kubernetes Operator to deploy a MongoDB replica set across three Kubernetes member clusters, using GKE (Google Kubernetes Engine) and Istio service mesh.

The beta release of Multi-Cluster Deployments offers different levels of availability, depending on the needs of your enterprise application. You can use this tutorial to deploy:

  • Single Region, Multi AZ. One or more Kubernetes clusters where each cluster has nodes deployed in different zones in the same region. Such deployments protect MongoDB instances backing your enterprise applications against failures and offer increased availability, disaster recovery, and data distribution within one cloud region.

  • Multi Region. One or more Kubernetes clusters where you:

    • Deploy each cluster in a different region, and
    • Within each region, deploy cluster nodes in different availability zones.

    Such deployments allow you to add MongoDB instances in global clusters that span multiple geographic regions for increased availability and global distribution of data.

Istio manages the discovery of MongoDB nodes deployed in different Kubernetes member clusters. Each Multi-Cluster Deployment that uses Istio comprises one Kubernetes central cluster and one or more member clusters.

  • Central cluster contains:
    • MongoDB Enterprise Kubernetes Operator
    • Ops Manager, if you deploy it with the Kubernetes Operator
    • The MongoDBMulti CustomResource spec for the MongoDB replica set
  • Member Kubernetes clusters host the MongoDB replica sets.

You can host your application on any cluster inside the Istio service mesh: on Kubernetes clusters other than the ones that you deploy with the Kubernetes Operator, or on the member clusters that you deploy as part of this tutorial.

To learn more, see the Multi-Cluster Deployment Architecture.

Services and Tools

This tutorial relies on the following services, tools, and their documentation:

  • Kubernetes clusters. This tutorial uses GKE (Google Kubernetes Engine) to provision multiple Kubernetes clusters. Each Kubernetes member cluster hosts a MongoDB replica set deployment and represents a data center that serves your application.
  • Istio service mesh. This tutorial uses Istio to facilitate DNS resolution for MongoDB replica sets deployed in different Kubernetes clusters.
  • MongoDB Enterprise Kubernetes Operator repository, which contains the configuration files that you need to deploy the Kubernetes Operator and MongoDB resources.
  • Istio's Install Multicluster documentation.
  • install_istio_separate_network script that is based on Istio documentation and provides an example installation that uses the multi-primary mode on different networks.
  • multi-cluster kubeconfig creator tool that performs the following actions:
    • Creates a single mongodb namespace in the central cluster and each member cluster.
    • Creates Service Accounts, Roles, and RoleBindings in the central cluster and each member cluster.
    • Puts Service Account token secrets from each member cluster into a single kubeconfig file and saves the file in the central cluster. This enables authorized access from the Kubernetes Operator installed in the central cluster to the member clusters.

Prerequisites

This tutorial requires that you:

  1. Set the environment variables with cluster names and the available GKE zones where you deploy the cluster, as in this example:

    export MDB_GKE_PROJECT={GKE project name}
    
    export MDB_CENTRAL_CLUSTER="mdb-central"
    export MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"
    
    export MDB_CLUSTER_1="mdb-1"
    export MDB_CLUSTER_1_ZONE="us-west1-b"
    
    export MDB_CLUSTER_2="mdb-2"
    export MDB_CLUSTER_2_ZONE="us-east1-b"
    
    export MDB_CLUSTER_3="mdb-3"
    export MDB_CLUSTER_3_ZONE="us-central1-a"
    
    export MDB_CENTRAL_CLUSTER_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CENTRAL_CLUSTER_ZONE}_${MDB_CENTRAL_CLUSTER}"
    
    export MDB_CLUSTER_1_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_1_ZONE}_${MDB_CLUSTER_1}"
    export MDB_CLUSTER_2_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_2_ZONE}_${MDB_CLUSTER_2}"
    export MDB_CLUSTER_3_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_3_ZONE}_${MDB_CLUSTER_3}"
    
  2. Set up GKE (Google Kubernetes Engine) clusters:

    1. Set up your Google Cloud account and the gcloud tool, using the Google Kubernetes Engine Quickstart.

    2. Create one central cluster and one or more member clusters, specifying the zones, the number of nodes, and the instance types, as in these examples:

      gcloud container clusters create $MDB_CENTRAL_CLUSTER \
        --zone=$MDB_CENTRAL_CLUSTER_ZONE \
        --num-nodes=5 \
        --machine-type "e2-standard-2"
      
      gcloud container clusters create $MDB_CLUSTER_1 \
        --zone=$MDB_CLUSTER_1_ZONE \
        --num-nodes=5 \
        --machine-type "e2-standard-2"
      
      gcloud container clusters create $MDB_CLUSTER_2 \
        --zone=$MDB_CLUSTER_2_ZONE \
        --num-nodes=5 \
        --machine-type "e2-standard-2"
      
      gcloud container clusters create $MDB_CLUSTER_3 \
        --zone=$MDB_CLUSTER_3_ZONE \
        --num-nodes=5 \
        --machine-type "e2-standard-2"
      
  3. Obtain user authentication credentials for the central cluster and member clusters and save them. You will use these credentials later to run kubectl commands against these Kubernetes clusters. Run the following commands:

    gcloud container clusters get-credentials $MDB_CENTRAL_CLUSTER \
      --zone=$MDB_CENTRAL_CLUSTER_ZONE
    
    gcloud container clusters get-credentials $MDB_CLUSTER_1 \
      --zone=$MDB_CLUSTER_1_ZONE
    
    gcloud container clusters get-credentials $MDB_CLUSTER_2 \
      --zone=$MDB_CLUSTER_2_ZONE
    
    gcloud container clusters get-credentials $MDB_CLUSTER_3 \
      --zone=$MDB_CLUSTER_3_ZONE
    
  4. Install Istio in a multi-primary mode on different networks, using the install_istio_separate_network script. To learn more, see the Install Multicluster documentation from Istio.

  5. Install Go v1.16 or later.

  6. Install Helm.
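Before proceeding, you can sanity-check how the `*_FULL_NAME` variables from step 1 resolve. GKE kubeconfig context names follow the `gke_<project>_<zone>_<cluster>` convention, which is exactly what the export lines compose. A minimal sketch, using the tutorial's example zone and cluster name with a placeholder project name:

```shell
# Recompose one full context name from its parts.
# MDB_GKE_PROJECT is a placeholder here; substitute your real GKE project name.
MDB_GKE_PROJECT="my-gke-project"
MDB_CLUSTER_1="mdb-1"
MDB_CLUSTER_1_ZONE="us-west1-b"
MDB_CLUSTER_1_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_1_ZONE}_${MDB_CLUSTER_1}"
echo "$MDB_CLUSTER_1_FULL_NAME"
# → gke_my-gke-project_us-west1-b_mdb-1
```

After you run gcloud container clusters get-credentials, the printed name should match an entry in the output of kubectl config get-contexts.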

Procedure

1

Clone the MongoDB Enterprise Kubernetes Operator repository.

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
2

Run the multi-cluster kubeconfig creator tool.

By default, the Kubernetes Operator uses the mongodb namespace. To simplify your installation, the tool creates the mongodb namespace, along with the required Service Accounts, Roles, and RoleBindings, in the central cluster and in each of the three member clusters.

  1. Change to the directory in which you cloned the repository.

  2. Run the multi-cluster kubeconfig creator tool:

    go run tools/multicluster/main.go \
      -central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
      -member-clusters="${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
      -member-cluster-namespace="mongodb" \
      -central-cluster-namespace="mongodb"
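If you want to confirm that the tool completed the actions listed above, one way (assuming the context names set in the prerequisites) is to inspect the namespace and service accounts it created in a member cluster:

```shell
# The mongodb namespace should exist in each member cluster, along with
# the service accounts the tool created. Service account names can vary
# by Kubernetes Operator version, so inspect the list rather than
# checking for an exact name.
kubectl get namespace mongodb \
  --context=$MDB_CLUSTER_1_FULL_NAME
kubectl get serviceaccounts \
  --context=$MDB_CLUSTER_1_FULL_NAME \
  --namespace mongodb
```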
    
3

Set the Istio injection webhook in each member cluster.

In each member cluster, label namespaces with the istio-injection=enabled label to enable Istio’s injection webhook. This ensures that any Pods that you create in these namespaces will have a sidecar added to them. To learn more, see Automatic sidecar injection in the Istio documentation.

kubectl label \
  --context=$MDB_CLUSTER_1_FULL_NAME \
  --namespace mongodb \
  istio-injection=enabled

kubectl label \
  --context=$MDB_CLUSTER_2_FULL_NAME \
  --namespace mongodb \
  istio-injection=enabled

kubectl label \
  --context=$MDB_CLUSTER_3_FULL_NAME \
  --namespace mongodb \
  istio-injection=enabled
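To confirm that the label took effect, you can list the namespace labels in a member cluster (shown here for the first member cluster only):

```shell
# The LABELS column for the mongodb namespace should include
# istio-injection=enabled.
kubectl get namespace mongodb \
  --context=$MDB_CLUSTER_1_FULL_NAME \
  --show-labels
```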
4

Configure kubectl to use the central cluster’s namespace.

If you have not done so already, run the following commands so that all kubectl commands run against the central cluster in the mongodb namespace. In the following steps, you will install the Kubernetes Operator into this namespace.

kubectl config use-context $MDB_CENTRAL_CLUSTER_FULL_NAME
kubectl config set-context $(kubectl config current-context) \
  --namespace=mongodb
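You can verify that subsequent kubectl commands will target the central cluster and the mongodb namespace:

```shell
# Should print the central cluster's context name.
kubectl config current-context

# Should print "mongodb", the namespace set on the current context.
kubectl config view --minify --output "jsonpath={..namespace}"
```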
5

Install the MongoDB Enterprise Kubernetes Operator in the central cluster.

Use Helm to install the Kubernetes Operator for managing your Multi-Cluster Deployment:

helm upgrade \
  --install \
  mongodb-enterprise-operator-multi-cluster \
  public/helm_chart \
  --namespace mongodb \
  --set namespace=mongodb \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set "multiCluster.clusters={${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}}"
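After the Helm release completes, you can check that the Kubernetes Operator Deployment is running in the central cluster. The Deployment name below matches the operator.name value set above:

```shell
# The Operator Pod should reach the Running state within a few minutes.
kubectl get pods \
  --context=$MDB_CENTRAL_CLUSTER_FULL_NAME \
  --namespace mongodb

# The Helm release should show a "deployed" status.
helm list --namespace mongodb
```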
6

Deploy the MongoDB resource.

  1. Create a secret in each member cluster so that the Kubernetes Operator can create and update objects in your Ops Manager project. To learn more, see Create Credentials for the Kubernetes Operator.

  2. Create a ConfigMap in each member cluster to link the Kubernetes Operator to your Ops Manager project. To learn more, see Create One Project using a ConfigMap.

  3. Configure the required service accounts in each member cluster:

    helm template --show-only \
      templates/database-roles.yaml \
      public/helm_chart \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_1_FULL_NAME \
      --namespace mongodb
    
    helm template --show-only \
      templates/database-roles.yaml \
      public/helm_chart \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_2_FULL_NAME \
      --namespace mongodb
    
    helm template --show-only \
      templates/database-roles.yaml \
      public/helm_chart \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_3_FULL_NAME \
      --namespace mongodb
    
  4. Set spec.credentials and spec.opsManager.configMapRef.name and deploy the MongoDB resource. In the following code sample, duplicateServiceObjects is set to true to enable DNS proxying in Istio.

    Note

    To enable cross-cluster DNS resolution by the Istio service mesh, this tutorial creates service objects with a single ClusterIP address for each Kubernetes Pod.

    kubectl apply -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBMulti
    metadata:
      name: multi-replica-set
    spec:
      version: 4.4.0-ent
      type: ReplicaSet
      persistent: false
      duplicateServiceObjects: true
      credentials: my-credentials
      security:
        authentication:
          enabled: true
          modes: ["SCRAM"]
      opsManager:
        configMapRef:
          name: my-project
      clusterSpecList:
        clusterSpecs:
          - clusterName: ${MDB_CLUSTER_1_FULL_NAME}
            members: 3
          - clusterName: ${MDB_CLUSTER_2_FULL_NAME}
            members: 2
          - clusterName: ${MDB_CLUSTER_3_FULL_NAME}
            members: 3
    EOF
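The my-credentials secret and my-project ConfigMap referenced in the resource above are created in the procedures linked in sub-steps 1 and 2. As a hedged sketch only, they typically take the following shape; the key names and placeholder values here are illustrative, so follow the linked pages for the authoritative steps:

```shell
# Illustrative only: an Ops Manager API key secret and project ConfigMap.
# Replace the placeholder values with your Ops Manager API key pair,
# Ops Manager URL, and organization ID.
kubectl create secret generic my-credentials \
  --from-literal="user=<publicKey>" \
  --from-literal="publicApiKey=<privateKey>" \
  --namespace mongodb

kubectl create configmap my-project \
  --from-literal="baseUrl=https://<ops-manager-host>:8443" \
  --from-literal="projectName=multi-replica-set" \
  --from-literal="orgId=<orgId>" \
  --namespace mongodb
```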
    
7

Verify that the MongoDB resources are running.

  1. For member clusters, run the following commands to verify that the MongoDB Pods are in the running state:

    kubectl get pods \
      --context=$MDB_CLUSTER_1_FULL_NAME \
      --namespace mongodb
    
    kubectl get pods \
      --context=$MDB_CLUSTER_2_FULL_NAME \
      --namespace mongodb
    
    kubectl get pods \
      --context=$MDB_CLUSTER_3_FULL_NAME \
      --namespace mongodb
    
  2. In the central cluster, run the following command to verify that the MongoDBMulti CustomResource reaches the running state:

    kubectl --context=$MDB_CENTRAL_CLUSTER_FULL_NAME \
      --namespace mongodb \
      get mdbm multi-replica-set -o yaml -w