
Configure an Ops Manager Resource to use Local Mode

Important

Configuring Ops Manager to use Local Mode in Kubernetes is not recommended. Consider configuring Ops Manager to use Remote Mode instead.

In a default configuration, the MongoDB Agents and Backup Daemons access MongoDB installation archives over the Internet from MongoDB, Inc.

You can configure Ops Manager to run in Local Mode with the Kubernetes Operator if the nodes in your Kubernetes cluster don’t have access to the Internet. The Backup Daemons and managed MongoDB resources download installation archives only from a Persistent Volume that you create for the Ops Manager StatefulSet.

This procedure covers uploading installation archives to Ops Manager.

Considerations

When you upgrade the version of an Ops Manager resource in Local Mode, you might need to install the latest version of the MongoDB Database Tools.

Ops Manager Version   Action
4.4.4 or later        No installation necessary.
4.4.0 - 4.4.3         Install the latest version of the MongoDB Database Tools.
                      automation.versions.directory specifies the location of the
                      Database Tools, which defaults to
                      /mongodb-ops-manager/mongodb-releases/.
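
Ops Manager reads this setting from spec.configuration, alongside the automation.versions.source setting that this tutorial uses to enable Local Mode. A minimal sketch, with the default directory spelled out for illustration only:

spec:
  configuration:
    automation.versions.source: local
    # where the Backup Daemons and MongoDB Agents find installation archives
    automation.versions.directory: /mongodb-ops-manager/mongodb-releases/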

Prerequisites

  • Deploy an Ops Manager Resource. The following procedure shows you how to update your Ops Manager Kubernetes object to enable Local Mode.

  • To avoid downtime when you enable Local Mode, ensure that you set spec.replicas to a value greater than 1 in your Ops Manager resource definition.

    If you updated your Ops Manager resource definition to make Ops Manager highly available, apply your changes before you begin this tutorial:

    kubectl apply -f <opsmgr-resource>.yaml -n <namespace>
    

Procedure

1

Configure kubectl to default to your namespace.

If you have not done so already, run the following command to execute all kubectl commands in the namespace you created:

kubectl config set-context $(kubectl config current-context) --namespace=<namespace>
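
Optionally, confirm that the namespace is now the default for your current context:

kubectl config view --minify | grep namespace: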
2

Delete the StatefulSet that manages your Ops Manager Pods.

In this tutorial, you update the StatefulSet that manages the Ops Manager Pods in your Kubernetes cluster.

You must first delete the Ops Manager StatefulSet so that Kubernetes can apply the updates that Local Mode requires.

  1. Find the name of your Ops Manager StatefulSet:

    kubectl get statefulsets
    

    Your Ops Manager StatefulSet is the entry in the response that matches the metadata.name in your Ops Manager resource definition.

    kubectl get statefulsets -n mongodb
    NAME                       READY   AGE
    ops-manager-localmode      2/2     2m31s
    ops-manager-localmode-db   3/3     4m46s
    
  2. Delete the Ops Manager StatefulSet:

    Warning

    Ensure that you include the --cascade=false flag when you delete your Ops Manager StatefulSet. If you don’t include this flag, Kubernetes also deletes your Ops Manager Pods.

    kubectl delete statefulset --cascade=false <ops-manager-statefulset>
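
    For example, using the StatefulSet name from the sample output above:

    kubectl delete statefulset --cascade=false ops-manager-localmode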
    
3

Copy the relevant fields of this example Ops Manager resource.

The relevant section:

  • Uses the Ops Manager configuration setting automation.versions.source: local in spec.configuration to enable Local Mode.
  • Defines a Persistent Volume for the Ops Manager StatefulSet to store the MongoDB installation archive. MongoDB Agents running in MongoDB database resource containers that you create with the Kubernetes Operator download the installation archives from Ops Manager instead of from the Internet.
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager-localmode
spec:
  replicas: 2
  version: "4.2.12"
  adminCredentials: ops-manager-admin-secret
  configuration:
    # this enables local mode in Ops Manager
    automation.versions.source: local

  statefulSet:
    spec:
      # the Persistent Volume Claim will be created for each Ops Manager Pod
      volumeClaimTemplates:
        - metadata:
            name: mongodb-versions
          spec:
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: "20Gi"
      template:
        spec:
          containers:
            - name: mongodb-ops-manager
              volumeMounts:
                - name: mongodb-versions
                  # this is the directory in each Pod where all MongoDB
                  # archives must be put
                  mountPath: /mongodb-ops-manager/mongodb-releases

  backup:
    enabled: false

  applicationDatabase:
    members: 3
    persistent: true
4

Paste the copied example section into your existing Ops Manager resource.

Open your preferred text editor and paste the object specification into the appropriate location in your resource file.

5

Save your Ops Manager config file.

6

Apply changes to your Ops Manager deployment.

  1. Invoke the following kubectl command on the filename of the Ops Manager resource definition:

    kubectl apply -f <opsmgr-resource>.yaml
    
  2. Kubernetes creates a new Ops Manager StatefulSet when you apply the changes to your Ops Manager resource definition. Before proceeding to the next step, run the following command to ensure that the Ops Manager StatefulSet exists:

    kubectl get statefulsets
    

    The new Ops Manager StatefulSet should show 0 members ready:

    kubectl get statefulsets -n mongodb
    NAME                       READY   AGE
    ops-manager-localmode      0/2     2m31s
    ops-manager-localmode-db   3/3     4m46s
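
    You can also confirm that Kubernetes created the Persistent Volume Claims defined in volumeClaimTemplates for each Ops Manager Pod:

    kubectl get pvc -n mongodb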
    
7

In a rolling fashion, delete your old Ops Manager Pods.

  1. List the Ops Manager Pods in your Kubernetes cluster:

    kubectl get pods
    
  2. Delete one Ops Manager Pod:

    kubectl delete pod <om-pod-0>
    
  3. Kubernetes recreates the Ops Manager Pod you deleted. Continue to get the status of the new Pod until it is ready:

    kubectl get pods
    

    The output looks like the following when the new Pod is initializing:

    NAME                                          READY   STATUS    RESTARTS   AGE
    mongodb-enterprise-operator-5648d4c86-k5brh   1/1     Running   0          5m24s
ops-manager-localmode-0                       0/1     Running   0          55s
    ops-manager-localmode-1                       1/1     Running   0          5m45s
    ops-manager-localmode-db-0                    1/1     Running   0          5m19s
    ops-manager-localmode-db-1                    1/1     Running   0          4m54s
    ops-manager-localmode-db-2                    1/1     Running   0          4m12s
    

    The output looks like the following when the new Pod is ready:

    NAME                                          READY   STATUS    RESTARTS   AGE
    mongodb-enterprise-operator-5648d4c86-k5brh   1/1     Running   0          5m24s
    ops-manager-localmode-0                       1/1     Running   0          3m55s
    ops-manager-localmode-1                       1/1     Running   0          5m45s
    ops-manager-localmode-db-0                    1/1     Running   0          5m19s
    ops-manager-localmode-db-1                    1/1     Running   0          4m54s
    ops-manager-localmode-db-2                    1/1     Running   0          4m12s
    
  4. Repeat Steps b and c until you’ve deleted all of your Ops Manager Pods and confirmed that all of the new Pods are ready.

8

Track the status of your Ops Manager instance.

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.

After the Ops Manager resource completes the Reconciling phase, the command returns output similar to the following:

status:
  applicationDatabase:
    lastTransition: "2020-05-15T16:20:22Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "4.2.11-ent"
  backup:
    phase: ""
  opsManager:
    lastTransition: "2020-05-15T16:20:26Z"
    phase: Running
    replicas: 1
    url: http://ops-manager-localmode-svc.mongodb.svc.cluster.local:8080
    version: "4.2.12"

Copy the value of the status.opsManager.url field, which states the resource’s connection URL. You use this value when you create a ConfigMap later in the procedure.
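
For reference, a project ConfigMap that consumes this URL might look like the following sketch. The name and projectName values are placeholders that you choose; baseUrl takes the value that you copied from status.opsManager.url:

apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap-name>      # placeholder; you choose this name
  namespace: mongodb
data:
  projectName: <project-name> # placeholder; the Ops Manager project to create
  baseUrl: http://ops-manager-localmode-svc.mongodb.svc.cluster.local:8080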

9

Download the MongoDB installation archive to your local machine.

The installers that you download depend on the environment to which you deployed the operator:

Note

The examples below provide you with a link to quickly download the specified versions of MongoDB Community Edition and the MongoDB Database Tools.

To download MongoDB Enterprise Edition, or any other version of MongoDB Community Edition, visit the MongoDB Download Center.

Kubernetes

  1. Download the Ubuntu 16.04 installation tarball for the MongoDB Server version you want the Kubernetes Operator to deploy. For example, to download the 4.4.0 release:

    curl -OL https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1604-4.4.0.tgz
    
  2. If you deployed a version of Ops Manager from 4.4.0 up to and including 4.4.3, you must also download the Ubuntu 16.04 MongoDB Database Tools installation tarball. For example, to download the 100.1.0 release:

    curl -OL https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu1604-x86_64-100.1.0.tgz
    
OpenShift

  1. Download the RHEL installation tarball for the MongoDB Server version you want the Kubernetes Operator to deploy. For example, to download the 4.4.0 release:

    curl -OL https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel<version>-4.4.0.tgz
    
  2. If you deployed a version of Ops Manager from 4.4.0 up to and including 4.4.3, you must also download the RHEL MongoDB Database Tools installation tarball. For example, to download the 100.1.0 release:

    curl -OL https://fastdl.mongodb.org/tools/db/mongodb-database-tools-rhel<version>-x86_64-100.1.0.tgz
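
Optionally, sanity-check each downloaded tarball before copying it; tar exits with a non-zero status if an archive is truncated or corrupt. For example:

tar -tzf mongodb-linux-x86_64-rhel<version>-4.4.0.tgz > /dev/null && echo OK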
    
10

Copy the MongoDB archive to the Ops Manager Persistent Volume.

For each MongoDB version that you intend to deploy, copy its archive to the Ops Manager Persistent Volume.

The commands that you use depend on the environment to which you deployed the Kubernetes Operator:

Note

If you deployed more than one Ops Manager replica, copy only the MongoDB installation tarball packages to Replica 1 and beyond.

Kubernetes

To copy the MongoDB installation archive to the Ops Manager PersistentVolume:

  1. Copy the MongoDB Server installation tarball to the Ops Manager PersistentVolume. For example, to copy the 4.4.0 release:

    Replica 0:
    kubectl cp mongodb-linux-x86_64-ubuntu1604-4.4.0.tgz \
    "ops-manager-localmode-0:/mongodb-ops-manager/mongodb-releases/mongodb-linux-x86_64-ubuntu1604-4.4.0.tgz"
    
    Replica 1:
    kubectl cp mongodb-linux-x86_64-ubuntu1604-4.4.0.tgz \
    "ops-manager-localmode-1:/mongodb-ops-manager/mongodb-releases/mongodb-linux-x86_64-ubuntu1604-4.4.0.tgz"
    
  2. If you deployed a version of Ops Manager from 4.4.0 up to and including 4.4.3, copy the MongoDB Database Tools installation tarball to the Ops Manager PersistentVolume. For example, to copy the 100.1.0 release:

    Replica 0:
    kubectl cp mongodb-database-tools-ubuntu1604-x86_64-100.1.0.tgz \
    "ops-manager-localmode-0:/mongodb-ops-manager/mongodb-releases/mongodb-database-tools-ubuntu1604-x86_64-100.1.0.tgz"
    
    Replica 1:
    kubectl cp mongodb-database-tools-ubuntu1604-x86_64-100.1.0.tgz \
    "ops-manager-localmode-1:/mongodb-ops-manager/mongodb-releases/mongodb-database-tools-ubuntu1604-x86_64-100.1.0.tgz"
    

OpenShift

To copy the MongoDB installation archive to the Ops Manager PersistentVolume:

  1. Copy the MongoDB Server installation tarball to the Ops Manager PersistentVolume. For example, to copy the 4.4.0 release:

    Replica 0:
    oc cp mongodb-linux-x86_64-rhel<version>-4.4.0.tgz \
    "ops-manager-localmode-0:/mongodb-ops-manager/mongodb-releases/mongodb-linux-x86_64-rhel<version>-4.4.0.tgz"

    Replica 1:
    oc cp mongodb-linux-x86_64-rhel<version>-4.4.0.tgz \
    "ops-manager-localmode-1:/mongodb-ops-manager/mongodb-releases/mongodb-linux-x86_64-rhel<version>-4.4.0.tgz"

  2. If you deployed a version of Ops Manager from 4.4.0 up to and including 4.4.3, copy the MongoDB Database Tools installation tarball to the Ops Manager PersistentVolume. For example, to copy the 100.1.0 release:

    Replica 0:
    oc cp mongodb-database-tools-rhel<version>-x86_64-100.1.0.tgz \
    "ops-manager-localmode-0:/mongodb-ops-manager/mongodb-releases/mongodb-database-tools-rhel<version>-x86_64-100.1.0.tgz"

    Replica 1:
    oc cp mongodb-database-tools-rhel<version>-x86_64-100.1.0.tgz \
    "ops-manager-localmode-1:/mongodb-ops-manager/mongodb-releases/mongodb-database-tools-rhel<version>-x86_64-100.1.0.tgz"
    
11

Deploy a MongoDB Database Resource.

  1. If you have not done so already, complete the following prerequisites: create the secret that contains your Ops Manager programmatic API key pair, and create a ConfigMap for your Ops Manager project.
  2. Deploy a MongoDB database resource in the same namespace to which you deployed Ops Manager. Ensure that you:
    1. Match the spec.opsManager.configMapRef.name of the resource to the metadata.name of your ConfigMap.
    2. Match the spec.credentials of the resource to the name of the secret you created that contains an Ops Manager programmatic API key pair.

MongoDB Agents run in the MongoDB database resource containers that you create with the Kubernetes Operator, and they download the installation archives from Ops Manager instead of from the Internet.
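
For illustration only, a minimal replica set resource that satisfies these requirements might look like the following sketch. The <configmap-name> and <credentials-secret> placeholders stand for the ConfigMap and secret described above, and spec.version must match a MongoDB archive that you copied to the Persistent Volume:

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  type: ReplicaSet
  version: "4.4.0"   # must match an uploaded installation archive
  opsManager:
    configMapRef:
      name: <configmap-name>        # metadata.name of your ConfigMap
  credentials: <credentials-secret> # secret with your API key pair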