
Deploy an Ops Manager Resource

You can deploy Ops Manager in a container with the Kubernetes Operator.

Prerequisites and Considerations

Before you deploy an Ops Manager resource, make sure you plan your deployment:

Considerations for Ops Manager Deployments over HTTPS

You can configure your deployed Ops Manager resource to run over HTTPS, rather than HTTP. A full description of TLS, PKI (Public Key Infrastructure) certificates, and Certificate Authority is beyond the scope of this tutorial. This tutorial assumes prior knowledge of TLS/SSL as well as access to valid certificates.

Procedure

Select the appropriate tab based on whether you want your Ops Manager instance to run over HTTP or HTTPS:

1

Configure kubectl to default to your namespace.

If you have not already, run the following command to execute all kubectl commands in the namespace you created:

kubectl config set-context $(kubectl config current-context) --namespace=<namespace>
2

Copy the following example Ops Manager Kubernetes object.

Change the highlighted settings to match your desired Ops Manager configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user
  externalConnectivity:
    type: LoadBalancer

  applicationDatabase:
    members: 3
    version: <mongodbversion>
    persistent: true
...
3

Open your preferred text editor and paste the object specification into a new text file.

4

Configure the settings highlighted in the prior example.

metadata.name (string)
  Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less.
  Example: om

spec.replicas (number)
  Number of Ops Manager instances to run in parallel. The minimum valid value is 1.
  For high availability, set this value to more than 1. Multiple Ops Manager instances can read from the same Application Database, ensuring failover if one instance is unavailable and enabling you to update the Ops Manager resource without downtime.
  Example: 1

spec.version (string)
  Version of Ops Manager to install, in X.Y.Z format. To view the available Ops Manager versions, see the container registry.
  Example: 4.2.12

spec.adminCredentials (string)
  Name of the secret you created for the Ops Manager admin user.
  Note: Create the secret in the same namespace as the Ops Manager resource.
  Example: om-admin-secret

spec.externalConnectivity.type (string)
  Optional. The Kubernetes ServiceType that exposes Ops Manager outside of Kubernetes.
  Note: Exclude the spec.externalConnectivity setting and its children if you don’t want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application.
  Example: LoadBalancer

spec.applicationDatabase.members (integer)
  Number of members of the Ops Manager Application Database replica set.
  Example: 3

spec.applicationDatabase.version (string)
  Optional. Version of MongoDB that the Ops Manager Application Database should run. The format is X.Y.Z for the Community edition and X.Y.Z-ent for the Enterprise edition.
  To deploy Ops Manager inside Kubernetes without an Internet connection, omit this setting or leave the value empty; the Kubernetes Operator installs the bundled MongoDB Enterprise version 4.2.2 by default. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual.
  Example: 4.2.2-ent

spec.applicationDatabase.persistent (boolean)
  Optional. Flag indicating whether this MongoDB Kubernetes resource should use Persistent Volumes for storage. Persistent Volumes are not deleted when the MongoDB Kubernetes resource is stopped or restarted.
  If this value is true, spec.applicationDatabase.podSpec.persistence.single is set to its default value of 16Gi.
  To change your Persistent Volume Claims configuration, configure the following collections to meet your deployment requirements:

  • If you want one Persistent Volume for each pod, configure the spec.applicationDatabase.podSpec.persistence.single collection.

  • If you want separate Persistent Volumes for data, journals, and logs for each pod, configure the spec.applicationDatabase.podSpec.persistence.multiple.data, spec.applicationDatabase.podSpec.persistence.multiple.journal, and spec.applicationDatabase.podSpec.persistence.multiple.logs collections.

  Warning: Grant your containers permission to write to your Persistent Volume. The Kubernetes Operator sets fsGroup = 2000 in securityContext, which makes Kubernetes try to fix write permissions for the Persistent Volume. If redeploying the resource does not fix issues with your Persistent Volumes, contact MongoDB Support.
  Example: true
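
For reference, the secret named by spec.adminCredentials might be created as in the following sketch. The secret name and literal values are placeholders, and the Username, Password, FirstName, and LastName keys are the ones described for the admin user secret in the prerequisites:

```shell
# Hypothetical example: create the Ops Manager admin user secret in the
# same namespace as the Ops Manager resource. Replace every value.
kubectl create secret generic om-admin-secret \
  --from-literal=Username="jane.doe@example.com" \
  --from-literal=Password="<password>" \
  --from-literal=FirstName="Jane" \
  --from-literal=LastName="Doe" \
  -n <namespace>
```
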
5

Optional: Configure Backup settings.

If you want to enable backup, you must configure all of the following settings:

spec.backup.enabled (boolean)
  Flag that indicates that Backup is enabled. You must specify spec.backup.enabled: true to configure settings for the head database, oplog store, and snapshot store.
  Example: true

spec.backup.opLogStores.name (string)
  Name of the oplog store.
  Example: oplog1

spec.backup.opLogStores.mongodbResourceRef.name (string)
  Name of the MongoDB database resource for the oplog store.
  Example: my-oplog-db

You must also configure an S3 snapshot store or a blockstore.

Note

If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for Backup.

To configure a snapshot store, configure the following settings:

spec.backup.s3Stores.name (string)
  Name of the S3 snapshot store.
  Example: s3store1

spec.backup.s3Stores.s3SecretRef.name (string)
  Name of the secret that contains the accessKey and secretKey fields. The Backup Daemon Service uses the values of these fields as credentials to access the S3 or S3-compatible bucket.
  Example: my-s3-credentials

spec.backup.s3Stores.s3BucketEndpoint (string)
  URL of the S3 or S3-compatible bucket that stores the database Backup snapshots.
  Example: s3.us-east-1.amazonaws.com

spec.backup.s3Stores.s3BucketName (string)
  Name of the S3 or S3-compatible bucket that stores the database Backup snapshots.
  Example: my-bucket
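
The secret that the snapshot store's s3SecretRef points to can be created ahead of time. The accessKey and secretKey field names come from the description above; the secret name and values here are placeholders:

```shell
# Hypothetical example: store S3 credentials for the snapshot store in
# the same namespace as the Ops Manager resource.
kubectl create secret generic my-s3-credentials \
  --from-literal=accessKey="<aws-access-key-id>" \
  --from-literal=secretKey="<aws-secret-access-key>" \
  -n <namespace>
```
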

To configure a blockstore, configure the following settings:

spec.backup.blockStores.name (string)
  Name of the blockstore.
  Example: blockStore1

spec.backup.blockStores.mongodbResourceRef.name (string)
  Name of the MongoDB database resource that you create for the blockstore. You must deploy this database resource in the same namespace as the Ops Manager resource.
  Example: my-mongodb-blockstore
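
Putting these settings together, the spec.backup section of the Ops Manager resource might look like the following sketch. The store names and references are the example values from the tables above; adjust them to your deployment, and keep the oplog store plus either the S3 store or the blockstore:

```yaml
spec:
  backup:
    enabled: true
    opLogStores:
      - name: oplog1
        mongodbResourceRef:
          name: my-oplog-db
    s3Stores:
      - name: s3store1
        s3SecretRef:
          name: my-s3-credentials
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        s3BucketName: my-bucket
    # Alternatively, configure a blockstore instead of the S3 store:
    # blockStores:
    #   - name: blockStore1
    #     mongodbResourceRef:
    #       name: my-mongodb-blockstore
```
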
6

Optional: Configure any additional settings for an Ops Manager deployment.

Add any optional settings that you want to apply to your deployment to the object specification file.

7

Save this file with a .yaml file extension.

8

Create your Ops Manager instance.

Invoke the following kubectl command on the filename of the Ops Manager resource definition:

kubectl apply -f <opsmgr-resource>.yaml
9

Track the status of your Ops Manager instance.

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

The command returns the following output under the status field while the resource deploys:

status:
 applicationDatabase:
  lastTransition: "2020-04-01T09:49:22Z"
  message: AppDB Statefulset is not ready yet
  phase: Reconciling
  type: ""
  version: ""
 backup:
  phase: ""
 opsManager:
  phase: ""

The Kubernetes Operator reconciles the resources in the following order:

  1. Application Database.
  2. Ops Manager.
  3. Backup.

The Kubernetes Operator doesn’t reconcile a resource until the preceding one enters the Running phase.

After the Ops Manager resource completes the Reconciling phase, the command returns the following output under the status field if you enabled backup:

status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: 4.2.0
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename>
      doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: 4.2.8

Backup remains in a Pending state until you configure the Backup databases.

Tip

The status.opsManager.url field states the resource’s connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.
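
If you only need the URL, for example in a script, a JSONPath output avoids watching the full YAML. The resource name here is a placeholder:

```shell
# Hypothetical example: print only the Ops Manager connection URL.
kubectl get om <myopsmanager> -n <namespace> \
  -o jsonpath='{.status.opsManager.url}'
```
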

10

Access the Ops Manager application.

The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application.

If you exposed Ops Manager with a LoadBalancer service:

  1. Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider’s documentation for details.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.

    http://ops.example.com:8080
    
  3. Log in to Ops Manager using the admin user credentials.

If you exposed Ops Manager with a NodePort service:

  1. Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.

    http://ops.example.com:30036
    
  3. Log in to Ops Manager using the admin user credentials.

To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.

11

Optional: Create credentials for the Kubernetes Operator.

If you enabled Backup, you must create an Ops Manager organization, generate programmatic API keys, and create a secret. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.

12

Optional: Create a project using a ConfigMap.

If you enabled Backup, create a project by following the prerequisites and procedure on the Create One Project using a ConfigMap page.

You must set data.baseUrl in the ConfigMap to the Ops Manager Application’s URL. To find this URL, invoke the following command:

kubectl get om -o yaml -w

The command returns the URL of the Ops Manager Application in the status.opsManager.url field.

status:
  applicationDatabase:
    lastTransition: "2020-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: 4.2.0
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename>
      doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: 4.2.8

Important

If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it’s deployed to, you must set data.baseUrl to the same value as the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.
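
That setting lives under spec.configuration, which passes properties through to the Ops Manager application. A sketch with a placeholder URL:

```yaml
spec:
  configuration:
    mms.centralUrl: "http://om.example.com:8080"  # placeholder URL
```
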

13

Optional: Deploy MongoDB database resources to complete the Backup configuration.

If you enabled Backup, create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.

  1. Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.

    Note

    Create this database as a replica set.

    Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

  2. Choose one of the following:

    1. Deploy a MongoDB database resource for the blockstore in the same namespace as the Ops Manager resource.

      Match the metadata.name of the resource to the spec.backup.blockStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

    2. Configure an S3 bucket to use as the S3 snapshot store.

      Ensure that you can access the S3 bucket using the details that you specified in your Ops Manager resource definition.
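
As an illustration of the oplog store resource from the first step above, a minimal MongoDB replica set resource might look like the following sketch. The project ConfigMap and credentials secret names are placeholders:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-oplog-db   # must match spec.backup.opLogStores.mongodbResourceRef.name
spec:
  type: ReplicaSet
  members: 3
  version: <mongodbversion>
  opsManager:
    configMapRef:
      name: <project-configmap>   # placeholder
  credentials: <credentials-secret>  # placeholder
  persistent: true
```
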

14

Optional: Confirm that the Ops Manager resource is running.

If you enabled backup, check the status of your Ops Manager resource by invoking the following command:

kubectl get om -o yaml -w

When Ops Manager is running, the command returns the following output under the status field:

status:
  applicationDatabase:
    lastTransition: "2020-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: 4.2.0
  backup:
    lastTransition: "2020-04-01T10:00:53Z"
    phase: Running
    version: 4.2.8
  opsManager:
    lastTransition: "2020-04-01T10:00:34Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: 4.2.8

See Troubleshooting the Kubernetes Operator for information about the resource deployment statuses.

1

Configure kubectl to default to your namespace.

If you have not already, run the following command to execute all kubectl commands in the namespace you created:

kubectl config set-context $(kubectl config current-context) --namespace=<namespace>
2

Concatenate your TLS certificate and Private Key.

Important

The Kubernetes Operator requires that the Ops Manager instance’s TLS certificate and Private Key are concatenated into a single file called server.pem.

If your TLS certificate and Private Key are separate files, run the following command to concatenate them:

cat <private-key>.key <tls-certificate>.crt > server.pem
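
Before creating the secret, it can be worth checking that the key and certificate actually belong together. One way to do this, assuming openssl is installed and the key is RSA, is to compare their moduli:

```shell
# Compare the RSA modulus of the private key and the certificate;
# they must be identical for server.pem to be a valid pair.
check_pair() {
  key_mod=$(openssl rsa -noout -modulus -in "$1")
  crt_mod=$(openssl x509 -noout -modulus -in "$2")
  if [ "$key_mod" = "$crt_mod" ]; then
    echo "match"
  else
    echo "mismatch"
  fi
}

# Usage: check_pair <private-key>.key <tls-certificate>.crt
```
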
3

Create a Kubernetes secret for your certificates.

Once you have your TLS certificate and Private Key in a file called server.pem, run the following command to store the certificates in a secret:

kubectl create secret generic om-http-cert --from-file="server.pem" -n <namespace>
4

If necessary, validate your TLS certificate.

If your TLS certificate is signed by a Custom Certificate Authority, you must provide a CA certificate to validate the TLS certificate. To validate the TLS certificate, create a ConfigMap to hold the CA certificate:

kubectl create configmap om-http-cert-ca --from-file="mms-ca.crt"

Important

The Kubernetes Operator requires that the certificate is named mms-ca.crt in the ConfigMap.

5

Copy the following example Ops Manager Kubernetes object.

Change the highlighted settings to match your desired Ops Manager configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user
  security:
    tls:
      secretRef:
        name: <tlscertificate> # Should match metadata.name
                               # in the Kubernetes secret
                               # for the TLS Certificate / Private Key
  externalConnectivity:
    type: LoadBalancer

  applicationDatabase:
    members: 3
    version: <mongodbversion>
    persistent: true
...
6

Open your preferred text editor and paste the object specification into a new text file.

7

Configure the settings highlighted in the prior example.

metadata.name (string)
  Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less.
  Example: om

spec.replicas (number)
  Number of Ops Manager instances to run in parallel. The minimum valid value is 1.
  For high availability, set this value to more than 1. Multiple Ops Manager instances can read from the same Application Database, ensuring failover if one instance is unavailable and enabling you to update the Ops Manager resource without downtime.
  Example: 1

spec.version (string)
  Version of Ops Manager to be installed, in X.Y.Z format. To view the available Ops Manager versions, see the container registry.
  Example: 4.2.12

spec.adminCredentials (string)
  Name of the secret you created for the Ops Manager admin user.
  Note: Create the secret in the same namespace as the Ops Manager resource.
  Example: om-admin-secret

spec.security.tls.secretRef.name (string)
  Name of the secret you created for the TLS certificate.
  Example: om-http-cert

spec.externalConnectivity.type (string)
  Optional. The Kubernetes ServiceType that exposes Ops Manager outside of Kubernetes.
  Note: Exclude the spec.externalConnectivity setting and its children if you don’t want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application.
  Example: LoadBalancer

spec.applicationDatabase.members (integer)
  Number of members of the Ops Manager Application Database replica set.
  Example: 3

spec.applicationDatabase.version (string)
  Optional. Version of MongoDB that the Ops Manager Application Database should run. The format is X.Y.Z for the Community edition and X.Y.Z-ent for the Enterprise edition.
  To deploy Ops Manager inside Kubernetes without an Internet connection, omit this setting or leave the value empty; the Kubernetes Operator installs the bundled MongoDB Enterprise version 4.2.2 by default. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual.
  Example: 4.2.2-ent

spec.applicationDatabase.persistent (boolean)
  Optional. Flag indicating whether this MongoDB Kubernetes resource should use Persistent Volumes for storage. Persistent Volumes are not deleted when the MongoDB Kubernetes resource is stopped or restarted.
  If this value is true, spec.applicationDatabase.podSpec.persistence.single is set to its default value of 16Gi.
  To change your Persistent Volume Claims configuration, configure the following collections to meet your deployment requirements:

  • If you want one Persistent Volume for each pod, configure the spec.applicationDatabase.podSpec.persistence.single collection.

  • If you want separate Persistent Volumes for data, journals, and logs for each pod, configure the spec.applicationDatabase.podSpec.persistence.multiple.data, spec.applicationDatabase.podSpec.persistence.multiple.journal, and spec.applicationDatabase.podSpec.persistence.multiple.logs collections.

  Warning: Grant your containers permission to write to your Persistent Volume. The Kubernetes Operator sets fsGroup = 2000 in securityContext, which makes Kubernetes try to fix write permissions for the Persistent Volume. If redeploying the resource does not fix issues with your Persistent Volumes, contact MongoDB Support.
  Example: true
8

Optional: Configure Backup settings.

If you want to enable backup for your Ops Manager instance, you must configure all of the following settings:

spec.backup.enabled (boolean)
  Flag that indicates that Backup is enabled for your Ops Manager instance. You must specify spec.backup.enabled: true to configure settings for the head database, oplog store, and snapshot store.
  Example: true

spec.backup.opLogStores.name (string)
  Name of the oplog store.
  Example: oplog1

spec.backup.opLogStores.mongodbResourceRef.name (string)
  Name of the MongoDB database resource for the oplog store.
  Example: my-oplog-db

You must also configure an S3 snapshot store or a blockstore.

Note

If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for Backup.

To configure a snapshot store, configure the following settings:

spec.backup.s3Stores.name (string)
  Name of the S3 snapshot store.
  Example: s3store1

spec.backup.s3Stores.s3SecretRef.name (string)
  Name of the secret that contains the accessKey and secretKey fields. The Backup Daemon Service uses the values of these fields as credentials to access the S3 or S3-compatible bucket.
  Example: my-s3-credentials

spec.backup.s3Stores.s3BucketEndpoint (string)
  URL of the S3 or S3-compatible bucket that stores the database Backup snapshots.
  Example: s3.us-east-1.amazonaws.com

spec.backup.s3Stores.s3BucketName (string)
  Name of the S3 or S3-compatible bucket that stores the database Backup snapshots.
  Example: my-bucket

To configure a blockstore, configure the following settings:

spec.backup.blockStores.name (string)
  Name of the blockstore.
  Example: blockStore1

spec.backup.blockStores.mongodbResourceRef.name (string)
  Name of the MongoDB database resource that you create for the blockstore. You must deploy this database resource in the same namespace as the Ops Manager resource.
  Example: my-mongodb-blockstore
9

Optional: Configure any additional settings for an Ops Manager deployment.

Add any optional settings that you want to apply to your deployment to the object specification file.

10

Save this file with a .yaml file extension.

11

Create your Ops Manager instance.

Invoke the following kubectl command on the filename of the Ops Manager resource definition:

kubectl apply -f <opsmgr-resource>.yaml
12

Track the status of your Ops Manager instance.

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

The command returns the following output under the status field while the resource deploys:

status:
 applicationDatabase:
  lastTransition: "2020-04-01T09:49:22Z"
  message: AppDB Statefulset is not ready yet
  phase: Reconciling
  type: ""
  version: ""
 backup:
  phase: ""
 opsManager:
  phase: ""

The Kubernetes Operator reconciles the resources in the following order:

  1. Application Database.
  2. Ops Manager.
  3. Backup.

The Kubernetes Operator doesn’t reconcile a resource until the preceding one enters the Running phase.

After the Ops Manager resource completes the Reconciling phase, the command returns the following output under the status field if you enabled backup:

status:
  applicationDatabase:
    lastTransition: "2020-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: 4.2.0
  backup:
    lastTransition: "2020-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename>
      doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2020-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: 4.2.8

Backup remains in a Pending state until you configure the Backup databases.

Tip

The status.opsManager.url field states the resource’s connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.


13

Access the Ops Manager application.

The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application.

If you exposed Ops Manager with a LoadBalancer service:

  1. Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider’s documentation for details.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.

    http://ops.example.com:8080
    
  3. Log in to Ops Manager using the admin user credentials.

If you exposed Ops Manager with a NodePort service:

  1. Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.

    http://ops.example.com:30036
    
  3. Log in to Ops Manager using the admin user credentials.

To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.

14

Create credentials for the Kubernetes Operator.

To configure credentials, you must create an Ops Manager organization, generate programmatic API keys, and create a secret. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.

15

Create a project using a ConfigMap.

To create a project, follow the prerequisites and procedure on the Create One Project using a ConfigMap page.

Set the following fields in your project ConfigMap:

  • Set data.baseUrl in the ConfigMap to the Ops Manager Application’s URL. To find this URL, invoke the following command:


    kubectl get om -o yaml -w
    

    The command returns the URL of the Ops Manager Application in the status.opsManager.url field.


      status:
        applicationDatabase:
          lastTransition: "2019-12-06T18:23:22Z"
          members: 3
          phase: Running
          type: ReplicaSet
          version: 4.2.2-ent
        opsManager:
          lastTransition: "2019-12-06T18:23:26Z"
          message: The MongoDB object namespace/oplogdbname doesn't exist
          phase: Pending
          url: http://om-svc.dev.svc.cluster.local:8080
          version: ""
    

    Important

    If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it’s deployed to, you must set data.baseUrl to the same value as the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.

  • Set data.sslMMSCAConfigMap to the name of the ConfigMap that contains the root CA certificate used to sign the Ops Manager host’s certificate. The Kubernetes Operator requires the certificate entry in this ConfigMap to be named mms-ca.crt.
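
Combined, the project ConfigMap might look like the following sketch. The metadata values, project name, and organization ID are placeholders, and baseUrl must be the URL reported in status.opsManager.url:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project        # placeholder; referenced by your MongoDB resources
  namespace: <namespace>
data:
  projectName: <myOpsManagerProjectName>   # placeholder
  orgId: <orgid>                           # placeholder
  baseUrl: <opsmanagerurl>                 # value of status.opsManager.url
  sslMMSCAConfigMap: om-http-cert-ca
```
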

16

Deploy MongoDB database resources to complete the Backup configuration.

By default, Ops Manager enables Backup. Create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.

  1. Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.

    Note

    Create this database as a three-member replica set.

    Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

  2. Deploy a MongoDB database resource for the S3 snapshot store in the same namespace as the Ops Manager resource.

    Note

    Create the S3 snapshot store as a replica set.

    Match the metadata.name of the resource to the spec.backup.s3Stores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

17

Confirm that the Ops Manager resource is running.

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

When Ops Manager is running, the command returns the following output under the status field:

status:
  applicationDatabase:
    lastTransition: "2019-12-06T17:46:15Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: 4.2.2-ent
  opsManager:
    lastTransition: "2019-12-06T17:46:32Z"
    phase: Running
    replicas: 1
    url: http://om-backup-svc.dev.svc.cluster.local:8080
    version: 4.2.6

See Troubleshooting the Kubernetes Operator for information about the resource deployment statuses.