Restore a Sharded Cluster

Overview

Important

In version 3.4, MongoDB removes support for SCCC config servers. To upgrade your config servers from SCCC to CSRS, see Upgrade Config Servers to Replica Set.

The following procedure applies to 3.4 config servers.

You can restore a sharded cluster either from snapshots or from BSON database dumps created by the mongodump tool. This document describes procedures to restore a sharded cluster from each type of backup.

Procedures

Changed in version 3.4.

For MongoDB 3.4 sharded clusters, mongod instances for the shards must explicitly specify their role as a shardsvr, either via the configuration file setting sharding.clusterRole or via the command line option --shardsvr.

Note

The default port for mongod instances with the shardsvr role is 27018. To use a different port, specify the net.port setting or the --port option.

The following procedures assume shard mongod instances include the --shardsvr and --port options (or corresponding settings in the configuration file).
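With that convention, a shard member invocation might look like the following sketch; the replica set name, dbpath, and port here are illustrative assumptions, not values from this procedure:

```shell
# Illustrative only: "shardA" and /data/shardA are placeholder values.
mongod --shardsvr --replSet shardA --port 27018 --dbpath /data/shardA
```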

Restore a Sharded Cluster with Filesystem Snapshots

The following procedure outlines the steps to restore a sharded cluster from filesystem snapshots. To create filesystem snapshots of sharded clusters, see Back Up a Sharded Cluster with File System Snapshots.

1

Shut down the entire cluster.

Stop all mongos and mongod processes, including all shards and all config servers. To stop all members, connect to each member and issue the following operations:

use admin
db.shutdownServer()
2

Restore the data files.

On each server, extract the data files to the location where the mongod instance will access them. Restore the data files for each member of the shard replica sets and for each config server.

See also

Restore a Snapshot.

3

Restart the config servers.

Restart each member of the CSRS.

mongod --configsvr --replSet <CSRS name> --dbpath <config dbpath> --port 27019
4

Start one mongos instance.

Start mongos with the --configdb option set to the name of the config server replica set and seed list of the members started in the step Restart the config servers.
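For example, the invocation might look like the following; the CSRS name and member hostnames are hypothetical placeholders:

```shell
# Hypothetical CSRS name and hostnames; substitute your own values.
mongos --configdb csReplSet/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019
```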

5

If shard hostnames have changed, update the config database.

If shard hostnames have changed, connect a mongo shell to the mongos instance and update the shards collection in the Config Database to reflect the new hostnames.
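As a sketch, an update for a single shard document might look like the following in the mongo shell; the shard _id and hostnames are placeholder values:

```javascript
// Run in a mongo shell connected to the mongos.
// "shardA" and the hostnames below are placeholders.
use config
db.shards.update(
   { _id: "shardA" },
   { $set: { host: "shardA/new-host1.example.net:27018,new-host2.example.net:27018" } }
)
```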

6

Clear per-shard sharding recovery information.

If the backup data was from a deployment using CSRS, clear out the recovery information that no longer applies on each shard. For each shard:

  1. Restart the replica set members for the shard with the recoverShardingState parameter set to false. Include additional options as required for your specific configuration.

    mongod --setParameter=recoverShardingState=false --replSet <replSetName> --shardsvr --port <port>
    
  2. Connect a mongo shell to the primary of the replica set and delete the document from the admin.system.version collection where _id equals "minOpTimeRecovery". Use write concern "majority".

    use admin
    db.system.version.remove(
       { _id: "minOpTimeRecovery" },
       { writeConcern: { w: "majority" } }
    )
    
  3. Shut down the replica set members for the shard.

7

Restart all the shard mongod instances.

Do not include the recoverShardingState parameter.

Changed in version 3.4: Include the --shardsvr option and, if appropriate, the --port option.

8

Restart the other mongos instances.

Specify for --configdb the config server replica set name and a seed list of the CSRS started in the step Restart the config servers.

9

Verify that the cluster is operational.

Connect to a mongos instance from a mongo shell and use the db.printShardingStatus() method to ensure that the cluster is operational.

db.printShardingStatus()
show collections

Restore a Sharded Cluster with Database Dumps

The following procedure outlines the steps to restore a sharded cluster from the BSON database dumps created by mongodump. For information on using mongodump to backup sharded clusters, see Back Up a Sharded Cluster with Database Dumps.

Changed in version 3.0: mongorestore requires a running MongoDB instance. Earlier versions of mongorestore did not require a running MongoDB instance and instead used the --dbpath option. For instructions specific to your version of mongorestore, refer to the appropriate version of the manual.

1

Deploy a new replica set for each shard.

For each shard, deploy a new replica set:

  1. Start a new mongod for each member of the replica set. Include the --shardsvr and the --port options. Include any other configuration as appropriate.
  2. Connect a mongo shell to one of the mongod instances. In the mongo shell:
    1. Run rs.initiate().
    2. Use rs.add() to add the other members of the replica set.

For detailed instructions on deploying a replica set, see Deploy a Replica Set.
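The mongo shell portion of the steps above can be sketched as follows; the member hostnames are illustrative assumptions:

```javascript
// In a mongo shell connected to the first member of the new shard replica set.
// Hostnames below are placeholder values.
rs.initiate()
rs.add("shardA-1.example.net:27018")
rs.add("shardA-2.example.net:27018")
```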

2

Deploy new config servers.

See Create the Config Server Replica Set.

3

Start the mongos instances.

Start the mongos instances, specifying the config server replica set name and a seed list of its members in the --configdb option. Include any other configuration as appropriate.

For detailed instructions, see Connect a mongos to the Sharded Cluster.

4

Add shards to the cluster.

Connect a mongo shell to a mongos instance. Use sh.addShard() to add each replica set as a shard.

For detailed instructions on adding shards to the cluster, see Add Shards to the Cluster.
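For example, adding one shard might look like the following; the replica set name and hostname are placeholder values:

```javascript
// In a mongo shell connected to a mongos.
// "shardA" and the hostname are placeholders for your shard's values.
sh.addShard("shardA/shardA-0.example.net:27018")
```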

5

Shut down the mongos instances.

Once the new sharded cluster is up, shut down all mongos instances.

6

Restore the shard data.

For each shard, use mongorestore to restore the data dump to the primary’s data directory. Include the --drop option to drop the collections before restoring and, because the backup procedure included the --oplog option, include the --oplogReplay option for mongorestore.

For example, on the primary for ShardA, run mongorestore. Specify any other configuration as appropriate.

mongorestore --drop --oplogReplay /data/dump/shardA --port <port>

After you have finished restoring all the shards, shut down all shard instances.

7

Restore the config server data.

mongorestore --drop --oplogReplay /data/dump/configData
8

Start one mongos instance.

Start mongos with the --configdb option set to the name of the config server replica set and seed list of the members started in the step Deploy new config servers.

9

If shard hostnames have changed, update the config database.

If shard hostnames have changed, connect a mongo shell to the mongos instance and update the shards collection in the Config Database to reflect the new hostnames.

10

Restart all the shard mongod instances.

Do not include the recoverShardingState parameter.

Changed in version 3.4: Include the --shardsvr option and, if appropriate, the --port option.

11

Restart the other mongos instances.

Specify for --configdb the config server replica set name and a seed list of the CSRS started in the step Deploy new config servers.

12

Verify that the cluster is operational.

Connect to a mongos instance from a mongo shell and use the db.printShardingStatus() method to ensure that the cluster is operational.

db.printShardingStatus()
show collections