Migrate a Sharded Cluster to Different Hardware¶
This procedure moves the components of the sharded cluster to a new hardware system without downtime for reads and writes.
While the migration is in progress, do not attempt to change the cluster metadata. Do not use any operation that modifies the cluster metadata in any way. For example, do not create or drop databases, create or drop collections, or use any sharding commands.
If your cluster includes a shard backed by a standalone
mongod instance, consider converting the standalone
to a replica set to
simplify migration and to let you keep the cluster online during
future maintenance. Migrating a shard as standalone is a multi-step
process that may require downtime.
To migrate a cluster to new hardware, perform the following tasks.
Disable the Balancer¶
Disable the balancer to stop chunk migration and do not perform any metadata write operations until the process finishes. If a migration is in progress, the balancer will complete the in-progress migration before stopping.
To disable the balancer, connect to one of the cluster’s
mongos instances and issue the following method:
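sh.stopBalancer()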
To check the balancer state, issue the sh.getBalancerState() method.
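For example, sh.getBalancerState() returns false once the balancer is disabled:
sh.getBalancerState()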
For more information, see Disable the Balancer.
Migrate Each Config Server Separately¶
Migrate each config server by starting
with the last config server listed in the
configDB string. Proceed in reverse order of the
configDB string. Migrate and
restart a config server before proceeding to the next.
Do not rename a config server during this process.
If the name or address that a sharded cluster uses to connect
to a config server changes, you must restart every
mongos instance in the sharded
cluster. Avoid downtime by using CNAMEs to identify config servers
within the MongoDB deployment.
See Migrate Config Servers with Different Hostnames for more information.
Start with the last config server listed in the configDB string.
Shut down the config server.
This renders all config data for the sharded cluster “read only.”
Change the DNS entry that points to the system that provided the old config server, so that the same hostname points to the new system. How you do this depends on how you organize your DNS and hostname resolution services.
Copy the contents of
dbPath from the old config server to the new config server.
For example, to copy the contents of
dbPath to a machine named
mongodb.config2.example.net, you might issue a command similar to the following:
rsync -az /data/configdb/ mongodb.config2.example.net:/data/configdb
Start the config server instance on the new system. The default invocation is:
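mongod --configsvr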
Restart the mongos Instances¶
If the configDB string will change as part of the
migration, you must shut down all
mongos instances before changing the
configDB string. This avoids errors in the
sharded cluster over
configDB string conflicts. If the configDB string will remain the same, you can migrate the mongos instances sequentially or all at once. Shut down the mongos instances using the shutdown command.
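One way to shut down a mongos instance is to connect to it with the mongo shell and run the shutdownServer() helper against the admin database; a service manager or init script is an equally valid alternative:
use admin
db.shutdownServer()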
If the hostname has changed for any of the config servers, update the
configDB string for each
mongos instance. The mongos instances must all use the same
configDB string. The strings must list identical host names in identical order.
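When you restart each mongos, pass the updated string to --configdb. The hostnames below are hypothetical; the list must be identical, and identically ordered, on every mongos:
mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019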
For more information, see Start the mongos Instances.
Migrate the Shards¶
Migrate the shards one at a time. For each shard, follow the appropriate procedure in this section.
Migrate a Replica Set Shard¶
To migrate a replica set shard, migrate each member of the replica set separately. First migrate the non-primary members, and then migrate the primary last.
If the replica set has two voting members, add an arbiter to the replica set to ensure the set keeps a majority of its votes available during the migration. You can remove the arbiter after completing the migration.
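For example, assuming a hypothetical arbiter host named arb1.example.net, you might add the arbiter before the migration and remove it afterwards:
rs.addArb("arb1.example.net:27017")
// after the migration completes:
rs.remove("arb1.example.net:27017")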
Migrate a Member of a Replica Set Shard¶
Shut down the mongod process. To ensure a clean shutdown, use the shutdown command.
Move the data directory (i.e., the
dbPath) to the new machine.
Restart the mongod process at the new location.
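For example, a restart on the new machine might look like the following; the replica set name, dbPath, and port here are hypothetical and should match your existing configuration:
mongod --replSet shard0rs --dbpath /data/db --port 27017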
Connect to the replica set’s current primary.
If the hostname of the member has changed, use rs.reconfig() to update the replica set configuration document with the new hostname.
For example, the following sequence of commands updates the hostname for the instance at position 2 in the members array:
cfg = rs.conf()
cfg.members[2].host = "pocatello.example.net:27017"
rs.reconfig(cfg)
For more information on updating the configuration document, see Examples.
To confirm the new configuration, issue rs.conf().
Wait for the member to recover. To check the member’s state, issue rs.status().
Migrate the Primary in a Replica Set Shard¶
While migrating the replica set’s primary, the set must elect a new primary. The failover process renders the replica set unavailable for reads and writes for the duration of the election, although elections typically complete quickly. If possible, plan the migration during a maintenance window.
Step down the primary to allow the normal failover process. To step down the primary, connect to the primary and issue either the
replSetStepDown command or the
rs.stepDown() method. The following example shows the rs.stepDown() method:
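rs.stepDown()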
You can check the output of
rs.status() to confirm the change in status.
Migrate a Standalone Shard¶
The ideal procedure for migrating a standalone shard is to convert the standalone to a replica set and then use the procedure for migrating a replica set shard. In production clusters, all shards should be replica sets, which provides continued availability during maintenance windows.
Migrating a shard as standalone is a multi-step process during which
part of the shard may be unavailable. If the shard is the
primary shard for a database, the process includes the
movePrimary command. While the
movePrimary command runs, you should stop modifying data in that database. To migrate the
standalone shard, use the Remove Shards from an Existing Sharded Cluster procedure.
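For reference, movePrimary is issued against the admin database; the database and destination shard names in this sketch are hypothetical:
db.adminCommand( { movePrimary: "test", to: "shard0001" } )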
Re-Enable the Balancer¶
To complete the migration, re-enable the balancer to resume chunk migrations.
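For example, connect to one of the cluster’s mongos instances and issue the sh.startBalancer() method:
sh.startBalancer()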
To check the balancer state, issue the sh.getBalancerState() method.
For more information, see Enable the Balancer.