Configure Behavior of Balancer Process in Sharded Clusters
The balancer is a process that runs on one of the mongos
instances in a cluster and ensures that chunks are
evenly distributed throughout a sharded cluster. In most deployments,
the default balancer configuration is sufficient for normal
operation. However, administrators might need to modify balancer
behavior depending on application or operational requirements. If you
encounter a situation where you need to modify the behavior of the
balancer, use the procedures described in this document.
For conceptual information about the balancer, see Sharded Collection Balancing and Cluster Balancer.
Schedule a Window of Time for Balancing to Occur
You can schedule a window of time during which the balancer can migrate
chunks. The mongos instances use their own local timezones when
evaluating the balancer window.
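The balancing window is stored in the balancer document of the settings collection in the config database. A sketch of scheduling one, assuming an illustrative nightly window:

```javascript
// From a mongo shell connected to a mongos, switch to the config database.
use config
// Restrict balancing to a window between 23:00 and 6:00; each mongos
// interprets these times in its own local timezone.
db.settings.update(
   { _id : "balancer" },
   { $set : { activeWindow : { start : "23:00", stop : "6:00" } } },
   { upsert : true }
)
```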
Configure Default Chunk Size
The default chunk size for a sharded cluster is 64 megabytes. In most situations, the default size is appropriate for splitting and migrating chunks. For information on how chunk size affects deployments, see Chunk Size.
Changing the default chunk size affects chunks that are processed during migrations and auto-splits, but does not retroactively affect all existing chunks.
To configure default chunk size, see Modify Chunk Size in a Sharded Cluster.
Change the Maximum Storage Size for a Given Shard
The maxSize field in the shards collection in the config database sets
the maximum size for a shard, allowing you to control whether the
balancer will migrate chunks to a shard. If the mapped size [1] is above
a shard's maxSize, the balancer will not move chunks to that shard. The
balancer also does not move chunks off an overloaded shard; you must do
so manually. The maxSize value affects only the balancer's selection of
destination shards.
By default, maxSize is not specified, allowing shards to consume the
total amount of available space on their machines if necessary. You can
set maxSize both when adding a shard and after a shard is running.
To set maxSize
when adding a shard, set the addShard
command’s maxSize
parameter to the maximum size in megabytes. For
example, the following command run in the mongo
shell adds a
shard with a maximum size of 125 megabytes:
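A sketch of that command; the hostname is illustrative:

```javascript
// From a mongo shell connected to a mongos, add a shard and cap
// the balancer's view of its capacity at 125 megabytes.
db.runCommand( { addShard : "example.net:34008", maxSize : 125 } )
```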
To set maxSize
on an existing shard, insert or update the
maxSize
field in the shards
collection in the
config database. Set the maxSize
in
megabytes.
Example
Assume you have the following shard without a maxSize
field:
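Such a document might look like the following; the _id and host values are illustrative:

```javascript
// A shard document in the shards collection of the config database,
// with no maxSize field set.
{ "_id" : "shard0000", "host" : "example.net:34001" }
```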
Run the following sequence of commands in the mongo
shell
to insert a maxSize
of 125 megabytes:
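A sketch of that sequence, assuming a shard whose _id is shard0000:

```javascript
// Switch to the config database, then set maxSize (in megabytes)
// on the shard's document in the shards collection.
use config
db.shards.update( { _id : "shard0000" }, { $set : { maxSize : 125 } } )
```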
To later increase the maxSize
setting to 250 megabytes, run the
following:
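Again assuming a shard whose _id is shard0000:

```javascript
// Raise the shard's maxSize to 250 megabytes.
use config
db.shards.update( { _id : "shard0000" }, { $set : { maxSize : 250 } } )
```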
[1] This value includes the mapped size of all data files, including the
local and admin databases. Account for this when setting maxSize.
Change Replication Behavior for Chunk Migration
Secondary Throttle
Changed in version 3.0.0: The balancer configuration document added a
configurable writeConcern to control the semantics of the
_secondaryThrottle option.
The _secondaryThrottle
parameter of the balancer and the
moveChunk
command affects the replication behavior during
chunk migration. By default,
_secondaryThrottle
is true
, which means each document move
during chunk migration propagates to at least one secondary before the
balancer proceeds with the next document: this is equivalent to a write
concern of { w: 2 }
.
You can also configure the writeConcern
for the
_secondaryThrottle
operation, to configure how migrations will
wait for replication to complete. For more information on the
replication behavior during various steps of chunk migration,
see Chunk Migration and Replication.
To change the balancer’s _secondaryThrottle
and writeConcern
values, connect to a mongos
instance and directly update
the _secondaryThrottle
value in the settings
collection of the config database. For
example, from a mongo
shell connected to a
mongos
, issue the following command:
The effects of changing the _secondaryThrottle and
writeConcern values may not be
immediate. To ensure an immediate effect, stop and restart the balancer
to enable the selected value of _secondaryThrottle
. See
Manage Sharded Cluster Balancer for details.
Wait for Delete
The _waitForDelete
setting of the balancer and the
moveChunk
command affects how the balancer migrates
multiple chunks from a shard. By default, the balancer does not wait
for the on-going migration’s delete phase to complete before starting
the next chunk migration. To have the delete phase block the start
of the next chunk migration, you can set _waitForDelete to true.
For details on chunk migration, see Chunk Migration. For details on the chunk migration queuing behavior, see Chunk Migration Queuing.
The _waitForDelete setting is generally for internal testing purposes. To
change the balancer's _waitForDelete value:
1. Connect to a mongos instance.
2. Update the _waitForDelete value in the settings collection of the config database. For example:
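A sketch of that update:

```javascript
// Switch to the config database and set _waitForDelete on the balancer
// document, so each migration's delete phase blocks the next migration.
use config
db.settings.update(
   { "_id" : "balancer" },
   { $set : { "_waitForDelete" : true } },
   { upsert : true }
)
```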
Once set to true
, to revert to the default behavior:
1. Connect to a mongos instance.
2. Update or unset the _waitForDelete field in the settings collection of the config database:
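A sketch of reverting to the default by unsetting the field:

```javascript
// Switch to the config database and remove _waitForDelete from the
// balancer document, restoring the default (non-blocking) behavior.
use config
db.settings.update(
   { "_id" : "balancer", "_waitForDelete" : true },
   { $unset : { "_waitForDelete" : "" } }
)
```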