
Change the Size of the Oplog

Warning

In MongoDB versions 3.4 and earlier, the oplog was resized by dropping and recreating the local.oplog.rs collection.

In MongoDB versions 3.6 and later, use the replSetResizeOplog command to resize the oplog as shown in this tutorial.

Starting in MongoDB 4.0, MongoDB forbids dropping the local.oplog.rs collection. For more information on this restriction, see Oplog Collection Behavior.

This procedure changes the size of the oplog [1] on each member of a replica set using the replSetResizeOplog command, starting with the secondary members before proceeding to the primary. The replSetResizeOplog command only supports the WiredTiger storage engine.

Perform these steps on each secondary replica set member first. Once you have changed the oplog size for all secondary members, perform these steps on the primary.
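To identify which members are currently secondaries, you can check the replica set status from the mongo shell. The following is a minimal sketch, assuming you are connected to any member of the set:

rs.status().members.forEach(function (member) {
  // stateStr reports "PRIMARY", "SECONDARY", "ARBITER", etc.
  print(member.name + " : " + member.stateStr)
})

Run the resize procedure against each member reported as SECONDARY before changing the primary.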

A. Connect to the replica set member

Connect to the replica set member using the mongo shell:

mongo --host <hostname>:<port>

Note

If the replica set enforces authentication, you must authenticate as a user with privileges to modify the local database, such as the clusterManager or clusterAdmin role.
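For example, you might connect and authenticate in one step. This is a sketch only; the user name and authentication database are placeholders for your own deployment:

mongo --host <hostname>:<port> --username <admin-user> --authenticationDatabase admin --password

Alternatively, connect first and then authenticate from the shell with db.getSiblingDB("admin").auth(...).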

B. (Optional) Verify the current size of the oplog

To view the current size of the oplog, switch to the local database and run db.collection.stats() against the oplog.rs collection. stats() displays the oplog size as maxSize.

use local
db.oplog.rs.stats().maxSize

The maxSize field displays the collection size in bytes.
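Because maxSize is reported in bytes, you may want to convert it for comparison with the megabyte value you pass to replSetResizeOplog. A small sketch, assuming you are connected to the member and have switched to the local database:

// maxSize is in bytes; divide by 1024 * 1024 to express it in megabytes
var oplogStats = db.oplog.rs.stats()
print("oplog maxSize (MB): " + (oplogStats.maxSize / (1024 * 1024)))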

C. Change the oplog size of the replica set member

To resize the oplog, run the replSetResizeOplog command, passing the desired size in megabytes as the size parameter. The specified size must be greater than 990 megabytes.

The following operation changes the oplog size of the replica set member to 16 gigabytes, or 16000 megabytes.

db.adminCommand({replSetResizeOplog: 1, size: 16000})
[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
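
After the command returns, you can confirm the new limit by repeating the check from step B; maxSize should now report the new size in bytes:

use local
db.oplog.rs.stats().maxSize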

D. (Optional) Compact oplog.rs to reclaim disk space

Reducing the size of the oplog does not automatically reclaim the disk space allocated to the original oplog size. You must run compact against the oplog.rs collection in the local database to reclaim disk space. There are no benefits to running compact on the oplog.rs collection after increasing the oplog size.

Important

The replica set member cannot replicate oplog entries while the compact operation is ongoing. While compact runs, the member may fall so far behind the primary that it cannot resume replication. The likelihood of a member becoming “stale” during the compact procedure increases with cluster write throughput, and may be further exacerbated by the reduced oplog size.

Consider scheduling a maintenance window during which writes are throttled or stopped to mitigate the risk of the member becoming “stale” and requiring a full resync.

Do not run compact against the primary replica set member. Connect a mongo shell to the primary and run rs.stepDown(). If successful, the primary steps down and closes all open connections. Reconnect the mongo shell to the member and run the compact command on the member.
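A minimal sketch of stepping down the primary before compacting its oplog; the default step-down period applies unless you pass a value in seconds:

rs.stepDown()            // ask the primary to step down; open connections are closed
// Reconnect the shell to the member, then confirm it is no longer the primary:
rs.isMaster().ismaster   // should report false before you run compact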

The following operation runs the compact command against the oplog.rs collection:

use local
db.runCommand({ "compact" : "oplog.rs" })
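
After compact finishes, you may want to confirm that the member has caught up before moving on to the next member. A sketch using the replication-info shell helper in this release line (later releases rename it rs.printSecondaryReplicationInfo()):

// Run from a shell connected to the primary; reports how far each secondary
// is behind the primary's newest oplog entry.
rs.printSlaveReplicationInfo()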

If the disk space allocated to the original oplog size is not reclaimed, restart mongod and run the commands from step D again. Restarting mongod runs recalculations in WiredTiger that might allow compact to release more space to the OS.

For clusters enforcing authentication, authenticate as a user with the compact privilege action on the local database and the oplog.rs collection. For complete documentation on compact authentication requirements, see compact Required Privileges.
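
As an illustration only, a custom role granting that privilege might look like the following; the role name oplogCompact and the user name are placeholders, not built-ins:

use admin
db.createRole({
  role: "oplogCompact",
  privileges: [
    // compact on the oplog.rs collection in the local database
    { resource: { db: "local", collection: "oplog.rs" }, actions: [ "compact" ] }
  ],
  roles: []
})
// Grant the role to an existing user:
db.grantRolesToUser("<admin-user>", [ "oplogCompact" ])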