Troubleshoot Replica Sets
This section describes common strategies for troubleshooting replica set deployments.
Check Replica Set Status
To display the current state of the replica set and the current state of each member, run the rs.status() method in a mongo shell connected to the replica set's primary. For descriptions of the information displayed by rs.status(), see replSetGetStatus.
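Beyond the full status document, a quick way to see each member's state is to iterate over the members array that rs.status() returns; a minimal sketch, assuming a mongo shell connected to the primary:

   rs.status().members.forEach(function (member) {
      // Print each member's host, replica set state, and health flag.
      print(member.name + ": " + member.stateStr + " (health: " + member.health + ")");
   });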
Note
The rs.status() method is a wrapper that runs the replSetGetStatus database command.
Check the Replication Lag
Replication lag is a delay between an operation on the primary and the application of that operation from the oplog to the secondary. Replication lag can be a significant issue and can seriously affect MongoDB replica set deployments. Excessive replication lag makes “lagged” members ineligible to quickly become primary and increases the possibility that distributed read operations will be inconsistent.
To check the current length of replication lag:
- In a mongo shell connected to the primary, call the db.printSlaveReplicationInfo() method. The returned document displays the syncedTo value for each member, which shows you when each member last read from the oplog, as shown in the example after this list.
- Monitor the rate of replication by watching the oplog time in the “replica” graph in the MongoDB Cloud Manager. For more information, see the MongoDB Cloud Manager documentation.
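Representative output of db.printSlaveReplicationInfo() (the hostnames and times below are illustrative):

   source: m2.example.net:27017
      syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
      0 secs (0 hrs) behind the primary
   source: m3.example.net:27017
      syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
      0 secs (0 hrs) behind the primary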
Possible causes of replication lag include:
Network Latency
Check the network routes between the members of your set to ensure that there is no packet loss or network routing issue.
Use tools such as ping to test latency between set members and traceroute to expose the routing of packets to network endpoints, as in the sketch below.
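For example, from m1.example.net (one of the illustrative hosts used later on this page) you might run:

   ping -c 4 m2.example.net    # round-trip latency and packet loss
   traceroute m2.example.net   # per-hop routing to the endpoint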
Disk Throughput
If the file system and disk device on the secondary is unable to flush data to disk as quickly as the primary, then the secondary will have difficulty keeping up. Disk-related issues are incredibly prevalent on multi-tenant systems, including virtualized instances, and can be transient if the system accesses disk devices over an IP network (as is the case with Amazon's EBS system).
Use system-level tools, such as iostat or vmstat, to assess disk status, as in the sketch below.
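A minimal sketch, assuming a Linux host with the sysstat package installed:

   iostat -x 5 3    # extended device statistics, three 5-second samples
   vmstat 5 3       # memory, swap, and I/O wait at 5-second intervals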
Concurrency
In some cases, long-running operations on the primary can block replication on secondaries. For best results, configure write concern to require confirmation of replication to secondaries, as described in replica set write concern. This prevents write operations from returning if replication cannot keep up with the write load.
Use the database profiler to see if there are slow queries or long-running operations that correspond to the incidences of lag.
Appropriate Write Concern
If you are performing a large data ingestion or bulk load operation that requires a large number of writes to the primary, particularly with unacknowledged write concern, the secondaries will not be able to read the oplog fast enough to keep up with changes.
To prevent this, require write acknowledgment or journaled write concern after every 100, 1,000, or another interval, to provide an opportunity for secondaries to catch up with the primary, as in the sketch below.
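A sketch of such a batched load in the mongo shell, using a hypothetical data collection; each batch of 1,000 inserts waits for majority acknowledgment so secondaries have a chance to catch up:

   var batch = [];
   for (var i = 0; i < 100000; i++) {
      batch.push({ _id: i, value: "payload-" + i });
      if (batch.length === 1000) {
         // Block until a majority of members acknowledge each batch.
         db.data.insert(batch, { writeConcern: { w: "majority" } });
         batch = [];
      }
   }
   if (batch.length > 0) {
      db.data.insert(batch, { writeConcern: { w: "majority" } });
   }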
For more information, see Replica Set Write Concern and Oplog Size.
Test Connections Between all Members
All members of a replica set must be able to connect to every other member of the set to support replication. Always verify connections in both “directions.” Networking topologies and firewall configurations can prevent normal and required connectivity, which can block replication.
Consider the following example of a bidirectional test of networking:
Example
Given a replica set with three members running on three separate hosts:
- m1.example.net
- m2.example.net
- m3.example.net
Test the connection from m1.example.net to the other hosts with the following operation set from m1.example.net:
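A sketch of the commands, assuming each member listens on the default port 27017:

   mongo --host m2.example.net --port 27017
   mongo --host m3.example.net --port 27017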
Test the connection from m2.example.net to the other two hosts with the following operation set from m2.example.net, as in:
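Again assuming the default port:

   mongo --host m1.example.net --port 27017   # m2 -> m1
   mongo --host m3.example.net --port 27017   # m2 -> m3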
You have now tested the connection between m2.example.net and m1.example.net in both directions.
Test the connection from m3.example.net to the other two hosts with the following operation set from the m3.example.net host, as in:
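Again assuming the default port:

   mongo --host m1.example.net --port 27017   # m3 -> m1
   mongo --host m2.example.net --port 27017   # m3 -> m2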
If any connection, in any direction, fails, check your networking and firewall configuration and reconfigure your environment to allow these connections.
Socket Exceptions when Rebooting More than One Secondary
When you reboot members of a replica set, ensure that the set is able to elect a primary during the maintenance. This means ensuring that a majority of the set's votes are available.
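To confirm the vote count before taking members down, a sketch in the mongo shell using the rs.conf() helper (the majority arithmetic here is illustrative):

   var total = 0;
   rs.conf().members.forEach(function (member) {
      // Each member's configured number of votes.
      print(member.host + " votes: " + member.votes);
      total += member.votes;
   });
   print("majority requires " + (Math.floor(total / 2) + 1) + " of " + total + " votes");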
When a set’s active members can no longer form a majority, the set’s primary steps down and becomes a secondary. The former primary closes all open connections to client applications. Clients attempting to write to the former primary receive socket exceptions and Connection reset errors until the set can elect a primary.
Example
Given a three-member replica set where every member has one vote, the set can elect a primary if at least two members can connect to each other. If you reboot the two secondaries simultaneously, the primary steps down and becomes a secondary. Until at least one of the rebooted secondaries becomes available again, the set has no primary and cannot elect a new primary.
For more information on votes, see Replica Set Elections. For related information on connection errors, see Does TCP keepalive time affect sharded clusters and replica sets?.
Check the Size of the Oplog
A larger oplog can give a replica set a greater tolerance for lag, and make the set more resilient.
To check the size of the oplog for a given replica set member, connect to the member in a mongo shell and run the db.printReplicationInfo() method.
The output displays the size of the oplog and the date ranges of the operations contained in the oplog. In the following example, the oplog is about 10MB and is able to fit about 26 hours (94400 seconds) of operations:
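Representative output (the sizes and dates below are illustrative):

   configured oplog size:   10.10546875MB
   log length start to end: 94400 (26.22hrs)
   oplog first event time:  Mon Mar 19 2012 13:50:38 GMT-0400 (EDT)
   oplog last event time:   Wed Oct 03 2012 14:59:10 GMT-0400 (EDT)
   now:                     Wed Oct 03 2012 15:00:21 GMT-0400 (EDT)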
The oplog should be long enough to hold all transactions for the longest downtime you expect on a secondary. At a minimum, an oplog should be able to hold 24 hours of operations; however, many users prefer 72 hours or even a week's worth of operations.
For more information on how oplog size affects operations, see Oplog Size, Delayed Replica Set Members, and Check the Replication Lag.
Note
You normally want the oplog to be the same size on all members. If you resize the oplog, resize it on all members.
To change oplog size, see the Change the Size of the Oplog tutorial.
Oplog Entry Timestamp Error
Consider the following error in mongod output and logs (the exact wording varies by MongoDB version):
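   replSet error fatal couldn't query the local local.oplog.rs collection.  Terminating mongod after 30 seconds.
   <timestamp> [rsStart] bad replSet oplog entry?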
Often, an incorrectly typed value in the ts field in the last oplog entry causes this error. The correct data type is Timestamp.
Check the type of the ts value using the following two queries against the oplog collection:
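A sketch of the two queries, run from a mongo shell (the first line switches to the local database, where the oplog lives):

   db = db.getSiblingDB("local")
   db.oplog.rs.find().sort({$natural:-1}).limit(1)
   db.oplog.rs.find({ts:{$type:17}}).sort({$natural:-1}).limit(1)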
The first query returns the last document in the oplog, while the second returns the last document in the oplog where the ts value is a Timestamp. The $type operator allows you to select BSON type 17, which is the Timestamp data type.
If the queries don't return the same document, then the last document in the oplog has the wrong data type in the ts field.
Example
If the first query returns this as the last oplog entry:
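An illustrative entry (your values will differ); note that ts here is a plain subdocument rather than a Timestamp:

   { "ts" : { t: 1347982456000, i: 1 },
     "h" : NumberLong("8191276672478122996"),
     "op" : "n",
     "ns" : "",
     "o" : { "msg" : "Reconfig set", "version" : 4 } }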
And the second query returns this as the last entry where ts has the Timestamp type:
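   { "ts" : Timestamp(1347982454000, 1),
     "h" : NumberLong("6188469075153256465"),
     "op" : "n",
     "ns" : "",
     "o" : { "msg" : "Reconfig set", "version" : 3 } }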
Then the value for the ts field in the last oplog entry is of the wrong data type.
To set the proper type for this value and resolve this issue, use an update operation that resembles the following:
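A sketch using the illustrative values from the example above; substitute the t and i values from your own last oplog entry:

   db.oplog.rs.update( { ts: { t: 1347982456000, i: 1 } },
                       { $set: { ts: new Timestamp(1347982456000, 1) } } )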
Modify the timestamp values as needed based on your oplog entry. This operation may take some time to complete because the update must scan and pull the entire oplog into memory.
Duplicate Key Error on local.slaves
The duplicate key on local.slaves error occurs when a secondary or slave changes its hostname and the primary or master tries to update its local.slaves collection with the new name. The update fails because it contains the same _id value as the document containing the previous hostname. The error itself will resemble the following:
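   exception 11000 E11000 duplicate key error index: local.slaves.$_id_  dup key: { : ObjectId('<object ID>') } 0ms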
This is a benign error and does not affect replication operations on the secondary or slave.
To prevent the error from appearing, drop the local.slaves collection from the primary or master, with the following sequence of operations in the mongo shell:
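   use local
   db.slaves.drop()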
The next time a secondary or slave polls the primary or master, the primary or master recreates the local.slaves collection.