FAQ: Replication and Replica Sets

This document answers common questions about database replication in MongoDB.

If you don’t find the answer you’re looking for, check the complete list of FAQs or post your question to the MongoDB User Mailing List.

What kinds of replication does MongoDB support?

MongoDB supports replica sets.

Changed in version 3.0.0: In MongoDB 3.0.0, replica sets can have up to 50 nodes. Previous versions limited the maximum number of replica set members to 12.

MongoDB also supports master-slave replication; however, replica sets are the recommended replication topology. If your deployment requires more than 50 nodes, you must use master-slave replication.

What does the term “primary” mean?

The primary is the replica set member that can accept writes. Only the primary can accept write operations. [1]

[1] In some circumstances, two nodes in a replica set may transiently believe that they are the primary, but at most, only one of them will be able to complete writes with { w: "majority" } write concern. The node that can complete { w: "majority" } writes is the current primary, and the other node is a former primary that has not yet recognized its demotion, typically due to a network partition. When this occurs, clients that connect to the former primary may observe stale data despite having requested read preference primary, and new writes to the former primary will eventually roll back.
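
For example, you can require acknowledgment from a majority of voting members by issuing a write with the { w: "majority" } write concern from the mongo shell. The collection name here is illustrative:

    db.products.insert(
        { item: "envelope", qty: 100 },
        { writeConcern: { w: "majority", wtimeout: 5000 } }
    )
    // Only the current primary can complete this write; a former primary
    // that has not yet recognized its demotion cannot satisfy { w: "majority" }.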

What does the term “secondary” mean?

Secondary nodes are the read-only nodes in replica sets.
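
By default, clients direct reads to the primary. To read from a secondary, set a read preference; a minimal example from the mongo shell, with an illustrative collection name:

    db.getMongo().setReadPref("secondary")  // route this connection's reads to secondaries
    db.products.find()                      // served by a secondary, if one is available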

How long does replica set failover take?

It varies, but a replica set will select a new primary within a minute.

It may take 10-30 seconds for the members of a replica set to declare a primary inaccessible. This triggers an election. During the election, the cluster is unavailable for writes.

The election itself may take another 10-30 seconds.
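
You can watch an election from the mongo shell with rs.status(); each member's stateStr field shows whether it is currently PRIMARY, SECONDARY, or in another state:

    rs.status().members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr + " (health: " + m.health + ")")
    })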

Does replication work over the Internet and WAN connections?

Yes.

For example, a deployment may maintain a primary and secondary in an East-coast data center along with a secondary member for disaster recovery in a West-coast data center.
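
A minimal sketch of such a deployment, initiated from the mongo shell; the hostnames and priority values are illustrative:

    rs.initiate({
        _id: "rs0",
        members: [
            { _id: 0, host: "east1.example.net:27017", priority: 2 },  // preferred primary, East coast
            { _id: 1, host: "east2.example.net:27017", priority: 1 },  // secondary, East coast
            { _id: 2, host: "west1.example.net:27017", priority: 0 }   // disaster-recovery secondary, West coast; never elected primary
        ]
    })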

Can MongoDB replicate over a “noisy” connection?

Yes, but expect connection failures and added latency.

Members of the set will attempt to reconnect to the other members of the set in response to network interruptions. This does not require administrator intervention. However, if the network connections among the nodes in the replica set are very slow, it might not be possible for the members of the set to keep up with replication.
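
To check whether the secondaries are keeping up, you can report each member's replication lag from the mongo shell:

    rs.printSlaveReplicationInfo()
    // Prints, for each secondary, how far its oplog is behind the primary.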

If the TCP connection between the secondaries and the primary instance breaks, a replica set will automatically elect one of the secondary members of the set as primary.

Why use journaling if replication already provides data redundancy?

Journaling facilitates faster crash recovery. Prior to journaling, crashes often required database repairs or a full data resync. Both were slow, and repair was unreliable.

Journaling is particularly useful for protection against power failures, especially if your replica set resides in a single data center or power circuit.

When a replica set runs with journaling, mongod instances can safely restart without any administrator intervention.

Note

Journaling requires some resource overhead for write operations. Journaling has no effect on read performance, however.

Journaling is enabled by default on all 64-bit builds of MongoDB v2.0 and greater.
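
You can also require that an individual write reach the journal before it is acknowledged by using the j option of the write concern. The collection name is illustrative:

    db.products.insert(
        { sku: "abc123" },
        { writeConcern: { w: 1, j: true } }
    )
    // Acknowledged only after the write is committed to the primary's journal.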

How many arbiters do replica sets need?

Some configurations do not require any arbiter instances. Arbiters vote in elections for primary but do not replicate the data like secondary members.

Replica sets require a majority of voting members to be available in order to elect a primary. Arbiters allow you to construct this majority without the overhead of adding replicating nodes to the system.

There are many possible replica set architectures.

A replica set with an odd number of voting nodes does not need an arbiter.

A common configuration consists of two replicating nodes that include a primary and a secondary, as well as an arbiter for the third node. This configuration makes it possible for the set to elect a primary in the event of failure, without requiring three replicating nodes.
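
To build this configuration, add the arbiter from the mongo shell after initiating the two replicating members; the hostname is illustrative:

    rs.addArb("arbiter.example.net:27017")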

You may also consider adding an arbiter to a set if it has an equal number of nodes in two facilities and network partitions between the facilities are possible. In these cases, the arbiter will break the tie between the two facilities and allow the set to elect a new primary.

What information do arbiters exchange with the rest of the replica set?

Arbiters never receive the contents of a collection but do exchange the following data with the rest of the replica set:

  • Credentials used to authenticate the arbiter with the replica set. All MongoDB processes within a replica set use keyfiles. These exchanges are encrypted.
  • Replica set configuration data and voting data. This information is not encrypted. Only credential exchanges are encrypted.

If your MongoDB deployment uses TLS/SSL, then all communications between arbiters and the other members of the replica set are secure. See the documentation for Configure mongod and mongos for TLS/SSL for more information. Run all arbiters on secure networks, as with all MongoDB components.

See

The overview of Arbiter Members of Replica Sets.

Which members of a replica set vote in elections?

All members of a replica set vote in elections unless the value of their votes setting is 0. This includes all delayed, hidden, and secondary-only members. Arbiters always vote in elections and always have 1 vote.

Additionally, a voting member's state determines whether it can vote. Only voting members in the following states are eligible to vote:

  • PRIMARY
  • SECONDARY
  • RECOVERING
  • ARBITER
  • ROLLBACK

Do hidden members vote in replica set elections?

Hidden members of replica sets do vote in elections. To exclude a member from voting in an election, change the value of the member’s votes configuration to 0.
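
For example, to remove the third member's vote from the mongo shell (the member index is illustrative):

    cfg = rs.conf()
    cfg.members[2].votes = 0
    cfg.members[2].priority = 0  // a non-voting member should also have priority 0
    rs.reconfig(cfg)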

Is it normal for replica set members to use different amounts of disk space?

Yes.

Factors such as different oplog sizes, different levels of storage fragmentation, and MongoDB's data file pre-allocation can lead to variation in storage utilization between nodes. Storage use disparities are most pronounced when you add members at different times.