Frequently Asked Questions:
This document answers common questions about database replication in MongoDB.
MongoDB supports master-slave replication and a variation on master-slave replication known as replica sets. Replica sets are the recommended replication topology.
In a replica set, if the current “primary” node fails or becomes inaccessible, the remaining members can autonomously elect one of themselves to be the new “primary”.
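As a sketch, a minimal three-member replica set can be configured from the mongo shell as follows. The hostnames and the set name rs0 are illustrative placeholders, not required values:

```javascript
// Run once, connected to one of the members.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb0.example.net:27017" },
    { _id: 1, host: "mongodb1.example.net:27017" },
    { _id: 2, host: "mongodb2.example.net:27017" }
  ]
})

// rs.status() reports each member's state; if the current primary
// fails, the remaining members elect a new one automatically.
rs.status()
```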
By default, clients send all reads to the primary; however, read preference is configurable at the client level on a per-connection basis, which makes it possible to send reads to secondary nodes instead.
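For example, read preference might be set per connection in the mongo shell, or passed through a driver connection string; the hostnames below are placeholders:

```javascript
// In the mongo shell, direct this connection's reads to a
// secondary when one is available:
db.getMongo().setReadPref("secondaryPreferred")

// Drivers typically accept the same setting in the connection
// string, e.g.:
//   mongodb://mongodb0.example.net,mongodb1.example.net/?readPreference=secondaryPreferred

// Reads on this connection now go to a secondary if possible.
db.records.find()
```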
Replication operates by way of an oplog, from which secondary/slave members apply new operations to themselves. This replication process is asynchronous, so secondary/slave nodes may not always reflect the latest writes to the primary. Usually, though, the gap between the primary and secondary nodes is just a few milliseconds on a local network connection.
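You can observe that gap directly from the shell with the following helpers (rs.printSlaveReplicationInfo is the name in older shells; newer shells call it rs.printSecondaryReplicationInfo):

```javascript
// Summarize the oplog: its allocated size and the time range of
// the operations it currently holds.
db.printReplicationInfo()

// Report, for each secondary, how far it lags behind the primary.
rs.printSlaveReplicationInfo()
```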
Failover time varies, but a replica set will generally select a new primary within a minute. It may take 10 to 30 seconds for the members of the set to declare the primary inaccessible, which triggers an election; the election itself may take another 10 to 30 seconds. During the election, the set cannot accept writes.
Replication can operate over WAN and Internet connections, but not without occasional connection failures and the latency inherent in wide-area links.
Members of the set will attempt to reconnect to the other members of the set in response to networking flaps. This does not require administrator intervention. However, if the network connections among the nodes in the replica set are very slow, the members might not be able to keep up with replication.
Journaling is particularly useful for protection against power failures, especially if your replica set resides in a single data center or power circuit.
Journaling adds some resource overhead to write operations; it has no effect on read performance, however.
Journaling is enabled by default on all 64-bit builds of MongoDB v2.0 and greater.
If you want confirmation that a given write has arrived at the server, use write concern. The getLastError command provides this facility. Note that since the default write concern changed, all write operations are acknowledged by default, and unacknowledged writes must be requested explicitly. See the MongoDB Drivers and Client Libraries documentation for your driver for more information.
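As a sketch, write concern can be requested through getLastError on the same connection that issued the write; the collection name and values here are illustrative:

```javascript
// Insert a document, then ask the server to confirm that a
// majority of replica set members have the write, waiting at
// most five seconds.
db.records.insert({ status: "ok" })
db.runCommand({ getLastError: 1, w: "majority", wtimeout: 5000 })
```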
Replica sets require a majority of the surviving voting members to elect a primary. Arbiters allow you to form this majority without the overhead of adding replicating nodes to the system.
There are many possible replica set architectures.
If you have a three-node replica set, you don’t need an arbiter.
A common configuration, however, consists of two replicating nodes, one primary and one secondary, plus an arbiter as the third member. This configuration makes it possible for the set to elect a primary after a failure without requiring three replicating nodes.
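Such a set might be built by adding an arbiter from the shell; the hostname is a placeholder. The arbiter votes in elections but stores no data:

```javascript
// From a connection to the current primary of a two-member set,
// add a third, non-replicating voting member:
rs.addArb("arbiter.example.net:27017")
```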
You may also consider adding an arbiter to a set if it has an equal number of nodes in two facilities and network partitions between the facilities are possible. In these cases, the arbiter will break the tie between the two facilities and allow the set to elect a new primary.
Arbiters never receive the contents of a collection but do exchange the following data with the rest of the replica set: credentials used to authenticate the arbiter with the set, which are encrypted when the set uses keyfile authentication, and replica set configuration and voting data, which are not encrypted.
If your MongoDB deployment uses SSL, then all communications between arbiters and the other members of the replica set are secure. See the documentation for Connect to MongoDB with SSL for more information. Run all arbiters on secure networks, as with all MongoDB components.
For more information, see the overview of Arbiter Members of Replica Sets.
Additionally, the state of a voting member determines whether it can vote. Only voting members in the following states are eligible to vote: PRIMARY, SECONDARY, STARTUP2, RECOVERING, ARBITER, and ROLLBACK.
Factors including different oplog sizes, different levels of storage fragmentation, and MongoDB’s data file pre-allocation can lead to some variation in storage utilization between nodes. Storage-use disparities will be most pronounced when you add members at different times.
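One way to see this variation, for example, is to compare db.stats() output across members, running the same check against each node in turn:

```javascript
// Connected to a member (on a secondary, first allow reads with
// rs.slaveOk() in older shells), compare the storage the node has
// allocated against the logical size of its data.
var s = db.stats()
print("storageSize: " + s.storageSize + ", dataSize: " + s.dataSize)
```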