
FAQ: Concurrency

MongoDB allows multiple clients to read and write the same data. In order to ensure consistency, it uses locking and other concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously. Together, these mechanisms guarantee that all writes to a single document occur either in full or not at all and that clients never see an inconsistent view of the data.

MongoDB uses multi-granularity locking [1] that allows operations to lock at the global, database or collection level, and allows for individual storage engines to implement their own concurrency control below the collection level (e.g., at the document-level in WiredTiger).

MongoDB uses reader-writer locks that allow concurrent readers shared access to a resource, such as a database or collection.

In addition to a shared (S) locking mode for reads and an exclusive (X) locking mode for write operations, intent shared (IS) and intent exclusive (IX) modes indicate an intent to read or write a resource using a finer granularity lock. When locking at a certain granularity, all higher levels are locked using an intent lock.

For example, when locking a collection for writing (using mode X), both the corresponding database lock and the global lock must be locked in intent exclusive (IX) mode. A single database can simultaneously be locked in IS and IX mode, but an exclusive (X) lock cannot coexist with any other modes, and a shared (S) lock can only coexist with intent shared (IS) locks.
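These compatibility and hierarchy rules can be sketched in a few lines of Python. The matrix follows the description above and standard multiple granularity locking; the helper names are illustrative, not MongoDB internals.

```python
# Which lock modes may be held simultaneously on the same resource,
# per the multiple granularity scheme described above.
COMPATIBLE = {
    "IS": {"IS", "IX", "S"},   # intent shared coexists with everything but X
    "IX": {"IS", "IX"},        # intent exclusive coexists only with intent modes
    "S":  {"IS", "S"},         # shared coexists with intent shared (and other readers)
    "X":  set(),               # exclusive coexists with nothing
}

def compatible(held: str, requested: str) -> bool:
    """Can `requested` be granted while `held` is already granted?"""
    return requested in COMPATIBLE[held]

def locks_for_collection_write():
    # Locking a collection in X requires intent exclusive (IX) locks
    # at every coarser level of the hierarchy.
    return [("global", "IX"), ("database", "IX"), ("collection", "X")]
```

For instance, `compatible("S", "IS")` holds while `compatible("X", "IS")` does not, matching the rules stated above.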

Locks are fair, with reads and writes being queued in order. However, to optimize throughput, when one request is granted, all other compatible requests will be granted at the same time, potentially releasing them before a conflicting request. For example, consider a case in which an X lock was just released, and in which the conflict queue contains the following items:

IS IS X X S IS

In strict first-in, first-out (FIFO) ordering, only the first two IS modes would be granted. Instead, MongoDB grants all IS and S modes, and once they all drain, it grants X, even if new IS or S requests have been queued in the meantime. Because a grant always moves all other requests ahead in the queue, no request can starve.
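The grant policy just described can be modeled as follows; this is an illustrative sketch, not MongoDB's actual lock manager.

```python
def grant_batch(queue):
    """Grant the request at the head of the queue plus every other queued
    request whose mode is compatible with all modes granted so far."""
    compatible = {
        "IS": {"IS", "IX", "S"},
        "IX": {"IS", "IX"},
        "S":  {"IS", "S"},
        "X":  set(),
    }
    granted, remaining = [], []
    for mode in queue:
        if all(mode in compatible[g] for g in granted):
            granted.append(mode)       # compatible: granted in the same batch
        else:
            remaining.append(mode)     # conflicting: waits for the next batch
    return granted, remaining
```

Running this on the example queue grants all four IS/S requests together and leaves the two X requests queued, as described above.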

In db.serverStatus() and db.currentOp() output, the lock modes are represented as follows:

Lock Mode   Description
R           Represents Shared (S) lock.
W           Represents Exclusive (X) lock.
r           Represents Intent Shared (IS) lock.
w           Represents Intent Exclusive (IX) lock.
[1] See the Wikipedia page on Multiple granularity locking for more information.

For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation.
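The retry-on-write-conflict behavior can be modeled as optimistic concurrency control over a per-document version number. This is a toy in-memory sketch; `WriteConflict`, the store, and the helper names are illustrative, not MongoDB internals.

```python
class WriteConflict(Exception):
    pass

# A toy document store: each document carries a version that changes on update.
store = {"doc1": {"version": 0, "value": "a"}}

def update_if_unchanged(doc_id, seen_version, new_value):
    """Commit only if nobody else updated the document since we read it."""
    doc = store[doc_id]
    if doc["version"] != seen_version:
        raise WriteConflict()              # another writer got there first
    doc["version"] += 1
    doc["value"] = new_value

def update_with_retry(doc_id, new_value, max_retries=10):
    """What MongoDB does transparently: re-read and retry on conflict."""
    for _ in range(max_retries):
        seen = store[doc_id]["version"]    # optimistic read, no lock taken
        try:
            update_if_unchanged(doc_id, seen, new_value)
            return
        except WriteConflict:
            continue                       # re-read and try again
    raise RuntimeError("too many write conflicts")
```

The caller never sees the conflict: the retry loop absorbs it, which is why these conflicts surface in MongoDB only as metrics rather than as errors.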

Some global operations, typically short lived operations involving multiple databases, still require a global "instance-wide" lock. Some other operations, such as collMod, still require an exclusive database lock.

To report on lock utilization, use any of the following methods:

Specifically, the locks document in the output of serverStatus and the locks field in current operation reporting provide insight into the types of locks held and the amount of lock contention in your mongod instance.
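For example, the locks document in serverStatus output reports per-resource acquisition counters keyed by the mode letters above. A sketch of tallying them; the sample document below is abridged and illustrative, not a complete serverStatus output.

```python
# Abridged, illustrative shape of the `locks` document from db.serverStatus()
sample_locks = {
    "Global":     {"acquireCount": {"r": 120, "w": 45}},
    "Database":   {"acquireCount": {"r": 110, "w": 40}},
    "Collection": {"acquireCount": {"r": 100, "w": 38, "W": 2}},
}

def total_acquisitions_by_mode(locks):
    """Sum acquireCount across all lock granularities, per mode letter."""
    totals = {}
    for resource in locks.values():
        for mode, count in resource.get("acquireCount", {}).items():
            totals[mode] = totals.get(mode, 0) + count
    return totals
```

A high count for W (exclusive) relative to r/w intent modes would suggest operations that block the whole collection, such as foreground index builds.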

To terminate an operation, use db.killOp().

In some situations, read and write operations can yield their locks.

Long running read and write operations, such as queries, updates, and deletes, yield under many conditions. MongoDB operations can also yield locks between individual document modifications in write operations that affect multiple documents like update() with the multi parameter.

For storage engines supporting document level concurrency control, such as WiredTiger, yielding is not necessary when accessing storage as the intent locks, held at the global, database and collection level, do not block other readers and writers. However, operations will periodically yield, such as:

  • to avoid long-lived storage transactions because these can potentially require holding a large amount of data in memory;
  • to serve as interruption points so that you can kill long running operations;
  • to allow operations that require exclusive access to a collection such as index/collection drops and creations.
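The yield behavior for multi-document writes can be sketched as batching with an interruption check between batches. This is a simplified model, assuming illustrative names like `multi_update` and `is_killed`; it is not MongoDB's implementation.

```python
def multi_update(docs, apply, batch_size=1000, is_killed=lambda: False):
    """Apply `apply` to every doc, yielding between batches so the
    operation stays interruptible and storage transactions stay short."""
    processed = 0
    for i in range(0, len(docs), batch_size):
        if is_killed():                      # interruption point (cf. db.killOp())
            raise InterruptedError("operation killed")
        for doc in docs[i:i + batch_size]:   # one short-lived storage transaction
            apply(doc)
            processed += 1
        # ...yield here: release the snapshot, let exclusive requests in...
    return processed
```

Between batches, an operation that needs exclusive access (such as a collection drop) can be granted its lock, which is exactly the purpose of the yield points listed above.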

The following table lists some operations and the types of locks they use for document-level locking storage engines:

Operation                       Database                        Collection
Issue a query                   r (Intent Shared)               r (Intent Shared)
Insert data                     w (Intent Exclusive)            w (Intent Exclusive)
Remove data                     w (Intent Exclusive)            w (Intent Exclusive)
Update data                     w (Intent Exclusive)            w (Intent Exclusive)
Perform Aggregation             r (Intent Shared)               r (Intent Shared)
Create an index (Foreground)    W (Exclusive)
Create an index (Background)    w (Intent Exclusive)            w (Intent Exclusive)
List collections                                                r (Intent Shared) (Changed in version 4.0.)
Map-reduce                      W (Exclusive) and R (Shared)    w (Intent Exclusive) and r (Intent Shared)

Some administrative commands can exclusively lock a database for extended time periods. For large database deployments, you may consider taking the mongod instance offline so that clients are not affected. For example, if a mongod is part of a replica set, take the mongod offline and let other members of the replica set process requests while maintenance is performed.

These administrative operations require an exclusive lock at the database level for extended periods:

In addition, the renameCollection command and corresponding db.collection.renameCollection() shell method take the following locks, depending on the version of MongoDB:

renameCollection database command

  • MongoDB 4.2.2 or later: If renaming a collection within the same database, the command takes an exclusive (W) lock on the source and target collections. If the target namespace is in a different database than the source collection, the command takes an exclusive (W) lock on the target database and blocks other operations on that database until it finishes.
  • MongoDB 4.2.0 - 4.2.1: If renaming a collection within the same database, the command takes an exclusive (W) lock on the source and target collections. If the target namespace is in a different database than the source collection, the command takes a global exclusive (W) lock and blocks other operations until it finishes.
  • MongoDB 4.0.x and earlier: The command takes an exclusive (W) lock on the database when renaming within the same database.

renameCollection() shell helper method

  • MongoDB 4.2.0 or later: If renaming a collection within the same database, the method takes an exclusive (W) lock on the source and target collections.
  • MongoDB 4.0.x and earlier: The method takes an exclusive (W) lock on the database when renaming within the same database.
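The version matrix above can be encoded as a small lookup. The helper `rename_collection_locks` and its return strings are illustrative, not a MongoDB API.

```python
def rename_collection_locks(version, same_database):
    """Locks taken by the renameCollection command, per the matrix above.
    `version` is a (major, minor, patch) tuple."""
    if same_database:
        if version >= (4, 2, 0):
            return ["W on source collection", "W on target collection"]
        return ["W on database"]           # pre-4.2 behavior
    # Cross-database rename:
    if version >= (4, 2, 2):
        return ["W on target database"]
    return ["global W"]                    # 4.2.1 and earlier
```

Encoding it this way makes the key transition visible: 4.2.0 narrowed same-database renames to collection locks, and 4.2.2 narrowed cross-database renames from a global lock to a target-database lock.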

These administrative operations lock a database but only hold the lock for a very short time:

Changed in version 4.2.

The following administrative operations require an exclusive lock at the collection level:

  • create command and corresponding db.createCollection() and db.createView() shell methods
  • createIndexes command and corresponding db.collection.createIndex() and db.collection.createIndexes() shell methods
  • drop command and corresponding db.collection.drop() shell method
  • dropIndexes command and corresponding db.collection.dropIndex() and db.collection.dropIndexes() shell methods
  • the renameCollection command and corresponding db.collection.renameCollection() shell method take the following locks, depending on version:

    • For renameCollection and db.collection.renameCollection(): If renaming a collection within the same database, the operation takes an exclusive (W) lock on the source and target collections. Prior to MongoDB 4.2, the operation takes an exclusive (W) lock on the database when renaming within the same database.
    • For renameCollection only: If the target namespace is in a different database than the source collection, the locking behavior is version dependent:

      • MongoDB 4.2.2 and later: The operation takes an exclusive (W) lock on the target database when renaming a collection across databases and blocks other operations on that database until it finishes.
      • MongoDB 4.2.1 and earlier: The operation takes a global exclusive (W) lock when renaming a collection across databases and blocks other operations until it finishes.
  • the reIndex command and corresponding db.collection.reIndex() shell method take the following locks, depending on version:

    • For MongoDB 4.2.2 and later, these operations obtain an exclusive (W) lock on the collection and block other operations on the collection until finished.
    • For MongoDB 4.0.0 through 4.2.1, these operations take a global exclusive (W) lock and block other operations until finished.
  • the replSetResizeOplog command takes the following locks, depending on version:

    • For MongoDB 4.2.2 and later, this operation takes an exclusive (W) lock on the oplog collection and blocks other operations on the collection until it finishes.
    • For MongoDB 4.2.1 and earlier, this operation takes a global exclusive (W) lock and blocks other operations until it finishes.

Prior to MongoDB 4.2, these operations took an exclusive lock on the database, blocking all operations on the database and its collections until the operation completed.

The following MongoDB operations may obtain and hold a lock on more than one database:

  • This operation obtains a global (W) exclusive lock and blocks other operations until it finishes.

  • Changed in version 4.2: For MongoDB 4.0.0 through 4.2.1, these operations take a global exclusive (W) lock and block other operations until finished. Starting in MongoDB 4.2.2, they obtain only an exclusive (W) collection lock instead of a global exclusive lock. Prior to MongoDB 4.0, they obtained an exclusive (W) database lock.

  • Changed in version 4.2: For MongoDB 4.2.1 and earlier, this operation obtains a global exclusive (W) lock when renaming a collection between databases and blocks other operations until finished. Starting in MongoDB 4.2.2, it obtains only an exclusive (W) lock on the target database, an intent shared (r) lock on the source database, and a shared (S) lock on the source collection instead of a global exclusive lock.

  • Changed in version 4.2: For MongoDB 4.2.1 and earlier, this operation obtains a global exclusive (W) lock and blocks other operations until finished. Starting in MongoDB 4.2.2, it obtains only an exclusive (W) lock on the oplog collection instead of a global exclusive lock.

Sharding improves concurrency by distributing collections over multiple mongod instances, allowing mongos query routers to perform any number of operations concurrently against the various downstream mongod instances.

In a sharded cluster, locks apply to each individual shard, not to the whole cluster; i.e. each mongod instance is independent of the others in the sharded cluster and uses its own locks. The operations on one mongod instance do not block the operations on any others.

With replica sets, when MongoDB writes to a collection on the primary, MongoDB also writes to the primary's oplog, which is a special collection in the local database. Therefore, MongoDB must lock both the collection's database and the local database. The mongod must lock both databases at the same time to keep the database consistent and ensure that write operations, even with replication, are "all-or-nothing" operations.

When writing to a replica set, the lock's scope applies to the primary.

In replication, MongoDB does not apply writes serially to secondaries. Secondaries collect oplog entries in batches and then apply those batches in parallel. Writes are applied in the order that they appear in the oplog.

Starting in MongoDB 4.0, reads that target secondaries read from a WiredTiger snapshot of the data while the secondary applies replicated writes. This allows reads to occur simultaneously with replication while still guaranteeing a consistent view of the data. Prior to MongoDB 4.0, read operations on secondaries were blocked until any ongoing replication batch completed. See Multithreaded Replication for more information.
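A simplified model of that batched application: batches are applied one after another in oplog order, entries within a batch run in parallel, and all entries for the same document are routed to one worker so per-document order is preserved. The function names are illustrative, not MongoDB internals.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def apply_batch_parallel(batch, apply_entry):
    # Entries for the same document go to the same worker, in oplog order,
    # so per-document ordering holds even within a parallel batch.
    groups = defaultdict(list)
    for entry in batch:
        groups[entry[0]].append(entry)     # entry = (doc_id, value)
    def run(group):
        for entry in group:
            apply_entry(entry)
    with ThreadPoolExecutor(max_workers=max(1, len(groups))) as pool:
        list(pool.map(run, groups.values()))

def apply_oplog(entries, apply_entry, batch_size=4):
    # Batches themselves are applied strictly in oplog order.
    for i in range(0, len(entries), batch_size):
        apply_batch_parallel(entries[i:i + batch_size], apply_entry)
```

Because batches are sequential and per-document entries are never reordered, the final state matches what serial application would produce.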

Because a single document can contain related data that would otherwise be modeled across separate parent-child tables in a relational schema, MongoDB's atomic single-document operations already provide transaction semantics that meet the data integrity needs of the majority of applications. One or more fields may be written in a single operation, including updates to multiple sub-documents and elements of an array. The guarantees provided by MongoDB ensure complete isolation as a document is updated; any errors cause the operation to roll back so that clients receive a consistent view of the document.
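That all-or-nothing guarantee can be sketched as staging every field change on a copy and installing them in one step, rolling back on any error. `atomic_update` and its dotted-path convention are a toy model, not MongoDB's storage layer.

```python
import copy

def atomic_update(document, changes):
    """Apply all field changes or none: work on a copy, swap in on success."""
    draft = copy.deepcopy(document)
    try:
        for path, value in changes.items():
            target = draft
            *parents, leaf = path.split(".")   # e.g. "addr.city" -> sub-document
            for key in parents:                # descend into sub-documents
                target = target[key]
            target[leaf] = value
    except (KeyError, TypeError):
        return document                        # roll back: original untouched
    return draft                               # commit: all changes visible at once
```

Readers only ever see the old document or the fully updated one, mirroring the isolation described above for single-document writes.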

However, for situations that require atomicity of reads and writes to multiple documents (in a single or multiple collections), MongoDB supports multi-document transactions:

  • In version 4.0, MongoDB supports multi-document transactions on replica sets.
  • In version 4.2, MongoDB introduces distributed transactions, which adds support for multi-document transactions on sharded clusters and incorporates the existing support for multi-document transactions on replica sets.

    For details regarding transactions in MongoDB, see the Transactions page.

Important

In most cases, a multi-document transaction incurs a greater performance cost than single-document writes, and the availability of multi-document transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for multi-document transactions.

For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.

Depending on the read concern, clients can see the results of writes before the writes are durable. To control whether the data read may be rolled back or not, clients can use the readConcern option.

For information, see:
