
MongoDB Performance

As you develop and operate applications with MongoDB, you may need to analyze the performance of the application and its database. When you encounter degraded performance, it is often a function of database access strategies, hardware availability, and the number of open database connections.

Some users may experience performance limitations as a result of inadequate or inappropriate indexing strategies, or as a consequence of poor schema design patterns. Locking Performance discusses how these can impact MongoDB’s internal locking.

Performance issues may indicate that the database is operating at capacity and that it is time to add additional capacity to the database. In particular, the application’s working set should fit in the available physical memory. See Memory and the MMAPv1 Storage Engine for more information on the working set.

In some cases, performance issues may be temporary and related to abnormal traffic load. As discussed in Number of Connections, scaling can help alleviate excessive traffic.

Database Profiling can help you to understand what operations are causing degradation.

Locking Performance

MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock.

Lock-related slowdowns can be intermittent. To see if the lock has been affecting your performance, refer to the locks section and the globalLock section of the serverStatus output.

Dividing locks.timeAcquiringMicros by locks.acquireWaitCount can give an approximate average wait time for a particular lock mode.
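
For example, the following mongo shell snippet is a minimal sketch of that calculation for the “W” (exclusive write) mode of the Global lock; the specific lock type and mode are used here only for illustration, and the fields appear only once waits have occurred:

// Approximate average wait, in microseconds, for the "W" mode of the Global lock.
var locks = db.serverStatus().locks
if ( locks.Global && locks.Global.acquireWaitCount && locks.Global.acquireWaitCount.W ) {
    print( locks.Global.timeAcquiringMicros.W / locks.Global.acquireWaitCount.W )
}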

locks.deadlockCount provides the number of times lock acquisitions encountered deadlocks.

If globalLock.currentQueue.total is consistently high, then there is a chance that a large number of requests are waiting for a lock. This indicates a possible concurrency issue that may be affecting performance.

If globalLock.totalTime is high relative to uptime, the database has existed in a lock state for a significant amount of time.
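
For example, you can spot-check both values from the mongo shell; this is a minimal sketch that assumes globalLock.totalTime is reported in microseconds and uptime in seconds:

var status = db.serverStatus()
// Operations currently queued waiting for a lock.
printjson( status.globalLock.currentQueue )
// Lock time as a fraction of server uptime.
print( status.globalLock.totalTime / ( status.uptime * 1000000 ) )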

Long queries can result from ineffective use of indexes; non-optimal schema design; poor query structure; system architecture issues; or insufficient RAM resulting in page faults and disk reads.

Memory and the MMAPv1 Storage Engine

Memory Use

With the MMAPv1 storage engine, MongoDB uses memory-mapped files to store data. Given a data set of sufficient size, the mongod process will allocate all available memory on the system for its use.

While this is intentional and aids performance, the memory-mapped files make it difficult to determine whether the amount of RAM is sufficient for the data set.

The memory usage statistics in the serverStatus output can provide insight into MongoDB’s memory use.

The mem.resident field provides the amount of resident memory in use. If this exceeds the amount of system memory and there is a significant amount of data on disk that isn’t in RAM, you may have exceeded the capacity of your system.

You can inspect mem.mapped to check the amount of mapped memory that mongod is using. If this value is greater than the amount of system memory, some operations will require page faults to read data from disk.
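
For example, both values can be read from the mongo shell; this is a minimal sketch that assumes the mem fields are reported in megabytes:

var mem = db.serverStatus().mem
print( "resident MB: " + mem.resident )   // resident memory in use
print( "mapped MB: " + mem.mapped )       // memory mapped by the data files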

Page Faults

With the MMAPv1 storage engine, page faults can occur as MongoDB reads from or writes data to parts of its data files that are not currently located in physical memory. In contrast, operating system page faults happen when physical memory is exhausted and pages of physical memory are swapped to disk.

MongoDB reports its triggered page faults as the total number of page faults in one second. To check for page faults, see the extra_info.page_faults value in the serverStatus output.

Rapid increases in the MongoDB page fault counter may indicate that the server has too little physical memory. Page faults also can occur while accessing large data sets or scanning an entire collection.
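
For example, sampling the counter twice from the mongo shell gives a rough fault rate; this is a minimal sketch that treats extra_info.page_faults as a cumulative counter and uses the shell’s sleep() helper:

var before = db.serverStatus().extra_info.page_faults
sleep( 1000 )   // pause the shell for one second
var after = db.serverStatus().extra_info.page_faults
print( "page faults over the last second: " + ( after - before ) )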

A single page fault completes quickly and is not problematic. However, in aggregate, large volumes of page faults typically indicate that MongoDB is reading too much data from disk.

MongoDB can often “yield” read locks after a page fault, allowing other database processes to read while mongod loads the next page into memory. Yielding the read lock following a page fault improves concurrency, and also improves overall throughput in high volume systems.

Increasing the amount of RAM accessible to MongoDB may help reduce the frequency of page faults. If this is not possible, you may want to consider deploying a sharded cluster or adding shards to your deployment to distribute load among mongod instances.

See What are page faults? for more information.

Number of Connections

In some cases, the number of connections between the applications and the database can overwhelm the ability of the server to handle requests. The connections and globalLock.activeClients fields in the serverStatus document can provide insight into the number of open connections and active clients.
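
For example, to check the current and available incoming connections from the mongo shell:

printjson( db.serverStatus().connections )   // current and available connection counts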

If there are numerous concurrent application requests, the database may have trouble keeping up with demand. If this is the case, then you will need to increase the capacity of your deployment.

For read-heavy applications, increase the size of your replica set and distribute read operations to secondary members.

For write-heavy applications, deploy sharding and add one or more shards to a sharded cluster to distribute load among mongod instances.

Spikes in the number of connections can also be the result of application or driver errors. All of the officially supported MongoDB drivers implement connection pooling, which allows clients to use and reuse connections more efficiently. An extremely high number of connections, particularly without a corresponding workload, is often indicative of a driver or other configuration error.

Unless constrained by system-wide limits, MongoDB has no limit on incoming connections. On Unix-based systems, you can modify system limits using the ulimit command, or by editing your system’s /etc/sysctl file. See UNIX ulimit Settings for more information.

Database Profiling

MongoDB’s “Profiler” is a database profiling system that can help identify inefficient queries and operations.

The following profiling levels are available:

Level   Setting
0       Off. No profiling.
1       On. Only includes “slow” operations.
2       On. Includes all operations.

Enable the profiler by setting the profile value using the following command in the mongo shell:

db.setProfilingLevel(1)

The slowOpThresholdMs setting defines what constitutes a “slow” operation. To set the threshold above which the profiler considers operations “slow” (and thus, included in the level 1 profiling data), you can configure slowOpThresholdMs at runtime as an argument to the db.setProfilingLevel() operation.
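
For example, the following sets the profiling level to 1 and the “slow” threshold to 50 milliseconds:

db.setProfilingLevel(1, 50)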

See the documentation of db.setProfilingLevel() for more information.

By default, mongod records all “slow” queries to its log, as defined by slowOpThresholdMs.

Note

Because the database profiler can negatively impact performance, only enable profiling for strategic intervals and as minimally as possible on production systems.

You may enable profiling on a per-mongod basis. This setting will not propagate across a replica set or sharded cluster.

You can view the output of the profiler in the system.profile collection of your database by issuing the show profile command in the mongo shell, or with the following operation:

db.system.profile.find( { millis : { $gt : 100 } } )

This returns all operations that lasted longer than 100 milliseconds. Ensure that the value specified here (100, in this example) is above the slowOpThresholdMs threshold.
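
You can also list the most recent profile entries, for example:

db.system.profile.find().sort( { ts : -1 } ).limit( 5 ).pretty()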

You must use the $query operator to access the query field of documents within system.profile.

Full Time Diagnostic Data Capture

To facilitate analysis of the MongoDB server behavior by MongoDB Inc. engineers, the mongod process includes a Full Time Diagnostic Data Collection (FTDC) mechanism. FTDC data files are compressed, are not human-readable, and inherit the same file access permissions as the MongoDB data files. Only users with access to FTDC data files can transmit the FTDC data. MongoDB Inc. engineers cannot access FTDC data independent of system owners or operators. MongoDB processes run with FTDC on by default. For more information on MongoDB Support options, visit Getting Started With MongoDB Support.

FTDC Privacy

FTDC data files are compressed and not human-readable. MongoDB Inc. engineers cannot access FTDC data without explicit permission and assistance from system owners or operators.

FTDC data never contains any of the following information:

  • Samples of queries, query predicates, or query results
  • Data sampled from any end-user collection or index
  • System or MongoDB user credentials or security certificates

FTDC data contains certain host machine information such as hostnames, operating system information, and the options or settings used to start the mongod. This information may be considered protected or confidential by some organizations or regulatory bodies, but is not typically considered to be Personally Identifiable Information (PII). For clusters where these fields were configured with protected, confidential, or PII data, please notify MongoDB Inc. engineers before sending the FTDC data so appropriate measures can be taken.

FTDC periodically collects statistics produced by the following commands:

Starting with MongoDB 3.2.18, the diagnostic data may include one or more of the following statistics depending on the host operating system:

  • CPU utilization
  • Memory utilization
  • Disk utilization related to performance. FTDC does not include data related to storage capacity.

FTDC collects statistics produced by the following commands on file rotation or startup:

mongod processes store FTDC data files in a diagnostic.data directory under the instance’s storage.dbPath. All diagnostic data files are stored under this directory. For example, given a dbPath of /data/db, the diagnostic data directory would be /data/db/diagnostic.data.

FTDC runs with the following defaults:

  • Data capture every 1 second
  • 200MB maximum diagnostic.data folder size.

These defaults are designed to provide useful data to MongoDB Inc. engineers with minimal impact on performance or storage size. These values only require modifications if requested by MongoDB Inc. engineers for specific diagnostic purposes.

You can view the FTDC source code in the MongoDB GitHub repository. The ftdc_system_stats_*.cpp files specifically define any system-specific diagnostic data captured.

To disable FTDC, start up the mongod with the diagnosticDataCollectionEnabled: false option specified to the setParameter setting in your configuration file:

setParameter:
  diagnosticDataCollectionEnabled: false
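
Depending on your MongoDB version, the parameter may also be adjustable at runtime; a minimal sketch using the setParameter command:

db.adminCommand( { setParameter: 1, diagnosticDataCollectionEnabled: false } )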

Disabling FTDC may increase the time or resources required when analyzing or debugging issues with support from MongoDB Inc. engineers.