
Production Notes

This page details system configurations that affect MongoDB, especially when running in production.

Note

MongoDB Atlas is a cloud-hosted database-as-a-service. MongoDB Cloud Manager, a hosted service, and Ops Manager, an on-premise solution, provide monitoring, backup, and automation of MongoDB instances. For documentation, see the Atlas documentation, the MongoDB Cloud Manager documentation, and the Ops Manager documentation.

MongoDB Binaries

Supported Platforms

When running in production, refer to Recommended Platforms for operating system recommendations.

Changed in version 3.2: MongoDB can now use the WiredTiger storage engine on all supported platforms.

x86_64

Platform Support EOL Notice

  • Support for SLES 11 has been removed in MongoDB 3.2.20+ and 3.4.15+.
  • Support for Ubuntu 12.04 has been removed in MongoDB 3.2.20+ and 3.4.15+.
  • Support for Debian 7 has been removed in MongoDB 3.2.21+ and 3.4.16+.

Platform 3.4 Community & Enterprise 3.2 Community & Enterprise
Amazon Linux 2013.03 and later
Debian 8
RHEL/CentOS/Oracle Linux [1] 6.2 and later
RHEL/CentOS/Oracle Linux [1] 7.0 and later
SLES 12  
Solaris 11 64-bit Community only Community only
Ubuntu 14.04
Ubuntu 16.04
Windows Server 2008R2 and later
Windows Vista and later
macOS 10.8 and later
[1] MongoDB only supports Oracle Linux running the Red Hat Compatible Kernel (RHCK). MongoDB does not support the Unbreakable Enterprise Kernel (UEK).

ARM64

Platform 3.4 Community & Enterprise
Ubuntu 16.04

PPC64LE (MongoDB Enterprise Edition)

Platform 3.4 Enterprise
RHEL/CentOS 7.1
Ubuntu 16.04 Removed starting in 3.4.21

s390x (MongoDB Enterprise Edition)

Platform Support EOL Notice

  • Support for SLES 11 has been removed in MongoDB 3.4.15+.
  • Support for RHEL/CentOS 6 has been removed in MongoDB 3.4.22+.
Platform 3.4 Enterprise
RHEL/CentOS 6 Removed starting in 3.4.22
RHEL/CentOS 7
SLES 11 Removed starting in 3.4.15
SLES 12
Ubuntu 16.04

Important

MMAPv1 is not supported on big-endian architectures such as s390x. MongoDB returns an error if you set MMAPv1 as the storage engine on a big-endian system.

Use the Latest Stable Packages

Be sure you have the latest stable release.

All MongoDB releases are available on the Downloads page. The Downloads page is a good place to verify the current stable release, even if you are installing via a package manager.

The following table summarizes the supported architectures for the latest versions of MongoDB products:

Product x86_64/amd64 s390x POWER8 (little endian) ARMv8-A
MongoDB 3.4 MongoDB Enterprise only MongoDB Enterprise only
BI Connector  
Compass      
Spark Connector      
Ops Manager      
Automation Agent    
Monitoring Agent    
Backup Agent    

MongoDB dbPath

The files in the dbPath directory must correspond to the configured storage engine. mongod will not start if dbPath contains data files created by a storage engine other than the one specified by --storageEngine.

mongod must possess read and write permissions for the specified dbPath.
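
For example, a minimal sketch of starting mongod with an explicit dbPath and storage engine (the path is illustrative):

mongod --dbpath /data/db --storageEngine wiredTiger

If the directory already contains data files created by a different storage engine, this invocation fails at startup as described above.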

Concurrency

MMAPv1

Changed in version 3.0: Beginning with MongoDB 3.0, MMAPv1 provides collection-level locking: each collection has its own readers-writer lock, which allows multiple clients to modify documents in different collections at the same time.

In the MongoDB 2.2 through 2.6 series, each database has a readers-writer lock that allows concurrent read access to a database but gives exclusive access to a single write operation per database. See the Concurrency page for more information. In earlier versions of MongoDB, all write operations contended for a single readers-writer lock for the entire mongod instance.

WiredTiger

WiredTiger supports concurrent access by readers and writers to the documents in a collection. Clients can read documents while write operations are in progress, and multiple threads can modify different documents in a collection at the same time.

See also

Allocate Sufficient RAM and CPU provides information about how WiredTiger takes advantage of multiple CPU cores and how to improve operation throughput.

Data Consistency

Journaling

MongoDB uses write ahead logging to an on-disk journal. Journaling guarantees that MongoDB can quickly recover write operations that were written to the journal but not written to data files in cases where mongod terminated due to a crash or other serious failure.

Leave journaling enabled in order to ensure that mongod will be able to recover its data files and keep the data files in a valid state following a crash. See Journaling for more information.
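
Journaling is enabled by default on 64-bit builds. If it has been disabled in your deployment, you can enable it explicitly at startup; a minimal sketch (the path is illustrative):

mongod --dbpath /data/db --journal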

Read Concern

New in version 3.2.

If using "majority" or "linearizable" read concern for read operations, use { w: "majority" } write concern for write operations on the primary to ensure that a single thread can read its own writes.

To use a read concern level of "majority", you must use the WiredTiger storage engine and start the mongod instances with the --enableMajorityReadConcern command-line option (or the replication.enableMajorityReadConcern setting if using a configuration file).
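
For example, a sketch of starting a replica set member with support for "majority" read concern enabled (the replica set name and path are illustrative):

mongod --replSet rs0 --dbpath /data/db --enableMajorityReadConcern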

Write Concern

Write concern describes the level of acknowledgement requested from MongoDB for write operations. The level of the write concern affects how quickly the write operation returns. When write operations have a weak write concern, they return quickly. With stronger write concerns, clients must wait after sending a write operation until MongoDB confirms it at the requested write concern level. With an insufficient write concern, write operations may appear to a client to have succeeded but may not persist in some cases of server failure.

See the Write Concern document for more information about choosing an appropriate write concern level for your deployment.
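
As an illustration, an insert issued from the mongo shell with a "majority" write concern and a write-concern timeout (the collection, document, and timeout value are hypothetical):

mongo --eval 'db.products.insert({ item: "envelope", qty: 100 }, { writeConcern: { w: "majority", wtimeout: 5000 } })'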

Networking

Use Trusted Networking Environments

Always run MongoDB in a trusted environment, with network rules that prevent access from all unknown machines, systems, and networks. As with any sensitive system that is dependent on network access, your MongoDB deployment should only be accessible to specific systems that require access, such as application servers, monitoring services, and other MongoDB components.

Important

By default, authorization is not enabled, and mongod assumes a trusted environment. Enable authorization mode as needed. For more information on authentication mechanisms supported in MongoDB as well as authorization in MongoDB, see Authentication and Role-Based Access Control.
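
For example, a minimal sketch of starting mongod with authorization enabled (the path is illustrative):

mongod --auth --dbpath /data/db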

For additional information and considerations on security, refer to the documents in the Security section.

For Windows users, consider the Windows Server Technet Article on TCP Configuration when deploying MongoDB on Windows.

Disable HTTP Interface

MongoDB provides an HTTP interface to check the status of the server and, optionally, run queries. The HTTP interface is disabled by default. Do not enable the HTTP interface in production environments.

Deprecated since version 3.2: HTTP interface for MongoDB

See HTTP Status Interface.

Manage Connection Pool Sizes

Avoid overloading the connection resources of a mongod or mongos instance by adjusting the connection pool size to suit your use case. Start at 110-115% of the typical number of concurrent database requests, and modify the connection pool size as needed. Refer to the Connection Pool Options for adjusting the connection pool size.

The connPoolStats command returns information regarding the number of open connections to the current database for mongos and mongod instances in sharded clusters.
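
For example, a minimal check run from the mongo shell against the mongos or mongod you want to inspect:

mongo --eval 'printjson(db.adminCommand({ connPoolStats: 1 }))'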

See also Allocate Sufficient RAM and CPU.

Hardware Considerations

MongoDB is designed specifically with commodity hardware in mind and has few hardware requirements or limitations. MongoDB’s core components run on little-endian hardware, primarily x86/x86_64 processors. Client libraries (i.e. drivers) can run on big or little endian systems.

Allocate Sufficient RAM and CPU

MMAPv1

Due to its concurrency model, the MMAPv1 storage engine does not require many CPU cores. As such, increasing the number of cores can improve performance but does not provide significant return.

At a minimum, ensure that your mongod or mongos has access to two real cores or one physical CPU.

Increasing the amount of RAM accessible to MongoDB may help reduce the frequency of page faults.

WiredTiger

The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores. Specifically, the total number of active threads (i.e. concurrent operations) relative to the number of available CPUs can impact performance:

  • Throughput increases as the number of concurrent active operations increases up to the number of CPUs.
  • Throughput decreases as the number of concurrent active operations exceeds the number of CPUs by some threshold amount.

The threshold depends on your application. You can determine the optimum number of concurrent active operations for your application by experimenting and measuring throughput. The output from mongostat provides statistics on the number of active reads/writes in the (ar|aw) column.
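
For example, the following reports statistics every five seconds; watch the ar|aw column for the counts of active readers and writers (the interval is arbitrary):

mongostat 5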

With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.

Starting in MongoDB 3.4, the default WiredTiger internal cache size is the larger of either:

  • 50% of (RAM - 1 GB), or
  • 256 MB.
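
For example, on a system with a total of 4 GB of RAM, the WiredTiger internal cache uses 1.5 GB (0.5 × (4 GB − 1 GB) = 1.5 GB). Conversely, on a system with a total of 1.25 GB of RAM, WiredTiger allocates 256 MB to its cache, because 50% of (1.25 GB − 1 GB) is only 128 MB, which is less than 256 MB.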

By default, WiredTiger uses Snappy block compression for all collections and prefix compression for all indexes. Compression defaults are configurable at a global level and can also be set on a per-collection and per-index basis during collection and index creation.

Different representations are used for data in the WiredTiger internal cache versus the on-disk format:

  • Data in the filesystem cache is the same as the on-disk format, including benefits of any compression for data files. The filesystem cache is used by the operating system to reduce disk I/O.
  • Indexes loaded in the WiredTiger internal cache have a different data representation to the on-disk format, but can still take advantage of index prefix compression to reduce RAM usage. Index prefix compression deduplicates common prefixes from indexed fields.
  • Collection data in the WiredTiger internal cache is uncompressed and uses a different representation from the on-disk format. Block compression can provide significant on-disk storage savings, but data must be uncompressed to be manipulated by the server.

Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes.

To adjust the size of the WiredTiger internal cache, see storage.wiredTiger.engineConfig.cacheSizeGB and --wiredTigerCacheSizeGB. Avoid increasing the WiredTiger internal cache size above its default value.

Note

The storage.wiredTiger.engineConfig.cacheSizeGB limits the size of the WiredTiger internal cache. The operating system will use the available free memory for filesystem cache, which allows the compressed MongoDB data files to stay in memory. In addition, the operating system will use any free RAM to buffer file system blocks and file system cache.

To accommodate the additional consumers of RAM, you may have to decrease WiredTiger internal cache size.

The default WiredTiger internal cache size value assumes that there is a single mongod instance per machine. If a single machine contains multiple MongoDB instances, then you should decrease the setting to accommodate the other mongod instances.

If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set storage.wiredTiger.engineConfig.cacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.
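
For example, a hedged sketch for a container limited to 8 GB of RAM, leaving headroom for the filesystem cache and other processes (the exact value depends on your workload):

mongod --dbpath /data/db --wiredTigerCacheSizeGB 3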

To view statistics on the cache and eviction rate, see the wiredTiger.cache field returned from the serverStatus command.
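
For example, from the mongo shell:

mongo --eval 'printjson(db.serverStatus().wiredTiger.cache)'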

Compression and Encryption

When using encryption, CPUs equipped with AES-NI instruction-set extensions show significant performance advantages. If you are using MongoDB Enterprise with the Encrypted Storage Engine, choose a CPU that supports AES-NI for better performance.

See also

Concurrency

Use Solid State Disks (SSDs)

MongoDB has good results and a good price-performance ratio with SATA SSD (Solid State Disk).

Use SSD if available and economical. Spinning disks can be performant, but SSDs’ capacity for random I/O operations works well with the update model of MMAPv1.

Commodity (SATA) spinning drives are often a good option, as the random I/O performance increase with more expensive spinning drives is not that dramatic (only on the order of 2x). Using SSDs or increasing RAM may be more effective in increasing I/O throughput.

MongoDB and NUMA Hardware

Running MongoDB on a system with Non-Uniform Memory Access (NUMA) can cause a number of operational problems, including slow performance for periods of time and high system process usage.

When running MongoDB servers and clients on NUMA hardware, you should configure a memory interleave policy so that the host behaves in a non-NUMA fashion. MongoDB checks NUMA settings on startup when deployed on Linux (since version 2.0) and Windows (since version 2.6) machines. If the NUMA configuration could degrade performance, MongoDB prints a warning.


Configuring NUMA on Windows

On Windows, memory interleaving must be enabled through the machine’s BIOS. Consult your system documentation for details.

Configuring NUMA on Linux

When running MongoDB on Linux, you should disable zone reclaim in the sysctl settings using one of the following commands:

echo 0 | sudo tee /proc/sys/vm/zone_reclaim_mode
sudo sysctl -w vm.zone_reclaim_mode=0

Then, you should use numactl to start your mongod instances, including the config servers, mongos instances, and any clients. If you do not have the numactl command, refer to the documentation for your operating system to install the numactl package.

The following operation demonstrates how to start a MongoDB instance using numactl:

numactl --interleave=all <path> <options>

The <path> is the path to the program you are starting and the <options> are any optional arguments to pass to the program.
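
For example, a sketch assuming mongod is installed at /usr/bin/mongod and reads its settings from /etc/mongod.conf:

numactl --interleave=all /usr/bin/mongod --config /etc/mongod.conf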

To fully disable NUMA behavior, you must perform both operations. For more information, see the Documentation for /proc/sys/vm/*.

Disk and Storage Systems

Swap

Assign swap space for your systems. Allocating swap space can avoid issues with memory contention and can prevent the OOM Killer on Linux systems from killing mongod.

For the MMAPv1 storage engine, the method mongod uses to map files to memory ensures that the operating system will never store MongoDB data in swap space. On Windows systems, using MMAPv1 requires extra swap space due to commitment limits. For details, see MongoDB on Windows.

For the WiredTiger storage engine, given sufficient memory pressure, WiredTiger may store data in swap space.
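
To confirm that swap space is configured on Linux, you can use, for example, either of the following commands:

free -h      # the Swap row shows total, used, and free swap
swapon -s    # lists the configured swap devices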

RAID

For optimal performance in terms of the storage layer, use disks backed by RAID-10. RAID-5 and RAID-6 do not typically provide sufficient performance to support a MongoDB deployment.

Remote Filesystems

With the MMAPv1 storage engine, the Network File System protocol (NFS) is not recommended, as you may see performance problems when both the data files and the journal files are hosted on NFS. You may experience better performance if you place the journal on local or iSCSI volumes.

With the WiredTiger storage engine, WiredTiger objects may be stored on remote file systems if the remote file system conforms to ISO/IEC 9945-1:1996 (POSIX.1). Because remote file systems are often slower than local file systems, using a remote file system for storage may degrade performance.

If you decide to use NFS, add the following NFS options to your /etc/fstab file: bg, nolock, and noatime.
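
For illustration, an /etc/fstab entry using these options might look like the following (the server, export path, and mount point are hypothetical):

nfs.example.net:/export/mongodb  /data/db  nfs  bg,nolock,noatime  0 0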

Separate Components onto Different Storage Devices

For improved performance, consider separating your database’s data, journal, and logs onto different storage devices, based on your application’s access and write pattern. Mount the components as separate filesystems and use symbolic links to map each component’s path to the device storing it.

For the WiredTiger storage engine, you can also store the indexes on a different storage device. See storage.wiredTiger.engineConfig.directoryForIndexes.
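
For example, a hedged sketch of the symbolic-link approach described above, moving the journal to a device mounted at /mnt/journal (paths are illustrative; stop mongod before moving existing files):

sudo mv /data/db/journal /mnt/journal/journal     # move the existing journal directory to the other device
sudo ln -s /mnt/journal/journal /data/db/journal  # link it back into the dbPath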

Note

Using different storage devices will affect your ability to create snapshot-style backups of your data, since the files will be on different devices and volumes.

Scheduling

Scheduling for Virtual or Cloud Hosted Devices

For local block devices attached to a virtual machine instance via the hypervisor or hosted by a cloud hosting provider, the guest operating system should use a noop scheduler for best performance. The noop scheduler allows the operating system to defer I/O scheduling to the underlying hypervisor.

Scheduling for Physical Servers

For physical servers, the operating system should use a deadline scheduler. The deadline scheduler caps maximum latency per request and maintains a good disk throughput that is best for disk-intensive database applications.
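
On Linux, you can inspect and change the scheduler for a block device through sysfs; a minimal sketch, assuming the device is sda (the change does not persist across reboots):

cat /sys/block/sda/queue/scheduler                        # the active scheduler appears in brackets
echo deadline | sudo tee /sys/block/sda/queue/scheduler   # substitute noop for virtual or cloud-hosted volumes

Use your distribution's boot or udev configuration to make the setting permanent.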

Architecture

Replica Sets

See the Replica Set Architectures document for an overview of architectural considerations for replica set deployments.

Sharded Clusters

See Sharded Cluster Production Architecture for an overview of recommended sharded cluster architectures for production deployments.

Compression

WiredTiger can compress collection data using either the snappy or zlib compression library. snappy provides a lower compression rate but has little performance cost, whereas zlib provides a better compression rate but has a higher performance cost.

By default, WiredTiger uses the snappy compression library. To change the compression setting, see storage.wiredTiger.collectionConfig.blockCompressor.
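
For example, a sketch of starting mongod with zlib as the default collection block compressor (the path is illustrative):

mongod --dbpath /data/db --wiredTigerCollectionBlockCompressor zlib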

WiredTiger uses prefix compression on all indexes by default.

Clock Synchronization

MongoDB components keep logical clocks to support time-dependent operations. Using NTP to synchronize host machine clocks mitigates the risk of clock drift between components. For deployments running MongoDB versions earlier than 3.4.6 or 3.2.17 with the WiredTiger storage engine, NTP synchronization is required, as clock drift could lead to checkpoint hangs. The issue was fixed in MongoDB 3.4.6+ and MongoDB 3.2.17+.
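
For example, on Linux hosts running ntpd you can verify synchronization with:

ntpq -p    # lists the NTP peers the host is synchronizing against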

Clock drift between components also increases the likelihood of incorrect or abnormal behavior of time-dependent operations like the following:

  • Two cluster members with different system clocks may return different values for operations whose return value depends on the system clock, such as Date().

  • Features which rely on timekeeping may have inconsistent or unpredictable behavior in clusters with clock drift between MongoDB components.

    For example, TTL indexes rely on the system clock to calculate when to delete a given document. If two members have different system clock times, each member could delete a given document covered by the TTL index at a different time.
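    For illustration, a TTL index created from the mongo shell (the collection, field, and expiry value are hypothetical):

    mongo --eval 'db.eventlog.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })'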

Platform Specific Considerations

MongoDB on Linux

Kernel and File Systems

When running MongoDB in production on Linux, you should use Linux kernel version 2.6.36 or later, with either the XFS or EXT4 filesystem. If possible, use XFS as it generally performs better with MongoDB.

With the WiredTiger storage engine, use of XFS is strongly recommended to avoid performance issues that may occur when using EXT4 with WiredTiger.

With the MMAPv1 storage engine, MongoDB preallocates its database files before using them and often creates large files. As such, you should use the XFS or EXT4 file systems. If possible, use XFS as it generally performs better with MongoDB.

  • In general, if you use the XFS file system, use at least version 2.6.25 of the Linux Kernel.
  • If you use the EXT4 file system, use at least version 2.6.28 of the Linux Kernel.
  • On Red Hat Enterprise Linux and CentOS, use at least version 2.6.18-194 of the Linux kernel.
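
To check the kernel version and the filesystem backing the dbPath, you can run, for example (the path is illustrative):

uname -r          # kernel version
df -T /data/db    # filesystem type backing the dbPath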

System C Library

MongoDB uses the GNU C Library (glibc) on Linux. Generally, each Linux distribution provides its own vetted version of this library. For best results, use the latest update available for this system-provided version. You can use your system's package manager to update it. For example:

  • On RHEL / CentOS, the following command updates the system-provided GNU C Library:

    sudo yum update glibc
    
  • On Ubuntu / Debian, the following command updates the system-provided GNU C Library:

    sudo apt-get install libc6
    

fsync() on Directories

Important

MongoDB requires a filesystem that supports fsync() on directories. For example, HGFS and VirtualBox's shared folders do not support this operation.

MongoDB and TLS/SSL Libraries

On Linux platforms, you may observe one of the following statements in the MongoDB log:

<path to TLS/SSL libs>/libssl.so.<version>: no version information available (required by /usr/bin/mongod)
<path to TLS/SSL libs>/libcrypto.so.<version>: no version information available (required by /usr/bin/mongod)

These warnings indicate that the system’s TLS/SSL libraries are different from the TLS/SSL libraries that the mongod was compiled against. Typically these messages do not require intervention; however, you can use the following operations to determine the symbol versions that mongod expects:

objdump -T <path to mongod>/mongod | grep " SSL_"
objdump -T <path to mongod>/mongod | grep " CRYPTO_"

These operations will return output that resembles one of the following lines:

0000000000000000      DF *UND*       0000000000000000  libssl.so.10 SSL_write
0000000000000000      DF *UND*       0000000000000000  OPENSSL_1.0.0 SSL_write

The last two strings in this output are the symbol version and symbol name. Compare these values with the values returned by the following operations to detect symbol version mismatches:

objdump -T <path to TLS/SSL libs>/libssl.so.1*
objdump -T <path to TLS/SSL libs>/libcrypto.so.1*

This procedure is neither exact nor exhaustive: many symbols used by mongod from the libcrypto library do not begin with CRYPTO_.

MongoDB on Windows

MongoDB 3.0 Using WiredTiger

For MongoDB instances using the WiredTiger storage engine, performance on Windows is comparable to performance on Linux.

MongoDB Using MMAPv1

Install Hotfix for MongoDB 2.6.6 and Later

Microsoft has released a hotfix for Windows 7 and Windows Server 2008 R2, KB2731284, that repairs a bug in these operating systems’ use of memory-mapped files that adversely affects the performance of MongoDB using the MMAPv1 storage engine.

Install this hotfix to obtain significant performance improvements on MongoDB 2.6.6 and later releases in the 2.6 series, which use MMAPv1 exclusively, and on 3.0 and later when using MMAPv1 as the storage engine.

Configure Windows Page File For MMAPv1

Configure the page file such that the minimum and maximum page file size are equal and at least 32 GB. Use a multiple of this size if, during peak usage, you expect concurrent writes to many databases or collections. However, the page file size does not need to exceed the maximum size of the database.

A large page file is needed as Windows requires enough space to accommodate all regions of memory-mapped files made writable during peak usage, regardless of whether writes actually occur.

The page file is not used for database storage and will not receive writes during normal MongoDB operation. As such, the page file will not affect performance, but it must exist and be large enough to accommodate Windows’ commitment rules during peak database use.

Note

Dynamic page file sizing is too slow to accommodate the rapidly fluctuating commit charge of an active MongoDB deployment. This can result in transient overcommitment situations that may lead to abrupt server shutdown with a VirtualProtect error 1455.

MongoDB on Virtual Environments

This section describes considerations when running MongoDB in some of the more common virtual environments.

For all platforms, consider Scheduling.

EC2

When available, enable AWS’s Enhanced Networking for your instance. Not all instance types support Enhanced Networking. Refer to the AWS documentation for more information.

Azure

Use Premium Storage. Microsoft Azure offers two general types of storage: Standard storage and Premium storage. MongoDB on Azure has better performance when using Premium storage than it does with Standard storage.

For all MMAPv1 MongoDB deployments using Azure, you must mount the volume that hosts the mongod instance’s dbPath with the Host Cache Preference READ/WRITE. This applies to all Azure deployments running MMAPv1, using any guest operating system.

If your volumes have inappropriate cache settings, MongoDB may eventually shut down with the following error:

[DataFileSync] FlushViewOfFile for <data file> failed with error 1 ...
[DataFileSync] Fatal Assertion 16387

These shutdowns do not cause data loss when storage.journal.enabled is set to true. You can safely restart mongod at any time following this event.

The performance characteristics of MongoDB may change with READ/WRITE caching enabled.

The TCP idle timeout on the Azure load balancer is 240 seconds by default, which can cause it to silently drop connections if the TCP keepalive on your Azure systems is greater than this value. You should set tcp_keepalive_time to 120 to ameliorate this problem.

Note

You will need to restart mongod and mongos processes for new system-wide keepalive settings to take effect.

  • To view the keepalive setting on Linux, use one of the following commands:

    sysctl net.ipv4.tcp_keepalive_time
    

    Or:

    cat /proc/sys/net/ipv4/tcp_keepalive_time
    

    The value is measured in seconds.

    Note

    Although the setting name includes ipv4, the tcp_keepalive_time value applies to both IPv4 and IPv6.

  • To change the tcp_keepalive_time value, you can use one of the following commands, supplying a <value> in seconds:

    sudo sysctl -w net.ipv4.tcp_keepalive_time=<value>
    

    Or:

    echo <value> | sudo tee /proc/sys/net/ipv4/tcp_keepalive_time
    

    These operations do not persist across system reboots. To persist the setting, add the following line to /etc/sysctl.conf, supplying a <value> in seconds, and reboot the machine:

    net.ipv4.tcp_keepalive_time = <value>
    

    Keepalive values greater than 300 seconds (5 minutes) will be overridden on mongod and mongos sockets and set to 300 seconds.

  • To view the keepalive setting on Windows, issue the following command:

    reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v KeepAliveTime
    

    The registry value is not present by default. The system default, used if the value is absent, is 7200000 milliseconds or 0x6ddd00 in hexadecimal.


  • To change the KeepAliveTime value, use the following command in an Administrator Command Prompt, where <value> is expressed in hexadecimal (e.g. 120000 is 0x1d4c0):

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ /t REG_DWORD /v KeepAliveTime /d <value>
    

    Windows users should consider the Windows Server Technet Article on KeepAliveTime for more information on setting keepalive for MongoDB deployments on Windows systems. Keepalive values greater than or equal to 600000 milliseconds (10 minutes) will be ignored by mongod and mongos.

VMware

MongoDB is compatible with VMware.

VMware supports memory overcommitment, where you can assign more memory to your virtual machines than the physical machine has available. When memory is overcommitted, the hypervisor reallocates memory between the virtual machines. VMware’s balloon driver (vmmemctl) reclaims the pages that are considered least valuable. The balloon driver resides inside the guest operating system. When the balloon driver expands, it may induce the guest operating system to reclaim memory from guest applications, which can interfere with MongoDB’s memory management and affect MongoDB’s performance.

You can disable the balloon driver and VMware’s memory overcommitment feature to mitigate these problems. However, disabling the balloon driver can cause the hypervisor to use its swap, as there is no other available mechanism to perform the memory reclamation. Accessing data in swap is much slower than accessing data in memory, which can in turn affect performance. Instead of disabling the balloon driver and memory overcommitment features, map and reserve the full amount of memory for the virtual machine running MongoDB. This ensures that the balloon will not be inflated in the local operating system if there is memory pressure in the hypervisor due to an overcommitted configuration.

Ensure that virtual machines stay on a specific ESX/ESXi host by setting VMware’s affinity rules. If you must manually migrate a virtual machine to another host and the mongod instance on the virtual machine is the primary, you must first step down the primary and then shut down the instance.
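
For example, a hedged sketch of stepping down and shutting down the primary from the mongo shell before a manual migration (the step-down period is arbitrary):

mongo --eval 'rs.stepDown(120)'            # step down for 120 seconds; the shell may report a dropped connection, which is expected
mongo admin --eval 'db.shutdownServer()'   # cleanly shut down the instance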

Follow the networking best practices for vMotion and the VMKernel. Failure to follow the best practices can result in performance problems and affect replica set and sharded cluster high availability mechanisms.

It is possible to clone a virtual machine running MongoDB. You might use this function to spin up a new virtual host to add as a member of a replica set. If you clone a VM with journaling enabled, the clone snapshot will be valid. If not using journaling, first stop mongod, then clone the VM, and finally, restart mongod.

KVM

MongoDB is compatible with KVM.

KVM supports memory overcommitment, where you can assign more memory to your virtual machines than the physical machine has available. When memory is overcommitted, the hypervisor reallocates memory between the virtual machines. KVM’s balloon driver reclaims the pages that are considered least valuable. The balloon driver resides inside the guest operating system. When the balloon driver expands, it may induce the guest operating system to reclaim memory from guest applications, which can interfere with MongoDB’s memory management and affect MongoDB’s performance.

You can disable the balloon driver and KVM’s memory overcommitment feature to mitigate these problems. However, disabling the balloon driver can cause the hypervisor to use its swap, as there is no other available mechanism to perform the memory reclamation. Accessing data in swap is much slower than accessing data in memory, which can in turn affect performance. Instead of disabling the balloon driver and memory overcommitment features, map and reserve the full amount of memory for the virtual machine running MongoDB. This ensures that the balloon will not be inflated in the local operating system if there is memory pressure in the hypervisor due to an overcommitted configuration.

Performance Monitoring

iostat

On Linux, use the iostat command to check if disk I/O is a bottleneck for your database. Specify a number of seconds when running iostat to avoid displaying stats covering the time since server boot.

For example, the following command will display extended statistics and the time for each displayed report, with traffic in MB/s, at one second intervals:

iostat -xmt 1

Key fields from iostat:

  • %util: this is the most useful field for a quick check; it indicates what percent of the time the device/drive is in use.
  • avgrq-sz: average request size. Smaller numbers for this value reflect more random I/O operations.

bwm-ng

bwm-ng is a command-line tool for monitoring network use. If you suspect a network-based bottleneck, you may use bwm-ng to begin your diagnostic process.

Backups

To make backups of your MongoDB database, please refer to MongoDB Backup Methods Overview.