Release Notes for MongoDB 3.0
March 3, 2015
MongoDB 3.0 is now available. Key features include support for the
WiredTiger storage engine, a pluggable storage engine API, the
SCRAM-SHA-1 authentication mechanism, and improved explain functionality.
3.0.14 – Nov 4, 2016
- Incorrect memory access on 3.0.13 triggers segmentation fault: SERVER-26889
3.0.13 – Oct 31, 2016
3.0.12 – May 9, 2016
- Background index build may result in extra index key entries that do not correspond to indexed documents: SERVER-22970
- Documents that contain embedded null characters can be created: SERVER-7005
- IX GlobalLock being held while waiting for WiredTiger cache eviction: SERVER-22964
- All issues closed in 3.0.12
3.0.11 – Mar 31, 2016
- For MongoDB 3.0.9 and MongoDB 3.0.10, during chunk migration, insert and update operations to documents in the migrating chunk are not reflected in the destination shard: SERVER-23425
3.0.10 – Mar 8, 2016
- Read preference of `secondaryPreferred` can end up using unversioned connections: SERVER-18671
- For MMAPv1 journaling, the "last sequence number" (`lsn`) file may be ahead of what is synced to the data files: SERVER-22261
- Data size change for oplog deletes can overflow 32-bit int: SERVER-22634
- High fragmentation on WiredTiger databases under write workloads: SERVER-22898
- All issues closed in 3.0.10
3.0.9 – Jan 26, 2016
- Queries that specify sort and batch size can return results out of order if documents are concurrently updated: SERVER-19996
- Large numbers of collection creates and drops can cause `listDatabases` to be slow under WiredTiger: SERVER-20961
- Authentication failure message includes the server IP address instead of the client IP address: SERVER-22054
- All issues closed in 3.0.9
3.0.8 – Dec 15, 2015
3.0.7 – Oct 13, 2015
- WiredTiger memory handling and performance issues: SERVER-20159, SERVER-20204, SERVER-20091, and SERVER-20176
- Reconfig during a pending step down may prevent a primary from stepping down: SERVER-20262
- Built-in roles require additional privileges: SERVER-19131, SERVER-15893, and SERVER-13647
- All issues closed in 3.0.7
3.0.6 – August 24, 2015
3.0.5 – July 28, 2015
Issues fixed and improvements:
- Improvements to WiredTiger for capped collections and replication (SERVER-19178, SERVER-18875, and SERVER-19513)
- Additional WiredTiger performance improvements (SERVER-19189) and improvements related to cache and session use (SERVER-18829 and SERVER-17836)
- Performance improvements for longer-running queries
- All issues closed in 3.0.5
3.0.4 – June 16, 2015
- Missed writes with concurrent inserts during chunk migration from shards with WiredTiger primaries: SERVER-18822
- Write conflicts between multi-update operations with `upsert=true` under the WiredTiger storage engine: SERVER-18213
- Secondary reads could block replication: SERVER-18190
- Performance on Windows with WiredTiger and documents larger than 16 KB: SERVER-18079
- WiredTiger data files are not correctly recovered following unexpected system restarts: SERVER-18316
- All issues closed in 3.0.4
3.0.3 – May 12, 2015
- Deprecate `db.eval()` and add warnings: SERVER-17453
- Potential for abrupt termination with the Windows service stop operation: SERVER-17802
- Crash caused by an update with a key too large to index on the WiredTiger and RocksDB storage engines: SERVER-17882
- Inconsistent support for
- All issues closed in 3.0.3
3.0.2 – April 9, 2015
- Inefficient query plans for `mongod` during repair operations with WiredTiger: SERVER-17652 and SERVER-17729
- Invalid compression stream error with WiredTiger and `zlib` block compression: SERVER-17713
- Memory use issue for inserts into large indexed arrays: SERVER-17616
- All issues closed in 3.0.2
3.0.1 – March 17, 2015
- Race condition in WiredTiger between inserts and checkpoints that could result in lost records: SERVER-17506
- WiredTiger's capped collections implementation causes a server crash: SERVER-17345
- Initial sync with duplicate
- Deadlock condition in MMAPv1 between the journal lock and the oplog collection lock: SERVER-17416
- All issues closed in 3.0.1
Pluggable Storage Engine API
MongoDB 3.0 introduces a pluggable storage engine API that allows third parties to develop storage engines for MongoDB.
MongoDB 3.0 introduces support for the WiredTiger storage engine. With the support for WiredTiger, MongoDB now supports two storage engines:
- MMAPv1, the storage engine available in previous versions of MongoDB and the default storage engine for MongoDB 3.0, and
- WiredTiger, available only in the 64-bit versions of MongoDB 3.0.
WiredTiger is an alternative to the default MMAPv1 storage engine. WiredTiger supports all MongoDB features, including operations that report on server, database, and collection statistics. Switching to WiredTiger, however, requires a change to the on-disk storage format. For instructions on changing the storage engine to WiredTiger, see the appropriate sections in the Upgrade MongoDB to 3.0 documentation.
MongoDB 3.0 replica sets and sharded clusters can have members with different storage engines; however, performance can vary according to workload. For details, see the appropriate sections in the Upgrade MongoDB to 3.0 documentation.
The WiredTiger storage engine requires the latest official MongoDB drivers. For more information, see WiredTiger and Driver Version Compatibility.
To configure the behavior and properties of the WiredTiger storage engine, use the `storage.wiredTiger` configuration options. You can also set WiredTiger options on the command line.
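As an illustrative sketch, a configuration file enabling WiredTiger might look like the following (the path and cache size are example values, not recommendations):

```yaml
storage:
  dbPath: /var/lib/mongodb        # example data directory
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4              # example cache size
    collectionConfig:
      blockCompressor: snappy     # snappy is the default block compressor
```

The equivalent minimal command-line form is `mongod --storageEngine wiredTiger`.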
WiredTiger Concurrency and Compression
The 3.0 WiredTiger storage engine provides document-level locking and compression.
MMAPv1 Concurrency Improvement
In version 3.0, the MMAPv1 storage engine adds support for collection-level locking.
MMAPv1 Configuration Changes
To support multiple storage engines, some configuration settings for MMAPv1 have changed. See Configuration File Options Changes.
MMAPv1 Record Allocation Behavior Changes
MongoDB 3.0 no longer implements dynamic record allocation and deprecates `paddingFactor`. The default allocation strategy for collections in instances that use MMAPv1 is power-of-2 allocation, which has been improved to better handle large document sizes. In 3.0, the `usePowerOf2Sizes` flag is ignored, so the power-of-2 strategy is used for all collections that do not have the `noPadding` flag set.
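As a rough illustration of the power-of-2 idea (a simplified sketch, not MongoDB's exact allocator, which handles minimum sizes and very large documents differently), record sizes are rounded up to the next power of two:

```javascript
// Simplified sketch of power-of-2 record allocation.
// Assumes a hypothetical 32-byte minimum bucket for illustration.
function powerOfTwoSize(docSize) {
  let size = 32;
  while (size < docSize) {
    size *= 2; // round up to the next power of two
  }
  return size; // allocated record size; the slack is padding for in-place growth
}
```

For example, a 300-byte document would get a 512-byte record under this scheme, leaving room for the document to grow without being moved.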
For collections with workloads that consist only of inserts or in-place updates (such as incrementing counters), you can disable the power-of-2 strategy. To disable the power-of-2 strategy for a collection, use the `collMod` command with the `noPadding` flag or the `db.createCollection()` method with the `noPadding` option.
Do not set `noPadding` if the workload includes removes or any updates that may cause documents to grow. For more information, see No Padding Allocation Strategy.
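For example, in the mongo shell (using a hypothetical `counters` collection; both commands require a `mongod` running MMAPv1):

```javascript
// Create a collection with padding disabled
// (suitable for insert-only or in-place-update workloads).
db.createCollection("counters", { noPadding: true })

// Or disable padding on an existing collection.
db.runCommand({ collMod: "counters", noPadding: true })
```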
When low on disk space, MongoDB 3.0 no longer errors on all writes but only when the required disk allocation fails. As such, MongoDB now allows in-place updates and removes when low on disk space.
Increased Number of Replica Set Members
In MongoDB 3.0, replica sets can have up to 50 members. The following drivers support the larger replica sets:
- C# (.NET) Driver 1.10
- Java Driver 2.13
- Python Driver (PyMongo) 3.0
- Ruby Driver 2.0
- Node.JS Driver 2.0
The C, C++, Perl, and PHP drivers, as well as the earlier versions of the Ruby, Python, and Node.JS drivers, discover and monitor replica set members serially, and thus are not suitable for use with large replica sets.
Note: The maximum number of voting members remains at 7.
Replica Set Step Down Behavior Changes
- Before stepping down, `replSetStepDown` will attempt to terminate long-running user operations that would block the primary from stepping down, such as an index build, a write operation, or a map-reduce job.
- To help prevent rollbacks, `replSetStepDown` will wait for an electable secondary to catch up to the state of the primary before stepping down. Previously, a primary would wait for a secondary to catch up to within 10 seconds of the primary (i.e., a secondary with a replication lag of 10 seconds or less) before stepping down.
- `replSetStepDown` now accepts a `secondaryCatchUpPeriodSecs` parameter that specifies how long the primary should wait for a secondary to catch up before stepping down.
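For example, run against a primary in the mongo shell (the values here are arbitrary):

```javascript
// Step down for 60 seconds, waiting up to 10 seconds for an
// electable secondary to catch up before actually stepping down.
db.adminCommand({ replSetStepDown: 60, secondaryCatchUpPeriodSecs: 10 })
```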
Other Replica Set Operational Changes
- Initial sync builds indexes more efficiently for each collection and applies oplog entries in batches using threads.
- The definition of the w: "majority" write concern has changed to mean a majority of voting nodes.
- Stronger restrictions on replica set configuration. For details, see Replica Set Configuration Validation.
- For pre-existing collections on secondary members, MongoDB 3.0 no longer automatically builds missing _id indexes.
MongoDB 3.0 includes the following security enhancements:
- MongoDB 3.0 adds a new SCRAM-SHA-1 challenge-response user authentication mechanism. SCRAM-SHA-1 requires a driver upgrade if your current driver version does not support SCRAM-SHA-1. For the driver versions that support SCRAM-SHA-1, see Upgrade Drivers.
- Increases restrictions when using the Localhost Exception to access MongoDB. For details, see Localhost Exception Changed.
New Query Introspection System
MongoDB 3.0 includes a new query introspection system that provides an improved output format and a finer-grained introspection into both query plan and query execution.
For information on the format of the new output, see Explain Results.
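For instance, the new interface can be invoked from the mongo shell (the collection and query here are hypothetical):

```javascript
// "executionStats" is one of the new verbosity modes
// ("queryPlanner", "executionStats", "allPlansExecution").
db.orders.find({ status: "A" }).explain("executionStats")
```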
To improve usability of the log messages for diagnosis, MongoDB categorizes some log messages under specific components, or operations, and provides the ability to set the verbosity level for these components. For information, see Log Messages.
MongoDB Tools Enhancements
- New options for parallelized `mongorestore`. You can control the number of collections that `mongorestore` will restore at a time with the `--numParallelCollections` option.
- New options for `mongodump` to exclude collections: `--excludeCollection` and `--excludeCollectionsWithPrefix`.
- `mongorestore` can now accept BSON data input from standard input in addition to reading BSON data from file.
- `mongotop` can now return output in JSON format with the `--json` option.
- Added configurable write concern to `mongofiles`. Use the `--writeConcern` option. The default write concern has been changed to w: "majority".
- `mongofiles` now allows you to configure the GridFS prefix with the `--prefix` option so that you can use custom namespaces and store multiple GridFS namespaces in a single database.
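As a sketch of how some of these options combine on the command line (the paths, database, and collection names are hypothetical):

```shell
# Restore four collections at a time from a dump directory.
mongorestore --numParallelCollections=4 /backups/dump

# Dump a database while excluding one collection and a prefix group.
mongodump --db=app --excludeCollection=logs --excludeCollectionsWithPrefix=tmp_

# Emit mongotop samples as JSON.
mongotop --json
```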
- Background index builds will no longer be automatically interrupted if `dropIndexes` operations occur for the database or collection affected by the index builds. The `dropIndexes` command will still fail with the error message `a background operation is currently running`, as in 2.6.
- If you specify multiple indexes to the `createIndexes` command,
  - the command only scans the collection once, and
  - if at least one index is to be built in the foreground, the operation will build all the specified indexes in the foreground.
- For sharded collections, indexes can now cover queries that execute against `mongos` if the index includes the shard key.
MongoDB 3.0 includes the following query enhancements:
- For geospatial queries, adds support for "big" polygons for `$geoWithin` queries. "Big" polygons are single-ringed GeoJSON polygons with areas greater than that of a single hemisphere.
- For `aggregate()`, adds a new `$dateToString` operator to facilitate converting a date to a formatted string.
- Adds the `$eq` query operator to query for equality conditions.
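For example, in the mongo shell (using a hypothetical `orders` collection with a `created` date field):

```javascript
db.orders.aggregate([
  { $match: { qty: { $eq: 25 } } },  // new $eq operator
  { $project: {
      day: { $dateToString: { format: "%Y-%m-%d", date: "$created" } }
  } }
])
```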
Distributions and Supported Versions
Most non-Enterprise MongoDB distributions now include support for TLS/SSL. Previously, only MongoDB Enterprise distributions came with TLS/SSL support included; for non-Enterprise distributions, you had to build MongoDB locally with the `--ssl` flag (i.e., `scons --ssl`).
32-bit MongoDB builds are available for testing, but are not for production use. 32-bit MongoDB builds do not include the WiredTiger storage engine.
MongoDB builds for Solaris do not support the WiredTiger storage engine.
MongoDB builds are available for Windows Server 2003 and Windows Vista (as “64-bit Legacy”), but the minimum officially supported Windows version is Windows Server 2008.
MongoDB Enterprise Features
Auditing in MongoDB Enterprise can filter on any field in the audit message, including the fields returned in the `param` document. This enhancement, along with the `auditAuthorizationSuccess` parameter, enables auditing to filter on CRUD operations. However, enabling `auditAuthorizationSuccess` to audit all authorization successes degrades performance more than auditing only the authorization failures.
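A sketch of what such a deployment might look like (Enterprise-only options; the filter document, namespace, and paths are hypothetical):

```shell
# Audit authorization checks, including successes, for one namespace.
mongod --dbpath /data/db \
       --auditDestination file --auditFormat JSON \
       --auditPath /var/log/mongodb/audit.json \
       --auditFilter '{ atype: "authCheck", "param.ns": "app.orders" }' \
       --setParameter auditAuthorizationSuccess=true
```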