This version of the documentation is archived and no longer supported.

aggregate

Performs an aggregation operation using the aggregation pipeline. The pipeline allows users to process data from a collection with a sequence of stage-based manipulations.

The command has the following syntax:

Changed in version 3.2.

{
  aggregate: "<collection>",
  pipeline: [ <stage>, <...> ],
  explain: <boolean>,
  allowDiskUse: <boolean>,
  cursor: <document>,
  bypassDocumentValidation: <boolean>,
  readConcern: <document>
}

The aggregate command takes the following fields as arguments:

Field Type Description
aggregate string The name of the collection to use as the input for the aggregation pipeline.
pipeline array An array of aggregation pipeline stages that process and transform the document stream as part of the aggregation pipeline.
explain boolean

Optional. Specifies whether to return information on the processing of the pipeline.

New in version 2.6.

allowDiskUse boolean

Optional. Enables writing to temporary files. When set to true, aggregation stages can write data to the _tmp subdirectory in the dbPath directory.

New in version 2.6.

cursor document

Optional. Specifies a document that contains options that control the creation of the cursor object.

New in version 2.6.

maxTimeMS non-negative integer

Optional. The maximum time in milliseconds that the getMore() operation blocks while waiting for new data to be inserted into the capped collection.

Requires that the cursor on which this getMore() is acting is an awaitData cursor. See the awaitData parameter for find().

bypassDocumentValidation boolean

Optional. Available only if you specify the $out aggregation operator.

Enables aggregate to bypass document validation during the operation. This lets you insert documents that do not meet the validation requirements.

New in version 3.2.

readConcern document

Optional. Specifies the read concern. The default level is "local".

To use a read concern level of "majority", you must use the WiredTiger storage engine and start the mongod instances with the --enableMajorityReadConcern command line option (or the replication.enableMajorityReadConcern setting if using a configuration file).

Only replica sets using protocol version 1 support "majority" read concern. Replica sets running protocol version 0 do not support "majority" read concern.

To ensure that a single thread can read its own writes, use "majority" read concern and "majority" write concern against the primary of the replica set.

To use a read concern level of "majority", you cannot include the $out stage.

New in version 3.2.

Changed in version 2.6: the aggregation pipeline introduces the $out operator, which allows the aggregate command to store results in a collection.

For more information about the aggregation pipeline, see Aggregation Pipeline, Aggregation Reference, and Aggregation Pipeline Limits.

Example

Aggregate Data with Multi-Stage Pipeline

A collection articles contains documents such as the following:

{
   _id: ObjectId("52769ea0f3dc6ead47c9a1b2"),
   author: "abc123",
   title: "zzz",
   tags: [ "programming", "database", "mongodb" ]
}

The following example performs an aggregate operation on the articles collection to calculate the count of each distinct element in the tags array that appears in the collection.

db.runCommand(
   { aggregate: "articles",
     pipeline: [
                 { $project: { tags: 1 } },
                 { $unwind: "$tags" },
                 { $group: {
                             _id: "$tags",
                             count: { $sum : 1 }
                           }
                 }
               ]
   }
)

In the mongo shell, this operation can use the aggregate() helper as in the following:

db.articles.aggregate(
                       [
                          { $project: { tags: 1 } },
                          { $unwind: "$tags" },
                          { $group: {
                                      _id: "$tags",
                                      count: { $sum : 1 }
                                    }
                          }
                       ]
)
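
The effect of this three-stage pipeline can be sketched in plain JavaScript. This is a simulation for illustration only (the server evaluates pipeline stages natively), and the sample documents here are invented for the sketch:

```javascript
// Plain-JavaScript simulation of the $project -> $unwind -> $group pipeline.
// Sample input documents (hypothetical, for illustration).
const docs = [
  { _id: 1, author: "abc123", title: "zzz", tags: ["programming", "database", "mongodb"] },
  { _id: 2, author: "xyz789", title: "aaa", tags: ["database", "mongodb"] }
];

// $project: { tags: 1 } keeps only _id and the tags field.
const projected = docs.map(d => ({ _id: d._id, tags: d.tags }));

// $unwind: "$tags" emits one document per array element.
const unwound = projected.flatMap(d => d.tags.map(t => ({ _id: d._id, tags: t })));

// $group: { _id: "$tags", count: { $sum: 1 } } counts each distinct tag.
const counts = {};
for (const d of unwound) {
  counts[d.tags] = (counts[d.tags] || 0) + 1;
}

console.log(counts); // { programming: 1, database: 2, mongodb: 2 }
```

The simulation makes the data flow explicit: projection narrows each document, unwinding multiplies documents by array length, and grouping folds the stream back down to one document per distinct tag.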

Note

In 2.6 and later, the aggregate() helper always returns a cursor.

If an error occurs, the aggregate() helper throws an exception.

Return Information on the Aggregation Operation

The following aggregation operation sets the optional field explain to true to return information about the aggregation operation.

db.runCommand( { aggregate: "orders",
                 pipeline: [
                             { $match: { status: "A" } },
                             { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
                             { $sort: { total: -1 } }
                           ],
                 explain: true
              } )

Note

The intended readers of the explain output document are humans, not machines, and the output format is subject to change between releases.

See also

db.collection.aggregate() method

Aggregate Data using External Sort

Aggregation pipeline stages have a maximum memory use limit. To handle large datasets, set the allowDiskUse option to true to enable writing data to temporary files, as in the following example:

db.runCommand(
   { aggregate: "stocks",
     pipeline: [
                 { $project : { cusip: 1, date: 1, price: 1, _id: 0 } },
                 { $sort : { cusip : 1, date: 1 } }
               ],
     allowDiskUse: true
   }
)
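
The idea behind spilling a large sort to disk is an external (chunked) sort: sort pieces that fit in memory, then merge the sorted runs. The following in-memory sketch illustrates the technique; it is not how the server implements allowDiskUse, which writes runs to temporary files under dbPath/_tmp:

```javascript
// Sketch of an external sort: sort fixed-size chunks, then k-way merge them.
// In-memory simulation of the technique allowDiskUse enables on the server.
function externalSort(values, chunkSize) {
  // Phase 1: sort each chunk independently (each chunk fits the memory limit).
  const runs = [];
  for (let i = 0; i < values.length; i += chunkSize) {
    runs.push(values.slice(i, i + chunkSize).sort((a, b) => a - b));
  }
  // Phase 2: k-way merge of the sorted runs, always taking the smallest head.
  const out = [];
  const idx = runs.map(() => 0);
  while (true) {
    let best = -1;
    for (let r = 0; r < runs.length; r++) {
      if (idx[r] < runs[r].length &&
          (best === -1 || runs[r][idx[r]] < runs[best][idx[best]])) {
        best = r;
      }
    }
    if (best === -1) break;
    out.push(runs[best][idx[best]++]);
  }
  return out;
}

console.log(externalSort([5, 3, 8, 1, 9, 2, 7], 3)); // [1, 2, 3, 5, 7, 8, 9]
```

The merge phase only ever holds one element per run in memory, which is why the approach scales to datasets much larger than the per-stage memory limit.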

Aggregate Command Returns a Cursor

Note

Using the aggregate command to return a cursor is a low-level operation, intended for authors of drivers. Most users should use the db.collection.aggregate() helper provided in the mongo shell or in their driver. In 2.6 and later, the aggregate() helper always returns a cursor.

The following command returns a document that contains results with which to instantiate a cursor object.

db.runCommand(
   { aggregate: "records",
     pipeline: [
        { $project: { name: 1, email: 1, _id: 0 } },
        { $sort: { name: 1 } }
     ],
     cursor: { }
   }
)

To specify an initial batch size, specify the batchSize in the cursor field, as in the following example:

db.runCommand(
   { aggregate: "records",
     pipeline: [
        { $project: { name: 1, email: 1, _id: 0 } },
        { $sort: { name: 1 } }
     ],
     cursor: { batchSize: 0 }
   }
)

The { batchSize: 0 } document specifies the initial batch size only. Specify subsequent batch sizes with OP_GET_MORE operations, as with other MongoDB cursors. A batchSize of 0 means an empty first batch and is useful if you want to quickly get back a cursor or an error message without doing significant server-side work.
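
The batching behavior described above can be sketched as follows. This is a client-side simulation for illustration, not a driver or wire-protocol implementation:

```javascript
// Sketch of cursor batching: the initial reply carries up to
// initialBatchSize documents (possibly zero), and each subsequent
// fetch (OP_GET_MORE) returns up to batchSize documents.
function* batches(results, initialBatchSize, batchSize) {
  yield results.slice(0, initialBatchSize);       // first reply: may be empty
  for (let i = initialBatchSize; i < results.length; i += batchSize) {
    yield results.slice(i, i + batchSize);        // each getMore returns a batch
  }
}

const all = [...batches([1, 2, 3, 4, 5], 0, 2)];
console.log(all); // [ [], [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```

With an initial batch size of 0 the first reply is empty, which is why the command can return the cursor (or an error) quickly before any significant server-side work happens.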

Override Default Read Concern

To override the default read concern level of "local", use the readConcern option.

The following operation on a replica set specifies a read concern of "majority" to read the most recent copy of the data confirmed as having been written to a majority of the nodes.


db.runCommand(
   {
      aggregate: "orders",
      pipeline: [ { $match: { status: "A" } } ],
      readConcern: { level: "majority" }
   }
)

To ensure that a single thread can read its own writes, use "majority" read concern and "majority" write concern against the primary of the replica set.

The getMore command uses the readConcern level specified in the originating aggregate command.