
    Kafka Sink Connector Guide

    The MongoDB Kafka Sink Connector consumes records from a Kafka topic and saves the data to a MongoDB database.

    This section of the guide covers the configuration settings necessary to set up a Kafka Sink connector.
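
    A minimal sink connector configuration, expressed as a Kafka Connect properties file, is sketched below. The topic, connection string, database, and collection values are placeholders; substitute the details of your deployment.

        # Minimal MongoDB Kafka Sink Connector configuration (placeholder values)
        name=mongodb-sink
        connector.class=com.mongodb.kafka.connect.MongoSinkConnector
        tasks.max=1
        # Kafka topic(s) to consume records from
        topics=myTopic
        # Target MongoDB deployment, database, and collection
        connection.uri=mongodb://localhost:27017
        database=myDatabase
        collection=myCollection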

    Important
    Add Indexes to Your Collections for Consistent Performance

    Writes performed by the Kafka Sink Connector must first locate the documents they modify, so they take additional time to complete as the size of the underlying MongoDB collection grows. To prevent performance deterioration, create an index that supports these lookup queries.
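
    For example, if your write model matches documents on a field other than _id (which MongoDB indexes automatically), you can create a supporting index in mongosh. The database, collection, and field names below are placeholders for illustration:

        // Placeholder database, collection, and field names
        use sales
        db.orders.createIndex({ orderNumber: 1 })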

    The Sink Connector guarantees "at-least-once" message delivery by default: if an error occurs while processing data from a topic, the connector retries the write, which means a record may be written more than once.
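
    Some connector versions expose retry tuning through sink properties like the following; these names have changed across releases, so treat this as a sketch and verify the settings against the documentation for your connector version:

        # Retry tuning (names vary by connector version; values are examples)
        max.num.retries=3
        retries.defer.timeout=5000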

    An "exactly-once" message delivery guarantee can be achieved using an idempotent operation such as insert or update. Configure the connector to ensure messages include a value for the _id field.

    Note

    You can configure the DocumentIdAdder post processor to define custom behavior for generating the value of the _id field. By default, the sink connector uses the BsonOidStrategy, which generates a new BSON ObjectId for the _id field if one does not exist.

    If you need an "exactly-once" message delivery guarantee, the default BsonOidStrategy is not sufficient, because it generates a different ObjectId each time a message is reprocessed. Instead, configure the DocumentIdAdder post processor with an id strategy that derives the _id value from the message itself, such as ProvidedInValueStrategy.
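
    A sketch of the relevant sink property follows; the strategy class path matches the connector's published id strategies, but verify it against your connector version:

        # Derive _id from the message value so reprocessing is idempotent.
        # DocumentIdAdder is part of the default post-processor chain, so
        # setting the strategy is sufficient.
        document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.ProvidedInValueStrategy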

    The sink connector does not support the "at-most-once" guarantee.
