Kafka Sink Connector Guide
The MongoDB Kafka Sink Connector consumes records from a Kafka topic and saves the data to a MongoDB database.
This section of the guide covers the configuration settings necessary to set up a Kafka Sink connector.
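As a starting point, a minimal sink connector configuration might look like the following. The topic, database, collection, and connection URI values are placeholders; the property names and the `MongoSinkConnector` class are the connector's standard settings.

```properties
# Minimal MongoDB Kafka Sink Connector configuration (placeholder values).
name=mongodb-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1

# Kafka topic to consume from, and the target MongoDB namespace.
topics=orders
connection.uri=mongodb://localhost:27017
database=sales
collection=orders

# Converters for the Kafka record key and value.
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```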
Writes performed by the Kafka Sink Connector take additional time to complete as the underlying MongoDB collection grows, because update and replace operations must first locate the matching document. To prevent this performance deterioration, create an index that supports these lookups.
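For example, if the connector matches documents on a hypothetical `orderId` field, an index on that field keeps the lookup fast. A sketch in the mongosh shell, assuming a `sales.orders` target collection:

```javascript
// Hypothetical example: index the field the sink connector matches on
// when performing update or replace writes.
use sales
db.orders.createIndex({ orderId: 1 })
```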
Message Delivery Guarantee
The Sink Connector guarantees "at-least-once" message delivery by default. If there is an error while processing data from a topic, the connector retries the write.
If you need an "exactly-once" message delivery guarantee, configure the connector to perform an idempotent write operation, such as an insert or an update, and ensure that messages include a value for the _id field. For example, you can specify the DocumentIdAdder post processor to add a value for the _id field to each document.
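The post-processor configuration might be sketched as follows; the class names below match the connector's `DocumentIdAdder` post processor and its default `BsonOidStrategy` id strategy, but verify them against the version of the connector you run.

```properties
# Sketch: run the DocumentIdAdder post processor so every sink document
# carries an _id value before it is written.
post.processor.chain=com.mongodb.kafka.connect.sink.processor.DocumentIdAdder

# Optional: control how the _id value is generated (BsonOidStrategy is the default).
document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.BsonOidStrategy
```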
The sink connector does not support the "at-most-once" guarantee.