Support large buffer sizes in Kafka Sink Layer
Description
Acceptance / Success Criteria
Lucidchart Diagrams
Activity

Chandra Gorantla July 12, 2019 at 8:31 PM (edited)
The corner case described in the earlier comment (the caveat) is also taken care of in https://github.com/OpenNMS/opennms/pull/2571/commits/69735a645a42cb5778ba0061fc58e4eb427cf16f
When a large buffer is sent, the logic now checks whether all the chunks went to the same partition; if not, all the chunks are re-sent.
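A minimal sketch of that check, assuming the standard kafka-clients producer API; the method name, retry policy, and exception handling here are illustrative, not the actual code from the PR:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ChunkPartitionCheck {

    // Send all chunks with the same key, then verify from the returned
    // RecordMetadata that every chunk landed on the same partition. If the
    // partition count changes mid-send, the key can hash to a different
    // partition, so the whole batch is re-sent.
    static void sendChunks(KafkaProducer<String, byte[]> producer, String topic,
                           String key, List<byte[]> chunks)
            throws InterruptedException, ExecutionException {
        for (int attempt = 0; attempt < 3; attempt++) {
            Set<Integer> partitions = new HashSet<>();
            for (byte[] chunk : chunks) {
                RecordMetadata meta =
                        producer.send(new ProducerRecord<>(topic, key, chunk)).get();
                partitions.add(meta.partition());
            }
            if (partitions.size() <= 1) {
                return; // all chunks on one partition; one consumer will see them all
            }
            // Chunks were split across partitions; retry the whole batch.
        }
        throw new IllegalStateException("chunks kept landing on different partitions");
    }
}
```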

Chandra Gorantla May 9, 2019 at 2:00 AM (edited)
Here is a short description of how this works:
The large message is divided into chunks with a maximum size of 900KB (configurable by setting `max.buffer.size` in the producer config). Anything larger than 900KB is split into chunks, and each chunk is sent as a separate message with the same key. Because all the messages share a key, the Kafka producer sends them to the same partition. Consumers in a group share the partitions of a topic, but each partition is consumed by only one consumer, so the whole message goes to the same consumer, which merges the chunks and processes the message once it has received all of them. A producer-side sketch follows the caveat below.
Caveat:
When multiple Sentinels are involved, a Sentinel may be added dynamically and a rebalance of partitions may happen in the middle of sending a large message. In that case the message may be dropped, because the partition holding its chunks can be assigned to a different consumer (Sentinel) after the rebalance.
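For illustration, here is a minimal, hypothetical sketch of the producer-side chunking described above. The 900KB limit and the `max.buffer.size` property name come from this comment; the topic name, key scheme, and chunk-index/chunk-count headers are assumptions, not the actual OpenNMS code:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChunkingProducerSketch {
    // 900KB per chunk, safely below the 1000012-byte broker default.
    static final int MAX_CHUNK_SIZE = 900 * 1024;

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        byte[] largeMessage = new byte[3 * 1024 * 1024]; // e.g. a 3MB payload

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            // All chunks share one key, so the default partitioner routes them
            // to the same partition and hence to the same consumer.
            String key = UUID.randomUUID().toString();
            int totalChunks = (largeMessage.length + MAX_CHUNK_SIZE - 1) / MAX_CHUNK_SIZE;
            for (int i = 0; i < totalChunks; i++) {
                int from = i * MAX_CHUNK_SIZE;
                int to = Math.min(from + MAX_CHUNK_SIZE, largeMessage.length);
                ProducerRecord<String, byte[]> record = new ProducerRecord<>(
                        "opennms-sink", key, Arrays.copyOfRange(largeMessage, from, to));
                // Hypothetical headers so the consumer knows when it has all chunks.
                record.headers().add("chunk-index",
                        Integer.toString(i).getBytes(StandardCharsets.UTF_8));
                record.headers().add("chunk-count",
                        Integer.toString(totalChunks).getBytes(StandardCharsets.UTF_8));
                producer.send(record);
            }
        }
    }
}
```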

Chandra Gorantla April 15, 2019 at 6:50 PM
This should address the issues in https://issues.opennms.org/browse/HZN-1509
Details
Assignee: Chandra Gorantla
Reporter: Chandra Gorantla
Sprint: None
Priority: Major

Kafka by default handles buffer sizes up to 1000012 bytes. Although this default limit can be raised at the broker level with `max.message.bytes`, the problem can also be solved by dividing the large buffer into chunks and handling the message at the Kafka client/server layer. This ensures that no configuration changes are required at the broker or topic level.
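As a rough sketch of the client-layer handling, the consumer side can buffer chunks by key and merge them once the full set arrives. This assumes the hypothetical chunk-count header and topic name from the producer sketch above, not the actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ChunkReassemblySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "sink-consumers");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // Chunks buffered per message key until the full set arrives. Chunks of
        // one key all land on one partition, so a single consumer sees them all.
        Map<String, List<byte[]>> pending = new HashMap<>();

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("opennms-sink"));
            while (true) {
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, byte[]> record : records) {
                    int count = Integer.parseInt(new String(
                            record.headers().lastHeader("chunk-count").value(),
                            StandardCharsets.UTF_8));
                    List<byte[]> chunks =
                            pending.computeIfAbsent(record.key(), k -> new ArrayList<>());
                    chunks.add(record.value());
                    if (chunks.size() == count) {
                        process(merge(pending.remove(record.key())));
                    }
                }
            }
        }
    }

    // Concatenate the chunks in arrival order (ordering is preserved within a
    // partition; a real implementation would sort by a chunk-index field).
    static byte[] merge(List<byte[]> chunks) {
        int total = chunks.stream().mapToInt(c -> c.length).sum();
        byte[] out = new byte[total];
        int pos = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, pos, c.length);
            pos += c.length;
        }
        return out;
    }

    static void process(byte[] message) { /* hand the merged message to the sink module */ }
}
```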