Support large buffer sizes in Kafka RPC

Description

Support larger buffer sizes in Kafka RPC, as Kafka only allows buffers up to 1 MB by default.

To support larger buffers in RPC requests/responses, the buffer can be divided into chunks, which are handled inside the OpenNMS/Minion code itself.

Also fix logging when Kafka throws exceptions while sending messages.

=============================================================================================================================================

Working with a customer, I figured out that the message limits that Kafka has should be increased in order to be able to discover and collect data from big devices like a Cisco Nexus.

When using a standalone Karaf, an exception is shown when a producer tries to send a big message. Unfortunately, Minion does not show that exception, which complicates troubleshooting.

When sending an RPC request against a big switch, the Minion will gather the SNMP data and build a response that will certainly be over the current default limit (1 MB). When it tries to send the response back to OpenNMS, the message is rejected, but the exception does not appear in karaf.log. Because OpenNMS never receives the data, the request dies after the TTL expires (regardless of how big it is). This ends in nodeScanAborted or dataCollectionFailed (depending on which service triggered the request).

Here is what I did to reproduce the problem with just pure Kafka:

1) Create a single line file with 2MB of random data:

The reason for 2 MB is that the Kafka limit is 1 MB, so Kafka will complain.
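The original command was not preserved in this ticket; a sketch of one way to generate such a file (the file name and exact byte count are illustrative):

```shell
# Generate 2 MB (2097152 bytes) of base64-encoded random data on a single line
base64 /dev/urandom | tr -d '\n' | head -c 2097152 > big-message.txt
```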

2) Send the message to Kafka using the console producer:
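The invocation presumably looked something like the following (the broker address and topic name are placeholders; Kafka 2.0's console producer uses the --broker-list flag):

```shell
# Pipe the 2 MB single-line file into the console producer;
# each line becomes one record, so this sends one 2 MB message
kafka-console-producer.sh --broker-list localhost:9092 --topic test < big-message.txt
```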

The answer is:
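The producer fails with a RecordTooLargeException; the message reads roughly as follows (exact wording and the reported size vary by Kafka version; <N> stands for the serialized record size):

```
org.apache.kafka.common.errors.RecordTooLargeException: The message is <N> bytes
when serialized which is larger than the maximum request size you have configured
with the max.request.size configuration.
```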

Note that even with compression enabled, the error is raised. That's because Kafka checks the message size prior to compressing it.

This is the exception that would be useful to see in karaf.log on Minions.

Of course, the solution is increasing the message sizes and enabling compression (which I believe should be explained in the documentation).

Here is how:

The following has been tested against Kafka 2.0.0. In theory, it should work with 1.x, but it might require changes on older versions. This will increase the limits from 1 MB to 5 MB and enable gzip compression.

Broker Settings
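The settings themselves were stripped from this export; a plausible reconstruction for server.properties (5242880 bytes = 5 MB), using standard Kafka broker property names:

```properties
# Maximum record batch size the broker will accept
message.max.bytes=5242880
# Replication must also be able to fetch the larger messages
replica.fetch.max.bytes=5242880
# Store messages compressed with gzip
compression.type=gzip
```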

Producer Settings
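Likewise for the producer side (standard Kafka producer property names; the values mirror the 5 MB broker limit):

```properties
# Maximum size of a request the producer will send
max.request.size=5242880
# Compress records before sending
compression.type=gzip
```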

Consumer Settings
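And for the consumer side (standard Kafka consumer property names):

```properties
# Maximum amount of data the broker returns per partition (default is 1 MB)
max.partition.fetch.bytes=5242880
# Upper bound on the total data returned for a fetch request
fetch.max.bytes=5242880
```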

For more details, check Kafka's documentation.

In terms of OpenNMS configuration, which involves both producers and consumers:

For the OpenNMS server, add the following to /opt/opennms/etc/opennms.properties.d/kafka.properties:
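The exact keys were not preserved in this export; assuming the Kafka RPC client passes properties through to the underlying Kafka clients under the org.opennms.core.ipc.rpc.kafka prefix, the file would look something like:

```properties
# /opt/opennms/etc/opennms.properties.d/kafka.properties (assumed key prefix)
org.opennms.core.ipc.rpc.kafka.max.request.size=5242880
org.opennms.core.ipc.rpc.kafka.compression.type=gzip
```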

For Minion, add the following to /opt/minion/etc/org.opennms.core.ipc.rpc.kafka.cfg:
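Again a sketch, assuming keys in this cfg file are passed directly to the Kafka clients as plain client properties:

```properties
# /opt/minion/etc/org.opennms.core.ipc.rpc.kafka.cfg (assumed pass-through keys)
max.request.size=5242880
compression.type=gzip
```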

 

Acceptance / Success Criteria

None


Activity


Chandra Gorantla December 3, 2018 at 3:40 PM

Alejandro Galue November 13, 2018 at 12:31 PM

Added the support label as this problem was discovered during consulting.

Fixed

Details

Assignee

Reporter

Labels

Components

Sprint

Fix versions

Affects versions

Priority

PagerDuty

Created November 13, 2018 at 12:30 PM
Updated January 16, 2019 at 10:17 PM
Resolved January 16, 2019 at 10:17 PM