Status: Resolved
Affects Version/s: 22.0.4
Fix Version/s: 23.0.2
Security Level: Default (Default Security Scheme)
Sprint: Horizon - November 28th 2018, Horizon - December 5th 2018, Horizon - December 12th 2018, Horizon - December 19th 2018, Horizon - January 9th 2019, Horizon - January 23rd 2019
Support larger buffer sizes in Kafka RPC, as Kafka only allows buffers up to 1MB.
To support larger buffers in RPC requests/responses, the buffer can be divided into chunks that are handled inside the OpenNMS/Minion code itself.
Also, fix logging when Kafka throws exceptions while sending messages.
Working with a customer, I found that the message limits that Kafka imposes should be increased in order to be able to discover and collect data from big devices like a Cisco Nexus.
When using standalone Kafka, the console producer shows an exception when it tries to send a big message. Unfortunately, Minion does not show that exception, which complicates troubleshooting.
When an RPC request is sent against a big switch, the Minion gathers the SNMP data and builds a response that will certainly exceed the current default limit (1MB). When it tries to send the response back to OpenNMS, the message is rejected, but the exception never appears in karaf.log. Because OpenNMS never receives the data, the request dies after the TTL expires (regardless of how large it is). This ends in nodeScanAborted or dataCollectionFailed (depending on which service triggered the request).
Here is what I did to reproduce the problem with just pure Kafka:
1) Create a single line file with 2MB of random data:
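One minimal way to generate such a file (the file name is arbitrary; base64 expands 3 input bytes to 4 output characters, so 1.5MB of random bytes becomes exactly 2MB of text):

```shell
# 1572864 random bytes * 4/3 (base64 expansion) = 2097152 bytes = 2MB,
# written as a single line (-w 0 disables line wrapping)
head -c 1572864 /dev/urandom | base64 -w 0 > big-message.txt
```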
The reason for 2MB is that the Kafka limit is 1MB, so it will complain
2) Send the message to Kafka using the console producer:
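Assuming a local broker and a topic named test (both hypothetical), the console producer shipped with Kafka 2.0 can send the file as a single message:

```shell
# --broker-list is the flag used by Kafka 2.0's console producer;
# each line of the input file becomes one message
kafka-console-producer.sh --broker-list localhost:9092 --topic test < big-message.txt
```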
The answer is:
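Based on Kafka's producer-side size check, the rejection looks approximately like the following (the exact byte count depends on the payload):

```
org.apache.kafka.common.errors.RecordTooLargeException: The message is 2097152 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
```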
Note that even with compression enabled, the error is raised. That's because Kafka checks the message size prior to compressing it.
This is the exception that will be useful to see on karaf.log on Minions.
Of course, the solution is to increase the message size limits and enable compression (which I believe should be explained in the documentation).
Here is how:
The following has been tested against Kafka 2.0.0. In theory, it should work with 1.x, but might require changes on older versions. This will increase the limits from 1MB to 5MB and enable gzip compression.
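As a sketch of the Kafka broker side, the standard Kafka 2.0 properties below raise the limit to 5MB and compress stored messages (the 5242880 value follows the 5MB figure above):

```properties
# server.properties (broker)
# Largest record batch the broker will accept: 5MB
message.max.bytes=5242880
# Replica fetchers must also be able to pull the larger records
replica.fetch.max.bytes=5242880
# Store messages gzip-compressed on the broker
compression.type=gzip
```

Client-side producers and consumers also need matching limits (max.request.size and max.partition.fetch.bytes, respectively), which is what the OpenNMS and Minion settings below take care of.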
For more details, check Kafka's documentation.
In terms of OpenNMS configuration, the change involves both producers and consumers:
For the OpenNMS server, add the following to /opt/opennms/etc/opennms.properties.d/kafka.properties
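A sketch of what kafka.properties might contain; the key prefix is an assumption based on OpenNMS's convention of forwarding prefixed properties to the underlying Kafka client:

```properties
# Assumed pass-through prefix for the RPC Kafka client (producer and consumer)
# Allow RPC messages up to 5MB and compress them
org.opennms.core.ipc.rpc.kafka.max.request.size=5242880
org.opennms.core.ipc.rpc.kafka.compression.type=gzip
org.opennms.core.ipc.rpc.kafka.max.partition.fetch.bytes=5242880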
For Minion, add the following to /opt/minion/etc/org.opennms.core.ipc.rpc.kafka.cfg
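A sketch of the Minion side, assuming the keys in this .cfg file are handed directly to the Kafka producer/consumer clients:

```properties
# Allow RPC responses up to 5MB on the producer side and compress them
max.request.size=5242880
compression.type=gzip
# Consumer side: allow fetching the larger records
max.partition.fetch.bytes=5242880
```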