OpenNMS / NMS-10446

Support large buffer sizes in Kafka RPC



    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 22.0.4
    • Fix Version/s: 23.0.2
    • Component/s: Minion
    • Security Level: Default (Default Security Scheme)
    • Labels:
    • Sprint:
      Horizon - November 28th 2018, Horizon - December 5th 2018, Horizon - December 12th 2018, Horizon - December 19th 2018, Horizon - January 9th 2019, Horizon - January 23rd 2019


      Support larger buffer sizes in Kafka RPC, as Kafka only allows messages up to 1MB by default.

      To support larger buffers in RPC requests/responses, the buffer can be divided into chunks that are handled inside the OpenNMS/Minion code itself.
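      The chunking idea can be sketched roughly as follows. This is a minimal illustration only, not the actual OpenNMS implementation; the class and method names (ChunkSketch, chunk, reassemble) are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: split an RPC payload into chunks that each fit
// under the broker's message size limit, and reassemble them on the
// receiving side. The real OpenNMS code also has to carry ordering and
// correlation metadata per chunk, which is omitted here.
public class ChunkSketch {

    public static List<byte[]> chunk(byte[] buffer, int maxChunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < buffer.length; offset += maxChunkSize) {
            int end = Math.min(buffer.length, offset + maxChunkSize);
            chunks.add(Arrays.copyOfRange(buffer, offset, end));
        }
        return chunks;
    }

    public static byte[] reassemble(List<byte[]> chunks) {
        int total = 0;
        for (byte[] c : chunks) {
            total += c.length;
        }
        byte[] out = new byte[total];
        int offset = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, offset, c.length);
            offset += c.length;
        }
        return out;
    }
}
```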

      Also, fix logging so that any exceptions Kafka throws while sending messages are reported.


      Working with a customer, I found that the message limits that Kafka has should be increased in order to be able to discover and collect data from big devices like Cisco Nexus.

      When using a standalone Karaf, it shows an exception when a producer tries to send a big message. Unfortunately, Minion does not show that exception, which complicates troubleshooting.

      When sending an RPC request against a big switch, the Minion will gather the SNMP data and build a response that will certainly exceed the current default limit (1MB). When it tries to send the response back to OpenNMS, the message is rejected, but the exception does not appear in karaf.log. Because OpenNMS never receives the data, the request dies after the TTL expires (regardless of how big it is). This ends in nodeScanAborted or dataCollectionFailed (depending on which service triggered the request).

      Here is what I did to reproduce the problem with just pure Kafka:

      1) Create a single line file with 2MB of random data:

      tr -dc A-Za-z0-9 </dev/urandom | head -c 2097152 > a.txt

      The reason for 2MB is that the Kafka limit is 1MB, so it will complain

      2) Send the message to Kafka using the console producer:

      cat a.txt | kafka-console-producer.sh --broker-list kafka:9092 --topic test01 --compression-codec 

      The answer is:

      >[2018-11-12 00:26:17,898] ERROR Error when sending message to topic test01 with key: null, value: 2097152 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
      org.apache.kafka.common.errors.RecordTooLargeException: The message is 2097240 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration. 

      Note that even with compression, the error is raised. That's because Kafka checks the message size prior to compressing it.

      This is the exception that will be useful to see on karaf.log on Minions.

      Of course, the solution is to increase the message size limits and enable compression (which I believe should be explained in the documentation).

      Here is how:

      The following has been tested against Kafka 2.0.0. In theory, it should work with 1.x, but might require changes on older versions. This will increase the limits from 1MB to 5MB and enable gzip compression.

      Broker Settings
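      The broker-side entries could look like the following sketch for server.properties, assuming Kafka 2.0.0 (the settings are real Kafka broker options; the 5MB value of 5242880 bytes is an assumption based on the stated 5MB target):

```properties
# server.properties -- raise the broker-side record limit to 5MB
message.max.bytes=5242880
# replicas must be able to fetch the larger records as well
replica.fetch.max.bytes=5242880
# keep records compressed as produced (gzip, per the producer settings)
compression.type=gzip
```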


      Producer Settings
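      On the producer side, the two relevant Kafka client options are the request size cap and the compression codec; a sketch for the same 5MB target:

```properties
# producer settings -- allow requests up to 5MB and compress them
max.request.size=5242880
compression.type=gzip
```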


      Consumer Settings
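      On the consumer side, the per-partition fetch limit must be raised so the larger records can be read back; a sketch for the same 5MB target:

```properties
# consumer settings -- allow fetching records up to 5MB per partition
max.partition.fetch.bytes=5242880
```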


       For more details, check Kafka's documentation.

       In terms of OpenNMS configuration, which involves producers and consumers:

       For the OpenNMS server, add the following to /opt/opennms/etc/opennms.properties.d/kafka.properties

      # Producer
      # Consumer
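      As a sketch of what those entries could look like, assuming OpenNMS forwards Kafka client options that carry the org.opennms.core.ipc.rpc.kafka. prefix (the prefix convention and the 5MB value are assumptions here):

```properties
# Producer
org.opennms.core.ipc.rpc.kafka.max.request.size=5242880
org.opennms.core.ipc.rpc.kafka.compression.type=gzip
# Consumer
org.opennms.core.ipc.rpc.kafka.max.partition.fetch.bytes=5242880
```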

      For Minion, add the following to /opt/minion/etc/org.opennms.core.ipc.rpc.kafka.cfg

      # Producer
      # Consumer
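      A sketch of the Minion side, assuming the cfg file accepts raw Kafka client properties (that pass-through behavior and the 5MB value are assumptions):

```properties
# Producer
max.request.size=5242880
compression.type=gzip
# Consumer
max.partition.fetch.bytes=5242880
```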





            • Assignee:
              cgorantla Chandra Gorantla
            • Reporter:
              agalue Alejandro Galue
            • Votes:
              0
            • Watchers:
              2
