Hi all! We ran into this problem while trying to set up a containerized OpenNMS in an environment intended primarily for syslog-based fault management. In an existing deployment we currently collect around 20k syslog messages per minute, and since that OpenNMS instance consumes ~6 cores and 12 GB of RAM, we configured the new OpenNMS with 8 cores and 16 GB of RAM (we expect a similar message volume).
Nevertheless, the new instance was unable to keep up with that syslog volume. Instead, messages accumulated in ActiveMQ, causing an ever-increasing delay in their handling.
After some tuning and investigation, we think the problem is related to the number of dispatcher threads (OpenNMS.Sink.AsyncDispatcher.Syslog threads in this case) that continually dequeue from the message queue. We observed that the number of these threads defaults to 2x<#Cores> (16 threads in our case, as we assigned a limit of 8 cores), and we believe this number cannot be overridden through any configuration.
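If our reading is correct, the default pool size is derived from the number of processors the JVM can see, which inside a container is the CPU limit. The following sketch illustrates that relationship; the class and method names are ours, not actual OpenNMS code:

```java
public class DispatcherThreadsSketch {

    // Hypothetical reconstruction of the default we observed:
    // the syslog dispatcher pool appears to be sized at 2x the
    // number of CPUs visible to the JVM.
    static int defaultDispatcherThreads(int visibleCores) {
        return 2 * visibleCores;
    }

    public static void main(String[] args) {
        // In a container, availableProcessors() reflects the CPU limit,
        // not the physical core count of the host.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Visible cores: " + cores
                + " -> dispatcher threads: " + defaultDispatcherThreads(cores));
    }
}
```

This would explain why raising the container's CPU limit, and nothing else, changed the thread count: the JVM simply reports more processors.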
We compared this thread count with our other, working deployment and found that it runs with a CPU limit of 32 cores (resulting in 64 dispatcher threads). When we raised the CPU limit in the new environment to 32 cores as well, OpenNMS started responding properly, and we are now processing ~15k syslog messages per minute without problems.
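For reference, the workaround amounts to raising the container's CPU limit, e.g. in a Compose file along these lines (service name, image, and values are illustrative, not our exact deployment):

```yaml
services:
  opennms:
    image: opennms/horizon   # image name illustrative
    deploy:
      resources:
        limits:
          cpus: "32"    # raised from 8 so the JVM sees 32 cores -> 64 dispatcher threads
          memory: 16g
```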
We believe, however, that this setting should be exposed in the OpenNMS XML configuration files rather than controlled as a side effect of the CPU limit. After a week of monitoring, the instance has never consumed more than 8 cores. Being able to configure the number of syslog dispatcher threads without raising the application's resource limits matters, since OpenNMS will likely not be the only service running on its host.