OpenNMS / NMS-9318

Traps failing to be ingested by Elasticsearch


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Configuration
    • Affects Version/s: 19.0.1
    • Fix Version/s: None
    • Security Level: Default (Default Security Scheme)
    • Labels: None
    • Environment:
      RHEL 6.6

      Description

      For some traps, we are getting the following error for OpenNMS events forwarded to Elasticsearch:
      java.lang.IllegalArgumentException: Limit of total fields [1000] in index [opennms-events-raw-2017.04] has been exceeded
      at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:576) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:409) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:327) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:260) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:311) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.cluster.service.ClusterService.executeTasks(ClusterService.java:674) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs(ClusterService.java:653) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:612) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) ~[elasticsearch-5.2.1.jar:5.2.1]
      at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) ~[elasticsearch-5.2.1.jar:5.2.1]
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_73]
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_73]
      at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]

      Events that exceed the 1000-field limit do not appear to be ingested by Elasticsearch. Is there a way to configure the ES REST forwarder to limit the number of parameters that are included in the event?
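      As a possible workaround on the Elasticsearch side (not described in this report; the setting name is from the Elasticsearch 5.x documentation, and the host, port, and limit value below are examples), the per-index field limit can be raised instead of reducing the parameters OpenNMS forwards:

```shell
# Sketch of a workaround, assuming Elasticsearch on localhost:9200 and
# daily indices matching opennms-events-raw-*, as in the error above.

# Raise the limit on the existing index that is rejecting events:
curl -XPUT 'http://localhost:9200/opennms-events-raw-2017.04/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.mapping.total_fields.limit": 2000}'

# Apply the same setting to future daily indices via an index template
# (ES 5.x uses the "template" key for the index name pattern):
curl -XPUT 'http://localhost:9200/_template/opennms-events-raw' \
  -H 'Content-Type: application/json' \
  -d '{
        "template": "opennms-events-raw-*",
        "settings": { "index.mapping.total_fields.limit": 2000 }
      }'
```

      Note this only moves the ceiling; if events keep adding new parameter names, the mapping will eventually hit any fixed limit, so capping the forwarded parameters remains the cleaner fix.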

            People

            • Assignee: Unassigned
            • Reporter: tim.fite (Tim Fite)
            • Votes: 0
            • Watchers: 2
