Using the OpenNMS Kafka datasource, it is currently possible for one or more alarms and a node to contribute the same InventoryObject to the handlers via the InventoryDatasource.
The current implementation of the clustering engine (the primary correlator) handles duplicates by ignoring subsequent adds for the same object. However, the object is removed from the graph on the first delete.
This means that if alarms A1 and A2 and node N1 all contribute the same object, and A1 is then cleared (expiring the InventoryObject derived from A1), the object is removed from the graph even though A2 and N1 still reference it.
The logic should be updated so that an inventory object is not removed until it has been expired by all of the providers that contributed it.
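A minimal sketch of the proposed behavior, assuming we track the set of providers contributing each object and only signal removal when that set becomes empty. The class and method names (InventoryTracker, contribute, expire) are hypothetical, not the actual OCE API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical helper: tracks which providers (alarms, nodes) currently
 * contribute each inventory object, so the graph only removes an object
 * once every contributing provider has expired it.
 */
public class InventoryTracker {
    // object id -> set of provider ids that currently contribute it
    private final Map<String, Set<String>> contributors = new HashMap<>();

    /**
     * Record that a provider contributed the object.
     * Returns true only if the object is new and should be added to the graph;
     * duplicate adds from other providers are absorbed here.
     */
    public boolean contribute(String objectId, String providerId) {
        Set<String> providers = contributors.computeIfAbsent(objectId, k -> new HashSet<>());
        boolean isNew = providers.isEmpty();
        providers.add(providerId);
        return isNew;
    }

    /**
     * Record that a provider expired its contribution.
     * Returns true only when the last contributor is gone, i.e. the object
     * should now actually be removed from the graph.
     */
    public boolean expire(String objectId, String providerId) {
        Set<String> providers = contributors.get(objectId);
        if (providers == null) {
            return false;
        }
        providers.remove(providerId);
        if (providers.isEmpty()) {
            contributors.remove(objectId);
            return true;
        }
        return false;
    }
}
```

In the A1/A2/N1 scenario above, expiring A1's contribution would return false (A2 and N1 still hold the object), and the object would only leave the graph after all three have expired it.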