Unlike RPC and Sink, the Kafka Producer doesn't split messages. It takes the whole CollectionSet, converts it to Protobuf, and sends it to Kafka as a single message.
It is common for large network equipment to exceed Kafka's default message size limit (1MB). To work around this, broker-level settings are required on each Kafka node; otherwise, consumers would never see those CollectionSets, as Kafka would reject the oversized messages and they would never be persisted.
For this reason, I recommend adding a visible warning to that section, something like this:
WARNING: the producer's code sends a full CollectionSet per message to Kafka. Large network equipment might have hundreds of thousands of metrics, which could generate messages over the Kafka limit (1MB by default). Increasing max.request.size at the producer level, and message.max.bytes and replica.fetch.max.bytes at the broker level, is mandatory to avoid problems with the consumers.
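To make the warning concrete, the section could also show a configuration sketch. The property names below are standard Kafka settings; the 5MB value is only an illustrative assumption, and it must be sized for the largest expected CollectionSet:

```properties
# Broker side (server.properties on each Kafka node).
# Illustrative 5 MB limit; adjust to the largest expected CollectionSet.
message.max.bytes=5242880
replica.fetch.max.bytes=5242880

# Producer side (the Kafka Producer's configuration).
# Must be raised together with the broker limits, and must not exceed them.
max.request.size=5242880
```

Note that raising only the producer's max.request.size is not enough: the broker will still reject messages larger than message.max.bytes, and replicas won't fetch them unless replica.fetch.max.bytes is raised as well.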