Kafka broker configuration defaults by release (0.7 through 3.8)

Each entry below gives the configuration name, its description, and its default value. Where the default changed between releases, each value is listed with the release range in which it applied; "since X" means the setting first appears (or first has that default) in release X.
advertised.listeners
  Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this …
  Default: null (since 0.9.0)
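As a hedged illustration of the interplay between the two listener settings (host names and ports here are placeholders, not defaults), a broker behind NAT or in an IaaS environment might bind one address but advertise another:

```properties
# Bind on all interfaces inside the container/VM
listeners=PLAINTEXT://0.0.0.0:9092
# Address clients outside the network should use to reach this broker
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```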
alter.config.policy.class.name
  The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.A…
  Default: null (since 0.11.0)

alter.log.dirs.replication.quota.window.num
  The number of samples to retain in memory for alter log dirs replication quotas
  Default: 11 (since 1.1)

alter.log.dirs.replication.quota.window.size.seconds
  The time span of each sample for alter log dirs replication quotas
  Default: 1 (since 1.1)
authorizer.class.name
  The fully qualified name of a class that implements org.apache.kafka.server.authorizer.Authorizer interface, which is used by the …
  Default: empty (all listed releases)

auto.create.topics.enable
  Enable auto creation of topic on the server.
  Default: true (all listed releases)

auto.include.jmx.reporter
  Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r…
  Default: true (since 3.4)
auto.leader.rebalance.enable
  Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable…
  Default: false in 0.8.1; true since 0.8.2

background.threads
  The number of threads to use for various background processing tasks
  Default: 4 in 0.8.1; 10 since 0.8.2

broker.heartbeat.interval.ms
  The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode.
  Default: 2000 (2s), since 3.0
broker.id
  The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between ZooKeeper generated broke…
  Default: null in 0.8.0–0.8.2; -1 since 0.9.0

broker.id.generation.enable
  Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be review…
  Default: true (since 0.9.0)

broker.rack
  Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: RACK1, us-east-1d
  Default: null (since 0.10.0)

broker.session.timeout.ms
  The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode.
  Default: 9000 (9s), since 3.0
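In KRaft mode these two settings work as a pair: the session timeout should comfortably exceed the heartbeat interval so that a few missed heartbeats do not expire the broker's registration lease. A sketch using the default values (shown for illustration, not as a tuning recommendation):

```properties
# Broker sends a heartbeat to the controller every 2 s...
broker.heartbeat.interval.ms=2000
# ...and its registration lease expires after 9 s without one
broker.session.timeout.ms=9000
```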
client.quota.callback.class
  The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits app…
  Default: null (since 2.0)

compression.gzip.level
  The compression level to use if compression.type is set to gzip.
  Default: -1 (since 3.8)

compression.lz4.level
  The compression level to use if compression.type is set to lz4.
  Default: 9 (since 3.8)

compression.type
  Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'…
  Default: producer (since 0.9.0)

compression.zstd.level
  The compression level to use if compression.type is set to zstd.
  Default: 3 (since 3.8)
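The per-codec level settings only take effect when the broker itself compresses data, i.e. when compression.type names a codec rather than the default producer (which retains whatever codec the producer used). A sketch, with an illustrative level:

```properties
# Broker recompresses everything with zstd at level 6
compression.type=zstd
compression.zstd.level=6
```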
connection.failed.authentication.delay.ms
  Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on a…
  Default: 100 (100ms), since 2.1

connections.max.idle.ms
  Close idle connections after the number of milliseconds specified by this config.
  Default: 600000 (10min), since 0.8.2

connections.max.reauth.ms
  When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the co…
  Default: 0 (since 2.2)
control.plane.listener.name
  Name of listener used for communication between controller and brokers. A broker will use the control.plane.listener.name to locat…
  Default: null (since 2.2)

controlled.shutdown.enable
  Enable controlled shutdown of the server.
  Default: false in 0.8.0–0.8.1; true since 0.8.2

controlled.shutdown.max.retries
  Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens
  Default: 3 (all listed releases)

controlled.shutdown.retry.backoff.ms
  Before each retry, the system needs time to recover from the state that caused the previous failure (Controller fail over, replica…
  Default: 5000 (5s), all listed releases
controller.listener.names
  A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. When commu…
  Default: null (since 3.0)

controller.quorum.append.linger.ms
  The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk.
  Default: 25 (25ms), since 3.0

controller.quorum.election.backoff.max.ms
  Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps pr…
  Default: 1000 (1s), since 3.0

controller.quorum.election.timeout.ms
  Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election
  Default: 1000 (1s), since 3.0

controller.quorum.fetch.timeout.ms
  Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters;…
  Default: 2000 (2s), since 3.0

controller.quorum.request.timeout.ms
  The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r…
  Default: 2000 (2s), since 3.0

controller.quorum.retry.backoff.ms
  The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending …
  Default: 20 (20ms), since 3.0

controller.quorum.voters
  Map of id/endpoint information for the set of voters in a comma-separated list of {id}@{host}:{port} entries. For example: 1@local…
  Default: (none)
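The {id}@{host}:{port} format from the description, sketched for a hypothetical three-node controller quorum (host names and ports are placeholders):

```properties
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
```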




























controller.quota.window.num
  The number of samples to retain in memory for controller mutation quotas
  Default: 11 (since 2.7)

controller.quota.window.size.seconds
  The time span of each sample for controller mutations quotas
  Default: 1 (since 2.7)

controller.socket.timeout.ms
  The socket timeout for controller-to-broker channels.
  Default: 30000 (30s), all listed releases

create.topic.policy.class.name
  The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.Cr…
  Default: null (since 0.10.2)

default.replication.factor
  The default replication factors for automatically created topics.
  Default: 1 (all listed releases)
delegation.token.expiry.check.interval.ms
  Scan interval to remove expired delegation tokens.
  Default: 3600000 (1h), since 1.1

delegation.token.expiry.time.ms
  The token validity time in milliseconds before the token needs to be renewed. Default value 1 day.
  Default: 86400000 (1d), since 1.1

delegation.token.master.key
  DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config.
  Default: null (since 1.1)

delegation.token.max.lifetime.ms
  The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days.
  Default: 604800000 (7d), since 1.1

delegation.token.secret.key
  Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If using Kafka with …
  Default: null (since 2.8)
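A sketch of enabling delegation tokens using the non-deprecated key config; the secret value is a placeholder and, per the description, must be identical on every broker:

```properties
# Shared secret used to sign and verify delegation tokens
delegation.token.secret.key=change-me-shared-secret
# Tokens must be renewed daily and cannot outlive one week (the defaults)
delegation.token.expiry.time.ms=86400000
delegation.token.max.lifetime.ms=604800000
```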
delete.records.purgatory.purge.interval.requests
  The purge interval (in number of requests) of the delete records request purgatory
  Default: 1 (since 0.11.0)

delete.topic.enable
  Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off
  Default: false in 0.8.2–0.11.0; true since 1.0

early.start.listeners
  A comma-separated list of listener names which may be started before the authorizer has finished initialization. This is useful wh…
  Default: null (since 3.3)

eligible.leader.replicas.enable
  Enable the Eligible leader replicas
  Default: false (since 3.7)
fetch.max.bytes
  The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if th…
  Default: 57671680 (55 MB), since 2.5

fetch.purgatory.purge.interval.requests
  The purge interval (in number of requests) of the fetch request purgatory
  Default: 10000 in 0.8.0–0.8.1; 1000 since 0.8.2
group.consumer.assignors
  The server side assignors as a list of full class names. The first one in the list is considered as the default assignor to be use…
  Default: org.apache.kafka.coordinator.group.assignor.UniformAssignor,org.apache.kafka.coordinator.group.assignor.RangeAssignor (since 3.7)

group.consumer.heartbeat.interval.ms
  The heartbeat interval given to the members of a consumer group.
  Default: 5000 (5s), since 3.7

group.consumer.max.heartbeat.interval.ms
  The maximum heartbeat interval for registered consumers.
  Default: 15000 (15s), since 3.7

group.consumer.max.session.timeout.ms
  The maximum allowed session timeout for registered consumers.
  Default: 60000 (1min), since 3.7

group.consumer.max.size
  The maximum number of consumers that a single consumer group can accommodate. This value will only impact the new consumer coordin…
  Default: 2147483647 (since 3.7)

group.consumer.migration.policy
  The config that enables converting the non-empty classic group using the consumer embedded protocol to the non-empty consumer grou…
  Default: disabled (since 3.8)

group.consumer.min.heartbeat.interval.ms
  The minimum heartbeat interval for registered consumers.
  Default: 5000 (5s), since 3.7

group.consumer.min.session.timeout.ms
  The minimum allowed session timeout for registered consumers.
  Default: 45000 (45s), since 3.7

group.consumer.session.timeout.ms
  The timeout to detect client failures when using the consumer group protocol.
  Default: 45000 (45s), since 3.7
group.coordinator.append.linger.ms
  The duration in milliseconds that the coordinator will wait for writes to accumulate before flushing them to disk. Transactional w…
  Default: 10 (10ms), since 3.8

group.coordinator.rebalance.protocols
  The list of enabled rebalance protocols. Supported protocols: consumer, classic, unknown. The consumer rebalance protocol is in earl…
  Default: classic (since 3.7)
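The description notes that the consumer rebalance protocol is in early access. As a hedged sketch (exact gating varies by release, and early-access builds may require additional feature flags beyond this setting), opting a broker into it alongside the classic protocol looks like:

```properties
# Enable both the classic protocol and the new consumer group protocol
group.coordinator.rebalance.protocols=classic,consumer
```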
group.coordinator.threads
  The number of threads used by the group coordinator.
  Default: 1 (since 3.7)

group.initial.rebalance.delay.ms
  The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A…
  Default: 3000 (3s), since 0.11.0

group.max.session.timeout.ms
  The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in betw…
  Default: 30000 (30s) in 0.9.0; 300000 (5min) in 0.10.0–2.2; 1800000 (30min) since 2.3

group.max.size
  The maximum number of consumers that a single consumer group can accommodate.
  Default: 2147483647 (since 2.2)

group.min.session.timeout.ms
  The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of …
  Default: 6000 (6s), since 0.9.0
initial.broker.registration.timeout.ms
  When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the…
  Default: 60000 (1min), since 3.0

inter.broker.listener.name
  Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.p…
  Default: null (since 0.10.2)

inter.broker.protocol.version
  Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers were upgraded to a ne…
  Default: tracks the release: 0.9.0.X (0.9.0), 0.10.0-IV1 (0.10.0), 0.10.1-IV2 (0.10.1), 0.10.2-IV0 (0.10.2), 0.11.0-IV2 (0.11.0), 1.0-IV0 (1.0), 1.1-IV0 (1.1), 2.0-IV1 (2.0), 2.1-IV2 (2.1), 2.2-IV1 (2.2), 2.3-IV1 (2.3), 2.4-IV1 (2.4), 2.5-IV0 (2.5), 2.6-IV0 (2.6), 2.7-IV2 (2.7), 2.8-IV1 (2.8), 3.0-IV1 (3.0), 3.1-IV0 (3.1), 3.2-IV0 (3.2), 3.3-IV3 (3.3), 3.4-IV0 (3.4), 3.5-IV2 (3.5), 3.6-IV2 (3.6), 3.7-IV4 (3.7), 3.8-IV0 (3.8)
kafka.metrics.polling.interval.secs
  The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations.
  Default: 10 (since 2.1)

kafka.metrics.reporters
  A list of classes to use as Yammer metrics custom reporters. The reporters should implement kafka.metrics.KafkaMetricsReporter tra…
  Default: empty (all listed releases)

leader.imbalance.check.interval.seconds
  The frequency with which the partition rebalance check is triggered by the controller
  Default: 300 (since 0.8.1)

leader.imbalance.per.broker.percentage
  The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per br…
  Default: 10 (since 0.8.1)
listener.security.protocol.map
  Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than o…
  Default: PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL (since 1.0; in 0.10.2–0.11.0 the default map also included TRACE:TRACE)
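Custom listener names make the map mandatory, since the broker can no longer infer the security protocol from the name. A sketch with hypothetical INTERNAL/EXTERNAL listener names:

```properties
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SASL_SSL
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
inter.broker.listener.name=INTERNAL
```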
listeners
  Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security pro…
  Default: null in 0.9.0–2.8; PLAINTEXT://:9092 since 3.0
log.cleaner.backoff.ms
  The amount of time to sleep when there are no logs to clean
  Default: 15000 (15s), since 0.8.1

log.cleaner.dedupe.buffer.size
  The total memory used for log deduplication across all cleaner threads
  Default: 524288000 (500*1024*1024) in 0.8.1–0.8.2; 134217728 since 0.9.0

log.cleaner.delete.retention.ms
  The amount of time to retain tombstone message markers for log compacted topics. This setting also gives a bound on the time in wh…
  Default: 86400000 (1d), since 0.8.1

log.cleaner.enable
  Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including…
  Default: false in 0.8.1–0.8.2; true since 0.9.0

log.cleaner.io.buffer.load.factor
  Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be …
  Default: 0.9 (since 0.8.1)

log.cleaner.io.buffer.size
  The total memory used for log cleaner I/O buffers across all cleaner threads
  Default: 524288 (512*1024), since 0.8.1

log.cleaner.io.max.bytes.per.second
  The log cleaner will be throttled so that the sum of its read and write I/O will be less than this value on average
  Default: 1.7976931348623157E308 (Double.MAX_VALUE, effectively unthrottled), since 0.8.1

log.cleaner.max.compaction.lag.ms
  The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.
  Default: 9223372036854775807 (Long.MAX_VALUE), since 2.3

log.cleaner.min.cleanable.ratio
  The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the lo…
  Default: 0.5 (since 0.8.1)

log.cleaner.min.compaction.lag.ms
  The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
  Default: 0 (since 0.10.1)

log.cleaner.threads
  The number of background threads to use for log cleaning
  Default: 1 (since 0.8.1)

log.cleanup.policy
  The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are:…
  Default: delete (since 0.8.1)
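Putting the cleaner settings together, a hedged sketch of a broker tuned for compacted topics (the values are illustrative, not recommendations):

```properties
# Cleaner must be on for any topic with cleanup.policy=compact
log.cleaner.enable=true
log.cleaner.threads=2
# Keep tombstones for one day so slow consumers can still observe deletes
log.cleaner.delete.retention.ms=86400000
# Start cleaning once half of a log is dirty
log.cleaner.min.cleanable.ratio=0.5
```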
log.dir
  The directory in which the log data is kept (supplemental for log.dirs property)
  Default: /tmp/kafka-logs (since 0.9.0)

log.dir.failure.timeout.ms
  If the broker is unable to successfully communicate to the controller that some log directory has failed for longer than this time…
  Default: 30000 (30s), since 3.8

log.dirs
  A comma-separated list of the directories where the log data is stored. If not set, the value in log.dir is used.
  Default: /tmp/kafka-logs in 0.8.0–0.8.2; null since 0.9.0
log.flush.interval.messages
  The number of messages accumulated on a log partition before messages are flushed to disk.
  Default: 10000 in 0.8.0; 9223372036854775807 (Long.MAX_VALUE) since 0.8.1

log.flush.interval.ms
  The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.sc…
  Default: 3000 (3s) in 0.8.0; null since 0.8.1

log.flush.offset.checkpoint.interval.ms
  The frequency with which we update the persistent record of the last flush which acts as the log recovery point.
  Default: 60000 (1min), since 0.8.1

log.flush.scheduler.interval.ms
  The frequency in ms that the log flusher checks whether any log needs to be flushed to disk
  Default: 3000 (3s) in 0.8.0–0.8.1; 9223372036854775807 (Long.MAX_VALUE) since 0.8.2

log.flush.start.offset.checkpoint.interval.ms
  The frequency with which we update the persistent record of log start offset
  Default: 60000 (1min), since 0.11.0
log.index.interval.bytes
  The interval with which we add an entry to the offset index.
  Default: 4096 (4 KB), all listed releases

log.index.size.max.bytes
  The maximum size in bytes of the offset index
  Default: 10485760 (10 MB), all listed releases

log.local.retention.bytes
  The maximum size of local log segments that can grow for a partition before it gets eligible for deletion. Default value is -2, it…
  Default: -2 (since 3.6)

log.local.retention.ms
  The number of milliseconds to keep the local log segments before it gets eligible for deletion. Default value is -2, it represents…
  Default: -2 (since 3.6)
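With tiered storage, local retention can be much shorter than total retention. A hedged sketch (values are illustrative, and remote storage must be enabled for the topic for the local limits to differ from the overall ones):

```properties
# Keep the full log for 7 days overall...
log.retention.ms=604800000
# ...but only 1 day on local disk; older segments live in remote storage
log.local.retention.ms=86400000
```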
log.message.downconversion.enable
  This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, …
  Default: true (since 2.0)

log.message.format.version
  Specify the message format version the broker will use to append messages to the logs. The value should be a valid MetadataVersion…
  Default: tracks the release from 0.10.0-IV1 (0.10.0) through 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV1, 2.1-IV2, 2.2-IV1, 2.3-IV1, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV2, 2.8-IV1 (2.8); fixed at 3.0-IV1 since 3.0

log.message.timestamp.after.max.ms
  This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message t…
  Default: 9223372036854775807 (Long.MAX_VALUE), since 3.6

log.message.timestamp.before.max.ms
  This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. The message t…
  Default: 9223372036854775807 (Long.MAX_VALUE), since 3.6

log.message.timestamp.difference.max.ms
  [DEPRECATED] The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in …
  Default: 9223372036854775807 (Long.MAX_VALUE), since 0.10.0
log.message.timestamp.type
  Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or Lo…
  Default: CreateTime (since 0.10.0)

log.preallocate
  Whether to pre-allocate the file when creating a new segment. If you are using Kafka on Windows, you probably need to set it to true.
  Default: false (since 0.9.0)
log.retention.bytes
  The maximum size of the log before deleting it
  Default: -1 (no size-based limit) in all versions.
log.retention.check.interval.ms
  The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion
  Default: 300000 (5 minutes) in all versions.
log.retention.hours
  The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property
  Default: 168 (7 days; listed as 24 * 7 in the oldest releases) in all versions.
log.retention.minutes
  The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the ..
  Default: null.
log.retention.ms
  The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes..
  Default: null.
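The three retention settings form a precedence chain (ms over minutes over hours). A minimal server.properties sketch with illustrative values, not recommendations:

```properties
# Time-based retention: log.retention.ms wins if set; otherwise
# log.retention.minutes; otherwise log.retention.hours.
log.retention.ms=259200000    # 3 days; overrides the hours setting below
log.retention.hours=168       # ignored while log.retention.ms is set
log.retention.bytes=-1        # keep the default: no size-based limit
```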
log.roll.hours
  The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property
  Default: 168 (7 days; listed as 24 * 7 in the oldest releases) in all versions.
log.roll.jitter.hours
  The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property
  Default: 0.
log.roll.jitter.ms
  The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used
  Default: null.
log.roll.ms
  The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used
  Default: null.
log.segment.bytes
  The maximum size of a single log file
  Default: 1073741824 (1 GB).
log.segment.delete.delay.ms
  The amount of time to wait before deleting a file from the filesystem. If the value is 0 and there is no file to delete, the syste..
  Default: 60000 (1 minute).
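A sketch of how the segment-rolling settings above combine (values illustrative):

```properties
log.segment.bytes=536870912        # roll a new segment at 512 MB instead of 1 GB
log.roll.hours=168                 # or roll after 7 days, whichever comes first
log.roll.jitter.hours=1            # spread rolls out so segments don't all roll together
log.segment.delete.delay.ms=60000  # wait 1 minute before removing deleted segment files
```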
max.connection.creation.rate
  The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing..
  Default: 2147483647 (unlimited); added in 2.7.
max.connections
  The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits confi..
  Default: 2147483647 (unlimited); added in 2.3.
max.connections.per.ip
  The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max...
  Default: 2147483647 (Int.MaxValue).
max.connections.per.ip.overrides
  A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName..
  Default: empty.
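The connection-limit settings above can be combined; a hedged sketch (all values illustrative, and the override hostname/IP entries are hypothetical):

```properties
max.connections=10000                  # broker-wide cap across all listeners
max.connection.creation.rate=50        # new connections per second, broker-wide
max.connections.per.ip=100             # per-client-IP cap
# per-host overrides in host:count form; these hosts are examples only
max.connections.per.ip.overrides=trusted.example.com:500,127.0.0.1:500
```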
max.incremental.fetch.session.cache.slots
  The maximum number of total incremental fetch sessions that we will maintain. FetchSessionCache is sharded into 8 shards and the l..
  Default: 1000; added in 1.1.
max.request.partition.size.limit
  The maximum number of partitions that can be served in one request.
  Default: 2000; added in 3.8.
message.max.bytes
  The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are c..
  Default: 1048588 (roughly 1 MB plus record-batch overhead) in current releases; earlier defaults were 1000000 in the oldest releases and then 1000012.
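As the truncated description hints, raising message.max.bytes only on the broker is not enough: follower fetch sizes (and consumers' fetch settings) must accommodate the larger batches too, or replication and consumption of large batches can stall. A sketch, values illustrative:

```properties
message.max.bytes=5242880          # accept record batches up to 5 MB
replica.fetch.max.bytes=5242880    # let follower replicas fetch those batches
# consumers should likewise raise max.partition.fetch.bytes on their side
```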
metadata.log.dir
  This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is plac..
  Default: null.
metadata.log.max.record.bytes.between.snapshots
  This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new s..
  Default: 20971520 (20 MB).
metadata.log.max.snapshot.interval.ms
  This is the maximum number of milliseconds to wait to generate a snapshot if there are committed records in the log that are not i..
  Default: 3600000 (1 hour).
metadata.log.segment.bytes
  The maximum size of a single metadata log file.
  Default: 1073741824 (1 GB).
metadata.log.segment.ms
  The maximum time before a new metadata log file is rolled out (in milliseconds).
  Default: 604800000 (7 days).
metadata.max.idle.interval.ms
  This configuration controls how often the active controller should write no-op records to the metadata partition. If the value is ..
  Default: 500 (500 ms).
metadata.max.retention.bytes
  The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapsh..
  Default: 104857600 (100 MB) in current releases; -1 (unlimited) when first introduced.
metadata.max.retention.ms
  The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist befo..
  Default: 604800000 (7 days).
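For a KRaft-mode node, the metadata.* settings above might be tuned together; a minimal sketch with illustrative values (the directory path is hypothetical):

```properties
metadata.log.dir=/var/lib/kafka/metadata           # separate disk for the metadata log
metadata.log.max.record.bytes.between.snapshots=20971520
metadata.log.max.snapshot.interval.ms=3600000      # snapshot at least hourly
metadata.max.retention.bytes=104857600             # cap log + snapshots at 100 MB
metadata.max.retention.ms=604800000                # keep at most 7 days
```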
metric.reporters
  A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p..
  Default: empty list.
metrics.num.samples
  The number of samples maintained to compute metrics.
  Default: 2.
metrics.recording.level
  The highest recording level for metrics.
  Default: INFO.
metrics.sample.window.ms
  The window of time a metrics sample is computed over.
  Default: 30000 (30 seconds).
min.insync.replicas
  When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a ..
  Default: 1.
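min.insync.replicas only takes effect together with producer acks=all; a common durability setup on a 3-broker cluster is (values illustrative):

```properties
# Broker side: with replication factor 3, require 2 in-sync replicas
# to acknowledge a write. One broker can then fail without losing
# acknowledged data; producers must send with acks=all for this to apply.
min.insync.replicas=2
```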
node.id
  The node ID associated with the roles this process is playing when process.roles is non-empty. This is required configuration when..
  Default: -1.
num.io.threads
  The number of threads that the server uses for processing requests, which may include disk I/O
  Default: 8.
num.network.threads
  The number of threads that the server uses for receiving requests from the network and sending responses to the network. Note: ea..
  Default: 3.
num.partitions
  The default number of log partitions per topic
  Default: 1.
num.recovery.threads.per.data.dir
  The number of threads per data directory to be used for log recovery at startup and flushing at shutdown
  Default: 1.
num.replica.alter.log.dirs.threads
  The number of threads that can move replicas between log directories, which may include disk I/O
  Default: null.
num.replica.fetchers
  Number of fetcher threads used to replicate records from each source broker. The total number of fetchers on each broker is bound ..
  Default: 1.
offset.metadata.max.bytes
  The maximum size for a metadata entry associated with an offset commit.
  Default: 4096 (4 KB); 1024 (1 KB) in 0.8.x.
offsets.commit.required.acks
  DEPRECATED: The required acks before the commit can be accepted. In general, the default (-1) should not be overridden.
  Default: -1.
offsets.commit.timeout.ms
  Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is simi..
  Default: 5000 (5 seconds).
offsets.load.buffer.size
  Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too la..
  Default: 5242880.
offsets.retention.check.interval.ms
  Frequency at which to check for stale offsets
  Default: 600000 (10 minutes).
offsets.retention.minutes
  For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has..
  Default: 10080 (7 days) since 2.0; 1440 (1 day) before.
offsets.topic.compression.codec
  Compression codec for the offsets topic - compression may be used to achieve "atomic" commits.
  Default: 0 (no compression).
offsets.topic.num.partitions
  The number of partitions for the offset commit topic (should not change after deployment).
  Default: 50.
offsets.topic.replication.factor
  The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the clus..
  Default: 3.
offsets.topic.segment.bytes
  The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads.
  Default: 104857600 (100 MB).
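The __consumer_offsets topic settings above are fixed at cluster creation and are typically left at their defaults; a sketch of the current defaults spelled out in server.properties form:

```properties
offsets.topic.num.partitions=50        # do not change after deployment
offsets.topic.replication.factor=3     # internal topic creation waits for 3 brokers
offsets.topic.segment.bytes=104857600  # small segments keep compaction fast
offsets.retention.minutes=10080        # committed offsets expire after 7 days
```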
password.encoder.cipher.algorithm
  The Cipher algorithm used for encoding dynamically configured passwords.
  Default: AES/CBC/PKCS5Padding.
password.encoder.iterations
  The iteration count used for encoding dynamically configured passwords.
  Default: 4096.
password.encoder.key.length
  The key length used for encoding dynamically configured passwords.
  Default: 128.
password.encoder.keyfactory.algorithm
  The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available an..
  Default: null.
password.encoder.old.secret
  The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If s..
  Default: null.
password.encoder.secret
  The secret used for encoding dynamically configured passwords for this broker.
  Default: null.
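When rotating the encoder secret, both the old and new secret are set so the broker can re-encode stored passwords; a sketch (the secret values are placeholders, never commit real secrets to server.properties in plain text without securing the file):

```properties
password.encoder.secret=new-encoder-secret         # placeholder value
password.encoder.old.secret=previous-encoder-secret  # set only during rotation
password.encoder.iterations=8192                   # optional: raise from the 4096 default
```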
principal.builder.class
  The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal..
  Default: org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder since 3.0; null from 1.0 through 2.8; org.apache.kafka.common.security.auth.DefaultPrincipalBuilder in 0.9.0 through 0.11.0.
process.roles
  The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both. This configuration is only applic..
  Default: empty.
producer.id.expiration.ms
  The time in ms that a topic partition leader will wait before expiring producer IDs. Producer IDs will not expire while a transact..
  Default: 86400000 (1 day); added in recent releases.
producer.purgatory.purge.interval.requests
  The purge interval (in number of requests) of the producer request purgatory
  Default: 1000; 10000 in 0.7 and 0.8.0.
queued.max.request.bytes
  The number of queued bytes allowed before no more requests are read
  Default: -1.
queued.max.requests
  The number of queued requests allowed for data-plane, before blocking the network threads
  Default: 500.
quota.window.num
  The number of samples to retain in memory for client quotas
  Default: 11.
quota.window.size.seconds
  The time span of each sample for client quotas
  Default: 1.
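With the quota-window defaults above, client quota rates are measured over roughly (num - 1) x window size = 10 seconds of samples; a sketch of widening that window (values illustrative):

```properties
# Measure client quotas over ~30 s instead of ~10 s, smoothing short bursts.
quota.window.num=31
quota.window.size.seconds=1
```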
remote.fetch.max.wait.ms
  The maximum amount of time the server will wait before answering the remote fetch request
  Default: 500 (500 ms); added in 3.8.
remote.log.index.file.cache.total.size.bytes
  The total size of the space allocated to store index files fetched from remote storage in the local storage.
  Default: 1073741824 (1 GB).
remote.log.manager.copy.max.bytes.per.second
  The maximum number of bytes that can be copied from local storage to remote storage per second. This is a global limit for all the..
  Default: 9223372036854775807 (unlimited); added in 3.8.
remote.log.manager.copy.quota.window.num
  The number of samples to retain in memory for remote copy quota management. The default value is 11, which means there are 10 whol..
  Default: 11.
remote.log.manager.copy.quota.window.size.seconds
  The time span of each sample for remote copy quota management. The default value is 1 second.
  Default: 1.
remote.log.manager.fetch.max.bytes.per.second
  The maximum number of bytes that can be fetched from remote storage to local storage per second. This is a global limit for all th..
  Default: 9223372036854775807 (unlimited); added in 3.8.
remote.log.manager.fetch.quota.window.num
  The number of samples to retain in memory for remote fetch quota management. The default value is 11, which means there are 10 who..
  Default: 11.
remote.log.manager.fetch.quota.window.size.seconds
  The time span of each sample for remote fetch quota management. The default value is 1 second.
  Default: 1.
remote.log.manager.task.interval.ms
  Interval at which remote log manager runs the scheduled tasks like copy segments, and clean up remote log segments.
  Default: 30000 (30 seconds).
remote.log.manager.thread.pool.size
  Size of the thread pool used in scheduling tasks to copy segments, fetch remote log indexes and clean up remote log segments.
  Default: 10.
remote.log.metadata.custom.metadata.max.bytes
  The maximum size of custom metadata in bytes that the broker should accept from a remote storage plugin. If custom metadata excee..
  Default: 128 (128 B).
remote.log.metadata.manager.class.name
  Fully qualified class name of `RemoteLogMetadataManager` implementation.
  Default: org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager.
remote.log.metadata.manager.class.path
  Class path of the `RemoteLogMetadataManager` implementation. If specified, the RemoteLogMetadataManager implementation and its dep..
  Default: null.
remote.log.metadata.manager.impl.prefix
  Prefix used for properties to be passed to RemoteLogMetadataManager implementation. For example this value can be `rlmm.config.`.
  Default: rlmm.config.
remote.log.metadata.manager.listener.name
  Listener name of the local broker to which it should get connected if needed by RemoteLogMetadataManager implementation.
  Default: null.
remote.log.reader.max.pending.tasks
  Maximum remote log reader thread pool task queue size. If the task queue is full, fetch requests are served with an error.
  Default: 100.
remote.log.reader.threads
  Size of the thread pool that is allocated for handling remote log reads.
  Default: 10.
remote.log.storage.manager.class.name
  Fully qualified class name of `RemoteStorageManager` implementation.
  Default: null.
remote.log.storage.manager.class.path
  Class path of the `RemoteStorageManager` implementation. If specified, the RemoteStorageManager implementation and its dependent l..
  Default: null.
remote.log.storage.manager.impl.prefix
  Prefix used for properties to be passed to RemoteStorageManager implementation. For example this value can be `rsm.config.`.
  Default: rsm.config.
remote.log.storage.system.enable
  Whether to enable tiered storage functionality in a broker or not. Valid values are `true` or `false` and the default value is fal..
  Default: false.
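Putting the tiered-storage settings together, enabling the feature requires a RemoteStorageManager plugin; a hedged sketch in which the storage-manager class and its rsm.config.* keys are hypothetical plugin-specific values, not part of Kafka itself:

```properties
remote.log.storage.system.enable=true
# Hypothetical third-party plugin class; Kafka ships no default RemoteStorageManager.
remote.log.storage.manager.class.name=com.example.tiered.S3RemoteStorageManager
remote.log.storage.manager.impl.prefix=rsm.config.
rsm.config.bucket=my-kafka-tier          # example key forwarded to the plugin
# Metadata manager: the topic-based implementation is Kafka's default.
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.listener.name=PLAINTEXT
```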
replica.fetch.backoff.ms
  The amount of time to sleep when a fetch partition error occurs.
  Default: 1000 (1 second).
replica.fetch.max.bytes
  The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch..
  Default: 1048576 (1 MB).
replica.fetch.min.bytes
  Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config).
  Default: 1.
replica.fetch.response.max.bytes
  Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first n..
  Default: 10485760 (10 MB).
replica.fetch.wait.max.ms
  The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag...
  Default: 500 (500 ms).
replica.high.watermark.checkpoint.interval.ms
  The frequency with which the high watermark is saved out to disk
  Default: 5000 (5 seconds).
replica.lag.time.max.ms
  If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leade..
  Default: 30000 (30 seconds) since 2.5; 10000 (10 seconds) before.
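The follower-fetch settings above interact: fetch wait time must stay well below the lag threshold so a slow fetch is not mistaken for a dead follower. A tuning sketch, values illustrative:

```properties
num.replica.fetchers=4             # parallelize replication from each source broker
replica.fetch.max.bytes=2097152    # 2 MB per partition per fetch
replica.fetch.wait.max.ms=500      # keep well below replica.lag.time.max.ms
replica.lag.time.max.ms=30000      # follower dropped from ISR after 30 s of lag
```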
replica.selector.class
  The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By ..
  Default: null; added in 2.4.
replica.socket.receive.buffer.bytes
  The socket receive buffer for network requests to the leader for replicating data
  Default: 65536 (64 KB).
replica.socket.timeout.ms
  The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms
  Default: 30000 (30 seconds).
replication.quota.window.num
  The number of samples to retain in memory for replication quotas
  Default: 11.
replication.quota.window.size.seconds
  The time span of each sample for replication quotas
  Default: 1.
request.timeout.ms
  The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r..
  Default: 30000 (30 seconds).
reserved.broker.max.id
  Max number that can be used for a broker.id
  Default: 1000.
sasl.client.callback.handler.class
  The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.
  Default: null.
sasl.enabled.mechanisms
  The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is avail..
  Default: GSSAPI.
sasl.jaas.config
  JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format ..
  Default: null.
sasl.kerberos.kinit.cmd
  Kerberos kinit command path.
  Default: /usr/bin/kinit.
sasl.kerberos.min.time.before.relogin
  Login thread sleep time between refresh attempts.
  Default: 60000.
sasl.kerberos.principal.to.local.rules
  A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in..
  Default: DEFAULT.
sasl.kerberos.service.name
  The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.
  Default: null.
sasl.kerberos.ticket.renew.jitter
  Percentage of random jitter added to the renewal time.
  Default: 0.05.
sasl.kerberos.ticket.renew.window.factor
  Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which ..
  Default: 0.8.
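A broker-side GSSAPI sketch tying the Kerberos settings together; the listener name, keytab path, hostname, and realm are illustrative placeholders:

```properties
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
# Broker login config is prefixed with the listener name and mechanism.
listener.name.sasl_ssl.gssapi.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true storeKey=true \
    keyTab="/etc/security/keytabs/kafka.keytab" \
    principal="kafka/broker1.example.com@EXAMPLE.COM";
```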
sasl.login.callback.handler.class
  The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro..
  Default: null.
sasl.login.class
  The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener ..
  Default: null.
sasl.login.connect.timeout.ms
  The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB..
  Default: null.
sasl.login.read.timeout.ms
  The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.
  Default: null.
sasl.login.refresh.buffer.seconds
  The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot..
  Default: 300.
sasl.login.refresh.min.period.seconds
  The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between..
  Default: 60.
sasl.login.refresh.window.factor
  Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which..
  Default: 0.8.
sasl.login.refresh.window.jitter
  The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. ..
  Default: 0.05.
sasl.login.retry.backoff.max.ms
  The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us..
  Default: 10000 (10 seconds).
sasl.login.retry.backoff.ms
  The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us..
  Default: 100 (100 ms).
sasl.mechanism.controller.protocol
  SASL mechanism used for communication with controllers. Default is GSSAPI.
  Default: GSSAPI.
sasl.mechanism.inter.broker.protocol
  SASL mechanism used for inter-broker communication. Default is GSSAPI.
  Default: GSSAPI.
sasl.oauthbearer.clock.skew.secondsThe (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.




















30
30
30
30
30
30
30
30
sasl.oauthbearer.expected.audienceThe (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. ..




















null
null
null
null
null
null
null
null
sasl.oauthbearer.expected.issuerThe (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected ..




















null
null
null
null
null
null
null
null
sasl.oauthbearer.jwks.endpoint.refresh.msThe (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the..




















3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern..




















10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
sasl.oauthbearer.jwks.endpoint.retry.backoff.msThe (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut..




















100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
sasl.oauthbearer.jwks.endpoint.urlThe OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi..




















null
null
null
null
null
null
null
null
sasl.oauthbearer.scope.claim.nameThe OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop..




















scope
scope
scope
scope
scope
scope
scope
scope
sasl.oauthbearer.sub.claim.nameThe OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj..




















sub
sub
sub
sub
sub
sub
sub
sub
sasl.oauthbearer.token.endpoint.urlThe URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests..




















null
null
null
null
null
null
null
null
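The sasl.oauthbearer.* properties above form the broker-side JWT validation block: where to fetch signing keys, how often to refresh them, and which issuer/audience claims to accept. A hedged sketch; the URL, issuer, and audience values are hypothetical placeholders:

```properties
# Hypothetical broker-side OAUTHBEARER token validation settings.
sasl.oauthbearer.jwks.endpoint.url=https://login.example.com/oauth2/keys
# Re-fetch the JWKS cache hourly; on failure back off from 100ms up to 10s
sasl.oauthbearer.jwks.endpoint.refresh.ms=3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms=100
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms=10000
# Reject JWTs that were not issued by, or for, these parties
sasl.oauthbearer.expected.issuer=https://login.example.com/
sasl.oauthbearer.expected.audience=kafka-broker
# Tolerate up to 30s of clock drift against the identity provider
sasl.oauthbearer.clock.skew.seconds=30
```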
sasl.server.callback.handler.class - The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server.. [default: null, since 2.0]
sasl.server.max.receive.size - The maximum receive size allowed before and during initial SASL authentication. Default receive size is 512KB. GSSAPI limits reque.. [default: 524288, since 3.3]
security.inter.broker.protocol - Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error .. [default: PLAINTEXT, since 0.9.0]
security.providers - A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement .. [default: null, since 2.4]
socket.connection.setup.timeout.max.ms - The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc.. [default: 30000 (30s) since 2.8; 127000 (2min 7s) in 2.7]
socket.connection.setup.timeout.ms - The amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim.. [default: 10000 (10s), since 2.7]
socket.listen.backlog.size - The maximum number of pending connections on the socket. In Linux, you may also need to configure somaxconn and tcp_max_syn_backlo.. [default: 50, since 3.2]
socket.receive.buffer.bytes - The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. [default: 102400 (100 KB); listed as 100 * 1024 in 0.8.x]
socket.request.max.bytes - The maximum number of bytes in a socket request [default: 104857600 (100 MB); listed as 100 * 1024 * 1024 in 0.8.x]
socket.send.buffer.bytes - The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. [default: 102400 (100 KB); listed as 100 * 1024 in 0.8.x]
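A sketch of how the socket-level properties above might appear in a broker's server.properties; the values shown are the shipped defaults and are included only to make the units concrete:

```properties
# Broker network settings (shipped defaults; -1 would delegate buffer sizing to the OS).
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
# Hard cap on a single request, protecting the broker heap from oversized requests
socket.request.max.bytes=104857600
# Pending-connection queue depth; on Linux, raise somaxconn in the kernel to match
socket.listen.backlog.size=50
# Abandon unresponsive connection attempts after 10s, backing off up to 30s
socket.connection.setup.timeout.ms=10000
socket.connection.setup.timeout.max.ms=30000
```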
ssl.allow.dn.changes - Indicates whether changes to the certificate distinguished name should be allowed during a dynamic reconfiguration of certificates.. [default: false, since 3.7]
ssl.allow.san.changes - Indicates whether changes to the certificate subject alternative names should be allowed during a dynamic reconfiguration of certi.. [default: false, since 3.7]
ssl.cipher.suites - A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia.. [default: null, since 0.9.0]
ssl.client.auth - Configures kafka broker to request client authentication. The following settings are common: [default: none, since 0.9.0]
ssl.enabled.protocols - The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' .. [default: TLSv1.2,TLSv1.1,TLSv1 in 0.9.0-2.4; TLSv1.2 from 2.5; TLSv1.2,TLSv1.3 with Java 11+ in recent releases]
ssl.endpoint.identification.algorithm - The endpoint identification algorithm to validate server hostname using server certificate. [default: https since 2.0; null in 0.9.0-1.1]
ssl.engine.factory.class - The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.. [default: null, since 2.6]
ssl.key.password - The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. [default: null, since 0.9.0]
ssl.keymanager.algorithm - The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t.. [default: SunX509, since 0.9.0]
ssl.keystore.certificate.chain - Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list .. [default: null, since 2.7]
ssl.keystore.key - Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. .. [default: null, since 2.7]
ssl.keystore.location - The location of the key store file. This is optional for client and can be used for two-way authentication for client. [default: null, since 0.9.0]
ssl.keystore.password - The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K.. [default: null, since 0.9.0]
ssl.keystore.type - The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.fact.. [default: JKS, since 0.9.0]
ssl.principal.mapping.rules - A list of rules for mapping from distinguished name from the client certificate to short name. The rules are evaluated in order an.. [default: DEFAULT, since 2.2]
ssl.protocol - The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.. [default: TLS in 0.9.0-2.4; TLSv1.2 from 2.5; TLSv1.3 with Java 11+ in recent releases]
ssl.provider - The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. [default: null, since 0.9.0]
ssl.secure.random.implementation - The SecureRandom PRNG implementation to use for SSL cryptography operations. [default: null, since 0.10.1]
ssl.trustmanager.algorithm - The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f.. [default: PKIX, since 0.9.0]
ssl.truststore.certificates - Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X... [default: null, since 2.7]
ssl.truststore.location - The location of the trust store file. [default: null, since 0.9.0]
ssl.truststore.password - The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che.. [default: null, since 0.9.0]
ssl.truststore.type - The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12.. [default: JKS, since 0.9.0]
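As an illustration of how the ssl.* properties above fit together, a hypothetical mutual-TLS setup for inter-broker traffic; the file paths and passwords are placeholders, not recommendations:

```properties
# Hypothetical mutual-TLS configuration for inter-broker communication.
security.inter.broker.protocol=SSL
ssl.keystore.type=JKS
ssl.keystore.location=/etc/kafka/ssl/broker.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/etc/kafka/ssl/broker.truststore.jks
ssl.truststore.password=changeit
# Restrict negotiation to modern TLS and verify peer hostnames
ssl.enabled.protocols=TLSv1.2,TLSv1.3
ssl.endpoint.identification.algorithm=https
# Require client certificates (mutual TLS)
ssl.client.auth=required
```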
telemetry.max.bytes - The maximum size (after compression if compression is used) of telemetry metrics pushed from a client to the broker. The default v.. [default: 1048576 (1 MB), since 3.7]
transaction.abort.timed.out.transaction.cleanup.interval.ms - The interval at which to rollback transactions that have timed out [default: 10000 (10s) since 2.5; 60000 (1min) in 0.11.0-2.4]
transaction.max.timeout.ms - The maximum allowed timeout for transactions. If a client's requested transaction time exceed this, then the broker will return an.. [default: 900000 (15min), since 0.11.0]
transaction.partition.verification.enable - Enable verification that checks that the partition has been added to the transaction before writing transactional records to the p.. [default: true, since 3.6]
transaction.remove.expired.transaction.cleanup.interval.ms - The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing [default: 3600000 (1h), since 0.11.0]
transaction.state.log.load.buffer.size - Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, ov.. [default: 5242880, since 0.11.0]
transaction.state.log.min.isr - The minimum number of replicas that must acknowledge a write to transaction topic in order to be considered successful. [default: 2, since 0.11.0]
transaction.state.log.num.partitions - The number of partitions for the transaction topic (should not change after deployment). [default: 50, since 0.11.0]
transaction.state.log.replication.factor - The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the .. [default: 3, since 0.11.0]
transaction.state.log.segment.bytes - The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads [default: 104857600 (100 MB), since 0.11.0]
transactional.id.expiration.ms - The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transac.. [default: 604800000 (7d), since 0.11.0]
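The transaction.state.log.* properties above describe the broker's internal transaction topic. A sketch for a hypothetical three-broker cluster; the values match the shipped defaults, which assume at least three brokers are available for the replication factor and min ISR to be satisfiable:

```properties
# Internal transaction-state topic settings (shipped defaults since 0.11.0).
transaction.state.log.num.partitions=50
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
# Small segments keep compaction and coordinator cache loads fast
transaction.state.log.segment.bytes=104857600
# Expire idle transactional.ids after 7 days
transactional.id.expiration.ms=604800000
```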
unclean.leader.election.enable - Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result .. [default: false since 0.11.0; true in 0.8.2-0.10.2]
zookeeper.clientCnxnSocket - Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value.. [default: null, since 2.5]
zookeeper.connect - Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper serve.. [default: null]
zookeeper.connection.timeout.ms - The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms i.. [default: null since 0.9.0; 6000 (6s) in 0.8.x]
zookeeper.max.in.flight.requests - The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking. [default: 10, since 1.1]
zookeeper.metadata.migration.enable - Enable ZK to KRaft migration [default: false, since 3.4]
zookeeper.session.timeout.ms - Zookeeper session timeout [default: 18000 (18s) since 2.5; 6000 (6s) in 0.8.0-2.4]
zookeeper.set.acl - Set client to use secure ACLs [default: false, since 0.9.0]
zookeeper.ssl.cipher.suites - Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookee.. [default: null, since 2.5]
zookeeper.ssl.client.enable - Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure syst.. [default: false, since 2.5]
zookeeper.ssl.crl.enable - Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the z.. [default: false, since 2.5]
zookeeper.ssl.enabled.protocols - Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabl.. [default: null, since 2.5]
zookeeper.ssl.endpoint.identification.algorithm - Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" mean.. [default: HTTPS, since 2.5]
zookeeper.ssl.keystore.location - Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via th.. [default: null, since 2.5]
zookeeper.ssl.keystore.password - Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via th.. [default: null, since 2.5]
zookeeper.ssl.keystore.type - Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zo.. [default: null, since 2.5]
zookeeper.ssl.ocsp.enable - Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set vi.. [default: false, since 2.5]
zookeeper.ssl.protocol - Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zooke.. [default: TLSv1.2, since 2.5]
zookeeper.ssl.truststore.location - Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.lo.. [default: null, since 2.5]
zookeeper.ssl.truststore.password - Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.pa.. [default: null, since 2.5]
zookeeper.ssl.truststore.type - Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type s.. [default: null, since 2.5]
zookeeper.sync.time.ms - (no description in source) [default: 2000 (2s) in 0.8.0-3.1; absent from 3.2 onward]
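The zookeeper.ssl.* properties above combine into a single broker-to-ZooKeeper TLS block. A hedged sketch with placeholder paths and passwords:

```properties
# Hypothetical broker-to-ZooKeeper TLS configuration (paths are placeholders).
zookeeper.ssl.client.enable=true
# The Netty client socket is required for ZooKeeper TLS connectivity
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.keystore.location=/etc/kafka/ssl/zk-client.keystore.jks
zookeeper.ssl.keystore.password=changeit
zookeeper.ssl.truststore.location=/etc/kafka/ssl/zk-client.truststore.jks
zookeeper.ssl.truststore.password=changeit
zookeeper.ssl.protocol=TLSv1.2
# Verify the ZooKeeper server's hostname against its certificate
zookeeper.ssl.endpoint.identification.algorithm=HTTPS
```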
advertised.host.name - [default: null in 0.8.1-2.8; removed in 3.0]
advertised.port - [default: null in 0.8.1-2.8; removed in 3.0]
host.name - [default: null in 0.8.0-0.8.2]
port - [default: 9092 from 0.8.2; 6667 in 0.8.0-0.8.1; removed in 3.0]
quota.consumer.default - [default: 9223372036854775807 (Infinity) in 0.9.0-2.8; removed in 3.0]
quota.producer.default - [default: 9223372036854775807 (Infinity) in 0.9.0-2.8; removed in 3.0]
controller.message.queue.size - [default: Int.MaxValue in 0.8.2; 10 in 0.8.0-0.8.1; removed in 0.9.0]
log.delete.delay.ms - [default: 60000 (1min) in 0.8.1-0.8.2]
log.retention.{ms,minutes,hours} - [default: 7 days, 0.8.2 only]
log.roll.jitter.{ms,hours} - [default: 0, 0.8.2 only]
log.roll.{ms,hours} - [default: 24 * 7 hours, 0.8.2 only]
offsets.topic.retention.minutes - [default: 1440, 0.8.2 only]
replica.lag.max.messages - [default: 4000 in 0.8.0-0.8.2; removed in 0.9.0]
log.retention.{minutes,hours} - [default: 7 days, 0.8.1 only]
The following early per-topic override properties are listed with no recorded defaults: log.flush.interval.ms.per.topic, log.retention.bytes.per.topic, log.retention.hours.per.topic, log.roll.hours.per.topic, log.segment.bytes.per.topic.
Kafka 0.7-only properties (0.7 default shown; none carried forward to 0.8.0 or later):
brokerid - none
enable.zookeeper - true
log.cleanup.interval.mins - 10
log.default.flush.interval.ms - log.default.flush.scheduler.interval.ms
log.default.flush.scheduler.interval.ms - 3000 (3s)
log.file.size - 1*1024*1024*1024
log.flush.interval - 500
log.retention.size - -1
max.socket.request.bytes - 104857600 (100 MB)
monitoring.period.secs - 600
num.threads - Runtime.getRuntime.availableProcessors
socket.receive.buffer - 102400
socket.send.buffer - 102400
topic.flush.intervals.ms - none
topic.log.retention.hours - none
topic.partition.count.map - none
zk.connect - localhost:2182/kafka
zk.connectiontimeout.ms - 6000 (6s)
zk.sessiontimeout.ms - 6000 (6s)
zk.synctime.ms - 2000 (2s)
Consumer configuration properties (defaults by Kafka version, 0.7-3.8):
allow.auto.create.topics - Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automat.. [default: true, since 2.3]
auto.commit.interval.ms - The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true. [default: 5000 (5s) since 0.10.1; 60 * 1000 in 0.8.0-0.10.0]
auto.include.jmx.reporter - Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r.. [default: true, since 3.4]
auto.offset.reset - What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because t.. [default: latest since 0.10.1; largest in 0.8.0-0.10.0]
bootstrap.servers - A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser.. [default: null in 0.9.0-1.0; empty list in later releases]
check.crcs - Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred... [default: true, since 0.9.0]
client.dns.lookup - Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a succe.. [default: use_all_dns_ips since 2.6; default in 2.1-2.5]
client.id - An ID prefix string used for the client IDs of internal (main, restore, and global) consumers, producers, and admin clients with .. [default: "group id value" in 0.8.0-0.10.0; empty in later releases]
client.rack - A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corres.. [default: empty]
connections.max.idle.ms - Close idle connections after the number of milliseconds specified by this config. [default: 540000 (9min), since 0.9.0]
default.api.timeout.ms - Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operatio.. [default: 60000 (1min), since 2.0]
enable.auto.commit - If true the consumer's offset will be periodically committed in the background. [default: true, since 0.9.0]
enable.metrics.push - Whether to enable pushing of internal client metrics for (main, restore, and global) consumers, producers, and admin clients. The .. [default: true, since 3.7]
exclude.internal.topics - Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitl.. [default: true, since 0.8.2]
fetch.max.bytes - The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if th.. [default: 52428800 (50 MB), since 0.10.1]
fetch.max.wait.ms - The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately sat.. [default: 500 (500ms), since 0.9.0]
fetch.min.bytes - The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait f.. [default: 1 (1 B), since 0.8.0]
group.id - A unique string that identifies the Connect cluster group this worker belongs to. [default: null]
group.instance.id - A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer .. [default: null, since 2.3]
group.protocol - The group protocol consumer should use. We currently support "classic" or "consumer". If "consumer" is specified, then the consume.. [default: classic, since 3.7]
group.remote.assignor - The server-side assignor to use. If no assignor is specified, the group coordinator will pick one. This configuration is applied o.. [default: null, since 3.7]
heartbeat.interval.ms - The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used .. [default: 3000 (3s), since 0.9.0]
interceptor.classes - A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows .. [default: null in 0.10.0-1.0; empty list in later releases]
isolation.level - Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional me.. [default: read_uncommitted, since 0.11.0]
key.deserializer - Deserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface. [default: null]
max.partition.fetch.bytes - The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first reco.. [default: 1048576 (1 MB), since 0.9.0]
max.poll.interval.ms - The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of .. [default: 300000 (5min), since 0.10.1]
max.poll.records - The maximum number of records returned in a single call to poll(). Note that max.poll.records does not impact the underlying fetc.. [default: 500 since 0.10.1; 2147483647 in 0.10.0]
metadata.max.age.msThe period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha..



300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
metadata.recovery.strategyControls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to re..



























none
metric.reportersA list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p..



[]
[]
[]






















metrics.num.samplesThe number of samples maintained to compute metrics.



2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
metrics.recording.levelThe highest recording level for metrics.






INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
metrics.sample.window.msThe window of time a metrics sample is computed over.



30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
partition.assignment.strategyA list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use..


range
range
range
[class org.apache.kafka.clients.consumer.RangeAssignor]
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
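To move a running group from eager to cooperative rebalancing, the Kafka upgrade notes describe listing both assignors during a rolling restart; the sketch below assumes 2.4+ clients:

```properties
# List the cooperative assignor first so it wins once all members support it;
# keeping RangeAssignor in the list allows a safe rolling upgrade.
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor,org.apache.kafka.clients.consumer.RangeAssignor
```

Once every member has been restarted with this setting, RangeAssignor can be removed from the list in a second rolling restart.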
receive.buffer.bytes
  The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. Default: 32768 (32 KB) in 0.9.0; 65536 (64 KB) since 0.10.0.

reconnect.backoff.max.ms
  The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide… Default: 1000 (1 s), since 0.11.0.

reconnect.backoff.ms
  The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t… Default: 50 (50 ms).

request.timeout.ms
  The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r… Default: 40000 (40 s) in 0.9.0–0.10.0; 305000 (5 min 5 s) in 0.10.1–1.1; 30000 (30 s) since 2.0.

retry.backoff.max.ms
  The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, … Default: 1000 (1 s), since 3.7.

retry.backoff.ms
  The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending … Default: 100 (100 ms).
sasl.client.callback.handler.class
  The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. Default: null (since 2.0).

sasl.jaas.config
  JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format … Default: null (since 0.10.2).

sasl.kerberos.kinit.cmd
  Kerberos kinit command path. Default: /usr/bin/kinit.

sasl.kerberos.min.time.before.relogin
  Login thread sleep time between refresh attempts. Default: 60000.

sasl.kerberos.service.name
  The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Default: null.

sasl.kerberos.ticket.renew.jitter
  Percentage of random jitter added to the renewal time. Default: 0.05.

sasl.kerberos.ticket.renew.window.factor
  Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which … Default: 0.8.

sasl.login.callback.handler.class
  The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro… Default: null (since 2.0).

sasl.login.class
  The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener … Default: null (since 2.0).

sasl.login.connect.timeout.ms
  The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB… Default: null (since 3.1).

sasl.login.read.timeout.ms
  The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. Default: null (since 3.1).

sasl.login.refresh.buffer.seconds
  The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot… Default: 300 (since 2.0).

sasl.login.refresh.min.period.seconds
  The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between… Default: 60 (since 2.0).

sasl.login.refresh.window.factor
  Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which… Default: 0.8 (since 2.0).

sasl.login.refresh.window.jitter
  The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. … Default: 0.05 (since 2.0).

sasl.login.retry.backoff.max.ms
  The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us… Default: 10000 (10 s), since 3.1.

sasl.login.retry.backoff.ms
  The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us… Default: 100 (100 ms), since 3.1.

sasl.mechanism
  SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the de… Default: GSSAPI.
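Overriding the GSSAPI default is done with sasl.mechanism plus a matching sasl.jaas.config. A minimal sketch for SASL/PLAIN over TLS (the username and password are placeholders):

```properties
# SASL/PLAIN client authentication over an encrypted connection
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" \
  password="alice-secret";
```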
sasl.oauthbearer.clock.skew.seconds
  The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. Default: 30 (since 3.1).

sasl.oauthbearer.expected.audience
  The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. … Default: null (since 3.1).

sasl.oauthbearer.expected.issuer
  The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected … Default: null (since 3.1).

sasl.oauthbearer.jwks.endpoint.refresh.ms
  The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the… Default: 3600000 (1 h), since 3.1.

sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms
  The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern… Default: 10000 (10 s), since 3.1.

sasl.oauthbearer.jwks.endpoint.retry.backoff.ms
  The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut… Default: 100 (100 ms), since 3.1.

sasl.oauthbearer.jwks.endpoint.url
  The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi… Default: null (since 3.1).

sasl.oauthbearer.scope.claim.name
  The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop… Default: scope (since 3.1).

sasl.oauthbearer.sub.claim.name
  The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj… Default: sub (since 3.1).

sasl.oauthbearer.token.endpoint.url
  The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests… Default: null (since 3.1).
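The sasl.oauthbearer.* settings above work together; a minimal OIDC client sketch (the URL is a placeholder, and the exact Java package of the login callback handler has varied across 3.x releases, so check the client version you run):

```properties
# OAUTHBEARER against an OIDC token endpoint (Kafka 3.1+ clients)
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.oauthbearer.token.endpoint.url=https://auth.example.com/oauth2/token
# Package location differs between early and later 3.x releases.
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler
```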
security.protocol
  Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. Default: PLAINTEXT.

security.providers
  A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement … Default: null (since 2.4).

send.buffer.bytes
  The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. Default: 131072 (128 KB).

session.timeout.ms
  The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no hea… Default: 30000 (30 s) in 0.9.0–0.10.0; 10000 (10 s) in 0.10.1–2.8; 45000 (45 s) since 3.0.
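The liveness settings are related: the Kafka documentation recommends heartbeat.interval.ms no higher than one third of session.timeout.ms, and session.timeout.ms must fall within the broker's group.min.session.timeout.ms / group.max.session.timeout.ms bounds. A sketch pinning the 3.0+ defaults explicitly:

```properties
# Group-liveness tuning; these are the documented 3.0+ defaults
session.timeout.ms=45000
heartbeat.interval.ms=3000
# Separate bound on time between poll() calls (processing time), not heartbeats
max.poll.interval.ms=300000
```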
socket.connection.setup.timeout.max.ms
  The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc… Default: 127000 (2 min 7 s) in 2.7; 30000 (30 s) since 2.8.

socket.connection.setup.timeout.ms
  The amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim… Default: 10000 (10 s), since 2.7.
ssl.cipher.suites
  A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia… Default: null.

ssl.enabled.protocols
  The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' … Default: TLSv1.2,TLSv1.1,TLSv1 through 2.4; TLSv1.2 since 2.5 (the older protocols were removed); TLSv1.2,TLSv1.3 with Java 11 or newer in recent releases.

ssl.endpoint.identification.algorithm
  The endpoint identification algorithm to validate server hostname using server certificate. Default: null through 1.1; https since 2.0.

ssl.engine.factory.class
  The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache… Default: null (since 2.6).

ssl.key.password
  The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. Default: null.

ssl.keymanager.algorithm
  The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t… Default: SunX509.

ssl.keystore.certificate.chain
  Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list … Default: null (since 2.7).

ssl.keystore.key
  Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. … Default: null (since 2.7).

ssl.keystore.location
  The location of the key store file. This is optional for client and can be used for two-way authentication for client. Default: null.

ssl.keystore.password
  The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K… Default: null.

ssl.keystore.type
  The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.fact… Default: JKS.

ssl.protocol
  The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise… Default: TLS through 2.4; TLSv1.2 since 2.5; TLSv1.3 with Java 11 or newer in recent releases.

ssl.provider
  The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. Default: null.

ssl.secure.random.implementation
  The SecureRandom PRNG implementation to use for SSL cryptography operations. Default: null (since 0.10.1).

ssl.trustmanager.algorithm
  The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f… Default: PKIX.

ssl.truststore.certificates
  Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.… Default: null (since 2.7).

ssl.truststore.location
  The location of the trust store file. Default: null.

ssl.truststore.password
  The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che… Default: null.

ssl.truststore.type
  The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12… Default: JKS.
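The ssl.* settings above combine as follows for a mutual-TLS client; the paths and passwords below are placeholders:

```properties
# Mutual TLS: truststore verifies the broker, keystore authenticates the client
security.protocol=SSL
ssl.truststore.location=/etc/kafka/client.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/etc/kafka/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```

Omitting the keystore lines gives server-authenticated TLS only, which is sufficient when the broker does not require client certificates.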
value.deserializer
  Deserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface. No default; this setting is required (listed as null in early releases).
The settings below belong to the old (Scala, ZooKeeper-based) consumer, which was available through 1.1 and removed in 2.0:

auto.commit.enable: true
consumer.id: null
consumer.timeout.ms: -1
dual.commit.enabled: true (0.8.x onward)
fetch.message.max.bytes: 1024 * 1024
fetch.wait.max.ms: 100 (100 ms)
num.consumer.fetchers: 1 (0.8.x onward)
offsets.channel.backoff.ms: 1000 (1 s) (0.8.x onward)
offsets.channel.socket.timeout.ms: 10000 (10 s) (0.8.x onward)
offsets.commit.max.retries: 5 (0.8.x onward)
offsets.storage: zookeeper (0.8.x onward)
queued.max.message.chunks: 10 in 0.7–0.8.0; 2 afterwards
rebalance.backoff.ms: 2000 (2 s)
rebalance.max.retries: 4
refresh.leader.backoff.ms: 200 (200 ms)
socket.receive.buffer.bytes
  The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. Default: 64 * 1024.

socket.timeout.ms
  Default: 30000 (30 s).

zookeeper.connect
  Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper serve… Default: null (must be set).

zookeeper.connection.timeout.ms
  The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms i… Default: 6000 (6 s).

zookeeper.session.timeout.ms
  ZooKeeper session timeout. Default: 6000 (6 s).

zookeeper.sync.time.ms
  Default: 2000 (2 s).
The following settings existed only in the 0.7 consumer:

autocommit.enable: true
autocommit.interval.ms: 10000 (10 s)
autooffset.reset: smallest
backoff.increment.ms: 1000 (1 s)
fetch.size: 300 * 1024
groupid: groupid
mirror.consumer.numthreads: 4
mirror.topics.blacklist: (empty)
mirror.topics.whitelist: (empty)
queuedchunks.max: 100
rebalance.retries.max: 4
socket.buffersize: 64 * 1024
Producer configuration defaults by Kafka version (0.7 through 3.8):
acks
  The number of acknowledgments the producer requires the leader to have received before considering a request complete. This contro… Default: 1 through 2.8; all since 3.0.

auto.include.jmx.reporter
  Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r… Default: true (since 3.4).

batch.size
  The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same parti… Default: 200 in 0.7; 16384 since the Java producer (0.8.x).

bootstrap.servers
  A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser… No default; this setting is required (listed as null in early releases).

buffer.memory
  The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than… Default: 33554432 (32 MB).

client.dns.lookup
  Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a succe… Default: default in 2.1–2.5; use_all_dns_ips since 2.6.

client.id
  An ID prefix string used for the client IDs of internal (main, restore, and global) consumers, producers, and admin clients with … Default: empty.

compression.gzip.level
  The compression level to use if compression.type is set to gzip. Default: -1 (since 3.8).

compression.lz4.level
  The compression level to use if compression.type is set to lz4. Default: 9 (since 3.8).

compression.type
  Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'… Default: none.

compression.zstd.level
  The compression level to use if compression.type is set to zstd. Default: 3 (since 3.8).
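The per-codec level settings only take effect when the matching codec is selected; a sketch assuming a 3.8+ client (older clients accept compression.type but ignore the level keys):

```properties
# Enable zstd compression and pin its level explicitly (3 is the 3.8 default)
compression.type=zstd
compression.zstd.level=3
```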
connections.max.idle.ms
  Close idle connections after the number of milliseconds specified by this config. Default: 540000 (9 min).

delivery.timeout.ms
  An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record w… Default: 120000 (2 min), since 2.1.

enable.idempotence
  When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer … Default: false in 0.11.0–2.8; true since 3.0.
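Because the idempotence default flipped in 3.0, applications that must behave the same on every client version can pin the related settings explicitly; this sketch assumes 0.11+ brokers, which idempotent produce requires:

```properties
# Pin the 3.0+ idempotent-producer behavior regardless of client version
enable.idempotence=true
acks=all
max.in.flight.requests.per.connection=5
```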
enable.metrics.push
  Whether to enable pushing of internal client metrics for (main, restore, and global) consumers, producers, and admin clients. The … Default: true (since 3.7).

interceptor.classes
  A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows … Default: null when introduced (0.10.x–1.0); an empty list in later releases.

key.serializer
  Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface. No default; this setting is required (listed as null in early releases).

linger.ms
  The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this… Default: 0.

max.block.ms
  The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), c… Default: 60000 (1 min).

max.in.flight.requests.per.connection
  The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this confi… Default: 5.

max.request.size
  The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single re… Default: 1048576.

metadata.max.age.ms
  The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha… Default: 300000 (5 min).

metadata.max.idle.ms
  Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to… Default: 300000 (5 min), since 2.5.

metadata.recovery.strategy
  Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to re… Default: none (since 3.8).

metric.reporters
  A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p… Default: an empty list.

metrics.num.samples
  The number of samples maintained to compute metrics. Default: 2.

metrics.recording.level
  The highest recording level for metrics. Default: INFO.

metrics.sample.window.ms
  The window of time a metrics sample is computed over. Default: 30000 (30 s).

partitioner.adaptive.partitioning.enable
  When set to 'true', the producer will try to adapt to broker performance and produce more messages to partitions hosted on faster … Default: true (since 3.3).

partitioner.availability.timeout.ms
  If a broker cannot process produce requests from a partition for partitioner.availability.timeout.ms time, the partitioner treats … Default: 0 (since 3.3).
partitioner.classDetermines which partition to send a record to when records are produced. Available options are: kafka.producer.DefaultPartitioner<T> - uses the partitioning strategy hash(key) % num_partitions. If key is null, then it picks a random partition.
kafka.producer.DefaultPartitioner
kafka.producer.DefaultPartitioner
kafka.producer.DefaultPartitioner
class org.apache.kafka.clients.producer.internals.DefaultPartitioner
class org.apache.kafka.clients.producer.internals.DefaultPartitioner
class org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
org.apache.kafka.clients.producer.internals.DefaultPartitioner
null
null
null
null
null
null
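The partitioner.class entry above describes the default keyed strategy as hash(key) % num_partitions, with a random partition for null keys. A minimal Python sketch of that idea follows; note Kafka's real DefaultPartitioner hashes the key bytes with murmur2, so the CRC32 hash here is only an illustrative stand-in.

```python
# Illustrative sketch of keyed partitioning: hash(key) % num_partitions,
# random partition for null keys. Kafka's actual DefaultPartitioner uses
# murmur2 on the key bytes; zlib.crc32 below is a stand-in for illustration.
import random
import zlib

def choose_partition(key, num_partitions, rng=random):
    """Pick a partition for a record key."""
    if key is None:
        # Null key: pick a random partition (old default behaviour).
        return rng.randrange(num_partitions)
    # Deterministic hash of the key bytes, modulo the partition count.
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Records with the same key always land on the same partition.
assert choose_partition("order-42", 6) == choose_partition("order-42", 6)
```

Same-key records therefore always map to the same partition, which is what preserves per-key ordering.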
partitioner.ignore.keysWhen set to 'true' the producer won't use record keys to choose a partition. If 'false', producer would choose a partition based o..






















false
false
false
false
false
false
receive.buffer.bytesThe size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
reconnect.backoff.max.msThe maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide..







1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
reconnect.backoff.msThe base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t..

10
10ms
10
10ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
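Taken together, reconnect.backoff.ms and reconnect.backoff.max.ms define an exponential backoff schedule: starting at the base value (50 ms by default) and growing per consecutive failure up to the cap (1000 ms). A deterministic sketch of that schedule, with the randomized jitter that real clients add omitted:

```python
# Sketch of the reconnect backoff schedule implied by the defaults:
# start at reconnect.backoff.ms (50 ms), double per failed attempt,
# cap at reconnect.backoff.max.ms (1000 ms). Real clients also add
# random jitter; it is omitted here for determinism.
RECONNECT_BACKOFF_MS = 50
RECONNECT_BACKOFF_MAX_MS = 1000

def backoff_ms(failures: int) -> int:
    """Backoff before the next reconnect, after `failures` consecutive failures."""
    return min(RECONNECT_BACKOFF_MS * (2 ** failures), RECONNECT_BACKOFF_MAX_MS)

schedule = [backoff_ms(n) for n in range(6)]
# 50, 100, 200, 400, 800, then capped at 1000
```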
request.timeout.msThe configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r..
10000
10s
10000
10s
10000
10s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
retriesSetting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is..

0
0
0
0
0
0
0
0
0
0
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
2147483647
retry.backoff.max.msThe maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, ..


























1000
1s
1000
1s
retry.backoff.msThe amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending ..
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
sasl.client.callback.handler.classThe fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.










null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.jaas.configJAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format ..






null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
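A typical client-side value for sasl.jaas.config, here for the PLAIN mechanism, looks like the fragment below; the username and password are placeholders, and the trailing semicolon is required by the JAAS syntax.

```
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="alice" \
    password="alice-secret";
```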
sasl.kerberos.kinit.cmdKerberos kinit command path.



/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
sasl.kerberos.min.time.before.reloginLogin thread sleep time between refresh attempts.



60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
sasl.kerberos.service.nameThe Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.kerberos.ticket.renew.jitterPercentage of random jitter added to the renewal time.



0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
sasl.kerberos.ticket.renew.window.factorLogin thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which ..



0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
sasl.login.callback.handler.classThe fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro..










null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.login.classThe fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener ..










null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.login.connect.timeout.msThe (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB..




















null
null
null
null
null
null
null
null
sasl.login.read.timeout.msThe (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.




















null
null
null
null
null
null
null
null
sasl.login.refresh.buffer.secondsThe amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot..










300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
sasl.login.refresh.min.period.secondsThe desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between..










60
60
60
60
60
60
60
60
60
60
60
60
60
60
60
60
60
60
sasl.login.refresh.window.factorLogin refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which..










0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
sasl.login.refresh.window.jitterThe maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. ..










0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
sasl.login.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us..




















10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
sasl.login.retry.backoff.msThe (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us..




















100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
sasl.mechanismSASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the de..




GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
sasl.oauthbearer.clock.skew.secondsThe (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.




















30
30
30
30
30
30
30
30
sasl.oauthbearer.expected.audienceThe (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. ..




















null
null
null
null
null
null
null
null
sasl.oauthbearer.expected.issuerThe (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected ..




















null
null
null
null
null
null
null
null
sasl.oauthbearer.jwks.endpoint.refresh.msThe (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the..




















3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern..




















10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
sasl.oauthbearer.jwks.endpoint.retry.backoff.msThe (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut..




















100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
sasl.oauthbearer.jwks.endpoint.urlThe OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi..




















null
null
null
null
null
null
null
null
sasl.oauthbearer.scope.claim.nameThe OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop..




















scope
scope
scope
scope
scope
scope
scope
scope
sasl.oauthbearer.sub.claim.nameThe OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj..




















sub
sub
sub
sub
sub
sub
sub
sub
sasl.oauthbearer.token.endpoint.urlThe URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests..




















null
null
null
null
null
null
null
null
security.protocolProtocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.



PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
security.providersA list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement ..














null
null
null
null
null
null
null
null
null
null
null
null
null
null
send.buffer.bytesThe size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.
100 * 1024
100 * 1024
100 * 1024
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
socket.connection.setup.timeout.max.msThe maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc..

















127000
2min 7s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
socket.connection.setup.timeout.msThe amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim..

















10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
ssl.cipher.suitesA list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia..



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.enabled.protocolsThe list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' ..



[TLSv1.2, TLSv1.1, TLSv1]
[TLSv1.2, TLSv1.1, TLSv1]
[TLSv1.2, TLSv1.1, TLSv1]
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2,TLSv1.3
TLSv1.2
TLSv1.2,TLSv1.3
TLSv1.2
TLSv1.2,TLSv1.3
TLSv1.2,TLSv1.3
TLSv1.2,TLSv1.3
TLSv1.2,TLSv1.3
ssl.endpoint.identification.algorithmThe endpoint identification algorithm to validate server hostname using server certificate.



null
null
null
null
null
null
null
https
https
https
https
https
https
https
https
https
https
https
https
https
https
https
https
https
https
ssl.engine.factory.classThe class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache..
















null
null
null
null
null
null
null
null
null
null
null
null
ssl.key.passwordThe password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.keymanager.algorithmThe algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t..



SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
ssl.keystore.certificate.chainCertificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list ..

















null
null
null
null
null
null
null
null
null
null
null
ssl.keystore.keyPrivate key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. ..

















null
null
null
null
null
null
null
null
null
null
null
ssl.keystore.locationThe location of the key store file. This is optional for client and can be used for two-way authentication for client.



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.keystore.passwordThe store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K..



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.keystore.typeThe file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.fact..



JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
ssl.protocolThe SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise..



TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.3
TLSv1.2
TLSv1.3
TLSv1.2
TLSv1.3
TLSv1.3
TLSv1.3
TLSv1.3
ssl.providerThe name of the security provider used for SSL connections. Default value is the default security provider of the JVM.



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.secure.random.implementationThe SecureRandom PRNG implementation to use for SSL cryptography operations.





null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.trustmanager.algorithmThe algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f..



PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
ssl.truststore.certificatesTrusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X...

















null
null
null
null
null
null
null
null
null
null
null
ssl.truststore.locationThe location of the trust store file.



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.truststore.passwordThe password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che..



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.truststore.typeThe file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12..



JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
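The ssl.* settings above combine into a client TLS configuration such as the following fragment; the paths and passwords are placeholders, and the keystore lines are only needed for mutual TLS.

```
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
ssl.truststore.type=JKS
# Only needed for mutual TLS (client certificate authentication):
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```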
transaction.timeout.msThe maximum amount of time in milliseconds that a transaction will remain open before the coordinator proactively aborts it. The s..







60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
transactional.idThe TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions si..







null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
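Setting transactional.id enables the transactional producer; a minimal, hypothetical configuration combining it with the related settings above might look like this (the id value is a placeholder, and enable.idempotence with acks=all is required for transactions):

```
transactional.id=orders-processor-1
transaction.timeout.ms=60000
enable.idempotence=true
acks=all
```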
value.serializerSerializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface.



null
null
null
null
null
null
null
null
null
null
null














block.on.buffer.full

true
true
false
false
false
false





















metadata.fetch.timeout.ms

60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min





















timeout.ms

30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s





















batch.num.messages
200
200
200

























compressed.topics
null
null
null
null

























compression.codec
0
none
none
none

























key.serializer.class
null
null
null

























message.send.max.retries
3
3
3

























metadata.broker.list
null
null
null

























producer.type
sync
sync
sync
sync

























queue.buffering.max.messages
10000
10000
10000

























queue.buffering.max.ms
5000
5s
5000
5s
5000
5s

























queue.enqueue.timeout.ms
-1
-1
-1

























request.required.acks
0
0
0

























serializer.class
kafka.serializer.DefaultEncoder. This is a no-op encoder. The serialization of data to Message should be handled outside the Producer
kafka.serializer.DefaultEncoder
kafka.serializer.DefaultEncoder
kafka.serializer.DefaultEncoder

























topic.metadata.refresh.interval.ms
600 * 1000
10min
600 * 1000
10min
600 * 1000
10min

























broker.list
null. Either this parameter or zk.connect needs to be specified by the user.




























buffer.size
102400




























callback.handler
null




























callback.handler.props
null




























connect.timeout.ms
5000
5s




























event.handler
kafka.producer.async.EventHandler<T>




























event.handler.props
null




























max.message.size
1000000




























queue.size
10000




























queue.time
5000




























reconnect.interval
30000




























reconnect.time.interval.ms
10 * 1000 * 1000
2h 46min 40s




























socket.timeout.ms
30000
30s




























zk.connect
null. Either this parameter or broker.partition.info needs to be specified by the user




























zk.read.num.retries
3




























Description | 0.11.0 | 1.0 | 1.1 | 2.0 | 2.1 | 2.2 | 2.3 | 2.4 | 2.5 | 2.6 | 2.7 | 2.8 | 3.0 | 3.1 | 3.2
cleanup.policyThis config designates the retention policy to use on log segments. The "delete" policy (which is the default) will discard old se..
delete
delete
delete
delete
delete
delete
delete
delete
delete
delete
delete
delete
delete
delete
delete
compression.typeSpecify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'..
producer
producer
producer
producer
producer
producer
producer
producer
producer
producer
producer
producer
producer
producer
producer
delete.retention.msThe amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in whi..
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
86400000
1d
file.delete.delay.msThe time to wait before deleting a file from the filesystem
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
flush.messagesThis setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set..
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
flush.msThis setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was..
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
follower.replication.throttled.replicasA list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas ..














index.interval.bytesThis setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a me..
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
4096
4 KB
leader.replication.throttled.replicasA list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in..














max.compaction.lag.msThe maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.





9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
max.message.bytesThe largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are c..
1000012
976.57 KB
1000012
976.57 KB
1000012
976.57 KB
1000012
976.57 KB
1000012
976.57 KB
1000012
976.57 KB
1000012
976.57 KB
1000012
976.57 KB
1048588
1 MB
1048588
1 MB
1048588
1 MB
1048588
1 MB
1048588
1 MB
1048588
1 MB
1048588
1 MB
message.downconversion.enableThis configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, ..


true
true
true
true
true
true
true
true
true
true
true
true
message.format.version[DEPRECATED] Specify the message format version the broker will use to append messages to the logs. The value of this config is al..
0.11.0-IV2
1.0-IV0
1.1-IV0
2.0-IV1
2.1-IV2
2.2-IV1
2.3-IV1
2.4-IV1
2.5-IV0
2.6-IV0
2.7-IV2
2.8-IV1
3.0-IV1
3.0-IV1
3.0-IV1
message.timestamp.difference.max.ms[DEPRECATED] The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in ..
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
9223372036854775807
Infinity
message.timestamp.typeDefine whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or ..
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
CreateTime
min.cleanable.dirty.ratioThis configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). B..
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
min.compaction.lag.msThe minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
min.insync.replicasWhen a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a ..
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
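The min.insync.replicas rule above can be modeled directly: with acks=all (or -1), a broker only accepts a write while the in-sync replica set is at least min.insync.replicas strong. A small sketch of that acceptance rule:

```python
# Sketch of the rule described for min.insync.replicas: with acks=all,
# a produce request is only accepted while the in-sync replica (ISR) set
# has at least min.insync.replicas members; acks=0/1 ignore the setting.
def write_accepted(acks: str, isr_size: int, min_insync_replicas: int) -> bool:
    """Model whether the broker accepts a produce request."""
    if acks in ("all", "-1"):
        return isr_size >= min_insync_replicas
    # acks=0 and acks=1 do not consult min.insync.replicas.
    return True

# replication.factor=3 with min.insync.replicas=2 tolerates one replica down.
assert write_accepted("all", isr_size=2, min_insync_replicas=2)
# A second failure drops the ISR below the minimum; writes are rejected.
assert not write_accepted("all", isr_size=1, min_insync_replicas=2)
```

This is why the common durable setup pairs a replication factor of 3 with min.insync.replicas=2: it survives one broker failure without refusing writes.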
preallocateTrue if we should preallocate the file on disk when creating a new log segment.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
retention.bytesThis configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old l..
-1
-1
-1
-1
-1
-1
-1
-1
-1
-1
-1
-1
-1
-1
-1
retention.msThis configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we a..
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
segment.bytesThis configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger ..
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
segment.index.bytesThis configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink i..
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
10485760
10 MB
segment.jitter.msThe maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
segment.msThis configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to..
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
unclean.leader.election.enableIndicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result ..
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
local.retention.bytesThe maximum size of local log segments that can grow for a partition before it deletes the old segments. Default value is -2, it r..











-2


local.retention.msThe number of milliseconds to keep the local log segment before it gets deleted. Default value is -2, it represents `retention.ms`..











-2


remote.storage.enableTo enable tiered storage for a topic, set this configuration as true. You can not disable this config once it is enabled. It will ..











false


Description | 0.9.0 | 0.10.0 | 0.10.1 | 0.10.2 | 0.11.0 | 1.0 | 1.1 | 2.0 | 2.1 | 2.2 | 2.3 | 2.4 | 2.5 | 2.6 | 2.7 | 2.8 | 3.0 | 3.1 | 3.2 | 3.3 | 3.4 | 3.5 | 3.6 | 3.7 | 3.8
access.control.allow.methodsSets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the ..
























access.control.allow.originValue to set the Access-Control-Allow-Origin header to for REST API requests.To enable cross origin access, set this to the domain..
























admin.listenersList of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank stri..










null
null
null
null
null
null
null
null
null
null
null
null
null
null
auto.include.jmx.reporterDeprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r..



















true
true
true
true
true
bootstrap.serversA list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser..
[localhost:9092]
[localhost:9092]
[localhost:9092]
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
localhost:9092
client.dns.lookupControls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a succe..







default
default
default
default
default
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
use_all_dns_ips
client.idAn ID prefix string used for the client IDs of internal (main, restore, and global) consumers , producers, and admin clients with ..
























config.providersComma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvide..
























config.storage.replication.factorReplication factor used when creating the configuration storage topic



3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
config.storage.topicThe name of the Kafka topic where connector configurations are stored
null
null
null
null
null
null
null
null
null
null














connect.protocolCompatibility mode for Kafka Connect Protocol









compatible
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
sessioned
connections.max.idle.msClose idle connections after the number of milliseconds specified by this config.540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
540000
9min
connector.client.config.override.policyClass name or alias of implementation of ConnectorClientConfigOverridePolicy. Defines what client configurations can be overridden..









None
None
None
None
None
None
All
All
All
All
All
All
All
All
All
exactly.once.source.supportWhether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and thei..


















disabled
disabled
disabled
disabled
disabled
disabled
group.idA unique string that identifies the Connect cluster group this worker belongs to.null
null
null
null
null
null
null
null
null
null
null














header.converterHeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls..





org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
org.apache.kafka.connect.storage.SimpleHeaderConverter
heartbeat.interval.msThe expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used ..3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
inter.worker.key.generation.algorithmThe algorithm to use for generating internal request keys. The algorithm 'HmacSHA256' will be used as a default on JVMs that suppo..










HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
inter.worker.key.sizeThe size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm ..










null
null
null
null
null
null
null
null
null
null
null
null
null
null
inter.worker.key.ttl.msThe TTL of generated session keys used for internal request validation (in milliseconds)










3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
inter.worker.signature.algorithmThe algorithm used to sign internal requestsThe algorithm 'inter.worker.signature.algorithm' will be used as a default on JVMs tha..










HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
inter.worker.verification.algorithmsA list of permitted algorithms for verifying internal requests, which must include the algorithm used for the inter.worker.signatu..










HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
HmacSHA256
key.converterConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f..null
null
null
null
null
null
null
null
null
null
null














listenersList of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. Specify hostname as 0.0.0.0 ..





null
null
null
null
null
null
null
null
null
null
http://:8083
http://:8083
http://:8083
http://:8083
http://:8083
http://:8083
http://:8083
http://:8083
http://:8083
metadata.max.age.msThe period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha..300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
metadata.recovery.strategyControls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to re..























none
metric.reportersA list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p..[]
[]
[]






















metrics.num.samplesThe number of samples maintained to compute metrics.2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
metrics.recording.levelThe highest recording level for metrics.




INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
INFO
metrics.sample.window.msThe window of time a metrics sample is computed over.30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
offset.flush.interval.msInterval at which to try committing offsets for tasks.60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
offset.flush.timeout.msMaximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before can..5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
offset.storage.partitionsThe number of partitions used when creating the offset storage topic



25
25
25
25
25
25
25
25
25
25
25
25
25
25
25
25
25
25
25
25
25
offset.storage.replication.factorReplication factor used when creating the offset storage topic



3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
offset.storage.topicThe name of the Kafka topic where source connector offsets are stored
null
null
null
null
null
null
null
null
null
null














plugin.discoveryMethod to use to discover plugins present in the classpath and plugin.path configuration. This can be one of multiple values with ..





















hybrid_warn
hybrid_warn
hybrid_warn
plugin.pathList of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of t..



null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
rebalance.timeout.msThe maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of ..

60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
60000
1min
receive.buffer.bytesThe size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
32768
32 KB
reconnect.backoff.max.msThe maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide..



1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
1000
1s
reconnect.backoff.msThe base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t..50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
50
50ms
request.timeout.msThe configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r..40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
40000
40s
response.http.headers.configRules for REST API HTTP response headers
























rest.advertised.host.nameIf this is set, this is the hostname that will be given out to other workers to connect to.null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
rest.advertised.listenerSets the advertised listener (HTTP or HTTPS) which will be given to other workers to use.





null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
rest.advertised.portIf this is set, this is the port that will be given out to other workers to connect to.null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
rest.extension.classesComma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the interface Conne..
























retry.backoff.max.msThe maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, ..






















1000
1s
1000
1s
retry.backoff.msThe amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending ..100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
sasl.client.callback.handler.classThe fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.






null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.jaas.configJAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format ..


null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.kerberos.kinit.cmdKerberos kinit command path./usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
/usr/bin/kinit
sasl.kerberos.min.time.before.reloginLogin thread sleep time between refresh attempts.60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
60000
sasl.kerberos.service.nameThe Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.kerberos.ticket.renew.jitterPercentage of random jitter added to the renewal time.0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
sasl.kerberos.ticket.renew.window.factorLogin thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which ..0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
sasl.login.callback.handler.classThe fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro..






null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.login.classThe fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener ..






null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
sasl.login.connect.timeout.msThe (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB..
















null
null
null
null
null
null
null
null
sasl.login.read.timeout.msThe (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.
















null
null
null
null
null
null
null
null
sasl.login.refresh.buffer.secondsThe amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot..






300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
sasl.login.refresh.min.period.secondsThe desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between..






60
60
60
60
60
60
60
60
60
60
60
60
60
60
60
60
60
60
sasl.login.refresh.window.factorLogin refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which..






0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
0.8
sasl.login.refresh.window.jitterThe maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. ..






0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.05
sasl.login.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us..
















10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
sasl.login.retry.backoff.msThe (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us..
















100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
sasl.mechanismSASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the de..
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
GSSAPI
sasl.oauthbearer.clock.skew.secondsThe (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.
















30
30
30
30
30
30
30
30
sasl.oauthbearer.expected.audienceThe (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. ..
















null
null
null
null
null
null
null
null
sasl.oauthbearer.expected.issuerThe (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected ..
















null
null
null
null
null
null
null
null
sasl.oauthbearer.jwks.endpoint.refresh.msThe (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the..
















3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern..
















10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
sasl.oauthbearer.jwks.endpoint.retry.backoff.msThe (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut..
















100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
100
100ms
sasl.oauthbearer.jwks.endpoint.urlThe OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi..
















null
null
null
null
null
null
null
null
sasl.oauthbearer.scope.claim.nameThe OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop..
















scope
scope
scope
scope
scope
scope
scope
scope
sasl.oauthbearer.sub.claim.nameThe OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj..
















sub
sub
sub
sub
sub
sub
sub
sub
sasl.oauthbearer.token.endpoint.urlThe URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests..
















null
null
null
null
null
null
null
null
scheduled.rebalance.max.delay.msThe maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassig..









300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
security.protocolProtocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
PLAINTEXT
send.buffer.bytesThe size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
131072
128 KB
session.timeout.msThe timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no hea..30000
30s
30000
30s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
socket.connection.setup.timeout.max.msThe maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc..













127000
2min 7s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
30000
30s
socket.connection.setup.timeout.msThe amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim..













10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
10000
10s
ssl.cipher.suitesA list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia..null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.client.authConfigures kafka broker to request client authentication. The following settings are common:





none
none
none
none
none
none
none
none
none
none
none
none
none
none
none
none
none
none
none
ssl.enabled.protocolsThe list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' ..[TLSv1.2, TLSv1.1, TLSv1]
[TLSv1.2, TLSv1.1, TLSv1]
[TLSv1.2, TLSv1.1, TLSv1]
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2,TLSv1.1,TLSv1
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2,TLSv1.3
TLSv1.2
TLSv1.2,TLSv1.3
TLSv1.2
TLSv1.2,TLSv1.3
TLSv1.2,TLSv1.3
TLSv1.2,TLSv1.3
TLSv1.2,TLSv1.3
ssl.endpoint.identification.algorithmThe endpoint identification algorithm to validate server hostname using server certificate.null
null
null
null
null
null
null
https
https
https
https
https
https
https
https
https
https
https
https
https
https
https
https
https
https
ssl.engine.factory.classThe class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache..












null
null
null
null
null
null
null
null
null
null
null
null
ssl.key.passwordThe password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.keymanager.algorithmThe algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t..SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
SunX509
ssl.keystore.certificate.chainCertificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list ..













null
null
null
null
null
null
null
null
null
null
null
ssl.keystore.keyPrivate key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. ..













null
null
null
null
null
null
null
null
null
null
null
ssl.keystore.locationThe location of the key store file. This is optional for client and can be used for two-way authentication for client.null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.keystore.passwordThe store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K..null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.keystore.typeThe file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.fact..JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
ssl.protocolThe SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise..TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLS
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.2
TLSv1.3
TLSv1.2
TLSv1.3
TLSv1.2
TLSv1.3
TLSv1.3
TLSv1.3
TLSv1.3
ssl.providerThe name of the security provider used for SSL connections. Default value is the default security provider of the JVM.null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.secure.random.implementationThe SecureRandom PRNG implementation to use for SSL cryptography operations.

null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.trustmanager.algorithmThe algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f..PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
PKIX
ssl.truststore.certificatesTrusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X...













null
null
null
null
null
null
null
null
null
null
null
ssl.truststore.locationThe location of the trust store file.null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.truststore.passwordThe password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che..null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ssl.truststore.typeThe file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12..JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
JKS
status.storage.partitionsThe number of partitions used when creating the status storage topic



5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
status.storage.replication.factorReplication factor used when creating the status storage topic



3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
status.storage.topicThe name of the Kafka topic where connector and task status are stored
null
null
null
null
null
null
null
null
null
null














task.shutdown.graceful.timeout.msAmount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All task have shutdown tr..5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
5000
5s
topic.creation.enableWhether to allow automatic creation of topics used by source connectors, when source connectors are configured with `topic.creatio..












true
true
true
true
true
true
true
true
true
true
true
true
topic.tracking.allow.resetIf set to true, it allows user requests to reset the set of active topics per connector.











true
true
true
true
true
true
true
true
true
true
true
true
true
topic.tracking.enableEnable tracking the set of active topics per connector during runtime.











true
true
true
true
true
true
true
true
true
true
true
true
true
value.converterConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f..null
null
null
null
null
null
null
null
null
null
null














worker.sync.timeout.msWhen the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before..3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
3000
3s
worker.unsync.backoff.msWhen the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster ..300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
300000
5min
internal.key.converternull
null
null
null
null
null
null
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter









internal.value.converternull
null
null
null
null
null
null
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter
org.apache.kafka.connect.json.JsonConverter









rest.host.namenull
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null









rest.port8083
8083
8083
8083
8083
8083
8083
8083
8083
8083
8083
8083
8083
8083
8083
8083









clusterconnect
connect























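Of the worker settings above, a handful ship with no default in recent releases and must be supplied before a distributed worker will start. A minimal connect-distributed.properties sketch; the group id and topic names are illustrative, and JsonConverter is one common converter choice rather than a mandated one:

```properties
bootstrap.servers=localhost:9092
group.id=connect-cluster

# Required internal topics (no defaults in recent releases). Replication
# factors and partition counts fall back to the table's defaults (3 / 25 / 5).
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status

# Converters are required in recent releases.
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Matches the default REST listener shown since 3.0.
listeners=http://:8083
```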
Source connector configuration properties, Kafka versions 2.1–3.8. Unless noted, the default is unchanged across the versions in which the property appears.

config.action.reload - The action that Connect should take on the connector when changes in external configuration providers result in a change in the co.. Default: restart (shown as RESTART in 2.1).
connector.class - Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connecto.. Default: null.
errors.log.enable - If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is '.. Default: false.
errors.log.include.messages - Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and t.. Default: false.
errors.retry.delay.max.ms - The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reac.. Default: 60000 (1 min).
errors.retry.timeout - The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be.. Default: 0.
errors.tolerance - Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in a.. Default: none.
exactly.once.support - Permitted values are requested, required. If set to "required", forces a preflight check for the connector to ensure that it can p.. Default: requested (since 3.3).
header.converter - HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls.. Default: null.
key.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f.. Default: null.
name - Globally unique name to use for this connector. Default: null.
offsets.storage.topic - The name of a separate offsets topic to use for this connector. If empty or not specified, the worker's global offsets topic name .. Default: null (since 3.3).
predicates - Aliases for the predicates used by transformations. Default: empty.
tasks.max - Maximum number of tasks to use for this connector. Default: 1.
tasks.max.enforce - (Deprecated) Whether to enforce that the tasks.max property is respected by the connector. By default, connectors that generate to.. Default: true (since 3.8).
topic.creation.groups - Groups of configurations for topics created by source connectors. Default: empty.
transaction.boundary - Permitted values are: poll, interval, connector. If set to 'poll', a new producer transaction will be started and committed for ev.. Default: poll (since 3.3).
transaction.boundary.interval.ms - If 'transaction.boundary' is set to 'interval', determines the interval for producer transaction commits by connector tasks. If un.. Default: null (since 3.3).
transforms - Aliases for the transformations to be applied to records. Default: empty.
value.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f.. Default: null.
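As a sketch of how these properties combine, the following standalone source connector configuration uses the bundled FileStreamSourceConnector (the file and topic values are illustrative):

```properties
name=demo-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/input.txt
topic=demo-topic
# Retry transient failures for up to 5 minutes, waiting at most 1 minute between attempts
errors.retry.timeout=300000
errors.retry.delay.max.ms=60000
errors.log.enable=true
```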
Sink connector configuration properties, Kafka versions 2.1–3.8. Unless noted, the default is unchanged across the versions in which the property appears.

config.action.reload - The action that Connect should take on the connector when changes in external configuration providers result in a change in the co.. Default: restart (shown as RESTART in 2.1).
connector.class - Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connecto.. Default: null.
errors.deadletterqueue.context.headers.enable - If true, add headers containing error context to the messages written to the dead letter queue. To avoid clashing with headers fro.. Default: false.
errors.deadletterqueue.topic.name - The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink c.. Default: empty.
errors.deadletterqueue.topic.replication.factor - Replication factor used to create the dead letter queue topic when it doesn't already exist. Default: 3.
errors.log.enable - If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is '.. Default: false.
errors.log.include.messages - Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and t.. Default: false.
errors.retry.delay.max.ms - The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reac.. Default: 60000 (1 min).
errors.retry.timeout - The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be.. Default: 0.
errors.tolerance - Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in a.. Default: none.
header.converter - HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls.. Default: null.
key.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f.. Default: null.
name - Globally unique name to use for this connector. Default: null.
predicates - Aliases for the predicates used by transformations. Default: empty.
tasks.max - Maximum number of tasks to use for this connector. Default: 1.
tasks.max.enforce - (Deprecated) Whether to enforce that the tasks.max property is respected by the connector. By default, connectors that generate to.. Default: true (since 3.8).
topics - List of topics to consume, separated by commas. Default: empty.
topics.regex - Regular expression giving topics to consume. Under the hood, the regex is compiled to a java.util.regex.Pattern. Only one of topic.. Default: empty.
transforms - Aliases for the transformations to be applied to records. Default: empty.
value.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f.. Default: null.
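A sink connector configuration combining the error-handling and dead letter queue properties above might look like this sketch (the topic names and file path are illustrative):

```properties
name=demo-file-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
topics=demo-topic
file=/tmp/output.txt
# Skip records that fail, routing them to a dead letter queue with error context headers
errors.tolerance=all
errors.deadletterqueue.topic.name=demo-dlq
errors.deadletterqueue.topic.replication.factor=3
errors.deadletterqueue.context.headers.enable=true
```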
Kafka Streams configuration properties, Kafka versions 0.10.0–3.8. Unless noted, the default is unchanged across the versions in which the property appears.

acceptable.recovery.lag - The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up enough to receive an active tas.. Default: 10000 (since 2.6).
application.id - An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-.. Default: null; required.
application.server - A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this Ka.. Default: empty.
auto.include.jmx.reporter - Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r.. Default: true (since 3.4).
bootstrap.servers - A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser.. Default: null; required.
buffered.records.per.partition - Maximum number of records to buffer per partition. Default: 1000.
built.in.metrics.version - Version of the built-in metrics to use. Default: latest.
cache.max.bytes.buffering - Maximum number of memory bytes to be used for buffering across all threads. Default: 10485760 (10 MB).
client.id - An ID prefix string used for the client IDs of internal (main, restore, and global) consumers, producers, and admin clients with .. Default: empty; <application.id>-<random-UUID> in 3.8.
commit.interval.ms - The frequency in milliseconds with which to commit processing progress. For at-least-once processing, committing means to save the.. Default: 30000 (30 s).
connections.max.idle.ms - Close idle connections after the number of milliseconds specified by this config. Default: 540000 (9 min).
default.client.supplier - Client supplier class that implements the org.apache.kafka.streams.KafkaClientSupplier interface. Default: org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier.
default.deserialization.exception.handler - Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface. Default: org.apache.kafka.streams.errors.LogAndFailExceptionHandler.
default.dsl.store - The default state store type used by DSL operators. Default: rocksDB (since 3.2).
default.key.serde - Default serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface. Note wh.. Default: org.apache.kafka.common.serialization.Serdes$ByteArraySerde; null since 3.0.
default.list.key.serde.inner - Default inner class of list serde for key that implements the org.apache.kafka.common.serialization.Serde interface. This configur.. Default: null.
default.list.key.serde.type - Default class for key that implements the java.util.List interface. This configuration will be read if and only if default.key.ser.. Default: null.
default.list.value.serde.inner - Default inner class of list serde for value that implements the org.apache.kafka.common.serialization.Serde interface. This config.. Default: null.
default.list.value.serde.type - Default class for value that implements the java.util.List interface. This configuration will be read if and only if default.value.. Default: null.
default.production.exception.handler - Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface. Default: org.apache.kafka.streams.errors.DefaultProductionExceptionHandler (since 1.1).
default.timestamp.extractor - Default timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface. Default: org.apache.kafka.streams.processor.FailOnInvalidTimestamp.
default.value.serde - Default serializer / deserializer class for value that implements the org.apache.kafka.common.serialization.Serde interface. Note .. Default: org.apache.kafka.common.serialization.Serdes$ByteArraySerde; null since 3.0.
dsl.store.suppliers.class - Defines which store implementations to plug in to DSL operators. Must implement the org.apache.kafka.streams.state.DslStoreSupplie.. Default: org.apache.kafka.streams.state.BuiltInDslStoreSuppliers$RocksDBDslStoreSuppliers (since 3.7).
enable.metrics.push - Whether to enable pushing of internal client metrics for (main, restore, and global) consumers, producers, and admin clients. The .. Default: true (since 3.7).
max.task.idle.ms - This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in .. Default: 0 (since 2.1).
max.warmup.replicas - The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the pur.. Default: 2 (since 2.6).
metadata.max.age.ms - The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha.. Default: 300000 (5 min).
metric.reporters - A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p.. Default: empty list.
metrics.num.samples - The number of samples maintained to compute metrics. Default: 2.
metrics.recording.level - The highest recording level for metrics. Default: INFO.
metrics.sample.window.ms - The window of time a metrics sample is computed over. Default: 30000 (30 s).
num.standby.replicas - The number of standby replicas for each task. Default: 0.
num.stream.threads - The number of threads to execute stream processing. Default: 1.
poll.ms - The amount of time in milliseconds to block waiting for input. Default: 100 (100 ms).
probing.rebalance.interval.ms - The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up .. Default: 600000 (10 min) (since 2.6).
processing.guarantee - The processing guarantee that should be used. Possible values are at_least_once (default) and exactly_once_v2 (requires brokers ve.. Default: at_least_once.
rack.aware.assignment.non_overlap_cost - Cost associated with moving tasks from existing assignment. This config and rack.aware.assignment.traffic_cost controls whether th.. Default: null (since 3.6).
rack.aware.assignment.strategy - The strategy we use for rack aware assignment. Rack aware assignment will take client.rack and racks of TopicPartition into accoun.. Default: none (since 3.6).
rack.aware.assignment.tags - List of client tag keys used to distribute standby replicas across Kafka Streams instances. When configured, Kafka Streams will ma.. Default: empty.
rack.aware.assignment.traffic_cost - Cost associated with cross rack traffic. This config and rack.aware.assignment.non_overlap_cost controls whether the optimization .. Default: null (since 3.6).
receive.buffer.bytes - The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. Default: 32768 (32 KB).
reconnect.backoff.max.ms - The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide.. Default: 1000 (1 s).
reconnect.backoff.ms - The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t.. Default: 50 (50 ms).
repartition.purge.interval.ms - The frequency in milliseconds with which to delete fully consumed records from repartition topics. Purging will occur after at lea.. Default: 30000 (30 s) (since 3.2).
replication.factor - The replication factor for change log topics and repartition topics created by the stream processing application. The default of -.. Default: 1; -1 since 3.0.
request.timeout.ms - The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r.. Default: 40000 (40 s).
retries - Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is.. Default: 0.
retry.backoff.ms - The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending .. Default: 100 (100 ms).
rocksdb.config.setter - A RocksDB config setter class or class name that implements the org.apache.kafka.streams.state.RocksDBConfigSetter interface. Default: null.
security.protocol - Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. Default: PLAINTEXT.
send.buffer.bytes - The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. Default: 131072 (128 KB).
state.cleanup.delay.ms - The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have n.. Default: 60000 (1 min) through 0.10.2; 600000 (10 min) since 0.11.0.
state.dir - Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem. Not.. Default: /tmp/kafka-streams; ${java.io.tmpdir} in the most recent release (some documentation builds show machine-specific temp paths instead).
statestore.cache.max.bytes - Maximum number of memory bytes to be used for statestore cache across all threads. Default: 10485760 (10 MB) (since 3.4).
task.assignor.class - A task assignor class or class name implementing the org.apache.kafka.streams.processor.assignment.TaskAssignor interface. Default.. Default: null (since 3.8).
task.timeout.ms - The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a t.. Default: 300000 (5 min).
topology.optimization - A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: ".. Default: none (since 2.0).
upgrade.from - Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.. Default: null.
window.size.ms - Sets window size for the deserializer in order to calculate window end times. Default: null.
windowed.inner.class.serde - Default serializer / deserializer for the inner class of a windowed record. Must implement the org.apache.kafka.common.serializati.. Default: null.
windowstore.changelog.additional.retention.ms - Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default: 86400000 (1 day).
default.windowed.key.serde.inner - Default: null (listed only in a few mid-range releases).
default.windowed.value.serde.inner - Default: null (listed only in a few mid-range releases).
partition.grouper - Default: org.apache.kafka.streams.processor.DefaultPartitionGrouper (removed in recent releases).
key.serde - Default: org.apache.kafka.common.serialization.Serdes$ByteArraySerde, later null (earliest releases only; superseded by default.key.serde).
timestamp.extractor - Default: org.apache.kafka.streams.processor.ConsumerRecordTimestampExtractor, later org.apache.kafka.streams.processor.FailOnInvalidTimestamp (earliest releases only; superseded by default.timestamp.extractor).
value.serde - Default: org.apache.kafka.common.serialization.Serdes$ByteArraySerde, later null (earliest releases only; superseded by default.value.serde).
zookeeper.connect - Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper serve.. (earliest releases only; removed).
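A minimal Kafka Streams configuration touching several of the properties above might look like the following sketch (the application ID, host, and state directory are illustrative); in practice these entries are typically loaded into a java.util.Properties object and passed to the KafkaStreams constructor:

```properties
application.id=demo-streams-app
bootstrap.servers=localhost:9092
num.stream.threads=2
processing.guarantee=exactly_once_v2
default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
state.dir=/var/lib/kafka-streams
```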