🔸 QUIZ (CHOOSE ONE)
What does the Kafka broker setting offsets.retention.minutes control?
▪️ It defines the maximum number of consumer records processed per minute.
▪️ It determines the interval at which offsets are committed.
▪️ It controls the time Kafka remembers offsets in a special topic.
▪️ It sets the frequency of consumer group rebalancing.
🔸 WHAT IT REALLY MEANS
▪️ Kafka stores committed offsets in the __consumer_offsets topic.
▪️ offsets.retention.minutes = how long those offsets are kept before being purged.
▪️ Default is 10,080 minutes (7 days).
▪️ If your app is stopped longer than this, its offsets may be deleted → on restart, the consumer falls back to auto.offset.reset (reprocessing from earliest, or skipping ahead to latest).
▪️ Active groups keep their offsets fresh by committing—expiry mostly hits inactive groups.
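As a minimal sketch, extending retention is a broker-side setting in server.properties (the 30-day value below is illustrative, and the broker must be restarted to pick it up):

```properties
# server.properties
# Keep committed offsets for 30 days instead of the 7-day default (10,080 min)
offsets.retention.minutes=43200
```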
🔸 WHEN TO TUNE IT
▪️ Increase it if your consumers can be down for long maintenance windows.
▪️ Keep broker storage in mind: longer retention = more offset data stored.
▪️ Pair with clear restart policies (auto.offset.reset, commit frequency) to avoid surprises.
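The pairing above can be sketched on the consumer side. A minimal example, assuming a hypothetical group id and broker address; the property keys are standard Kafka consumer configs:

```java
import java.util.Properties;

public class ConsumerRestartConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "reporting-app");           // placeholder group id
        // If committed offsets have expired, restart from the oldest data
        // rather than silently jumping to the latest records.
        props.put("auto.offset.reset", "earliest");
        // Commit frequently so an active group always has fresh offsets.
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "5000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("auto.offset.reset")); // prints "earliest"
    }
}
```

Choosing "earliest" here trades possible reprocessing for no data loss after a long outage; make sure your consumers are idempotent before relying on it.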
🔸 TL;DR
▪️ offsets.retention.minutes = how long Kafka remembers your committed offsets in __consumer_offsets (default 7 days).
▪️ Raise it if downtime can exceed a week, so you don’t reprocess data after a restart.
🔸 TAKEAWAYS
▪️ Know where offsets live: __consumer_offsets.
▪️ Default is 7 days—good, but not universal.
▪️ Longer downtime? Increase retention to protect from rewind.
▪️ Offset expiry mainly affects inactive groups.
▪️ Test restart scenarios; don’t learn this in production. 🚨
#Kafka #ApacheKafka #Streaming #DataEngineering #EventDriven #DevOps #SRE #BigData #KafkaTips #Java
——
Answer: It controls the time Kafka remembers offsets in a special topic.