Kafka Streams ProductionExceptionHandler not skipping records with EOS

I have a simple Kafka Streams app with exactly-once processing enabled that reads from a topic, performs some transformations, and then publishes to another topic.
I have a ProductionExceptionHandler attached which is set up to log the record and then continue processing by returning ProductionExceptionHandlerResponse.CONTINUE.
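A minimal sketch of such a handler (the class name and log wording are placeholders; in 2.8 the handle() callback receives the already-serialized record):

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogAndContinueProductionExceptionHandler implements ProductionExceptionHandler {

    private static final Logger LOG = LoggerFactory.getLogger(LogAndContinueProductionExceptionHandler.class);

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        // Log the failed record (key and value are raw serialized bytes at this point) and keep going.
        LOG.error("Failed to produce record to topic {} ({} value bytes); skipping it.",
                record.topic(),
                record.value() == null ? 0 : record.value().length,
                exception);
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // No configuration needed for this sketch.
    }
}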
I would have expected the app to continue processing by skipping the failed record and moving on to the next one, but instead the app fails with the exception below, indicating that the client shut down with a non-recoverable error.
Using Kafka Streams Version: 2.8.2
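The wiring looks roughly like this (application id and bootstrap servers are placeholders; the handler class is the sketch above):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public final class AppConfig {
    static Properties streamsConfig() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "testApp1111");       // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Exactly-once processing; in 2.8 this constant resolves to "exactly_once".
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        // Attach the production exception handler sketched above.
        props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
                LogAndContinueProductionExceptionHandler.class);
        return props;
    }
}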

Is there any way to skip the record, or do I just need to do this manually in the DSL by checking the record size and filtering it out (logging as needed)?
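If it comes to that, here is a sketch of the manual filter, assuming String values and that the failures are oversized records; topic names and the byte limit are placeholders, and in practice the limit should track the producer's max.request.size minus some headroom for key, headers, and protocol overhead:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class SizeGuard {

    private static final Logger LOG = LoggerFactory.getLogger(SizeGuard.class);
    private static final int MAX_VALUE_BYTES = 900_000; // placeholder limit

    static void addTo(final StreamsBuilder builder) {
        final KStream<String, String> stream = builder.stream("input-topic"); // placeholder
        stream
            // ... transformations ...
            .filter((key, value) -> {
                final int size = value == null ? 0 : value.getBytes(StandardCharsets.UTF_8).length;
                if (size > MAX_VALUE_BYTES) {
                    LOG.warn("Dropping oversized record (key={}, {} value bytes)", key, size);
                    return false; // drop it here instead of letting the producer fail
                }
                return true;
            })
            .to("output-topic"); // placeholder
    }
}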

2024-01-27 16:24:05,672 ERROR [testApp1111-vramkrishnan-mbp-StreamThread-1] KafkaStreams - stream-client [testApp1111-vramkrishnan-mbp] Encountered the following exception during processing and the registered exception handler opted to SHUTDOWN_CLIENT. The streams client is going to shut down now. 
org.apache.kafka.streams.errors.StreamsException: Error encountered trying to commit a transaction [stream-thread [testApp1111-vramkrishnan-mbp-StreamThread-1] task [0_0]]
	at org.apache.kafka.streams.processor.internals.StreamsProducer.commitTransaction(StreamsProducer.java:260) ~[kafka-streams-2.8.2.jar:?]
	at org.apache.kafka.streams.processor.internals.TaskManager.commitOffsetsOrTransaction(TaskManager.java:1107) ~[kafka-streams-2.8.2.jar:?]
	at org.apache.kafka.streams.processor.internals.TaskManager.commitAndFillInConsumedOffsetsAndMetadataPerTaskMap(TaskManager.java:1059) ~[kafka-streams-2.8.2.jar:?]
	at org.apache.kafka.streams.processor.internals.TaskManager.commit(TaskManager.java:1025) ~[kafka-streams-2.8.2.jar:?]
	at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:1010) ~[kafka-streams-2.8.2.jar:?]
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:786) ~[kafka-streams-2.8.2.jar:?]
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:583) ~[kafka-streams-2.8.2.jar:?]
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:556) ~[kafka-streams-2.8.2.jar:?]
Caused by: org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
	at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:1112) ~[kafka-clients-2.8.2.jar:?]
	at org.apache.kafka.clients.producer.internals.TransactionManager.sendOffsetsToTransaction(TransactionManager.java:408) ~[kafka-clients-2.8.2.jar:?]
	at org.apache.kafka.clients.producer.KafkaProducer.sendOffsetsToTransaction(KafkaProducer.java:698) ~[kafka-clients-2.8.2.jar:?]
	at org.apache.kafka.streams.processor.internals.StreamsProducer.commitTransaction(StreamsProducer.java:247) ~[kafka-streams-2.8.2.jar:?]
	... 7 more

Given that you are using 2.8.2, it should actually work…

There was a change in 3.2 that broke it: [KAFKA-15259] Kafka Streams does not continue processing due to rollback despite ProductionExceptionHandlerResponse.CONTINUE if using exactly_once - ASF JIRA. That ticket actually reports that it's working as expected on 2.8.2.