Kafka transactions not working in a client developed in Python

Hello everyone! We have a problem with Apache Kafka transactions.
The developed software is designed to synchronize Apache Kafka clients (hereinafter referred to as Kafka) in the event of a connection failure.
The software was developed in Python version 3.8.6 (console application).
To interact with Kafka version 2.6.2, the confluent-kafka library version 1.7.0 is used.
The messages are received by a cluster of 3 Kafka brokers + 1 ZooKeeper node (version 3.5.8).
When importing messages into Kafka topics, the transactional Kafka model is used.
A multi-threaded producer has been developed (it imports messages into Kafka), one thread per topic.
Producer config:

'bootstrap.servers': configuration.get_bootstrapservers(),
'retries': 1,
'acks': 'all',
'request.required.acks': 'all',
'compression.type': 'gzip',
'transactional.id': uuid.uuid4().hex,
'queue.buffering.max.kbytes': 104857600, # 100 GB
'transaction.timeout.ms': 600000, # 10 min
'max.in.flight.requests.per.connection': 1
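
For reference, a minimal sketch of how such a transactional producer could be constructed with confluent-kafka; the broker addresses below are placeholders, the real values come from the application's configuration module:

import uuid

from confluent_kafka import Producer

producer = Producer({
    'bootstrap.servers': "broker1:9092,broker2:9092,broker3:9092",  # placeholder
    'retries': 1,
    'acks': 'all',
    'compression.type': 'gzip',
    'transactional.id': uuid.uuid4().hex,      # unique id per producer thread
    'queue.buffering.max.kbytes': 104857600,   # 100 GB local queue
    'transaction.timeout.ms': 600000,          # 10 min
    'max.in.flight.requests.per.connection': 1,
})

# Must be called once before the first begin_transaction(); it registers the
# transactional.id with the broker and fences older producers using the same id.
producer.init_transactions()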

During testing, the console-consumer (Consumer) is used to check the receipt of imported messages.
The console consumer is connected to a client cluster (3 Kafka brokers + 1 Zookeeper).
Consumer configuration:

'bootstrap.servers': configuration.get_bootstrapservers(),
'auto.offset.reset': 'earliest',
'client.id': "ClientImp",
'group.id': "{}".format(group_id),
'enable.auto.commit': False,
'session.timeout.ms': 6000,
'fetch.message.max.bytes': 5242880,
'isolation.level': 'read_commited' (show only committed messages).
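
For comparison, a minimal confluent-kafka consumer sketch with these settings; the broker addresses, group id, and topic name are placeholders:

from confluent_kafka import Consumer

consumer = Consumer({
    'bootstrap.servers': "broker1:9092,broker2:9092,broker3:9092",  # placeholder
    'auto.offset.reset': 'earliest',
    'client.id': "ClientImp",
    'group.id': "test-group",                  # placeholder group id
    'enable.auto.commit': False,
    'session.timeout.ms': 6000,
    'fetch.message.max.bytes': 5242880,
    'isolation.level': 'read_committed',       # show only committed (transactional) records
})

consumer.subscribe(["import-topic"])           # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print("Consumer error:", msg.error())
            continue
        print("Received:", msg.value())
finally:
    consumer.close()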

To track imported messages, a PostgreSQL database is used (version …).
The database is a kind of log for tracking previously imported messages, so that duplicate messages are not created in Kafka topics.
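A rough sketch of such a deduplication check, assuming a hypothetical imported_messages table keyed by a message id and the psycopg2 driver (neither the table nor the driver is named in the original post):

import psycopg2

# Hypothetical schema: imported_messages(message_id TEXT PRIMARY KEY)
conn = psycopg2.connect("dbname=import_log user=importer")  # placeholder DSN

def already_imported(message_id: str) -> bool:
    """Return True if this message id was logged by a previous import run."""
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM imported_messages WHERE message_id = %s", (message_id,))
        return cur.fetchone() is not None

def mark_imported(message_id: str) -> None:
    """Record the message id after the Kafka transaction commits."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO imported_messages (message_id) VALUES (%s) ON CONFLICT DO NOTHING",
            (message_id,),
        )
    conn.commit()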

The problem: while messages are being imported, if the connection to the Kafka brokers is lost, or if the user interrupts the process with Ctrl+C, the messages that the software had already handed over to the Kafka client end up committed to the Kafka topics, and the consumer sees them.
We cannot work out why this happens. In the source code, on an error or a user interruption, the abort_transaction method is called, which, according to the official Confluent documentation, should mark the messages previously sent within that transaction as aborted; the consumer should therefore not display them, since it is configured with 'isolation.level': 'read_commited'.
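
A sketch of the intended commit/abort flow around the transactional producer; the import_batch function name and its arguments are illustrative, not taken from the original source code:

from confluent_kafka import KafkaException

def import_batch(producer, topic, messages):
    """Produce one batch inside a single transaction; abort on error or Ctrl+C."""
    producer.begin_transaction()
    try:
        for msg in messages:
            producer.produce(topic, value=msg)
        # commit_transaction() flushes outstanding messages and then commits;
        # only at this point should a read_committed consumer see the batch.
        producer.commit_transaction()
    except (KafkaException, KeyboardInterrupt):
        # abort_transaction() marks everything produced since begin_transaction()
        # as aborted, so a consumer with isolation.level=read_committed is
        # expected to skip those records.
        producer.abort_transaction()
        raise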

Does anyone have any idea why uncommitted messages are available to the consumer?
Or perhaps the transactions are not working correctly?
If the description of the problem is not clear, I will be glad to answer questions. Thank you in advance!

Looks like a typo in the console consumer config. It should be isolation.level: read_committed.

Thanks, Dave. That is just a typo in the post text; the console consumer settings do have isolation.level: read_committed.