Kafka-connect-jdbc Oracle Datatypes

Hi, I tried to track down a problem with the source connector of kafka-connect-jdbc and the Oracle column type FLOAT.

Connection::getMetaData().getColumns() returns java.sql.Types.FLOAT, but unfortunately ResultSetMetaData reports java.sql.Types.NUMERIC with a large precision (126) and an undefined scale (-127).

At least for NumericMapping.BEST_FIT_EAGER_DOUBLE, schema generation falls back to Decimal because of the undefined scale. It looks to me like the if conditions should also consider NUMERIC_TYPE_SCALE_UNSET.

What do you think? I would create an issue, including a first patch, if there is no good reason to ignore NUMERIC_TYPE_SCALE_UNSET in this setting.

As I am new to the Confluent Community, I am not sure where to discuss such questions. There is a thread in Slack too:

Welcome to the forum @sebastian.sickelmann! I’m not quite following what the issue is. Can you help me and others by explaining in a bit more detail? What’s the definition of the field in Oracle itself?

I wrote this article a while ago, which covers decimals etc. with the JDBC connector.

Here is a good place :slight_smile:

Sure. It is defined as FLOAT(126), or FLOAT for short.
There is only a precision specification and no scale.
To clarify some details: Demystifying Oracle FLOAT Data Type with Practical Examples made me a little smarter on this topic.

So we cannot compare the precision of a NUMERIC and a FLOAT directly, as it is decimal digits vs. binary bits (126 binary bits correspond to about 126 × log10(2) ≈ 38 decimal digits).
If you use JDBC (Connection::getMetaData().getColumns()) to query the metadata of such a field, you get: jdbcType = 6 (java.sql.Types.FLOAT), precision = 126, scale = 0.
But when you query (e.g. with SELECT *) and analyse the ResultSetMetaData for this field, you get: jdbcType = 2 (java.sql.Types.NUMERIC), precision = 126, scale = -127.
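
A small, self-contained snippet to reproduce both readings (connection URL, credentials, and the SCOTT.MY_TABLE.MY_FLOAT names are placeholders; it assumes the column is defined as FLOAT(126)):

```java
import java.sql.*;

public class OracleFloatMetadata {
  public static void main(String[] args) throws SQLException {
    try (Connection conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger")) {

      // DatabaseMetaData reports the column as FLOAT (jdbcType 6).
      try (ResultSet cols = conn.getMetaData()
          .getColumns(null, "SCOTT", "MY_TABLE", "MY_FLOAT")) {
        while (cols.next()) {
          System.out.printf("getColumns: type=%d precision=%d scale=%d%n",
              cols.getInt("DATA_TYPE"),        // 6 = java.sql.Types.FLOAT
              cols.getInt("COLUMN_SIZE"),      // 126 (binary bits!)
              cols.getInt("DECIMAL_DIGITS"));  // 0
        }
      }

      // ResultSetMetaData reports the same column as NUMERIC (jdbcType 2)
      // with precision 126 and scale -127.
      try (Statement stmt = conn.createStatement();
           ResultSet rs = stmt.executeQuery("SELECT my_float FROM my_table")) {
        ResultSetMetaData md = rs.getMetaData();
        System.out.printf("query: type=%d precision=%d scale=%d class=%s%n",
            md.getColumnType(1),        // 2 = java.sql.Types.NUMERIC
            md.getPrecision(1),         // 126
            md.getScale(1),             // -127
            md.getColumnClassName(1));  // java.lang.Double
      }
    }
  }
}
```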

This looks strange, as it reads like a numeric value with 126 decimal digits. I am not sure whether the ojdbc driver is wrong. But we can detect that it is a floating-point type via getColumnClassName, which returns java.lang.Double in the case of FLOAT and java.math.BigDecimal in the case of NUMBER.

Now back to my point regarding NUMERIC_TYPE_SCALE_UNSET for the NumericMapping implementations BEST_FIT and BEST_FIT_EAGER_DOUBLE.

The implementation in kafka-connect-jdbc/GenericDatabaseDialect.java at c5ff1ae48669eba529c1f963876ff8ed209064ab · confluentinc/kafka-connect-jdbc · GitHub, line 1292ff., does not handle the scale = -127 case as “undefined scale”, resulting in wrong assumptions and a fallback to the “DECIMAL” implementation, which produces a 50+ byte array in the sink data stream.
Furthermore, it does not distinguish between decimal-digit precision and binary-bit precision. So for e.g. FLOAT(20) it would come to wrong conclusions, because it checks the binary-bit precision against a decimal-digit limit.
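
For illustration, a minimal sketch of the guard I have in mind (class and method names here are illustrative, not the connector's actual code; the constant matches GenericDatabaseDialect's NUMERIC_TYPE_SCALE_UNSET = -127):

```java
import java.sql.Types;

// Illustrative sketch, not the connector's actual code.
final class NumericMappingGuard {
  // Same value as GenericDatabaseDialect.NUMERIC_TYPE_SCALE_UNSET.
  static final int NUMERIC_TYPE_SCALE_UNSET = -127;

  /** Should this NUMERIC column map to FLOAT64 instead of the Decimal fallback? */
  static boolean mapToFloat64(int jdbcType, int precision, int scale) {
    if (jdbcType == Types.NUMERIC && scale == NUMERIC_TYPE_SCALE_UNSET) {
      // Scale -127 means "undefined": this is an Oracle FLOAT, so treat it
      // as a floating-point value rather than falling back to Decimal.
      // Also note that its precision is in binary bits, not decimal digits:
      // FLOAT(20) holds about 20 * log10(2), roughly 6 decimal digits, so
      // checking 20 against a decimal-digit limit reaches the wrong conclusion.
      return true;
    }
    return false;
  }
}
```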

A quick POC implementation, which helps to keep our project going, can be found here: ISSUE_FLOAT · picpromusic/kafka-connect-jdbc@5776b3a · GitHub

It is far from correct yet, but it helps us at least in the NumericMapping.BEST_FIT_EAGER_DOUBLE case.

@sebastian.sickelmann, thanks for reporting this. While it does seem to be misleading behavior for the ojdbc driver to report the FLOAT type as NUMERIC with that precision, the connector should be able to handle this quirk in the Oracle driver. IMO, though, the fix should be to override describeColumn(ResultSetMetaData rsMetadata, int column) in OracleDatabaseDialect to construct the column definitions correctly.
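
For illustration, a self-contained sketch of the normalization such an override could apply (the class and helper below are hypothetical, not the connector's API):

```java
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;

// Hypothetical helper, not the connector's API: the kind of normalization an
// OracleDatabaseDialect.describeColumn(...) override could perform before
// building the column definition.
final class OracleFloatNormalizer {
  static final int ORACLE_UNSET_SCALE = -127;

  /** The JDBC type the generic mapping logic should see for this column. */
  static int normalizedJdbcType(ResultSetMetaData md, int column) throws SQLException {
    int jdbcType = md.getColumnType(column);
    if (jdbcType == Types.NUMERIC
        && md.getScale(column) == ORACLE_UNSET_SCALE
        && "java.lang.Double".equals(md.getColumnClassName(column))) {
      // An Oracle FLOAT masquerading as NUMERIC: report it as FLOAT so the
      // BEST_FIT* numeric mappings choose FLOAT64 rather than Decimal.
      return Types.FLOAT;
    }
    return jdbcType;
  }
}
```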

Would you agree that we can proceed with a PR to do this in OracleDatabaseDialect?

