[bug] [kafka-connect-gcp-bigtable 2.0.26] No support for Kafka Connect logical data type `Date` in message value

Hello,
I think I encountered a bug while evaluating the Google Cloud BigTable Sink Connector in a docker compose setup based on cp-all-in-one/cp-all-in-one-community/docker-compose.yml (commit dd2eb847f183e65b6a1cca4045649ba0d48cf51d) from the confluentinc/cp-all-in-one GitHub repository.
The sink connector throws an exception when it encounters a logical Date in the message value:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:624)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:342)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:242)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:211)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
        at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: class java.util.Date cannot be cast to class java.lang.Integer (java.util.Date and java.lang.Integer are in module java.base of loader 'bootstrap')
        at io.confluent.connect.bigtable.client.BufferedWriter.getColumnValueFromField(BufferedWriter.java:271)
        at io.confluent.connect.bigtable.client.BufferedWriter.addStructRecordWriteToBatch(BufferedWriter.java:204)
        at io.confluent.connect.bigtable.client.BufferedWriter.addWriteToBatch(BufferedWriter.java:86)
        at io.confluent.connect.bigtable.client.UpsertWriter.lambda$flush$0(UpsertWriter.java:70)
        at java.base/java.util.HashMap.forEach(HashMap.java:1337)
        at io.confluent.connect.bigtable.client.UpsertWriter.flush(UpsertWriter.java:68)
        at io.confluent.connect.bigtable.BaseBigtableSinkTask.lambda$put$2(BaseBigtableSinkTask.java:104)
        at java.base/java.util.HashMap.forEach(HashMap.java:1337)
        at io.confluent.connect.bigtable.BaseBigtableSinkTask.put(BaseBigtableSinkTask.java:104)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:593)
        ... 11 more

The problem happens with all of io.confluent.connect.json.JsonSchemaConverter, io.confluent.connect.avro.AvroConverter, and io.confluent.connect.protobuf.ProtobufConverter configured to use the Schema Registry from the same docker compose file. The stack trace above is identical for all of them.

Sink connector config:

{
  "name": "confluent_postgres",
  "config": {
    "connector.class": "io.confluent.connect.gcp.bigtable.BigtableSinkConnector",
    "row.key.definition": "",
    "table.name.format": "postgres_confluent_table",
    "confluent.topic.bootstrap.servers": "kafka:29092",
    "gcp.bigtable.credentials.path": "/gcp_key.json",
    "tasks.max": "1",
    "topics": "postgres_logical",
    "gcp.bigtable.project.id": "unoperate-test",
    "confluent.license": "",
    "row.key.delimiter": "#",
    "confluent.topic.replication.factor": "1",
    "name": "confluent_postgres",
    "gcp.bigtable.instance.id": "prawilny-dataflow",
    "auto.create.tables": "true",
    "auto.create.column.families": "true",
    "insert.mode": "upsert"
  },
  "tasks": [
    {
      "connector": "confluent_postgres",
      "task": 0
    }
  ],
  "type": "sink"
}

Source connector config (confluentinc-kafka-connect-jdbc-10.8.0):

{
  "name": "postgres",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "incrementing.column.name": "id",
    "errors.log.include.messages": "true",
    "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "connection.password": "password",
    "tasks.max": "1",
    "transforms": "createKey,extractInt",
    "transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "batch.max.rows": "1000",
    "table.whitelist": "logical",
    "mode": "incrementing",
    "topic.prefix": "postgres_",
    "transforms.extractInt.field": "id",
    "connection.user": "user",
    "transforms.createKey.fields": "id",
    "poll.interval.ms": "1000",
    "name": "postgres",
    "errors.tolerance": "all",
    "connection.url": "jdbc:postgresql://postgres:5432/db",
    "errors.log.enable": "true"
  },
  "tasks": [
    {
      "connector": "postgres",
      "task": 0
    }
  ],
  "type": "source"
}

Database content (select * from logical;):

id | logical_date 
----+--------------
  1 | 2025-01-17

Database schema:

CREATE TABLE logical (
    id serial PRIMARY KEY,
    logical_date date
)

Schema in schema registry (curl http://localhost:8081/schemas/ids/1 | jq .schema -r | jq):

{
  "type": "object",
  "title": "logical",
  "properties": {
    "id": {
      "type": "integer",
      "connect.index": 0,
      "connect.type": "int32"
    },
    "logical_date": {
      "connect.index": 1,
      "oneOf": [
        {
          "type": "null"
        },
        {
          "type": "integer",
          "title": "org.apache.kafka.connect.data.Date",
          "connect.version": 1,
          "connect.type": "int32"
        }
      ]
    }
  }
}

From my reading of the Kafka Connect source code, it looks as if the connector doesn’t take org.apache.kafka.connect.data.Schema#name() into account and only uses org.apache.kafka.connect.data.Schema#type().
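
For illustration only, a minimal sketch (the class and method names are mine, not the connector’s) of what taking the logical name into account could look like, using the conversion helpers shipped with the Connect API:

import java.math.BigDecimal;

import org.apache.kafka.connect.data.Date;
import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Time;
import org.apache.kafka.connect.data.Timestamp;

// Sketch only: convert a Struct field value to the primitive representation
// implied by its schema, checking Schema#name() for logical types before
// falling back to Schema#type().
final class LogicalTypeAwareConversion {
    static Object toPrimitive(Schema fieldSchema, Object value) {
        if (value == null || fieldSchema.name() == null) {
            return value; // no logical type; the existing switch over Schema#type() applies
        }
        switch (fieldSchema.name()) {
            case Date.LOGICAL_NAME:      // value is a java.util.Date
                return Date.fromLogical(fieldSchema, (java.util.Date) value);      // int (days since epoch)
            case Time.LOGICAL_NAME:      // value is a java.util.Date
                return Time.fromLogical(fieldSchema, (java.util.Date) value);      // int (millis since midnight)
            case Timestamp.LOGICAL_NAME: // value is a java.util.Date
                return Timestamp.fromLogical(fieldSchema, (java.util.Date) value); // long (epoch millis)
            case Decimal.LOGICAL_NAME:   // value is a java.math.BigDecimal
                return Decimal.fromLogical(fieldSchema, (BigDecimal) value);       // byte[] (unscaled value)
            default:
                return value;
        }
    }
}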

I can provide a reproducer in the form of a docker-compose.yml and related scripts if needed.

Getting date or timestamp fields to shake hands across databases can be tricky with Kafka Connect. There’s a TimestampConverter SMT to help in these cases. I personally think it’s easiest to keep the transformation sane by converting to a Unix epoch on the way into Kafka. (I’m not sure whether there’s a bug or a feature request here around date handling, but dealing in epochs should skirt the issue.) What does the Bigtable schema look like?
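
For example, something along these lines added to the JDBC source connector config should emit the field as a plain int64 epoch value instead of a logical Date (the dateToEpoch alias is arbitrary and logical_date matches the column in your repro; untested on my side):

{
(...)
    "transforms": "createKey,extractInt,dateToEpoch",
    "transforms.dateToEpoch.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
    "transforms.dateToEpoch.field": "logical_date",
    "transforms.dateToEpoch.target.type": "unix"
(...)
}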

The Bigtable instance has no schema - it’s empty, and the tables (and column families) are to be created by the connector; see the connector’s config:

{
(...)
    "auto.create.tables": "true",
    "auto.create.column.families": "true",
(...)
}

Besides, Bigtable generally has no schema (apart from column families): it just stores values within columns (which aren’t necessarily the same across rows) as byte arrays.

Also, I started this topic as a bug report and not as a feature request since it seemed to me that Date is documented to be supported here: Google Cloud BigTable Sink Connector for Confluent Platform | Confluent Documentation.

This does look like it might be a bug and not a feature request… To add info to the report, can you try your repro with a timestamp rather than a date on the Postgres side? I’m poking around the source code and it seems like that would work. If it does work with timestamp, that’ll give a head start on how to fix it.

Interestingly enough, timestamp seems to work:

$ BIGTABLE_EMULATOR_HOST=localhost:8086 cbt -project unoperate-test -instance prawilny-dataflow read postgres_confluent_table
2025/01/22 10:48:24 -creds flag unset, will use gcloud credential
----------------------------------------
1
  postgres_logical:id                      @ 2025/01/22-10:48:19.883000
    "\x00\x00\x00\x01"
  postgres_logical:logical                 @ 2025/01/22-10:48:19.883000
    "\x00\x00\x01\x94\x8di\x9b\""

It’s rather interesting, since time throws an exception with the same stack trace as in the Postgres date/Kafka Connect Date case:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:624)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:342)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:242)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:211)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
        at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: class java.util.Date cannot be cast to class java.lang.Integer (java.util.Date and java.lang.Integer are in module java.base of loader 'bootstrap')
        at io.confluent.connect.bigtable.client.BufferedWriter.getColumnValueFromField(BufferedWriter.java:271)
        at io.confluent.connect.bigtable.client.BufferedWriter.addStructRecordWriteToBatch(BufferedWriter.java:204)
        at io.confluent.connect.bigtable.client.BufferedWriter.addWriteToBatch(BufferedWriter.java:86)
        at io.confluent.connect.bigtable.client.UpsertWriter.lambda$flush$0(UpsertWriter.java:70)
        at java.base/java.util.HashMap.forEach(HashMap.java:1337)
        at io.confluent.connect.bigtable.client.UpsertWriter.flush(UpsertWriter.java:68)
        at io.confluent.connect.bigtable.BaseBigtableSinkTask.lambda$put$2(BaseBigtableSinkTask.java:104)
        at java.base/java.util.HashMap.forEach(HashMap.java:1337)
        at io.confluent.connect.bigtable.BaseBigtableSinkTask.put(BaseBigtableSinkTask.java:104)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:593)
        ... 11 more

Postgres numeric/Kafka Connect Decimal also throws an exception with nearly the same stack trace (only the first entry in the ClassCastException stack trace differs - perhaps a different branch in a switch over schema.type()?):

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:624)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:342)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:242)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:211)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
        at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: class java.math.BigDecimal cannot be cast to class [B (java.math.BigDecimal and [B are in module java.base of loader 'bootstrap')
        at io.confluent.connect.bigtable.client.BufferedWriter.getColumnValueFromField(BufferedWriter.java:285)
        at io.confluent.connect.bigtable.client.BufferedWriter.addStructRecordWriteToBatch(BufferedWriter.java:204)
        at io.confluent.connect.bigtable.client.BufferedWriter.addWriteToBatch(BufferedWriter.java:86)
        at io.confluent.connect.bigtable.client.UpsertWriter.lambda$flush$0(UpsertWriter.java:70)
        at java.base/java.util.HashMap.forEach(HashMap.java:1337)
        at io.confluent.connect.bigtable.client.UpsertWriter.flush(UpsertWriter.java:68)
        at io.confluent.connect.bigtable.BaseBigtableSinkTask.lambda$put$2(BaseBigtableSinkTask.java:104)
        at java.base/java.util.HashMap.forEach(HashMap.java:1337)
        at io.confluent.connect.bigtable.BaseBigtableSinkTask.put(BaseBigtableSinkTask.java:104)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:593)
        ... 11 more

From my understanding, it’s very likely that your code assumes that the type of the object returned by SinkRecord#value() is equal to SinkRecord#valueSchema().type(), whereas the Converter has already deserialized it into a java.util.Date or java.math.BigDecimal.
Why timestamp works, though, is something I don’t understand. Maybe the JDBC source connector serializes it as an int64 without a logical type?
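
To illustrate the first point, a minimal standalone sketch (plain Connect API, no connector involved) showing that for a logical Date field the value class is java.util.Date even though the schema type is INT32:

import org.apache.kafka.connect.data.Date;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class LogicalDateValueClass {
    public static void main(String[] args) {
        Schema valueSchema = SchemaBuilder.struct()
                .field("logical_date", Date.SCHEMA)
                .build();
        Struct value = new Struct(valueSchema)
                .put("logical_date", new java.util.Date(0L)); // 1970-01-01

        Schema fieldSchema = valueSchema.field("logical_date").schema();
        System.out.println(fieldSchema.type());                     // INT32
        System.out.println(fieldSchema.name());                     // org.apache.kafka.connect.data.Date
        System.out.println(value.get("logical_date").getClass());   // class java.util.Date
    }
}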

Thanks for running those additional tests. I think it comes down to special handling only for the 64-bit case. If it’s less (date) or more (was it a time with time zone = 12 bytes?), then it hits the cast exception.

Yes please - if you have a repro packaged up please share! I’ll include it in the bug that I open.

Repro:

docker-compose.yml:

---
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.5.7
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  schema-registry:
    image: confluentinc/cp-schema-registry:7.5.7
    depends_on:
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka:29092
      SCHEMA_REGISTRY_DEBUG: 'true'

  kafka-connect:
    image: confluentinc/cp-kafka-connect:7.5.7
    depends_on:
      - schema-registry
      - postgres
    ports:
      - "8083:8083"
    volumes:
      - ./confluentinc-kafka-connect-gcp-bigtable-2.0.26:/usr/share/java/confluentinc-kafka-connect-gcp-bigtable-2.0.26
    environment:
      BIGTABLE_EMULATOR_HOST: bigtable:8086
      CONNECT_BOOTSTRAP_SERVERS: kafka:29092
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_PLUGIN_PATH: /usr/share/java/
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081

  bigtable:
    image: google/cloud-sdk:latest
    ports:
      - 127.0.0.1:8086:8086
    entrypoint:
      - gcloud
      - beta
      - emulators
      - bigtable
      - start
      - --host-port=0.0.0.0:8086

  postgres:
    image: postgres:17
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
    volumes:
      - ./logical.sql:/docker-entrypoint-initdb.d/logical.sql
    ports:
      - '127.0.0.1:5432:5432'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 5s
      retries: 5

restart.sh:

#!/bin/bash

set -x

docker compose down -v
docker compose up -d
# Wait until the Kafka Connect REST API is up and reports an empty connector list
until diff <(echo '[]') <(curl http://localhost:8083/connectors/ | jq -rc) ; do
    sleep 1
done

echo "restarted"
./create_postgres_source.sh | jq
./create_confluent_bigtable_sink.sh | jq

echo 'insert into logical(logical) values (1.234567890)' | PGPASSWORD=password psql -h localhost -p 5432 -d db -U user

logical.sql:

CREATE TABLE logical (
    id serial PRIMARY KEY,
    logical numeric
)

create_confluent_bigtable_sink.sh:

#!/bin/bash

set -euo pipefail

function create_connector() {

[[ $# -eq 4 ]] # Ensure correct number of arguments
local connector_name=$1
local source_topic=$2
local target_table=$3
local row_key_definition=$4

curl \
    -X POST \
    -H "Content-Type: application/json" \
    http://localhost:8083/connectors \
    --data '{
    "name": "'"$connector_name"'",
    "config": {
        "auto.create.column.families": "true",
        "auto.create.tables": "true",
        "confluent.license": "",
        "confluent.topic.bootstrap.servers": "kafka:29092",
        "confluent.topic.replication.factor": "1",
        "connector.class": "io.confluent.connect.gcp.bigtable.BigtableSinkConnector",
        "gcp.bigtable.credentials.json": "FIXME",
        "gcp.bigtable.instance.id": "prawilny-dataflow",
        "gcp.bigtable.project.id": "unoperate-test",
        "insert.mode": "upsert",
        "name": "'"$connector_name"'",
        "row.key.definition": "'"$row_key_definition"'",
        "row.key.delimiter": "#",
        "table.name.format": "'"$target_table"'",
        "tasks.max": "1",
        "topics": "'"$source_topic"'"
    }
}'
}

create_connector confluent_postgres postgres_logical postgres_confluent_table ""

create_postgres_source.sh:

#!/bin/bash

set -euo pipefail

function create_connector() {

[[ $# -eq 1 ]] # Ensure correct number of arguments
local connector_name=$1

curl \
    -X POST \
    -H "Content-Type: application/json" \
    http://localhost:8083/connectors \
    --data '{
    "name": "'"$connector_name"'",
    "config": {
        "name": "'"$connector_name"'",
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "tasks.max": "1",
        "connection.url": "jdbc:postgresql://postgres:5432/db",
        "connection.user": "user",
        "connection.password": "password",
        "table.whitelist": "logical",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "topic.prefix": "postgres_",
        "poll.interval.ms": "1000",
        "batch.max.rows": "1000",
        "errors.tolerance": "all",
        "errors.log.enable": "true",
        "errors.log.include.messages": "true",
        "transforms":"createKey,extractInt",
        "transforms.createKey.type":"org.apache.kafka.connect.transforms.ValueToKey",
        "transforms.createKey.fields":"id",
        "transforms.extractInt.type":"org.apache.kafka.connect.transforms.ExtractField$Key",
        "transforms.extractInt.field":"id"
    }
}'
}

create_connector postgres

There are some dependencies:

  • postgres client
  • bash
  • docker (with compose plugin)
  • jq
  • .jar with the bigtable sink

Also, the sink will complain about "FIXME" as GCP credentials, so provide something with the correct shape. I don’t remember whether it works with invalid credentials; you can try starting with oauth2l/integration/fixtures/fake-service-account.json (commit 41b41476bdc146b1bb25c08a6ad1cb2468205fff) from the google/oauth2l GitHub repository.

Hi @dtroiano,
I also observed an analogous stack trace when simply trying to write Avro bytes from a Kafka message value using the sink.
The stack trace:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:624)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:342)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:242)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:211)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
        at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: class java.nio.HeapByteBuffer cannot be cast to class [B (java.nio.HeapByteBuffer and [B are in module java.base of loader 'bootstrap')
        at io.confluent.connect.bigtable.client.BufferedWriter.getColumnValueFromField(BufferedWriter.java:285)
        at io.confluent.connect.bigtable.client.BufferedWriter.addStructRecordWriteToBatch(BufferedWriter.java:204)
        at io.confluent.connect.bigtable.client.BufferedWriter.addWriteToBatch(BufferedWriter.java:86)
        at io.confluent.connect.bigtable.client.UpsertWriter.lambda$flush$0(UpsertWriter.java:70)
        at java.base/java.util.HashMap.forEach(HashMap.java:1337)
        at io.confluent.connect.bigtable.client.UpsertWriter.flush(UpsertWriter.java:68)
        at io.confluent.connect.bigtable.BaseBigtableSinkTask.lambda$put$2(BaseBigtableSinkTask.java:104)
        at java.base/java.util.HashMap.forEach(HashMap.java:1337)
        at io.confluent.connect.bigtable.BaseBigtableSinkTask.put(BaseBigtableSinkTask.java:104)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:593)
        ... 11 more
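
Presumably the sink needs to accept both representations that Connect allows for BYTES values; a minimal sketch of what I mean (my own helper, not taken from the connector’s code):

import java.nio.ByteBuffer;

// Sketch only: Connect permits BYTES field values to arrive either as byte[]
// or as java.nio.ByteBuffer, so a cast to byte[] alone is not enough.
final class BytesValues {
    static byte[] toByteArray(Object value) {
        if (value instanceof byte[]) {
            return (byte[]) value;
        }
        if (value instanceof ByteBuffer) {
            ByteBuffer buffer = ((ByteBuffer) value).duplicate(); // don't disturb the original position
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            return bytes;
        }
        throw new IllegalArgumentException("Unexpected BYTES value class: " + value.getClass());
    }
}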

Let me know if you need a reproducer for this case as well.

Hi @dtroiano,
What is the status of these class-casting bugs? Can I help somehow?

Thank you for the repro steps! I raised this internally and will pass along any updates.