JDBC sink connector doesn't support tables using declarative partitioning in PostgreSQL 10. #309
Comments
Here's the exception that was thrown: org.apache.kafka.connect.errors.ConnectException: Table cdrin is missing and auto-creation is disabled
I hate digging out old stuff, but I encountered the same issue on Postgres 11. Since the data written to the table can grow huge over time, partitioning it seems a valid use case. Regards. EDIT: For those facing the same issue, upgrade the PostgreSQL JDBC driver to 42.1.2 or higher for PG 10+. I've tested this both locally and on our production Kafka Connect, and it runs fine.
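For context on why the driver version matters: in PostgreSQL 10+, a declaratively partitioned parent table is stored in pg_class with relkind 'p' rather than the ordinary 'r', and my understanding (not stated in this thread) is that JDBC drivers before 42.1.2 did not include that relkind when listing tables, so the connector's metadata lookup found nothing. A quick way to confirm a table is a partitioned parent, using the cdrin table from the report as the example:

```sql
-- relkind 'p' = declaratively partitioned parent table (PostgreSQL 10+),
-- 'r' = ordinary table. Run this with psql against the target database.
SELECT relname, relkind
FROM pg_catalog.pg_class
WHERE relname = 'cdrin';
```

If this returns relkind 'p' and the connector still reports the table as missing, the driver on the Connect worker is likely too old.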
I see the PostgreSQL driver has since been upgraded to 42.2.10, and running confluentinc/cp-kafka-connect:5.5.1 confirms this. It appears this issue has been resolved and can be closed.
Actually, I tested this: confluentinc/cp-kafka-connect:5.4.1 supports PostgreSQL partitioned tables very well, although cp-kafka-connect:5.4.0 does not.
I hit this case when migrating a normal table to a partitioned table, and got the same error.
PostgreSQL 10 added support for declarative partitioning (https://www.postgresql.org/docs/10/static/ddl-partitioning.html). It appears that the sink connector doesn't support this new table type: it can't retrieve the table's metadata and throws an exception saying the table doesn't exist.
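For readers unfamiliar with the feature, this is a minimal declarative-partitioning sketch; the table and partition names are hypothetical, not from the report:

```sql
-- Parent table: stores no rows itself; partition routing is declared in the DDL.
CREATE TABLE measurements (
    recorded_at timestamptz NOT NULL,
    value       numeric
) PARTITION BY RANGE (recorded_at);

-- One child partition covering January 2018.
CREATE TABLE measurements_2018_01 PARTITION OF measurements
    FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');
```

It is the parent table (measurements here) that the sink connector writes to, and that table is what the connector's metadata lookup fails to find.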