Cannot insert a message longer than 32K to CLOB (Oracle) #690
Comments
I think the ORA-01461 error is occurring because of how RECORD_ID is formatted. I don't think this has anything to do with the SOURCECODE string.
Hi wicknicks. I don't think so. See a similar issue in the Spring Framework: spring-projects/spring-framework#16854. They started using setClob for big strings.
Hello, I have the same issue in "upsert" mode, but in "insert" mode it works fine. Regards
Any update or workaround on this? I would like something like "insert and ignore on constraint", so I have to use merge. Insert throws a lot of
No solution yet? I have run into the same issue. Note that since the error is thrown by the sink connector, the errors.tolerance=all property will not help.
Is there any solution for this problem? We cannot use insert.mode=insert, as it causes problems during updates.
Because I did not find any solution for this, I went with a dirty hack. In the connector configuration I added: … This skips the messages from the topic that have a field larger than 32000 bytes.
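For reference, the size check behind such a skip could be sketched like the helper below. This is a hypothetical illustration of the hack's logic, not part of the connector; the class and method names are assumptions:

```java
import java.nio.charset.StandardCharsets;

/**
 * Hypothetical size guard mirroring the "skip fields larger than
 * 32000 bytes" workaround described in the thread.
 */
public class SizeGuard {
    // Threshold from the comment above (bytes, not characters)
    static final int MAX_FIELD_BYTES = 32000;

    /** Returns true when the field's UTF-8 encoding exceeds the limit. */
    static boolean shouldSkip(String fieldValue) {
        return fieldValue != null
            && fieldValue.getBytes(StandardCharsets.UTF_8).length > MAX_FIELD_BYTES;
    }
}
```

Note that the byte length of the UTF-8 encoding is what matters for the Oracle limit, which can exceed the Java string's character count for non-ASCII data.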
If the messages are JSON, that will make them unparseable.
I have the same issue here: cannot insert any JSON data into a CLOB database column because of this exception.
Do we have any news on the issue?
It seems to be fixed and merged in #925, and released with version
Any solution for this issue?
@aashokggupta, which version of the JDBC connector are you using? Upgrade to the latest version of the connector; it should work there.
Where can I check the versions of the JDBC connector? Can you please share? I am using version 5.5.1 of the JDBC connector.
I solved this problem; I hope it helps as a reference. If the target table's column is defined as NCLOB instead of CLOB, the error does not occur. I don't yet understand why.
Hi All |
Hello,
While studying the functionality of the kafka-connect-jdbc sink with Oracle, I found that a message longer than 32767 bytes cannot be inserted into a CLOB column. I think the problem is in the statement binding of CLOB columns to String in GenericDatabaseDialect.java. If the field length is less than 32767 bytes, inserts work fine; otherwise, setClob must be used instead of setString.
Maybe I'm wrong; please tell me how to work around this problem.
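To illustrate the binding change suggested above, a dialect could switch to a character-stream (LOB) binding once the value exceeds the limit. This is a hypothetical sketch, not the actual GenericDatabaseDialect code; the class, method names, and threshold constant are assumptions based on this report:

```java
import java.io.StringReader;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/**
 * Hypothetical sketch of the binding logic suggested in this issue:
 * bind short strings with setString, long ones as a character LOB
 * to avoid ORA-01461 on Oracle CLOB columns.
 */
public class ClobBindingSketch {
    // Limit reported in this issue; Oracle rejects setString above it
    static final int ORACLE_SETSTRING_LIMIT = 32767;

    /** True when the value is too long for a plain setString bind. */
    static boolean requiresClobBinding(String value) {
        return value != null && value.length() > ORACLE_SETSTRING_LIMIT;
    }

    /** Bind the value using the appropriate strategy for its length. */
    static void bindText(PreparedStatement stmt, int index, String value)
            throws SQLException {
        if (requiresClobBinding(value)) {
            // Stream the value as a character LOB instead of a plain string
            stmt.setCharacterStream(index, new StringReader(value), value.length());
        } else {
            stmt.setString(index, value);
        }
    }
}
```

setCharacterStream is one way to perform a LOB-style bind with the standard JDBC API; setClob with a Reader (JDBC 4.0) would be an equivalent choice.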
Error:
ORA-01461: can bind a LONG value only for insert into a LONG column
Driver:
ojdbc7.jar
Sink:
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
table.name.format=TEST
topics.regex=^CONNECT-TEST$
auto.create=true
auto.evolve=true
name=TEST-SINK
insert.mode=upsert
pk.mode=record_value
pk.fields=RECORD_ID
connection.url=jdbc:oracle:thin:@x.x.x.x:1521:x
connection.user=x
connection.password=x
Table:
CREATE TABLE "TEST" (
"RECORD_ID" NUMBER(*,0) NOT NULL,
"SOURCECODE" CLOB NULL,
PRIMARY KEY("RECORD_ID"))
Schema:
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
{
"subject": "CONNECT-TEST-value",
"version": 1,
"id": 963,
"schema": "{"type":"record","name":"TEST","fields":[{"name":"RECORD_ID","type":{"type":"bytes","scale":0,"precision":64,"connect.version":1,"connect.parameters":{"scale":"0"},"connect.name":"org.apache.kafka.connect.data.Decimal","logicalType":"decimal"}},{"name":"SOURCECODE","type":["null","string"],"default":null}],"connect.name":"TEST"}"
}
Topic:
kafka-avro-console-consumer --bootstrap-server x.x.x.x:9092 --topic CONNECT-TEST --offset=0 --partition 0 --max-messages=1
{"RECORD_ID":"\u0001","SOURCECODE":{"string":"GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG…(more than 32K)…GGGG"}}