StarRocks/starrocks-connector-for-apache-flink

In the 3.1 separated storage and computation version, using the INSERT INTO statement in Flink SQL loses rows whose row kind is DELETE.

andystenhe opened this issue

I am using the upsert-kafka connector to read messages from Kafka and write them into a Primary Key table in StarRocks, but I noticed that rows with the DELETE row kind are never applied in StarRocks, so the corresponding deletes are lost.
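For reference, a minimal Flink SQL sketch of the pipeline I am describing (the table names, columns, topic, and addresses below are placeholders, not my real setup; the StarRocks table is a Primary Key table and the sink DDL declares the same primary key):

```sql
-- Hypothetical source: upsert-kafka derives +I/+U/-D row kinds from the Kafka record key.
CREATE TABLE kafka_src (
  id BIGINT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'demo_topic',
  'properties.bootstrap.servers' = 'kafka:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);

-- Hypothetical sink: a StarRocks Primary Key table; the primary key is declared here as well
-- so that DELETE row kinds can be mapped to delete operations on the table.
CREATE TABLE sr_sink (
  id BIGINT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'starrocks',
  'jdbc-url' = 'jdbc:mysql://fe:9030',
  'load-url' = 'fe:8030',
  'database-name' = 'demo_db',
  'table-name' = 'demo_table',
  'username' = 'root',
  'password' = ''
);

-- The rows with DELETE row kind produced by this statement are the ones that get lost.
INSERT INTO sr_sink SELECT id, name FROM kafka_src;
```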

My StarRocks version is 3.1.2 with the separated storage and computation architecture, the Flink connector version is 2.8.1, and the Flink version is 1.14.5.