confluentinc/kafka-connect-storage-cloud

S3 Kafka Sink connector can't change default path and filename

FantFRS opened this issue

Hi,

I searched a lot on Google before posting this issue.
I'm using the S3 sink connector with the following configuration:

{
  "name": "s3-sink-connector",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "1",
    "topics": "topic-test,topic-dev",
    "s3.bucket.name": "xxxxxxxxxxxxxxxx",
    "store.url": "xxxxxxxxxxxxxxxxx",
    "s3.part.size": "5242880",
    "flush.size": "3",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
    "aws.access.key.id": "xxxxxxxxxx",
    "aws.secret.access.key": "xxxxxxxxxxx/xxxxxxxxx",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false",
    "topics.dir": "",
    "topics.file.name.format": "dir1(xxxxxxxx)(${topic})(${kafkaConnectVersion})(${date1})(${date2})(xxxxxxxx).json",
    "transforms": "InsertTimestamp",
    "transforms.InsertTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.InsertTimestamp.timestamp.field": "date1",
    "transforms.InsertTimestamp.format": "YYYYMMddHHmmss"
  }
}

With this configuration, the path on the S3 target looks like this:

s3://mybucket/topic-test/partition=x/topic-test+1+0000000055.json
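
As far as I can tell from the docs, the object key is assembled as <topics.dir>/<topic>/partition=<N>/<topic>+<kafkaPartition>+<startOffset>.<extension> when the DefaultPartitioner is used, and properties the connector does not recognize (like my topics.file.name.format above) are silently ignored. The directory part seems to be controlled only by these settings, shown here with what I believe are the defaults:

{
  "topics.dir": "topics",
  "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
  "directory.delim": "/"
}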

I have tried everything to get the file written at the root of the bucket, like this:

s3://mybucket/topic-test+1+0000000055.json
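
From what I have read, removing the <topic>/partition=<N> directories entirely would need a custom partitioner, because the built-in partitioners always add at least those segments. My guess (untested) is that it would be wired up like this, where com.example.FlatPartitioner is a hypothetical class implementing io.confluent.connect.storage.partitioner.Partitioner that returns an empty directory:

{
  "topics.dir": "",
  "partitioner.class": "com.example.FlatPartitioner"
}

Even then I am not sure the connector avoids a leading slash in the key when the whole prefix is empty.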

The filename I am trying to change also stays the same.
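
The only filename-related setting I could find is file.delim, which as far as I understand only changes the + separators, for example:

{
  "file.delim": "-"
}

I would expect that to give topic-test-1-0000000055.json, but the <topic><delim><partition><delim><offset> base name itself looks hard-coded in the connector, so I don't see a supported way to rename the files.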

Has anyone done this before and can help me?

Thank you!