confluentinc/kafka-connect-blog

Failed to serialize Avro data when running connect-standalone /mnt/etc/connect-avro-standalone.properties /mnt/etc/mysql.properties /mnt/etc/hdfs.properties &


When I run connect-standalone with the following properties files, the JDBC source task is killed with "Failed to serialize Avro data":

vagrant@ubuntu:~$ connect-standalone /mnt/etc/connect-avro-standalone.properties \

/mnt/etc/mysql.properties /mnt/etc/hdfs.properties &
[1] 1814
vagrant@ubuntu:~$ mkdir: cannot create directory '/opt/confluent/bin/../logs': Permission denied
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/confluent-2.0.0/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent-2.0.0/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent-2.0.0/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent-2.0.0/share/java/kafka/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2017-03-18 05:22:01,664] INFO StandaloneConfig values:
rest.advertised.port = null
rest.advertised.host.name = null
bootstrap.servers = [localhost:9092]
value.converter = class io.confluent.connect.avro.AvroConverter
task.shutdown.graceful.timeout.ms = 5000
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
rest.host.name = null
cluster = connect
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
key.converter = class io.confluent.connect.avro.AvroConverter
offset.flush.timeout.ms = 5000
rest.port = 8083
offset.flush.interval.ms = 60000
(org.apache.kafka.connect.runtime.standalone.StandaloneConfig:165)
[2017-03-18 05:22:02,061] INFO Logging initialized @907ms (org.eclipse.jetty.util.log:186)
[2017-03-18 05:22:02,093] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:53)
[2017-03-18 05:22:02,094] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:89)
[2017-03-18 05:22:02,108] INFO ProducerConfig values:
request.timeout.ms = 2147483647
retry.backoff.ms = 100
buffer.memory = 33554432
ssl.truststore.password = null
batch.size = 16384
ssl.keymanager.algorithm = SunX509
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.key.password = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.provider = null
sasl.kerberos.service.name = null
max.in.flight.requests.per.connection = 1
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
client.id =
max.request.size = 1048576
acks = all
linger.ms = 0
sasl.kerberos.kinit.cmd = /usr/bin/kinit
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
metadata.fetch.timeout.ms = 60000
ssl.endpoint.identification.algorithm = null
ssl.keystore.location = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
block.on.buffer.full = false
metrics.sample.window.ms = 30000
metadata.max.age.ms = 300000
security.protocol = PLAINTEXT
ssl.protocol = TLS
sasl.kerberos.min.time.before.relogin = 60000
timeout.ms = 30000
connections.max.idle.ms = 540000
ssl.trustmanager.algorithm = PKIX
metric.reporters = []
compression.type = none
ssl.truststore.type = JKS
max.block.ms = 9223372036854775807
retries = 2147483647
send.buffer.bytes = 131072
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
reconnect.backoff.ms = 50
metrics.num.samples = 2
ssl.keystore.type = JKS
(org.apache.kafka.clients.producer.ProducerConfig:165)
[2017-03-18 05:22:02,157] INFO Kafka version : 0.9.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2017-03-18 05:22:02,157] INFO Kafka commitId : d1555e3a21980fa9 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-03-18 05:22:02,158] INFO Starting FileOffsetBackingStore with file /mnt/connect.offsets (org.apache.kafka.connect.storage.FileOffsetBackingStore:53)
[2017-03-18 05:22:02,159] INFO Worker started (org.apache.kafka.connect.runtime.Worker:111)
[2017-03-18 05:22:02,159] INFO Herder starting (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:57)
[2017-03-18 05:22:02,159] INFO Herder started (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:58)
[2017-03-18 05:22:02,159] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:91)
[2017-03-18 05:22:02,345] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server:327)
Mar 18, 2017 5:22:03 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2017-03-18 05:22:03,224] INFO Started o.e.j.s.ServletContextHandler@7040d616{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2017-03-18 05:22:03,232] INFO Started ServerConnector@13793140{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:266)
[2017-03-18 05:22:03,232] INFO Started @2079ms (org.eclipse.jetty.server.Server:379)
[2017-03-18 05:22:03,234] INFO REST server listening at http://127.0.1.1:8083/, advertising URL http://127.0.1.1:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2017-03-18 05:22:03,234] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:60)
[2017-03-18 05:22:03,249] INFO ConnectorConfig values:
topics = []
name = test-mysql-jdbc
tasks.max = 1
connector.class = class io.confluent.connect.jdbc.JdbcSourceConnector
(org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2017-03-18 05:22:03,251] INFO Creating connector test-mysql-jdbc of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:170)
[2017-03-18 05:22:03,253] INFO Instantiated connector test-mysql-jdbc with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:183)
[2017-03-18 05:22:03,257] INFO JdbcSourceConnectorConfig values:
table.poll.interval.ms = 60000
incrementing.column.name = id
connection.url = jdbc:mysql://localhost:3306/demo?user=root&password=mypassword
timestamp.column.name = modified
query =
poll.interval.ms = 5000
topic.prefix = test_jdbc_
batch.max.rows = 100
table.whitelist = []
mode = timestamp+incrementing
table.blacklist = []
(io.confluent.connect.jdbc.JdbcSourceConnectorConfig:135)
[2017-03-18 05:22:03,491] INFO Finished creating connector test-mysql-jdbc (org.apache.kafka.connect.runtime.Worker:193)
[2017-03-18 05:22:03,498] INFO TaskConfig values:
task.class = class io.confluent.connect.jdbc.JdbcSourceTask
(org.apache.kafka.connect.runtime.TaskConfig:165)
[2017-03-18 05:22:03,499] INFO Creating task test-mysql-jdbc-0 (org.apache.kafka.connect.runtime.Worker:256)
[2017-03-18 05:22:03,501] INFO Instantiated task test-mysql-jdbc-0 with version 2.0.0 of type io.confluent.connect.jdbc.JdbcSourceTask (org.apache.kafka.connect.runtime.Worker:267)
[2017-03-18 05:22:03,506] INFO JdbcSourceTaskConfig values:
tables = [users]
table.poll.interval.ms = 60000
incrementing.column.name = id
connection.url = jdbc:mysql://localhost:3306/demo?user=root&password=mypassword
timestamp.column.name = modified
query =
poll.interval.ms = 5000
topic.prefix = test_jdbc_
batch.max.rows = 100
table.whitelist = []
mode = timestamp+incrementing
table.blacklist = []
(io.confluent.connect.jdbc.JdbcSourceTaskConfig:135)
[2017-03-18 05:22:03,515] INFO Created connector test-mysql-jdbc (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2017-03-18 05:22:03,518] INFO ConnectorConfig values:
topics = [test_jdbc_users]
name = hdfs-sink
tasks.max = 1
connector.class = class io.confluent.connect.hdfs.HdfsSinkConnector
(org.apache.kafka.connect.runtime.ConnectorConfig:165)
[2017-03-18 05:22:03,520] INFO Creating connector hdfs-sink of type io.confluent.connect.hdfs.HdfsSinkConnector (org.apache.kafka.connect.runtime.Worker:170)
[2017-03-18 05:22:03,522] INFO Instantiated connector hdfs-sink with version 2.0.0 of type io.confluent.connect.hdfs.HdfsSinkConnector (org.apache.kafka.connect.runtime.Worker:183)
[2017-03-18 05:22:03,532] INFO HdfsSinkConnectorConfig values:
kerberos.ticket.renew.period.ms = 3600000
hadoop.home =
rotate.interval.ms = -1
partition.duration.ms = -1
hdfs.namenode.principal =
format.class = io.confluent.connect.hdfs.avro.AvroFormat
schema.cache.size = 1000
locale =
hive.metastore.uris = thrift://localhost:9083
storage.class = io.confluent.connect.hdfs.storage.HdfsStorage
hive.integration = true
retry.backoff.ms = 5000
hive.database = default
timezone =
partition.field.name = department
hadoop.conf.dir =
connect.hdfs.principal =
path.format =
filename.offset.zero.pad.width = 10
hive.conf.dir =
flush.size = 2
topics.dir = topics
schema.compatibility = BACKWARD
shutdown.timeout.ms = 3000
hdfs.url = hdfs://localhost:9000
connect.hdfs.keytab =
hdfs.authentication.kerberos = false
partitioner.class = io.confluent.connect.hdfs.partitioner.FieldPartitioner
hive.home =
logs.dir = logs
(io.confluent.connect.hdfs.HdfsSinkConnectorConfig:135)
[2017-03-18 05:22:03,539] INFO Finished creating connector hdfs-sink (org.apache.kafka.connect.runtime.Worker:193)
[2017-03-18 05:22:03,540] INFO TaskConfig values:
task.class = class io.confluent.connect.hdfs.HdfsSinkTask
(org.apache.kafka.connect.runtime.TaskConfig:165)
[2017-03-18 05:22:03,541] INFO Creating task hdfs-sink-0 (org.apache.kafka.connect.runtime.Worker:256)
[2017-03-18 05:22:03,542] INFO Instantiated task hdfs-sink-0 with version 2.0.0 of type io.confluent.connect.hdfs.HdfsSinkTask (org.apache.kafka.connect.runtime.Worker:267)
[2017-03-18 05:22:03,554] INFO ConsumerConfig values:
request.timeout.ms = 40000
check.crcs = true
retry.backoff.ms = 100
ssl.truststore.password = null
ssl.keymanager.algorithm = SunX509
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.key.password = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.provider = null
sasl.kerberos.service.name = null
session.timeout.ms = 30000
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
client.id =
fetch.max.wait.ms = 500
fetch.min.bytes = 1024
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
sasl.kerberos.kinit.cmd = /usr/bin/kinit
auto.offset.reset = earliest
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
ssl.endpoint.identification.algorithm = null
max.partition.fetch.bytes = 1048576
ssl.keystore.location = null
ssl.truststore.location = null
ssl.keystore.password = null
metrics.sample.window.ms = 30000
metadata.max.age.ms = 300000
security.protocol = PLAINTEXT
auto.commit.interval.ms = 5000
ssl.protocol = TLS
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.trustmanager.algorithm = PKIX
group.id = connect-hdfs-sink
enable.auto.commit = false
metric.reporters = []
ssl.truststore.type = JKS
send.buffer.bytes = 131072
reconnect.backoff.ms = 50
metrics.num.samples = 2
ssl.keystore.type = JKS
heartbeat.interval.ms = 3000
(org.apache.kafka.clients.consumer.ConsumerConfig:165)
[2017-03-18 05:22:03,570] INFO Source task Thread[WorkerSourceTask-test-mysql-jdbc-0,5,main] finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:342)
[2017-03-18 05:22:03,586] INFO Kafka version : 0.9.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:82)
[2017-03-18 05:22:03,586] INFO Kafka commitId : d1555e3a21980fa9 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-03-18 05:22:03,593] INFO Created connector hdfs-sink (org.apache.kafka.connect.cli.ConnectStandalone:82)
[2017-03-18 05:22:03,671] ERROR Task test-mysql-jdbc-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSourceTask:362)
[2017-03-18 05:22:03,671] ERROR Task is being killed and will not recover until manually restarted: (org.apache.kafka.connect.runtime.WorkerSourceTask:363)
org.apache.kafka.connect.errors.DataException: Failed to serialize Avro data:
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:92)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:142)
at org.apache.kafka.connect.runtime.WorkerSourceTask.access$600(WorkerSourceTask.java:50)
at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:356)
at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
Caused by: org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:997)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:933)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:851)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1092)
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:139)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:174)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:225)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:217)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:212)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:57)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:89)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:50)
at io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:120)
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:90)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:142)
at org.apache.kafka.connect.runtime.WorkerSourceTask.access$600(WorkerSourceTask.java:50)
at org.apache.kafka.connect.runtime.WorkerSourceTask$WorkerSourceTaskThread.execute(WorkerSourceTask.java:356)
at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
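
The root cause is the last "Caused by": before producing, the AvroConverter tries to register the record's schema with the Schema Registry over HTTP, and the connection is refused because nothing is listening on the configured schema.registry.url (http://localhost:8081 by default in connect-avro-standalone.properties). A quick check and fix, assuming the default port and that the registry properties live under /mnt/etc like the other files in this setup:

curl http://localhost:8081/subjects   # "Connection refused" here confirms the registry is down
schema-registry-start /mnt/etc/schema-registry.properties &

Once the registry is reachable, restart the Connect worker so the killed task is recreated.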

How can I change the /opt/confluent/bin/../logs/kafka-request.log path?
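
That path comes from kafka-run-class.sh, which falls back to $base_dir/logs (i.e. bin/../logs) whenever the LOG_DIR environment variable is unset, and passes it to log4j as -Dkafka.logs.dir; the GC log lands there too. A sketch of the usual fix, assuming /mnt/logs is writable by the vagrant user and that the broker properties live at /mnt/etc/kafka.properties in this setup:

export LOG_DIR=/mnt/logs
kafka-server-start /mnt/etc/kafka.properties &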

vagrant@ubuntu:/mnt/logs$ more kafka.log
mkdir: cannot create directory '/opt/confluent/bin/../logs': Permission denied
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /opt/confluent/bin/../logs/kafkaServer-gc.log due to No such file or directory

log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/confluent/bin/../logs/kafka-request.log (No such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:277)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:288)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:54)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:41)
at kafka.Kafka.getPropsFromArgs(Kafka.scala)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:34)
log4j:ERROR Either File or DatePattern options are not set for appender [requestAppender].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/confluent/bin/../logs/kafka-authorizer.log (No such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:277)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:288)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:54)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:41)
at kafka.Kafka.getPropsFromArgs(Kafka.scala)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:34)
log4j:ERROR Either File or DatePattern options are not set for appender [authorizerAppender]

[2017-03-18 05:20:09,474] INFO Result of znode creation is: NODEEXISTS (kafka.utils.ZKCheckedEphemeral)
[2017-03-18 05:20:09,478] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.RuntimeException: A broker is already registered on the path /brokers/ids/0. This probably indicates that you either have configured a brokerid that is already in use, or else you have shutdown this broker and restarted it faster than the zookeeper timeout so it appears to be re-registering.
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:295)
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:281)
at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:64)
at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
at kafka.server.KafkaServer.startup(KafkaServer.scala:231)
at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:99)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:45)
[2017-03-18 05:20:09,480] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
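
This startup failure is separate from the log-path problem: the ephemeral znode /brokers/ids/0 still exists in ZooKeeper, either because another broker process with broker.id=0 is still running or because the previous broker died and its ZooKeeper session has not expired yet. A way to check, assuming ZooKeeper on the default port 2181:

zookeeper-shell localhost:2181 ls /brokers/ids   # lists the registered broker ids
jps -l                                           # look for a stale SupportedKafka/Kafka process

If a stale broker is running, stop it; otherwise waiting out the ZooKeeper session timeout before restarting is usually enough.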

vagrant@ubuntu:/opt/confluent$ sudo mkdir logs
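
sudo mkdir creates the directory owned by root, so the broker running as the vagrant user still cannot write into it; handing it over (assuming everything here runs as the vagrant user) completes the fix:

sudo chown vagrant:vagrant /opt/confluent/logs

Alternatively, skip creating anything under /opt and point LOG_DIR at /mnt/logs as sketched above.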