charithe/kafka-junit

Failed to construct Kafka Producer


In a unit test that I am writing, I use the code below to verify that my Kafka consumer receives records from any relevant producer. My consumer code is wrapped in the KafkaStreamObtainer class.

package com.termmerge.nlpcore.obtainer;

import junit.framework.TestCase;
import org.junit.Test;
import org.junit.ClassRule;
import com.github.charithe.kafka.KafkaJunitRule;
import com.github.charithe.kafka.EphemeralKafkaBroker;
import net.jodah.concurrentunit.Waiter;

import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;


public class KafkaStreamObtainerTest extends TestCase
{

  @ClassRule
  public KafkaJunitRule kafkaRule =
          new KafkaJunitRule(EphemeralKafkaBroker.create());

  @Test
  public void testOneMessage() throws Throwable
  {
    Waiter waiter = new Waiter();

    KafkaProducer testProducer = kafkaRule.helper().createStringProducer();
    testProducer.send(
            new ProducerRecord<>("testTopic", "testKey", "testValue")
    );
    testProducer.close();

    Map consumerSettings = new Properties();
    consumerSettings.put(
            "connection_string",
            "localhost:" + Integer.toString(kafkaRule.helper().kafkaPort())
    );
    consumerSettings.put("group_id", "test");
    KafkaStreamObtainer kafkaStream =
            new KafkaStreamObtainer(consumerSettings);
    kafkaStream.addListener((record) -> {
      waiter.assertEquals(record.get("key"), "testKey");
      waiter.assertEquals(record.get("value"), "testValue");
      waiter.resume();
    });
    kafkaStream.listenToStream("testTopic");

    waiter.await(50000);
  }

}

I keep getting the following error:

org.apache.kafka.common.KafkaException: Failed to construct kafka producer

I'm completely stumped. What could possibly cause this?

In another instance, while debugging around this problem, I got:
java.lang.IllegalStateException: KafkaBroker is not running

ClassRules must be static. That's probably why the broker isn't getting started.
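Something like this is all it takes (a minimal sketch of just the declaration):

  @ClassRule
  public static final KafkaJunitRule kafkaRule =
          new KafkaJunitRule(EphemeralKafkaBroker.create());

JUnit expects @ClassRule fields to be public and static; on an instance field the rule doesn't run, so the broker is never started.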

Cool. I revised my code as below and it worked:

package com.termmerge.nlpcore.obtainer;

import junit.framework.TestCase;
import org.junit.Test;
import com.github.charithe.kafka.KafkaJunitRule;
import com.github.charithe.kafka.EphemeralKafkaBroker;
import net.jodah.concurrentunit.Waiter;

import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;


public class KafkaStreamObtainerTest extends TestCase
{

  @Test
  public void testOneMessage() throws Throwable
  {
    Waiter waiter = new Waiter();
    EphemeralKafkaBroker kafkaBroker =
            EphemeralKafkaBroker.create(8000, 8001);
    kafkaBroker.start().get();

    KafkaJunitRule kafkaRule = new KafkaJunitRule(kafkaBroker);
    KafkaProducer testProducer = kafkaRule.helper().createStringProducer();
    testProducer.send(
            new ProducerRecord<>("testTopic", "testKey", "testValue")
    );
    testProducer.close();

    Map consumerSettings = new Properties();
    consumerSettings.put(
            "connection_string",
            "localhost:" + Integer.toString(kafkaRule.helper().kafkaPort())
    );
    consumerSettings.put("group_id", "test");
    KafkaStreamObtainer kafkaStream =
            new KafkaStreamObtainer(consumerSettings);
    kafkaStream.addListener((record) -> {
      waiter.assertEquals(record.get("key"), "testKey");
      waiter.assertEquals(record.get("value"), "testValue");
      waiter.resume();
    });
    kafkaStream.listenToStream("testTopic");

    waiter.await(50000);
  }

}

Now, however, my Kafka consumer hangs at the poll() stage and never returns. Can I use the regular Kafka client API to interface with this broker?

The rule starts a standard Kafka broker that can be accessed using any Kafka client.
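For example, a stock KafkaConsumer pointed at the rule's port should work. A rough sketch (the topic name and the kafkaRule field are from your test; I've added "auto.offset.reset" so the consumer also sees records produced before it subscribed):

  import java.util.Collections;
  import java.util.Properties;

  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;

  Properties props = new Properties();
  props.put("bootstrap.servers", "localhost:" + kafkaRule.helper().kafkaPort());
  props.put("group.id", "test");
  props.put("key.deserializer",
          "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer",
          "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("auto.offset.reset", "earliest");

  try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("testTopic"));
    // poll(long) is the 0.10.x API; it returns whatever arrived within the timeout
    ConsumerRecords<String, String> records = consumer.poll(1000L);
    for (ConsumerRecord<String, String> record : records) {
      System.out.println(record.key() + " = " + record.value());
    }
  }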

Your revised test is not quite correct, for a couple of reasons. First, you are calling get() on a future at the start of the test; it will just block there and never execute the rest of the code. Second, you're mixing JUnit3 style with JUnit4 style. I recommend switching completely to JUnit4, as that's the version that introduced rules and the one this project has been tested with.
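A rough JUnit4-style skeleton (note: no "extends TestCase", and no manual start() call, since the rule manages the broker's lifecycle):

  import org.junit.ClassRule;
  import org.junit.Test;
  import com.github.charithe.kafka.EphemeralKafkaBroker;
  import com.github.charithe.kafka.KafkaJunitRule;

  public class KafkaStreamObtainerTest
  {
    @ClassRule
    public static final KafkaJunitRule kafkaRule =
            new KafkaJunitRule(EphemeralKafkaBroker.create());

    @Test
    public void testOneMessage() throws Throwable
    {
      // the broker is already running when this method executes
    }
  }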

@charithe I know; the blocking call to get() was for debugging purposes. I specifically wanted it to block so that I would know for certain that the broker had started before I mounted any producers or consumers on it. I'm using IntelliJ, and in the debugger I can see that I get past the blocking call without a problem. My problem is that it takes very long for a mock producer to send data and for my consumer to poll for that data (even though my poll interval is only 100ms).

The error I get is:

[kafka-producer-network-thread | kafka-junit] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 2 : {testTopic=UNKNOWN}
[kafka-request-handler-3] ERROR kafka.server.KafkaApis - [KafkaApi-1] Error when handling request {topics=[testTopic]}

I just confirmed that, using a normal Apache Kafka Docker container, I am able to pass my unit tests, but with the EphemeralKafkaBroker my tests fail. The screenshot below shows my unit tests passing against Apache Kafka directly.

[Screenshot: unit tests passing, 2016-12-12]

For reference, I am using the Apache Kafka Docker container at https://hub.docker.com/r/wurstmeister/kafka/

I am not sure how you could move past the Future.get(), because that would mean the broker had stopped.
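To illustrate, the future returned by start() tracks the broker's lifetime, roughly like this (a sketch; the CompletableFuture return type and the stop() call are my assumptions here):

  EphemeralKafkaBroker broker = EphemeralKafkaBroker.create();
  CompletableFuture<Void> lifecycle = broker.start();
  // ... run producers and consumers against the broker here ...
  broker.stop();     // shut the broker down
  lifecycle.get();   // only completes now, after shutdown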

Anyway, I noticed in your screenshot that you're using Kafka 0.10.1.0. The JUnit rule is still on 0.10.0.1. The problem could be due to the library version mismatch.

This is an example of how to use the class rule: https://github.com/charithe/kafka-junit/blob/master/src/test/java/com/github/charithe/kafka/KafkaJunitClassRuleTest.java. I hope that helps.

Got it running. The fix was to set my Kafka consumer's "auto.offset.reset" policy to "earliest".
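That is, in the consumer settings (a snippet; this assumes KafkaStreamObtainer passes the key straight through to the underlying KafkaConsumer):

  consumerSettings.put("group_id", "test");
  // New consumer groups default to "latest", so a consumer that subscribes after
  // the producer has already sent its records never sees them in poll().
  consumerSettings.put("auto.offset.reset", "earliest");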