nstream-kafka-starter

A baseline Nstream application that processes Kafka-hosted source data.

We highly recommend following our walkthrough as you explore this codebase.

Component Overview

There are three backend components to this repository:

  • An Nstream toolkit-empowered Swim server that consumes from Kafka topics and processes the resulting messages in Web Agents with minimal boilerplate (package nstream.starter in the Java code)
  • A standalone Kafka broker (broker/ directory)
  • A means to populate that broker with reasonably frequent messages (package nstream.starter.sim in the Java code)

In practice, you will develop applications against an existing broker (or its spec). Thus, the last two components mentioned are primarily for experimentation and come with limited warranty.
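
For orientation, the first component above centers on Web Agents. The following is a minimal, hypothetical sketch of such an agent written against the Swim Java API; the class name, lane names, and message shape are illustrative rather than taken from the actual nstream.starter sources, and the Kafka-consumption wiring supplied by the Nstream toolkit is omitted entirely.

    import swim.api.SwimLane;
    import swim.api.agent.AbstractAgent;
    import swim.api.lane.CommandLane;
    import swim.api.lane.ValueLane;
    import swim.structure.Value;

    // Hypothetical agent; the real starter's agents live in the nstream.starter package.
    public class StarterEntityAgent extends AbstractAgent {

      // Latest message observed for this entity, streamable to UIs and to other agents.
      @SwimLane("latest")
      ValueLane<Value> latest = this.<Value>valueLane()
          .didSet((newValue, oldValue) -> {
            // React to state changes here (derived stats, alerts, relays, ...).
          });

      // Entry point that Kafka-consuming logic could target with parsed records.
      @SwimLane("addMessage")
      CommandLane<Value> addMessage = this.<Value>commandLane()
          .onCommand(msg -> this.latest.set(msg));
    }

Because each Web Agent is addressable by URI and its lanes are streamable, downstream consumers (including the frontend described below) can observe state changes in real time.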

There is also a minimal, general-purpose frontend component under index.html that is available in a browser at localhost:9001 while (at minimum) the first backend component runs.

Prerequisites

Run Instructions

With Provided Broker

  1. Build the broker (working directory: broker/)

    docker build . -t nstream/kafka-starter-broker:0.1.0
    
  2. Run the broker (working directory: broker/)

    docker-compose up
    
  3. Run the Nstream server (working directory: this one)

    *nix Environment:

    ./gradlew run 
    

    Windows Environment:

    .\gradlew.bat run 
    
  4. Run the broker populator (working directory: this one)

    *nix Environment:

    ./gradlew runSim
    

    Windows Environment:

    .\gradlew.bat runSim
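
For a sense of what the populator does under the hood, it boils down to a periodic Kafka producer. Below is a rough sketch using the plain kafka-clients API; the topic name, payload shape, cadence, and the localhost:9092 fallback are illustrative assumptions, not necessarily what nstream.starter.sim actually sends.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    // Hypothetical stand-in for the populator; not the actual nstream.starter.sim code.
    public class ToyPopulator {

      public static void main(String[] args) throws InterruptedException {
        final Properties props = new Properties();
        // Mirrors the env-var override used later in this README; 9092 is an assumed default.
        props.put("bootstrap.servers",
            System.getenv().getOrDefault("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092"));
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
          while (true) {
            // "starter-topic" and the payload are placeholders, not the sim's actual schema.
            producer.send(new ProducerRecord<>("starter-topic", "entity-1",
                "{\"timestamp\":" + System.currentTimeMillis() + "}"));
            Thread.sleep(1000L); // "reasonably frequent"
          }
        }
      }
    }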
    

With Independent Broker

  1. Ensure that your independent Kafka broker and topic are running and network-reachable (a quick way to sanity-check this is sketched at the end of this section).

  2. Run the Nstream server (working directory: this one)

    • server.recon is configured to accept environment variable overrides. You may export these variables in your shell, or set them just for the command (fairly normal in *nix, but hackier in Windows)

    *nix Environment:

    KAFKA_BOOTSTRAP_SERVERS=yourserverhere:port \
      KAFKA_GROUP_ID=your-group-id \
      ./gradlew run
    

    Windows Environment:

    cmd /V /C "set KAFKA_BOOTSTRAP_SERVERS=yourserverhere:port&& set KAFKA_GROUP_ID=your-group-id&& .\gradlew.bat run"
    

    The @config syntax may be applied to additional fields; modify server.recon as you see fit.

  3. (Optional) Run the broker populator (working directory: this one)

    *nix Environment:

    KAFKA_BOOTSTRAP_SERVERS=yourserverhere:port \
      ./gradlew runSim
    

    Windows Environment:

    cmd /V /C "set KAFKA_BOOTSTRAP_SERVERS=yourserverhere:port&& .\gradlew.bat runSim"
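
As referenced in step 1, one lightweight way to verify that your broker is reachable before starting the server is a kafka-clients AdminClient probe. This is an optional illustration rather than part of the starter; the placeholder address mirrors the commands above, and the kafka-clients library must be on the classpath.

    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    // Optional reachability probe; not part of the starter itself.
    public class BrokerCheck {

      public static void main(String[] args) throws Exception {
        final Properties props = new Properties();
        // Use the same address you intend to pass as KAFKA_BOOTSTRAP_SERVERS.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "yourserverhere:port");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");

        try (AdminClient admin = AdminClient.create(props)) {
          // Fails with a timeout if the broker is not reachable.
          final Set<String> topics = admin.listTopics().names().get();
          System.out.println("Broker reachable; topics: " + topics);
        }
      }
    }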
    

Shutdown Instructions

  • Java processes can be terminated by a plain SIGINT (ctrl + C in most shells)
  • To tear down the broker (and be okay with some persisted data), run docker-compose down from the broker/ directory.
    • To additionally wipe all persisted data, run docker-compose down --volumes instead (note that restarting the broker will take longer if you do this)