- Description
- Semantic versioning
- How to authenticate using Istio and JWT
- How to sync mongodb & elasticsearch (via mongodb change streams)
- Kafka connector to elasticsearch
## Description

The user service exposes endpoints for user-related data.
This microservice is connected to other microservices via the service registry pattern. A key advantage of a microservices architecture is the ability to create new instances of each service to meet the current load, respond to failures and roll out upgrades. One side effect of this dynamic server-side environment is that the IP addresses and ports of service instances change constantly. To route an API request to a particular service, the client or API gateway needs to find out the address of the service instance it should use. Whereas in the past it might have been feasible to record these locations in a config file, in a dynamic environment where instances are added and removed on the fly an automated solution is needed. Service discovery provides a mechanism for keeping track of the available instances and distributing requests across them.
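The registry idea above can be sketched as a minimal in-memory class: instances register and deregister themselves, and callers resolve a fresh address just before each request (round-robin here). The service name and addresses are illustrative, not the project's real configuration.

```javascript
// Minimal service-registry sketch: a map from service name to live instance
// addresses, with round-robin resolution. Real deployments use a dedicated
// registry (Consul, Eureka, or Kubernetes' own service DNS) instead.
class ServiceRegistry {
  constructor() {
    this.services = new Map(); // service name -> array of instance addresses
  }

  register(name, address) {
    const instances = this.services.get(name) || [];
    instances.push(address);
    this.services.set(name, instances);
  }

  deregister(name, address) {
    const instances = (this.services.get(name) || []).filter((a) => a !== address);
    this.services.set(name, instances);
  }

  // Round-robin: take the front instance and rotate it to the back.
  resolve(name) {
    const instances = this.services.get(name);
    if (!instances || instances.length === 0) {
      throw new Error(`no instances registered for ${name}`);
    }
    const instance = instances.shift();
    instances.push(instance);
    return instance;
  }
}

// Usage: instances come and go at runtime; callers always resolve per request.
const registry = new ServiceRegistry();
registry.register('user-service', '10.0.0.1:8080'); // hypothetical addresses
registry.register('user-service', '10.0.0.2:8080');
```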
The circuit breaker pattern is also implemented here. Using the circuit breaker pattern allows a microservice to continue operating when a related service fails, preventing the failure from cascading and giving the failing service time to recover. You wrap a protected function call in a circuit breaker object, which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls return an error immediately, without the protected call being made at all.
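A minimal sketch of that mechanism, not the service's actual implementation: a wrapper counts consecutive failures and trips open at a threshold. Real Node implementations (e.g. opossum) wrap async calls and add a half-open state with a reset timeout, both omitted here for brevity.

```javascript
// Circuit breaker sketch: wrap a protected call, count consecutive failures,
// and fail fast once the circuit is OPEN (the call is not made at all).
class CircuitBreaker {
  constructor(protectedFn, failureThreshold = 3) {
    this.protectedFn = protectedFn;
    this.failureThreshold = failureThreshold;
    this.failureCount = 0;
    this.state = 'CLOSED';
  }

  call(...args) {
    if (this.state === 'OPEN') {
      // Tripped: reject immediately, giving the failing service time to recover.
      throw new Error('circuit is open');
    }
    try {
      const result = this.protectedFn(...args);
      this.failureCount = 0; // any success resets the failure counter
      return result;
    } catch (err) {
      this.failureCount += 1;
      if (this.failureCount >= this.failureThreshold) {
        this.state = 'OPEN';
      }
      throw err; // surface the underlying failure to the caller
    }
  }
}
```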
## Semantic versioning

The GitHub action (docker.yml) creates a tag via https://github.com/mathieudutour/github-tag-action, and a docker image with this generated tag is then pushed to Docker Hub.

When deploying the app to Kubernetes, set the image version in the `TAG` variable in `./src/infra/minikube.sh`.
## How to authenticate using Istio and JWT

- Set your data in the `payload` variable in `./src/infra/istio/rsa-jwt-generator/generate-keys-and-tokens-using-python.py`
- To generate the private & public keys and a token, run `python ./src/infra/istio/rsa-jwt-generator/generate-keys-and-tokens-using-python.py`
- Set the issuer and the generated public key in `./src/infra/istio/request-auth.yaml`
- Apply the changes to Kubernetes: `kubectl apply -f ./src/infra/istio/request-auth.yaml`
- Add an `Authorization` header to your request with the value `Bearer <generated_token>`
Alternatively, if you already have a key pair:

- Set your private & public keys in `./src/infra/istio/rsa-jwt-generator/generate-token.py`
- To generate a token, run `python ./src/infra/istio/rsa-jwt-generator/generate-token.py`
- Set the issuer and the generated public key in `./src/infra/istio/request-auth.yaml`
- Apply the changes to Kubernetes: `kubectl apply -f ./src/infra/istio/request-auth.yaml`
- Add an `Authorization` header to your request with the value `Bearer <generated_token>`
## How to sync mongodb & elasticsearch (via mongodb change streams)

- Start the mongodb replica set using `./docker-compose.yaml` (the replica set uses an image built from `./src/infra/mongo-rs/Dockerfile`)
- Start elasticsearch using `./docker-compose.yaml`
- Add the line `127.0.0.1 mongodb` to the hosts file on your machine
- Run `npm install` in `./src/infra/mongodb-elasticsearch`
- Run `node index.js` in `./src/infra/mongodb-elasticsearch`

Alternatively, start the mongodb replica set, elasticsearch and the synchronizer all together using `./docker-compose.yaml`.

CAUTION: when running without docker, set the mongodb and elasticsearch domains to localhost; when running with docker, use their service names instead.
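The synchronizer's `index.js` isn't shown here, but at its core a change-stream consumer has to map each MongoDB change event to an Elasticsearch operation. A sketch of that mapping, under the assumption that the stream is opened with `{ fullDocument: 'updateLookup' }` (so update events carry the full post-update document); the `users` index name is illustrative:

```javascript
// Map a MongoDB change stream event to the Elasticsearch HTTP request that
// keeps the index in sync. Event shapes (operationType, documentKey,
// fullDocument) follow the change stream format; the index name is assumed.
function changeEventToEsRequest(event, index = 'users') {
  const id = String(event.documentKey._id);
  switch (event.operationType) {
    case 'insert':
    case 'replace':
    case 'update':
      // With { fullDocument: 'updateLookup' } every write event carries the
      // full document, so each one becomes a simple index/upsert.
      return {
        method: 'PUT',
        path: `/${index}/_doc/${id}`,
        body: event.fullDocument,
      };
    case 'delete':
      return { method: 'DELETE', path: `/${index}/_doc/${id}`, body: null };
    default:
      return null; // ignore drop/rename/invalidate and other events
  }
}
```

The real synchronizer would open the change stream with the MongoDB driver and send each mapped request to elasticsearch; this function only isolates the translation step.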
## Kafka connector to elasticsearch

- Start the kafka-connect container using `./docker-compose.yaml`
- Create a kafka connector by sending a POST request to `http://localhost:8083/connectors` with a body like the example in `src/infra/kafka-connect-elastic/kafka-elastic-connector.json` (the kafka topic and the elasticsearch index will share the name set in the `topics` field). The connector also transforms some field names (note: the kafka connector is unable to transform nested fields).
- Create the mapping for elasticsearch by sending a PUT request to `http://localhost:9200/syncmongoelastic` with a body like the example in `src/infra/kafka-connect-elastic/elastic-mapping.json`
- Send messages to kafka
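The example file `kafka-elastic-connector.json` isn't reproduced here; a typical Elasticsearch sink connector body of this shape, including a top-level `ReplaceField` transform for the field renames mentioned above, looks roughly like the following. The connector name, connection URL, and rename pairs are illustrative; `topics` matches the `syncmongoelastic` index used in the mapping step.

```json
{
  "name": "syncmongoelastic-connector",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "syncmongoelastic",
    "connection.url": "http://elasticsearch:9200",
    "key.ignore": "true",
    "schema.ignore": "true",
    "transforms": "rename",
    "transforms.rename.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.rename.renames": "old_field:new_field"
  }
}
```

`ReplaceField$Value` only operates on top-level record fields, which is why nested fields cannot be transformed this way.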
Useful requests:

- To list all connectors, make a GET request to `http://localhost:8083/connectors`
- To delete a connector, make a DELETE request to `http://localhost:8083/connectors/<connector-name>`
- To get a connector's status, make a GET request to `http://localhost:8083/connectors/<connector-name>/tasks/0/status`
- To get an index mapping, make a GET request to `http://localhost:9200/<index-name>/_mapping`
CAUTION: to change a connector's config after it has already been created, you have to delete the existing connector and create a new one with the changed config and, necessarily, a **new name** (a new connector name is required even if you have deleted the kafka-connect container).