This demo shows the life cycle of an API in Fuse Online and how it is synchronized in an ecosystem of OpenShift Service Mesh and secured with 3scale.
https://www.cncf.io/blog/2020/03/06/the-difference-between-api-gateways-and-service-mesh/
THE CURRENTLY SUPPORTED VERSION of the RHMI WORKSHOP is 2.3.0!! PLEASE INSTALL THE CORRECT VERSION!
When the lab environment is ready, go to the Solution Explorer, click Red Hat Fuse Online (version 7.6), and open the console.
Log in with the "evalsx" user using the copy login command:
oc login --token=<token web> --server=https://api.cluster-xxx-xxxxx.xxxxx.example.opentlc.com:6443
Prepare the environment in the OpenShift cluster
oc new-project evals3-fuse-db
oc new-app --template=postgresql-persistent --param=POSTGRESQL_PASSWORD=redhat --param=POSTGRESQL_USER=redhat --param=POSTGRESQL_DATABASE=sampledb
When the database pod is ready, create and populate the database:
oc get pods
oc rsh <postgresql pod>
psql -U redhat -d sampledb
CREATE TABLE users(
id serial PRIMARY KEY,
name VARCHAR (50),
phone VARCHAR (50),
age integer
);
INSERT INTO users(name, phone, age) VALUES ('Rodrigo Ramalho', '(11) 95474-8099', 30);
INSERT INTO users(name, phone, age) VALUES ('Rafael Ramalho', '(61) 99988-8029', 32);
INSERT INTO users(name, phone, age) VALUES ('Thiago Araki', '(11) 9999-9999', 33);
INSERT INTO users(name, phone, age) VALUES ('Gustavo Luszynsk', '(11) 9999-9999', 33);
INSERT INTO users(name, phone, age) VALUES ('Rafael Tuelho', '(11) 9999-9999', 33);
Verify the inserts:
sampledb=> select * from users;
Configure a database connector in Fuse Online
url: jdbc:postgresql://postgresql.evals3-fuse-db:5432/sampledb
user: redhat
password: redhat
In Fuse Online, click Create Integration.
Select Add a data type, enter an API name (in this case "users"), and enter a JSON example:
{
"id": 0,
"name": "Rodrigo Ramalho",
"phone": "11 95474-8099",
"age": 30
}
In the section "Choose to create a REST Resource with the Data Type", select "REST Resource", then click Save.
The import output looks like this:
Click Save; then, under Review Actions, you will see the following:
- Found operations with non unique operationIds: getusers
- Operation POST /users does not provide a response schema for code 201
- Operation PUT /users/{usersId} does not provide a response schema for code 202
- Operation DELETE /users/{usersId} does not provide a response schema for code 204
Click the Next button.
Go to the Integration panel.
Click "Create flow" on the GET operation.
Add a Log step and select "Message Context" and "Message Body".
![databaseselect](https://drive.google.com/uc?id=1QZVFvnUVfVyEtutSIYKPzqBQ4xKRPZj5)
select * from users;
Do the mapping between Source and Target.
Click Save, then continue with the next operation:
INSERT INTO USERS(NAME,PHONE,AGE) VALUES(:#NAME,:#PHONE,:#AGE);
Click Save and then Publish.
To install the stack of service mesh please refer to: https://docs.openshift.com/container-platform/4.3/service_mesh/service_mesh_install/installing-ossm.html
For this demonstration, we grant the cluster-admin privilege to the user we are working with.
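A quick sketch of that grant from the CLI (substitute your own user name for the placeholder):
oc adm policy add-cluster-role-to-user cluster-admin <user>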
- Install the Elasticsearch Operator
- Install the Jaeger Operator
- Install the Kiali Operator
- Install the Red Hat OpenShift Service Mesh Operator
- Create the istio-system namespace:
oc new-project istio-system
- Create a ServiceMeshControlPlane file named /rhosmapi/osm/service-mesh.yaml using the example found in "Customize the Red Hat OpenShift Service Mesh installation". You can customize the values as needed to match your use case.
- Run the following command to deploy the control plane:
oc create -n istio-system -f service-mesh.yaml
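As a reference, a minimal sketch of what service-mesh.yaml might contain, assuming the OSSM 1.x ServiceMeshControlPlane API and a basic-install control plane like the one shown in the status output below (trim or extend to match your use case):
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    kiali:
      enabled: true
    grafana:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        template: all-in-one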
- Execute the following command to see the status of the control plane installation:
oc get smcp -n istio-system
The installation has finished successfully when the READY column is true.
NAME READY
Basic-install True
- Run the following command to watch the progress of the Pods during the installation process:
oc get pods -n istio-system -w
Go to the OpenShift Service Mesh operator, click the Istio Service Mesh Member Roll section, and create a default ServiceMeshMemberRoll (a sketch follows below).
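A minimal sketch of that ServiceMeshMemberRoll, assuming the project names used earlier in this demo (evals3-fuse and evals3-fuse-db) are the namespaces to be added to the mesh:
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - evals3-fuse
    - evals3-fuse-db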
Verify the NetworkPolicy objects in the Fuse Online namespaces.
Now it is necessary to add the following rules:
- allow-from-all-namespaces
- allow-from-ingress-namespace
This step is only for demonstrations; without these rules the Syndesis web console becomes unreachable for users. A sketch of one of these policies follows below.
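A minimal sketch of the allow-from-ingress-namespace policy, assuming the evals3-fuse namespace and the default OpenShift ingress policy-group label (adjust names and selectors to your environment; the all-namespaces variant simply uses an empty namespaceSelector):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-ingress-namespace
  namespace: evals3-fuse
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress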
Verify the namespace where the user API is deployed: select the DeploymentConfig of "i-user" in the evals3-fuse namespace and add the sidecar injection annotation:
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: 'true'
Repeat the same for the PostgreSQL database DeploymentConfig.
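Alternatively, the same annotation can be added from the CLI; a sketch assuming the DeploymentConfig name i-user (repeat with the PostgreSQL DeploymentConfig name):
oc patch dc/i-user -n evals3-fuse --type=merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'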
Select the istio-system (control plane) namespace, go to Config Maps, click the Kiali config map, and select YAML.
Remove the DeploymentConfigs entry from the excluded_workloads option:
excluded_workloads:
DeploymentConfigs ##delete this line.
For this section it is important to read this first:
https://istio.io/latest/blog/2018/v1alpha3-routing/
Then create a Gateway and a VirtualService; the files are in the /osm folder.
oc apply -n evals3-fuse -f gateway-user.yaml
Note: this demo does not create a DestinationRule. A sketch of what gateway-user.yaml might contain follows below.
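For reference, a sketch of what gateway-user.yaml could contain, assuming a Service named i-user listening on port 8080 (hypothetical names; match them to the Service created by Fuse Online):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: user-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user-vs
spec:
  hosts:
    - "*"
  gateways:
    - user-gateway
  http:
    - match:
        - uri:
            prefix: /users
      route:
        - destination:
            host: i-user
            port:
              number: 8080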
- First, get the URL of the Istio ingress gateway:
export ISTIOGWUSER=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')
- Call the API in a loop with curl:
for i in {1..1000} ; do curl -s -w "%{http_code}\n" http://$ISTIOGWUSER/users ;sleep 2 ; done
- Then go to Kiali:
oc get routes kiali -n istio-system
- Open Kiali and select the Graph option.
- Enter the admin tenant account and select
- Configure a new Access Token
Copy the new token and store it somewhere safe
Make sure to copy your new personal access token now. You won't be able to see it again as it isn't stored for security reasons.
token: 363a23xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxbe
- Create API backend
- Methods and metrics
- Review the API backend
- Create and configure the product
- Configure Methods and Metrics
- Product Settings
By default, Red Hat Service Mesh disables evaluation of all policies.
In order for API Management policies to be applied to service mesh traffic, this default behavior needs to be reversed. The setting for this behavior is in the istio configmap in the istio-system namespace. This configmap is read by the Envoy proxy upon start-up of an Istio-enabled pod.
Your lab environment already comes provisioned with service mesh policies (to include API Management policies that will be introduced in this lab) enabled.
You can view the current state of this setting as follows:
$ oc describe cm istio -n istio-system | grep disablePolicyChecks
disablePolicyChecks: false
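If you ever need to flip this yourself, one hedged option (assuming the OSSM 1.x ServiceMeshControlPlane schema used earlier) is to set the flag in the control plane resource, for example in /rhosmapi/osm/service-mesh.yaml, and let the operator reconcile the configmap:
spec:
  istio:
    global:
      disablePolicyChecks: false   # false = policy evaluation enabled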
git clone https://github.com/3scale/3scale-istio-adapter
oc create -f deploy -n istio-system
- Review the 3scale Istio Adapter components in the istio-system namespace:
$ oc get all -l app=3scale-istio-adapter -n istio-system
The response should list the deployment, replicaset and pod.
As per the diagram above, the 3scale-istio-adapter Linux container includes the following two components:
- 3scale-istio-adapter: accepts gRPC invocations from the Istio ingress and routes them to the other sidecar in the pod, 3scale-istio-httpclient.
- 3scale-istio-httpclient: accepts invocations from 3scale-istio-adapter and invokes the system-provider and backend-listener endpoints of the remote Red Hat 3scale API Management manager.
It is possible that the pod corresponding to the 3scale-istio-adapter is in an ImagePullBackOff error state.
If so, edit the 3scale-istio-adapter Deployment so that the URL of the image explicitly includes quay.io, as follows:
image: quay.io/repository/3scale/3scale-istio-adapter:0.5.1
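The same fix can be sketched from the CLI, assuming the container inside the Deployment is also named 3scale-istio-adapter:
oc set image deployment/3scale-istio-adapter \
  3scale-istio-adapter=quay.io/repository/3scale/3scale-istio-adapter:0.5.1 \
  -n istio-system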
View a listing of the configs that support the 3scale Mixer Adapter:
Embedded in the following YAML files is the 3scale handler that is injected into the Istio Mixer. This handler is written in Golang by the 3scale engineering team as per the Mixer Out of Process Adapter Dev Guide. Most of these files consist of the adapter's configuration proto.
- Adapters:
$ oc get adapters.config.istio.io -n istio-system
threescale   7d
- Template:
$ oc get templates.config.istio.io -n istio-system
threescale-authorization   7d
Now that the 3scale Istio Adapter has been verified to exist, various configurations need to be added to the service mesh.
In particular, you will specify the URL of the system-provider endpoint of your 3scale tenant along with the corresponding access token. This is needed so that the Istio Mixer can pull API proxy details from the 3scale API Manager (similar to what the 3scale API Gateway does).
- In the details of your user service in the Red Hat 3scale API Manager administration console, locate the ID for API calls …:
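The sed commands that follow assume a few environment variables; a sketch of how you might export them (placeholder values shown, substitute your own):
export USER_SERVICE_ID="<ID for API calls, from the step above>"
export TENANT_NAME="<name of your 3scale tenant>"
export API_WILDCARD_DOMAIN="<wildcard apps domain of your cluster>"
export API_ADMIN_ACCESS_TOKEN="<the access token created earlier>"
export USER_API_KEY="<user_key of your user service application>"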
- Review the threescale-adapter-config.yaml file:
$ less 3scale-istio-adapter/istio/threescale-adapter-config.yaml | more
- Modify the threescale-adapter-config.yaml file with the ID of your user service:
$ sed -i "s/service_id: .*/service_id: \"$USER_SERVICE_ID\"/" \
    3scale-istio-adapter/istio/threescale-adapter-config.yaml
- Modify the threescale-adapter-config.yaml file with the URL of your Red Hat 3scale API Management manager tenant:
$ sed -i "s/system_url: .*/system_url: \"https:\/\/$TENANT_NAME-admin.$API_WILDCARD_DOMAIN\"/" \
    3scale-istio-adapter/istio/threescale-adapter-config.yaml
- Modify the threescale-adapter-config.yaml file with the administrative access token of your Red Hat 3scale API Management manager administration account:
$ sed -i "s/access_token: .*/access_token: \"$API_ADMIN_ACCESS_TOKEN\"/" \
    3scale-istio-adapter/istio/threescale-adapter-config.yaml
- The rule in threescale-adapter-config.yaml defines the conditions under which API Management policies are applied to a request.
The existing default rule is as follows:
match: destination.labels["service-mesh.3scale.net"] == "true"
This rule specifies that API Management policies should be applied to the request when the target Deployment includes a label of service-mesh.3scale.net. In this version of the demo, this rule does not apply API Management policies as expected. Further research into the issue is needed.
- As a work-around for the current problem, modify the threescale-adapter-config.yaml file with a modified rule that specifies that API Management policies should be applied when the target is the user-service:
$ sed -i "s/match: .*/match: destination.service.name == \"user-service\"/" \
    3scale-istio-adapter/istio/threescale-adapter-config.yaml
More information about Istio’s Policy Attribute Vocabulary (used in the creation of rules) can be found here.
- Load the Red Hat 3scale API Management Istio Handler configurations:
$ oc create -f 3scale-istio-adapter/istio/threescale-adapter-config.yaml
...
handler.config.istio.io/threescale created
instance.config.istio.io "threescale-authorization" created
rule.config.istio.io "threescale" created
- If for whatever reason you want to delete these 3scale Istio Mixer adapter configurations, execute the following:
oc delete rule.config.istio.io threescale -n istio-system
oc delete instance.config.istio.io threescale-authorization -n istio-system
oc delete handler.config.istio.io threescale -n istio-system
- Verify that the Istio Handler configurations were created in the istio-system namespace:
$ oc get handler threescale -n istio-system -o yaml
apiVersion: v1
items:
- apiVersion: config.istio.io/v1alpha2
  kind: handler
  ....
  spec:
    adapter: threescale
    connection:
      address: threescaleistioadapter:3333
    params:
      access_token: fa16cd9ebd66jd07c7bd5511be4b78ecf6d58c30daa940ff711515ca7de1194a
      service_id: "103"
      system_url: evals3-tenant-admin.apps.cluster-chile-6b30.chile-6b30.example.opentlc.com
Result:
- From the terminal, execute the following to invoke your user service directly via the Istio ingress:
$ curl -v \
  `echo "http://"$(oc get route istio-ingressgateway -n istio-system -o template --template {{.spec.host}})"/products"`
...
< HTTP/1.1 403 Forbidden
...
* Connection #0 to host istio-ingressgateway-istio-system.apps.clientvm.b902.rhte.opentlc.com left intact
PERMISSION_DENIED:threescalehandler.handler.istio-system:no auth credentials provided or provided in invalid location
Notice the 403 error response of PERMISSION_DENIED:threescalehandler.handler.istio-system:. This is to be expected. Inbound requests through the Istio ingress are now correctly flowing through the Mixer to the 3scale adapter.
In the above request, however, the API user_key associated with your user service application has been omitted.
- View the log file of the 3scale adapter:
$ oc logs -f `oc get pod -n istio-system | grep "3scale-istio-adapter" | awk '{print $1}'` \
    -n istio-system \
    -c 3scale-istio-adapter
"Got instance &InstanceMsg{Subject:&SubjectMsg{User:,Groups:,Properties:map[string]*istio_policy_v1beta11.Value{app_id: &Value{Value:&Value_StringValue{StringValue:,},},app_key: &Value{Value:&Value_StringValue{StringValue:,},},},},Action:&ActionMsg{Namespace:,Service:,Method:GET,Path:/products,Properties:map[string]*istio_policy_v1beta11.Value{},},Name:threescale-authorization.instance.istio-system,}"
"proxy config for service id 4 is being fetching from 3scale"
- Try again to invoke your user service using the user service user_key:
$ curl -v \
  `echo "http://"$(oc get route istio-ingressgateway -n istio-system -o template --template {{.spec.host}})"/products?user_key=$USER_API_KEY"`
Congratulations! The user service is again being managed and secured by the Red Hat 3scale API Management manager. This time however, the 3scale Istio Mixer adapter is being utilized rather than the API gateway.
https://github.com/RedHatWorkshops/dayinthelife-integration
https://gist.github.com/hodrigohamalho
https://github.com/hodrigohamalho