Distributed System
Excuse me, sir,
I work as a researcher focusing on von-network. My problem is building access to the 4 nodes from a new web service. Currently, when you run von-network, you see 4 node containers and 1 web server container. For my new experiment I want to create 4 nodes and 2 web server containers. The new web servers need to access the ledger via the 4 nodes; in other words, I need both web servers to connect to the same nodes so they can share information over the network. Could you please provide some clues or any documentation to help solve this?
Thank you.
Sincerely,
oudom
The existing web server communicates to the network (the 4 nodes) using indy-vdr. The genesis file needed by indy-vdr is generated by the startup scripts. All of the containers are able to communicate since they are all part of the same network.
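For reference, here is a minimal sketch of how a client talks to the nodes with the indy-vdr Python wrapper. The genesis file path is a placeholder; point it at the file the startup scripts generate:

```python
import asyncio
from indy_vdr import open_pool, ledger

async def main():
    # Connect to the 4 nodes using the genesis file produced by ./manage start.
    # "genesis.txn" is a placeholder path; use your own copy of the file.
    pool = await open_pool(transactions_path="genesis.txn")

    # Read transaction #1 from the DOMAIN ledger (ledger type 1).
    # Read requests need no signing key; the ledger is publicly readable.
    request = ledger.build_get_txn_request(None, 1, 1)
    response = await pool.submit_request(request)
    print(response)

asyncio.run(main())
```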
If you provide more detail regarding your use case and how it is similar/different from the current setup I may be able to provide more detailed recommendations.
Normally, when I run von-network on localhost with containers, I modify the script to add a second web server (web1 and web2) in Docker Compose. Once it's running, I test DIDs and role permissions. Since both web1 and web2 come up alongside the ledger nodes, I want them to connect to the same ledger. How should I modify the ./manage script? Could you please advise?
Dear @WadeBarnes,
Could you please tell me how to set a static IP in von-network? I don't want to use the default IP (172.17.0.1). I have changed various settings in the manage file and Docker Compose, but the IP doesn't actually change.
Sincerely,
oudom
./manage start <ip_address>
Full example using BCovrin Test as an example:
./manage start 138.197.138.255 LOG_LEVEL=info RUST_LOG=error POOL_CONNECTION_DELAY=20 POOL_CONNECTION_ATTEMPTS=30 WEB_SERVER_HOST_PORT=80 "LEDGER_INSTANCE_NAME=BCovrin Test" "INFO_SITE_TEXT=digital.gov.bc.ca/digital-trust" "INFO_SITE_URL=https://digital.gov.bc.ca/digital-trust/" "LEDGER_CACHE_PATH=/home/indy/.indy_client/ledger-cache/ledger-cache-db" "INDY_SCAN_URL=http://test.bcovrin.vonx.io:3707/home/BCOVRIN_TEST" "INDY_SCAN_TEXT=IndyScan - BCovrin Test"
To regenerate the genesis files you will need to reset your environment with ./manage rm and then run ./manage start <ip_address>. Otherwise the environment will use the existing genesis files.
I don't see ./manage rm in your output. You will also not see the change in the docker ps output; you'll see the change in the genesis file that gets created.
Dear @WadeBarnes,
I have set a static IP for the Docker host in the manage file. If I want to add web2 in the same file alongside web1, will it work if I use the same image? Normally, containers don't require different IP addresses, just different ports. I modified the configuration in the docker-compose.yml file and started a new container, web2, but it doesn't work; it still brings up web1. Could you please provide some clues?
Sincerely,
Oudom
I have set a static IP for the Docker host in the manage file.
I'm not sure why you would need to do this, it seems unnecessary.
If I want to add web2 in the same file alongside web1, will it work if I use the same image?
It's still not clear to me what, exactly, you're trying to accomplish, so it's difficult for me to provide guidance.
I modified the configuration in the docker-compose.yml file and started a new container, web2, but it doesn't work; it still brings up web1.
Your screenshot shows you're accessing the application on port 9001, yet your webserver2 instance is listening on port 9002.
Regardless, it also looks like you're using the exact same configuration and image for the webserver2 instance, so I'd expect it to look and behave exactly like webserver. Have you made some changes to the code that would make you expect a difference in behavior and/or appearance between the webserver and webserver2 instances?
I have been working on an SSI model to establish trusted information sharing over the internet, for example between two institutions participating in the same network. I built the SSI setup with Hyperledger Indy, and I added a second web server to von-network so that both participate in the same ledger. After completing this task, I began applying conventional RBAC to enhance security.
@Oudom1, Thanks for the information, but I'm still not clear on your end goal, or the role and purpose the two web servers play in your design.
Is this a proof of concept? I just want to highlight that VON Network is Not a Production Level Indy Node Network.
Without a good understanding of your goal, I'm struggling to help.
In general, if many institutions issue student credentials and collaborate by sharing resources over the internet, that is a distributed system. However, one institution should have the authority to allow or disallow another institution from accessing the credentials, by managing permissions and administrative roles. Consequently, students can simply present their digital identity when they go to another institution, and that institution verifies the published DID provided by the student. This eliminates the need for students to carry international cards, student IDs, proof of address, or any other physical documents.
Your design seems counter to the ideas of SSI upon which Hyperledger Indy is based, which may explain why things are not working out as you expect.
The only thing a VC issuer puts on Indy is metadata and public keys that enable issuing credentials. Those are intended to be public data, and read access to them should never be restricted. Credentials do not go on Indy; they are delivered from the issuer to the holder. The holder in turn presents them to verifiers, who access Hyperledger Indy to get the data needed to verify the cryptography, confirming that the presentations from the holder are valid and have not been tampered with. It is up to the verifier to decide whether or not they "trust" the issuer and the attestations they make in the VC.
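To make that concrete, here is a hedged sketch of the verifier-side reads using the indy-vdr Python wrapper; the genesis path and the schema and credential definition IDs are made-up placeholders:

```python
import asyncio
from indy_vdr import open_pool, ledger

async def main():
    pool = await open_pool(transactions_path="genesis.txn")  # placeholder path

    # Verifiers only read the public artifacts the issuer published:
    # the schema and the credential definition (public keys).
    # Both IDs below are placeholders.
    schema_req = ledger.build_get_schema_request(
        None, "WgWxqztrNooG92RXvxSTWv:2:degree:1.0")
    cred_def_req = ledger.build_get_cred_def_request(
        None, "WgWxqztrNooG92RXvxSTWv:3:CL:20:tag")

    print(await pool.submit_request(schema_req))
    print(await pool.submit_request(cred_def_req))

asyncio.run(main())
```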
As well, the webserver in von-network is just a way to have a human-friendly view of the data on the blockchain. Issuers, holders, and verifiers do not use the webserver to either publish or read the data from the ledger; they go directly to the ledger IP:port addresses. The von-network webserver does add an extra service that allows you to write a DID to the ledger, but it does that through the normal write process and is authorized to do so because von-network is for testing only. In a production network, such a self-serve operation would not be permitted.
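For completeness, a sketch of that self-serve DID write against a local von-network instance. The endpoint path and payload shape are assumptions to verify against your own running ledger browser:

```python
import requests

# The von-network ledger browser (test networks only!) exposes a self-serve
# endpoint for writing a DID to the ledger. The URL and payload fields are
# assumptions to check against your instance; the seed must be 32 characters.
payload = {
    "alias": "test-institution",          # placeholder alias
    "seed": "0" * 24 + "Testing1",        # placeholder 32-char seed
    "role": "TRUST_ANCHOR",
}
resp = requests.post("http://localhost:9000/register", json=payload)
print(resp.status_code, resp.json())
```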
Hope that helps.
My friend is working on a reputation model for storing user claims, while my focus is on the VON web application. Can von-network scale to multiple web servers connected to the same ledger? If so, could you provide some clues, sir? I just need it for testing how it works: if web2 connects to the same ledger as web1, and a seed, DID, etc. is written through web2, can that information be validated through web1 or not?
The web service (ledger browser) instances are simply acting as agents that interact with the underlying ledger. If you're asking whether or not one agent instance can see the same data as another agent, including any data written to the ledger by another agent, the answer is yes.
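A quick sketch of that point, assuming both clients were built from the same genesis file (the path is a placeholder):

```python
import asyncio
from indy_vdr import open_pool, ledger

async def main():
    # Two independent clients ("agents"), both pointed at the same network.
    pool_a = await open_pool(transactions_path="genesis.txn")  # placeholder path
    pool_b = await open_pool(transactions_path="genesis.txn")

    # Each client reads transaction #1 from the DOMAIN ledger. Requests are
    # consumed on submission, so each client builds its own.
    txn_a = await pool_a.submit_request(ledger.build_get_txn_request(None, 1, 1))
    txn_b = await pool_b.submit_request(ledger.build_get_txn_request(None, 1, 1))
    assert txn_a == txn_b  # same ledger, same data

asyncio.run(main())
```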
To add, indy-node is a public permissioned ledger. Anyone can read the data; it is publicly available. However, you require permissions to write data to the ledger.
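In indy-vdr terms, a hedged sketch of that difference: reads need no signature, while writes must be signed by a DID whose role permits the write (the DIDs below are placeholders):

```python
import asyncio
from indy_vdr import open_pool, ledger

async def main():
    pool = await open_pool(transactions_path="genesis.txn")  # placeholder path

    # Reading a DID record: no signature required, anyone can do this.
    read_req = ledger.build_get_nym_request(None, "WgWxqztrNooG92RXvxSTWv")
    print(await pool.submit_request(read_req))

    # Writing a DID record: the request must be signed by a DID that already
    # holds write permission (e.g. an endorser or steward). Submitted without
    # a valid signature, a permissioned network rejects it.
    write_req = ledger.build_nym_request(
        "V4SGRU86Z58d6TV7PBUe6f",  # placeholder submitter DID with write rights
        "WgWxqztrNooG92RXvxSTWv",  # placeholder target DID
    )
    # write_req.set_signature(...) would attach the submitter's signature here.

asyncio.run(main())
```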
You might want to walk through this tutorial as a way to understand the process. In the Workshop, a development/test instance of Indy is used (BCovrin Test). It might help you understand how the model works.
https://github.com/bcgov/traction/blob/main/docs/traction-anoncreds-workshop.md