Can you help me build a client-server architecture with HE?
Closed this issue · 6 comments
Hi, I'm trying to build a client-server architecture with your pyhelayers library. In particular, I want to build a framework in which the client sends encrypted data to a server that runs a CNN on this data and returns encrypted results. The client then receives these encrypted results and decrypts them with its private key. Following your example in 09_Neural_network_MNIST.ipynb, how can I write the server and client scripts in Python?
So far I have written the server, which creates the context based on my CNN, but I don't know how to export this context to the client. I found the get_public_functions() method, but there is no documentation about it.
```python
import pyhelayers

he_run_req = pyhelayers.HeRunRequirements()
he_run_req.set_he_context_options([pyhelayers.DefaultContext()])
he_run_req.optimize_for_batch_size(8)

nn = pyhelayers.NeuralNet()
nn.encode_encrypt(["/home/buono/ObjDct_Repo/models/trained_models/lenetfomo.onnx"], he_run_req)

context = nn.get_created_he_context()
pub_functions = context.get_public_functions()
print('Public functions:', pub_functions)
```
It is simpler to start on the client side, create the encrypted model and context, and then export the encrypted data to the server.
You can follow the example here: https://github.com/IBM/helayers-examples/blob/main/python/notebooks/02_Neural_network_fraud_detection.ipynb
It details which parts of the setup are done on the client side (or trusted environment, as it is called in the demo) and which parts are done on the server side.
It also shows how to serialize data on one side in preparation for sending it to the other side, and how to deserialize it on the other side.
There is also a more complicated scenario where the client side initializes the context (to get the secret key) and then sends the public keys to the server, and the server encrypts the model with the user's public key. I suggest you start with the simpler scenario above, and if needed we can explain how to migrate to the second one.
Sure. I'll send here an example tomorrow.
The example below sketches what's needed to support the case where the server has the model in the plain, and the client generates the keys and sends encrypted data queries.
If you want to build a robust system please see our contact details: https://ibm.github.io/helayers/contact.html
```python
#!/usr/bin/env python
import pyhelayers
import utils
from pathlib import Path

print('Imported pyhelayers version ', pyhelayers.VERSION)

# You can change these variables to point to your own model
# and data files.
# Also, you can see how this model was created and trained in folder data_gen
INPUT_DIR = Path(utils.get_data_sets_dir()) / 'net_fraud'
X_H5 = INPUT_DIR / 'x_test.h5'
Y_H5 = INPUT_DIR / 'y_test.h5'
MODEL_JSON = str(INPUT_DIR / 'model.json')
MODEL_H5 = str(INPUT_DIR / 'model.h5')
batch_size = 4096

# 1. Server side: load model and prepare it for work under encryption
print("Loading model and preparing on server side")
hyper_params = pyhelayers.PlainModelHyperParams()
plain = pyhelayers.PlainModel.create(hyper_params, [MODEL_JSON, MODEL_H5])
he_run_req = pyhelayers.HeRunRequirements()
he_run_req.set_model_encrypted(False)
he_run_req.set_he_context_options([pyhelayers.HeContext.create(["HEaaN_CKKS"])])
he_run_req.optimize_for_batch_size(batch_size)
profile = pyhelayers.HeModel.compile(plain, he_run_req)
# Prepare a settings JSON to send to the client so they can create the keys properly
profile_str = profile.to_string()

# 2. Client side
# Receives profile_str
print("Creating keys on client side")
client_profile = pyhelayers.HeProfile()
client_profile.from_string(profile_str)
# Create the context with keys
client_context = pyhelayers.HeModel.create_context(client_profile)
# Save the context. Note that this saves all the HE library information, including the
# public key, allowing the server to perform HE computations.
# The secret key is not saved here, so the server won't be able to decrypt.
# The secret key is never stored unless explicitly requested by the user using the
# designated method.
context_buffer = client_context.save_to_buffer()

# 3. Server side
print("Encoding model on server side.")
server_context = pyhelayers.load_he_context(context_buffer)
nn = pyhelayers.NeuralNet(server_context)
nn.encode(plain, profile)
print("Preparing encoder to send to client")
model_io_encoder = pyhelayers.ModelIoEncoder(nn)
model_io_encoder_buf = model_io_encoder.save_to_buffer()

# 4. Client side
print("Encrypting data on client side.")
plain_samples, labels = utils.extract_batch_from_files(X_H5, Y_H5, batch_size, 0)
print('Loaded samples of shape', plain_samples.shape)
# Load the I/O encoder
client_model_io_encoder = pyhelayers.load_io_encoder(client_context, model_io_encoder_buf)
encrypted_samples = pyhelayers.EncryptedData(client_context)
client_model_io_encoder.encode_encrypt(encrypted_samples, [plain_samples])
encrypted_samples_buf = encrypted_samples.save_to_buffer()

# 5. Server side
print("Prediction on server side.")
server_encrypted_samples = pyhelayers.load_encrypted_data(server_context, encrypted_samples_buf)
enc_predictions = pyhelayers.EncryptedData(server_context)
nn.predict(enc_predictions, server_encrypted_samples)
enc_predictions_buf = enc_predictions.save_to_buffer()

# 6. Client side
print("Decrypt on client side")
client_enc_predictions = pyhelayers.load_encrypted_data(client_context, enc_predictions_buf)
plain_predictions = client_model_io_encoder.decrypt_decode_output(client_enc_predictions)
print('predictions', plain_predictions)
accuracy = utils.assess_results(labels, plain_predictions)
if accuracy < 0.9:
    raise Exception("Accuracy too low")
```
Great! Thank you, I'll try this in my environment.
OK, I tested it and it works, thank you <3