Steps to Compose Jupyter and PostgreSQL
- Create a `docker-compose.yml` file
- Add two services named `jupyter` and `db`
- Add the respective service parameters for `jupyter` and `db` (e.g., create networks for `jupyter` and `db` and bind them with volumes)
- Run a `postgres` query to get the column names of the CSV, a.k.a. the features
- The notebook named `postgres_jupyter.ipynb` illustrates the above points
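The compose file described above might look like the following sketch; the images, port, credentials, and volume paths are assumptions, not taken from the lab:

```yaml
version: "3.9"
services:
  jupyter:
    image: jupyter/base-notebook   # image name is an assumption
    ports:
      - "8888:8888"
    networks:
      - jupyter
      - db
    volumes:
      - ./notebooks:/home/jovyan/work
  db:
    image: postgres:15             # version is an assumption
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
    networks:
      - db
    volumes:
      - pgdata:/var/lib/postgresql/data
networks:
  jupyter:
  db:
volumes:
  pgdata: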
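With the services up, one way to get the column names (the features) is to query `information_schema.columns`; the table name and connection credentials below are assumptions, so the connection itself is shown commented out:

```python
# Hypothetical table name; replace with the table the CSV was loaded into.
query = (
    "SELECT column_name FROM information_schema.columns "
    "WHERE table_name = 'features_table';"
)

# From the jupyter service, the db service is reachable by its service name:
# import psycopg2
# conn = psycopg2.connect(host="db", dbname="postgres",
#                         user="postgres", password="example")
# with conn.cursor() as cur:
#     cur.execute(query)
#     columns = [row[0] for row in cur.fetchall()]

print(query)
```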
Steps To Configure ClearML
- Get the access credentials from the ClearML app through the browser and supply them from a cell in the Jupyter notebook: `clearml.browser_login()`
- Create a task to perform model training on the data from lab02: `task = Task.init()`
- Choose the parameters for HPO: `task.connect(params)`
- Close the training task with `task.close()`
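The steps above can be sketched as follows; the project/task names and the parameter values are assumptions, and the ClearML calls need a configured server and credentials, so they are shown commented out:

```python
# Hyperparameters to expose to ClearML for later HPO (values are assumptions).
params = {
    "max_depth": 6,
    "learning_rate": 0.1,
    "n_estimators": 100,
}

# With clearml installed and credentials set via clearml.browser_login(),
# the training task would be created and closed like this:
# from clearml import Task
# task = Task.init(project_name="lab02", task_name="xgboost-training")
# params = task.connect(params)  # registers params so HPO can override them
# ... train the model on the lab02 data ...
# task.close()

print(sorted(params))
```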
Steps to Perform HPO
- Everything is a task in ClearML, so create a task for HPO: `optimizer = HyperParameterOptimizer()`
- Define the parameters for this method
- Make sure you run these commands:

```python
optimizer.set_report_period(0.1)  # report progress every 0.1 minutes
optimizer.start()                 # start the optimizer searching for the best params
optimizer.wait()                  # block until the optimization finishes
optimizer.stop()                  # stop the optimizer once the search is done
```
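The optimizer's parameters might be defined as in the sketch below; the base task ID, parameter ranges, and metric names are assumptions, and the construction is commented out since it needs a configured ClearML server:

```python
# Search space mirroring the params connected in the training task
# (ranges are assumptions).
search_space = {
    "General/max_depth": (3, 10),         # integer range
    "General/learning_rate": (0.01, 0.3), # float range
}

# from clearml.automation import (HyperParameterOptimizer,
#                                 UniformIntegerParameterRange,
#                                 UniformParameterRange)
# optimizer = HyperParameterOptimizer(
#     base_task_id="<training-task-id>",  # the lab02 training task
#     hyper_parameters=[
#         UniformIntegerParameterRange("General/max_depth", 3, 10),
#         UniformParameterRange("General/learning_rate", 0.01, 0.3),
#     ],
#     objective_metric_title="train",
#     objective_metric_series="logloss",
#     objective_metric_sign="min",        # lowest logloss wins
# )

print(len(search_space))
```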
- Use the code below to get the top 3 experiments that produced the lowest logloss on train:

```python
# Print the first n key-value pairs of a dict
take_items = lambda d, n: [print(f"{k}: {d[k]}") for k in list(d.keys())[:n]]

k = 3
top_exp = optimizer.get_top_experiments(top_k=k)
print('Top {} experiments are:'.format(k))
for n, t in enumerate(top_exp, 1):
    print("------------------------------------------------")
    print(f"Rank = {n}: task_id = {t.id}")
    print("------------------------------------------------")
    take_items(t.get_parameters(), len(t.get_parameters()))
```