
iotedge-vm-deploy

Detailed documentation is available on Microsoft Docs

ARM Template to deploy a GPU-enabled VM with IoT Edge Installed

ARM template to deploy a GPU-enabled VM with IoT Edge pre-installed (via cloud-init)

The ARM template visualized for exploration

The VM will deploy and then run through the cloud-init script. The installation of the required services can take up to 15 minutes. To check the progress of the installation, monitor the /var/log/cloud-init-output.log file. When all services are installed, the /var/log/cloud-init-output.log file should be approximately 4,097,285 bytes in size.
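For example, you can follow the installation progress and check the current log size from an SSH session on the VM:

tail -f /var/log/cloud-init-output.log      # follow the installation output live
stat -c %s /var/log/cloud-init-output.log   # print the current file size in bytes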

Configure IoT Edge

As part of the deployment of the VM in Azure, an environment variable called dcs is set up to hold the device connection string. Once the cloud-init script has finished, run the following commands:

sudo sed -i "s#\(device_connection_string: \).*#\1\"$dcs\"#g" /etc/iotedge/config.yaml
sudo systemctl restart iotedge
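To confirm that the connection string was written and the daemon restarted cleanly, you can run the following checks (iotedge check is the built-in troubleshooting tool for IoT Edge 1.x):

sudo grep "device_connection_string" /etc/iotedge/config.yaml   # the connection string set above should appear here
sudo systemctl status iotedge                                   # the service should be active (running)
sudo iotedge check                                              # run configuration and connectivity checks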

Azure CLI command to deploy IoT Edge enabled VM

az deployment group create \
  --name edgeVm \
  --resource-group replace-with-rg-name \
  --template-uri "https://raw.githubusercontent.com/MSKeith/iotedge-vm-deploy/master/edgeDeploy.json" \
  --parameters dnsLabelPrefix='my-edge-vm1' \
  --parameters adminUsername='azureuser' \
  --parameters deviceConnectionString=$(az iot hub device-identity show-connection-string --device-id replace-with-device-name --hub-name replace-with-hub-name -o tsv) \
  --parameters authenticationType='sshPublicKey' \
  --parameters adminPasswordOrKey="$(< ~/.ssh/id_rsa.pub)"
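Once the deployment completes, you can connect to the VM over SSH. The hostname follows the standard Azure public IP DNS pattern <dnsLabelPrefix>.<region>.cloudapp.azure.com; the example below assumes the parameters used above and a deployment in the westus2 region:

ssh azureuser@my-edge-vm1.westus2.cloudapp.azure.com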

Testing the IoT Edge installation

To verify that the NVIDIA GPU driver was installed correctly, run nvidia-smi on the host:

nvidia-smi

The output should look like this:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000001:00:00.0 Off |                  Off |
| N/A   30C    P0    38W / 250W |      0MiB / 16160MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Next, verify that containers can access the GPU by running nvidia-smi inside a CUDA container:

sudo docker run --runtime=nvidia --rm nvidia/cuda:11.0-base nvidia-smi

This will pull the Docker image down and run nvidia-smi in the container. The output should look like this:

Unable to find image 'nvidia/cuda:11.0-base' locally
11.0-base: Pulling from nvidia/cuda
54ee1f796a1e: Pull complete 
f7bfea53ad12: Pull complete 
46d371e02073: Pull complete 
b66c17bbf772: Pull complete 
3642f1a6dfb3: Pull complete 
e5ce55b8b4b9: Pull complete 
155bc0332b0a: Pull complete 
Digest: sha256:774ca3d612de15213102c2dbbba55df44dc5cf9870ca2be6c6e9c627fa63d67a
Status: Downloaded newer image for nvidia/cuda:11.0-base
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000001:00:00.0 Off |                  Off |
| N/A   31C    P0    37W / 250W |      0MiB / 16160MiB |      4%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Troubleshooting

To investigate what occurred during deployment, check the cloud-init logs in the /var/log directory. The files to view for this installation are:

cloud-init-output.log

cloud-init.log

cuda-installer.log
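A quick way to scan these logs for problems is to search all three files for errors and failures, for example:

cd /var/log
grep -inE "error|fail" cloud-init-output.log cloud-init.log cuda-installer.log   # case-insensitive search with line numbers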

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.