Issue Creating a Runbook with Bicep
Opened this issue · 24 comments
Discussed in Azure/bicep#8223
Originally posted by chelhernandez August 31, 2022
Hello community,
I'm trying to deploy/create a Runbook with a Bicep template, but I'm having issues with the publishContentLink URI. I'm trying to pull a PowerShell script that lives in a storage account in Azure, but the deployment fails; it doesn't seem to work with a SAS token. The script blob sits in a private container, and for security reasons I cannot make it public. I also set the blob's CONTENT-TYPE to "text/plain" (trying to mimic what you get from GitHub raw).
If I use a public link in the publishContentLink URI, like https://raw.githubusercontent.com/thomasmaurer/demo-cloudshell/master/helloworld.ps1, it works: the runbook is created and published.
I need to create the runbook and have it published with the script in it, because afterwards I'm going to deploy a webhook with Bicep, and the webhook deployment fails if the runbook is not in a Published state.
Here is my sample code:
```bicep
resource runbook 'Microsoft.Automation/automationAccounts/runbooks@2019-06-01' = {
  parent: automationaccount
  name: 'runbooktest'
  location: location
  properties: {
    runbookType: 'PowerShell'
    logProgress: false
    logVerbose: false
    publishContentLink: {
      uri: 'https://storageaccountname.blob.core.windows.net/container/PowerShellScript.ps1?SASTOKEN'
    }
  }
}
```
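For context, the webhook dependency described above can be sketched roughly like this. The webhook name and expiry are illustrative, not from the original post; referencing `runbook.name` creates the implicit dependency, but the deployment still fails unless the runbook ends up in the Published state (which `publishContentLink` is supposed to guarantee):

```bicep
resource webhook 'Microsoft.Automation/automationAccounts/webhooks@2015-10-31' = {
  parent: automationaccount
  name: 'runbooktest-webhook'  // hypothetical name
  properties: {
    isEnabled: true
    expiryTime: '2030-01-01T00:00:00Z'  // illustrative expiry
    runbook: {
      name: runbook.name  // implicit dependsOn the runbook resource
    }
  }
}
```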
Is there a way to upload the PowerShell script from local storage instead of a public URI when deploying?
Any suggestions will help me a lot.
I appreciate your help.
Thank you!
Can you share the full details of the error/failure? What is the intended behavior and what is happening instead?
You may need to open a support ticket and have this routed to the Azure Automation team.
Hi Alex,
I'm trying to create a Runbook with Bicep and I want to upload a PowerShell script during the deployment. The PowerShell script is in a private storage account in Azure. I'm using the following Bicep code:
```bicep
param location string = resourceGroup().location

resource automationaccount 'Microsoft.Automation/automationAccounts@2021-06-22' existing = {
  name: 'test-aa-00'
}

resource runbook 'Microsoft.Automation/automationAccounts/runbooks@2019-06-01' = {
  parent: automationaccount
  name: 'runbooktest'
  location: location
  properties: {
    runbookType: 'PowerShell'
    logProgress: false
    logVerbose: false
    publishContentLink: {
      uri: 'https://teststorageacc.blob.core.windows.net/scripts/Set-AzSqlElasticPoolStorage.ps1?sp=racw&st=2022-09-14T14:48:59Z&se=2022-09-15T22:48:59Z&skoid=afe737df-489b-4676-846c-3f14ba2e9d19&sktid=ea7eac76-2c48-4fa4-bd53-df6486df183c&skt=2022-09-14T14:48:59Z&ske=2022-09-15T22:48:59Z&sks=b&skv=2021-06-08&spr=https&sv=2021-06-08&sr=b&sig=L%2FXNZf9m1agjgJ5lA%2Ba51Iy6NaY6RE9nI3C6t1Ly86k%3D'
    }
  }
}
```
I'm getting the following error:
New-AzResourceGroupDeployment: 11:16:50 AM - The deployment 'main' failed with error(s). Showing 1 out of 1 error(s).
Status Message: {"Message":"Invalid argument specified. Argument content cannot be null."} (Code: BadRequest)
CorrelationId: d3e74692-e767-4f42-815b-bf5fd63a71d2
It cannot read the PowerShell script from the storage account. I'm running this deployment from an on-prem VM that has private access to the storage account through Private Link.
In fact, if I open the URI above in a web browser on that on-prem VM, I can download the script.
I'm more than glad to open a support ticket if needed; I just tried this site for help first. I appreciate your response.
Cheers.
Hi,
I'm having what may be a related issue, though my setup is slightly different.
I am trying to create a Bicep template for my Automation Account with a runbook on it, getting its content from a raw content file in my Azure DevOps repository.
Here is the snippet:
```bicep
resource runbook 'Microsoft.Automation/automationAccounts/runbooks@2019-06-01' = {
  parent: autaccount
  name: runbookname
  location: location
  properties: {
    runbookType: 'PowerShell'
    logVerbose: false
    logProgress: false
    logActivityTrace: 0
    publishContentLink: {
      uri: 'Get https://dev.azure.com/orgname/projectname/_apis/sourceProviders/TfsGit/filecontents?repository=reponame&path=path to my file.ps1&commitOrBranch=main&api-version=5.0-preview.1'
    }
  }
}
```
The error code is the following:
```
ERROR: {"status":"Failed","error":{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[
  {"code":"BadRequest","message":"{\"Message\":\"The request is invalid.\",\"ModelState\":{\"runbook.properties.publishContentLink.uri\":[\"The uri field is not a valid fully-qualified http, https, or ftp URL.\"]}}"},
  {"code":"BadRequest","message":"{\"Message\":\"Invalid JSON - Kindly check the value of the variable.\"}"},
  {"code":"BadRequest","message":"{\"Message\":\"Invalid JSON - Kindly check the value of the variable.\"}"},
  {"code":"BadRequest","message":"{\"Message\":\"Invalid JSON - Kindly check the value of the variable.\"}"}
]}}
```
@Azure-hacker I can see that your 'uri' parameter has a 'Get ' before the https:// link. It should be uri: 'https://linktoyour.ps1'. I think that's why you're getting "The uri field is not a valid fully-qualified http, https, or ftp URL."
I'm facing the same issue - I can access the file using the URI, but Bicep/Azure always throws the error "Argument content cannot be null". Has anyone managed to fix this, or can anyone provide more details on how to solve it?
I'm using the Bicep code below to add a runbook to our Automation account:
```bicep
resource runbook1 'Microsoft.Automation/automationAccounts/runbooks@2019-06-01' = {
  parent: autaccount
  name: runbookname
  location: location
  properties: {
    runbookType: 'PowerShell'
    logVerbose: false
    logProgress: false
    logActivityTrace: 0
    publishContentLink: {
      uri: 'https://dev.azure.com/XXXXX/YYYYYY/_git/XXXXXXXX?path=/Reference/powershell/Azuredevopstask.ps1'
    }
  }
}
```
and I am using the Azure CLI to deploy my Bicep file:
az deployment group create --resource-group "XXXXXXX" --template-file ""
My runbook is created, but it doesn't contain the proper script. I'm assuming there is some kind of authentication issue during deployment, because of which it cannot read the file.
Hi chelhernandez, this issue has been marked as stale because it was labeled as requiring author feedback but has not had any activity for 4 days. It will be closed if no further activity occurs within 3 days of this comment. Thanks for contributing to bicep! 😄 🦾
I have the same issue.
If I put a publicly available .ps1 script in "uri" it works fine, but if I try to use a URI with a SAS token it doesn't work.
I would very much prefer not to put my scripts in a public storage account just to get Bicep to work...
Can you share the full details of the error/failure? What is the intended behavior and what is happening instead?
You may need to open a support ticket and have this routed to the Azure Automation team.
The error is:
```json
{
  "code": "BadRequest",
  "message": "{\"Message\":\"Invalid argument specified. Argument content cannot be null.\"}"
}
```
Any updates on this? When I use a valid link with SAS token I receive the message "Validation errors while reading content link", but when using a publicly available blob and no SAS, no issues.
Any luck with this? Been struggling for 2 days now and no luck!!
Has anyone opened a support case with the Automation team? This does not look to be a bicep specific issue. I'm happy to re-open and move to the bicep-types-az repo, but the action to open a support case will be the same.
Hi, community,
In my case, we didn't move forward with this approach. We ultimately moved the PowerShell scripts to a Function App, so we didn't continue working on this issue or open a support case with the Automation team. If anyone still faces this problem, I encourage you to follow @alex-frankel's advice and open a support case.
Good luck!
I had to move on with a workaround: using a public blob and simply limiting access to it. Hopefully we can use SAS tokens soon.
Is there an option to upload a local file to an Azure Automation account runbook? I don't want to upload my runbooks to some storage account first.
+1, having the same issue. Had to use public URLs and SetAccessPolicy to allow the blobs anonymous access.
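For anyone adopting the same workaround, blob-level anonymous access can also be declared in Bicep. A minimal sketch, assuming hypothetical account/container names and that allowBlobPublicAccess is enabled on the storage account:

```bicep
resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
  name: 'storageaccountname'  // hypothetical
}

resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2022-09-01' existing = {
  parent: storage
  name: 'default'
}

resource scriptsContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2022-09-01' = {
  parent: blobService
  name: 'scripts'  // hypothetical
  properties: {
    // 'Blob' allows anonymous reads of individual blobs but not container listing
    publicAccess: 'Blob'
  }
}
```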
Best is to complain to the Azure Automation team about this; there is nothing the Bicep team can do about it. BTW, this is not the only service with this problem. And even if the service allowed local file usage, there would still be a limit on the amount of data Bicep can embed in a single deployment.
@slavizh, it's interesting to note that Terraform handles this flawlessly, i.e. being able to read the content from a file and upload that to the runbook. Is there a specific reason why Bicep wouldn't be able to achieve the same functionality?
```terraform
data "local_file" "example" {
  filename = "${path.module}/example.ps1"
}

resource "azurerm_automation_runbook" "example" {
  name                    = "Get-AzureVMTutorial"
  location                = azurerm_resource_group.example.location
  resource_group_name     = azurerm_resource_group.example.name
  automation_account_name = azurerm_automation_account.example.name
  log_verbose             = "true"
  log_progress            = "true"
  description             = "This is an example runbook"
  runbook_type            = "PowerShell"
  content                 = data.local_file.example.content
}
```
@Schillman I am neither a Microsoft employee nor do I have deep knowledge of Terraform. Most likely Terraform does some scripting underneath and uses a different API than the one mentioned. I think there is an API around drafts that lets you create a draft by providing the content and then publish that draft. That API behaves like an action (for example, like starting a VM) and does not map cleanly to Bicep. Terraform has the advantage of being able to script, but it also requires constant provider updates to expose new Azure features, whereas Bicep works natively and doesn't need such updates. You could probably achieve what Terraform does by using deployment scripts, though you would need to write your own code to call the API; I believe Az PowerShell uses that API as well.
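A rough sketch of the deployment-script route mentioned above. It assumes the Azure CLI `automation` extension (with its `az automation runbook replace-content` and `az automation runbook publish` commands), a runbook already created as a draft, and a hypothetical user-assigned identity with rights on the Automation account; treat it as untested:

```bicep
// Hypothetical pre-existing identity with Contributor on the Automation account
resource scriptIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' existing = {
  name: 'runbook-deploy-identity'
}

resource uploadRunbookContent 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: 'upload-runbook-content'
  location: location
  kind: 'AzureCLI'
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${scriptIdentity.id}': {}
    }
  }
  properties: {
    azCliVersion: '2.47.0'
    retentionInterval: 'PT1H'
    environmentVariables: [
      {
        // loadTextContent embeds the local file into the template at compile time
        name: 'RUNBOOK_CONTENT'
        value: loadTextContent('PowerShellScript.ps1')
      }
    ]
    scriptContent: '''
      az extension add --name automation
      az automation runbook replace-content --automation-account-name test-aa-00 --resource-group my-rg --name runbooktest --content "$RUNBOOK_CONTENT"
      az automation runbook publish --automation-account-name test-aa-00 --resource-group my-rg --name runbooktest
    '''
  }
}
```

Note the caveat from the comment above still applies: loadTextContent has a size cap and the compiled ARM template itself is limited to 4 MB, so this only works for reasonably small scripts.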
Is there any update on this?
+1
+1
There's an issue logged in the MS Docs team too: https://github.com/MicrosoftDocs/azure-docs/issues/115636
@slavizh, it's interesting to note that Terraform handles this flawlessly, i.e. being able to read the content from a file and upload that to the runbook. Is there a specific reason why Bicep wouldn't be able to achieve the same functionality?
I'd guess because Bicep is a language for creating ARM templates, which drive the resource plane of Azure, not the data plane, if that conceptually makes sense. Terraform does not deploy through ARM templates; instead it performs its own diffs of your resources and then makes direct API calls to change resource configuration. Since it makes API calls, it can also call data-plane APIs.
Which again is why I feel Bicep is the best-of-breed choice when doing IaC. It is literally desired-state configuration implemented by the team operating each resource type in Azure (like Microsoft.Compute/virtualMachines). With Terraform, you get desired-state configuration minus whatever the APIs don't offer, minus any mistakes the intermediary HashiCorp makes, and minus some lead time.