schema validation fails on checkout, download and publish steps, as well as parallel count
If you use the schema returned by this API call to check a YAML file generated by the `previewRun` REST API call here, you will get failures on a few key steps if they are part of your pipeline:
- checkout (shows up as `6D15AF64-176C-496D-B583-FD2AE21D4DF4@1`)
- download (shows up as `30f35852-3f7e-4c0c-9a88-e127b4f97211@1`)
- publish (shows up as `ECDC45F6-832D-4AD9-B52B-EE49E94659BE@1`)
- pool->strategy->parallel (expects a string, but `previewRun` returns an integer)
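For context, this is roughly how the failure shows up (a minimal Python sketch using requests/PyYAML/jsonschema, not an official client; the org/project/pipeline values are placeholders, and the routes and api-version strings are my assumptions, so adjust to your environment):

```python
# Placeholder values and assumed routes/api-versions; not copied from the docs.
import base64

import requests                  # pip install requests
import yaml                      # pip install pyyaml
from jsonschema import validate  # pip install jsonschema

ORG, PROJECT, PIPELINE_ID, PAT = "myorg", "myproject", 42, "<pat>"
AUTH = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}

# 1. Fetch the pipeline YAML schema (the API documented on this page).
schema = requests.get(
    f"https://dev.azure.com/{ORG}/_apis/distributedtask/yamlschema?api-version=7.1-preview.1",
    headers=AUTH,
).json()

# 2. Ask previewRun for the final expanded YAML without queuing a run.
preview = requests.post(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/{PIPELINE_ID}/preview?api-version=7.1-preview.1",
    headers=AUTH,
    json={"previewRun": True},
).json()

# 3. Validate the expanded YAML against the schema; this is where the
#    checkout/download/publish GUID tasks listed above raise ValidationError.
validate(instance=yaml.safe_load(preview["finalYaml"]), schema=schema)
```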
Is there a recommended transformation we should apply to the output of `previewRun` to get it to pass schema validation? I'm not sure if this is more a bug with `previewRun` or with `yamlSchema`, but I figured I'd start here.
Oddly enough, it seems to replace the `script` tasks with `CmdLine@2`, which keeps those from failing schema checking. If `previewRun` could do that for the 3 items above, that would help a lot!
Document Details
⚠ Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.
- ID: 7088ca0e-2a09-c589-ee98-bbffb23e8f74
- Version Independent ID: 1649fdea-5b71-edf9-a943-cbb49eebf832
- Content: Yamlschema - Get - REST API (Azure DevOps Distributed Task)
- Content Source: docs-ref-autogen/7.1/distributedTask/Yamlschema/Get.yml
- Service: azure-devops
- Sub-service: azure-devops-ref
- GitHub Login: @wnjenkin
- Microsoft Alias: whjenkin
Here is another MSFT document that details which task GUIDs apply to which task. Notably, the GUID for download does NOT match what `previewRun` is currently providing (though neither would pass the schema validation).
Here is a pretty minimal version of a pipeline YAML file that reproduces the issues mentioned:
```yaml
pool:
  vmImage: ubuntu-latest
strategy:
  parallel: "2"
steps:
- checkout: self
  submodules: recursive
- publish: azure-pipelines.yml
  artifact: exampleartifact
- download: current
  artifact: exampleartifact
```
When you call `previewRun`, it exports this as:
```json
{
  "stages": [
    {
      "stage": "__default",
      "jobs": [
        {
          "job": "Job",
          "pool": {
            "vmImage": "ubuntu-latest"
          },
          "strategy": {
            "parallel": "2"
          },
          "steps": [
            {
              "task": "6d15af64-176c-496d-b583-fd2ae21d4df4@1",
              "inputs": {
                "repository": "self",
                "submodules": "recursive"
              }
            },
            {
              "task": "ecdc45f6-832d-4ad9-b52b-ee49e94659be@1",
              "inputs": {
                "path": "azure-pipelines.yml",
                "artifactName": "exampleartifact"
              }
            },
            {
              "task": "30f35852-3f7e-4c0c-9a88-e127b4f97211@1",
              "inputs": {
                "alias": "current",
                "artifact": "exampleartifact"
              }
            }
          ]
        }
      ]
    }
  ]
}
```
which will fail schema compliance checking, because none of those tasks exist in the schema.
In order to pass schema compliance checking, I had to turn the above steps into this:
"steps": [
{
"checkout": "self",
"submodules": "recursive"
},
{
"task": "PublishPipelineArtifact@1",
"inputs": {
"targetPath": "azure-pipeline.yml"
}
},
{
"task": "DownloadPipelineArtifact@1",
"inputs": {
"artifactName": "exampleartifact"
}
}
]
which is a non-trivial transformation. As you can see, checkout lost its `inputs` object entirely; publish had to rename `path` to `targetPath` and drop `artifactName`; and download had to drop `alias` and rename `artifact` to `artifactName`. I would think `previewRun` should export it this way in the first place, not require an after-the-fact transform to make it schema-compliant.
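For reference, that after-the-fact transform can be sketched roughly like this (Python; it only covers the three step types in this example, keys on the GUIDs `previewRun` emitted here, and the helper name and structure are just illustrative, not an official mapping):

```python
# Rough sketch of the transformation described above. It only knows about the
# three GUID-style steps previewRun emitted for this particular pipeline.
def to_schema_compliant(step: dict) -> dict:
    guid = step.get("task", "").split("@")[0].lower()
    inputs = step.get("inputs", {})

    if guid == "6d15af64-176c-496d-b583-fd2ae21d4df4":   # checkout
        # checkout keeps its options at the step level, with no inputs object
        return {"checkout": inputs.get("repository", "self"),
                **{k: v for k, v in inputs.items() if k != "repository"}}

    if guid == "ecdc45f6-832d-4ad9-b52b-ee49e94659be":   # publish
        # path -> targetPath; artifactName is dropped
        return {"task": "PublishPipelineArtifact@1",
                "inputs": {"targetPath": inputs["path"]}}

    if guid == "30f35852-3f7e-4c0c-9a88-e127b4f97211":   # download
        # alias is dropped; artifact -> artifactName
        return {"task": "DownloadPipelineArtifact@1",
                "inputs": {"artifactName": inputs["artifact"]}}

    return step  # leave anything else (e.g. CmdLine@2) untouched
```

Applied as something like `[to_schema_compliant(s) for s in job["steps"]]` before validation, but it clearly shouldn't be the caller's job to maintain that mapping.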