JetBrains/teamcity-azure-agent

Plugin monitors only one agent on a VM / Agent registration issues

Closed this issue · 4 comments

Description

We have 3 TeamCity agents configured on a single Azure VM that is added to a cloud profile.
The agents register and work normally, but the plugin tracks the status of only one of them. As a result, when that particular agent is idling while the other two are working, TeamCity shuts down the VM, interrupting the running builds.

Environment

  • TeamCity version: TeamCity Professional 2018.1.3 (build 58658)
  • Azure plugin version: 0.8.8

Diagnostic logs

Logs from one registering agent:

[2019-02-25 17:49:52,871] INFO - buildServer.AGENT.registration - Registering on server via URL "https://teamcity.xxxxxxxx.com": AgentDetails{Name='linux-agent-3', AgentId=null, BuildId=null, AgentOwnAddress='null', AlternativeAddresses=[172.17.0.5, xx.xx.x.xxx], Port=9090, Version='58658', PluginsVersion='58658-md5-3dcc7073570be637632077885ca1930e', AvailableRunners=[allureReportGeneratorRunner, Ant, cargo-deploy-runner, DockerCommand, DockerCompose, dotnet.cli, Duplicator, ftp-deploy-runner, gradle-runner, Inspection, jb.nuget.installer, jb.nuget.pack, jb.nuget.publish, jetbrains.dotNetGenericRunner, jetbrains.helm, jetbrains_powershell, jonnyzzz.bower, jonnyzzz.grunt, jonnyzzz.gulp, jonnyzzz.node, jonnyzzz.npm, jonnyzzz.nvm, jonnyzzz.phantom, jonnyzzz.yarn, JPS, Maven2, MSBuild, NAnt, NUnit, python, rake-runner, SBT, simpleRunner, sln2003, smb-deploy-runner, ssh-deploy-runner, ssh-exec-runner, VS.Solution], AvailableVcs=[tfs, cvs, jetbrains.git, mercurial, svn, perforce], AuthorizationToken='xxxxxxxxxxxxxxxx', PingCode='lpe4klzHLFULew0p0SBtlgG1dFiNdpXj'}

Logs from the main TC server. You can see that the agents start sequentially and 'push' each other out; eventually only the last one started remains tracked by TeamCity.

[2019-02-25 18:25:43,467] INFO [o-8111-exec-234] - r.impl.DBCloudStateManagerImpl - Image: AzureCloudImage{myName='TC-LIN-AGNT-EUS'}, profile: profile 'Auto off'{id=arm-2, projectId=_Root} was marked to CONTAIN agent
[2019-02-25 18:25:43,469] INFO [o-8111-exec-234] - .server.impl.CloudEventsLogger - Detected cloud agent "linux-agent-1" {id=119}, profile 'Auto off'{id=arm-2, projectId=_Root}, AzureCloudInstance{myName='TC-LIN-AGNT-EUS'}
[2019-02-25 18:27:13,214] INFO [o-8111-exec-231] - r.impl.DBCloudStateManagerImpl - Image: AzureCloudImage{myName='TC-LIN-AGNT-EUS'}, profile: profile 'Auto off'{id=arm-2, projectId=_Root} was marked to CONTAIN agent
[2019-02-25 18:27:13,215] INFO [o-8111-exec-231] - .server.impl.CloudEventsLogger - Detected cloud agent "linux-agent-2" {id=120}, profile 'Auto off'{id=arm-2, projectId=_Root}, AzureCloudInstance{myName='TC-LIN-AGNT-EUS'}

The 3 agents use different ports, different work, temp and system directories, and different authorizationToken values.
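For reference, the per-agent separation described above maps onto each agent's conf/buildAgent.properties. This is only a sketch: the server URL and token are placeholders, and the directory paths are illustrative.

```properties
# conf/buildAgent.properties for the second agent (placeholder values)
serverUrl=https://teamcity.example.com
name=linux-agent-2
ownPort=9091
workDir=../agent2/work
tempDir=../agent2/temp
systemDir=../agent2/system
authorizationToken=xxxxxxxxxxxxxxxx
```

Each of the three agents would get its own copy of this file with a unique name, ownPort, directory set, and token.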

In the attached screenshots you can see that the cloud profile is tracking linux-agent-3, which appears to be disconnected, while linux-agent-1, which is running and identified as belonging to the cloud profile in question, is not listed on the cloud profile page.

azure_cloud_profile_status
agent_status

@IgorRogDevPro, this is expected behavior. TeamCity cloud integrations are intended to spin up one build agent per cloud image instance.

The recommended way to set up Azure cloud agents in TeamCity is to use one of the available image types: https://github.com/JetBrains/teamcity-azure-agent/wiki#image-types

@dtretyakov Thanks for the quick response.
All understood; however, I'd like to explain our reasoning for running several agents on one machine, in case an expansion of the current functionality is ever considered.

Why Docker?

  1. It is an easy way to limit the agents' access to OS files, for example.
  2. Build environments are easy to maintain and replicate via Docker images.
  3. Cost effectiveness: a heavy build in a Docker container can use the full power of the underlying VM when no other builds are running. With multiple less powerful VMs we could not combine their power for a single build. Azure Container Instances might be a solution, but their pricing seems comparable to higher-priced VMs.
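As an illustration of the Docker-based layout described above, several agents on one VM could be defined with Docker Compose using the official jetbrains/teamcity-agent image. This is only a sketch: the server URL is a placeholder, and the volume names are arbitrary.

```yaml
version: "3"
services:
  agent-1:
    image: jetbrains/teamcity-agent
    environment:
      SERVER_URL: "https://teamcity.example.com"   # placeholder server URL
      AGENT_NAME: "linux-agent-1"
    volumes:
      - agent1_conf:/data/teamcity_agent/conf      # keep each agent's config separate
  agent-2:
    image: jetbrains/teamcity-agent
    environment:
      SERVER_URL: "https://teamcity.example.com"
      AGENT_NAME: "linux-agent-2"
    volumes:
      - agent2_conf:/data/teamcity_agent/conf
volumes:
  agent1_conf:
  agent2_conf:
```

With dedicated config volumes, each container keeps its own name and authorization token across restarts.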
In any case, thanks again for the response, much appreciated!

@IgorRogDevPro, it looks like the best option in terms of price/performance is to create a managed image, enable "Reuse allocated virtual machines", and configure an appropriate VM size.

@dtretyakov Got it, thanks for the suggestion. I'm closing the issue.