Output of tasks is a false "ok"
I'm currently working on automating the configuration of our network switches with Ansible.
While running a simple playbook on the switches, I noticed false-positive output from switches that are not even in the network yet.
The playbook is the following:
- hosts: all
  collections:
    - arubanetworks.aos_switch
  tasks:
    - name: Execute show flash on the switch
      arubaoss_command:
        commands: ['show flash']
The inventory looks like this:
all:
  children:
    switch:
      hosts:
        switch01:
          ansible_connection: network_cli
          ansible_host: 172.16.XX.XX
          ansible_network_os: arubanetworks.aos_switch.arubaoss
          ansible_password: XXXXXXXXXXXX
          ansible_python_interpreter: /usr/bin/python3
          ansible_user: XXXXXXXXXX
All switches report "ok". Only with the verbose parameter can I see that they are failing.
With the verbose parameter, the output of an unreachable switch looks like this:
ok: [switch02] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_version": "None",
            "commands": [
                "show flash"
            ],
            "host": null,
            "interval": 1,
            "match": "all",
            "output_file": null,
            "password": null,
            "port": null,
            "provider": null,
            "retries": 10,
            "ssh_keyfile": null,
            "timeout": null,
            "use_ssl": null,
            "username": null,
            "validate_certs": false,
            "wait_for": null
        }
    },
    "stdout": [
        "[Errno None] Unable to connect to port 22 on 172.16.XX.XXX"
    ],
    "stdout_lines": [
        [
            "[Errno None] Unable to connect to port 22 on 172.16.XX.XXX"
        ]
    ]
}
Is this a known issue?
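One way to surface this as a real failure, until the module behavior changes, is to fail the task explicitly when the connection error shows up in stdout. The following is only a workaround sketch, assuming the error text always lands in stdout as in the output above; the registered variable name and the matched string are illustrative, not part of the collection:

- name: Execute show flash on the switch
  arubaoss_command:
    commands: ['show flash']
  register: flash_result
  # Workaround sketch: the module reports "ok" even when it cannot connect,
  # so treat the "Unable to connect" text in stdout as a task failure.
  failed_when: "'Unable to connect' in flash_result.stdout | join(' ')"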
Hi @LedaxPia! Are you setting the environment variables outlined here: https://github.com/aruba/aos-switch-ansible-collection#setting-environment-variables
Yes.
Can you SSH to the switch from the command line? Are you also setting host_key_checking = false in your ansible.cfg?
Make sure you're using the same user when SSH-ing as you are in the Ansible playbook.
Hey there!
So my ansible.cfg looks like this:
[defaults]
private_key_file = ~/.ssh/ansible_rsa
host_key_checking = false
NETWORK_GROUP_MODULES = arubaoss
I can SSH to the switches that are in our network environment. To the ones that are not, I obviously can't, but the playbook still reports "ok" instead of "unreachable" or similar. Only with -v can I see that the tasks failed.
Okay, thank you. Can you please share the versions of Ansible, Python, and AOS-Switch you're using, as well as the switch platform? I'll work on reproducing this in my setup and see what could be happening.
CentOS 7 with Ansible 2.9.27 and Python 3.6.8.
(I also tried it with Ubuntu 21.10, ansible [core 2.12.1], and Python 3.9.7.)
Mostly Aruba 2530 8G PoE+ switches running YA.16.10.0016, but also Aruba 2530 24G and HP ProCurve 2915-8G-PoE.
Thanks for looking into this!
@LedaxPia, can you remove this line from your ansible.cfg file: private_key_file = ~/.ssh/ansible_rsa
Once I removed the equivalent setting on my end, my playbook ran successfully.
Unfortunately, this didn't help either. :/
Is there any more information I could give you to analyze the issue?
Hmm, there's definitely something going on with the SSH connection, but I'm not sure what else it could be. Do you have an Aruba SE you work with for your account? If you have them send an email to aruba-automation@hpe.com, perhaps we can get on a call to troubleshoot?
Hey there,
I mostly worked around this problem by using REST instead of SSH to connect to the switches. REST gives the output I'm expecting. (Running the REST tasks before the SSH tasks in my playbook means the unreachable switches never make it to the SSH tasks, and the output is correct.)
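For anyone who needs the SSH tasks to skip unreachable switches without the REST detour, a reachability pre-check in plain Ansible can act as a similar gate. This is a sketch, not part of the aos_switch collection; it assumes the inventory sets ansible_host as shown above, and the timeout value is arbitrary:

- hosts: all
  gather_facts: false
  tasks:
    # Probe the switch's SSH port from the control node first. Hosts that
    # fail this check are marked failed and never run the tasks below.
    - name: Check that SSH on the switch is reachable
      wait_for:
        host: "{{ ansible_host }}"
        port: 22
        timeout: 10
      delegate_to: localhost

    - name: Execute show flash on the switch
      arubaoss_command:
        commands: ['show flash']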