Chrony servers
ArseniiPetrovich opened this issue
Since deployment-playbooks is going to support more cloud providers than just AWS, the chrony.yml playbook in the preconf role should be rewritten. Currently it configures only one NTP server, which uses an AWS link-local IP (169.254.169.123) that is not reachable from non-AWS VMs.
I think we can either use another NTP server that is commonly available across cloud providers, or set up some kind of conditional configuration.
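As one illustration, here is a minimal sketch (hypothetical, not from the playbook) of a provider-neutral default that points chrony at the public NTP pool instead of the AWS link-local address:

```yaml
# Hypothetical provider-neutral task: the public pool.ntp.org
# servers are reachable from any cloud, unlike 169.254.169.123.
- name: Preconf.Chrony - use the public NTP pool (sketch)
  become: yes
  become_user: root
  lineinfile:
    dest: /etc/chrony/chrony.conf
    line: 'pool pool.ntp.org iburst'
    state: present
```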
@phahulin, what do you think?
Good catch. I think we should try to detect whether ntp is installed and do nothing if it is (most probably it was preconfigured by the hosting provider); if it isn't, we can install chrony, and if it's AWS, we can select the NTP server above. I did something like this in another playbook, maybe this code can help:
```yaml
---
# `systemctl is-active` exits 0 when the unit is active and
# non-zero otherwise (3 for inactive/unknown units), so both
# codes are treated as success here.
- name: Preconf.Chrony - check if NTP is installed
  become: yes
  become_user: root
  command: "systemctl is-active ntp"
  register: ntp_active
  failed_when: ntp_active.rc not in [0, 3]

- name: Chrony
  become: yes
  become_user: root
  when: ntp_active.stdout == "inactive"
  block:
    - name: Preconf.Chrony - install package
      apt:
        name: chrony
        update_cache: yes

    # AWS detection: EC2 instances typically report an "amazon"
    # string in the BIOS version fact.
    - name: Preconf.Chrony - select amazon time server (for AWS)
      lineinfile:
        dest: /etc/chrony/chrony.conf
        insertafter: '^pool'
        line: 'server 169.254.169.123 prefer iburst'
        state: present
      notify:
        - restart chrony
      when: ansible_bios_version is search("amazon")

    - name: Preconf.Chrony - ensure chrony is running and enabled to start at boot
      service:
        name: chrony
        state: started
        enabled: yes
```
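The `notify: restart chrony` above assumes a handler defined elsewhere in the role; a minimal sketch of what such a handler could look like (hypothetical, the handler itself is not shown in this thread):

```yaml
# Hypothetical handlers/main.yml entry assumed by the
# `notify: restart chrony` line above.
- name: restart chrony
  become: yes
  become_user: root
  service:
    name: chrony
    state: restarted
```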
@phahulin Yes, that will work. But could you tell me, please, what the reason is for using chrony instead of ntp? I've found that ntp.yml is included in the preconf role, but is not included in the main.yml playbook (and, accordingly, is never executed).
chrony seems to be more feature-rich (https://chrony.tuxfamily.org/comparison.html), and it's also recommended by AWS, where most of our infrastructure is at the moment.
Should I remove the ntp.yml playbook then, or leave it as is?
Yes, I think you can remove it.
Addressed in #144; it looks like this can be closed.