pillar.item "key" and pillar.get "key" can return different results
belvedere-trading-user opened this issue · 25 comments
When pillar data is updated, `pillar.get` still returns the old data, while `pillar.item` returns the new data. `pillar.get` returns the outdated pillar information even after issuing a `saltutil.refresh_pillar` or a `saltutil.clear_cache`. Restarting the minion resolves the issue.
Tested with the following:
- salt 2015.2.0rc2
- CentOS 6.6 salt master
- ~40 CentOS 6.2-6.6 minions
- ~70 Windows minions (combination of workstations/servers)
All minions show the problem when pillar data is changed.
Initial pillar (example):

```yaml
python:
  pip_server: http://pypi:80/
  Linux:
    packages:
      - croniter
      - python-dateutil
    temp_dir: /tmp
  Windows:
    packages:
      - croniter
      - python-dateutil
    temp_dir: "c:\\temp"
```
Updated pillar (example):

```yaml
python:
  pip_server: http://pypi:80/
  Linux:
    versioned_packages:
      croniter: '0.3.5'
      python-dateutil: '2.3'
      virtualenv: '1.11.6'
    unversioned_packages:
    removed_packages:
    temp_dir: /tmp
  Windows:
    versioned_packages:
      croniter: '0.3.5'
      python-dateutil: '2.3'
      virtualenv: '1.11.6'
    temp_dir: "c:\\temp"
```
Looking at the code, `pillar.get` uses the in-memory pillar data, whereas `pillar.item` (indirectly) uses `salt.pillar.get_pillar`. I did a quick local patch to see whether updating `pillar.get` to fetch pillar data the same way `pillar.item` does fixes the issue. It does. Assuming nobody has issues with it, I'll submit a patch for this.
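For readers following along, the difference between the two code paths can be sketched in plain Python. This is a hypothetical simplification, not the actual `salt/modules/pillar.py` source: `pillar.get` reads a cached in-memory dict, while `pillar.item` recompiles pillar data from the master, so the two can diverge whenever the cache is stale.

```python
# Hypothetical simplification of the two code paths (not the real Salt source).
# The cached copy goes stale until something refreshes it.
cached_pillar = {"python": {"pip_server": "http://pypi:80/"}}   # stale copy

def compile_pillar():
    # Stand-in for salt.pillar.get_pillar(...).compile_pillar(),
    # which re-renders pillar from the master's pillar_roots.
    return {"python": {"pip_server": "http://pypi:8080/"}}       # fresh copy

def pillar_get(key):
    # Reads the cached in-memory pillar, like pillar.get.
    return cached_pillar.get(key)

def pillar_item(key):
    # Recompiles pillar on every call, like pillar.item.
    return compile_pillar().get(key)

print(pillar_get("python"))   # stale data from the cache
print(pillar_item("python"))  # freshly compiled data
```

Under this model, the bug report amounts to: `refresh_pillar` should replace `cached_pillar` with a freshly compiled copy, but on this setup it doesn't.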
@belvedere-trading, a patch with the changes you describe would be most welcome.
I can reproduce this right up to the point where you say that a `saltutil.refresh_pillar` does not cause subsequent calls to `pillar.get` to return the right data. In my testing thus far, a `refresh_pillar` does refresh the minion pillar and causes `pillar.get` to return the updated data.

Does `saltutil.refresh_pillar` have any effect for you on the minion's view of its own pillar data, or is it just completely broken? Also, are you calling `saltutil.refresh_pillar` as a standalone call, or as part of a state run or something?
We should also note that `pillar.get` should not use the same behavior as `pillar.item`, because initiating a pillar refresh every time a `pillar.get` is called would destroy the performance of the state system. `pillar.get` and `pillar.item` not matching is expected behavior under some circumstances -- however, a `saltutil.refresh_pillar` should eliminate any discrepancies, so that's the only part that's potentially a bug.
We are not doing a pillar refresh from within a state. All of the examples above are directly from the CLI.
I took the following steps:

- No changes made to Pillar data:
  - `salt saltmaster1 pillar.item python`
  - `salt saltmaster1 pillar.get python`

  Both commands retrieve the same data.
- Update the Pillar data:
  - `salt saltmaster1 pillar.item python`
  - `salt saltmaster1 pillar.get python`

  `pillar.item` gets the correct (updated) data. `pillar.get` retrieves the old data.
- Refresh the pillar data:
  - `salt saltmaster1 saltutil.refresh_pillar`
  - `salt saltmaster1 pillar.item python`
  - `salt saltmaster1 pillar.get python`

  `pillar.item` gets the correct (updated) data. `pillar.get` retrieves the old data.
- Restart the salt-master:
  - `service salt-master restart`
  - `salt saltmaster1 pillar.item python`
  - `salt saltmaster1 pillar.get python`

  `pillar.item` gets the correct (updated) data. `pillar.get` retrieves the old data.
- Restart the salt-minion:
  - `service salt-minion restart`
  - `salt saltmaster1 pillar.item python`
  - `salt saltmaster1 pillar.get python`

  Both `pillar.item` and `pillar.get` retrieve the correct (updated) data.

So it appears that only restarting the minion corrects the data.
What are the circumstances under which `pillar.get` will have different information from `pillar.item`? Using `pillar.get` in states is desirable for something like:

```jinja
{%- if salt['pillar.get']('python:Linux:removed_packages', None) != None %}
{%- endif %}
```

Otherwise you end up needing to validate that the pillar exists as you think it does, layer by layer.
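The colon-delimited key with a default is what makes this convenient. As a rough illustration of the lookup semantics (plain Python with a hypothetical helper, not Salt's actual implementation), `pillar.get` walks the nested dicts and falls back to the default at the first missing level, sparing you the layer-by-layer checks:

```python
def pillar_lookup(pillar, key, default=None, delimiter=":"):
    """Traverse nested dicts with a colon-delimited key, mimicking pillar.get."""
    value = pillar
    for part in key.split(delimiter):
        if isinstance(value, dict) and part in value:
            value = value[part]
        else:
            # Any missing level yields the default instead of a KeyError.
            return default
    return value

pillar = {"python": {"Linux": {"temp_dir": "/tmp"}}}
print(pillar_lookup(pillar, "python:Linux:temp_dir"))          # /tmp
print(pillar_lookup(pillar, "python:Linux:removed_packages"))  # None
```

The manual alternative would be chaining `in` checks for `python`, then `Linux`, then `removed_packages`, which is exactly the boilerplate the default argument avoids.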
Is this multi-master?
The tests above are a single machine Vagrant setup. However, the minion config file specifies the master to connect to (we're moving towards a multi-master setup).
The minion config file looks like:

```yaml
id: saltmaster1
master:
  - saltmaster1
pki_dir: /etc/salt/pki/minion
cachedir: /var/cache/salt/minion
sock_dir: /var/run/salt/minion
log_file: /var/log/salt/minion
key_logfile: /var/log/salt/key
log_level: debug
log_level_logfile: debug
```
How are you calling `pillar.get` and `pillar.items`? Directly, or inside Jinja in a state file?
The examples above were from the CLI. The use case that caused me to notice the issue was Jinja within a state file (see above for an example).
I can't reproduce this at all. I've tried every way I can think of. Could I see the master config?
Master config below:

```yaml
pki_dir: /etc/salt/pki/master
cachedir: /var/cache/salt/master
keep_jobs: 24
timeout: 60
auto_accept: True
autosign_file: /etc/salt/autosign.conf
runner_dirs:
  - /srv/salt/dev/runners
worker_threads: 5
gather_job_timeout: 2
file_roots:
  base:
    - /srv/salt
  prod:
    - /srv/salt
  qa:
    - /srv/salt
  dev:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillarroot/base
  dev:
    - /srv/pillarroot/dev
  qa:
    - /srv/pillarroot/qa
  prod:
    - /srv/pillarroot/prod
log_level: warning
log_level_logfile: warning
peer:
  .*dev.*:
    - pkg\.(version)
    - service\.(status|start|stop|restart)
    - file\.(find|remove)
    - laser\.(deploy|get_service_pid|get_pmap_from_pid|get_service_pmap)
```
Also, I'm Deevolution in IRC if you have immediate questions.
I ran an experiment this morning (based on your question above regarding multi-master). The problem goes away if I remove:

```yaml
master:
  - saltmaster1
```

from the minion configuration (a `refresh_pillar` works without the above configuration present).
Another data point: `salt-call` works correctly (it respects changes to the pillar data following a `saltutil.refresh_pillar`, using either `pillar.item` or `pillar.get`). Commands from the salt master still show old data from `pillar.get`.
Any luck with recreating this? From my investigation, I'm doubtful that `refresh_pillar` is working at all with the above setup; I've not been able to get it to log anything related to reloading the pillar dictionary.
I'm using 2014.7.5 and I see the same symptoms: when using `pillar.get` in salt `mine_functions`, I end up having to restart the minion for the mine data to be updated. I do not use a list in the `master` configuration option for the minions, just a standard named master host.
I've created a small Vagrant setup that recreates the issue. Please reference it here:
https://github.com/belvedere-trading/salt_issue_23391
If you have any issues with it, please let me know here, or via IRC.
I've updated to 2015.5.0 and I have the exact same issue. Removing the `master` stanza from the minion configuration doesn't seem to help either.
This is related to my ticket where highstate does not update pillar minion cache. #24050
Seeing the following behavior (2015.5.5):

- Update pillar on master.
- On master, `salt minion pillar.get my_key` returns stale data in 700 ms.
- On master, `salt minion pillar.items` returns fresh data in 2300 ms.
- On minion, `salt pillar.get my_key` returns fresh data in 3000 ms (!!).
- On minion, `salt pillar.items` returns fresh data in 4500 ms (!!!).

I don't see a reason not to refresh the cache automatically when files on the master change.
@basepi can you explain the reasoning behind this?
We don't do any file-watching on the master, as it's easy enough to set up inotify for that. Is that what you mean by "refresh cache automatically"?
Yes. Looks like doing a refresh on `git pull` is the way to go.
Yes, I'd be on board with that.
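One way to wire this up is with the inotify beacon and a reactor, so pillar-file changes on the master trigger a `refresh_pillar` automatically. The sketch below is illustrative only: the paths, event tag, and target are assumptions, and the exact beacon/reactor syntax should be checked against the docs for your Salt release.

```yaml
# Illustrative only -- paths, tags, and targets are assumptions.

# 1) Minion config on the master host: fire events when pillar source
#    files change (the inotify beacon requires pyinotify).
beacons:
  inotify:
    - files:
        /srv/pillarroot:
          mask:
            - modify
            - create
            - delete
          recurse: True

# 2) Master config: route those beacon events to a reactor SLS.
reactor:
  - 'salt/beacon/*/inotify//srv/pillarroot*':
    - /srv/reactor/refresh_pillar.sls

# 3) /srv/reactor/refresh_pillar.sls: refresh pillar on all minions.
refresh_pillar:
  local.saltutil.refresh_pillar:
    - tgt: '*'
```

If pillar lives in git instead, a post-merge hook that runs `salt '*' saltutil.refresh_pillar` accomplishes the same thing without the beacon.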
I have a similar issue when using `pillar.item <pillar>` and `pillar.items`:

- `salt 'minion' pillar.item <pillar>`: shows the old value
- `salt 'minion' pillar.items`: shows the actual/new value
- `salt-call pillar.item <pillar>` on the minion: shows the actual/new value

A `salt 'minion' saltutil.refresh_pillar` helps to fix this behavior, and afterwards a `state.apply` also uses the correct pillar data. But it's odd anyway, because we don't know where this will break something the next time we change pillars and run highstates.

We are on 2015.8.0.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.