parameter 'enumerable' expects an Iterable value, got Undef (file: /etc/puppetlabs/code/environments/production/modules/classifier/functions/has_interface_detail.pp)
Closed this issue · 13 comments
Hello!
I encountered a problem while using the has_ip_network operator. My environment:
OS: CentOS 7.5.1804
Puppet agent: 6.10.1
Puppet server: 6.7.2
The relevant puppet-classifier configuration:
classifier::rules:
  test:
    rules:
      - operator: has_ip_network
        value: 10.132.32.0
    data:
      dc_config: test
This leads to the following error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Method call, 'map' expects one of:
(Hash hash, Callable[2, 2] block)
rejected: parameter 'hash' expects a Hash value, got Undef
(Hash hash, Callable[1, 1] block)
rejected: parameter 'hash' expects a Hash value, got Undef
(Iterable enumerable, Callable[2, 2] block)
rejected: parameter 'enumerable' expects an Iterable value, got Undef
(Iterable enumerable, Callable[1, 1] block)
rejected: parameter 'enumerable' expects an Iterable value, got Undef (file: /etc/puppetlabs/code/environments/production/modules/classifier/functions/has_interface_detail.pp, line: 30, column: 27) on node elk-5.srv
I would appreciate any tips. Thanks for your work!
Update:
This problem is not observed with simple network settings, for example:
% ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 46:e2:d7:3a:92:58 brd ff:ff:ff:ff:ff:ff
inet 10.129.32.54/24 brd 10.129.32.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::171e:2bf:dd23:29b3/64 scope link noprefixroute
valid_lft forever preferred_lft forever
But the problem does arise in cases such as this:
% ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 24:6e:96:ad:5d:b6 brd ff:ff:ff:ff:ff:ff
3: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 24:6e:96:ad:5d:d6 brd ff:ff:ff:ff:ff:ff
inet 195.201.111.64/25 brd 195.201.188.127 scope global eno3
valid_lft forever preferred_lft forever
inet6 fe80::266e:96ff:fead:5dd6/64 scope link
valid_lft forever preferred_lft forever
4: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 24:6e:96:ad:5d:b8 brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 24:6e:96:ad:5d:d7 brd ff:ff:ff:ff:ff:ff
inet 192.168.254.2/24 brd 192.168.254.255 scope global eno4
valid_lft forever preferred_lft forever
inet6 fe80::266e:96ff:fead:5dd7/64 scope link
valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:be:d4:43:28 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
7: br-479b2c09c2dd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:62:6d:e2:3b brd ff:ff:ff:ff:ff:ff
inet 192.168.80.1/20 brd 192.168.95.255 scope global br-479b2c09c2dd
valid_lft forever preferred_lft forever
inet6 fe80::42:62ff:fe6d:e23b/64 scope link
valid_lft forever preferred_lft forever
8: br-4dfc7ea6cb34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:bf:02:9c:29 brd ff:ff:ff:ff:ff:ff
inet 172.23.0.1/16 brd 172.23.255.255 scope global br-4dfc7ea6cb34
valid_lft forever preferred_lft forever
inet6 fe80::42:bfff:fe02:9c29/64 scope link
valid_lft forever preferred_lft forever
9: br-58f6efa7c4e4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:71:2e:36:24 brd ff:ff:ff:ff:ff:ff
inet 172.31.0.1/16 brd 172.31.255.255 scope global br-58f6efa7c4e4
valid_lft forever preferred_lft forever
inet6 fe80::42:71ff:fe2e:3624/64 scope link
valid_lft forever preferred_lft forever
10: br-8bb5de280933: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:32:fc:0f:25 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-8bb5de280933
valid_lft forever preferred_lft forever
inet6 fe80::42:32ff:fefc:f25/64 scope link
valid_lft forever preferred_lft forever
11: br-9fbd6a57f3e8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:14:55:8f:3f brd ff:ff:ff:ff:ff:ff
inet 172.22.0.1/16 brd 172.22.255.255 scope global br-9fbd6a57f3e8
valid_lft forever preferred_lft forever
inet6 fe80::42:14ff:fe55:8f3f/64 scope link
valid_lft forever preferred_lft forever
12: br-d9e62e9cacf4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:6c:0a:a6:4d brd ff:ff:ff:ff:ff:ff
inet 172.24.0.1/16 brd 172.24.255.255 scope global br-d9e62e9cacf4
valid_lft forever preferred_lft forever
inet6 fe80::42:6cff:fe0a:a64d/64 scope link
valid_lft forever preferred_lft forever
13: br-dbe74ccffee5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:5b:cb:30:71 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 brd 172.19.255.255 scope global br-dbe74ccffee5
valid_lft forever preferred_lft forever
inet6 fe80::42:5bff:fecb:3071/64 scope link
valid_lft forever preferred_lft forever
15: veth6edf45d@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8bb5de280933 state UP group default
link/ether 76:f3:9c:59:38:84 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::74f3:9cff:fe59:3884/64 scope link
valid_lft forever preferred_lft forever
17: veth6f59470@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-9fbd6a57f3e8 state UP group default
link/ether 3a:92:5e:6d:e0:d6 brd ff:ff:ff:ff:ff:ff link-netnsid 7
inet6 fe80::3892:5eff:fe6d:e0d6/64 scope link
valid_lft forever preferred_lft forever
19: veth5a98583@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-dbe74ccffee5 state UP group default
link/ether 92:5c:bb:8c:ce:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::905c:bbff:fe8c:ce45/64 scope link
valid_lft forever preferred_lft forever
21: veth1decdf8@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-58f6efa7c4e4 state UP group default
link/ether 46:fd:4d:10:57:af brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::44fd:4dff:fe10:57af/64 scope link
valid_lft forever preferred_lft forever
23: vethf0518eb@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8bb5de280933 state UP group default
link/ether be:a6:bc:49:e1:00 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::bca6:bcff:fe49:e100/64 scope link
valid_lft forever preferred_lft forever
25: veth7cd90d2@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8bb5de280933 state UP group default
link/ether 76:56:84:47:71:12 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::7456:84ff:fe47:7112/64 scope link
valid_lft forever preferred_lft forever
27: vethb20acd3@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d9e62e9cacf4 state UP group default
link/ether 96:4a:a6:b9:15:2f brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::944a:a6ff:feb9:152f/64 scope link
valid_lft forever preferred_lft forever
29: vethd03a566@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8bb5de280933 state UP group default
link/ether 82:45:1a:d7:44:ee brd ff:ff:ff:ff:ff:ff link-netnsid 5
inet6 fe80::8045:1aff:fed7:44ee/64 scope link
valid_lft forever preferred_lft forever
31: veth3df817d@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8bb5de280933 state UP group default
link/ether e6:c2:58:a7:b5:3c brd ff:ff:ff:ff:ff:ff link-netnsid 6
inet6 fe80::e4c2:58ff:fea7:b53c/64 scope link
valid_lft forever preferred_lft forever
33: veth10a6867@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-4dfc7ea6cb34 state UP group default
link/ether 9a:b0:64:ec:3a:71 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::98b0:64ff:feec:3a71/64 scope link
valid_lft forever preferred_lft forever
35: veth23a2c6b@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-58f6efa7c4e4 state UP group default
link/ether a2:f0:bd:3c:ae:1f brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::a0f0:bdff:fe3c:ae1f/64 scope link
valid_lft forever preferred_lft forever
37: veth480153c@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8bb5de280933 state UP group default
link/ether 32:cb:61:6b:2d:fa brd ff:ff:ff:ff:ff:ff link-netnsid 7
inet6 fe80::30cb:61ff:fe6b:2dfa/64 scope link
valid_lft forever preferred_lft forever
39: veth7cbb1ec@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-479b2c09c2dd state UP group default
link/ether 1a:67:4a:a2:19:fe brd ff:ff:ff:ff:ff:ff link-netnsid 6
inet6 fe80::1867:4aff:fea2:19fe/64 scope link
valid_lft forever preferred_lft forever
43: eno3.4000@eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default qlen 1000
link/ether 24:6e:96:ad:5d:d6 brd ff:ff:ff:ff:ff:ff
inet 10.132.32.6/24 brd 10.132.32.255 scope global eno3.4000
valid_lft forever preferred_lft forever
inet6 fe80::266e:96ff:fead:5dd6/64 scope link
valid_lft forever preferred_lft forever
% hostname -I
195.201.111.64 192.168.254.2 172.17.0.1 192.168.80.1 172.23.0.1 172.31.0.1 172.18.0.1 172.22.0.1 172.24.0.1 172.19.0.1 10.132.32.6
A sample from another host where has_interface_detail.pp fails:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 20:04:0f:e8:f6:44 brd ff:ff:ff:ff:ff:ff
inet MM.AA.SS.CC/24 brd MM.AA.SS.CC scope global em1
valid_lft forever preferred_lft forever
inet6 fe80::bc85:9a0d:67f2:ad43/64 scope link
valid_lft forever preferred_lft forever
3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 20:04:0f:e8:f6:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.255.9/24 brd 192.168.255.255 scope global em2
valid_lft forever preferred_lft forever
inet 10.129.32.14/24 brd 10.129.32.255 scope global em2:0
valid_lft forever preferred_lft forever
inet6 fe80::2204:fff:fee8:f645/64 scope link
valid_lft forever preferred_lft forever
4: em3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 20:04:0f:e8:f6:46 brd ff:ff:ff:ff:ff:ff
5: em4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 20:04:0f:e8:f6:47 brd ff:ff:ff:ff:ff:ff
Here <MM.AA.SS.CC> is a masked external address.
Yo, so the thing to look at is the facts: all this function does is walk the facts. For some reason the facts structure is different on these machines. Perhaps it's an older Facter, or it's just weird.
It's this function https://github.com/ripienaar/puppet-classifier/blob/master/functions/has_interface_detail.pp
Called here:
puppet-classifier/functions/evaluate_rule.pp
Lines 39 to 53 in 1646f03
You'll have to take a peek at what is failing in your facts; you can add some debugging there.
Well, I inspected the value of $ips = $facts["networking"]["interfaces"] in /etc/puppetlabs/code/environments/production/modules/classifier/functions/has_interface_detail.pp and I don't see anything suspicious in the data:
{em1 => {bindings => [{address => <MM.AA.SS.KK>, netmask => 255.255.255.0, network => <MM.AA.SS.KK>}], bindings6 => [{address => fe80::bc85:9a0d:67f2:ad43, netmask => ffff:ffff:ffff:ffff::, network => fe80::}], ip => <MM.AA.SS.KK>, ip6 => fe80::bc85:9a0d:67f2:ad43, mac => 20:04:0f:e8:f6:44, mtu => 1500, netmask => 255.255.255.0, netmask6 => ffff:ffff:ffff:ffff::, network => <MM.AA.SS.KK>, network6 => fe80::}, em2 => {bindings => [{address => 192.168.255.9, netmask => 255.255.255.0, network => 192.168.255.0}, {address => 10.129.32.14}], bindings6 => [{address => fe80::2204:fff:fee8:f645, netmask => ffff:ffff:ffff:ffff::, network => fe80::}], ip => 192.168.255.9, ip6 => fe80::2204:fff:fee8:f645, mac => 20:04:0f:e8:f6:45, mtu => 1500, netmask => 255.255.255.0, netmask6 => ffff:ffff:ffff:ffff::, network => 192.168.255.0, network6 => fe80::}, em2:0 => {bindings => [{address => 10.129.32.14, netmask => 255.255.255.0, network => 10.129.32.0}], ip => 10.129.32.14, netmask => 255.255.255.0, network => 10.129.32.0}, em3 => {mac => 20:04:0f:e8:f6:46, mtu => 1500}, em4 => {mac => 20:04:0f:e8:f6:47, mtu => 1500}, lo => {bindings => [{address => 127.0.0.1, netmask => 255.0.0.0, network => 127.0.0.0}], bindings6 => [{address => ::1, netmask => ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff, network => ::1}], ip => 127.0.0.1, ip6 => ::1, mtu => 65536, netmask => 255.0.0.0, netmask6 => ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff, network => 127.0.0.0, network6 => ::1}}
Also, I tried adding an is_hash() check for the bindings, but without success.
what do you mean without success?
The problem is the em2:0 interface has no bindings6,
so we need to detect this missing data and skip it before line 30 in that function
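For illustration, here is a hypothetical Python analogue (not the module's code, just a sketch of the same logic) showing why an unguarded map fails when an alias interface like em2:0 carries no bindings6 key:

```python
# Hypothetical Python analogue of the failing logic in has_interface_detail.pp:
# mapping over a binding family that may simply be absent for some interfaces.
interfaces = {
    "em2": {
        "bindings": [{"address": "192.168.255.9"}],
        "bindings6": [{"address": "fe80::2204:fff:fee8:f645"}],
    },
    # Alias interface from the facts above: note there is no "bindings6" key.
    "em2:0": {"bindings": [{"address": "10.129.32.14"}]},
}

def addresses_unsafe(interface):
    # Equivalent of calling `$interface[$bname].map` without a guard:
    # for em2:0, interface["bindings6"] does not exist, so this raises.
    # (In Puppet the lookup yields Undef and map rejects it instead.)
    return [b["address"]
            for bname in ("bindings", "bindings6")
            for b in interface[bname]]

try:
    addresses_unsafe(interfaces["em2:0"])
except KeyError as missing:
    print("missing binding family:", missing)
```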
> what do you mean without success?
if is_hash($interface[$bname]) {
  $interface[$bname].map |$binding| {
    $binding[$what]
  }
}
My attempt was a hash check. Unfortunately, I don't know the elegant and correct way to write a patch.
Change the function like this and let me know how it goes:
["bindings", "bindings6"].map |$bname| {
  if $bname in $interface {
    $interface[$bname].map |$binding| {
      $binding[$what]
    }
  }
}
So just add the if before the map, and close the if block after it.
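To make the shape of the guard easy to try locally, here is a hypothetical Python analogue of the fixed logic (again just a sketch, not the module's Puppet code): binding families the interface does not have are skipped instead of being mapped over.

```python
def addresses_safe(interface):
    # Analogue of `if $bname in $interface` guarding the inner map:
    # only iterate a binding family when the key actually exists.
    out = []
    for bname in ("bindings", "bindings6"):
        if bname in interface:
            out.extend(b["address"] for b in interface[bname])
    return out

# An alias interface with only IPv4 bindings no longer breaks:
print(addresses_safe({"bindings": [{"address": "10.129.32.14"}]}))
# -> ['10.129.32.14']
```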
@ripienaar Just checked on all my hosts and this solved the problem! Is there a chance to see it upstream? Thank you!
Yup, do you want to send a PR with the fix?
If this issue is not enough, then I can. I just want this module to be error-free ;-) I like it (and I use it now).
Created pull/23
Please put out a release containing the patch! 🙏
thank you
released 0.1.1