ClusterLabs/resource-agents

VirtualDomain resource with disk image on gluster storage cannot be started


I have defined a VirtualDomain resource vm with its disk image on Gluster storage, accessed via the gluster protocol, e.g.

...
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source protocol='gluster' name='vol1/libvirt/system.qcow2' index='3'>
        <host name='localhost' port='24007'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
...

Such a resource cannot be started with the pcs resource enable vm command; it fails with the error

vm_start_0 on node1 'error' (1): call=85, status='complete', exitreason='Failed to start virtual domain vm.', last-rc-change='2022-05-22 12:31:22 +02:00', queued=0ms, exec=596ms

Debug-start works correctly, though. Once started with pcs resource debug-start vm and cleaned up with pcs resource cleanup vm, the resource seems fine and even migration works, e.g. pcs resource move vm node3 or pcs node standby node1 correctly migrates the resource. The problem also occurs on node failure: the vm resource does not get started on another node automatically.

Running on CentOS 8 / pcs v0.10.8

You can enable tracing for the resource by running pcs resource update <resource> trace_ra=1.

This saves a trace file for each operation and run time under /var/lib/heartbeat/trace_ra/, so you can grep for the error to find the trace file from the failed start.
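
For example, a minimal sequence might look like this (resource name vm and the exit reason string assumed from this thread):

# Enable per-operation tracing for the resource
pcs resource update vm trace_ra=1
# Reproduce the failure, then find the trace file that contains the exit reason
grep -rl "Failed to start virtual domain" /var/lib/heartbeat/trace_ra/VirtualDomain/
# Inspect the matching start trace
less /var/lib/heartbeat/trace_ra/VirtualDomain/vm.start.2022-06-03.21:48:12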

vm.start log after pcs resource enable vm:

+++ 21:48:12: ocf_start_trace:1020: echo
+++ 21:48:12: ocf_start_trace:1020: printenv
+++ 21:48:12: ocf_start_trace:1020: sort
++ 21:48:12: ocf_start_trace:1020: env='
HA_LOGFACILITY=daemon
HA_LOGFILE=/var/log/pacemaker/pacemaker.log
HA_cluster_type=corosync
HA_debug=0
HA_logfacility=daemon
HA_logfile=/var/log/pacemaker/pacemaker.log
HA_mcp=true
HA_quorum_type=corosync
INVOCATION_ID=17119e3094bd476aa7b63226a8c67ce3
JOURNAL_STREAM=9:37590
LC_ALL=C
OCF_EXIT_REASON_PREFIX=ocf-exit-reason:
OCF_OUTPUT_FORMAT=text
OCF_RA_VERSION_MAJOR=1
OCF_RA_VERSION_MINOR=1
OCF_RESKEY_CRM_meta_name=start
OCF_RESKEY_CRM_meta_on_node=gfs1
OCF_RESKEY_CRM_meta_on_node_uuid=1
OCF_RESKEY_CRM_meta_timeout=300000
OCF_RESKEY_config=/run/gluster/shared_storage/libvirt/vm.xml
OCF_RESKEY_crm_feature_set=3.11.0
OCF_RESKEY_hypervisor=qemu:///system
OCF_RESKEY_migration_transport=ssh
OCF_RESKEY_trace_ra=1
OCF_RESOURCE_INSTANCE=vm
OCF_RESOURCE_PROVIDER=heartbeat
OCF_RESOURCE_TYPE=VirtualDomain
OCF_ROOT=/usr/lib/ocf
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/ucb
PCMK_cluster_type=corosync
PCMK_debug=0
PCMK_logfacility=daemon
PCMK_logfile=/var/log/pacemaker/pacemaker.log
PCMK_mcp=true
PCMK_quorum_type=corosync
PCMK_service=pacemaker-execd
PCMK_watchdog=false
PWD=/var/lib/pacemaker/cores
SHLVL=1
VALGRIND_OPTS=--leak-check=full --trace-children=no --vgdb=no --num-callers=25 --log-file=/var/lib/pacemaker/valgrind-%p --suppressions=/usr/share/pacemaker/tests/valgrind-pcmk.suppressions --gen-suppressions=all
_=/usr/bin/printenv
__OCF_TRC_DEST=/var/lib/heartbeat/trace_ra/VirtualDomain/vm.start.2022-06-03.21:48:12
__OCF_TRC_MANAGE=1'
++ 21:48:12: source:1074: ocf_is_true ''
++ 21:48:12: ocf_is_true:105: case "$1" in
++ 21:48:12: ocf_is_true:105: case "$1" in
++ 21:48:12: ocf_is_true:107: false
+ 21:48:12: main:20: OCF_RESKEY_migration_downtime_default=0
+ 21:48:12: main:21: OCF_RESKEY_migration_speed_default=0
+ 21:48:12: main:22: OCF_RESKEY_force_stop_default=0
+ 21:48:12: main:23: OCF_RESKEY_autoset_utilization_cpu_default=true
+ 21:48:12: main:24: OCF_RESKEY_autoset_utilization_hv_memory_default=true
++ 21:48:12: main:25: ocf_maybe_random
++ 21:48:12: ocf_maybe_random:77: test -c /dev/urandom
++ 21:48:12: ocf_maybe_random:78: od -An -N4 -tu4 /dev/urandom
++ 21:48:12: ocf_maybe_random:78: tr -d '[:space:]'
+ 21:48:12: main:25: OCF_RESKEY_migrateport_default=49207
+ 21:48:12: main:26: OCF_RESKEY_CRM_meta_timeout_default=90000
+ 21:48:12: main:27: OCF_RESKEY_save_config_on_stop_default=false
+ 21:48:12: main:28: OCF_RESKEY_sync_config_on_stop_default=false
+ 21:48:12: main:29: OCF_RESKEY_backingfile_default=
+ 21:48:12: main:30: OCF_RESKEY_stateless_default=false
+ 21:48:12: main:31: OCF_RESKEY_copyindirs_default=
+ 21:48:12: main:33: : 0
+ 21:48:12: main:34: : 0
+ 21:48:12: main:35: : 0
+ 21:48:12: main:36: : true
+ 21:48:12: main:37: : true
+ 21:48:12: main:38: : 49207
+ 21:48:12: main:39: : 300000
+ 21:48:12: main:40: : false
+ 21:48:12: main:41: : false
+ 21:48:12: main:42: :
+ 21:48:12: main:43: : false
+ 21:48:12: main:44: :
+ 21:48:12: main:46: ocf_is_true false
+ 21:48:12: ocf_is_true:105: case "$1" in
+ 21:48:12: ocf_is_true:107: false
++ 21:48:12: main:54: date +%s
+ 21:48:12: main:54: NOW=1654285692
+ 21:48:12: main:1022: OCF_REQUIRED_PARAMS=config
+ 21:48:12: main:1023: OCF_REQUIRED_BINARIES='virsh sed'
+ 21:48:12: main:1024: ocf_rarun start
+ 21:48:12: ocf_rarun:137: mk_action_func
++ 21:48:12: mk_action_func:50: echo VirtualDomain_start
++ 21:48:12: mk_action_func:50: tr - _
+ 21:48:12: mk_action_func:50: ACTION_FUNC=VirtualDomain_start
+ 21:48:12: ocf_rarun:138: validate_args
+ 21:48:12: validate_args:53: is_function VirtualDomain_start
++ 21:48:12: is_function:41: command -v VirtualDomain_start
+ 21:48:12: is_function:41: test zVirtualDomain_start = zVirtualDomain_start
+ 21:48:12: ocf_rarun:139: simple_actions
+ 21:48:12: simple_actions:60: case $__OCF_ACTION in
+ 21:48:12: ocf_rarun:140: check_required_params
+ 21:48:12: check_required_params:74: local v
+ 21:48:12: check_required_params:75: for v in $OCF_REQUIRED_PARAMS
+ 21:48:12: check_required_params:76: is_var_defined OCF_RESKEY_config
+++ 21:48:12: is_var_defined:47: echo OCF_RESKEY_config
++ 21:48:12: is_var_defined:47: eval echo '$OCF_RESKEY_config'
+++ 21:48:12: is_var_defined:47: echo /run/gluster/shared_storage/libvirt/vm.xml
+ 21:48:12: is_var_defined:47: test z '!=' z/run/gluster/shared_storage/libvirt/vm.xml
+ 21:48:12: ocf_rarun:141: run_function VirtualDomain_getconfig
+ 21:48:12: run_function:44: is_function VirtualDomain_getconfig
++ 21:48:12: is_function:41: command -v VirtualDomain_getconfig
+ 21:48:12: is_function:41: test zVirtualDomain_getconfig = zVirtualDomain_getconfig
+ 21:48:12: run_function:44: VirtualDomain_getconfig
+ 21:48:12: VirtualDomain_getconfig:1011: : qemu:///system
+ 21:48:12: VirtualDomain_getconfig:1014: VIRSH_OPTIONS='--connect=qemu:///system --quiet'
++ 21:48:12: VirtualDomain_getconfig:1017: egrep '[[:space:]]*<name>.*</name>[[:space:]]*$' /run/gluster/shared_storage/libvirt/vm.xml
++ 21:48:12: VirtualDomain_getconfig:1017: sed -e 's/[[:space:]]*<name>\(.*\)<\/name>[[:space:]]*$/\1/'
+ 21:48:12: VirtualDomain_getconfig:1017: DOMAIN_NAME=vm
+ 21:48:12: VirtualDomain_getconfig:1019: EMULATOR_STATE=/run/resource-agents/VirtualDomain-vm-emu.state
+ 21:48:12: ocf_rarun:142: validate_env
+ 21:48:12: validate_env:122: check_required_binaries
+ 21:48:12: check_required_binaries:113: local v
+ 21:48:12: check_required_binaries:114: for v in $OCF_REQUIRED_BINARIES
+ 21:48:12: check_required_binaries:115: have_binary virsh
+ 21:48:12: have_binary:69: '[' '' = 1 ']'
++ 21:48:12: have_binary:72: echo virsh
++ 21:48:12: have_binary:72: sed -e 's/ -.*//'
+ 21:48:12: have_binary:72: local bin=virsh
++ 21:48:12: have_binary:73: which virsh
+ 21:48:12: have_binary:73: test -x /usr/bin/virsh
+ 21:48:12: check_required_binaries:114: for v in $OCF_REQUIRED_BINARIES
+ 21:48:12: check_required_binaries:115: have_binary sed
+ 21:48:12: have_binary:69: '[' '' = 1 ']'
++ 21:48:12: have_binary:72: echo sed
++ 21:48:12: have_binary:72: sed -e 's/ -.*//'
+ 21:48:12: have_binary:72: local bin=sed
++ 21:48:12: have_binary:73: which sed
+ 21:48:12: have_binary:73: test -x /usr/bin/sed
+ 21:48:12: validate_env:123: is_function VirtualDomain_validate_all
++ 21:48:12: is_function:41: command -v VirtualDomain_validate_all
+ 21:48:12: is_function:41: test zVirtualDomain_validate_all = zVirtualDomain_validate_all
+ 21:48:12: validate_env:125: local rc
+ 21:48:12: validate_env:126: LSB_STATUS_STOPPED=3
+ 21:48:12: validate_env:127: VirtualDomain_validate_all
+ 21:48:12: VirtualDomain_validate_all:963: ocf_is_true 0
+ 21:48:12: ocf_is_true:105: case "$1" in
+ 21:48:12: ocf_is_true:107: false
+ 21:48:12: VirtualDomain_validate_all:970: '[' '!' -r /run/gluster/shared_storage/libvirt/vm.xml ']'
+ 21:48:12: VirtualDomain_validate_all:981: '[' -z vm ']'
+ 21:48:12: VirtualDomain_validate_all:987: ocf_is_true false
+ 21:48:12: ocf_is_true:105: case "$1" in
+ 21:48:12: ocf_is_true:107: false
+ 21:48:12: VirtualDomain_validate_all:992: ocf_is_decimal 0
+ 21:48:12: ocf_is_decimal:96: case "$1" in
+ 21:48:12: ocf_is_decimal:100: true
+ 21:48:12: VirtualDomain_validate_all:998: ocf_is_decimal 0
+ 21:48:12: ocf_is_decimal:96: case "$1" in
+ 21:48:12: ocf_is_decimal:100: true
+ 21:48:12: VirtualDomain_validate_all:1003: ocf_is_true false
+ 21:48:12: ocf_is_true:105: case "$1" in
+ 21:48:12: ocf_is_true:107: false
+ 21:48:12: validate_env:128: rc=0
+ 21:48:12: validate_env:129: '[' 0 -ne 0 ']'
+ 21:48:12: ocf_rarun:143: ocf_is_probe
+ 21:48:12: ocf_is_probe:555: '[' start = monitor -a 0 = 0 ']'
+ 21:48:12: ocf_rarun:144: shift 1
+ 21:48:12: ocf_rarun:145: VirtualDomain_start
+ 21:48:12: VirtualDomain_start:549: local snapshotimage
+ 21:48:12: VirtualDomain_start:551: VirtualDomain_status
+ 21:48:12: VirtualDomain_status:456: local try=0
+ 21:48:12: VirtualDomain_status:457: rc=1
+ 21:48:12: VirtualDomain_status:458: status='no state'
+ 21:48:12: VirtualDomain_status:459: '[' 'no state' = 'no state' ']'
+ 21:48:12: VirtualDomain_status:460: try=1
++ 21:48:12: VirtualDomain_status:461: tr A-Z a-z
++ 21:48:12: VirtualDomain_status:461: LANG=C
++ 21:48:12: VirtualDomain_status:461: virsh --connect=qemu:///system --quiet domstate vm
+ 21:48:12: VirtualDomain_status:461: status='error: failed to get domain '\''vm'\'''
+ 21:48:12: VirtualDomain_status:462: case "$status" in
+ 21:48:12: VirtualDomain_status:462: case "$status" in
+ 21:48:12: VirtualDomain_status:462: case "$status" in
++ 21:48:12: VirtualDomain_status:468: echo error: failed to get domain ''\''vm'\'''
+ 21:48:12: VirtualDomain_status:462: case "$status" in
++ 21:48:12: VirtualDomain_status:468: sed s/error://g
+ 21:48:12: VirtualDomain_status:462: case "$status" in
+ 21:48:12: VirtualDomain_status:462: case "$status" in
+ 21:48:12: VirtualDomain_status:468: ocf_log debug 'Virtual domain vm is not running:  failed to get domain '\''vm'\'''
+ 21:48:12: ocf_log:325: '[' 2 -lt 2 ']'
+ 21:48:12: ocf_log:329: __OCF_PRIO=debug
+ 21:48:12: ocf_log:330: shift
+ 21:48:12: ocf_log:331: __OCF_MSG='Virtual domain vm is not running:  failed to get domain '\''vm'\'''
+ 21:48:12: ocf_log:333: case "${__OCF_PRIO}" in
+ 21:48:12: ocf_log:338: __OCF_PRIO=DEBUG
+ 21:48:12: ocf_log:342: '[' DEBUG = DEBUG ']'
+ 21:48:12: ocf_log:343: ha_debug 'DEBUG: Virtual domain vm is not running:  failed to get domain '\''vm'\'''
+ 21:48:12: ha_debug:262: '[' x0 = x0 ']'
+ 21:48:12: ha_debug:263: return 0
+ 21:48:12: VirtualDomain_status:469: rc=7
+ 21:48:12: VirtualDomain_status:459: '[' 'error: failed to get domain '\''vm'\''' = 'no state' ']'
+ 21:48:12: VirtualDomain_status:517: return 7
+ 21:48:12: VirtualDomain_start:558: systemd_is_running
++ 21:48:12: systemd_is_running:678: cat /proc/1/comm
+ 21:48:12: systemd_is_running:678: '[' systemd = systemd ']'
+ 21:48:12: VirtualDomain_start:559: systemd_drop_in 99-VirtualDomain-libvirt After libvirtd.service
+ 21:48:12: systemd_drop_in:684: local conf_file
+ 21:48:12: systemd_drop_in:685: '[' 3 -ne 3 ']'
+ 21:48:12: systemd_drop_in:689: systemdrundir=/run/systemd/system/resource-agents-deps.target.d
+ 21:48:12: systemd_drop_in:690: mkdir -p /run/systemd/system/resource-agents-deps.target.d
+ 21:48:12: systemd_drop_in:691: conf_file=/run/systemd/system/resource-agents-deps.target.d/99-VirtualDomain-libvirt.conf
+ 21:48:12: systemd_drop_in:692: cat
+ 21:48:12: systemd_drop_in:698: chmod o+r /run/systemd/system/resource-agents-deps.target.d/99-VirtualDomain-libvirt.conf
+ 21:48:12: systemd_drop_in:699: systemctl daemon-reload
+ 21:48:12: VirtualDomain_start:560: systemd_drop_in 99-VirtualDomain-machines Wants virt-guest-shutdown.target
+ 21:48:12: systemd_drop_in:684: local conf_file
+ 21:48:12: systemd_drop_in:685: '[' 3 -ne 3 ']'
+ 21:48:12: systemd_drop_in:689: systemdrundir=/run/systemd/system/resource-agents-deps.target.d
+ 21:48:12: systemd_drop_in:690: mkdir -p /run/systemd/system/resource-agents-deps.target.d
+ 21:48:12: systemd_drop_in:691: conf_file=/run/systemd/system/resource-agents-deps.target.d/99-VirtualDomain-machines.conf
+ 21:48:12: systemd_drop_in:692: cat
+ 21:48:12: systemd_drop_in:698: chmod o+r /run/systemd/system/resource-agents-deps.target.d/99-VirtualDomain-machines.conf
+ 21:48:12: systemd_drop_in:699: systemctl daemon-reload
+ 21:48:13: VirtualDomain_start:561: systemctl start virt-guest-shutdown.target
+ 21:48:13: VirtualDomain_start:564: snapshotimage=/vm.state
+ 21:48:13: VirtualDomain_start:565: '[' -n '' -a -f /vm.state ']'
+ 21:48:13: VirtualDomain_start:581: verify_undefined
+ 21:48:13: verify_undefined:531: local tmpf
+ 21:48:13: verify_undefined:532: virsh --connect=qemu:///system list --all --name
+ 21:48:13: verify_undefined:532: grep -wqs vm
+ 21:48:13: VirtualDomain_start:583: '[' -z '' ']'
+ 21:48:13: VirtualDomain_start:584: virsh --connect=qemu:///system --quiet create /run/gluster/shared_storage/libvirt/vm.xml
+ 21:48:13: VirtualDomain_start:585: '[' 1 -ne 0 ']'
+ 21:48:13: VirtualDomain_start:586: ocf_exit_reason 'Failed to start virtual domain vm.'
+ 21:48:13: ocf_exit_reason:358: local cookie=ocf-exit-reason:
+ 21:48:13: ocf_exit_reason:359: local fmt
+ 21:48:13: ocf_exit_reason:360: local msg
+ 21:48:13: ocf_exit_reason:367: case $# in
+ 21:48:13: ocf_exit_reason:369: fmt=%s
+ 21:48:13: ocf_exit_reason:379: '[' -z ocf-exit-reason: ']'
++ 21:48:13: ocf_exit_reason:384: printf %s 'Failed to start virtual domain vm.'
+ 21:48:13: ocf_exit_reason:384: msg='Failed to start virtual domain vm.'
+ 21:48:13: ocf_exit_reason:385: printf '%s%s\n' ocf-exit-reason: 'Failed to start virtual domain vm.'
+ 21:48:13: ocf_exit_reason:386: __ha_log --ignore-stderr 'ERROR: Failed to start virtual domain vm.'
+ 21:48:13: __ha_log:189: local ignore_stderr=false
+ 21:48:13: __ha_log:190: local loglevel
+ 21:48:13: __ha_log:192: '[' x--ignore-stderr = x--ignore-stderr ']'
+ 21:48:13: __ha_log:192: ignore_stderr=true
+ 21:48:13: __ha_log:192: shift
+ 21:48:13: __ha_log:194: '[' none = daemon ']'
+ 21:48:13: __ha_log:196: tty
+ 21:48:13: __ha_log:211: set_logtag
+ 21:48:13: set_logtag:179: '[' -z '' ']'
+ 21:48:13: set_logtag:180: '[' -n vm ']'
+ 21:48:13: set_logtag:181: HA_LOGTAG='VirtualDomain(vm)[3742923]'
+ 21:48:13: __ha_log:213: '[' x = xyes ']'
+ 21:48:13: __ha_log:221: '[' -n daemon ']'
+ 21:48:13: __ha_log:223: : logging through syslog
+ 21:48:13: __ha_log:225: loglevel=notice
+ 21:48:13: __ha_log:226: case "${*}" in
+ 21:48:13: __ha_log:227: loglevel=err
+ 21:48:13: __ha_log:231: logger -t 'VirtualDomain(vm)[3742923]' -p daemon.err 'ERROR: Failed to start virtual domain vm.'
+ 21:48:13: __ha_log:234: '[' -n /var/log/pacemaker/pacemaker.log ']'
+ 21:48:13: __ha_log:236: : appending to /var/log/pacemaker/pacemaker.log
++ 21:48:13: __ha_log:237: hadate
++ 21:48:13: hadate:175: date '+%b %d %T '
+ 21:48:13: __ha_log:237: echo Jun 03 21:48:13 ' VirtualDomain(vm)[3742923]:    ERROR: Failed to start virtual domain vm.'
+ 21:48:13: __ha_log:240: '[' -z daemon -a -z /var/log/pacemaker/pacemaker.log ']'
+ 21:48:13: __ha_log:246: '[' -n /dev/null ']'
+ 21:48:13: __ha_log:248: : appending to /dev/null
+ 21:48:13: __ha_log:249: '[' /var/log/pacemaker/pacemaker.logx '!=' /dev/nullx ']'
++ 21:48:13: __ha_log:250: hadate
++ 21:48:13: hadate:175: date '+%b %d %T '
+ 21:48:13: __ha_log:250: echo 'VirtualDomain(vm)[3742923]:	Jun' 03 21:48:13 'ERROR: Failed to start virtual domain vm.'
+ 21:48:13: VirtualDomain_start:587: return 1
+ 21:48:13: VirtualDomain_start:584: virsh --connect=qemu:///system --quiet create /run/gluster/shared_storage/libvirt/vm.xml
+ 21:48:13: VirtualDomain_start:585: '[' 1 -ne 0 ']'
+ 21:48:13: VirtualDomain_start:586: ocf_exit_reason 'Failed to start virtual domain vm.'

If you try running virsh --connect=qemu:///system --quiet create /run/gluster/shared_storage/libvirt/vm.xml manually, does that work? Maybe the user can't access the file?

If it works manually, you can try removing --quiet in the start section of the agent to get more information about what happens.
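
For example, a quick sanity check of the config file's permissions and SELinux context (paths as used in this thread) could be:

# Verify the config file is readable and note its SELinux label
ls -lZ /run/gluster/shared_storage/libvirt/vm.xml
ls -ldZ /run/gluster/shared_storage/libvirt
# Then try the same create command the agent runs
virsh --connect=qemu:///system create /run/gluster/shared_storage/libvirt/vm.xml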

Hello, I got back to investigating this. The problem persists. If I start the vm domain manually with virsh --connect=qemu:///system --quiet create /run/gluster/shared_storage/libvirt/vm.xml, it starts fine. When I start it with pcs resource debug-start vm, it starts fine too.

When I let pcs start the resource, it fails.

It seems to me that pcs tries to start the resource on all the nodes in quick succession and does not give the VM enough time to boot. Is there any way to raise some timeout parameters? The resource was created with

pcs resource create vm ocf:heartbeat:VirtualDomain hypervisor="qemu:///system" config="/run/gluster/shared_storage/libvirt/vm.xml" migration_transport=ssh op start timeout="300s" interval="10s" op stop timeout="300s" interval="10s" op monitor timeout="60s" interval="10s" meta allow-migrate="true" priority="100" op migrate_from interval="0" timeout="120s" op migrate_to interval="0" timeout="120s"

Even when I define a long enough start timeout with pcs resource update vm op start timeout=120s, the start of the resource fails very quickly on all the nodes:

Failed Resource Actions:
  * vm_start_0 on gfs3 'error' (1): call=91, status='complete', exitreason='Failed to start virtual domain vm.', last-rc-change='2022-10-09 17:14:23 +02:00', queued=0ms, exec=768ms
  * vm_start_0 on gfs1 'error' (1): call=98, status='complete', exitreason='Failed to start virtual domain vm.', last-rc-change='2022-10-09 17:14:22 +02:00', queued=0ms, exec=668ms
  * vm_start_0 on gfs2 'error' (1): call=91, status='complete', exitreason='Failed to start virtual domain vm.', last-rc-change='2022-10-09 17:14:24 +02:00', queued=0ms, exec=585ms

Maybe it's because of SELinux? Try running setenforce 0 and see if it works. Just run setenforce 1 (or reboot) after you're done testing to undo it.

If that's not the issue, you can try removing --quiet from the VIRSH_OPTIONS= line (around line 1148) of the agent by editing /usr/lib/ocf/resource.d/heartbeat/VirtualDomain.

And if that doesn't give you enough information about why it fails, you can add --debug=4 to VIRSH_OPTIONS.
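
As a rough sketch, the edit in VirtualDomain_getconfig would look something like this (the exact wording of the line may differ between agent versions):

# /usr/lib/ocf/resource.d/heartbeat/VirtualDomain
# Original line, as reflected in the trace above:
#   VIRSH_OPTIONS="--connect=${OCF_RESKEY_hypervisor} --quiet"
# More verbose variant for debugging:
VIRSH_OPTIONS="--connect=${OCF_RESKEY_hypervisor} --debug=4"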

Thanks for the tips. Although SELinux is enforcing, there are no alerts. As this is a production system, I will try to change the agent during the next service window.

Hello, it turned out that SELinux was to blame for the trouble. Although I'll still have to figure out why it did not report any alerts and how to set it up properly, you can close this issue. Thank you for your help.
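
For anyone hitting the same silent denial, one way to surface it (assuming it is hidden by a dontaudit rule) is roughly:

# Temporarily disable dontaudit rules so hidden denials are logged
semodule -DB
# Reproduce the failed start, then search the audit log for AVC denials
ausearch -m AVC -ts recent
# Re-enable dontaudit rules when done
semodule -B
# If the denial is qemu accessing gluster, the virt_use_glusterd boolean may
# apply (its availability depends on the selinux-policy version):
# setsebool -P virt_use_glusterd 1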