failed to parse lvs output: Failed to parse output: invalid character 'L' after top-level value
kb7791 opened this issue · 1 comments
kb7791 commented
Description
During the "Load heketi topology" task for glusterfs, a fatal error occurs because heketi fails to parse the lvs output.
Version
- openshift-ansible-3.11.213-1
- hash: 4d763a0621c5c7c78156f0269b474c7367b0ab61
Steps To Reproduce
# If this command fails, check with
# "oc logs <heketi_pod>", then rsh to the glusterfs-storage-xyz pod and "vi /var/log/glusterfs/glusterd.log"
# TODO(michaelgugino) Automate collecting a message about collecting this data
- name: Load heketi topology
command: "{{ glusterfs_heketi_client }} topology load --json={{ mktemp.stdout }}/topology.json 2>&1"
register: topology_load
failed_when: "topology_load.rc != 0 or 'Unable' in topology_load.stdout"
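If the task fails here, the same topology load can be re-run by hand to see the raw heketi-cli output. A minimal sketch, reusing the pod name and topology.json path from the failing run below; the admin secret is a placeholder and all values should be substituted for your own environment:

# Re-run the topology load manually (pod name and topology.json path are
# taken from the failing run below; '<admin-secret>' is a placeholder).
oc rsh --namespace=glusterfs deploy-heketi-storage-1-w957m \
  heketi-cli -s http://localhost:8080 --user admin --secret '<admin-secret>' \
  topology load --json=/tmp/openshift-glusterfs-ansible-wj4Krv/topology.json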
Observed Results
TASK [openshift_storage_glusterfs : Load heketi topology] *******************************************************************************************************************************************************************************************************************************
task path: openshift-ansible/roles/openshift_storage_glusterfs/tasks/heketi_load.yml:15
Thursday 10 September 2020 18:37:06 -0400 (0:00:01.066) 0:27:51.782 ****
fatal: [master0.gluster-staging-backend.dev.com]: FAILED! => {"changed": true, "cmd": ["oc", "--config=/tmp/openshift-glusterfs-ansible-wj4Krv/admin.kubeconfig", "rsh", "--namespace=glusterfs", "deploy-heketi-storage-1-w957m", "heketi-cli", "-s", "http://localhost:8080", "--user", "admin", "--secret", "ZafWKCdQ//2+ApvW+qZEC0fbDL6a8BBDZDrRgpr7xIY=", "topology", "load", "--json=/tmp/openshift-glusterfs-ansible-wj4Krv/topology.json", "2>&1"], "delta": "0:00:12.039134", "end": "2020-09-10 18:37:19.669822", "failed_when_result": true, "rc": 0, "start": "2020-09-10 18:37:07.630688", "stderr": "", "stderr_lines": [], "stdout": "Creating cluster ... ID: b0d49c8a644ba3d7db846f0bc1f13a31\n\tAllowing file volumes on cluster.\n\tAllowing block volumes on cluster.\n\tCreating node glusterfs0.gluster-staging-backend.dev.com ... ID: 46ba02a54d4da1b4595dad6b9ca2b8b1\n\t\tAdding device /dev/sdb ... Unable to add device: failed to parse lvs output: Failed to parse output: invalid character 'L' after top-level value\n\tCreating node glusterfs1.gluster-staging-backend.dev.com ... ID: 1fe20650bfdf760968667f943a23ff42\n\t\tAdding device /dev/sdb ... Unable to add device: failed to parse lvs output: Failed to parse output: invalid character 'L' after top-level value\n\tCreating node glusterfs2.gluster-staging-backend.dev.com ... ID: 6859ffdf7d9325e774b2c4b0a68284ff\n\t\tAdding device /dev/sdb ... Unable to add device: failed to parse lvs output: Failed to parse output: invalid character 'L' after top-level value", "stdout_lines": ["Creating cluster ... ID: b0d49c8a644ba3d7db846f0bc1f13a31", "\tAllowing file volumes on cluster.", "\tAllowing block volumes on cluster.", "\tCreating node glusterfs0.gluster-staging-backend.dev.com ... ID: 46ba02a54d4da1b4595dad6b9ca2b8b1", "\t\tAdding device /dev/sdb ... Unable to add device: failed to parse lvs output: Failed to parse output: invalid character 'L' after top-level value", "\tCreating node glusterfs1.gluster-staging-backend.dev.com ... ID: 1fe20650bfdf760968667f943a23ff42", "\t\tAdding device /dev/sdb ... Unable to add device: failed to parse lvs output: Failed to parse output: invalid character 'L' after top-level value", "\tCreating node glusterfs2.gluster-staging-backend.dev.com ... ID: 6859ffdf7d9325e774b2c4b0a68284ff", "\t\tAdding device /dev/sdb ... Unable to add device: failed to parse lvs output: Failed to parse output: invalid character 'L' after top-level value"]}
Additional Information
- OS: Red Hat Enterprise Linux Server release 7.6 (Maipo)
oc logs from the deploy-heketi pod
[cmdexec] DEBUG 2020/09/10 22:37:09 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [/usr/sbin/lvm vgcreate -qq --physicalextentsize=4M --autobackup=n vg_cd986e360556da9ccbd08958a60085f5 /dev/sdb] on [glusterfs0.gluster-staging-backend.dev.com:22]: Stdout [Last login: Thu Sep 10 18:37:08 EDT 2020
]: Stderr [ WARNING: This metadata update is NOT backed up.
]
[cmdexec] DEBUG 2020/09/10 22:37:09 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [/usr/sbin/lvm pvs -o pv_name,pv_uuid,vg_name --reportformat=json /dev/sdb] on [glusterfs0.gluster-staging-backend.dev.com:22]
[negroni] 2020-09-10T22:37:09Z | 200 | 96.281µs | localhost:8080 | GET /queue/9c1f29e02aed8930014759e3f16b3c41
[cmdexec] DEBUG 2020/09/10 22:37:09 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [/usr/sbin/lvm pvs -o pv_name,pv_uuid,vg_name --reportformat=json /dev/sdb] on [glusterfs0.gluster-staging-backend.dev.com:22]: Stdout [ {
"report": [
{
"pv": [
{"pv_name":"/dev/sdb", "pv_uuid":"L2NqAj-qXOh-Wu0b-iWU2-UIm4-KCtT-xo94Pd", "vg_name":"vg_cd986e360556da9ccbd08958a60085f5"}
]
}
]
}
Last login: Thu Sep 10 18:37:08 EDT 2020
]: Stderr []
[cmdexec] DEBUG 2020/09/10 22:37:09 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [/usr/bin/udevadm info --query=symlink --name=/dev/sdb] on [glusterfs0.gluster-staging-backend.dev.com:22]
[cmdexec] DEBUG 2020/09/10 22:37:09 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [/usr/bin/udevadm info --query=symlink --name=/dev/sdb] on [glusterfs0.gluster-staging-backend.dev.com:22]: Stdout [disk/by-id/lvm-pv-uuid-L2NqAj-qXOh-Wu0b-iWU2-UIm4-KCtT-xo94Pd disk/by-id/scsi-36000c299d667be6f06640b9da5ba96b9 disk/by-id/wwn-0x6000c299d667be6f06640b9da5ba96b9 disk/by-path/fc---lun-0 disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0
Last login: Thu Sep 10 18:37:09 EDT 2020
]: Stderr []
[cmdexec] ERROR 2020/09/10 22:37:09 heketi/executors/cmdexec/device.go:301:cmdexec.(*CmdExecutor).getDeviceHandle: failed to parse lvs output: Failed to parse output: invalid character 'L' after top-level value
[negroni] 2020-09-10T22:37:09Z | 200 | 112.858µs | localhost:8080 | GET /queue/9c1f29e02aed8930014759e3f16b3c41
[cmdexec] DEBUG 2020/09/10 22:37:09 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [/usr/sbin/lvm vgs -o pv_name,pv_uuid,vg_name --reportformat=json vg_cd986e360556da9ccbd08958a60085f5] on [glusterfs0.gluster-staging-backend.dev.com:22]
[cmdexec] DEBUG 2020/09/10 22:37:09 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [/usr/sbin/lvm vgs -o pv_name,pv_uuid,vg_name --reportformat=json vg_cd986e360556da9ccbd08958a60085f5] on [glusterfs0.gluster-staging-backend.dev.com:22]: Stdout [ {
"report": [
{
"vg": [
{"pv_name":"/dev/sdb", "pv_uuid":"L2NqAj-qXOh-Wu0b-iWU2-UIm4-KCtT-xo94Pd", "vg_name":"vg_cd986e360556da9ccbd08958a60085f5"}
]
}
]
}
Last login: Thu Sep 10 18:37:09 EDT 2020
]: Stderr []
[cmdexec] WARNING 2020/09/10 22:37:09 failed to parse vgs output: Failed to parse output: invalid character 'L' after top-level value
[negroni] 2020-09-10T22:37:10Z | 200 | 87.34µs | localhost:8080 | GET /queue/9c1f29e02aed8930014759e3f16b3c41
[cmdexec] DEBUG 2020/09/10 22:37:10 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [/usr/sbin/lvm vgremove -qq vg_cd986e360556da9ccbd08958a60085f5] on [glusterfs0.gluster-staging-backend.dev.com:22]
[cmdexec] DEBUG 2020/09/10 22:37:10 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [/usr/sbin/lvm vgremove -qq vg_cd986e360556da9ccbd08958a60085f5] on [glusterfs0.gluster-staging-backend.dev.com:22]: Stdout [Last login: Thu Sep 10 18:37:09 EDT 2020
]: Stderr []
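The pvs report above is valid JSON, but the node appends a "Last login: ..." line to the stdout of the same SSH session, and that trailing 'L' is what heketi's JSON decoder trips over. A quick way to inspect the raw output heketi receives, assuming the same SSH user and port heketi's executor is configured with (root on port 22 is only an example here):

# Run the same pvs report over SSH the way heketi's cmdexec does and
# check whether anything follows the closing brace of the JSON report.
ssh -p 22 root@glusterfs0.gluster-staging-backend.dev.com \
  "/usr/sbin/lvm pvs -o pv_name,pv_uuid,vg_name --reportformat=json /dev/sdb"
# On a STIG-hardened node the report is followed by a line such as
#   Last login: Thu Sep 10 18:37:08 EDT 2020
# which explains "invalid character 'L' after top-level value".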
glusterd.log from one of the gluster nodes
[2020-09-10 22:01:56.011004] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-glusterd: Started running glusterd version 7.7 (args: glusterd --xlator-option *.upgrade=on -N)
[2020-09-10 22:01:56.011159] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 3019
[2020-09-10 22:01:56.013997] E [MSGID: 101080] [graph.c:489:fill_uuid] 0-graph: gethostname failed [File name too long]
[2020-09-10 22:01:56.014120] I [MSGID: 106478] [glusterd.c:1426:init] 0-management: Maximum allowed open file descriptors set to 65536
[2020-09-10 22:01:56.014146] I [MSGID: 106479] [glusterd.c:1482:init] 0-management: Using /var/lib/glusterd as working directory
[2020-09-10 22:01:56.014156] I [MSGID: 106479] [glusterd.c:1488:init] 0-management: Using /var/run/gluster as pid file working directory
[2020-09-10 22:01:56.020135] I [socket.c:1015:__socket_server_bind] 0-socket.management: process started listening on port (24007)
[2020-09-10 22:01:56.020258] E [rpc-transport.c:300:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/7.7/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2020-09-10 22:01:56.020268] W [rpc-transport.c:304:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2020-09-10 22:01:56.020275] W [rpcsvc.c:1981:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2020-09-10 22:01:56.020282] E [MSGID: 106244] [glusterd.c:1781:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2020-09-10 22:01:56.022093] I [socket.c:958:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 10
[2020-09-10 22:01:56.022583] I [MSGID: 106059] [glusterd.c:1865:init] 0-management: max-port override: 60999
[2020-09-10 22:01:56.023307] E [MSGID: 101032] [store.c:493:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2020-09-10 22:01:56.023328] E [MSGID: 101032] [store.c:493:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2020-09-10 22:01:56.023331] I [MSGID: 106514] [glusterd-store.c:2279:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 70200
[2020-09-10 22:01:56.023355] E [MSGID: 101032] [store.c:493:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/options. [No such file or directory]
[2020-09-10 22:01:56.028992] I [MSGID: 106194] [glusterd-store.c:4102:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
[2020-09-10 22:01:56.029029] E [MSGID: 101032] [store.c:493:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.upgrade. [No such file or directory]
[2020-09-10 22:01:56.029045] I [glusterd.c:1998:init] 0-management: Regenerating volfiles due to a max op-version mismatch or glusterd.upgrade file not being present, op_version retrieved:0, max op_version: 70200
[2020-09-10 22:01:56.029270] W [glusterfsd.c:1596:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e65) [0x7fbb0b983e65] -->glusterd(glusterfs_sigwaiter+0xe5) [0x55914d050625] -->glusterd(cleanup_and_exit+0x6b) [0x55914d05048b] ) 0-: received signum (15), shutting down
[2020-09-10 22:01:56.029325] W [mgmt-pmap.c:132:rpc_clnt_mgmt_pmap_signout] 0-glusterfs: failed to create XDR payload
[2020-09-10 22:01:58.774499] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 7.7 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2020-09-10 22:01:58.776561] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 3069
[2020-09-10 22:01:58.780638] E [MSGID: 101080] [graph.c:489:fill_uuid] 0-graph: gethostname failed [File name too long]
[2020-09-10 22:01:58.781120] I [MSGID: 106478] [glusterd.c:1426:init] 0-management: Maximum allowed open file descriptors set to 65536
[2020-09-10 22:01:58.781146] I [MSGID: 106479] [glusterd.c:1482:init] 0-management: Using /var/lib/glusterd as working directory
[2020-09-10 22:01:58.781160] I [MSGID: 106479] [glusterd.c:1488:init] 0-management: Using /var/run/gluster as pid file working directory
[2020-09-10 22:01:58.786892] I [socket.c:1015:__socket_server_bind] 0-socket.management: process started listening on port (24007)
[2020-09-10 22:01:58.787054] E [rpc-transport.c:300:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/7.7/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2020-09-10 22:01:58.787065] W [rpc-transport.c:304:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2020-09-10 22:01:58.787073] W [rpcsvc.c:1981:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2020-09-10 22:01:58.787079] E [MSGID: 106244] [glusterd.c:1781:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2020-09-10 22:01:58.788856] I [socket.c:958:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 12
[2020-09-10 22:01:58.789369] I [MSGID: 106059] [glusterd.c:1865:init] 0-management: max-port override: 60999
[2020-09-10 22:01:58.790540] I [MSGID: 106228] [glusterd.c:484:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [No such file or directory]
[2020-09-10 22:01:58.790785] E [MSGID: 101032] [store.c:493:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2020-09-10 22:01:58.790800] E [MSGID: 101032] [store.c:493:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2020-09-10 22:01:58.790802] I [MSGID: 106514] [glusterd-store.c:2279:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 70200
[2020-09-10 22:01:58.791052] I [MSGID: 106194] [glusterd-store.c:4102:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
[2020-09-10 22:01:58.791089] I [glusterd.c:1998:init] 0-management: Regenerating volfiles due to a max op-version mismatch or glusterd.upgrade file not being present, op_version retrieved:0, max op_version: 70200
Final graph:
+------------------------------------------------------------------------------+
1: volume management
2: type mgmt/glusterd
3: option rpc-auth.auth-glusterfs on
4: option rpc-auth.auth-unix on
5: option rpc-auth.auth-null on
6: option rpc-auth-allow-insecure on
7: option transport.listen-backlog 1024
8: option max-port 60999
9: option event-threads 1
10: option ping-timeout 0
11: option transport.rdma.listen-port 24008
12: option transport.socket.listen-port 24007
13: option transport.socket.read-fail-log off
14: option transport.socket.keepalive-interval 2
15: option transport.socket.keepalive-time 10
16: option transport-type rdma
17: option working-directory /var/lib/glusterd
18: end-volume
19:
+------------------------------------------------------------------------------+
[2020-09-10 22:01:58.797972] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-09-10 22:37:11.855820] I [MSGID: 106487] [glusterd-handler.c:1082:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 10.22.0.238 24007
[2020-09-10 22:37:11.857708] I [MSGID: 106128] [glusterd-handler.c:3541:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 10.22.0.238 (24007)
[2020-09-10 22:37:11.861881] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2020-09-10 22:37:11.861909] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-09-10 22:37:11.866465] I [MSGID: 106498] [glusterd-handler.c:3470:glusterd_friend_add] 0-management: connect returned 0
[2020-09-10 22:37:11.868389] E [MSGID: 101032] [store.c:493:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2020-09-10 22:37:11.868655] I [MSGID: 106477] [glusterd.c:182:glusterd_uuid_generate_save] 0-management: generated UUID: 7ade5438-eb34-4018-ae99-412341d9a3d6
[2020-09-10 22:37:11.894145] I [MSGID: 106511] [glusterd-rpc-ops.c:250:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: e3e92cff-9088-48eb-b56f-1b7e0cad9b7a, host: 10.22.0.238
[2020-09-10 22:37:11.894196] I [MSGID: 106511] [glusterd-rpc-ops.c:403:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2020-09-10 22:37:11.899219] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: e3e92cff-9088-48eb-b56f-1b7e0cad9b7a, host: 10.22.0.238, port: 0
[2020-09-10 22:37:11.912267] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-09-10 22:37:11.922989] I [MSGID: 106490] [glusterd-handler.c:2789:__glusterd_handle_probe_query] 0-glusterd: Received probe from uuid: e3e92cff-9088-48eb-b56f-1b7e0cad9b7a
[2020-09-10 22:37:11.924388] I [MSGID: 106493] [glusterd-handler.c:2850:__glusterd_handle_probe_query] 0-glusterd: Responded to 10.22.0.238, op_ret: 0, op_errno: 0, ret: 0
[2020-09-10 22:37:11.924563] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: e3e92cff-9088-48eb-b56f-1b7e0cad9b7a
[2020-09-10 22:37:11.929533] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.22.0.238 (0), ret: 0, op_ret: 0
[2020-09-10 22:37:11.938557] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: e3e92cff-9088-48eb-b56f-1b7e0cad9b7a
[2020-09-10 22:37:11.938586] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-09-10 22:37:11.938650] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: e3e92cff-9088-48eb-b56f-1b7e0cad9b7a
[2020-09-10 22:37:15.921214] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: e3e92cff-9088-48eb-b56f-1b7e0cad9b7a
[2020-09-10 22:37:15.921274] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-09-10 22:37:15.926653] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-09-10 22:37:15.926645] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2020-09-10 22:37:15.930136] I [MSGID: 106498] [glusterd-handler.c:3519:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2020-09-10 22:37:15.931218] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-09-10 22:37:15.939458] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ccbd3580-3ceb-4712-9646-c35cb1322100
[2020-09-10 22:37:15.944180] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.22.0.239 (0), ret: 0, op_ret: 0
[2020-09-10 22:37:15.951757] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ccbd3580-3ceb-4712-9646-c35cb1322100, host: 10.22.0.239, port: 0
[2020-09-10 22:37:15.955080] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ccbd3580-3ceb-4712-9646-c35cb1322100
The message "I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend" repeated 2 times between [2020-09-10 22:37:15.921274] and [2020-09-10 22:37:15.960096]
[2020-09-10 22:37:15.960146] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ccbd3580-3ceb-4712-9646-c35cb1322100
[2020-09-10 22:37:15.957391] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ccbd3580-3ceb-4712-9646-c35cb1322100
[2020-09-10 22:37:15.961227] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ccbd3580-3ceb-4712-9646-c35cb1322100
kb7791 commented
Figured out the issue. The STIG-hardened VMs are required to print "Last login: <timestamp>" on every SSH login. That extra line ends up in the command output, so heketi fails when parsing the lvs JSON because it does not know what to do with that line for the disk. As a temporary workaround, change the pam.d entry to "session required pam_lastlog.so silent showfailed" (adding the "silent" control) until we figure out the actual fix.
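For reference, a sketch of the temporary workaround on each gluster node, assuming the pam_lastlog entry lives in /etc/pam.d/sshd (the exact file can differ per STIG profile):

# /etc/pam.d/sshd (path is an assumption; adjust for your profile)
# before:
#   session    required     pam_lastlog.so showfailed
# after, adding the "silent" control so "Last login: ..." is no longer
# written to the stdout of non-interactive SSH commands:
session    required     pam_lastlog.so silent showfailed

After the change, a non-interactive SSH command such as the lvm pvs call shown in the heketi log should return only the JSON report, and the "Load heketi topology" task can be re-run.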