ganto/copr-lxc3

lxd.spec: %gotest github.com/lxc/lxd/lxd/cluster fails with "http code 403 'failed to join server 3'"

ganto opened this issue · 1 comment

ganto commented

The test suite for github.com/lxc/lxd/lxd/cluster fails when building the RPM via mock:

Executing(%check): /bin/sh -e /var/tmp/rpm-tmp.AMy8SO                                                
+ umask 022                                                                                                                                                            
+ cd /builddir/build/BUILD                                                                          
+ cd lxd-3.1                                                                                                                                                              
+ export GOPATH=/builddir/build/BUILDROOT/lxd-3.1-0.1.fc27.x86_64//usr/share/gocode:/usr/share/gocode                                                                     
+ GOPATH=/builddir/build/BUILDROOT/lxd-3.1-0.1.fc27.x86_64//usr/share/gocode:/usr/share/gocode                        
+ unset LXD_SOCKET                                                                                                                                                        
+ go test -buildmode pie -compiler gc -ldflags ' -extldflags '\''-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\''' github.com/lxc/lxd/lxc
ok      github.com/lxc/lxd/lxc  0.030s                                                                                                                                    
+ go test -buildmode pie -compiler gc -ldflags ' -extldflags '\''-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\''' github.com/lxc/lxd/lxd
ok      github.com/lxc/lxd/lxd  59.556s
+ go test -buildmode pie -compiler gc -ldflags ' -extldflags '\''-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\''' github.com/lxc/lxd/lxd/cluster         
--- FAIL: TestHeartbeat (5.93s)                                                                             
        heartbeat_test.go:125: create bootstrap node for test cluster                                                                                                    
        testing.go:36: 07:21:46.186 info Initializing database gateway                                                                                                   
        testing.go:36: 07:21:46.186 info Start database node id=1 address=        
        testing.go:36: 07:21:46.187 info Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]
        testing.go:36: 07:21:46.187 info Raft: Node at 0 [Leader] entering Leader state                               
        testing.go:36: 07:21:46.201 dbug Acquiring exclusive lock on cluster db
        testing.go:36: 07:21:46.201 info Stop database gateway
        testing.go:36: 07:21:46.201 info Stop raft instance                         
        testing.go:36: 07:21:46.204 info Raft: Starting snapshot up to 10                                                                                                
        testing.go:36: 07:21:46.206 info Raft: Snapshot to 10 complete                                                                                                   
        testing.go:36: 07:21:46.206 info Initializing database gateway                                       
        testing.go:36: 07:21:46.207 info Start database node id=1 address=127.0.0.1:41799                                                                                
        testing.go:36: 07:21:46.209 info Raft: Restored from snapshot 1-10-1527657706205                                                                                  
        testing.go:36: 07:21:46.209 info Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]                                                        
        testing.go:36: 07:21:46.209 info Raft: Node at 127.0.0.1:41799 [Follower] entering Follower state (Leader: "")                                                   
        testing.go:36: 07:21:46.492 warn Raft: Heartbeat timeout from "" reached, starting election                                                                       
        testing.go:36: 07:21:46.492 info Raft: Node at 127.0.0.1:41799 [Candidate] entering Candidate state in term 2                                                    
        testing.go:36: 07:21:46.493 dbug Raft: Votes needed: 1                                                   
        testing.go:36: 07:21:46.493 dbug Raft: Vote granted from 1 in term 2. Tally: 1                                                                                    
        testing.go:36: 07:21:46.493 info Raft: Election won. Tally: 1
        testing.go:36: 07:21:46.493 info Raft: Node at 127.0.0.1:41799 [Leader] entering Leader state
        testing.go:36: 07:21:46.710 dbug Releasing exclusive lock on cluster db                                                                                        
        testing.go:36: 07:21:46.757 dbug Found cert k=0                                             
        testing.go:36: 07:21:46.799 dbug Found cert k=0                                                                                                                   
        testing.go:36: 07:21:46.799 dbug Found cert k=0                                                                 
        testing.go:36: 07:21:46.800 dbug Found cert k=0                                                               
        heartbeat_test.go:137: adding another node to the test cluster                                                                                                    
        testing.go:36: 07:21:46.809 info Kernel uid/gid map:                                                                     
        testing.go:36: 07:21:46.809 info  - u 0 0 4294967295                              
        testing.go:36: 07:21:46.809 info  - g 0 0 4294967295                 
        testing.go:36: 07:21:46.809 info Configured LXD uid/gid map:
        testing.go:36: 07:21:46.809 info  - u 0 100000 65536
        testing.go:36: 07:21:46.809 info  - g 0 100000 65536
        testing.go:36: 07:21:46.809 warn AppArmor support has been disabled because of lack of kernel support
        testing.go:36: 07:21:46.810 info Initializing database gateway
        testing.go:36: 07:21:46.810 info Start database node id=1 address=
        testing.go:36: 07:21:46.811 info Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]
        testing.go:36: 07:21:46.811 info Raft: Node at 0 [Leader] entering Leader state
        testing.go:36: 07:21:46.825 dbug Acquiring exclusive lock on cluster db
        testing.go:36: 07:21:46.825 info Stop database gateway
        testing.go:36: 07:21:46.825 info Stop raft instance
        testing.go:36: 07:21:46.827 info Raft: Starting snapshot up to 9
        testing.go:36: 07:21:46.828 info Raft: Snapshot to 9 complete
        testing.go:36: 07:21:46.828 info Initializing database gateway
        testing.go:36: 07:21:46.828 info Start database node id=2 address=127.0.0.1:41457
        testing.go:36: 07:21:46.830 info Raft: Restored from snapshot 1-9-1527657706827
        testing.go:36: 07:21:46.830 info Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]
        testing.go:36: 07:21:46.830 info Joining dqlite raft cluster id=2 address=127.0.0.1:41457 target=127.0.0.1:41799
        testing.go:36: 07:21:46.830 info Raft: Node at 127.0.0.1:41457 [Follower] entering Follower state (Leader: "")
        testing.go:36: 07:21:46.871 dbug Found cert k=0
        testing.go:36: 07:21:46.871 info Raft-http: Handling join request for node 2 (127.0.0.1:41457)
        testing.go:36: 07:21:46.871 info Raft: Updating configuration with AddStaging (2, 127.0.0.1:41457) to [{Suffrage:Voter ID:1 Address:0} {Suffrage:Voter ID:2 Address:127.0.0.1:41457}]
        testing.go:36: 07:21:46.871 info Raft: Added peer 2, starting replication
        testing.go:36: 07:21:46.871 info Migrate local data to cluster database
        testing.go:36: 07:21:46.871 dbug Releasing exclusive lock on cluster db
        testing.go:36: 07:21:46.930 dbug Found cert k=0
        testing.go:36: 07:21:46.930 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:46.930 dbug Raft-net: 127.0.0.1:41457 accepted connection from: 127.0.0.1:51750
        testing.go:36: 07:21:46.930 warn Raft: Failed to get previous log: 23 log not found (last: 9)
        testing.go:36: 07:21:46.930 warn Raft: AppendEntries to {Voter 2 127.0.0.1:41457} rejected, sending older logs (next: 10)
        testing.go:36: 07:21:46.930 dbug Found cert k=0
        testing.go:36: 07:21:46.933 info Raft: pipelining replication to peer {Voter 2 127.0.0.1:41457}                                                                   
        testing.go:36: 07:21:46.942 dbug Found cert k=0             
        testing.go:36: 07:21:46.942 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:46.942 dbug Raft-net: 127.0.0.1:41457 accepted connection from: 127.0.0.1:51754
        testing.go:36: 07:21:46.974 dbug Found cert k=0                                                      
        testing.go:36: 07:21:46.974 dbug Found cert k=0               
        testing.go:36: 07:21:46.974 dbug Found cert k=0                   
        heartbeat_test.go:137: adding another node to the test cluster                                           
        testing.go:36: 07:21:46.988 info Kernel uid/gid map:                           
        testing.go:36: 07:21:46.988 info  - u 0 0 4294967295                   
        testing.go:36: 07:21:46.988 info  - g 0 0 4294967295  
        testing.go:36: 07:21:46.988 info Configured LXD uid/gid map:
        testing.go:36: 07:21:46.988 info  - u 0 100000 65536            
        testing.go:36: 07:21:46.988 info  - g 0 100000 65536         
        testing.go:36: 07:21:46.989 warn AppArmor support has been disabled because of lack of kernel support
        testing.go:36: 07:21:46.991 info Initializing database gateway                   
        testing.go:36: 07:21:46.991 info Start database node id=1 address=             
        testing.go:36: 07:21:46.991 info Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]
        testing.go:36: 07:21:46.992 info Raft: Node at 0 [Leader] entering Leader state                                 
        testing.go:36: 07:21:47.012 dbug Acquiring exclusive lock on cluster db                                       
        testing.go:36: 07:21:47.012 info Stop database gateway
        testing.go:36: 07:21:47.012 info Stop raft instance                                           
        testing.go:36: 07:21:47.014 info Raft: Starting snapshot up to 9                                                                                                  
        testing.go:36: 07:21:47.015 info Raft: Snapshot to 9 complete
        testing.go:36: 07:21:47.015 info Initializing database gateway           
        testing.go:36: 07:21:47.015 info Start database node id=3 address=127.0.0.1:38021
        testing.go:36: 07:21:47.017 info Raft: Restored from snapshot 1-9-1527657707015
        testing.go:36: 07:21:47.017 info Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]
        testing.go:36: 07:21:47.017 info Joining dqlite raft cluster id=3 address=127.0.0.1:38021 target=127.0.0.1:41457
        testing.go:36: 07:21:47.017 info Raft: Node at 127.0.0.1:38021 [Follower] entering Follower state (Leader: "")
        testing.go:36: 07:21:47.058 dbug Found cert k=0
        testing.go:36: 07:21:47.058 info Raft-http: Handling join request for node 3 (127.0.0.1:38021)
        testing.go:36: 07:21:47.100 dbug Found cert k=0
        testing.go:36: 07:21:47.100 info Raft-http: Handling join request for node 3 (127.0.0.1:38021)
        testing.go:36: 07:21:47.101 info Raft: Updating configuration with AddStaging (3, 127.0.0.1:38021) to [{Suffrage:Voter ID:1 Address:0} {Suffrage:Voter ID:2 Address:127.0.0.1:41457} {Suffrage:Voter ID:3 Address:127.0.0.1:38021}]
        testing.go:36: 07:21:47.101 info Raft: Added peer 3, starting replication
        testing.go:36: 07:21:47.197 warn Raft: Failed to contact 2 in 100.196704ms
        testing.go:36: 07:21:47.207 warn Raft: Failed to contact 2 in 110.542981ms
        testing.go:36: 07:21:47.207 warn Raft: Failed to contact 3 in 106.571499ms
        testing.go:36: 07:21:47.207 warn Raft: Failed to contact quorum of nodes, stepping down
        testing.go:36: 07:21:47.207 info Raft: Node at 127.0.0.1:41799 [Follower] entering Follower state (Leader: "")
        heartbeat_test.go:149:
                        Error Trace:    heartbeat_test.go:149
                                                                heartbeat_test.go:28
                        Error:          Received unexpected error:
                                        http code 403 'failed to join server 3: leadership lost while committing log'
                                        server join failed
                                        github.com/lxc/lxd/vendor/github.com/CanonicalLtd/raft-http.ChangeMembership
                                                /builddir/build/BUILDROOT/lxd-3.1-0.1.fc27.x86_64/usr/share/gocode/src/github.com/lxc/lxd/vendor/github.com/CanonicalLtd/raft-http/membership.go:83
                                        github.com/lxc/lxd/vendor/github.com/CanonicalLtd/raft-http.(*Layer).changeMemberhip
                                                /builddir/build/BUILDROOT/lxd-3.1-0.1.fc27.x86_64/usr/share/gocode/src/github.com/lxc/lxd/vendor/github.com/CanonicalLtd/raft-http/layer.go:125
                                        github.com/lxc/lxd/vendor/github.com/CanonicalLtd/raft-http.(*Layer).Join
                                                /builddir/build/BUILDROOT/lxd-3.1-0.1.fc27.x86_64/usr/share/gocode/src/github.com/lxc/lxd/vendor/github.com/CanonicalLtd/raft-http/layer.go:111
                                        github.com/lxc/lxd/lxd/cluster.Join
                                                /builddir/build/BUILDROOT/lxd-3.1-0.1.fc27.x86_64/usr/share/gocode/src/github.com/lxc/lxd/lxd/cluster/membership.go:318
                                        github.com/lxc/lxd/lxd/cluster_test.(*heartbeatFixture).Grow
                                                /builddir/build/BUILDROOT/lxd-3.1-0.1.fc27.x86_64/usr/share/gocode/src/github.com/lxc/lxd/lxd/cluster/heartbeat_test.go:148
                                        github.com/lxc/lxd/lxd/cluster_test.TestHeartbeat
                                                /builddir/build/BUILDROOT/lxd-3.1-0.1.fc27.x86_64/usr/share/gocode/src/github.com/lxc/lxd/lxd/cluster/heartbeat_test.go:28
                                        testing.tRunner
                                                /usr/lib/golang/src/testing/testing.go:746
                                        runtime.goexit
                                                /usr/lib/golang/src/runtime/asm_amd64.s:2337
                        Test:           TestHeartbeat
        testing.go:36: 07:21:47.208 info Stop database gateway
        testing.go:36: 07:21:47.208 info Stop raft instance
        testing.go:36: 07:21:47.209 info Stop database gateway
        testing.go:36: 07:21:47.209 info Stop raft instance
        testing.go:36: 07:21:47.214 info Raft: Starting snapshot up to 34
        testing.go:36: 07:21:47.216 info Raft: Snapshot to 34 complete
        testing.go:36: 07:21:47.217 info Stop database gateway
        testing.go:36: 07:21:47.217 info Stop raft instance
        testing.go:36: 07:21:47.222 info Raft: Starting snapshot up to 34
        testing.go:36: 07:21:47.607 warn Raft: Heartbeat timeout from "" reached, starting election
        testing.go:36: 07:21:47.607 info Raft: Node at 127.0.0.1:41799 [Candidate] entering Candidate state in term 3
        testing.go:36: 07:21:48.146 dbug Found cert k=0
        testing.go:36: 07:21:48.146 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:48.219 dbug Found cert k=0
        testing.go:36: 07:21:48.219 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:48.223 info Raft: Snapshot to 34 complete
        testing.go:36: 07:21:48.300 dbug Found cert k=0
        testing.go:36: 07:21:48.300 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:48.394 dbug Found cert k=0
        testing.go:36: 07:21:48.394 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:48.505 dbug Found cert k=0
        testing.go:36: 07:21:48.505 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:48.609 dbug Raft: Votes needed: 2
        testing.go:36: 07:21:49.664 dbug Found cert k=0
        testing.go:36: 07:21:49.664 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:49.667 dbug Found cert k=0
        testing.go:36: 07:21:49.667 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:49.900 dbug Found cert k=0
        testing.go:36: 07:21:49.900 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:51.309 dbug Found cert k=0
        testing.go:36: 07:21:51.309 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:51.663 dbug Found cert k=0
        testing.go:36: 07:21:51.663 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:52.026 dbug Found cert k=0
        testing.go:36: 07:21:52.026 info Raft-http: Establishing new connection with 127.0.0.1:41799
        testing.go:36: 07:21:52.102 info Raft: aborting pipeline replication to peer {Voter 2 127.0.0.1:41457}
FAIL
FAIL    github.com/lxc/lxd/lxd/cluster  16.736s
error: Bad exit status from /var/tmp/rpm-tmp.AMy8SO (%check)
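
For reference, this log comes from a plain mock rebuild, roughly like the following (the chroot name and SRPM file name are assumptions inferred from the fc27/x86_64 paths above):

mock -r fedora-27-x86_64 --rebuild lxd-3.1-0.1.fc27.src.rpm
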
ganto commented

I finally found the cause of this one: go test was not using the libsqlite3 build tag and therefore did not properly link against libsqlite3.so.0. Still interesting that this resulted in such a "soft" runtime error instead of a segfault or similar...
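
The fix itself is small: pass the tag to go test in %check. A minimal sketch of the invocation, assuming the other flags stay exactly as in the log above:

# -tags libsqlite3 makes the sqlite bindings link against the
# system libsqlite3.so.0 instead of the bundled sqlite3 code
go test -buildmode pie -compiler gc -tags libsqlite3 \
    -ldflags " -extldflags '-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '" \
    github.com/lxc/lxd/lxd/cluster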

Fixed with 540f534