deso-protocol/run

Nodes stuck at 22073

marshj1977 opened this issue · 26 comments

Same issue as #41 and #36

(screenshot)

Fully reset the node (--volume) and resynced with the recommended sync-problem options.
Deleted all Docker images and restarted, running .\run.sh as admin from Windows Terminal (on the main branch, having pulled the latest).
Same issue, stuck :(

@jemarsh

Are there any obvious errors in the logs?

I'm clearing everything down again completely:

  1. Node stopped:
     docker-compose -f docker-compose.dev.yml down --volumes
  2. All Docker containers deleted.
  3. Taken the opportunity to move docker-desktop & docker-desktop-data to another drive, for info...
     wsl --shutdown
     wsl --export docker-desktop E:\docker-desktop.tar
     wsl --export docker-desktop-data E:\docker-desktop-data.tar
     wsl --unregister docker-desktop
     wsl --unregister docker-desktop-data
     wsl --import docker-desktop E:\Docker\distro E:\docker-desktop.tar --version 2
     wsl --import docker-desktop-data E:\Docker\data E:\docker-desktop-data.tar --version 2
  4. Started Docker Desktop again.
  5. Brand new clone (https://github.com/bitclout/run.git).
  6. Edited the (default) dev.env, only changing:
     • CONNECT_IPS=bitclout-seed-2.io
     • IGNORE_INBOUND_PEER_INV_MESSAGES=true
  7. Ran Windows Terminal as Administrator.
  8. Trying it again via .\run.sh (the whole sequence is condensed into commands below).
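
For reference, the reset-and-resync above condenses to roughly the following commands (run from the elevated Windows Terminal; the dev.env edit is still done by hand):

    # stop the node and wipe its volumes
    docker-compose -f docker-compose.dev.yml down --volumes
    # fresh clone of the run repo
    git clone https://github.com/bitclout/run.git
    cd run
    # edit dev.env: CONNECT_IPS=bitclout-seed-2.io, IGNORE_INBOUND_PEER_INV_MESSAGES=true
    .\run.sh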

Will try and capture some of the logs once it gets near the current latest block again.

Once started up, I'm getting these errors:
core | I0508 13:57:02.493916 1 xLGz593x.go:1] Listening to web client connections on port 17002
core | I0508 13:57:02.493945 1 zUqpb4Fs.go:1] APIServer.Start: Starting APIServer
core | I0508 13:57:02.495311 1 P2IW1kRp.go:1] Starting txindex update thread because --txindex was passed. Waiting for node to fully sync...
core | I0508 13:57:02.495349 1 syXY9iRz.go:1] Listening to NON-SSL JSON API connections on port :17001

core | E0508 13:57:04.289846 1 ZckI1F1V.go:1] ReadBitCloutMessage: : ReadMessage: Problem decoding NetworkType: EOF
core | E0508 13:57:04.289923 1 EmcODbtb.go:1] _connectPeer: Problem negotiating version with peer with addr: (104.238.183.241:17000) err: (negotiateVersion: Problem reading OUTBOUND peer version for Peer [ Remote Address: 104.238.183.241:17000 PeerID=1 ]: readVersion: : ReadBitCloutMessage: : ReadMessage: Problem decoding NetworkType: EOF)

core | E0508 13:57:07.628064 1 ZckI1F1V.go:1] ReadBitCloutMessage: : ReadMessage: Problem decoding NetworkType: EOF

So I've changed

  • CONNECT_IPS=bitclout-seed-4.io

(bitclout-seed-2.io resolves to 104.238.183.241, so it doesn't look like anything will work with that setting!)

and restarted; bitclout-seed-4.io seems to have connected OK and I'm getting ConnectBlock messages again now.
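
(For anyone wanting to check what a seed hostname resolves to before pointing CONNECT_IPS at it, a plain DNS lookup is enough - nslookup ships with Windows, dig may need installing:)

    nslookup bitclout-seed-2.io
    # or, where dig is available:
    dig +short bitclout-seed-2.io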

OK - quick update.
The node is now fully synced at Last Block Height 22269, with the above settings.

I am now going to stop the node and amend dev.env with only the following changes (everything else left as above):-
MINER_PUBLIC_KEYS= my public key
ADMIN_PUBLIC_KEYS= my public key
SUPPORT_EMAIL= my support email
MAX_BLOCK_TEMPLATES_TO_CACHE=5 (not actually sure what a good value is for this - any tips?)

Changes carried over from previous posts:
CONNECT_IPS=bitclout-seed-4.io
IGNORE_INBOUND_PEER_INV_MESSAGES=true
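
Putting the above together, the dev.env deltas now look roughly like this (keys/email redacted; everything else left at the shipped defaults):

    CONNECT_IPS=bitclout-seed-4.io
    IGNORE_INBOUND_PEER_INV_MESSAGES=true
    MINER_PUBLIC_KEYS=<my public key>
    ADMIN_PUBLIC_KEYS=<my public key>
    SUPPORT_EMAIL=<my support email>
    MAX_BLOCK_TEMPLATES_TO_CACHE=5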

Starting again via the administrator Windows Terminal (PowerShell) with .\run.sh.

Will add further info if it gets stuck/errors

OK, looks like there are a few errors immediately...

core | I0508 15:23:10.919501 1 E4JYGj8m.go:1] Server._maybeRequestSync: Sending mempool message: [ Remote Address: 35.232.92.5:17000 PeerID=3 ]
core | I0508 15:23:11.022023 1 T1UVu9jA.go:1] Received TransactionBundle message of size 5 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=3 ]

core | E0508 15:23:11.464284 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock

core | I0508 15:23:11.917169 1 T1UVu9jA.go:1] Received TransactionBundle message of size 1 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=3 ]
core | I0508 15:23:12.483812 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22269, hash: 000000000020965c03291aa133f5929973ff02a6177e443a5b3295a4b2f2e301) ...
core | I0508 15:23:13.059358 1 T1UVu9jA.go:1] Received TransactionBundle message of size 4 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=3 ]
core | I0508 15:23:14.079909 1 T1UVu9jA.go:1] Received TransactionBundle message of size 8 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=3 ]
core | I0508 15:23:14.424161 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22269, hash: 000000000020965c03291aa133f5929973ff02a6177e443a5b3295a4b2f2e301) ...
core | I0508 15:23:15.113337 1 T1UVu9jA.go:1] Received TransactionBundle message of size 6 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=3 ]

core | E0508 15:23:15.435865 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock

core | I0508 15:23:15.716399 1 HuEBJR8a.go:1] BitCloutBlockProducer._getBlockTemplate: Stopping at txn 0 because it's not ready yet: ConnectTransaction: : RuleErrorBitcoinExchangeBlockHashNotFoundInMainBitcoinChain
core | I0508 15:23:15.716741 1 kTUglwaA.go:1] A bad BitcoinExchange transaction may be holding up block production: 413c938e71df9aace2ffb0b374f62e65af3810023ffe9d8721e321aa1108667c, Current header tip: b43b27337f50ad5441b71ed1fbaa31176602700dc34d05000000000000000000

(Repeating)

OK, it's been running for a while and appears to be staying in sync now, but it is regularly logging errors like this:-
(info messages included for context)

core | I0508 16:05:38.189471 1 E4JYGj8m.go:1] Server._maybeRequestSync: Sending mempool message: [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
core | I0508 16:05:38.783135 1 T1UVu9jA.go:1] Received TransactionBundle message of size 7 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
core | I0508 16:05:38.937209 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22281, hash: 000000000002cb0a5a3d726005e0e21a4a2b697804bbe3327129fcd9a2624a57) ...
core | I0508 16:05:39.823877 1 T1UVu9jA.go:1] Received TransactionBundle message of size 6 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
core | E0508 16:05:39.848241 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 16:05:40.810268 1 T1UVu9jA.go:1] Received TransactionBundle message of size 5 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
core | I0508 16:05:40.851430 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22281, hash: 000000000002cb0a5a3d726005e0e21a4a2b697804bbe3327129fcd9a2624a57) ...
core | I0508 16:05:40.942285 1 T1UVu9jA.go:1] Received TransactionBundle message of size 2 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
core | E0508 16:05:41.757230 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 16:05:42.778082 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22281, hash: 000000000002cb0a5a3d726005e0e21a4a2b697804bbe3327129fcd9a2624a57) ...
core | I0508 16:05:42.878491 1 T1UVu9jA.go:1] Received TransactionBundle message of size 6 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
core | I0508 16:05:42.980832 1 HuEBJR8a.go:1] BitCloutBlockProducer._getBlockTemplate: Stopping at txn 0 because it's not ready yet: ConnectTransaction: : _connectBitcoinExchange: Number of Bitcoin burn work blocks mined on top of transaction 1 is below MinBitcoinBurnWork 3
core | I0508 16:05:42.981155 1 kTUglwaA.go:1] A bad BitcoinExchange transaction may be holding up block production: a2d2aff9ef4fcf8cea059e5a21a8c70de4701c209683e4aa863f8f11ed80349b, Current header tip: 522b4427cac7da29aff55c8f487349dff6995b538a6a07000000000000000000
core | I0508 16:05:42.981592 1 RbXXSY8s.go:1] Produced block with 1 txns with approx 4195 total txns in mempool
core | 2021/05/08 16:05:43 Datadog Tracer v1.29.0 ERROR: Failed to upload profile: failed after 2 retries, last error was: Post "http://localhost:8126/profiling/v1/input": dial tcp 127.0.0.1:8126: connect: connection refused
core | (occurred: 08 May 21 16:04 UTC)
core | 2021/05/08 16:05:43 Datadog Tracer v1.29.0 ERROR: Uploading profile failed: Post "http://localhost:8126/profiling/v1/input": dial tcp 127.0.0.1:8126: connect: connection refused. Trying again in 55.799013717s..., 1 additional messages skipped (first occurrence: 08 May 21 16:04 UTC)
core | I0508 16:05:43.285309 1 ldvxF2JQ.go:1] Received header bundle with 0 headers in state FULLY_CURRENT from peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ] while our header tip is < 22281, 000000000002cb0a5a3d726005e0e21a4a2b697804bbe3327129fcd9a2624a57 >
core | I0508 16:05:43.285712 1 E4JYGj8m.go:1] Server._maybeRequestSync: Sending mempool message: [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
core | E0508 16:05:43.749247 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 16:05:43.871505 1 T1UVu9jA.go:1] Received TransactionBundle message of size 3 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=2 ]
core | I0508 16:05:44.762158 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22281, hash: 000000000002cb0a5a3d726005e0e21a4a2b697804bbe3327129fcd9a2624a57) ...

Now getting these over and over...

core | E0508 16:27:50.037858 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 16:27:51.039417 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22288, hash: 000000000033890f15954b7f0f59b2fd84e4a62cfa8aa1ebd3ceb693d617f802) ...

OK - still showing as Fully Synced (which is nice), but still getting errors as follows:-

core | E0508 19:47:03.907504 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 19:47:04.908740 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22288, hash: 000000000033890f15954b7f0f59b2fd84e4a62cfa8aa1ebd3ceb693d617f802) ...
core | I0508 19:47:05.397755 1 HuEBJR8a.go:1] BitCloutBlockProducer._getBlockTemplate: Stopping at txn 33 because it's not ready yet: ConnectTransaction: : _connectBitcoinExchange: Number of Bitcoin burn work blocks mined on top of transaction 2 is below MinBitcoinBurnWork 3
core | I0508 19:47:05.397808 1 kTUglwaA.go:1] A bad BitcoinExchange transaction may be holding up block production: 9d32ac69e09d306dbb4b9987fa259d989b2419d8932e188083b19a3f9ff00a43, Current header tip: 01958b081b7509eff19fbf130a32ca640d21d113372405000000000000000000
core | I0508 19:47:05.403148 1 RbXXSY8s.go:1] Produced block with 34 txns with approx 36 total txns in mempool
core | E0508 19:47:05.536777 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 19:47:06.537979 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22288, hash: 000000000033890f15954b7f0f59b2fd84e4a62cfa8aa1ebd3ceb693d617f802) ...
core | E0508 19:47:07.136081 1 xBFnBgYa.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 19:47:08.137132 1 qFzst0zl.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22288, hash: 000000000033890f15954b7f0f59b2fd84e4a62cfa8aa1ebd3ceb693d617f802) ...

Now getting screens full of the following:-

core | I0508 19:51:26.681665 1 SaEMyA3r.go:1] Server._handleTransactionBundle: Transaction (*nWv1dupN.MsgBitCloutTxn)(0xc0bea87f10)(< TxHash: 26888484aed9d79d9e1c010736f32d5b4155053b6cc5adea6ef99123b0d585a8, TxnType: BC1YLhSg3T3hwc4uZ61QVAu3wWJNaZDdCY8WebEN6uqTTft5GoBQTqH, PubKey: CREATOR_COIN_TRANSFER >)
core | from peer [ Remote Address: 95.216.72.141:9013 PeerID=1 ] was added as an ORPHAN
core | I0508 19:51:26.681837 1 SaEMyA3r.go:1] Server._handleTransactionBundle: Transaction (*nWv1dupN.MsgBitCloutTxn)(0xc0bea87f80)(< TxHash: 4de39fd56fa273bf9cb7b820a349d72c1d7ef3de94719fb5202de6c0a68d9acd, TxnType: BC1YLh7nH7RQSgfyYdHtBqmQYP1dZiw7sykZBP4bqzaxaWZ5LkgoKH4, PubKey: FOLLOW >)
core | from peer [ Remote Address: 95.216.72.141:9013 PeerID=1 ] was added as an ORPHAN
core | I0508 19:51:26.681965 1 SaEMyA3r.go:1] Server._handleTransactionBundle: Transaction (*nWv1dupN.MsgBitCloutTxn)(0xc0bea96000)(< TxHash: 0b2fad816a09b51b4be6744573eee93b4c5a7079db6b7340f90b80d32e6ec4c2, TxnType: BC1YLi1QvyckyiNAjq9zKgotWi4uWHtNSwo4ptxp2hWqAXBtHs1Fae8, PubKey: SUBMIT_POST >)
core | from peer [ Remote Address: 95.216.72.141:9013 PeerID=1 ] was added as an ORPHAN

It says fully synced, but it has been on 22288 since the last message above!
(screenshot)

Doesn't seem like it IS fully synced to me.

Current messages...
core | E0508 20:19:48.188583 1 _glDAgf4.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 20:19:49.203690 1 XWpArZ2Q.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22288, hash: 000000000033890f15954b7f0f59b2fd84e4a62cfa8aa1ebd3ceb693d617f802) ...
core | E0508 20:19:50.101113 1 _glDAgf4.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 20:19:51.105038 1 XWpArZ2Q.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22288, hash: 000000000033890f15954b7f0f59b2fd84e4a62cfa8aa1ebd3ceb693d617f802) ...
core | I0508 20:19:52.113484 1 q0JDKqeJ.go:1] BitCloutBlockProducer._getBlockTemplate: Stopping at txn 15 because it's not ready yet: ConnectTransaction: : _connectBitcoinExchange: Number of Bitcoin burn work blocks mined on top of transaction 2 is below MinBitcoinBurnWork 3
core | I0508 20:19:52.113830 1 hy7v6bEs.go:1] A bad BitcoinExchange transaction may be holding up block production: eadb99eddd1b3fac88a717d57b509fd413aa251bcce16f8f4fde1008ef0b5b4e, Current header tip: b6b3f9beb3156fd265350e536ab700e8fdec0211245402000000000000000000
core | I0508 20:19:52.117772 1 EPCn499W.go:1] Produced block with 16 txns with approx 26 total txns in mempool
core | E0508 20:19:52.121038 1 _glDAgf4.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock
core | I0508 20:19:53.125526 1 XWpArZ2Q.go:1] UpdateTxindex: Updating txindex tip (height: 10929, hash: 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351) to block tip (height: 22288, hash: 000000000033890f15954b7f0f59b2fd84e4a62cfa8aa1ebd3ceb693d617f802) ...

Same issue, stuck node.

Now getting non-stop errors:-
core | E0509 00:25:41.761199 1 _glDAgf4.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock

Seems to be syncing
(screenshot)

but non-stop "RuleErrorDuplicateBlock" errors.

Also - it says it's mining, but I can't see any proof of this anywhere:
(screenshot)

Damn it, now I'm getting a server panic when trying to access my admin panel...

core | I0509 00:45:20.666512 1 BcHnzP7P.go:1] Received TransactionBundle message of size 5 from Peer [ Remote Address: 35.232.92.5:17000 PeerID=3 ]
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "OPTIONS /get-block-template HTTP/1.1" 200 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "OPTIONS /get-posts-stateless HTTP/1.1" 200 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "OPTIONS /get-posts-stateless HTTP/1.1" 200 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "POST /get-block-template HTTP/1.1" 200 836 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "POST /get-posts-stateless HTTP/1.1" 200 276926 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "OPTIONS /admin/node-control HTTP/1.1" 200 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "OPTIONS /admin/get-global-params HTTP/1.1" 200 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "OPTIONS /admin/get-mempool-stats HTTP/1.1" 200 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
core | I0509 00:45:21.581566 1 server.go:3139] http: panic serving 172.28.0.4:59882: runtime error: invalid memory address or nil pointer dereference
core | goroutine 1434847 [running]:
core | net/http.(*conn).serve.func1(0xc01c18bf40)
core | net/http/server.go:1824 +0x153
core | panic(0x149e040, 0x23f4760)
core | runtime/panic.go:971 +0x499
core | nWv1dupN.(*Blockchain).h7_nyId5(...)
core | aygbHZpg.go:1
core | nWv1dupN.(*APIServer).xn4_5HyN(0xc198af1340, 0xc0c1804640, 0x1c96a00, 0xc0acda2f80)
core | OKH5_d90.go:1 +0x37a
core | nWv1dupN.(*APIServer).NodeControl(0xc198af1340, 0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | GYQvY108.go:1 +0xdff
core | net/http.HandlerFunc.ServeHTTP(0xc000720090, 0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | net/http/server.go:2069 +0x44
core | nWv1dupN.(*APIServer).CheckAdminPublicKey.func1(0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | FDqNRkYg.go:1 +0x426
core | net/http.HandlerFunc.ServeHTTP(0xc0a8d308a0, 0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | net/http/server.go:2069 +0x44
core | nWv1dupN.U5_l8_B0.func1(0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | uNrSCGlG.go:1 +0xb0
core | net/http.HandlerFunc.ServeHTTP(0xc0a8d57890, 0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | net/http/server.go:2069 +0x44
core | nWv1dupN.V9O1OSat.func1(0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | CnzFOVMP.go:1 +0x252
core | net/http.HandlerFunc.ServeHTTP(0xc0a8d578c0, 0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | net/http/server.go:2069 +0x44
core | github.com/unrolled/secure.(*Secure).Handler.func1(0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | github.com/unrolled/secure@v1.0.8/secure.go:177 +0xda
core | net/http.HandlerFunc.ServeHTTP(0xc04ad27500, 0x1c96a00, 0xc0acda2f80, 0xc000187500)
core | net/http/server.go:2069 +0x44
core | github.com/gorilla/mux.(*Router).ServeHTTP(0xc0001d12c0, 0x1c96a00, 0xc0acda2f80, 0xc000187300)
core | github.com/gorilla/mux@v1.7.4/mux.go:210 +0xd3
core | gopkg.in/DataDog/dd-trace-go.v1/contrib/internal/httputil.TraceAndServe(0x1c7e480, 0xc0001d12c0, 0x1c96220, 0xc0acda2f80, 0xc01730e100, 0x165bab0, 0xa, 0xc0163ad9c8, 0x18, 0x0, ...)
core | gopkg.in/DataDog/dd-trace-go.v1@v1.29.0/contrib/internal/httputil/trace.go:43 +0x577
core | gopkg.in/DataDog/dd-trace-go.v1/contrib/gorilla/mux.(*Router).ServeHTTP(0xc0007203b0, 0x1c96220, 0xc0184e6b60, 0xc01730e100)
core | gopkg.in/DataDog/dd-trace-go.v1@v1.29.0/contrib/gorilla/mux/mux.go:115 +0x231
core | net/http.serverHandler.ServeHTTP(0xc1010f80e0, 0x1c96220, 0xc0184e6b60, 0xc01730e100)
core | net/http/server.go:2887 +0xa3
core | net/http.(*conn).serve(0xc01c18bf40, 0x1c98fb8, 0xc01725e080)
core | net/http/server.go:1952 +0x8cd
core | created by net/http.(*Server).Serve
core | net/http/server.go:3013 +0x39b
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "POST /get-posts-stateless HTTP/1.1" 200 417183 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 2021/05/09 00:45:21 [error] 32#32: *213 upstream prematurely closed connection while reading response header from upstream, client: 172.28.0.1, server: api.bitclout.me, request: "POST /admin/node-control HTTP/1.1", upstream: "http://172.28.0.2:17001/admin/node-control", host: "api.bitclout.me"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "POST /admin/node-control HTTP/1.1" 502 560 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "POST /admin/get-mempool-stats HTTP/1.1" 200 517 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
nginx | 172.28.0.1 - - [09/May/2021:00:45:21 +0000] "POST /admin/get-global-params HTTP/1.1" 200 99 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"

OK, that last issue (server panic on the admin panel) appears to occur when I changed to
TXINDEX=false

I've changed it back to the default TXINDEX=true (even though the comment says the default is "false"!)
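
For reference, the line in question in dev.env (the "default is false" claim comes from the file's inline comment, as noted above):

    # inline comment says the default is false, but the shipped value is true
    TXINDEX=true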

Back to all the errors again...
core | E0509 00:57:53.900166 1 _glDAgf4.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock

These are presumably related to the txindexing.

It's now the next day; it has been running for the last 11 hours with continuing non-stop errors like this:-
core | E0509 12:07:23.436325 1 _glDAgf4.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock

Still no sign that mining is doing anything whatsoever - other than the UI stating that it IS mining, there is nothing in my wallet to show anything earned from mining, nothing :(
(screenshot)

Anyone got anything? Maybe it's something blatantly obvious to you that I've not spotted - at this point I would just like any kind of suggestion/feedback.

P.S. This is running on an i7 (10th gen) with M.2 NVMe storage and 32 GB DDR4.

tijno commented

It tries to, but blocks/transactions from self-hosted nodes are not accepted by bitclout.com.

So it's one-way only right now.

Transactions created on a node will show in the mempool on other self-hosted nodes, but not on main bitclout.

And they will be lost at some point, as they are never confirmed.

This is known and by design, I believe.

Nodes are still undergoing a lot of changes, and I think they are waiting for something before they will switch on full mining / two-way "sync".

Ah, OK, thanks - no point mining then (for now).

Will disable that side of things and see what happens.

Looks like the #1 problem now is the following error:-
core | E0509 13:04:57.689548 1 _glDAgf4.go:1] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1617098255, Height: 10930, Hash: 00000000008c91a0a8f6ced75ddb79f7638854e6d325718f1d11970389f6666c, ParentHash 00000000007c828c5d6755154c1c1d35f42b2c2c651f9bf6f16e94ea18e3a351, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 10853183732794414>: RuleErrorDuplicateBlock

@redpartyhat FYI - looks like the admin panel errors out when txindex is disabled.

Just thought I'd leave an update - I've given up on the node for now (partly because I was hoping to have it mining).

It would be great if the above issue(s) could be resolved and mining enabled for self-hosted nodes, with them allowed to post back to the main chain (with appropriate validation, of course).

Hopefully this issue is worth retaining for the info I've posted above(?)

I believe this is fixed. Please reopen if you're still having issues.

I'm getting tons of these errors on a new node I spun up on Friday.

What is the process to fix this?

backend | I0601 17:48:26.791284 1 exchange.go:794] UpdateTxindex: Updating txindex tip (height: 18824, hash: 0000000000034d9a2ce9c2c8bdd42cd10b58d8c322fb0afb08fdb6423dd1abf5) to block tip (height: 30318, hash: 00000000000007dde113889790b5c6732369bef0b5bbc59e03ef2096ddcf7407) ...
backend | I0601 17:48:27.574552 1 server.go:527] Received header bundle with 0 headers in state FULLY_CURRENT from peer [ Remote Address: 149.28.207.241:17000 PeerID=1 ]. Downloaded ( 30318 / 30318 ) total headers
backend | E0601 17:48:27.661231 1 server.go:1079] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1619456852, Height: 18825, Hash: 00000000004dc4cba1a138e8e2c8c1d08e3a3ab2eb505b60d0fef703ed17438c, ParentHash 0000000000034d9a2ce9c2c8bdd42cd10b58d8c322fb0afb08fdb6423dd1abf5, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 27565736405137137>: RuleErrorDuplicateBlock
backend | I0601 17:48:28.670467 1 exchange.go:794] UpdateTxindex: Updating txindex tip (height: 18824, hash: 0000000000034d9a2ce9c2c8bdd42cd10b58d8c322fb0afb08fdb6423dd1abf5) to block tip (height: 30318, hash: 00000000000007dde113889790b5c6732369bef0b5bbc59e03ef2096ddcf7407) ...
backend | E0601 17:48:29.542842 1 server.go:1079] tryUpdateTxindex: Problem running update: UpdateTxindex: Problem attaching block < TstampSecs: 1619456852, Height: 18825, Hash: 00000000004dc4cba1a138e8e2c8c1d08e3a3ab2eb505b60d0fef703ed17438c, ParentHash 0000000000034d9a2ce9c2c8bdd42cd10b58d8c322fb0afb08fdb6423dd1abf5, Status: HEADER_VALIDATED | BLOCK_PROCESSED | BLOCK_STORED | BLOCK_VALIDATED, CumWork: 27565736405137137>: RuleErrorDuplicateBlock

We're actively working on fixing RuleErrorDuplicateBlock for TXIndex.

Any ETA on a fix?

I spun up a completely fresh instance on AWS, started the node, and it hit the same problem. Seems to get stuck on TXIndex processing. This is the status from Admin:

Node Sync State
Fully Synced
Detailed Sync Info
Last Block Height:
30402
Last Block Hash:
000000000000ece...
Last Block Time:
Tue Jun 01 2021 17:20:31 GMT-0700 (Pacific Daylight Time)
Last TxIndex Block Height:
13885

I did a checkout of Core and compiled it with Go. It runs but never seems to finish. It gets stuck after syncing half the blocks and does this:

I0602 00:34:53.222175 361705 block_producer.go:322] Produced block with 1 txns with approx 0 total txns in mempool
I0602 00:35:03.222664 361705 block_producer.go:322] Produced block with 1 txns with approx 0 total txns in mempool
I0602 00:35:13.223156 361705 block_producer.go:322] Produced block with 1 txns with approx 0 total txns in mempool
E0602 00:35:15.295664 361705 peer.go:1184] ReadBitCloutMessage: : ReadMessage: Problem decoding NetworkType: read tcp 172.31.29.53:40530->13.57.243.7:17000: read: connection reset by peer
E0602 00:35:15.296005 361705 connection_manager.go:431] ConnectPeer: Problem negotiating version with peer with addr: (13.57.243.7:17000) err: (negotiateVersion: Problem reading OUTBOUND peer version for Peer [ Remote Address: 13.57.243.7:17000 PeerID=3 ]: readVersion: : ReadBitCloutMessage: : ReadMessage: Problem decoding NetworkType: read tcp 172.31.29.53:40530->13.57.243.7:17000: read: connection reset by peer)
I0602 00:35:23.223679 361705 block_producer.go:322] Produced block with 1 txns with approx 0 total txns in mempool
I0602 00:35:33.224176 361705 block_producer.go:322] Produced block with 1 txns with approx 0 total txns in mempool

Let me know if I can help.
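
For anyone else wanting to reproduce the from-source run, the checkout-and-build is roughly the following (repo location and output name are assumptions, not taken from this thread):

    git clone https://github.com/bitclout/core.git
    cd core
    go build -o core-node .   # assumes the main package is at the repo root
    # then start the binary with the same flags the docker setup passes (e.g. --txindex)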

ASAP! Will keep you updated.

Gonna track this here: deso-protocol/core#22