smol-dot/smoldot

panicked at /__w/smoldot/smoldot/light-base/src/runtime_service.rs:1845:35

ryanleecode opened this issue · 8 comments

[smoldot] Smoldot v2.0.22. Current memory usage: 64.5 MiB. Average download: 99.8 kiB/s. Average upload: 3.1 kiB/s. Average CPU cores: 0.17.
0132e122:0x18b7b0 Smoldot has panicked while executing task runtime-asset-hub-kusama. This is a bug in smoldot. Please open an issue at https://github.com/smol-dot/smoldot/issues with the following message:
panicked at /__w/smoldot/smoldot/light-base/src/runtime_service.rs:1845:35:
called `Option::unwrap()` on a `None` value

Steps to Reproduce

  1. Have Substrate Connect installed, open its options page, then open the Chrome console
  2. Go to https://paritytech.github.io/substrate-connect/demo/
  3. Connect to and disconnect from a VPN, or turn your internet connection off and on
  4. Refresh the webpage from step 2
  5. Check the logs in the Substrate Connect options console as well as the logs on the webpage

I can't do much without the debug logs of smoldot.

Seems to be fixed as of the latest update. Closing for now.

The issue is probably here: https://github.com/smol-dot/smoldot/blob/main/light-base/src/sync_service.rs#L255. The other side of the channel has died, and the code panics on the `unwrap()`.

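For context, here is a minimal std-only sketch (not smoldot's actual code) of that failure mode: once the other side of a channel has been dropped, pulling the next message yields `None`, and unwrapping it produces exactly this class of panic.

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();

    // The sending side goes away, e.g. because the task that owned it shut
    // down. This stands in for "the other side of the channel died".
    drop(tx);

    // With the sender gone and the channel empty, the iterator yields `None`,
    // and the `unwrap()` panics with
    // "called `Option::unwrap()` on a `None` value".
    let _msg = rx.iter().next().unwrap();
}
```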

Line 1845 points to the call to `is_near_head_of_the_chain()` on the main branch, but that's misleading. In the version that you're using, it points to a different line, the one I linked above.

I unfortunately need the entire logs.

There's a list of blocks, and one of these is the best block. The panic is caused by the fact that the best block can't be found in the list of blocks. To figure out where that comes from, I need to know all the blocks that are inserted or removed and how the best block changes. This all happens way before the actual panic.
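
To make that concrete, here is a hypothetical, heavily simplified sketch (not smoldot's actual data structures; the names are invented for illustration) of why the panic site tells us little: if an earlier insertion or removal breaks the invariant that the best block is in the list, the lookup only blows up much later.

```rust
use std::collections::HashMap;

// Invented stand-in for the state described above: a set of known blocks
// plus the hash of the current best block.
struct BlockList {
    blocks: HashMap<[u8; 32], u64>, // block hash -> block number
    best_block: [u8; 32],
}

impl BlockList {
    fn best_block_number(&self) -> u64 {
        // Invariant: `best_block` always refers to an entry of `blocks`.
        // If some earlier insertion/removal path broke that invariant, this
        // lookup returns `None` and the `unwrap()` panics with
        // "called `Option::unwrap()` on a `None` value", far away from the
        // actual mistake. Hence the need for the full logs of block
        // insertions/removals and best-block updates.
        *self.blocks.get(&self.best_block).unwrap()
    }
}

fn main() {
    let mut list = BlockList {
        blocks: HashMap::new(),
        best_block: [0; 32],
    };
    list.blocks.insert([0; 32], 100);
    assert_eq!(list.best_block_number(), 100);

    // Removing the best block without updating `best_block` breaks the
    // invariant; the next call to `best_block_number()` would panic.
    list.blocks.remove(&[0; 32]);
    // list.best_block_number(); // panics here
}
```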

I had already taken a look at this issue when you opened it, and I've now looked at it again and found a bug, which is fixed in #1798.
I can't tell whether this bug is actually what causes the issue reported here (I would need the entire logs), but it is likely, so I'm going to close this once the PR is merged.