A quest squid designed to load the Phase 2 Subsquid Network uniformly


Website | Docs | Discord

Subsquid Network Docs

Network Test One: Uniform Load

Some tests of the Subsquid Network Phase Two testnet require that all workers regularly serve queries. You can help the team create this kind of uniform load by running this squid.

Note: you'll need to have at least 10 tSQD to complete this quest. Obtain them by doing other quests first.

Tip

If you locked any of your tSQD before, check if you can unlock any at the gateways page.

I. Install dependencies: Node.js, Docker, Git.

On Windows
  1. Enable Hyper-V.
  2. Install Docker for Windows.
  3. Install NodeJS LTS using the official installer.
  4. Install Git for Windows.

It is OK to leave all installer options at their default values. You will need a terminal to complete this tutorial; WSL bash is the preferred option.

On Mac
  1. Install Docker for Mac.
  2. Install Git using the installer or by other means.
  3. Install NodeJS LTS using the official installer.

We recommend configuring NodeJS to install global packages to a folder owned by an unprivileged account. Create the folder by running

mkdir ~/global-node-packages

then configure NodeJS to use it

npm config set prefix ~/global-node-packages

Make sure that the folder ~/global-node-packages/bin is in PATH. That allows running globally installed NodeJS executables from any terminal. Here is a one-liner that detects your shell and takes care of setting PATH:

CURSHELL=`ps -hp $$ | awk '{print $5}'`; case `basename $CURSHELL` in 'bash') DEST="$HOME/.bash_profile";; 'zsh') DEST="$HOME/.zshenv";; esac; echo 'export PATH="${HOME}/global-node-packages/bin:$PATH"' >> "$DEST"

Alternatively you can add the following line to ~/.zshenv (if you are using zsh) or ~/.bash_profile (if you are using bash) manually:

export PATH="${HOME}/global-node-packages/bin:$PATH"

Re-open the terminal to apply the changes.
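
One quick way to verify the setup (assuming you kept the folder name used above) is to check the npm prefix and the PATH entry:

# should print the global-node-packages folder in your home directory
npm config get prefix
# should print the bin subfolder if PATH was updated correctly
echo "$PATH" | tr ':' '\n' | grep global-node-packages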

On Linux

Install NodeJS (v16 or newer), Git and Docker using your distro's package manager.
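
A quick sanity check that the tools are installed and recent enough (exact version numbers will differ):

node --version    # should report v16 or newer
git --version
docker --version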

We recommend configuring NodeJS to install global packages to a folder owned by an unprivileged account. Create the folder by running

mkdir ~/global-node-packages

then configure NodeJS to use it

npm config set prefix ~/global-node-packages

Make sure that any executables globally installed by NodeJS are in PATH. That allows running them from any terminal. Open the ~/.bashrc file in a text editor and add the following line at the end:

export PATH="${HOME}/global-node-packages/bin:$PATH"

Re-open the terminal to apply the changes.

II. Install Subsquid CLI

Open a terminal and run

npm install --global @subsquid/cli@latest

This adds the sqd command. Verify that the installation was successful by running

sqd --version

A healthy response should look similar to

@subsquid/cli/2.8.0 linux-x64 node-v20.5.1

III. Run the squid

  1. Open a terminal, navigate to any folder for which you have write permissions and run the following commands to retrieve the squid, enter its folder and install dependencies:
    sqd init uniform-load-squid -t https://github.com/subsquid-quests/network-test-one-uniform-load-squid
    cd uniform-load-squid
    npm ci

Important

If you're on Windows, the terminal opens in C:\Windows\system32 by default. Do not download your squid there; navigate somewhere else first.

  1. Press "Get Key" button in the quest card to obtain the networkTestOneUniformLoad.key key file. Save it to the ./query-gateway/keys subfolder of the squid folder. The file will be used to identify your local query gateway when locking tSQD to allocate bandwidth and as it operates.

  3. Get the peer ID of your future gateway by running:

    sqd get-peer-id
  4. Register your future gateway using this page.

    • Use the peer ID you obtained in the previous step.
    • Leave the "Publicly available" switch disabled.
  5. Lock 10 tSQD by selecting your gateway on this page, clicking "Get CU" and submitting the form. Once that is done, you will receive computation units (CUs) every epoch (~15 minutes).

    The "Lock blocks duration" field lets you tune the length of time during which you'll be able to query the network, measured in blocks of Arbitrum Sepolia's L1 (that is, Ethereum Sepolia). The minumum is five hours, but you can opt to lock for longer if you intend to work on the quest over multiple days.

    | Time              | Blocks |
    |-------------------|--------|
    | 5 hours (minimum) | 1500   |
    | 24 hours          | 7200   |
    | 72 hours          | 21600  |
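
    These block counts follow from Ethereum Sepolia's roughly 12-second block time; for instance, a sketch of the arithmetic for the minimum lock:

    # 5 hours at ~12 seconds per L1 block
    echo $(( 5 * 3600 / 12 ))   # 1500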

    Be aware that you'll need to unlock your tokens manually after the end of this period. The tokens you get back will be used in subsequent quests.

    If the locking period expires before you finish your work, simply unlock your tokens, then lock them again.

  6. Wait for about 15 minutes. This is the time it takes for Subsquid Network to enter a new epoch, at the beginning of which CUs will be allocated towards your gateway.

  7. Start the query gateway with

    sqd up

    If you'd like to check whether the locking was successful, you can inspect the logs of the query gateway container with docker logs <query_gateway_container_name>. After the one to two minutes required for node startup, the logs should contain lines like this one:

    [2024-01-31T14:55:06Z INFO  query_gateway::chain_updates] allocated CU: 48300 spent CU: 0
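
    If you are not sure of the container name, one way to find it and follow the gateway's logs (names vary between setups):

    # list running containers, then follow the gateway's logs
    docker ps --format '{{.Names}}'
    docker logs -f <query_gateway_container_name>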
    

Tip

If you get an error message about unknown shorthand flag: 'd' in -d, that means you're using an old version of Docker that does not support the compose subcommand yet. Update Docker or edit the commands.json file as follows:

         "up": {
         "deps": ["check-key"],
         "description": "Start a PG database",
-        "cmd": ["docker", "compose", "up", "-d"]
+        "cmd": ["docker-compose", "up", "-d"]
       },
       "down": {
         "description": "Drop a PG database",
-        "cmd": ["docker", "compose", "down"]
+        "cmd": ["docker-compose", "down"]
       },
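
If you are unsure which variant your Docker installation supports, you can check both (only one needs to succeed):

docker compose version      # Compose V2 plugin (newer Docker)
docker-compose --version    # legacy standalone docker-compose
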
  8. Build the squid code

    sqd build
  9. Start your squid with

    sqd run .

    The command should output lines like these:

    [eth-processor] {"level":2,"time":1705681499120,"ns":"sqd:commands","msg":"PROCESS:ETH"}
    [moonbeam-processor] {"level":2,"time":1705681499148,"ns":"sqd:commands","msg":"PROCESS:MOONBEAM"}
    [base-processor] {"level":2,"time":1705681499155,"ns":"sqd:commands","msg":"PROCESS:BASE"}
    [bsc-processor] {"level":2,"time":1705681499163,"ns":"sqd:commands","msg":"PROCESS:BSC"}
    [eth-processor] 01:24:59 INFO  sqd:processor processing blocks from 955722
    [base-processor] 01:24:59 INFO  sqd:processor processing blocks from 1208926
    [moonbeam-processor] 01:24:59 INFO  sqd:processor processing blocks from 166845
    [bsc-processor] 01:24:59 INFO  sqd:processor processing blocks from 16996735
    [eth-processor] 01:24:59 INFO  sqd:processor using archive data source
    [eth-processor] 01:24:59 INFO  sqd:processor prometheus metrics are served at port 34253
    [base-processor] 01:24:59 INFO  sqd:processor using archive data source
    [base-processor] 01:24:59 INFO  sqd:processor prometheus metrics are served at port 40205
    [moonbeam-processor] 01:24:59 INFO  sqd:processor using archive data source
    [moonbeam-processor] 01:24:59 INFO  sqd:processor prometheus metrics are served at port 33691
    [bsc-processor] 01:24:59 INFO  sqd:processor using archive data source
    [bsc-processor] 01:24:59 INFO  sqd:processor prometheus metrics are served at port 41199
    [moonbeam-processor] 01:25:00 INFO  sqd:processor:mapping Got 0 burn txs and 0 USDT transfers
    [moonbeam-processor] 01:25:00 INFO  sqd:processor 171971 / 5325985, rate: 3823 blocks/sec, mapping: 2729 blocks/sec, 1364 items/sec, eta: 23m
    [base-processor] 01:25:00 INFO  sqd:processor:mapping Got 0 burn txs and 0 USDT transfers
    [base-processor] 01:25:00 INFO  sqd:processor 1477379 / 9442733, rate: 175758 blocks/sec, mapping: 8032 blocks/sec, 1339 items/sec, eta: 45s
    [base-processor] 01:25:02 INFO  sqd:processor:mapping Got 1 burn txs and 0 USDT transfers
    

    The squid should download enough data in 3-4 hours.

Tip

Do not worry if the squid fails: any progress it made is saved. Simply restart it if that happens.

When done, stop the squid processor with Ctrl-C, then stop and remove the query gateway container with

sqd down
  10. After the locking period ends, go to the gateways page and unlock your tSQD - you will need them for other quests.

Quest Info

| Category | Skill Level | Time required (minutes) | Max Participants | Reward | Status |
|----------|-------------|--------------------------|------------------|--------|--------|
| Squid Deployment | $\textcolor{green}{\textsf{Simple}}$ | ~250 | - | $\textcolor{red}{\textsf{75tSQD}}$ | open |

Acceptance criteria

Sync this squid using the key from the quest card. The syncing progress is tracked by the amount of data the squid has retrieved from Subsquid Network.

About this squid

This squid retrieves native token burns on ETH, BSC, Base and Moonbeam. It does not keep any data, as its sole purpose is to stress test the network.

Data ingester ("processor") code is defined for all networks in src/testConfig.ts. The executable src/main.ts chooses the settings to use based on its sole command line argument. The scripts file commands.json contains commands for running each processor (process:eth, process:bsc, process:base and process:moonbeam). You can also use sqd run to run all the services at once; the list of services is kept in the squid manifest at squid.yaml.
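
If you only want to exercise one network, the processors can typically be started individually via their commands.json entries (a sketch; command names are the ones listed above):

sqd process:eth   # start just the Ethereum processor
sqd run .         # or start every service from squid.yaml at once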

The squid uses Phase Two Subsquid Network as its primary data source.

Troubleshooting

Network errors

Your squid may get a variety of errors while trying to connect to your local gateway. Some are completely normal, some indicate problems.

HTTP 503 and 504

It is normal to receive a few of these during the sync. If all the responses you get are 503s or 504s and your gateway fails to serve any data, wait for a few hours and retry. The wait is necessary because this behavior can be caused by network upgrades, which happen frequently - it's a testnet after all.

HTTP 403

Typically occurs when the computation units (CUs) you should get for locking your tSQD fail to reach the worker nodes of the network. Here's how to approach fixing it:

  1. Make sure at least 20 minutes have passed since you ran sqd up, then try running your squid.
    • You should see no tokens listed as "Pending lock" at the gateways page. If you still see some about 40 minutes after any locking/relocking operations, contact support.
  2. If you're still getting 403s, visit the gateways page and ensure that you have some locked tSQD associated with your wallet. To do that, go to your gateway's page and check if the "Unlock" button is greyed out.
    • If it is NOT, your locking period has ended. Unlock your tokens, lock them again, restart your gateway with sqd down then sqd up and go to step 1.
    • If it is, proceed to step 3.
  3. If you're still getting 403s, attempt the following:
    • shut your gateway down with sqd down
    • remove ./query-gateway/allocations.db
    • start the gateway with sqd up
    • wait for 20 minutes
    • try running your squid
  4. If you're still getting 403s, attempt the following:
    • shut your gateway down with sqd down
    • remove ./query-gateway/allocations.db
    • unlock your tSQD (may take a while)
    • lock your tSQD again
    • start the gateway with sqd up
    • wait for 20 minutes
    • try running your squid
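
Put together, the gateway-side commands for steps 3 and 4 boil down to a sequence like this (run from the squid folder; unlocking and re-locking happens on the gateways page):

sqd down                             # stop the gateway
rm ./query-gateway/allocations.db    # drop the stale allocations database
sqd up                               # start the gateway again
# wait ~20 minutes, then try running the squid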

Connection refused

Can be identified by ECONNREFUSED in the squid logs. This means that your query gateway is not running.

  1. Check the logs of the gateway container to see if it really isn't running. To get the logs, run docker logs <query_gateway_container_name>, where the container name can be found in the output of sqd up.
  2. Run sqd get-peer-id then check if your gateway is registered. If it is, try re-running sqd up and then the quest squid.

Alternatively, shut down all the Docker containers in your system (e.g. by rebooting) and start the quest from scratch.

Timeouts

Try restarting your gateway container by running sqd down then sqd up. Then, wait for 20 minutes and try running your squid.

Contacting support

If the standard troubleshooting fails, contact us via Discord. Make sure to attach the logs of your query gateway container as a txt file or via Pastebin. To get the logs, run docker logs <query_gateway_container_name>, where the container name can be found in the output of sqd up.
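
One way to capture those logs into a file you can attach (gateway-logs.txt is just an example name):

docker logs <query_gateway_container_name> > gateway-logs.txt 2>&1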