bisq-network/roles

Seednode Operator

Opened this issue · 227 comments

This role is responsible for operating one or more Bisq seednodes.

See: btc_mainnet.seednodes


Docs: none, other than the above
Team: @bisq-network/seednode-operators

2018.04 report

Running 6 Bitcoin and 2 LTC instances. DigitalOcean updates their servers frequently with security patches, and the resulting restarts kill the seed node (there is no cron job for autostart). I am following the old email notifications and was alerted soon enough to restart the seed node in such cases.
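Those unattended reboots could be covered by a cron entry; a minimal sketch, assuming a hypothetical start script at /root/seed/loop.sh (the path is an assumption, not this operator's actual setup):

```
# Hypothetical crontab entry (edit with "crontab -e") to restart the
# seednode after a host reboot; adjust the path to your start script.
@reboot cd /root/seed && nohup sh loop.sh &
```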

2018.05 report

Running 6 Bitcoin and 2 LTC instances.

Emzy commented

2018.05 report

Running 1 Bitcoin instance.

bisq-network/compensation#76

2018.05 report

Running 1 Bitcoin instance
hosting: Linode in docker container

bisq-network/compensation#80

2018.06 report

Running 1 seednode instance
hosting: Linode in docker container

  • Fixed a docker bug where the Linux image had no locale configured, which crashed the container.
  • Manfred found a restart bug; it was not noticed on my node because the docker container restarts automatically, so it seems not relevant for my node.

bisq-network/compensation#83

Emzy commented

2018.06 report

Running 1 Bitcoin instance

  • Updated to the new version

bisq-network/compensation#88

2018.06 report

Running 6 seednode instances
Updated to the new version
bisq-network/compensation#92

@Emzy @mrosseel You mixed that role up with the bitcoin operator role...

I've updated the description of this role issue and updated the @bisq-network/seednode-operators team to reflect current status.

2018.07 report

Running 6 seednode instances.

/cc bisq-network/compensation#93

Emzy commented

2018.07 report

Running 1 Bitcoin seednode instance

/cc bisq-network/compensation#100

2018.07 report

Running 1 seednode instance
hosting: Linode in docker container

After last month's docker fixes, no further issues were detected.
Nothing to report

bisq-network/compensation#105

Emzy commented

2018.08 report

Running 1 Bitcoin seednode instance
hosting: Hetzner VM on my dedicated server

/cc bisq-network/compensation#111

2018.08 report

Running 6 seednode instances.

/cc bisq-network/compensation#112

2018.08 report

Running 1 seednode instance
hosting: Linode in docker container

Nothing to report

bisq-network/compensation#116

2018.09 report

Running 6 seednode instances.

/cc bisq-network/compensation#125

Emzy commented

2018.09 report

Running 1 Bitcoin seednode instance
hosting: Hetzner VM on my dedicated server

/cc bisq-network/compensation#136

2018.09 report

Running 1 seednode instance
hosting: Linode in docker container

Nothing to report

bisq-network/compensation#141

2018.10 report

Running 6 seednode instances.

/cc bisq-network/compensation#155

2018.10 report

Running 1 seednode instance
hosting: Linode in docker container

Nothing to report

bisq-network/compensation#157

Emzy commented

2018.10 report

Running 1 Bitcoin seednode instance
hosting: Hetzner VM on my dedicated server

/cc bisq-network/compensation#163

Emzy commented

2018.11 report

Running 1 Bitcoin seednode instance
hosting: Hetzner VM on my dedicated server

/cc bisq-network/compensation#175

2018.11 report

Running 6 seednode instances. Just started 2 new ones for testnet (DAO).

/cc bisq-network/compensation#180

2018.11 report

Running 1 seednode instance
hosting: Linode in docker container

Nothing to report

bisq-network/compensation#181

2018.11 report

Running 6 mainnet nodes and 2 testnet nodes (DAO).

/cc bisq-network/compensation#189

Emzy commented

2018.12 report

Running 1 Bitcoin seednode instance
hosting: Hetzner VM on my dedicated server

/cc bisq-network/compensation#191

We had a severe incident yesterday with all seed nodes.

The reason was that I updated the --maxMemory program argument from 512 to 1024 MB. My servers have 4 GB RAM and run 2 nodes each, so I thought that should be OK. But it was not: it caused out-of-memory errors, and nodes got stuck (kill -9 was required to stop them).

I increased the maxMemory setting because I saw that they restarted every 2-3 hours (earlier it was about once a day). The seed nodes check the memory they consume and automatically restart once it hits maxMemory. That is a work-around for a potential memory leak which seems to occur only on Linux (and/or seed nodes). At least on OSX with the normal Bisq app I could never reproduce it; I could even run the app with about 100 connections, which never worked on my Linux boxes. So I assume some OS setting is causing it. We researched it a bit in the past but never found the real reason (we never dedicated enough effort; we should prioritize that old issue in the near future).
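The work-around amounts to a plain threshold check; a minimal sketch in shell with illustrative numbers (the real check runs inside the seednode process itself, against its --maxMemory argument):

```shell
#!/bin/sh
# Illustrative restart-on-memory-limit check; the values are made up
# for the sketch, and in practice used_mb would be read from the JVM/OS.
MAX_MEMORY_MB=1024
used_mb=1275

if [ "$used_mb" -gt "$MAX_MEMORY_MB" ]; then
  echo "over memory limit (${MAX_MEMORY_MB} MB), triggering restart"
fi
```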

The situation was discovered late at night, when a user posted a GitHub issue saying he had no arbitrators. Checking the monitor page alerted me, as all nodes were basically without data and most were unresponsive. From my hosting provider's stats I saw that the situation had begun somewhere in the previous 12-24 hours.

The 2 nodes from Mike and Stephan remained responsive (as they had not changed anything) but were also missing data (they restart every few hours as well and therefore connect to other seeds to gather data; as the other seeds lost data over time, they also became corrupted).

It was a lesson that it is not a good idea to change too much, and to change all seeds at the same time!
The good thing is that it recovered quite quickly in the end, and the network is quite resilient even when all seeds fail (as was more or less the case).

To recover, I started one seed locally and removed all other seed addresses (in the code), so after a while it connected to persisted peers (normal Bisq apps). From those it got the data present in the network, and I then used that seed as a dedicated seed (via --seedNodes) for the other seeds to start up again. So my seeds all became filled with data again. Mike's and Stephan's seeds needed a few hours to get up to date again once they restarted (so the too-fast restart interval was a benefit here).

I upgraded my servers to 8 GB (4 GB/node) and will now test more carefully how far I can go with the --maxConnections and --maxMemory settings. Currently I run 4 nodes with --maxConnections=30 --maxMemory=1024 and 2 with --maxConnections=25 --maxMemory=750.
Stephan told me he already had 4 GB and --maxConnections=30 --maxMemory=1024 anyway, which seems a safe setting. Mike has not responded so far, but I assume he has lower settings, as his node recovered quite fast (restarted faster).

What we should do:

  • Better alerting/monitoring
    We need to get an alert from the monitoring in severe cases like this. Looking passively at the monitor page is not enough. Alerts have to be good enough to avoid false positives (like the email alerts we receive from our simple Tor connection monitoring, which I tend to ignore, as 99.9% of the time there is nothing severe).

  • Improvements in code for more resilience
    When a node starts up, it connects to a few seed nodes for initial data; that was added for more resilience in case one seed node is out of date. We should extend that to include normal persisted non-seed-node peers as well, so that if all seeds fail (like in this incident) the network still exchanges live data at startup. Only first-time users would have a problem then.

  • Investigate memory increase/limitations
    Investigate the reason for the memory increase (it might be an OS setting, such as a limit on some network resource).

I reread issue bisq-network/bisq#599, where a user also reported abnormal memory consumption under Ubuntu, and where I myself reported low memory consumption under Debian Stretch. @Emzy says he uses Debian Stretch on his seednode (and has never reported a memory issue, AFAIK).

So I wonder if this memory leak issue could be specific to Ubuntu (and could maybe be solved simply by running under Debian)?

2019.01 report

We had issues with heap memory (see above), but they are resolved now; we added more VM arguments and increased the --maxMemory program argument.

java -XX:+UseG1GC -Xms512m -Xmx4000m -jar /root/bisq/seednode/build/libs/seednode-all.jar --maxConnections=30 --maxMemory=3000 ...

The -XX:+UseG1GC argument tells the JVM to use a different garbage collector, which behaves better according to @freimair.

Heap memory defined in -Xmx must be about 20-30% larger than the amount at maxMemory.
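That headroom rule is easy to sanity-check; a quick sketch (the 25% margin is the low end of the stated 20-30%, and maxMemory=3000 mirrors the command above):

```shell
#!/bin/sh
# -Xmx should sit roughly 20-30% above --maxMemory. With maxMemory=3000 MB,
# a 25% margin suggests at least 3750 MB, so the -Xmx4000m used above
# leaves comfortable headroom.
MAX_MEMORY=3000
XMX_MIN=$(( MAX_MEMORY * 125 / 100 ))
echo "suggested minimum -Xmx: ${XMX_MIN}m"
```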

Also started 2 more seed nodes for the DAO (4 in total).

/cc bisq-network/compensation#205

Emzy commented

2019.01 report

Running 1 Bitcoin seednode instance
hosting: Hetzner VM on my dedicated server

/cc bisq-network/compensation#212

Emzy commented

2019.02 report

Running 1 Bitcoin seednode instance
hosting: Hetzner VM on my dedicated server

/cc bisq-network/compensation#225

2019.02 report

Running 6 mainnet nodes and 4 DAO testnet nodes.
Started to hand over 2 nodes to @freimair.

/cc bisq-network/compensation#227

2018.12 - 2019.02 report

Running 1 seednode instance
hosting: Linode in docker container

Did some investigation after monitoring failures seen by Manfred.
Updated parameters so it's running better now. In Grafana it looked like it might still be restarting too often, but that was another node with a similar color; restarts were 2-3 times a day, which is 'normal'. Fixing the recently discovered memory leak in seednodes might eliminate the restarts altogether.
TODO: after the seednode refactoring is done, I'll make a new docker image for the seednodes and upgrade to 0.9.3

bisq-network/compensation#220

Emzy commented

2019.03 report

Running 1 Bitcoin seednode instance
hosting: Hetzner VM on my dedicated server

/cc bisq-network/compensation#246

2019.03 report

  • 4 mainnet nodes
  • 4 DAO testnet nodes
  • 1 DAO betanet (mainnet) nodes
  • 2 testnet nodes
  • 2 DAO regtest (new network after release) nodes

/cc bisq-network/compensation#252

2019.03 report

  • running 2 mainnet seednodes

We all should get the DAO setup ready now. Here is a summary of the instructions:

Check out https://github.com/ManfredKarrer/bisq/tree/rc_v1.0.0 and build from that. There is a new mainnet genesis tx, so that can be used for a test run as a DAO full node. Do not try to run as a DAO full node from the master branch, as the genesis tx there is very old and syncing will take a long time.

Here are my conf files for btc core:
bitcoin.conf:

datadir=.....
maxconnections=800
timeout=30000
listen=0
server=1
txindex=1
rpcallowip=127.0.0.1
rpcuser=....
rpcpassword=....
blocknotify=bash /root/.bitcoin/blocknotify %s

datadir, rpcuser, rpcpassword and blocknotify need to be edited by you. I took maxconnections and timeout from my BTC nodes. We do not run it as a listening node, to save resources. We might change that later.

blocknotify file:

#!/bin/bash
echo "$1" | nc -w 1 127.0.0.1 5110

I use a small start script for the seed node
nohup sh loop.sh &

loop.sh:

#!/bin/bash

# Restart the seednode whenever it exits (e.g. after its automatic
# memory-limit restart); without the loop, "nohup sh loop.sh &" would
# leave the node down after the first exit.
while true
do
java -XX:+UseG1GC \
-Xms512m \
-Xmx2000m \
-jar bisq/seednode/build/libs/seednode-all.jar \
--maxMemory=1200 \
--maxConnections=30 \
--baseCurrencyNetwork=BTC_MAINNET \
--appName=seed \
--nodePort=8000 \
--daoActivated=true \
--fullDaoNode=true \
--rpcPort=8332 \
--rpcUser=... \
--rpcPassword=... \
--rpcBlockNotificationPort=5110 \
>/dev/null 2>>error.log
done

Be sure the rpcBlockNotificationPort matches the entry in the blocknotify file.
Please stick with the memory settings above, as I tested a lot and those seem to work well.

Be sure to have 4 GB RAM; it's needed.
300 GB of disk space is needed as well: with txindex the current blockchain is about 260 GB, and in 4-6 months we will reach 300.

If your server setup is not completely trivial, please add a small readme file in case I need access, so I can easily find out how to stop, restart, or edit config files and program arguments if you are not available.

Ah, one note: if you run it as a service, be sure the log settings are configured correctly so you don't run out of disk space! Add to the readme where the log file is and how to access it if it's not in the standard data directory.
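For the log-size concern, a logrotate rule is one option; a sketch, assuming the error.log path from the loop.sh example above (adjust the path and rotation schedule to your actual setup):

```
# Hypothetical /etc/logrotate.d/bisq-seednode; the log path is an assumption.
/root/bisq/error.log {
    weekly
    rotate 4
    compress
    missingok
    copytruncate
}
```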

Emzy commented

Here are my conf files for btc core:
bitcoin.conf:

datadir=.....
maxconnections=800
timeout=30000
listen=0
server=1
txindex=1
#rpcallowip=127.0.0.1
rpcuser=....
rpcpassword=....
blocknotify=bash /root/.bitcoin/blocknotify %s

Please don't use "rpcallowip=127.0.0.1"; it will open the RPC port to the world:

# netstat -lpntu

With rpcallowip=127.0.0.1:
tcp6 0 0 :::8332 :::* LISTEN 902/bitcoind

Without it, the port only opens on localhost ("::1"):
tcp6 0 0 ::1:8332 :::* LISTEN 4788/bitcoind
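A quick scripted version of that check (the helper name is my own, not part of any Bisq tooling; the address format mirrors the netstat output above):

```shell
#!/bin/sh
# check_rpc_bind ADDR: flag RPC listen addresses that are not loopback-only.
# ADDR is a local address column as printed by "netstat -lpntu" or "ss -lnt".
check_rpc_bind() {
  case "$1" in
    127.0.0.1:*|::1:*) echo "ok: loopback only" ;;
    *)                 echo "WARNING: RPC port reachable from outside" ;;
  esac
}

check_rpc_bind ":::8332"     # the world-open case above
check_rpc_bind "::1:8332"    # the loopback-only case
```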

2019.04 report

I have setup and transferred ownership of the 3f3cu2yw7u457ztq.onion seednode from @ManfredKarrer (bisq-network/bisq#2803).

Minimal expenses incurred for this month as I did not setup the node until the very end of the month.

/cc bisq-network/compensation#270

Emzy commented

Cycle1 report

Running 2 Bitcoin seednode instances
hosting: Hetzner VM on my dedicated server and a second dedicated server

Moved one seednode because it now needs a bitcoind (blockchain), so more resources are needed.
Setup and test of a second seednode I took over from @ManfredKarrer

/cc bisq-network/compensation#279

2019.03 & 2019.04 (Cycle 1) report

Running 1 seednode instance in March and 2 seednodes + full nodes in April
Due to a switch of provider, costs remained the same for 1 seednode, even with the extra storage.

bisq-network/compensation#281

2019.05 report

I have setup and transferred ownership of the fl3mmribyxgrv63c.onion seednode from @ManfredKarrer (bisq-network/bisq@a8ed773#diff-04f4a7f86dda2770277614e21ae570a3).

I am now running 2 seednodes on separate hosting providers, costing $50 USD / month each to satisfy the necessary requirements (2 CPU, 4GB RAM, 300GB storage).

  • fl3mmribyxgrv63c.onion
  • 3f3cu2yw7u457ztq.onion

No issues to report this month.

Both were updated to latest master on May 27 with this commit.

/cc bisq-network/compensation#295

Emzy commented

Cycle2 report

Running 2 Bitcoin seednode instances
hosting: Hetzner VM on my dedicated server and a second dedicated server

/cc bisq-network/compensation#298

Cycle 3 report

Summary

Running 2 seed nodes on mainnet.

  • fl3mmribyxgrv63c.onion
  • 3f3cu2yw7u457ztq.onion

Running 1 seed node on testnet.

  • m5izk3fvjsjbmkqi.onion

This month I deployed a seed node on testnet since the old testnet seed nodes are no longer maintained and testnet is still a useful testing environment.
bisq-network/bisq#2920

Issues Encountered

Issue 1:
On July 4th, fl3mmribyxgrv63c was delivering outdated blocks as head blocks. See the monitor for more details. The issue appeared to be caused by failing to receive notifications from bitcoind (see log snippet below); no idea why at this point. I ended up restarting the seed node, which resolved the issue.

bisq.core.dao.node.full.RpcException: NotificationHandlerException(super=com.neemre.btcdcli4j.daemon.NotificationHandlerException: Error #1004004: The operation failed due to an unknown IO exception., error=Errors(code=1004004, message=The operation failed due to an unknown IO exception.), code=1004004)
    at bisq.core.dao.node.full.RpcService.lambda$setup$0(RpcService.java:138)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: NotificationHandlerException(super=com.neemre.btcdcli4j.daemon.NotificationHandlerException: Error #1004004: The operation failed due to an unknown IO exception., error=Errors(code=1004004, message=The operation failed due to an unknown IO exception.), code=1004004)
    at com.neemre.btcdcli4j.daemon.notification.worker.NotificationWorker.call(NotificationWorker.java:64)
    at com.neemre.btcdcli4j.daemon.notification.worker.NotificationWorker.call(NotificationWorker.java:22)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
    ... 3 more
Caused by: java.net.SocketException: Connection reset
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
    at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
    at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
    at java.base/java.io.InputStreamReader.read(InputStreamReader.java:185)
    at java.base/java.io.BufferedReader.fill(BufferedReader.java:161)
    at java.base/java.io.BufferedReader.readLine(BufferedReader.java:326)
    at java.base/java.io.BufferedReader.readLine(BufferedReader.java:392)
    at com.neemre.btcdcli4j.daemon.notification.worker.NotificationWorker.call(NotificationWorker.java:46)
    ... 7 more

Issue 2:
@alexej996 was encountering issues with his seed node. While looking at the logs, it seemed to be hitting the memory limit and restarting:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We are over our memory limit (1200) and trigger a restart. usedMemory: 1275 MB. freeMemory: 295 MB
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

And frequently going over 80%:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We are over 80% of our memory limit (960) and call the GC. usedMemory: 1156 MB. freeMemory: 414 MB
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

There is a PR that was merged recently that may or may not help with memory usage: bisq-network/bisq#2501

If it doesn't help, we may require further improvements or potentially increase the memory limit for now.

Maintenance Performed

No maintenance performed this month to the mainnet seed nodes.

Expenses Incurred

Expenses incurred for the month (USD):

  • 3 x $20 for server (2 CPU, 4 GB RAM)
  • 2 x $30 for storage (300 GB)
    Note: The testnet seed node does not require additional storage, unlike the mainnet seed nodes.

Total: $120

/cc bisq-network/compensation#309

Emzy commented

Cycle3 report

Running 2 Bitcoin seednode instances
hosting: Hetzner VM on my dedicated server and a second dedicated server

/cc bisq-network/compensation#310

Cycle 2&3 report

Both seednodes now running stable.
Had some issues; after investigation, these were the results:
When both the bitcoin full node (bitcoind) and the seednode start, there is a period during which bitcoind is still verifying the blockchain, i.e. it is not ready. If the seednode process talking to bitcoind (I'll call it btc_caller) makes any requests in this period, it receives the 'RPC_IN_WARMUP' error (see https://bitcoin.stackexchange.com/questions/46662/bitcoind-error-28). This crashes the btc_caller while the seednode continues operating, so the seednode sees no BSQ blocks but otherwise works normally. One fix would be to ignore these errors in the seednode and not crash the btc_caller.
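Pending a code fix, the race can also be avoided operationally by not starting the seednode until bitcoind answers RPC; a sketch with a generic polling helper (`wait_ready` is my name, not Bisq tooling, and in practice the check would be something like `bitcoin-cli getblockchaininfo`):

```shell
#!/bin/sh
# wait_ready CMD...: poll CMD until it succeeds, to ride out bitcoind's
# RPC warmup phase before launching the seednode. This sketch gives up
# after a few attempts instead of waiting forever.
wait_ready() {
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge 3 ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# e.g.: wait_ready bitcoin-cli getblockchaininfo && sh loop.sh
```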

I'm very curious why other operators have not noticed this behavior. I have seen similar errors reported by @alexej996. One way to check is to see whether something is listening on port 5120; do this also when things are running correctly, so you can compare. If nothing is listening on 5120, bitcoind can no longer notify the seednode when there is a new block.
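That port check can be scripted; a sketch (the function name is mine, and the listing format mirrors `ss -lnt`/`netstat -lnt` output):

```shell
#!/bin/sh
# listening_on PORT LISTING: report whether a netstat/ss-style listing
# shows a listener on PORT. In practice: listening_on 5120 "$(ss -lnt)"
listening_on() {
  port="$1"; shift
  if echo "$*" | grep -q ":$port "; then
    echo "listening on $port"
  else
    echo "nothing on $port - bitcoind cannot notify the seednode"
  fi
}

listening_on 5120 "LISTEN 0 128 127.0.0.1:5120 0.0.0.0:*"
```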

bisq-network/compensation#312

Emzy commented

Cycle 4 report

Running 3 seednode instances
hosting: Hetzner VM on my dedicated server and two dedicated servers

Set up a third seednode.

/cc bisq-network/compensation#324

Cycle 4 report

Summary

Running 3 seed nodes on mainnet.

  • fl3mmribyxgrv63c.onion
  • 3f3cu2yw7u457ztq.onion
  • jhgcy2won7xnslrb.onion

Running 1 seed node on testnet.

  • m5izk3fvjsjbmkqi.onion

This month I took ownership of seed node jhgcy2won7xnslrb.
bisq-network/bisq#3002

Issues Encountered

Issue 1:
On July 22nd, fl3mmribyxgrv63c was behind on the DAO state head. See the monitor for more details. The issue appeared to be caused by that machine not having swap enabled while memory usage was maxed out. To resolve it, I added swap space to the machine and restarted the seed.

Issue 2:
On Aug 6, 3f3cu2yw7u457ztq was behind on the DAO state head. See the monitor for more details. The issue appeared to be a swap space issue again - it was maxed out at 512 MB. As a result, I increased swap space on all my seeds to 4 GB to ensure plenty of space.

Maintenance Performed

On Aug 6 I updated all my mainnet seed nodes to Manfred's branch to apply a hotfix.

I plan to update my seed nodes to follow the updated document from Florian to ensure a consistent setup. Once that is done, I will organize a backup operator in case I am unavailable for maintenance.

Expenses Incurred

Expenses incurred for the month (USD):

  • 4 x $20 for server (2 CPU, 4 GB RAM)
  • 3 x $30 for storage (300 GB)
    Note: The testnet seed node does not require additional storage, unlike the mainnet seed nodes.

Total: $170

/cc bisq-network/compensation#326

Cycle 4 report

Both seednodes running stable.
Updated to Manfred's latest fix branch as discussed in seednode channel.

bisq-network/compensation#331

wiz commented

As of 20190815 @ 0615Z, I have taken over the jhgcy2won7xnslrb.onion seed node from @devinbileck (bisq-network/bisq#3093).

Cycle 5 report

Summary

Running 2 seed nodes on mainnet.

  • fl3mmribyxgrv63c.onion
  • 3f3cu2yw7u457ztq.onion

Running 1 seed node on testnet.

  • m5izk3fvjsjbmkqi.onion

Issues Encountered

No issues encountered.

Maintenance Performed

  • I increased the storage size for my mainnet nodes to 350 GB since indexed bitcoin core data was approaching 300 GB.
  • I configured my nodes to report server metrics to the monitor.
  • I coordinated a backup for my mainnet nodes, @ripcurlx.
  • On Aug 28 I updated all my mainnet seed nodes to the seednode_temporaryfix branch, commit ab59d5671018493b26400897d0e6c41d6027c8e2, which contained P2P bug fixes to help node stability issues.

Expenses Incurred

Expenses incurred for the month (USD):

  • 3 x $20 for server (2 CPU, 4 GB RAM)
  • 2 x $35 for storage (350 GB)
    Note: The testnet seed node does not require additional storage, unlike the mainnet seed nodes.

Total: $130

/cc bisq-network/compensation#353

Emzy commented

Cycle 5 report

Running 3 seednode instances
hosting: Hetzner VM on my dedicated server and two dedicated servers

Maintenance Performed

  • configured two nodes to report server metrics via collectd
  • installed seednode_temporaryfix branch
  • increased memory used by JAVA

/cc bisq-network/compensation#355

wiz commented

Cycle 5 report

Running 1 seednode instance

/cc bisq-network/compensation#352

Cycle 5 report

Both seednodes running stable.
Updated one of the seednodes with the new docker collectd process, still testing. Will migrate the other seednode once everything works.

bisq-network/compensation#362

Emzy commented

Cycle 6 report

Running 3 seednode instances
hosting: Hetzner VM on my dedicated server and two dedicated servers

Maintenance Performed

  • updated to new version of seednode_temporaryfix branch

/cc bisq-network/compensation#373

Cycle 6 report

Summary

Running 2 seed nodes on mainnet.

  • fl3mmribyxgrv63c.onion
  • 3f3cu2yw7u457ztq.onion

Running 1 seed node on testnet.

  • m5izk3fvjsjbmkqi.onion

Issues Encountered

On Sept 18, fl3mmribyxgrv63c restarted and began syncing from the genesis transaction.

Maintenance Performed

On Sept 18, I updated all my mainnet seed nodes to the seednode_temporaryfix branch, commit d429e12bf914b3a3861f02884451b341d35f7cd7, which contained V1.1.6 updates for seednodes.

Expenses Incurred

Expenses incurred for the month (USD):

  • 3 x $20 for server (2 CPU, 4 GB RAM)
  • 2 x $35 for storage (350 GB)
    Note: The testnet seed node does not require additional storage, unlike the mainnet seed nodes.

Total: $130

/cc bisq-network/compensation#378

wiz commented

Cycle 6 report

Running 1 seednode
Planning to take over 1 seednode from @Emzy soon, so 4 operators will each run 2 seednodes.

/cc bisq-network/compensation#380

Emzy commented

Cycle 7 report

Running 3 seednode instances
hosting: Hetzner VM on my dedicated server and two dedicated servers

Maintenance Performed

  • updated to new versions of seednode

/cc bisq-network/compensation#397

wiz commented

Cycle 7 report

Running 1 instance now, but I have just set up a powerful new bare-metal server with fast SSDs running FreeBSD to take over 1 of the seednodes from @Emzy in a few days. Bisq on FreeBSD is great.

/cc bisq-network/compensation#398

Cycle 7 report

Summary

Running 2 seed nodes on mainnet.

  • fl3mmribyxgrv63c.onion
  • 3f3cu2yw7u457ztq.onion

Running 1 seed node on testnet.

  • m5izk3fvjsjbmkqi.onion

Issues Encountered

No issues encountered.

Maintenance Performed

Updated to latest seednode versions as necessary.

Expenses Incurred

Expenses incurred for the month (USD):

  • 3 x $20 for server (2 CPU, 4 GB RAM)
  • 2 x $35 for storage (350 GB)
    Note: The testnet seed node does not require additional storage, unlike the mainnet seed nodes.

Total: $130

/cc bisq-network/compensation#400

Cycle 6&7 report

Both seednodes running stable.
Various updates performed as needed for the new bisq versions.
Updated explanations for backup operator.

bisq-network/compensation#407

wiz commented

As of 20191206 @ 1915Z, I have taken over the ef5qnzx6znifo3df.onion seed node from @Emzy bisq-network/bisq#3760

Emzy commented

Cycle 8 report

Running 3 seednode instances until 2019-12-06
Running 2 seednode instances after 2019-12-06
hosting: Hetzner VM on my dedicated server and two dedicated servers

Maintenance Performed

  • updated to new versions of the seednode
  • took over the backup role for one seednode.

/cc bisq-network/compensation#432

Cycle 8 report

Summary

Running 2 seed nodes on mainnet.

  • fl3mmribyxgrv63c.onion
  • 3f3cu2yw7u457ztq.onion

Running 1 seed node on testnet.

  • m5izk3fvjsjbmkqi.onion

Issues Encountered

None.

Maintenance Performed

Updated to latest versions as necessary.

Requested Compensation

  • 3 x 20 USD for server costs (2 CPU, 4 GB RAM)
  • 2 x 35 USD for storage costs (350 GB)
    Note: The testnet seed node does not require additional storage, unlike the mainnet seed nodes.
  • 50 USD x 3 for maintenance and to ensure 24/7 trouble-free operation

Total: 280 USD

/cc bisq-network/compensation#437

wiz commented

Cycle 8 report

Running jhgcy2won7xnslrb and ef5qnzx6znifo3df seednodes.

/cc bisq-network/compensation#448

Emzy commented

Cycle 9 report

Running 2 seednode instances
hosting: Hetzner VM on my dedicated server and one on a dedicated server

Maintenance Performed

  • updated to new versions of the seednode

/cc bisq-network/compensation#467

Cycle 8&9 report

Both seednodes running stable.
Various updates performed as needed for the new bisq versions.
Added monitoring to my docker builds

bisq-network/compensation#468

Cycle 9 report

Summary

Running 2 seed nodes on mainnet.

  • fl3mmribyxgrv63c.onion
  • 3f3cu2yw7u457ztq.onion

Running 1 seed node on testnet.

  • m5izk3fvjsjbmkqi.onion

Issues Encountered

None.

Maintenance Performed

Updated to latest versions as necessary.

Requested Compensation

  • 3 x 20 USD for server costs (2 CPU, 4 GB RAM)
  • 2 x 35 USD for storage costs (350 GB)
    Note: The testnet seed node does not require additional storage, unlike the mainnet seed nodes.
  • 50 USD x 3 for maintenance/operational costs

Total: 280 USD

/cc bisq-network/compensation#472

wiz commented

Cycle 9 report

Running jhgcy2won7xnslrb and ef5qnzx6znifo3df seednodes:

  • Upgraded to v1.2.5
  • Occasional crashes due to OOM and other Bisq bugs(?)
  • Running an experimental new Tor configuration that seems to have improved network stability issues

/cc bisq-network/compensation#470

Emzy commented

Cycle 10 report

Running 2 seednode instances
hosting: Hetzner VM on my dedicated server and one on a dedicated server

Maintenance Performed

/cc bisq-network/compensation#484

Cycle 10 report

Running 2 instances. Nothing to report this cycle.

/cc bisq-network/compensation#481

Emzy commented

Cycle 11 report

Running 2 seednode instances
hosting: Hetzner VM on my dedicated server and one on a dedicated server

Maintenance Performed

/cc bisq-network/compensation#508

Cycle 10&11 report

Both seednodes running stable.
Nothing to report.

bisq-network/compensation#511

Cycle 11 report

Running 2 instances. Nothing to report this cycle.

/cc bisq-network/compensation#512

wiz commented

Cycle 11 report

Running 2 instances v1.2.8

/cc bisq-network/compensation#515

Cycle 12 report

Running 2 instances.
I deployed brand new instances using the new one-click install script. The instances have beefier specs than the old ones.

/cc bisq-network/compensation#531

Emzy commented

Cycle 12 report

Running 2 seednode instances
hosting: Hetzner VM on my dedicated server and one on a dedicated server

Maintenance Performed

/cc bisq-network/compensation#541

Cycle 12 report

Both seednodes running stable.
Nothing to report.

bisq-network/compensation#543

wiz commented

Cycle 12 report

  • Running 3 instances, but only 2 in Bisq currently
  • Created a Tor V3 seednode
  • Removed a Tor V2 seednode

bisq-network/compensation#547

Cycle 13 report

Running 2 instances.
Updated nodes to resolve the DAO majority hash issue bisq-network/bisq#4206.

/cc bisq-network/compensation#560

Cycle 13 report

Both seednodes running stable.
Updated for bisq-network/bisq#4206

bisq-network/compensation#562

Emzy commented

Cycle 13 report

Running 2 seednode instances

Maintenance Performed

  • a few restarts

/cc bisq-network/compensation#570

Cycle 14 report

Running 2 instances.
I have recently deployed a third instance on Tor V3 (bisq-network/bisq#4320).

/cc bisq-network/compensation#593

Emzy commented

Cycle 14 report

Running 2 seednode instances

Maintenance Performed

I have recently deployed a third instance on Tor V3

/cc bisq-network/compensation#595

Cycle 14 report

Both seednodes running stable.
Some false positives from the monitoring; when logging in, everything seemed OK, and it resolved itself afterwards.

bisq-network/compensation#601

wiz commented

Cycle 14 report

Running 4 instances, after adding 2 new Tor V3 seednodes, wizseed3 and wizseed7
Retired old Tor V2 seednode ef5, but will continue running it for a few months until the phase-out period ends
Imported some signed witness data from @sqrrm for Canada users who ended up unsigned

/cc bisq-network/compensation#602

Cycle 15 report

Both seednodes running stable.

bisq-network/compensation#615

Emzy commented

Cycle 15 report

Running 3 seednode instances

Maintenance Performed

  • third instance with tor address V3 is active

/cc bisq-network/compensation#617

Cycle 15 report

Running 3 instances.
Nothing to report.

/cc bisq-network/compensation#621

wiz commented

Cycle 15 report

  • Running 3 instances.
  • Planning to shut down ef5 after v1.3.7 is released

/cc bisq-network/compensation#632

Cycle 16 report

Both seednodes running stable.

bisq-network/compensation#638

Emzy commented

Cycle 16 report

Running 3 seednode instances

/cc bisq-network/compensation#644

Cycle 16 report

Running 5 instances, 2xV2 and 3xV3.
This cycle I added 2 new V3 nodes to replace my V2 nodes which are now pending retirement - see bisq-network/bisq#4408

/cc bisq-network/compensation#647

wiz commented

Cycle 16 report

  • Shut down the ef5 seednode after its 3-month retirement period, as per bisq-network/ops#4
  • Was running 4 instances, now down to 3

/cc bisq-network/compensation#650

Cycle 17 report

Running 5 instances, 2xV2 and 3xV3.
My V2 nodes are pending retirement in November after 3 month retirement period as per bisq-network/ops#4.

/cc bisq-network/compensation#665

Emzy commented

Cycle 17 report

Running 3 seednode instances

/cc bisq-network/compensation#677

Cycle 17 report

Both seednodes running stable.

bisq-network/compensation#681

Cycle 18 report

There are 2 v2 seednodes: running stable.
There is 1 v3 seednode: running stable.
There are 2 new v3 seednodes which will be billed in the next cycle; one of them is not working correctly at the moment.

bisq-network/compensation#691