Handle and approximate non-zero basefee
renepickhardt opened this issue
I think this approach heavily overestimates the `base_fee`: as noted in your commit message, you basically turn a `base_fee` of `x` sats into `x*100` ppm.
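To put a number on the overestimate, here is a quick sanity check, assuming the usual fee formula `fee = base_fee + amount * ppm / 1e6` with all amounts in sats (the example values are mine, not from the commit):

```python
# Assumed fee formula: fee = base_fee + amount * ppm / 1_000_000, in sats.
# A base_fee of x sats turned into x * 100 ppm charges amount * x / 10_000 sats:
# exact at amount = 10_000 sats, an overestimate for everything above that.
base_fee = 1                     # 1 sat base fee ...
approx_ppm = base_fee * 100      # ... approximated as 100 ppm

for amount in (10_000, 100_000, 1_000_000):
    print(amount, base_fee, amount * approx_ppm / 1_000_000)
# 10000 1 1.0
# 100000 1 10.0
# 1000000 1 100.0
```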
While I don't mind heavily overestimating the base_fee
as that incentivices nodes to basically not charge it anymore I thik there is also the pragmatic way to underestimate the base_fee
. So for example what I do in my simulation is that I filter by a base fee threshold. https://github.com/renepickhardt/pickhardtpayments/blob/8e4449097c305c5d0d11ed9869c20b136cf03d16/pickhardtpayments/SyncSimulatedPaymentSession.py#L64 which reads:
```python
def _prepare_mcf_solver(self, src, dest, amt: int = 1, mu: int = 100_000_000, base_fee: int = DEFAULT_BASE_THRESHOLD):
    ...
    for s, d, channel in self._uncertainty_network.network.edges(data="channel"):
        # ignore channels with too large base fee
        if channel.base_fee > base_fee:
            continue
```
Later in the computation I just ignore the base fee and use the linear component, cf. https://github.com/renepickhardt/pickhardtpayments/blob/8e4449097c305c5d0d11ed9869c20b136cf03d16/pickhardtpayments/UncertaintyChannel.py#L196, which reads:
```python
def linearized_integer_routing_unit_cost(self):
    "Note that the ppm is natively an integer and can just be taken as a unit cost for the solver"
    return int(self.ppm)
```
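Putting both pieces together, a minimal self-contained sketch of this underestimating strategy might look like the following; the `Channel` class, the threshold value, and the example channels are made up for illustration and only stand in for the real classes in the linked files:

```python
from dataclasses import dataclass

# Hypothetical threshold playing the role of DEFAULT_BASE_THRESHOLD.
BASE_FEE_THRESHOLD = 1_000  # msat

@dataclass
class Channel:
    src: str
    dest: str
    base_fee: int  # msat
    ppm: int       # proportional fee in parts per million

def solver_arcs(channels):
    """Yield (src, dest, unit_cost) arcs for a min-cost-flow solver:
    skip channels whose base_fee exceeds the threshold, then use only
    the linear ppm component as the integer unit cost."""
    for channel in channels:
        # ignore channels with too large base fee
        if channel.base_fee > BASE_FEE_THRESHOLD:
            continue
        yield channel.src, channel.dest, int(channel.ppm)

channels = [
    Channel("A", "B", base_fee=0, ppm=250),
    Channel("A", "C", base_fee=5_000, ppm=10),  # dropped: base fee too high
    Channel("C", "B", base_fee=1_000, ppm=100),
]
print(list(solver_arcs(channels)))
# -> [('A', 'B', 250), ('C', 'B', 100)]
```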
For channels that charge a small enough `base_fee` this should not cause an issue. Maybe one could do both: small `base_fee`s are just accepted and too-large `base_fee`s are penalized. Though I guess the approach of always over-penalizing the `base_fee` is just more sustainable.
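A hybrid along those lines could be as simple as branching on the threshold; this is only a sketch reusing the made-up `Channel` class from above, and the penalty constant is an arbitrary placeholder:

```python
BASE_FEE_THRESHOLD = 1_000  # msat, made-up value
PENALTY_PPM = 100_000       # arbitrary, deliberately huge penalty

def hybrid_unit_cost(channel):
    # small base fees are simply ignored ...
    if channel.base_fee <= BASE_FEE_THRESHOLD:
        return int(channel.ppm)
    # ... while large ones are heavily over-penalized instead of filtered out
    return int(channel.ppm) + PENALTY_PPM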
Note: what does not work is to charge the base fee only on the first piece of the piecewise linearization (or on the first quantization unit), as in those cases convexity is broken and solvers might just choose the other arcs.
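To illustrate why, here is a toy example (made-up numbers) of the parallel arcs a piecewise linearization generates for a single channel when the base fee is attached only to the first piece:

```python
base_fee, ppm = 1_000, 100

# Convex linear pieces would have non-decreasing unit costs: 100, 200, 300, 400.
# Adding the base fee only to piece 0 makes it the most expensive arc.
piece_costs = [ppm * (i + 1) + (base_fee if i == 0 else 0) for i in range(4)]
print(piece_costs)  # [1100, 200, 300, 400]

# A min-cost solver fills the cheapest arcs first, so it uses pieces 1-3
# before piece 0 and pays the base fee only for the very last increment
# of flow (or never, if the amount fits into pieces 1-3 entirely).
print(sorted(piece_costs))  # [200, 300, 400, 1100]
```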