orbitdb-archive/ipfs-log

Ability for bad actor to 'flood' a public log as essentially a denial of service attack? [Question]

rosolam opened this issue · 1 comment

I have been considering several architectures for a project utilizing IPFS/OrbitDB, which I am new to but learning and loving in a hurry. Certain design patterns seem optimal with a public access controller, but I am concerned that a bad actor could very easily add an arbitrary number of records to a log, and about the consequences of that. It seems to me that even if the distributed applications could weed out the "bad" records in some fashion, the sheer volume could make transmitting and crawling the graph problematic or infeasible.

One use case I was considering was implementing it as a foundation for a transactional interface where each user would "own/monitor" a public log that would allow other previously-unknown users to post "requests" to them. Here is a silly example:

  1. Alice writes a request to Bob's public log, "Will you be my friend?", perhaps this is encrypted with Bob's public key so only he can read the request
  2. Bob retrieves the request, checks the signature, decrypts it and decides to say yes
  3. Bob writes a response to Alice's public log, "Yes, here is a secret password for us to use when we talk"
  4. Alice retrieves the response, checks the signature, decrypts it to use later.
  5. Bob writes another response to Alice's public log later on, "Hi, going forward let's use this new password when we talk"

In this example, if Charlie had written 100 million entries to Bob's public log, then Bob, by monitoring the log, would trigger conflict-free replication and join those records, replicating all of them to his device in both storage and record count.

Another simple use case could be comments on a public blog post. Again, even if you could weed out the spam from the valid comments in some fashion, a bad actor could flood the log with records, impacting everyone keeping synchronized copies.

Is this a valid concern, or am I missing an important aspect of how ipfs logs work? I have alternative approaches in mind, but they require more complexity.

Thanks,
Michael

I'd be interested to know if you took any particular approach to production for this issue.

The way that I'm planning to approach this is at the libp2p level, preventing a given remote peer from flooding pubsub messages to the local peer. However, that is predicated on the assumption that the remote peer isn't themselves unwittingly relaying an attacker's messages.
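The per-peer throttling described above could be sketched as a token bucket keyed by peer ID. The `allowMessage` hook name is hypothetical; libp2p's gossipsub actually exposes peer scoring and message validators rather than this exact interface:

```javascript
// Minimal token-bucket sketch for throttling inbound pubsub messages
// per remote peer. Each peer gets `capacity` tokens, refilled at
// `refillPerSec`; a message is delivered only if a token is available.
class PeerRateLimiter {
  constructor({ capacity = 20, refillPerSec = 5 } = {}) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.buckets = new Map(); // peerId -> { tokens, last }
  }

  allowMessage(peerId, now = Date.now()) {
    let b = this.buckets.get(peerId);
    if (!b) {
      b = { tokens: this.capacity, last: now };
      this.buckets.set(peerId, b);
    }
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - b.last) / 1000;
    b.tokens = Math.min(this.capacity, b.tokens + elapsedSec * this.refillPerSec);
    b.last = now;
    if (b.tokens >= 1) {
      b.tokens -= 1;
      return true; // deliver the message
    }
    return false;  // drop: this peer is flooding
  }
}
```

A flooding peer exhausts its bucket quickly and gets dropped, while well-behaved peers are unaffected; but as noted above, this punishes the relaying peer, not necessarily the original attacker.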

One could also rate limit by the pubsub message signer's public key, but that incurs the cost of recovering the signer's key from the message. An attacker could also just randomize their key to defeat identification, though hopefully the cost per key generation would be high enough to frustrate that approach.