bluesky-social/proposals

[Proposal 0002] - Suggesting more direct/explicit moderation control for PDS admins

joelghill opened this issue · 1 comment

The proposal for moderation does not include many details on how PDS administrators will receive reports, or on what action they will be able to take to protect themselves and their users from bad actors when they receive reports about objectionable content.

As someone who plans on hosting a PDS that may have hundreds or even thousands of users, I would expect to be able to take actions such as the following:

  1. Establish rules and terms of service for people on my PDS
  2. Receive reports about bad actors or toxic content seen by users hosted on my PDS
  3. Be able to block bad actors from interacting with people on my PDS
  4. Be able to block content from being seen by people on my PDS
  5. Be able to remove users from my PDS and warn other hosts of said user
  6. Be able to alert said bad actor as to what action I took and why.
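The admin controls above could be sketched roughly as follows. This is purely illustrative: none of these class or method names (`PDSModeration`, `block_actor`, `allow_event`, etc.) come from the AT Protocol spec or any real PDS implementation; they are hypothetical stand-ins for the kind of admin API this proposal is asking for.

```python
# Hypothetical sketch of the PDS-level moderation controls requested above.
# All names here are invented for illustration; nothing is defined by atproto.
from dataclasses import dataclass, field


@dataclass
class PDSModeration:
    blocked_actors: set = field(default_factory=set)   # action 3: blocked DIDs
    blocked_content: set = field(default_factory=set)  # action 4: blocked content IDs
    removed_users: set = field(default_factory=set)    # action 5: removed accounts

    def block_actor(self, did: str) -> None:
        """Prevent an external actor from interacting with local users."""
        self.blocked_actors.add(did)

    def block_content(self, cid: str) -> None:
        """Prevent a piece of content from being shown to local users."""
        self.blocked_content.add(cid)

    def remove_user(self, did: str, reason: str) -> str:
        """Remove a local account and return a notice explaining why (action 6)."""
        self.removed_users.add(did)
        return f"Account {did} removed: {reason}"

    def allow_event(self, author_did: str, cid: str) -> bool:
        """Gate an incoming event before it reaches local users (actions 3-4)."""
        return (author_did not in self.blocked_actors
                and cid not in self.blocked_content)
```

Whatever shape the real API takes, the point is that these decisions happen at the PDS boundary, before content reaches the PDS's users, rather than only as labels applied downstream.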

The labeling service is a good idea, but it does not prevent toxic content from propagating across the network. For example, it is not enough to put a label on racist content and allow people to hide it; we should be doing everything we can, at every layer of the network, to stop people from being able to post exceptionally harmful content in the first place.

If giving the PDS this type of control is not part of the general vision for the network, then I think we need a clearer and more detailed map of what each node in the network is responsible for with respect to moderation.