DestructiveVoice/DestructiveFarm

Future Plans

borzunov opened this issue

I will be glad to review pull requests implementing the features listed below. If you plan to make a large contribution, please create an issue to discuss the planned changes in advance. If your PR is accepted, I will consider adding you to the "Contributors" section in the readme.

Feel free to make PRs with new checksystem protocols as well.

User experience:

  • Don't require a shebang if --interpreter is set (implemented by @TheAvidDev).
  • Add a short option variant for --server-url.
  • Show the number of flags added (and the number of new flags) after the "Add Flags Manually" form is used, e.g. "Found X new flags among the Y flags in the form". The current behavior is misleading if a user adds an already existing flag (it looks like nothing has happened).
  • Use a virtualenv for the server to avoid dependency conflicts: set everything up and install the dependencies in start_server.sh. Mention the feature in the rejected Docker-related PRs.
  • Measure the percentage of exploit runs that exceed the time limit using only the last N attacks (see the sliding-window sketch after this list).
  • Don't let Ctrl+R reset the flag search form; add a separate "Clear" button instead. This may require a better front-end framework (see "Refactoring" below).
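
For the sliding-window measurement, something like the following could work: track only the outcomes of the last N runs with a bounded deque. This is a minimal sketch; the class name and window size are illustrative, not taken from the farm code.

```python
from collections import deque

class TimeoutStats:
    """Timeout rate over a sliding window of the most recent runs."""

    def __init__(self, window_size=100):  # N = 100 is an arbitrary choice
        # True = the run hit the time limit; maxlen drops the oldest entry.
        self._runs = deque(maxlen=window_size)

    def record(self, timed_out):
        self._runs.append(bool(timed_out))

    def timeout_percentage(self):
        if not self._runs:
            return 0.0
        return 100.0 * sum(self._runs) / len(self._runs)

stats = TimeoutStats()
for timed_out in (False, False, True, False):
    stats.record(timed_out)
print('%.0f%% of recent runs timed out' % stats.timeout_percentage())
```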

Reliability:

  • Add a limit on the number of flags per (sploit, team) pair to the farm client to avoid bloating the farm server DB during aggressive flag spamming. When the limit is reached, show a warning and send only a small random subset of the flags (see the first sketch after this list).
  • Simplify get_fair_share() with the table algorithm: the current implementation is biased towards sending more flags from smaller groups, which can be undesirable (see the second sketch after this list).
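
For the per-(sploit, team) limit, a minimal client-side sketch; the limit value and function name are assumptions:

```python
import random

FLAG_LIMIT = 500  # per (sploit, team); the exact value is an assumption

def trim_flags(flags, limit=FLAG_LIMIT):
    """Return at most `limit` flags, sampling randomly when over the cap."""
    if len(flags) <= limit:
        return flags
    print('Warning: %d flags for one (sploit, team) pair, '
          'sending a random subset of %d' % (len(flags), limit))
    return random.sample(flags, limit)
```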
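For the table algorithm, one possible shape of get_fair_share(): fill the "table" row by row, taking one flag from each non-exhausted group per round until the budget runs out, so group size doesn't bias the selection. The name and signature are illustrative, not the farm's actual API:

```python
def fair_share(groups, budget):
    """groups: list of flag lists (one per group); return at most `budget` flags."""
    iters = [iter(group) for group in groups]
    picked = []
    while iters and len(picked) < budget:
        alive = []
        for it in iters:  # one "table row": one flag from each group
            if len(picked) >= budget:
                break
            try:
                picked.append(next(it))
            except StopIteration:
                continue  # this group is exhausted; drop its iterator
            alive.append(it)
        iters = alive
    return picked
```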

Optimizing resource use:

  • Add an option to run Python exploits in threads instead of processes (see the thread pool sketch after this list). Measure the gain.
  • (?) If an exploit finishes before the time limit, we can increase the time limit for the other exploits.
  • (?) The farm client runs 2 threads and 1 process for each exploit instance. We can reduce this to 1 thread and 1 process if we read the exploit output using select(), but this may be too complicated (see the select() sketch after this list). Measure the gain.
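
For the threads option, a minimal sketch using concurrent.futures. Note this only helps if exploits are mostly I/O-bound, since CPU-bound Python threads contend on the GIL; the exploit entry point below is a stand-in, not the farm's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_exploit(team_host):
    # Stand-in for importing the sploit as a module and calling its entry
    # point in-process; a real implementation would capture its output and
    # extract flags from it.
    return ['FLAG_from_%s' % team_host]

teams = ['10.60.%d.2' % n for n in range(1, 11)]
# One worker thread per concurrent exploit run instead of one process:
with ThreadPoolExecutor(max_workers=10) as pool:
    for flags in pool.map(run_exploit, teams):
        pass  # forward the extracted flags to the submit queue
```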
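For the select() idea, a POSIX-only sketch that drains stdout of several child processes from a single thread; a real client would also track stderr and time limits:

```python
import os
import select
import subprocess

# Spawn a few stand-in "exploit" processes.
procs = [subprocess.Popen(['echo', 'flag%d' % n], stdout=subprocess.PIPE)
         for n in range(3)]
pipes = {p.stdout.fileno(): p for p in procs}

while pipes:
    # Block until at least one exploit has produced output.
    ready, _, _ = select.select(list(pipes), [], [])
    for fd in ready:
        chunk = os.read(fd, 4096)
        if chunk:
            print('output:', chunk.decode(errors='replace').strip())
        else:
            pipes.pop(fd).wait()  # EOF: the instance exited
```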

Ideas:

  • Make releases, write a changelog.
  • Add a "Plots" tab with performance plots (= the number of received/ACCEPTED/REJECTED flags) for each exploit. After a click - performance plots of this exploit for each victim.
  • Send stdout/stderr of each exploit instance to the farm server. Add an "Exploit Instances" tab with all sploit instances where a user can open their output. Don't print the output on the client (show a nice curses UI instead).
  • Log exceptions from the submit loop to stderr (implemented by @abbradar).
  • Add a "Logs" tab to the web interface with exceptions from the submit loop.
  • Make start_sploit.py send the sploit source (if it's a script) to the farm server so a user can open it there. Store the source code gzipped and identify it by hash (no ids); see the content-addressed storage sketch after this list. Maybe add a "Sources" tab?
  • Randomize the User-Agent in exploits (see the sketch after this list).
  • Parse scoreboards to get the team list (add scoreboard protocols?).
  • Protect API with password (implemented by @nsychev).
  • (?) Check the amount of free RAM; if memory is running low, don't launch new processes and kill the existing ones (see the sketch after this list). This may be too complicated, or may be better solved by configuring the OS's OOM killer instead.
  • (?) Use digest auth instead of basic auth.
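
For content-addressed source storage, a minimal sketch; the directory and function names are assumptions:

```python
import gzip
import hashlib
import os

STORAGE_DIR = 'sploit_sources'  # hypothetical location on the farm server

def store_source(source: bytes) -> str:
    """Store a sploit source gzipped, addressed by its SHA-256 hash."""
    digest = hashlib.sha256(source).hexdigest()
    os.makedirs(STORAGE_DIR, exist_ok=True)
    path = os.path.join(STORAGE_DIR, digest + '.gz')
    if not os.path.exists(path):  # content-addressed: identical sources dedupe
        with gzip.open(path, 'wb') as f:
            f.write(source)
    return digest

def load_source(digest: str) -> bytes:
    with gzip.open(os.path.join(STORAGE_DIR, digest + '.gz'), 'rb') as f:
        return f.read()
```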
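For User-Agent randomization, the simplest shape; the agent list here is a short illustrative pool, a real one would be larger and kept up to date:

```python
import random

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15',
]

headers = {'User-Agent': random.choice(USER_AGENTS)}
# e.g. requests.get(url, headers=headers)
```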
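For the free-RAM check, a sketch using the third-party psutil package; the threshold is an illustrative assumption:

```python
import psutil  # third-party: pip install psutil

MIN_AVAILABLE_MB = 256  # illustrative threshold

def can_spawn_exploit():
    """Return False when available memory is too low to start a new process."""
    available_mb = psutil.virtual_memory().available / (1024 * 1024)
    return available_mb >= MIN_AVAILABLE_MB
```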

Testing:

  • (?) Test kill() and flush() more thoroughly on different OSes.
  • (?) Add typing.
  • (?) Add unit tests.

Refactoring:

  • Use an ORM instead of constructing SQL queries by hand in server/views.py (see the sketch after this list).
  • Use a modern front-end framework.
  • Resolve TODO and FIXME comments in the code.
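
For the ORM item, a sketch of what the queries in server/views.py could look like with SQLAlchemy (1.4+). The Flag model and its columns are illustrative assumptions, not the farm's actual schema:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Flag(Base):
    __tablename__ = 'flags'
    id = Column(Integer, primary_key=True)
    flag = Column(String, unique=True)
    sploit = Column(String)
    team = Column(String)
    status = Column(String)

engine = create_engine('sqlite:///farm.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    # Equivalent to a hand-written "SELECT ... WHERE status = ?" query:
    queued = session.query(Flag).filter_by(status='QUEUED').all()
```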