pwnagotchi

(⌐■_■) - Deep Reinforcement Learning instrumenting bettercap for WiFi pwning.


What changed in this fork:

  • Updated Raspbian image
  • Fixed image build
  • Removed the usb0 network to fix RPi 3b support (usb0 is always UP, so manual mode was being entered in any case)
  • New display drivers - waveshare_3 & waveshare35lcd
  • New config parameter to invert display colors: `ui.display.invert = true`
  • AI model loading fix
  • RPi 3b wifi channels/region fix
  • Other fixes

P.S. After installing on an RPi 3b or later, connect via SSH and run `rfkill unblock wifi` and `rfkill unblock all`. Also, if you are using an LCD35 clone or a Waveshare 3.5" LCD, you need to install the driver:
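The post-install commands above can be run in a single SSH session; a minimal sketch (run as root or via sudo):

```shell
# WiFi may be soft-blocked after a fresh install; unblock it
rfkill unblock wifi
# Unblock all remaining radios (Bluetooth etc.) for good measure
rfkill unblock all
# Verify that nothing is still listed as blocked
rfkill list
```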

  • Only clone the drivers repo and run ./LCD35-show lite — the other steps are not needed. If you want to rotate the screen, run ./LCD35-show lite 180 instead.

Then add the line echo 0 > /sys/class/graphics/fbcon/cursor_blink to /etc/rc.local.

Finally, if you don't want a very bright white background on that screen, add ui.display.invert = true to your config.toml.
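Taken together, the display setup steps above look roughly like this (the drivers repo URL is an assumption — use whichever LCD-show repo your screen vendor points you at):

```shell
# Clone the LCD drivers repo (URL is an example, not confirmed by this fork)
git clone https://github.com/goodtft/LCD-show.git
cd LCD-show
chmod +x LCD35-show

# Install the driver; append 180 to rotate the screen
./LCD35-show lite          # or: ./LCD35-show lite 180

# Disable the blinking console cursor on every boot:
# add this line to /etc/rc.local before the final 'exit 0'
echo 0 > /sys/class/graphics/fbcon/cursor_blink
```

After that, set `ui.display.invert = true` in your config.toml if the default white background is too bright.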

How to reduce power usage of RPi 3b
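No specific commands are listed in this fork's notes; as a hedged sketch, two tweaks commonly used to cut RPi 3b power draw (these are my assumptions, not part of this fork) are:

```shell
# Turn off the HDMI output, which a headless Pwnagotchi does not need
/usr/bin/tvservice -o

# Disable the onboard ACT (led0) and PWR (led1) LED triggers
echo none | tee /sys/class/leds/led0/trigger
echo none | tee /sys/class/leds/led1/trigger
```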

Pwnagotchi

Pwnagotchi is an A2C-based "AI" leveraging bettercap that learns from its surrounding WiFi environment to maximize the crackable WPA key material it captures (either passively, or by performing authentication and association attacks). This material is collected as PCAP files containing any form of handshake supported by hashcat, including PMKIDs, full and half WPA handshakes.
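As a concrete example of what you can do with the captured material, the PCAPs can be converted and fed to hashcat (assuming hcxtools and hashcat are installed; the file paths are illustrative):

```shell
# Convert captured handshakes/PMKIDs into hashcat's WPA format (mode 22000)
hcxpcapngtool -o hashes.22000 /root/handshakes/*.pcap

# Run a dictionary attack against the extracted hashes
hashcat -m 22000 hashes.22000 wordlist.txt
```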

(screenshot: the Pwnagotchi UI)

Instead of merely playing Super Mario or Atari games like most reinforcement learning-based "AI" (yawn), Pwnagotchi tunes its parameters over time to get better at pwning WiFi things in the environments you expose it to.

More specifically, Pwnagotchi uses an LSTM with an MLP feature extractor as the policy network for its A2C agent. If you're unfamiliar with A2C, here is a very good introductory explanation (in comic form!) of the basic principles behind how Pwnagotchi learns. (You can read more about how Pwnagotchi learns in the Usage doc.)

Keep in mind: unlike the usual RL simulations, Pwnagotchi learns over time. Time for a Pwnagotchi is measured in epochs; a single epoch can last from a few seconds to minutes, depending on how many access points and client stations are visible. Do not expect your Pwnagotchi to perform amazingly well at the very beginning, as it will spend its first epochs exploring several combinations of key parameters to determine the ideal adjustments for pwning the particular environment you expose it to ... but **listen to your Pwnagotchi when it tells you it's boring!** Bring it into novel WiFi environments with you and have it observe new networks and capture new handshakes, and you'll see. :)

Multiple units within close physical proximity can "talk" to each other, advertising their presence to each other by broadcasting custom information elements using a parasite protocol I've built on top of the existing dot11 standard. Over time, two or more units trained together will learn to cooperate upon detecting each other's presence by dividing the available channels among them for optimal pwnage.

Documentation

https://www.pwnagotchi.ai

Links

  • Website: pwnagotchi.ai
  • Forum: community.pwnagotchi.ai
  • Slack: pwnagotchi.slack.com
  • Subreddit: r/pwnagotchi
  • Twitter: @pwnagotchi

License

pwnagotchi is made with ♥ by @evilsocket and the amazing dev team. It is released under the GPL3 license.