Run AI tasks on your edge device.
The server runs on your edge device, handling input and output, switching between models, streaming output after inference, and exposing parameter configuration.
- Auto-handling different input sources, such as MP4, IP cameras, and USB cameras.
- Dynamically switching AI models; currently supports YOLOv8, but is easily extensible.
- Simple and user-friendly MJPEG streaming output, accessible with just a browser, even on your phone.
- Publishes recognition results through MQTT, easily adaptable to your requirements.
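Since the server publishes recognition results over MQTT, a small client can consume them. The sketch below is a hypothetical consumer: the topic name (`edge/inference`) and the JSON payload schema are assumptions for illustration, not the project's documented API.

```python
# Hypothetical sketch: consuming the Edge's MQTT detection output.
# The payload schema below is an assumption, not the project's documented format.
import json


def parse_detections(payload: bytes):
    """Parse one (assumed) JSON message into (label, confidence, box) tuples."""
    msg = json.loads(payload)
    return [(d["label"], d["confidence"], d["box"]) for d in msg.get("detections", [])]


# With paho-mqtt installed, a subscriber could look like:
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.on_message = lambda c, userdata, m: print(parse_detections(m.payload))
# client.connect("machine-ip", 1883)
# client.subscribe("edge/inference")  # topic name is an assumption
# client.loop_forever()
```

Keeping the parsing separate from the transport makes it easy to adapt the payload handling to your own requirements.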
Note: a Jetson Orin Nano 4 GB can barely run this workload; avoid running other programs at the same time.
The easiest way to run it is with Docker. If you prefer to run the source code directly on your local machine, see Advanced Usage.
```bash
# make sure Docker is installed, then run the Edge in a container
bash scripts/run.sh
# Demo: input [sample.mp4] with model [80-object-detect.engine]
# the initial startup may take some time
```

Once the server is up, open the stream in a browser:

```
# on the same machine
http://localhost:46654/stream?src=sample.mp4&model_id=80-object-detect
# from another machine
http://machine-ip:46654/stream?src=sample.mp4&model_id=80-object-detect
```
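Besides viewing the stream in a browser, it can be consumed programmatically. The sketch below assumes the `/stream` endpoint serves standard MJPEG framing (JPEG parts delimited by SOI/EOI markers); the URL and parameters are taken from the demo above, and the `requests` usage is an illustrative assumption.

```python
# Minimal sketch of extracting frames from an MJPEG byte stream.
# Assumes standard JPEG framing: SOI marker 0xFFD8 ... EOI marker 0xFFD9.
def extract_jpegs(buffer: bytes):
    """Yield every complete JPEG frame found in a byte buffer."""
    start = buffer.find(b"\xff\xd8")  # SOI
    while start != -1:
        end = buffer.find(b"\xff\xd9", start + 2)  # EOI
        if end == -1:
            break  # frame not yet complete
        yield buffer[start:end + 2]
        start = buffer.find(b"\xff\xd8", end + 2)


# With `requests` installed, frames could be pulled from the demo endpoint:
# import requests
# resp = requests.get(
#     "http://machine-ip:46654/stream",
#     params={"src": "sample.mp4", "model_id": "80-object-detect"},
#     stream=True,
# )
# buf = b""
# for chunk in resp.iter_content(chunk_size=4096):
#     buf += chunk
#     for frame in extract_jpegs(buf):
#         ...  # decode/display the JPEG frame
```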
- To run with Python on the host, see Run In Host.
- For more details about Docker, see Run with Docker.
- The output stream is an MJPEG stream; see MJPEG.
- For the Edge UI from Seeed Studio, see Web UI.
- For publishing inference output over MQTT, see MQTT Output.
- To add your own models, see Design #models.
- To add or upload your own sources, see Design #input.
This project is released under the MIT license.
See CONTRIBUTING.md for contribution guidelines.
See CHANGELOG.md for the list of changes.