MORC
Automatic extraction of French DFIR-ORC archives
Tested on: Ubuntu 20.04 LTS
Requirements
- Install a fresh Ubuntu 20.04 x64
- Update and upgrade packages
sudo apt update && sudo apt upgrade -y
- Install packages for MORC and Bulk_extractor
sudo apt install p7zip-full python3-magic python3-pip autoconf automake default-jdk libewf-dev sqlite3 -y
pip3 install --upgrade requests
- Install bulk_extractor
To install bulk_extractor, follow: https://github.com/simsong/bulk_extractor
cd ~
git clone https://github.com/simsong/bulk_extractor.git --recursive
cd bulk_extractor
./etc/CONFIGURE_UBUNTU18LTS.bash
chmod +x bootstrap.sh
./bootstrap.sh
./configure
make
sudo make install
cd ~
- MORC works with a lot of files, so you must raise the ulimits for the system and user environments.
sudo bash -c "cat <<EOF >> /etc/security/limits.conf
# Ulimits max 1 000 000 files
* soft nofile 1000000
* hard nofile 1000000
root soft nofile 1000000
root hard nofile 1000000
EOF
"
sudo bash -c "cat <<EOF >> /etc/pam.d/common-session
session required pam_limits.so
EOF
"
Log out or reboot for the change to take effect:
exit
or
reboot
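After logging back in, you can verify that the new limit is applied. A minimal check with Python's resource module (running ulimit -n in the shell gives the same information):

import resource

# Current soft/hard limits on open file descriptors for this session
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft nofile limit:", soft)
print("hard nofile limit:", hard)

# 1000000 is the value set in /etc/security/limits.conf above
if soft < 1000000:
    print("Warning: the new limit is not active yet, log out or reboot again")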
- Install MORC with its Python requirements
cd ~
git clone https://github.com/DvAu26/MORC
cd MORC
pip3 install -r requirements.txt
Usage
- Configure working directories
Edit the config.ini file of MORC (with vi, vim or nano)
vim ~/MORC/config.ini
- Change the BASE_DIR in config.ini
BASE_DIR = /mnt/MORC/
BASE_DIR is the root directory MORC will work in. In this folder, MORC will create the directories INPUT, WORKSPACE and OUTPUT (by default). For example:
BASE_DIR = /case/MORC/
- Change the BASE_NAME in config.ini (optional)
The default configuration works fine. You can adapt it to the naming of the DFIR-ORC files that you submit to MORC.
BASE_NAME =['ORCSYS','ORCMEM']
BASE_NAME =['ORCSYS','DFIR-ORC','ORCYARA','ORCMEM']
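To double-check your edits, you can read config.ini back with Python's configparser. A minimal sketch; the section name inside config.ini is an assumption, adjust the code to your file:

import ast
import configparser
import os

# Parse MORC's configuration and print the two options edited above
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/MORC/config.ini"))

for section in config.sections() or ["DEFAULT"]:
    if config.has_option(section, "BASE_DIR"):
        print("BASE_DIR :", config.get(section, "BASE_DIR"))
    if config.has_option(section, "BASE_NAME"):
        # BASE_NAME is written as a Python-style list, e.g. ['ORCSYS','ORCMEM']
        print("BASE_NAME:", ast.literal_eval(config.get(section, "BASE_NAME")))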
- MORC initialization
python3 MORC.py
This will create the default directories INPUT, WORKSPACE and OUTPUT in the path specified by the BASE_DIR variable.
NB: to keep a log of the run:
python3 MORC.py 2>&1 | tee -a MORC.log
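After this first run you can confirm that the layout was created. A small sketch, assuming BASE_DIR was set to /case/MORC/ as in the example above:

import os

BASE_DIR = "/case/MORC/"  # use the value from your config.ini
for sub in ("INPUT", "WORKSPACE", "OUTPUT"):
    path = os.path.join(BASE_DIR, sub)
    print(path, "->", "present" if os.path.isdir(path) else "missing")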
- DFIR-ORCs submission
Copy the DFIR-ORC files to the INPUT directory.
Nothing more to do... You just have to wait a minute for the file(s) to be detected by MORC and for the magic to work.
How it works
The system architecture relies on a repository (NFS export, SMB, GlusterFS...) holding the DFIR-ORC archives to be extracted and analyzed.
- BASE_DIR : directory where MORC will search for DFIR-ORCs and where MORC will perform the extraction
- IN_DIR : directory where you put your own DFIR-ORCs.
- WORK_DIR : directory where MORC will extract, analyze...
- OUT_DIR : directory where MORC puts the analyzable files.
DFIR-ORC formats its output archive names as [BASE_NAME]_[COMPUTER_NAME]_[DATE]_[SUFFIX_NAME].7z.
- BASE_NAME : list of name prefixes that identify your own DFIR-ORCs.
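As an illustration of this naming convention, here is a hedged sketch of how an archive name can be matched against the BASE_NAME prefixes; this is not MORC's actual matching code, only the convention above expressed in Python:

import re

# DFIR-ORC naming convention: [BASE_NAME]_[COMPUTER_NAME]_[DATE]_[SUFFIX_NAME].7z
BASE_NAME = ['ORCSYS', 'ORCMEM']  # must mirror the BASE_NAME list in config.ini
pattern = re.compile(r"^(?P<base>[^_]+)_(?P<computer>[^_]+)_(?P<date>[^_]+)_(?P<suffix>.+)\.7z$")

def is_morc_candidate(filename):
    # True when the archive name starts with one of the configured prefixes
    match = pattern.match(filename)
    return bool(match) and match.group("base") in BASE_NAME

# Hypothetical file names, for illustration only
print(is_morc_candidate("ORCSYS_WKS042_20230115_Full.7z"))  # True
print(is_morc_candidate("backup_WKS042_20230115_Full.7z"))  # False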
Little Workflow
A hash check, or at least a hash calculation, is performed before any other action.
- MORC calculates an MD5 hash for every file in IN_DIR and creates a file.md5 file with the calculated MD5s in OUT_DIR (MD5 is the first step; SHA-1 and SHA-256 will follow later). A minimal hashing sketch is shown after this list.
- MORC extracts the DFIR-ORCs into WORK_DIR/MD5/
- MORC creates, in OUT_DIR/MD5/AV/, an archive with all the artefacts that must go through an AV check
- MORC moves all CSV files to OUT_DIR/MD5/CSV/ so they can go to ELK/Splunk/another SIEM.
- MORC launches log2timeline/psort over (at least) the EVTX files and puts the resulting CSV timeline in OUT_DIR/MD5/CSV/
- MORC launches TZWorks binaries over the artefacts.
- MORC launches a metadata extractor over the EXE files.
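The first step above (hashing everything in IN_DIR) can be sketched in a few lines of Python. This is an illustrative approximation, not MORC's actual code, and the exact file.md5 format is an assumption:

import hashlib
import os

IN_DIR = "/case/MORC/INPUT"    # assumed paths, derived from the BASE_DIR example
OUT_DIR = "/case/MORC/OUTPUT"

def md5sum(path, chunk_size=1024 * 1024):
    # Hash the file chunk by chunk so large DFIR-ORC archives fit in memory
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Write one "MD5  filename" line per archive, like md5sum(1) output
with open(os.path.join(OUT_DIR, "file.md5"), "w") as out:
    for name in sorted(os.listdir(IN_DIR)):
        full_path = os.path.join(IN_DIR, name)
        if os.path.isfile(full_path):
            out.write(md5sum(full_path) + "  " + name + "\n")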
Roadmap
Little roadmap :
- PoC with subprocess and queues
- PoC with celery tasker and rabbitmq (backend and broker)
OLD
- APT packages
python3-csvkit python-plaso