Apex CV YOLO v8 Aim Assist Bot

  • Due to the widespread adoption of controller aim assist on PC gaming...

Introduction

  • The aim of this project is to provide a quality object detection model.
  • Object detection is a technique used in computer vision for the identification and localization of objects within an image or a video.
  • This project is based on: Franklin-Zhang0/Yolo-v8-Apex-Aim-assist.
  • It is safe to use if you follow the best practices.
  • This project is regularly maintained. Better models are made available as more training data is collected. It is a slow but steady endeavour.

Features and Requirements

Features:

  • Faster screen capture with dxshot
  • Faster CPU NMS (Non-Maximum Suppression) with NumPy
  • Optional GPU NMS (Non-Maximum Suppression) with TensorRT efficientNMSPlugin
  • Class targeting: Ally, Enemy, Tag
  • Humanized mouse control with PID (Proportional-Integral-Derivative)
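The CPU NMS feature listed above can be sketched with NumPy roughly as follows. This is a minimal illustrative sketch, not the project's actual implementation; the function name and box layout (`[x1, y1, x2, y2]`) are assumptions:

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.8):
    """Greedy Non-Maximum Suppression on [x1, y1, x2, y2] boxes."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top-scoring box with the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box above the IoU threshold
        order = order[1:][iou <= iou_thres]
    return keep
```

Vectorizing the overlap computation like this is what makes the NumPy version faster than a naive per-pair Python loop.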

Requirements:

  • NVIDIA Turing / Ampere / Ada Lovelace GPU

Benchmarks

Test system:
- OS: Windows 10 Enterprise 1803 (OS build 17134)
- CPU: Intel Core i7 3770K @ 4.0 GHz
- GPU: NVIDIA GeForce RTX 2070_8G / 3080_12G
- RAM: 16G DDR3 @ 2133 MHz
- Monitor resolution: 1920 x 1080
- In-game resolution: 1920 x 1080
- In-game FPS: RTSS async locked @ 72 FPS
GPU           imgsz      apex_8s.pt  apex_8s.trt  Precision
RTX 2070_8G   640/1080p  51/35 FPS   72/50 FPS    FP16/32
RTX 3080_12G  640/1080p  53 FPS      72 FPS       FP32
Video settings:
- Aspect Ratio               16:9
- Resolution                 1920 x 1080
- Brightness                 50%
- Field of View (FOV)        90
- FOV Ability Scaling        Enabled
- Sprint View Shake          Normal
- V-Sync                     Disabled
- NVidia Reflex              Enabled+Boost
- Adaptive Resolution FPS    60
- Adaptive Supersampling     Disabled
- Anti-aliasing              TSAA
- Texture Streaming Budget   Ultra (8GB VRAM)
- Texture Filtering          Anisotropic 16X
- Ambient Occlusion Quality  High
- Sun Shadow Coverage        Low
- Sun Shadow Detail          Low
- Spot Shadow Detail         Low
- Volumetric Lighting        Enabled
- Dynamic Spot Shadows       Enabled
- Model Detail               High
- Effects Detail             High
- Impact Marks               Disabled
- Ragdolls                   Low

0. Disclaimer

  • This guide has been tested twice, each time on a fresh install of Windows.
    • Every detail matters. If you are having issues, you are not following the guide.

1. Environment set up in Windows

  • Version checklist:

    CUDA    cuDNN  TensorRT  PyTorch
    12.1.0  8.9.0  8.6.1.6   2.1.2
  • Extract Apex-CV-YOLO-v8-Aim-Assist-Bot-main.zip to C:\TEMP\Ape-xCV

  • Install Visual Studio 2019 Build Tools.

    • Download from: OneDrive or Microsoft website.
    • On Individual components tab:
      • ✅ MSVC v142 - VS 2019 C++ x64/x86 build tools (Latest)
      • ✅ C++ CMake tools for Windows
      • ✅ Windows 10 SDK (10.0.19041.0)
    • ➡️ Install
  • Install CUDA 12.1.0 from: NVIDIA website.

    • ✅ I understand, and wish to continue the installation regardless.
  • Install cuDNN 8.9.0.

    • Register for the NVIDIA developer program.
      • Go to the cuDNN download site: cuDNN download archive.
      • Click Download cuDNN v8.9.0 (April 11th, 2023), for CUDA 12.x.
      • Download Local Installer for Windows (Zip).
    • Unzip cudnn-windows-x86_64-8.9.0.131_cuda12-archive.zip.
    • Copy all three folders (bin, include, lib) and paste them (overwriting) into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1
  • Install Python 3.10.0 (64-bit) from: Python website.

    • ❌ Install launcher for all users
    • ✅ Add Python 3.10 to PATH
    • ➡️ Install Now
      • 🔰 Disable path length limit
  • Install python requirements.

cd /D C:\TEMP\Ape-xCV
pip install numpy==1.23.1
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt

2.1 Usage

⚠️ Lock your in-game FPS. Give your GPU some slack. It is needed for object detection. ⚠️

  • Do not use V-Sync to lock your FPS. V-Sync introduces input lag.

    • Use NVIDIA Control Panel.
    • OR
    • Use RTSS.
  • Set in-game mouse sensitivity to 3.50.

    • The PID control (Kp, Ki, Kd) values in args_.py come already fine-tuned.
    • If the mouse moves too fast, EAC will flag your account and you will be banned on the next ban wave.
      • So, don't mess with the PID. Change your mouse DPI instead.
  • SHIFT

    • Hold to lock on target.
  • LEFT_LOCK

    • Enabled when pressing '1' or '2'. Disabled when pressing 'G'.
    • Use to lock on target while firing automatic weapons.
  • RIGHT_LOCK

    • You need to change your ADS from toggle to hold.
    • Use CURSOR_RIGHT to toggle lock on target while scoping.
  • AUTO_FIRE

    • Use CURSOR_UP to toggle auto-firing non-automatic weapons while locked on target.
  • HOME

    • 💀 Terminate script.
  • Load the Firing Range and give this script a go!

    • 🐵 Run Ape-xCV.bat.
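The PID mouse control mentioned above (the Kp, Ki, Kd values in args_.py) works roughly like this. This is a minimal illustrative sketch, not the project's actual code; the class name and gain values here are hypothetical:

```python
class PID:
    """Minimal PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Each frame, feed in the pixel offset between crosshair and target;
# the output becomes a smaller, smoothed relative mouse move.
pid = PID(kp=0.4, ki=0.02, kd=0.05)      # hypothetical gains
move = pid.update(error=120.0, dt=1 / 72)  # 120 px off-target at 72 FPS
```

The proportional term closes most of the gap, the integral term removes steady-state offset, and the derivative term damps overshoot, which is why the motion looks humanized rather than snapping instantly onto the target, and why retuning the gains changes how suspicious the movement looks.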

2.2 Best practices

  • To summarize:
    • ✅ Use default PID. Set in-game mouse sensitivity to 3.50.
    • ❌ No AUTO_FIRE with automatic weapons.

3.1 TensorRT (.engine)

  • Install TensorRT.

    • Go to the TensorRT download site: NVIDIA TensorRT 8.x Download.
    • Download TensorRT 8.6 GA for Windows 10 and CUDA 12.0 and 12.1 ZIP Package from: NVIDIA website.
    • Extract TensorRT-8.6.1.6.Windows10.x86_64.cuda-12.0.zip to C:\TEMP
    • Press [Win+R] and enter cmd to open a Command Prompt. Then input:
    cd /D C:\TEMP\Ape-xCV
    addenv C:\TEMP\TensorRT-8.6.1.6\lib
    • TensorRT was added to PATH. Close that Command Prompt and open a new one. Then input:
    cd /D C:\TEMP\TensorRT-8.6.1.6\python
    pip install tensorrt-8.6.1-cp310-none-win_amd64.whl
  • To export best_8s.pt to best_8s.engine:

    • Press [Win+R] and enter cmd to open a Command Prompt. Then input:
    set CUDA_MODULE_LOADING=LAZY
    cd /D C:\TEMP\Ape-xCV\MODEL
    yolo export model=best_8s.pt format=engine opset=12 workspace=7
  • Install Notepad++ from: Notepad++ website.

  • Open C:\TEMP\Ape-xCV\args_.py with Notepad++.

def arg_init(args):
    ...
    args.add_argument("--model", type=str,
                    default="/best_8s.pt", help="model path")
  • Do not change the indentation! In --model, change best_8s.pt to best_8s.engine
  • Save args_.py.
    • 🐵 Run Ape-xCV.bat.

3.2 TensorRT (.trt)

  • If C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin\zlibwapi.dll is missing.

    • Copy C:\Program Files\NVIDIA Corporation\Nsight Systems 2023.1.2\host-windows-x64\zlib.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin and then rename it to zlibwapi.dll
  • To export best_8s.pt to best_8s.trt:

    • Press [Win+R] and enter cmd to open a Command Prompt. Then input:
    set CUDA_MODULE_LOADING=LAZY
    cd /D C:\TEMP\Ape-xCV\MODEL
    yolo export model=best_8s.pt format=onnx opset=12
    • If RTX 20 Series (FP16):
    C:\TEMP\TensorRT-8.6.1.6\bin\trtexec.exe --onnx=best_8s.onnx --saveEngine=best_8s.trt --buildOnly --workspace=7168 --fp16
    • If RTX 30 Series (FP32):
    C:\TEMP\TensorRT-8.6.1.6\bin\trtexec.exe --onnx=best_8s.onnx --saveEngine=best_8s.trt --buildOnly --workspace=7168
  • Open C:\TEMP\Ape-xCV\args_.py with Notepad++.

def arg_init(args):
    ...
    args.add_argument("--model", type=str,
                    default="/best_8s.pt", help="model path")
  • In --model change best_8s.pt to best_8s.trt
  • Save args_.py.
    • 🐵 Run Ape-xCV.bat.

3.3 TensorRT with GPU NMS (.trt)

  • Cons:

    • ❌ No speed increase.
    • ❌ IoU (Intersection over Union) threshold is hardcoded into engine, ignoring args_.py.
  • Download: Linaom1214/TensorRT-For-YOLO-Series.

  • Extract TensorRT-For-YOLO-Series-main.zip to C:\TEMP

  • Rename C:\TEMP\TensorRT-For-YOLO-Series-main to C:\TEMP\Linaom1214

  • To export best_8s.onnx to best_8s_e2e.trt:

    • Press [Win+R] and enter cmd to open a Command Prompt. Then input:
    set CUDA_MODULE_LOADING=LAZY
    cd /D C:\TEMP\Linaom1214
    • If RTX 20 Series (FP16):
    python export.py -o C:/TEMP/Ape-xCV/MODEL/best_8s.onnx -e C:/TEMP/Ape-xCV/MODEL/best_8s_e2e.trt -p fp16 -w 7 --end2end --conf_thres 0.6 --iou_thres 0.8 --v8
    • If RTX 30 Series (FP32):
    python export.py -o C:/TEMP/Ape-xCV/MODEL/best_8s.onnx -e C:/TEMP/Ape-xCV/MODEL/best_8s_e2e.trt -p fp32 -w 7 --end2end --conf_thres 0.6 --iou_thres 0.8 --v8
  • Open C:\TEMP\Ape-xCV\args_.py with Notepad++.

def arg_init(args):
    ...
    args.add_argument("--model", type=str,
                    default="/best_8s.pt", help="model path")
    args.add_argument("--end2end", type=bool,
                    default=False, help="use TensorRT efficientNMSPlugin")
  • In --model change best_8s.pt to best_8s_e2e.trt
  • In --end2end change False to True
  • Save args_.py.
    • 🐵 Run Ape-xCV.bat.

4. args_.py

  • Open C:\TEMP\Ape-xCV\args_.py with Notepad++.
def arg_init(args):
    ...
    args.add_argument("--classes", type=int,
                    default=[1,2], help="classes to be detected TensorRT(.trt); can be expanded but needs to be an array. "
                    "0 represents 'Ally', "
                    "1 represents 'Enemy', "
                    "2 represents 'Tag'... "
                    "Change default accordingly if your dataset changes")
    args.add_argument("--target_index", type=int,
                    default=1, help="class to be targeted PyTorch(.pt)")
    args.add_argument("--half", type=bool,
                    default=True, help="use FP16 to predict PyTorch(.pt)")
    args.add_argument("--iou", type=float,
                    default=0.8, help="predict intersection over union")  # 0.8 is recommended
    args.add_argument("--conf", type=float,
                    default=0.6, help="predict confidence")  # 0.6+ is recommended
    screen_height = win32api.GetSystemMetrics(1)
    args.add_argument("--crop_size", type=float,
                    default=640/screen_height, help="the portion to detect from the screen. 1/3 for 1440P or 1/2 for 1080P, imgsz/screen_height=direct")
    args.add_argument("--wait", type=float, default=0, help="wait time")
    args.add_argument("--verbose", type=bool, default=False, help="predict verbose")
    args.add_argument("--draw_boxes", type=bool,
                    default=False, help="outline detected target, borderless window")
  • Until you understand how NMS works, do not change --iou.
  • --crop_size "the portion to detect from the screen". The captured region is scaled down to 640x640 for input. The default 640/screen_height is the best value.
  • --draw_boxes "outline detected target, borderless window". Set to True in the Firing Range only.
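To see why 640/screen_height is the best default for --crop_size, here is a small illustrative computation (the function name is ours, not the project's; we assume the capture is a centered square whose side is crop_size * screen_height):

```python
def crop_region(screen_w, screen_h, crop_size):
    """Centered square capture region with side = crop_size * screen_height."""
    side = round(screen_h * crop_size)
    left = (screen_w - side) // 2
    top = (screen_h - side) // 2
    return left, top, side, side

# With the default 640/screen_height on a 1080p monitor, the captured
# square is exactly 640x640 (the model's imgsz), so no rescaling is needed.
print(crop_region(1920, 1080, 640 / 1080))  # → (640, 220, 640, 640)
```

Any other value forces a resize of the capture to 640x640 before inference, which costs time and slightly blurs the input.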

❤️ Sponsor

PayPal Litecoin (LTC)

LKLYRUadyHitp5B7aB56yfBYSuG2UnuLEz

5. Train your own model

  • Download starter dataset from: OneDrive.
  • Extract apex.zip to C:\TEMP\Ape-xCV\datasets\apex
  • Press [Win+R] and enter cmd to open a Command Prompt. Then input:
cd /D C:\TEMP\Ape-xCV
python train8s40.py
  • This will train your YOLO v8 small model for 40 epochs with images and labels from C:\TEMP\Ape-xCV\datasets\apex and save it to C:\TEMP\Ape-xCV\runs\detect\train\weights\best.pt.

  • You can add your own images (640x640) and create the labels with: developer0hye/Yolo_Label.

    • Copy those into C:\TEMP\Ape-xCV\SPLIT\input
    • Press [Win+R] and enter cmd to open a Command Prompt. Then input:
    cd /D C:\TEMP\Ape-xCV\SPLIT
    python split.py
    • Your images and labels are now split into train and valid.
    • Browse C:\TEMP\Ape-xCV\datasets\apex and distribute them accordingly.
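split.py presumably does a shuffle-and-split along these lines. This is an illustrative sketch, not the actual script; the 80/20 ratio, seed, and folder names are assumptions:

```python
import os
import random
import shutil

def split_dataset(input_dir, out_dir, train_ratio=0.8, seed=0):
    """Shuffle image/label pairs and copy them into train/ and valid/ folders."""
    images = sorted(f for f in os.listdir(input_dir)
                    if f.endswith((".jpg", ".png")))
    random.Random(seed).shuffle(images)  # fixed seed keeps the split reproducible
    n_train = int(len(images) * train_ratio)
    for subset, files in (("train", images[:n_train]), ("valid", images[n_train:])):
        os.makedirs(os.path.join(out_dir, subset), exist_ok=True)
        for img in files:
            label = os.path.splitext(img)[0] + ".txt"  # YOLO label next to image
            for name in (img, label):
                src = os.path.join(input_dir, name)
                if os.path.exists(src):
                    shutil.copy(src, os.path.join(out_dir, subset, name))
```

Keeping each image and its .txt label in the same subset is the important part: a pair split across train and valid would either leak data or train on an unlabeled image.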