GiraudJules/Player_detection

'YOLO' class cannot be found in the 'nets.nn' module

Closed this issue · 4 comments

Exception has occurred: AttributeError

Can't get attribute 'YOLO' on <module 'nets.nn' from '/Users/pascal/miniconda3/lib/python3.10/site-packages/nets/nn/__init__.py'>

  File "/Users/pascal/Desktop/python opencv/football.py", line 42, in detect_on_frame
    model = YOLO(model_path)
  File "/Users/pascal/Desktop/python opencv/football.py", line 61, in <module>
    detect_on_frame(MODEL_PATH, VIDEO_PATH, FRAME_NUMBER, 0.5)
AttributeError: Can't get attribute 'YOLO' on <module 'nets.nn' from '/Users/pascal/miniconda3/lib/python3.10/site-packages/nets/nn/__init__.py'>

from roboflow import Roboflow
import clearml
from ultralytics import YOLO
import cv2
import numpy as np
from collections import defaultdict
from IPython.display import Image, display
import io

Download the dataset stored on Roboflow

rf = Roboflow(api_key="")
project = rf.workspace().project("fcberdi")
model = project.version(1).model
dataset = project.version(1).download("yolov8")

Connecting ClearML with the current process (Colab notebook)

clearml.browser_login()

Define model & data path

model = YOLO('yolov8n.pt')
DATASET_PATH = "/Users/pascal/Desktop/fcberdi.v1i.yolov8/data.yaml"

Train the model with CUDA or MPS

results = model.train(data=DATASET_PATH, epochs=10, imgsz=320)
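
(Optional) The call above lets Ultralytics pick the device automatically; to force CUDA or Apple's MPS explicitly, one possible sketch, assuming a recent PyTorch build, is:

import torch

# Prefer the first CUDA GPU, then Apple's MPS backend, then fall back to CPU
device = 0 if torch.cuda.is_available() else ('mps' if torch.backends.mps.is_available() else 'cpu')
results = model.train(data=DATASET_PATH, epochs=10, imgsz=320, device=device)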

Define video path & model path

MODEL_PATH = '/Users/pascal/desktop/best.pt'
VIDEO_PATH = '/Users/pascal/desktop/fcberdi.mp4'
OUTPUT_VIDEO_PATH = '/Users/pascal/desktop/fcberdi_output.mp4'

def detect_on_frame(model_path, video_path, frame_number, conf_threshold=0.35):
    """Extracts a specific frame from a video, runs YOLOv8 detection on it, and displays the result.

    Args:
        model_path (str): Path to the YOLO model file.
        video_path (str): Path to the input video file.
        frame_number (int): The frame number to extract and process.
        conf_threshold (float, optional): Confidence threshold for YOLO model detection. Defaults to 0.35.
    """
    model = YOLO(model_path)

    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number)
    success, frame = cap.read()
    cap.release()

    if success:
        # Run YOLOv8 inference on the frame and visualize the results
        results = model(frame, conf=conf_threshold)
        annotated_frame = results[0].plot()
        _, encoded_image = cv2.imencode('.png', annotated_frame)
        ipy_img = Image(data=encoded_image.tobytes())
        display(ipy_img)
    else:
        print(f"Failed to read the frame at position {frame_number} from the video.")

Detect on a single frame from the video

FRAME_NUMBER = 500
detect_on_frame(MODEL_PATH, VIDEO_PATH, FRAME_NUMBER, 0.5)

def process_and_detect_on_video(model_path, video_path, output_path=None, display_video=False, conf_threshold=0.5):
    """Process a video file with YOLO model, optionally save and display the output.

    Args:
        model_path (str): Path to the YOLO model file.
        video_path (str): Path to the input video file.
        output_path (str, optional): Path where the output video will be saved. If None, the video won't be saved.
        display_video (bool): Whether to display the video during processing.
        conf_threshold (float): Confidence threshold for YOLO model detection.
    """
    model = YOLO(model_path)

    cap = cv2.VideoCapture(video_path)
    out = None
    if output_path:
        frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        frame_rate = int(cap.get(cv2.CAP_PROP_FPS))
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        out = cv2.VideoWriter(output_path, fourcc, frame_rate, (frame_width, frame_height))

    try:
        while cap.isOpened():
            success, frame = cap.read()
            if not success:
                break

            results = model(frame, conf=conf_threshold)
            annotated_frame = results[0].plot()

            if out:
                out.write(annotated_frame)

            if display_video:
                cv2.imshow("YOLOv8 Inference", annotated_frame)
                # Press Q on the keyboard to exit when displaying the video
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break

    except Exception as e:
        print(f"An error occurred: {e}")

    finally:
        cap.release()
        if out:
            out.release()
        if display_video:
            cv2.destroyAllWindows()

Process and detect on the video

process_and_detect_on_video(MODEL_PATH, VIDEO_PATH, OUTPUT_VIDEO_PATH, display_video=False, conf_threshold=0.5)

Hello @pascal-maker,

Can you check that you have correctly installed YOLO from Ultralytics, and that the import in your code is correct?
pip install ultralytics
from ultralytics import YOLO
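
You can also quickly check which module your YOLO import actually resolves from; a small diagnostic sketch (not part of your script) would be:

import inspect
import ultralytics
from ultralytics import YOLO

print(ultralytics.__version__)  # confirms the package is installed
print(inspect.getfile(YOLO))    # should point inside .../site-packages/ultralytics/, not nets/nn/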

It's possible that there are compatibility issues between different libraries or packages. Make sure that you are using compatible versions of 'ultralytics', PyTorch, and any other relevant libraries in your project.
I think this may be due to the use of Miniconda3; I advise you to use a virtual environment with pyenv and pyenv-virtualenv!

Yes, I will check and let you know!

Thank you again @pascal-maker for your interest in my project, and I'm glad to hear that you appreciate the demonstration code of this work.

Regarding the issue with the virtual environment, I'm almost sure that the problem is indeed related to Conda or Miniforge rather than the script itself. However, if you need further assistance or have any questions in the future, feel free to reach out or open another issue.

As a side note, I noticed you mentioned the project being open source, and while the code is publicly available on GitHub, it currently does not have a specific open-source license. This means that, technically, the default copyright laws apply, and the code shouldn't be used for commercial purposes or distributed as an open-source project in the future. However, I am considering adding an open-source license to the main project linked to this demo to make it more accessible and clearer on the usage rights.

Thanks for your issue (and for the next ones 😉), and I wish you the best with your project! 👍

Hey Jules, it worked. Thanks a lot man, huge shout-out to you.