
flutter_vision

A Flutter plugin for managing Yolov5, Yolov8 and Tesseract v5, accessed with TensorFlow Lite 2.x. Supports object detection and OCR on Android. iOS is not updated yet; work is in progress.

Installation

Add flutter_vision as a dependency in your pubspec.yaml file.
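
For example (the version constraint below is only illustrative; check pub.dev for the latest release):

    dependencies:
      flutter_vision: ^1.1.4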

Android

In android/app/build.gradle, add the following setting in the android block:

    android{
        aaptOptions {
            noCompress 'tflite'
            noCompress 'lite'
        }
    }

iOS

Coming soon...

Usage

For the YoloV5 and YoloV8 models

  1. Create an assets folder and place your labels file and model file in it. In pubspec.yaml add:
  assets:
   - assets/labels.txt
   - assets/yolovx.tflite
  1. Import the library:
import 'package:flutter_vision/flutter_vision.dart';
  1. Initialize the flutter_vision library:
 FlutterVision vision = FlutterVision();
  1. Load the model and labels (modelVersion: yolov5 or yolov8):
await vision.loadYoloModel(
        labels: 'assets/labels.txt',
        modelPath: 'assets/yolov5n.tflite',
        modelVersion: "yolov5",
        numThreads: 1,
        useGpu: false);

For camera live feed

  1. Make your first detection. confThreshold applies to yolov5; for other model versions it is omitted.

Make use of the camera plugin.

final result = await vision.yoloOnFrame(
        bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
        imageHeight: cameraImage.height,
        imageWidth: cameraImage.width,
        iouThreshold: 0.4,
        confThreshold: 0.4,
        classThreshold: 0.5);
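
A minimal sketch of wiring this into the camera plugin's image stream is shown below (the controller setup, permissions and widget code are left out, and the frame-skipping flag is just one possible approach):

import 'package:camera/camera.dart';
import 'package:flutter_vision/flutter_vision.dart';

bool isDetecting = false;

Future<void> startDetection(CameraController controller, FlutterVision vision) async {
  await controller.startImageStream((CameraImage cameraImage) async {
    if (isDetecting) return; // skip frames while a previous detection is still running
    isDetecting = true;
    final result = await vision.yoloOnFrame(
        bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
        imageHeight: cameraImage.height,
        imageWidth: cameraImage.width,
        iouThreshold: 0.4,
        confThreshold: 0.4,
        classThreshold: 0.5);
    for (final detection in result) {
      // Each detection exposes a "tag" (class name) and a "box" (coordinates plus confidence).
      print('${detection["tag"]}: ${detection["box"]}');
    }
    isDetecting = false;
  });
}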

For static image

  1. Make your first detection (see the sketch after this list for one way to obtain byte and the image dimensions):
final result = await vision.yoloOnImage(
        bytesList: byte,
        imageHeight: image.height,
        imageWidth: image.width,
        iouThreshold: 0.8,
        confThreshold: 0.4,
        classThreshold: 0.7);
  1. Release resources:
await vision.closeYoloModel();
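
As mentioned above, one way to obtain byte and the image dimensions for yoloOnImage is with the image_picker and image packages (both are assumptions of this sketch, not requirements of flutter_vision):

import 'dart:typed_data';
import 'package:image/image.dart' as img;
import 'package:image_picker/image_picker.dart';

final XFile? photo = await ImagePicker().pickImage(source: ImageSource.gallery);
if (photo != null) {
  final Uint8List byte = await photo.readAsBytes();
  // Decode only to read the original width and height of the picture.
  final img.Image? image = img.decodeImage(byte);
  if (image != null) {
    final result = await vision.yoloOnImage(
        bytesList: byte,
        imageHeight: image.height,
        imageWidth: image.width,
        iouThreshold: 0.8,
        confThreshold: 0.4,
        classThreshold: 0.7);
    print(result);
  }
}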

For the Tesseract 5.0.0 model

  1. Create an assets folder, then create a tessdata directory and a tessdata_config.json file and place them into it. Download trained data for Tesseract from here and place it into the tessdata directory. Then modify tessdata_config.json as follows.
{
    "files": [
      "spa.traineddata"
    ]
}
  1. In pubspec.yaml add:
assets:
    - assets/
    - assets/tessdata/
  1. Import the library:
import 'package:flutter_vision/flutter_vision.dart';
  1. Initialize the flutter_vision library:
 FlutterVision vision = FlutterVision();
  1. Load the model:
await vision.loadTesseractModel(
      args: {
        'psm': '11', // page segmentation mode 11: sparse text
        'oem': '1', // OCR engine mode 1: LSTM engine only
        'preserve_interword_spaces': '1',
      },
      language: 'spa',
    );

For static image

  1. Get text from a static image:
    // picker is an ImagePicker instance from the image_picker package.
    final XFile? photo = await picker.pickImage(source: ImageSource.gallery);
    if (photo != null) {
      final result = await vision.tesseractOnImage(bytesList: (await photo.readAsBytes()));
    }
  1. Release resources:
await vision.closeTesseractModel();

About results

For Yolo

result is a List<Map<String, dynamic>> where each Map has the following keys:

   Map<String, dynamic>:{
    "box": [x1:left, y1:top, x2:right, y2:bottom, class_confidence]
    "tag": String: detected class
   }
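
The box coordinates refer to the analyzed image, so drawing them on screen usually requires scaling. A minimal sketch (the scale factors and styling are assumptions; compute factorX and factorY from your preview size and the image size):

import 'package:flutter/material.dart';

// Scale each detection from image coordinates to screen coordinates, e.g.
// factorX = previewWidth / imageWidth and factorY = previewHeight / imageHeight.
List<Widget> displayBoxes(List<Map<String, dynamic>> results, double factorX, double factorY) {
  return results.map((detection) {
    final box = detection["box"]; // [x1, y1, x2, y2, class_confidence]
    return Positioned(
      left: box[0] * factorX,
      top: box[1] * factorY,
      width: (box[2] - box[0]) * factorX,
      height: (box[3] - box[1]) * factorY,
      child: Container(
        decoration: BoxDecoration(
          border: Border.all(color: Colors.pink, width: 2),
        ),
        child: Text("${detection['tag']} ${(box[4] * 100).toStringAsFixed(0)}%"),
      ),
    );
  }).toList();
}

The returned widgets are meant to be placed inside a Stack on top of the camera preview or image.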

For Tesseract

result is a List<Map<String, dynamic>> where each Map has the following keys:

    Map<String, dynamic>:{
      "text": String
      "word_conf": List<int>
      "mean_conf": int
    }

Example

Screenshots of the example app (dni_scanner_example).