A Flutter plugin for integrating Ultralytics YOLO computer vision models into your mobile apps. The plugin supports both Android and iOS platforms, and provides APIs for object detection and image classification.
| Feature         | Android | iOS |
| --------------- | ------- | --- |
| Detection       | ✅      | ✅  |
| Classification  | ✅      | ✅  |
| Pose Estimation | ❌      | ❌  |
| Segmentation    | ❌      | ❌  |
| OBB Detection   | ❌      | ❌  |
Before proceeding further or reporting new issues, please ensure you read this documentation thoroughly.
Ultralytics YOLO is designed specifically for mobile platforms, targeting iOS and Android apps. The plugin leverages Flutter Platform Channels for communication between the client (app/plugin) and host (platform), ensuring seamless integration and responsiveness. All processing related to the Ultralytics YOLO APIs is handled natively on each platform, with the plugin serving as a bridge between your app and Ultralytics YOLO.
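For illustration only, here is a minimal sketch of the platform-channel pattern the plugin builds on; the channel and method names below are hypothetical, not the plugin's actual internals:

```dart
import 'package:flutter/services.dart';

// Hypothetical channel name, shown only to illustrate the plugin pattern.
const MethodChannel _channel = MethodChannel('ultralytics_yolo_example');

// The Dart side sends the request; the Android/iOS host runs the model
// natively and sends the results back over the same channel.
Future<dynamic> detectOnImage(String imagePath) {
  return _channel.invokeMethod('detect', {'imagePath': imagePath});
}
```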
Before you can use Ultralytics YOLO in your app, you must export the required models. The exported models are in the form of `.tflite` and `.mlmodel` files, which you can then include in your app. Use the Ultralytics YOLO CLI to export the models.
IMPORTANT: The parameters in the commands below are mandatory. The Ultralytics YOLO plugin for Flutter only supports models exported using these commands. If you use different parameters, the plugin will not work as expected. We're working on adding support for more models and parameters in the future.
The following commands are used to export the models:

#### Android

```bash
yolo export format=tflite model=yolov8n imgsz=320 int8
yolo export format=tflite model=yolov8n-cls imgsz=320 int8
```

Then use the exported `yolov8n_int8.tflite` (detection) or `yolov8n-cls_int8.tflite` (classification) file.
#### iOS

To export the YOLOv8n detection model for iOS, use the following command:

```bash
yolo export format=mlmodel model=yolov8n imgsz=[320, 192] half nms
```
After exporting the models, you will get the `.tflite` and `.mlmodel` files. Include these files in your app's `assets` folder.
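As a minimal sketch of how to bundle them (the directory layout and file names below are assumptions; match them to your exported files), declare the models as assets in `pubspec.yaml` so Flutter packages them with the app:

```yaml
# pubspec.yaml — assumes the exported models live under assets/models/
flutter:
  assets:
    - assets/models/yolov8n_int8.tflite
    - assets/models/metadata.yaml # metadata file name is an assumption
    - assets/models/yolov8n.mlmodel
```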
Ensure that you have the necessary permissions to access the camera and storage.
#### Android

Add the following permissions to your `AndroidManifest.xml` file:

```xml
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
```
#### iOS

Add the following permissions to your `Info.plist` file:

```xml
<key>NSCameraUsageDescription</key>
<string>Camera permission is required for object detection.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Storage permission is required for object detection.</string>
```
Also add the following permission_handler configuration to your `Podfile`:

```ruby
post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)

    # Start of the permission_handler configuration
    target.build_configurations.each do |config|
      config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
        '$(inherited)',

        ## dart: PermissionGroup.camera
        'PERMISSION_CAMERA=1',

        ## dart: PermissionGroup.photos
        'PERMISSION_PHOTOS=1',
      ]
    end
    # End of the permission_handler configuration
  end
end
```
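With the Podfile configured for permission_handler, a minimal sketch of requesting the camera permission at runtime (assuming the permission_handler package is in your dependencies) could look like this:

```dart
import 'package:permission_handler/permission_handler.dart';

Future<bool> ensureCameraPermission() async {
  // request() returns the current status without showing a dialog
  // if the permission has already been granted.
  final status = await Permission.camera.request();
  return status.isGranted;
}
```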
Create a predictor object using the `LocalYoloModel` class. This class requires the following parameters:

```dart
final model = LocalYoloModel(
  id: id,
  task: Task.detect /* or Task.classify */,
  format: Format.tflite /* or Format.coreml */,
  modelPath: modelPath,
  metadataPath: metadataPath,
);
```

For object detection:

```dart
final objectDetector = ObjectDetector(model: model);
await objectDetector.loadModel();
```

For image classification:

```dart
final imageClassifier = ImageClassifier(model: model);
await imageClassifier.loadModel();
```
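`LocalYoloModel` takes paths to the model and metadata files. If you ship them as Flutter assets, one common pattern is to copy each asset to local storage first and pass the resulting file path. A minimal sketch, assuming the path_provider package and an asset path under `assets/models/` (both assumptions):

```dart
import 'dart:io';

import 'package:flutter/services.dart';
import 'package:path_provider/path_provider.dart';

/// Copies a bundled asset to the app's support directory and returns its path.
Future<String> copyAssetToFile(String assetPath) async {
  final bytes = await rootBundle.load(assetPath);
  final dir = await getApplicationSupportDirectory();
  final file = File('${dir.path}/${assetPath.split('/').last}');
  await file.writeAsBytes(
    bytes.buffer.asUint8List(bytes.offsetInBytes, bytes.lengthInBytes),
  );
  return file.path;
}
```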
The `UltralyticsYoloCameraPreview` widget is used to display the camera preview and the results of the prediction.

```dart
final _controller = UltralyticsYoloCameraController();

UltralyticsYoloCameraPreview(
  predictor: predictor, // Your prediction model data
  controller: _controller, // Ultralytics camera controller
  // For showing any widget on screen at the time of model loading
  loadingPlaceholder: Center(
    child: Wrap(
      direction: Axis.vertical,
      crossAxisAlignment: WrapCrossAlignment.center,
      children: [
        const CircularProgressIndicator(
          color: Colors.white,
          strokeWidth: 2,
        ),
        const SizedBox(height: 20),
        Text(
          'Loading model...',
          style: theme.typography.base.copyWith(
            color: Colors.white,
            fontSize: 14,
          ),
        ),
      ],
    ),
  ),
);
```
Use the `detect` or `classify` methods to get the results of the prediction on an image.

```dart
objectDetector.detect(imagePath: imagePath)
```

or

```dart
imageClassifier.classify(imagePath: imagePath)
```
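Both calls are asynchronous. A minimal sketch of consuming the results (the exact result types are assumptions; check the plugin's API reference):

```dart
Future<void> runDetection(ObjectDetector objectDetector, String imagePath) async {
  // Runs inference natively on the host platform and returns the predictions.
  final detections = await objectDetector.detect(imagePath: imagePath);
  detections?.forEach(print); // assumes a nullable list of detection results
}
```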
Ultralytics thrives on community collaboration; we immensely value your involvement! We urge you to peruse our Contributing Guide for detailed insights on how you can participate. Don't forget to share your feedback with us by contributing to our Survey. A heartfelt thank you 🙏 goes out to everyone who has already contributed!
Ultralytics presents two distinct licensing paths to accommodate a variety of scenarios:
- AGPL-3.0 License: This official OSI-approved open-source license is perfectly aligned with the goals of students, enthusiasts, and researchers who believe in the virtues of open collaboration and shared wisdom. Details are available in the LICENSE document.
- Enterprise License: Tailored for commercial deployment, this license authorizes the unfettered integration of Ultralytics software and AI models within commercial goods and services, without the copyleft stipulations of AGPL-3.0. Should your use case demand an enterprise solution, direct your inquiries to Ultralytics Licensing.
For bugs or feature suggestions pertaining to Ultralytics, please lodge an issue via GitHub Issues. You're also invited to participate in our Discord community to engage in discussions and seek advice!