# Cell R-CNN Qt

Welcome to Cell R-CNN Qt! This is a powerful and user-friendly desktop application designed for researchers and developers to train, detect, and evaluate instance segmentation models for cell analysis.
Built on top of the robust Mask R-CNN framework, this application provides a complete graphical interface, eliminating the need for complex command-line operations for most tasks. From data preparation to model validation, everything is integrated into one convenient toolkit.
## ✨ Key Features

This application is more than just a detector; it's a complete pipeline!
- 🧠 End-to-End Training: Configure, run, and monitor the training of your own Mask R-CNN models directly within the app.
- 🚀 One-Click Detection: Load a model and images to perform instance segmentation with ease.
- 📊 Model Evaluation: Includes scripts to calculate the Mean Average Precision (mAP) to validate your model's performance.
- 🔄 Data Conversion Tools:
  - Convert ImageJ `.roi` files into the COCO `.json` format required for training.
  - Batch processing capabilities for converting and detecting large sets of images.
- 📝 Annotation Helper: Tools to assist in creating and visualizing masks and annotations.
- ⚙️ Profile Management: Save and load your training configurations and file paths for reproducible experiments.
## 🛠️ Tech Stack

- GUI Framework: PyQt5
- Deep Learning: TensorFlow 1.x & Keras
- Core Model: Matterport's Mask R-CNN
- Image Processing: OpenCV, Scikit-image
- Python Version: 3.6+
## 🚀 Getting Started: Installation

Let's get the application up and running on your system.
- **Clone the Repository**

  ```bash
  git clone https://github.com/min20120907/Cell_RCNN_Qt.git
  cd Cell_RCNN_Qt
  ```
- **Set Up a Virtual Environment (Highly Recommended!)**

  ```bash
  # Create and activate the environment
  conda create --name myenv python=3.7
  conda activate myenv
  conda install cudatoolkit=11.7 cudnn=8
  ```
- **Install Dependencies**

  This project requires specific versions of TensorFlow and Keras. The `requirements.txt` file handles this for you.

  ```bash
  conda activate myenv
  pip install -r requirements.txt
  ```
- **Download Pre-trained COCO Weights**

  For transfer learning or initial testing, you need the base Mask R-CNN weights.

  - Download `mask_rcnn_coco.h5` from the Matterport GitHub releases.
  - Place the downloaded `.h5` file into the root directory of this project.
You're all set to go! 🎉
## 📖 Step-by-Step Guide

### 🔄 Converting ImageJ ROIs to COCO Format

The model needs annotations in the COCO `.json` format. This tool provides a utility to convert them from ImageJ's `.roi` files.
- **Launch the Application:**

  ```bash
  python Cell_Trainer.py
  ```
- **Open the Converter:** Click the `Convert ImageJ ROIs` button.
- **Select Your Files:**
  - You will be prompted to select the directory containing your source images (e.g., `.png` files).
  - Next, select the directory containing the corresponding ImageJ `.roi` files.
  - Finally, choose a location and filename to save the output `trainval.json` file.
- **Done!** ✅ The script `roi2coco_line.py` will process your files and create the JSON annotation file needed for training. For batch processing, you can use the `Batch Convert ImageJ ROIs` button.
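As a rough illustration of what the converter produces, here is a minimal COCO-style annotation file sketched in Python. The field names follow the standard COCO format; the exact output of `roi2coco_line.py` may contain additional fields, and the file name, category name, and coordinates below are made-up example values:

```python
import json

# Minimal COCO-style annotation structure (illustrative values only)
coco = {
    "images": [
        {"id": 1, "file_name": "image1.png", "width": 512, "height": 512},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # Polygon as a flat list of x, y coordinates
            "segmentation": [[10, 10, 60, 10, 60, 60, 10, 60]],
            "bbox": [10, 10, 50, 50],  # x, y, width, height
            "area": 2500,
            "iscrowd": 0,
        }
    ],
    "categories": [{"id": 1, "name": "cell"}],
}

with open("trainval.json", "w") as f:
    json.dump(coco, f, indent=2)
```

If training fails with annotation errors, comparing your generated `trainval.json` against this skeleton is a quick first sanity check.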
### 🧠 Training a Model

- **Prepare Your Dataset:** Your dataset folder should have the following structure:

  ```
  /your_dataset_folder
  ├── train/
  │   ├── image1.png
  │   ├── image2.png
  │   └── ...
  └── trainval.json   (the COCO annotation file you just generated!)
  ```

- **Launch the Application and configure the training parameters:**
  - `Confidence Rate`: Set the detection confidence threshold (e.g., `0.9`).
  - `Training Steps`: Enter the number of steps per epoch (e.g., `100`).
  - `Training Epochs`: Enter the total number of epochs to train for (e.g., `100`).
- **Load Your Data and Weights:**
  - Click `Upload datasets` and select your `/your_dataset_folder`.
  - Click `Upload weights` and choose a weights file to start from. For the first training, use the `mask_rcnn_coco.h5` file. For later trainings, you can use your own previously trained models.
- **Start Training!**
  - Click the big `Train it!` button.
  - The progress bar will update, and detailed logs will appear in the text box below. Your trained models will be saved in the `mrcnn/logs` directory. The core logic is handled by `Cell_Trainer.py`.
### 🚀 Running Detection

- **Launch the Application.**
- **Load the Model:** Click `Upload weights` and select your trained model `.h5` file (from the `mrcnn/logs/...` directory).
- **Load Images:** Click `Upload detection images` and select one or more images you want to analyze.
- **Run Detection!**
  - Click the `Detect it!` button. (ノ◕ヮ◕)ノ*:・゚✧
  - The application will process the images, and the results will be saved in a `results` folder. The detection logic is managed by `detectingThread.py`. For processing an entire folder, use the `Batch Detect` button.
### 📊 Evaluating Model Performance (mAP)

This project includes a script to evaluate your model's performance using the Mean Average Precision (mAP) metric. This is a command-line-based process.
- **Prepare Your Validation Set:** You need a validation dataset structured similarly to your training set, with ground-truth annotations in a `.json` file.
- **Run the Evaluation Script:** Open your terminal (make sure your virtual environment is activated) and run the `eval_model_gpu_cell.py` script. You will need to provide paths to your model and dataset.

  ```bash
  python eval_model_gpu_cell.py --work_dir . \
      --weights="/path/to/your/trained_model.h5" \
      --dataset="/path/to/your/validation_dataset" \
      --ouput_folder "/path/to/your/output_folder"
  ```
- **Analyze the Results:** The script will output the mAP scores for different IoU (Intersection over Union) thresholds, giving you a quantitative measure of your model's accuracy.
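To make the metric concrete, IoU for a pair of axis-aligned boxes can be computed as in this self-contained sketch (the evaluation script itself scores instance masks, but the idea is the same):

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# A detection counts as correct when its IoU with a ground-truth
# instance exceeds the threshold (e.g. 0.5); mAP averages precision
# over such thresholds and over all classes.
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1 / union 7
```

A higher mAP at a strict threshold (e.g. IoU 0.75) indicates tighter, more accurate segmentations, not just correct detections.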
## 🧰 Troubleshooting

### Qt "xcb" Platform Plugin Error

If you try to run the application and receive an error message related to the xcb plugin (e.g., `qt.qpa.plugin: Could not load the Qt platform plugin "xcb"`), this is a common conflict between the Qt libraries bundled with OpenCV and PyQt5.

**The Fix:** You need to remove the duplicate Qt plugins found inside your OpenCV installation so that the system uses the correct PyQt5 plugins.
- Locate your environment's `site-packages` directory.
- Find the `cv2` folder.
- Delete (or rename) the `qt` folder or the specific `libqxcb.so` file inside it.
Example command (Linux/Mac):

```bash
# Adjust the path to match your specific environment location
rm ~/anaconda3/envs/myenv/lib/python3.7/site-packages/cv2/qt/plugins/platforms/libqxcb.so
```

Once removed, run `python Cell_Trainer.py` again, and the GUI should launch correctly!
### Running Without a Display (Headless Mode)

If you are connected through SSH, running inside a server container, or otherwise do not have a valid X/Wayland display session, the Qt GUI cannot start. Use the headless CLI mode instead:
```bash
python Cell_Trainer.py --headless \
    --dataset-path "/path/to/dataset" \
    --work-dir "/path/to/work_dir" \
    --weight-path "/path/to/mask_rcnn_coco.h5" \
    --epochs 100 \
    --steps 1000 \
    --confidence 0.9
```

You can also load defaults from `profile.json` and override only what you need:
```bash
python Cell_Trainer.py --headless --profile profile.json --steps 1500
```

For all available options:

```bash
python Cell_Trainer.py --headless --help
```
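A profile can also be written programmatically. The keys below simply mirror the headless CLI flags and are a guess at the shape of `profile.json` — check a profile saved by the app itself for the authoritative field names:

```python
import json

# Hypothetical profile mirroring the headless CLI flags; the real
# profile.json written by Cell_Trainer.py may use different key names.
profile = {
    "dataset-path": "/path/to/dataset",
    "work-dir": "/path/to/work_dir",
    "weight-path": "/path/to/mask_rcnn_coco.h5",
    "epochs": 100,
    "steps": 1000,
    "confidence": 0.9,
}

with open("profile.json", "w") as f:
    json.dump(profile, f, indent=2)
```

Combined with `--profile`, this lets you keep one baseline configuration per experiment and override individual values (such as `--steps`) on the command line.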
## 📄 License

Distributed under the MIT License. See the LICENSE file for more information.
If you find this project useful, please give it a star on GitHub! 🌟 It helps a lot!
Happy Segmenting! (^_<)〜☆
