
🔬 Cell R-CNN Qt: The Complete GUI Toolkit for Cell Segmentation 🔬

Welcome to Cell R-CNN Qt! This is a powerful and user-friendly desktop application that lets researchers and developers train instance segmentation models for cell analysis, run detection on their images, and evaluate the results.

Built on top of the robust Mask R-CNN framework, this application provides a complete graphical interface, eliminating the need for complex command-line operations for most tasks. From data preparation to model validation, everything is integrated into one convenient toolkit.

App Screenshot


✨ Full Feature Set

This application is more than just a detector; it's a complete pipeline!

  • 🧠 End-to-End Training: Configure, run, and monitor the training of your own Mask R-CNN models directly within the app.
  • 🚀 One-Click Detection: Load a model and images to perform instance segmentation with ease.
  • 📊 Model Evaluation: Includes scripts to calculate the Mean Average Precision (mAP) to validate your model's performance.
  • 🔄 Data Conversion Tools:
    • Convert ImageJ .roi files into the COCO .json format required for training.
    • Batch processing capabilities for converting and detecting large sets of images.
  • 📝 Annotation Helper: Tools to assist in creating and visualizing masks and annotations.
  • ⚙️ Profile Management: Save and load your training configurations and file paths for reproducible experiments.

🛠️ Tech Stack

  • GUI Framework: PyQt5
  • Deep Learning: TensorFlow 1.x & Keras
  • Core Model: Matterport's Mask R-CNN
  • Image Processing: OpenCV, Scikit-image
  • Python Version: 3.6+

🚀 Getting Started: Installation

Let's get the application up and running on your system.

  1. Clone the Repository

    git clone https://github.com/min20120907/Cell_RCNN_Qt.git
    cd Cell_RCNN_Qt
  2. Set Up a Conda Environment (Highly Recommended!)

    # Create and activate the environment
    conda create --name myenv python=3.7
    conda activate myenv
    conda install cudatoolkit=11.7 cudnn=8
  3. Install Dependencies
     This project requires specific versions of TensorFlow and Keras. The requirements.txt file handles this for you.

    conda activate myenv
    pip install -r requirements.txt
  4. Download Pre-trained COCO Weights
     For transfer learning or initial testing, you need the base Mask R-CNN weights (mask_rcnn_coco.h5), available from the Matterport Mask R-CNN releases page.
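If you prefer to script the download, here is a small helper. The URL is an assumption: it points at the Matterport Mask R-CNN v2.0 release asset, so verify it against the releases page before relying on it.

```python
import os
import urllib.request

# Assumed URL: the Matterport Mask R-CNN v2.0 release asset. Verify it against
# the matterport/Mask_RCNN releases page before relying on it.
COCO_WEIGHTS_URL = ("https://github.com/matterport/Mask_RCNN/releases/"
                    "download/v2.0/mask_rcnn_coco.h5")

def download_coco_weights(dest="mask_rcnn_coco.h5"):
    """Download the pre-trained COCO weights unless the file already exists."""
    if os.path.exists(dest):
        return dest  # already downloaded, nothing to do
    urllib.request.urlretrieve(COCO_WEIGHTS_URL, dest)
    return dest
```

Run download_coco_weights() once from the repo root; the Upload weights dialog can then point at the resulting file.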

You're all set to go! 🎉


📖 Step-by-Step Guide

1. How to Convert Your Data (ImageJ ROIs -> COCO JSON)

The model needs annotations in the COCO .json format. The application provides a utility to convert them from ImageJ's .roi files.

  1. Launch the Application:
    python Cell_Trainer.py
  2. Open the Converter:
    • Click the Convert ImageJ ROIs button.
  3. Select Your Files:
    • You will be prompted to select the directory containing your source images (e.g., .png files).
    • Next, select the directory containing the corresponding ImageJ .roi files.
    • Finally, choose a location and filename to save the output trainval.json file.
  4. Done! ✅ The script roi2coco_line.py will process your files and create the JSON annotation file needed for training. For batch processing, you can use the Batch Convert ImageJ ROIs button.
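For reference, the output follows the standard COCO instance-segmentation layout. The exact fields and category names emitted by roi2coco_line.py may differ; the hand-written example below is only an illustration of the standard structure, with made-up values.

```python
import json

# Minimal illustration of the standard COCO instance-segmentation layout.
# Field values are illustrative; the converter's exact output may add fields.
coco = {
    "images": [
        {"id": 1, "file_name": "image1.png", "width": 512, "height": 512},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # Polygon as a flat [x1, y1, x2, y2, ...] list, one list per part
            "segmentation": [[10.0, 10.0, 60.0, 10.0, 60.0, 60.0, 10.0, 60.0]],
            "bbox": [10.0, 10.0, 50.0, 50.0],  # [x, y, width, height]
            "area": 2500.0,
            "iscrowd": 0,
        }
    ],
    "categories": [{"id": 1, "name": "cell"}],
}

with open("trainval.json", "w") as f:
    json.dump(coco, f, indent=2)
```

If training later fails with annotation errors, comparing your generated trainval.json against this skeleton is a quick sanity check.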

2. How to Train Your Own Model ( •̀ ω •́ )✧

  1. Prepare Your Dataset: Your dataset folder should have the following structure:
    /your_dataset_folder
    ├── train/
    │   ├── image1.png
    │   ├── image2.png
    │   └── ...
    └── trainval.json  (The COCO annotation file you just generated!)
    
  2. Launch the Application and configure the training parameters:
    • Confidence Rate: Set the detection confidence threshold (e.g., 0.9).
    • Training Steps: Enter the number of steps per epoch (e.g., 100).
    • Training Epochs: Enter the total number of epochs to train for (e.g., 100).
  3. Load Your Data and Weights:
    • Click Upload datasets and select your /your_dataset_folder.
    • Click Upload weights and choose a weights file to start from. For the first training run, use the mask_rcnn_coco.h5 file. For subsequent runs, you can start from your own previously trained models.
  4. Start Training!
    • Click the big Train it! button.
    • The progress bar will update, and detailed logs will appear in the text box below. Your trained models will be saved in the mrcnn/logs directory. The core logic is handled by Cell_Trainer.py.
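For orientation, the three GUI fields above likely map onto Matterport Mask R-CNN's Config attributes as sketched below. The attribute names come from the Matterport codebase; the exact subclass used by Cell_Trainer.py may differ.

```python
# Sketch of how the GUI fields likely map onto Matterport Mask R-CNN's Config
# attributes (attribute names from matterport/Mask_RCNN; the exact subclass
# in Cell_Trainer.py may differ).
class CellConfigSketch:
    NAME = "cell"
    DETECTION_MIN_CONFIDENCE = 0.9  # "Confidence Rate" field
    STEPS_PER_EPOCH = 100           # "Training Steps" field

EPOCHS = 100  # "Training Epochs" field; passed to model.train(..., epochs=EPOCHS)

# Total optimizer iterations the run will perform:
total_steps = CellConfigSketch.STEPS_PER_EPOCH * EPOCHS
print(total_steps)
```

This is useful for estimating run time: with the example values, the run performs 10,000 training iterations in total.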

3. How to Detect Cells in Images 📸

  1. Launch the Application.
  2. Load the Model:
    • Click Upload weights and select your trained model .h5 file (from the mrcnn/logs/... directory).
  3. Load Images:
    • Click Upload detection images and select one or more images you want to analyze.
  4. Run Detection!
    • Click the Detect it! button. (ノ◕ヮ◕)ノ*:・゚✧
    • The application will process the images, and the results will be saved in a results folder. The detection logic is managed by detectingThread.py. For processing an entire folder, use the Batch Detect button.
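For context, Matterport's model.detect() returns one dict per image with the keys "rois", "class_ids", "scores", and "masks". The sketch below shows how one such result can be summarized; plain Python lists stand in for the numpy arrays the real model returns.

```python
# One detection result, shaped like Matterport's model.detect() output
# (plain lists stand in for numpy arrays).
result = {
    "rois": [[10, 10, 60, 60], [100, 100, 150, 160]],  # boxes as (y1, x1, y2, x2)
    "class_ids": [1, 1],                               # every detection is a "cell"
    "scores": [0.98, 0.91],                            # model confidence per cell
}

num_cells = len(result["rois"])
mean_score = sum(result["scores"]) / num_cells
print(f"Detected {num_cells} cells, mean confidence {mean_score:.2f}")
```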

4. How to Validate Your Model (Calculate mAP) 📊

This project includes a script to evaluate your model's performance using the Mean Average Precision (mAP) metric. This is a command-line-based process.

  1. Prepare Your Validation Set: You need a validation dataset structured similarly to your training set, with ground-truth annotations in a .json file.

  2. Run the Evaluation Script: Open your terminal (make sure your virtual environment is activated) and run the eval_model_gpu_cell.py script. You will need to provide paths to your model and dataset.

    python eval_model_gpu_cell.py --work_dir . --weights="/path/to/your/trained_model.h5" --dataset="/path/to/your/validation_dataset" --ouput_folder "/path/to/your/output_folder"
  3. Analyze the Results: The script will output the mAP scores for different IoU (Intersection over Union) thresholds, giving you a quantitative measure of your model's accuracy.
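To make the IoU thresholds concrete: a predicted box counts as a true positive at a given threshold only if its overlap with a ground-truth box reaches that threshold, and mAP averages precision over a range of thresholds. A minimal, self-contained sketch of that overlap test (boxes as (y1, x1, y2, x2), matching Mask R-CNN's convention):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (y1, x1, y2, x2)."""
    y1 = max(box_a[0], box_b[0])
    x1 = max(box_a[1], box_b[1])
    y2 = min(box_a[2], box_b[2])
    x2 = min(box_a[3], box_b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred = (0, 0, 10, 10)
gt = (0, 0, 10, 8)
# IoU = 80 / 100 = 0.8: a true positive at IoU=0.5, but a miss at IoU=0.9
print(iou(pred, gt))
```

This is why a model can score well at IoU=0.5 but poorly at IoU=0.9: the higher thresholds demand much tighter localization.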



⚠️ Troubleshooting: Common Issues

❌ Error: "Could not load the Qt platform plugin 'xcb'"

If you try to run the application and receive an error message related to the xcb plugin (e.g., qt.qpa.plugin: Could not load the Qt platform plugin "xcb"), this is a common conflict between the Qt libraries bundled with OpenCV and PyQt5.

The Fix: You need to remove the duplicate Qt plugins found inside your OpenCV installation so that the system uses the correct PyQt5 plugins.

  1. Locate your environment's site-packages directory.
  2. Find the cv2 folder.
  3. Delete (or rename) the qt folder or the specific libqxcb.so file inside it.
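If you are not sure where your environment's cv2 installation lives, this standard-library snippet locates it without importing cv2. The printed plugin path assumes the layout of the opencv-python wheel, which may vary between versions.

```python
import importlib.util
import os

# Find where cv2 is installed without importing it, then print the directory
# that (in typical opencv-python wheels) holds the bundled Qt platform plugins.
spec = importlib.util.find_spec("cv2")
if spec is not None and spec.origin:
    cv2_dir = os.path.dirname(spec.origin)
    print(os.path.join(cv2_dir, "qt", "plugins", "platforms"))
else:
    print("cv2 is not installed in this environment")
```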

Example command (Linux/Mac):

# Adjust the path to match your specific environment location
rm ~/anaconda3/envs/myenv/lib/python3.7/site-packages/cv2/qt/plugins/platforms/libqxcb.so

Once removed, run python Cell_Trainer.py again, and the GUI should launch correctly!


❌ Error: "Authorization required" / "could not connect to display"

If you are connected through SSH, running inside a server container, or otherwise do not have a valid X/Wayland display session, the Qt GUI cannot start.

Use the headless CLI mode instead:

python Cell_Trainer.py --headless \
  --dataset-path "/path/to/dataset" \
  --work-dir "/path/to/work_dir" \
  --weight-path "/path/to/mask_rcnn_coco.h5" \
  --epochs 100 \
  --steps 1000 \
  --confidence 0.9

You can also load defaults from profile.json and override only what you need:

python Cell_Trainer.py --headless --profile profile.json --steps 1500

For all available options:

python Cell_Trainer.py --headless --help


📜 License

Distributed under the MIT License. See the LICENSE file for more information.

⭐ Show Your Support

If you find this project useful, please give it a star on GitHub! 🌟 It helps a lot!

Happy Segmenting! (^_<)〜☆
