11 changes: 11 additions & 0 deletions doc/apg.md
@@ -0,0 +1,11 @@
# APG

`micro_sam` supports three different modes for instance segmentation:
- Automatic Mask Generation (AMG) covers the image with a grid of points. These points are used as prompts and the resulting masks are merged via non-maximum suppression (NMS) to obtain the instance segmentation. This method was introduced in the original SAM publication.
- Automatic Instance Segmentation (AIS) uses an additional segmentation decoder, which we introduced in the `micro_sam` publication. This decoder predicts foreground probabilities as well as the normalized distances to cell centroids and boundaries. These predictions are used as input to a watershed to obtain the instances.
- Automatic Prompt Generation (APG) is an instance segmentation approach that we introduced in [a new paper](https://openreview.net/forum?id=xFO3DFZN45). It derives point prompts from the segmentation decoder (see AIS) and merges the resulting masks via NMS.

In our experiments, APG yields the best overall instance segmentation results (compared to AMG and AIS) and is competitive with CellPose-SAM, the state-of-the-art model for cell instance segmentation.

The segmentation mode can be selected with the argument `mode` or `segmentation_mode` in the [CLI](#using-the-command-line-interface-cli) and the [python functionality](https://computational-cell-analytics.github.io/micro-sam/micro_sam/automatic_segmentation.html). For details on how to use the different automatic segmentation modes, check out the [automatic segmentation notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/automatic_segmentation.ipynb); a minimal python sketch follows below. The code for the experiments comparing the different segmentation modes (from [the new paper](https://openreview.net/forum?id=xFO3DFZN45)) can be found [here](https://github.com/computational-cell-analytics/micro-sam/tree/master/scripts/apg_experiments).
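
A minimal sketch of selecting the mode from python, following the `get_predictor_and_segmenter` and `automatic_instance_segmentation` functions used in the example scripts (the input path is a placeholder):

```python
import imageio.v3 as imageio
from micro_sam.automatic_segmentation import automatic_instance_segmentation, get_predictor_and_segmenter

image = imageio.imread("my_image.tif")  # placeholder input image

predictor, segmenter = get_predictor_and_segmenter(
    model_type="vit_b_lm",    # a micro_sam model with the extra decoder
    segmentation_mode="apg",  # or "amg" / "ais"
)
instances = automatic_instance_segmentation(
    predictor=predictor, segmenter=segmenter, input_path=image, ndim=2,
)
```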
7 changes: 4 additions & 3 deletions doc/cli_tools.md
@@ -11,14 +11,15 @@ The supported CLIs can be used by
- Running `$ micro_sam.image_series_annotator` for starting the image series annotator.
- Running `$ micro_sam.train` for finetuning Segment Anything models on your data.
- Running `$ micro_sam.automatic_segmentation` for automatic instance segmentation.
- We support all post-processing parameters for automatic instance segmentation (for both AMG and AIS).
- The automatic segmentation mode can be controlled by: `--mode <MODE_NAME>`, where the available choice for `MODE_NAME` is `amg` / `ais`.
- We support all post-processing parameters for automatic instance segmentation (for AMG, AIS and APG).
- The automatic segmentation mode can be controlled by: `--mode <MODE_NAME>`, where the available choices for `MODE_NAME` are `amg` / `ais` / `apg`.
- AMG is supported by both default Segment Anything models and `micro-sam` models / finetuned models.
- AIS is supported by `micro-sam` models (or finetuned models, provided they are trained with the additional instance segmentation decoder).
- APG is supported by `micro-sam` models (or finetuned models, provided they are trained with the additional instance segmentation decoder).
- If these parameters are not provided by the user, `micro-sam` makes use of the best post-processing parameters (depending on the choice of model).
- The post-processing parameters can be changed by passing them via the CLI using `--<PARAMETER_NAME> <VALUE>`. For example, one can update the parameter values (e.g. `pred_iou_thresh`, `stability_score_thresh`, etc., supported by AMG) using `$ micro_sam.automatic_segmentation ... --pred_iou_thresh 0.6 --stability_score_thresh 0.6 ...` (a full CLI call is sketched after this list).
- Remember to specify the automatic segmentation mode using `--mode <MODE_NAME>` when using additional post-processing parameters.
- You can check details for supported parameters and their respective default values at `micro_sam/instance_segmentation.py` under the `generate` method for `AutomaticMaskGenerator` and `InstanceSegmentationWithDecoder` class.
- You can check details for supported parameters and their respective default values at `micro_sam/instance_segmentation.py`, under the `generate` method of the `AutomaticMaskGenerator`, `InstanceSegmentationWithDecoder` and `AutomaticPromptGenerator` classes.
- A good practice is to set `--ndim <NDIM>`, where `<NDIM>` corresponds to the number of dimensions of input images.
- Running `$ micro_sam.evaluate` for evaluating instance segmentation.
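
A sketch of a full CLI call combining mode selection with APG post-processing parameters; the `-i`/`-o`/`-m` input, output and model flags are assumed here, check `micro_sam.automatic_segmentation --help` for the exact argument names:

```bash
micro_sam.automatic_segmentation \
    -i my_image.tif -o segmentation.tif -m vit_b_lm \
    --mode apg --ndim 2 \
    --center_distance_threshold 0.5 --boundary_distance_threshold 0.5
```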

19 changes: 11 additions & 8 deletions doc/faq.md
@@ -94,7 +94,7 @@ We recommend transferring the model checkpoints to the system-level cache direct
<!---
TODO provide relevant links here.
-->
### 1. I have some micropscopy images. Can I use the annotator tool for segmenting them?
### 1. I have some microscopy images. Can I use the annotator tool for segmenting them?
Yes, you can use the annotator tool for:
- Segmenting objects in 2d images (using automatic and/or interactive segmentation).
- Segmenting objects in 3d volumes (using automatic and/or interactive segmentation for the entire object(s)).
@@ -214,30 +214,33 @@ You can load your finetuned model by entering the path to its checkpoint in the
If you are using the python library or CLI you can specify this path with the `checkpoint_path` parameter.


### 5. What is the background of the new AIS (Automatic Instance Segmentation) feature in `micro_sam`?
`micro_sam` introduces a new segmentation decoder to the Segment Anything backbone, for enabling faster and accurate automatic instance segmentation, by predicting the [distances to the object center and boundary](https://github.com/constantinpape/torch-em/blob/main/torch_em/transform/label.py#L284) as well as predicting foregrund, and performing [seeded watershed-based postprocessing](https://github.com/constantinpape/torch-em/blob/main/torch_em/util/segmentation.py#L122) to obtain the instances.
### 5. What is the background of the AIS (Automatic Instance Segmentation) feature in `micro_sam`?
`micro_sam` introduces a new segmentation decoder to the Segment Anything backbone to enable fast and accurate automatic instance segmentation. The decoder predicts the [distances to the object center and boundary](https://github.com/constantinpape/torch-em/blob/main/torch_em/transform/label.py#L284) as well as the foreground, and [seeded watershed-based postprocessing](https://github.com/constantinpape/torch-em/blob/main/torch_em/util/segmentation.py#L122) is applied to these predictions to obtain the instances; a minimal sketch of this postprocessing follows below.
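
A minimal sketch of this postprocessing, assuming the `torch_em` helper linked above; the decoder outputs are replaced by random stand-ins so the snippet runs on its own:

```python
import numpy as np
from torch_em.util.segmentation import watershed_from_center_and_boundary_distances

# Dummy (H, W) decoder outputs; in practice these come from the segmentation decoder.
rng = np.random.default_rng(0)
foreground = rng.random((256, 256)).astype("float32")
center_distances = rng.random((256, 256)).astype("float32")
boundary_distances = rng.random((256, 256)).astype("float32")

instances = watershed_from_center_and_boundary_distances(
    center_distances, boundary_distances, foreground,
    center_distance_threshold=0.5,
    boundary_distance_threshold=0.5,
    foreground_threshold=0.5,
    min_size=50,  # discard tiny fragments
)
```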

### 6. What is the background of the new APG (Automatic Prompt Generation) feature in `micro_sam`?
From version 1.7.0 onwards, `micro_sam` provides a new automatic instance segmentation method called APG (Automatic Prompt Generation). It builds on `micro_sam` by extracting point prompts from the boundary and center distances predicted by the pretrained segmentation decoder. The derived prompts are passed to the prompt encoder and mask decoder, and the resulting outputs are post-processed to obtain the instances. The method is compatible with the `micro_sam.automatic_segmentation` CLI (via `--mode apg`) and the python interface (via `segmentation_mode="apg"`). See [APG](#apg) for details.

### 6. I want to finetune only the Segment Anything model without the additional instance decoder.
The instance segmentation decoder is optional. So you can only finetune SAM or SAM and the additional decoder. Finetuning with the decoder will increase training times, but will enable you to use AIS. See [this example](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/finetuning#example-for-model-finetuning) for finetuning with both the objectives.
### 7. I want to finetune only the Segment Anything model without the additional instance decoder.
The instance segmentation decoder is optional, so you can finetune only SAM, or SAM together with the additional decoder. Finetuning with the decoder will increase training times, but enables AIS and APG; a minimal training sketch follows this answer. See [this example](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/finetuning#example-for-model-finetuning) for finetuning with both objectives.
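
A minimal sketch of finetuning with `micro_sam.training.train_sam`, where `with_segmentation_decoder` controls whether the additional decoder (required for AIS and APG) is trained; the checkpoint name and the dataloaders are placeholders:

```python
import micro_sam.training as sam_training

# train_loader and val_loader are torch_em dataloaders for your data
# (see the dataloader question below); they are assumed to exist here.
sam_training.train_sam(
    name="my_finetuned_sam",         # placeholder checkpoint name
    model_type="vit_b",
    train_loader=train_loader,
    val_loader=val_loader,
    n_epochs=50,
    with_segmentation_decoder=True,  # set to False to finetune SAM only
)
```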

> NOTE: To try out the reverse (i.e. the automatic instance segmentation framework without the interactive capability, in the form of a UNETR: a vision transformer encoder with a convolutional decoder), you can take inspiration from this [example on LIVECell](https://github.com/constantinpape/torch-em/blob/main/experiments/vision-transformer/unetr/for_vimunet_benchmarking/run_livecell.py).


### 7. I have a NVIDIA RTX 4090Ti GPU with 24GB VRAM. Can I finetune Segment Anything?
### 8. I have a NVIDIA RTX 4090Ti GPU with 24GB VRAM. Can I finetune Segment Anything?
Finetuning Segment Anything is possible on most consumer-grade GPU and CPU resources (though training is a lot slower on the CPU). With the mentioned GPU, it should be possible to finetune a ViT Base (abbreviated as `vit_b`) by reducing the number of objects per image to 15.
This parameter has the biggest impact on the VRAM consumption and quality of the finetuned model.
You can find an overview of the resources we have tested for finetuning [here](#training-your-own-model).
We also provide the convenience function `micro_sam.training.train_sam_for_configuration`, which selects the best training settings for a given configuration (see the sketch below). This function is also used by the finetuning UI.
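
A minimal sketch of this convenience function, assuming it accepts the checkpoint name, a hardware configuration string and the dataloaders; check its docstring for the exact supported configurations:

```python
import micro_sam.training as sam_training

# train_loader and val_loader are torch_em dataloaders, assumed to exist here.
sam_training.train_sam_for_configuration(
    name="my_finetuned_sam",  # placeholder checkpoint name
    configuration="rtx5000",  # assumed configuration key for a 24GB-class GPU
    train_loader=train_loader,
    val_loader=val_loader,
)
```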


### 8. I want to create a dataloader for my data, to finetune Segment Anything.
### 9. I want to create a dataloader for my data to finetune Segment Anything.
Thanks to `torch-em`, creating PyTorch datasets and dataloaders with the python library is convenient and supported for various data formats and data structures. See the [tutorial notebook](https://github.com/constantinpape/torch-em/blob/main/notebooks/tutorial_create_dataloaders.ipynb) on how to create dataloaders using `torch-em` and the [documentation](https://github.com/constantinpape/torch-em/blob/main/doc/datasets_and_dataloaders.md) for details on creating your own datasets and dataloaders. Alternatively, finetuning via the `napari` tool eases this process by letting you enter the input parameters (paths to the directories with inputs and labels, etc.) directly in the tool. A minimal dataloader sketch follows the note below.
> NOTE: If you have images with large input shapes and a sparse density of instance segmentations, we recommend using a [`sampler`](https://github.com/constantinpape/torch-em/blob/main/torch_em/data/sampler.py) to choose patches with valid segmentations for finetuning (see the [example](https://github.com/computational-cell-analytics/micro-sam/blob/master/finetuning/specialists/training/light_microscopy/plantseg_root_finetuning.py#L29) for the PlantSeg (Root) specialist model in `micro_sam`).
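
A minimal dataloader sketch with `torch_em.default_segmentation_loader`, assuming images and labels stored as tif files in two folders (the paths are placeholders; many other formats are supported, see the documentation linked above):

```python
import torch_em
from torch_em.data import MinInstanceSampler

loader = torch_em.default_segmentation_loader(
    raw_paths="data/images",       # placeholder folder with input images
    raw_key="*.tif",               # glob pattern to select the image files
    label_paths="data/labels",     # placeholder folder with label images
    label_key="*.tif",
    patch_shape=(512, 512),
    batch_size=2,
    ndim=2,
    sampler=MinInstanceSampler(),  # skip patches without segmented objects
)
```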


### 9. How can I evaluate a model I have finetuned?
### 10. How can I evaluate a model I have finetuned?
To validate a Segment Anything model for your data, you have different options, depending on the task you want to solve and whether you have segmentation annotations for your data.

- If you don't have any annotations you will have to validate the model visually. We suggest doing this with the `micro_sam` GUI tools. You can learn how to use them in the `micro_sam` documentation.
2 changes: 1 addition & 1 deletion doc/start_page.md
@@ -14,7 +14,7 @@ Based on these components `micro_sam` enables fast interactive and automatic ann
We are still working on improving and extending its functionality. The current roadmap includes:
- Releasing more and better finetuned models for the biomedical imaging domain.
- Integrating parameter efficient training and compressed models for efficient fine-tuning and faster inference.
- Support for [SAM2](https://ai.meta.com/sam2/).
- Support for [SAM2](https://ai.meta.com/sam2/) and [SAM3](https://ai.meta.com/sam3/).

If you run into any problems or have questions please [open an issue](https://github.com/computational-cell-analytics/micro-sam/issues/new) or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam`.
You can follow recent updates on `micro_sam` in our [news feed](https://forum.image.sc/t/microsam-news-feed).
36 changes: 22 additions & 14 deletions examples/automatic_segmentation.py
@@ -10,7 +10,7 @@
DATA_CACHE = os.path.join(get_cache_directory(), "sample_data")


def livecell_automatic_segmentation(model_type, use_amg, generate_kwargs):
def livecell_automatic_segmentation(model_type, segmentation_mode, generate_kwargs):
"""Run the automatic segmentation for an example image from the LIVECell dataset.

See https://doi.org/10.1038/s41592-021-01249-6 for details on the data.
@@ -21,7 +21,7 @@ def livecell_automatic_segmentation(model_type, use_amg, generate_kwargs):
predictor, segmenter = get_predictor_and_segmenter(
model_type=model_type,
checkpoint=None, # Replace this with your custom checkpoint.
amg=use_amg,
segmentation_mode=segmentation_mode,
is_tiled=False, # Switch to 'True' in case you would like to perform tiling-window based prediction.
)

@@ -42,7 +42,7 @@ def livecell_automatic_segmentation(model_type, use_amg, generate_kwargs):
napari.run()


def hela_automatic_segmentation(model_type, use_amg, generate_kwargs):
def hela_automatic_segmentation(model_type, segmentation_mode, generate_kwargs):
"""Run the automatic segmentation for an example image from the Cell Tracking Challenge (HeLa 2d) dataset.
"""
example_data = fetch_hela_2d_example_data(DATA_CACHE)
@@ -51,7 +51,7 @@ def hela_automatic_segmentation(model_type, use_amg, generate_kwargs):
predictor, segmenter = get_predictor_and_segmenter(
model_type=model_type,
checkpoint=None, # Replace this with your custom checkpoint.
amg=use_amg,
segmentation_mode=segmentation_mode,
is_tiled=False, # Switch to 'True' in case you would like to perform tiling-window based prediction.
)

@@ -72,7 +72,7 @@ def hela_automatic_segmentation(model_type, use_amg, generate_kwargs):
napari.run()


def wholeslide_automatic_segmentation(model_type, use_amg, generate_kwargs):
def wholeslide_automatic_segmentation(model_type, segmentation_mode, generate_kwargs):
"""Run the automatic segmentation with tiling for an example whole-slide image from the
NeurIPS Cell Segmentation challenge.
"""
Expand All @@ -82,7 +82,7 @@ def wholeslide_automatic_segmentation(model_type, use_amg, generate_kwargs):
predictor, segmenter = get_predictor_and_segmenter(
model_type=model_type,
checkpoint=None, # Replace this with your custom checkpoint.
amg=use_amg,
segmentation_mode=segmentation_mode,
is_tiled=True,
)

@@ -110,37 +110,45 @@ def main():
# Which automatic segmentation mode to use:
# the automatic mask generation (AMG): supported by all our models.
# the automatic instance segmentation (AIS): supported by 'micro-sam' models.
use_amg = False # 'False' chooses AIS as the automatic segmentation mode.
# the automatic prompt generation (APG): supported by 'micro-sam' models.
segmentation_mode = "apg" # available choices for automatic segmentation modes are 'amg' / 'ais' / 'apg'.

# Post-processing parameters for automatic segmentation.
if use_amg: # AMG parameters
if segmentation_mode == "amg": # AMG parameters
generate_kwargs = {
"pred_iou_thresh": 0.88,
"stability_score_thresh": 0.95,
"box_nms_thresh": 0.7,
"crop_nms_thresh": 0.7,
"min_mask_region_area": 0,
"output_mode": "binary_mask",
}
else: # AIS parameters
elif segmentation_mode == "ais": # AIS parameters
generate_kwargs = {
"center_distance_threshold": 0.5,
"boundary_distance_threshold": 0.5,
"foreground_threshold": 0.5,
"foreground_smoothing": 1.0,
"distance_smoothing": 1.6,
"min_size": 0,
"output_mode": "binary_mask",
}
elif segmentation_mode == "apg": # APG parameters
generate_kwargs = {
"center_distance_threshold": 0.5,
"boundary_distance_threshold": 0.5,
"foreground_threshold": 0.5,
"nms_threshold": 0.9,
}
else:
raise ValueError("The selected 'segmentation_mode' is not a supported segmentation method.")

# Automatic segmentation for livecell data.
livecell_automatic_segmentation(model_type, use_amg, generate_kwargs)
livecell_automatic_segmentation(model_type, segmentation_mode, generate_kwargs)

# Automatic segmentation for cell tracking challenge hela data.
# hela_automatic_segmentation(model_type, use_amg, generate_kwargs)
# hela_automatic_segmentation(model_type, segmentation_mode, generate_kwargs)

# Automatic segmentation for a whole slide image.
# wholeslide_automatic_segmentation(model_type, use_amg, generate_kwargs)
# wholeslide_automatic_segmentation(model_type, segmentation_mode, generate_kwargs)


# The corresponding CLI call for hela_automatic_segmentation:
2 changes: 1 addition & 1 deletion examples/automatic_tracking.py
@@ -29,7 +29,7 @@ def example_automatic_tracking(use_finetuned_model):
embedding_path = os.path.join(EMBEDDING_CACHE, "embeddings-ctc.zarr")
model_type = "vit_h"

predictor, segmenter = get_predictor_and_segmenter(model_type=model_type, amg=False)
predictor, segmenter = get_predictor_and_segmenter(model_type=model_type, segmentation_mode="ais")

masks_tracked, _ = automatic_tracking(
predictor=predictor,
1 change: 1 addition & 0 deletions micro_sam/__init__.py
@@ -5,6 +5,7 @@
.. include:: ../doc/cli_tools.md
.. include:: ../doc/python_library.md
.. include:: ../doc/finetuned_models.md
.. include:: ../doc/apg.md
.. include:: ../doc/data_submission.md
.. include:: ../doc/faq.md
.. include:: ../doc/contributing.md
4 changes: 2 additions & 2 deletions micro_sam/automatic_segmentation.py
@@ -4,7 +4,7 @@
from tqdm import tqdm
from pathlib import Path
from functools import partial
from typing import Dict, List, Optional, Union, Tuple
from typing import Dict, List, Optional, Union, Tuple, Literal

import numpy as np
import imageio.v3 as imageio
Expand All @@ -26,7 +26,7 @@ def get_predictor_and_segmenter(
model_type: str,
checkpoint: Optional[Union[os.PathLike, str]] = None,
device: str = None,
segmentation_mode: Optional[str] = None,
segmentation_mode: Optional[Literal["amg", "ais", "apg"]] = None,
is_tiled: bool = False,
predictor=None,
state=None,