diff --git a/content/courses/fiji-image-processing/fiji-omero/index.md b/content/courses/fiji-image-processing/fiji-omero/index.md deleted file mode 100644 index b566cc0c..00000000 --- a/content/courses/fiji-image-processing/fiji-omero/index.md +++ /dev/null @@ -1,1075 +0,0 @@ ---- -title: Image Processing with Fiji and Omero -authors: [khs] -highlight_style: "github" -date: 2020-11-09T00:00:00-05:00 -toc: true -type: book -draft: false -weight: 200 - ---- - -## Introduction to OMERO - -OMERO is an image management software package that allows you to organize, view, annotate, analyze, and share your data from a single centralized database. With OMERO, you and your collaborators can access your images from any computer without having to download the images directly to your computer. - -In this chapter you will learn to view and manipulate images through the [Fiji -/ImageJ](https://fiji.sc/) software package. - -For more details, review the [OMERO tutorial](/tutorials/omero-hands-on) or visit the Research Computing website describing the [UVA Omero Database Service](https://www.rc.virginia.edu/userinfo/omero/overview/). - ---- - -## Installation of Fiji and the OMERO Plugin - -1. Install the Fiji application on your computer as described in this [chapter](/courses/fiji-image-processing/introduction/). - -2. Start Fiji and go to `Help` > `Update`. This starts the updater which looks for plugin updates online. - -3. In the `ImageJ Updater` dialog window, click on `Manage Update Sites`. Ensure that the boxes for the following plugins are checked: - - * Java-8 - - * Bio-Formats - - * OMERO 5.4 - - -4. Click `Close`. This will take you back to the `ImageJ Updater` window. - -5. Click `Apply Changes`. - -6. Restart Fiji. - -### Download the Example Scripts - -To follow along, you can download the Jython scripts presented in this tutorial through [this link](/scripts/fiji/fiji-omero-scripts.zip). - -### Check your OMERO Database Account - -1. 
If you are accessing the OMERO Database from an off-Grounds location, you have to connect through a UVA Virtual Private Network (VPN). Please follow these [instructions to set up your VPN](https://virginia.service-now.com/its?id=itsweb_kb_article&sys_id=f24e5cdfdb3acb804f32fb671d9619d0). - -2. Open a web browser and go to http://omero.hpc.virginia.edu. Logging in to the OMERO web interface is described [here](/courses/omero/#logging-in-with-omeroweb). - - * **Username:** Your computing ID - - * **Password:** Your personal password - - If you are new to UVA's OMERO Database, a temporary password has been emailed to you. - - **Please change your temporary password the first time you log in to the OMERO database server, as described in these [instructions](https://www.rc.virginia.edu/userinfo/omero/overview/#changing-your-omero-database-password).** - -3. You may be a member of multiple groups. For this workshop we want to specify `omero-demo` as the default group. - -If you cannot log in, please submit a [support request](https://www.rc.virginia.edu/form/support-request/). Select **Omero Image Analysis** as the Support Category. - ---- - -# Working with the OMERO Web Client and Desktop Applications - -OMERO provides a suite of client applications to work with your imaging data. - -* [OMERO.web Client](https://docs.openmicroscopy.org/omero/5.6.1/users/clients-overview.html#omero-web): - - OMERO.web is a web-based client for OMERO. At UVA you can use it by connecting to http://omero.hpc.virginia.edu. The Web client can be used to view images and metadata, and manage image tags, results, annotations, and attachments. It cannot be used to import data or measure regions-of-interest (see OMERO.insight). 
- - * View Data - - * View Image Metadata - - * Manage Image Tags - - * Manage Results and Annotations - -* [OMERO.insight](https://docs.openmicroscopy.org/omero/5.6.1/users/clients-overview.html#omero-insight): - - The two main additional features of OMERO.insight which are not yet available in OMERO.web are: - - * The Measurement Tool, a sub-application of ImageViewer that enables size and intensity measurements of defined regions-of-interest (ROIs), and - - * image import. - -* [OMERO.importer](https://docs.openmicroscopy.org/omero/5.6.1/users/clients-overview.html#omero-importer): - - The OMERO Importer is part of the OMERO Desktop client but can also be run as a standalone application. - -* [OMERO.cli](https://docs.openmicroscopy.org/omero/5.6.1/users/clients-overview.html#omero-cli): - - The OMERO.cli tools provide a set of administration and advanced command line tools written in Python. - - -For this workshop we will be using the OMERO Web client. - - -# Projects, Datasets, Screens, Plates and Images - -For this workshop we will be using prepared image sets and also uploading new image data to your accounts. OMERO organizes images in nested hierarchies of `Projects` and `Datasets` or `Screens` and `Plates`. The hierarchical nature is visualized as tree structures in the OMERO.web and OMERO.insight clients. - -Images can be linked to individual projects, datasets, screens, or plates. A given image can be linked to multiple folders, allowing you to logically assign the same image to different folders without copying it. You can think of it like smart folders or symbolic links. - -In addition, each group has a folder `Orphaned Images` that contains images that have not been linked to any projects, datasets, screens, or plates. - -**Note:** -Each project, dataset, screen, plate, and image has a __unique numerical ID__ that unambiguously identifies the data artifact.
We use these numerical IDs rather than image, folder, or file names to access these elements. - -### Sample Data - -Sample data has been added to the `Fiji Omero Workshop` project. Inside the blue project folder you will find several dataset subfolders. - -* Group: **omero-demo** (Group ID: `53`) - -* Project: **Fiji Omero Workshop** (Project ID: `130`) - -**Make note of the Group ID and Project ID.** - -### Your Personal Projects and Datasets - -Let's create a new project and dataset through the OMERO web client. - -1. In the left `Explore` panel, click on the `omero-demo` label and switch to `All Members`. Now you can see all data shared within the `omero-demo` group. - -2. Right-click on the avatar with your name (or the "All Members" label). You can also just right-click the folder icon labeled `Fiji Omero Workshop`. In the popup menu, select `Create new` > `Project`. - -3. Name the new project `<your computing ID>_workshop`, for example `mst3k_workshop`. - -4. Click on the blue folder icon of your new project and take note of the `Project ID`. We will need this to tell Fiji where to load data from or save data to. Now right-click on your blue project folder icon and create a new dataset. Right-click on the dataset icon and take note of the `Dataset ID`. - -After the project is generated, your user interface should look like this: - ![](/courses/fiji-omero/fiji-omero-datasetid.png) - ---- - -# Interactive Use of Fiji and OMERO - -### Uploading Images with the OMERO.insight client - -Here we will demonstrate the upload of a set of image files via the OMERO.insight client. The import process is described in our [OMERO.insight Import Tutorial](/lesson/omero-hands-on).
- -After the upload, the files are located in the following Project and Dataset: - -* **Project:** Fiji Omero Workshop (Project ID: `130`) - -* **Dataset:** HeLa Cells (Dataset ID: `265`) - -**Note that images cannot be uploaded with the web-based OMERO.web client.** - ---- - -### Uploading Images with Fiji - -Images that have been opened and are displayed in the Fiji graphical user interface can be uploaded to OMERO using the `File` > `Export` > `OMERO...` command. The naming of the tool can be confusing -- it's all a matter of perspective: an **Export from Fiji** equates to an **Import in OMERO**. To minimize possible confusion, we avoid these terms for the purpose of this workshop and refer to **upload** for any process that sends data to OMERO and **download** for any process that retrieves data from OMERO. - -Before you begin, you need to know the dataset ID that the image should be linked to in OMERO. You can use the OMERO.web, OMERO.insight, or OMERO.cli tools to look up an appropriate dataset ID for this. **Keep in mind that you can upload images only to those Projects and Datasets that you own.** - -* If you choose a dataset ID of `-1`, the image will be added to the `Orphaned Images` in your default OMERO group. For this workshop, we have chosen the `omero-demo` group as the default. - -* Alternatively, you can export the image to a specific dataset in your default group; you just need to provide the dataset ID when executing the upload command. - -**Note: Image uploads via the built-in OMERO Export function are limited to your default group.** This is not an issue if you belong to a single group. If you are a member of multiple groups, you can change your default group via the OMERO.web client. - -### Exercises - -**Export RGB image** - -1. File > Open Samples > Leaf (36K) - -2. File > Export > OMERO... - - Since we chose `Dataset: -1`, the uploaded image will end up in `Orphaned Images` for your group. - -3. 
Go to the OMERO webclient (http://omero.hpc.virginia.edu) and look for the uploaded image in `Orphaned Images`. - - *Variegated leaf specimen with ruler for scale* - *Export to OMERO dialog with server and dataset settings* - - -
- -**Export image stack to a specific dataset** - -1. In the OMERO webclient, create a new dataset under your personal workshop project. **Make note of the dataset ID.** - -2. File > Open Samples > T1 Head (2.4M, 16-bit) - -3. File > Export > OMERO... - -**Question:** What happens when you repeatedly upload the same image to OMERO? - ---- - -### Uploading Results with Fiji - -1. In the OMERO.web client, make note of a dataset ID where you would like to upload a new image and an associated results file to. You may pick the dataset ID from the previous exercise. - -2. In Fiji, go to `File` > `Open Samples` > `Blobs (25K)` - -3. Let's apply some basic filtering and simple image segmentation to identify all cellular outlines in the image: - - a. Go to `Process` > `Filters` > `Median`. In the popup dialog enter a `Radius` of `3.0` and click `OK`. This will smooth out some of the image's intrinsic noise without degrading the object outlines. - - b. Go to `Image` > `Adjust` > `Threshold`. In the popup dialog choose the `Default` thresholding algorithm, uncheck the `Dark Background` box and click `Apply`. The image should now be converted to a binary mask with white objects on a black background. - *Fiji Threshold dialog with range 126-255 and Red overlay selected* - *Blobs image with red threshold overlay highlighting segmented cells* - - c. Go to `Analyze` > `Set Measurements...`. In the popup dialog specify the parameters as shown in this screenshot. Click `OK`. - - d. Go to `Analyze` > `Analyze Particles` and set up the parameters as shown. Click `OK`. - *Set Measurements dialog settings* - *Analyze Particles dialog settings* - - e. These steps should create a `Results` and a `Summary` table. - -3. Go to `File` > `Export` > `OMERO...`. Enter the dataset ID you chose under step 1 and click `OK`. - -{{< figure src="/courses/fiji-omero/fiji-omero-blobs-export.png" >}} - -4. Click on the `Results` table window and go to `File` > `Save As`. Save the file as `Results.csv` on your computer. 
Repeat the same for the `Summary` table. - -5. **Upload the results files:** Go to the OMERO web client, open the Dataset folder that you had chosen for the image upload. On the right side of the webpage, expand the `Attachments` pane and click on the `+` icon. In the popup dialog, click `Choose File` and select the saved csv files. Click `Accept`. - -Now you have analyzed the image and uploaded the image as well as all results to the OMERO dataset. - - ---- - -# Scripting - -Fiji provides convenient programming wrappers for the Fiji/ImageJ and OMERO functions that allow you to develop your scripts in a variety of programming languages: - -* ImageJ macro language: simple, slow, not very versatile -* Jython: Python syntax with a few limitations, easy to learn, very versatile -* BeanShell: Syntax similar to Java, versatile -* Several others… - -Fiji provides a richer programming environment than ImageJ and it is recommended to use Fiji instead of ImageJ for any script development. Our [Fiji/ImageJ: Script development for Image Processing](/tutorials/fiji-scripting/) tutorial provides a more general introduction to this topic. - -### Example Scripts -To follow along, you can download the Jython scripts presented in this tutorial through **[this link](/scripts/fiji/fiji-omero-scripts.zip)**. - - -### The Script Editor {#script-editor-id} - -We'll be using the built-in **Script Editor** in Fiji to run our scripts. To start the script editor in Fiji go to menu `File` > `New` > `Script…`. - -{{< figure src="/courses/fiji-omero/fiji-script-editor.png" >}} - -* The top pane provides the editor. Multiple scripts can be open at the same time and will show up as separate tabs. -* The bottom pane shows output (e.g. produced by print statements) and any errors encountered during script execution. - -**Script editor menus:** - -+ **File:** Load, save scripts -+ **Edit:** Similar to word processor functionality (Copy, Paste, etc.) 
-+ **Language:** Choose the language your script is written in; this also enables syntax highlighting -+ **Templates:** Example scripts -+ **Tools:** Access to the source code of a class in the ImageJ/Fiji package -+ **Tabs:** Navigation aid within the script -
- -### The Macro Recorder {#macro-recorder-id} -The Macro Recorder logs all commands entered through the Fiji graphical user interface (GUI). It is useful for converting these GUI actions into script commands. - -+ In the Fiji menu, go to `Plugins` > `Macros…` > `Record`. -+ In the `Record` drop-down menu, select `BeanShell`. -+ Clicking the `Create` button copies the recorded code to a new script in the [Script Editor](#script-editor-id). -
- -### The Console Window {#console-id} -In the Fiji menu, go to `Window` > `Console`. - -+ The Console window shows the output and logging information produced by running plugins and scripts. - -
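Any output produced by `print` statements in a script shows up in this Console window. A minimal script to paste into the Script Editor and run (no OMERO connection required; the function-style `print` below works in both Jython 2 and Python 3):

```python
# Each print statement appears as one line in the Fiji Console window.
samples = ["leaf", "blobs", "t1-head"]
for name in samples:
    print("Loaded sample: %s" % name)
print("Done: %d samples" % len(samples))
```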
- --- - -### Connecting to OMERO -In order to get full access to OMERO's programming interface, we will now use a more advanced approach to establish an authenticated connection with the OMERO database. We need instances of three classes: `LoginCredentials`, `SimpleLogger`, and `Gateway`. The central one for the configuration is `LoginCredentials`, which has to be initialized with user-specific credentials and database host information. - -Our script would not be very useful or secure if we had to hardcode these values. Fortunately we can use the [SciJava @Parameter](https://imagej.net/Script_Parameters) annotation to prompt the script user for the relevant information: -```python -#@ String (label="Omero User") username -#@ String (label="Omero Password", style="password") password -#@ String (label="Omero Server", value="omero.hpc.virginia.edu") server -#@ Integer (label="Omero Port", value=4064) port -``` - -These four lines at the top of our scripts are sufficient to create a dialog window that prompts the user for information that will be populated in the `username`, `password`, `server`, and `port` variables. (The full example scripts add a fifth parameter for the group ID, `group_id`, which is used below.) With these variables in place we can now establish a connection to the OMERO database server. - -```python -cred = LoginCredentials() -if group_id != -1: - cred.setGroupID(group_id) -cred.getServer().setHostname(server) -cred.getServer().setPort(port) -cred.getUser().setUsername(username) -cred.getUser().setPassword(password) -simpleLogger = SimpleLogger() -gateway = Gateway(simpleLogger) -e = gateway.connect(cred) -``` - -The return value of the `connect` method is stored as a boolean value in the variable `e`. If `e==True`, the connection was established; if `e==False`, the connection failed. We can reuse this code block for most of our OMERO scripts.
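A connection that is opened should also be released when the script ends, even if the processing code throws an error. One defensive pattern is to wrap the per-script work in `try`/`finally`. A minimal sketch of the pattern, using a stand-in class instead of a real OMERO `Gateway` so it can be shown (and run) in isolation:

```python
class FakeGateway(object):
    """Stand-in for omero.gateway.Gateway, used only to illustrate the pattern."""
    def __init__(self):
        self.connected = True

    def disconnect(self):
        self.connected = False


def run_with_gateway(gateway, work):
    """Run work(gateway) and guarantee the connection is closed afterwards."""
    try:
        return work(gateway)
    finally:
        # Runs on normal return AND when work() raises an exception.
        gateway.disconnect()


gw = FakeGateway()
result = run_with_gateway(gw, lambda g: "processed 4 images")
print(result)
print(gw.connected)  # False: disconnect ran automatically
```

In a real script, `gateway` would be the object returned by `gateway.connect(cred)` above, and `work` would be the body of your processing code.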
-It is very important to close the connection to the database at the end of your script, like this: -```python -gateway.disconnect() -``` - --- - -### Getting the Basic OMERO Dataset Info - -OMERO organizes users in groups. Each user can be a member of multiple groups. Images are organized in Projects and Datasets, or in Screens and Plates. The following script, `Omero_Info.py`, connects a user to a remote OMERO instance and shows a list of: - -* the groups that the user belongs to and the associated group ID. This ID is important when you want to access images stored for a particular group; -* the projects and datasets for a particular group (specified via its unique group ID); -* and a list of images, organized by project and dataset, that the user has access to in a particular group. - -The script establishes a connection to the OMERO database and outputs your OMERO group memberships, as well as a list of all of your projects, datasets, and images. The code contains separate functions to connect to the database, retrieve information from the database, and parse the data into a set of tables. If you're just starting with programming, you may find it helpful to work through our [Fiji Scripting](/tutorials/fiji-scripting/) and other tutorials on our [learning portal](/categories/). - -(Click on the black triangle next to **View** to take a look at the script.) - -<details>
-View Omero_Info.py script - -{{< highlight python "linenos=table,linenostart=1" >}} -#@ String (label="Omero User") username -#@ String (label="Omero Password", style="password") password -#@ String (label="Omero Server", value="omero.hpc.virginia.edu") server -#@ Integer (label="Omero Port", value=4064) port -#@ Integer (label="Omero Group ID", min=-1, value=-1) group_id - - -# Basic Java and ImageJ dependencies -from ij.measure import ResultsTable -from java.lang import Long -from java.lang import String -from java.util import ArrayList - -# Omero dependencies -import omero -from omero.gateway import Gateway -from omero.gateway import LoginCredentials -from omero.gateway import SecurityContext -from omero.gateway.exception import DSAccessException -from omero.gateway.exception import DSOutOfServiceException -from omero.gateway.facility import BrowseFacility -from omero.log import SimpleLogger - - -def connect(group_id, username, password, server, port): - """Omero Connect with credentials and simpleLogger""" - cred = LoginCredentials() - if group_id != -1: - cred.setGroupID(group_id) - cred.getServer().setHostname(server) - cred.getServer().setPort(port) - cred.getUser().setUsername(username) - cred.getUser().setPassword(password) - simpleLogger = SimpleLogger() - gateway = Gateway(simpleLogger) - e = gateway.connect(cred) - return gateway - - -def get_groups(gateway): - """Retrieves the groups for the user""" - currentGroupId = gateway.getLoggedInUser().getGroupId() - ctx = SecurityContext(currentGroupId) - adminService = gateway.getAdminService(ctx, True) - uid = adminService.lookupExperimenter(username) - groups = [] - for g in sorted(adminService.getMemberOfGroupIds(uid)): - groupname = str(adminService.getGroup(g).getName().getValue()) - groups.append({ - 'Id': g, - 'Name': groupname, - }) - if g == currentGroupId: - currentGroup = groupname - return groups, currentGroup - - -def get_projects_datasets(gateway): - """Retrieves the projects and datasets for 
the user""" - results = [] - proj_dict = {} - ds_dict = {} - groupid = gateway.getLoggedInUser().getGroupId() - ctx = SecurityContext(groupid) - containerService = gateway.getPojosService(ctx) - - # Read datasets in all projects - projects = containerService.loadContainerHierarchy("Project", None, None) # allowed: 'Project", "Dataset", "Screen", "Plate" - for p in projects: # omero.model.ProjectI - p_id = p.getId().getValue() - p_name = p.getName().getValue() - proj_dict[p_id] = p_name - for d in p.linkedDatasetList(): - ds_id = d.getId().getValue() - ds_name = d.getName().getValue() - results.append({ - 'Project Id': p_id, - 'Project Name': p_name, - 'Dataset Id': ds_id, - 'Dataset Name': ds_name, - 'Group Id': groupid, - }) - ds_dict[ds_id] = ds_name - - # read datasets not linked to any project - ds_in_proj = [p['Dataset Id'] for p in results] - ds = containerService.loadContainerHierarchy("Dataset", None, None) - for d in ds: # omero.model.ProjectI - ds_id = d.getId().getValue() - ds_name = d.getName().getValue() - if ds_id not in ds_in_proj: - ds_dict[ds_id] = ds_name - results.append({ - 'Project Id': '--', - 'Project Name': '--', - 'Dataset Id': ds_id, - 'Dataset Name': ds_name, - 'Group Id': groupid, - }) - return results, proj_dict, ds_dict - - -def get_images(gateway, datasets, orphaned=True): - """Return all image ids and image names for provided dataset ids""" - browse = gateway.getFacility(BrowseFacility) - experimenter = gateway.getLoggedInUser() - ctx = SecurityContext(experimenter.getGroupId()) - images = [] - for dataset_id in datasets: - ids = ArrayList(1) - ids.add(Long(dataset_id)) - j = browse.getImagesForDatasets(ctx, ids).iterator() - while j.hasNext(): - image = j.next() - images.append({ - 'Image Id': String.valueOf(image.getId()), - 'Image Name': image.getName(), - 'Dataset Id': dataset_id, - 'Dataset Name': datasets[dataset_id], - }) - if orphaned: - orphans = browse.getOrphanedImages(ctx, ctx.getExperimenter()) # need to pass user id 
(long) - for image in orphans: - images.append({ - 'Image Id': String.valueOf(image.getId()), - 'Image Name': image.getName(), - 'Dataset Id': -1, - 'Dataset Name': '', - }) - return images - - -def show_as_table(title, data, order=[]): - """Helper function to display group and data information as a ResultsTable""" - table = ResultsTable() - for d in data: - table.incrementCounter() - order = [k for k in order] - order.extend([k for k in d.keys() if k not in order]) - for k in order: - table.addValue(k, d[k]) - table.show(title) - - -# Main code -gateway = connect(group_id, username, password, server, port) - -groups, current_group = get_groups(gateway) -show_as_table("My Groups", groups, order=['Id', 'Name']) - -all_data,_,datasets = get_projects_datasets(gateway) -show_as_table("Projects and Datasets - Group: %s" % current_group, all_data, order=['Group Id', 'Dataset Id', 'Dataset Name', 'Project Name', 'Project Id']) - -gateway.disconnect() -{{< /highlight >}} -</details>
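The column-ordering idea inside `show_as_table` (preferred columns first, then any remaining row keys appended once) can be exercised outside Fiji. A plain-Python sketch of the same logic:

```python
def merge_order(preferred, row_keys):
    """Start from the preferred column order, then append any keys not yet listed."""
    order = list(preferred)
    order.extend(k for k in row_keys if k not in order)
    return order

row = {'Name': 'demo', 'Id': 53, 'Group Id': 7}
print(merge_order(['Id', 'Name'], row.keys()))  # → ['Id', 'Name', 'Group Id']
```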
- - --- - -### Downloading Images from the OMERO database - -Let's try to download images from the database through a script. The OMERO plugin provides a simple download (i.e., an import into Fiji) function to achieve this. - -1. In the OMERO web interface, click on any image in the `Fiji Omero Workshop` project or your `xxx_workshop` project/dataset and note the Image ID displayed in the sidebar on the right side of the webpage. **Image retrieval relies on these unique image identifiers.** - -2. Go back to the Fiji Script Editor and open the `Omero_Image_Download.py` script. - -3. Run the script. A dialog window will open; enter these values: - - * **Omero User:** Your computing ID - * **Omero Password:** Your OMERO password - * **Omero Server:** omero.hpc.virginia.edu - * **Omero Port:** 4064 - * **Omero Group ID:** Enter `53` as the ID for the `omero-demo` group, or use `-1` to use your default group - * **Image ID:** Enter the ID for an image that is part of your `xxx_workshop` dataset, or use `11980` from the example files. - -The script consists of these core blocks: - -* Lines 1-6 define user input to connect to OMERO. -* Lines 12-20 define a `command` variable that specifies OMERO connection and image parameters. -* Line 21 executes the OMERO importer plugin that retrieves the image.
- -{{< highlight python "linenos=table,linenostart=1" >}} -# @ String (label="Omero User") user -# @ String (label="Omero Password", style="password") pwd -# @ String (label="Omero Server", value="omero.hpc.virginia.edu") server -# @ Integer (label="Omero Port", value=4064) port -# @ Integer (label="Omero Group ID", min=-1, value=53) omero_group_id -# @ Integer (label="Image ID", value=2014) image_id - -from ij import IJ -from loci.plugins.in import ImporterOptions - -# Main code -command="location=[OMERO] open=[omero:" -command+="server=%s\n" % server -command+="user=%s\n" % user -command+="port=%s\n" % port -command+="pass=%s\n" % pwd -if omero_group_id > -1: - command+="groupID=%s\n" % omero_group_id -command+="iid=%s] " % image_id -command+="windowless=true view=\'%s\' " % ImporterOptions.VIEW_HYPERSTACK -IJ.runPlugIn("loci.plugins.LociImporter", command) -{{< /highlight >}} - ---- - -### Uploading Images to the OMERO database - -Let's try to upload an image from Fiji to OMERO. - -1. Go back to Fiji and then to `File` > `Open Samples` > `Blobs`. - -2. Go back to the Fiji Script Editor and open the `Omero_Image_Upload.py` file. - - {{< highlight python "linenos=table,linenostart=1" >}} - from ij import IJ - - imp = IJ.getImage() - IJ.run(imp, "OMERO... ", "") - {{< /highlight >}} - -3. Run the script. The **Export to OMERO** dialog window will open. Enter the following values: - - * **Server:** omero.hpc.virginia.edu - - * **Port:** 4064 - - * **User:** Your computing ID - - * **Password:** Your OMERO password - - * **OMERO Dataset ID:** Enter the ___ID___ for the `xxx_workshop` dataset that you created in the OMERO web interface. - - * Check the **Upload new image** box. Leave the other boxes unchecked. - - Click `OK`. - - If you see an error, make sure you entered the correct password and Dataset ID. **Note: you have to use your own project/dataset.** - -4. Go to the OMERO website and refresh the page.
Double-click on your `xxx_workshop` dataset icon to expand it. You should see the blobs.gif image. - ---- - -### Creating Key:Value Annotations - -{{< figure src="/courses/fiji-omero/fiji-omero-keyvalue.png" >}} -OMERO allows you to link other pieces of information to your Project, Dataset, Screen, Plate or Image objects. This additional information is displayed on the right side in the OMERO web client, labeled under the `General` tab as `Image Details`, `Tags`, `Key-Value Pairs`, `Tables`, `Attachments`, `Comments`, and `Ratings`. In addition, there is the `Acquisition` tab that provides metadata information that was automatically extracted from the image file headers during import. - -For the remainder of this workshop, we will focus on `Key-Value` pairs and `Attachments`. The key-value pairs are implemented as dictionaries (HashMaps) that can be used to annotate individual images or whole datasets/projects or plates/screens with additional information, such as experimental conditions. - -Let's look at an example: - -1. In the OMERO webclient, expand the `Fiji Omero Workshop` project folder and the `Sample Data` dataset folder inside it. - -2. Click on the `blobs.gif` image. In the `General` tab, you will see three entries under the `Key-Value` group. (You may have to click on the triangle next to the label to expand the tab and see it.) - -The values displayed are not particularly meaningful, but they illustrate the concept. You can create and modify annotations interactively through the OMERO client. In addition, you can manipulate key-value pairs (as well as other annotation categories) through Fiji scripts. - -<details>
-View Omero_Map_Annotation.py script - -{{< highlight python "linenos=table" >}} -#@ String (label="Omero User") username -#@ String (label="Omero Password", style="password") password -#@ String (label="Omero Server", value="omero.hpc.virginia.edu") server -#@ Integer (label="Omero Port", value=4064) port -#@ Integer (label="Omero Group ID", min=-1, value=-1) group_id -#@ String (label="Target", value="Image", choices = ["Image", "Dataset", "Project"]) target_type -#@ Integer (label="Target ID", min=-1, value=-1) target_id - -# Basic Java and ImageJ dependencies -from ij.measure import ResultsTable -from java.lang import Double -from java.util import ArrayList -from ij import IJ -from ij.plugin.frame import RoiManager -from ij.measure import ResultsTable - -# Omero dependencies -import omero -from omero.log import SimpleLogger -from omero.gateway import Gateway -from omero.gateway import LoginCredentials -from omero.gateway import SecurityContext -from omero.gateway.model import ExperimenterData; - -from omero.gateway.facility import DataManagerFacility -from omero.gateway.model import MapAnnotationData -from omero.gateway.model import ProjectData -from omero.gateway.model import DatasetData -from omero.gateway.model import ImageData -from omero.model import NamedValue -from omero.model import ProjectDatasetLinkI -from omero.model import ProjectI -from omero.model import DatasetI -from omero.model import ImageI - - -def connect(group_id, username, password, server, port): - """Omero Connect with credentials and simpleLogger""" - cred = LoginCredentials() - if group_id != -1: - cred.setGroupID(group_id) - cred.getServer().setHostname(server) - cred.getServer().setPort(port) - cred.getUser().setUsername(username) - cred.getUser().setPassword(password) - simpleLogger = SimpleLogger() - gateway = Gateway(simpleLogger) - e = gateway.connect(cred) - return gateway - - -def create_map_annotation(ctx, annotation, target_id, target_type="Project"): - """Creates a map 
annotation, uploads it to Omero, and links it to target object""" - # populate new MapAnnotationData object with dictionary - result = ArrayList() - for item in annotation: - # add key:value pairs; both need to be strings - result.add(NamedValue(str(item), str(annotation[item]))) - data = MapAnnotationData() - data.setContent(result); - data.setDescription("Demo Example"); - - # use the following namespace if you want the annotation to be editable in the webclient and insight - data.setNameSpace(MapAnnotationData.NS_CLIENT_CREATED); - dm = gateway.getFacility(DataManagerFacility); - target_obj = None - - # use the appropriate target DataObject and attach the MapAnnotationData object to it - if target_type == "Project": - target_obj = ProjectData(ProjectI(target_id, False)) - elif target_type == "Dataset": - target_obj = DatasetData(DatasetI(target_id, False)) - elif target_type == "Image": - target_obj = ImageData(ImageI(target_id, False)) - result = dm.attachAnnotation(ctx, data, target_obj) - return result - -# Main code -gateway = connect(group_id, username, password, server, port) -currentGroupId = gateway.getLoggedInUser().getGroupId() -ctx = SecurityContext(currentGroupId) - -# create a dictionary with key:value pairs -annotation = {'Temperature': 25.3, 'Sample': 'control', 'Object count': 34} - -result = create_map_annotation(ctx, annotation, target_id, target_type=target_type) -print "Annotation %s exported to Omero." % annotation - -gateway.disconnect() -{{< /highlight >}} - -
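The core of `create_map_annotation` — turning a Python dictionary into string key-value pairs — can be tested without any OMERO connection. A plain-Python sketch in which simple tuples stand in for the `NamedValue` objects used in the script:

```python
def to_key_value_pairs(annotation):
    """Mimic the NamedValue conversion: keys and values are both stored as strings."""
    return [(str(key), str(value)) for key, value in annotation.items()]

# Same example dictionary as in the script's main code.
annotation = {'Temperature': 25.3, 'Sample': 'control', 'Object count': 34}
for key, value in to_key_value_pairs(annotation):
    print("%s: %s" % (key, value))
```

Note that numeric values such as `25.3` arrive in OMERO as the string `'25.3'`; if you need to compute with them later, you have to convert them back.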
- --- - -### Batch Processing and Results Tables for OMERO Datasets - -The previous examples demonstrated how to export local images to OMERO, or how to import OMERO images to a local workstation. As the final exercise, let's explore how an entire dataset comprising many images can be downloaded from the remote OMERO instance, processed and analyzed locally, followed by an upload of the processed images and the generated results files back to the OMERO database. - -The example script, `Omero_Batch_Processing.py`, consists of seven key functions: - -* **connect:** Establishes a connection to the OMERO server with specific user credentials. It returns an instance of the OMERO `Gateway` class that is used later to upload processed images to the same OMERO server instance. -* **get_image_ids:** Gets a list of unique image IDs for a given dataset managed by the remote OMERO instance. -* **open_image:** Downloads the image associated with an image ID and shows it in Fiji. -* **process:** Applies a custom image processing routine to a given image; in this case, a basic segmentation and counting of cells. -* **create_map_annotation:** Uploads the cell count value to OMERO and links it to the original image. -* **upload_csv_to_omero:** Converts an ImageJ ResultsTable into a csv file, uploads that csv file, and links it to the original image objects. -* **upload_image:** Uploads an image to a specific dataset managed by the remote OMERO instance. - -**Remember that the gateway connection needs to be closed at the end of the script.** - -{{< figure src="/courses/fiji-omero/fiji-omero-batchprocessing.png" >}} - -To test this and see the process in action, we will process a set of four images that has been deposited in the OMERO database. The setup is as follows: - -1. Go to the OMERO webclient and make note of your `Project ID`, or create a new project if you prefer. Again, you will need the `ID`. - -2. 
In the Fiji Script Editor, open the `Omero_Batch_Processing.py` script and execute it.

3. In the popup window, specify the parameters as follows:

    a. Replace `mst3k` with your own credentials.

    b. **Omero Input Dataset ID:** `265`

    c. **Omero Output Dataset Name:** Enter a name of your liking

    d. **Omero Output Project ID:** Enter the `ID` that you looked up in step 1. The script will create a new dataset (with the name you chose) and place all the processed images in there.

4. Click `OK`. Watch the console output for logging messages.

5. After the script run has completed, go to the OMERO webclient and open the Project that you chose to collect the output. Look for the binary segmentation masks, the attached `Results.csv` files and the new `Key-Value Pairs` annotations for each image.
-View Omero_Processing_Nuclei.py script - -{{< highlight python "linenos=table" >}} -#@ String (label="Omero User") username -#@ String (label="Omero Password", style="password") password -#@ String (label="Omero Server", value="omero.hpc.virginia.edu") server -#@ Integer (label="Omero Port", value=4064) server_port -#@ Integer (label="Omero Group ID", min=-1, value=-1) omero_group_id -#@ Integer (label="Omero Input Dataset ID", min=-1, value=-1) dataset_id -#@ String (label="Omero Output Dataset Name", value="Processed Images") target_ds_name -#@ Integer (label="Omero Output Project ID", min=-1, value=-1) project_id - - -import os -import tempfile - - -from java.lang import Long -from java.lang import String -from java.lang.Long import longValue -from java.util import ArrayList -from jarray import array -from java.lang.reflect import Array -import java -from ij import IJ,ImagePlus -from ij.measure import ResultsTable -import loci.common -from loci.formats.in import DefaultMetadataOptions -from loci.formats.in import MetadataLevel -from loci.plugins.in import ImporterOptions - -from loci.plugins.in import ImporterOptions - -# Omero Dependencies -import omero -from omero.rtypes import rstring -from omero.gateway import Gateway -from omero.gateway import LoginCredentials -from omero.gateway import SecurityContext -from omero.gateway.facility import BrowseFacility -from omero.gateway.facility import DataManagerFacility -from omero.log import Logger -from omero.log import SimpleLogger -from omero.gateway.model import MapAnnotationData -from omero.gateway.model import ProjectData -from omero.gateway.model import DatasetData -from omero.gateway.model import ImageData -from omero.gateway.model import FileAnnotationData -from omero.model import FileAnnotationI -from omero.model import OriginalFileI -from omero.model import Pixels -from omero.model import NamedValue -from omero.model import ProjectDatasetLinkI -from omero.model import ProjectI -from omero.model import 
DatasetI -from omero.model import ImageI -from omero.model import ChecksumAlgorithmI -from omero.model.enums import ChecksumAlgorithmSHA1160 - -from ome.formats.importer import ImportConfig -from ome.formats.importer import OMEROWrapper -from ome.formats.importer import ImportLibrary -from ome.formats.importer import ImportCandidates -from ome.formats.importer.cli import ErrorHandler -from ome.formats.importer.cli import LoggingImportMonitor -from omero.rtypes import rlong - - -def connect(group_id, username, password, host, port): - '''Omero Connect with credentials and simpleLogger''' - cred = LoginCredentials() - if group_id != -1: - cred.setGroupID(group_id) - cred.getServer().setHostname(host) - cred.getServer().setPort(port) - cred.getUser().setUsername(username) - cred.getUser().setPassword(password) - simpleLogger = SimpleLogger() - gateway = Gateway(simpleLogger) - gateway.connect(cred) - group_id = cred.getGroupID() - return gateway - - -def open_image(username, password, host, server_port, group_id, image_id): - command="location=[OMERO] open=[omero:" - command+="server=%s\n" % server - command+="user=%s\n" % username - command+="port=%s\n" % server_port - command+="pass=%s\n" % password - if group_id > -1: - command+="groupID=%s\n" % group_id - command+="iid=%s] " % image_id - command+="windowless=true " - command+="splitWindows=false " - command+="color_mode=Default view=[%s] stack_order=Default" % ImporterOptions.VIEW_HYPERSTACK - print "Opening image: id", image_id - IJ.runPlugIn("loci.plugins.LociImporter", command) - imp = IJ.getImage() - return imp - - -def upload_image(gateway, server, dataset_id, filepath): - user = gateway.getLoggedInUser() - ctx = SecurityContext(user.getGroupId()) - sessionKey = gateway.getSessionId(user) - - config = ImportConfig() - config.email.set("") - config.sendFiles.set('true') - config.sendReport.set('false') - config.contOnError.set('false') - config.debug.set('false') - config.hostname.set(server) - 
config.sessionKey.set(sessionKey) - config.targetClass.set("omero.model.Dataset") - config.targetId.set(dataset_id) - loci.common.DebugTools.enableLogging("DEBUG") - - store = config.createStore() - reader = OMEROWrapper(config) - library = ImportLibrary(store,reader) - errorHandler = ErrorHandler(config) - - library.addObserver(LoggingImportMonitor()) - candidates = ImportCandidates (reader, filepath, errorHandler) - reader.setMetadataOptions(DefaultMetadataOptions(MetadataLevel.ALL)) - success = library.importCandidates(config, candidates) - return success - - -def get_image_ids(gateway, dataset_id): - """Return all image ids for given dataset""" - browse = gateway.getFacility(BrowseFacility) - experimenter = gateway.getLoggedInUser() - ctx = SecurityContext(experimenter.getGroupId()) - images = [] - ids = ArrayList(1) - ids.add(Long(dataset_id)) - j = browse.getImagesForDatasets(ctx, ids).iterator() - while j.hasNext(): - image = j.next() - images.append({ - 'Image Id': String.valueOf(image.getId()), - 'Image Name': image.getName(), - 'Dataset Id': dataset_id, - }) - return images - - -def create_map_annotation(ctx, annotation, target_id, target_type="Project"): - # populate new MapAnnotationData object with dictionary - result = ArrayList() - for item in annotation: - # add key:value pairs; both need to be strings - result.add(NamedValue(str(item), str(annotation[item]))) - data = MapAnnotationData() - data.setContent(result); - data.setDescription("Demo Example"); - - #Use the following namespace if you want the annotation to be editable in the webclient and insight - data.setNameSpace(MapAnnotationData.NS_CLIENT_CREATED); - dm = gateway.getFacility(DataManagerFacility); - target_obj = None - - # use the appropriate target DataObject and attach the MapAnnotationData object to it - if target_type == "Project": - target_obj = ProjectData(ProjectI(target_id, False)) - elif target_type == "Dataset": - target_obj = DatasetData(DatasetI(target_id, False)) - elif 
target_type == "Image": - target_obj = ImageData(ImageI(Long(target_id), False)) - result = dm.attachAnnotation(ctx, data, target_obj); - return result - - -def upload_csv_to_omero(ctx, file, tablename, target_id, target_type="Project"): - """Upload the CSV file and attach it to the specified object""" - print file - print file.name - svc = gateway.getFacility(DataManagerFacility) - file_size = os.path.getsize(file.name) - original_file = OriginalFileI() - original_file.setName(rstring(tablename)) - original_file.setPath(rstring(file.name)) - original_file.setSize(rlong(file_size)) - - checksum_algorithm = ChecksumAlgorithmI() - checksum_algorithm.setValue(rstring(ChecksumAlgorithmSHA1160.value)) - original_file.setHasher(checksum_algorithm) - original_file.setMimetype(rstring("text/csv")) - original_file = svc.saveAndReturnObject(ctx, original_file) - store = gateway.getRawFileService(ctx) - - # Open file and read stream - store.setFileId(original_file.getId().getValue()) - print original_file.getId().getValue() - try: - store.setFileId(original_file.getId().getValue()) - with open(file.name, 'rb') as stream: - buf = 10000 - for pos in range(0, long(file_size), buf): - block = None - if file_size-pos < buf: - block_size = file_size-pos - else: - block_size = buf - stream.seek(pos) - block = stream.read(block_size) - store.write(block, pos, block_size) - - original_file = store.save() - finally: - store.close() - - # create the file annotation - namespace = "training.demo" - fa = FileAnnotationI() - fa.setFile(original_file) - fa.setNs(rstring(namespace)) - - if target_type == "Project": - target_obj = ProjectData(ProjectI(target_id, False)) - elif target_type == "Dataset": - target_obj = DatasetData(DatasetI(target_id, False)) - elif target_type == "Image": - target_obj = ImageData(ImageI(target_id, False)) - - svc.attachAnnotation(ctx, FileAnnotationData(fa), target_obj) - - -def process_file(imp): - """Run segmentation""" - print "Processing", imp.getTitle() - 
title = imp.getTitle().split('.')[:-1]
    title = '.'.join(title) + "_mask.ome.tiff"
    nimp = ImagePlus(title, imp.getStack().getProcessor(1))
    IJ.run(nimp, "Median...", "radius=3")
    IJ.run(nimp, "Auto Local Threshold", "method=Bernsen radius=15 parameter_1=0 parameter_2=0 white")
    IJ.run(nimp, "Watershed", "")

    IJ.run("Set Measurements...", "area mean standard centroid decimal=3")
    IJ.run(nimp, "Analyze Particles...", "size=50-Infinity summary exclude clear add")
    rt = ResultsTable.getResultsTable()
    rt.show("Results")

    imp.close()
    return nimp, rt


def create_new_dataset(ctx, project_id, ds_name):
    dataset_obj = omero.model.DatasetI()
    dataset_obj.setName(rstring(ds_name))
    dataset_obj = gateway.getUpdateService(ctx).saveAndReturnObject(dataset_obj)
    dataset_id = dataset_obj.getId().getValue()

    # link the new dataset to its parent project
    dm = gateway.getFacility(DataManagerFacility)
    link = ProjectDatasetLinkI()
    link.setChild(dataset_obj)
    link.setParent(ProjectI(project_id, False))
    r = dm.saveAndReturnObject(ctx, link)
    return dataset_id


# Main code
import shutil   # needed to remove the scratch directory at the end

gateway = connect(omero_group_id, username, password, server, server_port)
currentGroupId = gateway.getLoggedInUser().getGroupId()
ctx = SecurityContext(currentGroupId)

image_info = get_image_ids(gateway, dataset_id)
# create a dedicated scratch directory so it can be safely removed later
tmp_dir = tempfile.mkdtemp(prefix='omero_')
print tmp_dir

target_ds_id = create_new_dataset(ctx, project_id, target_ds_name)
for info in image_info:
    imp = open_image(username, password, server, server_port, omero_group_id, info['Image Id'])
    processed_imp, rt = process_file(imp)

    # save processed image locally in the scratch dir; NamedTemporaryFile
    # guarantees a usable file path (TemporaryFile does not on all platforms)
    imgfile = tempfile.NamedTemporaryFile(mode='wb', prefix='img_', suffix='.tiff', dir=tmp_dir, delete=False)

    options = "save=" + imgfile.name + " export compression=Uncompressed"
    IJ.run(processed_imp, "Bio-Formats Exporter", options)
    # ignore changes & close
    processed_imp.changes = False
    processed_imp.close()

    # upload image to a target dataset
    upload_image(gateway, server, target_ds_id, [imgfile.name])

    # create annotation
    annotation = {
        "Cell count": rt.size()
    }
    create_map_annotation(ctx, annotation, info['Image Id'], target_type="Image")

    # export ResultsTable to csv file and link to image object
    csvfile = tempfile.NamedTemporaryFile(mode='wb', prefix='results_', suffix='.csv', dir=tmp_dir, delete=False)
    rt.saveAs(csvfile.name)
    #upload_csv_to_omero(ctx, csvfile, "Results.csv", long(info['Image Id']), "Image")

# done, clean up
shutil.rmtree(tmp_dir)
gateway.disconnect()
print "Done.\n"

{{< /highlight >}}

---

# Resources {#resources-id}

**OMERO**

* OMERO: https://www.openmicroscopy.org/omero/
* OMERO User Support: https://help.openmicroscopy.org
* UVA Research Computing: https://www.rc.virginia.edu
* OMERO at the University of Virginia: https://www.rc.virginia.edu/userinfo/omero/overview/

**Fiji Scripting**

* RC tutorial [Fiji/ImageJ: Script development for Image Processing](/tutorials/fiji-scripting/)
* Tutorial: https://syn.mrc-lmb.cam.ac.uk/acardona/fiji-tutorial/
* Tips for Developers: https://imagej.net/Tips_for_developers
* API: https://imagej.nih.gov/ij/developer/api/
* SciJava: https://javadoc.scijava.org/Fiji/

**General Python Programming**

* [Introduction to Programming in Python](/courses/python_introduction/)
* [Programming in Python for Scientists and Engineers](/courses/programming_python_scientists_engineers/)
diff --git a/content/courses/opencv/index.md b/content/courses/opencv/index.md
deleted file mode 100644
index b91076dc..00000000
--- a/content/courses/opencv/index.md
+++ /dev/null
@@ -1,631 +0,0 @@
---
title: "Scientific Image Processing with Python OpenCV"
summary: "An introduction to scientific image processing with the Python OpenCV package. Topics include splitting and merging of color channels, morphological filters, image thresholding and segmentation."
author: [khs]
categories: ["Image Processing","Python"]

highlight_style: "github"
date: 2022-10-27T00:00:00-05:00
toc: true
type: article
draft: false
---

# Introduction

From the [OpenCV project documentation](https://docs.opencv.org/master/d1/dfb/intro.html):

> OpenCV (Open Source Computer Vision Library: http://opencv.org) is an open-source library that includes several hundreds of computer vision algorithms.

This workshop assumes a working knowledge of the Python programming language and a basic understanding of image processing concepts.

Introductions to Python can be found [here](/courses/programming_python_scientists_engineers/python-interpreter/) and [here](/courses/python_introduction/).

---

# Getting Started

**Python code examples**

The Python scripts and data files for this workshop can be [downloaded from here](/notes/opencv/data/opencv-examples.zip). On your computer, unzip the downloaded folder and use it as the working directory for this workshop.

**Python programming environment**

The Anaconda environment from [Anaconda Inc.](https://anaconda.com/) is widely used because it bundles a Python interpreter, most of the popular packages, and development environments. It is cross-platform and freely available. There are two somewhat incompatible versions of Python; version 2.7 is deprecated but still fairly widely used. Version 3 is the supported version.

**Note: We are using Python 3 for this workshop.**

## Option 1: Using the UVA HPC platform

If you have a Rivanna account, you can work through this tutorial using an [Open OnDemand](https://www.rc.virginia.edu/userinfo/rivanna/ood/overview/) Desktop session.

1. Go to https://rivanna-portal.hpc.virginia.edu.

2. Log in with your UVA credentials.

3. Go to `Interactive Apps` > `Desktop`.

4. On the next screen, specify resources as shown in this screenshot:

    ![Screenshot of the Rivanna Open OnDemand “Desktop” launch page showing fields to select session time, CPU cores, memory, allocation (SUs), partition, and a blue “Launch” button at the bottom.](ood-resources.png)

    >**Note:** Workshop participants may specify `rivanna-training` in the `Allocation (SUs)` field. Alternatively, you may use any other Rivanna allocation that you are a member of.

5. Click `Launch` at the bottom of the screen. Your desktop session will be queued up -- this may take a few minutes until the requested resources become available.

## Option 2: Using your own computer

1. 
Visit the [Anaconda download website](https://www.anaconda.com/products/individual#Downloads) and download the installer for Python 3 for your operating system (Windows, Mac OSX, or Linux). We recommend using the graphical installer for ease of use.

2. Launch the downloaded installer, follow the onscreen prompts, and install the Anaconda distribution on your local hard drive.

The [Anaconda Documentation](https://docs.anaconda.com/anaconda/user-guide/getting-started/) provides an introduction to the Anaconda environment and bundled applications. For the purpose of this workshop we focus on the `Anaconda Navigator` and `Spyder`.

# Using Anaconda

## Navigator

Once you have installed Anaconda, start the Navigator application:
* [Instructions for Windows](https://docs.anaconda.com/anaconda/user-guide/getting-started/#open-nav-win)
* [Instructions for Mac](https://docs.anaconda.com/anaconda/user-guide/getting-started/#open-nav-mac)
* [Instructions for Linux](https://docs.anaconda.com/anaconda/user-guide/getting-started/#open-nav-lin)

You should see a workspace similar to the screenshot, with several options for working environments, some of which are not installed. We will use `Spyder`, which should already be installed. If not, click the button to install the package.

![AnacondaNavigator](/notes/biopython/anaconda-navigator.png)

## Spyder

Now we will switch to Spyder. Spyder is an Integrated Development Environment, or IDE, for Python. It is well suited for developing longer, more modular programs.

1. To start it, return to the `Anaconda Navigator` and click on the `Spyder` tile. It may take a while to open (watch the lower left of the Navigator).
2. Once it starts, you will see a layout with an editor pane on the left, an explorer pane at the top right, and an IPython console on the lower right. This arrangement can be customized, but we will use the default for our examples. Type code into the editor. 
The explorer window can show files, variable values, and other useful information. The iPython console is a frontend to the Python interpreter itself. It is comparable to a cell in JupyterLab. - -![AnacondaNavigator](/notes/biopython/anaconda-spyder.png) - -## Installation of OpenCV - -It is recommended to install the `opencv-python` package from PyPI using the `pip install` command. - -**On your own computer:** -Start the `Anaconda Prompt` command line tool following the instructions for your operating system. -* Start Anaconda Prompt on [Windows](https://docs.anaconda.com/anaconda/user-guide/getting-started/#open-prompt-win) -* Start Anaconda Prompt on [Mac](https://docs.anaconda.com/anaconda/user-guide/getting-started/#open-prompt-mac), or open a terminal window. -* [Linux:](https://docs.anaconda.com/anaconda/user-guide/getting-started/#open-prompt-lin) Just open a terminal window. - -At the prompt, type the following command and press enter/return: -```bash -pip install opencv-python matplotlib scikit-image pandas -``` -This command will install the latest `opencv-python` package version in your current Anaconda Python environment. The `matplotlib` package is used for plotting and image display. It is part of the Anaconda default packages. The `scikit-image` and `pandas` packages are useful for additional image analysis and data wrangling, respectively. - -**On Rivanna (UVA's HPC platform):** - -[Rivanna](https://www.rc.virginia.edu/userinfo/rivanna/overview/) offers several Anaconda distributions with different Python versions. Before you use Python you need to load one of the **Anaconda** software modules and then run the `pip install` command in a terminal. - -```bash -module load anaconda -pip install --user opencv-python matplotlib scikit-image pandas -``` -> **Note:** You have to use the `--user` flag which instructs the interpreter to install the package in your home directory. 
Alternatively, create your own custom [Conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) first and run the `pip install opencv-python matplotlib scikit-image pandas` command in that environment (without the `--user` flag).

To confirm successful package installation, start the **Spyder IDE** by typing the following command in the terminal:
```bash
spyder &
```
In the **Spyder IDE**, go to the `IPython console` pane, type the following command and press `enter/return`:

```python
import cv2
print(cv2.__version__)
```

If the package is installed correctly, the output will show the OpenCV version number.

## Example scripts and images

Download the example scripts and images from [this link](/notes/opencv/data/opencv-examples.zip). Unzip the downloaded file and start your Python IDE, e.g. Spyder.

If you are on Rivanna, run the following command to copy the examples to your home directory:
```bash
cp -R /share/resources/tutorials/opencv-examples ~/
```

---

# Basic Operations

## Loading Images

The `imread` function is used to read images from files. Images are represented as multidimensional [NumPy](https://numpy.org) arrays. Learn more about NumPy arrays [here](/courses/python_introduction/numpy_ndarrays/). The dimensions are stored in an image's `shape` attribute: number of rows (height) x number of columns (width) x number of channels (depth).

```python
import cv2

# load the input image and show its dimensions
image = cv2.imread("clown.png")
(h, w, d) = image.shape
print('width={}, height={}, depth={}'.format(w, h, d))
```

**Output:**
```
width=320, height=200, depth=3
```

## Displaying Images

We can use an OpenCV function to display the image on our screen.

```python
# open with OpenCV and press a key on our keyboard to continue execution
cv2.imshow('Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

![Image of a clown](clown.png)

The `cv2.imshow()` method displays the image on our screen. The `cv2.waitKey()` function waits for a key to be pressed. This is important; otherwise the image would be displayed and immediately disappear before we even see it. The call to `cv2.destroyAllWindows()` should be placed at the end of any script that uses the `imshow` function.

>**Note:** Before you run the code through the Spyder IDE, go to `Run` > `Run configuration per file` and select `Execute in dedicated console` first. Then, when you run the code, you need to click the image window opened by OpenCV and press a key on your keyboard to advance the script. OpenCV cannot monitor your terminal for input, so if you press a key in the terminal, OpenCV will not notice.

Alternatively, we can use the `matplotlib` package to display an image.

```python
import matplotlib.pyplot as plt

plt.imshow(cv2.cvtColor(image,cv2.COLOR_BGR2RGB))
```

> Note that OpenCV stores the channels of an RGB image in Blue, Green, Red order. We use the `cv2.cvtColor(image,cv2.COLOR_BGR2RGB)` function to convert from BGR --> RGB channel ordering for display purposes.

## Saving Images

We can use the `imwrite()` function to save images. For example:

```python
filename = 'clown-copy.png'
cv2.imwrite(filename, image)
```

## Accessing Image Pixels

Since an image's underlying pixel information is stored in multidimensional NumPy arrays, we can use common NumPy operations to slice and dice image regions, including the image's channels.

We can use the following code to extract the red, green and blue intensity values of a specific image pixel at row y=100 and column x=50. Note that NumPy arrays are indexed as `image[row, column]`, i.e. `image[y, x]`.

```python
(b, g, r) = image[100, 50]
print("red={}, green={}, blue={}".format(r, g, b))
```

**Output:**
```
red=184, green=35, blue=15
```

> Remember that OpenCV stores the channels of an RGB image in Blue, Green, Red order.

## Slicing and Cropping

It is also very easy to extract a rectangular region of interest from an image and store it as a cropped copy. Let's extract the pixels for **30<=y<130** and **140<=x<240** from our original image. The resulting cropped image has a width and height of **100x100** pixels.

```python
roi = image[30:130,140:240]
plt.imshow(cv2.cvtColor(roi,cv2.COLOR_BGR2RGB))
```

## Resizing

Resizing an image takes a single line of code. In this case we are resizing the input image to 500x500 (width x height) pixels.

```python
resized = cv2.resize(image,(500,500))
```

Note that we are _forcing_ the resized image into a square 500x500 pixel format. To avoid distortion of the resized image, we can calculate the height/width `aspect` ratio of the original image and use it to calculate the new height as `new_height = new_width * aspect` (or the new width as `new_width = new_height / aspect`).

```python
# resize width while preserving height proportions
height = image.shape[0]
width = image.shape[1]
aspect = height/width
new_width = 640
new_height = int(new_width * aspect)
resized2 = cv2.resize(image,(new_width,new_height))
print(image.shape)
print(resized2.shape)
```

```python
# display the two resized images
_,ax = plt.subplots(1,2)
ax[0].imshow(cv2.cvtColor(resized, cv2.COLOR_BGR2RGB))
ax[0].axis('off')
ax[1].imshow(cv2.cvtColor(resized2, cv2.COLOR_BGR2RGB))
ax[1].axis('off')
```

![Side-by-side comparison of a distorted resized clown image (left) and a proportionally resized clown image (right).](clown-resized.png)

## Splitting and Merging of Color Channels

The `split()` function provides a convenient way to split multi-channel images (e.g. 
RGB) into its channel components.

```python
# Split color channels
(B, G, R) = cv2.split(image)
# create 2x2 grid for displaying images
_, axarr = plt.subplots(2,2)
axarr[0,0].imshow(R, cmap='gray')
axarr[0,0].axis('off')
axarr[0,0].set_title('red')

axarr[0,1].imshow(G, cmap='gray')
axarr[0,1].axis('off')
axarr[0,1].set_title('green')

axarr[1,0].imshow(B, cmap='gray')
axarr[1,0].axis('off')
axarr[1,0].set_title('blue')

axarr[1,1].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
axarr[1,1].axis('off')
axarr[1,1].set_title('RGB')
```

![Four-panel comparison of a clown image with individual red, green, and blue channels in grayscale and the full-color RGB image.](clown-split.png)

Let's take the blue and green channels only and merge them back into a new RGB image, effectively masking the red channel. For this we'll define a new NumPy array with the same width and height as the original image and a depth of 1 (single channel), all pixels filled with zero values. Since the individual channels of an RGB image are 8-bit NumPy arrays, we choose the NumPy `uint8` data type.

```python
import numpy as np
zeros = np.zeros((image.shape[0], image.shape[1]), dtype=np.uint8)
# alternative:
# zeros = np.zeros_like(B)

print(B.shape, zeros.shape)
merged = cv2.merge([B, G, zeros])
_,ax = plt.subplots(1,1)
ax.imshow(cv2.cvtColor(merged, cv2.COLOR_BGR2RGB))
ax.axis('off')
```

![Clown image merged from the blue and green channels only, with the red channel masked](clown-merged.png)

# Exercises

1. In the clown.png image, inspect the pixel value for x=300, y=25.
2. Crop the clown.png image to a centered rectangle with half the width and half the height of the original.
3. Extract the green channel, apply the values to the red channel and merge the original blue, original green and new red channel into a new BGR image. Then display this image as an RGB image using matplotlib. 
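As a hint for exercise 2, the centered crop only needs a bit of arithmetic on the image's `shape`. The sketch below uses a synthetic NumPy array as a stand-in for `clown.png` (same 320x200 dimensions as reported earlier), so the slicing logic can be followed without loading the file:

```python
import numpy as np

# hypothetical stand-in for cv2.imread("clown.png"): 200 rows, 320 columns, 3 channels
image = np.zeros((200, 320, 3), dtype=np.uint8)

h, w = image.shape[0], image.shape[1]
crop_h, crop_w = h // 2, w // 2      # half the original height and width
y0 = (h - crop_h) // 2               # top edge of the centered rectangle
x0 = (w - crop_w) // 2               # left edge of the centered rectangle

crop = image[y0:y0 + crop_h, x0:x0 + crop_w]
print(crop.shape)                    # (100, 160, 3)
```

Applying the same slicing to the real `image` loaded with `cv2.imread` yields the centered 160x100 pixel crop.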

---

# Filters

## Denoising

OpenCV provides four convenient built-in [denoising tools](https://docs.opencv.org/3.4/d5/d69/tutorial_py_non_local_means.html):

1. `cv2.fastNlMeansDenoising()` - works with a single grayscale image.
2. `cv2.fastNlMeansDenoisingColored()` - works with a color image.
3. `cv2.fastNlMeansDenoisingMulti()` - works with an image sequence captured over a short period of time (grayscale images).
4. `cv2.fastNlMeansDenoisingColoredMulti()` - same as above, but for color images.

Common arguments are:
* `h`: parameter deciding filter strength. A higher `h` value removes noise better, but also removes image detail. (10 may be a good starting point)
* `hForColorComponents`: same as `h`, but for color images only. (normally the same as `h`)
* `templateWindowSize`: should be odd. (recommended 7)
* `searchWindowSize`: should be odd. (recommended 21)

Let's try this with a noisy version of the clown image. This is a color RGB image, so we'll try the `cv2.fastNlMeansDenoisingColored()` filter. Here is the noisy input image `clown-noisy.png`.

![Noisy version of the original clown image](clown-noisy.png)

The `denoising.py` script demonstrates how it works.

```python
import cv2
from matplotlib import pyplot as plt

noisy = cv2.imread('clown-noisy.png')

# define denoising parameters
h = 15
hColor = 15
templateWindowSize = 7
searchWindowSize = 21

# denoise and save
denoised = cv2.fastNlMeansDenoisingColored(noisy,None,h,hColor,templateWindowSize,searchWindowSize)
cv2.imwrite('clown-denoised.png', denoised)

# display
plt.subplot(121),plt.imshow(cv2.cvtColor(noisy, cv2.COLOR_BGR2RGB), interpolation=None)
plt.subplot(122),plt.imshow(cv2.cvtColor(denoised, cv2.COLOR_BGR2RGB), interpolation=None)
plt.show()
```

![Comparison of the noisy clown image (left) and the denoised clown image (right)](clown-noisy-denoised.png)

Additional useful filters for smoothing images:
 * `GaussianBlur` - blurs an image using a Gaussian filter
 * `medianBlur` - blurs an image using a median filter

There are many other image smoothing filters [described here](https://docs.opencv.org/4.5.3/dc/dd3/tutorial_gausian_median_blur_bilateral_filter.html).

## Morphological Filters

Morphological filters are used for smoothing, edge detection or extraction of other features. The principal inputs are an image and a structuring element, also called a kernel.

The two most basic operations are dilation and erosion on binary images (pixels have value 1 or 0; or 255 and 0). The kernel slides through the image pixel by pixel (as in 2D convolution).

* During _dilation_, a pixel in the original image (either 1 or 0) will be considered 1 **if at least one pixel** under the kernel is 1. The dilation operation is implemented as `cv2.dilate(image,kernel,iterations = n)`.
* During _erosion_, a pixel in the original image (either 1 or 0) will be considered 1 only **if all the pixels** under the kernel are 1; otherwise it is eroded (set to zero). The erosion operation is implemented as `cv2.erode(image,kernel,iterations = n)`. 
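The two rules above can also be written out in plain NumPy, independent of OpenCV. This sketch is for illustration only (a 3x3 all-ones kernel, single iteration); the helper names `dilate_3x3` and `erode_3x3` are our own, not OpenCV functions:

```python
import numpy as np

def dilate_3x3(img):
    """A pixel becomes 1 if AT LEAST ONE pixel under the 3x3 kernel is 1."""
    padded = np.pad(img, 1, mode='constant', constant_values=0)
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y+3, x:x+3].max()
    return out

def erode_3x3(img):
    """A pixel stays 1 only if ALL pixels under the 3x3 kernel are 1."""
    padded = np.pad(img, 1, mode='constant', constant_values=0)
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y+3, x:x+3].min()
    return out

# a single bright pixel grows into a 3x3 block under dilation,
# while a lone pixel cannot survive erosion
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 1
print(dilate_3x3(img).sum())  # 9
print(erode_3x3(img).sum())   # 0
```

Eroding the dilated result shrinks the 3x3 block back to the single original pixel, which is exactly the closing operation described further below.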

The `erode-dilate.py` script provides an example:

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

image = cv2.imread('morph-input.png',0)
# create square shaped 7x7 pixel kernel
kernel = np.ones((7,7),np.uint8)

# dilate, erode and save results
dilated = cv2.dilate(image,kernel,iterations = 1)
eroded = cv2.erode(image,kernel,iterations = 1)
cv2.imwrite('morph-dilated.png', dilated)
cv2.imwrite('morph-eroded.png', eroded)

# display results
_,ax = plt.subplots(1,3)
ax[0].imshow(image, cmap='gray')
ax[0].axis('off')
ax[1].imshow(dilated, cmap='gray')
ax[1].axis('off')
ax[2].imshow(eroded, cmap='gray')
ax[2].axis('off')
```

Original | Dilation | Erosion
:-------------------:|:----------------------:|:----------------------:
![original image](morph-input.png) | ![dilated image](morph-dilated.png) | ![eroded image](morph-eroded.png)

> By increasing the kernel size or the number of iterations we can dilate or erode more of the original object.

Elementary morphological filters may be chained together to define composite operations.

**Opening** is just another name for erosion followed by dilation. It is useful for removing noise.

```python
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
```

**Closing** is the reverse of opening, i.e. dilation followed by erosion. It is useful for closing small holes inside foreground objects, or removing small black points on the object.

```python
closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
```

Original | Opening | Closing
:-------------------:|:----------------------:|:----------------------:
![Original image](morph-input.png) | ![Opened image](morph-opened.png) | ![Closed image](morph-closed.png)

> Note how the opening operation removed the small dot in the top right corner. In contrast, the closing operation retained that small object and also filled in the black hole in the object on the left side of the input image. 

**Morphological Gradients** can be calculated as the difference between the dilation and the erosion of an image. The gradient is used to reveal an object's edges.

```python
kernel = np.ones((2,2),np.uint8)
gradient = cv2.morphologyEx(image, cv2.MORPH_GRADIENT, kernel)
```

Original | Gradient (edges)
:-------------------:|:----------------------:
![Original image](morph-input.png) | ![Image with just the gradients](morph-gradient.png)

# Exercises

1. Experiment with different kernel sizes and number of iterations. What do you observe?

---

# Segmentation & Quantification

Image segmentation is the process that groups related pixels to define higher-level objects. The following techniques are commonly used to accomplish this task:

1. Thresholding - conversion of the input image into a binary image
2. Edge detection - see above
3. Region based - expand/shrink/merge object boundaries from object seed points
4. Clustering - use statistical analysis of proximity to group pixels as objects
5. Watershed - separates touching objects
6. Artificial Neural Networks - train object recognition from examples

Let's try to identify and measure the area of the nuclei in this image with fluorescently labeled cells. This is the `fluorescent-cells.png` image in the `examples` folder. We will explore the use of morphology filters, thresholding and watershed to accomplish this.

The complete code is in the `segmentation.py` script.

![Image with fluorescently labeled cells](fluorescent-cells.png)

## Preprocessing

First, we load the image and extract the blue channel, which contains the labeling of the nuclei. Since OpenCV reads RGB images in BGR order, the blue channel is at index position 0 of the third image axis. 

```python
import cv2

image = cv2.imread('fluorescent-cells.png')
nuclei = image[:,:,0]  # get blue channel
```

![Image of fluorescent cells with the blue channel extracted](nuclei.png)

To eliminate noise, we apply a Gaussian filter with a 3x3 kernel, then apply the Otsu thresholding algorithm. The thresholding converts the grayscale intensity image into a black-and-white binary image. The function returns two values: we store them in `ret` (the applied threshold value) and `thresh` (the thresholded black & white binary image). White pixels represent nuclei; black pixels represent the background.

```python
# apply Gaussian filter to smooth the image, then apply Otsu threshold
blurred = cv2.GaussianBlur(nuclei, (3, 3), 0)
ret, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
```

Next, we apply an opening operation to exclude small non-nuclear particles in the binary image. Furthermore, we use scikit-image's `clear_border()` function to exclude objects (nuclei) touching the edge of the image.

```python
# remove small non-nuclear particles
import numpy as np
from skimage.segmentation import clear_border

kernel = np.ones((3,3), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=7)

# remove objects touching the image border
opening = clear_border(opening)
```

The resulting image looks like this.

![Opening operation applied on the image of fluorescent cells](nuclei-opening.png)

A tricky issue is that some of the nuclei masks touch each other. We need to find a way to break up these clumps, which we do in several steps. First, we dilate the binary nuclei mask. The black areas in the resulting image represent pixels that certainly do not contain any nuclear components. We call it the `sure_bg`.

```python
# sure background area
sure_bg = cv2.dilate(opening, kernel, iterations=10)
```

![Image of fluorescent cells with a sure background](nuclei-sure_bg.png)

The nuclei are all fully contained inside the white pixel area. The next step is to find estimates for the center of each nucleus. Some of the white regions may contain more than one nucleus, and we need to separate the joined ones. We calculate the distance transform to do this.

> The result of the distance transform is a graylevel image that looks similar to the input image, except that the graylevel intensities of points inside foreground (white) regions are changed to show the distance from each point to the closest boundary.

We can use the intensity peaks in the distance transform map (a grayscale image) as proxies and seeds for the individual nuclei. We isolate the peaks by applying a simple threshold. The result is the `sure_fg` image.

```python
# calculate distance transform to establish the sure foreground
dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
ret, sure_fg = cv2.threshold(dist_transform, 0.6*dist_transform.max(), 255, 0)
```

Distance Transform | Sure Foreground (nuclei seeds)
:-------------------------------:|:------------------------:
| ![Distance transform map for the image of the fluorescent cells](nuclei-dist_transform.png) | ![Sure foreground for image](nuclei-sure_fg.png) |


By subtracting the sure foreground regions from the sure background regions we can identify the regions of unknown association, i.e. the pixels that we have not yet assigned to be either nuclei or background.

```python
# sure_fg is float32; convert to uint8 and find the unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)
```

Sure Background | Sure Foreground (nuclei seeds) | Unknown
:----------------------:|:-------------------------------:|:-----------------------:
![Image of fluorescent cells with a sure background](nuclei-sure_bg.png) | ![Image of fluorescent cells with a sure foreground](nuclei-sure_fg.png) | ![Image of fluorescent cells with an unknown background](nuclei-unknown.png) |


## Watershed

Now we can come back to the sure foreground and create markers to label the individual nuclei. First, we use OpenCV's `connectedComponents` function to assign a distinct integer label to each region in the `sure_fg` image. We store the label information in the `markers` image and add 1 to each label so that the sure background becomes 1 rather than 0. The pixels that are part of the `unknown` region are set to zero in the `markers` image. This is critical for the following `watershed` step, which separates connected nuclei regions based on the set of markers.

```python
# label markers
ret, markers = cv2.connectedComponents(sure_fg)

# add one to all labels so that the sure background is not 0, but 1
markers = markers + 1

# mark the region of unknown with zero
markers[unknown==255] = 0
markers = cv2.watershed(image, markers)
```

Lastly, we overlay a yellow outline onto the original image for all identified nuclei. The `watershed` function marks boundary pixels with the label -1.

```python
image[markers == -1] = [0,255,255]  # yellow in BGR order
```

The resulting `markers` (pseudo-colored) and input images with segmentation overlay look like this:

Markers | Segmentation
:----------------------:|:-------------------------:
![Images of nuclei with pseudo-colored markers](nuclei-markers.png) | ![Image with segmentation overlay](image-segmented.png) |


## Measure

With the markers in hand, it is very easy to extract pixel and object information for each identified object.
We use the `scikit-image` package (`skimage`) for the data extraction and `pandas` for storing the data in CSV format.

```python
from skimage import measure
import pandas as pd

# compute image properties and return them as a pandas-compatible table
p = ['label', 'area', 'equivalent_diameter', 'mean_intensity', 'perimeter']
props = measure.regionprops_table(markers, nuclei, properties=p)
df = pd.DataFrame(props)

# print data to screen and save
print(df)
df.to_csv('nuclei-data.csv')
```

**Output:**
```
    label    area  equivalent_diameter  mean_intensity    perimeter
0       1  204775           510.614951       72.078891  6343.194406
1       2    1906            49.262507      218.294334   190.716775
2       3    1038            36.354128      204.438343   148.568542
3       4    2194            52.853454      156.269827   215.858910
4       5    2014            50.638962      199.993545   177.432504
5       6    1461            43.130070      185.911020   168.610173
6       7    2219            53.153726      170.962596   212.817280
7       8    1837            48.362600      230.387044   184.024387
8       9    1032            36.248906      228.920543   135.769553
9      10    2433            55.657810      189.083436   218.781746
10     11    1374            41.826202      214.344978   167.396970
11     12    1632            45.584284      191.976716   196.024387
12     13    1205            39.169550      245.765145   141.639610
13     14    2508            56.509157      153.325359   229.894444
14     15    2086            51.536178      195.962608   244.929978
15     16    1526            44.079060      243.675623   163.124892
16     17    1929            49.558845      217.509072   174.124892
17     18    1284            40.433149      165.881620   150.710678
18     19    2191            52.817306      174.357827   190.331998
19     20    2218            53.141747      170.529306   210.260931
20     21    2209            53.033821      164.460842   203.858910
21     22    2370            54.932483      193.639241   206.296465
22     23    1426            42.610323      249.032959   157.296465
23     24    2056            51.164250      194.098735   181.396970
```


# Exercises

1. Experiment with different kernel sizes during the preprocessing step.
2. Experiment with different iteration numbers for the opening and dilation operations.
3. Experiment with different threshold values for isolating the nuclei seeds from the distance transform image.
4. Change the overlay color from yellow to magenta.
Tip: magenta corresponds to BGR (255,0,255).

How do these changes affect the segmentation?

---

# Resources

* [Introduction to OpenCV](https://docs.opencv.org/master/d1/dfb/intro.html)
* [OpenCV Python Tutorial](https://opencv24-python-tutorials.readthedocs.io/en/latest/index.html)