Extracting trait information from images #8
Comments
+1 to extracting trait information from images. I would like to see how we can store those data both in the images and in tabular form. Isadora and I curated a training data set for this workshop that has elytra length and width measured. I would like to explore standardized ways of storing trait information from specimen images that are AI-ready. I have worked on a data design pattern with @sokole called ecocomDP for community ecology data that we would like to extend to include individual trait information for this purpose.
@sydnerecord, question: what kinds of traits are you interested in? And what would getting elytral length and width extracted from images allow you to answer?
I am interested in traits that might structure the assembly of ecological communities. We chose to collect information on elytra length and width because they are likely correlated with dispersal capacity, and dispersal is important when thinking about community assembly. However, it's always difficult to know a priori which traits matter from an eco/evo standpoint, as we have so few traits measured. And to clarify, I am most interested in intra- rather than inter-specific variation, since intraspecific variation is the currency of natural selection.
@sydnerecord I would like to know if segmenting the elytra within the images would be beneficial for the measuring process. This could isolate not only the elytra segments but also other relevant parts. A potential project could involve a pipeline to segment images based on specimen anatomy, followed by automated measurement of these segments.
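The measurement step of such a pipeline can be sketched quite simply once a segmentation model has produced a binary mask for the structure of interest. This is a minimal illustration, not a real pipeline: the mask below is synthetic, the bounding-box extents stand in for a proper length/width definition, and the pixel scale would in practice come from a scale bar in the image:

```python
import numpy as np

def measure_mask(mask, mm_per_px):
    """Length and width (in mm) of a binary segmentation mask,
    taken as the extents of its bounding box."""
    ys, xs = np.nonzero(mask)
    length_px = ys.max() - ys.min() + 1
    width_px = xs.max() - xs.min() + 1
    return length_px * mm_per_px, width_px * mm_per_px

# Synthetic 100x60 mask standing in for a segmented elytron:
# an 80-px-long, 30-px-wide rectangle.
mask = np.zeros((100, 60), dtype=bool)
mask[10:90, 15:45] = True

length_mm, width_mm = measure_mask(mask, mm_per_px=0.05)
print(length_mm, width_mm)  # 4.0 1.5
```

The same function would run unchanged on masks for other body parts, which is what makes an anatomy-based segmentation step attractive: one measuring routine, many structures.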
@danflop is segmenting the same as annotating or marking up? Or would the annotation process be subsequent to segmenting? (Sorry, not familiar with the terminology here.) For example, below is a head of a beetle with the frons highlighted in blue. This, at the moment, is just a flat jpg image. On this website, they have drawings of particular structures of the back of a human skull that are drawn and associated with ontology terms (scroll through the components list). It would be ideal (for me anyway) to be able to annotate images that way. A pipeline to potentially automate drawing and annotating. I know of tools like ML-morph for "automated detection and landmarking of biological structures in images", but the next steps for annotation and where data goes are still missing, or I'm unaware of anything of that sort. Other resources we compiled on this are here.
As a taxonomist, when describing a beetle I use images to support my observations. In general, those images end up in a PDF or as a static jpg online, accompanied by a caption, so that only other humans can potentially extract information from them.
Are there (or can we come up with) ways to annotate regions of interest in an image, and tie that to a term in an anatomy ontology?
When an image has been annotated, where would those annotations live? Would they be part of a file's metadata?
Also, if annotations are associated with metadata, how to visualize them and, would the annotations be lost when segmenting/processing the annotated image?
Some background on the taxonomy side:
Some additional background on the biodiversity informatics side:
Related discussions: