Health Acoustic Representations (HeAR) is a machine learning (ML) model that produces embeddings from health-related acoustics. These embeddings can be used to efficiently build AI models for health acoustic tasks, requiring less data and compute than fully training a model from scratch.
As a Health AI Developer Foundations model trained on more than 300 million two-second audio clips, HeAR accelerates the development of AI models for health acoustic analysis.
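A common way to use such embeddings is to train a small "linear probe" classifier on top of them. The sketch below illustrates this with synthetic stand-in embeddings and labels; the embedding dimensionality and class signal are assumptions for illustration only. In practice, you would obtain real embeddings by running two-second audio clips through HeAR (for example, via the notebooks in this repository).

```python
# Sketch: train a linear probe on precomputed embeddings.
# The embedding array and labels are synthetic stand-ins; real HeAR
# embeddings would come from running 2-second clips through the model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

EMBEDDING_DIM = 512  # assumed dimensionality, for illustration only

# Synthetic "embeddings" for 200 clips, with a weak class signal injected
# so the probe has something to learn.
labels = rng.integers(0, 2, size=200)
embeddings = rng.normal(size=(200, EMBEDDING_DIM)) + labels[:, None] * 0.5

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0
)

# A simple linear classifier is often sufficient on top of strong embeddings.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {probe.score(X_test, y_test):.2f}")
```

Because the heavy lifting is done by the pretrained embedding model, the downstream classifier can stay small and train in seconds on modest hardware.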
- Read our developer documentation to see the full range of next steps available, including learning more about the model through its model card or serving API.
- Explore this repository, which contains notebooks for using the model from Hugging Face and Vertex AI, as well as the implementation of the container that you can deploy to Vertex AI.
- Visit the model on Hugging Face or Model Garden.
We are open to bug reports, pull requests (PRs), and other contributions. See CONTRIBUTING and the community guidelines for details.
While the model is licensed under the Health AI Developer Foundations License, everything in this repository is licensed under the Apache 2.0 license; see LICENSE.