TamGen and MatterSim model onboarding #3751

22 changes: 22 additions & 0 deletions latest/model/TamGen/MLmodel
@@ -0,0 +1,22 @@
flavors:
python_function:
artifacts:
checkpoint:
path: artifacts/checkpoint_best.pt
uri: checkpoints/crossdock_pdb_A10/checkpoint_best.pt
gpt_dir:
path: artifacts/gpt_model
uri: gpt_model
cloudpickle_version: 3.1.0
code: null
env:
conda: conda.yaml
virtualenv: python_env.yaml
loader_module: mlflow.pyfunc.model
python_model: python_model.pkl
python_version: 3.9.21
streamable: false
mlflow_version: 2.19.0
model_size_bytes: 2369527704
model_uuid: 385934d395d944cf851a81002a1c8be4
utc_time_created: '2025-01-08 12:54:23.079748'
4 changes: 4 additions & 0 deletions latest/model/TamGen/asset.yaml
@@ -0,0 +1,4 @@
extra_config: model.yaml
spec: spec.yaml
type: model
categories: ["Foundation Models"]
50 changes: 50 additions & 0 deletions latest/model/TamGen/description.md
@@ -0,0 +1,50 @@
# Responsible AI Transparency Notes for the TamGen Model

## Model description

TamGen is a 100-million-parameter model that generates compounds based on input protein information. TamGen is pre-trained on 10 million compounds from PubChem and fine-tuned on the CrossDocked and PDB datasets. We evaluated TamGen on existing benchmarks and achieved top performance. Furthermore, TamGen has identified novel inhibitors for tuberculosis, which were subsequently validated through wet-lab experiments.

To use TamGen, please follow the responsible AI policy:
- Do not use TamGen to generate any harmful/toxic compounds.
- Only use TamGen for legitimate purposes and in compliance with all applicable laws and regulations.
- Implement proper safety protocols and ethical review processes before synthesizing or testing any compounds generated by TamGen.

## Uses

TamGen has two main functions:
1. Generate compounds based on the input protein information.
2. Optimise an existing compound into a better one for the input protein.

### Direct intended uses

The TamGen framework is composed of three integral components:
- **Protein encoder**: Converts the three-dimensional structure of a protein into a hidden vector representation.
- **Molecule decoder**: Trained on a dataset of 10 million SMILES (Simplified Molecular Input Line Entry System) strings, it constructs chemically valid SMILES strings for new molecules.
- **Contextual encoder**: Integrates the information from both the protein and the compound, paving the way for targeted compound optimisation.

To generate a compound based on the input protein information:
1. Gather the relevant protein data.
2. Input it into the protein encoder.
3. Retrieve the corresponding SMILES string from the decoder.

To optimise an existing compound relative to a specific protein:
1. Input the protein information into the protein encoder.
2. Process the protein and the initial compound information with the contextual encoder.
3. Channel the output into the decoder to generate an optimised SMILES string for the compound.
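Since this PR packages TamGen as an MLflow pyfunc model for Azure ML, a deployed endpoint would typically be scored over HTTPS. The sketch below is a hypothetical client, assuming an endpoint URL, an API key, and request fields (`protein_pdb`, `num_samples`) that are illustrative only — the actual TamGen request schema is defined by the deployment, so check its scoring documentation before use.

```python
import json
import urllib.request

def build_request(protein_pdb: str, num_samples: int = 10) -> bytes:
    # Hypothetical request body; the field names are assumptions,
    # not the documented TamGen input schema.
    payload = {"input_data": {"protein_pdb": protein_pdb,
                              "num_samples": num_samples}}
    return json.dumps(payload).encode("utf-8")

def score(endpoint_url: str, api_key: str, body: bytes):
    # Standard Azure ML online-endpoint call pattern: POST JSON with a
    # bearer token and parse the JSON response (e.g. SMILES strings).
    req = urllib.request.Request(
        endpoint_url,
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

body = build_request("ATOM      1  N   MET A   1 ...", num_samples=5)
# smiles = score("https://<endpoint>.inference.ml.azure.com/score", "<key>", body)
```

The network call is left commented out; only the request construction runs as written.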

## Limitations

- Prior knowledge of protein structures is required.
- Currently, compounds are not generated in 3D spaces, representing a potential area for future enhancement.
- Users need to verify the generated compounds with wet-lab experiments and design pipelines to decide which compounds to synthesise (e.g., using molecular docking to select compounds with potentially good binding affinity).

## Risks and mitigations

The release of TamGen is relatively low-risk, as real-world compound design is a very complex pipeline:
- Compound design is an intricate process that extends far beyond initial generation (i.e., what TamGen works on). After a compound is generated, it undergoes rigorous filtering and selection processes to identify suitable drug candidates (e.g., enzymatic test, cellular activity test, animal test, etc.).
- TamGen incorporates an additional filter to eliminate compounds with undesired properties.
- Compound generation methods such as Pocket2Mol and MoLeR have already been released. The absence of malicious outcomes from these models suggests that either the barrier posed by synthesis knowledge is sufficient to deter misuse, or that compound generation is not a useful tool for malicious ends.

## Condition to use

TamGen is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. TamGen is intended to facilitate drug discovery research and is not suitable for any other use, including clinical applications. Compounds generated by TamGen shall be subject to a rigorous filtering and selection process to identify suitable drug candidates. Users shall independently assess and test the risks of TamGen in generating specific compounds, ensure the responsible use of AI technology, including but not limited to the development and integration of risk mitigation measures, and comply with all applicable laws and regulations in all applicable jurisdictions. Users shall assume all liability under any theory of liability, whether in contract, tort, regulation, negligence, products liability, or otherwise, associated with the use of TamGen and any inputs and outputs thereof. TamGen output may contain inaccuracies, and users are responsible for determining the accuracy of any outputs generated by TamGen in relation to its intended use.
8 changes: 8 additions & 0 deletions latest/model/TamGen/model.yaml
@@ -0,0 +1,8 @@
path:
container_name: models
container_path: microsoft/TamGen/1736144331/mlflow_model_folder
storage_name: automlcesdkdataresources
type: azureblob
publish:
description: description.md
type: mlflow_model
33 changes: 33 additions & 0 deletions latest/model/TamGen/spec.yaml
@@ -0,0 +1,33 @@
$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json

name: TamGen
path: ./

properties:
inference-min-sku-spec: 6|1|112|64
inference-recommended-sku: Standard_NC6s_v3, Standard_NC12s_v3, Standard_NC24s_v3, Standard_NC24ads_A100_v4, Standard_NC48ads_A100_v4, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4, Standard_ND40rs_v2
languages: en
SharedComputeCapacityEnabled: true

tags:
task: protein-design
disable-batch: "true"
Preview: ""
inference_supported_envs:
license: mit
author: Microsoft
hiddenlayerscanned: "true"
SharedComputeCapacityEnabled: ""
inference_compute_allow_list:
[
Standard_NC6s_v3,
Standard_NC12s_v3,
Standard_NC24s_v3,
Standard_NC24ads_A100_v4,
Standard_NC48ads_A100_v4,
Standard_NC96ads_A100_v4,
Standard_ND96asr_v4,
Standard_ND96amsr_A100_v4,
Standard_ND40rs_v2,
]
version: 1
4 changes: 4 additions & 0 deletions latest/model/mattersim/asset.yaml
@@ -0,0 +1,4 @@
extra_config: model.yaml
spec: spec.yaml
type: model
categories: ["Foundation Models"]
22 changes: 22 additions & 0 deletions latest/model/mattersim/description.md
@@ -0,0 +1,22 @@
MatterSim is a large-scale pretrained deep learning model for efficient materials simulation and property prediction.

MatterSim is a deep learning model for general materials design tasks. It supports efficient atomistic simulations at first-principles level and accurate prediction of broad material properties across the periodic table, spanning temperatures from 0 to 5000 K and pressures up to 1000 GPa. Out-of-the-box, the model serves as a machine learning force field, and shows remarkable capabilities not only in predicting ground-state material structures and energetics, but also in simulating their behavior under realistic temperatures and pressures. MatterSim also serves as a platform for continuous learning and customization by integrating domain-specific data. The model can be fine-tuned for atomistic simulations at a desired level of theory or for direct structure-to-property predictions with high data efficiency.

Please refer to the [MatterSim](https://arxiv.org/abs/2405.04967) manuscript for more details on the model.

- **Developed by:** Han Yang, Chenxi Hu, Yichi Zhou, Xixian Liu, Yu Shi, Jielan Li, Guanzhi Li, Zekun Chen, Shuizhou Chen, Claudio Zeni, Matthew Horton, Robert Pinsler, Andrew Fowler, Daniel Zügner, Tian Xie, Jake Smith, Lixin Sun, Qian Wang, Lingyu Kong, Chang Liu, Hongxia Hao, Ziheng Lu
- **Funded by:** Microsoft Research AI for Science
- **Model type:** Currently, we only release the models trained with **M3GNet** architecture.
- **License:** MIT License

### Model Sources

- **Repository:** <https://github.com/microsoft/mattersim>
- **Paper:** <https://arxiv.org/abs/2405.04967>

### Available Models

| | mattersim-v1.0.0-1M | mattersim-v1.0.0-5M |
| ------------------ | --------------------- | ----------------------- |
| Training Data Size | 3M | 6M |
| Model Parameters | 880K | 4.5M |
63 changes: 63 additions & 0 deletions latest/model/mattersim/evaluation.md
@@ -0,0 +1,63 @@
### Testing Data, Factors & Metrics

#### Testing Data

To evaluate the model's performance, we created the following test sets:

- **MPtrj-random-1k:** 1k structures randomly sampled from MPtrj dataset
- **MPtrj-highest-stress-1k:** 1k structures with highest stress magnitude sampled from MPtrj dataset
- **Alexandria-1k:** 1k structures randomly sampled from Alexandria
- **MPF-Alkali-TP:** For a detailed description of how this dataset was generated, please refer to the SI of the [MatterSim manuscript](https://arxiv.org/abs/2405.04967)
- **MPF-TP:** For a detailed description of how this dataset was generated, please refer to the SI of the [MatterSim manuscript](https://arxiv.org/abs/2405.04967)
- **Random-TP:** For a detailed description of how this dataset was generated, please refer to the SI of the [MatterSim manuscript](https://arxiv.org/abs/2405.04967)

We released the test datasets as pickle files, each containing a list of `ase.Atoms` objects. To access the structures and corresponding labels in the datasets, you can use the following snippet to get started:

```python
import pickle
from ase.units import GPa

atoms_list = pickle.load(open("/path/to/datasets.pkl", "rb"))
atoms = atoms_list[0]

print(f"Energy: {atoms.get_potential_energy()} eV")
print(f"Forces: {atoms.get_forces()} eV/A")
print(f"Stress: {atoms.get_stress(voigt=False)} eV/A^3, or {atoms.get_stress(voigt=False)/GPa} GPa")
```
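As a note on the conversion in the snippet above: `ase.units` works in eV and Angstrom, so `ase.units.GPa` is the value of 1 GPa expressed in eV/A^3 (about 0.0062415), and dividing a stress in eV/A^3 by it yields GPa. The dependency-free sketch below uses the physical conversion factor directly (1 eV/A^3 ≈ 160.21766 GPa):

```python
# 1 eV/A^3 = 1.602176634e-19 J / 1e-30 m^3 = 1.6022e11 Pa ≈ 160.21766 GPa,
# so ase.units.GPa ≈ 1 / 160.21766 ≈ 0.0062415 (eV/A^3 per GPa).
EV_PER_A3_IN_GPA = 160.21766208

stress_ev_a3 = 0.05  # example stress component in eV/A^3
stress_gpa = stress_ev_a3 * EV_PER_A3_IN_GPA
print(f"{stress_ev_a3} eV/A^3 = {stress_gpa:.3f} GPa")  # ≈ 8.011 GPa
```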

#### Metrics

We evaluate the performance by computing the mean absolute errors (MAEs) of energy (E), forces (F) and stress (S) for each structure within the same dataset. The MAEs are defined as follows:
<p align="center">
<img src="https://latex.codecogs.com/svg.latex?\mathrm{MAE}_E=\frac{1}{N}\sum_{i}^N\frac{1}{N_{at}^{(i)}}|E_i-\tilde{E}_i|" alt="MAE_E equation">
</p>
<p align="center">
<img src="https://latex.codecogs.com/svg.latex?\mathrm{MAE}_F=\frac{1}{N}\sum_i^N\frac{1}{N_{at}^{(i)}}\sum_{j}^{N^{(i)}_{at}}||F_{ij}-\tilde{F}_{ij}||_2," alt="MAE_F equation">
</p>
<p align="center">
<img src="https://latex.codecogs.com/svg.latex?\mathrm{MAE}_S=\frac{1}{N}\sum_i^{N}||S_{i}-\tilde{S}_{i}||_2," alt="MAE_S equation">
</p>
where N is the number of structures in the same dataset, <img src="https://latex.codecogs.com/svg.image?\inline&space;&space;N_{at}^{(i)}"> is the number of atoms in the i-th structure and E, F and S represent ground-truth energy, forces and stress, respectively.
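The three metrics above can be sketched in NumPy. This is an illustrative re-implementation of the formulas as written (per-atom energy error, per-atom force error norm averaged over atoms then structures, and the norm of the stress error per structure), not the evaluation code used by the authors:

```python
import numpy as np

def mae_energy(E, E_pred, n_atoms):
    # MAE_E: per-atom absolute energy error, averaged over structures.
    E, E_pred, n = map(np.asarray, (E, E_pred, n_atoms))
    return float(np.mean(np.abs(E - E_pred) / n))

def mae_forces(F_list, F_pred_list):
    # MAE_F: for each structure, mean L2 norm of per-atom force errors
    # (F_list[i] has shape (n_atoms_i, 3)), then averaged over structures.
    errs = [np.mean(np.linalg.norm(np.asarray(F) - np.asarray(Fp), axis=1))
            for F, Fp in zip(F_list, F_pred_list)]
    return float(np.mean(errs))

def mae_stress(S_list, S_pred_list):
    # MAE_S: Frobenius norm of the 3x3 stress error, averaged over structures.
    errs = [np.linalg.norm(np.asarray(S) - np.asarray(Sp))
            for S, Sp in zip(S_list, S_pred_list)]
    return float(np.mean(errs))
```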

### Results

| Dataset | Dataset Size | MAE | mattersim-v1.0.0-1M | mattersim-v1.0.0-5M |
| -------------------- | ------------ | ----------------- | ------------ | ------------ |
| MPtrj-random-1k | 1000 | Energy [eV/atom] | 0.030 | 0.024 |
| | | Forces [eV/<img src="https://latex.codecogs.com/svg.latex?\AA" alt="\AA">] | 0.149 | 0.109 |
| | | Stress [GPa] | 0.241 | 0.186 |
| MPtrj-highest-stress-1k | 1000 | Energy [eV/atom] | 0.110 | 0.108 |
| | | Forces [eV/<img src="https://latex.codecogs.com/svg.latex?\AA" alt="\AA">] | 0.417 | 0.361 |
| | | Stress [GPa] | 6.230 | 6.003 |
| Alexandria-1k | 1000 | Energy [eV/atom] | 0.058 | 0.016 |
| | | Forces [eV/<img src="https://latex.codecogs.com/svg.latex?\AA" alt="\AA">] | 0.086 | 0.042 |
| | | Stress [GPa] | 0.761 | 0.205 |
| MPF-Alkali-TP | 460 | Energy [eV/atom] | 0.024 | 0.021 |
| | | Forces [eV/<img src="https://latex.codecogs.com/svg.latex?\AA" alt="\AA">] | 0.331 | 0.293 |
| | | Stress [GPa] | 0.845 | 0.714 |
| MPF-TP | 1069 | Energy [eV/atom] | 0.029 | 0.026 |
| | | Forces [eV/<img src="https://latex.codecogs.com/svg.latex?\AA" alt="\AA">] | 0.418 | 0.364 |
| | | Stress [GPa] | 1.159 | 1.144 |
| Random-TP | 693 | Energy [eV/atom] | 0.208 | 0.199 |
| | | Forces [eV/<img src="https://latex.codecogs.com/svg.latex?\AA" alt="\AA">] | 0.933 | 0.824 |
| | | Stress [GPa] | 2.065 | 1.999 |
8 changes: 8 additions & 0 deletions latest/model/mattersim/model.yaml
@@ -0,0 +1,8 @@
path:
container_name: models
container_path: microsoft/MatterSim/1736144331/mlflow_model_folder
storage_name: automlcesdkdataresources
type: azureblob
publish:
description: description.md
type: mlflow_model
61 changes: 61 additions & 0 deletions latest/model/mattersim/notes.md
@@ -0,0 +1,61 @@
## Intended Uses

The MatterSim model is intended for property predictions of materials.

### Direct Use

The model is used for materials simulation and property prediction tasks. An interface to the Atomic Simulation Environment (ASE) is provided. Examples of direct use include, but are not limited to:

- Direct prediction of energy, forces and stress of a given material
- Phonon prediction using finite difference
- Molecular dynamics
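To illustrate the finite-difference workflow behind phonon prediction, the toy sketch below uses a 1D harmonic potential as a stand-in for the force field; in practice, MatterSim's ASE calculator (see the repository linked above) would supply energies and forces for a full 3D structure, and phonon frequencies would come from the eigenvalues of the mass-weighted force-constant matrix:

```python
import numpy as np

K = 4.0  # toy spring constant, eV/A^2

def energy(x):
    # Stand-in for a model energy evaluation at displacement x.
    return 0.5 * K * x * x

def force_fd(x, h=1e-4):
    # Central finite difference: F = -dE/dx.
    return -(energy(x + h) - energy(x - h)) / (2 * h)

def force_constant_fd(x, h=1e-4):
    # Second derivative d2E/dx2; the 3D analogue of a
    # force-constant matrix entry used for phonons.
    return (energy(x + h) - 2 * energy(x) + energy(x - h)) / (h * h)

print(force_fd(0.5))           # approx -K * x = -2.0
print(force_constant_fd(0.5))  # approx K = 4.0
```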

### Out-of-Scope Use

The model only supports atomistic simulations of materials and molecules. Any use or interpretation beyond that should be avoided.
The model does not support generation of new materials, as it is designed for materials simulation and property prediction only.
The model is intended for research and experimental purposes. Further testing and development are needed before considering its application in real-world scenarios.

## Contact Model Provider

- Han Yang (<[email protected]>)
- Ziheng Lu (<[email protected]>)

## Technical Specifications

### Model Architecture and Objective

The checkpoints released in this repository are those trained on an internal implementation of the **M3GNet** architecture.

#### Software

- Python == 3.9

## Citation

**BibTeX:**

```
@article{yang2024mattersim,
title={MatterSim: A Deep Learning Atomistic Model Across Elements, Temperatures and Pressures},
author={Han Yang and Chenxi Hu and Yichi Zhou and Xixian Liu and Yu Shi and Jielan Li and Guanzhi Li and Zekun Chen and Shuizhou Chen and Claudio Zeni and Matthew Horton and Robert Pinsler and Andrew Fowler and Daniel Zügner and Tian Xie and Jake Smith and Lixin Sun and Qian Wang and Lingyu Kong and Chang Liu and Hongxia Hao and Ziheng Lu},
year={2024},
eprint={2405.04967},
archivePrefix={arXiv},
primaryClass={cond-mat.mtrl-sci},
url={https://arxiv.org/abs/2405.04967},
journal={arXiv preprint arXiv:2405.04967}
}
```

## Bias, Risks, and Limitations

The current model has relatively low accuracy for organic polymeric systems.
Accuracy is inferior to the best (more computationally expensive) methods available.
The model is trained on a specific variant of Density Functional Theory (PBE) that has known limitations across chemical space which affect prediction accuracy, such as the ability to simulate highly correlated systems. (The model can be fine-tuned with higher-accuracy data.)
The model does not support all capabilities of some of the latest models, such as predicting Born effective charges or simulating a material in an applied electric field.
We have evaluated the model on many examples, but many more lie beyond our available resources to test.

### Recommendations

For applications involving simulations of surfaces, interfaces, and systems with long-range interactions, the results are often qualitatively correct. For quantitative results, the model needs to be fine-tuned.
31 changes: 31 additions & 0 deletions latest/model/mattersim/spec.yaml
@@ -0,0 +1,31 @@
$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json

name: MatterSim
path: ./

properties:
inference-min-sku-spec: 6|1|112|64
inference-recommended-sku: Standard_NC6s_v3, Standard_NC12s_v3, Standard_NC24s_v3, Standard_NC24ads_A100_v4, Standard_NC48ads_A100_v4, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4
languages: en
SharedComputeCapacityEnabled: true

tags:
task: materials-design
disable-batch: "true"
Preview: ""
license: mit
author: Microsoft
hiddenlayerscanned: "true"
SharedComputeCapacityEnabled: ""
inference_compute_allow_list:
[
Standard_NC6s_v3,
Standard_NC12s_v3,
Standard_NC24s_v3,
Standard_NC24ads_A100_v4,
Standard_NC48ads_A100_v4,
Standard_NC96ads_A100_v4,
Standard_ND96asr_v4,
Standard_ND96amsr_A100_v4
]
version: 1