# Zero Collision Hashing (ZCH) Benchmarking Testbed

This testbed benchmarks the performance of ZCH algorithms with respect to efficiency, accuracy, and collision management. Specifically, the testbed collects the following metrics:
- QPS: queries per second, the number of input feature values the model can process in a second.
- Collision rate: the percentage of collisions in the hash table. A high collision rate means that many potentially irrelevant features are mapped to the same hash value, which can lead to information loss and decreased accuracy.
- NE: normalized entropy, a measure of the confidence of models on the prediction results of classification tasks.
- AUC: area under the curve, a metric used to evaluate the performance of classification models.
- MAE: mean absolute error, a measure of the average magnitude of errors in regression tasks.
- MSE: mean squared error, a measure of the average squared error in regression tasks.
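
For reference, NE is commonly computed as the model's average log loss normalized by the entropy of the empirical positive rate, so that a value below 1.0 beats always predicting the base rate. A minimal sketch of that convention (the testbed's exact implementation may differ):

```python
import numpy as np

def normalized_entropy(labels: np.ndarray, preds: np.ndarray, eps: float = 1e-12) -> float:
    """Average log loss divided by the entropy of the empirical positive rate."""
    preds = np.clip(preds, eps, 1 - eps)
    logloss = -np.mean(labels * np.log(preds) + (1 - labels) * np.log(1 - preds))
    p = np.clip(labels.mean(), eps, 1 - eps)  # empirical positive rate
    base_entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return float(logloss / base_entropy)
```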

## Prerequisites
Before running the benchmark, it is important to ensure that the environment is properly set up. The following steps should be taken:
1. Prepare a Python environment (Python 3.9+)
2. Install the necessary dependencies
```bash
# Install torch and fbgemm_gpu following the instructions in https://docs.pytorch.org/FBGEMM/fbgemm_gpu/development/InstallationInstructions.html
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu126/
pip install --pre fbgemm-gpu --index-url https://download.pytorch.org/whl/nightly/cu126/
# Install torchrec
pip install torchrec --index-url https://download.pytorch.org/whl/nightly/cu126
# Install generative recommenders
git clone https://github.com/meta-recsys/generative-recommenders.git
cd generative-recommenders
pip install -e .
```
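
A quick import check confirms the installation before running anything heavier (a minimal sanity check, not part of the benchmark itself):

```python
# Verify that the core packages import and that CUDA is visible.
import torch
import fbgemm_gpu  # noqa: F401 -- importing registers the FBGEMM ops with torch
import torchrec  # noqa: F401

print(torch.__version__, torch.cuda.is_available())
```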

## Running the benchmark
To run the benchmark, use the following command:
```bash
WORLD_SIZE=1 python benchmark_zch.py -- --profiling_result_folder result_tbsize_10000_nonzch_dlrmv3_kuairand1k --dataset_name kuairand_1k --batch_size 16 --learning_rate 0.001 --dense_optim adam --sparse_optim adagrad --epochs 5 --num_embeddings 10000
```
More options can be found in the [arguments.py](arguments.py) file.

## Repository Structure
- [benchmark_zch.py](benchmark_zch.py): the main script for running the benchmark.
- [arguments.py](arguments.py): contains the arguments for the benchmark.
- [benchmark_zch_utils.py](benchmark_zch_utils.py): utility functions for the benchmark.
- [count_dataset_distributions.py](count_dataset_distributions.py): script for counting the distribution of features in the dataset.
- [data](data): directory containing the datasets used in the benchmark.
- [models](models): directory containing the models used in the benchmark.
- [plots](plots): directory containing the plotting notebooks for the benchmark.
- [figures](figures): directory containing the figures generated by the plotting notebooks.

## To add a new model
To add a new model to the benchmark, follow these steps:
1. Create a new configuration YAML file named `<new_model_name>.yaml` in the [models/configs](models/configs) directory.
    - Besides basic configurations like embedding dimensions, number of embeddings, etc., the YAML file must also contain the following two fields:
        - `embedding_module_attribute_path`: the path to the embedding module in the model, either the EmbeddingCollection or the EmbeddingBagCollection.
        - `managed_collision_module_attribute_path`: the path to the managed collision module in the model, if one is applied. It should be in the following format: `module.<embedding_module_attribute_path>.mc_embedding_collection._managed_collision_collection._managed_collision_modules`.
2. Create a new model class in the [models/models](models/models) directory, named `<new_model_name>.py`.
    - The model class should act as a wrapper for the new model (see the sketch after this list), and it should
        - contain the following attributes:
            - `eval_flag` (bool): whether the model is in evaluation or training mode.
            - `table_configs` (List[Dict[str, EmbeddingConfig]]): a list of dictionaries containing the configuration of each embedding table.
        - override the following methods:
            - `forward(self, batch: Dict[str, Any])`: the forward method of the model. The forward method should make the model compatible with the input from the Batch dataclass, and produce output in the format `summed_loss, (prediction_logits, prediction_labels, prediction_weights)`.
            - `eval(self) -> None`: set the model to evaluation mode.
    - Implement the `make_model_<new_model_name>` function in the [models/make_model.py](models/make_model.py) file. The function should take three parameters:
        - `args`: the arguments passed to the benchmark.
        - `configs`: the configuration of the model and dataset.
        - `device`: the device to run the model on.

      The function should return an instance of the new model class. It should also contain the code to replace the model's embedding module with the ZCH embedding module using a `mc_adapter` object.

3. Add the new model to the [models/__init__.py](models/__init__.py) file with `from .<new_model_name> import make_model_<new_model_name>`.
4. Add the new model to the [models/make_model.py](models/make_model.py) file:
    - Add `make_model_<new_model_name>` to the `from .models import` line.
    - Add a conditional branch `elif model_name == "<new_model_name>"` to the `make_model` function, in which you
        - read the model configuration file from `os.path.join(os.path.dirname(__file__), "configs", "<new_model_name>.yaml")`,
        - read the dataset configuration from `os.path.join(os.path.dirname(__file__), "..", "data", "configs", f"{args.dataset_name}.yaml")`, and
        - call the `make_model_<new_model_name>` function with the model configuration and dataset configuration.
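
To make the wrapper contract concrete, here is a minimal sketch of a model class and its factory. Everything beyond the attributes and methods listed above (the class name, the `embedding_tables` config key, the placeholder tensors) is hypothetical, and the ZCH module swap via the testbed's `mc_adapter` is only indicated in a comment:

```python
from typing import Any, Dict, List

import torch
from torchrec.modules.embedding_configs import EmbeddingConfig


class MyNewModel(torch.nn.Module):
    """Hypothetical wrapper exposing the attributes and methods the benchmark expects."""

    def __init__(self, embedding_configs: List[EmbeddingConfig]) -> None:
        super().__init__()
        self.eval_flag: bool = False  # evaluation vs. training mode
        # One {table_name: EmbeddingConfig} dictionary per embedding table.
        self.table_configs: List[Dict[str, EmbeddingConfig]] = [
            {cfg.name: cfg} for cfg in embedding_configs
        ]

    def forward(self, batch: Dict[str, Any]):
        # A real model derives these from the Batch-dataclass dict; placeholders here.
        summed_loss = torch.tensor(0.0)
        logits = labels = weights = torch.empty(0)
        return summed_loss, (logits, labels, weights)

    def eval(self) -> None:
        self.eval_flag = True
        super().train(False)


def make_model_my_new_model(args: Any, configs: Dict[str, Any], device: torch.device) -> MyNewModel:
    """Hypothetical factory mirroring the make_model_<new_model_name> contract."""
    emb_configs = [
        EmbeddingConfig(
            name=t["name"],
            embedding_dim=t["embedding_dim"],
            num_embeddings=args.num_embeddings,
            feature_names=t["feature_names"],
        )
        for t in configs.get("embedding_tables", [])  # hypothetical config key
    ]
    model = MyNewModel(emb_configs).to(device)
    # A real factory would also swap the embedding module for its ZCH
    # counterpart here via the testbed's mc_adapter object.
    return model
```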

## To add a new dataset
To add a new dataset to the benchmark, follow these steps:
1. Create a new configuration YAML file named `<new_dataset_name>.yaml` in the [data/configs](data/configs) directory.
    - The YAML file must contain the following fields:
        - `dataset_path`: the path to the dataset.
        - `batch_size`: the batch size of the dataset.
        - `num_workers`: the number of workers to load the dataset.
    - Besides the three required fields, the YAML file should also contain any fields necessary for loading and ingesting the dataset.
2. Create a new dataset preprocess script in the [data/preprocess](data/preprocess) directory, named `<new_dataset_name>.py`.
    - The script should contain a definition of the corresponding Batch dataclass, which should contain the necessary attributes and override the following methods (see the sketch at the end of this section):
        - `to(self, device: torch.device, non_blocking: bool = False) -> Batch`: move the data to the specified device.
        - `pin_memory(self) -> Batch`: pin the data in memory.
        - `record_stream(self, stream: torch.cuda.streams.Stream) -> None`: record the data stream.
        - `get_dict(self) -> Dict[str, Any]`: get the data as a dictionary of `{<attribute_name>: <attribute_value>}`.
    - The script should also include a dataset class. The dataset class should act as a wrapper for the new dataset, and it should at least override the following methods:
        - `__init__(self, config: Dict[str, Any], device: torch.device) -> None`: the constructor of the dataset class. It should take a configuration dictionary and a device as input, and initialize the dataset. When initializing the dataset, it must create an `items_in_memory` attribute as a list of Batch dataclasses.
        - `__len__(self) -> int`: the length of the dataset.
        - `__getitem__(self, idx: int) -> Batch`: get an item from the dataset. It should take an index as input, and return the data as a Batch dataclass.
        - `load_item(self, idx: int) -> Batch`: load an item from the dataset. It should take an index as input, and return the data as a Batch dataclass.
        - `get_sample(self, idx: int) -> Batch`: get a sample from the dataset. It should take an index as input, and return the data from the `items_in_memory` list.
        - `__getitems__(self, idxs: List[int]) -> List[Batch]`: get a list of items from the dataset. It should take a list of indices as input, and return the data as a list of Batch dataclasses.
    - The script should include a `collate_fn` that takes a list of Batch dataclasses and returns a single Batch dataclass.
    - The script should finally include a `get_<new_dataset_name>_dataloader` function that takes three parameters:
        - `args`: the arguments passed to the benchmark.
        - `configs`: the configuration of the model and dataset.
        - `stage`: the stage of the benchmark, either `"train"` or `"val"`.

      The function should return a dataloader for the new dataset.
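
The sketch below instantiates the contract above end to end. The field names (`dense_features`, `labels`), the fabricated in-memory samples, and the function names with `my_new_dataset` substituted for `<new_dataset_name>` are hypothetical; only the method names and signatures follow the contract:

```python
from dataclasses import dataclass
from typing import Any, Dict, List

import torch
from torch.utils.data import DataLoader, Dataset


@dataclass
class Batch:
    dense_features: torch.Tensor  # hypothetical field
    labels: torch.Tensor          # hypothetical field

    def to(self, device: torch.device, non_blocking: bool = False) -> "Batch":
        return Batch(
            dense_features=self.dense_features.to(device, non_blocking=non_blocking),
            labels=self.labels.to(device, non_blocking=non_blocking),
        )

    def pin_memory(self) -> "Batch":
        return Batch(self.dense_features.pin_memory(), self.labels.pin_memory())

    def record_stream(self, stream: torch.cuda.streams.Stream) -> None:
        self.dense_features.record_stream(stream)
        self.labels.record_stream(stream)

    def get_dict(self) -> Dict[str, Any]:
        return {"dense_features": self.dense_features, "labels": self.labels}


class MyNewDataset(Dataset):
    def __init__(self, config: Dict[str, Any], device: torch.device) -> None:
        self.device = device
        # A real preprocess script would read config["dataset_path"]; here we
        # fabricate a few samples so the sketch runs end to end.
        self.items_in_memory: List[Batch] = [
            Batch(dense_features=torch.randn(4), labels=torch.randint(0, 2, (1,)).float())
            for _ in range(8)
        ]

    def __len__(self) -> int:
        return len(self.items_in_memory)

    def load_item(self, idx: int) -> Batch:
        return self.items_in_memory[idx]  # real code would load from disk

    def get_sample(self, idx: int) -> Batch:
        return self.items_in_memory[idx]

    def __getitem__(self, idx: int) -> Batch:
        return self.load_item(idx)

    def __getitems__(self, idxs: List[int]) -> List[Batch]:
        return [self.load_item(i) for i in idxs]


def collate_fn(batches: List[Batch]) -> Batch:
    return Batch(
        dense_features=torch.stack([b.dense_features for b in batches]),
        labels=torch.cat([b.labels for b in batches]),
    )


def get_my_new_dataset_dataloader(args: Any, configs: Dict[str, Any], stage: str) -> DataLoader:
    # A real implementation would use `stage` to select the train/val split.
    dataset = MyNewDataset(configs, torch.device("cpu"))
    return DataLoader(
        dataset,
        batch_size=configs.get("batch_size", 4),
        num_workers=configs.get("num_workers", 0),
        collate_fn=collate_fn,
        pin_memory=False,  # Batch.pin_memory supports pin_memory=True when desired
    )
```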