bkonkle/rust-demo-node-modules

High-Performance Node Modules with Rust

WIP

Install dependencies:

npm i

Pre-commit

sudo apt install pre-commit
# or...
yay -S pre-commit
# or...
pipx install pre-commit
# or...
brew install pre-commit

pre-commit install

Run on Local Machine

First, issue yourself a token:

npm run token

Then, run the local dev server:

USE_RUST_MODULE=false npm run dev

Now, make a GET /me request with your access token in the Authorization header, using a "Bearer" prefix.

You should get back a 200 response, indicating that the JWT was successfully validated.
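As a concrete sketch, the request can be made with curl. The port (3000) and the token value below are assumptions; substitute the token printed by `npm run token` and whatever port your dev server actually listens on:

```shell
# Token value and port are placeholders; substitute your own.
TOKEN="paste-the-token-from-npm-run-token-here"

curl -i \
  -H "Authorization: Bearer ${TOKEN}" \
  http://localhost:3000/me || echo "Request failed; is the dev server running?"
```

The `-i` flag prints the response status line and headers, so you can see the 200 directly.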

Set up Torch for AI Querying

Install libtorch with the cxx11 ABI for use within the Rust module, and place it in ~/lib/libtorch:

export VERSION=2.4.1
export CUDA_VERSION=cu124

wget https://download.pytorch.org/libtorch/$CUDA_VERSION/libtorch-cxx11-abi-shared-with-deps-$VERSION%2B$CUDA_VERSION.zip
unzip libtorch-cxx11-abi-shared-with-deps-$VERSION+$CUDA_VERSION.zip
rm libtorch-cxx11-abi-shared-with-deps-$VERSION+$CUDA_VERSION.zip

mv libtorch ~/lib

Then export some variables in your .envrc to point to it:

export LIBTORCH=~/lib/libtorch
export LD_LIBRARY_PATH=${LIBTORCH}/lib:$LD_LIBRARY_PATH

export LIBTORCH_BYPASS_VERSION_CHECK=1

The LIBTORCH_BYPASS_VERSION_CHECK variable allows patch version 2.4.1 to be used where the Rust bindings expect 2.4.0.

Compile the Rust modules

npm run build.jwt-rsa

npm run build.text-classification

This should generate a lib/jwt_rsa/jwt-rsa.<platform>.<arch>.node file with an index.js and an index.d.ts alongside it, and a similar set of files for the text-classification project. These generated entry points are what you import into your Node.js application.
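As a quick sanity check after the build, you can list the binding's exports from the repository root. This is only a sketch; the actual export names are whatever the generated index.d.ts declares:

```shell
# Lists the functions exported by the compiled jwt_rsa binding.
# Falls back to a message if the module hasn't been built yet.
node -e "try { console.log(Object.keys(require('./lib/jwt_rsa'))) } catch (e) { console.log('not built yet:', e.code) }"
```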

Run the local dev server with the rust module enabled:

USE_RUST_MODULE=true npm run dev

Fine-Tune the Model

Before you can query the model, you need to take the base BERT uncased model and fine-tune it with the Snips dataset.

You'll need to create and activate a virtualenv for dependencies like "torch" to be available. With Poetry:

npm run poetry install

source $(npm run --silent poetry-path)/bin/activate

Then, run the fine-tuning script:

npm run train-text-classification -- --num-epochs 2

To convert the resulting model for use with Rust, it's easiest to use the rust-bert library directly:

# Activate the virtual environment so that Torch is available
source $(npm run --silent poetry-path)/bin/activate

# Then switch to the path you've cloned the rust-bert library to
cd /path/to/rust-bert

python utils/convert_model.py path/to/data/snips-bert/pytorch_model.bin

You should see a rust_model.ot file in the data/snips-bert directory afterwards. This will be what the Rust process uses.

Run Benchmarks

Issue a token:

npm run token

Assign the access token, with its "Bearer" prefix, to the AUTH_TOKEN environment variable:

export AUTH_TOKEN="Bearer {access_token}"
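For example (the token value below is a placeholder; use the one issued above):

```shell
# AUTH_TOKEN must carry the full header value, "Bearer" prefix included.
ACCESS_TOKEN="placeholder-access-token"
export AUTH_TOKEN="Bearer ${ACCESS_TOKEN}"
echo "$AUTH_TOKEN"
```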

Run the benchmarks:

npm run bench
