A course search and review platform for McGill University.
You'll need Docker, Cargo, and pnpm installed on your machine to spawn the various components the project needs to run locally.
First, join the Discord server to get access to the development environment variables.

In `.env` within the root directory, you'll have to set:

```
MS_CLIENT_ID=
MS_CLIENT_SECRET=
MS_REDIRECT_URI=http://localhost:8000/api/auth/authorized
```

...and then in `client/.env`, you'll have to set the server URL:

```
VITE_API_URL=http://localhost:8000
```
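As a quick sanity check before starting the server, you can verify that each required variable has a non-empty value. The helper below is not part of the repository, just a small sketch:

```shell
# Hypothetical helper (not in the repo): print the names of variables that
# are missing or empty in an env file.
check_env() {
  file="$1"; shift
  missing=""
  for var in "$@"; do
    grep -q "^${var}=." "$file" 2>/dev/null || missing="$missing $var"
  done
  echo "$missing"
}

# Usage: check_env .env MS_CLIENT_ID MS_CLIENT_SECRET MS_REDIRECT_URI
```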
Second, mount a local MongoDB instance with Docker and initiate the replica set:

```
docker compose up --no-recreate -d
sleep 5
docker exec mongodb mongosh --quiet --eval 'rs.initiate()' > /dev/null 2>&1 || true
```
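The fixed `sleep 5` can be flaky on slower machines. A sketch of a more robust alternative (not part of the repo) polls the replica set until it reports healthy:

```shell
# Sketch: poll `rs.status().ok` until the replica set is ready, instead of
# sleeping for a fixed interval. Not part of the repo.
wait_for_rs() {
  tries="${1:-30}"
  while [ "$tries" -gt 0 ]; do
    if docker exec mongodb mongosh --quiet --eval 'rs.status().ok' 2>/dev/null | grep -q '^1$'; then
      echo ready
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  echo timeout
  return 1
}

# Usage: docker compose up --no-recreate -d && wait_for_rs 30
```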
Spawn the server with a data source (in this case the `/seed` directory) and initialize the database (note that seeding may take some time on slower machines):

```
cargo run -- --source=seed serve --initialize --db-name=mcgill-courses
```
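Once seeding finishes, you can spot-check that documents actually landed in the database. The collection name below is an assumption; inspect the database to confirm what the server creates:

```shell
# Sketch (not in the repo): count documents in a collection via mongosh
# inside the container. The `courses` collection name is an assumption.
count_docs() {
  db="$1"
  coll="$2"
  docker exec mongodb mongosh "$db" --quiet --eval "db.${coll}.countDocuments()"
}

# Usage: count_docs mcgill-courses courses
```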
Finally, spawn the React frontend:

```
pnpm install
pnpm run dev
```
n.b. If you have `just` installed, we provide a `dev` recipe for doing all of the above in addition to running a watch on the server:

```
just dev
```
See the `justfile` for more recipes.
The server command-line interface provides a `load` subcommand for scraping all courses from various McGill course information websites and building a JSON data source, for example:

```
RUST_LOG=info cargo run -- --source=seed \
  load \
  --batch-size=200 \
  --scrape-vsb \
  --user-agent "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
```
The current defaults include scraping all 10,000+ courses currently offered by McGill, schedule information from the official visual schedule builder, and courses offered in previous terms going back as far as the 2009-2010 term.
For full usage information, see the output below:

```
Usage: server load [OPTIONS] --user-agent <USER_AGENT>

Options:
      --user-agent <USER_AGENT>      A user agent
      --course-delay <COURSE_DELAY>  Time delay between course requests in milliseconds [default: 0]
      --page-delay <PAGE_DELAY>      Time delay between page requests in milliseconds [default: 0]
      --retries <RETRIES>            Number of retries [default: 10]
      --batch-size <BATCH_SIZE>      Number of pages to scrape per concurrent batch [default: 20]
      --mcgill-terms <MCGILL_TERMS>  The mcgill terms to scrape [default: 2009-2010 2010-2011 2011-2012 2012-2013 2013-2014 2014-2015 2015-2016 2016-2017 2017-2018 2018-2019 2019-2020 2020-2021 2021-2022 2022-2023 2023-2024]
      --vsb-terms <VSB_TERMS>        The schedule builder terms to scrape [default: 202305 202309 202401]
      --scrape-vsb                   Scrape visual schedule builder information
  -h, --help                         Print help
```
Alternatively, if you have `just` installed, you can run:

```
just load
```
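After a load run, you can sanity-check the resulting JSON data source. The file name under `/seed` is an assumption; list the directory to see what was actually produced:

```shell
# Sketch (not in the repo): count the entries in a JSON array file produced
# by `load`. File names under /seed are assumptions.
count_entries() {
  python3 -c 'import json, sys; print(len(json.load(open(sys.argv[1]))))' "$1"
}

# Usage: count_entries seed/courses.json
```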
We parse prerequisites and corequisites using llama-index with custom examples; all the code lives in `/tools/req-parser`.
If you need to run the requirement parser on a file, simply:

```
cd tools/req-parser
poetry install
poetry shell
python3 main.py <file>
```
n.b. This will require an OpenAI API key to be set in the environment.
We continuously deploy our site with Render using a Docker image, and have a MongoDB instance hosted on Atlas.
We also use an S3 bucket to store a hash used to decide whether to seed courses in our production environment, and Microsoft's identity platform to handle our OAuth 2.0 authentication flow.
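The hash check can be sketched roughly as follows. All names and the file layout here are illustrative, not the production implementation (which lives in the server itself):

```shell
# Sketch of the idea: compute a digest over the seed files and compare it
# with the digest stored alongside the previous deploy. Names are illustrative.
seed_hash() {
  cat "$1"/*.json | sha256sum | cut -d ' ' -f 1
}

needs_seeding() {
  # $1: freshly computed hash, $2: hash fetched from the S3 bucket
  [ "$1" != "$2" ]
}

# Usage sketch (bucket name and key are hypothetical):
#   remote=$(aws s3 cp "s3://$SEED_BUCKET/seed.sha256" -)
#   needs_seeding "$(seed_hash seed)" "$remote" && echo "reseeding"
```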
There are a few notable projects worth mentioning that are similar in nature to mcgill.courses and have either inspired or informed its functionality and design, namely:
- uwflow.com - A course search and review platform for the University of Waterloo
- cloudberry.fyi - A post-modern schedule builder for McGill students
- mcgill.wtf - A fast full-text search engine for McGill courses