This is a repo containing the 5 assignments from Stanford's Natural Language Processing with Deep Learning course.
The first assignment is a warmup exercise that includes a short coding assignment focused on calculating co-occurrence matrices, as well as an exploration of word vectors in a Google Colab notebook.
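As a rough illustration (not the assignment's starter code), a window-based co-occurrence matrix can be built along these lines; the corpus, window size, and function name here are illustrative:

```python
import numpy as np

def compute_co_occurrence_matrix(corpus, window_size=4):
    """Count how often each word appears within `window_size` tokens of another."""
    words = sorted({w for doc in corpus for w in doc})
    word2ind = {w: i for i, w in enumerate(words)}
    M = np.zeros((len(words), len(words)))
    for doc in corpus:
        for i, center in enumerate(doc):
            lo, hi = max(0, i - window_size), min(len(doc), i + window_size + 1)
            for j in range(lo, hi):
                if j != i:
                    M[word2ind[center], word2ind[doc[j]]] += 1
    return M, word2ind

corpus = [["all", "that", "glitters", "is", "not", "gold"],
          ["all", "is", "well", "that", "ends", "well"]]
M, word2ind = compute_co_occurrence_matrix(corpus, window_size=2)
print(M.shape, M[word2ind["all"], word2ind["that"]])
```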
The second assignment is a coding assignment in which you will implement Word2Vec and train your own word vectors with stochastic gradient descent (SGD).
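A minimal NumPy sketch of one skip-gram step, assuming a naive-softmax loss and a plain SGD update; vocabulary size, dimensions, and variable names are illustrative rather than the assignment's provided interface:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 5                                      # vocab size, embedding dimension
center_vecs = rng.normal(scale=0.1, size=(V, d))  # "center word" vectors
outside_vecs = rng.normal(scale=0.1, size=(V, d)) # "outside word" vectors

def sgd_step(center_idx, outside_idx, lr=0.1):
    """One SGD update on the naive-softmax loss for a (center, outside) pair."""
    v_c = center_vecs[center_idx]                 # (d,)
    scores = outside_vecs @ v_c                   # (V,)
    y_hat = np.exp(scores - scores.max())
    y_hat /= y_hat.sum()                          # softmax probabilities
    loss = -np.log(y_hat[outside_idx])
    delta = y_hat.copy()
    delta[outside_idx] -= 1.0                     # y_hat - y
    grad_vc = outside_vecs.T @ delta              # gradient w.r.t. the center vector
    grad_U = np.outer(delta, v_c)                 # gradient w.r.t. all outside vectors
    center_vecs[center_idx] -= lr * grad_vc
    outside_vecs[:] -= lr * grad_U
    return loss

for _ in range(200):
    loss_val = sgd_step(center_idx=3, outside_idx=7)
print(f"loss after 200 SGD steps: {loss_val:.4f}")
```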
The third assignment is a coding assignment in which you will use PyTorch to implement a neural-network-based dependency parser, with the goal of maximizing performance on the UAS (Unlabeled Attachment Score) metric. Your implementation will be a transition-based parser, which incrementally builds up a parse one step at a time.
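To make the one-step-at-a-time idea concrete, here is a small sketch of the transition mechanics (SHIFT / LEFT-ARC / RIGHT-ARC) that such a parser applies; the class and method names are illustrative, not the assignment's starter code:

```python
class PartialParse:
    def __init__(self, sentence):
        self.stack = ["ROOT"]          # partially processed words
        self.buffer = list(sentence)   # words still to be read
        self.dependencies = []         # (head, dependent) arcs produced so far

    def parse_step(self, transition):
        if transition == "S":          # SHIFT: move the next buffer word onto the stack
            self.stack.append(self.buffer.pop(0))
        elif transition == "LA":       # LEFT-ARC: second-from-top becomes dependent of top
            dependent = self.stack.pop(-2)
            self.dependencies.append((self.stack[-1], dependent))
        elif transition == "RA":       # RIGHT-ARC: top becomes dependent of second-from-top
            dependent = self.stack.pop(-1)
            self.dependencies.append((self.stack[-1], dependent))

pp = PartialParse(["parse", "this", "sentence"])
for t in ["S", "S", "S", "LA", "RA", "RA"]:
    pp.parse_step(t)
print(pp.dependencies)  # [('sentence', 'this'), ('parse', 'sentence'), ('ROOT', 'parse')]
```

The neural network's job in the assignment is to predict which of these transitions to take at each step, given features of the current stack and buffer.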
The fourth assignment is a coding assignment in which you will build a Neural Machine Translation (NMT) system that translates Cherokee sentences into English using PyTorch on a GPU, implemented as a sequence-to-sequence (Seq2Seq) network with attention. SCPD has arranged for each learner to access Microsoft Azure resources for training.
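A minimal PyTorch sketch of the attention step at the heart of such a Seq2Seq model, in which a decoder state attends over encoder hidden states via multiplicative attention; shapes and projection names are illustrative, not the assignment's model code:

```python
import torch
import torch.nn as nn

batch, src_len, h = 2, 7, 16
enc_hiddens = torch.randn(batch, src_len, 2 * h)   # bidirectional encoder outputs
dec_state = torch.randn(batch, h)                  # current decoder hidden state

att_projection = nn.Linear(2 * h, h, bias=False)   # project encoder states to decoder size
enc_proj = att_projection(enc_hiddens)             # (batch, src_len, h)

# Multiplicative attention scores and the resulting attention distribution.
scores = torch.bmm(enc_proj, dec_state.unsqueeze(2)).squeeze(2)   # (batch, src_len)
alpha = torch.softmax(scores, dim=1)

# Context vector: attention-weighted sum of encoder hidden states.
context = torch.bmm(alpha.unsqueeze(1), enc_hiddens).squeeze(1)   # (batch, 2h)

# The context is then combined with the decoder state to predict the next target word.
combined = torch.cat([context, dec_state], dim=1)                 # (batch, 3h)
print(combined.shape)
```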
The fifth assignment is a coding assignment in which you will explore attention and pretrained knowledge, training a Transformer to perform a task that requires accessing knowledge about the world not encoded in its training data. You will use PyTorch on a GPU. SCPD has arranged for each learner to access Microsoft Azure resources for training; more information about how to access Azure will be provided by the course team.
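For orientation, a small sketch of the masked (causal) self-attention and language-modeling loss used in a decoder-only Transformer of this kind; layer sizes and variable names are illustrative, not the assignment's provided model:

```python
import torch
import torch.nn as nn

vocab_size, block_size, d_model, n_head = 100, 8, 32, 4
tokens = torch.randint(0, vocab_size, (2, block_size))      # a toy batch of token ids

embed = nn.Embedding(vocab_size, d_model)
attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)
lm_head = nn.Linear(d_model, vocab_size)

x = embed(tokens)                                            # (batch, block, d_model)

# Causal mask: position i may only attend to positions <= i.
causal_mask = torch.triu(torch.ones(block_size, block_size, dtype=torch.bool), diagonal=1)
attn_out, _ = attn(x, x, x, attn_mask=causal_mask)
logits = lm_head(attn_out)                                   # next-token logits

# Language-modeling loss: predict token t+1 from tokens up to t.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
print(loss.item())
```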