CS231n: Deep Learning for Computer Vision - Assignment Solutions

This repository contains solutions to the Spring 2023 assignments from the Stanford University course CS231n: Deep Learning for Computer Vision. Each solution is thoroughly annotated with comments and worked through in an intuitive manner.

Course Information

Assignment Details

Assignment 1

Q1: k-Nearest Neighbor Classifier
The notebook knn.ipynb walks through the implementation of the k-Nearest Neighbor (kNN) classifier.
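For a rough sense of the idea the notebook builds up (this is a minimal sketch, not the repository's vectorized solution; all names here are illustrative):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Predict labels for X_test by majority vote among the k nearest
    training points under L2 distance (illustrative sketch)."""
    # Pairwise squared L2 distances via the (a - b)^2 = a^2 - 2ab + b^2 expansion.
    dists = (
        np.sum(X_test ** 2, axis=1, keepdims=True)
        - 2 * X_test @ X_train.T
        + np.sum(X_train ** 2, axis=1)
    )
    # For each test point, look up the labels of its k closest training points
    # and return the most common one.
    nearest = np.argsort(dists, axis=1)[:, :k]
    return np.array([np.bincount(y_train[row]).argmax() for row in nearest])
```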

Q2: Training a Support Vector Machine
The notebook svm.ipynb demonstrates the implementation of a Support Vector Machine (SVM) classifier.
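As a sketch of the multiclass hinge loss the notebook works with (the function name, margin of 1, and regularization setup are assumptions for illustration):

```python
import numpy as np

def svm_loss(W, X, y, reg=1e-3):
    """Multiclass SVM (hinge) loss with L2 regularization, vectorized sketch."""
    scores = X @ W                          # (N, C) class scores
    correct = scores[np.arange(len(y)), y]  # score of the true class per example
    margins = np.maximum(0, scores - correct[:, None] + 1.0)  # margin delta = 1
    margins[np.arange(len(y)), y] = 0       # the true class contributes no margin
    return margins.sum() / len(y) + reg * np.sum(W * W)
```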

Q3: Implementing a Softmax Classifier
The notebook softmax.ipynb details the implementation of the Softmax classifier.
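The core computation is the cross-entropy loss over softmax probabilities; a minimal sketch (not the repository's code, and without the gradient) might look like:

```python
import numpy as np

def softmax_loss(W, X, y, reg=1e-3):
    """Average cross-entropy loss of a linear softmax classifier (sketch)."""
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)   # shift for numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean() + reg * np.sum(W * W)
```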

Q4: Two-Layer Neural Network
The notebook two_layer_net.ipynb guides you through the implementation of a two-layer neural network classifier.
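The network is an affine layer, a ReLU, and a second affine layer; a forward-pass sketch under that assumption:

```python
import numpy as np

def two_layer_forward(X, W1, b1, W2, b2):
    """Forward pass of an affine-ReLU-affine network (illustrative sketch)."""
    hidden = np.maximum(0, X @ W1 + b1)   # first affine layer followed by ReLU
    scores = hidden @ W2 + b2             # second affine layer produces class scores
    return scores, hidden                 # hidden is kept for use in the backward pass
```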

Q5: Higher-Level Representations: Image Features
The notebook features.ipynb examines the improvements gained by using higher-level representations instead of raw pixel values.
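As a stand-in example of such a representation (the assignment uses HOG and HSV color histograms; the function below is a simplified, hypothetical per-channel histogram feature):

```python
import numpy as np

def color_histogram_feature(img, bins=10):
    """Concatenate per-channel color histograms as a simple image feature
    (a rough stand-in for the features used in the assignment)."""
    # img: (H, W, 3) array with pixel values in [0, 255]
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)
```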

Assignment 2

Q1: Fully-Connected Neural Network
The notebook FullyConnectedNets.ipynb introduces modular layer design and uses those layers to implement fully-connected networks of arbitrary depth. It also covers several popular update rules for optimizing these models.
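The modular design pairs each layer's forward pass with a backward pass that consumes a cache; a sketch of that pattern for an affine layer (illustrative, not the repository's code):

```python
import numpy as np

def affine_forward(x, w, b):
    """Affine layer forward pass; the cache stores what backward will need."""
    out = x.reshape(x.shape[0], -1) @ w + b
    return out, (x, w, b)

def affine_backward(dout, cache):
    """Affine layer backward pass: gradients w.r.t. input, weights, and bias."""
    x, w, b = cache
    dx = (dout @ w.T).reshape(x.shape)
    dw = x.reshape(x.shape[0], -1).T @ dout
    db = dout.sum(axis=0)
    return dx, dw, db
```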

Q2: Batch Normalization
The notebook BatchNormalization.ipynb involves implementing batch normalization and using it to train deep fully-connected networks.
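A simplified training-mode forward pass conveys the idea (running statistics for test time and the backward pass are omitted in this sketch):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Batch normalization forward pass in training mode (simplified sketch)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize each feature over the batch
    return gamma * x_hat + beta             # learnable scale and shift
```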

Q3: Dropout
The notebook Dropout.ipynb implements dropout and explores its effect on model generalization.
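The assignment uses inverted dropout, which rescales at train time so test time needs no change; a minimal sketch, where p is assumed to be the keep probability:

```python
import numpy as np

def dropout_forward(x, p=0.5, train=True):
    """Inverted dropout: at train time, keep each activation with probability p
    and rescale by 1/p so the test-time pass can use x unchanged (sketch)."""
    if not train:
        return x
    mask = (np.random.rand(*x.shape) < p) / p
    return x * mask
```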

Q4: Convolutional Networks
The notebook ConvolutionalNetworks.ipynb implements several new layers that are commonly used in convolutional networks.
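The central piece is the convolution forward pass; a naive loop-based sketch (deliberately slow and illustrative, not the fast implementation):

```python
import numpy as np

def conv_forward_naive(x, w, b, stride=1, pad=1):
    """Naive convolution forward pass with explicit loops (sketch)."""
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    H_out = (H + 2 * pad - HH) // stride + 1
    W_out = (W + 2 * pad - WW) // stride + 1
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                    # each image
        for f in range(F):                # each filter
            for i in range(H_out):        # each output row
                for j in range(W_out):    # each output column
                    patch = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(patch * w[f]) + b[f]
    return out
```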

Q5: PyTorch
The notebook PyTorch.ipynb explores the PyTorch framework, culminating in training a convolutional network to achieve as high an accuracy as possible on CIFAR-10.
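The flavor of the nn.Module API used there can be sketched as follows (a small hypothetical network and a single training step on a dummy batch, not the architecture tuned in the notebook):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCNN(nn.Module):
    """A small conv-relu-pool x2 + linear network for 32x32 CIFAR-10 images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
        return self.fc(x.flatten(1))

model = SimpleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# One training step on a dummy batch (replace with a CIFAR-10 DataLoader).
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```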

Assignment 3

Q1: Network Visualization: Saliency Maps, Class Visualization, and Fooling Images
The notebook Network_Visualization.ipynb introduces the pretrained SqueezeNet model, computes gradients with respect to images, and uses them to produce saliency maps, fooling images, and class visualizations.
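The saliency-map computation amounts to backpropagating the correct-class score into the input pixels; a PyTorch sketch under that assumption (function name and details are illustrative):

```python
import torch

def compute_saliency_map(model, images, labels):
    """Saliency map sketch: gradient magnitude of the correct-class score
    with respect to each input pixel, taking the max over color channels."""
    model.eval()
    images = images.clone().requires_grad_(True)
    scores = model(images)                                 # (N, C) class scores
    correct = scores.gather(1, labels.view(-1, 1)).sum()   # sum of true-class scores
    correct.backward()
    return images.grad.abs().max(dim=1).values             # (N, H, W) saliency
```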

Q2: Image Captioning with Vanilla RNNs
The notebook RNN_Captioning.ipynb walks through the implementation of vanilla recurrent neural networks and applies them to image captioning on the COCO dataset.
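The building block is a single recurrent time step; a minimal sketch of the tanh recurrence:

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    """One vanilla RNN time step (sketch): next_h = tanh(x @ Wx + prev_h @ Wh + b)."""
    return np.tanh(x @ Wx + prev_h @ Wh + b)
```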

Q3: Image Captioning with Transformers
The notebook Transformer_Captioning.ipynb covers the implementation of a Transformer model and applies it to image captioning on the COCO dataset.
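At the heart of the Transformer is scaled dot-product attention; a NumPy sketch of that computation (illustrative, not the multi-head layer implemented in the notebook):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Scaled dot-product attention (sketch)."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)   # similarity of queries to keys
    if mask is not None:
        scores = np.where(mask, scores, -1e9)        # block attention to masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the key dimension
    return weights @ V
```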

Q4: Generative Adversarial Networks
The notebook Generative_Adversarial_Networks.ipynb covers learning to generate images that match a training dataset, and using these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data.
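The vanilla GAN objective trains a discriminator to separate real from generated images and a generator to fool it; a loss sketch in that spirit (logits-based binary cross-entropy is an assumption about the formulation, not the repository's exact code):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(logits_real, logits_fake):
    """Vanilla GAN discriminator loss: label real images 1 and fakes 0 (sketch)."""
    real = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
    fake = F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    return real + fake

def generator_loss(logits_fake):
    """Vanilla GAN generator loss: push the discriminator to label fakes as real (sketch)."""
    return F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
```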

Q5: Self-Supervised Learning for Image Classification
The notebook Self_Supervised_Learning.ipynb demonstrates how to leverage self-supervised pretraining to achieve better performance on image classification tasks.
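Self-supervised pretraining of this kind typically relies on a contrastive objective over augmented views; a simplified SimCLR-style loss sketch (the exact formulation and names here are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def simclr_loss(z1, z2, temperature=0.5):
    """Simplified contrastive (NT-Xent) loss sketch: z1 and z2 are L2-normalized
    embeddings of two augmented views of the same N images."""
    N = z1.shape[0]
    z = torch.cat([z1, z2], dim=0)              # (2N, D) all embeddings
    sim = z @ z.T / temperature                 # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))           # a view is never its own positive
    # The positive for row i is its other augmented view: i + N (or i - N).
    targets = torch.cat([torch.arange(N) + N, torch.arange(N)])
    return F.cross_entropy(sim, targets)
```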
