Conversation
…nt_parc Syncing up with main branch before introducing LatentPARC modules.
…to latent_parc Merging the main branch updates into the latent_parc branch before pushing my new changes.
PARCtorch/autoencoder/autoencoder.py
Outdated
        decoded = self.decoder(encoded)
        return decoded


class Autoencoder_separate(nn.Module):
Please rename the class to follow the CamelCase naming convention (e.g., AutoencoderSeparate).
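For illustration, a minimal sketch of the suggested rename. The class body is not shown in the diff, so the skeleton below is an assumption; only the naming change is the point.

```python
import torch.nn as nn


class AutoencoderSeparate(nn.Module):  # renamed from Autoencoder_separate
    # Placeholder body: the real implementation is not shown in the diff.
    def __init__(self):
        super().__init__()
```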
PARCtorch/autoencoder/train.py
Outdated
        return self.network.encoder(x)

    def decode(self, x):
        return self.network.decoder(x)
Move the model definition somewhere else, such as where the encoders/decoders are defined.
        return self.network.decoder(x)


def train_autoencoder(model, optimizer, loss_function, train_loader, val_loader,
Please add functionality for resuming training from a checkpoint.
PARCtorch/autoencoder/utils.py
Outdated
    return torch.clamp(noisy_images, 0.0, 1.0)  # Keep pixel values in [0, 1]


class LpLoss(torch.nn.Module):
    def __init__(self, p=10):
PyTorch loss functions always take the following three arguments:
- size_average
- reduce
- reduction
It is advised that users defining their own loss functions do something similar; any extra arguments can be added elsewhere.
Also split the loss function part off, and put it under the module PARCtorch.loss
> Will do. Quick clarification: I have checked all the files and it seems a loss file doesn't exist yet. Should I create one?

Yes, please create one.

> Also, I implemented the reduction input but found this online about the other two arguments: the size_average and reduce parameters are legacy arguments in PyTorch loss functions and are deprecated in favor of a single reduction argument. Thoughts?

If they are deprecated, then just having reduction should be enough.
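To make the resolved convention concrete, here is a hedged sketch of how LpLoss could expose only the modern reduction argument. The class body is an assumption (the PR's actual implementation is not shown); only the argument handling mirrors the built-in torch.nn losses.

```python
import torch


class LpLoss(torch.nn.Module):
    # Sketch: per-sample Lp norm of the error, with the standard
    # "none" / "mean" / "sum" reduction semantics.
    def __init__(self, p=2, reduction="mean"):
        super().__init__()
        if reduction not in ("none", "mean", "sum"):
            raise ValueError(f"unsupported reduction: {reduction!r}")
        self.p = p
        self.reduction = reduction

    def forward(self, pred, target):
        # Flatten everything except the batch dimension, then take
        # the Lp norm of the error per sample.
        diff = (pred - target).flatten(start_dim=1)
        loss = torch.linalg.vector_norm(diff, ord=self.p, dim=1)
        if self.reduction == "mean":
            return loss.mean()
        if self.reduction == "sum":
            return loss.sum()
        return loss  # "none": one value per sample
```

With reduction="none" the caller gets a vector of per-sample losses, matching how the built-in losses behave.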
    return log_dict


def train_individual_autoencoder(model, optimizer, loss_function, train_loader, val_loader,
Same as before. Please add functionality for resuming training from a checkpoint.
PARCtorch/autoencoder/train.py
Outdated
import torch
import numpy as np
from tqdm import tqdm
import torch.nn.functional as F
Please remove the unused import.
PARCtorch/autoencoder/train.py
Outdated
import torch.nn.functional as F

from autoencoder import *
from utils import *
Instead of import *, please state explicitly which classes and functions are imported.
Integrating Latent PARC code with PARCtorch repo.