Latent PARC#59

Open
chengxinlun wants to merge 6 commits into main from latent_parc

Conversation

@chengxinlun (Collaborator)

Integrating Latent PARC code with PARCtorch repo.

@chengxinlun chengxinlun added the enhancement New feature or request label May 8, 2025
@chengxinlun chengxinlun mentioned this pull request May 8, 2025
@JosephBChoi JosephBChoi linked an issue May 8, 2025 that may be closed by this pull request
Zoe Gray and others added 4 commits May 27, 2025 11:13
…nt_parc

Syncing up with main branch before introducing LatentPARC modules.
…to latent_parc

trying to merge the main branch updates to latent_parc branch before I can push my new changes
decoded = self.decoder(encoded)
return decoded

class Autoencoder_separate(nn.Module):
Collaborator Author

Please change the class name to follow the CamelCase (CapWords) naming convention.
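For reference, a minimal sketch of the requested rename. PEP 8 prescribes the CapWords convention for class names; the body below is a placeholder so the sketch is self-contained (the actual class subclasses `torch.nn.Module`):

```python
# Before: class Autoencoder_separate(nn.Module): ...
# After (CapWords):
class AutoencoderSeparate:
    """Autoencoder whose encoder and decoder stages are kept separate."""
    # Placeholder body; the real class subclasses torch.nn.Module
    # and keeps the same encoder/decoder attributes.
    pass
```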

Collaborator

Done

return self.network.encoder(x)

def decode(self, x):
return self.network.decoder(x)
Collaborator Author

Please move the model definition elsewhere, e.g., to where the encoders/decoders are defined.

Collaborator

Done

return self.network.decoder(x)


def train_autoencoder(model, optimizer, loss_function, train_loader, val_loader,
Collaborator Author

Please add functionality for resuming training from a checkpoint.
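Resuming typically means persisting the epoch counter together with the model and optimizer state, and reloading them at startup. A minimal sketch of the pattern, using pickle in place of `torch.save`/`torch.load` so it stays self-contained (the function names `save_checkpoint`/`load_checkpoint` are hypothetical, not from the PR):

```python
import os
import pickle

def save_checkpoint(path, epoch, model_state, optimizer_state):
    """Persist everything needed to resume: epoch plus model/optimizer state."""
    with open(path, "wb") as f:
        pickle.dump({"epoch": epoch,
                     "model_state": model_state,
                     "optimizer_state": optimizer_state}, f)

def load_checkpoint(path):
    """Return the saved training state, or None to start from scratch."""
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return pickle.load(f)

def train(ckpt_path, total_epochs):
    """Run (or resume) a toy training loop, checkpointing after every epoch."""
    state = load_checkpoint(ckpt_path)
    start_epoch = state["epoch"] + 1 if state else 0
    completed = []
    for epoch in range(start_epoch, total_epochs):
        completed.append(epoch)                    # stand-in for one training epoch
        save_checkpoint(ckpt_path, epoch, {}, {})  # checkpoint after each epoch
    return completed
```

In the actual trainer, the pickle calls would be `torch.save({"epoch": epoch, "model_state_dict": model.state_dict(), "optimizer_state_dict": optimizer.state_dict()}, path)` on save, with `torch.load` plus `load_state_dict` on resume.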

return torch.clamp(noisy_images, 0.0, 1.0) # Keep pixel values in [0, 1]

class LpLoss(torch.nn.Module):
def __init__(self, p=10):
Collaborator Author

PyTorch loss functions always have the following three arguments:

  • size_average
  • reduce
  • reduction

It is advised that users defining their own loss functions do the same; any extra arguments can be added elsewhere.

Collaborator Author

Also split the loss function part off, and put it under the module PARCtorch.loss

Collaborator

Will do. Quick clarification: I have checked all the files, and it seems a loss file doesn't exist yet. Should I create one?

Collaborator

Also, I implemented the reduction input, but found this online about the other two arguments: the size_average and reduce parameters are legacy arguments in PyTorch loss functions and are deprecated in favor of a single reduction argument.

Thoughts?

Collaborator Author

Will do. Quick clarification: I have checked all the files, and it seems a loss file doesn't exist yet. Should I create one?

Yes, please create one.

Collaborator Author

Also, I implemented the reduction input, but found this online about the other two arguments: the size_average and reduce parameters are legacy arguments in PyTorch loss functions and are deprecated in favor of a single reduction argument.

Thoughts?

If they are deprecated, then just having reduction should be enough.
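A custom loss keeping only `reduction` can then mirror the built-in signature. A plain-Python sketch of the convention (torch-free so it is self-contained; the real class would subclass `torch.nn.Module` and operate on tensors, and the default `p` here is illustrative):

```python
class LpLoss:
    """Element-wise |pred - target|**p with PyTorch-style reduction semantics."""

    def __init__(self, p=2, reduction="mean"):
        if reduction not in ("none", "mean", "sum"):
            raise ValueError(f"unsupported reduction: {reduction!r}")
        self.p = p
        self.reduction = reduction

    def __call__(self, pred, target):
        # Per-element losses, kept as a list to mimic reduction="none".
        losses = [abs(a - b) ** self.p for a, b in zip(pred, target)]
        if self.reduction == "none":
            return losses
        if self.reduction == "sum":
            return sum(losses)
        return sum(losses) / len(losses)  # "mean"
```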

return log_dict


def train_individual_autoencoder(model, optimizer, loss_function, train_loader, val_loader,
Collaborator Author

Same as before. Please add functionality for resuming training from a checkpoint.

import torch
import numpy as np
from tqdm import tqdm
import torch.nn.functional as F
Collaborator Author

Please remove the unused import.

Collaborator

Done

import torch.nn.functional as F

from autoencoder import *
from utils import *
Collaborator Author

Instead of import *, please state explicitly which classes and functions are imported.
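Wildcard imports copy every public name into the current namespace, hiding where each identifier comes from and risking silent shadowing. A small illustration with a stdlib module standing in for the project's own (the `autoencoder`/`utils` names in the comments are illustrative, since the modules' contents aren't shown in this diff):

```python
# Explicit imports keep provenance visible; math stands in for the
# project's own modules here.
from math import sqrt, pi

# The equivalent change in the file under review (illustrative names):
# from autoencoder import Autoencoder          # was: from autoencoder import *
# from utils import add_noise                  # was: from utils import *
```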

Collaborator

Done


Labels

enhancement New feature or request

Projects

None yet

Development

Successfully merging this pull request may close these issues.

port latent parc

2 participants