
Conversation

@Skylerwiernik

Fixes #80

@meta-cla

meta-cla bot commented Aug 21, 2025

Hi @Skylerwiernik!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Meta Open Source bot. label Aug 21, 2025
@meta-cla

meta-cla bot commented Aug 21, 2025

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@mmuckley
Contributor

I haven't reviewed yet, but I would like to comment that this would be a nice feature to have if we can review it carefully and make sure it doesn't break existing code. It's a small change that would allow people to run the code on more hardware platforms. CC @russellhowes


@mmuckley mmuckley left a comment


After reading through this, I think it is mostly good, except there is one function whose default signature is changed. Can you update it so that it has something like device="cuda:0" in the function signature?



-def forward_vjepa_video(model_hf, model_pt, hf_transform, pt_transform):
+def forward_vjepa_video(model_hf, model_pt, hf_transform, pt_transform, device):
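The reviewer's suggestion could look like the following defaulted keyword argument. This is only a sketch: the real function body is elided, and the placeholder return exists solely so the default can be exercised standalone.

```python
# Sketch of the reviewer's suggestion: give `device` a default of "cuda:0"
# so existing call sites (which pass no device) keep their old behavior.
def forward_vjepa_video(model_hf, model_pt, hf_transform, pt_transform, device="cuda:0"):
    # Real body elided; it would move tensors with .to(device) instead of .cuda().
    return device  # placeholder so the default can be checked standalone
```

Callers that never pass `device` get CUDA exactly as before, while MPS users can opt in with `device="mps"`.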


This changes the default function signature. Can you modify it so that the default device is cuda, as was the case with the original function signature?


@mmuckley mmuckley left a comment


Hi @Skylerwiernik, this looks pretty good but I noticed one case where cuda still isn't the default. I also think the notebook diff looks too big - is there any way to just change the lines relevant to mps?

Comment on lines +107 to +110
if torch.backends.mps.is_available():
device = "mps"
else:
device = "cuda:0"


Default should be cuda here, not "mps".
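One way to implement what the reviewer is asking for is to reverse the branch order so CUDA stays the default and MPS is only a fallback. A minimal sketch follows; availability is passed in as flags so the snippet runs without torch, whereas in the notebook the checks would be `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    # CUDA remains the default, matching the original code; MPS is used only
    # as a fallback on machines without CUDA (e.g. Apple Silicon).
    if cuda_available:
        return "cuda:0"
    if mps_available:
        return "mps"
    return "cpu"
```

With this ordering a CUDA machine is unaffected by the PR, which is the backwards-compatibility property the review is after.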

Comment on lines +2 to +5
"cells": [
{
"cell_type": "markdown",
"metadata": {},


The notebook diff is too big - any way to just change the mps lines?

Comment on lines +66 to +73
 def forward_vjepa_video(model_hf, model_pt, hf_transform, pt_transform, device="cuda"):
     # Run a sample inference with VJEPA
     with torch.inference_mode():
         # Read and pre-process the image
         video = get_video()  # T x H x W x C
         video = torch.from_numpy(video).permute(0, 3, 1, 2)  # T x C x H x W
-        x_pt = pt_transform(video).cuda().unsqueeze(0)
-        x_hf = hf_transform(video, return_tensors="pt")["pixel_values_videos"].to("cuda")
+        x_pt = pt_transform(video).to(device).unsqueeze(0)
+        x_hf = hf_transform(video, return_tensors="pt")["pixel_values_videos"].to(device)


This looks good.



Development

Successfully merging this pull request may close these issues.

Project hardcodes "cuda", not allowing the user to specify another device type
