
Result analysis and ask for solution #74

Closed
xyIsHere opened this issue Jun 25, 2023 · 44 comments


@xyIsHere

Dear author,

I rendered a bullet-time effect for an in-the-wild video. But as shown in the videos (rendered frames) below, the human body does not stand up straight on the ground. Do you know the reason for this and how to solve it?

Thanks a lot!

2.online-video-cutter.com.mp4

image

3.online-video-cutter.com.mp4

image

@Dipankar1997161

I faced the same issue with my own data too, and I'm still trying to figure out why.

It may be due to the Rh or Th values of the SMPL fit. If you figure it out, let me know as well.
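For context, Rh is the global body rotation as a 3-vector in axis-angle form and Th is the global translation: roughly, the fitted mesh is placed in the scene by rotating and then translating the posed vertices. A minimal sketch of that convention (the helper name is illustrative, not from any repo):

import cv2
import numpy as np

def apply_global_transform(verts, Rh, Th):
    # Convert the axis-angle vector to a 3x3 rotation matrix.
    R, _ = cv2.Rodrigues(Rh.astype(np.float64))
    # Rotate the posed vertices, then translate them into the scene.
    return verts @ R.T + Th.reshape(1, 3)

If Rh or Th is mis-estimated, the whole body ends up tilted or shifted exactly as in the videos above, even when the per-joint poses are fine.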

@xyIsHere
Author


I still cannot figure it out. Do you know of another pose-estimation method that might fix this issue?

@Dipankar1997161


Did you use the weak-perspective camera transformation for your custom videos? If not, please try it, and then train the model.
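For reference, weak-perspective regressors such as VIBE predict a per-frame camera (s, tx, ty) in the cropped image; a common way to use that with a full perspective camera is to convert the scale into a depth. A minimal sketch, assuming a square crop of size img_res and an assumed focal length (both defaults here are illustrative, not from this repo):

import numpy as np

def weak_perspective_to_translation(s, tx, ty, focal=5000.0, img_res=224):
    # Depth at which a perspective camera with this focal length
    # reproduces the weak-perspective scale s over the crop.
    tz = 2.0 * focal / (img_res * s + 1e-9)
    return np.array([tx, ty, tz], dtype=np.float32)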

@xyIsHere
Author


Yes, I did. I used VIBE to compute the SMPL parameters and the camera parameters and got those results. I saw your discussion in #65. I think you are right that this might be caused by Rh and Th, but the problem cannot be solved just by using a weak-perspective camera. Do you think the camera pose and the SMPL parameters can be decoupled?

@Dipankar1997161

Dipankar1997161 commented Aug 30, 2023


There's another method you can try, from EasyMocap (the repository that released the ZJU-MoCap dataset).

There you can run motion capture on your data and get accurate SMPL parameters. Then, before training HumanNeRF, transform the matrix as you did here and see if it improves.

I am using that repo for both monocular and multi-view SMPL. It works great.

@xyIsHere
Author

xyIsHere commented Aug 31, 2023


Thanks a lot! I will give EasyMocap a try. By the way, is this the instruction (https://chingswy.github.io/easymocap-public-doc/develop/02_fitsmpl.html) you followed to get the accurate fit?

@gushengbo

gushengbo commented Oct 18, 2023


Hello, have you used EasyMocap to predict SMPL parameters and successfully run them on HumanNeRF? @xyIsHere

@Dipankar1997161


I used EasyMocap and trained on three datasets: People-Snapshot, Human3.6M, and random YouTube ones.

@gushengbo


Are the SMPL parameters of People-Snapshot estimated by EasyMocap correct?

@Dipankar1997161


The parameters generated are correct; one just has to be careful about the axes of such monocular videos, because during rendering the result may otherwise look like the one at the top of this issue.

In any case, the axes of the original SMPL and the EasyMocap SMPL are different. Please remember that.
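When the two conventions disagree, one way to handle it is to compose a fixed corrective rotation with the fitted global rotation and translation. A minimal sketch; the actual corrective matrix depends on the two conventions involved, and the 180-degree flip below is only a hypothetical example:

import cv2
import numpy as np

def convert_global_orient(Rh, Th, R_fix):
    # Compose a fixed axis correction with the fitted global rotation,
    # and rotate the global translation into the new frame.
    R, _ = cv2.Rodrigues(Rh.astype(np.float64))
    Rh_new, _ = cv2.Rodrigues(R_fix @ R)
    Th_new = R_fix @ Th.reshape(3, 1)
    return Rh_new.reshape(3), Th_new.reshape(3)

# Hypothetical correction: 180 degrees about the x-axis
# (e.g. a y-up vs. y-down mismatch).
R_fix = np.diag([1.0, -1.0, -1.0])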

@gushengbo

gushengbo commented Oct 18, 2023


Oh, thank you very much. Could you tell me which axes you use for People-Snapshot? Could you elaborate on what the axes of such monocular videos are?

@Dipankar1997161


If you are specifically looking to work on People-Snapshot, check this repo: https://github.com/JanaldoChen/Anim-NeRF.git

They ran NeRF on just the Snapshot dataset.

@gushengbo


OK, but I want to train HumanNeRF with the Snapshot dataset.

@Dipankar1997161


I mean you can check the way they generated SMPL for Snapshot, try the same, and then use HumanNeRF to train.

@gushengbo


I get it. Thank you!

@gushengbo


I had an error using Anim-NeRF, so I used EasyMocap.
image
Can I just use these parameters directly? How should I modify them?
image
I also found another pose in a different file; which file should I choose?

@gushengbo


I have successfully used EasyMocap and trained on the dataset, thank you very much!

@Dipankar1997161


Always use the output-smpl-3d one. Those are the final SMPL values.

@Dipankar1997161


Great to hear that! Good luck with your work.

@gushengbo


I find that the SMPL parameters predicted by EasyMocap are not completely correct, and the rendered result is problematic.
image
image

@Dipankar1997161

May I know the settings you used for generating the SMPL, and see the front view of the render?

Lastly, is this the trained result? If so, how many epochs did you run?

@gushengbo

I just ran "python3 apps/demo/mocap.py /home/shengbo/EasyMocap-master/data/male-2-sport/ --work internet" to get the SMPL. The result shown is a novel view. I think maybe the human is too fat, so the generated SMPL is wrong.

@Dipankar1997161


The human is not that fat, since I have trained the same subject successfully. It seems the training is still incomplete, since the hands are not rendered.

Try adding --mode and setting it to one of the mono options.

Please make sure to process the files accurately; the mask is also really important.

@gushengbo


Thank you. I'll give it a try and let you know.

@Dipankar1997161

Dipankar1997161 commented Oct 19, 2023


Have a look at this. I trained it for 165k iterations:
prog_165000

@gushengbo


This is very good. Is the SMPL model you generated from EasyMocap better than mine?

@Dipankar1997161


I deleted the SMPL results, since I focused on multi-view data.

But is it possible for you to share the processing file you used for Snapshot? I can check and see if there is an error.

@gushengbo

The processing file is simple. This is the relevant part (smpl_model, all_betas, mesh_infos, and the image_names list are assumed to be defined earlier in the script):

import json
import os

import numpy as np

for i in image_names:  # image filenames, e.g. '000001.png'
    path_smpl = '/home/shengbo/EasyMocap-master/data/male-2-sport/output-smpl-3d/smplfull/' + i[:-4] + '/' + str(1000000 + int(i[:-4]))[1:] + '.json'
    if not os.path.exists(path_smpl):
        print(path_smpl)
        continue

    with open(path_smpl, 'r') as f:
        data = json.load(f)

    poses = np.array(data["annots"][0]['poses'][0], dtype=np.float32)
    Rh = np.array(data["annots"][0]['Rh'][0], dtype=np.float32)
    Th = np.array(data["annots"][0]['Th'][0], dtype=np.float32)
    betas = np.array(data["annots"][0]['shapes'][0], dtype=np.float32)

    # Hard-coded camera intrinsics; the extrinsics are left as identity
    # instead of being derived from Rh/Th.
    K = np.array([[1296.0, 0.0, 540.0],
                  [0.0, 1296.0, 540.0],
                  [0.0, 0.0, 1.0]], dtype=np.float32)
    E = np.eye(4, dtype=np.float32)

    all_betas.append(betas)

    # Get T-pose joints. Note: the pelvis-centering and the removal of the
    # global rotation from the body pose (poses[:3] = 0) are skipped here.
    _, tpose_joints = smpl_model(np.zeros_like(poses), betas)

    # Get posed joints using the body poses (global rotation included).
    _, joints = smpl_model(poses, betas)

    mesh_infos[i[:-4]] = {
        'Rh': Rh,
        'Th': Th,
        'poses': poses,
        'joints': joints,
        'tpose_joints': tpose_joints,
    }

@gushengbo


Could it be because I trained on so few images? I only used a few images.

@Dipankar1997161

Use as many images as you can and train longer; hopefully it will improve.

@gushengbo


OK, thank you very much!

@gushengbo

gushengbo commented Oct 19, 2023


I found out this was because I chose the wrong pose to render the new perspective. Did you choose a random pose from a frame to render? May I ask if you have tried rendering from a new perspective, rather than just the trained perspective?

@Dipankar1997161

Dipankar1997161 commented Oct 19, 2023


What sort of perspective are you referring to? Could you clarify a bit, and possibly show the result?

If you are talking about the camera perspective, then yes, I have rendered it from different perspectives.

@gushengbo

Yes, the camera angle. I chose an SMPL and rendered images from different camera perspectives, but I've found that it only works well from the training perspective. A new view is rendered below:
image

@gushengbo


I tried using more training images and it worked better.

@Dipankar1997161

Dipankar1997161 commented Oct 20, 2023

For rendering views from a different perspective, you need to check the rendering camera values. Check Freeview.py.
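For illustration, the rendering camera for free-viewpoint output is typically a set of extrinsics orbiting the subject. A minimal sketch of how such extrinsics can be generated (generic look-at math, not taken from the HumanNeRF code):

import numpy as np

def orbit_extrinsic(center, radius, angle_deg):
    # World-to-camera extrinsic for a camera circling the subject about
    # the vertical (y) axis while looking at `center`.
    a = np.deg2rad(angle_deg)
    cam_pos = center + radius * np.array([np.sin(a), 0.0, np.cos(a)])
    forward = center - cam_pos
    forward /= np.linalg.norm(forward)
    up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    R = np.stack([right, true_up, forward])  # rows are the camera axes
    t = -R @ cam_pos
    E = np.eye(4, dtype=np.float32)
    E[:3, :3], E[:3, 3] = R, t
    return E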

@gushengbo


The axes of the EasyMocap SMPL are different from the original SMPL's. I made up a custom axis, but it's not the true axis of the EasyMocap SMPL. Could you tell me where I can find the correct axes?

@Dipankar1997161


Could you provide me with the rendering result? A small video would be great.

@Dipankar1997161


Hey, I asked a few other people I'm in contact with, and it seems to be an issue with monocular videos themselves; many people face it. You can refer to this issue I raised there; hope it helps:
wyysf-98/MoCo_Flow#1 (comment)

If it doesn't, then try fitting SMPL once with PARE or VIBE and parse it as a WILD dataset in HumanNeRF, body-centering around the pelvis joint.
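Body-centering here means subtracting the T-pose pelvis position and removing the global rotation from the pose vector, which mirrors the steps left commented out in the preprocessing script shared later in this thread. A minimal sketch, assuming a smpl_model callable that returns (vertices, joints):

import numpy as np

def body_center_around_pelvis(poses, betas, smpl_model):
    # T-pose joints; joint 0 is the pelvis.
    _, tpose_joints = smpl_model(np.zeros_like(poses), betas)
    pelvis = tpose_joints[0].copy()
    tpose_joints = tpose_joints - pelvis[None, :]

    # Remove the global rotation from the body pose, then re-pose.
    poses = poses.copy()
    poses[:3] = 0
    _, joints = smpl_model(poses, betas)
    joints = joints - pelvis[None, :]
    return poses, joints, tpose_joints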

@gushengbo


male-3-casual.mp4
male-3-plaza.mp4

image
image
image

The rendering results depend on which SMPL parameters I choose; some of the SMPL parameters predicted by EasyMocap have serious errors.

@Dipankar1997161


What exactly do you mean by different SMPL parameters? As far as I remember, when you run mocap.py passing --write_smpl_full, you get two folders in the output: smpl and smpl_full.

Are you talking about those?

Or did you pass any new arguments to mocap.py? Can you share them?

@gushengbo


No, I'm not talking about smpl_full. I choose the SMPL parameters of one image (one of the frames from the video), then I render a novel view with those SMPL parameters.

@Dipankar1997161


I see, so it's random: some SMPL fits are good while some are bad for the same video? That's strange.

Can you tell me the Python command you ran for mocap.py, with the entire CLI arguments too?

@gushengbo


mocap.txt

I ran:

python3 apps/preprocess/extract_keypoints.py /home/shengbo/EasyMocap-master/data/male-2-outdoor/ --mode mp-holistic
python3 apps/demo/mocap.py /home/shengbo/EasyMocap-master/data/male-2-plaza/ --work internet
