Result analysis and ask for solution #74
Comments
I faced the same issue with my own data and am trying to figure out why this is the case. It may be due to the Rh or Th values of the SMPL.
I still cannot figure it out. Do you know of another pose estimation method that might fix this issue?
Did you use the weak-perspective camera transformation for your custom videos? If not, please try it.
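For reference, a minimal sketch of that conversion, assuming the usual VIBE/SPIN weak-perspective convention [s, tx, ty]; the focal length and crop resolution below are just placeholder values:

```python
import numpy as np

def weak_perspective_to_translation(cam, focal_length=5000.0, img_res=224):
    """Convert a weak-perspective camera [s, tx, ty] (VIBE/SPIN convention)
    into a full 3D translation in camera coordinates."""
    s, tx, ty = cam
    # Depth follows from the scale: a larger s means the person is closer.
    tz = 2.0 * focal_length / (img_res * s + 1e-9)
    return np.array([tx, ty, tz])

# Example: a VIBE-style camera prediction for one frame.
cam = np.array([0.9, 0.05, -0.02])
print(weak_perspective_to_translation(cam))
```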
Yes, I did. I used VIBE to compute the SMPL parameters and the camera parameters and got those results. I saw your discussion in #65, and I think you are right that this might be caused by Rh and Th, but the problem cannot be solved just by using a weak-perspective camera. Do you think the camera pose and SMPL parameters can be decoupled?
There is another method you can try, from EasyMocap (the repository that provides the ZJU-MoCap dataset). There you can run motion capture on your data and get accurate SMPL parameters. Just before training HumanNeRF, transform the matrix as you did here and see if it improves. I am using that repo for both monocular and multi-view SMPL, and it works great.
Thanks a lot! I will give EasyMocap a try. By the way, is this the instruction (https://chingswy.github.io/easymocap-public-doc/develop/02_fitsmpl.html) that you followed to get the accurate SMPL?
Hello, have you used EasyMocap to predict SMPL parameters and successfully run it on HumanNeRF? @xyIsHere
I used EasyMocap and trained on three datasets: People-Snapshot, Human3.6M, and random YouTube videos.
Are the SMPL parameters of People-Snapshot estimated by EasyMocap correct?
The parameters generated are correct; one just has to be careful about the axis convention for such monocular videos, because during rendering the result may look like the one at the top of this issue. The axes of the original SMPL and the EasyMocap SMPL are different. Please keep that in mind.
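If it helps, the axis correction usually amounts to pre-multiplying the global rotation Rh by a fixed rotation. A minimal sketch (the 180-degree flip around x below is only an illustrative guess, not the exact EasyMocap-to-SMPL offset):

```python
import cv2
import numpy as np

def correct_global_orient(rh, axis_fix):
    """Apply a fixed axis-convention rotation to an axis-angle global
    orientation Rh (3,) and return the corrected axis-angle vector."""
    R_old, _ = cv2.Rodrigues(np.asarray(rh, dtype=np.float64))
    R_new = axis_fix @ R_old
    rh_new, _ = cv2.Rodrigues(R_new)
    return rh_new.reshape(3)

# Illustrative 180-degree rotation around the x-axis (flips y and z).
axis_fix = np.diag([1.0, -1.0, -1.0])
rh = np.array([0.1, 0.0, 0.0])
print(correct_global_orient(rh, axis_fix))
```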
Oh, thank you very much. Could you tell me which axis you used for People-Snapshot? Can you elaborate on what the axis of such monocular videos means?
If you are particularly looking to work on People-Snapshot, check this repo: https://github.com/JanaldoChen/Anim-NeRF.git. They did NeRF just on the Snapshot dataset.
OK, I want to train HumanNeRF with the People-Snapshot dataset.
I mean you can check the way they generated SMPL for Snapshot, try the same, and then use HumanNeRF to train.
I get it. Thank you!
I had an error using Anim-NeRF, so I used EasyMocap instead.
I have successfully used EasyMocap and trained on the dataset. Thank you very much!
Always use the output-smpl-3d one. Those are the final SMPL values.
Great to hear that! Good luck with your work.
I find that the SMPL parameters predicted by EasyMocap are not completely correct, and the rendered result is problematic.
May I know the settings you used for generating the SMPL and the front view of the render? Lastly, is this the trained result, and if so, how many epochs did you run?
I just ran "python3 apps/demo/mocap.py /home/shengbo/EasyMocap-master/data/male-2-sport/ --work internet" to get the SMPL. The result is the novel view. I think maybe the human is fat, so the generated SMPL is wrong.
The human is not that fat, since I have trained the same subject successfully. It seems the training is still incomplete, since the hands are not rendered. For that same mocap.py command, try adding --mode and setting it to one of the mono options. Please make sure to process the files accurately; the mask is also really important.
Thank you. I'll give it a try and let you know.
This is very good. Is the SMPL model you generated from EasyMocap better than mine?
I deleted the SMPL results since I focused on multi-view data, but is it possible for you to share the processing file you used for Snapshot?
The processing file is simple.
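Roughly, it loads the per-frame JSONs from output-smpl-3d and collects the poses, shapes, and a camera extrinsic for HumanNeRF. A minimal sketch, assuming the EasyMocap JSONs use the keys Rh, Th, poses, shapes and that the recording camera sits at the origin; adapt both to your version:

```python
import glob
import json
import cv2
import numpy as np

def load_easymocap_smpl(json_dir):
    """Collect per-frame SMPL parameters from EasyMocap's output-smpl-3d JSONs.
    Returns poses (N, 72), betas (N, 10), and per-frame extrinsics (N, 4, 4)."""
    poses, betas, extrinsics = [], [], []
    for path in sorted(glob.glob(f"{json_dir}/*.json")):
        with open(path) as f:
            person = json.load(f)[0]          # first (only) person in the frame
        rh = np.asarray(person["Rh"], dtype=np.float64).reshape(3)
        th = np.asarray(person["Th"], dtype=np.float64).reshape(3)
        pose = np.asarray(person["poses"], dtype=np.float64).reshape(-1)[:72]
        shape = np.asarray(person["shapes"], dtype=np.float64).reshape(-1)[:10]

        # Treat Rh/Th as the transform from the canonical (zero global orient)
        # body to camera space; this assumes the recording camera is at the origin.
        R, _ = cv2.Rodrigues(rh)
        E = np.eye(4)
        E[:3, :3] = R
        E[:3, 3] = th
        pose[:3] = 0.0

        poses.append(pose)
        betas.append(shape)
        extrinsics.append(E)
    return np.stack(poses), np.stack(betas), np.stack(extrinsics)
```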
Is it possible that this is because I trained on too few images? I only used a few images.
Use as many images as you can and train longer. Hopefully it will improve.
OK, thank you very much! |
I found out that this was because I chose the wrong pose to render the new perspective. Did you choose a random pose from a frame to render? May I ask if you have tried rendering from a new perspective, rather than just a trained one?
What sort of perspective are you referring to? It would help if you could clarify a bit and possibly show the result. If you are talking about the camera perspective, then yes, I have rendered from different perspectives.
For rendering views from a different perspective, you need to check the rendering camera values. Check Freeview.py.
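If it helps, free-view rendering just rotates the render camera around the subject between frames. A minimal sketch of generating such world-to-camera extrinsics (the subject center, radius, and up-axis choice are placeholders):

```python
import numpy as np

def orbit_extrinsic(angle_deg, center, radius, height=0.0):
    """Build a world-to-camera extrinsic that looks at `center` from a point
    on a horizontal circle of the given radius (y is treated as 'up')."""
    a = np.deg2rad(angle_deg)
    eye = center + np.array([radius * np.cos(a), height, radius * np.sin(a)])
    forward = (center - eye) / np.linalg.norm(center - eye)
    up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    R = np.stack([right, true_up, forward])   # rows: camera axes in world frame
    t = -R @ eye
    E = np.eye(4)
    E[:3, :3], E[:3, 3] = R, t
    return E

# e.g. 60 views spaced every 6 degrees around a subject at the origin
extrinsics = [orbit_extrinsic(6 * i, np.zeros(3), 3.0) for i in range(60)]
```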
The axis of the EasyMocap SMPL is different from the original SMPL's. I made a custom axis, but it is not the true axis of the EasyMocap SMPL.
Could you provide me with the rendering result? A small video would be great.
Hey, I actually asked a few other people I am in contact with, and it seems to be an issue with monocular videos themselves; many face such issues. You can refer to the issue I raised there. Hope it helps. If it doesn't, try SMPL once with PARE or VIBE and parse it as a wild dataset in HumanNeRF by body-centering around the pelvis joint.
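On the body-centering part: the idea is to shift everything so the pelvis (SMPL joint 0) sits at the origin and push the same offset into the camera translation. A minimal sketch, where joints is assumed to come from an SMPL forward pass for that frame, in the same world frame as Th and the extrinsic:

```python
import numpy as np

def center_on_pelvis(joints, Th, E):
    """Shift the body so the pelvis (SMPL joint 0) is at the world origin and
    compensate in the world-to-camera extrinsic so the rendering is unchanged.
    joints: (24, 3) world-space SMPL joints, Th: (3,) body translation,
    E: (4, 4) world-to-camera extrinsic."""
    pelvis = joints[0]
    Th_centered = Th - pelvis
    E_centered = E.copy()
    # Shifting the world by -pelvis is compensated by moving the camera
    # translation forward by R @ pelvis, so projected pixels stay identical.
    E_centered[:3, 3] = E[:3, 3] + E[:3, :3] @ pelvis
    return Th_centered, E_centered
```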
male-3-casual.mp4
male-3-plaza.mp4
The rendering results depend on which SMPL parameters I choose; some of the SMPL parameters predicted by EasyMocap have serious errors.
What exactly do you mean by different SMPL parameters? Are you talking about this, or did you pass any new arguments to mocap.py? Can you let me know?
Dear author,
I rendered a bullet-time effect for an in-the-wild video. But as shown in the videos (rendered frames) below, you can see that the human body does not stand up straight on the ground. Do you know the reason for this and how to solve it?
Thanks a lot!
2.online-video-cutter.com.mp4
3.online-video-cutter.com.mp4