
Discussion/question: What to do with the generated images? (texturing 3D models) #51

Open
Ulf3000 opened this issue Nov 25, 2023 · 1 comment

Ulf3000 commented Nov 25, 2023

Hi, the tool works great. Thanks a lot!

Do you know of a good workflow for texturing my 3D model, though? I already projected the initial texture from the front, but now I need to texture the rest of my monster's body, at least from the side and back.

I thought maybe I could use stencil projection mode in texture painting. Which camera settings would I need to set in Blender or Substance Painter to exactly match the generated images from the different angles?

If you know of any Python code to do this automatically, that would be nice too (see the camera-setup sketch below). zero12345 wouldn't work, as it generates its own mesh; I already have my mesh, from which I made the first original texture (front projection).
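
Something like the following Blender (bpy) sketch might be a starting point for the camera setup. It assumes the fixed poses I've seen described for zero123+ (azimuths starting at 30° and stepping by 60°, elevations alternating between about +30° and -20°, relative to the input view); double-check those numbers against the paper, and treat the radius and lens values as pure guesses to tune until the renders line up with the generated images:

```python
import math

import bpy
from mathutils import Vector

# Assumed fixed zero123+ camera poses: azimuths start at 30 degrees and step
# by 60; elevations alternate between +30 (looking down) and -20 (looking up).
# Verify against the paper. RADIUS and LENS_MM are guesses to tune.
AZIMUTHS = [30, 90, 150, 210, 270, 330]
ELEVATIONS = [30, -20, 30, -20, 30, -20]
RADIUS = 2.5
LENS_MM = 50

def add_view_camera(name, azimuth_deg, elevation_deg):
    """Place a camera on a sphere around the world origin, aimed at it."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    loc = Vector((
        RADIUS * math.cos(el) * math.cos(az),
        RADIUS * math.cos(el) * math.sin(az),
        RADIUS * math.sin(el),
    ))
    cam_data = bpy.data.cameras.new(name)
    cam_data.lens = LENS_MM
    cam = bpy.data.objects.new(name, cam_data)
    cam.location = loc
    # A camera looks down its local -Z axis; aim that axis at the origin.
    cam.rotation_euler = (-loc).normalized().to_track_quat('-Z', 'Y').to_euler()
    bpy.context.collection.objects.link(cam)
    return cam

for i, (az, el) in enumerate(zip(AZIMUTHS, ELEVATIONS)):
    add_view_camera(f"zero123pp_view_{i}", az, el)
```

Each camera could then be used with "Project from View" or as the viewpoint for a stencil brush, so the corresponding generated image lines up with the mesh; if the projections drift, the lens/sensor settings are the first thing to adjust.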

Maybe this could be partly scripted in Blender to become a useful and outstanding texture-pipeline tool.

EDIT: Now that I think about it, would it be possible to send an arbitrary (or fixed) angle to zero123+?
Like this: we move around the object in Blender; when we find a good angle (this might differ per object), we send the camera position and angle (and maybe even an image for ControlNet) to zero123+, generate the next image, and receive it; then we move around the object again, send to zero123+ again, and receive the next image.
The intermediate diffusion steps could be kept in the meantime, so we always get the consistent style and shading of the first/initial picture (similar to how ComfyUI only re-evaluates the changed parts of a diffusion pipeline), just like it does now; a rough sketch of this loop is below.
Maybe ComfyUI would be the tool to make all of this a lot easier: it can already communicate with other programs such as Blender, has a really good codebase, and makes integrating new modules "easy", without having to write everything from scratch.
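
To make the idea concrete, here is a rough sketch of that loop. Everything model-facing is hypothetical: `generate_view` and `project_onto_mesh` are placeholders, since zero123+ exposes no pose-conditioned API today and only emits its fixed set of views in one pass:

```python
import bpy

def current_camera_pose():
    """Read the active camera's world transform from the Blender scene."""
    return bpy.context.scene.camera.matrix_world.copy()

def generate_view(pose_matrix, reference_image_path):
    """Hypothetical: send the camera pose (and a reference image) to a
    pose-conditioned diffusion model and return the generated image path.
    zero123+ does not support this; this is a placeholder for the idea."""
    raise NotImplementedError("placeholder for a pose-conditioned model")

def project_onto_mesh(image_path, pose_matrix):
    """Hypothetical: stencil-project the generated image onto the mesh
    from the given camera pose (e.g. via 'Project from View')."""
    raise NotImplementedError("placeholder for the projection step")

def texture_loop(reference_image_path, n_views):
    """The proposed workflow: pick an angle, generate, project, repeat."""
    for _ in range(n_views):
        input("Position the viewport camera, then press Enter...")
        pose = current_camera_pose()
        image = generate_view(pose, reference_image_path)
        project_onto_mesh(image, pose)
```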

@pragyavaishanav

Hi @Ulf3000. As far as I understand this paper and codebase, it will not allow you to pass any other view to project the texture from. They have their fixed set of angles, and only those can be used (all at once). If you want to do one view at a time, you will need to look elsewhere for a model that generates images consistent with your geometry.
By the way, if you find such models, please share them with me too.
