
Questions about transparency #21

@SilentNightSound

Description


Hello! Apologies in advance if this isn't the correct place to ask about it, but I have been struggling with getting transparency functional in 3dmigoto for nearly a year now and was wondering if you had any insights.

Broadly speaking, I have had two major issues when using transparency with 3dmigoto:

1) Being able to control the amount and location of transparency on an object
2) Handling how draw call order affects what parts of the object are drawn first/last


For 1), as I understand it there are two main ways to create a custom shader that makes an object transparent: you can use blend_factor to give the entire part a uniform transparency, or you can use one of the other blend modes to make it dependent on the state of the render target outputs (such as SRC_ALPHA, SRC1_ALPHA, etc.).
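For concreteness, here is roughly what I mean by the two approaches in d3dx.ini terms (hashes and section names are placeholders, and this is an untested sketch):

```ini
; Placeholder hash; attach a custom shader to the target draw call
[ShaderOverrideTransparentPart]
hash = 0123456789abcdef
run = CustomShaderFade

; Method 1: uniform transparency for the whole part via blend_factor
; out = src * factor + dst * (1 - factor)
[CustomShaderFade]
blend = ADD BLEND_FACTOR INV_BLEND_FACTOR
blend_factor[0] = 0.5
blend_factor[1] = 0.5
blend_factor[2] = 0.5
blend_factor[3] = 0.5
handling = skip
drawindexed = auto

; Method 2: per-pixel transparency driven by the shader's o0.w output
; out = src * src.a + dst * (1 - src.a)
[CustomShaderFadeAlpha]
blend = ADD SRC_ALPHA INV_SRC_ALPHA
handling = skip
drawindexed = auto
```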

For the blend_factor method, as far as I know there is no way to set the factor from a texture or to have it span a range of values, so you have to "sacrifice" an entire part to a single uniform transparency (a problem in games like Genshin, where a significant number of models are drawn with only 1-2 parts) and can't do things like gradients.

For the method that uses the render target state, I have found that a large number of games I mod already use all of the o0 and o1 channels (x, y, z, w), so any attempt to also use them for transparency either doesn't work or clashes with the channel's existing "meaning" (e.g. the part becomes transparent but also glows).

The only general method I have found so far that works everywhere is to pass the current render target into the shader and manually blend in HLSL using something like lerp, but that adds a bunch of extra complexity and also loses some of the benefits of doing the blend in the OM stage.
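For reference, the manual blend I ended up with looks something like this (the slot number and mask texture are arbitrary choices of mine, the color is a stand-in for the real shading result, and the render target copy is bound from the override with something like `ps-t100 = copy o0`):

```hlsl
// Sketch of the manual-blend fallback, replacing OM blending in HLSL.
Texture2D<float4> Background : register(t100); // copy of the current render target, bound via d3dx.ini
Texture2D<float4> AlphaMask  : register(t101); // hypothetical per-pixel transparency mask
SamplerState      Samp       : register(s0);

float4 main(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target0
{
    float4 color = float4(1.0, 0.5, 0.2, 1.0);   // stand-in for the normal shading result
    float  alpha = AlphaMask.Sample(Samp, uv).r; // per-pixel transparency
    float4 dst   = Background.Load(int3(pos.xy, 0));
    // Manual equivalent of OM blending: out = src*alpha + dst*(1-alpha)
    return float4(lerp(dst.rgb, color.rgb, alpha), 1.0);
}
```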

So, I was wondering if either a) there is a way to set the blend_factor dynamically based on something like a texture or equation, or b) it is possible to use things like SRC_ALPHA even when all the channels are full (maybe by moving the channel? as far as I can tell from the docs only o0 and o1 are supported for blend). Or perhaps c), an easier way that I might be overlooking?


For 2), the issue lies in the order parts are drawn. First, I am working under the assumption that 3dmigoto is unable to change the order objects are drawn in within the frame, or to delay certain calls until others have completed (I haven't seen anything to that effect in the code or in my searching of the documentation/forums, though I may have missed it). This means we are "stuck" with the order the game decides to draw objects in, which becomes an issue when trying to implement transparency.

The key issue I am running into is that because I do not have control of the order things are drawn in, there is no guarantee that a transparent object will be drawn after the opaque objects that lie behind it. So when it comes time to blend the transparent object with the background, the background is still all-black and the object becomes transparent to the wrong thing (e.g. a character's clothing being drawn before their base model, so you "see through" the model to the background, or the character being drawn before the scenery behind them).

In some cases it is possible to shift the transparent part to another portion of the model that is drawn after the rest, but not always: sometimes different parts of the model have different properties (glow, skin shader, etc.), sometimes the draw order changes depending on the scene, sometimes the vertex groups you need are only on a single part, and sometimes the model only has a small number of parts (and as far as I know you can't blend a transparent part with an opaque one in the same call?). And it usually doesn't fix blending with the background at all.

I know there are some techniques for order-independent transparency; is 3dmigoto able to use/activate any of them?

This is the issue I am struggling with most. I have come up with three possible solutions, but have run into difficulties with each of them:

a) Draw a part in two passes: draw the opaque portion first in one PS, then pass that PS output to a second PS which manually blends the transparent part with the opaque one. This would put both the opaque and transparent portions on the same draw call and guarantee no draw-order issues within that call, though it wouldn't fix order issues with other parts or the background. I have not yet found a way to pass the output from one PS into a second one, though; from all my testing, running multiple PS has them all execute simultaneously rather than in sequence (e.g. there is no way to make calculations for some vertices dependent on the results from other ones).
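In case it clarifies what I mean by (a), the structure I've been trying is roughly the following (all names are placeholders, and this is exactly the part I can't get working, since the second pass never seems to see the first pass's output):

```ini
; Intermediate target for the opaque pass
[ResourceOpaquePass]

[ShaderOverrideTwoPass]
hash = 0123456789abcdef
run = CustomShaderOpaque
run = CustomShaderBlend
handling = skip

; Pass 1: draw the opaque portion into the intermediate target
[CustomShaderOpaque]
ps = OpaquePS.hlsl
o0 = ResourceOpaquePass
drawindexed = auto

; Pass 2: read the opaque result and blend the transparent portion over it
[CustomShaderBlend]
ps = BlendPS.hlsl
ps-t100 = ResourceOpaquePass
drawindexed = auto
```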

b) Output the opaque and transparent data separately, then do the blending of the transparent part later in the frame once more of the scene has been drawn (or even one frame later, using the result of the previous frame, though that would introduce a one-frame lag for transparency). This seems the most promising, but I have been running into issues with adding more render targets: games either ignore the added ones, or the added ones don't clear properly between frames and accumulate junk data. It also isn't a very general method, since you need to identify when in the frame to actually do the blending, which varies from game to game (many games flip the screen at some point while drawing). Finally, because you are doing the blend manually, you have to distinguish between objects in front of and behind the character yourself, and you can't rely on the depth data unless you store it somehow.
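The rough shape of what I've been attempting for (b), in case it helps (hashes and names are placeholders; the explicit clear in [Present] is my attempt at the junk-data problem, and the late-frame hash would have to be found per game):

```ini
; Side buffer to hold the transparent layer until late in the frame
[ResourceTransparencyBuffer]

; Clear the buffer every frame so it doesn't accumulate junk data
[Present]
clear = ResourceTransparencyBuffer

; Redirect the transparent part's output into the side buffer
[ShaderOverrideTransparentPart]
hash = 0123456789abcdef
o0 = ResourceTransparencyBuffer

; A shader known to run after most of the scene is drawn (game-specific)
[ShaderOverrideLateBlend]
hash = fedcba9876543210
ps-t100 = ResourceTransparencyBuffer
```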

c) Take all the data (vb/ib/cb/shader/etc.) from the draw call and move the entire call later in the frame by creating a new custom shader/call. I haven't found a way to "insert" new calls, and even when I override a later call with all the relevant data, parts still seem to be missing (e.g. I can't find any way to turn on glow if the original shader had it and the new one does not). This approach also has the issue that later calls often depend on the output of earlier ones, so moving the call potentially deprives later shaders of the information they need to function.
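And the shape of what I tried for (c) (again all placeholders; I'm not certain of the exact draw syntax, the index counts have to be hard-coded since there is no caller draw to copy, and the replay shaders would also need every cb/texture the original call used, which is where things like glow go missing):

```ini
; Buffers to stash the original call's geometry in
[ResourceSavedVB]
[ResourceSavedIB]

; Capture the call's buffers, then suppress the original draw
[ShaderOverrideOriginalCall]
hash = 0123456789abcdef
ResourceSavedVB = copy vb0
ResourceSavedIB = copy ib
handling = skip

; A shader known to run later in the frame (game-specific anchor)
[ShaderOverrideLateAnchor]
hash = fedcba9876543210
run = CustomShaderReplay

; Re-issue the draw with the saved buffers and replacement shaders
[CustomShaderReplay]
vs = ReplayVS.hlsl
ps = ReplayPS.hlsl
vb0 = ResourceSavedVB
ib = ResourceSavedIB
; index count/offsets filled in by hand since there is no caller draw
drawindexed = 30000, 0, 0
```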

Do you think any of these methods are viable? Or is there a simpler method that I am overlooking?


Apologies for the long wall of text, but any insights you have would be appreciated!
