Try out winnow #86
Conversation
src/compose/comment_strip_iter.rs
.parse_next(input)?;
}
}
}
Up until here is the actual parser code. Everything below is a dance to keep the existing behaviour mostly intact, while also prepping to replace more code.
By the way, are there any extra repositories with shaders that I should ideally test?
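For context, here is a rough, hypothetical sketch of what a winnow-based line-comment stripper could look like. This is not the code from this PR; it assumes winnow's 0.5-style `&mut` input API and ignores block comments entirely:

```rust
use winnow::combinator::opt;
use winnow::token::take_while;
use winnow::{PResult, Parser};

// Match a single `//` line comment and return the matched text.
fn line_comment<'a>(input: &mut &'a str) -> PResult<&'a str> {
    ("//", take_while(0.., |c| c != '\n'))
        .recognize()
        .parse_next(input)
}

// Replace every line comment with spaces so that the byte positions of the
// remaining code are preserved (block comments are ignored in this sketch).
fn strip_line_comments(source: &str) -> String {
    let mut out = String::with_capacity(source.len());
    let mut input = source;
    while !input.is_empty() {
        match opt(line_comment).parse_next(&mut input) {
            Ok(Some(comment)) => out.push_str(&" ".repeat(comment.len())),
            _ => {
                // Not at a comment: copy one character and continue.
                let mut chars = input.chars();
                out.push(chars.next().unwrap());
                input = chars.as_str();
            }
        }
    }
    out
}
```

The real comment_strip_iter has more to handle (block comments, strings, the line-by-line behaviour mentioned below); this is only meant to show the shape of the winnow style, where parsers are plain functions over `&mut` input combined with tuples and `parse_next`.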
bevy would be the obvious choice: run a few of the graphics examples (e.g. lighting, 3d_scene, bloom, fog, something from the 2d examples) and check for errors and equivalent output (though 99% of the time, if it doesn't error it'll match).
I also benchmarked winnow's compile times vs nom's compile times, and the two are very comparable (under 2 seconds).
I'd also be interested in runtime benchmarks. naga_oil can take quite a while to assemble bevy shaders.
@Elabajaba I see, then I'd better add criterion in a separate PR. Then we'd have a before-and-after for comparison. Should such runtime benchmarks also run in GitHub Actions? (That would involve a bit more setup and a slightly different library.)
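A minimal criterion setup for that could look roughly like the following. This is only a sketch, not part of this PR: the bench file and shader paths are made up, criterion would be added as a dev-dependency, and the `[[bench]]` entry in Cargo.toml needs `harness = false`:

```rust
// benches/compose.rs (hypothetical file) -- run with `cargo bench`.
use criterion::{criterion_group, criterion_main, Criterion};
use naga_oil::compose::{ComposableModuleDescriptor, Composer, NagaModuleDescriptor};

fn bench_compose(c: &mut Criterion) {
    c.bench_function("compose shader", |b| {
        b.iter(|| {
            // Build the composer from scratch each iteration so the full
            // parse + compose path is measured, not a warm cache.
            let mut composer = Composer::default();
            composer
                .add_composable_module(ComposableModuleDescriptor {
                    source: include_str!("shaders/util.wgsl"), // made-up path
                    file_path: "shaders/util.wgsl",
                    ..Default::default()
                })
                .unwrap();
            composer
                .make_naga_module(NagaModuleDescriptor {
                    source: include_str!("shaders/main.wgsl"), // made-up path
                    file_path: "shaders/main.wgsl",
                    ..Default::default()
                })
                .unwrap()
        })
    });
}

criterion_group!(benches, bench_compose);
criterion_main!(benches);
```

A before/after comparison would then just be running `cargo bench` on the nom branch and the winnow branch and comparing criterion's reports.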
it's not ideal, but there is an example which runs a bunch of compiles over various configurations. Run it twice to populate the driver cache the first time; really we want to check the runtime performance for a shader that's been used before.
@robtfm

Status quo

Currently, the composer will start by parsing every module that's being added (line 1403 in 37a472c). This parsing step mainly figures out "which imports" and "which defines", which get_preprocessor_metadata returns. We're also doing a few secondary tasks, like assigning values to @binding(auto).

Then, when it is time to build the final module, we have to put everything together (line 1551 in 37a472c). It resolves the imports, using self.module_sets from the previous step. Then we create a derived module and add everything from the imports to it (lines 1674 to 1685 in 37a472c).

Finally, we do a bit of finishing work, like dealing with overrides and validation.

Is that a reasonably accurate overview so far?

Refactoring proposal

Currently, we are preprocessing shader code multiple times. Instead, I would parse the shader code once and generate a syntax tree. This syntax tree only understands our preprocessor code; normal WGSL code is just a raw string in the tree.

Then, when it's time to build the final module, we only need to look at the syntax tree. Lots of tasks become very easy, such as dealing with an #ifdef/#endif pair: all we have to do is "when printing the tree, only print branches that are true".

Does that make sense, and would it be reasonable? If yes, I'll go ahead and take a stab at it.
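To make that concrete, the tree could be shaped roughly like this (a sketch with placeholder names, not a final API):

```rust
// Hypothetical shape of the preprocessor syntax tree (placeholder names).
enum PreprocessorNode {
    // A run of ordinary WGSL source, kept as an uninterpreted string.
    Raw(String),
    // #ifdef NAME ... #else ... #endif
    IfDef {
        name: String,
        if_true: Vec<PreprocessorNode>,
        if_false: Vec<PreprocessorNode>,
    },
    // #import path (optionally with a list of items)
    Import { path: String, items: Vec<String> },
    // #define NAME VALUE
    Define { name: String, value: Option<String> },
}

// "When printing the tree, only print branches that are true."
fn emit(nodes: &[PreprocessorNode], defs: &std::collections::HashSet<String>, out: &mut String) {
    for node in nodes {
        match node {
            PreprocessorNode::Raw(text) => out.push_str(text),
            PreprocessorNode::IfDef { name, if_true, if_false } => {
                let branch = if defs.contains(name) { if_true } else { if_false };
                emit(branch, defs, out);
            }
            // Imports and defines feed the composer; they produce no output text here.
            PreprocessorNode::Import { .. } | PreprocessorNode::Define { .. } => {}
        }
    }
}
```

Imports and defines would become structured nodes as well, so resolving them would no longer require re-scanning the raw shader text.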
yep, that's accurate.
yes, if you can keep the preprocessor api functionally unchanged that should work just fine. one subtlety to note - the required imports are based on usage of items in the code (conditional for the …
This actually raises an interesting future question for me:

Status Quo

As in, if I write code like this, then the scoping changes depending on what the preprocessing result is:

    fn main() {
        let a = 7;
        {
            let a = vec3(1., 1., 1.); // This is a different `a` variable
    #ifdef FOO
        }
        {
    #endif
            let b = a + 9;
            // Do something with a
        }
    }

It could turn into either of those options:

    fn main() {
        let a = 7; // Pointless
        {
            let a = vec3(1., 1., 1.); // This a is used
            let b = a + 9;
        }
    }

    fn main() {
        let a = 7; // This a is used
        {
            let a = vec3(1., 1., 1.); // Pointless
        }
        {
            let b = a + 9;
        }
    }

Or it could change whether something is an identifier or not, with constructions along the lines of:

    #ifdef FOO
    enable
    #else
    var
    #endif
    rounding_mode; // Either this enables an extension, or it is a variable called rounding_mode

Alternative option

Rust looked at this and went "we don't want that", and instead only lets …
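For reference, Rust's `#[cfg]` conditional compilation can only be attached to complete items or statements, so a configuration can never change where a scope begins or ends. A minimal sketch (the flag choice here is arbitrary):

```rust
// Rust's conditional compilation attaches to whole items or statements, so a
// disabled branch can never open a scope that an enabled branch closes.
#[cfg(debug_assertions)]
fn scale() -> f32 {
    2.0
}

#[cfg(not(debug_assertions))]
fn scale() -> f32 {
    1.0
}

fn main() {
    let a = 7.0;
    // Only the body of `scale` changes between configurations; the scoping of
    // `a` and `b` is the same either way.
    let b = a * scale() + 9.0;
    println!("{b}");
}
```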
I've been working on the AST rewrite and a few other things. Now I'm realising that if I do everything at once, this PR would explode. So I wanted to ask what your preferred way forward would be.

For example, I could start with a rewrite where I keep the public API as similar as possible. I'll definitely have a few places where there will technically be breaking changes, but I'd test that with Bevy and the unit tests.

An alternative option would be: I break compatibility wherever reasonably justified, and submit one massive pull request. This is obviously less effort for me, and should lead to faster results. I would also take on the task of updating Bevy.

And finally, there's the option of "first submit a massive pull request, then decide how we're going to break it down and integrate it".
Closing since I won't finish this PR in the foreseeable future.
I took a stab at replacing one of the existing pieces of code with winnow.

I went for winnow instead of the usual nom library, because epage's winnow is significantly easier to use and debug. It's also the second most popular parser combinator library, so it's a reasonably well-tested piece of technology.

The code is twice as long as it should be, because I tried to keep the existing "line by line" logic. It'll get much shorter as I convert more of the source code.