WIP: Optimize Maze Generation #856
base: main
Conversation
Related discussion about layout database design: #849
That is quite impressive. Did you verify that the new algorithm and the old one generate exactly the same maze when started with the same random seed?
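One way to check this is to run both implementations over a batch of seeds and compare the resulting grids cell by cell. The generator below is a toy stand-in (not pelita's actual algorithm); it only sketches what such a comparison harness could look like:

```python
import random

def generate_maze(width, height, seed):
    # Toy stand-in for a maze generator: deterministically "carves"
    # walls from a seeded RNG. NOT pelita's real algorithm.
    rng = random.Random(seed)
    return [[rng.choice("# ") for _ in range(width)] for _ in range(height)]

def same_for_all_seeds(gen_old, gen_new, seeds, width=16, height=8):
    # Seed-for-seed comparison of two generator implementations.
    return all(gen_old(width, height, s) == gen_new(width, height, s)
               for s in seeds)

# In practice gen_old / gen_new would be the old and the optimized version.
assert same_for_all_seeds(generate_maze, generate_maze, seeds=range(100))
```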
Half a second is too much to run at every game, but if we no longer require removing dead ends and chambers, the whole thing would be even faster, no? And in that case we could relax our requirements: we could hard-code that we want to "trap" a maximum of 33% of the food pellets in chambers/dead ends, and then do our best on the fly depending on how many "trapped" tiles are available.
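The budgeting could look roughly like this; the function name and the food-placement API are made up for illustration, not pelita's actual interface:

```python
import random

# Relaxed requirement: instead of removing all chambers and dead ends,
# cap how much food may end up "trapped" in them.
MAX_TRAPPED_FRACTION = 0.33

def place_food(free_tiles, trapped_tiles, n_food, rng):
    # free_tiles / trapped_tiles: disjoint lists of coordinates;
    # trapped tiles lie inside chambers or dead ends.
    max_trapped = int(MAX_TRAPPED_FRACTION * n_food)
    # Use whatever trapped tiles happen to exist, up to the budget.
    n_trapped = min(max_trapped, len(trapped_tiles), n_food)
    food = rng.sample(trapped_tiles, n_trapped)
    food += rng.sample(free_tiles, n_food - n_trapped)
    return food

rng = random.Random(42)
free = [(x, 0) for x in range(20)]
trapped = [(x, 1) for x in range(5)]
food = place_food(free, trapped, n_food=12, rng=rng)
assert len(food) == 12
# at most int(0.33 * 12) = 3 pellets may land on trapped tiles
assert sum(1 for t in food if t in trapped) <= 3
```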
Given the failing test, it seems to me that the change also alters the generated maze. As I said, if we relax our requirements we may not need to remove chambers/dead ends, so the test failure would be irrelevant.
No, but for that I would need to align
Okay, but most of this time is spent importing pelita modules, which are not required at all for the maze generation.
The tests fail because I removed
@Debilski: what do you think of this approach? I would try to implement it together with @jbdyn on Wednesday in a monster PR. Plan:
@jbdyn Could you rebase this branch onto the updated main? This will make comparisons easier.
Let me see if I can fix #854 before Wednesday, as this is probably useful for testing (although shell redirection will already do the job well). Can we be certain, though, that the same maze will be generated everywhere? And even if that is the case, I see a few UX problems:
Cons of the new approach:
Not unsolvable and maybe not too bad (I dislike the salt here, though; maybe you have a better idea), but I wanted to note these things.
@Debilski Done.
In my experience, everyone has always assumed that the mazes are generated at run time. The team that asked about it was very surprised when I said that they are in fact pre-generated, and that was the moment they chose to hardcode behavior dependent on the layout name. We decided to have 1000 mazes to make that impossible. Having mazes generated at run time will not surprise anyone, as far as my experience tells me.
But that is still possible:
`pelita --seed XXX --dump-layout /tmp/my.layout`
With
`pelita --layout /tmp/my.layout`
Given that
But hey, there's no need to do it if maze generation is fast.
You lost me here. Why does a maze need an id?
See the
I am even more lost here. Why do we care about
Survivorship bias? Maybe the teams that were not confused about the layouts never asked? 🙃

The problem I am describing boils down to: how do two separate groups in a team communicate which layout they should use for testing? The first group notices a problem and tells the other group to compare it with their own implementation, or whatever.

Currently: the first group has the UI open, sees the layout name there and communicates it to the second group. With auto-generated layouts, they must scroll back to find the seed on the command line (buried in lines of debugging output), and then recite something like 20 numbers to the other group so that they can use this as a seed to get the same layout. (And this assumes that this is actually stable between different computers.)

My suggested solution here was to generate a short id from the main seed that is given to the layout generator. This id would be shown in the UI and could be used for communication. (The second group would for example use

Obviously people can save and send around layout files, but this makes things quite a bit more involved compared to what we had before.

Some thinking later: what we could do is suggest to the teams that they pre-generate their own set of mazes for themselves in their group repo, and whenever pelita is run with

Potential breakage, though: there will be this one team who does this, generates only three mazes, and then loses because they made some hard-coded assumptions about these mazes.
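A sketch of the short-id idea, assuming the id is simply a truncated hash of the main seed (the salting question is left out, and the id length is arbitrary):

```python
import hashlib

def layout_id(seed: int, length: int = 6) -> str:
    # Derive a short, human-communicable id from the main game seed.
    # Hypothetical scheme for illustration; the real one (salt, length,
    # alphabet) would have to be decided in pelita.
    digest = hashlib.sha256(str(seed).encode("ascii")).hexdigest()
    return digest[:length]

# The id is stable across machines, so one group can read it off the
# UI and the other group can refer to the same layout by it.
print(layout_id(123456789))
```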
I'm starting to like the pregen idea. If we had a command that does this automatically, we could tell the teams to use it on the first day or so:
And the layout_folder then contains |
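A minimal sketch of what such a pregen command could do; the file naming and the maze content here are invented stand-ins, not pelita's layout format:

```python
import random
from pathlib import Path

def pregenerate(folder, n_mazes=100, base_seed=0, width=16, height=8):
    # Write n_mazes layout files, each from a seed derived from base_seed,
    # into a folder the team can commit to their group repo.
    out = Path(folder)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(n_mazes):
        rng = random.Random(f"{base_seed}:{i}")  # one derived seed per maze
        rows = ["".join(rng.choice("# ") for _ in range(width))
                for _ in range(height)]
        (out / f"maze_{i:04d}.layout").write_text("\n".join(rows) + "\n")

pregenerate("layout_folder", n_mazes=3)
```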
Really, I have never seen anyone obsess about replaying on the same maze. When this happened, it was always connected to reusing the very same seed, i.e. replaying the very same game. I have seen seeds saved in files and communicated through git, never layout names. Even in the last TU course, a group had a collection of seeds to explore because of problems with the corresponding games. Having the same layout but a different seed seems to me a very exotic debugging configuration (which is of course still achievable, just not via an id shortcut). If there is a very particular maze where something absurd happens, then you reuse the seed and use
That's my point, though. There is no need to write down a layout name or put it into git; they would just shout it over the table. But I was only listing the differences that I see. If you think they can be neglected, then I am fine with that.
Hey guys,
analogous to #852, I tried to use the first version of the optimized chamber-finding algorithm in the maze generation as well.
The timings went from several seconds down to less than half a second consistently for script execution.
However, looking at the profiling, only 5% (25 ms) is actual maze generation; the rest of the time is spent on imports:
(profiling screenshots: absolute times / relative times)
Would that be fast enough to generate mazes on-the-fly?
For now, I think this would not make the layout database obsolete as one still does not have direct control over the number of dead ends and chambers.