Cleaning up the codeTiming reports #2116
Conversation
Unsolicited PR review and approval
Co-authored-by: Tony Alberti <[email protected]>
Thanks @john-science !
Thoughts on replacing this (`armi/armi/operators/operator.py`, line 506 in 904f06c) with:

```python
with self.timer.getTimer(f"{interface.name}, {interactionName}, {cycleNodeTag}"):
```

?
Well, right now the message is:

```python
interactionMessage = " {} interacting with {} ".format(
    interactionName, interface.name
)
```

And I'm not sure I like "interacting with". I was thinking something more like `f"{interface.name}.{interactionName}()"` would be nicer. But I think if we put `cycleNodeTag` in there it would break the timer reports: the "average" and "cumulative" columns for these calls would be broken.
Not sure I follow how the average and cumulative would be broken. Wouldn't the timers just be separated out by cycle and time node? So if we have 1 cycle and 2 time nodes for a simulation, it would go from something like
to something like
I'd change this to be

```python
with self.timer.getTimer(f"{interface.name}, {interactionName}, c{self.r.p.cycle}n{self.r.p.timeNode}"):
```

That won't separate out iters for interactCoupled, but I think it's ok to show those with numIters > 1.
Right... so why have the average/cumulative/iters columns if everything is split out by cycle and node? For a 6-cycle run with dozens of time nodes, it'll be a lot harder to read the final table and see what percentage of the run was due to some particular interface. And, I think, that is what Mark wants: to be able to look at an interface line and the new "total time" at the bottom, and get a rough estimate at a glance of what percentage of the run was due to a particular interface. (I am like 95% on this.)
In the example I gave where there is an even split, there's arguably no utility. But if one cycle/timenode takes way longer than another, there's arguably utility. There's also a timer report at EOC, so there's at least some breakup between cycles. I would settle for

though. I won't put my foot down on splitting by cycle/time node.
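The trade-off both sides are describing can be sketched in a few lines (hypothetical names; this is not ARMI code). Without the cycle/node tag, repeated interactions aggregate into one row with a meaningful average; with the tag, every cycle/node pair becomes its own single-iteration row:

```python
# Sketch of the naming trade-off under discussion (hypothetical helper).
timings = {}  # name -> (cumulative seconds, num iterations)

def record(name, seconds):
    total, iters = timings.get(name, (0.0, 0))
    timings[name] = (total + seconds, iters + 1)

# Untagged name: 1 cycle x 2 nodes -> one row, iters == 2, average works.
for cycle in range(1):
    for node in range(2):
        record("thermal interactEveryNode", 1.0)

# Tagged name: the same calls -> one row per cycle/node, each iters == 1,
# so the "average" and "num iterations" columns carry no extra information.
for cycle in range(1):
    for node in range(2):
        record(f"thermal interactEveryNode c{cycle}n{node}", 1.0)
```

For a 6-cycle run with dozens of nodes, the tagged scheme multiplies the row count accordingly, which is the readability concern raised above.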
* origin/main:
  - OperatorMPI doesn't need to bcast quits if there no other workers (#2129)
  - Cleaning up the codeTiming reports (#2116)
  - Adding basic documentation for axial expansion (#2119)
  - Adding Reactor construction hook to Database.load() (#2115)
  - Fixing a couple of plots to use initial block height (#1998)
  - Updating Settings docstring to reflect mutability (#2131)
  - Removing 22 unused Parameters (#2130)
  - Reducing the warnings from Block.autoCreateSpatialGrids (#2117)
What is the change? Why is it being made?
This is a cleanup of the code timing reports.
In short, I removed the columns "is active" and "time since" and changed the name of one column from "pauses" to "num iterations".
I also did some basic code cleanup, added some docstrings, and added some unit tests.
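For illustration, a minimal sketch of what a cleaned-up report row might look like after this change (hypothetical `formatReport` helper; not the actual ARMI report code). The "num iterations" column replaces the old "pauses" name, and "is active"/"time since" are gone:

```python
# Hypothetical sketch of a post-cleanup timing-report table.
def formatReport(timings):
    """Render {name: (cumulative seconds, num iterations)} as a table."""
    header = f"{'NAME':<40} {'CUMULATIVE (s)':>14} {'AVERAGE (s)':>12} {'NUM ITERATIONS':>14}"
    lines = [header]
    for name, (total, iters) in sorted(timings.items()):
        lines.append(f"{name:<40} {total:>14.2f} {total / iters:>12.2f} {iters:>14d}")
    return "\n".join(lines)

report = formatReport({"fuelHandler interactBOC": (4.0, 2)})
print(report)
```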
Note
Per Tony's request, this PR has a small API change that I will have to fix downstream.
SCR Information
Change Type: trivial
One-Sentence Description: Cleaning up the codeTiming reports.
One-line Impact on Requirements: NA
Checklist
- The documentation is up-to-date in the `doc` folder.
- The dependencies are up-to-date in `pyproject.toml`.