New ht thcovmat #2126
base: master
Conversation
Greetings from your nice fit 🤖 !
Check the report carefully, and please buy me a ☕ , or better, a GPU 😉!
n3fit/src/n3fit/layers/DIS.py
Outdated
fktable_arr,
dataset_name,
boundary_condition=None,
operation_name="NULL",
Keyword arguments you don't use at this level don't have to be specified explicitly; that's what `**kwargs` is for.
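A minimal sketch of the suggestion: a subclass only spells out the arguments it actually touches and forwards everything else via `**kwargs`. The class bodies below are illustrative, not the actual n3fit layers.

```python
# Sketch: the subclass does not re-declare keyword arguments it never uses.
# `Observable`/`DIS` mirror the n3fit layer names, but the bodies are toy code.

class Observable:
    def __init__(self, fktable_arr, dataset_name, boundary_condition=None, operation_name="NULL"):
        self.fktable_arr = fktable_arr
        self.dataset_name = dataset_name
        self.boundary_condition = boundary_condition
        self.operation_name = operation_name

class DIS(Observable):
    # Only the arguments DIS actually needs appear in the signature;
    # everything else rides along in **kwargs to the parent class.
    def __init__(self, fktable_arr, dataset_name, **kwargs):
        super().__init__(fktable_arr, dataset_name, **kwargs)

layer = DIS([1.0, 2.0], "NMC", operation_name="RATIO")
print(layer.operation_name)  # -> RATIO
```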
n3fit/src/n3fit/layers/DIS.py
Outdated
This function is very similar to `compute_ht_parametrisation` in
validphys.theorycovariance.construction.py. However, the latter
accounts for shifts in the 5pt prescription. As of now, this function
is meant to work only for DIS NC data, using the ABMP16 result.
Maybe reference Eq. 6 of https://arxiv.org/pdf/1701.05838
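For context, a common twist-4 parametrisation of the DIS structure functions, which ABMP16-style fits use (this is the generic form, not necessarily Eq. 6 of the reference verbatim), is:

```latex
% Generic power correction to a structure function: a leading-twist
% (target-mass-corrected) piece plus an h_i(x)/Q^2 higher-twist term.
% The normalisation and x-dependence of h_i(x) come from the fit.
F_i(x, Q^2) \;=\; F_i^{\mathrm{LT+TMC}}(x, Q^2) \;+\; \frac{h_i(x)}{Q^2},
\qquad i = 2,\, L
```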
n3fit/src/n3fit/layers/DIS.py
Outdated
x = self.exp_kinematics['kin1']
y = self.exp_kinematics['kin3']
Q2 = self.exp_kinematics['kin2']
N2, NL = 1#compute_normalisation_by_experiment(self.dataname, x, y, Q2)
Where is this function defined? Also, I think you can do `N2 = NL = 1`, but not `N2, NL = 1`.
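To illustrate the point: chained assignment binds both names, whereas tuple unpacking of a bare integer raises at runtime. A small self-contained sketch:

```python
# Chained assignment binds both names to the same value:
N2 = NL = 1
print(N2, NL)  # -> 1 1

# Tuple unpacking requires an iterable of matching length,
# so unpacking a bare int raises a TypeError:
try:
    N2, NL = 1  # cannot unpack a non-iterable int
except TypeError as err:
    print(type(err).__name__)  # -> TypeError
```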
DIS.py is not in sync with the new way shifts are computed, so the function that adds the shift to the theory predictions can be removed. I'll work on that later, as we discussed; I should have deleted it already.
if (sam_t0 := file_content.get('sampling')) is not None:
    SETUPFIT_FIXED_CONFIG['separate_multiplicative'] = sam_t0.get(
        'separate_multiplicative', False
    )
Why do we need this here?
I don't remember why I implemented that... I think I can remove it.
# NOTE: from the perspective of the fit, scalevar and ht uncertainties are the same since
# they are saved under the same name
Ah yes, we may want to start thinking of a long-term solution as well. A tree of if-statements that grows exponentially with each source of theory uncertainty (scales, alphas, HT) is not super nice.
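One way to avoid the growing if-tree could be a registry that maps each uncertainty source to the callable that builds its shifts; adding a new source then means registering one function rather than adding a branch. This is a hypothetical sketch, not the actual validphys API (names and signatures are made up):

```python
# Hypothetical registry-based dispatch for theory-uncertainty sources.
# Source names and the shift functions are illustrative placeholders.

SHIFT_BUILDERS = {}

def register_source(name):
    """Decorator that records a shift-building function under `name`."""
    def wrap(fn):
        SHIFT_BUILDERS[name] = fn
        return fn
    return wrap

@register_source("scalevar")
def scalevar_shifts(preds):
    return [0.5 * p for p in preds]  # toy shift, stands in for 5pt/7pt/9pt logic

@register_source("ht")
def ht_shifts(preds):
    return [0.1 * p for p in preds]  # toy shift, stands in for the HT parametrisation

def build_covmat_shifts(sources, preds):
    # Each new source (e.g. "alphas") is one registered function,
    # not another branch in a nested conditional.
    return {name: SHIFT_BUILDERS[name](preds) for name in sources}

shifts = build_covmat_shifts(["scalevar", "ht"], [1.0, 2.0])
print(sorted(shifts))  # -> ['ht', 'scalevar']
```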
# NOTE: Same as `groups_data_by_process` in `construction.py`
procs_data = collect("data", ("group_dataset_inputs_by_process",))
feel free to delete one (in master)
Hi @roy, I've updated the code so that the covmat for power corrections can now be constructed as in the case of scale variations. Please have a look and let me know if something is unclear. I'd also like you to double-check the functions that compute the shifts, in particular where normalisation factors and conversion factors are used. There are a few things here and there that I still don't like, but you're free to propose modifications.
@@ -47,11 +49,11 @@ def theory_covmat_dataset(results, results_central_bytheoryids, point_prescripti
     return thcovmat


-ProcessInfo = namedtuple("ProcessInfo", ("preds", "namelist", "sizes"))
+ProcessInfo = namedtuple("ProcessInfo", ("preds", "namelist", "sizes", "data", "data_spec"))
I included another key in the namedtuple containing the information I need to compute the shifts for the power corrections. I'm not sure this is the best solution, though.
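If the extra fields stay in the namedtuple, one option could be to give them defaults so existing call sites that build `ProcessInfo` without power-correction data keep working unchanged. A sketch under that assumption (field contents are placeholders):

```python
from collections import namedtuple

# `defaults` applies to the rightmost fields, here `data` and `data_spec`,
# so older construction sites that pass only the first three fields still work.
ProcessInfo = namedtuple(
    "ProcessInfo",
    ("preds", "namelist", "sizes", "data", "data_spec"),
    defaults=(None, None),
)

old_style = ProcessInfo(preds=[1.0], namelist=["NMC"], sizes=[1])
print(old_style.data, old_style.data_spec)  # -> None None
```

An alternative would be a `dataclass`, which makes optional fields and later extensions a bit more explicit.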
 return s


 @check_correct_theory_combination
-def covs_pt_prescrip(combine_by_type, point_prescription):
+def covs_pt_prescrip(
I had to add additional dependencies here for the power-correction case. I don't know if there is a cleverer solution; maybe extracting these quantities from the context?
point_prescription,
pdf: PDF,
power_corr_dict,
pc_included_prosc,
I think the tests fail because there is no default for pc_included_procs (typo in the code) and pc_excluded_exps. What do you suggest? Should I add two functions in config.py that default these two values when they are not specified in the runcard?
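If defaulting in config.py is the way to go, it could look roughly like this. The `parse_*` method names follow the validphys convention, but the class and defaults here are purely illustrative assumptions:

```python
# Hypothetical sketch: two parse rules that fall back to a safe default
# when the runcard omits the keys. Not the actual validphys Config class.

class Config:
    def __init__(self, runcard):
        self.runcard = runcard

    def parse_pc_included_procs(self):
        # Assumed default: an empty list, i.e. no process selected explicitly.
        return self.runcard.get("pc_included_procs", [])

    def parse_pc_excluded_exps(self):
        # Assumed default: exclude no experiments.
        return self.runcard.get("pc_excluded_exps", [])

cfg = Config({})
print(cfg.parse_pc_included_procs())  # -> []
print(cfg.parse_pc_excluded_exps())   # -> []
```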
This is a follow-up of #1865, rebased from the current master.