
Problem fitting NLO perturbative charm #1167

Closed
RoyStegeman opened this issue Mar 24, 2021 · 61 comments
Labels
bug Something isn't working

Comments

@RoyStegeman
Member

RoyStegeman commented Mar 24, 2021

There is a problem when trying to fit NLO perturbative charm. Namely, positivity, integrability and the validation threshold all fail, and the arc-length is considerably higher than for the NNLO fitted-charm fit. The chi2 per dataset is roughly:

Epoch: 16000
DEUTERON: 2.175608610661609 2.3023670685596955
NMC: 3.9632456661049837 3.615186803481158
NUCLEAR: 2.01407176617177 2.367892578125
HERACOMB: 3.3180955058396466 3.5468205448420074
DYE886: 4.116624774354877 2.5264210908309273
CDF: 3.2958807264055525 0.0
D0: 2.360400019465266 0.0
ATLAS: 10.5777060546875 14.483962038730054
CMS: 9.272918791728486 16.126761820778917
LHCb: 5.183324353448276 0.0
Total: training = 4.6053573349620995 validation = 5.3817704327135205

Below are tables containing experimental chi2s of NNPDF31 NLO fits:

  1. fitted charm pdf: report (high chi2)
  2. pert. charm pdf (still data cuts from the fitted charm pdf): report (good chi2)

I think report 2 suggests that the problem is not with the theory? I'm currently running a fit without generating a replica as @scarlehoff suggested.

@Zaharid
Contributor

Zaharid commented Mar 24, 2021

@RoyStegeman There is a problem in the runcards I wrote. The cuts with theory 200 should be with the NNLODatasets, like so

```yaml
cuts_intersection_spec:
    - theoryid: 208
      pdf: NNPDF31_nlo_as_0118
      dataset_inputs: *NLODatasets

    - theoryid: 200
      pdf: NNPDF31_nlo_as_0118
      dataset_inputs: *NNLODatasets
```

@RoyStegeman
Member Author

Ah I see, I'll check to see if that solves it.

@RoyStegeman
Member Author

Not generating replicas does result in a somewhat better chi2 (see below), but positivity and integrability are still not satisfied, and the arc-lengths are similar to those of a replica fit. Unfortunately, changing the cuts to be with the NNLODatasets had no significant effect.

Epoch: 16000
DEUTERON: 1.3358319060709358 1.2353015606219953
NMC: 2.9137385748570264 2.669009339575674
NUCLEAR: 1.0642058627923117 1.0403039158587475
HERACOMB: 2.3416396632339014 2.7190422426812266
DYE886: 3.2934748331705728 3.710443247919497
CDF: 3.367551803588867 0.0
D0: 2.1102528042263455 0.0
ATLAS: 5.955230778238758 10.406360888299142
CMS: 6.007958984375 10.120989799499512
LHCb: 2.1481125541355297 0.0
Total: training = 2.794899860323463 validation = 3.5272689788258975

@scarlehoff
Member

Could you upload a comparefit of the latest fit you've run? Just to have a picture of how it looks.

Some questions that come to mind:

  • Could the positivity/integrability be wrong? Maybe the fit is just terrible because it's trying to accommodate something crazy there. Have you tried fitting with no integrability/positivity?
  • Have you iterated the fit? The NLO runcard has an intrinsic charm t0 pdf set.

Of the two points, I'm guessing the second might be the most important one if not done yet.

@scarlehoff
Member

scarlehoff commented Mar 25, 2021

I just tried fitting one replica with the runcards in #675 and using the NNLO pch as t0pdfset and got a reasonable chi2 for all experiments:

  Epoch: 17000
  DEUTERON: 2.2126543598790325 2.26109368739984
  NMC: 3.1728533576516544 2.792272829541973
  NUCLEAR: 2.00076725216605 1.7647554328642696
  HERACOMB: 2.387927431048769 2.563424149410432
  DYE886: 2.8783835208777226 2.0848486527152685
  CDF: 2.658672877720424 0.0
  D0: 2.0882205963134766 0.0
  ATLAS: 2.45336548112955 3.9016921647632397
  CMS: 3.461691376657197 3.3295905590057373
  LHCb: 3.7176479173743204 0.0
  Total: training = 2.4771648129988737 validation = 2.530011579121203

but the positivity is still not there. Given that the culprit seems to be POSF2C, I've commented that one out, and then it passes OK; see: result.json
I should say that POSF2C already has a very interesting shape for the intrinsic charm NLO fit, and also for the perturbative charm NNLO, so maybe there's nothing really wrong with it and it is just the combination of both effects for the one single replica I've run.
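Schematically, each positivity observable enters the total loss as a separately weighted penalty term, so commenting one out just drops its contribution. A minimal sketch of the mechanism (illustrative only: the ELU form, `alpha` and `poslambda` values are assumptions modelled on the n3fit documentation, not the actual code):

```python
import math

def elu(x, alpha=1e-7):
    # Smooth penalty kernel: essentially zero for positive arguments,
    # linear growth for negative predictions (alpha is illustrative).
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def positivity_penalty(predictions, poslambda=1e4):
    # Penalise negative predictions of a positivity observable;
    # strictly positive points cost essentially nothing.
    return poslambda * sum(elu(-p) for p in predictions)

# A strictly positive observable contributes essentially nothing...
good = positivity_penalty([0.3, 0.05, 0.001])
# ...while a single point at -1e-5 contributes a finite penalty that the
# optimiser has to trade off against the data chi2.
bad = positivity_penalty([0.3, 0.05, -1e-5])
```

This is why a single observable sitting at -1e-5 can dominate whether a replica passes, even when all data chi2s look fine.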

@RoyStegeman
Member Author

Thanks, I had also observed that replacing the t0 with a pch pdf results in a better chi2, but I hadn't noticed those shapes of POSF2C. I am planning to see if I can fix the positivity and integrability by iterating the preprocessing, but to do so I will need as a starting point a fit where some replicas fail because of positivity/integrability while others don't.

At the moment I'm rerunning the NLO fitted charm, since there too the theory 200 dataset was NLODatasets instead of NNLODatasets, but once that has completed I can try whether iterating the preprocessing is able to solve this issue.

@scarlehoff
Member

scarlehoff commented Mar 25, 2021

For quicker debugging you might want to try opening the preprocessing ranges and training them; that will tell you whether you can get "out of the hole" just with that.
Then again, maybe there's an actual problem with POSF2C that is only evident here because the fit fails.

@Zaharid
Contributor

Zaharid commented Mar 25, 2021

What happens if you evaluate POSF2C for 3.1 NLO pch? I do have some distant recollection that there was a problem with this long ago. Maybe @enocera or @scarrazza remember better?

@scarlehoff
Member

This report points to a problem with the NLO pch POSF2C https://vp.nnpdf.science/QliMvb3OSOWRMICDQi9_1A==/

Theory 211 is NNLO perturbative charm and 212 is NLO perturbative charm if I am not mistaken. The NNPDF4.0 pch had no problem with positivity, but it is negative in the theory 212 POSF2C plot

@RoyStegeman
Member Author

RoyStegeman commented Mar 25, 2021

You're right that 211 is NNLO pch and 212 is NLO pch. But the NNLO fit results in negative POSF2C when combined with the NLO FK tables, so is that necessarily a problem?

@scarlehoff
Member

Not necessarily, because I don't know how exactly the positivity observables differ between NLO and NNLO, but POSF2C is the only one that moves from being strictly positive to strictly negative (beyond small-x), which seems suspicious.

The fact that the 4.0 and 3.1 have the same shape (the 3.1 fits didn't have any information about POSF2C I think?) is also a red flag for me. A bit like when two students have the same mistake in the same exercise :P

But it's all circumstantial evidence.

@enocera
Contributor

enocera commented Mar 25, 2021

@scarlehoff, @RoyStegeman I'm not surprised that POSF2C turns out to be negative for NNPDF3.1, both NLO and NNLO, irrespective of the theory - POSF2C was not enforced in NNPDF3.1. I'm not sure that the fact that the NNLO pch NNPDF4.0 set convolved with theory 212 (pch, NLO) leads to a negative POSF2C is evidence of something going wrong with POSF2C in theory 212. Nevertheless, I will recompute the relevant FK table, although I don't expect much room for a mistake there. I'll also go back to the fits done for the strangeness paper (when the POSF2C constraint was introduced in the first place) - I don't remember whether we performed a NLO pch fit with POSF2C on at that time.

@scarlehoff
Member

It's not evidence, but it bothers me that it moves from strictly positive to negative (and then 0), and I don't understand the mechanism by which this happens. Note that the charm pdf is always above 0.
It's like there is an extra negative term there, which reminds me of scarrazza/apfel#24

But as I said, I don't know how these observables differ from NLO to NNLO, and POSF2C is very much non-trivial for perturbative charm, so I might be making a fool out of myself...

@RoyStegeman
Member Author

RoyStegeman commented Mar 29, 2021

If POSF2C is turned off, all positivity observables are positive except POSF2C: https://vp.nnpdf.science/iAK6X7WXQgeTw1bQo8HL2w==/#positivity

However, if POSF2C is turned on (and the positivity threshold set large, such that not all replicas fail positivity), many of the other positivity observables are no longer strictly positive: https://vp.nnpdf.science/Oe6Q2TRaSdSc-ue2VC9I5g==/#positivity

@RoyStegeman
Member Author

If we compare the last fit of my previous comment to a similar fit, but with POSF2C turned off instead of on, we again see that many of the positivity observables are not strictly positive. Thus the fact that they were negative should be understood as an effect of the removed threshold rather than as an effect of POSF2C. See: https://vp.nnpdf.science/p3OHEoDyQiu3zQi1kWEW8w==

Finally, if we run a fit where POSF2C dominates the chi2, which is achieved by setting the poslambda of POSF2C to 1e30 while turning off all other positivity losses, we still see similar behaviour for POSF2C: https://vp.nnpdf.science/bv1_BXQFSRumn7mFSVdauw==/.
Note that this report is based on only 2 replicas since by far most replicas failed.

To me this seems to suggest that there might be something wrong with theory 212. What do you think?
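For reference, weighting a single positivity observable as described above looks schematically like this in the runcard (illustrative fragment only; the exact keys should be checked against the actual n3fit runcard schema):

```yaml
positivity:
  posdatasets:
    # Make POSF2C dominate the loss; all other positivity
    # observables removed for this test (keys illustrative).
    - {dataset: POSF2C, poslambda: 1e30}
```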

@enocera
Contributor

enocera commented Mar 29, 2021

I think that I'll look at theory 212 carefully.

@RoyStegeman
Member Author

The problem is not in the data cuts.

@enocera
Contributor

enocera commented Mar 31, 2021

The problem hardly seems to be in the theory. Here are the predictions and the chi2s for the intersection of the data sets in theories 212 and 64 (the theory used in NNPDF3.1):

Predictions are identical (except for those affected by the APFEL bug for CC DIS).

@scarrazza
Member

@RoyStegeman, maybe we should consider running a quick fit with the slightly overlearned model (before Nadam) and check what happens here.

@RoyStegeman
Member Author

Predictions are identical (except for those affected by the APFEL bug for CC DIS).

@enocera thanks for checking. I think those results are what we were expecting, but is there any way in which there could be a problem with only the FK table of POSF2C? Or is the only way that could be the case if there is some unknown bug in apfel? Which I guess makes it an unlikely explanation.

maybe we should consider running a quick fit with the slightly overleaned model (before nadam) and check what happens here.

Yes, I was indeed going to try a model that could overfit. Although I'm afraid it's pretty much a Hail Mary, I also can't think of much else.

@enocera
Contributor

enocera commented Apr 1, 2021

@enocera thanks for checking. I think those results are what we were expecting, but is there any way in which there could be a problem with only the FK table of POSF2C? Or is the only way that could be the case if there is some unknown bug in apfel? Which I guess makes it an unlikely explanation.

I can produce a FK table for theory 64 and compare it to the result of theory 212. However I find it hard to believe that the theory generation fails for a specific observable (but not for all the others).

@RoyStegeman
Member Author

I can produce a FK table for theory 64 and compare it to the result of theory 212. However I find it hard to believe that the theory generation fails for a specific observable (but not for all the others).

Yes, you're right. Let's first see what happens for the setup that can overfit.

@RoyStegeman
Member Author

Unfortunately, though not surprisingly, the pre-Nadam setup was also not able to satisfy positivity. I also tried with triple the learning rate, but even that didn't help.

@scarlehoff
Member

scarlehoff commented Apr 1, 2021

What's the difference between POSF2C NLO and NNLO?

The problem with the positivity datasets is that a bug with a very small impact can make the fit fail by virtue of moving it from 1e-5 to -1e-5 which would be hardly noticed in the rest of the predictions.

From the n3fit point of view I don't think there are any differences between NLO and NNLO. The only thing that could be buggy is my implementation of the rotation from the 7 flavours to the 14 flavours, but I want to think that would've been noticed at NNLO.

Edit: by the last point I mean in the fktable X pdf convolution, something like "T3 and T8 are swapped", but if that problem is there it should be the same for NLO and NNLO.
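The rotation-then-convolution step discussed here can be sketched with toy shapes (pure Python, illustrative only; the real objects are large and live in n3fit/validphys). The point is that a basis bug like "T3 and T8 are swapped" does change predictions, but it changes them through the PDF alone, so NLO and NNLO FK tables built in the same basis would be affected identically:

```python
# Toy shapes throughout; names and numbers are illustrative, not the
# n3fit implementation.

def rotate(pdf_fitbasis, rotation):
    """Rotate a PDF from the fit basis to the flavour basis:
    f_i(x) = sum_j R[i][j] * b_j(x)."""
    nx = len(pdf_fitbasis[0])
    return [[sum(rotation[i][j] * pdf_fitbasis[j][x]
                 for j in range(len(pdf_fitbasis)))
             for x in range(nx)]
            for i in range(len(rotation))]

def convolute(fk, pdf):
    """FK convolution: O_n = sum_{f,x} FK[n][f][x] * pdf[f][x]."""
    return [sum(fk[n][f][x] * pdf[f][x]
                for f in range(len(pdf))
                for x in range(len(pdf[0])))
            for n in range(len(fk))]

# Two fit-basis "flavours" on a two-point x grid.
pdf_fit  = [[1.0, 2.0], [3.0, 4.0]]
identity = [[1.0, 0.0], [0.0, 1.0]]
swapped  = [[0.0, 1.0], [1.0, 0.0]]   # a "rows swapped" basis bug

# One data point whose FK weight sits entirely on flavour 0 at x[0].
fk = [[[1.0, 0.0], [0.0, 0.0]]]

pred_ok  = convolute(fk, rotate(pdf_fit, identity))   # [1.0]
pred_bug = convolute(fk, rotate(pdf_fit, swapped))    # [3.0]
```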

@RoyStegeman
Member Author

What's the difference between POSF2C NLO and NNLO?

I suppose this is rhetorical, but one of the few things left is the dataset (cfacs).

From the n3fit point of view I don't think there's any differences from NLO to NNLO. The only thing that could be buggy is my implementation of the rotation from the 7-flavours to the 14-flavours but I want to think that would've been noticed at NNLO.

Indeed, NNLO uses the same fitting basis, so even if we assume there's something wrong, that can't be the only source of the problem.

Although, as you yourself pointed out earlier, also NNLO pch has much smaller POSF2C at large-x than NNLO fitted charm. And we notice a similar difference between NNLO fitted charm and NLO fitted charm. So maybe if we can understand what's going on in those two cases, that can help us understand what going on for NLO pch as well?

@scarlehoff
Member

I suppose this is rhetorical, but one of the few things left is the dataset (cfacs).

No. No. I really don't know how the positivity datasets are constructed from a practical point of view. They are not physical predictions coming from the programs I know of, and I never delved into them.

@enocera
Contributor

enocera commented Apr 1, 2021

I suppose this is rhetorical, but one of the few things left is the dataset (cfacs).

No. No. I really don't know how the positivity datasets are constructed from a practical point of view. They are not physical predictions coming from the programs I know of, and I never delved into them.

I don't understand the point here. Let's take POSF2C: this is the structure function F2c(x,Q) on a pre-defined x grid at a given Q, see https://docs.nnpdf.science/n3fit/methodology.html?highlight=positivity#positivity. But F2c for positivity is the same observable as for a real data set, say HERA charm.

@scarlehoff scarlehoff added the bug Something isn't working label Apr 9, 2021
@scarlehoff
Member

scarlehoff commented Apr 9, 2021

With respect to the grid, I would've then expected some differences here https://vp.nnpdf.science/89rh3NprTW2Oq6YI3nVDZg==/#matched_positivity_from_dataspecs3_plot_dataspecs_positivity but they are spot-on the same (unless of course the finer grid didn't apply to the positivity)

@RoyStegeman
Member Author

  • issue with FK tables:
    • hypothesis: FK table x-grid is not sufficiently dense.
    • test: take apfelcomb, rerun the FK generation for POSF2C using one of our problematic fits as the reference pdf set. If APFEL predictions are negative, we have a problem in apfel; otherwise the problem is in the fktable grid. There are different ways to achieve that, e.g. adding below this line a call to QCD::initPDF("<the problematic set>", 0); and recompiling apfelcomb.

So I generated an FK table for POSF2C while setting QCD::initPDF("210328-n3fit-FT06", 0) and recalculated the POSF2C observables for 210328-n3fit-FT06 using this new FK table. I think this is what you proposed? Anyway, it did not seem to have much of an effect: POSF2C with newly generated FK table

@RoyStegeman
Member Author

Here is another positivity plot: plot

Here I set ForcePositive: 1 and after that generated the POSF2C FK table and this plot. So both the pdf used to determine the FK table and the pdf which was then used to calculate these observables had positivity enforced with ForcePositive: 1. Specifically, this was the pdf fitted using theory 212 (NLO pch) with all the default settings, except that POSF2C was not enforced.

@scarrazza
Member

@RoyStegeman thanks for this, I assume we still get negative predictions during the FK table generation.
So, the problem seems to be in apfel, or our interpretation of what POSF2C does is wrong...

@Zaharid
Contributor

Zaharid commented Apr 15, 2021

As discussed, it would be nice to use #1092 and related functionality to see what is going on. Ideally we would have something to view the result in the flavour basis, which turns out to be missing from fitbases.py.

@scarlehoff
Member

scarlehoff commented Apr 15, 2021

Can FK tables for DIS observables at NLO be generated with a program other than APFEL? (Or without going through apfel at all?)

@felixhekhorn
Contributor

felixhekhorn commented Apr 16, 2021

as said in the PC today:

  • If NLO means NLO DIS, i.e. O(a_s^1) - yes, you can: @alecandido
    yadism -> pineappl -> fktable
    (if I understood @scarrazza correctly about the bridge between pineappl and fktable)
  • if instead you're asking for NLO heavy quark, i.e. O(a_s^2) - no (not yet)
  • using my thesis I can produce F2c in FFNS - which might not be sufficient, and indeed the FNS might be one of the possible explanations why the thing can go negative

@juanrojochacon

Why do we need FK tables at all @scarlehoff ? To check that APFEL gives the right output is just a matter of comparing numbers right?

@scarlehoff
Member

Yes sure, whatever we can use to compare works.

@juanrojochacon

there are many codes that produce F2c, also QCDNUM or Alekhin's code which is in the repo. But the easiest thing is the benchmark tables

@juanrojochacon

Also there might be some artefact of the matching of FONLL, I don't know. Looks odd but I don't think this is necessarily a bug, or at least not a conceptual bug. Of course if F2c is wrong it is wrong everywhere, but the fact that we can fit fine all HERA data suggests that whatever is going on does not have any pheno implications

@juanrojochacon

Hi @RoyStegeman any luck investigating this issue? In any case it might be good to nevertheless run the NLO pert charm fit removing F2c positivity; the fact that we seem to be unable to produce NLO fits makes me a bit nervous

@RoyStegeman
Member Author

@juanrojochacon I am still looking into how exactly to perform the benchmarking, since theory/fktables is new territory for me. Although I saw in another apfel issue that Valerio and AC&FH were doing an F2c benchmarking of their own, so I can probably use their code snippets.

I already ran an NLO pch fit without F2c positivity (although it still has some datasets with a training fraction of 1), the report of which you can find here.

@juanrojochacon

Good, the NLO pcharm fit looks as expected, so this is done. In any case, we should never add F2c pos in such fits.

Yes, with APFEL computing F2c is relatively easy; there is quite extensive documentation

@alecandido
Member

@juanrojochacon I am still looking into how exactly to perform the benchmarking, since theory/fktables is new territory for me. Although I saw in another apfel issue that Valerio and AC&FH were doing an F2c benchmarking of their own, so I can probably use their code snippets.

If you need any help do not hesitate to ask us :)

@RoyStegeman
Member Author

RoyStegeman commented Apr 21, 2021

Here are the tables generated using apfel, to be compared against the Les Houches F2c benchmark of chapter 22 in https://inspirehep.net/literature/847899. I am missing the results for χ as an alternative to the damping factor; I couldn't find the implementation in APFEL. Did I miss it, or has it not been implemented? I'm also not sure it's even very relevant.

FONLL-A

| x | Q^2 (GeV^2) | FONLL-A plain | FONLL-A damp |
|---|---|---|---|
| 10^-5 | 4 | 0.273642 | 0.150471 |
| 10^-4 | 4 | 0.163507 | 0.0933029 |
| 10^-3 | 4 | 0.084081 | 0.0505031 |
| 10^-2 | 4 | 0.0285576 | 0.017404 |
| 10^-1 | 4 | 0.00207515 | 0.000728174 |
| 10^-5 | 10 | 0.673662 | 0.560954 |
| 10^-4 | 10 | 0.372834 | 0.311061 |
| 10^-3 | 10 | 0.178566 | 0.149656 |
| 10^-2 | 10 | 0.0604859 | 0.0505352 |
| 10^-1 | 10 | 0.00561977 | 0.00423115 |
| 10^-5 | 24 | 1.19433 | 1.13499 |
| 10^-4 | 24 | 0.628508 | 0.595446 |
| 10^-3 | 24 | 0.287869 | 0.271877 |
| 10^-2 | 24 | 0.0962569 | 0.0903566 |
| 10^-1 | 24 | 0.00998753 | 0.00909605 |
| 10^-5 | 100 | 2.29917 | 2.28688 |
| 10^-4 | 100 | 1.12954 | 1.1212 |
| 10^-3 | 100 | 0.483995 | 0.479277 |
| 10^-2 | 100 | 0.153972 | 0.151986 |
| 10^-1 | 100 | 0.0164704 | 0.0161566 |

FONLL-B

| x | Q^2 (GeV^2) | FONLL-B plain | FONLL-B damp |
|---|---|---|---|
| 10^-5 | 4 | 0.238438 | 0.24859 |
| 10^-4 | 4 | 0.134216 | 0.135884 |
| 10^-3 | 4 | 0.0648054 | 0.0637369 |
| 10^-2 | 4 | 0.0216577 | 0.0207001 |
| 10^-1 | 4 | 0.000941352 | 0.000690423 |
| 10^-5 | 10 | 0.537579 | 0.550689 |
| 10^-4 | 10 | 0.300264 | 0.300554 |
| 10^-3 | 10 | 0.146547 | 0.143789 |
| 10^-2 | 10 | 0.0519552 | 0.0501579 |
| 10^-1 | 10 | 0.00430401 | 0.0039329 |
| 10^-5 | 24 | 1.01449 | 1.02245 |
| 10^-4 | 24 | 0.545921 | 0.545337 |
| 10^-3 | 24 | 0.257601 | 0.255325 |
| 10^-2 | 24 | 0.0901158 | 0.0887034 |
| 10^-1 | 24 | 0.00925057 | 0.00896484 |
| 10^-5 | 100 | 2.07683 | 2.07957 |
| 10^-4 | 100 | 1.04338 | 1.04313 |
| 10^-3 | 100 | 0.458914 | 0.45803 |
| 10^-2 | 100 | 0.150808 | 0.150216 |
| 10^-1 | 100 | 0.016403 | 0.0162816 |

FONLL-C

| x | Q^2 (GeV^2) | FONLL-C plain | FONLL-C damp |
|---|---|---|---|
| 10^-5 | 4 | 0.385086 | 0.28405 |
| 10^-4 | 4 | 0.182392 | 0.14954 |
| 10^-3 | 4 | 0.0720822 | 0.0664899 |
| 10^-2 | 4 | 0.0209593 | 0.0205899 |
| 10^-1 | 4 | 0.00159982 | 0.000844353 |
| 10^-5 | 10 | 0.793251 | 0.703999 |
| 10^-4 | 10 | 0.379413 | 0.350462 |
| 10^-3 | 10 | 0.158352 | 0.151944 |
| 10^-2 | 10 | 0.0521225 | 0.050348 |
| 10^-1 | 10 | 0.00559515 | 0.00473872 |
| 10^-5 | 24 | 1.31939 | 1.26754 |
| 10^-4 | 24 | 0.638523 | 0.621702 |
| 10^-3 | 24 | 0.272027 | 0.267725 |
| 10^-2 | 24 | 0.0912953 | 0.0897515 |
| 10^-1 | 24 | 0.0108233 | 0.0102695 |
| 10^-5 | 100 | 2.40349 | 2.38774 |
| 10^-4 | 100 | 1.14287 | 1.13788 |
| 10^-3 | 100 | 0.47575 | 0.474262 |
| 10^-2 | 100 | 0.153313 | 0.152637 |
| 10^-1 | 100 | 0.0181494 | 0.0179526 |

These have been generated using this code snippet (while varying the mass scheme and damping)

```cpp
#include "APFEL/APFEL.h"

#include <iostream>
#include <utility>
#include <vector>
#include <math.h>

/*
Benchmark settings:
- as input PDF set, the Les Houches initial conditions are used -- SetPDFSet
  set alphasref value at 0.35 for Qref=sqrt(2) -- SetAlphaQCDRef
- mc=sqrt(2) at NLO, mc=sqrt(2)+epsilon for NNLO -- SetPoleMasses
- PDFs have been evolved with hoppet -- (it's a dependency of apfel, so I
  guess this is the case)
- charm quark is the only heavy quark. mb and mt are infty -- SetPoleMasses
- Q2 of the benchmarks are: 4, 10, 24, 100 -- see `kin' below
- alphas(Q2) is computed through exact integration of the evolution
  equations -- SetAlphaEvolution
- F2c is defined as the sum of contributions where a charm quark is struck by
  the virtual photon -- SetProcessDIS?
*/


int main()
{
  // Settings to be changed for benchmarking different setups
  APFEL::SetMassScheme("FONLL-C");
  APFEL::EnableDampingFONLL(true);

  // pto=0: LO, pto=1: NLO, pto=2: NNLO.
  // I think this doesn't do anything if FONLL is set.
  // APFEL::SetPerturbativeOrder(1);

  // Global benchmark settings which never change
  APFEL::SetPDFSet("ToyLH");
  APFEL::SetAlphaQCDRef(0.35, sqrt(2.));
  APFEL::SetPoleMasses(sqrt(2.), 150, 175);
  APFEL::SetAlphaEvolution("exact");
  APFEL::SetProcessDIS("EM");

  APFEL::InitializeAPFEL_DIS();
  const std::vector<std::pair<double, double>> kin{
    {1e-5, 4.},   {1e-4, 4.},   {1e-3, 4.},   {1e-2, 4.},   {1e-1, 4.},
    {1e-5, 10.},  {1e-4, 10.},  {1e-3, 10.},  {1e-2, 10.},  {1e-1, 10.},
    {1e-5, 24.},  {1e-4, 24.},  {1e-3, 24.},  {1e-2, 24.},  {1e-1, 24.},
    {1e-5, 100.}, {1e-4, 100.}, {1e-3, 100.}, {1e-2, 100.}, {1e-1, 100.},
  };
  double Q0 = sqrt(2.);
  for (auto k : kin)
    {
      double x = k.first;
      double Q = sqrt(k.second);
      APFEL::ComputeStructureFunctionsAPFEL(Q0, Q);
      std::cout <<
      // std::scientific <<
      // x << "\t" << Q << "\t" << APFEL::F2charm(x)
      APFEL::F2charm(x)
      << std::endl;
    }
  return 0;
}
```

The values seem reasonably close to the Les Houches results, so close that I cannot imagine this difference having any meaningful effect on the pdf fits. Whether it causes the inaccuracy of magnitude ~10^-5 for the POSF2C observables, I don't know.

Anyway, I would say that this means we don't have to worry about the F2c observables that are used in the fit being wrong in any significant way. However, it doesn't provide an answer as to why we found negative POSF2C observables for a fully positive charm pdf; maybe that has to do with how charm is defined in the perturbative charm theory?
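When reading the tables above, it helps to keep absolute and relative differences separate. A small helper makes the distinction explicit (the numbers below are purely illustrative, NOT the actual Les Houches benchmark values):

```python
def compare(values, reference):
    """Max absolute and max relative difference between two lists."""
    abs_diffs = [abs(a - r) for a, r in zip(values, reference)]
    rel_diffs = [abs(a - r) / abs(r)
                 for a, r in zip(values, reference) if r != 0]
    return max(abs_diffs), max(rel_diffs)

# Purely illustrative numbers (NOT the actual LH benchmark values):
apfel_vals = [0.016403, 0.000941352]
ref_vals   = [0.016420, 0.000950000]
max_abs, max_rel = compare(apfel_vals, ref_vals)
# An absolute difference of order 1e-5 becomes a percent-level relative
# difference where F2c itself is of order 1e-3, i.e. at large x.
```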

@juanrojochacon

Very nice! This is a reassuring check. So we can proceed with the fits as planned, no need to worry about F2c then.

And yes, the χ was never added to APFEL; it was only used in its predecessor FONLLdis, which I wrote.

About why F2c is tiny but negative at large-x in the pert charm fit: this is a curious finding but completely irrelevant for the fits, so I'm not sure it is worth the effort to spend time on it (right now).

@juanrojochacon

I guess this issue can be closed?

@RoyStegeman
Member Author

I guess this issue can be closed?

I would say so, if other people involved with this issue also agree with our conclusion.

@scarlehoff
Member

scarlehoff commented Apr 21, 2021

Thank you very much @RoyStegeman

Let's discuss it during the code meeting this afternoon so people can have a look.

The 10^-5 absolute difference (0.01 relative) in the x=10^-1 region seems consistent with what we see. I would like @felixhekhorn's and @alecandido's input on whether this is the level of agreement one usually gets for the LH benchmark.

In summary, the fact that the differences are, in absolute terms, of order 10^-5 makes me "numerically happy" so I agree with proceeding with the fits.

@juanrojochacon

Well, as someone who has run a lot of LH benchmarks in the past, I can confirm that this is very decent accuracy (it could be further improved by playing with the numerics, but I don't think it is needed here)

@Zaharid
Contributor

Zaharid commented Apr 21, 2021

I am not sure I see the logic here: one of the possible explanations for the problem was the accuracy in the computation of the structure function. This appears to have been shown not to be the explanation. But this was never the problem. Rather, the problem is that we cannot seem to fit with this data positive, and there doesn't seem to be a compelling explanation as to why.

@juanrojochacon

But this is completely irrelevant for the NNPDF4.0 fits. It is an interesting question but it can be studied later, once we have checked that F2c is computed correctly

@scarlehoff
Member

I am not sure I see the logic here: one of the possible explanations for the problem was the accuracy in the computation of the structure function. This appears to have been shown not to be the explanation.

Instead, looking at the differences I can perfectly believe the problem is the accuracy.

@juanrojochacon

there are other options: maybe the FONLL matching prescription is not ideal at large-x and low scales (F2c is tiny there, so it was not optimised for this region). So the problem might be the accuracy (since F2c is very small there), but a theoretical explanation is also possible. In both cases, irrelevant for NNPDF4.0

@RoyStegeman
Member Author

Instead, looking at the differences I can perfectly believe the problem is the accuracy.

Well, it could explain why an FK table generated using apfel returns an F2c of order -1e-5 for a strictly positive charm pdf. But this possible inaccuracy would be in the FK table during fitting as well. So if we fit with that FK table, then we should be able to force the F2c observables, calculated using that FK table and input pdf, to be positive.

So this check confirms that the F2c we are fitting to is good enough, but I don't think it can explain why we are not able to force F2c positive.

@scarlehoff
Member

Not necessarily. A 1e-5 inaccuracy around 0 is not the same as one hidden by much larger numbers.

Of course, this is not a proof, and if we had infinite time I would ask for perfect accuracy to see 1) whether that's the case and 2) whether it changes anything for all other observables. But we don't have infinite time, and I certainly wouldn't volunteer to fix a 1e-5 difference in apfel.
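A toy numeric sketch of this point (all numbers invented): if an FK kernel carries a spurious negative weight of order 1e-5 where the true prediction is itself of order 1e-5, a strictly positive PDF still yields a negative prediction, while the same artifact would be invisible in observables of order 1:

```python
# Toy illustration (all numbers invented): an FK kernel carrying a
# tiny spurious negative weight from limited numerical accuracy.
kernel = [2.0e-3, -3.0e-5]   # the second weight should be ~0 but isn't
pdf    = [1.0e-2, 1.0]       # strictly positive PDF values

prediction = sum(w * p for w, p in zip(kernel, pdf))
# 2e-5 - 3e-5 ~ -1e-5: slightly negative even though the PDF is
# positive everywhere on the grid.
```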
