I'm currently using this config file with Quandary v4.0 on Ubuntu:
nlevels = 2
nessential = 2
ntime = 1061
dt = 0.0942507068803016
transfreq = 4.10595
rotfreq = 4.10595
selfkerr = 0.2198
crosskerr = 0.0
Jkl = 0.0
collapse_type = none
initialcondition = basis
control_segments0 = spline, 150
control_initialization0 = random, 0.0070710678118654745
control_bounds0 = 0.05
carrier_frequency0 = 0.0,
control_enforceBC = False
optim_target = gate, xgate
optim_objective = Jfrobenius
optim_weights = 1.0
optim_atol = 1e-4
optim_rtol = 1e-4
optim_ftol = 1e-08
optim_inftol = 1e-09
optim_maxiter = 200
optim_regul = 0.0001
optim_regul_interpolate = False
optim_penalty = 0.1
optim_penalty_param = 0.0
optim_penalty_dpdm = 0.01
optim_penalty_energy = 0.1
datadir = ./c_plus_plus
output0 = expectedEnergy, population, fullstate
output_frequency = 1
optim_monitor_frequency = 1
runtype = optimization
usematfree = False
linearsolver_type = gmres
linearsolver_maxiter = 20
timestepper = IMR
rand_seed = 1234
When I run this with the C++ executable, it prints a warning saying that it is using the default gate_rot_freq=1e+20.
With the default value gate_rot_freq=1e+20, the optimization converges as expected and matches the optimization results from Quandary v3.0.
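For reference, I would expect that setting this value explicitly in the config file reproduces the same behavior without the warning; the key name below is just taken from the warning message, so I have not verified that it is accepted as a config option:

gate_rot_freq = 1e+20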
If I reproduce this configuration with the Python interface, I would expect it to choose the same default value for gate_rot_freq, but it uses 0.0 instead, and the result is completely different:
from quandary import Quandary
quandary_opt = Quandary(
    Ne=[2],
    freq01=[4.10595],
    rotfreq=[4.10595],
    optim_target="gate, xgate",
    tol_infidelity=1e-9,
    costfunction="Jfrobenius",
    tol_costfunc=1e-8,
    usematfree=False,
    carrier_frequency=[[0.0]],
    rand_seed=1234,
    verbose=True,
    maxctrl_MHz=50.,
    nsplines=150
)
quandary_opt.optimize(datadir="/home/waldon1/quandary/py_interface")
The fidelities are different and the optimizer doesn't converge.
From experimentation, gate_rot_freq appears to be an important parameter for the optimization and affects whether the optimizer converges. I would like the Python interface to expose it as a user-configurable option, so that I have more control over the optimization and can match the C++ results when I need backward compatibility (see the sketch below). It is currently hardcoded in the quandary.py file, and I am not able to change or configure it.
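To illustrate, here is a rough sketch of the usage I have in mind, assuming a gate_rot_freq keyword were added to the Quandary constructor; the keyword name and its list shape just mirror the C++ config option and do not exist in the current interface:

from quandary import Quandary

# Same configuration as above; the only new piece is the hypothetical
# gate_rot_freq argument, which would let the Python interface match the
# C++ default of 1e+20 instead of the value hardcoded in quandary.py.
quandary_opt = Quandary(
    Ne=[2],
    freq01=[4.10595],
    rotfreq=[4.10595],
    optim_target="gate, xgate",
    tol_infidelity=1e-9,
    costfunction="Jfrobenius",
    tol_costfunc=1e-8,
    usematfree=False,
    carrier_frequency=[[0.0]],
    rand_seed=1234,
    verbose=True,
    maxctrl_MHz=50.,
    nsplines=150,
    gate_rot_freq=[1e+20],  # proposed keyword, one entry per oscillator (assumed)
)
quandary_opt.optimize(datadir="/home/waldon1/quandary/py_interface")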
Let me know if this is possible, thanks!