Getting DomainError when fitting some models #313

Closed
SupplyChef opened this issue Jul 14, 2022 · 2 comments · Fixed by #330

Comments

@SupplyChef

A DomainError is raised when fitting some models (please see the stack trace below). I have copied below a model that triggers this error (the model is meaningless; it is just the simplest one I could generate to show the problem).

using StateSpaceModels

value = repeat([10.0], 40)
seasonality = 7
events = [0.0 for i in 1:40, j in 1:1]

ssm = StateSpaceModels.BasicStructuralExplanatory(value, seasonality, events)
StateSpaceModels.fit!(ssm)

The error is:

DomainError with -3.5937804133234396e-10: log will only return a complex result if called with a complex argument. Try log(Complex(x)).

And the stack trace:

[1] throw_complex_domainerror(f::Symbol, x::Float64)
@ Base.Math .\math.jl:33
[2] _log(x::Float64, base::Val{:ℯ}, func::Symbol)
@ Base.Math .\special\log.jl:304
[3] log
@ .\special\log.jl:269 [inlined]
[4] update_llk!
@ C:....julia\packages\StateSpaceModels\XjBwj\src\filters\univariate_kalman_filter.jl:198 [inlined]
[5] update_kalman_state!(kalman_state::StateSpaceModels.UnivariateKalmanState{Float64}, y::Float64, Z::Vector{Float64}, T::Matrix{Float64}, H::Float64, R::Matrix{Float64}, Q::Matrix{Float64}, d::Float64, c::Vector{Float64}, skip_llk_instants::Int64, t::Int64)
@ StateSpaceModels C:....julia\packages\StateSpaceModels\XjBwj\src\filters\univariate_kalman_filter.jl:280
[6] filter_recursions!(kalman_state::StateSpaceModels.UnivariateKalmanState{Float64}, sys::LinearUnivariateTimeVariant{Float64}, steadystate_tol::Float64, skip_llk_instants::Int64)
@ StateSpaceModels C:....julia\packages\StateSpaceModels\XjBwj\src\filters\univariate_kalman_filter.jl:339
[7] optim_kalman_filter(sys::LinearUnivariateTimeVariant{Float64}, filter::UnivariateKalmanFilter{Float64})
@ StateSpaceModels C:....julia\packages\StateSpaceModels\XjBwj\src\filters\univariate_kalman_filter.jl:288
[8] optim_loglike(model::BasicStructuralExplanatory, filter::UnivariateKalmanFilter{Float64}, unconstrained_hyperparameters::Vector{Float64})
@ StateSpaceModels C:....julia\packages\StateSpaceModels\XjBwj\src\kalman_filter_and_smoother.jl:15
[9] #41
@ C:....julia\packages\StateSpaceModels\XjBwj\src\fit.jl:40 [inlined]
[10] finite_difference_gradient!(df::Vector{Float64}, f::StateSpaceModels.var"#41#42"{UnivariateKalmanFilter{Float64}, BasicStructuralExplanatory}, x::Vector{Float64}, cache::FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Float64}, Val{:central}(), Float64, Val{true}()}; relstep::Float64, absstep::Float64, dir::Bool)
@ FiniteDiff C:....julia\packages\FiniteDiff\KkXlb\src\gradients.jl:275
[11] finite_difference_gradient!
@ C:....julia\packages\FiniteDiff\KkXlb\src\gradients.jl:224 [inlined]
[12] (::NLSolversBase.var"#g!#44"{StateSpaceModels.var"#41#42"{UnivariateKalmanFilter{Float64}, BasicStructuralExplanatory}, FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Float64}, Val{:central}(), Float64, Val{true}()}})(storage::Vector{Float64}, x::Vector{Float64})
@ NLSolversBase C:....julia\packages\NLSolversBase\cfJrN\src\objective_types\twicedifferentiable.jl:113
[13] (::NLSolversBase.var"#fg!#45"{StateSpaceModels.var"#41#42"{UnivariateKalmanFilter{Float64}, BasicStructuralExplanatory}})(storage::Vector{Float64}, x::Vector{Float64})
@ NLSolversBase C:....julia\packages\NLSolversBase\cfJrN\src\objective_types\twicedifferentiable.jl:117
[14] value_gradient!!(obj::NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, x::Vector{Float64})
@ NLSolversBase C:....julia\packages\NLSolversBase\cfJrN\src\interface.jl:82
[15] value_gradient!(obj::NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, x::Vector{Float64})
@ NLSolversBase C:....julia\packages\NLSolversBase\cfJrN\src\interface.jl:69
[16] value_gradient!(obj::Optim.ManifoldObjective{NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}}, x::Vector{Float64})
@ Optim C:....julia\packages\Optim\6Lpjy\src\Manifolds.jl:50
[17] (::LineSearches.var"#ϕdϕ#6"{Optim.ManifoldObjective{NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}})(α::Float64)
@ LineSearches C:....julia\packages\LineSearches\Ki4c5\src\LineSearches.jl:84
[18] secant2!(ϕdϕ::LineSearches.var"#ϕdϕ#6"{Optim.ManifoldObjective{NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}}, alphas::Vector{Float64}, values::Vector{Float64}, slopes::Vector{Float64}, ia::Int64, ib::Int64, phi_lim::Float64, delta::Float64, sigma::Float64, display::Int64)
@ LineSearches C:....julia\packages\LineSearches\Ki4c5\src\hagerzhang.jl:368
[19] (::LineSearches.HagerZhang{Float64, Base.RefValue{Bool}})(ϕ::Function, ϕdϕ::LineSearches.var"#ϕdϕ#6"{Optim.ManifoldObjective{NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}}, c::Float64, phi_0::Float64, dphi_0::Float64)
@ LineSearches C:....julia\packages\LineSearches\Ki4c5\src\hagerzhang.jl:269
[20] HagerZhang
@ C:....julia\packages\LineSearches\Ki4c5\src\hagerzhang.jl:101 [inlined]
[21] perform_linesearch!(state::Optim.LBFGSState{Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}, Float64, Vector{Float64}}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, d::Optim.ManifoldObjective{NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}})
@ Optim C:....julia\packages\Optim\6Lpjy\src\utilities\perform_linesearch.jl:59
[22] update_state!(d::NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, state::Optim.LBFGSState{Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}, Float64, Vector{Float64}}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"})
@ Optim C:....julia\packages\Optim\6Lpjy\src\multivariate\solvers\first_order\l_bfgs.jl:204
[23] optimize(d::NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, initial_x::Vector{Float64}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing}, state::Optim.LBFGSState{Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}, Float64, Vector{Float64}})
@ Optim C:....julia\packages\Optim\6Lpjy\src\multivariate\optimize\optimize.jl:54
[24] optimize(d::NLSolversBase.TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, initial_x::Vector{Float64}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing})
@ Optim C:....julia\packages\Optim\6Lpjy\src\multivariate\optimize\optimize.jl:36
[25] fit!(model::BasicStructuralExplanatory; filter::UnivariateKalmanFilter{Float64}, optimizer::Optimizer, save_hyperparameter_distribution::Bool)
@ StateSpaceModels C:....julia\packages\StateSpaceModels\XjBwj\src\fit.jl:42
[26] fit!(model::BasicStructuralExplanatory)
@ StateSpaceModels C:....julia\packages\StateSpaceModels\XjBwj\src\fit.jl:35
[27] top-level scope
@ In[5]:8
[28] eval
@ .\boot.jl:373 [inlined]
[29] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base .\loading.jl:1196
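
For context, the failing call can be reproduced in isolation: Julia's log throws exactly this DomainError for any negative real argument, and here the Kalman filter's likelihood update apparently receives a value that should be non-negative but has drifted just below zero through floating-point round-off:

julia> log(-3.5937804133234396e-10)
ERROR: DomainError with -3.5937804133234396e-10:
log will only return a complex result if called with a complex argument. Try log(Complex(x)).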

@guilhermebodin
Member

Hi @SupplyChef, the time series is constant. When there is no variance, such errors can happen because maximizing the log-likelihood requires taking the log of the innovation variances (the variances of the prediction errors). We could add some checks to detect a constant time series. Otherwise, does it also happen with a non-constant time series?
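
A minimal sketch of such a check (illustrative only; the function name and tolerance are assumptions, not part of StateSpaceModels):

# Hypothetical pre-fit guard: reject a (numerically) constant series up front,
# since zero variance leads to taking the log of a non-positive innovation variance.
function check_not_constant(y::AbstractVector{<:Real}; tol = 1e-10)
    if maximum(y) - minimum(y) < tol
        throw(ArgumentError("observed series is constant; the log-likelihood is undefined for zero-variance data"))
    end
    return nothing
end

value = repeat([10.0], 40)
check_not_constant(value)   # throws a readable ArgumentError instead of the opaque DomainError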

@SupplyChef
Author

Thank you for looking into this. Yes, the issue seems to happen only in edge cases. It would be great if you could add the checks.
Thank you!
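
Until a check or fix lands, one possible workaround (an assumption, not an official recommendation) is to avoid an exactly constant series, for example by adding a small amount of noise so the innovation variances stay strictly positive:

using StateSpaceModels

value = 10.0 .+ 1e-3 .* randn(40)   # nearly constant, but no longer zero-variance
seasonality = 7
events = [0.0 for i in 1:40, j in 1:1]

ssm = StateSpaceModels.BasicStructuralExplanatory(value, seasonality, events)
StateSpaceModels.fit!(ssm)          # should no longer trigger the log DomainError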
