
Should the edges sum up to 1? #5

Open
blegat opened this issue Jan 11, 2018 · 5 comments

Comments

@blegat
Member

blegat commented Jan 11, 2018

Each edge going out of a node has a probability attached. Sometimes we want them not to sum up to one, in order to model a discount factor. While this makes sense when averaging the objective to compute a cut, it is less natural when sampling an edge. A dummy node with zero objective and no constraints can also be used to model the discount factor. Solving at this node does not require calling any LP solver, and it can be made a special case in the modelling part.
We need to decide between the two options below (vote with the corresponding reaction):

  1. Allowing the probabilities not to sum up to one complicates the solver part: 👍
  2. Even if it complicates the solver, it lets us handle discounting faster because there is no dummy node: ❤️
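To make the sampling question concrete, here is a minimal sketch (in Python, with hypothetical names; the actual package is Julia) of sampling an outgoing edge when the probabilities sum to less than one. The missing mass is interpreted as the discount factor: with that probability the path terminates, which is exactly what routing to a dummy leaf node would achieve.

```python
import random

def sample_edge(edges, rng=random):
    """Sample an outgoing edge whose probabilities may sum to less than 1.

    `edges` is a list of (child, probability) pairs. The missing mass
    1 - sum(p for _, p in edges) is the discount factor: with that
    probability the sampled path "dies" (returns None), mimicking a
    transition to a dummy node with zero objective.
    """
    u = rng.random()
    cumulative = 0.0
    for child, p in edges:
        cumulative += p
        if u < cumulative:
            return child
    return None  # path terminates: the discounted remainder of the mass

# Example: two children with total probability 0.9, i.e. discount factor 0.9
edges = [("a", 0.45), ("b", 0.45)]
```

This illustrates the trade-off under discussion: sampling must treat the "no edge drawn" outcome specially, whereas the dummy-node formulation keeps every sample inside the graph at the cost of an extra (trivial) node.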
@joaquimg

I talked with @odow about edges not summing to one, but many times we have a discount factor in all stages. It might be a pain to set all the probabilities so that they do not sum to one.
Also, the dummy node might be a little too much just to hold such a number (which is usually the same everywhere).
Maybe the discount should be part of the problem, meaning the struct that holds the problem also holds the discount factor.

@blegat
Member Author

blegat commented Jan 11, 2018

Yes, we could make the discount factor part of the API. It could also improve accuracy, since adding dummy nodes somewhat decreases the accuracy of sampling (we need more paths, as some die in these nodes).
What does constraining the discount factor to always be the same allow us to do?
If we have

discountfactor(sp::AbstractStochasticProblem, node)

it still allows the modelling part to use the same discount factor everywhere and store a single floating-point number, and I don't think the solver part gets simplified if the discount factor is constant.
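A sketch of this design (in Python, with hypothetical names mirroring the proposed Julia accessor `discountfactor(sp, node)`): the problem stores one global discount factor but exposes it through a per-node accessor, so solver code never needs to special-case the constant setting.

```python
class StochasticProblem:
    """Illustrative problem type: a single stored discount factor,
    queried per node through `discountfactor(node)`. Per-node overrides
    are optional, so the common all-stages case stores one float."""

    def __init__(self, nodes, discount=1.0):
        self.nodes = nodes
        self._discount = discount   # the single global value
        self._overrides = {}        # optional per-node exceptions

    def set_discountfactor(self, node, value):
        self._overrides[node] = value

    def discountfactor(self, node):
        # Per-node lookup falls back to the single stored float.
        return self._overrides.get(node, self._discount)

sp = StochasticProblem(nodes=["root", "stage1"], discount=0.9)
sp.set_discountfactor("stage1", 0.8)
```

The solver only ever calls `discountfactor(node)`, so it is agnostic to whether the factor is constant or varies across the graph, which matches the point that a constant factor does not simplify the solver part.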

@joaquimg

I am not sure which is better. I am just used to having the same discount in every stage, but I do think that varying it once in a while has its own modeling advantages.

@blegat
Member Author

blegat commented Jan 11, 2018

> but I do think that varying it once in a while has its own modeling advantages.

What do you mean?

@joaquimg

@odow was thinking of an infinite-horizon application in which we would need a discount factor on a single node of the graph. By "once in a while" I mean on some nodes, not on all of them.
