load params and continue optimization #1144
-
Hello, I am trying to optimize a vqs in multiple steps, decreasing the learning rate and increasing the sample size at each step. I have a class which I use to initialize the whole thing.
So my script is basically: VMC = model(lr=0.01, n_samples=2*1000, load_state='prelim') ... After every optimization I initialize a new class, loading the parameters of the prior optimization via the load_state argument shown above.
So I am not saving anything myself; I rely on the standard JSON logger, which saves only the parameters. Thanks
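For context, here is a minimal sketch of how parameters written by NetKet's JsonLog might be reloaded into a fresh state. The Hilbert space, sampler, and model below are hypothetical stand-ins for whatever the class actually builds; the key assumption is that the logger was created with prefix 'prelim' and parameter saving enabled, so the variables live in 'prelim.mpack'.

import flax.serialization
import netket as nk

# Hypothetical stand-ins for the components the class presumably builds.
hi = nk.hilbert.Spin(s=1 / 2, N=16)
sampler = nk.sampler.MetropolisLocal(hi)
model = nk.models.RBM(alpha=1)
vqs = nk.vqs.MCState(sampler, model, n_samples=2 * 1000)

# JsonLog with prefix 'prelim' serialises the state's variables to
# 'prelim.mpack'; deserialise them back into the freshly created state.
with open("prelim.mpack", "rb") as f:
    vqs.variables = flax.serialization.from_bytes(vqs.variables, f.read())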
-
I don't understand what your question is or what the plots represent. Could you be more specific?
However, let me mention two things:
Why are you recreating the VMC every time? You can simply change the number of samples on the existing state by doing

vqs.n_samples = 12345

Why do you rebuild the VMC driver every time? You can simply change the optimizer on the existing driver by doing

VMC.optimizer = optax.adam(learning_rate=0.02)

for example.
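Putting those two points together, here is a minimal sketch of the stepped schedule from the question that reuses one state and one driver instead of rebuilding them. The schedule values, the SGD optimizer, and the iteration count are illustrative, and vqs/VMC are assumed to be the state and driver created once at the start:

import optax

# Reuse the same state and driver across stages: roughly halve the
# learning rate and double the sample count at each stage (illustrative).
for lr, n_samples in [(0.01, 2_000), (0.005, 4_000), (0.0025, 8_000)]:
    vqs.n_samples = n_samples
    VMC.optimizer = optax.sgd(learning_rate=lr)
    # Each stage logs to its own prefix so earlier outputs are kept.
    VMC.run(n_iter=300, out=f"stage_{n_samples}")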
As for the other issue you describe: this might be because your chains are not sufficiently well thermalised. Does this also happen if you severely increase n_discard? By default, our MCMC chains are initialised at random configurations, so the first samples can be far from equilibrium; a way to avoid this is to sample a very large number of samples right after creating the state.
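To make that concrete, here is a minimal sketch of the thermalisation advice, assuming vqs is the Monte Carlo state from before. The numbers are illustrative, and recent NetKet versions spell the attribute n_discard_per_chain:

# Discard more sweeps per chain, then draw one large burn-in batch right
# after (re)creating the state, so that later estimates start from
# well-thermalised chains (numbers are illustrative).
vqs.n_discard_per_chain = 100   # spelled 'n_discard' in older releases
vqs.sample(n_samples=100_000)   # burn-in draw; the samples can be ignored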