Sampling from MCMixedState #1252
-
I trained a model that looks like this:

>>> vstate: nk.vqs.MCMixedState = load_best_state(3)
>>> print(vstate)
MCMixedState(hilbert = DoubledHilbert(Fock(n_max=4, N=3)), sampler = MetropolisSampler(rule = LocalRule(), n_chains = 16, n_sweeps = 6, reset_chains = False, machine_power = 2, dtype = <class 'numpy.float64'>), n_samples = 608)

Here, the stored diagonal samples give:

>>> x = vstate.diagonal.samples.reshape(-1, 3)
>>> print("Mean occupancy of each Fock mode:", np.mean(x, axis=0))
Mean occupancy of each Fock mode: [1.4875 1.46666667 1.19583333]

Now, I want to continue sampling in order to, say, get a better approximation of the mean occupancy, but I see that the diagonal samples (which are the only ones used for this kind of diagonal observable) change substantially with every new batch:

>>> # Create new sample:
>>> x = vstate.diagonal.sample().reshape(-1, 3)
>>> print("Mean occupancy of each Fock mode:", np.mean(x, axis=0))
Mean occupancy of each Fock mode: [1.26666667 1.15 1.125 ]
>>> # Repeat the process a couple more times:
>>> x = vstate.diagonal.sample().reshape(-1, 3)
>>> print("Mean occupancy of each Fock mode:", np.mean(x, axis=0))
Mean occupancy of each Fock mode: [0.57083333 0.50416667 0.49166667]
>>> x = vstate.diagonal.sample().reshape(-1, 3)
>>> print("Mean occupancy of each Fock mode:", np.mean(x, axis=0))
Mean occupancy of each Fock mode: [0.55833333 0.48333333 0.39166667]

Why is this? Is this to be expected? I would expect the samples to be very similar to the last batch of samples during training.
-
Most likely the chains are not at convergence. If I'm right, you'd notice by using an operator to compute those expectation values: you'd see that the split-R̂ diagnostic is off.
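For instance, something along these lines (a minimal sketch; it assumes `vstate` is the state loaded above and uses NetKet's bosonic number operator on the physical Fock space):

>>> import netket as nk
>>> hi = nk.hilbert.Fock(n_max=4, N=3)  # physical space; the doubling is handled internally
>>> n_0 = nk.operator.boson.number(hi, 0)  # occupation-number operator on mode 0
>>> stats = vstate.expect(n_0)  # Monte Carlo estimate with error bars and diagnostics
>>> print(stats.mean, stats.error_of_mean, stats.R_hat)

A split-R̂ close to 1 means the chains agree with each other; values noticeably above 1 mean the estimates (including the occupancies above) can't be trusted yet.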
This might happen if you are loading only the parameters but not the whole variational state (and its sampler state), because then you'd be resetting the Markov chains, and they might take a while to thermalise.
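For example (a sketch of the save/load pattern the NetKet docs describe for variational states via flax.serialization; I'm assuming it carries over to MCMixedState unchanged):

>>> import flax
>>> # Save the whole variational state (parameters and sampler state together):
>>> with open("vstate.mpack", "wb") as f:
...     f.write(flax.serialization.to_bytes(vstate))
>>> # Later, restore it into a freshly constructed state with the same structure:
>>> with open("vstate.mpack", "rb") as f:
...     vstate = flax.serialization.from_bytes(vstate, f.read())

Restored this way, the Markov chains resume from where they stopped at the end of training, so no fresh thermalisation is needed.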
Try running a long chain and discarding it before looking at the occupancies.
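In code, that would look something like this (a sketch; the burn-in length of 100 is an arbitrary guess you would tune until the estimate stabilises):

>>> for _ in range(100):          # burn-in: advance the chains...
...     vstate.diagonal.sample()  # ...and throw these samples away
>>> x = vstate.diagonal.sample().reshape(-1, 3)  # keep only post-burn-in samples
>>> print("Mean occupancy of each Fock mode:", np.mean(x, axis=0))

(If I remember the API correctly, a single sample(n_discard_per_chain=...) call would do the discarding for you.)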