This runs a model of mode 0 on synthetic data, with `--cv` indicating which cross-validation fold to leave out for validation (-1 indicates using all data) and `--gpu` indicating the GPU device to run on (if available).
Line 188 in validation.py gives the definition of all modes (numbered 0 to 8); in particular, the likelihood (first element of the tuple) and the input space (second element of the tuple) are specified.
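As a rough illustration only (the placeholder names below are not the actual identifiers in validation.py), each mode can be thought of as a tuple pairing a likelihood with an input space:

```python
# Hypothetical sketch of the mode table described above; the string values are
# placeholders, not the identifiers actually used in validation.py.
modes = [
    ("likelihood_A", "observed_covariates"),  # mode 0
    ("likelihood_A", "latent_space"),         # mode 1
    ("likelihood_B", "observed_covariates"),  # mode 2
    # ... further entries up to mode 8
]

likelihood, input_space = modes[0]  # mode 0 selects the first tuple
```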
Note there is a 10-fold split of the data, hence the cv trial numbers can go from -1 to 9.
`lr` and `lr_2` set the learning rates, with `lr_2` applying to the toroidal kernel lengthscales and the variational standard deviations of the latent state posterior (it is set lower for latent models, as described in the paper).
The flag `--ncvx` specifies the number of optimization runs (restarts) to perform; after completion, the best-fitting model is selected and saved.
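Conceptually, this is the standard multiple-restart pattern sketched below; `fit_model` is a stand-in for illustration, not the actual training routine in validation.py:

```python
import random

def fit_model(seed):
    """Placeholder for one optimization run; returns (model, final_loss)."""
    random.seed(seed)
    return {"seed": seed}, random.random()

ncvx = 3  # number of runs, as passed via --ncvx
best_loss, best_model = float("inf"), None
for run in range(ncvx):
    model, loss = fit_model(seed=run)  # independent run from a fresh initialization
    if loss < best_loss:               # keep only the best-fitting model
        best_loss, best_model = loss, model
```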
One can also specify `--batchsize`; larger batch sizes can speed up training, depending on the memory capacity of the hardware used.
For validation.py, the flag `--datatype` can be 0 (heteroscedastic Conway-Maxwell-Poisson) or 1 (modulated Poisson).
All trained models are stored in the `./checkpoint/` folder.
If you wish to run different modes or cross-validation runs grouped together above in parallel, run the command several times with only a single mode or cv trial each time.
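For instance, a minimal way to launch one process per cross-validation fold from Python is sketched below; only the `--cv` and `--gpu` flags mentioned above are passed, and any other flags you need can be appended to the argument list in the same way (whether all folds fit on one GPU at once depends on your hardware):

```python
# Launch one validation.py process per cross-validation fold and wait for all
# of them; add further flags (mode, learning rates, etc.) to the list as needed.
import subprocess
import sys

procs = [
    subprocess.Popen([sys.executable, "validation.py", "--cv", str(fold), "--gpu", "0"])
    for fold in range(-1, 10)  # cv trials -1 (all data) through 9
]
for p in procs:
    p.wait()
```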