Step size and number of leapfrog steps in HMC

Greetings, fellow greta users who spend lots of time tuning the number of leapfrog steps (the Lmin and Lmax arguments in hmc()) and their stepsize (epsilon). :wink:

Across datasets, I have been using L as few as ~10 to as many as ~100, and stepsizes epsilon ranging from the default 0.1 down to 0.001. Most if not all of the time, I have no idea what I am doing :stuck_out_tongue: The guidelines in the helpfiles and this forum, and @nick's scattered suggestions, have generally served me well (search "tune", "leapfrog", "stepsize" etc. in the forum). There are many technical papers on tuning these parameters, but they mostly point out that static HMC (what we have in greta currently) demands fine tuning and then sell NUTS as an easier alternative.
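
For concreteness, this is roughly how I have been passing those arguments (a minimal sketch with made-up data and illustrative values; hmc() also has a diag_sd argument that I leave at its default):

```r
library(greta)

# made-up toy data
x_raw <- rnorm(100)
y_raw <- 2 * x_raw + rnorm(100)
x <- as_data(x_raw)
y <- as_data(y_raw)

alpha <- normal(0, 10)
beta  <- normal(0, 10)
sigma <- lognormal(0, 1)

distribution(y) <- normal(alpha + beta * x, sigma)
m <- model(alpha, beta, sigma)

# the knobs in question: number of leapfrog steps (Lmin, Lmax) and stepsize (epsilon)
draws <- mcmc(m, sampler = hmc(Lmin = 10, Lmax = 15, epsilon = 0.01))
```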

I really like greta so I need to befriend the static HMC sampler… but for someone like me, it's really hard to get an intuition for what a stepsize actually is. Does it have a unit? For example, if I scale my response and/or explanatory variables, that changes the scale of my coefficients, so do I also need to change the step size and number of steps (since the explored parameter space has a different scale…)? Or is this unnecessary, because diag_sd also rescales the posterior space… ??? :flushed: In other words, I don't know what I'm doing because I don't know what a stepsize of 10 means relative to the scales of my parameters and data… is 10 too large or too small?

Anyway, I'm not asking for a definitive guide here (because there is none…), but just thought I'd gauge how people usually tune their HMC sampler day to day. I usually increase the number of steps incrementally by 5, with or without decreasing the stepsize at the same time. I still don't know what I'm doing.

You don't need to manually tune epsilon or diag_sd! Those are tuned automatically during the warm-up phase. The only thing to tune manually is the number of leapfrog steps (L).

More leapfrog steps means better sampling, but takes longer to evaluate. Execution time is linear in the number of leapfrog steps, so L = 20 will take about twice as long to run as L = 10 (in theory). There's an Lmin and an Lmax because in practice it's a good idea to jitter the number of steps a little, to avoid the sampler oscillating between two points in parameter space. Varying Lmin and always setting Lmax = Lmin + 5 should be all you need to do to resolve that.
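
In code that just looks something like this (reusing the model object m from above; the numbers are illustrative):

```r
# jitter the trajectory length a little by setting Lmax = Lmin + 5
draws <- mcmc(m, sampler = hmc(Lmin = 20, Lmax = 25))
```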

So in the first instance, I would make sure you have enough warmup to learn epsilon and diag_sd well. The tuning parameters are learned jointly from the warm-up samples across all chains, so running lots of chains (10-20) is a good option for improving the warm-up tuning. That's often better than a longer warmup, because running lots of chains is usually cheap in greta (and TensorFlow Probability), so long as you are not running low on memory. For smallish models, sampling with about 10 chains often takes no longer than sampling with 1 chain, even on a single core.
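
For example (the numbers here are illustrative, not a recommendation for any particular model):

```r
# lean on the warm-up: epsilon and diag_sd are learned jointly across chains,
# so many cheap chains plus a decent warm-up usually beats hand-tuning them
draws <- mcmc(
  m,
  warmup    = 1000,
  n_samples = 1000,
  chains    = 15
)
```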

If it's still sampling poorly, try increasing Lmin (and setting Lmax = Lmin + 5), though I wouldn't push Lmin above about 30 (normally 5-10 is fine). If it's not sampling fairly well with that many leapfrog steps, there's probably something wrong with your model definition, or some trick like hierarchical decentring that can help you.
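
(For anyone searching later: hierarchical decentring here refers to the non-centred parameterisation of a hierarchical prior. A minimal greta sketch, with made-up names and dimensions:)

```r
k     <- 5                 # number of group-level effects (made up)
mu    <- normal(0, 1)
sigma <- lognormal(0, 1)

# centred version (can mix poorly when sigma is weakly identified):
# b <- normal(mu, sigma, dim = k)

# decentred (non-centred) version of the same prior:
b_raw <- normal(0, 1, dim = k)
b     <- mu + b_raw * sigma
```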

There's lots of lore/dark arts around practical use of MCMC, so hopefully this is helpful. It would probably be a good idea for us to put some of this (including an explanation of the many-chains trick) in a user guide somewhere.

Added a note here: https://github.com/greta-dev/greta/issues/541 :slight_smile:

Thank you @nick for the summary. Reading it forced me to think about the modelling aspects again, particularly the priors.

I decided not to brute-force it by increasing Lmin, keeping it <= 30. After some struggle I realised that the parameters have quite different scales (i.e., the intercepts are an order of magnitude larger than the main effects, with a few interaction effects that are either weak or weakly identified from the noisy data). So I changed the variance of these coefficients from being fixed to being hierarchical, i.e., using a Normal-gamma prior in this case for some added regularisation. At first pass the chains improved a lot! But they weren't ideal. Then I realised that the gamma prior on the variance was not informative enough to stop the sampler from exploring relatively large variances. So I changed from gamma(2, 2) to gamma(10, 10) and that finally did the job, without changing the posterior medians too much.
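
In case it helps others, the change was along these lines (a simplified sketch; my real model has more terms, and the exact parameterisation of the Normal-gamma may differ):

```r
k <- 8  # number of coefficients (made up for this sketch)

# hierarchical prior: gamma on the coefficient variance,
# coefficients normal with sd = sqrt(variance)
# tau2 <- gamma(2, 2)     # too diffuse: allowed relatively large variances
tau2  <- gamma(10, 10)    # tighter, concentrated around 1
beta  <- normal(0, sqrt(tau2), dim = k)
```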

Lesson learned: if tuning the sampler doesn't work well, there is probably some inefficiency in the model specification. Probably the best place to start digging is the priors… ?

Thanks so much for that, @hrlai - sounds like you were able to explore some good options to get a good result!