I’m trying to implement a particle filter in Greta, and am curious about whether anybody has suggestions on how to combine the likelihoods.
Basically, the output of a particle filter run gives me N estimates of potential trajectories for a process (with a non-linear deterministic function and non-normally distributed process noise). The likelihood is then calculated as a sum across all N trajectories, based on the hypothesized observation-error distribution (a truncated normal, centered on each of the N trajectories) and the observed value y. The classic way to implement this in other MCMC-based samplers (e.g. Stan) would be to increment the log-likelihood directly (e.g. via `target +=`) based on the output of the filter. But, as I understand Greta, I can’t interact with the likelihood directly, and instead need to define it in terms of a distribution.
My understanding is that the “correct” way to implement this in Greta would be to make use of the `mixture()` or `joint()` functions, and to define y as the outcome of a mixture of N normally distributed variables (one per particle), with equal weighting across the N particles.
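To make that concrete, here is an untested sketch of what I have in mind, assuming Greta's `mixture()` accepts a list of components plus equal weights; the toy inputs, the `lognormal` prior on the observation sd, and all variable names are illustrative placeholders:

```r
library(greta)

# toy placeholder inputs -- in practice these come from the particle filter
N    <- 3                              # number of particles (1e3-1e5 in practice)
traj <- matrix(rnorm(10 * N), 10, N)   # T x N matrix of particle trajectories
y    <- as_data(rnorm(10))             # observed series, length T

# observation-error scale (prior choice is illustrative only)
obs_sd <- lognormal(0, 1)

# one normal component per particle, centred on that particle's trajectory
components <- lapply(seq_len(N), function(i) normal(traj[, i], obs_sd))

# equally weighted mixture across the N particles
distribution(y) <- do.call(mixture, c(components, list(weights = rep(1 / N, N))))
```

With N in the 1e3–1e5 range this builds one component per particle, which is where the awkwardness comes from.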
BUT the problem that I’m running into is that N is typically very large (e.g. 1e3–1e5), so spelling out the mixture components becomes awkward. One potential hack I’ve been toying with is to duplicate the time series N times into a single long column, concatenate the N filter trajectories into a single long column of predictor variables, and then hand Greta something like:
distribution(concatenated_y) <- normal(concatenated_x, sd)
My understanding is that this would give a similar result to an equally weighted mixture distribution, though it feels a bit hacky.
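For reference, the concatenation version would look something like the following (untested sketch; toy inputs and the prior on `sd` are illustrative placeholders):

```r
library(greta)

# toy placeholder inputs -- in practice these come from the particle filter
N    <- 3                              # number of particles
traj <- matrix(rnorm(10 * N), 10, N)   # T x N matrix of particle trajectories
y    <- rnorm(10)                      # observed series, length T

# stack into single long columns (column i of traj pairs with copy i of y)
concatenated_y <- as_data(rep(y, times = N))
concatenated_x <- as_data(as.vector(traj))

# observation-error scale (prior choice is illustrative only)
sd <- lognormal(0, 1)

distribution(concatenated_y) <- normal(concatenated_x, sd)
```

One thing I’m unsure about: written this way, each duplicated copy of y is treated as an independent observation, so the joint density is a product of all N × T normal terms, whereas an equally weighted mixture would average the N densities before taking the log — which may or may not matter for my purposes.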
Does anybody know of an alternative method for doing what I’m trying to do, or have thoughts on why either of the approaches described above might be incorrect? Alternatively, I’ve been poking around the new HMM extension of Greta, and I get the sense that some of what I’m trying to do may already be implemented there.