Particle filter?

Hi all,

I’m trying to implement a particle filter in Greta, and am curious about whether anybody has suggestions on how to combine the likelihoods.

Basically, the output of a particle filter run gives me N estimates of potential trajectories for a process (with a non-linear deterministic function and non-normally distributed process noise). The likelihood is then calculated as the sum across all N particles, based on the hypothesized observation error distribution (a truncated normal, centered around each of the N trajectories) and the observed value y. The classic way to implement this in other MCMC-based samplers (e.g. Stan) would be to just increment the log-likelihood directly with the output of the filter. But, as I understand greta, I can’t interact with the likelihood directly, and instead need to define it in terms of a distribution.
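
To make that concrete, here is a minimal sketch (in plain R, outside greta) of the per-time-step likelihood I have in mind, using the truncnorm package for the truncated-normal density; the particle values, truncation bounds, and observation sd below are just placeholders:

library(truncnorm)

N <- 1000
x_particles <- rnorm(N)   # the N trajectory values at one time step (placeholder)
y <- 0.3                  # observed value at that time step (placeholder)
obs_sd <- 0.1             # hypothesized observation error sd (placeholder)

# observation density of y under each particle's truncated normal
# (placeholder truncation at zero)
dens <- dtruncnorm(y, a = 0, b = Inf, mean = x_particles, sd = obs_sd)

# combine across particles with equal weights: average the densities, then
# take the log to get this time step's contribution to the log-likelihood
log_lik_t <- log(mean(dens))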

My understanding is that the “correct” way to implement this in greta would be to make use of the mixture() or joint() functions, and to define y as the outcome of a mixture of N different normally distributed variables (one for each particle), with equal weighting across the N particles.

BUT the problem that I’m running into is that N is typically very large (e.g. 1e3-1e5), so the syntax gets awkward. One potential hack that I’ve been toying with is to duplicate the time series N times into a single long column, concatenate the N filter trajectories into a single long column of predictor variables, and then hand greta something like:

distribution(concatenated_y) <- normal(concatenated_x, sd)

My understanding is that this would give a similar result to an equally weighted mixture distribution, though it feels a bit hacky.
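
For concreteness, a minimal sketch of that stacking construction might look like the following; y_obs, x_particles, and the observation model here are placeholders, and in a real model the trajectories would be greta arrays produced by the filter/process model rather than fixed data:

library(greta)

# placeholder data: T time points and N particle trajectories
T_len <- 50
N <- 1000
y_obs <- rnorm(T_len)                              # observed series (placeholder)
x_particles <- matrix(rnorm(T_len * N), T_len, N)  # one column per particle (placeholder)

# stack the observed series once per particle, and flatten the trajectory
# matrix column-wise, so the two long columns line up particle by particle
concatenated_y <- as_data(rep(y_obs, times = N))
concatenated_x <- as_data(c(x_particles))

# observation error and a single vectorized likelihood statement
sd <- lognormal(0, 1)
distribution(concatenated_y) <- normal(concatenated_x, sd)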

Does anybody know of an alternative method for doing what I’m trying to do, or see a reason why either of the approaches described above might be incorrect? Alternatively, I’ve been poking around the new greta.hmm extension, and I get the sense that some of what I’m trying to do is already implemented there.

Many thanks,

Interesting! I’d be keen to have particle filters built into greta, but haven’t had time to think about it at all deeply.

Viewing it as a mixture makes sense. You could hack your way into a mixture over a long list of per-particle distributions with R’s do.call(), with something like this:

library(greta)

# number of particles
N <- 10

# single observation
y <- as_data(rnorm(1))

# particle locations (you can evolve or model these somehow)
means <- as_data(rnorm(N))

# mixture weights
weights <- c(uniform(0, 1, dim = N - 1), 0.5)
weights <- weights / sum(weights)

# define an observation model for each particle location
distributions <- lapply(seq_len(N), function(i) normal(means[i], sd = 0.1))
# trick to define all likelihoods, based on weights, equivalent to:
# likelihood <- mixture(distributions[[1]],
#                      distributions[[2]],
#                      distributions[[3]],
#                      <etc>
#                      weights = weights)
args <- c(distributions, list(weights = weights))
likelihood <- do.call(mixture, args)

# use the mixture as the observation model (or as a variable in another distribution)
distribution(y) <- likelihood

However, with N > 1e3 that’s going to be computationally intensive, and this way doesn’t take advantage of TensorFlow’s ability to vectorise and parallelise operations. Coding this approach up in a new distribution (like in greta.hmm) would be much better.
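
For what it’s worth, the vectorised calculation such a distribution would need to implement is just a log-sum-exp over particles at each time step; here is a plain-R sketch of that density computation (all names and values are placeholders, and in a distribution it would be the same operations on tensors):

# placeholder data: T time points, N particles, fixed observation sd
T_len <- 50
N <- 1000
y_obs <- rnorm(T_len)
x_particles <- matrix(rnorm(T_len * N), T_len, N)
obs_sd <- 0.1

# T x N matrix of per-particle log-densities, computed in one vectorised call
log_dens <- matrix(
  dnorm(rep(y_obs, times = N), mean = c(x_particles), sd = obs_sd, log = TRUE),
  nrow = T_len, ncol = N
)

# equally weighted log-sum-exp across particles, done row-wise for stability
m <- apply(log_dens, 1, max)
log_lik_t <- m + log(rowMeans(exp(log_dens - m)))

# total log-likelihood across time steps
log_lik <- sum(log_lik_t)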

I’ve not worked with particle filters before, and have little time right now for adding new features, but I’d be very happy to help someone else take this on!

Ah, thanks! That makes sense. I hadn’t realized I could wrap it in a do.call().

I’ll think about this a bit more, in that case. I agree that it’s a shame not to vectorize, at least across particles. I’ll let you know if I try putting together a new distribution - from the source code in the repo, the definitions seem pretty doable.
