Right, this is actually something I’ve been thinking about recently, interesting that the post just appeared!

In the first version of the post, Gelman casually mentioned needing to account for the log Jacobian adjustment too. The post has since been updated to make clear that in the *linear* case that happens not to be necessary, but in general it is. The need (in general) for a log Jacobian adjustment is the reason that assigning a distribution over the result of an operation is specifically disallowed in greta - we can't tell whether the user has done something sensible, or whether the log Jacobian adjustment is missing and the results will be incorrect. This is part of the design principles of greta, and one way in which it differs from Stan (see note below).
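To see why the linear case gets away without the adjustment: if you put a prior on s = f(x) but sample x, the density on x picks up an extra log|ds/dx| term. For linear f that term is a constant, so it can't change the posterior; for nonlinear f it varies with x and can't be dropped. A quick numerical sketch (plain Python, not greta):

```python
import math

def log_abs_deriv(f, x, eps=1e-6):
    """Numerical log|df/dx| at a point (central difference, illustration only)."""
    return math.log(abs((f(x + eps) - f(x - eps)) / (2 * eps)))

linear = lambda x: 3.0 * x + 1.0  # ds/dx = 3 everywhere
nonlinear = lambda x: x ** 2      # ds/dx = 2x, depends on x

points = [0.5, 1.0, 2.0]
lin_adj = [log_abs_deriv(linear, x) for x in points]
nonlin_adj = [log_abs_deriv(nonlinear, x) for x in points]

# The linear adjustments are all the same constant (log 3), so dropping
# them only shifts the log density by a constant; the nonlinear ones
# differ across points, so dropping them changes the posterior.
print(lin_adj)
print(nonlin_adj)
```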

If you really want to do this in greta, you can modify the distribution function so that it doesn’t error (delete the lines in the previous link). I think it will then run without error.

I was at a workshop recently where we spent a little time thinking about this idea of putting priors on summary statistics, and I was specifically interested in whether it would be possible to compute the relevant log Jacobian adjustment automatically using TensorFlow's autodiff. I wrote a skeleton package to do this: greta.funprior. We called this a 'fun prior', because you can still think of this as a generative model if you consider that you are setting a prior on a parametric function, rather than a deterministic summary statistic. So a *function* prior -> fun prior. We were surprised we couldn't find much literature on this, though there may be an established name for this interpretation that we missed.

The package does not yet work - the core bit of TensorFlow code to get the Jacobian matrix hasn't actually been written (and I won't have time any time soon) - but ideally you would specify the model exactly as you did here, and `distribution` would take care of the relevant adjustment (example here).
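For a square map s = f(x), the adjustment the package would compute is log|det J|, where J is the Jacobian of f. Here's a rough stand-in in plain Python (greta.funprior would get J from TensorFlow's autodiff instead of finite differences, and none of these function names are the package's API):

```python
import math

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f: R^n -> R^n at x (autodiff stand-in)."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        up = list(x); up[j] += eps
        dn = list(x); dn[j] -= eps
        fu, fd = f(up), f(dn)
        for i in range(n):
            J[i][j] = (fu[i] - fd[i]) / (2 * eps)
    return J

def log_abs_det_2x2(J):
    return math.log(abs(J[0][0] * J[1][1] - J[0][1] * J[1][0]))

# A nonlinear summary of two parameters: their sum and their product.
f = lambda x: [x[0] + x[1], x[0] * x[1]]

x = [2.0, 0.5]
adjustment = log_abs_det_2x2(jacobian(f, x))
# Analytically det J = x[0] - x[1] = 1.5, so the adjustment is log 1.5.
```

Note this only covers square maps; a prior on a lower-dimensional summary statistic is exactly where things get subtler, which is part of why the core bit is still unwritten.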

Note: Stan mixes a pretty safe and user-friendly API (with calls like `n ~ poisson(lambda)`) with a more advanced, flexible but dangerous API (where you can add anything to the target density with calls like `target += poisson_lpmf(n | lambda)`), so you can do weirder stuff in your model if you want to. But greta is designed so that the general API is user-friendly and safe, and you have to develop a new function or distribution if you want to do something weird and potentially dangerous. That way the onus is on a developer (who hopefully knows about the particular problem) to get things right, and they can distribute the new functionality in greta or an extension package (see the new template for an extension package). For example, greta's `mixture` function sets up mixture distributions without all the `target +=` stuff needed in Stan.
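To give a flavour of the bookkeeping that `mixture` hides: in Stan you marginalise the component indicator yourself, incrementing the target with a log-sum-exp over components. Here's that computation for a two-component normal mixture in plain Python (names illustrative):

```python
import math

def normal_logpdf(y, mu, sigma):
    return (-0.5 * math.log(2 * math.pi) - math.log(sigma)
            - 0.5 * ((y - mu) / sigma) ** 2)

def log_sum_exp(a, b):
    # Numerically stable log(exp(a) + exp(b))
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def mixture_logpdf(y, w, mu1, sigma1, mu2, sigma2):
    # log( w * N(y | mu1, sigma1) + (1 - w) * N(y | mu2, sigma2) ),
    # i.e. what a Stan `target += log_sum_exp(...)` line computes.
    return log_sum_exp(
        math.log(w) + normal_logpdf(y, mu1, sigma1),
        math.log(1 - w) + normal_logpdf(y, mu2, sigma2),
    )

lp = mixture_logpdf(0.0, 0.3, -1.0, 1.0, 2.0, 1.5)
```

greta's `mixture` wraps exactly this kind of marginalisation up behind the safe API.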