OK, I figured now was the time to get greta.gp working.

There's a new branch and pull request open that depends on greta 0.3.1, which is itself still a release candidate (waiting on CRAN to accept it) with an open pull request. You can install both with:

```
remotes::install_github("greta-dev/greta@rc_0_3_1")
remotes::install_github("goldingn/greta.gp@roll_forward")
```

Hopefully both will be merged into the master branches (and greta accepted on CRAN) within a couple of days, at which point these installation instructions will be redundant.

Here's how you would fit the same two models with greta.gp:

linear predictor (latent Gaussian process) version:

```
library(greta)
library(greta.gp)
# priors on the intercept, observation noise, and kernel hyperparameters
beta_0 <- normal(0, 1)
sigma_e <- cauchy(0, 5, truncation = c(0, Inf))
sigma_u <- cauchy(0, 5, truncation = c(0, Inf))
lengthscale <- normal(0, 1, truncation = c(0, Inf))
# Matern 3/2 kernel, and a latent GP evaluated at the observation locations
K <- mat32(lengthscale, sigma_u ^ 2)
f_gp <- gp(pts, K, tol = 1e-9)
f <- f_gp + beta_0
# observation model
distribution(y1) <- normal(f, sigma_e)
m1 <- model(beta_0, lengthscale, sigma_u)
draws1 <- mcmc(m1, one_by_one = TRUE)
```

observed Gaussian process version:

```
beta_0 <- normal(0, 1)
sigma_e <- cauchy(0, 5, truncation = c(0, Inf))
sigma_u <- cauchy(0, 5, truncation = c(0, Inf))
lengthscale <- normal(0, 1, truncation = c(0, Inf))
# covariance matrix
K1 <- mat32(lengthscale, sigma_u ^ 2)
K2 <- white(sigma_e ^ 2)
K <- K1 + K2
# define a multivariate normal over the transposed data (a single row of n
# observations); multivariate_normal treats each row as an independent MVN,
# so this makes the observations correlated with one another
mu <- beta_0 + zeros(1, n)
sigma <- K(pts)
ty <- t(y1)
distribution(ty) <- multivariate_normal(mu, sigma)
m2 <- model(beta_0, lengthscale, sigma_u)
draws2 <- mcmc(m2, one_by_one = TRUE)
```

A nice thing about this is that you can use `greta.gp::project()` to get a greta array for the GP values at new locations, and `greta::calculate()` to get posterior samples for them. So you can easily do Gaussian process prediction at new locations, even if you didn't know the prediction locations before sampling.

```
s_seq <- seq(0, 1, length.out = 10)
pts_new <- expand.grid(s1 = s_seq,
                       s2 = s_seq)
```

```
f_gp_mn_new <- project(f_gp, pts_new)
f_mn_new <- f_gp_mn_new + beta_0
f_mn_new_draws <- calculate(f_mn_new, draws1)
f_mn_new_est <- colMeans(as.matrix(f_mn_new_draws))
```

That only works with the first (latent GP) example, though. For the latter case (an observed GP with noise), you have to do the prediction yourself:

```
# cross-covariance and inverse training covariance; use kernel K1 (without the
# white-noise component) since we are predicting the latent function
K_xstar_x <- K1(pts_new, pts)
K_xstar_xstar <- K1(pts_new, pts_new)
i_sigma <- solve(sigma)
# standard GP posterior mean: m(x*) + K(x*, x) Sigma^-1 (y - m(x))
f_mn_new <- beta_0 + K_xstar_x %*% (i_sigma %*% (as.matrix(y1) - beta_0))
f_mn_new_draws <- calculate(f_mn_new, draws2)
f_mn_new_est <- colMeans(as.matrix(f_mn_new_draws))
```

Note that both of these only compute posterior estimates of the mean of the GP at the new locations, not realisations of functions drawn from the GP. To get posterior realisations of the function values, you'd need to compute the posterior covariance matrix as well, and then draw from a multivariate normal again.
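For the observed-GP example, that might look something like the following rough sketch (untested; it assumes the objects from the code above, and uses `MASS::mvrnorm()` for the multivariate normal draws):

```
# posterior covariance of the latent function at the new locations:
# K(x*, x*) - K(x*, x) Sigma^-1 K(x, x*)
K_post <- K_xstar_xstar - K_xstar_x %*% (i_sigma %*% t(K_xstar_x))
# posterior draws of the (flattened) covariance matrix
K_post_draws <- calculate(K_post, draws2)
mn_mat <- as.matrix(f_mn_new_draws)
K_mat <- as.matrix(K_post_draws)
n_new <- nrow(pts_new)
# one function realisation per posterior draw of the hyperparameters
f_sims <- t(sapply(seq_len(nrow(mn_mat)), function(i) {
  MASS::mvrnorm(1, mn_mat[i, ], matrix(K_mat[i, ], n_new, n_new))
}))
```

Each row of `f_sims` is then a draw of the function values at `pts_new`, so quantiles across rows give pointwise credible intervals that account for the full posterior covariance.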