**examples/case_studies/GEV.myst.md** (22 additions, 23 deletions)
````diff
@@ -5,7 +5,7 @@ jupytext:
     format_name: myst
     format_version: 0.13
 kernelspec:
-  display_name: default
+  display_name: eabm
   language: python
   name: python3
 ---
````
````diff
@@ -38,14 +38,13 @@ Note that this parametrization of the shape parameter $\xi$ is opposite in sign
 We will use the example of the Port Pirie annual maximum sea-level data used in {cite:t}`coles2001gev`, and compare with the frequentist results presented there.

 ```{code-cell} ipython3
-import arviz as az
+import arviz.preview as az
 import matplotlib.pyplot as plt
 import numpy as np
 import pymc as pm
 import pymc_extras.distributions as pmx
-import pytensor.tensor as pt

-from arviz.plots import plot_utils as azpu
+az.style.use("arviz-variat")
 ```

 ## Data
````
````diff
@@ -112,18 +111,13 @@ Let's get a feel for how well our selected priors cover the range of the data:
 And we can look at the sampled values of the parameters, using the `plot_posterior` function, but passing in the `idata` object and specifying the `group` to be `"prior"`:
 To compare with the results given in {cite:t}`coles2001gev`, we approximate the maximum likelihood estimates (MLE) using the mode of the posterior distributions (the *maximum a posteriori* or MAP estimate). These are close when the prior is reasonably flat around the posterior estimate.
 The MLE results given in {cite:t}`coles2001gev` are:
 Note that extracting the MLE estimates from our inference involves accessing some of the ArviZ back-end functions to bash the xarray into something examinable:
````
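The notebook's MAP extraction goes through ArviZ internals, which this diff doesn't show. As a rough, dependency-light sketch of the same idea, the MAP for a single parameter can be approximated as the peak of a histogram over its posterior draws. The draws below are synthetic stand-ins, not the Port Pirie results:

```python
import numpy as np

def posterior_mode(samples, bins=100):
    """Approximate the MAP estimate as the midpoint of the fullest histogram bin."""
    counts, edges = np.histogram(samples, bins=bins)
    i = np.argmax(counts)
    return 0.5 * (edges[i] + edges[i + 1])

# Synthetic "posterior" draws for illustration only (hypothetical values).
rng = np.random.default_rng(42)
draws = rng.normal(loc=3.87, scale=0.03, size=10_000)
print(posterior_mode(draws))  # close to 3.87
```

With a flat prior and this many draws, the histogram peak tracks the MLE closely; a kernel density estimate would give a smoother version of the same approximation.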
**examples/case_studies/factor_analysis.myst.md** (31 additions, 36 deletions)
````diff
@@ -6,7 +6,7 @@ jupytext:
     format_name: myst
     format_version: 0.13
 kernelspec:
-  display_name: Python 3 (ipykernel)
+  display_name: eabm
   language: python
   name: python3
 myst:
````
````diff
@@ -33,7 +33,7 @@ Factor analysis is a widely used probabilistic model for identifying low-rank st
 :::

 ```{code-cell} ipython3
-import arviz as az
+import arviz.preview as az
 import numpy as np
 import pymc as pm
 import pytensor.tensor as pt
````
````diff
@@ -42,7 +42,6 @@ import seaborn as sns
 import xarray as xr

 from matplotlib import pyplot as plt
-from matplotlib.lines import Line2D
 from numpy.random import default_rng
 from xarray_einstats import linalg
 from xarray_einstats.stats import XrContinuousRV
````
````diff
@@ -52,7 +51,7 @@ print(f"Running on PyMC v{pm.__version__}")

 ```{code-cell} ipython3
 %config InlineBackend.figure_format = 'retina'
-az.style.use("arviz-darkgrid")
+az.style.use("arviz-variat")

 np.set_printoptions(precision=3, suppress=True)
 RANDOM_SEED = 31415
````
````diff
@@ -128,11 +127,13 @@ with pm.Model(coords=coords) as PPCA:
 At this point, there are already several warnings regarding failed convergence checks. We can see further problems in the trace plot below. This plot shows the path taken by each sampler chain for a single entry in the matrix $W$ as well as the average evaluated over samples for each chain.
 Each chain appears to have a different sample mean and we can also see that there is a great deal of autocorrelation across chains, manifest as long-range trends over sampling iterations.
````
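The differing per-chain means are a symptom of the rotational non-identifiability of unconstrained factor analysis: the likelihood depends on $W$ and $F$ only through the product $WF$, which any orthogonal rotation of the latent space leaves unchanged, so each chain can settle on a different rotation. A small numpy sketch (shapes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 5, 2, 100          # observed dims, latent dims, samples (arbitrary)
W = rng.normal(size=(d, k))  # loadings
F = rng.normal(size=(k, n))  # latent factors

# An orthogonal rotation Q leaves the product (and hence the likelihood)
# unchanged: (W Q)(Q^T F) = W (Q Q^T) F = W F.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

same = np.allclose(W @ F, (W @ Q) @ (Q.T @ F))
print(same)  # True
```

Since every rotation of $(W, F)$ has identical posterior density, the sampler wanders between rotated solutions, which shows up exactly as the long-range trends described above.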
````diff
@@ -194,13 +195,7 @@ with pm.Model(coords=coords) as PPCA_identified:
     F = pm.Normal("F", dims=("latent_columns", "rows"))
     sigma = pm.HalfNormal("sigma", 1.0)
     X = pm.Normal("X", mu=W @ F, sigma=sigma, observed=Y, dims=("observed_columns", "rows"))
 $W$ (and $F$!) now have entries with identical posterior distributions as compared between sampler chains, although it's apparent that some autocorrelation remains.
````
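The diff doesn't show how `PPCA_identified` constructs $W$, but a common identification strategy (assumed here for illustration) is to restrict $W$ to be lower-triangular with a positive diagonal, which rules out the rotations that plague the unconstrained model. A numpy illustration that a non-trivial rotation breaks this constraint:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 4, 2
# Lower-triangular loadings with a strictly positive diagonal -- a common
# identification constraint (assumed, not taken from the notebook).
W = np.tril(rng.normal(size=(d, k)))
W[np.arange(k), np.arange(k)] = np.abs(W[np.arange(k), np.arange(k)]) + 0.1

# Rotating by any non-identity orthogonal Q produces loadings that violate
# the constraint, so no rotated copy of (W, F) remains in the parameter space.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
WQ = W @ Q
print(abs(WQ[0, 1]) > 1e-6)  # True: the zero above the diagonal is gone
```

Pinning down a single representative per equivalence class is what lets the chains agree on $W$ and $F$, as the surrounding text reports.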
````diff
@@ -251,29 +246,28 @@ When we compare the posteriors calculated using MCMC and VI, we find that (for a
````