<?xml version='1.0' encoding='UTF-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <id>https://docs.pymc.io/projects/examples/en/latest/</id>
  <title>PyMC Examples</title>
  <updated>2024-12-23T01:10:53.423230+00:00</updated>
  <link href="https://docs.pymc.io/projects/examples/en/latest/"/>
  <link href="https://docs.pymc.io/projects/examples/en/latest/blog/atom.xml" rel="self"/>
  <generator uri="https://ablog.readthedocs.io/" version="0.11.12">ABlog</generator>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-missing-values-in-covariates.html</id>
    <title>GLM-missing-values-in-covariates</title>
    <updated>2024-11-09T00:00:00+00:00</updated>
    <author>
      <name>Jonathan Sedar</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Minimal Reproducible Example: Workflow to handle missing data in multiple covariates (numeric predictor features)&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-missing-values-in-covariates.html"/>
    <summary>Minimal Reproducible Example: Workflow to handle missing data in multiple covariates (numeric predictor features)</summary>
    <category term="auto-imputation" label="auto-imputation"/>
    <category term="bayesian-workflow" label="bayesian-workflow"/>
    <category term="linear-regression" label="linear-regression"/>
    <category term="missing-covariate-values" label="missing-covariate-values"/>
    <category term="missing-values" label="missing-values"/>
    <published>2024-11-09T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-ordinal-features.html</id>
    <title>GLM-ordinal-features</title>
    <updated>2024-10-27T00:00:00+00:00</updated>
    <author>
      <name>Jonathan Sedar</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Here we use an &lt;strong&gt;ordinal exogenous predictor feature&lt;/strong&gt; within a model:&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-ordinal-features.html"/>
    <summary>Here we use an ordinal exogenous predictor feature within a model:</summary>
    <category term="bayesian-workflow" label="bayesian-workflow"/>
    <category term="glm" label="glm"/>
    <category term="ordinal-features" label="ordinal-features"/>
    <category term="ordinal-regression" label="ordinal-regression"/>
    <category term="r-datasets" label="r-datasets"/>
    <published>2024-10-27T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/GLM-simpsons-paradox.html</id>
    <title>Simpson’s paradox</title>
    <updated>2024-09-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;a class="reference external" href="https://en.wikipedia.org/wiki/Simpson%27s_paradox"&gt;Simpson’s Paradox&lt;/a&gt; describes a situation where there might be a negative relationship between two variables within a group, but when data from multiple groups are combined, that relationship may disappear or even reverse sign. The gif below (from the Simpson’s Paradox &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Simpson%27s_paradox"&gt;Wikipedia&lt;/a&gt; page) demonstrates this very nicely.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/GLM-simpsons-paradox.html"/>
    <summary>Simpson’s Paradox describes a situation where there might be a negative relationship between two variables within a group, but when data from multiple groups are combined, that relationship may disappear or even reverse sign. The gif below (from the Simpson’s Paradox Wikipedia page) demonstrates this very nicely.</summary>
    <category term="Simpson'sparadox" label="Simpson's paradox"/>
    <category term="causalinference" label="causal inference"/>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <category term="linearmodel" label="linear model"/>
    <category term="posteriorpredictive" label="posterior predictive"/>
    <category term="regression" label="regression"/>
    <published>2024-09-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/CFA_SEM.html</id>
    <title>Confirmatory Factor Analysis and Structural Equation Models in Psychometrics</title>
    <updated>2024-09-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;“Evidently, the notions of relevance and dependence are far more basic to human reasoning than the numerical values attached to probability judgments…the language used for representing probabilistic information should allow assertions about dependency relationships to be expressed qualitatively, directly, and explicitly” - Pearl in &lt;em&gt;Probabilistic Reasoning in Intelligent Systems&lt;/em&gt; &lt;span id="id1"&gt;Pearl [&lt;a class="reference internal" href="case_studies/CFA_SEM.html#id83" title="Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of plausible Inference. Morgan Kaufman, 1985."&gt;1985&lt;/a&gt;]&lt;/span&gt;&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/CFA_SEM.html"/>
    <summary>“Evidently, the notions of relevance and dependence are far more basic to human reasoning than the numerical values attached to probability judgments…the language used for representing probabilistic information should allow assertions about dependency relationships to be expressed qualitatively, directly, and explicitly” - Pearl in Probabilistic Reasoning in Intelligent Systems [Pearl, 1985]</summary>
    <category term="cfa" label="cfa"/>
    <category term="regression" label="regression"/>
    <category term="sem" label="sem"/>
    <published>2024-09-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/spatial/malaria_prevalence.html</id>
    <title>The prevalence of malaria in the Gambia</title>
    <updated>2024-08-24T00:00:00+00:00</updated>
    <author>
      <name>Jonathan Dekermanjian</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;A spatial model of the prevalence of malaria in the Gambia.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/spatial/malaria_prevalence.html"/>
    <summary>A spatial model of the prevalence of malaria in the Gambia.</summary>
    <category term="autoregressive" label="autoregressive"/>
    <category term="countdata" label="count data"/>
    <category term="spatial" label="spatial"/>
    <published>2024-08-24T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/diagnostics_and_criticism/model_averaging.html</id>
    <title>Model Averaging</title>
    <updated>2024-08-23T00:00:00+00:00</updated>
    <author>
      <name>Osvaldo Martin</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;When confronted with more than one model we have several options. One of them is to perform model selection as exemplified by the PyMC examples &lt;a class="reference external" href="https://www.pymc.io/projects/docs/en/stable/learn/core_notebooks/model_comparison.html#model-comparison" title="(in PyMC v5.19.1)"&gt;&lt;span&gt;Model comparison&lt;/span&gt;&lt;/a&gt; and the &lt;a class="reference internal" href="generalized_linear_models/GLM-model-selection.html#glm-model-selection"&gt;&lt;span class="std std-ref"&gt;GLM: Model Selection&lt;/span&gt;&lt;/a&gt;; it is usually a good idea to also include posterior predictive checks in order to decide which model to keep. Discarding all models except one is equivalent to affirming that, among the evaluated models, one is correct (under some criterion) with probability 1 and the rest are incorrect. In most cases this will be an overstatement that ignores the uncertainty we have in our models. This is somewhat similar to computing the full posterior and then just keeping a point estimate like the posterior mean; we may become overconfident about what we really know. You can also browse the &lt;span class="xref std std-doc"&gt;blog/tag/model-comparison&lt;/span&gt; tag to find related posts.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/diagnostics_and_criticism/model_averaging.html"/>
    <summary>When confronted with more than one model we have several options. One of them is to perform model selection as exemplified by the PyMC examples pymc:model_comparison and the GLM-model-selection; it is usually a good idea to also include posterior predictive checks in order to decide which model to keep. Discarding all models except one is equivalent to affirming that, among the evaluated models, one is correct (under some criterion) with probability 1 and the rest are incorrect. In most cases this will be an overstatement that ignores the uncertainty we have in our models. This is somewhat similar to computing the full posterior and then just keeping a point estimate like the posterior mean; we may become overconfident about what we really know. You can also browse the blog/tag/model-comparison tag to find related posts.</summary>
    <category term="modelaveraging" label="model averaging"/>
    <category term="modelcomparison" label="model comparison"/>
    <published>2024-08-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/Time_Series_Generative_Graph.html</id>
    <title>Time Series Models Derived From a Generative Graph</title>
    <updated>2024-07-23T00:00:00+00:00</updated>
    <author>
      <name>Juan Orduz and Ricardo Vieira</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;In this notebook, we show how to model and fit a time series model starting from a generative graph. In particular, we explain how to use &lt;code class="xref py py-func docutils literal notranslate"&gt;&lt;span class="pre"&gt;scan&lt;/span&gt;&lt;/code&gt; to loop efficiently inside a PyMC model.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/Time_Series_Generative_Graph.html"/>
    <summary>In this notebook, we show how to model and fit a time series model starting from a generative graph. In particular, we explain how to use scan to loop efficiently inside a PyMC model.</summary>
    <category term="time-series" label="time-series"/>
    <published>2024-07-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/HSGP-Advanced.html</id>
    <title>Gaussian Processes: HSGP Advanced Usage</title>
    <updated>2024-06-28T00:00:00+00:00</updated>
    <author>
      <name>Maxim Kochurov</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The Hilbert Space Gaussian process approximation is a low-rank GP approximation that is particularly well-suited to usage in probabilistic programming languages like PyMC.  It approximates the GP using a pre-computed and fixed set of basis functions that don’t depend on the form of the covariance kernel or its hyperparameters.  It’s a &lt;em&gt;parametric&lt;/em&gt; approximation, so prediction in PyMC can be done as one would with a linear model via &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;pm.Data&lt;/span&gt;&lt;/code&gt; or &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;pm.set_data&lt;/span&gt;&lt;/code&gt;.  You don’t need to define the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;.conditional&lt;/span&gt;&lt;/code&gt; distribution that non-parametric GPs rely on.  This makes it &lt;em&gt;much&lt;/em&gt; easier to integrate an HSGP, instead of a GP, into your existing PyMC model.  Additionally, unlike many other GP approximations, HSGPs can be used anywhere within a model and with any likelihood function.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/HSGP-Advanced.html"/>
    <summary>The Hilbert Space Gaussian process approximation is a low-rank GP approximation that is particularly well-suited to usage in probabilistic programming languages like PyMC.  It approximates the GP using a pre-computed and fixed set of basis functions that don’t depend on the form of the covariance kernel or its hyperparameters.  It’s a parametric approximation, so prediction in PyMC can be done as one would with a linear model via pm.Data or pm.set_data.  You don’t need to define the .conditional distribution that non-parametric GPs rely on.  This makes it much easier to integrate an HSGP, instead of a GP, into your existing PyMC model.  Additionally, unlike many other GP approximations, HSGPs can be used anywhere within a model and with any likelihood function.</summary>
    <category term="gaussianprocesses" label="gaussian processes"/>
    <published>2024-06-28T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/HSGP-Basic.html</id>
    <title>Gaussian Processes: HSGP Reference &amp; First Steps</title>
    <updated>2024-06-10T00:00:00+00:00</updated>
    <author>
      <name>Alexandre Andorra</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The Hilbert Space Gaussian process approximation is a low-rank GP approximation that is particularly well-suited to usage in probabilistic programming languages like PyMC.  It approximates the GP using a pre-computed and fixed set of basis functions that don’t depend on the form of the covariance kernel or its hyperparameters.  It’s a &lt;em&gt;parametric&lt;/em&gt; approximation, so prediction in PyMC can be done as one would with a linear model via &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;pm.Data&lt;/span&gt;&lt;/code&gt; or &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;pm.set_data&lt;/span&gt;&lt;/code&gt;.  You don’t need to define the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;.conditional&lt;/span&gt;&lt;/code&gt; distribution that non-parametric GPs rely on.  This makes it &lt;em&gt;much&lt;/em&gt; easier to integrate an HSGP, instead of a GP, into your existing PyMC model.  Additionally, unlike many other GP approximations, HSGPs can be used anywhere within a model and with any likelihood function.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/HSGP-Basic.html"/>
    <summary>The Hilbert Space Gaussian process approximation is a low-rank GP approximation that is particularly well-suited to usage in probabilistic programming languages like PyMC.  It approximates the GP using a pre-computed and fixed set of basis functions that don’t depend on the form of the covariance kernel or its hyperparameters.  It’s a parametric approximation, so prediction in PyMC can be done as one would with a linear model via pm.Data or pm.set_data.  You don’t need to define the .conditional distribution that non-parametric GPs rely on.  This makes it much easier to integrate an HSGP, instead of a GP, into your existing PyMC model.  Additionally, unlike many other GP approximations, HSGPs can be used anywhere within a model and with any likelihood function.</summary>
    <category term="gaussianprocesses" label="gaussian processes"/>
    <published>2024-06-10T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/bart/bart_categorical_hawks.html</id>
    <title>Categorical regression</title>
    <updated>2024-05-23T00:00:00+00:00</updated>
    <author>
      <name>Osvaldo Martin</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;In this example, we will model outcomes with more than two categories.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/bart/bart_categorical_hawks.html"/>
    <summary>In this example, we will model outcomes with more than two categories.</summary>
    <category term="BART" label="BART"/>
    <category term="regression" label="regression"/>
    <published>2024-05-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/bayesian_nonparametric_causal.html</id>
    <title>Bayesian Non-parametric Causal Inference</title>
    <updated>2024-01-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;There are few claims stronger than the assertion of a causal relationship and few claims more contestable. A naive world model, rich with tenuous connections and non sequitur implications, is characteristic of conspiracy theory and idiocy. On the other hand, a refined and detailed knowledge of cause and effect, characterised by clear expectations, plausible connections and compelling counterfactuals, will steer you well through the buzzing, blooming confusion of the world.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/bayesian_nonparametric_causal.html"/>
    <summary>There are few claims stronger than the assertion of a causal relationship and few claims more contestable. A naive world model, rich with tenuous connections and non sequitur implications, is characteristic of conspiracy theory and idiocy. On the other hand, a refined and detailed knowledge of cause and effect, characterised by clear expectations, plausible connections and compelling counterfactuals, will steer you well through the buzzing, blooming confusion of the world.</summary>
    <category term="bart" label="bart"/>
    <category term="debiasedmachinelearning" label="debiased machine learning"/>
    <category term="mediation" label="mediation"/>
    <category term="propensityscores" label="propensity scores"/>
    <published>2024-01-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Births.html</id>
    <title>Baby Births Modelling with HSGPs</title>
    <updated>2024-01-23T00:00:00+00:00</updated>
    <author>
      <name>Juan Orduz</name>
      <uri>https://juanitorduz.github.io/</uri>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook provides an example of using the Hilbert Space Gaussian Process (HSGP) technique, introduced in &lt;span id="id1"&gt;[&lt;a class="reference internal" href="gaussian_processes/GP-Births.html#id104" title="Arno Solin and Simo Särkkä. Hilbert space methods for reduced-rank gaussian process regression. Statistics and Computing, 30(2):419–446, 2020. URL: https://doi.org/10.1007/s11222-019-09886-w, doi:10.1007/s11222-019-09886-w."&gt;Solin and Särkkä, 2020&lt;/a&gt;]&lt;/span&gt;, in the context of time series modeling. This technique has proven successful in speeding up models with Gaussian process components.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Births.html"/>
    <summary>This notebook provides an example of using the Hilbert Space Gaussian Process (HSGP) technique, introduced in solin2020Hilbert, in the context of time series modeling. This technique has proven successful in speeding up models with Gaussian process components.</summary>
    <category term="gaussianprocesses" label="gaussian processes"/>
    <category term="hilbertspaceapproximation" label="hilbert space approximation"/>
    <published>2024-01-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/marginalizing-models.html</id>
    <title>Automatic marginalization of discrete variables</title>
    <updated>2024-01-20T00:00:00+00:00</updated>
    <author>
      <name>Rob Zinkov</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;PyMC is very amenable to sampling models with discrete latent variables. But if you insist on using the NUTS sampler exclusively, you will need to get rid of your discrete variables somehow. The best way to do this is by marginalizing them out, as then you benefit from the Rao-Blackwell theorem and get a lower-variance estimate of your parameters.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/marginalizing-models.html"/>
    <summary>PyMC is very amenable to sampling models with discrete latent variables. But if you insist on using the NUTS sampler exclusively, you will need to get rid of your discrete variables somehow. The best way to do this is by marginalizing them out, as then you benefit from the Rao-Blackwell theorem and get a lower-variance estimate of your parameters.</summary>
    <category term="mixturemodel" label="mixture model"/>
    <published>2024-01-20T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-out-of-sample-predictions.html</id>
    <title>Out-Of-Sample Predictions</title>
    <updated>2023-12-23T00:00:00+00:00</updated>
    <author>
      <name>PyMC Contributors</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;We want to fit a logistic regression model where there is a multiplicative interaction between two numerical features.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-out-of-sample-predictions.html"/>
    <summary>We want to fit a logistic regression model where there is a multiplicative interaction between two numerical features.</summary>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="logisticregression" label="logistic regression"/>
    <category term="outofsamplepredictions" label="out of sample predictions"/>
    <category term="patsy" label="patsy"/>
    <published>2023-12-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/copula-estimation.html</id>
    <title>Bayesian copula estimation: Describing correlated joint distributions</title>
    <updated>2023-12-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;When we deal with multiple variables (e.g. &lt;span class="math notranslate nohighlight"&gt;\(a\)&lt;/span&gt; and &lt;span class="math notranslate nohighlight"&gt;\(b\)&lt;/span&gt;) we often want to describe the joint distribution &lt;span class="math notranslate nohighlight"&gt;\(P(a, b)\)&lt;/span&gt; parametrically. If we are lucky, then this joint distribution might be ‘simple’ in some way. For example, it could be that &lt;span class="math notranslate nohighlight"&gt;\(a\)&lt;/span&gt; and &lt;span class="math notranslate nohighlight"&gt;\(b\)&lt;/span&gt; are statistically independent, in which case we can break down the joint distribution into &lt;span class="math notranslate nohighlight"&gt;\(P(a, b) = P(a) P(b)\)&lt;/span&gt; and so we just need to find appropriate parametric descriptions for &lt;span class="math notranslate nohighlight"&gt;\(P(a)\)&lt;/span&gt; and &lt;span class="math notranslate nohighlight"&gt;\(P(b)\)&lt;/span&gt;. Even if this is not appropriate, it may be that &lt;span class="math notranslate nohighlight"&gt;\(P(a, b)\)&lt;/span&gt; could be described well by a simple multivariate distribution, such as a multivariate normal distribution for example.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/copula-estimation.html"/>
    <summary>When we deal with multiple variables (e.g. a and b) we often want to describe the joint distribution P(a, b) parametrically. If we are lucky, then this joint distribution might be ‘simple’ in some way. For example, it could be that a and b are statistically independent, in which case we can break down the joint distribution into P(a, b) = P(a) P(b) and so we just need to find appropriate parametric descriptions for P(a) and P(b). Even if this is not appropriate, it may be that P(a, b) could be described well by a simple multivariate distribution, such as a multivariate normal distribution for example.</summary>
    <category term="copula" label="copula"/>
    <category term="parameterestimation" label="parameter estimation"/>
    <published>2023-12-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/survival_analysis/frailty_models.html</id>
    <title>Frailty and Survival Regression Models</title>
    <updated>2023-11-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/survival_analysis/frailty_models.html"/>
    <summary>This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.</summary>
    <category term="competingrisks" label="competing risks"/>
    <category term="frailtymodels" label="frailty models"/>
    <category term="modelcomparison" label="model comparison"/>
    <category term="survivalanalysis" label="survival analysis"/>
    <published>2023-11-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-negative-binomial-regression.html</id>
    <title>GLM: Negative Binomial Regression</title>
    <updated>2023-09-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-negative-binomial-regression.html"/>
    <summary>This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.</summary>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="negativebinomialregression" label="negative binomial regression"/>
    <published>2023-09-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/spatial/nyc_bym.html</id>
    <title>The Besag-York-Mollie Model for Spatial Data</title>
    <updated>2023-08-18T00:00:00+00:00</updated>
    <author>
      <name>Daniel Saunders</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/spatial/nyc_bym.html"/>
    <summary>This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.</summary>
    <category term="autoregressive" label="autoregressive"/>
    <category term="countdata" label="count data"/>
    <category term="spatial" label="spatial"/>
    <published>2023-08-18T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/interventional_distribution.html</id>
    <title>Interventional distributions and graph mutation with the do-operator</title>
    <updated>2023-07-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;a class="reference external" href="https://github.com/pymc-devs/pymc"&gt;PyMC&lt;/a&gt; is a pivotal component of the open source Bayesian statistics ecosystem. It helps solve real problems across a wide range of industries and academic research areas every day. And it has gained this level of utility by being accessible, powerful, and practically useful at solving &lt;em&gt;Bayesian statistical inference&lt;/em&gt; problems.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/interventional_distribution.html"/>
    <summary>PyMC is a pivotal component of the open source Bayesian statistics ecosystem. It helps solve real problems across a wide range of industries and academic research areas every day. And it has gained this level of utility by being accessible, powerful, and practically useful at solving Bayesian statistical inference problems.</summary>
    <category term="causalinference" label="causal inference"/>
    <category term="do-operator" label="do-operator"/>
    <category term="graphmutation" label="graph mutation"/>
    <published>2023-07-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/samplers/fast_sampling_with_jax_and_numba.html</id>
    <title>Faster Sampling with JAX and Numba</title>
    <updated>2023-07-11T00:00:00+00:00</updated>
    <author>
      <name>Thomas Wiecki</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;PyMC can compile its models to various execution backends through PyTensor, including:&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/samplers/fast_sampling_with_jax_and_numba.html"/>
    <summary>PyMC can compile its models to various execution backends through PyTensor, including:</summary>
    <category term="JAX" label="JAX"/>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <category term="numba" label="numba"/>
    <category term="scaling" label="scaling"/>
    <published>2023-07-11T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-discrete-choice_models.html</id>
    <title>Discrete Choice and Random Utility Models</title>
    <updated>2023-06-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-discrete-choice_models.html"/>
    <summary>This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.</summary>
    <category term="categoricalregression" label="categorical regression"/>
    <category term="discretechoice" label="discrete choice"/>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="modelexpansion" label="model expansion"/>
    <published>2023-06-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Latent.html</id>
    <title>Gaussian Processes: Latent Variable Implementation</title>
    <updated>2023-06-06T00:00:00+00:00</updated>
    <author>
      <name>Bill Engels</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The &lt;a class="reference external" href="https://www.pymc.io/projects/docs/en/stable/api/gp/generated/pymc.gp.Latent.html#pymc.gp.Latent" title="(in PyMC v5.19.1)"&gt;&lt;code class="xref py py-class docutils literal notranslate"&gt;&lt;span class="pre"&gt;gp.Latent&lt;/span&gt;&lt;/code&gt;&lt;/a&gt; class is a direct implementation of a Gaussian process without approximation.  Given a mean and covariance function, we can place a prior on the function &lt;span class="math notranslate nohighlight"&gt;\(f(x)\)&lt;/span&gt;,&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Latent.html"/>
    <summary>The gp.Latent class is a direct implementation of a Gaussian process without approximation.  Given a mean and covariance function, we can place a prior on the function f(x),</summary>
    <category term="gaussianprocesses" label="gaussian processes"/>
    <category term="timeseries" label="time series"/>
    <published>2023-06-06T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Marginal.html</id>
    <title>Marginal Likelihood Implementation</title>
    <updated>2023-06-04T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;gp.Marginal&lt;/span&gt;&lt;/code&gt; class implements the more common case of GP regression:  the observed data are the sum of a GP and Gaussian noise.  &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;gp.Marginal&lt;/span&gt;&lt;/code&gt; has a &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;marginal_likelihood&lt;/span&gt;&lt;/code&gt; method, a &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;conditional&lt;/span&gt;&lt;/code&gt; method, and a &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;predict&lt;/span&gt;&lt;/code&gt; method.  Given a mean and covariance function, the function &lt;span class="math notranslate nohighlight"&gt;\(f(x)\)&lt;/span&gt; is modeled as,&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Marginal.html"/>
    <summary>The gp.Marginal class implements the more common case of GP regression:  the observed data are the sum of a GP and Gaussian noise.  gp.Marginal has a marginal_likelihood method, a conditional method, and a predict method.  Given a mean and covariance function, the function f(x) is modeled as,</summary>
    <category term="gaussianprocesses" label="gaussian processes"/>
    <category term="timeseries" label="time series"/>
    <published>2023-06-04T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-ordinal-regression.html</id>
    <title>Regression Models with Ordered Categorical Outcomes</title>
    <updated>2023-04-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Like many areas of statistics, the language of survey data comes with an overloaded vocabulary. When discussing survey design you will often hear about the contrast between &lt;em&gt;design&lt;/em&gt; based and &lt;em&gt;model&lt;/em&gt; based approaches to (i) sampling strategies and (ii) statistical inference on the associated data. We won’t wade into the details about different sampling strategies such as simple random sampling, cluster random sampling, or stratified random sampling using population weighting schemes. The literature on each of these is vast, but in this notebook we’ll talk about when and why it’s useful to apply model-driven statistical inference to &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Likert_scale"&gt;Likert&lt;/a&gt; scaled survey response data and other kinds of ordered categorical data.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-ordinal-regression.html"/>
    <summary>Like many areas of statistics, the language of survey data comes with an overloaded vocabulary. When discussing survey design you will often hear about the contrast between design based and model based approaches to (i) sampling strategies and (ii) statistical inference on the associated data. We won’t wade into the details about different sampling strategies such as simple random sampling, cluster random sampling, or stratified random sampling using population weighting schemes. The literature on each of these is vast, but in this notebook we’ll talk about when and why it’s useful to apply model-driven statistical inference to Likert scaled survey response data and other kinds of ordered categorical data.</summary>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="ordinalregression" label="ordinal regression"/>
    <published>2023-04-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/longitudinal_models.html</id>
    <title>Longitudinal Models of Change</title>
    <updated>2023-04-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The study of change involves simultaneously analysing the individual trajectories of change and abstracting over the set of individuals studied to extract broader insight about the nature of the change in question. As such it’s easy to lose sight of the forest for the trees. In this example we’ll demonstrate some of the subtleties of using hierarchical Bayesian models to study change within a population of individuals, moving from the &lt;em&gt;within individual&lt;/em&gt; view to the &lt;em&gt;between/cross individuals&lt;/em&gt; perspective.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/longitudinal_models.html"/>
    <summary>The study of change involves simultaneously analysing the individual trajectories of change and abstracting over the set of individuals studied to extract broader insight about the nature of the change in question. As such it’s easy to lose sight of the forest for the trees. In this example we’ll demonstrate some of the subtleties of using hierarchical Bayesian models to study change within a population of individuals, moving from the within individual view to the between/cross individuals perspective.</summary>
    <category term="hierarchical" label="hierarchical"/>
    <category term="longitudinal" label="longitudinal"/>
    <category term="timeseries" label="time series"/>
    <published>2023-04-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/Missing_Data_Imputation.html</id>
    <title>Bayesian Missing Data Imputation</title>
    <updated>2023-02-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Bayesian approaches to imputing missing data in PyMC, including hierarchical models.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/Missing_Data_Imputation.html"/>
    <summary>Bayesian approaches to imputing missing data in PyMC, including hierarchical models.</summary>
    <category term="bayesianimputation" label="bayesian imputation"/>
    <category term="hierarchical" label="hierarchical"/>
    <category term="missingdata" label="missing data"/>
    <published>2023-02-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/model_builder.html</id>
    <title>Using ModelBuilder class for deploying PyMC models</title>
    <updated>2023-02-22T00:00:00+00:00</updated>
    <author>
      <name>Michał Raczycki</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Many users face difficulty in deploying their PyMC models to production because deploying/saving/loading a user-created model is not well standardized. One reason is that, unlike scikit-learn or TensorFlow, PyMC has no direct way to save or load a model. The new &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;ModelBuilder&lt;/span&gt;&lt;/code&gt; class aims to improve this workflow by providing a scikit-learn-inspired API to wrap your PyMC models.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/model_builder.html"/>
    <summary>Many users face difficulty in deploying their PyMC models to production because deploying/saving/loading a user-created model is not well standardized. One reason is that, unlike scikit-learn or TensorFlow, PyMC has no direct way to save or load a model. The new ModelBuilder class aims to improve this workflow by providing a scikit-learn-inspired API to wrap your PyMC models.</summary>
    <category term="deployment" label="deployment"/>
    <published>2023-02-22T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/variational_inference/pathfinder.html</id>
    <title>Pathfinder Variational Inference</title>
    <updated>2023-02-05T00:00:00+00:00</updated>
    <author>
      <name>Thomas Wiecki</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Pathfinder &lt;span id="id1"&gt;[&lt;a class="reference internal" href="variational_inference/pathfinder.html#id115" title="Lu Zhang, Bob Carpenter, Andrew Gelman, and Aki Vehtari. Pathfinder: parallel quasi-newton variational inference. arXiv preprint arXiv:2108.03782, 2021."&gt;Zhang &lt;em&gt;et al.&lt;/em&gt;, 2021&lt;/a&gt;]&lt;/span&gt; is a variational inference algorithm that produces samples from the posterior of a Bayesian model. It compares favorably to the widely used ADVI algorithm. On large problems, it should scale better than most MCMC algorithms, including dynamic HMC (i.e. NUTS), at the cost of a more biased estimate of the posterior. For details on the algorithm, see the &lt;a class="reference external" href="https://arxiv.org/abs/2108.03782"&gt;arXiv preprint&lt;/a&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/variational_inference/pathfinder.html"/>
    <summary>Pathfinder [Zhang et al., 2021] is a variational inference algorithm that produces samples from the posterior of a Bayesian model. It compares favorably to the widely used ADVI algorithm. On large problems, it should scale better than most MCMC algorithms, including dynamic HMC (i.e. NUTS), at the cost of a more biased estimate of the posterior. For details on the algorithm, see the arXiv preprint.</summary>
    <category term="jax" label="jax"/>
    <category term="variationalinference" label="variational inference"/>
    <published>2023-02-05T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/MvGaussianRandomWalk_demo.html</id>
    <title>Multivariate Gaussian Random Walk</title>
    <updated>2023-02-02T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook shows how to &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Curve_fitting"&gt;fit a correlated time series&lt;/a&gt; using multivariate &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Random_walk#Gaussian_random_walk"&gt;Gaussian random walks&lt;/a&gt; (GRWs). In particular, we perform a Bayesian &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Regression_analysis"&gt;regression&lt;/a&gt; of the time series data against a model dependent on GRWs.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/MvGaussianRandomWalk_demo.html"/>
    <summary>This notebook shows how to fit a correlated time series using multivariate Gaussian random walks (GRWs). In particular, we perform a Bayesian regression of the time series data against a model dependent on GRWs.</summary>
    <category term="linearmodel" label="linear model"/>
    <category term="regression" label="regression"/>
    <category term="timeseries" label="time series"/>
    <published>2023-02-02T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-rolling-regression.html</id>
    <title>Rolling Regression</title>
    <updated>2023-01-28T00:00:00+00:00</updated>
    <author>
      <name>Thomas Wiecki</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;a class="reference external" href="https://en.wikipedia.org/wiki/Pairs_trade?oldformat=true"&gt;Pairs trading&lt;/a&gt; is a famous technique in algorithmic trading that plays two stocks against each other.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-rolling-regression.html"/>
    <summary>Pairs trading is a famous technique in algorithmic trading that plays two stocks against each other.</summary>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="regression" label="regression"/>
    <published>2023-01-28T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/hierarchical_partial_pooling.html</id>
    <title>Hierarchical Partial Pooling</title>
    <updated>2023-01-28T00:00:00+00:00</updated>
    <author>
      <name>Christian Luhmann</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Suppose you are tasked with estimating baseball batting skills for several players. One such performance metric is batting average. Since players play a different number of games and bat in different positions in the order, each player has a different number of at-bats. However, you want to estimate the skill of all players, including those with a relatively small number of batting opportunities.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/hierarchical_partial_pooling.html"/>
    <summary>Suppose you are tasked with estimating baseball batting skills for several players. One such performance metric is batting average. Since players play a different number of games and bat in different positions in the order, each player has a different number of at-bats. However, you want to estimate the skill of all players, including those with a relatively small number of batting opportunities.</summary>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <published>2023-01-28T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/bart/bart_quantile_regression.html</id>
    <title>Quantile Regression with BART</title>
    <updated>2023-01-25T00:00:00+00:00</updated>
    <author>
      <name>Osvaldo Martin</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Usually when doing regression we model the conditional mean of some distribution. Common cases are a Normal distribution for continuous unbounded responses, a Poisson distribution for count data, etc.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/bart/bart_quantile_regression.html"/>
    <summary>Usually when doing regression we model the conditional mean of some distribution. Common cases are a Normal distribution for continuous unbounded responses, a Poisson distribution for count data, etc.</summary>
    <category term="BART" label="BART"/>
    <category term="non-parametric" label="non-parametric"/>
    <category term="quantile" label="quantile"/>
    <category term="regression" label="regression"/>
    <published>2023-01-25T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/reliability_and_calibrated_prediction.html</id>
    <title>Reliability Statistics and Predictive Calibration</title>
    <updated>2023-01-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Reliability statistics and calibrated prediction for censored time-to-failure data.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/reliability_and_calibrated_prediction.html"/>
    <summary>Reliability statistics and calibrated prediction for censored time-to-failure data.</summary>
    <category term="calibration" label="calibration"/>
    <category term="censored" label="censored"/>
    <category term="prediction" label="prediction"/>
    <category term="survivalanalysis" label="survival analysis"/>
    <category term="time-to-failure" label="time-to-failure"/>
    <published>2023-01-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/bart/bart_heteroscedasticity.html</id>
    <title>Modeling Heteroscedasticity with BART</title>
    <updated>2023-01-23T00:00:00+00:00</updated>
    <author>
      <name>Juan Orduz</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;In this notebook we show how to use BART to model heteroscedasticity as described in Section 4.1 of &lt;a class="reference external" href="https://github.com/pymc-devs/pymc-bart"&gt;&lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;pymc-bart&lt;/span&gt;&lt;/code&gt;&lt;/a&gt;’s paper &lt;span id="id1"&gt;[&lt;a class="reference internal" href="bart/bart_quantile_regression.html#id85" title="Miriana Quiroga, Pablo G Garay, Juan M. Alonso, Juan Martin Loyola, and Osvaldo A Martin. Bayesian additive regression trees for probabilistic programming. 2022. URL: https://arxiv.org/abs/2206.03619, doi:10.48550/ARXIV.2206.03619."&gt;Quiroga &lt;em&gt;et al.&lt;/em&gt;, 2022&lt;/a&gt;]&lt;/span&gt;. We use the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;marketing&lt;/span&gt;&lt;/code&gt; data set provided by the R package &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;datarium&lt;/span&gt;&lt;/code&gt; &lt;span id="id2"&gt;[&lt;a class="reference internal" href="bart/bart_heteroscedasticity.html#id53" title="Alboukadel Kassambara. datarium: Data Bank for Statistical Analysis and Visualization. 2019. R package version 0.1.0. URL: https://CRAN.R-project.org/package=datarium."&gt;Kassambara, 2019&lt;/a&gt;]&lt;/span&gt;. The idea is to model a marketing channel contribution to sales as a function of budget.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/bart/bart_heteroscedasticity.html"/>
    <summary>In this notebook we show how to use BART to model heteroscedasticity as described in Section 4.1 of pymc-bart’s paper [Quiroga et al., 2022]. We use the marketing data set provided by the R package datarium [Kassambara, 2019]. The idea is to model a marketing channel contribution to sales as a function of budget.</summary>
    <category term="BART" label="BART"/>
    <category term="regression" label="regression"/>
    <published>2023-01-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/samplers/DEMetropolisZ_tune_drop_fraction.html</id>
    <title>DEMetropolis(Z) Sampler Tuning</title>
    <updated>2023-01-18T00:00:00+00:00</updated>
    <author>
      <name>Greg Brunkhorst</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;For continuous variables, the default PyMC sampler (&lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;NUTS&lt;/span&gt;&lt;/code&gt;) requires that gradients are computed, which PyMC does through autodifferentiation.  However, in some cases, a PyMC model may not be supplied with gradients (for example, by evaluating a numerical model outside of PyMC) and an alternative sampler is necessary.  The &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;DEMetropolisZ&lt;/span&gt;&lt;/code&gt; sampler is an efficient choice for gradient-free inference.  The implementation of &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;DEMetropolisZ&lt;/span&gt;&lt;/code&gt; in PyMC is based on &lt;span id="id1"&gt;ter Braak and Vrugt [&lt;a class="reference internal" href="samplers/DEMetropolisZ_tune_drop_fraction.html#id104" title="Cajo J.F. ter Braak and Jasper A. Vrugt. Differential evolution markov chain with snooker updater and fewer chains. Statistics and Computing, pages 435–446, 2008. URL: https://link.springer.com/content/pdf/10.1007/s11222-008-9104-9.pdf?pdf=button."&gt;2008&lt;/a&gt;]&lt;/span&gt; but with a modified tuning scheme.  This notebook compares various tuning parameter settings for the sampler, including the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;drop_tune_fraction&lt;/span&gt;&lt;/code&gt; parameter which was introduced in PyMC.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/samplers/DEMetropolisZ_tune_drop_fraction.html"/>
    <summary>For continuous variables, the default PyMC sampler (NUTS) requires that gradients are computed, which PyMC does through autodifferentiation.  However, in some cases, a PyMC model may not be supplied with gradients (for example, by evaluating a numerical model outside of PyMC) and an alternative sampler is necessary.  The DEMetropolisZ sampler is an efficient choice for gradient-free inference.  The implementation of DEMetropolisZ in PyMC is based on ter Braak and Vrugt [2008] but with a modified tuning scheme.  This notebook compares various tuning parameter settings for the sampler, including the drop_tune_fraction parameter which was introduced in PyMC.</summary>
    <category term="DEMetropolis(Z)" label="DEMetropolis(Z)"/>
    <category term="gradient-freeinference" label="gradient-free inference"/>
    <published>2023-01-18T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/samplers/DEMetropolisZ_EfficiencyComparison.html</id>
    <title>DEMetropolis and DEMetropolis(Z) Algorithm Comparisons</title>
    <updated>2023-01-18T00:00:00+00:00</updated>
    <author>
      <name>Greg Brunkhorst</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;For continuous variables, the default PyMC sampler (&lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;NUTS&lt;/span&gt;&lt;/code&gt;) requires that gradients are computed, which PyMC does through autodifferentiation.  However, in some cases, a PyMC model may not be supplied with gradients (for example, by evaluating a numerical model outside of PyMC) and an alternative sampler is necessary.  Differential evolution (DE) Metropolis samplers are an efficient choice for gradient-free inference.  This notebook compares the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;DEMetropolis&lt;/span&gt;&lt;/code&gt; and the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;DEMetropolisZ&lt;/span&gt;&lt;/code&gt; samplers in PyMC to help determine which is a better option for a given problem.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/samplers/DEMetropolisZ_EfficiencyComparison.html"/>
    <summary>For continuous variables, the default PyMC sampler (NUTS) requires that gradients are computed, which PyMC does through autodifferentiation.  However, in some cases, a PyMC model may not be supplied with gradients (for example, by evaluating a numerical model outside of PyMC) and an alternative sampler is necessary.  Differential evolution (DE) Metropolis samplers are an efficient choice for gradient-free inference.  This notebook compares the DEMetropolis and the DEMetropolisZ samplers in PyMC to help determine which is a better option for a given problem.</summary>
    <category term="DEMetropolis" label="DEMetropolis"/>
    <category term="gradient-freeinference" label="gradient-free inference"/>
    <published>2023-01-18T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/survival_analysis/weibull_aft.html</id>
    <title>Reparameterizing the Weibull Accelerated Failure Time Model</title>
    <updated>2023-01-17T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/survival_analysis/weibull_aft.html"/>
    <summary>This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.</summary>
    <category term="censored" label="censored"/>
    <category term="survivalanalysis" label="survival analysis"/>
    <category term="weibull" label="weibull"/>
    <published>2023-01-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/survival_analysis/survival_analysis.html</id>
    <title>Bayesian Survival Analysis</title>
    <updated>2023-01-17T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;a class="reference external" href="https://en.wikipedia.org/wiki/Survival_analysis"&gt;Survival analysis&lt;/a&gt; studies the distribution of the time to an event.  Its applications span many fields across medicine, biology, engineering, and social science.  This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/survival_analysis/survival_analysis.html"/>
    <summary>Survival analysis studies the distribution of the time to an event.  Its applications span many fields across medicine, biology, engineering, and social science.  This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC.</summary>
    <category term="censored" label="censored"/>
    <category term="survivalanalysis" label="survival analysis"/>
    <published>2023-01-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/ode_models/ODE_Lotka_Volterra_multiple_ways.html</id>
    <title>ODE Lotka-Volterra With Bayesian Inference in Multiple Ways</title>
    <updated>2023-01-16T00:00:00+00:00</updated>
    <author>
      <name>Greg Brunkhorst</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The purpose of this notebook is to demonstrate how to perform Bayesian inference on a system of ordinary differential equations (ODEs), both with and without gradients.  The accuracy and efficiency of different samplers are compared.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/ode_models/ODE_Lotka_Volterra_multiple_ways.html"/>
    <summary>The purpose of this notebook is to demonstrate how to perform Bayesian inference on a system of ordinary differential equations (ODEs), both with and without gradients.  The accuracy and efficiency of different samplers are compared.</summary>
    <category term="ODE" label="ODE"/>
    <category term="PyTensor" label="PyTensor"/>
    <category term="gradient-freeinference" label="gradient-free inference"/>
    <published>2023-01-16T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/variational_inference/variational_api_quickstart.html</id>
    <title>Introduction to Variational Inference with PyMC</title>
    <updated>2023-01-13T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The most common strategy for computing posterior quantities of Bayesian models is via sampling,  particularly Markov chain Monte Carlo (MCMC) algorithms. While sampling algorithms and associated computing have continually improved in performance and efficiency, MCMC methods still scale poorly with data size, and become prohibitive for more than a few thousand observations. A more scalable alternative to sampling is variational inference (VI), which re-frames the problem of computing the posterior distribution as an optimization problem.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/variational_inference/variational_api_quickstart.html"/>
    <summary>The most common strategy for computing posterior quantities of Bayesian models is via sampling,  particularly Markov chain Monte Carlo (MCMC) algorithms. While sampling algorithms and associated computing have continually improved in performance and efficiency, MCMC methods still scale poorly with data size, and become prohibitive for more than a few thousand observations. A more scalable alternative to sampling is variational inference (VI), which re-frames the problem of computing the posterior distribution as an optimization problem.</summary>
    <category term="variationalinference" label="variational inference"/>
    <published>2023-01-13T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/variational_inference/empirical-approx-overview.html</id>
    <title>Empirical Approximation overview</title>
    <updated>2023-01-13T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;For most models we use MCMC sampling algorithms like Metropolis or NUTS. In PyMC we are used to storing traces of MCMC samples and then analyzing them. There is a similar concept for the variational inference submodule in PyMC: &lt;em&gt;Empirical&lt;/em&gt;. This type of approximation stores particles for the SVGD sampler. There is no difference between independent SVGD particles and MCMC samples. &lt;em&gt;Empirical&lt;/em&gt; acts as a bridge between MCMC sampling output and full-fledged VI utilities like &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;apply_replacements&lt;/span&gt;&lt;/code&gt; or &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;sample_node&lt;/span&gt;&lt;/code&gt;. For the interface description, see &lt;a class="reference internal" href="variational_inference/variational_api_quickstart.html"&gt;&lt;span class="std std-doc"&gt;variational_api_quickstart&lt;/span&gt;&lt;/a&gt;. Here we will focus on &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;Empirical&lt;/span&gt;&lt;/code&gt; and give an overview of the specifics of the &lt;em&gt;Empirical&lt;/em&gt; approximation.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/variational_inference/empirical-approx-overview.html"/>
    <summary>For most models we use MCMC sampling algorithms like Metropolis or NUTS. In PyMC it is common to store traces of MCMC samples and then analyse them. There is a similar concept for the variational inference submodule in PyMC: Empirical. This type of approximation stores particles for the SVGD sampler. There is no difference between independent SVGD particles and MCMC samples. Empirical acts as a bridge between MCMC sampling output and full-fledged VI utils like apply_replacements or sample_node. For the interface description, see variational_api_quickstart. Here we will focus on Empirical and give an overview of features specific to the Empirical approximation.</summary>
    <category term="approximation" label="approximation"/>
    <category term="variationalinference" label="variational inference"/>
    <published>2023-01-13T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-hierarchical-binomial-model.html</id>
    <title>Hierarchical Binomial Model: Rat Tumor Example</title>
    <updated>2023-01-10T00:00:00+00:00</updated>
    <author>
      <name>Farhan Reynaldo</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This short tutorial demonstrates how to use PyMC to do inference for the rat tumour example found in chapter 5 of &lt;em&gt;Bayesian Data Analysis 3rd Edition&lt;/em&gt; &lt;span id="id1"&gt;[&lt;a class="reference internal" href="generalized_linear_models/GLM-truncated-censored-regression.html#id35" title="Andrew Gelman, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 2013."&gt;Gelman &lt;em&gt;et al.&lt;/em&gt;, 2013&lt;/a&gt;]&lt;/span&gt;. Readers should already be familiar with the PyMC API.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-hierarchical-binomial-model.html"/>
    <summary>This short tutorial demonstrates how to use PyMC to do inference for the rat tumour example found in chapter 5 of Bayesian Data Analysis 3rd Edition [Gelman et al., 2013]. Readers should already be familiar with the PyMC API.</summary>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <published>2023-01-10T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-robust.html</id>
    <title>GLM: Robust Linear Regression</title>
    <updated>2023-01-10T00:00:00+00:00</updated>
    <author>
      <name>Oriol Abril Pla</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Duplicate implicit target name: “glm: robust linear regression”.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-robust.html"/>
    <summary></summary>
    <category term="linearmodel" label="linear model"/>
    <category term="regression" label="regression"/>
    <category term="robust" label="robust"/>
    <published>2023-01-10T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/diagnostics_and_criticism/Bayes_factor.html</id>
    <title>Bayes Factors and Marginal Likelihood</title>
    <updated>2023-01-10T00:00:00+00:00</updated>
    <author>
      <name>Osvaldo Martin</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The “Bayesian way” to compare models is to compute the &lt;em&gt;marginal likelihood&lt;/em&gt; of each model &lt;span class="math notranslate nohighlight"&gt;\(p(y \mid M_k)\)&lt;/span&gt;, &lt;em&gt;i.e.&lt;/em&gt; the probability of the observed data &lt;span class="math notranslate nohighlight"&gt;\(y\)&lt;/span&gt; given the &lt;span class="math notranslate nohighlight"&gt;\(M_k\)&lt;/span&gt; model. This quantity, the marginal likelihood, is just the normalizing constant of Bayes’ theorem. We can see this if we write Bayes’ theorem and make explicit the fact that all inferences are model-dependant.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/diagnostics_and_criticism/Bayes_factor.html"/>
    <summary>The “Bayesian way” to compare models is to compute the marginal likelihood of each model p(y \mid M_k), i.e. the probability of the observed data y given the M_k model. This quantity, the marginal likelihood, is just the normalizing constant of Bayes’ theorem. We can see this if we write Bayes’ theorem and make explicit the fact that all inferences are model-dependent.</summary>
    <category term="BayesFactors" label="Bayes Factors"/>
    <category term="modelcomparison" label="model comparison"/>
    <published>2023-01-10T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/AR.html</id>
    <title>Analysis of An AR(1) Model in PyMC</title>
    <updated>2023-01-07T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Consider the following AR(2) process, initialized in the
infinite past:
&lt;div class="math notranslate nohighlight"&gt;
\[
   y_t = \rho_0 + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \epsilon_t,
\]&lt;/div&gt;

where &lt;span class="math notranslate nohighlight"&gt;\(\epsilon_t \overset{iid}{\sim} {\cal N}(0,1)\)&lt;/span&gt;.  Suppose you’d like to learn about &lt;span class="math notranslate nohighlight"&gt;\(\rho\)&lt;/span&gt; from a a sample of observations &lt;span class="math notranslate nohighlight"&gt;\(Y^T = \{ y_0, y_1,\ldots, y_T \}\)&lt;/span&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/AR.html"/>
    <summary>Consider the following AR(2) process, initialized in the
infinite past:

   y_t = \rho_0 + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \epsilon_t,

where \epsilon_t \overset{iid}{\sim} {\cal N}(0,1).  Suppose you’d like to learn about \rho from a sample of observations Y^T = \{ y_0, y_1,\ldots, y_T \}.</summary>
    <category term="autoregressive" label="autoregressive"/>
    <category term="timeseries" label="time series"/>
    <published>2023-01-07T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-poisson-regression.html</id>
    <title>GLM: Poisson Regression</title>
    <updated>2022-11-30T00:00:00+00:00</updated>
    <author>
      <name>Benjamin Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This is a minimal reproducible example of Poisson regression to predict counts using dummy data.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-poisson-regression.html"/>
    <summary>This is a minimal reproducible example of Poisson regression to predict counts using dummy data.</summary>
    <category term="poisson" label="poisson"/>
    <category term="regression" label="regression"/>
    <published>2022-11-30T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/bayesian_var_model.html</id>
    <title>Bayesian Vector Autoregressive Models</title>
    <updated>2022-11-23T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Duplicate implicit target name: “bayesian vector autoregressive models”.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/bayesian_var_model.html"/>
    <summary></summary>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <category term="timeseries" label="time series"/>
    <category term="vectorautoregressivemodel" label="vector autoregressive model"/>
    <published>2022-11-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/multilevel_modeling.html</id>
    <title>A Primer on Bayesian Methods for Multilevel Modeling</title>
    <updated>2022-10-24T00:00:00+00:00</updated>
    <author>
      <name>Farhan Reynaldo</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Hierarchical or multilevel modeling is a generalization of regression modeling.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/multilevel_modeling.html"/>
    <summary>Hierarchical or multilevel modeling is a generalization of regression modeling.</summary>
    <category term="casestudy" label="case study"/>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <published>2022-10-24T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/MOGP-Coregion-Hadamard.html</id>
    <title>Multi-output Gaussian Processes: Coregionalization models using Hadamard product</title>
    <updated>2022-10-23T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook shows how to implement the &lt;strong&gt;Intrinsic Coregionalization Model&lt;/strong&gt; (ICM) and the &lt;strong&gt;Linear Coregionalization Model&lt;/strong&gt; (LCM) using a Hamadard product between the Coregion kernel and input kernels. Multi-output Gaussian Process is discussed in &lt;a class="reference external" href="https://papers.nips.cc/paper/2007/hash/66368270ffd51418ec58bd793f2d9b1b-Abstract.html"&gt;this paper&lt;/a&gt; by &lt;span id="id1"&gt;Bonilla &lt;em&gt;et al.&lt;/em&gt; [&lt;a class="reference internal" href="gaussian_processes/MOGP-Coregion-Hadamard.html#id13" title="Edwin V Bonilla, Kian Chai, and Christopher Williams. Multi-task gaussian process prediction. Advances in neural information processing systems, 2007. URL: https://papers.nips.cc/paper/2007/hash/66368270ffd51418ec58bd793f2d9b1b-Abstract.html."&gt;2007&lt;/a&gt;]&lt;/span&gt;. For further information about ICM and LCM, please check out the &lt;a class="reference external" href="https://www.youtube.com/watch?v=ttgUJtVJthA&amp;amp;amp;list=PLpTp0l_CVmgwyAthrUmmdIFiunV1VvicM"&gt;talk&lt;/a&gt; on Multi-output Gaussian Processes by Mauricio Alvarez, and &lt;a class="reference external" href="http://gpss.cc/gpss17/slides/multipleOutputGPs.pdf"&gt;his slides&lt;/a&gt; with more references at the last page.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/MOGP-Coregion-Hadamard.html"/>
    <summary>This notebook shows how to implement the Intrinsic Coregionalization Model (ICM) and the Linear Coregionalization Model (LCM) using a Hadamard product between the Coregion kernel and input kernels. Multi-output Gaussian Processes are discussed in this paper by Bonilla et al. [2007]. For further information about ICM and LCM, please check out the talk on Multi-output Gaussian Processes by Mauricio Alvarez, and his slides with more references on the last page.</summary>
    <category term="gaussianprocess" label="gaussian process"/>
    <category term="multi-output" label="multi-output"/>
    <published>2022-10-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Kron.html</id>
    <title>Kronecker Structured Covariances</title>
    <updated>2022-10-23T00:00:00+00:00</updated>
    <author>
      <name>Alex Andorra</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;PyMC contains implementations for models that have Kronecker structured covariances.  This patterned structure enables Gaussian process models to work on much larger datasets.  Kronecker structure can be exploited when&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-Kron.html"/>
    <summary>PyMC contains implementations for models that have Kronecker structured covariances.  This patterned structure enables Gaussian process models to work on much larger datasets.  Kronecker structure can be exploited when</summary>
    <category term="gaussianprocess" label="gaussian process"/>
    <published>2022-10-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/interrupted_time_series.html</id>
    <title>Interrupted time series analysis</title>
    <updated>2022-10-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook focuses on how to conduct a simple Bayesian &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Interrupted_time_series"&gt;interrupted time series analysis&lt;/a&gt;. This is useful in &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Quasi-experiment"&gt;quasi-experimental settings&lt;/a&gt; where an intervention was applied to all treatment units.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/interrupted_time_series.html"/>
    <summary>This notebook focuses on how to conduct a simple Bayesian interrupted time series analysis. This is useful in quasi-experimental settings where an intervention was applied to all treatment units.</summary>
    <category term="causalimpact" label="causal impact"/>
    <category term="causalinference" label="causal inference"/>
    <category term="counterfactuals" label="counterfactuals"/>
    <category term="forecasting" label="forecasting"/>
    <category term="quasiexperiments" label="quasi experiments"/>
    <category term="timeseries" label="time series"/>
    <published>2022-10-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/Forecasting_with_structural_timeseries.html</id>
    <title>Forecasting with Structural AR Timeseries</title>
    <updated>2022-10-20T00:00:00+00:00</updated>
    <author>
      <name>Nathaniel Forde</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Bayesian structural timeseries models are an interesting way to learn about the structure inherent in any observed timeseries data. It also gives us the ability to project forward the implied predictive distribution granting us another view on forecasting problems. We can treat the learned characteristics of the timeseries data observed to-date as informative about the structure of the unrealised future state of the same measure.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/Forecasting_with_structural_timeseries.html"/>
    <summary>Bayesian structural timeseries models are an interesting way to learn about the structure inherent in any observed timeseries data. It also gives us the ability to project forward the implied predictive distribution granting us another view on forecasting problems. We can treat the learned characteristics of the timeseries data observed to-date as informative about the structure of the unrealised future state of the same measure.</summary>
    <category term="autoregressive" label="autoregressive"/>
    <category term="bayesianstructuraltimeseries" label="bayesian structural timeseries"/>
    <category term="forecasting" label="forecasting"/>
    <published>2022-10-20T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/GEV.html</id>
    <title>Generalized Extreme Value Distribution</title>
    <updated>2022-09-27T00:00:00+00:00</updated>
    <author>
      <name>Colin Caprani</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The Generalized Extreme Value (GEV) distribution is a meta-distribution containing the Weibull, Gumbel, and Frechet families of extreme value distributions. It is used for modelling the distribution of extremes (maxima or minima) of stationary processes, such as the annual maximum wind speed, annual maximum truck weight on a bridge, and so on, without needing &lt;em&gt;a priori&lt;/em&gt; decision on the tail behaviour.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/GEV.html"/>
    <summary>The Generalized Extreme Value (GEV) distribution is a meta-distribution containing the Weibull, Gumbel, and Frechet families of extreme value distributions. It is used for modelling the distribution of extremes (maxima or minima) of stationary processes, such as the annual maximum wind speed, annual maximum truck weight on a bridge, and so on, without needing an a priori decision on the tail behaviour.</summary>
    <category term="extreme" label="extreme"/>
    <category term="inference" label="inference"/>
    <category term="posteriorpredictive" label="posterior predictive"/>
    <published>2022-09-27T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/difference_in_differences.html</id>
    <title>Difference in differences</title>
    <updated>2022-09-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook provides a brief overview of the difference in differences approach to causal inference, and shows a working example of how to conduct this type of analysis under the Bayesian framework, using PyMC. While the notebooks provides a high level overview of the approach, I recommend consulting two excellent textbooks on causal inference. Both &lt;a class="reference external" href="https://theeffectbook.net/"&gt;The Effect&lt;/a&gt; &lt;span id="id2"&gt;[&lt;a class="reference internal" href="causal_inference/interrupted_time_series.html#id45" title="Nick Huntington-Klein. The effect: An introduction to research design and causality. Chapman and Hall/CRC, 2021."&gt;Huntington-Klein, 2021&lt;/a&gt;]&lt;/span&gt; and &lt;a class="reference external" href="https://mixtape.scunning.com"&gt;Causal Inference: The Mixtape&lt;/a&gt; &lt;span id="id3"&gt;[&lt;a class="reference internal" href="causal_inference/difference_in_differences.html#id28" title="Scott Cunningham. Causal inference: The Mixtape. Yale University Press, 2021."&gt;Cunningham, 2021&lt;/a&gt;]&lt;/span&gt; have chapters devoted to difference in differences.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/difference_in_differences.html"/>
    <summary>This notebook provides a brief overview of the difference in differences approach to causal inference, and shows a working example of how to conduct this type of analysis under the Bayesian framework, using PyMC. While the notebook provides a high level overview of the approach, I recommend consulting two excellent textbooks on causal inference. Both The Effect [Huntington-Klein, 2021] and Causal Inference: The Mixtape [Cunningham, 2021] have chapters devoted to difference in differences.</summary>
    <category term="causalinference" label="causal inference"/>
    <category term="counterfactuals" label="counterfactuals"/>
    <category term="differenceindifferences" label="difference in differences"/>
    <category term="paneldata" label="panel data"/>
    <category term="posteriorpredictive" label="posterior predictive"/>
    <category term="quasiexperiments" label="quasi experiments"/>
    <category term="regression" label="regression"/>
    <category term="timeseries" label="time series"/>
    <published>2022-09-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-truncated-censored-regression.html</id>
    <title>Bayesian regression with truncated or censored data</title>
    <updated>2022-09-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The notebook provides an example of how to conduct linear regression when your outcome variable is either censored or truncated.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-truncated-censored-regression.html"/>
    <summary>The notebook provides an example of how to conduct linear regression when your outcome variable is either censored or truncated.</summary>
    <category term="censored" label="censored"/>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="regression" label="regression"/>
    <category term="truncated" label="truncated"/>
    <published>2022-09-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/reinforcement_learning.html</id>
    <title>Fitting a Reinforcement Learning Model to Behavioral Data with PyMC</title>
    <updated>2022-08-05T00:00:00+00:00</updated>
    <author>
      <name>Ricardo Vieira</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Reinforcement Learning models are commonly used in behavioral research to model how animals and humans learn, in situtions where they get to make repeated choices that are followed by some form of feedback, such as a reward or a punishment.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/reinforcement_learning.html"/>
    <summary>Reinforcement Learning models are commonly used in behavioral research to model how animals and humans learn, in situations where they get to make repeated choices that are followed by some form of feedback, such as a reward or a punishment.</summary>
    <category term="PyTensor" label="PyTensor"/>
    <category term="ReinforcementLearning" label="Reinforcement Learning"/>
    <published>2022-08-05T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/howto_debugging.html</id>
    <title>How to debug a model</title>
    <updated>2022-08-02T00:00:00+00:00</updated>
    <author>
      <name>Igor Kuvychko</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;There are various levels on which to debug a model. One of the simplest is to just print out the values that different variables are taking on.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/howto_debugging.html"/>
    <summary>There are various levels on which to debug a model. One of the simplest is to just print out the values that different variables are taking on.</summary>
    <category term="PyTensor" label="PyTensor"/>
    <category term="debugging" label="debugging"/>
    <published>2022-08-02T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/gaussian_process.html</id>
    <title>Gaussian Processes using numpy kernel</title>
    <updated>2022-07-31T00:00:00+00:00</updated>
    <author>
      <name>Ana Rita Santos and Sandra Meneses</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Example of simple Gaussian Process fit, adapted from Stan’s &lt;a class="reference external" href="https://github.com/stan-dev/example-models/blob/master/misc/gaussian-process/gp-fit.stan"&gt;example-models repository&lt;/a&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/gaussian_process.html"/>
    <summary>Example of simple Gaussian Process fit, adapted from Stan’s example-models repository.</summary>
    <category term="gaussianprocess" label="gaussian process"/>
    <published>2022-07-31T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/spatial/conditional_autoregressive_priors.html</id>
    <title>Conditional Autoregressive (CAR) Models for Spatial Data</title>
    <updated>2022-07-29T00:00:00+00:00</updated>
    <author>
      <name>Daniel Saunders</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/spatial/conditional_autoregressive_priors.html"/>
    <summary>This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.</summary>
    <category term="autoregressive" label="autoregressive"/>
    <category term="countdata" label="count data"/>
    <category term="spatial" label="spatial"/>
    <published>2022-07-29T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/excess_deaths.html</id>
    <title>Counterfactual inference: calculating excess deaths due to COVID-19</title>
    <updated>2022-07-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Causal reasoning and counterfactual thinking are really interesting but complex topics! Nevertheless, we can make headway into understanding the ideas through relatively simple examples. This notebook focuses on the concepts and the practical implementation of Bayesian causal reasoning using PyMC.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/excess_deaths.html"/>
    <summary>Causal reasoning and counterfactual thinking are really interesting but complex topics! Nevertheless, we can make headway into understanding the ideas through relatively simple examples. This notebook focuses on the concepts and the practical implementation of Bayesian causal reasoning using PyMC.</summary>
    <category term="Bayesianworkflow" label="Bayesian workflow"/>
    <category term="casestudy" label="case study"/>
    <category term="causalimpact" label="causal impact"/>
    <category term="causalinference" label="causal inference"/>
    <category term="counterfactuals" label="counterfactuals"/>
    <category term="forecasting" label="forecasting"/>
    <category term="posteriorpredictive" label="posterior predictive"/>
    <category term="regression" label="regression"/>
    <category term="timeseries" label="time series"/>
    <published>2022-07-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/stochastic_volatility.html</id>
    <title>Stochastic Volatility model</title>
    <updated>2022-06-17T00:00:00+00:00</updated>
    <author>
      <name>Abhipsha Das</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Asset prices have time-varying volatility (variance of day over day &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;returns&lt;/span&gt;&lt;/code&gt;). In some periods, returns are highly variable, while in others very stable. Stochastic volatility models model this with a latent volatility variable, modeled as a stochastic process. The following model is similar to the one described in the No-U-Turn Sampler paper, &lt;span id="id1"&gt;[&lt;a class="reference internal" href="time_series/stochastic_volatility.html#id42" title="Matthew Hoffman and Andrew Gelman. The no-u-turn sampler: adaptively setting path lengths in hamiltonian monte carlo. Journal of Machine Learning Research, 15:1593–1623, 2014. URL: https://dl.acm.org/doi/10.5555/2627435.2638586."&gt;Hoffman and Gelman, 2014&lt;/a&gt;]&lt;/span&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/stochastic_volatility.html"/>
    <summary>Asset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others very stable. Stochastic volatility models model this with a latent volatility variable, modeled as a stochastic process. The following model is similar to the one described in the No-U-Turn Sampler paper [Hoffman and Gelman, 2014].</summary>
    <category term="casestudy" label="case study"/>
    <category term="timeseries" label="time series"/>
    <published>2022-06-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/spline.html</id>
    <title>Splines</title>
    <updated>2022-06-04T00:00:00+00:00</updated>
    <author>
      <name>Joshua Cook</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Often, the model we want to fit is not a perfect line between some &lt;span class="math notranslate nohighlight"&gt;\(x\)&lt;/span&gt; and &lt;span class="math notranslate nohighlight"&gt;\(y\)&lt;/span&gt;.
Instead, the parameters of the model are expected to vary over &lt;span class="math notranslate nohighlight"&gt;\(x\)&lt;/span&gt;.
There are multiple ways to handle this situation, one of which is to fit a &lt;em&gt;spline&lt;/em&gt;.
Spline fit is effectively a sum of multiple individual curves (piecewise polynomials), each fit to a different section of &lt;span class="math notranslate nohighlight"&gt;\(x\)&lt;/span&gt;, that are tied together at their boundaries, often called &lt;em&gt;knots&lt;/em&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/spline.html"/>
    <summary>Often, the model we want to fit is not a perfect line between some x and y.
Instead, the parameters of the model are expected to vary over x.
There are multiple ways to handle this situation, one of which is to fit a spline.
Spline fit is effectively a sum of multiple individual curves (piecewise polynomials), each fit to a different section of x, that are tied together at their boundaries, often called knots.</summary>
    <category term="patsy" label="patsy"/>
    <category term="regression" label="regression"/>
    <category term="spline" label="spline"/>
    <published>2022-06-04T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/probabilistic_matrix_factorization.html</id>
    <title>Probabilistic Matrix Factorization for Making Personalized Recommendations</title>
    <updated>2022-06-03T00:00:00+00:00</updated>
    <author>
      <name>Rob Zinkov</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;So you are browsing for something to watch on Netflix and just not liking the suggestions. You just know you can do better. All you need to do is collect some ratings data from yourself and friends and build a recommendation algorithm. This notebook will guide you in doing just that!&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/probabilistic_matrix_factorization.html"/>
    <summary>So you are browsing for something to watch on Netflix and just not liking the suggestions. You just know you can do better. All you need to do is collect some ratings data from yourself and friends and build a recommendation algorithm. This notebook will guide you in doing just that!</summary>
    <category term="casestudy" label="case study"/>
    <category term="matrixfactorization" label="matrix factorization"/>
    <category term="productrecommendation" label="product recommendation"/>
    <published>2022-06-03T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/diagnostics_and_criticism/sampler-stats.html</id>
    <title>Sampler Statistics</title>
    <updated>2022-05-31T00:00:00+00:00</updated>
    <author>
      <name>Christian Luhmann</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;When checking for convergence or when debugging a badly behaving sampler, it is often helpful to take a closer look at what the sampler is doing. For this purpose some samplers export statistics for each generated sample.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/diagnostics_and_criticism/sampler-stats.html"/>
    <summary>When checking for convergence or when debugging a badly behaving sampler, it is often helpful to take a closer look at what the sampler is doing. For this purpose some samplers export statistics for each generated sample.</summary>
    <category term="diagnostics" label="diagnostics"/>
    <published>2022-05-31T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/log-gaussian-cox-process.html</id>
    <title>Modeling spatial point patterns with a marked log-Gaussian Cox process</title>
    <updated>2022-05-31T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The log-Gaussian Cox process (LGCP) is a probabilistic model of point patterns typically observed in space or time. It has two main components. First, an underlying &lt;em&gt;intensity&lt;/em&gt; field &lt;span class="math notranslate nohighlight"&gt;\(\lambda(s)\)&lt;/span&gt; of positive real values is modeled over the entire domain &lt;span class="math notranslate nohighlight"&gt;\(X\)&lt;/span&gt; using an exponentially-transformed Gaussian process which constrains &lt;span class="math notranslate nohighlight"&gt;\(\lambda\)&lt;/span&gt; to be positive. Then, this intensity field is used to parameterize a &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Poisson_point_process"&gt;Poisson point process&lt;/a&gt; which represents a stochastic mechanism for placing points in space. Some phenomena amenable to this representation include the incidence of cancer cases across a county, or the spatiotemporal locations of crime events in a city. Both spatial and temporal dimensions can be handled equivalently within this framework, though this tutorial only addresses data in two spatial dimensions.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/log-gaussian-cox-process.html"/>
    <summary>The log-Gaussian Cox process (LGCP) is a probabilistic model of point patterns typically observed in space or time. It has two main components. First, an underlying intensity field \lambda(s) of positive real values is modeled over the entire domain X using an exponentially-transformed Gaussian process which constrains \lambda to be positive. Then, this intensity field is used to parameterize a Poisson point process which represents a stochastic mechanism for placing points in space. Some phenomena amenable to this representation include the incidence of cancer cases across a county, or the spatiotemporal locations of crime events in a city. Both spatial and temporal dimensions can be handled equivalently within this framework, though this tutorial only addresses data in two spatial dimensions.</summary>
    <category term="countdata" label="count data"/>
    <category term="coxprocess" label="cox process"/>
    <category term="latentgaussianprocess" label="latent gaussian process"/>
    <category term="nonparametric" label="nonparametric"/>
    <category term="spatial" label="spatial"/>
    <published>2022-05-31T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/introductory/api_quickstart.html</id>
    <title>General API quickstart</title>
    <updated>2022-05-31T00:00:00+00:00</updated>
    <author>
      <name>Christian Luhmann</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Models in PyMC are centered around the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;Model&lt;/span&gt;&lt;/code&gt; class. It has references to all random variables (RVs) and computes the model logp and its gradients. Usually, you would instantiate it as part of a &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;with&lt;/span&gt;&lt;/code&gt; context:&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/introductory/api_quickstart.html"/>
    <summary>Models in PyMC are centered around the Model class. It has references to all random variables (RVs) and computes the model logp and its gradients. Usually, you would instantiate it as part of a with context:</summary>
    <published>2022-05-31T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/samplers/SMC-ABC_Lotka-Volterra_example.html</id>
    <title>Approximate Bayesian Computation</title>
    <updated>2022-05-31T00:00:00+00:00</updated>
    <author>
      <name>PyMC Contributors</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Approximate Bayesian Computation methods (also called likelihood free inference methods), are a group of techniques developed for inferring posterior distributions in cases where the likelihood function is intractable or costly to evaluate. This does not mean that the likelihood function is not part of the analysis, it just the we are approximating the likelihood, and hence the name of the ABC methods.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/samplers/SMC-ABC_Lotka-Volterra_example.html"/>
    <summary>Approximate Bayesian Computation methods (also called likelihood-free inference methods) are a group of techniques developed for inferring posterior distributions in cases where the likelihood function is intractable or costly to evaluate. This does not mean that the likelihood function is not part of the analysis; it just means that we are approximating the likelihood, hence the name of the ABC methods.</summary>
    <category term="ABC" label="ABC"/>
    <category term="SMC" label="SMC"/>
    <published>2022-05-31T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/variational_inference/bayesian_neural_network_advi.html</id>
    <title>Variational Inference: Bayesian Neural Networks</title>
    <updated>2022-05-30T00:00:00+00:00</updated>
    <author>
      <name>updated by Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;strong&gt;Probabilistic Programming&lt;/strong&gt;, &lt;strong&gt;Deep Learning&lt;/strong&gt; and “&lt;strong&gt;Big Data&lt;/strong&gt;” are among the biggest topics in machine learning. Inside of PP, a lot of innovation is focused on making things scale using &lt;strong&gt;Variational Inference&lt;/strong&gt;. In this example, I will show how to use &lt;strong&gt;Variational Inference&lt;/strong&gt; in PyMC to fit a simple Bayesian Neural Network. I will also discuss how bridging Probabilistic Programming and Deep Learning can open up very interesting avenues to explore in future research.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/variational_inference/bayesian_neural_network_advi.html"/>
    <summary>Probabilistic Programming, Deep Learning and “Big Data” are among the biggest topics in machine learning. Inside of PP, a lot of innovation is focused on making things scale using Variational Inference. In this example, I will show how to use Variational Inference in PyMC to fit a simple Bayesian Neural Network. I will also discuss how bridging Probabilistic Programming and Deep Learning can open up very interesting avenues to explore in future research.</summary>
    <category term="minibatch" label="minibatch"/>
    <category term="neuralnetworks" label="neural networks"/>
    <category term="perceptron" label="perceptron"/>
    <category term="variationalinference" label="variational inference"/>
    <published>2022-05-30T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/survival_analysis/censored_data.html</id>
    <title>Censored Data Models</title>
    <updated>2022-05-23T00:00:00+00:00</updated>
    <author>
      <name>Luis Mario Domenzain</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;a class="reference external" href="http://docs.pymc.io/notebooks/survival_analysis.html"&gt;This example notebook on Bayesian survival
analysis&lt;/a&gt; touches on the
point of censored data. &lt;em&gt;Censoring&lt;/em&gt; is a form of missing-data problem, in which
observations greater than a certain threshold are clipped down to that
threshold, or observations less than a certain threshold are clipped up to that
threshold, or both. These are called right, left and interval censoring,
respectively. In this example notebook we consider interval censoring.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/survival_analysis/censored_data.html"/>
    <summary>This example notebook on Bayesian survival
analysis touches on the
point of censored data. Censoring is a form of missing-data problem, in which
observations greater than a certain threshold are clipped down to that
threshold, or observations less than a certain threshold are clipped up to that
threshold, or both. These are called right, left and interval censoring,
respectively. In this example notebook we consider interval censoring.</summary>
    <category term="censored" label="censored"/>
    <category term="survivalanalysis" label="survival analysis"/>
    <published>2022-05-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/regression_discontinuity.html</id>
    <title>Regression discontinuity design analysis</title>
    <updated>2022-04-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;a class="reference external" href="https://en.wikipedia.org/wiki/Quasi-experiment"&gt;Quasi experiments&lt;/a&gt; involve experimental interventions and quantitative measures. However, quasi-experiments do &lt;em&gt;not&lt;/em&gt; involve random assignment of units (e.g. cells, people, companies, schools, states) to test or control groups. This inability to conduct random assignment poses problems when making causal claims as it makes it harder to argue that any difference between a control and test group are because of an intervention and not because of a confounding factor.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/regression_discontinuity.html"/>
    <summary>Quasi experiments involve experimental interventions and quantitative measures. However, quasi-experiments do not involve random assignment of units (e.g. cells, people, companies, schools, states) to test or control groups. This inability to conduct random assignment poses problems when making causal claims as it makes it harder to argue that any difference between a control and test group are because of an intervention and not because of a confounding factor.</summary>
    <category term="causalinference" label="causal inference"/>
    <category term="counterfactuals" label="counterfactuals"/>
    <category term="quasiexperiments" label="quasi experiments"/>
    <category term="regression" label="regression"/>
    <published>2022-04-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-MaunaLoa.html</id>
    <title>Gaussian Process for CO2 at Mauna Loa</title>
    <updated>2022-04-23T00:00:00+00:00</updated>
    <author>
      <name>Chris Fonnesbeck</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This Gaussian Process (GP) example shows how to:&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-MaunaLoa.html"/>
    <summary>This Gaussian Process (GP) example shows how to:</summary>
    <category term="CO2" label="CO2"/>
    <category term="gaussianprocess" label="gaussian process"/>
    <published>2022-04-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/mixture_models/gaussian_mixture_model.html</id>
    <title>Gaussian Mixture Model</title>
    <updated>2022-04-23T00:00:00+00:00</updated>
    <author>
      <name>Abe Flaxman</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;A &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Mixture_model"&gt;mixture model&lt;/a&gt; allows us to make inferences about the component contributors to a distribution of data. More specifically, a Gaussian Mixture Model allows us to make inferences about the means and standard deviations of a specified number of underlying component Gaussian distributions.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/mixture_models/gaussian_mixture_model.html"/>
    <summary>A mixture model allows us to make inferences about the component contributors to a distribution of data. More specifically, a Gaussian Mixture Model allows us to make inferences about the means and standard deviations of a specified number of underlying component Gaussian distributions.</summary>
    <category term="classification" label="classification"/>
    <category term="mixturemodel" label="mixture model"/>
    <published>2022-04-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/Air_passengers-Prophet_with_Bayesian_workflow.html</id>
    <title>Air passengers - Prophet-like model</title>
    <updated>2022-04-23T00:00:00+00:00</updated>
    <author>
      <name>Danh Phan</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;We’re going to look at the “air passengers” dataset, which tracks the monthly totals of a US airline passengers from 1949 to 1960. We could fit this using the &lt;a class="reference external" href="https://facebook.github.io/prophet/"&gt;Prophet&lt;/a&gt; model &lt;span id="id1"&gt;[&lt;a class="reference internal" href="time_series/Air_passengers-Prophet_with_Bayesian_workflow.html#id103" title="Sean J Taylor and Benjamin Letham. Forecasting at scale. The American Statistician, 72(1):37–45, 2018. URL: https://peerj.com/preprints/3190/."&gt;Taylor and Letham, 2018&lt;/a&gt;]&lt;/span&gt; (indeed, this dataset is one of the examples they provide in their documentation), but instead we’ll make our own Prophet-like model in PyMC3. This will make it a lot easier to inspect the model’s components and to do prior predictive checks (an integral component of the &lt;a class="reference external" href="https://arxiv.org/abs/2011.01808"&gt;Bayesian workflow&lt;/a&gt; &lt;span id="id2"&gt;[&lt;a class="reference internal" href="time_series/Air_passengers-Prophet_with_Bayesian_workflow.html#id37" title="Andrew Gelman, Aki Vehtari, Daniel Simpson, Charles C Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Bürkner, and Martin Modrák. Bayesian workflow. arXiv preprint arXiv:2011.01808, 2020. URL: https://arxiv.org/abs/2011.01808."&gt;Gelman &lt;em&gt;et al.&lt;/em&gt;, 2020&lt;/a&gt;]&lt;/span&gt;).&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/Air_passengers-Prophet_with_Bayesian_workflow.html"/>
    <summary>We’re going to look at the “air passengers” dataset, which tracks the monthly totals of passengers of a US airline from 1949 to 1960. We could fit this using the Prophet model [Taylor and Letham, 2018] (indeed, this dataset is one of the examples they provide in their documentation), but instead we’ll make our own Prophet-like model in PyMC3. This will make it a lot easier to inspect the model’s components and to do prior predictive checks (an integral component of the Bayesian workflow [Gelman et al., 2020]).</summary>
    <category term="prophet" label="prophet"/>
    <category term="timeseries" label="time series"/>
    <published>2022-04-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/item_response_nba.html</id>
    <title>NBA Foul Analysis with Item Response Theory</title>
    <updated>2022-04-17T00:00:00+00:00</updated>
    <author>
      <name>Lorenzo Toniazzi</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This tutorial shows an application of Bayesian Item Response Theory &lt;span id="id1"&gt;[&lt;a class="reference internal" href="case_studies/item_response_nba.html#id29" title="Jean-Paul Fox. Bayesian item response modeling: Theory and applications. Springer, 2010."&gt;Fox, 2010&lt;/a&gt;]&lt;/span&gt; to NBA basketball foul calls data using PyMC. Based on Austin Rochford’s blogpost &lt;a class="reference external" href="https://www.austinrochford.com/posts/2017-04-04-nba-irt.html"&gt;NBA Foul Calls and Bayesian Item Response Theory&lt;/a&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/item_response_nba.html"/>
    <summary>This tutorial shows an application of Bayesian Item Response Theory [Fox, 2010] to NBA basketball foul calls data using PyMC. Based on Austin Rochford’s blogpost NBA Foul Calls and Bayesian Item Response Theory.</summary>
    <category term="casestudy" label="case study"/>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <published>2022-04-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/putting_workflow.html</id>
    <title>Model building and expansion for golf putting</title>
    <updated>2022-04-02T00:00:00+00:00</updated>
    <author>
      <name>Oriol Abril-Pla</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;strong&gt;This uses and closely follows &lt;a class="reference external" href="https://mc-stan.org/users/documentation/case-studies/golf.html"&gt;the case study from Andrew Gelman&lt;/a&gt;, written in Stan. There are some new visualizations and we steered away from using improper priors, but much credit to him and to the Stan group for the wonderful case study and software.&lt;/strong&gt;&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/putting_workflow.html"/>
    <summary>This uses and closely follows the case study from Andrew Gelman, written in Stan. There are some new visualizations and we steered away from using improper priors, but much credit to him and to the Stan group for the wonderful case study and software.</summary>
    <category term="Bayesianworkflow" label="Bayesian workflow"/>
    <category term="modelexpansion" label="model expansion"/>
    <category term="sports" label="sports"/>
    <published>2022-04-02T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/wrapping_jax_function.html</id>
    <title>How to wrap a JAX function for use in PyMC</title>
    <updated>2022-03-24T00:00:00+00:00</updated>
    <author>
      <name>Ricardo Vieira</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/wrapping_jax_function.html"/>
    <summary>This notebook uses libraries that are not PyMC dependencies
and therefore need to be installed specifically to run this notebook.
Open the dropdown below for extra guidance.</summary>
    <category term="JAX" label="JAX"/>
    <category term="PyTensor" label="PyTensor"/>
    <category term="hiddenmarkovmodel" label="hidden markov model"/>
    <published>2022-03-24T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/moderation_analysis.html</id>
    <title>Bayesian moderation analysis</title>
    <updated>2022-03-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook covers Bayesian &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Moderation_(statistics)"&gt;moderation analysis&lt;/a&gt;. This is appropriate when we believe that one predictor variable (the moderator) may influence the linear relationship between another predictor variable and an outcome. Here we look at an example where we look at the relationship between hours of training and muscle mass, where it may be that age (the moderating variable) affects this relationship.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/moderation_analysis.html"/>
    <summary>This notebook covers Bayesian moderation analysis. This is appropriate when we believe that one predictor variable (the moderator) may influence the linear relationship between another predictor variable and an outcome. Here we look at an example of the relationship between hours of training and muscle mass, where age (the moderating variable) may affect this relationship.</summary>
    <category term="moderation" label="moderation"/>
    <category term="pathanalysis" label="path analysis"/>
    <published>2022-03-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-MeansAndCovs.html</id>
    <title>Mean and Covariance Functions</title>
    <updated>2022-03-22T00:00:00+00:00</updated>
    <author>
      <name>Oriol Abril Pla</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;A large set of mean and covariance functions are available in PyMC.  It is relatively easy to define custom mean and covariance functions.  Since PyMC uses PyTensor, their gradients do not need to be defined by the user.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-MeansAndCovs.html"/>
    <summary>A large set of mean and covariance functions are available in PyMC.  It is relatively easy to define custom mean and covariance functions.  Since PyMC uses PyTensor, their gradients do not need to be defined by the user.</summary>
    <category term="gaussianprocess" label="gaussian process"/>
    <published>2022-03-22T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/factor_analysis.html</id>
    <title>Factor analysis</title>
    <updated>2022-03-19T00:00:00+00:00</updated>
    <author>
      <name>Erik Werner</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Factor analysis is a widely used probabilistic model for identifying low-rank structure in multivariate data as encoded in latent variables. It is very closely related to principal components analysis, and differs only in the prior distributions assumed for these latent variables. It is also a good example of a linear Gaussian model as it can be described entirely as a linear transformation of underlying Gaussian variates. For a high-level view of how factor analysis relates to other models, you can check out &lt;a class="reference external" href="https://www.cs.ubc.ca/~murphyk/Bayes/Figures/gmka.gif"&gt;this diagram&lt;/a&gt; originally published by Ghahramani and Roweis.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/factor_analysis.html"/>
    <summary>Factor analysis is a widely used probabilistic model for identifying low-rank structure in multivariate data as encoded in latent variables. It is very closely related to principal components analysis, and differs only in the prior distributions assumed for these latent variables. It is also a good example of a linear Gaussian model as it can be described entirely as a linear transformation of underlying Gaussian variates. For a high-level view of how factor analysis relates to other models, you can check out this diagram originally published by Ghahramani and Roweis.</summary>
    <category term="PCA" label="PCA"/>
    <category term="factoranalysis" label="factor analysis"/>
    <category term="matrixfactorization" label="matrix factorization"/>
    <published>2022-03-19T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/rugby_analytics.html</id>
    <title>A Hierarchical model for Rugby prediction</title>
    <updated>2022-03-19T00:00:00+00:00</updated>
    <author>
      <name>Oriol Abril-Pla</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;In this example, we’re going to reproduce the first model described in &lt;span id="id1"&gt;Baio and Blangiardo [&lt;a class="reference internal" href="case_studies/rugby_analytics.html#id7" title="Gianluca Baio and Marta Blangiardo. Bayesian hierarchical model for the prediction of football results. Journal of Applied Statistics, 37(2):253–264, 2010."&gt;2010&lt;/a&gt;]&lt;/span&gt; using PyMC. Then show how to sample from the posterior predictive to simulate championship outcomes from the scored goals which are the modeled quantities.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/rugby_analytics.html"/>
    <summary>In this example, we’re going to reproduce the first model described in Baio and Blangiardo [2010] using PyMC. We then show how to sample from the posterior predictive to simulate championship outcomes from the scored goals, which are the modeled quantities.</summary>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <category term="sports" label="sports"/>
    <published>2022-03-19T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-binomial-regression.html</id>
    <title>Binomial regression</title>
    <updated>2022-02-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook covers the logic behind &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Binomial_regression"&gt;Binomial regression&lt;/a&gt;, a specific instance of Generalized Linear Modelling. The example is kept very simple, with a single predictor variable.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-binomial-regression.html"/>
    <summary>This notebook covers the logic behind Binomial regression, a specific instance of Generalized Linear Modelling. The example is kept very simple, with a single predictor variable.</summary>
    <category term="binomialregression" label="binomial regression"/>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <published>2022-02-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/mediation_analysis.html</id>
    <title>Bayesian mediation analysis</title>
    <updated>2022-02-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook covers Bayesian &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Mediation_(statistics)"&gt;mediation analysis&lt;/a&gt;. This is useful when we want to explore possible mediating pathways between a predictor and an outcome variable.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/mediation_analysis.html"/>
    <summary>This notebook covers Bayesian mediation analysis. This is useful when we want to explore possible mediating pathways between a predictor and an outcome variable.</summary>
    <category term="mediation" label="mediation"/>
    <category term="pathanalysis" label="path analysis"/>
    <category term="regression" label="regression"/>
    <published>2022-02-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/samplers/lasso_block_update.html</id>
    <title>Lasso regression with block updating</title>
    <updated>2022-02-10T00:00:00+00:00</updated>
    <author>
      <name>Lorenzo Toniazzi</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Sometimes, it is very useful to update a set of parameters together. For example, variables that are highly correlated are often good to update together. In PyMC block updating is simple. This will be demonstrated using the parameter &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;step&lt;/span&gt;&lt;/code&gt; of &lt;code class="xref py py-class docutils literal notranslate"&gt;&lt;span class="pre"&gt;pymc.sample&lt;/span&gt;&lt;/code&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/samplers/lasso_block_update.html"/>
    <summary>Sometimes, it is very useful to update a set of parameters together. For example, variables that are highly correlated are often good to update together. In PyMC block updating is simple. This will be demonstrated using the parameter step of pymc.sample.</summary>
    <category term="regression" label="regression"/>
    <published>2022-02-10T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-model-selection.html</id>
    <title>GLM: Model Selection</title>
    <updated>2022-01-08T00:00:00+00:00</updated>
    <author>
      <name>Oriol Abril-Pla</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;A fairly minimal reproducible example of Model Selection using WAIC, and LOO as currently implemented in PyMC3.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-model-selection.html"/>
    <summary>A fairly minimal reproducible example of model selection using WAIC and LOO, as currently implemented in PyMC3.</summary>
    <category term="crossvalidation" label="cross validation"/>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="loo" label="loo"/>
    <category term="modelcomparison" label="model comparison"/>
    <category term="waic" label="waic"/>
    <published>2022-01-08T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/mixture_models/dirichlet_mixture_of_multinomials.html</id>
    <title>Dirichlet mixtures of multinomials</title>
    <updated>2022-01-08T00:00:00+00:00</updated>
    <author>
      <name>Oriol Abril-Pla</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This example notebook demonstrates the use of a
Dirichlet mixture of multinomials
(a.k.a &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Dirichlet-multinomial_distribution"&gt;Dirichlet-multinomial&lt;/a&gt; or DM)
to model categorical count data.
Models like this one are important in a variety of areas, including
natural language processing, ecology, bioinformatics, and more.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/mixture_models/dirichlet_mixture_of_multinomials.html"/>
    <summary>This example notebook demonstrates the use of a
Dirichlet mixture of multinomials
(a.k.a Dirichlet-multinomial or DM)
to model categorical count data.
Models like this one are important in a variety of areas, including
natural language processing, ecology, bioinformatics, and more.</summary>
    <category term="mixturemodel" label="mixture model"/>
    <published>2022-01-08T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/BEST.html</id>
    <title>Bayesian Estimation Supersedes the T-Test</title>
    <updated>2022-01-07T00:00:00+00:00</updated>
    <author>
      <name>Andrés Suárez</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Non-consecutive header level increase; H1 to H3 [myst.header]&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/BEST.html"/>
    <summary>Bayesian estimation of the difference between two groups, following Kruschke’s BEST approach as an alternative to the t-test.</summary>
    <category term="hypothesistesting" label="hypothesis testing"/>
    <category term="modelcomparison" label="model comparison"/>
    <published>2022-01-07T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/bart/bart_introduction.html</id>
    <title>Bayesian Additive Regression Trees: Introduction</title>
    <updated>2021-12-21T00:00:00+00:00</updated>
    <author>
      <name>Osvaldo Martin</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Bayesian additive regression trees (BART) is a non-parametric regression approach. If we have some covariates &lt;span class="math notranslate nohighlight"&gt;\(X\)&lt;/span&gt; and we want to use them to model &lt;span class="math notranslate nohighlight"&gt;\(Y\)&lt;/span&gt;, a BART model (omitting the priors) can be represented as:&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/bart/bart_introduction.html"/>
    <summary>Bayesian additive regression trees (BART) is a non-parametric regression approach. If we have some covariates X and we want to use them to model Y, a BART model (omitting the priors) can be represented as:</summary>
    <category term="BART" label="BART"/>
    <category term="non-parametric" label="non-parametric"/>
    <category term="regression" label="regression"/>
    <published>2021-12-21T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/blackbox_external_likelihood_numpy.html</id>
    <title>Using a “black box” likelihood function</title>
    <updated>2021-12-16T00:00:00+00:00</updated>
    <author>
      <name>Ricardo Vieira</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;There is a &lt;a class="reference internal" href="howto/wrapping_jax_function.html#wrapping_jax_function"&gt;&lt;span class="std std-ref"&gt;related example&lt;/span&gt;&lt;/a&gt; that discusses how to use a likelihood implemented in JAX&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/blackbox_external_likelihood_numpy.html"/>
    <summary>There is a related example that discusses how to use a likelihood implemented in JAX</summary>
    <category term="PyTensor" label="PyTensor"/>
    <published>2021-12-16T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/fundamentals/data_container.html</id>
    <title>Using Data Containers</title>
    <updated>2021-12-16T00:00:00+00:00</updated>
    <author>
      <name>Jesse Grabowski</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;After building the statistical model of your dreams, you’re going to need to feed it some data. Data is typically introduced to a PyMC model in one of two ways. Some data is used as an exogenous input, called &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;X&lt;/span&gt;&lt;/code&gt; in linear regression models, where &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;mu&lt;/span&gt; &lt;span class="pre"&gt;=&lt;/span&gt; &lt;span class="pre"&gt;X&lt;/span&gt; &lt;span class="pre"&gt;&amp;#64;&lt;/span&gt; &lt;span class="pre"&gt;beta&lt;/span&gt;&lt;/code&gt;. Other data are “observed” examples of the endogenous outputs of your model, called &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;y&lt;/span&gt;&lt;/code&gt; in regression models, and is used as input to the likelihood function implied by your model. These data, either exogenous or endogenous, can be included in your model as wide variety of datatypes, including numpy &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;ndarrays&lt;/span&gt;&lt;/code&gt;, pandas &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;Series&lt;/span&gt;&lt;/code&gt; and &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;DataFrame&lt;/span&gt;&lt;/code&gt;, and even pytensor &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;TensorVariables&lt;/span&gt;&lt;/code&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/fundamentals/data_container.html"/>
    <summary>After building the statistical model of your dreams, you’re going to need to feed it some data. Data is typically introduced to a PyMC model in one of two ways. Some data is used as an exogenous input, called X in linear regression models, where mu = X @ beta. Other data are “observed” examples of the endogenous outputs of your model, called y in regression models, and are used as input to the likelihood function implied by your model. These data, either exogenous or endogenous, can be included in your model as a wide variety of datatypes, including numpy ndarrays, pandas Series and DataFrame, and even pytensor TensorVariables.</summary>
    <category term="posteriorpredictive" label="posterior predictive"/>
    <category term="shareddata" label="shared data"/>
    <published>2021-12-16T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-robust-with-outlier-detection.html</id>
    <title>GLM: Robust Regression using Custom Likelihood for Outlier Classification</title>
    <updated>2021-11-17T00:00:00+00:00</updated>
    <author>
      <name>Oriol Abril</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Using PyMC for Robust Regression with Outlier Detection using the Hogg 2010 Signal vs Noise method.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/generalized_linear_models/GLM-robust-with-outlier-detection.html"/>
    <summary>Using PyMC for Robust Regression with Outlier Detection using the Hogg 2010 Signal vs Noise method.</summary>
    <category term="outliers" label="outliers"/>
    <category term="regression" label="regression"/>
    <category term="robust" label="robust"/>
    <published>2021-11-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/case_studies/binning.html</id>
    <title>Estimating parameters of a distribution from awkwardly binned data</title>
    <updated>2021-10-23T00:00:00+00:00</updated>
    <author>
      <name>Benjamin T. Vincent</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Let us say that we are interested in inferring the properties of a population. This could be anything from the distribution of age, or income, or body mass index, or a whole range of different possible measures. In completing this task, we might often come across the situation where we have multiple datasets, each of which can inform our beliefs about the overall population.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/case_studies/binning.html"/>
    <summary>Let us say that we are interested in inferring the properties of a population. This could be anything from the distribution of age, income, or body mass index, to a whole range of other possible measures. In completing this task, we might often come across the situation where we have multiple datasets, each of which can inform our beliefs about the overall population.</summary>
    <category term="binneddata" label="binned data"/>
    <category term="casestudy" label="case study"/>
    <category term="parameterestimation" label="parameter estimation"/>
    <published>2021-10-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/samplers/SMC2_gaussians.html</id>
    <title>Sequential Monte Carlo</title>
    <updated>2021-10-19T00:00:00+00:00</updated>
    <author>
      <name>PyMC Contributors</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Sampling from distributions with multiple peaks with standard MCMC methods can be difficult, if not impossible, as the Markov chain often gets stuck in either of the minima. A Sequential Monte Carlo sampler (SMC) is a way to ameliorate this problem.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/samplers/SMC2_gaussians.html"/>
    <summary>Sampling from multimodal distributions with standard MCMC methods can be difficult, if not impossible, as the Markov chain often gets stuck in one of the modes. A Sequential Monte Carlo sampler (SMC) is a way to ameliorate this problem.</summary>
    <category term="SMC" label="SMC"/>
    <published>2021-10-19T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/variational_inference/GLM-hierarchical-advi-minibatch.html</id>
    <title>GLM: Mini-batch ADVI on hierarchical regression model</title>
    <updated>2021-09-23T00:00:00+00:00</updated>
    <author>
      <name>PyMC Contributors</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Unlike Gaussian mixture models, (hierarchical) regression models have independent variables. These variables affect the likelihood function, but are not random variables. When using mini-batch, we should take care of that.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/variational_inference/GLM-hierarchical-advi-minibatch.html"/>
    <summary>Unlike Gaussian mixture models, (hierarchical) regression models have independent variables. These variables affect the likelihood function, but are not random variables. When using mini-batches, we need to take care of this distinction.</summary>
    <category term="generalizedlinearmodel" label="generalized linear model"/>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <category term="variationalinference" label="variational inference"/>
    <published>2021-09-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/mixture_models/marginalized_gaussian_mixture_model.html</id>
    <title>Marginalized Gaussian Mixture Model</title>
    <updated>2021-09-18T00:00:00+00:00</updated>
    <author>
      <name>PyMC Contributors</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity.  A toy example of such a data set is shown below.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/mixture_models/marginalized_gaussian_mixture_model.html"/>
    <summary>Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity.  A toy example of such a data set is shown below.</summary>
    <category term="mixturemodel" label="mixture model"/>
    <published>2021-09-18T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/mixture_models/dp_mix.html</id>
    <title>Dirichlet process mixtures for density estimation</title>
    <updated>2021-09-16T00:00:00+00:00</updated>
    <author>
      <name>Abhipsha Das</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;The &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Dirichlet_process"&gt;Dirichlet process&lt;/a&gt; is a flexible probability distribution over the space of distributions.  Most generally, a probability distribution, &lt;span class="math notranslate nohighlight"&gt;\(P\)&lt;/span&gt;, on a set &lt;span class="math notranslate nohighlight"&gt;\(\Omega\)&lt;/span&gt; is a [measure](https://en.wikipedia.org/wiki/Measure_(mathematics%29) that assigns measure one to the entire space (&lt;span class="math notranslate nohighlight"&gt;\(P(\Omega) = 1\)&lt;/span&gt;).  A Dirichlet process &lt;span class="math notranslate nohighlight"&gt;\(P \sim \textrm{DP}(\alpha, P_0)\)&lt;/span&gt; is a measure that has the property that, for every finite &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Disjoint_sets"&gt;disjoint&lt;/a&gt; partition &lt;span class="math notranslate nohighlight"&gt;\(S_1, \ldots, S_n\)&lt;/span&gt; of &lt;span class="math notranslate nohighlight"&gt;\(\Omega\)&lt;/span&gt;,&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/mixture_models/dp_mix.html"/>
    <summary>The Dirichlet process is a flexible probability distribution over the space of distributions.  Most generally, a probability distribution, P, on a set \Omega is a measure that assigns measure one to the entire space (P(\Omega) = 1).  A Dirichlet process P \sim \textrm{DP}(\alpha, P_0) is a measure that has the property that, for every finite disjoint partition S_1, \ldots, S_n of \Omega,</summary>
    <category term="mixturemodel" label="mixture model"/>
    <published>2021-09-16T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/causal_inference/bayesian_ab_testing_introduction.html</id>
    <title>Introduction to Bayesian A/B Testing</title>
    <updated>2021-05-23T00:00:00+00:00</updated>
    <author>
      <name>Cuong Duong</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook demonstrates how to implement a Bayesian analysis of an A/B test. We implement the models discussed in VWO’s Bayesian A/B Testing Whitepaper &lt;span id="id1"&gt;[&lt;a class="reference internal" href="causal_inference/bayesian_ab_testing_introduction.html#id98" title="Chris Stucchio. Bayesian a/b testing at vwo. 2015. URL: https://vwo.com/downloads/VWO_SmartStats_technical_whitepaper.pdf."&gt;Stucchio, 2015&lt;/a&gt;]&lt;/span&gt;, and discuss the effect of different prior choices for these models. This notebook does &lt;em&gt;not&lt;/em&gt; discuss other related topics like how to choose a prior, early stopping, and power analysis.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/causal_inference/bayesian_ab_testing_introduction.html"/>
    <summary>This notebook demonstrates how to implement a Bayesian analysis of an A/B test. We implement the models discussed in VWO’s Bayesian A/B Testing Whitepaper stucchio2015bayesian, and discuss the effect of different prior choices for these models. This notebook does not discuss other related topics like how to choose a prior, early stopping, and power analysis.</summary>
    <category term="abtest" label="ab test"/>
    <category term="casestudy" label="case study"/>
    <published>2021-05-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/samplers/sampling_conjugate_step.html</id>
    <title>Using a custom step method for sampling from locally conjugate posterior distributions</title>
    <updated>2020-11-17T00:00:00+00:00</updated>
    <author>
      <name>Christopher Krapu</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;Markov chain Monte Carlo (MCMC) sampling methods are fundamental to modern Bayesian inference. PyMC leverages Hamiltonian Monte Carlo (HMC), a powerful sampling algorithm that efficiently explores high-dimensional posterior distributions. Unlike simpler MCMC methods, HMC harnesses the gradient of the log posterior density to make intelligent proposals, allowing it to effectively sample complex posteriors with hundreds or thousands of parameters. A key advantage of HMC is its generality - it works with arbitrary prior distributions and likelihood functions, without requiring conjugate pairs or closed-form solutions. This is crucial since most real-world models involve priors and likelihoods whose product cannot be analytically integrated to obtain the posterior distribution. HMC’s gradient-guided proposals make it dramatically more efficient than earlier MCMC approaches that rely on random walks or simple proposal distributions.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/samplers/sampling_conjugate_step.html"/>
    <summary>Markov chain Monte Carlo (MCMC) sampling methods are fundamental to modern Bayesian inference. PyMC leverages Hamiltonian Monte Carlo (HMC), a powerful sampling algorithm that efficiently explores high-dimensional posterior distributions. Unlike simpler MCMC methods, HMC harnesses the gradient of the log posterior density to make intelligent proposals, allowing it to effectively sample complex posteriors with hundreds or thousands of parameters. A key advantage of HMC is its generality - it works with arbitrary prior distributions and likelihood functions, without requiring conjugate pairs or closed-form solutions. This is crucial since most real-world models involve priors and likelihoods whose product cannot be analytically integrated to obtain the posterior distribution. HMC’s gradient-guided proposals make it dramatically more efficient than earlier MCMC approaches that rely on random walks or simple proposal distributions.</summary>
    <category term="sampling" label="sampling"/>
    <category term="stepmethod" label="step method"/>
    <published>2020-11-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/diagnostics_and_criticism/Diagnosing_biased_Inference_with_Divergences.html</id>
    <title>Diagnosing Biased Inference with Divergences</title>
    <updated>2018-02-23T00:00:00+00:00</updated>
    <author>
      <name>Agustina Arroyuelo</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;This notebook is a PyMC3 port of &lt;a class="reference external" href="http://mc-stan.org/documentation/case-studies/divergences_and_bias.html"&gt;Michael Betancourt’s post on mc-stan&lt;/a&gt;. For detailed explanation of the underlying mechanism please check the original post, &lt;a class="reference external" href="http://mc-stan.org/documentation/case-studies/divergences_and_bias.html"&gt;Diagnosing Biased Inference with Divergences&lt;/a&gt; and Betancourt’s excellent paper, &lt;a class="reference external" href="https://arxiv.org/abs/1701.02434"&gt;A Conceptual Introduction to Hamiltonian Monte Carlo&lt;/a&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/diagnostics_and_criticism/Diagnosing_biased_Inference_with_Divergences.html"/>
    <summary>This notebook is a PyMC3 port of Michael Betancourt’s post on mc-stan. For detailed explanation of the underlying mechanism please check the original post, Diagnosing Biased Inference with Divergences and Betancourt’s excellent paper, A Conceptual Introduction to Hamiltonian Monte Carlo.</summary>
    <category term="diagnostics" label="diagnostics"/>
    <category term="hierarchicalmodel" label="hierarchical model"/>
    <published>2018-02-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-TProcess.html</id>
    <title>Student-t Process</title>
    <updated>2017-08-23T00:00:00+00:00</updated>
    <author>
      <name>Bill Engels</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;PyMC also includes T-process priors.  They are a generalization of a Gaussian process prior to the multivariate Student’s T distribution.  The usage is identical to that of &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;gp.Latent&lt;/span&gt;&lt;/code&gt;, except they require a degrees of freedom parameter when they are specified in the model.  For more information, see chapter 9 of &lt;a class="reference external" href="http://www.gaussianprocess.org/gpml/"&gt;Rasmussen+Williams&lt;/a&gt;, and &lt;a class="reference external" href="https://arxiv.org/abs/1402.4306"&gt;Shah et al.&lt;/a&gt;.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/gaussian_processes/GP-TProcess.html"/>
    <summary>PyMC also includes T-process priors.  They are a generalization of a Gaussian process prior to the multivariate Student’s T distribution.  The usage is identical to that of gp.Latent, except they require a degrees of freedom parameter when they are specified in the model.  For more information, see chapter 9 of Rasmussen+Williams, and Shah et al.</summary>
    <category term="bayesiannon-parametrics" label="bayesian non-parametrics"/>
    <category term="gaussianprocess" label="gaussian process"/>
    <category term="t-process" label="t-process"/>
    <published>2017-08-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/howto/updating_priors.html</id>
    <title>Updating Priors</title>
    <updated>2017-01-23T00:00:00+00:00</updated>
    <author>
      <name>David Brochart</name>
      <uri>https://github.com/davidbrochart</uri>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;In this notebook, we will show how, in principle, it is possible to update the priors as new data becomes available.&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/howto/updating_priors.html"/>
    <summary>In this notebook, we will show how, in principle, it is possible to update the priors as new data becomes available.</summary>
    <category term="priors" label="priors"/>
    <published>2017-01-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://docs.pymc.io/projects/examples/en/latest/time_series/Euler-Maruyama_and_SDEs.html</id>
    <title>Inferring parameters of SDEs using a Euler-Maruyama scheme</title>
    <updated>2016-07-23T00:00:00+00:00</updated>
    <author>
      <name>@maedoc</name>
    </author>
    <content type="html">&lt;p class="ablog-post-excerpt"&gt;&lt;p&gt;&lt;em&gt;This notebook is derived from a presentation prepared for the Theoretical Neuroscience Group, Institute of Systems Neuroscience at Aix-Marseile University.&lt;/em&gt;&lt;/p&gt;
&lt;/p&gt;
</content>
    <link href="https://docs.pymc.io/projects/examples/en/latest/time_series/Euler-Maruyama_and_SDEs.html"/>
    <summary>This notebook is derived from a presentation prepared for the Theoretical Neuroscience Group, Institute of Systems Neuroscience at Aix-Marseille University.</summary>
    <category term="timeseries" label="time series"/>
    <published>2016-07-23T00:00:00+00:00</published>
  </entry>
</feed>
