A Computational Theory of Mindfulness Based Cognitive Therapy from the “Bayesian Brain” Perspective


FIGURE 1 | (A) Schematic summary of the “Bayesian brain” notion that the brain contains an internal model consisting of beliefs about the states of the environment. These give rise to predictions about sensory inputs. The discrepancy between the actual and the predicted sensory inputs (prediction error) serves to update the model. Adapted from Figure 3 in Haker et al. (82), with permission. (B) An illustration of the concept of “beliefs” as probability distributions. Here, we consider Gaussian probability distributions (or, more precisely, densities) that are characterized by an expectation (or mean; represented by the vertical dashed line) and precision (inverse variance; symbolized by the horizontal double arrow). The x-axis (red) indicates the entity that the belief represents (e.g., the temperature of a particular object). The y-axis (violet) represents, simply speaking, the probability that is assigned to each possible instantiation of this entity (in the above example: the probability that object temperature has a particular value).
FIGURE 2 | (A) Graphical summary of Bayes’ theorem (see Eq. 1) for the case of Gaussian probability distributions. It illustrates that the posterior represents a compromise between prior and likelihood, depending on their relative precision. PE is the abbreviation for “prediction error.” To revisit the example from Figure 1B, let us consider perception of temperature. The actually perceived temperature (posterior belief) is a compromise between the expected or predicted temperature (prior) and the sensory input (likelihood). The posterior belief can also be understood as updating the prior belief, where the magnitude of the belief update depends on the prediction error (PE) and the relative precisions (inverse variance) of the prior and the likelihood. In this example, the precision of sensory input (likelihood) is higher, therefore the posterior is closer to the likelihood. This panel is adapted from Figure 2 in Haker et al. (82), with permission. (B) When the precision of the prior belief is higher than the precision of the data (likelihood), a small belief update results, i.e., the posterior stays close to the prior. (C) When the precision of the data (likelihood)
is higher than the precision of the prior belief, a large belief update results, i.e., the posterior moves more strongly towards the data.
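The precision-weighted compromise described in this caption can be sketched numerically. The following toy snippet is our own illustration, not part of the original figures; the function name, variable names, and numerical values are arbitrary assumptions chosen to mirror panels B and C.

```python
# Bayesian update for a one-dimensional Gaussian prior and likelihood.
# Posterior precision is the sum of the two precisions; the posterior mean
# is the precision-weighted average, equivalently a precision-weighted
# prediction-error update of the prior mean.

def gaussian_update(mu_prior, pi_prior, mu_obs, pi_obs):
    """Return posterior mean and precision for conjugate Gaussian beliefs."""
    pi_post = pi_prior + pi_obs
    pe = mu_obs - mu_prior                   # prediction error (PE)
    learning_rate = pi_obs / pi_post         # relative precision of the data
    mu_post = mu_prior + learning_rate * pe  # belief update
    return mu_post, pi_post

# Panel B: precise prior, noisy data -> small update (posterior near prior)
print(gaussian_update(20.0, 10.0, 30.0, 1.0))   # mean stays close to 20
# Panel C: vague prior, precise data -> large update (posterior near data)
print(gaussian_update(20.0, 1.0, 30.0, 10.0))   # mean moves close to 30
```

The ratio `pi_obs / pi_post` acts as a learning rate: it determines how far the posterior moves from the prior toward the data, which is exactly the compromise the figure depicts.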
FIGURE 3 | A general scheme of “Bayesian brain” theories of cognition. Here, the overall goal is to minimize prediction errors. Prediction errors represent, simply speaking, the difference between actual sensory input (or sensation, green arrow) and a prediction about the input which originates from a prior belief (red arrow). Minimization of the prediction error can either be achieved by updating the brain’s model (perceptual inference, e.g., according to predictive coding; middle part of figure) or by choosing actions such that beliefs are fulfilled, and the predicted sensory inputs occur (active inference; lower part of figure). In addition to inference and action, hierarchical Bayesian models also allow for forecasting future states and offer an opportunity to integrate metacognition as a top-level that monitors levels of prediction errors (upper part of figure). Reprinted from Petzschner et al. (25),
with permission from Elsevier.
FIGURE 4 | A hypothetical anatomical circuit for homeostatic and allostatic regulation (87). The lower part represents a reflex arc in which homeostatic beliefs about bodily states (represented in hypothalamus, brainstem, spinal cord) are defended (protected) against deviations, by eliciting actions that depend on precision-weighted prediction errors. The upper part represents a cortical hierarchy for perceptual inference that is capable of modulating the homeostatic beliefs via descending connections and can implement anticipatory (allostatic) control. A top metacognitive layer (tentatively assigned to medial prefrontal cortex) holds beliefs about performance levels (i.e., levels of prediction errors at the top of the hierarchy). Colors have the same meaning throughout this figure, as indicated by the legend. It is important to keep in mind that in Bayesian treatments of inference-control loops, the direction in which predictions and prediction errors are signaled reverses when switching from the afferent branch (perception) to the efferent branch (action). For example, in the afferent branch, prediction errors are signaled upwards in the hierarchy, whereas in the efferent branch, they are used by descending projections to inform actions. post., posterior; ACC, anterior cingulate cortex; mPFC, medial prefrontal cortex. Reproduced from Figure 3
in Manjaly et al. (87), with permission from BMJ Publishing Group Ltd.
FIGURE 5 | Schematic summary of proposed neurophysiological implementations of hierarchical Bayesian inference in the cortex, specifically, predictive coding. In this scheme, neurons that compute prediction errors (red plates, E) are situated in supragranular layers and signal these errors to neurons in granular layers (grey plates) at the next higher level. By contrast, neurons that compute predictions (green plates, P) are located in infragranular layers and signal these predictions to neurons in both infra- and supragranular layers at the next lower level. This figure is reproduced, with
permission, from Heilbron and Chait (95).
FIGURE 6 | (A) Graphical summary of predictive coding that illustrates the exchange of prediction errors and predictions (prior beliefs) across levels of a cortical hierarchy. In this schematic of predictive coding, perception corresponds to Bayesian belief updates (BU) across the hierarchy. This panel represents the case before an individual adopts the being mode, with perception strongly shaped by priors (as illustrated on the right). (B) This panel illustrates a hypothetical mechanism for instantiating the being mode. Specifically, attentional modulation of forward connections in the cortical hierarchy is proposed to induce higher sensory precision and thus enhanced precision-weighting of prediction errors (PE), leading to rapid belief updates that are closely coupled to the sensory inputs. This corresponds to a perceptual style that lacks “bias” (as usually imposed by the brain’s internal model; compare panel A) and is closely synchronized to sensations. See main text for
details.
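The hypothesized effect of attention on precision-weighting (panel B vs. panel A) can be illustrated with a toy sequential update over a stream of identical sensory inputs. This sketch is ours, under arbitrary assumed precision values; it is not the model used in the paper.

```python
# Sequential Gaussian belief updating over a stream of sensory inputs.
# Higher sensory precision (pi_obs) yields larger precision-weighted
# prediction errors, so the belief tracks the inputs closely, as
# hypothesized for the "being mode" (panel B). With low sensory
# precision, perception stays dominated by the prior (panel A).

def track(inputs, mu0, pi_prior, pi_obs):
    """Run repeated precision-weighted belief updates; return final mean."""
    mu = mu0
    for x in inputs:
        mu += (pi_obs / (pi_prior + pi_obs)) * (x - mu)  # belief update (BU)
    return mu

inputs = [5.0, 5.0, 5.0, 5.0]
print(track(inputs, 0.0, 10.0, 1.0))   # prior-dominated: stays near 0
print(track(inputs, 0.0, 1.0, 10.0))   # sensory-dominated: approaches 5
```

The only difference between the two calls is the relative precision of prior and sensory input, which is the single knob the proposed attentional mechanism is suggested to turn.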
FIGURE 7 | This figure illustrates a hypothetical mechanism for inducing decentering, within a hierarchical cortical network for perceptual inference plus an additional metacognitive layer at the top. Here, the notion is that during the being mode (with attentional modulation of forward connections, higher precision-weighting of prediction errors, PE, and rapidly ongoing belief updates, BU; compare Figure 6), high-level beliefs about one’s agency and level of control at the metacognitive level are altered and become less precise. See main text for details.
FIGURE 8 | This figure illustrates a hypothetical mechanism for the reduction of reactivity. Here, a reduced precision of beliefs about the state of the world (e.g., bodily states) decreases the tendency to react to any discrepancy between the sensory inputs expected under this belief and the actual sensory inputs (prediction error). This is because the vigor of reflex-like actions that are emitted to “defend” beliefs depends on precision-weighted prediction error (see Stephan et al. (85) for mathematical details). pwPE, precision-weighted prediction errors; BU, belief updates; BD, belief defending. Compare Figure 4 for a (hypothetical) anatomical circuit and see main text for details.
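The dependence of reflex-like “belief defending” on precision-weighted prediction errors can be given a minimal toy formalization. This sketch follows the caption's verbal description only, not the exact equations in Stephan et al. (85); the function, the proportional-gain form, and the numbers are our own assumptions.

```python
# Toy model: the "vigor" of a corrective action is proportional to the
# precision-weighted prediction error (pwPE) of a homeostatic belief.
# Reducing the belief's precision, as hypothesized here, reduces the
# drive to react to the very same discrepancy.

def action_vigor(expected, observed, belief_precision, gain=1.0):
    """Return the vigor of a belief-defending action for a given pwPE."""
    pe = observed - expected           # prediction error
    pwpe = belief_precision * pe       # precision-weighted PE
    return gain * abs(pwpe)            # vigor of belief defending (BD)

# Same 1.5-degree discrepancy in, e.g., perceived body temperature:
print(action_vigor(37.0, 38.5, belief_precision=4.0))  # precise belief: strong reaction
print(action_vigor(37.0, 38.5, belief_precision=0.5))  # imprecise belief: weak reaction
```

Both calls face an identical prediction error; only the precision assigned to the homeostatic belief differs, which is the quantity the figure proposes is lowered to reduce reactivity.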
FIGURE 9 | Summary of the hypotheses presented in this paper and the proposed experimental tests. This figure relates key concepts from MBCT (first column) to a proposed Bayesian brain perspective on its mechanisms (second column), a brief summary of this hypothesis (third column), and possible experimental tests (fourth column).