**2. Methods**

Multiple variables create greater challenges. The approach is to assign each variable a distribution with shape and location parameters, and then to assign distributions (priors) that define those shape and location values. Each of these distributions can be solved with MCMC methods to produce a predictive distribution, which can then be sampled and multiplied by (or added to) the results of all other variables solved and sampled in the same way. The assignment of distributions need not stop with the priors on shape and location: those parameters can themselves be assigned distributions, and so can theirs (and so on). The model parameters thus form a series of levels resembling a hierarchy, in which the priors at one level depend on the priors at the levels above it, and information is integrated across levels simultaneously [12], thereby separating the observed variability into parts attributable to random differences and to true differences [13–15]. Each investigation will involve a different set of variables, each with its own associated variables, priors, and priors on those priors.
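A minimal two-level normal model can sketch this layered structure. All numbers below are illustrative assumptions, not values from any cited study: a global mean governs the group-level means, which in turn govern the observations, and each group's estimate is then pulled toward the grand mean by the standard normal-normal pooling weight.

```python
import random
import statistics

random.seed(1)

# Hypothetical two-level hierarchy (illustrative values only):
# a global mean governs the group-level means, which govern the data.
mu_global = 50.0      # hyperprior mean
tau_between = 5.0     # spread of group means around the global mean
sigma_within = 10.0   # spread of observations within each group
n_per_group = 5

group_means = [random.gauss(mu_global, tau_between) for _ in range(4)]
data = {g: [random.gauss(m, sigma_within) for _ in range(n_per_group)]
        for g, m in enumerate(group_means)}

# Partial pooling: each group's sample mean is shrunk toward the grand
# mean, weighted by the between-group vs. within-group variance
# (the normal-normal conjugate result).
grand = statistics.mean(x for xs in data.values() for x in xs)
w = tau_between**2 / (tau_between**2 + sigma_within**2 / n_per_group)
shrunk = {g: w * statistics.mean(xs) + (1 - w) * grand
          for g, xs in data.items()}
```

Because the pooling weight lies strictly between 0 and 1, every shrunken estimate falls between its group's sample mean and the grand mean, illustrating the shrinkage of group differences that hierarchical models produce.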

Allenby et al. [16] stated that hierarchical Bayesian models are really "the combination of two things: i) a model written in hierarchical form that is ii) estimated using Bayesian methods." Shaddick et al. [17] consider there to be at least three levels: (1) the observation or measurement level, (2) the underlying process level, and (3) the parameter level. Kruschke and Vanpaemel [18] noted that hierarchical Bayesian data analysis involves "describing data by meaningful mathematical models and allocating credibility to parameter values that are consistent with the data and with prior knowledge." Using Bayesian methods, hierarchical Bayesian models can yield estimates of the true effects at each level of the hierarchy [14, 19]. By considering the results across all levels, hierarchical Bayesian models can be used to rigorously integrate information with a complex underlying structure [14], resulting in a tendency to shrink differences when multiple variables are incorporated [20]. An important aspect of the hierarchical approach is that the model is usually a flexible version of a base model [21], and if needed, the models allow extra levels to be added depending on the hyperparameters [22].

Applying Bayesian prediction and weighting within a unified Bayesian regression model can account for complex design features under the framework of multilevel regression and poststratification [23–25]. Weighting in a hierarchical model can be used as an extension of linear or logistic regression models. Methods for hierarchical functional data typically require that all curves be observed over, or standardized to fall in, the same region [26–28]. Whereas classical weighting usually relies on many user-defined regression choices that are difficult to codify [29], the Bayesian approach allows prior information to be incorporated and the distributions to be adjusted automatically [30, 31].
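A minimal sketch of the poststratification step may help. The cell estimates and population shares below are invented for illustration (they do not come from any cited study): model-based estimates for demographic cells are combined using known population weights to give an overall estimate.

```python
# Hypothetical cell-level model estimates (e.g., predicted proportions)
# and known population shares for the same cells -- illustrative only.
cell_estimates = {"young_urban": 0.62, "young_rural": 0.55,
                  "old_urban": 0.48, "old_rural": 0.41}
population_share = {"young_urban": 0.30, "young_rural": 0.20,
                    "old_urban": 0.25, "old_rural": 0.25}

# Poststratification: weight each cell's estimate by its population share.
overall = sum(cell_estimates[c] * population_share[c]
              for c in cell_estimates)
# overall is approximately 0.5185 for these illustrative inputs
```

In a full multilevel regression and poststratification analysis, the cell estimates would come from a hierarchical regression fitted by MCMC rather than being fixed numbers; the weighting step itself is unchanged.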

MCMC allows the user to approximate aspects of posterior distributions that cannot be directly calculated (e.g., random samples from the posterior, posterior means, etc.). There are several current applications of this approach. Draper [32] considered Bayesian hierarchical Poisson regression models, Wang et al. [33] created a hierarchical Bayesian model for predicting monthly residential per capita electricity consumption at the state level across the United States, and Maddala et al. [34] studied the income elasticity of energy demand in the United States by applying a dynamic linear regression model under a Bayesian framework. Roman et al. [35] and Neil and Fenton [36] used hierarchical Bayesian models to evaluate treatments for COVID-19.
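A random-walk Metropolis sampler makes this concrete. In the sketch below, the data, prior, and tuning values are all assumptions chosen for illustration; the target is the posterior mean of a normal location parameter with known observation noise, a case simple enough that the MCMC estimate can be checked against the conjugate closed form.

```python
import math
import random

random.seed(0)

# Illustrative data: observations assumed drawn from N(theta, sigma^2)
# with sigma known; prior on theta is N(0, prior_sd^2).
data = [2.1, 1.9, 2.4, 2.2, 1.8, 2.0]
sigma, prior_sd = 0.5, 10.0

def log_post(theta):
    # Log posterior up to a constant: log-likelihood + log-prior.
    ll = -sum((x - theta) ** 2 for x in data) / (2 * sigma ** 2)
    lp = -theta ** 2 / (2 * prior_sd ** 2)
    return ll + lp

# Random-walk Metropolis: propose a jitter of the current state and
# accept with probability min(1, posterior ratio).
theta, draws = 0.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0, 0.5)
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal
    draws.append(theta)

# Discard burn-in, then average the retained draws.
post_mean = sum(draws[5000:]) / len(draws[5000:])
```

In this conjugate setting the posterior mean is also available analytically, so the sampled estimate can be verified to agree closely; in realistic hierarchical models no such closed form exists, which is exactly why MCMC is used.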

There are limits to hierarchical Bayesian models. The first is that the underlying model assumptions may be wrong [14, 31]. Thais et al. [37] noted that "ill behaved likelihoods" at the lower levels of the hierarchy may create either excessive concentration about a mean or noninformative results. Rouder et al. [38] note that although hierarchical linear models are suitable in several domains, they rarely make good models of psychological process. Heller and Ghahramani [39] note that the algorithm provides no guide to choosing the "correct" number of clusters.

Nevertheless, Shaddick et al. [17] note that Bayesian hierarchical models are an extremely useful and flexible framework in which to model complex relationships and dependencies in data, and Kruschke and Vanpaemel [18] suggest that they provide flexibility in designing models appropriate for describing the data at hand, along with a complete representation of parameter uncertainty (i.e., the posterior distribution) that can be directly interpreted. Some examples of how to create a hierarchical Bayesian model are helpful to demonstrate the process.
