#### *4.2.1 Binomial distribution*

The binomial distribution is a probability distribution that summarizes the probability that a variable will take one of two independent values under a given set of parameters or assumptions [11].


**Table 1.**

*Smoking habit of maxillofacial surgery theater staff.*

*Probability and Sampling in Dentistry DOI: http://dx.doi.org/10.5772/intechopen.97705*

Consider a dichotomous variable, where the outcome for each individual is one of two types (A or B).
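The binomial probability for such a dichotomous variable can be computed directly. The sketch below uses only the standard library; the smoking figures are illustrative assumptions, not data from **Table 1**.

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k type-A outcomes in n independent
    trials, each with probability p of being type A."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Illustrative example: probability of exactly 3 smokers in a sample
# of 10 theater staff, assuming 30% of the population smokes.
print(round(binomial_pmf(3, 10, 0.3), 4))  # 0.2668
```

Summing the probabilities over all possible values of k (0 through n) gives 1, as it must for any probability distribution.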


#### *4.2.2 Poisson distribution*

The Poisson distribution is another discrete distribution, i.e. it applies to a discrete variable. But unlike the binomial, there is no strict upper limit to the possible values of the variable. The variable is the count of independent events that occur randomly in a fixed interval of time or space [12, 13].


4. The number of stillbirths occurring in a hospital per month.
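The stillbirth example can be made concrete with the Poisson probability formula. The mean rate of 2 per month below is a hypothetical figure chosen for illustration.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of observing exactly k events in an interval
    when the mean event count per interval is lam."""
    return (lam**k) * exp(-lam) / factorial(k)

# Illustrative example: if a hospital averages 2 stillbirths per month,
# the probability of observing exactly 0, 1, or 2 in a given month is:
for k in range(3):
    print(k, round(poisson_pmf(k, 2), 4))
```

Note that, unlike the binomial, the count k has no fixed upper limit; the probabilities simply become vanishingly small for large k.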

#### **5. Normal distribution**

Continuous probability functions are also known as probability density functions [14]. We know that we have a continuous distribution if the variable can assume an infinite number of values between any two values. Continuous variables are often measurements on a scale, such as height, weight, and temperature. Unlike discrete probability distributions, where each value can have a non-zero probability, specific values in continuous distributions have zero probability. For example, the likelihood of measuring a temperature that is exactly 28 degrees is zero [15, 16].
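The zero-probability point can be demonstrated numerically: for a continuous variable, only intervals carry probability. The sketch below assumes a hypothetical Normal temperature distribution (mean 25, SD 3) purely for illustration, using Python's built-in `statistics.NormalDist`.

```python
from statistics import NormalDist

# Hypothetical temperature distribution (illustrative parameters).
temp = NormalDist(mu=25, sigma=3)

# A single exact value has probability zero; only intervals have
# non-zero probability under a continuous density.
p_exact = temp.cdf(28) - temp.cdf(28)        # P(X = 28 exactly)
p_interval = temp.cdf(28.5) - temp.cdf(27.5)  # P(27.5 < X < 28.5)

print(p_exact)                 # 0.0
print(round(p_interval, 4))    # roughly 0.08
```

This is why continuous probabilities are always stated for ranges (e.g. "between 27.5 and 28.5 degrees") rather than exact values.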

Just as there are different types of discrete distributions for different kinds of discrete data, there are different distributions for continuous data. Each probability distribution has parameters that determine the shape of the distribution. Most distributions have between 1 and 3 parameters [17]. Specifying these parameters sets out the structure of the distribution and all its probabilities entirely. These parameters reflect the fundamental characteristics of the distribution, such as the central tendency and the variability [18].

The most common is the normal distribution, which is often also referred to as the Gaussian distribution [19]. The normal distribution has two main features: it is symmetric and bell-shaped, and it is completely described by two parameters, its mean and its standard deviation.


The normal distribution is the most important distribution in statistics. One reason is that many continuous variables, such as height, seem to have this distribution [20]. But the main reason is what is known as the central limit theorem. This theorem states that if a random sample is taken from any distribution, then the distribution of the sample mean *x̄* will be approximately Normal. The approximation becomes better as *n* gets larger [21]. The implication of this result is that inferences, from sample to population, can be based on the Normal distribution [22].

A Normal distribution may be summarized by its mean μ and variance σ². It is often necessary to be able to find the areas under specified parts, particularly the tails, of the Normal distribution curve.

This can be done by referring to published tables, which are given in most statistical texts. In order to use these tables, it is necessary to standardize the variable *x.* This can be done by calculating a standardized Normal deviate *z* by the formula (**Figure 1**) [23]:

$$Z = (X - \mu) / \sigma \quad \text{(standardized Normal deviate, SND)} \tag{1}$$

(where Z is the value on the standard normal distribution, X is the value on the original distribution, μ is the mean of the original distribution, and σ is the standard deviation of the original distribution.)

The standard score is the number of standard deviations above or below the mean at which a given observation falls. For example, a standard score of 1.5 means that the observation is 1.5 standard deviations above the average, while a negative score means the observation lies below the average. The mean itself corresponds to a Z-score of 0.
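Formula (1) and the tail areas described above can be combined in a few lines. The sketch below assumes an illustrative scale with mean 100 and SD 15 (these values are not from the chapter); Python's `statistics.NormalDist` plays the role of the published tables.

```python
from statistics import NormalDist

def z_score(x, mu, sigma):
    """Standardized Normal deviate: how many standard deviations
    x lies above (positive) or below (negative) the mean."""
    return (x - mu) / sigma

# Illustrative example: scale with mean 100 and standard deviation 15.
z = z_score(122.5, 100, 15)
print(z)  # 1.5 -> 1.5 standard deviations above the mean

# The upper-tail area beyond z, as would be read from Normal tables:
upper_tail = 1 - NormalDist().cdf(z)
print(round(upper_tail, 4))  # 0.0668
```

Standardizing in this way is what lets a single published table serve every Normal distribution, whatever its mean and standard deviation.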

#### **6. Sampling**

*A population is any collection of individuals in which we may be interested*, e.g. all people in Saudi Arabia, all females in Hail Region, all diabetic children in Hail City under 12 years of age. Usually the population is too large for us to examine every individual, so we take a *sample* from the population. If the sample is **representative** of the population, we can then make *inferences* about the population from the sample.

For example, in a study of the incidence of schistosomiasis in a particular region, the population would be all adults living in the region and might consist of many thousands of individuals. The sample might be a few hundred people from the total population, and we wish to be able to generalize from the sample to the population [24]. The advantage of studying just a sample is a saving of labour and costs. The disadvantage of a sample is that precision is lost by not observing the complete population. The sample mean is unlikely to equal exactly the population mean; that is, the sample estimate will have some error [25].

It is useful to distinguish two kinds of error: sampling errors and non-sampling errors.
