**5.2 Intervals of confidence**

Unlike the measurement of fluence, values of damage densities are rarely reported with any notion of accuracy. Yet it is not possible to compare measurements within one's own laboratory, or with other facilities, without error bars. These error bars are especially needed in the low-fluence range, where rare damage events are observed. The uncertainty does not bear on the measurement of this particular sample, since that sample is now destroyed. The statistical figure of interest is the uncertainty that makes it possible to compare two similar but physically distinct samples.

To determine the statistical error on the measurement of D(F), one assumption is made on the nature of the distribution of damaging defects: the defects are supposed to be randomly distributed over the area. A consequence of this hypothesis is that there is no interaction between defects. This supposition is common in laser damage research, and it probably holds for optical components tested with millimeter-sized beams. It is nevertheless possible to assume instead that defects cooperate in damage, and to draw useful conclusions from that assumption, for the case of optical multilayers or of bulk damage in KDP crystals.

The assumption of randomly distributed and independent defects applies to any set of defects: for example, the set of defects damaging at a given fluence is supposed to be distributed in that way over the sample, and likewise for the set of defects damaging in a given fluence range. A damage test is thus an experimental sampling of a distribution of defects that characterizes a very large area, for example the area of all the optical components and samples produced with the same recipes. The error is calculated as the possible discrepancy between the density obtained on the sample and the "true" density that would be measured if the whole production were damage tested.

The uncertainty depends on the number of damage sites generated within each fluence group. By considering a Poisson distribution, it is rather straightforward to determine the interval of possible measurements when the true characteristic is known. Let us define:

Σ: the surface area of the total production

N: the number of potential damage sites on the total production

S: the surface area of the tested sample

k: the number of detected damage sites on S

ν = N·S/Σ: the number of expected damage sites on S

k is not equal to ν, but rather follows a Poisson law when Σ is very large compared to S, the area of the tested surface. Thus the probability of finding a given value k is:

$$P\_{\nu}(k) = \frac{\nu^k e^{-\nu}}{k!} \tag{24}$$
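As a quick numerical illustration of Eq. (24), the sketch below evaluates the Poisson probability for small counts; the value ν = 0.5 is an arbitrary example, not taken from the text:

```python
import math

def poisson_probability(k, nu):
    """Eq. (24): probability of observing k damage sites when nu are expected."""
    return nu**k * math.exp(-nu) / math.factorial(k)

# Arbitrary illustration: nu = 0.5 expected sites on the tested area
for k in range(3):
    print(k, round(poisson_probability(k, 0.5), 4))
# 0 0.6065
# 1 0.3033
# 2 0.0758
```

Even with fewer than one expected site, the probability of detecting one or more damage sites remains appreciable, which is why the inversion discussed next matters for rare events.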

This formula (24) is only the first step. The problem must then be inverted, since we want to determine ν from the measurement k. The main difficulty lies in the fact that we are interested in small values of k (or ν). When a law of probability is sampled with a high enough number of events, one can use the law of large numbers to express the error neatly in terms of the erf function. In the case of rare events, especially in the low-fluence range, a special derivation is needed (the full demonstration is given in reference (Lamaignère et al., 2007)).

Let k be the known number of detected damage sites. The interval of values of ν, for the confidence level to be better than 1−ε, can be written:

$$\nu \in \left[ \nu\_{\text{min}}; \nu\_{\text{max}} \right]$$

with

$$\int\_{\nu=0}^{\nu\_{\min}} \frac{\nu^k e^{-\nu}}{k!}\,d\nu = \frac{\varepsilon}{2} \text{ when } k \neq 0 \qquad ; \qquad \nu\_{\min} = 0 \text{ when } k = 0 \tag{25}$$

and

$$\int\_{\nu=\nu\_{\max}}^{+\infty} \frac{\nu^k e^{-\nu}}{k!}\,d\nu = \frac{\varepsilon}{2} \tag{26}$$

This means that we calculate a probability 1−ε for ν to lie in the interval between $\nu\_{\min}$ and $\nu\_{\max}$. In this section, we now specifically use the confidence limits that correspond to 2 standard deviations (2σ) of a Gaussian variable: ε = 0.0455, or ε/2 = 0.02275.
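The value ε = 0.0455 can be checked directly: for a Gaussian variable, the probability of falling outside ±2σ is erfc(2/√2). A one-line verification using Python's standard library:

```python
import math

# Probability for a Gaussian variable to fall outside +/- 2 standard deviations
eps = math.erfc(2 / math.sqrt(2))
print(round(eps, 4))  # 0.0455
```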

The confidence limits are very far apart when the measured number of damage sites is low. Table 2 gives a numerical evaluation of these limits for low values of k. At k = 0, when no damage site is detected, we can only say that the average number of sites is lower than 3.7, with an error rate of 2.3%. This number of sites must then be translated into a density.
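The limits of Table 2 can be reproduced numerically. The integrals in Eqs. (25)-(26) equal Poisson cumulative probabilities, so $\nu\_{\min}$ and $\nu\_{\max}$ can be found by simple bisection. The sketch below (function and variable names are ours, not from the text) recovers the k = 0 upper bound quoted above:

```python
import math

EPS = 0.0455  # two-sigma confidence level: 1 - eps

def poisson_cdf(k, nu):
    """P(X <= k) for X ~ Poisson(nu); equals the upper-tail integral of Eq. (26)."""
    return sum(nu**i * math.exp(-nu) / math.factorial(i) for i in range(k + 1))

def bisect(f, lo, hi, tol=1e-10):
    """Root of a monotone f on [lo, hi] by bisection (f changes sign on [lo, hi])."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def confidence_interval(k, eps=EPS):
    """Return (nu_min, nu_max) solving Eqs. (25)-(26) for a measured count k."""
    # Eq. (26): upper-tail integral = eps/2, i.e. poisson_cdf(k, nu_max) = eps/2.
    nu_max = bisect(lambda nu: poisson_cdf(k, nu) - eps / 2, 1e-12, 50.0)
    if k == 0:
        return 0.0, nu_max  # Eq. (25): nu_min = 0 when k = 0
    # Eq. (25): lower integral = eps/2, i.e. poisson_cdf(k, nu_min) = 1 - eps/2.
    nu_min = bisect(lambda nu: poisson_cdf(k, nu) - (1 - eps / 2), 1e-12, 50.0)
    return nu_min, nu_max

print(confidence_interval(0))  # upper bound close to -ln(eps/2), about 3.8
```

The density bounds then follow by dividing ν by the tested area S, as in the text.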

One should note that these error bars account only for the statistical variations due to the limited number of data points (connected to the size of the sample). Potential errors due to inaccurate damage detection are not taken into account.


Table 2. Interval of confidence of ν for a given measured value of k.
