is higher than the lattice temperature (1263 K) and there is a delay of the same order as the pulse duration between the two temperatures. This is consistent with the theory, which states that the electrons transfer heat to the lattice during the pulse in the nanosecond regime, whereas this is not the case in the ultrashort regime (especially <500 fs). Here, we have an intermediate regime where the lattice is heated up after the pulse by the electron–lattice heat transfer. The melting point is almost reached. The temperature might be underestimated because of the approximations and working hypotheses mentioned earlier. Nevertheless, this model tends to confirm the implication of thermal effects in the formation of LIPSS in the picosecond regime. This was a single-shot investigation. The damage threshold can be lowered by an accumulation/incubation effect. It could be interesting to study the thermal implication and the effect of the number of pulses on the delamination of the films. Working under the single-shot ablation threshold with multiple pulses allows a better control of the formation of LIPSS. This is of interest for industrial applications (superhydrophobic surfaces, for instance). However, the pulse duration is about nine orders of magnitude shorter than the time between two pulses (40 ps against 0.1 s at a 10 Hz repetition rate). The time step could be modulated so that it is short during laser irradiation and longer during relaxation, but it would still be difficult to investigate the incubation effect for 1000 pulses at low fluences.

In conclusion, this example shows that COMSOL was able to provide valuable qualitative information about a laser machining mechanism. This model could be improved by lifting the simplifying hypotheses one by one (convection/radiation losses [22], interfaces between solids [25], ionization [26], thermo-dependent properties [27], etc.). Also, with more data (material properties, experimental conditions), it would be possible to obtain reliable quantitative information at the expense of more computational power. It should be noted that this method is limited to a model without phase change; other numerical approaches, such as the Molecular Dynamics mentioned earlier, would be more appropriate otherwise. Overall, Section 2 showed how to model situations in order to better understand the complex physical phenomena involved in a process. The next section presents a more applied approach where direct answers regarding the parameters are desired for process prediction and optimization: the Design Of Experiment.

**3. The Design Of Experiment methodology**

**3.1. Introduction**

The Design Of Experiment (DOE) methodology is based on statistical analysis and provides a semiempirical model, in a way similar to a limited (Taylor) expansion. Indeed, a response (laser hole drilling depth, for instance) can be described by a polynomial function of several parameters in a restricted domain through multilinear regression. As opposed to the Change One Separate factor at a Time (COST) method, DOE saves time and resources (human, machine, costs, samples, etc.) and is more relevant when parameters have a coupled effect on the response. It is a powerful tool to rule out noninfluential parameters (screening), describe, test the robustness of, or even optimize a process with a minimum of experimental trials. It is widely used in the literature and does not require the use of a supercomputer.
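As a minimal illustration of this idea, the sketch below fits a first-order polynomial with an interaction term to the four corner points of a 2² design by least squares. The factor roles and response values are invented for the example, not taken from any study cited in this chapter.

```python
import numpy as np

# Coded levels (-1/+1) of a 2^2 full factorial design: factors x1, x2
x1 = np.array([-1, +1, -1, +1])
x2 = np.array([-1, -1, +1, +1])
# Hypothetical measured response (e.g. a drilling depth in micrometers)
y = np.array([52.0, 88.0, 61.0, 124.0])

# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2, fitted by multilinear regression
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12"], np.round(b, 2))))
# A non-negligible b12 reveals a coupled effect of the two factors,
# which the change-one-factor-at-a-time (COST) approach cannot capture.
```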

**3.2. The use of Design Of Experiment in the literature**

The following paragraphs aim at explaining the key steps and objectives of the Design Of Experiment methodology and cannot be taken as a full course on the subject. A literature review reveals that many authors have used Taguchi tables or the more conventional Response Surface Methodology (RSM). It is important to note that several strategies exist and that they must be chosen according to the objective of the researcher and the means/costs involved. The choice of strategy is strongly dependent on the researcher's experience and knowledge, especially when it comes to defining the control parameters, identifying the noise parameters (the ones the process is subjected to), the response to characterize/measure, and the range of variation of the parameters (the domain of study). Once these steps have been completed, whatever the employed strategy, it needs to be carried out with great care and in a sequential manner.

#### *3.2.1. The screening*

The parameters are coded between –1 (minimum value) and +1 (maximum value) so that the scale of variation becomes the same for all parameters and it is easier to establish a hierarchy of their degree of influence. In RSM, the first step is the analysis of the center of the domain: all parameters are set at 0 (average value) in their coded form and the experiment is replicated three times. If the mean response value is two orders of magnitude larger than the standard error, the study of the response can proceed; it means that the response obtained is not due to perturbation/noise. Otherwise, there might be a problem with the measurement tools, or, more likely, the domain of study is not large enough. It must be noted that the number of replications of the center point depends on the number of parameters and levels and needs to be increased when a quadratic model is envisaged.
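The coding step itself is a simple affine transformation. The helper below is a generic sketch; the example bounds are illustrative and do not correspond to a specific study in this chapter.

```python
def to_coded(value, vmin, vmax):
    """Map a natural factor value onto the coded interval [-1, +1]."""
    center = 0.5 * (vmax + vmin)
    half_range = 0.5 * (vmax - vmin)
    return (value - center) / half_range

# Example: a pulse energy studied between 50 and 150 uJ (illustrative bounds)
print(to_coded(50.0, 50.0, 150.0))   # -1.0 (low level)
print(to_coded(100.0, 50.0, 150.0))  #  0.0 (center of the domain)
print(to_coded(150.0, 50.0, 150.0))  # +1.0 (high level)
```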

The following step, the screening, is crucial and can already provide significant information: determining which parameters are influential and establishing a hierarchy among them, postulating a linear model without interactions, providing a (qualitatively) predictive model, and determining whether the domain was established properly. In some cases, it is already noticeable that interactions between parameters are influential and that nonlinearity is to be expected. Remember, if the objective was only to identify which parameters are the most significant among many, then there is no need to carry on: a screening postulates a linear model without interactions.

A proper way to conduct the screening is to use Hadamard matrices. These are square matrices (H2, H4, H8, H16, etc.) and correspond to a system of equations. With three parameters, for instance, H4 is recommended, as three equations are required to solve for the three unknown parameters and one degree of freedom is available (for the average value). With four parameters, H8 is recommended because H5 does not exist. The four remaining trials are not lost, since Hadamard matrices are contained in the full factorial or composite designs, which are used to obtain, respectively, a linear model with interactions and a quadratic model.
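To make the link between an H8 table and the estimation of main effects concrete, the sketch below builds the eight-run design from scipy's Hadamard matrix and regresses a hypothetical response onto the four factor columns. The column assignment and the response values are illustrative assumptions; with four factors in eight runs, the remaining columns are aliased with interactions.

```python
import numpy as np
from scipy.linalg import hadamard

H8 = hadamard(8)              # 8 x 8 matrix of +1/-1, first column all +1
X = H8[:, :5]                 # intercept column + 4 factor columns (A, B, C, D)
y = np.array([120., 185., 95., 150., 210., 260., 175., 230.])  # hypothetical depths

# Least-squares fit of the linear screening model y = b0 + bA*A + ... + bD*D
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["mean", "A", "B", "C", "D"], coeffs):
    print(f"{name:>4s}: {b:+.1f}")
# Ranking the |coefficients| gives the hierarchy of factor influence.
```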

#### *3.2.2. Prediction and optimization*

For prediction or optimization objectives, a full factorial design (linear model + interactions) or a composite design (quadratic or cubic model) is carried out to obtain a relevant predictive model that will allow setting the parameter values according to the desired response. When a nonlinear model is involved, an optimization step is possible. It is even more useful when several responses are studied and a compromise must be found. A function called the "desirability function" is used to optimize multiple responses simultaneously according to the target of the scientist. Each response *Y*<sub>i</sub> is transformed into a dimensionless factor *D*<sub>i</sub> bounded by 0 ≤ *D*<sub>i</sub> ≤ 1, where a higher *D*<sub>i</sub> indicates a more desirable response value. Three cases arise. The "nominal-the-best" case is when the scientist wants to reach a certain response value. The "larger-the-better" case is when the scientist wants to maximize a response value (laser welding depth of penetration, for instance, at the highest possible speed), and the "smaller-the-better" case is when the scientist wants to minimize a response (the HAZ, for instance). The desirability function is usually treated with statistical software according to the scientist's objective (minimize, maximize, targeted value). Ref. [30] describes a good example of prediction and optimization of laser butt welding. The bead width, depth of penetration, and tensile strength of the joint are analyzed. The desirability function is combined with the use of Taguchi tables in order to optimize the parameters (beam power, travel speed, and focal position) and improve the quality of the weld. In that study, the quality of the weld is optimal when the bead width is minimized and the tensile strength and depth of penetration are maximized.
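A minimal sketch of the individual desirability transforms and of their combination into a composite (geometric-mean) desirability is given below. The bounds, target, and response values are purely illustrative, and statistical packages use refinements (weights, two-sided targets) not shown here.

```python
import numpy as np

def d_larger(y, low, high):
    """Larger-the-better: 0 below `low`, 1 above `high`, linear in between."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def d_smaller(y, low, high):
    """Smaller-the-better: 1 below `low`, 0 above `high`."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

def d_nominal(y, target, tol):
    """Nominal-the-best: 1 at the target, decreasing linearly to 0 at +/- tol."""
    return np.clip(1.0 - abs(y - target) / tol, 0.0, 1.0)

# Illustrative multi-response case: maximize depth, minimize bead width
d1 = d_larger(1.4, low=0.5, high=1.6)     # depth of penetration (mm)
d2 = d_smaller(1.2, low=0.8, high=2.0)    # bead width (mm)
composite = (d1 * d2) ** 0.5              # geometric mean of the individual Di
print(round(composite, 3))
```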

**Figure 6.** Factor effects on composite desirability values for helium gas [30].

Then, the desirability function is set accordingly. For this configuration, the laser beam power was found to be the most significant parameter. The interaction effects between the beam power and the travel speed, and between the travel speed and the focal position, were also found to be significant when helium shielding gas was used (**Figure 6**). When other gases were used, the interaction effect between beam power and travel speed was found to be the most influential, followed by the beam power, the travel speed, and the focal position.


In another example, the authors in Ref. [31] wanted to dice sapphire substrates used in the fabrication of microelectronic chips with a Nd:YAG nanosecond laser (second harmonic, 532 nm). They aimed at a depth equivalent to 1/3–1/4 of the wafer thickness (108–148 μm in this case) to be able to subsequently break it and separate the dies. In the meantime, they had to choose the laser parameters carefully in order to minimize the groove width, for economic and throughput reasons. The studied parameters were the pulse energy (*A*), the scanning velocity (*B*), and the number of passes (*C*). It was found that *A*, *B*, and *C* were influential on the depth, as well as the interaction terms *AB*, *AC*, *BC* and the quadratic terms *A*<sup>2</sup> and *B*<sup>2</sup>. The pulse energy and the number of passes had slight and nearly equal effects, while the scanning velocity was the dominant parameter, especially when the laser is scanned several times with high pulse energy. For the width, the dominant parameter was the pulse energy.


**Figure 7.** Desirability function analysis and results obtained with optimal parameters. (a) Surface view of laser scribed sapphire, (b) wall section view [31].

A combination of low scanning velocity (0.5–0.59 mm/s), low pulse energy (150 μJ/pulse), and multiple passes (3) was found, based on the desirability analysis (**Figure 7**), to obtain deep (148 μm) and narrow (19 μm) grooves (**Figure 7a** and **b**).

In this example [31] and others [12, 32], Box-Behnken Designs (BBD) were used. They are an alternative to composite designs because they contain fewer experiments. However, there might be a loss of accuracy in the final model: they do not contain the screening, and a nonlinear model is postulated from the start. In the studies mentioned above, it is suggested that parametric studies were performed before the BBD, so the authors already knew the model would be nonlinear. Overall, the risk of using a BBD must be assessed carefully.
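For three factors, a Box-Behnken design can be written down directly: each pair of factors takes the four (±1, ±1) combinations while the remaining factor stays at its center level, plus replicated center points. The sketch below only counts and lists the coded runs; it does not reproduce the designs of Refs. [12, 31, 32].

```python
from itertools import combinations, product
import numpy as np

def box_behnken(k, n_center=3):
    """Edge-midpoint Box-Behnken design for k factors, in coded units."""
    runs = []
    for i, j in combinations(range(k), 2):       # each pair of factors
        for a, b in product((-1, 1), repeat=2):  # 4 corner combinations
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * k] * n_center                 # replicated center points
    return np.array(runs)

design = box_behnken(3)
print(design.shape)   # (15, 3): 12 edge runs + 3 center points
print(design)
```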

A problem of laser welding of AISI 304 stainless steel is presented in Ref. [12]. The depth of penetration of the weld and the bead width characterize the geometry and are related to the mechanical quality of the weld. These two responses are studied according to the beam power, the welding speed, and the beam angle using a BBD. It was found that the beam power was the most significant parameter, followed by the welding speed. This is consistent with Ref. [30] mentioned earlier. With the optimal combination of parameters (1250 W, 750 mm/min, and a beam angle of 90°), a depth of penetration of 1.5 mm and a bead width of 1.3 mm were achieved. The results were compared to a finite element simulation and were discussed in Section 2.2.

In Ref. [32], a BBD was used to predict the geometry (depth and width) of CO2 laser-machined micro-channels in glass. Three different models using Artificial Neural Networks (ANN) were implemented through LabVIEW®. It was found that one of them gave a better prediction precision than the DOE, while the two others had a greater average percentage error. Overall, the ANN strategy was deemed useful for prediction and laser parameter optimization. It was found that the laser beam power had a positive effect on the channel width and depth. The pulse repetition rate had a negative effect on both responses. The traverse speed had a negligible effect on the channel width and tended to decrease the channel depth when increased.

#### *3.2.3. Prediction and robustness*

The study of robustness is another objective and is particularly valuable to industry for better process control and repeatability. It can be performed through the analysis of the error (for each experiment) as a response while carrying out RSM. The Taguchi method is a particular case of the Design Of Experiment and is relevant for robustness studies. Taguchi tables are mostly fractional factorial designs. In the Taguchi methodology, the signal-to-noise (S/N) ratio is analyzed according to one of the following equations:

$$S/N_S = -10 \log\left[ \frac{1}{n} \sum_{i=1}^{n} y_i^2 \right] \tag{13}$$


$$S/N_L = -10 \log\left[ \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^2} \right] \tag{14}$$

$$S/N_N = -10 \log\left[ \frac{1}{n} \sum_{i=1}^{n} (y_i - m)^2 \right] \tag{15}$$

where *n* is the number of trials for each experiment, *y*<sub>i</sub> is the response of the *i*th trial, and *m* is the target the scientist wants to reach. This analysis allows the control and reduction of the variability of the response. The "smaller-the-better" characteristic (Eq. (13)) is used when the scientist wants to minimize a response, as opposed to the "larger-the-better" characteristic (Eq. (14)). The "nominal-the-best" characteristic (Eq. (15)) is about reaching a specified target. It is not the same as the desirability function (Section 3.2.2), since this is an analysis of the signal-to-noise ratio, and this is what makes the Taguchi method more relevant for robustness studies.
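Eqs. (13)–(15) translate directly into code. The sketch below uses invented replicate values simply to show how the three characteristics are computed.

```python
import numpy as np

def sn_smaller(y):                 # Eq. (13): smaller-the-better
    y = np.asarray(y, float)
    return -10 * np.log10(np.mean(y ** 2))

def sn_larger(y):                  # Eq. (14): larger-the-better
    y = np.asarray(y, float)
    return -10 * np.log10(np.mean(1.0 / y ** 2))

def sn_nominal(y, m):              # Eq. (15): nominal-the-best, target m
    y = np.asarray(y, float)
    return -10 * np.log10(np.mean((y - m) ** 2))

replicates = [148.0, 151.0, 146.5]          # hypothetical depths (um) for one trial
print(round(sn_larger(replicates), 2))      # to be maximized over the Taguchi array
print(round(sn_nominal(replicates, 150.0), 2))
```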


The methodology is a bit different from RSM [30]: (1) identify the control factors and the interactions, (2) identify the levels of each factor, (3) select the appropriate orthogonal array (OA), (4) assign the factors/interactions to the columns of the OA, (5) conduct the experiments, (6) analyze the data and determine the optimal levels, and (7) perform the confirmation experiments. The first two steps are already tricky and imply that the researcher must have prerequisite knowledge or have performed earlier experiments for guidance. For steps 3 and 4, the researcher has to choose the orthogonal array and how to fill its columns, based on steps 1 and 2 and on the "interaction graphs." The Taguchi tables are indeed limited to a certain type of model with a restricted number of parameters and levels, and they have to be filled according to the "interaction graphs," which show the main factors and the interactions that can be studied with each Taguchi table. These graphs contain other criteria as well: they show how difficult it is to change the level of a parameter [33]. For processes where the temperature is a key factor, for instance, it takes more time to cool down an oven than to heat it, so an experiment can have an effect on the following one. Therefore, the sequence of experiments must be chosen carefully. The Taguchi method has been successfully used in numerous studies [34, 35].

After each batch of experiments, the Analysis Of Variance (ANOVA), the analysis of the multilinear regression coefficients, and the study of the residuals must be carried out. These are statistical means to validate the models. Once the final model is found to be statistically adequate, new experiments (not included in the original design) must be executed and compared to the model for its final validation. Another objective of the work cited as Ref. [32] was to compare the DOE and ANN methods and show the interest of developing new modeling techniques with the same robustness as DOE while being more easily or successfully applied. This should be kept in mind as the way to go for future simulation method development. The overall Design Of Experiment methodology is summarized in **Figure 8**.
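The regression-quality indicators mentioned here are straightforward to compute once the model has been fitted. The sketch below uses synthetic data and ordinary least squares only; it is not the statistical-software output discussed in the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(15), rng.uniform(-1, 1, (15, 3))])  # intercept + 3 coded factors
beta_true = np.array([150.0, 40.0, -25.0, 5.0])
y = X @ beta_true + rng.normal(0, 2.0, 15)        # synthetic responses

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
n, p = X.shape                                    # runs, model terms (incl. intercept)
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p)
print(f"R2 = {r2:.4f}, adjusted R2 = {r2_adj:.4f}")
# The residuals should also be inspected (normality, absence of trends), and
# confirmation runs compared with the model predictions, before trusting the model.
```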

**Figure 8.** Response surface methodology diagram. Yellow designates the analysis and decisions depending on the researcher's experience, squares designate the experimental work, and the red item designates a turning point where a lot of information is gathered and the next steps are decided.

#### **3.3. Application: optimization of laser parameters for dicing silicon carbide substrate**

#### *3.3.1. RSM study*

The following study is extracted from Ref. [36]. In the microelectronics industry, the laser is used to replace the conventional method of blade dicing for hard and chemically inert substrates. The laser is used to scribe and then break the substrate in order to separate the electronic chips. A Diode Pumped Solid State (DPSS) Nd:YAG laser operating at a wavelength of 355 nm (third harmonic), a repetition rate of 40 kHz, a pulse duration around 90 ns, and a spot size around 5 μm is used. The laser source is embedded in an enclosed station, as it is destined for production and operates in a clean room environment. The substrate of interest is a three-inch-diameter silicon carbide wafer, 360 μm thick. The scribe and break method requires a scribing depth of one fourth to one third of the wafer thickness. However, the breaking step becomes difficult for hard substrates such as SiC, and the breaking efficiency also depends on the leverage (die size). The groove therefore needs to go deeper, and the objective is to optimize this depth according to the laser parameters: pulse energy (*A*), number of passes (*B*), defocus (*C*), and scanning speed (*D*). As seen in Section 3.2, other parameters are influential on a laser process, but these are the main parameters that an operator can customize on this station. The parameters and their values are reported in **Table 2**.

**Table 2.** Parameter values, identifiers, and study domain.


Following the procedure illustrated in **Figure 8**, the experiment performed at the center of the domain is replicated seven times (in italics in **Table 3**) and reveals that the response can be studied, because the mean response value (157.9 μm) is two orders of magnitude greater than the standard deviation (1.97 μm). In other words, the response obtained is due to the parameters and not to noise factors.
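This check can be reproduced directly from the seven center-point depths listed in italics in **Table 3**:

```python
import numpy as np

center_depths = np.array([158.8, 158.6, 158.4, 159.0, 156.2, 154.4, 160.2])  # um, Table 3
mean = center_depths.mean()
std = center_depths.std(ddof=1)          # sample standard deviation
print(f"mean = {mean:.1f} um, std = {std:.2f} um, ratio = {mean / std:.0f}")
# mean ~ 157.9 um and std ~ 1.97 um: roughly two orders of magnitude apart,
# so the measured response is not dominated by noise and the study can proceed.
```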



| Energy (*A*) | Passes (*B*) | Defocus (*C*) | Speed (*D*) | Depth (*Y*) (μm) |
|---|---|---|---|---|
| … | … | … | … | … |
| 0 | +α | 0 | 0 | 168.6 |
| 0 | 0 | –α | 0 | 156.4 |
| 0 | 0 | +α | 0 | 143.2 |
| 0 | 0 | 0 | –α | 201.6 |
| 0 | 0 | 0 | +α | 123.2 |
| *0* | *0* | *0* | *0* | *158.8* |
| *0* | *0* | *0* | *0* | *158.6* |
| *0* | *0* | *0* | *0* | *158.4* |
| *0* | *0* | *0* | *0* | *159.0* |
| *0* | *0* | *0* | *0* | *156.2* |
| *0* | *0* | *0* | *0* | *154.4* |
| *0* | *0* | *0* | *0* | *160.2* |

**Table 3.** Central composite face-centered in a cube design (α=1) and measured response values.

**Table 3** shows the design. The screening is set up as a Hadamard H8 (because four factors are involved and H5 does not exist) and is shown in normal font in **Table 3**. The ANOVA results (**Table 4**) are used to determine the relevancy of a linear model. We focus on the regression coefficients and on the *p*-values associated with the lack of fit and with each parameter. The *p*-value (Fisher test) must be lower than 0.05 for an item to be declared significant. Thus, in this case, the lack of fit is not significant. However, the multilinear regression coefficients are all low, and the difference between the mean values at the central point for the model and for the experiment is greater than the standard deviation. A linear model is obviously not adequate, and a linear model taking into account the interactions between parameters will not be adequate either. Indeed, the curvature is deemed significant by the ANOVA, which means that a nonlinear model is required.

| Source | Sum of squares | Degrees of freedom | Mean square | *F* value | *p*-value |
|---|---|---|---|---|---|
| Model | 40252.44 | 6 | 6708.74 | 2024.44 | <0.0001 |
| *A* | 28752.02 | 1 | 28752.02 | 8676.25 | <0.0001 |
| *B* | 2298.42 | 1 | 2298.42 | 693.57 | <0.0001 |
| *C* | 220.5 | 1 | 220.5 | 66.54 | <0.0001 |
| *D* | 7176.02 | 1 | 7176.02 | 2165.45 | <0.0001 |
| *AD* | 1740.5 | 1 | 1740.5 | 525.22 | <0.0001 |
| *BD* | 64.98 | 1 | 64.98 | 19.61 | 0.0031 |
| Curvature | 2660.03 | 1 | 2660.03 | 802.69 | <0.0001 |
| Residual | 23.20 | 7 | 3.31 | | |
| Lack of fit | 0.02 | 1 | 0.02 | 5.178E-3 | 0.9450 |
| Pure error | 23.18 | 6 | 3.86 | | |
| Cor total | 42935.67 | 14 | | | |

**Table 4.** ANOVA table for the screening (Hadamard design).

A central composite face-centered in a cube design was chosen (**Table 3**) to model a quadratic, or possibly a reduced cubic, model. In a central composite design, an additional level ±α is introduced. Its value depends on the number of parameters and levels and is chosen to cope with the statistical degradation of the accuracy of the model. Indeed, nonlinear models require the use of three levels, which would make 3<sup>4</sup> = 81 experiments to execute for four parameters. The use of this third value α decreases the number of experiments compared to a full design while keeping a certain statistical accuracy (although not as good as that of a full design).
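The run-count argument can be checked with a short sketch that assembles a face-centered central composite design (α = 1) for four factors: a 2⁴ factorial core, 2×4 axial points, and replicated center points. The seven center replicates match the study above; the rest is a generic construction, not a reproduction of the exact run order of Ref. [36].

```python
from itertools import product
import numpy as np

def ccf_design(k, n_center=7, alpha=1.0):
    """Face-centered central composite design in coded units."""
    factorial = [list(run) for run in product((-1, 1), repeat=k)]   # 2^k corner runs
    axial = []
    for i in range(k):                                              # 2k axial runs
        for s in (-alpha, +alpha):
            run = [0.0] * k
            run[i] = s
            axial.append(run)
    center = [[0.0] * k] * n_center                                 # replicated center points
    return np.array(factorial + axial + center)

design = ccf_design(4)
print(design.shape)      # (31, 4): 16 + 8 + 7 runs, versus 81 for a full 3-level design
```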

**Table 5.** ANOVA table for the cubic model.

Finally, the ANOVA results (**Table 5**) reveal that the model is significant and that the lack of fit is not (*p*-value close to 0.05). All the coefficients shown in the ANOVA table are significant; the others are not, hence the term "reduced" cubic model. The multilinear regression coefficients are close to 1 (*R*<sup>2</sup> = 0.9986, adjusted *R*<sup>2</sup> = 0.997, predicted *R*<sup>2</sup> = 0.9805). The reduced cubic model is thus significant, and the final mathematical equation (Eq. (16)) in terms of coded factors, obtained by the least-squares method, is:

$$\begin{aligned} Y ={} & 156.89 + 52A + 22.63B - 3.87C - 39.2D + 8.78AB - 5.75AC - 13AD - 3.6BD + 3.68CD \\ & - 18.05A^2 - 9.15B^2 - 5.85C^2 + 6.75D^2 - 3.73ABD + 9.17A^2D + 9.05AB^2 \end{aligned} \tag{16}$$

The model is determined to be statistically adequate, and the study of the residuals did not reveal major inconsistencies. It remains necessary to check whether the model is scientifically adequate and whether it can predict response values for additional experiments.
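Eq. (16) can be evaluated numerically. The function below implements the reduced cubic model in coded factors; it returns 156.89 μm at the center of the domain, and the second call merely illustrates a trend (high energy, many passes, low speed give a deeper scribe), not a value reported in Ref. [36].

```python
def depth_model(A, B, C, D):
    """Reduced cubic model of Eq. (16); all factors in coded units (-1..+1)."""
    return (156.89 + 52*A + 22.63*B - 3.87*C - 39.2*D
            + 8.78*A*B - 5.75*A*C - 13*A*D - 3.6*B*D + 3.68*C*D
            - 18.05*A**2 - 9.15*B**2 - 5.85*C**2 + 6.75*D**2
            - 3.73*A*B*D + 9.17*A**2*D + 9.05*A*B**2)

print(depth_model(0, 0, 0, 0))    # 156.89 um: the fitted value at the domain center
print(depth_model(1, 1, 0, -1))   # high energy, many passes, low speed -> deeper scribe
```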

#### *3.3.2. Experimental validation*

From the response equation (Eq. (16)), it is clear that the pulse energy (*A*) and the scanning speed (*D*) are the most influential parameters on the scribing depth. Heating a target via laser irradiation depends on the amount of energy brought to the target and on the interaction duration. This is why they are the most dominant factors and why the interaction term *AD* appears in the equation. The square terms *A*<sup>2</sup> and *D*<sup>2</sup> are consistent with theory as well. Indeed, the depth increases with increasing energy input until saturation; this is the classical logarithmic trend observed in the literature [37]. The scribing depth also decreases with increasing speed until saturation. Interaction terms such as *AC* (evaluation of the fluence, Eqs. (17) and (19)), *AD* (effect of fluence over time), and *CD* (laser/material interaction over time) appear naturally in the response equation because they are linked by Eqs. (17)–(19):

$$F = \frac{E}{\pi\,\omega(z)^2} = \frac{A}{\pi\,\omega(C)^2} \tag{17}$$

$$O = 1 - \frac{v}{\omega(z) \times f} = 1 - \frac{D}{\omega(C) \times f} \tag{18}$$

$$\omega(z) = \omega_0 \sqrt{1 + \left(\frac{M^2 \lambda z}{\pi \omega_0^2}\right)^2} = \omega_0 \sqrt{1 + \left(\frac{M^2 \lambda C}{\pi \omega_0^2}\right)^2} \tag{19}$$

where *F* is the fluence, *E* is the pulse energy, *z* is the defocus (m), *O* is the overlap ratio, *f* is the repetition rate (Hz), *v* is the scanning speed (mm/s), λ is the wavelength (nm), *M*<sup>2</sup> is the beam quality factor, ω(*z*) is the beam radius at the defocus *z*, and ω<sub>0</sub> (cm) is the spot radius at the focal point. The effect of the number of passes *B* varied depending on the delay between two passes and thus on the scanning speed. The effect saturated around three passes for pulse energies below 85 μJ and tended to saturate around five passes above this energy threshold. The effect of the number of scans saturates as the aspect ratio (depth/width) increases because there is less margin for debris ejection. Debris disturbs the beam propagation at the bottom of the trench, and successive passes become less effective. This inconvenience can be balanced depending on the pulse energy and the scanning speed, because they govern the efficiency of the laser-material interaction. This is why interaction terms between the number of passes (*B*), the pulse energy (*A*), and the scanning speed (*D*) appear in the response equation.
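Eqs. (17)–(19) can be evaluated with a few lines of code. The numerical values below (pulse energy, beam quality, waist radius, defocus, speed) are only placeholders consistent with the orders of magnitude quoted in the text, not the exact parameters of Ref. [36]; SI units are used throughout for consistency.

```python
import numpy as np

def spot_radius(z, w0, wavelength, m2):
    """Eq. (19): beam radius at a defocus z from the waist w0 (all lengths in m)."""
    return w0 * np.sqrt(1 + (m2 * wavelength * z / (np.pi * w0**2)) ** 2)

def fluence(E, z, w0, wavelength, m2):
    """Eq. (17): fluence in J/m^2 for a pulse energy E (J)."""
    return E / (np.pi * spot_radius(z, w0, wavelength, m2) ** 2)

def overlap(v, z, f, w0, wavelength, m2):
    """Eq. (18): pulse-to-pulse overlap ratio for a scan speed v (m/s) and rate f (Hz)."""
    return 1 - v / (spot_radius(z, w0, wavelength, m2) * f)

# Placeholder values of the same order of magnitude as the DPSS source described above
w0, lam, m2 = 2.5e-6, 355e-9, 1.3        # waist radius (m), wavelength (m), beam quality
E, z, v, f = 100e-6, 50e-6, 0.1, 40e3    # pulse energy (J), defocus (m), speed (m/s), rate (Hz)

print(f"w(z)    = {spot_radius(z, w0, lam, m2) * 1e6:.1f} um")
print(f"fluence = {fluence(E, z, w0, lam, m2) / 1e4:.0f} J/cm^2")
print(f"overlap = {overlap(v, z, f, w0, lam, m2):.2f}")
```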

Additional experiments were performed and compared to the predicted values given by the model. A reasonable correspondence was found between predicted and experimental values as observed in **Figure 9**.

**Figure 9.** Graph showing the variation of depth versus the energy at different scanning speeds. The frequency was fixed at 40 kHz.

This model was successfully used to determine optimal parameters for an objective of 180 μm deep scribe in silicon carbide substrate.
