## 5. Mesh refinement

FEA is a useful tool to approximate physical field variables. Obviously, the computation time is a decisive factor in making numerical simulations competitive with respect to trial-and-error experiments. Traditionally, the bottleneck of a transient FEA is the time required to compute the temperature field at each laser beam position. This gets even worse when considering nonlinear material properties, the high number of load steps, and the number of elements. In fact, every load step is divided into sub-steps to satisfy the transient time integration rules. Consequently, the factors mainly responsible for the prolonged simulation time are the numbers of elements and sub-steps; thus, in order to minimize the computational cost, these must be reduced as much as possible.

Suppose that the model has been built by applying a uniform mapped mesh to the entire domain. Moreover, the mesh density has been increased as much as possible, since the FEM predicts more accurate results when the number of elements is high. It is generally recommended to increase the element density in the neighborhood of the zone where the results need to be more accurate. A typical example is the determination of the stress concentration factor of a shouldered shaft subjected to an axial force: in such a case, the elements are concentrated in the vicinity of the fillet in order to obtain more reliable results.

However, the simulation time is proportional to the number of elements and is therefore expected to increase enormously. One possible solution to overcome this long computing time is the dynamic mesh refinement (DMR) approach, which involves an independent mesh refinement of multiple sub-domains. This strategy allows the meshes to be further refined, independently and in a hierarchical manner, to reach a higher resolution.

The mapping procedure is mainly composed of two parts: the common entities, which are shared by both meshes and whose properties can be transferred directly, and the uncommon entities, which require interpolation.

Regarding the uncommon entities, their properties cannot be transferred directly to the model because there is a spatial mismatch between the two meshes. The solution is to perform an interpolation. The element and node locations from OLD\_MESH are the input values, while the NEW\_MESH entities are the target. The interpolation scheme scans all the elements (nodes) in the NEW\_MESH, searching for the location that best matches the elements (nodes) belonging to the OLD\_MESH. When the best match is found, the properties can be interpolated between the two meshes. Different interpolation schemes are applied for temperature and material properties. The material number is assigned to the target from the nearest OLD\_MESH element, so no interpolation is needed. The temperatures, however, are assigned to the target nodes with a more elaborate scheme: not only the nearest node from the OLD\_MESH is chosen, but also a group of surrounding nodes that properly fit the target. The temperature is therefore assigned by means of an interpolation scheme that can be performed on a surface (2D interpolation) or on a volume (3D interpolation), depending on the precision requested in the analysis. Finally, the interpolated value can be transferred to NEW\_MESH and used as the initial condition.

Figure 14. A flowchart illustrating the mapping procedure between two subsequent spots.

Finite Element Method - Simulation, Numerical Analysis and Solution Techniques
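The two mapping rules above can be sketched in a few lines of Python. This is a hedged illustration, not the authors' actual MatLab® routine: the nearest-neighbour rule for the material number follows the text, while the inverse-distance weighting over the `k` nearest nodes stands in for the unspecified surrounding-node temperature scheme.

```python
import math

def nearest(p, points):
    """Index of the point in `points` closest to p (Euclidean distance)."""
    return min(range(len(points)), key=lambda i: math.dist(p, points[i]))

def map_material(new_centroids, old_centroids, old_materials):
    """Material number: copied from the nearest OLD_MESH element, no interpolation."""
    return [old_materials[nearest(c, old_centroids)] for c in new_centroids]

def map_temperature(new_nodes, old_nodes, old_temps, k=4, eps=1e-12):
    """Temperature: inverse-distance-weighted average of the k nearest OLD_MESH nodes."""
    result = []
    for p in new_nodes:
        ranked = sorted(range(len(old_nodes)),
                        key=lambda i: math.dist(p, old_nodes[i]))[:k]
        # A coincident node takes the old value directly (avoids division by zero).
        if math.dist(p, old_nodes[ranked[0]]) < eps:
            result.append(old_temps[ranked[0]])
            continue
        w = [1.0 / math.dist(p, old_nodes[i]) for i in ranked]
        result.append(sum(wi * old_temps[i] for wi, i in zip(w, ranked)) / sum(w))
    return result
```

The same functions work for 2D or 3D coordinates, mirroring the surface/volume distinction made above.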


We will adopt a dynamic mesh refinement so as to adapt the local mesh refinement to the position of the laser spot. This has the great advantage of solving the load steps with far fewer elements; hence, the simulation time benefits too. The dynamic part is a priority in this case, since the mesh must be iteratively updated at the end of each load step according to the laser spot position. The mesh can be rearranged in many ways; one option is to divide it into different levels of refinement. As shown in Figure 15, three levels of increasing refinement degree have been implemented, namely levels 1, 2, and 3.
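As an illustration of why this pays off, the sketch below assigns a refinement level to each cell of a unit plate from its distance to the laser spot and counts the resulting elements. The level radii and the quadtree-style four-way subdivision per level are assumptions for illustration, not values from the chapter.

```python
import math

def refinement_level(center, spot, r3=0.2, r2=0.6):
    """Level 3 (finest) close to the spot, level 2 in a band, level 1 elsewhere.
    The radii r3 and r2 are illustrative values, not taken from the chapter."""
    d = math.dist(center, spot)
    return 3 if d <= r3 else (2 if d <= r2 else 1)

def dmr_element_count(spot, n=10):
    """Element count on a 1x1 plate whose n x n level-1 cells are subdivided
    quadtree-style: a level-L cell contributes 4**(L - 1) elements."""
    total = 0
    for i in range(n):
        for j in range(n):
            center = ((i + 0.5) / n, (j + 0.5) / n)
            total += 4 ** (refinement_level(center, spot) - 1)
    return total
```

With the spot at the plate centre this yields far fewer elements than the n² · 16 = 1600 of a uniformly fine (level-3) mesh, which is exactly the saving DMR exploits at every load step.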

There are essentially two methods to build a mesh with different refinement levels, viz. the bottom-up and top-down approaches. The former builds the entire domain starting with the coarsest mesh (level 1) and subsequently digs out and removes elements in order to generate the level-2 mesh, and so on for all refinement levels. The latter differs inasmuch as the finest mesh is built first and the remaining meshes are generated accordingly. Since ANSYS® does not let the user modify a mesh once it is created, the second method was preferred over the first.

### 5.1. Bonded contact technique

In order for the discontinuous mesh levels to work properly, the two parts need to be connected to restore the continuity of the field variable. As can be seen from Figure 15, mesh levels 2 and 3 do not share the same nodes; hence, there is no mesh continuity between the two parts. Mesh compatibility was intentionally sacrificed in order to further reduce the number of elements outside the heat-affected zone (HAZ). There are two main techniques to ensure continuity between incompatible meshes: bonded contact and constraint equations. Since the latter introduces additional constraint equations, increasing the computational cost and the memory requirement, DMR based on bonded contact is preferred. The state of these contact elements never changes throughout the simulation, thereby introducing no additional sources of nonlinearity.

DMR requires an additional routine that permits data transfer from the previous mesh to the newly created and adapted one, as explained in Section 4.
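The overall procedure can be condensed into the following driver-loop sketch. The callables `solve_spot`, `remesh`, and `map_state` are hypothetical stand-ins for the ANSYS® solve, the remeshing step, and the MatLab® mapping routine; they are not real API calls.

```python
def run_dmr(spots, solve_spot, remesh, map_state):
    """One DMR pass: solve each laser spot on a mesh refined around it,
    then map the temperature field onto the mesh of the next spot so it
    can serve as the initial condition of the following load step."""
    mesh = remesh(spots[0])          # mesh refined around the first spot
    temps = None                     # no initial condition for the first step
    for i, spot in enumerate(spots):
        temps = solve_spot(mesh, spot, initial=temps)   # -> OLD_MESH data
        if i + 1 < len(spots):
            new_mesh = remesh(spots[i + 1])             # -> NEW_MESH
            temps = map_state(mesh, new_mesh, temps)    # interpolation step
            mesh = new_mesh
    return temps
```

Each iteration corresponds to one load step; `map_state` is where the sorting into common and uncommon nodes and the interpolation described above take place.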

Finite Element Thermal Analysis of Metal Parts Additively Manufactured via Selective Laser Melting

http://dx.doi.org/10.5772/intechopen.71876

Figure 15. An example of dynamic mesh refinement approach.

Figure 17. Time consumption and maximum temperature trend.

Figure 16. A flow chart for the mesh refinement procedure.


Figure 18. Comparison between the constant mesh and the dynamic mesh refinement.

Figure 16 shows the flowchart related to the DMR procedure; moreover, it helps to understand how the mapping procedure is matched to the DMR requirements.

At the beginning, the ANSYS® simulation of the first laser spot is solved, and the data, including the mesh and the nodal temperatures, are stored in the external file OLD\_MESH. Subsequently, the spot position is moved and the mesh is updated and saved as NEW\_MESH. At this point, ANSYS® pauses and an ad-hoc MatLab® routine takes OLD\_MESH and NEW\_MESH as inputs. The routine sorts the nodes into common and uncommon ones and returns this information to ANSYS®. The temperature field of the previous mesh can then be applied to the new mesh as an initial condition.

Figure 19. Micrograph of a single molten seam.

Figure 20. Thermal behavior of the molten pool.

Figure 21. A flow chart for the calibration procedure.

Figure 19 shows a single molten seam obtained by overlapping multiple layers. The seam was melted using the process parameters listed in Table 1. It was cut and analyzed in order to gather information about the width and depth of the molten pool. The measured values are:

• Width = 183 ± 38 μm
• Depth = 107 ± 38 μm

The deviation of the measurements is mainly related to the narrow geometry and tiny dimensions of the object. Its width is only 4-5 times larger than the size of the metal powder particles; therefore, the profile is not regular. It represents the minimum thickness that can be obtained with a single scan of the laser beam on the powder bed. Notice from Figure 19 that the molten seam undergoes re-melting with the application of successive layers, and therefore the depth is not a reliable parameter. Nevertheless, the object is helpful in order to evaluate the real width of the molten pool.

Due to the uncertainty related to the depth, only the molten pool width is taken into account, while the depth issue will be addressed in future work. At the beginning, a trial simulation is carried out to check how sensitive the temperature field is to the parameter changes. A directly measured thermal field is not available for this work; hence, the comparison is made with respect to results retrieved from the literature [20]. The enthalpy is modified to keep the thermal field under control. The new values for enthalpy are shown in Table 6. Only the last enthalpy value is modified, increasing the specific heat of the vapor phase by a factor of 10. This helps to decrease the maximum nodal temperature.

| Temperature | Enthalpy (J/m<sup>3</sup>), powder | Enthalpy (J/m<sup>3</sup>), solid |
|---|---|---|
| 5000 | 5.2448e+11 | 5.2448e+11 |

Table 6. Calibrated value for enthalpy.

The thermal behavior of the molten pool is shown in Figure 20, which gives an idea of how elevated the temperature of the zone irradiated by the laser becomes.

The molten pool width predicted by the simulation is narrower than the experimental one, so the conductivity needs to be increased. The calibration is, in a nutshell, an iterative algorithm that changes the conductivity by a trial factor until the numerical data reproduce the experimental measurement well. The algorithm involves MatLab® and ANSYS®, as shown in the diagram presented in Figure 21.

### 6.1. Calibration results

The correction factor for the conductivity is shown in Table 7. Notice that the correction is applied only to the values above the melting temperature. The results of the calibration are shown in Figure 22.
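The iterative adjustment can be sketched as follows. This is a toy version: the surrogate `width(k)` model and the proportional update rule are assumptions standing in for the actual ANSYS® run and the authors' trial factor, chosen only to show the structure of the loop.

```python
import math

def calibrate_conductivity(simulated_width, k0, target_width, tol=1e-3, max_iter=50):
    """Scale the conductivity k until the simulated melt-pool width matches
    the measured one. Assumes width grows monotonically with k."""
    k = k0
    for _ in range(max_iter):
        w = simulated_width(k)
        if abs(w - target_width) / target_width < tol:
            break
        k *= target_width / w   # pool too narrow -> raise k, and vice versa
    return k, simulated_width(k)

# Toy surrogate for the FE model: width in micrometres as a function of k.
surrogate = lambda k: 60.0 * math.sqrt(k)

# Calibrate against the measured width of 183 micrometres.
k_cal, w_cal = calibrate_conductivity(surrogate, k0=5.0, target_width=183.0)
```

In the real procedure each call to `simulated_width` is a full thermal solve, so keeping the number of trial iterations small matters.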


As a result, the simulation time dropped from 15 minutes/spot to 71 seconds/spot, reducing the calculation time by 92%. As already mentioned, the main parameter that greatly affects the simulation time is the number of sub-steps. Its optimal value was found through a convergence analysis based on the plot shown in Figure 17. It can be noted that an increment in the time step size Δt has a much more pronounced effect on the solution time than on the maximum nodal temperature (a measure of the solution accuracy). A reasonable trade-off between accuracy and simulation time is a time step size of 3 μs, which allows for a 1% error in the maximum temperature estimation and a computation time reduction of 80%. It is worth noting that the overall time reduction, due to the mesh refinement and the larger time step together, is equal to 98.5%.
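The quoted savings can be cross-checked with a few lines, assuming (as a simplification) that the mesh-refinement and time-step savings combine multiplicatively; the result lands close to the quoted 98.5%, the small gap suggesting the individual figures are themselves rounded.

```python
uniform = 15 * 60                 # uniform-mesh solve time per spot [s]
dmr = 71                          # DMR solve time per spot [s]
mesh_saving = 1 - dmr / uniform   # fraction saved by DMR alone (~92%)
step_saving = 0.80                # saving from the time-step convergence study
# Remaining fraction of the original time after applying both reductions.
overall = 1 - (dmr / uniform) * (1 - step_saving)
```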

Both the uniform mesh model and the one implementing DMR were tested by applying the laser beam along a straight line, and the results are shown in Figure 18. It can be seen that the temperature field exhibits only small, negligible variations between the two. The DMR model thus represents the physical phenomena well and offers a good trade-off between result accuracy and computation time.
