Finite Element Method - Simulation, Numerical Analysis and Solution Techniques
Finite Element Thermal Analysis of Metal Parts Additively Manufactured via Selective Laser Melting
http://dx.doi.org/10.5772/intechopen.71876

laser heat flux is imposed. Since the emitted radiation flux makes the analysis highly nonlinear, its effect is not considered here. To overcome this problem, an empirical relationship has been proposed [9, 18], which combines the effects of radiation and convection into a lumped heat transfer coefficient.

ii. Lateral surfaces: since the powder conductivity is very low, the lateral surfaces can be considered adiabatic; hence, the imposed heat flux is equal to zero (q(x, t) = 0).

iii. Bottom surface: in SLM machines, the base plate is heated to between 80 °C and 130 °C, depending on the machine model. The bottom nodes are constrained either with an imposed temperature or with convection conditions. In this work, the bottom surface is constrained with a convection boundary condition. As a consequence, a convection coefficient must be chosen in order to reproduce the convective exchange conditions into the base plate.

The BCs applied to the numerical model are summarized in Figure 11.

Not only boundary conditions but also initial conditions (ICs) are required to solve the numerical model. Initial conditions can be imposed by setting a starting temperature for all the nodes. These temperatures are used in the transient solution as the first-step temperatures; hence, at a time equal to zero:

T(x, t = 0) = T0(x)    (14)

Moreover, since a transient solution occurs at each cycle, the initial conditions must also be set at the beginning of each load step. It follows that the initial conditions applied to load step n are the nodal temperatures obtained from the solution at step n-1.

4.3. Mapping procedure

Elements undergoing phase change must be continuously updated with different material properties to simulate the melting and cooling process. When the average temperature of an element exceeds the melting point, the element is given different material properties that allow the molten pool behavior to be tracked. ANSYS® cannot easily change material properties while the transient solution is running, not even using restart options. Therefore, the analysis must be solved before the material properties are modified. During post-processing, the temperatures of each element are analyzed and the material properties are changed accordingly.

The iterative algorithm keeps the analysis simple, even though it requires the element properties to be deleted at the end of each iteration, so that they must be continuously saved and resumed at the beginning of the next iteration. Moreover, the mesh and, hence, the element spatial locations are not constant throughout the iterations because of the dynamic mesh refinement (see Section 5). In fact, the FE environment is rebuilt iteratively with different element densities depending on the laser location, as shown in Figure 13.

Figure 13. Comparison between OLD_MESH and NEW_MESH.

OLD_MESH and NEW_MESH refer to listed mesh entities. Each row of the list contains the element and node tracking numbers, their spatial coordinates, and the related properties.
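The exact file layout exported by ANSYS® is not given here, so the following sketch assumes a simple whitespace-separated listing with one entity per row (tracking number, three coordinates, material number, temperature); the column order and the `MeshEntity` name are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class MeshEntity:
    # One row of the OLD_MESH / NEW_MESH listing: tracking number,
    # spatial coordinates, and the properties attached to the entity.
    tag: int
    x: float
    y: float
    z: float
    material: int
    temperature: float

def read_mesh_listing(lines):
    """Parse a whitespace-separated listing into MeshEntity records."""
    entities = []
    for line in lines:
        parts = line.split()
        if len(parts) != 6:
            continue  # skip headers, captions, or blank lines
        tag, x, y, z, mat, temp = parts
        entities.append(MeshEntity(int(tag), float(x), float(y), float(z),
                                   int(mat), float(temp)))
    return entities

rows = ["1  0.0  0.0  0.0  2  1923.5",
        "2  0.1  0.0  0.0  1  450.0"]
mesh = read_mesh_listing(rows)
```

Keeping each row self-describing makes it straightforward to compare two listings by spatial location, as the mapping procedure below requires.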

The procedure that correctly assigns the temperatures and material properties between two different mesh environments is called the mapping procedure and is carried out in sequence by MatLab® and ANSYS®. The mapping algorithm is a tool that saves the nodal temperatures from the previous load step, evaluates and assigns the material type with respect to the element average temperature, and, finally, restores the data in the subsequent iteration as initial conditions. To avoid misunderstanding, it is worth noting the difference between nodal and element properties: temperatures are values assigned to nodes, while the material number is assigned to elements. Owing to the ANSYS® programming language, different thermal behaviors can be assigned to the same element type using material numbers. This is the reason why, in this work, the expression material properties is used with the same meaning as thermal properties.
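The material-assignment step above can be sketched as a small state rule: an element whose average temperature exceeds the melting point is tagged as molten, and an element that has melted once keeps dense-solid properties while cooling. The melting point and the material numbers below are illustrative assumptions, not values from this work.

```python
# Hypothetical sketch of the post-processing rule that picks the material
# number for the next iteration. T_MELT and the material numbers are
# placeholder values.
T_MELT = 1660.0  # assumed melting point, degrees C

MAT_POWDER, MAT_MOLTEN, MAT_SOLID = 1, 2, 3

def element_average(nodal_temps):
    """Average temperature of an element from its nodal temperatures."""
    return sum(nodal_temps) / len(nodal_temps)

def assign_material(avg_temp, current_mat):
    """Return the material number to use in the next iteration."""
    if avg_temp >= T_MELT:
        return MAT_MOLTEN          # element belongs to the molten pool
    if current_mat in (MAT_MOLTEN, MAT_SOLID):
        return MAT_SOLID           # once melted, it cools as dense solid
    return MAT_POWDER              # never melted: keep powder properties
```

The rule is deliberately one-way: a powder element can only become molten, and a molten element can only become solid, which mirrors the melting-and-cooling history described above.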

The flowchart presented in Figure 14 helps to understand the mapping algorithm.

At the beginning, the elements and nodes are listed by ANSYS® in a file together with the related material number and temperature. This occurs in the post-processing step of cycle n (NEW_MESH). The file is imported into MatLab® and compared with the mesh file saved beforehand from cycle n-1 (OLD_MESH). Referring to Figure 13, elements and nodes are compared with respect to their spatial location and divided into two groups: common and uncommon entities. The dashed squares in Figure 13 highlight the difference between the common and uncommon mesh.
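The grouping step can be sketched with plain set operations on location keys: nodes are matched by their coordinates and elements by their centroids, with the coordinates rounded to a tolerance so that floating-point noise does not break the match. The rounding tolerance is an assumption for illustration.

```python
# Sketch of the common/uncommon split between OLD_MESH and NEW_MESH.
# Entities are identified by rounded spatial location, not by tracking
# number, because the two meshes number their entities differently.
TOL_DIGITS = 6  # assumed coordinate tolerance (decimal digits)

def coord_key(x, y, z, digits=TOL_DIGITS):
    return (round(x, digits), round(y, digits), round(z, digits))

def split_common_uncommon(old_coords, new_coords):
    """Return (common, uncommon) location keys of the NEW_MESH entities."""
    old_keys = {coord_key(*c) for c in old_coords}
    new_keys = {coord_key(*c) for c in new_coords}
    common = new_keys & old_keys     # same location in both meshes
    uncommon = new_keys - old_keys   # created by the mesh refinement
    return common, uncommon

common, uncommon = split_common_uncommon(
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],   # OLD_MESH locations
    [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)])   # NEW_MESH locations
```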

Data coming from the previous analysis (step n-1) are assigned to the next one (step n) for both groups of entities. Since the common elements and nodes share the same spatial location, their properties are simply transferred from the OLD_MESH to the NEW_MESH. The mapping algorithm takes the (element and node) spatial coordinates from the OLD_MESH and searches for the corresponding location in the NEW_MESH (note that the reference point for element localization is the centroid). The mapping based on spatial coordinates is needed because the common entities share the same location but not the same tracking number, owing to the different meshes. Consequently, the temperatures and material numbers transferred to the NEW_MESH can simply be assigned as the initial condition with respect to the entity numbers.
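For the common entities, the transfer above reduces to a dictionary lookup: OLD_MESH values are indexed by location key and re-assigned under the NEW_MESH tracking numbers. The data layout (dicts keyed by tag) is an assumption for this sketch.

```python
# Sketch of the direct transfer for common entities. Each entity is stored
# as tag -> (coords, value); the lookup goes through the location key, since
# tracking numbers differ between the two meshes.
def coord_key(*c):
    return tuple(round(v, 6) for v in c)  # assumed coordinate tolerance

def transfer_common(old_entities, new_entities, key_fn):
    """Return {new_tag: value} for the locations shared by both meshes."""
    by_location = {key_fn(*coords): value
                   for coords, value in old_entities.values()}
    initial_conditions = {}
    for new_tag, (coords, _default) in new_entities.items():
        key = key_fn(*coords)
        if key in by_location:           # common entity: copy directly
            initial_conditions[new_tag] = by_location[key]
    return initial_conditions

old = {10: ((0.0, 0.0, 0.0), 1923.5)}                       # OLD_MESH node
new = {7: ((0.0, 0.0, 0.0), 20.0),                          # common node
       8: ((0.5, 0.0, 0.0), 20.0)}                          # uncommon node
ic = transfer_common(old, new, coord_key)
```

Node 8 receives nothing here; uncommon entities are handled separately by interpolation, as described below.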

Figure 14. A flowchart illustrating the mapping procedure between two subsequent spots.

Regarding the uncommon entities, their properties cannot be transferred directly to the model, because there is a spatial mismatch between the two meshes. The solution is to perform an interpolation. The element and node locations from the OLD_MESH are the input values, while the NEW_MESH entities are the target. The interpolation scheme scans all the elements (nodes) in the NEW_MESH, searching for the location that best matches the elements (nodes) belonging to the OLD_MESH. When the best match is found, the properties can be interpolated between the two meshes. A different interpolation scheme is applied for temperature and material properties. The material number is assigned to the target with respect to the nearest OLD_MESH element, and no interpolation is needed. The temperatures, however, are assigned to the target nodes with a more elaborate scheme: not only the nearest node from the OLD_MESH is chosen, but also a group of surrounding nodes that properly fit the target. Therefore, the temperature is assigned by means of an interpolation scheme that can be performed on a surface (2D interpolation) or on a volume (3D interpolation), depending on the precision requested in the analysis. Finally, the interpolated value can be transferred to the NEW_MESH and used for the initial condition.

The framework for the mapping procedure requires a significant setup effort, because it needs a strong interaction between ANSYS® and MatLab®. MatLab® is used for grouping the entities and for mapping the common entities. ANSYS® is chosen for the uncommon nodes, taking advantage of its powerful built-in interpolation algorithm. Despite the complexity, a mapping method based on common and uncommon entities guarantees a strong reduction in the computational cost. The bottleneck of a traditional mapping procedure is the time-consuming interpolation algorithm. With this solution, the interpolation is applied only to a limited number of elements and not to the entire domain.

5. Mesh refinement

FEA is a useful tool to return an approximation of physical variables. Obviously, the computation time is a decisive factor in making numerical simulations competitive with respect to trial-and-error experiments. Traditionally, the bottleneck of a transient FEA is the time required to compute the temperature field at each laser beam position. This gets even worse considering nonlinear material properties, the high number of load steps, and the number of elements. In fact, every load step is divided into sub-steps to satisfy the transient time-integration rules. Consequently, the factors mainly responsible for the prolonged simulation time are the numbers of elements and sub-steps; thus, in order to minimize the computational cost, these must be reduced as much as possible.

Suppose that the model has been built by applying a uniform mapped mesh to the entire domain. Moreover, the mesh density has been increased as much as possible, since the FEM predicts more accurate results when the number of elements is high. Generally, it is recommended to increase the element density in the neighborhood of a certain zone where the results are required to be more accurate. A typical example is when the stress concentration factor of a shouldered shaft subjected to an axial force needs to be determined. In such a case, the elements are concentrated in the vicinity of the fillet in order to obtain more reliable results.

However, the simulation time is proportional to the number of elements and is expected to increase enormously. One possible solution to overcome this long computing time is to use the dynamic mesh refinement (DMR) approach. It involves an independent mesh refinement of multiple sub-domains. This strategy allows the meshes to be refined further, independently, in a hierarchic manner to reach a higher resolution. It is mainly composed of two parts:

• Mesh refinement: increase the element density in a region while having a coarse mesh in the remaining domain.

• Dynamic: the mesh is dynamically adapted according to the problem's nature, e.g., boundary conditions, constitutive laws, or geometry.

We will adopt a dynamic mesh refinement so as to adjust the local mesh refinement to the position of the laser spot. This has the great advantage of solving load steps with far fewer elements, hence the simulation time will benefit too. The dynamic part is a priority in this case,
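Returning to the mapping procedure of Section 4.3, the two schemes used for the uncommon entities (nearest-centroid assignment for the material number, multi-node interpolation for the temperature) can be sketched as follows. In the actual workflow the interpolation is delegated to ANSYS®'s built-in algorithm; the inverse-distance weighting over the k nearest OLD_MESH nodes below is only a stand-in to show the idea, and the value of k is an assumption.

```python
import math

def nearest_material(target, old_elems):
    """old_elems: list of (centroid, material). Nearest-centroid lookup,
    no interpolation: material numbers are discrete labels."""
    return min(old_elems, key=lambda e: math.dist(target, e[0]))[1]

def idw_temperature(target, old_nodes, k=4):
    """old_nodes: list of (coords, temperature). Inverse-distance
    weighting over the k nearest OLD_MESH nodes around the target."""
    ranked = sorted(old_nodes, key=lambda n: math.dist(target, n[0]))[:k]
    weights, total = 0.0, 0.0
    for coords, temp in ranked:
        d = math.dist(target, coords)
        if d == 0.0:
            return temp              # exact hit: copy the nodal value
        w = 1.0 / d
        weights += w
        total += w * temp
    return total / weights
```

Using surrounding nodes rather than only the nearest one smooths the assigned temperature field, which matches the 2D/3D interpolation options mentioned above.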
